Assessment
Handbook
Continuing Education Program
Volume 1, May 2009
The Assessment Handbook contains articles and materials presented in the Continuing
Education Program of the Philippine Educational Measurement and Evaluation
Association.
Articles
43 An Assessment Toolkit
Paz Diaz, Roosevelt College System, Cainta
Inventive Thinking
Effective Communication
High Productivity
Figure 1
Design of the Curriculum
Figure 2
Conventional Curriculum Design
Figure 3
Backward Design (Understanding By Design)
The Assessment Handbook: Continuing Education Program, Vol. 1, May 2009 7
Figure 4
Curriculum Process
I. Results/Desired Outcomes
Defines what students should know and be able to do at the end of the program, course, or
unit of study; generally expressed in terms of overall goals, and specifically defined in terms
of content and performance standards.
Essential Understandings
• These are the big and enduring ideas at the heart of the discipline.
Essential Questions
• These are open-ended, provocative questions that spark thinking and further inquiry into
the essential meanings and understandings.
II. Assessment
• Students demonstrate conceptual understanding, and content and skill acquisition or show
evidence of their learning through products and performances.
• Products and performances promote self-understanding, self-monitoring, and self-
assessment.
• They include opportunities for authentic audiences to experience and critique results.
• They permit choices and combinations of oral, written, visual, and kinesthetic modes.
Facets of Understanding
• Explanation
• Interpretation
• Application
• Perspective
• Empathy
• Self-knowledge
The teacher can determine if students have developed conceptual understanding if they
can demonstrate this in a number of ways, that is, by explaining, interpreting, applying,
giving their perspective, showing empathy, and revealing their self-knowledge. These
are referred to as the facets of understanding.
A student who has understanding of a current event should be able to do the following:
• Explain the event (e.g., explain why the MILF is waging war against the government).
• Interpret it (e.g., interpret the message that the MILF is conveying when it drives away
the residents of a community and thereafter occupies it).
• Apply it (e.g., apply their knowledge of the effects of conflicts in predicting what the
outcome of this conflict might be).
• Give their own perspective about the event (e.g., give their perspective on what could
influence the MILF to go back to the negotiating table).
• Show empathy with the people who figure in the event (e.g., share their thoughts about
why the MILF believes its actions are justified).
• Reveal self-knowledge about the event (e.g., express their level of confidence about
making a judgment on the crisis in Mindanao in light of what they have read or heard,
or the background knowledge they have about the local history of the people of
Mindanao).
• The learning activities are aligned with the standards and are designed to promote
attainment of the desired results.
• They include the instructional resources, both digital and non-digital, that students will
need to perform the activities and produce the products and performances.
Figure 5
Implementation of the Curriculum in a Child-Friendly School Environment
Figure 6
Managing for Excellence
The learning-centered leader moves from working with individual teachers to working with
teams of teachers in order to promote student learning.
Why is there a need to revise Bloom’s Taxonomy? What are the major changes
made? How can we use the revised taxonomy in teaching and in assessing students’ learning?
Background
Although Bloom’s Taxonomy is named after Benjamin Bloom, the taxonomy was
actually the work of the many individuals hired to help manage the influx of veterans into the
education system following World War II. Discharged soldiers, home from the war,
were eligible for the GI education stipend, which paid college tuition, textbook fees, living
expenses, and support for the ex-soldier’s dependents. The GI stipend enabled many World War
II veterans to attend college, flooding campuses with new students even though few new faculty
members were hired to educate this deluge of students. In recognition of the life experiences of
these veterans, the concept of “credit-by-examination” was developed with support from the
Department of Defense. The work that eventually became the Taxonomy of Educational Objectives resulted
from the collective efforts of many including the psychology graduates hired to design,
administer, and score tests for college-credit-by-examination, hence their title of “Examiners.”
The Examiners first met formally following the annual meeting of the American Psychological
Association (APA) in 1948. They continued to meet after the annual APA conventions to further
their discussions of ways to define and structure intellectual content. They were attempting to
make sense of the multiple educational fields needing tests, with a goal of reducing the
complexity of their tasks by categorizing knowledge into hierarchies. Once developed, these
hierarchies would provide them with a framework for writing test items in a variety of subjects
(Pickard, 2007).
The original Taxonomy sought to express qualitatively the different kinds of intellectual skills and abilities. The cognitive and
affective domains provided a way to organize thinking skills into six levels, from the most basic
to levels that are more complex. It was a one-dimensional cumulative hierarchy, with
achievement at each lower level considered necessary to move up to the next level (Anderson,
2006). The original development committee produced the hierarchical levels for the cognitive
and affective domains, but not for the psychomotor domain. Their explanation for this omission
was that they saw little need to teach manual skills to college students (Anderson & Krathwohl,
2001) thus completely overlooking athletics, drama, and applied programs of study such as
music.
At the urging of publishers and education professionals, Dr. Lorin Anderson, a former
student of Bloom’s at the University of Chicago, agreed to update the Taxonomy prior to his
retirement, so that it would reflect the enlarged understanding of the teaching and learning
processes now available. He and co-editor David Krathwohl, one of the editors of the original
taxonomy, collaborated with seven other educators to produce the revised Taxonomy
(Pickard, 2007).
During the revision processes, the editors identified 19 alternative frameworks, developed
to supplement, clarify, and improve upon the original Bloom’s Taxonomy. The alternative
frameworks were examined to determine how they might contribute to the revision of the
updated taxonomy. Of these, 11 represented a single dimension like the original taxonomy,
while eight represented two or more dimensions (Pickard, 2007).
The Revised Taxonomy is seen as “a tool to help educators clarify and communicate
what they intended students to learn as a result of instruction” (Anderson & Krathwohl, 2001, p.
23). Incorporated into the Revised Taxonomy are advances in teaching and learning since
publication of the original. The term knowledge was deemed an inappropriate term to describe a
category of thinking and was replaced with the term remembering. In addition, the revision
reconceptualized the original single dimension taxonomy into two dimensions with both a
Cognitive Process Dimension and a Knowledge Dimension.
Figure 1
Cognitive Domain of Bloom’s Taxonomy
Table 1
The Revised Taxonomy
The Revised Taxonomy is not a cumulative hierarchy, as the original was. Instead, the six
stages are viewed as a “cognitive processing” dimension. Our current concepts of learning view
students as active participants in the learning process. Students select the information to which
they attend and construct their own meanings from the selected information. This constructivist
perspective of learning emphasizes how learners cognitively process new knowledge as they
engage in meaningful learning. Thus, the cognitive process dimension reflects students’
cognitive and metacognitive activity as expressed within the opportunities and constraints of the
learning setting. “This constructivist process of ‘making sense’ involves the activation of prior
knowledge as well as various cognitive processes that operate on that knowledge” (Anderson &
Krathwohl, 2001, p. 38). In addition to the cognitive processing dimension, the Revised
Taxonomy authors identified four general types of knowledge (factual, conceptual, procedural,
and metacognitive), which make up the Knowledge Dimension.
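The two dimensions described above can be sketched as a pair of Python enums. This is a minimal illustration, not part of the Revised Taxonomy itself: the level names and letter codes follow the text and tables in this handbook, while the `Objective` class and the sample placement are this sketch’s own assumptions.

```python
# A minimal sketch of the Revised Taxonomy's two dimensions.
# The level names/codes follow the handbook; the Objective class is illustrative.
from enum import Enum
from dataclasses import dataclass

class CognitiveProcess(Enum):
    REMEMBER = 1
    UNDERSTAND = 2
    APPLY = 3
    ANALYZE = 4
    EVALUATE = 5
    CREATE = 6

class Knowledge(Enum):
    FACTUAL = "A"
    CONCEPTUAL = "B"
    PROCEDURAL = "C"
    METACOGNITIVE = "D"

@dataclass
class Objective:
    """A learning objective located in the two-dimensional taxonomy."""
    statement: str
    process: CognitiveProcess
    knowledge: Knowledge

# Example placement: classifying cases is cognitive process 2 (Understand)
# applied to conceptual knowledge (categories), i.e., cell (2, B).
obj = Objective(
    "Classify observed cases of mental disorders",
    CognitiveProcess.UNDERSTAND,
    Knowledge.CONCEPTUAL,
)
print(obj.process.value, obj.knowledge.value)  # → 2 B
```

An objective is thus a point in a 6 × 4 grid, which is what makes the taxonomy table usable for planning and assessment later in this article.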
Table 2
The Cognitive Processing Dimension of the Revised Taxonomy
Table 3
The Detailed Cognitive Processing Dimension of the Revised Taxonomy
Categories and cognitive processes (alternative names): definitions and examples

2.2 Exemplifying (illustrating, instantiating): Finding a specific example or illustration of a
concept or principle (e.g., give examples of various artistic painting styles)
2.3 Classifying (categorizing, subsuming): Determining that something belongs to a category
(concept or principle) (e.g., classify observed or described cases of mental disorders)
2.4 Summarizing (abstracting, generalizing): Abstracting a general theme or major point(s)
(e.g., write a short summary of the events portrayed on a videotape)
2.5 Inferring (concluding, extrapolating, interpolating, predicting): Drawing a logical
conclusion from presented information (e.g., in learning a foreign language, infer
grammatical principles from examples)
2.6 Comparing (contrasting, mapping, matching): Detecting correspondences between two
ideas, objects, and the like (e.g., compare historical events to contemporary situations)
2.7 Explaining (constructing models): Constructing a cause-and-effect model of a system
(e.g., explain the causes of important 18th-century events in France)
3.1 Executing (carrying out): Applying a procedure to a familiar task (e.g., divide one whole
number by another, both with multiple digits)
3.2 Implementing (using): Applying a procedure to an unfamiliar task (e.g., use Newton's
Second Law in situations in which it is appropriate)
Table 3
The Detailed Cognitive Processing Dimension of the Revised Taxonomy (continuation)
Categories and cognitive processes (alternative names): definitions and examples

4. ANALYZE: Break material into its constituent parts and determine how the parts relate to one
another and to an overall structure or purpose
4.1 Differentiating (discriminating, distinguishing, focusing, selecting): Distinguishing
relevant from irrelevant parts, or important from unimportant parts, of presented material
(e.g., distinguish between relevant and irrelevant numbers in a mathematical word problem)
6.2 Planning (designing): Devising a procedure for accomplishing some task (e.g., plan a
research paper on a given historical topic)
6.3 Producing (constructing): Inventing a product (e.g., build habitats for a specific purpose)
Source: Anderson & Krathwohl (2001)
Within the knowledge dimension is basic information that students must remember to be
acquainted with a discipline or solve a problem. Labeled factual knowledge, this may include
terminology of the discipline or knowledge of specific details. Factual knowledge includes the
discrete facts and basic elements that experts use when communicating about their discipline,
understanding it, and organizing it systematically; there is little abstraction to factual knowledge.
Because of the explosion of knowledge within all subjects, curriculum designers, textbook
authors, and teachers must decide what is critical to include and what is of lesser importance.
Many educators now recognize that memorization of discrete facts is not highly productive
knowledge, since so much information today is a few keystrokes away on the internet (Pickard,
2007).
Conceptual knowledge is more complex than factual knowledge and includes three
subtypes: 1) knowledge of classifications and categories, 2) knowledge of principles and
generalizations, and 3) knowledge of theories, models, and structure (Anderson & Krathwohl,
2001). When students can explain the concepts in their own words and transfer information to
new situations they have acquired conceptual knowledge. Chamberlain and Cummings (2003)
indicate that concepts can be defined and characterized, and that generalizations show
relationships among concepts. Classifications and categories of concepts form the basis for
principles and generalizations. Principles and generalizations form the basis for theories, models,
and structures. Classification, principle, and theory capture the greatest amount of intellect
within widely different disciplines (Anderson & Krathwohl, 2001).
Both factual and conceptual knowledge deal with products; procedural knowledge, however,
is often a series or sequence of steps to follow. Procedural knowledge also includes
criteria of when to use various procedures and reflects knowledge of different processes.
Examples of procedural knowledge could include syntax of an essay, or application of art and
design principles in a display board for interior design. Meaningful learning provides students
with the knowledge and cognitive processes they need for successful problem solving. Problem
solving occurs when a student devises a way of achieving a goal never before accomplished,
often by reformulating the problem into a more familiar form, recognizing the similarity, and
applying the familiar method to the new problem.
Table 4
The Major Types and Subtypes of the Knowledge Dimension
A. FACTUAL KNOWLEDGE: The basic elements students must know to be acquainted with a
discipline or solve problems in it
   AA. Knowledge of terminology (technical vocabulary, musical symbols)
   AB. Knowledge of specific details and elements (major natural resources, reliable sources
   of information)
B. CONCEPTUAL KNOWLEDGE: The interrelationships among the basic elements within a
larger structure that enable them to function together
   BA. Knowledge of classifications and categories (periods of geological time, forms of
   business ownership)
   BB. Knowledge of principles and generalizations (Pythagorean theorem, law of supply and
   demand)
   BC. Knowledge of theories, models, and structures (theory of evolution, structure of
   Congress)
C. PROCEDURAL KNOWLEDGE: How to do something; methods of inquiry; and criteria for
using skills, algorithms, techniques, and methods
   CA. Knowledge of subject-specific skills and algorithms (skills used in painting with
   watercolors, whole-number division algorithm)
D. METACOGNITIVE KNOWLEDGE: Knowledge of cognition in general as well as awareness
and knowledge of one’s own cognition
   DB. Knowledge about cognitive tasks, including appropriate contextual and conditional
   knowledge (knowledge of the types of tests particular teachers administer, knowledge of
   the cognitive demands of different tasks)
   DC. Self-knowledge (knowledge that critiquing essays is a personal strength whereas
   writing essays is a personal weakness; awareness of one’s own knowledge level)
A major contribution that the revised taxonomy can make is in the way educators think
about instruction. The intersection of the cognitive process dimensions and the knowledge
dimensions can facilitate instructional planning and assessment. When educators plan how they
will assess learning, the intersection of the cognitive processing and knowledge dimension can
facilitate the selection of learning activities that will provide for modeling and practice using the
intended assessment format. Use of the revised taxonomy enables educators to specify how they
expect students to use specified knowledge and thus provide learning experiences to assist
students to reach that cognitive stage. The matrix also streamlines the list of verbs used in
generating learning objectives to precise descriptions of the expected outcomes (Pickard, 2007).
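One way to picture the verb-to-objective relationship described above is a small lookup from an objective’s leading verb to its cognitive process level. This is only an illustrative sketch: the function name is this sketch’s own, the verb sets are trimmed from the handbook’s Table 6, and they are kept disjoint here even though many verbs appear at several levels in the full table.

```python
# Illustrative sketch: locate the cognitive process suggested by an
# objective's leading verb. Verb sets are trimmed from Table 6 and kept
# disjoint for simplicity; the full table repeats verbs across levels.
VERB_LEVELS = {
    "remembering": {"tell", "list", "name", "recall", "define", "memorise"},
    "understanding": {"explain", "interpret", "summarise", "paraphrase"},
    "applying": {"solve", "use", "illustrate", "calculate", "manipulate"},
    "analyzing": {"analyse", "compare", "contrast", "categorise", "examine"},
    "evaluating": {"judge", "justify", "debate", "recommend", "critique"},
    "creating": {"create", "invent", "compose", "design", "devise"},
}

def classify_objective(objective: str) -> str:
    """Return the cognitive process level suggested by the first verb."""
    first_word = objective.lower().split()[0]
    for level, verbs in VERB_LEVELS.items():
        if first_word in verbs:
            return level
    return "unknown"

print(classify_objective("Compare historical events to contemporary situations"))
# → analyzing
```

A real objective must of course be judged by what the student actually does, not the verb alone, but a lookup like this shows how the matrix narrows verb choice when writing objectives.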
Figure 2
The Knowledge and Cognitive Process Dimensions of a Learning Objective
Table 5
Sample Learning Activities about the Topic “Travel”
Remembering: How many ways can you travel from one place to another? List and draw all
the ways you know. Describe one of the vehicles from your list; draw a diagram and label the
parts. Collect “transport” pictures from magazines and make a poster with information.

Understanding: How do you get from school to home? Explain the method of travel and draw a
map. Write a play about a form of modern transport. Explain how you felt the first time you
rode a bicycle. Make your desk into a form of transport.

Applying: Explain why some vehicles are large and others small. Write a story about the uses
of both. Read a story about “The Little Red Engine” and make up a play about it. Survey 10
other children to see what bikes they ride; display the results on a chart or graph.

Analysing: Make a jigsaw puzzle of children using bikes safely. What problems are there with
modern forms of transport and their uses? Write a report. Use a Venn diagram to compare
boats to planes, or helicopters to bicycles.

Evaluating: What changes would you recommend to road rules to prevent traffic accidents?
Debate whether we should be able to buy fuel at a cheaper rate. Rate transport from slow to
fast, etc.

Creating: Invent a vehicle; draw or construct it after careful planning. What sort of transport
will there be in twenty years’ time? Discuss, write about it, and report to the class. Write a
song about traveling in different forms of transport.
Figure 3
Pie Chart of Cognitive Processes, Activities, and Products
Table 6
Revised Taxonomy: Verbs, Materials/Situations That Require This Level of Thinking, and
Potential Activities and Products

REMEMBERING
Verbs: Tell, List, Describe, Relate, Locate, Write, Find, State, Name, Identify, Label, Recall,
Define, Recognise, Match, Reproduce, Memorise, Draw, Select, Recite
Materials/situations: events, people, newspapers, magazine articles, definitions, videos,
dramas, textbooks, films, television programs, recordings, media presentations
Potential activities and products: Make a list of the main events. Make a timeline of events.
Make a facts chart. Write a list of any pieces of information you can remember. List all the
… in the story. Make a chart showing … Make an acrostic. Recite a poem.

UNDERSTANDING
Verbs: Explain, Interpret, Outline, Discuss, Distinguish, Predict, Restate, Translate, Compare,
Describe, Relate, Generalise, Summarise, Put into your own words, Paraphrase, Convert,
Demonstrate, Visualise, Find out more information about
Materials/situations: speech, stories, drama, cartoons, diagrams, graphs, summaries, outlines,
analogies, posters, bulletin boards
Potential activities and products: Cut out or draw pictures to show a particular event.
Illustrate what you think the main idea was. Make a cartoon strip showing the sequence of
events. Retell the story in your own words. Paint a picture of some aspect you like. Write a
summary report of an event. Prepare a flow chart to illustrate the sequence of events. Make a
colouring book.

APPLYING
Verbs: Solve, Show, Use, Illustrate, Construct, Complete, Examine, Classify, Choose,
Interpret, Make, Put together, Change, Apply, Produce, Translate, Calculate, Manipulate,
Modify, Put into practice
Materials/situations: diagrams, sculptures, illustrations, dramatisations, forecasts, problems,
puzzles, organisations, classifications, rules, systems, routines
Potential activities and products: Construct a model to demonstrate how it will work. Make a
diorama to illustrate an important event. Make a scrapbook about the areas of study. Make a
papier-mache map to include relevant information about an event. Take a collection of
photographs to demonstrate a particular point. Make up a puzzle game showing the ideas
from an area of study. Make a clay model of an item in the area. Design a market strategy for
your product. Dress a doll in costume. Paint a mural. Write a textbook outline.

ANALYZING
Verbs: Analyse, Distinguish, Examine, Compare, Contrast, Investigate, Categorise, Identify,
Explain, Separate, Advertise, Take apart, Differentiate, Subdivide, Deduce
Materials/situations: surveys, questionnaires, arguments, models, displays, demonstrations,
diagrams, systems, conclusions, reports, graphed information
Potential activities and products: Design a questionnaire to gather information. Write a
commercial to sell a new product. Conduct an investigation to produce information to support
a point of view. Construct a graph to illustrate selected information. Make a jigsaw puzzle.
Make a family tree showing relationships. Put on a play about the study area. Write a
biography of the study person. Prepare a report. Arrange a party and record as a procedure.
Review a piece of art including form, colour, and texture.

EVALUATING
Verbs: Judge, Select, Choose, Decide, Justify, Debate, Verify, Argue, Recommend, Assess,
Discuss, Rate, Prioritise, Determine, Critique, Evaluate, Criticise, Weigh, Value, Estimate,
Defend
Materials/situations: recommendations, self-evaluations, group discussions, debates, court
trials, standards, editorials, values
Potential activities and products: Prepare a list of criteria to judge a … show; remember to
indicate priorities and ratings. Conduct a debate about a special issue. Make a booklet about 5
rules you see as important to convince others. Form a panel to discuss views. Write a letter to
… advising on changes needed at … Write a half-yearly report. Present your point of view.

CREATING
Verbs: Create, Invent, Compose, Predict, Plan, Construct, Design, Imagine, Propose, Devise,
Formulate, Combine, Hypothesize, Originate, Add to, Forecast
Materials/situations: experiments, games, songs, reports, poems, speculations, creations, art,
inventions, drama, rules
Potential activities and products: Invent a machine to do a specific task. Design a building to
house your study. Create a new product, give it a name, and then devise a marketing strategy.
Write about your feelings in relation to … Design a record, book, or magazine cover. Sell an
idea. Devise a way to … Compose a rhythm or put new words to an old song.

Sources: teachers.net/lessons/posts/355.html; www.teachers.ash.org.au/researchskills/dalton.htm;
www.lgc.peachnet.edu/academic/educatn/Blooms/critical_thinking.htm; Dalton, J., & Smith, D.
(1986). Extending Children’s Special Abilities: Strategies for Primary Classrooms.
Table 7
Bloom’s Revised Taxonomy Planning Framework
Evaluating (Judging the value of ideas, materials, and methods by developing and applying
standards and criteria)
   Processes: Checking, Hypothesising, Critiquing, Experimenting, Judging, Testing,
   Detecting, Monitoring
   Products: Debate, Panel, Report, Evaluation, Investigation, Verdict, Conclusion,
   Persuasive speech

Analyzing (Breaking information down into its component elements)
   Processes: Comparing, Organising, Deconstructing, Attributing, Outlining, Structuring,
   Integrating
   Products: Survey, Database, Mobile, Abstract, Report, Graph, Spreadsheet, Checklist,
   Chart, Outline

Applying (Using strategies, concepts, principles, and theories in new situations)
   Processes: Implementing, Carrying out, Using, Executing
   Products: Illustration, Simulation, Sculpture, Demonstration, Presentation, Interview,
   Performance, Diary, Journal

Lower-order thinking

Understanding (Understanding of given information)
   Processes: Interpreting, Exemplifying, Summarising, Inferring, Paraphrasing, Classifying,
   Comparing, Explaining
   Products: Recitation, Summary, Collection, Explanation, Show and tell, Example, Quiz,
   List, Label, Outline

Remembering (Recall or recognition of specific information)
   Processes: Recognising, Listing, Describing, Identifying, Retrieving, Naming, Locating,
   Finding
   Products: Quiz, Definition, Fact, Worksheet, Test, Label, List, Workbook, Reproduction
The knowledge dimension will help you consider the type of knowledge that
you are trying to assess (factual, conceptual, procedural or meta-cognitive). The
cognitive dimension will help you create different types of questions that relate to
different cognitive skills.
The table, therefore, can be used to generate different types of questions – that
is, questions that cover a spread of the knowledge/cognitive domain (rather than a series
of questions that repeatedly assess the same thing). So, given a specific topic, and
thinking about the different types of knowledge and cognitive skills, it should be
possible to come up with a number of diverse questions on that topic.
The questions could be mapped onto the taxonomy table as illustrated in the
Table below.
Table 8
Mapping Questions in the Revised Taxonomy
Knowledge dimension           1. Remember  2. Understand  3. Apply    4. Analyze  5. Evaluate  6. Create
A. Factual knowledge          Question 1
B. Conceptual knowledge                    Question 2
C. Procedural knowledge                                   Question 3  Question 4  Question 5
D. Meta-cognitive knowledge
Mapping the questions onto the taxonomy table gives an indication of the
relative complexity of the questions. The mapping also confirms that the questions are
diverse since they occupy different cells in the table and therefore assess different
cognitive abilities.
Different questions will occupy different cells in the taxonomy table; similar
questions will occupy the same cells in the table. Simple questions will occupy cells
close to the top left-hand corner; complex questions will be further away from the top
left-hand corner. In general, you would expect lower level papers to have more
questions towards the top left-hand corner of the table and higher level papers to have
questions towards the middle and bottom right-hand corner. But every paper –
irrespective of its level – should map onto a range of cells (rather than repeatedly
assessing the same type of knowledge or cognitive process). This provides the
necessary discrimination to allow candidates to perform at varying levels and receive
different grades (Elliott, 2002).
Once a paper has been constructed, the taxonomy table can be used to analyse it.
This could be done to check the balance of a paper – in other words, to check if
different types of knowledge have been examined and various cognitive skills assessed.
Dalton (2003) applied the revised taxonomy in identifying the following types
of assessment activities:
Table 9
Possible Assessment Strategies in the Revised Taxonomy
References
Anderson, L. (2006). Taxonomy academy handbook. Retrieved April 11, 2009, from
http://www.andersonresearchgroup.com/tax.html
Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and
assessing: A revision of Bloom’s taxonomy of educational objectives (Abridged ed.).
New York: Addison Wesley Longman.
Elliott, B. (2002). Using the revised Bloom’s Taxonomy for the creation of examination
questions. Retrieved Feb. 20, 2007, from http://www.bobbyelliott.com/
Taxonomy.htm
Marzano, R. J., Norford, J. S., Paynter, D. E., Pickering, D. J. & Gaddy, B. B. (2001).
Handbook for classroom instruction that works. Alexandria, VA: Association
for Supervision and Curriculum Development.
Pickard, M. J. (2007). The new Bloom’s taxonomy: An overview for family and
consumer sciences. Journal of Family and Consumer Sciences Education, 25,
45-55.
Abstract
This report focuses on the development and assessment of self-regulated
learning in the school context. The nature of self-regulated learning is discussed by
identifying its critical characteristics. Different models showing the components and
process of self-regulation are presented in order to highlight different ways of assessing it
as a construct. Studies are then presented to show the effects of developing
self-regulation in the classroom context. The need to assess self-regulation as part of the
teaching and learning process is discussed in relation to particular needs in the school
setting. Different protocols, with examples, are shown for assessing self-regulated
learning as applied in the classroom.
Teachers generally commend students who are more independent in their studies,
diligent in listening inside the classroom, focused on doing their tasks inside the
classroom, get high scores on tests, are able to recall teachers’ instructions and facts
lectured in class, and submit quality work. However, teachers see students as problematic
when they miss assignments, are inattentive during lectures, are volatile during class
activities, fail to recall instructions taught in the classroom, submit poor work, or, worst
of all, do not submit any work. These two scenarios differentiate self-regulated students
from those who are poor in regulating their learning. Self-regulated learners are
generally characterized as independent, able to control their learning, focused in their
studies, planning and studying in advance to obtain high scores on tests, and using
strategies to recall instruction. By showing these characteristics, self-regulated students
eventually perform well and obtain successful academic outcomes. Self-regulation is
generally defined by Zimmerman (2005) as “self-generated thoughts, feelings, and
actions that are planned and cyclically adapted to the attainment of personal goals” (p.
14). Zimmerman (2002) further explained that self-regulation is “a self-directive
process by which learners transform their mental abilities into academic skills” (p. 65).
There are various contexts where self-regulation can be practiced. It can be
applied in sports to regulate one’s performance, in health to attain potent physical
condition, in the industrial setting to determine effective employees, and in managing
one’s emotions (emotion regulation). This report focuses on self-regulated learning in
the academic context. In the academic setting, one of the main goals is to develop
students to be self-regulated learners. Learners who are self-regulated take charge of
their own learning and thus control their own learning in general. Self-regulation entails
students who carefully plan their actions, set goals, and use a variety of strategies in
accomplishing a task. Zimmerman (2002) further characterizes self-regulated students as
having superior motivation, adaptive learning methods, and an optimistic view of their
future.
There are several studies indicating that self-regulated learners turn out to
perform well in school related tasks (Blakey & Spencer, 1990; Collins, 1982; Corsale &
Ornstein, 1980; Kluwe, 1982; Lopez, Little, Oettingen, & Baltes, 1998; Rock, 2005;
Schneider, 1985). There is also an established theory that learners who self-regulate
have increased self-efficacy, or beliefs in one’s ability to execute actions (see Bandura
& Schunk, 1981; Schunk, 1981; 1983; 1984). It is also notable that self-
regulated learners are more motivated (see Fiske & Taylor, 1991; Corno & Mandinach,
1983). Specifically for Filipino adolescents, students who see the consequences of
their actions and those who structure their environment for study were found to be more
mastery-oriented (developing competency and gaining understanding) (see Magno &
Lajom, 2008). From a developmental perspective, the study of Magno and Lajom (2008)
showed that all components of self-regulation increased from high school to college
students.
Models of Self-regulation
There are several models of self-regulation that are used depending on how
self-regulation is viewed in a specific area. Bandura (1986) sees self-regulation as a
triadic process where there is an interaction of personal, behavioral, and environmental
aspects. Framed in this theory, the behavioral aspect of self-regulation involves self-
observation and strategically adjusting performance. The environmental aspect includes
observing and adjusting environmental conditions or outcomes. Covert regulation
(personal) includes monitoring and adjusting cognitive and affective strategies.
Based on the social cognitive perspective, Zimmerman (2002; 2005) derived the
process involved in self-regulation. In this cyclical process, self-regulation occurs in a
three-phase structure (the forethought phase, the performance phase, and the self-reflection
phase). The forethought phase is the stage where the learner analyzes the task by planning
and setting goals. Analysis of tasks is influenced by the learner's self-motivation beliefs,
intrinsic interest, and goal orientations. After careful planning, the learner proceeds to the
performance phase, the execution of the task. While executing a task, the learner
maintains self-control through self-instruction, imagery, attention focusing, and the
strategies used for accomplishing the task. The performance phase is also accompanied
by self-observation through self-recording and self-experimentation. After the performance,
the self-regulated learner reflects on the execution; this is the self-reflection phase. In
this phase the learner judges how well they have planned and executed the task through
self-evaluation and causal attribution. They then react to the plan and its execution,
deciding whether they are satisfied, and identify what adaptations can be made when
engaging in the same task again. Their reflections are carried over into the forethought
phase the next time they engage in a task that requires them to self-regulate.
There are other models of self-regulated learning. For example, Carver and
Scheier (2005) see self-regulation as a feedback loop. The process starts with a goal,
standard, or reference value. The output of performance is then compared with this
reference value (the comparator). If the output matches or exceeds the reference value,
the performance is successful; if not, there is a discrepancy to be reduced. Shah and
Kruglanski (2005) see self-regulation as a network of goals. They use a connectionist
perspective in which goals and means are viewed as a network of complex cognitive
associations.
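Carver and Scheier's discrepancy-reducing loop can be sketched in a few lines of code. The sketch below is illustrative only; the function name, step size, and iteration cap are assumptions for exposition, not part of their model:

```python
def regulate(current, goal, step=1.0, max_iters=100):
    """Minimal discrepancy-reducing feedback loop: compare the output
    of performance with a reference value (the comparator) and adjust
    behavior until the discrepancy closes."""
    for _ in range(max_iters):
        discrepancy = goal - current      # comparator: reference value vs. output
        if abs(discrepancy) < step:       # output meets the standard: success
            return current
        # adjust performance in the direction that reduces the discrepancy
        current += step if discrepancy > 0 else -step
    return current
```

For example, `regulate(0, 5)` steps the output toward the goal of 5 and stops once the discrepancy is closed; a discrepancy that persists at `max_iters` corresponds to unsuccessful performance.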
A personality systems perspective on self-regulation identifies how positive
and negative affect influence self-regulation as a cognitive system. This was
operationalized in the model of Magno (2008), where systems of activation and
inhibition of self-regulated learning were identified and their effects on self-regulation
were tested. The activation system was composed of self-determination, disengagement,
initiative, and persistence, while the inhibition system (negative affect) was composed
of anxiety, worry, thought suppression, and fear of negative evaluation. It was found
that the activation and inhibition systems served their purposes: the activation system
increased with self-regulation, while the inhibition system, identified as negative affect,
decreased self-regulation. This showed that the experience of negative affect such as
worry, anxiety, thought suppression, and fear of negative evaluation interfered with the
use of self-regulation. When levels of the activation system (high and low) were varied,
it was found that self-regulating individuals with high levels of the activation system
were not affected by negative affect, whereas for individuals with low levels of the
activation system, self-regulation was negatively impacted by inhibitions such as
negative affect. This model provides a theoretical perspective for identifying the
conditions under which self-regulation works well or poorly.
Moreover, Winne (1995; 1997) views self-regulation as composed of
metacognition, intrinsic motivation, and strategy use. Metacognition is learners'
awareness of their own academic strengths and weaknesses, of the cognitive resources
they can apply to meet the demands of tasks, and of how to regulate their engagement
with tasks. Intrinsic motivation is a belief in incremental learning, a high value placed on
personal progress, and high efficacy for learning. His process model of self-regulation
starts with the task and cognitive conditions that individuals set. These conditions provide
information on how the task in the environment will be evaluated. The second phase
involves setting goals and planning how to reach them; this includes decision making
supplemented by information retrieved from memory, framing goals, and assembling a
plan to approach them. The third phase involves enacting tactics, with controlling and
monitoring used during performance. The products of self-regulation may include a
definition of the task, goals and plans, studying tactics, and adaptations. The last phase
involves adapting metacognition: in this phase the learner makes major adaptations
to those parts of the model under their control.
The various models of self-regulation provide a view of how self-regulation
involves other variables, how it unfolds as a process, and how its components are
interrelated.
The study by Fok and Watkins (2007) used a constructivist teaching approach,
which typically functions as a self-regulation technique, and investigated its effect using
the Learning Process Questionnaire (LPQ) and the Constructivist Learning Environment
Scale (CLES). The constructivist technique employed involves students giving their
own examples, working on authentic problems, testing their own ideas, challenging each
other's conceptualizations, making group presentations, engaging in self-analysis and
self-reflective thinking, and presenting ideas with evidence to support them. The study
found significant post-test gains among the high-achieving group on the learning process
and constructivist learning environment measures after the constructivist technique.
This shows that a constructivist learning environment that includes self-regulation is
effective in developing deeper approaches to learning.
Paris and Paris (2001) described 12 principles that teachers can use to design
classroom activities that promote students' self-regulation. They emphasized that
self-regulation can be taught with explicit instruction, directed reflection, metacognitive
discussions, and participation in practices with experts. Self-regulation can also be
promoted indirectly through modeling and through activities that entail reflective
analyses of learning.
There are also other studies that employed self-regulation in the classroom
setting and tested the procedures' effectiveness on students' performance in
different tasks and subject areas.
The study by Glaser and Brunstein (2007) examined whether self-regulation
procedures would increase the effectiveness of writing strategies training designed to
improve 4th graders' (N = 113) composition skills. The strategy training included
methods of direct instruction and cognitive modeling as well as phases of guided and
independent practice to help students acquire effective strategies (e.g., the widely used
story grammar strategy) for planning and redrafting stories. Students who were taught
composition strategies in conjunction with self-regulation procedures were compared
with (a) students who were taught the same strategies but received no instruction in self-
regulation and (b) students who received didactic lessons in composition. Both at
posttest and at maintenance (5 weeks after the instruction), strategy plus self-regulation
students wrote more complete and qualitatively better stories than students in the two
comparison conditions. They also displayed superior performance on a transfer task
requiring students to recall essential parts of an orally presented story.
The study of Azevedo and Cromley (2004) examined the effectiveness of self-
regulated learning (SRL) training in facilitating college students' learning with
hypermedia. The training included planning (planning, subgoals, prior knowledge
activation), monitoring (feeling of knowing, judgment of learning, self-questioning,
content evaluation, identifying the adequacy of information), strategies (selecting new
informational source, summarization, rereading, and knowledge elaboration), task
difficulty and demands (time and effort planning, task difficulty, and control of
context), and interest. Undergraduate students were randomly assigned to either a
training condition or a control condition and used a hypermedia environment to learn
about the circulatory system. Students in the self-regulation group were given a 30-min
training session on the use of specific, empirically based self-regulation variables
designed to foster their conceptual understanding; control students received no training.
Pretest, posttest, and verbal protocol data were collected from both groups. The SRL
condition facilitated the shift in learners' mental models significantly more than did the
control condition; verbal protocol data indicated that this was associated with the use of
the SRL variables taught during training.
The study by Fuchs et al. (2003) assessed the contribution of self-regulated
learning strategies, when combined with problem-solving transfer instruction, to 3rd
graders' mathematical problem solving. SRL incorporated goal setting and self-
evaluation. Problem-solving transfer instruction taught problem-solution methods, the
meaning of transfer, and four superficial problem features that change a problem
without altering its type or solution. The problem-solving transfer instruction also
prompted metacognitive awareness of transfer. The effectiveness of transfer plus SRL
was contrasted with the transfer treatment alone and with teacher-designed instruction
over 16 weeks. Students were pre- and posttested on problem-solving tests and
responded to a posttreatment questionnaire tapping self-regulation processes. SRL
positively affected performance.
A local study by Dedel (2002) taught students in an experimental group different
strategies, namely orientation, planning, action, and checking (OPAC) strategies, to
enhance students' problem-solving skills and conceptual understanding in teaching
selected topics in mechanics. Although the study did not explicitly frame the OPAC
strategies as self-regulation, the strategies are similar to conceptualizations of the
components of self-regulation. Consistent with the findings of other research, the
OPAC problem-solving strategy used in physics instruction significantly enhanced
students' achievement in terms of problem-solving skills and conceptual understanding.
Developing self-regulation among students can be integrated into the teaching and
learning process. Certain classroom activities that involve the active participation of
students can help them develop self-regulation. For example, in a mathematics class,
students learn the concept of fractions, identify similar and dissimilar fractions, and add
and subtract fractions (see Table 1). A group of teachers devised activities in which
self-regulation is tapped in different subject areas (Tables 1 to 3).
Table 1
Self-regulation Activities in a Third Grade Mathematics Class

Self-regulation component: Student and teacher tasks
Goal-setting: Students verbalize at the start of the lesson what their specific goals for
the topic on fractions will be.
Time management: Students create a daily schedule and express in fractions how much
time is devoted to specific activities.
Learning strategies: Students are taught strategies for identifying the Least Common
Denominator (LCD).
Self-evaluation: Students solve board work and the other students evaluate whether the
answers are correct. The other students also point out where the mistakes are.
Seeking help or information: Students are paired and test each other on how well they
add and subtract fractions. They teach each other the correct answers for the items
missed.
Motivational beliefs: Students' work that shows exemplary and acceptable proficiency
is posted on the board.
Table 2
Self-regulation Activities in a Second Year High School Class on Anatomy
Table 3
Self-regulation Activities in a Fourth Grade Class on Reading
using Exploratory Factor Analysis (EFA) techniques. The underlying factors are further
tested using a more rigorous method called Confirmatory Factor Analysis (CFA). In
some instances the test developer may opt to use a different approach such as Item
Response Theory (IRT). In this approach, items are good if they have acceptable item
characteristic curves based on the logit measures. In such cases, items are retained if
they show good fit (mean square within 0.8 to 1.2, standardized z below 2.00), high
point-biserial correlations (indicative of item discrimination for a one-parameter Rasch
model), adequate item information functions, and no differential item functioning (i.e.,
they are free of bias). On the second criterion, responses to items should indicate
acceptable reliability or consistency. Most commonly, the internal consistency of a test
is established using Cronbach's alpha, split-half reliability, or inter-item correlations.
Tests and scales of self-regulation that show evidence of acceptable validity and
reliability are safe to use.
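As a concrete illustration of the reliability criterion, Cronbach's alpha can be computed directly from a respondents-by-items score matrix. The sketch below uses NumPy; the function name and the toy data in the note that follows are assumptions for illustration, not material from the handbook:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

Items that rise and fall together across respondents yield an alpha near 1.0 (e.g., two items scored identically by every respondent give exactly 1.0); by convention, values of about .70 and above are treated as acceptable internal consistency.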
Table 4
Properties of the A-SRL-S
Table 5
Convergent Validity of the Subscales of the A-SRL-S
Table 6
Self-regulated Learning Strategies Based on the SRLIS
Table 7
Adapted SRLIS for Filipino College Students
factor analyzed, the 12 items loaded on only three factors. These factors were labeled
student self-regulated learning, student verbal expressiveness, and student achievement.
Teachers' judgments about students' self-regulation strategies can be very
accurate if they are trained to be observant of students' behavior. Teachers can look at
several situations where self-regulation can occur, such as during drills, seat work,
group work, tests, recitations, and even class discussion.
The think-aloud protocol is advantageous because it does not limit students'
responses to a task. The teacher can detect the multiple self-regulation strategies
students are engaging in. This can help teachers create tasks that enrich students and
further develop their self-regulation skills.
Error Detection Tasks. Error detection tasks are created to assess students'
ability to monitor their performance and evaluate the material they are exposed to. The
ability to detect errors is a means by which a student can exercise metacognitive
control, because students should be able to correct errors after identifying them. Error
detection can be done by having students provide an evaluation of the errors found.
Another technique is underlining the specific spots where the errors occurred.
The ability to detect errors is a sign that students have mastered the lesson and
have developed evaluation and monitoring skills.
There are varied ways in which self-regulation can be implemented and assessed
inside the classroom. Developing self-regulation requires believing that it is a necessary
part of the learning process in order for it to work well. Initial steps to assess and
implement self-regulation inside the classroom may be difficult, especially if students
are not used to it. But once the teacher develops the skill to use it inside the classroom,
students will develop the skills as well. It should be realized that self-regulation is
necessary for students to be successful in their performance on academic tasks. If a
teacher desires to develop lifelong learners, developing the learners' self-regulation
skills is key to this success.
References
Baker, L., & Zimlin, L. (1989). Instructional effects on children’s use of two levels of
standards for evaluating their comprehension. Journal of Educational
Psychology, 81, 340-346.
Bandura, A., & Schunk, D.H. (1981). Cultivating competence, self-efficacy, and
intrinsic interest through proximal self-motivation. Journal of Personality and
Social Psychology, 41, 586-598.
Corno, L., & Mandinach, E. (1983). The role of cognitive engagement in classroom
learning and motivation. Educational Psychologist, 18, 88-108.
de la Fuente Arias, J., Justicia, F., & Gracia Berben, A. (2006). An interactive model of
regulated teaching and self-regulated learning. The International Journal of
Learning, 12, 217-226.
Dedel, E. (2002). The effect of orientation, planning, action and checking (OPAC)
problem-solving strategy on students' problem-solving skills and conceptual
understanding. Unpublished master's thesis, De La Salle University, Manila,
Philippines.
Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.). New York: McGraw-Hill.
Lopez, D. F., Little, T. D., Oettingen, G., & Baltes, P. B. (1998). Self-regulation and
school performance: Is there an optimal level of action-control? Journal of
Experimental Child Psychology, 70, 54-75.
Schunk, D. H. (1983). Developing children's self-efficacy and skills: The roles of social
comparative information and goal setting. Contemporary Educational
Psychology, 8, 76-86.
Shah, J. Y., & Kruglanski, A. W. (2005). Aspects of goal networks: Implications for
self-regulation. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of
self-regulation (pp. 86-108). New York: Academic Press.
Winne, P. H. (1982). Minimizing the black box problem to enhance the validity of
theories about instructional effects. Instructional Science, 11, 13-28.
An Assessment Toolkit
Paz Diaz
Roosevelt College Systems, Cainta
"By mid-way through their first semester, the bases students used to
judge their potential success had changed. Instead of citing challenges
they had faced and assessing how they had performed, they were most
influenced by how they compared with their peers. Some comparisons
were based on how fast they could learn new material or complete
assignments.”
Mica A. Hutchison-Green (2008)
Are we assessing students against one another or against their own capabilities
or against certain standards set by educational authorities? Studies (Green et al., 2008)
have shown that students begin to learn early on to judge their potential success not by
citing challenges they had faced and assessing how they had performed, but mostly how
they compared with their peers or how fast they could learn new material or complete
assignments.
Green et al. (2008) suggest that instructors should explain to students that how
long it takes them to solve a problem is less important than eventually understanding
and solving it. They also suggest that teachers might design group work assignments
that let each student contribute to the learning of the others. One suggestion is that
“Educators should try, early on, to build students’ self-efficacy by giving them the
chance to master particular skills…” and that “instructors need to give students clear
and concise feedback” to improve learning and not necessarily to compare themselves
with others.
Ultimately, educators should not simply make students memorize specific facts
and judge themselves by how much they have memorized (although that is, of course,
a basis of learning), but should give students a more accurate way of measuring their
successes and failures. Studies suggest that by ensuring students use appropriate
experiences to shape their confidence in lifelong success, teachers can improve
students' attitudes and retention through the excitement of learning.
The examples in this workshop have been gathered from best practices from
numerous sources; the sites and references are found at the end of this handout.
When a teacher prepares assessment exercises, he or she could follow Kolbe’s stages of
learning:
1. Ask for facts; be sure about the facts (otherwise an experiment can blow up in the
student's face).
2. Follow through; make students organize, reorganize, reform, and adapt the data or
facts they have learned; make mind maps and outlines, draw figures, sing songs,
assess cases.
3. Hands-on demonstrations; return-shows, rewriting classical texts, field trips,
interviews, finding exemplars in the community, finding masters who are skillful,
undertaking structured learning experiences.
4. Carry out projects (make students present dramas and personal experiences, write
new poems and personal songs, translate materials to hip-hop sounds).
Activity 1: Assess the following assessment tools and techniques you can use in the
classroom
Activity 2: Assess the following assessment tools and techniques you can use
outside the classroom
Keeping a Project Journal:
1. What is a Project Journal useful for? Assess the following:
(a) Remembering
(b) Evaluating the project
(c) Evaluating team members
(d) Protecting oneself
2. What to write in the Project Journal. Evaluate the following:
(a) First, record what you saw and heard. Try to capture the event exactly—word
for word, scene for scene, with no embellishment or evaluation on your part.
"Just the facts man, just the facts."
(b) Second, add your interpretation and understanding. If the meaning you attach to
the event is different from the exact words and actions you observed, you must
write your interpretation in your journal. You must also consider whether this
interpretation is yours alone or if others would have the same interpretation.
(c) Third, add your feelings about what has transpired. Yes techies, it is okay to
have feelings about things. Someone has said, "Feelings are facts to those who
have them." Our feelings influence us in the same way our facts do.
(d) Fourth, record your response—what you said or did at the time. Again, note
exactly what you said and did. If you meant to give a different response you
must note that also.
(e) Fifth, note your feelings about your response. Was your response appropriate?
Was it too sharp? Too judgmental? Too passive? Not only is it okay to have
feelings about our responses, it is vital to evaluate them. Our feelings often give
excellent guidance about the appropriateness of our responses.
Activity 3: Invent assessment tools and techniques you can use for social
performance
Activity 4: Invent assessment tools and techniques you can use for successful self-
direction
1. Backward planning – beginning with desired end results and working on required
procedures to meet those results
2. Task Analysis – identifying the skills and knowledge required to learn or perform a
specific task to arrive at the end result
3. Back-home action plan
Repeatedly rewarding students for completing easy tasks results in students feeling less
able and being less motivated. Even rewarding excellence with honor rolls and status
may be detrimental if students restrict their interests or avoid hard courses to keep their
grade averages high. It is indeed complicated, but carefully selected incentives can still
motivate students to excel in learning.
1. Motivational factors at the BEGINNING: When the learner enters and starts learning
ATTITUDES: Toward the environment, teacher, subject matter, and self
NEEDS: The basic need within the learner at the time of learning
Suggest Motivational Strategies:
3. Motivational factors at the END of the learning period: When the learner is
completing the learning process.
COMPETENCE: The competence value for the learner that is a result of the
learning behaviors.
REINFORCEMENT: The reinforcement value attached to the learning experience,
for the learner.
Suggest Motivational Strategies:
Activity 6: Postcards
1. Geography Lessons
2. Celebrations
Introduce a fact sheet to build students' knowledge of current family trends in the
Philippines.
Instruct them about what to take note of or the angle they want to write about.
Alternatively, they can dramatize, role-play, or do a "before-and-after" scene.
How will you assess the selected activity?
Activity 8: Determine the uses, pros, and cons of the following sample assessment
activities. These are not mutually exclusive and can be mixed and matched by the
teacher depending on the topics or situations inside or outside the classroom.
Applying what has just been learned in class or reading to solve a problem: It's
very important to make sure that students can connect abstract ideas with specific
real examples, especially slightly complicated ones.
Collecting student responses: Think about how you will end an interactive
activity, gathering student responses and providing, when appropriate, a
synthesizing discussion or follow-up assignment. The student responses also
provide useful feedback about what students have learned.
Interactive lectures are classes in which the instructor breaks the lecture at least
once per class to have all of the students participate in an activity that lets them
work directly with the material. These activities allow students to apply what they
have learned earlier or give them a context for upcoming lecture material.
Interactive Segments: There is a wide range of possibilities for interactive
activities that can be interspersed with lecture segments. Any of these general
suggestions for interactive segments could be developed into short think-pair-share
questions or activities.
Predicting/Evaluating: Used to help students activate prior knowledge.
K - Recall what the group KNOWs about the subject.
W - Determine what the group WANTs to learn.
L - Identify what the group LEARNed as they read.
H - HOW the group can learn more
Visualizing: Used to picture descriptions of physical structures, places, spatial
relationships, concrete objects, abstract concepts, or visual images. Detailed
diagrams provide more formal options for visualization.
Creating thumbnails: Thumbnails represent scaled down versions of a final
composition. For a project where the final size is 9" x 12", thumbnails might be
approximately 2" x 2-2/3", large enough to show some detail, but small enough to
work quickly.
Sketching: Sketches can be fun and loose indicators that don't require great
artistic ability to describe a physical phenomenon or an abstract idea.
Use of Models: Model types include conceptual models, physical demonstrations,
mathematical and statistical models, and visualizations. Be aware of technical and
pedagogical considerations when using models.
Think–Pair–Share: A problem is posed. Students think about it alone for five
minutes or less, and then pair up to discuss their views. The pairs share their
conclusions with the rest of the class.
Jigsaw: Choose learning material that can be divided into parts, like an experiment,
a list, or several articles on a similar topic. Divide students into groups equaling the
number of parts. Ask each group to read, discuss, and learn the material. Next, form
jigsaw learning groups composed of one member from each of the initial groups.
Each new group will contain an expert on each part of the material, so that together
the group will learn all of the material. Reconvene the class to go over the material.
You may also ask the jigsaw groups to answer questions based on their accumulated
knowledge.
Roundtable: Students divide into groups to answer a query. Each group is given
only one pen and one piece of paper. Each group member in turn writes down his or
her response on the paper. The results are examined and placed on an overhead for
class discussion. There is more accountability for each student if he or she has to
explain the written remarks.
Voting: A show of hands keeps students involved and reveals what the class
believes as a group. This exercise works well in large classes. It refocuses the
students' attention, can break larger issues into smaller subsets, and requires
students to engage with the material. Start off with a general query, and then
explore the subject further.
End of Class Query: In the last three minutes of class ask the students to write on
¼ sheets of paper anonymously two things they learned and what questions remain.
Trade a Problem: Divide the class into teams and have each team construct review
questions. Each question is written on an index card (each team can have a different
color). The answer to each question is written on the back of the card. The teams
then trade cards. Without looking at the answers, one member of the team reads
each question. The team decides by consensus on an answer. If the team's answer
does not agree with the original answer, they should add their answer on the back of
the card as an alternative. Cards continue to be traded. The teacher may then want to
conduct a whole-class discussion on the questions with more than one answer.
Concept Map: Divide the class into groups and give each group a pen and a large
piece of paper or a transparency. Each group should write down the topic being
studied in the center of the paper inside a circle or rectangle, then place key
examples or related concepts inside smaller shapes and connect them to the main
topic. There are many possible models of the relationship among concepts, i.e.
chains, spiders, or more complicated ones.
Minute Paper: Pause after 15 minutes of class and ask each student to take a minute
to write a two-sentence summary of what they have learned so far. Depending on
how much time you want to devote to this, the students could pair up and help each
other better understand the material for a few minutes, or a few could report to the
class.
1. What are the characteristics of a class where the teacher uses authentic assessment
techniques?
2. What will you do if other teachers claim your class is too noisy and the students are
enjoying themselves too much?
3. How would you manage such a class?
4. What are the problems and situations you should look out for?
5. What are the difficulties you may run into?
6. How would students react to authentic assessment techniques?
7. What if all students care about are numerical grades?
8. Would students want to undergo these kinds of assessment techniques when all their
parents ask is for them to have high grades, to be on the Honor Roll, or, on the other
hand, just to pass their classes and obtain a college diploma no matter what?
9. What effects would authentic assessment have on the students’ chances to pass
government regulatory examinations?
10. What effects would authentic assessment have on the students’ chances of finding
jobs after their graduation?
Activity 9: What are Rubrics? What are the Uses of Rubrics? Write Rubrics for
this assignment:
You have asked your students to write a 12- to 15-minute speech addressing the youth
in their school or organization, using the excerpt below from Jose Rizal's The Reign of
Greed, following Aristotle’s rhetorical framework: Ethos, Logos, and Pathos. Each
student will deliver the speech before the class.
Where are the youth who will consecrate their golden hours, their
illusions, and their enthusiasms to the welfare of the land? Where are
they who will generously pour out their blood to wash away so much
shame, so much crime, so much abomination? Pure and spotless must the
victim be that the sacrifice may be acceptable! Where are you, youth,
who will embody in yourselves the vigor of life that has left our veins,
the purity of ideas that has been contaminated in our brains, the fire of
enthusiasm that has been quenched in our hearts? We await you, O
youth! Come, for we await you.
Scenario Analysis
Scenario 1:
A high school English teacher assigns students to read three novels by the same
author and develop a thesis statement about a common theme, consistent character
development, or social commentary in the novels. They must then defend that thesis in a
term paper with references. To set students up for success, the teacher begins by
providing them with a sample of an outstanding paper to read and analyze. The next
day, the class discusses what made the sample outstanding.
As their next assignment, the teacher gives students a sample paper of poor
quality. Again, they analyze and evaluate its features in some detail. Comparing the two
papers, students list essential differences. The class then uses this analysis to
collaboratively decide on the keys to a high-quality paper.
After identifying and defining those keys, the students share in the process of
transforming them into a rubric—a set of rating scales depicting a continuum of quality
for each key. The teacher provides examples of student work to illustrate each level on
the quality continuum.
Only after these specific understandings are in place do students draft their
papers. Then they exchange drafts, analyzing and evaluating one another's work and
providing descriptive feedback on how to improve it, always using the language of the
rubric. If students want descriptive feedback from their teacher on any particular
dimension of quality, they can request and will receive it. The paper is finished when
the student says it is finished. In the end, not every paper is outstanding, but most are of
high quality, and each student is confident of that fact before submitting his or her work
for final evaluation and grading (Stiggins, in press; Scenario 1 adapted by permission).
Scenario 2:
Gail is a 5th grader who gets her math test back with “60 percent” marked at the
top. She knows this means another F. So her losing streak continues, she thinks. She's
ready to give up on ever connecting with math.
But then her teacher distributes another paper—a worksheet the students will
use to learn from their performance on the math test. What's up with this? The
worksheet has several columns. Column one lists the 20 test items by number. Column
two lists what math proficiency each item tested. The teacher calls the class's attention
to the next two columns: Right and Wrong. She asks the students to fill in those
columns with checks for each item to indicate their performance on the test. Gail checks
12 right and 8 wrong.
The teacher then asks the students to evaluate as honestly as they can why they
got each incorrect item wrong and to check column five if they made a simple mistake
and column six if they really don't understand what went wrong. Gail discovers that
four of her eight incorrect answers were caused by careless mistakes that she knows
how to fix. But four were math problems she really doesn't understand how to solve.
Next, the teacher goes through the list of math concepts covered item by item,
enabling Gail and her classmates to determine exactly what concepts they don't
understand. Gail discovers that all four of her wrong answers that reflect a true lack of
understanding arise from the same gap in her problem-solving ability: subtracting 3-
digit numbers with regrouping. If she had just avoided those careless mistakes and had
also overcome this one gap in understanding, she might have received 100 percent.
Imagine that! If she could just do the test over…
She can. Because Gail's teacher has mapped out precisely what each item on the
test measures, the teacher and students can work in partnership to group the students
according to the math concepts they haven't yet mastered. The teacher then provides
differentiated instruction to the groups focused on their conceptual misunderstandings.
Together the class also plans strategies that everyone can use to avoid simple mistakes.
When that work is complete, the teacher gives students a second form of the same math
test. When Gail gets the test back with a grade of 100 percent, she jumps from her seat
with arms held high. Her winning streak begins (Stiggins, Arter, Chappuis, & Chappuis,
2004; Scenario 2 adapted by permission).
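The error-analysis worksheet in Scenario 2 can be sketched as a simple tally. Only the column structure (item number, concept tested, result) comes from the scenario; the item data below are hypothetical examples.

```python
# A minimal sketch of the Scenario 2 worksheet: tally which wrong answers
# were simple mistakes and which reveal a genuine conceptual gap.
# The items and concepts listed here are hypothetical examples.
from collections import defaultdict

# (item number, concept tested, result), where result is one of
# "right", "simple mistake", or "misunderstanding"
worksheet = [
    (1, "adding 3-digit numbers", "right"),
    (2, "subtracting 3-digit numbers with regrouping", "misunderstanding"),
    (3, "adding 3-digit numbers", "simple mistake"),
    (4, "subtracting 3-digit numbers with regrouping", "misunderstanding"),
]

gaps = defaultdict(int)   # concepts the student truly doesn't understand
simple_mistakes = 0       # errors the student already knows how to fix
for item, concept, result in worksheet:
    if result == "misunderstanding":
        gaps[concept] += 1
    elif result == "simple mistake":
        simple_mistakes += 1

print(dict(gaps))         # concepts to reteach in differentiated groups
print(simple_mistakes)    # careless errors to address with class-wide strategies
```

Grouping students by the concepts in `gaps` is what lets the teacher provide the differentiated instruction the scenario describes.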
“The real voyage of discovery consists not in seeking new landscapes, but in
having new eyes.”
Accurate Assessments + Effective Use = Student Success
Implications?
Assessment and grading procedures that once had the effect of helping some students
succeed must now serve to help all students succeed.
Testing Explosion
1950s: College admission testing
1960s: Districtwide testing
1970s: Statewide testing
1980s: National assessment
1990s: International testing
2000s: NCLB every-pupil testing
1. High-stakes tests are good for all students because they motivate learning.
2. If I threaten to fail you, it will cause you to try harder.
3. If a little intimidation doesn’t work, use a lot of intimidation.
4. The way to maximize learning is to maximize anxiety.
Remedy: We can’t, won’t, and shouldn’t stop high-pressure testing; so how do
we help more (ideally all) students expect to succeed?
5. It is the adults who use assessment results to make the most important
instructional decisions.
Remedy: Build assessment systems that inform every level of decision, from
annual decisions to the ones made every 3-4 minutes, and everything in between.
PROFOUND MISTAKE
Teachers and leaders don’t need to understand sound assessment practices – the
testing people will take care of us.
COUNTER BELIEF
They do need to understand sound assessment practices.
ASSESSMENT LEGACY
1. Assessment has been far more a matter of compliance than of teaching and
learning
2. Disregard of the information needs of the students and teachers who make the
most frequent and highest-impact decisions
3. Assessments that drive as many students to give up in hopelessness as they spur
on to more learning
4. Failure to provide practitioners with the assessment understandings needed
to help
OUR ASSESSMENT FUTURE
Remedy: Balance day-to-day classroom assessment in support of learning with periodic
assessments verifying learning
Crucial Distinction
Assessment of Learning
How much have students learned as of a particular point in time?
Assessment for Learning
How can we use assessment to help students learn more?
For students, teachers, administrators, and policy makers alike, accurate
assessments feed a stronger desire to learn, increased achievement, and
accountability for performance.
OVERVIEW
Assessment OF Learning vs. Assessment FOR Learning
• Reason: check status vs. improve learning
• To inform: others about students vs. students about themselves
• Focus: the standards themselves, e.g., state standards in writing, math, and
reading, vs. the enabling targets (KNOW, REASON, SKILLS, PRODUCTS) that lead to
those standards
• Example: high-stakes external assessments and classroom tests used for grading
vs. assessments that diagnose needs or help students see themselves improve
• Place in time: an event after learning vs. a process during learning
Assessment for Learning
TEACHER’S ROLE
1. Identify the standard
2. Deconstruct it into enabling targets
3. Transform the targets into a student-friendly version
4. Create accurate classroom assessments
5. Use them with students to track growth
STUDENT’S ROLE
1. Strive to understand what success looks like
2. Use each assessment to understand how to do better next time
EFFECTS
1. Helping students understand what good work looks like
2. Helping them compare their work with standards of excellence
3. Helping them understand how to close the gap
STRATEGIES
1. Student-friendly targets from the beginning
2. Models of strong and weak work
3. Continuous descriptive feedback
4. Teach self-assessment and goal setting
5. Teach one facet at a time
6. Teach focused revision
7. Teach self-reflection to track growth
Alternative assessment uses activities that reveal what students can do with
language, emphasizing their strengths instead of their weaknesses.
Alternative assessment instruments are not only designed and structured differently
from traditional tests, but are also graded or scored differently.
Sources: Haury, 1993; Mitchell, 1992; Stiggins, 1991b; Vandervoort, 1983; Wiggins,
1989, 1992, 1993b, 1993d.
Checklists
Checklists are often used for observing performance in order to keep track of a
student's progress or work over time. They can also be used to determine whether
students have met established criteria on a task. To construct a checklist, identify the
different parts of a specific communication task and any other requirements associated
with it. Create a list of these with columns for marking yes and no.
For example, using a resource list provided by the instructor, students contact
and interview a native speaker of the language they are studying, and then report back
to the class on the interview.
Students are told that they will need to speak for a minimum of three minutes
and that they may refer only to minimal notes while presenting. A checklist for
assessing students' completion of the task is shown in the popup window.
Checklists can be useful for classroom assessment because they are easy to
construct and use, and they align closely with tasks. At the same time, they are limited
in that they do not provide an assessment of the relative quality of a student's
performance on a particular task.
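The construction steps above (list the parts of the task, then mark each part yes or no) can be sketched as a small data structure. The criteria below are hypothetical examples drawn from the interview task:

```python
# A minimal checklist sketch: each criterion is marked yes (True) or no (False),
# with no judgment of relative quality, which is the limitation noted above.
# The criteria are hypothetical examples for the interview task.
criteria = [
    "Contacted a native speaker from the resource list",
    "Conducted the interview",
    "Spoke for a minimum of three minutes",
    "Used only minimal notes while presenting",
]

marks = {c: False for c in criteria}        # every column starts at "no"
marks["Conducted the interview"] = True     # mark completed parts "yes"

def task_complete(marks):
    """The task is met only when every criterion is checked yes."""
    return all(marks.values())

print(task_complete(marks))  # False until every box is checked
```

Because each entry is strictly yes/no, the structure is easy to build and score, but, as noted, it cannot distinguish an adequate performance from an outstanding one; that is what rubrics add.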
Rubrics
Rubrics are primarily used for language tasks that involve some kind of oral or
written production on the part of the student. It is possible to create a generic rubric that
can be used with multiple speaking or writing tasks, but assessment is more accurate
when the instructor uses rubrics that are fitted to the task and the goals of instruction.
1. Holistic rubrics
Holistic rubrics assign a single score to the overall quality of a performance and
commonly have four or six points. The popup window shows a sample four-point
holistic scale created for the purpose of assessing writing performance.
2. Analytic rubrics
Analytic rubrics rate several dimensions of a performance separately. The
instructor can give different weights to different dimensions, allowing more credit
for the dimensions that matter most to the overall success of the communication
task. For example, in a writing rubric, the dimension of content might have a total
point range of 30, whereas the range for mechanics might be only 10.
Analytic rubrics also provide more information to students about the strengths and
weaknesses of the various aspects of their language performance.
However, analytic scoring has also been criticized because the parts do not
necessarily add up to the whole. Providing separate scores for different dimensions of a
student's writing or speaking performance does not give the teacher or the student a
good assessment of the whole of a performance.
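The weighting described above amounts to simple arithmetic. In this sketch, the 1-5 rating scale, the dimension names beyond content and mechanics, and the point values are assumptions for illustration:

```python
# A minimal sketch of weighted analytic scoring: each dimension is rated
# on a common 1-5 scale (an assumption), then scaled to its own maximum
# point range so that heavier dimensions contribute more to the total.
max_points = {"content": 30, "organization": 20, "mechanics": 10}
ratings = {"content": 4, "organization": 5, "mechanics": 3}  # each out of 5

def weighted_score(ratings, max_points, scale=5):
    """Scale each rating to its dimension's point range and sum the parts."""
    return sum(ratings[d] * max_points[d] / scale for d in max_points)

print(weighted_score(ratings, max_points))
# content 4/5 of 30 = 24, organization 5/5 of 20 = 20, mechanics 3/5 of 10 = 6
```

Note that this total is exactly the "parts summed to a whole" that the criticism below targets: the arithmetic is sound, but a sum of part scores may still misrepresent the overall quality of the performance.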
3. Primary trait rubrics
A primary trait rubric rates performance on the single dimension most essential to
the task. For example, consider a task that requires a student to write a persuasive
letter to the editor of the school newspaper. A possible primary trait rubric for this
task is shown in the popup window.
This kind of rubric has the advantage of allowing teachers and students to focus
on one aspect or dimension of language performance. It is also a relatively quick and
easy way to score writing or speaking performance, especially when a teacher wants to
emphasize one specific aspect of that performance.
4. Multi-trait rubrics
The multi-trait approach is similar to the primary trait approach but allows for
rating performance on three or four dimensions rather than just one. Multi-trait rubrics
resemble analytic rubrics in that several aspects are scored individually. However,
where an analytic scale includes traditional dimensions such as content, organization,
and grammar, a multi-trait rubric involves dimensions that are more closely aligned
with features of the task.
America has spent 60 years building layer upon layer of district, state, national,
and international assessments at immense cost—and with little evidence that our
assessment practices have improved learning. True, testing data have revealed
achievement problems. But revealing problems and helping fix them are two entirely
different things. As a member of the measurement community, I find this legacy very
discouraging. It causes me to reflect deeply on my role and function. Are we helping
students and teachers with our assessment practices, or contributing to their problems?
Myth 2: School and community leaders know how to use assessment to improve
schools.
Myth 5: Grades and test scores maximize student motivation and learning.
Rick Stiggins is the founder of the Educational Testing Service's Assessment Training Institute in
Portland, Ore. Education Week, Vol. 27, Issue 08, pp. 28-29. Retrieved October 18, 2007, from
http://www.edweek.org/ew/articles/2007/10/17/08stiggins.h27.html?print=1
Abstract
This article is a summary of a presentation delivered in the 1st PEMEA Continuing Education
Program which aims to equip teachers with a basic know-how of portfolio assessment to enable
them to effectively assess student learning and evaluate educational outcomes. The seminar
focused on current trends in classroom assessment, particularly the development and use of
portfolios in the basic education level.
Classroom assessment has become more than a technical process of documenting
student learning. It has been transformed into a tool that also enhances the learning process. The
changing focus of classroom assessment is moving towards the evaluation of multiple
intelligences through alternative and authentic methods of assessment (Popham, 1999). One of
these methods is portfolio assessment. Portfolios provide the opportunity to evaluate the several
abilities exemplified by diverse learners. This characteristic benefits both the teacher and the
student: the former is able to gather additional evidence of learning, while the latter is given the
chance to self-monitor by engaging in a self-evaluation process. The use of portfolios is
similar to performance tests since it also allows students to demonstrate their understanding of
the subject matter by showing accomplished work. But unlike performance tests, the time spent
on and the outputs included in developing portfolios make them a richer source of evidence of
achievement. Furthermore, the affective aspects of learning, such as the persistence and effort
that contributed to the completion of a particular product, are also assessed in the process of
creating a portfolio.
The first image that comes to mind when one hears the word portfolio is a
portable case containing numerous documents. This is probably why, until now, most teachers
as well as students have perceived this assessment task as a mere clerical requirement of storing
outputs in a clear folder, a perception called the folder mentality. This mentality explains why
some students equate portfolios with scrapbooking or a simple compiling task. The
misconception occurs when the real use and purpose of portfolios are not well expressed and
understood. Portfolio development is definitely more than a simple compilation task, for it
requires a purposeful and systematic process of collection and evaluation. This process is
geared towards showing evidence of accomplishments aligned with specified learning targets
(McMillan, 2001; Popham, 1999). The collection process, including the selection of entries, is
planned and most of the time requires a collaborative effort among students, teachers, and
parents. Indeed, the portfolio exists to make meaning out of students’ output, to communicate
about their work, and to relate their learning to a larger context (Grace, 1992). These qualities
make portfolios a truly authentic assessment task.
As with other forms of alternative assessment, portfolios can be either
product- or process-oriented. Product-oriented portfolios typically showcase final outputs and
allow the comparison of student products. However, they show limited evidence of growth
since they include only revised and final outputs. Process-oriented portfolios, on the other
hand, highlight the evidence of growth since they emphasize the story behind the product. They
encourage students to reflect on the strategies they have used as well as to plan for the future.
The use of portfolios encourages active engagement through reflection and self-assessment, a
process by which the learner develops critical and creative thinking (Kubiszyn & Borich, 2004).
Likewise, it provides a comprehensive documentation of student growth, which is useful both
for planning instruction and for conducting student evaluation. In addition, the products
showcased in portfolios may also be used to demonstrate learners’ progress and achievements in
specific subject areas and across the curriculum (Lankes, 1995). This allows the learner to
appreciate the relevance of each subject and the interconnection of all subject areas. Portfolios
may also be used as a basis for discussion during parent-teacher conferences, for they include
tangible evidence of learning (Grace, 1992).
On the other hand, portfolio assessment, like other methods, has some disadvantages
that need to be addressed. One of these is the demand for additional time. Unlike selected-
response tests, reviewing and commenting on a student’s work and portfolio takes more time.
Moreover, extra time is also needed for planning and developing materials as well as for
conferring with parents and other teachers. Another drawback is the demand for additional
resources such as multimedia equipment or a bigger space for storage (Sweet, 1993). Hence,
teachers as well as administrators should be equipped with a thorough understanding of this
assessment task and should be willing to allot time for additional planning, conferences, and
preparation of strategies. Still, the benefits that can be gained from using portfolios outweigh
these common drawbacks.
Portfolio construction indeed requires time and additional preparation, but its fruits are
definitely worthwhile. Planning is essential for successful portfolio assessment. Part of this
planning is determining the purpose of using portfolios as a method of assessment, done by
asking oneself, “Why do I want my students to do this in the first place?” Decisions about what
products to include in a portfolio should be based on its identified purpose. Without a purpose,
a portfolio is just a simple folder of student work. Equally important is the identification of
learning targets: the competencies that students should achieve at the end of an activity or
program. From these learning targets the teacher can start devising a more specific plan to
further guide the students in the portfolio-making process. Involving the students from
portfolio planning through assessment encourages personal ownership, which makes the
portfolio meaningful and relevant to them. According to Stiggins (1997), teachers can help
emphasize this ownership by telling students to package their portfolios as personal storybooks
describing their stories of learning and growth.
After clarifying the purpose and learning targets, there must be an agreement on what
type of portfolio will be developed, because the type of portfolio will also determine the
selection of entries as well as the criteria for evaluation. Below is a table of common types and
formats of student portfolios:
STUDENT PORTFOLIOS
Type: Showcase portfolios, composed of predetermined work samples
Form: Special holding cases or displays
The type of portfolio will depend on the purpose of assessment, while the form will vary
depending on the type of products to be presented and the age and preferences of the
students. Nowadays, multimedia and online portfolios are preferred over traditional
printed media, because they make the work “accessible,
portable, examinable, widely distributable, and the performance replayable and reviewable”
(Sheingold, 1992, cited in Barrett, 1994). In addition, multimedia and online portfolios
make it easier to transfer samples of student work from teacher to teacher and from school
to school (Lankes, 1995).
After determining the type and format, it is equally important to discuss with the class
what needs to be included. The portfolio is generally subdivided into three parts: (a) the front
matter, which includes the cover page, acknowledgment, table of contents, introduction, and a
setting of expectations to prepare the reader; (b) the middle part, the heart of the portfolio,
which contains the selected work samples and reflections; and (c) the back matter, which
synthesizes the learning experience and includes a plan of action to pursue lifelong learning.
The selection of entries will depend on the purpose and the type of portfolio being developed,
but all types contain work samples that showcase what and how much learners know, the
process they went through to acquire and deepen this knowledge and/or skill, and the effort
they exerted in the learning process. Portfolio development is a collaborative process: the
teacher constantly needs to scaffold students by clarifying their conceptions about the purpose
of portfolio-making and by showing work samples that match the different levels of the scoring
rubric.
Portfolios are assessed using a rubric that the learners are familiar with. A good
assessment system allows students and teachers to have a shared understanding of what
constitutes good work (Barrett, 1994). These scoring rubrics or guides are used to evaluate
the entire portfolio rather than each individual entry. A wide variety of criteria can be used
to evaluate the quality of a portfolio; the scoring criteria will depend on the purpose of
assessment and the type of portfolio being evaluated. Unlike traditional tests, authentic
assessment involves students in the evaluation process. Student self-evaluation facilitates better
learning and allows the reader to gain insights into the learning strategies used (Popham, 1999).
Allowing students to evaluate their own efforts and performance promotes appreciation and
valuing of the reflection process (Mondock, 1997). However, the teacher needs to clarify to the
students that self-evaluation is not a random process of selecting a preferred grade; instead, it
involves a system of justifying the chosen grade. The teacher also has the right to lower or
raise the grade if the student's justification does not support the rating. Hence, the final
grade is a collaborative outcome between the teacher and the students.
Conclusion
A portfolio basically allows the learner to document and demonstrate what he/she has
accomplished. It also allows both the learner and teacher to evaluate the progress achieved in a
given period of time. Successful portfolio projects do not happen without considerable planning
and effort on the part of both the teacher and the student. A portfolio is definitely a labor of love
for it tells a story of a learner’s growth and active engagement in the learning process.
References
Grace, C. (1992). The portfolio and its use: Developmentally-appropriate assessment of young
children. Urbana, IL: ERIC Clearinghouse on Elementary and Early Childhood
Education.
McMillan, J. (2001). Classroom assessment: Principles and practice for effective instruction
(2nd ed.). Boston, MA: Allyn & Bacon.
Mondock, S. (1997). Portfolios: The story behind the story. English Journal, 86(1), 59-64.
Popham, W. J. (2005). Classroom assessment: What teachers need to know (4th ed.). Boston,
MA: Allyn and Bacon.
Sweet, D. (1993). Student portfolios: Classroom uses. Education Research Consumer
Guide No. 8. Retrieved April 29, 2009 from http://www.ed.gov/pubs/OR/ConsumerGuides/
classuse.html
Stiggins, R. (1997). Student-centered classroom assessment (2nd ed.). Upper Saddle River,
NJ: Prentice-Hall.