
Introduction:

Introduction of new teaching strategies often expands the expectations for
student learning, creating a parallel need to redefine how we collect the evidence that
assures both us and our students that these expectations are in fact being met. The
default assessment strategy of the typical large, introductory, college-level science
course, the multiple-choice (fixed-response) exam, when used to best advantage can
provide feedback about what students know and recall about key concepts. Leaving
aside the difficulty inherent in designing a multiple-choice exam that captures deeper
understandings of course material, its limitations become particularly notable when
learning objectives include what students are able to do as well as know as the result of
time spent in a course. If we want students to build their skill at conducting guided
laboratory investigations, developing reasoned arguments, or communicating their
ideas, other means of assessment, such as papers, practical exams, demonstrations of
problem solving, model building, debates, or oral presentations, to name a few, must be
enlisted to serve as benchmarks of progress
and/or as the basis for assigning grades. What happens, however, when students are novices
at responding to these performance prompts when they are used in the context of
science learning, and faculty are novices at communicating to students what their
expectations for a high-level performance are? The more familiar terrain of the multiple-
choice exam can lull both students and instructors into a false sense of security about
the clarity and objectivity of the evaluation criteria (Wiggins, 1989) and make these
other types of assessment strategies seem subjective and unreliable (and sometimes
downright unfair) by comparison. In a worst-case scenario, the use of alternatives to the
conventional exam to assess student learning can lead students to feel that there is an
implicit or hidden curriculum—the private curriculum that seems to exist only in the
mind's eye of a course instructor.
Use of rubrics provides one way to address these issues. Rubrics not only can be
designed to set standards for levels of accomplishment and to guide and
improve performance, but can also be used to make these standards clear and
explicit to students. Although the use of rubrics has become common practice in the K–
12 setting (Luft, 1999), the good news for those instructors who find the idea attractive
is that more and more examples of the use of rubrics are being noted at the college and
university level, with a variety of applications (Ebert-May, undated; Ebert-May et al.,
1997; Wright and Boggs, 2002; Moni et al., 2005; Porter, 2005; Lynd-Balta, 2006).
What is a Rubric?
Although definitions for the word “rubric” abound, for the purposes of this
feature article we use the word to denote a type of matrix that provides scaled levels of
achievement or understanding for a set of criteria or dimensions of quality for a given
type of performance, for example, a paper, an oral presentation, or use of teamwork
skills. In this type of rubric, the scaled levels of achievement (gradations of quality) are
indexed to a desired or appropriate standard (e.g., to the performance of an expert or to
the highest level of accomplishment evidenced by a particular cohort of students). The
possible levels of attainment for each of the criteria or dimensions of
performance are described fully enough to make them useful for judgment of, or
reflection on, progress toward valued objectives (Huba and Freed, 2000).
A rubric is an assessment tool that clearly indicates achievement criteria across
all the components of any kind of student work, from written to oral to visual. It can be
used for marking assignments, class participation, or overall grades. There are two types
of rubrics: holistic and analytical.
Holistic Rubrics:
Holistic rubrics group several different assessment criteria and classify them
together under grade headings or achievement levels.
(For a sample participation rubric, see the Appendix. The Responding to Writing
Assignments teaching tip includes holistic rubrics specifically designed for writing
assignments. See also Facione and Facione's (1994) Holistic Critical Thinking Rubric,
useful in many disciplines.)
Analytical Rubrics:
Analytic rubrics separate different assessment criteria and address them
comprehensively. In a horizontal assessment rubric, the top axis includes values that can
be expressed numerically, by letter grade, or as a scale from Exceptional to Poor
(or Professional to Amateur, and so on). The side axis includes the assessment criteria
for each component. Analytical rubrics can also permit different weightings for different
components. (See the Teamwork VALUE Rubric, one of the many rubrics
developed by the Association of American Colleges and Universities (AAC&U).)
A good way to think about what distinguishes a rubric from an explanation of an
assignment is to compare it with a more common practice. When communicating to
students our expectations for writing a lab report, for example, we often start with a list
of the qualities of an excellent report to guide their efforts toward successful
completion; we may have drawn on our knowledge of how scientists report their
findings in peer-reviewed journals to develop the list. This checklist of criteria is easily
turned into a scoring sheet (to return with the evaluated assignment) by the addition of
checkboxes for indicating either a “yes-no” decision about whether each criterion has
been met or the extent to which it has been met. Such a checklist in fact has a number
of fundamental features in common with a rubric (Bresciani et al., 2004), and it is a good
starting point for constructing a rubric.
However, what is referred to as a “full rubric” is distinguished from the scoring
checklist by its more extensive definition and description of the criteria or dimensions of
quality that characterize each level of accomplishment. Table 1 provides one example of
a full rubric (of the analytical type, as defined above) that was
developed from the checklist in Figure 1. This example uses the typical grid format in
which the performance criteria or dimensions of quality are listed in the rows, and the
successive cells across the three columns describe a specific level of performance for
each criterion. The full rubric in Table 1, in contrast to the checklist that only indicates
whether a criterion exists (Figure 1), makes it far clearer to a student presenter what the
instructor is looking for when evaluating student work.

Table 1. A full analytical rubric for assessing student poster presentations, developed from
the scoring checklist (simple rubric) in Figure 1. Each criterion is scored at a High (3),
Medium (2), or Low (1) level of achievement, and the criteria are grouped into three
weighted categories.

Scientific approach (×1.0)

● Purpose
High (3): The research question or problem is well-defined and connected to prior
knowledge in the chosen area of study.
Medium (2): The question or problem is defined adequately, but may lack a clear
rationale or purpose that stems from prior knowledge.
Low (1): The study shows evidence of focus within a given topical area, but the search
for new knowledge does not seem to be guided by an overlying question; there may be
little or no stated connection to prior knowledge.

● Design
High (3): The experimental design is appropriate to the problem; it is efficient,
workable, and repeatable. If appropriate to the design, important variables are
identified and contrasted to the standard conditions. The design allows for a sufficient
number of comparisons of variables and of tests to provide meaningful data.
Medium (2): The design is appropriate to the question or problem, but may fail to
identify an important variable or to account for all important aspects of standard
conditions. Or, it may lack enough comparisons or tests to obtain data that have a
clear meaning.
Low (1): There may be some evidence of an experimental design, but it may be
inappropriate or not used well. The design may fail to account for an important
variable or a major aspect of standard conditions. Another experimenter would have
difficulty repeating the experiment.

● Data
High (3): The data are analyzed and expressed in an accurate way. Statistical analysis
of data is present.
Medium (2): Most data are analyzed thoroughly and presented accurately, but with
minor flaws. There may be no evident use of statistical analysis.
Low (1): The analysis and presentation may be inaccurate or incomplete.

● Conclusions
High (3): Inferences and conclusions are in all cases connected to, and are consistent
with, the study findings.
Medium (2): Draws conclusions that are supported by the data, but may not directly
connect the conclusions to the relevant evidence.
Low (1): Reported inferences and conclusions are not supported by the data.

Presentation (×0.75)

● Overall Appearance
High (3): The poster is visually appealing; it draws the viewer in for a closer look.
Medium (2): The poster is visually appealing, but may contain too much text
information and use font sizes that are difficult to read.
Low (1): The poster consists of text only or may appear to have been hastily
assembled.

● Layout
High (3): The material is laid out in a clear and consistent way. The flow of ideas is
concise and cohesive.
Medium (2): The material is organized in an appropriate way, but in places may lack
clarity or consistency. There may be extraneous material.
Low (1): There may be little evidence of a cohesive plan for the layout and design of
the poster. Ideas seem jumbled and/or disconnected.

● Figures and Tables
High (3): The figures and/or tables are appropriately chosen and well-organized; data
trends are illuminated.
Medium (2): General trends in the data are readily seen from the figures and tables; in
some cases, tables and figures may provide redundant information or raw data.
Low (1): Data may be represented inaccurately or in an inappropriate format.

● Writing
High (3): There are no grammatical or spelling errors that detract from readability; use
of technical terminology is appropriate.
Medium (2): Minor errors in grammar and spelling may be present, but they do not
detract from the overall readability; there may be one or two misuses of technical
language.
Low (1): The readability may be seriously limited by poor grammar, spelling, or word
usage.

● Fielding of Questions
High (3): Responses to the judges' questions exhibit sound knowledge of the study and
the underlying science concepts; the presenter exhibits poise, good grace, and
enthusiasm.
Medium (2): Responses to the judges' questions show familiarity with the study design
and conduct, but may lack clear or accurate connections to basic science concepts.
The presenter exhibits enthusiasm, but shows signs of discomfort with some of the
questions.
Low (1): The presenter may show difficulty in responding to questions, or responses
may lack insight.

Originality (×0.5)

● Creative Expression
High (3): The research topic and design are new to the presenter; the answers to the
research question posed by the presenter do not represent readily researched,
common knowledge. The project is unlike any other in this or prior years'
competitions.
Medium (2): The research topic and design are new to the experimenter, but the
research findings may overlap in significant ways with readily researched, common
knowledge. The project may be in the same general topical area as projects from prior
years' competitions, but has an original design.
Low (1): The presenter has evidenced some original expression in the design of the
study and the poster presentation, but the basic ideas have appeared in a science fair
project book or in other projects in the competition.

● Acknowledgment of Assistance
High (3): Major aspects of the project were carried out by the presenter without
assistance; if assistance was needed to learn a new technique or for the provision of
materials, the assistance is acknowledged.
Medium (2): Major aspects of the project require methods and/or equipment that are
not standard for a high school science lab setting, but this use of other resources and
assistance may not be acknowledged.
Low (1): The presenter was involved in the design of the study, but major aspects were
carried out by another individual; this assistance may or may not be acknowledged.

How to make a Rubric?


● Decide what criteria or essential elements must be present in the student's work to
ensure that it is high in quality. At this stage, you might even consider selecting
samples of exemplary student work that can be shown to students when setting
assignments.
● Decide how many levels of achievement you will include on the rubric and how they
will relate to your institution's definition of grades as well as your own grading
scheme.
● For each criterion, component, or essential element of quality, describe in detail
what the performance at each achievement level looks like.
● Leave space for additional, tailored comments or overall impressions and a final
grade.

Design a Rubric:
A more challenging aspect of using a rubric can be finding an existing rubric
that matches a particular assignment, with its specific set of content and process
objectives, closely enough. This challenge is particularly acute for so-called analytical
rubrics. Analytical rubrics use discrete criteria to set forth more than one measure of the
levels of accomplishment for a particular task, as distinguished from holistic rubrics,
which provide more general, uncategorized (“lumped together”) descriptions of overall
dimensions of quality for different levels of mastery. Many users of analytical rubrics
therefore resort to developing their own rubric to achieve the best match between an
assignment and its objectives for a particular course.
As an example, examine the two rubrics presented in Tables 2 and 3, in
which Table 2 shows a holistic rubric and Table 3 shows an analytical rubric. These two
versions of a rubric were developed to evaluate student essay responses to a particular
assessment prompt. In this case the prompt is a challenge in which students are to
respond to the statement, “Plants get their food from the soil. What about this
statement do you agree with? What about this statement do you disagree with? Support
your position with as much detail as possible.” This assessment prompt can serve as
both a pre-assessment, to establish what ideas students bring to the teaching unit, and
as a post-assessment in conjunction with the study of photosynthesis. As such, the
rubric is designed to evaluate student understanding of the process of photosynthesis,
the role of soil in plant growth, and the nature of food for plants. The maximum score
using either the holistic or the analytical rubric would be 10, with 2 points possible for
each of five criteria. The holistic rubric outlines five criteria by which student responses
are evaluated, puts a 3-point scale on each of these criteria, and holistically describes
what a 0-, 1-, or 2-point answer would contain. However, this holistic rubric stops short
of defining in detail the specific concepts that would qualify an answer for 0, 1, or 2
points on each criterion's scale. The analytical rubric shown in Table 3 does define these
concepts for each criterion, and it is in fact a fuller development of the holistic rubric
shown in Table 2. As mentioned, the development of an analytical rubric is challenging
in that it pushes the instructor to define specifically the language and depth of
knowledge that students need to demonstrate competency, and it is an attempt to
make discrete what is fundamentally a fuzzy, continuous distribution of ways an
individual could construct a response. As such, informal analysis of student responses
can often play a large role in shaping and revising an analytical rubric, because student
answers may hold conceptions and misconceptions that have not been anticipated by
the instructor.
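As a small worked example of the scoring arithmetic described above, the Python lines
below total the points for one hypothetical essay response scored with the analytical
rubric of Table 3; the per-criterion points are invented for illustration.

# Hypothetical per-criterion points (0-2) assigned to one student's essay
# response using the five-criterion analytical rubric in Table 3.
response_points = {
    "food as carbon-rich molecules": 2,
    "food as energy source": 1,
    "photosynthesis as process": 1,
    "purpose of photosynthesis": 2,
    "roles of soil": 0,
}

# The total is a simple unweighted sum over the five criteria.
total = sum(response_points.values())
print(f"Total: {total} / {2 * len(response_points)}")  # out of the maximum of 10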

Table 2. Holistic rubric for responses to the challenge statement: “Plants get their
food from the soil”

Each of the five criteria below is scored on the same scale:
2 points: Demonstrates complete understanding of the concept with no
misconceptions.
1 point: Addresses the concept, but in an incomplete way and/or with one or more
misconceptions.
0 points: Does not address the concept in the answer.

The response demonstrates an understanding that…
1. Food can be thought of as carbon-rich molecules including sugars and starches.
2. Food is a source of energy for living things.
3. Photosynthesis is a specific process that converts water and carbon dioxide into
sugars.
4. The purpose of photosynthesis is the production of food by plants.
5. Soil may provide things other than food that plants need.

Table 3. An analytical rubric for responses to the challenge statement, “Plants get
their food from the soil”

Criterion 1. Demonstrates an understanding that food can be thought of as carbon-rich
molecules including sugars and starches.
2 points: Defines food as sugars, carbon skeletons, starches, or glucose. Must go
beyond use of the word food.
1 point: Attempts to define food and give examples of food, but does not include
sugars, carbon skeletons, or starches.
0 points: Does not address what could be meant by food, or only talks about plants
“eating” or absorbing dirt.

Criterion 2. Demonstrates an understanding that food is a source of energy for living
things.
2 points: Describes food as an energy source and discusses how living things use food.
1 point: Discusses how living things may use food, but does not associate food with
energy.
0 points: Does not address the role of food.

Criterion 3. Demonstrates an understanding that photosynthesis is a specific process
that converts water and carbon dioxide into sugars.
2 points: Discusses photosynthesis in detail, including a description of the reactants
(water and carbon dioxide), their conversion with energy from sunlight to form
glucose/sugars, and the production of oxygen.
1 point: Partially discusses the process of photosynthesis and may mention a subset of
the reactants and products, but does not demonstrate understanding of
photosynthesis as a process.
0 points: Does not address the process of photosynthesis. May say that plants need
water and sunlight.

Criterion 4. Demonstrates an understanding that the purpose of photosynthesis is the
production of food by plants.
2 points: Discusses the purpose of photosynthesis as the making of food and/or sugar
and/or glucose by plants.
1 point: Associates photosynthesis with plants, but does not discuss photosynthesis as
the making of food and/or sugar and/or glucose and/or starch.
0 points: Does not address the purpose of photosynthesis.

Criterion 5. Demonstrates an understanding that soil may provide things other than
food that plants need.
2 points: Discusses at least two appropriate roles for soil for some plants. Possible
roles include the importance of minerals (N, P, K), water, and structural support from
the soil.
1 point: Discusses at least one appropriate role for soil. Possible roles include the
importance of minerals (N, P, K), vitamins, water, and structural support from the soil.
0 points: Does not address an appropriate role for soil. The use of the word nutrient
without further elaboration is insufficient for credit.

The various approaches to constructing rubrics can, in a sense, also be
characterized as holistic or analytical. Those who offer recommendations about how
to build rubrics often approach the task from the perspective of describing the essential
to build rubrics often approach the task from the perspective of describing the essential
features of rubrics (Huba and Freed, 2000; Arter and McTighe, 2001), or by outlining a
discrete series of steps to follow one by one (Moskal, 2000; Mettler, 2002; Bresciani et
al., 2004; MacKenzie, 2004). Regardless of the recommended approach, there is general
agreement that a rubric designer must approach the task with a clear idea of the desired
student learning outcomes (Luft, 1999) and, perhaps more importantly, with a clear
picture of what meeting each outcome “looks like” (Luft, 1999; Bresciani et al., 2004). If
this picture remains fuzzy, perhaps the outcome is not observable or measurable and
thus not “rubric-worthy.”
Reflection on one's particular answer to two critical questions—“What do I want
students to know and be able to do?” and “How will I know when they know it and can
do it well?”—is not only essential to beginning construction of a rubric but also can help
confirm the choice of a particular assessment task as being the best way to collect
evidence about how the outcomes have been met. A first step in designing a rubric, the
development of a list of qualities that the learner should demonstrate proficiency in by
completing an assessment task, naturally flows from this prior rumination on outcomes
and on ways of collecting evidence that students have met the outcome goal. A good
way to get started with compiling this list is to view existing rubrics for a similar task,
even if those rubrics were designed for younger or older learners or for different subject
areas. For example, if one sets out to develop a rubric for a class presentation, it is
helpful to review the criteria used in a rubric for oral communication in a graduate
program (organization, style, use of communication aids, depth and accuracy of content,
use of language, personal appearance, responsiveness to audience; Huba and Freed,
2000) to stimulate reflection on and analysis of what criteria (dimensions of quality)
align with one's own desired learning outcomes. There is technically no limit to the
number of criteria that can be included in a rubric, other than presumptions about the
learners' ability to digest and thus make use of the information that is provided. In the
example in Table 1, only three weighted categories of criteria were used, as judged
appropriate for the desired outcomes of the high school poster competition.
After this list of criteria is honed and pruned, the dimensions of quality and
proficiency will need to be separately described (as in Table 1), and not just listed. The
extent and nature of this commentary depends upon the type of rubric (analytical or
holistic). Expanding the criteria is an inherently difficult task because it requires a
thorough familiarity with both the elements comprising the highest
standard of performance for the chosen task and the range of capabilities of learners at
a particular developmental level. A good way to get started is to think about how the
attributes of a truly superb performance could be characterized in each of the important
dimensions—the level of work that is desired for students to aspire to. Common advice
(Moskal, 2000) is to avoid use of words that connote value judgments in these
commentaries, such as “creative” or “good” (as in “the use of scientific terminology
is ‘good’”). These terms are so general as to be valueless in terms of
their ability to guide a learner to emulate specific standards for a task, and although it is
admittedly difficult, they need to be defined in a rubric. Again, perusal of existing
examples is a good way to get started with writing the full descriptions of criteria.
Fortunately, there are a number of data banks that can be searched for rubric templates
of virtually all types (Chicago Public Schools, 2000; Arter and McTighe, 2001; Shrock,
2006; Advanced Learning Technologies, 2006; University of Wisconsin-Stout, 2006).
The final step toward filling in the grid of the rubric is to benchmark the
remaining levels of mastery or gradations of quality. A number of descriptors are
commonly used to denote the levels of mastery beyond the conventional
excellent-to-poor scale (with or without accompanying symbols for letter
grades); several of the more common examples are listed below:
● Scale 1: Exemplary, Proficient, Acceptable, Unacceptable
● Scale 2: Substantially Developed, Mostly Developed, Developed, Underdeveloped
● Scale 3: Distinguished, Proficient, Apprentice, Novice
● Scale 4: Exemplary, Accomplished, Developing, Beginning
In this case, unlike the number of criteria, there might be a natural limit to how
many levels of mastery need this expanded commentary. Although it is common to have
multiple levels of mastery, as in the examples above, some educators (Bresciani et al.,
2004) feel strongly that it is not possible for individuals to make operational sense out of
inclusion of more than three levels of mastery (in essence, a “there, somewhat there,
not there yet” scale). As expected, the final steps in having a “usable” rubric are to ask
both students and colleagues to provide feedback on the first draft, particularly with
respect to the clarity and gradations of the descriptions of criteria for each level of
accomplishment, and to try out the rubric using past examples of student work.
Huba and Freed (2000) offer the interesting recommendation that the
descriptions for each level of performance provide a “real world” connection by stating
the implications for accomplishment at that level. This description of the consequences
could be included in a criterion called “professionalism.” For example, in a rubric for
writing a lab report, at the highest level of mastery the rubric could state, “This report of
your study would persuade your peers of the validity of your findings and would be
publishable in a peer-reviewed journal.” Acknowledging this recommendation in the
construction of a rubric might help to steer students toward the perception that the
rubric represents the standards of a profession, and away from the perception that a
rubric is just another way to give a particular teacher what he or she wants (Andrade
and Du, 2005).
As a further aid for beginning instructors, a number of Web sites, both
commercial and open access, have tools for online construction of rubrics from
templates, for example, Rubistar (Advanced Learning Technologies, 2006) and TeAch-
nology (TeAch-nology, undated). These tools allow the would-be “rubrician” to select
from among the various types of rubrics, criteria, and rating scales (levels of mastery).
Once these choices are made, editable descriptions fall into place in the proper cells in
the rubric grid. The rubrics are stored in the site databases, but typically they can be
downloaded using conventional word processing or spreadsheet software. Further
editing can result in a rubric uniquely suitable for your teaching/learning goals.

Developing Rubrics Interactively with your Students:


You can enhance students’ learning experience by involving them in the rubric
development process. Either as a class or in small groups, students decide upon criteria
for grading the assignment. It would be helpful to provide students with samples of
exemplary work so they could identify the criteria with greater ease. In such an activity,
the instructor functions as facilitator, guiding the students toward the final goal of a
rubric that can be used on their assignment. This activity not only results in a richer
learning experience, but also gives students a greater sense of ownership and
inclusion in the decision-making process.

Rubrics: Useful Assessment Tools


Rubrics can be excellent tools to use when assessing students’ work for several
reasons. You might consider developing and using rubrics if:
● You find yourself re-writing the same comments on several different students'
assignments.
● Your marking load is high, and writing out comments takes up a lot of your time.
● Students repeatedly question you about the assignment requirements, even after
you've handed back the marked assignment.
● You want to address the specific components of your marking scheme for student
and instructor use both prior to and following the assignment submission.
● You find yourself wondering if you are grading or commenting equitably at the
beginning, middle, and end of a grading session.
● You have a team of graders and wish to ensure validity and inter-rater reliability.

How to use rubrics effectively?


Develop a Different Rubric for Each Assignment:
Although this takes time in the beginning, you’ll find that rubrics can be changed
slightly or re-used later. If you are seeking pre-existing rubrics, consider Rhodes (2009)
for the AAC&U VALUE rubrics, cited below, or Facione and Facione (1994). Whether you
develop your own or use an existing rubric, practice with any other graders in your
course to achieve inter-rater reliability.
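As a minimal illustration of checking agreement between graders, the short Python
sketch below compares two graders' rubric scores on the same five assignments; the
scores are invented for this example, and simple percent agreement is used as a
stand-in for more formal statistics such as Cohen's kappa.

# Hypothetical levels (3 = high, 2 = medium, 1 = low) that two graders
# assigned to the same five assignments on a single rubric criterion.
grader_a = [3, 2, 2, 1, 3]
grader_b = [3, 2, 1, 1, 3]

# Percent agreement: the fraction of assignments on which the graders
# chose the same level. Chance-corrected measures (e.g., Cohen's kappa)
# are stricter but start from the same pairwise comparison.
matches = sum(1 for a, b in zip(grader_a, grader_b) if a == b)
print(f"Agreement: {matches / len(grader_a):.0%}")  # 80% for these scores

Scores that disagree by more than one level are a signal to revisit the wording of the
relevant criterion before marking the full set of assignments.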
Be Transparent:
Give students a copy of the rubric when you assign the performance task. These
are not meant to be surprise criteria. Hand the rubric back with the assignment.
Integrate Rubrics into Assignments:
Require students to attach the rubric to the assignment when they hand it
in. Some instructors ask students to self-assess or give peer feedback using the rubric
prior to handing in the work.
Leverage Rubrics to Manage Your Time:
When you mark the assignment, circle or highlight the achieved level of
performance for each criterion on the rubric. This is where you will save a great deal of
time, as detailed written comments are not required. Include any additional specific or
overall comments that do not fit within the rubric's criteria.
Be Prepared to Revise Your Rubrics:
Decide upon a final grade for the assignment based on the rubric. If you find, as
some do, that presented work meets criteria on the rubric but nevertheless seems to
have exceeded or not met the overall qualities you’re seeking, revise the rubric
accordingly for the next time you teach the course. If the work achieves highly in some
areas of the rubric but not in others, decide in advance how the assignment grade is
actually derived. Some use a formula, or multiplier, to give different weightings to
various components; be explicit about this right on the rubric.
Consider Developing Online Rubrics:
If an assignment is being submitted to an electronic drop box, you may be able to
develop and use an online rubric. The scores from these rubrics are automatically
entered in the online grade book in the course management system.

Analyzing and Reporting Information Gathered from a Rubric:


Whether used with students to set learning goals, as scoring devices for grading
purposes, to give formative feedback to students about their progress toward important
course outcomes, or for assessment of curricular and course innovations, rubrics allow
for both quantitative and qualitative analysis of student performance. Qualitative
analysis could yield narrative accounts of where students in general fell in the cells of
the rubric, and these accounts can provide interpretations, conclusions, and
recommendations related to student learning and development. For quantitative
analysis, the various levels
of mastery can be assigned different numerical scores to yield quantitative rankings, as
has been done for the sample rubric in Table 1. If desired, the criteria can be given
different scoring weightings (again, as in the poster presentation rubric in Table 1) if
they are not considered to have equal priority as outcomes for a particular purpose. The
total scores given to each example of student work on the basis of the rubric can be
converted to a grading scale. Overall performance of the class could be analyzed for
each of the criteria or competencies.
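As a minimal sketch of this kind of quantitative analysis, the Python fragment below
computes a weighted total for one hypothetical poster using the category weights from
the rubric in Table 1; the individual level scores are invented for illustration.

# Category weights from the poster rubric in Table 1.
weights = {"scientific approach": 1.0, "presentation": 0.75, "originality": 0.5}

# Hypothetical levels (3 = high, 2 = medium, 1 = low) awarded to one poster
# for each criterion, grouped by weighted category.
scores = {
    "scientific approach": [3, 2, 3, 3],  # purpose, design, data, conclusions
    "presentation": [2, 3, 2, 3, 2],      # appearance, layout, figures, writing, questions
    "originality": [3, 2],                # creative expression, acknowledgment
}

# Each criterion's level is multiplied by its category weight and summed.
total = sum(weights[cat] * sum(levels) for cat, levels in scores.items())
maximum = sum(weights[cat] * 3 * len(levels) for cat, levels in scores.items())
print(f"Weighted score: {total} / {maximum}")  # can then be mapped to a grading scale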
Multiple-choice exams have the advantage that they can be computer or
machine scored, allowing for analysis and storage of more specific information about
different content understandings (particularly misconceptions) for each item, and for
large numbers of students. The standard rubric-referenced assessment is not designed
to easily provide this type of analysis about specific details of content understanding; for
the types of tasks for which rubrics are designed, content understanding is typically
displayed by some form of narrative, free-choice expression. To try to capture both the
benefits of the free-choice narrative and generate an in-depth analysis of students'
content understanding, particularly for large numbers of students, a special type of
rubric, called the double-digit rubric, can be used. A large-scale example of the use of this type
of scoring rubric is given by the Trends in International Mathematics and Science Study
(1999). In this study, double-digit rubrics were used to code and analyze student
responses to short essay prompts.
To better understand how and why these rubrics are constructed and used, refer
to the example provided in Figure 2. This double-digit rubric was used to score and
analyze student responses to an essay prompt about ecosystems that was accompanied
by the standard “sun-tree-bird” diagram (a drawing of the sun, a tree, and other plants;
various primary and secondary consumers; and some not well-identifiable decomposers,
with interconnecting arrows that could be interpreted as energy flow or cycling of
matter). A brief narrative, summarizing the “big ideas” that could be included in a
complete response, along with a sample response that captures many of these big ideas,
accompanies the actual rubric. The rubric itself specifies major categories of student
responses, from complete to various levels of incompleteness. Each level is assigned one
of the first digits of the scoring code, which could actually correspond to a conventional
point total awarded for a particular response. In the example in Figure 2, a complete
response is awarded a maximum number of 4 points, and the levels of partially complete
answers, successively lower points. Here, the “incomplete” and “no response”
categories are assigned first digits of 7 and 9, respectively, rather than 0, for clarity in
coding; they can be converted to zeroes for averaging and reporting of scores.
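To make the coding scheme concrete, here is a minimal Python sketch of how
double-digit codes might be converted to points and tallied; the specific codes and
category meanings are hypothetical stand-ins rather than the actual codes from
Figure 2 or the TIMSS rubric.

from collections import Counter

# Hypothetical double-digit codes for one essay item: the first digit is the
# score level (4 = complete, 3-1 = partially complete, 7 = incomplete,
# 9 = no response); the second digit marks a response category within a level.
codes = [41, 42, 31, 21, 22, 71, 90, 41]

def points(code):
    # The first digit is the point value; the 7 (incomplete) and
    # 9 (no response) categories are converted to zero for averaging.
    first = code // 10
    return 0 if first in (7, 9) else first

average = sum(points(c) for c in codes) / len(codes)
print(f"Average score: {average:.2f}")

# The full two-digit codes remain available for content analysis, e.g., how
# often each response category (including specific misconceptions) occurred.
print(Counter(codes))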

Conclusion:
Today rubrics are utilized in the classroom as an instructional tool across a wide
range of disciplines; nevertheless, the number of teachers using rubrics remains small.
Although the use of rubrics has become popular in assessing various performance-based
tasks, many educators remain unaware of rubrics and their benefits. Many
teachers tend to rely on personal judgment in evaluating students' performance,
a judgment that can be inconsistent and even biased. Sometimes students complain that
they are assessed unfairly and that they deserve higher marks. It is therefore important
for instructors to communicate their assessment philosophy to their
students by providing a rubric with any task they assign in class. With the help of rubrics,
students can clearly understand what it takes to obtain an “A” grade in a particular
assignment or task. When students ask for a few extra marks, a teacher can point to
the rubric and say, “See, this is where you fell short.” Rubrics can thus serve as a
valuable grading tool that leads to consistent, fair,
and more transparent grading. Students can often fail to achieve a good grade if they
are confused about what it takes to achieve that grade in a particular task. In this
respect, rubrics can help teachers clarify their pedagogical goals to their students.
Students can learn more from rubrics than they can from a single letter grade. Because
rubrics identify the pertinent aspects of a piece of work, instructors can use them to
communicate expectations clearly. Students may come from various
backgrounds; hence some are often confused as to what skills they need in order to
achieve high scores or marks. In this respect, rubrics provide all students in the class
with clear guidelines regarding what expectations they must fulfill to achieve a good
grade.

References:
● Andrade, H., and Du, Y. (2005). Student perspectives on rubric-referenced
assessment. Pract. Assess. Res. Eval. 10. http://pareonline.net/pdf/v10n3.pdf
(accessed 18 May 2006).
● Arter, J. A., and McTighe, J. (2001). Scoring Rubrics in the Classroom: Using
Performance Criteria for Assessing and Improving Student Performance. Thousand
Oaks, CA: Corwin Press.
● Bresciani, M. J., Zelna, C. L., and Anderson, J. A. (2004). Criteria and rubrics. In:
Assessing Student Learning and Development: A Handbook for Practitioners.
Washington, DC: National Association of Student Personnel Administrators, 29–37.
● Facione, P., and Facione, N. (1994). The Holistic Critical Thinking Rubric. Insight
Assessment/California Academic Press.
● Rhodes, T. (2009). Assessing Outcomes and Improving Achievement: Tips and Tools
for Using the Rubrics. Washington, DC: Association of American Colleges and
Universities.
