
7 Principles of Student-Centered Classroom Assessment

Principle 1

Assessments require clear thinking and effective communication. Those who develop and use
high quality assessments must share a highly refined focus. They must be clear thinkers, capable of
communicating effectively, both to those being assessed and to those who must understand the
assessment results.

Principle 2

Classroom assessment is the key. Teachers direct the assessments that determine what students
learn and how those students feel about the learning. Nearly all of the assessment events that take
place in students’ lives happen at the behest of their teachers.

Principle 3

Students are assessment users. Students are the most important users of assessment results.
Right from the time they arrive at school, students look to their teachers for evidence of success. If that
early evidence suggests they are succeeding, what begins to grow in them is a sense of hopefulness and
an expectation of more success in the future.

Principle 4

Clear and appropriate targets are essential. The quality of any assessment depends first and
foremost on the clarity and appropriateness of our definition of the achievement target to be assessed.

Principle 5

High quality assessment is a must. High quality assessment is essential in all assessment contexts. Sound
assessments must satisfy five specific quality standards: 1) clear targets; 2) focused purpose; 3) proper
method; 4) sound sampling; and 5) accurate assessment free of bias and distortion.

Principle 6

Understand the personal implications. Assessment is an interpersonal activity. This principle has
two important dimensions. The first has to do with the important reality of life in classrooms: Students
are people and teachers are people, too, and sometimes we like each other and sometimes we don’t.
Second, assessment is very complex in that it is virtually always accompanied by personal antecedents
and personal consequences.

Principle 7

Assessment as teaching and learning. Assessments and instruction can be one and the same, if
and when we want them to be.

Why Should Teachers Know about Assessment?

Traditional Reasons That Teachers Assess Students

• To determine students’ current status
• To monitor students’ progress
• To assign grades to students
• To determine instructional effectiveness

Today’s Reasons for Teachers to Know about Assessment

• Test results determine public perceptions of educational effectiveness.
• Students’ assessment performances are increasingly being included as part of the teacher evaluation process.
• As clarifiers of instructional intentions, assessment devices can improve instructional quality.

Purposes for Classroom Assessment

Assessment for Learning

Assessment for learning is designed to give teachers information to modify and differentiate
teaching and learning activities. It acknowledges that individual students learn in idiosyncratic ways, but
it also recognizes that there are predictable patterns and pathways that many students follow. It
requires careful design on the part of teachers so that they can use the resulting information not only
to determine what students know, but also to gain insight into how, when, and whether students apply
what they know. Teachers can also use this information to streamline and target instruction and
resources, and to provide feedback to students to help them advance their learning.

Assessment as Learning

Assessment as learning is a process of developing and supporting metacognition for students.
Assessment as learning focuses on the role of the student as the critical connector between assessment
and learning. When students are active, engaged, and critical assessors, they make sense of information,
relate it to prior knowledge, and use it for new learning. This is the regulatory process in metacognition.
It occurs when students monitor their own learning and use the feedback from this monitoring to make
adjustments, adaptations, and even major changes in what they understand. It requires that teachers
help students develop, practice, and become comfortable with reflection and with a critical analysis of
their own learning.

Assessment of Learning

Assessment of learning is summative in nature and is used to confirm what students know and
can do, to demonstrate whether they have achieved the curriculum outcomes, and, occasionally, to
show how they are placed in relation to others. Teachers concentrate on ensuring that they have used
assessment to provide accurate and sound statements of students’ proficiency, so that the recipients
can use the information to make reasonable and defensible decisions.

Measurement, Testing, and Evaluation

Assessment is a very general term that describes the many techniques that we use to measure
and judge student behavior and performance.
Measurement is the process of assigning meaningful numbers (or labels) to persons or objects
based on the degree to which they possess some characteristic.

Evaluation involves the use of measurement to make decisions about or to determine the worth
of a person or object.

Types and Distinctions of Tests

Preliminary or placement assessments are performed within the first two weeks of the semester and
are designed to measure students’ prerequisite skills.

A diagnostic assessment is any type of assessment used to identify an individual student’s learning
deficits.

Formative assessment is any type of assessment used while an instructional unit is in progress,
primarily to give the teacher feedback on how the unit is progressing.

Summative assessments are performed at the end of chapters or units to determine the students’ level
of competency with the material and to assign grades.

Cognitive, Psychomotor, and Affective Assessment

Cognitive assessment targets are those that deal with a student’s intellectual operations—for
instance, when the student displays acquired knowledge or demonstrates a thinking skill such as
decision-making or problem-solving.

Psychomotor assessment targets are those focused on a student’s large-muscle or small-muscle
skills. Examples of psychomotor assessments that take place in schools would include tests of the
student’s keyboarding skills in a computer class or the student’s prowess in shooting a basketball in
gym class.

Affective assessment targets are those that deal with a student’s attitudes, interests, and values,
such as the student’s self-esteem, risk-taking tendencies, or attitudes toward learning.

Norm-Referenced vs. Criterion-Referenced Assessment

With norm-referenced measurement, educators interpret a student’s performance in relation to
the performances of students who have previously taken the same examination. This previous group of
test takers is referred to as the norm group. Thus, when educators try to make sense out of a student’s
test score by “referencing” the score back to the norm group’s performances, it is apparent why these
sorts of interpretations are characterized as norm referenced.

A criterion-referenced measurement is an absolute interpretation because it hinges on the
extent to which the criterion (that is, curricular aim) represented by the test is actually mastered by the
student. Once the nature of an assessed curricular aim is properly described, the student’s test
performance can be interpreted according to the degree to which the curricular aim has been mastered.
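
To make the contrast concrete, the short Python sketch below interprets one hypothetical score both ways: as a percentile rank against a hypothetical norm group, and as the percentage of a 40-item curricular aim mastered. All names and numbers are illustrative assumptions, not data from the text.

def percentile_rank(score, norm_group_scores):
    # Norm-referenced interpretation: percent of the norm group scoring below this score.
    below = sum(1 for s in norm_group_scores if s < score)
    return 100.0 * below / len(norm_group_scores)

def mastery_percent(items_correct, items_on_test):
    # Criterion-referenced interpretation: percent of the assessed curricular aim mastered.
    return 100.0 * items_correct / items_on_test

norm_group = [12, 15, 18, 20, 22, 25, 27, 30, 33, 35]  # hypothetical earlier test takers
student_score = 27                                      # items answered correctly out of 40

print(f"Percentile rank relative to the norm group: {percentile_rank(student_score, norm_group):.0f}")
print(f"Percent of the curricular aim mastered: {mastery_percent(student_score, 40):.0f}%")

The same raw score of 27 thus yields two different statements: the student outperformed about 60 percent of the norm group, and the student has mastered roughly two thirds of the curricular aim.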

Selected Responses vs. Constructed Response

With selected responses, students select their responses from alternatives we present to them
—for example, when we give students multiple choice tests and students’ responses must be selected
from each item’s available options. Other examples of selected-response assessment procedures are
binary choice items, such as those found in true–false tests where, for each item, students must select
either a true or a false answer.

In contrast, with constructed response, students construct all kinds of responses. In an English
class, for instance, students construct original essays. In a speech class, students construct 5-minute oral
speeches. In a drama class, students construct their responses while they present a one-act play. In a
homemaking class, students construct soufflés and upside-down cakes. Constructed-response tests lead
either to student products, such as the essays and soufflés, or to behaviors, such as the speeches
and the one-act plays.

Test-Evaluation Criteria

Reliability represents the consistency with which an assessment procedure measures whatever
it’s measuring.

Validity reflects the degree to which evidence and theory support the accuracy of
interpretations of test scores for proposed uses of tests.

Fairness signifies the degree to which assessments are free of elements that would offend or
unfairly penalize particular groups of students on the basis of students’ gender, ethnicity, and so on.

Types of Reliability Evidence

Test-Retest: Consistency of results among different testing occasions

To get a fix on how stable an assessment’s results are over time, we usually test students on one
occasion, wait a week or two, and then retest them with the same instrument.

Alternate Form: Consistency of results among two or more different forms of a test

To collect alternate-form consistency evidence, the two test forms are first administered to the
same individuals. Ideally, there would be little or no delay between the administration of the two test
forms. Once you obtain each student’s scores on the two forms, you can compute a correlation
coefficient reflecting the relationship between students’ performances on the two forms.
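
The correlation step described above applies to test-retest evidence as well. A minimal Python sketch, assuming hypothetical scores for the same eight students on two forms (or two testing occasions):

from statistics import correlation  # Pearson's r; available in Python 3.10+

form_a = [78, 85, 62, 90, 71, 88, 66, 74]  # scores on form A (or the first occasion)
form_b = [75, 88, 60, 93, 70, 84, 69, 72]  # scores on form B (or the retest)

# A coefficient near +1.0 indicates highly consistent results across forms or occasions.
print(f"Reliability coefficient: {correlation(form_a, form_b):.2f}")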

Internal Consistency: Consistency in the way an assessment instrument’s items function

Internal consistency evidence does not focus on the consistency of students’ scores on a test.
Rather, internal consistency evidence deals with the extent to which the items in an educational
assessment instrument are functioning in a consistent fashion.
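
The text above does not name a particular index, but Cronbach’s alpha is one widely used estimate of internal consistency. The sketch below computes it for a hypothetical matrix of scored items (rows are students, columns are items, 1 = correct); it is an illustration under those assumptions, not a prescribed procedure.

from statistics import pvariance

scores = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
    [1, 1, 1, 0, 1],
]

k = len(scores[0])                                    # number of items
item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
total_var = pvariance([sum(row) for row in scores])   # variance of students' total scores

# Alpha rises as the items covary, i.e., function in a consistent fashion.
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")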

Perspectives on Validity

Content-related evidence of validity refers to the match between the test items and the content
domain the test is intended to measure.

Instructional validity refers to the match between the items on the test and the material that was
taught.

Curricular validity refers to the match between the items on the test and the official curriculum.

If the items on a test appear to be measuring the appropriate skills, and appear to be
appropriate for the students taking the test, the test is said to have face validity.

A test has criterion-related evidence of validity if the test scores correlate well with another
method of measuring the same behavior or skill.

A test has concurrent validity if it displays a positive correlation with another method of
measuring the same behavior or skill given at about the same time.

A test is said to have predictive validity if it is positively correlated with some future behavior or skill.
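
Both forms of criterion-related evidence come down to the same correlation computation used for reliability; only the timing of the criterion measure differs. A minimal Python sketch with hypothetical scores:

from statistics import correlation  # Pearson's r; Python 3.10+

new_test = [55, 72, 80, 63, 90, 47, 68, 77]             # the test being validated
established_measure = [58, 70, 84, 60, 88, 50, 65, 80]  # given at about the same time
later_outcome = [62, 75, 85, 60, 92, 55, 70, 79]        # collected the following term

print(f"Concurrent validity coefficient: {correlation(new_test, established_measure):.2f}")
print(f"Predictive validity coefficient: {correlation(new_test, later_outcome):.2f}")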

A measurement device is said to display construct-related evidence of validity if it measures
what the appropriate theory says that it should be measuring.

Users of Classroom Assessment

TEACHERS

Before learning. Assessment before learning provides information that helps teachers determine
the readiness of students for learning in any given class.

During learning. Assessment during learning provides information that helps teachers monitor
student progress and gauge the effectiveness of the instructional methods being used. It also enables
teachers to judge whether students need more instruction, which students need different instruction,
and which are proceeding well enough to be assigned enrichment instruction.

After learning. This is the time when assessment is used for decisions regarding the determination
and assignment of grades: weighing the importance and use of assessments collected earlier in the
teaching of a unit, reporting on the ways students were assessed, and reporting and interpreting
assessment results.
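
As an illustration of that grade-determination step, the Python sketch below combines hypothetical assessments collected across a unit into a single weighted grade. The categories, weights, and letter-grade cut-offs are assumptions for the example, not a scheme prescribed by the text.

# Hypothetical category scores (percent) and the weight assigned to each category.
scores = {"quizzes": 82.0, "unit_project": 90.0, "unit_test": 76.0}
weights = {"quizzes": 0.30, "unit_project": 0.30, "unit_test": 0.40}

# Weighted average of the assessments collected across the unit.
final_percent = sum(scores[c] * weights[c] for c in scores)

# Illustrative letter-grade cut-offs.
cutoffs = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]
letter = next(grade for cutoff, grade in cutoffs if final_percent >= cutoff)

print(f"Final grade: {final_percent:.1f}% ({letter})")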

Users of Classroom Assessment

SCHOOL ADMINISTRATOR

• Examine school programs in terms of their strengths and weaknesses in meeting community needs
• Judge the priorities of the school within the district
• Assess whether there is a need for alternative programs
• Plan and improve existing programs
• Formulate standards in all subject-matter areas as well as standards for achievement levels in content areas
• Evaluate the progress of teachers and schools
