Classroom Assessment (1627) Roll no: CA554837


ASSIGNMENT No.2

Course: Classroom Assessment (1627)

Roll no: CA554837

Semester: Spring, 2020

Level: PGD / ECE

Q. 1 What are the disadvantages of observation? Write some suggestions to improve it at the elementary level.

Ans: Disadvantages of Observation:

1. Some of the Occurrences may not be Open to Observation:

There are many personal behaviors or private activities that are not open to observation. In most cases, people do not allow an outsider to study their activities.

2. Not all Occurrences Open to Observation can be Observed when the Observer is at Hand:

Such problems arise because of the uncertainty of events. Many social events are highly uncertain in nature, and it is a difficult task for the researcher to determine their time and place. The event may take place in the absence of the observer, or it may not occur even in the constant presence of the observer. For example, a quarrel or fight between two individuals or groups is never certain; nobody knows when such an event will take place.


3. Not all Occurrences Lend Themselves to Observational Study:

Many social phenomena are abstract in nature. For example, the love, affection, feelings and emotions of parents towards their children are not open to our senses and cannot be quantified by observational techniques. The researcher may employ other methods, such as the case study or interview, to study such phenomena.

4. Lack of Reliability:

Because social phenomena cannot be controlled or subjected to laboratory experiment, generalizations made by the observation method are not very reliable. The relative nature of social phenomena and the personal bias of the observer again create difficulty in making valid generalizations from observation. P.V. Young remarks that in observation, no attempt is made to use instruments of precision to check the accuracy of the phenomenon.

5. Faulty Perception:

Observation is a highly technical job. One is never sure that what one is observing is the same as it appears to the eye. Two persons may judge the same phenomenon differently: one may find something meaningful and useful in a situation while the other finds nothing in it. Only observers with technical knowledge of observation can make scientific observations.

6. Personal Bias of the Observer:

Personal bias, or the habit of looking at things in a particular way, often creates obstacles to making valid generalizations. The observer may have his own ideas of right and wrong, or different preconceptions regarding an event, which undermines objectivity in social research.


7. Slow Investigation:

Observation is a time-consuming process. P.V. Young rightly remarks that valid observation cannot be hurried; an investigation cannot be completed in a short period through observation. This sometimes reduces the interest of both the observer and the observed in continuing the observation process.

8. Expensive:

Observation is a costly affair. It requires high cost, plenty of time and hard effort. Observation involves travelling, staying at the place of the phenomenon and purchasing sophisticated equipment. Because of this, it is considered one of the most expensive methods of data collection.

9. Inadequate Method:

According to P.V. Young, “the full answers cannot be collected by observation alone”. Therefore, many researchers suggest that observation must be supplemented by other methods.

10. Difficulty in Checking Validity:

Checking the validity of observation is always difficult. Many observed phenomena cannot be defined with sufficient precision, which does not help in drawing valid generalizations. A lack of competence on the part of the observer may also hamper the validity and reliability of observation.

Suggestions to improve observation at the elementary level:

Selecting an appropriate observation technique to gather your information is an important part of this process.


Different techniques include:

 Time sampling
 Tracking
 Checklists
 Target child
 Learning stories

Time sampling:

Time sampling involves completing a short narrative observation of a child at 10–15 minute intervals. This gives you quite a broad overview of the child in the setting. Assessment of the observation can be focused across many areas, as appropriate. The same technique can be used for activities: an activity is observed every 10–15 minutes.

Again, this offers a broad range of possibilities for assessment.

Tracking:

Tracking observations follow children’s choices within the setting. These choices (including time children spent between activities and any time they spent observing others) and the time that the child spends at each activity are recorded. You may also record who else was at the activity and briefly how the child engaged with the activity/experience. Again, this offers a broad view of the child in the setting, and assessment can be focused on what you need to know.

Checklists:

Checklists are pre-determined lists that identify knowledge, skills or aptitudes. The purpose of observation is to ascertain whether a child can meet these criteria. These can be useful if you need to find out something particular and precise. However, checklists are generally not a sufficiently sophisticated way of capturing the richness of young children’s learning.


Target child:

Target child observations are ones in which you identify a particular child to observe. You may be looking at something in particular or completing an open-ended observation. In this observation the child is observed within the learning environment alongside other children. This gives the child the opportunity to demonstrate what they know and can do within their familiar environment alongside their peers. The activity that the child is involved in is briefly recorded as a narrative, and then language and social interactions are recorded and coded to give an accurate account of what happened during the observation for analysis and interpretation.

Learning stories:

Learning stories are a way of recording and presenting observations of children over time: building a narrative about their learning. They emerged from the work of Margaret Carr and are based in socio-cultural theory. Carr (2001) articulates a way of recording children’s learning that acknowledges the context of that learning; she called these learning stories. The idea is to create a narrative, a story, recorded as a series of episodes linked together, that records what the child knows and can do, and what comes next. This is important. The purpose of recording children’s learning in learning stories is to enhance their learning, to foreground what they can do as a starting point for providing for their ongoing development, and to recognize the complexity of the context and process of learning. The idea of a learning story is interpreted in a number of ways in practice. Some settings have formatted their observation sheets to create narrative threads linked to next steps in learning. Others have adopted a portfolio approach, in which observations and examples of children’s work are kept together to create a narrative of their progress in the setting. Assessment of children’s learning takes place at each stage of recording of the learning story, in the analysis of the observation, to define the next steps.


References:

 Palaiologou, I. (2012) Child observation for the early years. London: SAGE.
 Papatheodorou, T., Luff, P. with Gill, J. (2011) Child observation for learning and research. Harlow: Pearson.
 Smidt, S. (2005) Observing, assessing and planning for children in the early years. London: Routledge.
 Thornton, L. and Brunton, P. (2005) Understanding the Reggio approach. London: David Fulton.
 Thornton, L., Brunton, P. and Green, S. (2007) Bringing the Reggio approach to your early years practice. London: David Fulton.


Q.2 Write advantages and disadvantages of standardized tests.

Ans: Standardized testing has been around for several generations. In the United States,
standardized tests have been used to evaluate student performance since the middle of the 19th
century. Virtually every person who has attended a public or private school has taken at least one
standardized test.

The advantages and disadvantages of standardized testing are quite distinct. On one hand, these tests provide a way to compare student knowledge to find learning gaps. On the other hand, not every student performs well on a test, despite having comprehensive knowledge and understanding of the subject matter involved.

Advantages of Standardized Testing:

1. It has a positive impact on student achievement.


According to a review of testing research conducted over the past century, over 90% of students have found that standardized tests have a positive effect on their achievement. Students feel better about their ability to comprehend and know the subject materials presented on a standardized test. Even if a perfect score isn’t achieved, knowing where a student stands helps them address learning deficits.

2. It is a reliable and objective measurement of achievement.


Standardized tests allow for a reliable measurement of student success that isn’t influenced by local factors. Local school districts and teachers may have a vested interest in the outcomes of testing, and the desire to produce a favorable result can create inaccurate test results. Because standardized tests are graded by computers, they are less subject to human bias or subjectivity, which makes them a more accurate reflection of student success.


3. Standardized tests allow for equal and equivalent content for all students.
This means a complete evaluation of students from an equal perspective can be obtained. Using alternate tests or exempting children from taking a standardized test creates unequal systems, producing one group of students that is accountable for its results and another that is unaccountable. It is a system that looks at every child through equal eyes.

4. A standardized test teaches students prioritization.


Standardized testing covers core subject materials that students need for success in other
subject areas. Without reading, for example, it would be difficult to learn how to write
properly. Without mathematics, it would be difficult to pursue scientific concepts. The goal
of a standardized test is to cover core subject materials that will help students excel in other
related subjects, giving them the chance to master core curriculum items so they can move on
to correlating subjects with greater ease.

5. It allows school districts to discover their good teachers.


Good teachers understand that test preparation drills and specific core instructions to “teach
to a test” are not the best way to encourage learning. Repetition does not produce test score
gains, but teaching a curriculum that allows students to explore a subject according to their
interests, with teacher guidance, will do so. Test-taking skills and memorization do not
promote understanding and districts which take these actions continually show low overall
standardized testing scores.

Disadvantages of Standardized Testing:

1. It has not had a positive impact on student education.


Since 2002, when the United States placed more emphasis on standardized testing, it has dropped in global education rankings. From 2002 to 2009, the US went from being ranked 18th in the world in mathematics to being ranked 31st. The rankings in science dropped in a similar way, while reading comprehension remained largely unchanged.

According to the National Research Council, even incentive programs tied to standardized
testing results are not working to improve student comprehension, understanding, and
knowledge.

2. Standardized testing can be predictable.


Students who are aware of patterns can determine what the answers to a standardized test could be by knowing only a handful of answers with certainty. This predictability reflects the natural human bias that occurs in every action or reaction we have in any endeavor. It also means test scores can be high without reflecting student understanding. Brookings found that up to 80% of test score improvements can have nothing to do with long-term learning changes.

3. They assume that all students start from the same point of understanding.
Standardized tests may allow for a direct comparison of data, but they do not account for
differences in the students who are taking the tests. In the US, standardized tests could be
considered discriminatory in some regions because they assume that the student is a first-
language English speaker. Students who have special needs, learning disabilities, or have
other challenges which are addressed by an Individualized Education Plan may also be at a
disadvantage when taking a standardized test compared to those who do not have those
concerns.

4. Standardized tests only look at raw comprehension data.


Students learn in a variety of ways. People have many different strengths that may not be
reflected in the context of a standardized test. Traits like creativity, enthusiasm, empathy,
curiosity, or resourcefulness cannot be tracked by these tests, even though they are highly
desirable traits in modern careers. A standardized test could determine the knowledge a
student has about musical theory, but it cannot judge the quality of a composition that a
student might create.

5. Teacher evaluations have been tied to standardized test results.


Many teachers are evaluated on the work that their students do on a standardized test. Based on the classroom grades achieved, a teacher might receive a raise or be fired from their job. This creates a host of learning problems. For starters, the students who perform poorly on testing simulations receive the majority of the teacher’s attention, leaving good students to fend for themselves. Teachers then begin to “teach to the test” instead of teaching subject materials, in order to obtain the needed results. This reduces higher-order thinking, limits complex assignments, and hinders cognitive understanding.

6. Standardized tests narrow the curriculum.


According to the Center on Education Policy, from 2001-2007, school districts in the United
States reduced the amount of time spent on social studies, creative subjects, and science by
over 40%. This results in the average student losing more than 2 hours of instruction time in
these areas so that they can focus on subjects that are on standardized tests, such as reading
and math.

7. More time is spent on test preparation instead of actual learning.


Many school districts, especially those with lower test scores, spend more classroom time on
test preparation than learning the curriculum. In 2010, New York City took the extraordinary
measure of including 2.5-hour test preparation sessions on scheduled school vacation days.

The advantages and disadvantages of standardized testing show that it can be a useful tool for
student evaluation, but only when it is used correctly. Like any system, it can be abused by those
who are looking for shortcuts. That is why each key point must be carefully considered before
implementing or making changes to a plan of standardized testing.


References:
 Klein, R. (2014, February 7). New York students are incredibly stressed out about standardized testing, survey says. The Huffington Post.


Q.3 Discuss the major characteristics and issues of construct validity.

Ans: Construct validity refers to the degree to which inferences can legitimately be made from the operationalizations in your study to the theoretical constructs on which those operationalizations were based. Like external validity, construct validity is related to generalizing. But where external validity involves generalizing from your study context to other people, places or times, construct validity involves generalizing from your program or measures to the concept of your program or measures. You might think of construct validity as a “labeling” issue. When you implement a program that you call a “Head Start” program, is your label an accurate one? When you measure what you term “self esteem”, is that what you were really measuring?

I would like to tell two major stories here. The first is the more straightforward one. I’ll discuss several ways of thinking about the idea of construct validity, several metaphors that might provide you with a foundation in the richness of this idea. Then, I’ll discuss the major construct validity threats, the kinds of arguments your critics are likely to raise when you make a claim that your program or measure is valid. In most research methods texts, construct validity is presented in the section on measurement, and it is typically presented as one of many different types of validity (e.g., face validity, predictive validity, concurrent validity) that you might want to be sure your measures have. I don’t see it that way at all. I see construct validity as the overarching quality, with all of the other measurement validity labels falling beneath it. And I don’t see construct validity as limited only to measurement. As I’ve already implied, I think it is as much a part of the independent variable – the program or treatment – as it is the dependent variable. So, I’ll try to make some sense of the various measurement validity types and try to move you to think instead of the validity of any operationalization as falling within the general category of construct validity, with a variety of subcategories and subtypes.

Inadequate Preoperational Explication of Constructs

This one isn’t nearly as ponderous as it sounds. Here, preoperational means before translating constructs into measures or treatments, and explication means explanation – in other words, you didn’t do a good enough job of defining (operationally) what you mean by the construct. How is this a threat? Imagine that your program consisted of a new type of approach to rehabilitation. Your critic comes along and claims that, in fact, your program is neither new nor a true rehabilitation program. You are being accused of doing a poor job of thinking through your constructs. Some possible solutions:

 think through your concepts better


 use methods (e.g., concept mapping) to articulate your concepts
 get experts to critique your operationalizations

Issues of construct validity:

Mono-Operation Bias

Mono-operation bias pertains to the independent variable, cause, program or treatment in your study – it does not pertain to measures or outcomes (see Mono-Method Bias below). If you only use a single version of a program in a single place at a single point in time, you may not be capturing the full breadth of the concept of the program. Every operationalization is flawed relative to the construct on which it is based. If you conclude that your program reflects the construct of the program, your critics are likely to argue that the results of your study only reflect the peculiar version of the program that you implemented, and not the actual construct you had in mind. Solution: try to implement multiple versions of your program.

Mono-Method Bias

Mono-method bias refers to your measures or observations, not to your programs or causes. Otherwise, it’s essentially the same issue as mono-operation bias. With only a single version of a self-esteem measure, you can’t provide much evidence that you’re really measuring self-esteem. Your critics will suggest that you aren’t measuring self-esteem – that you’re only measuring part of it, for instance. Solution: try to implement multiple measures of key constructs and try to demonstrate (perhaps through a pilot or side study) that the measures you use behave as you theoretically expect them to.
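
To sketch this solution, a pilot study might check whether two different measures of the same construct correlate as theory predicts (convergent evidence that both are tapping the construct). The scores below are invented purely for illustration:

```python
# Check whether two hypothetical self-esteem measures agree (convergent
# evidence against mono-method bias). All scores are invented examples.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

scale_a = [32, 28, 41, 25, 38, 30, 44, 27]   # questionnaire scores
scale_b = [30, 26, 40, 27, 36, 31, 45, 25]   # observer ratings

r = pearson(scale_a, scale_b)
print(f"r = {r:.2f}")  # a high r supports measuring the same construct
```

If the two measures correlated weakly, that would be a warning that at least one of them is not capturing the construct as theorized.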

Interaction of Different Treatments

You give a new program designed to encourage high-risk teenage girls to go to school and
not become pregnant. The results of your study show that the girls in your treatment group
have higher school attendance and lower birth rates. You’re feeling pretty good about your
program until your critics point out that the targeted at-risk treatment group in your study is
also likely to be involved simultaneously in several other programs designed to have similar
effects. Can you really label the program effect as a consequence of your program? The
“real” program that the girls received may actually be the combination of the separate
programs they participated in.

Interaction of Testing and Treatment

Does testing or measurement itself make the groups more sensitive or receptive to the
treatment? If it does, then the testing is in effect a part of the treatment, it’s inseparable
from the effect of the treatment. This is a labeling issue (and, hence, a concern of construct
validity) because you want to use the label “program” to refer to the program alone, but in
fact it includes the testing.

Restricted Generalizability Across Constructs

This is what I like to refer to as the “unintended consequences” threat to construct validity. You do a study and conclude that Treatment X is effective. In fact, Treatment X does cause a reduction in symptoms, but what you failed to anticipate was the drastic negative consequences of the side effects of the treatment. When you say that Treatment X is effective, you have defined “effective” as covering only the directly targeted symptom. This threat reminds us that we have to be careful about whether our observed effects (“Treatment X is effective”) would generalize to other potential outcomes.

Confounding Constructs and Levels of Constructs

Imagine a study to test the effect of a new drug treatment for cancer. A fixed dose of the
drug is given to a randomly assigned treatment group and a placebo to the other group. No
treatment effects are detected. Perhaps the result that’s observed is only true for that dosage
level. Slight increases or decreases of the dosage may radically change the results. In this
context, it is not “fair” for you to use the label for the drug as a description for your
treatment because you only looked at a narrow range of dose. Like the other construct
validity threats, this is essentially a labeling issue – your label is not a good description for
what you implemented.

The “Social” Threats to Construct Validity

I’ve set aside the other major threats to construct validity because they all stem from the
social and human nature of the research endeavor.

Hypothesis Guessing

Most people don’t just participate passively in a research project. They are trying to figure
out what the study is about. They are “guessing” at what the real purpose of the study is.
And, they are likely to base their behavior on what they guess, not just on your treatment. In
an educational study conducted in a classroom, students might guess that the key dependent
variable has to do with class participation levels. If they increase their participation not
because of your program but because they think that’s what you’re studying, then you
cannot label the outcome as an effect of the program. It is this labeling issue that makes this
a construct validity threat.


Evaluation Apprehension

Many people are anxious about being evaluated. Some are even phobic about testing and
measurement situations. If their apprehension makes them perform poorly (and not your
program conditions) then you certainly can’t label that as a treatment effect. Another form
of evaluation apprehension concerns the human tendency to want to “look good” or “look
smart” and so on. If, in their desire to look good, participants perform better (and not as a
result of your program!) then you would be wrong to label this as a treatment effect. In both
cases, the apprehension becomes confounded with the treatment itself and you have to be
careful about how you label the outcomes.

Experimenter Expectancies

These days, when we engage in lots of non-laboratory applied social research, we generally don’t use the term “experimenter” to describe the person in charge of the research. So, let’s relabel this threat “researcher expectancies.” The researcher can bias the results of a study in countless ways, both consciously and unconsciously. Sometimes the researcher can communicate what the desired outcome for a study might be (and participants’ desire to “look good” leads them to react that way). For instance, the researcher might look pleased when participants give a desired answer. If this is what causes the response, it would be wrong to label the response as a treatment effect.

References:

 Cronbach, L. J., & Meehl, P. E. (1955). “Construct validity in psychological tests”. Psychological Bulletin.
 Guion, R. M. (1980). “On trinitarian doctrines of validity”. Professional Psychology.
 Messick, S. (1995). “Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning”. American Psychologist.


Q.4 What are scoring rubrics? Discuss the benefit of scoring rubrics with examples.

Ans: In education terminology, rubric means "a scoring guide used to evaluate the quality of students' constructed responses". Put simply, it is a set of criteria for grading assignments. Rubrics usually contain evaluative criteria, quality definitions for those criteria at particular levels of achievement, and a scoring strategy. They are often presented in table format and can be used by teachers when marking, and by students when planning their work.

A scoring rubric is an attempt to communicate expectations of quality around a task. In many


cases, scoring rubrics are used to delineate consistent criteria for grading. Because the criteria are
public, a scoring rubric allows teachers and students alike to evaluate criteria, which can be
complex and subjective. A scoring rubric can also provide a basis for self-evaluation, reflection,
and peer review. It is aimed at accurate and fair assessment, fostering understanding, and
indicating a way to proceed with subsequent learning/teaching. This integration of performance
and feedback is called ongoing assessment or formative assessment.
Several common features of scoring rubrics can be distinguished, according to Bernie Dodge and Nancy Pickett:

 They focus on measuring a stated objective (performance, behavior, or quality).

 They use a range to rate performance.

 They contain specific performance characteristics arranged in levels indicating either the
developmental sophistication of the strategy used or the degree to which a standard has been met.

Benefits of scoring rubrics:

 Most assessments do not have an answer key


o Rubrics can provide that key.


 Rubrics allow consistent assessment


o Reproducible scoring by a single individual is enhanced.
o Reproducible scoring by multiple individuals can be enhanced with training.
o Greater precision and reliability among scored assessments.
o They allow for better peer feedback on student graded work.

 Rubrics can be impartial.


o Scoring can be prescribed by the rubric and not by the instructor’s predispositions towards students.
o They allow better or more accurate self-assessment by students.

 Rubrics document and communicate grading procedures.


o If parents, students, colleagues, or administrators question a grade, the rubric can be used to
validate it.
o They allow justification and validation of scoring among other stakeholders.
o Students can compare their assignment to the rubric to see why they received the grade that they
did.

 Rubrics allow you to organize and clarify your thoughts.


o They tell you what was important enough to assess.
o They allow comparison of lesson objectives to what is assessed.
o Instruction can be redesigned to meet objectives with assessed items.
o Students can use them as a guide to completing an assignment. They help students with process
and possibly increase the quality of student work.
 Rubrics provide an opportunity for important professional discussions when they are
brought up in scholarly communication.


 Rubrics can help you teach.


o They keep you focused on what you intend to assess.
o They allow you to organize your thoughts.
o They can provide a scaffold with which the students can learn.
 Non-scoring rubrics can encourage students to self-assess their performance.
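
As a simple illustration of how an analytic rubric works, the hypothetical rubric below pairs each criterion with a descriptor at four levels, and a student’s total is the sum of the levels awarded. The criteria, descriptors and point values are invented for this sketch, not taken from any published rubric:

```python
# A hypothetical three-criterion analytic rubric for a short essay.
# Each criterion is scored on the same 1-4 scale; descriptors are
# illustrative only.

rubric = {
    "Content":      {1: "Off topic", 2: "Partly relevant", 3: "Relevant", 4: "Insightful"},
    "Organization": {1: "No structure", 2: "Weak structure", 3: "Clear structure", 4: "Seamless flow"},
    "Mechanics":    {1: "Many errors", 2: "Frequent errors", 3: "Few errors", 4: "Error-free"},
}

def score(ratings):
    """Total a student's ratings and report each criterion's descriptor."""
    for criterion, level in ratings.items():
        print(f"{criterion}: level {level} ({rubric[criterion][level]})")
    total = sum(ratings.values())
    maximum = 4 * len(rubric)
    print(f"Total: {total}/{maximum}")
    return total

score({"Content": 3, "Organization": 4, "Mechanics": 2})  # Total: 9/12
```

Because every rater applies the same criteria and level descriptors, two markers (or a student self-assessing) can arrive at the same total and see exactly where points were lost.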

References:

 Goodrich, H. (1996). “Understanding rubrics”. Educational Leadership, 54(4), 14–18.
 Popham, J. (October 1997). “What’s wrong – and what’s right – with rubrics”. Educational Leadership, 55(2), 72–75.
 Dawson, P. (December 2015). “Assessment rubrics: towards clearer and more replicable design, research and practice”. Assessment & Evaluation in Higher Education, 42(3), 347–360. doi:10.1080/02602938.2015.1111294.
 “Rubrics for Web Lessons” (2007). Retrieved 2020-04-21.


Q.5 Explain different types of assessment reports.

Ans: There are different types of assessment in education, and each serves a different purpose during and after instruction. The following are the types of assessment that are most important when developing and implementing instruction.

1. Pre-assessment or diagnostic assessment

Before creating the instruction, it’s necessary to know what kind of students you’re creating the instruction for. Your goal is to get to know your students’ strengths, weaknesses, and the skills and knowledge they possess before taking the instruction. Based on the data you’ve collected, you can create your instruction.

2. Formative assessment

Formative assessment is used during the first attempt at developing instruction. The goal is to monitor student learning and provide feedback. It helps identify the first gaps in your instruction. Based on this feedback, you’ll know what to focus on in further developing your instruction.

3. Summative assessment

Summative assessment is aimed at assessing the extent to which the most important outcomes have been reached at the end of the instruction. But it measures more: the effectiveness of learning, reactions to the instruction, and the benefits on a long-term basis. The long-term benefits can be determined by following students who attend your course or take your test. You are then able to see whether, and how, they use the learned knowledge, skills and attitudes.


4. Confirmative assessment

When your instruction has been implemented in your classroom, it’s still necessary to carry out assessment. Your goal with confirmative assessment is to find out whether the instruction is still a success after, for example, a year, and whether the way you’re teaching is still on point. You could say that a confirmative assessment is an extended form of summative assessment.

5. Norm-referenced assessment

This compares a student’s performance against an average norm. This could be the average national norm for the subject of history, for example. Another example is when a teacher compares the average grade of his or her students against the average grade of the entire school.
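
As a small worked example (all figures invented), a norm-referenced interpretation often converts a raw score into a standard score, a z-score, relative to the norm group’s mean and standard deviation:

```python
# Express a score relative to a norm group (all numbers are invented).

def z_score(raw, norm_mean, norm_sd):
    """Standard score: how many SDs the raw score lies above the norm mean."""
    return (raw - norm_mean) / norm_sd

# A student scores 78 on a test where the national norm is mean 70, SD 8.
z = z_score(78, 70, 8)
print(f"z = {z:+.1f}")  # +1.0 -> one standard deviation above the norm
```

A positive z-score places the student above the average of the norm group; a negative one places them below it, regardless of the raw point scale of the test.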

6. Criterion-referenced assessment

It measures a student’s performance against a fixed set of predetermined criteria or learning standards, checking what students are expected to know and be able to do at a specific stage of their education. Criterion-referenced tests are used to evaluate a specific body of knowledge or skill set; in effect, they test the curriculum taught in a course.

7. Ipsative assessment

It measures the performance of a student against that student’s previous performances. With this method, students try to improve by comparing their current results with previous ones. They are not compared against other students, which can be better for their self-confidence.

References:

 Anastasi, A. (1968). Psychological testing (3rd ed.). New York: Macmillan.
 Bagnato, S. J. (1980). The efficacy of diagnostic reports as individualized guides to prescriptive goal planning. Exceptional Children, 46, 554–557.
 Bagnato, S. J., & Neisworth, J. T. (1979). Between assessment and intervention: Forging an assessment/curriculum linkage for the handicapped preschooler. Child Care Quarterly, 8, 179–197.
 Brandt, H. M., & Giebink, J. W. (1968). Concreteness and congruence in psychologists’ reports to teachers. Psychology in the Schools, 5, 87–89.
 Duffy, J. B., Salvia, J., Tucker, J., & Ysseldyke, J. (1981). Nonbiased assessment: A need for operationalism. Exceptional Children, 47, 427–434.
 Feuerstein, R., Rand, Y., Hoffman, M. B., & Miller, R. (1980). Instrumental Enrichment: An intervention program for cognitive modifiability. Baltimore: University Park Press.
 Narrol, H., & Bachor, D. G. (1975). An introduction to Feuerstein’s approach to assessing and developing cognitive potential. Interchange, 6, 2–16.
