1 | Page
Classroom Assessment (1627) Roll no: CA554837
ASSIGNMENT No.2
Many personal behaviours and private activities are not open to observation. In most cases people do not allow an outsider to study such activities.
2. Not all Occurrences Open to Observation can be Observed when the Observer is at Hand:
Such problems arise because of the uncertainty of events. Many social events are highly uncertain in nature, and it is difficult for the researcher to determine their time and place. An event may take place in the absence of the observer; equally, it may not occur during the observer's constant presence. For example, a quarrel or fight between two individuals or groups is never certain; nobody knows when such an event will take place.
Most social phenomena are abstract in nature. For example, the love, affection, feelings and emotions of parents towards their children are not open to our senses and cannot be quantified by observational techniques. The researcher may employ other methods, such as case studies or interviews, to study such phenomena.
4. Lack of Reliability:
5. Faulty Perception:
Observation is a highly technical job. An observer is never sure that what they are observing is the same as it appears to the eye. Two people may judge the same phenomenon differently: one may find something meaningful and useful in a situation while the other finds nothing in it. Only observers with technical knowledge of observation can make scientific observations.
Personal bias, personal views, or a habit of looking at things in a particular way often create obstacles to valid generalization. The observer may have his own ideas of right and wrong, or different preconceptions regarding an event, which undermines objectivity in social research.
7. Slow Investigation:
Observation is a time-consuming process. P.V. Young rightly remarks that valid observation cannot be hurried; an investigation cannot be completed in a short period through observation. This sometimes reduces the interest of both observer and observed in continuing the observation process.
8. Expensive:
Observation is a costly affair. It requires high cost, plenty of time and hard effort. It involves travelling, staying at the place of the phenomenon, and purchasing sophisticated equipment. For this reason it is regarded as one of the most expensive methods of data collection.
9. Inadequate Method:
According to P.V. Young, “the full answers cannot be collected by observation alone”. Therefore many have suggested that observation must be supplemented by other methods. Checking the validity of observation is always difficult. Many phenomena under observation cannot be defined with sufficient precision, which hinders drawing valid generalizations. A lack of competence on the observer's part may also hamper the validity and reliability of observation.
Time sampling
Tracking
Checklists
Target child
Learning stories
Time sampling:
Time-sampling observations record what a child is doing at regular, predetermined intervals (every five minutes, for example), building a broad picture of how the child spends time in the setting.
Tracking:
Tracking observations follow children's choices within the setting. These choices (including time children spent moving between activities and any time spent observing others) and the time that the child spends at each activity are recorded. You may also record who else was at the activity and briefly how the child engaged with the activity or experience. Again, this offers a broad view of the child in the setting, and assessment can be focused on what you need to know.
Checklists:
Checklists are pre-determined lists that identify knowledge, skills or aptitudes. The purpose of the observation is to ascertain whether a child can meet these criteria. These can be useful if you need to find out something particular and precise. However, checklists are generally not a sufficiently sophisticated way of capturing the richness of young children's learning.
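As a concrete illustration, a checklist observation can be kept as a simple digital record. The sketch below (in Python) uses a hypothetical child name and hypothetical criteria; it is not a prescribed checklist.

```python
from datetime import date

# A minimal sketch of a checklist observation record; the child name
# and criteria below are hypothetical examples, not a standard list.
checklist = {
    "counts to 10": True,
    "holds pencil with tripod grip": False,
    "takes turns in group play": True,
}

record = {
    "child": "Child A",
    "date": date(2024, 3, 1).isoformat(),
    "criteria": checklist,
}

# Summarize which criteria the child met during the observation.
met = [criterion for criterion, ok in record["criteria"].items() if ok]
print(f"{record['child']}: {len(met)}/{len(checklist)} criteria met")
```

A record like this captures only yes/no judgements, which is exactly the limitation noted above: it says nothing about *how* the child engaged with each task.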
Target child:
Target child observations are ones in which you identify a particular child to observe. You may be looking at something in particular or completing an open-ended observation. In this observation the child is observed within the learning environment alongside other children. This gives the child the opportunity to demonstrate what they know and can do within their familiar environment alongside their peers. The activity that the child is involved in is briefly recorded as a narrative, and then language and social interactions are recorded and coded to give an accurate account of what happened during the observation for analysis and interpretation.
Learning stories:
Learning stories are a way of recording and presenting observations of children over time:
building a narrative about their learning. They emerged from the work of Margaret Carr and are
based in socio-cultural theory. Carr (2001) articulates a way of recording children’s learning that
acknowledges the context of that learning. She called these learning stories. The idea is to create
a narrative, a story, recorded as a series of episodes linked together that record what the child knows and can do, and what comes next. This is important. The purpose of recording
children’s learning in learning stories is to enhance their learning, to foreground what they can
do as a starting point for providing for their ongoing development, and to recognize the
complexity of the context and process of learning. The idea of a learning story is interpreted in a
number of ways in practice. Some settings have formatted their observation sheets to create
narrative threads linked to next steps in learning. Others have adopted a portfolio approach, in
which observations and examples of children’s work are kept together to create a narrative of
their progress in the setting. Assessment of children’s learning takes place at each stage of
recording of the learning story in the analysis of the observation to define the next steps.
References:
Palaiologou, I. (2012) Child observation for the early years. London: SAGE.
Papatheodorou, T., Luff, P. with Gill, J. (2011) Child observation for learning and research. Harlow: Pearson.
Smidt, S. (2005) Observing, assessing and planning for children in the early years. London: Routledge.
Thornton, L. and Brunton, P. (2005) Understanding the Reggio approach. London: David Fulton.
Thornton, L., Brunton, P. and Green, S. (2007) Bringing the Reggio approach to your early years practice. London: David Fulton.
Ans: Standardized testing has been around for several generations. In the United States,
standardized tests have been used to evaluate student performance since the middle of the 19th
century. Virtually every person who has attended a public or private school has taken at least one
standardized test.
The advantages and disadvantages of standardized testing are distinctive. On one hand, these tests provide a way to compare student knowledge and find learning gaps. On the other hand, not every student performs well on a test, despite having comprehensive knowledge and understanding of the subject matter involved.
9|P ag e
Classroom Assessment (1627) Roll no: CA554837
3. Standardized tests allow for equal and equivalent content for all students.
This means a complete evaluation of students from an equal perspective can be obtained.
Using alternate tests or exempting children from taking a standardized test creates unequal systems, producing one group of students that is accountable for its results and another group that is unaccountable. A standardized system looks at every child through equal eyes.
According to the National Research Council, even incentive programs tied to standardized
testing results are not working to improve student comprehension, understanding, and
knowledge.
3. They assume that all students start from the same point of understanding.
Standardized tests may allow for a direct comparison of data, but they do not account for
differences in the students who are taking the tests. In the US, standardized tests could be
considered discriminatory in some regions because they assume that the student is a first-
language English speaker. Students who have special needs, learning disabilities, or have
other challenges which are addressed by an Individualized Education Plan may also be at a
disadvantage when taking a standardized test compared to those who do not have those
concerns.
The advantages and disadvantages of standardized testing show that it can be a useful tool for
student evaluation, but only when it is used correctly. Like any system, it can be abused by those
who are looking for shortcuts. That is why each key point must be carefully considered before
implementing or making changes to a plan of standardized testing.
References:
Klein, R. (2014, February 7). New York students are incredibly stressed out about
standardized testing, survey says. The Huffington Post.
Ans: Construct Validity Construct validity refers to the degree to which inferences can
legitimately be made from the operationalizations in your study to the theoretical constructs
on which those operationalizations were based. Like external validity, construct validity is
related to generalizing. But, where external validity involves generalizing from your study
context to other people, places or times, construct validity involves generalizing from your
program or measures to the concept of your program or measures. You might think of
construct validity as a “labeling” issue. When you implement a program that you call a
“Head Start” program, is your label an accurate one? When you measure what you term
“self esteem” is that what you were really measuring?
I would like to tell two major stories here. The first is the more straightforward one. I’ll
discuss several ways of thinking about the idea of construct validity, several metaphors that
might provide you with a foundation in the richness of this idea. Then, I’ll discuss the major
construct validity threats, the kinds of arguments your critics are likely to raise when you
make a claim that your program or measure is valid. In most research methods texts,
construct validity is presented in the section on measurement. And, it is typically presented
as one of many different types of validity (e.g., face validity, predictive validity, concurrent
validity) that you might want to be sure your measures have. I don’t see it that way at all. I
see construct validity as the overarching quality with all of the other measurement validity
labels falling beneath it. And, I don’t see construct validity as limited only to measurement.
As I’ve already implied, I think it is as much a part of the independent variable – the
program or treatment – as it is the dependent variable. So, I’ll try to make some sense of the
various measurement validity types and try to move you to think instead of the validity of any operationalization as falling within the general category of construct validity, with a variety of subcategories and subtypes.
Inadequate Preoperational Explication of Constructs
This one isn’t nearly as ponderous as it sounds. Here, preoperational means before
translating constructs into measures or treatments, and explication means explanation – in
other words, you didn’t do a good enough job of defining (operationally) what you mean by
14 | P a g e
Classroom Assessment (1627) Roll no: CA554837
the construct. How is this a threat? Imagine that your program consisted of a new type of
approach to rehabilitation. Your critic comes along and claims that, in fact, your program is
neither new nor a true rehabilitation program. You are being accused of doing a poor job of
thinking through your constructs. Some possible solutions:
Mono-Operation Bias
Mono-Method Bias
Mono-method bias refers to your measures or observations, not to your programs or causes.
Otherwise, it’s essentially the same issue as mono-operation bias. With only a single version
of a self esteem measure, you can’t provide much evidence that you’re really measuring self
esteem. Your critics will suggest that you aren’t measuring self esteem – that you’re only
measuring part of it, for instance. Solution: try to implement multiple measures of key
constructs and try to demonstrate (perhaps through a pilot or side study) that the measures
you use behave as you theoretically expect them to.
You give a new program designed to encourage high-risk teenage girls to go to school and
not become pregnant. The results of your study show that the girls in your treatment group
have higher school attendance and lower birth rates. You’re feeling pretty good about your
program until your critics point out that the targeted at-risk treatment group in your study is
also likely to be involved simultaneously in several other programs designed to have similar
effects. Can you really label the program effect as a consequence of your program? The
“real” program that the girls received may actually be the combination of the separate
programs they participated in.
Does testing or measurement itself make the groups more sensitive or receptive to the
treatment? If it does, then the testing is in effect a part of the treatment, it’s inseparable
from the effect of the treatment. This is a labeling issue (and, hence, a concern of construct
validity) because you want to use the label “program” to refer to the program alone, but in
fact it includes the testing.
This is what I like to refer to as the “unintended consequences” threat to construct validity.
You do a study and conclude that Treatment X is effective. In fact, Treatment X does cause
a reduction in symptoms, but what you failed to anticipate was the drastic negative
consequences of the side effects of the treatment. When you say that Treatment X is
effective, you have defined “effective” as only the directly targeted symptom. This threat
reminds us that we have to be careful about whether our observed effects (Treatment X is
effective) would generalize to other potential outcomes.
Imagine a study to test the effect of a new drug treatment for cancer. A fixed dose of the
drug is given to a randomly assigned treatment group and a placebo to the other group. No
treatment effects are detected. Perhaps the result that’s observed is only true for that dosage
level. Slight increases or decreases of the dosage may radically change the results. In this
context, it is not “fair” for you to use the label for the drug as a description for your
treatment because you only looked at a narrow range of dose. Like the other construct
validity threats, this is essentially a labeling issue – your label is not a good description for
what you implemented.
I’ve set aside the other major threats to construct validity because they all stem from the
social and human nature of the research endeavor.
Hypothesis Guessing
Most people don’t just participate passively in a research project. They are trying to figure
out what the study is about. They are “guessing” at what the real purpose of the study is.
And, they are likely to base their behavior on what they guess, not just on your treatment. In
an educational study conducted in a classroom, students might guess that the key dependent
variable has to do with class participation levels. If they increase their participation not
because of your program but because they think that’s what you’re studying, then you
cannot label the outcome as an effect of the program. It is this labeling issue that makes this
a construct validity threat.
Evaluation Apprehension
Many people are anxious about being evaluated. Some are even phobic about testing and
measurement situations. If their apprehension makes them perform poorly (and not your
program conditions) then you certainly can’t label that as a treatment effect. Another form
of evaluation apprehension concerns the human tendency to want to “look good” or “look
smart” and so on. If, in their desire to look good, participants perform better (and not as a
result of your program!) then you would be wrong to label this as a treatment effect. In both
cases, the apprehension becomes confounded with the treatment itself and you have to be
careful about how you label the outcomes.
Experimenter Expectancies
These days, where we engage in lots of non-laboratory applied social research, we generally
don’t use the term “experimenter” to describe the person in charge of the research. So, let’s
relabel this threat “researcher expectancies.” The researcher can bias the results of a study in countless ways, both consciously and unconsciously. Sometimes the researcher can
communicate what the desired outcome for a study might be (and participant desire to “look
good” leads them to react that way). For instance, the researcher might look pleased when
participants give a desired answer. If this is what causes the response, it would be wrong to
label the response as a treatment effect.
Q.4 What are scoring rubrics? Discuss the benefit of scoring rubrics with examples.
Ans: In education terminology, a rubric is "a scoring guide used to evaluate the quality of students' constructed responses". Put simply, it is a set of criteria for grading assignments.
Rubrics usually contain evaluative criteria, quality definitions for those criteria at particular
levels of achievement, and a scoring strategy. They are often presented in table format and can
be used by teachers when marking, and by students when planning their work.
They contain specific performance characteristics arranged in levels indicating either the
developmental sophistication of the strategy used or the degree to which a standard has been met.
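To make that structure concrete, a rubric's evaluative criteria, level descriptors, and scoring strategy can be sketched as a small data structure. The criteria, descriptors, and point values below are hypothetical examples, not a standard rubric.

```python
# A minimal sketch of a scoring rubric; criteria, level descriptors,
# and point values are hypothetical illustrations.
rubric = {
    "organization": {
        1: "ideas presented in no clear order",
        2: "some logical ordering of ideas",
        3: "clear, logical structure throughout",
    },
    "evidence": {
        1: "claims are unsupported",
        2: "some claims supported by evidence",
        3: "all claims supported by relevant evidence",
    },
}

def score(ratings: dict) -> int:
    """Scoring strategy: sum the level awarded for each criterion."""
    return sum(ratings.values())

# A teacher's ratings for one piece of work.
ratings = {"organization": 3, "evidence": 2}
print(score(ratings), "out of", score({c: max(levels) for c, levels in rubric.items()}))
```

Printed out as a table of criteria (rows) against levels (columns), the same structure becomes the familiar rubric grid that teachers use when marking and students use when planning their work.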
Ans: There are different types of assessment in education. All assessment methods have different
purposes during and after instruction. This article will tell you what types of assessment are most
important during developing and implementing your instruction.
1. Pre-assessment or diagnostic assessment
Before creating the instruction, it's necessary to know what kind of students you're creating the instruction for. Your goal is to get to know your students' strengths, weaknesses, and the skills and knowledge they possess before taking the instruction. Based on the data you've collected, you can create your instruction.
2. Formative assessment
Formative assessment is used during the first attempt at developing instruction. The goal is to monitor student learning and provide feedback. It helps identify the first gaps in your instruction; based on this feedback you'll know what to focus on when expanding your instruction further.
3. Summative assessment
Summative assessment is aimed at assessing the extent to which the most important outcomes have been reached at the end of the instruction. But it measures more: the effectiveness of the learning, reactions to the instruction, and the benefits on a long-term basis. The long-term benefits can be determined by following up with students who attended your course or took your test, to see whether and how they use the knowledge, skills and attitudes they learned.
22 | P a g e
Classroom Assessment (1627) Roll no: CA554837
4. Confirmative assessment
When your instruction has been implemented in your classroom, it's still necessary to carry out assessment. Your goal with confirmative assessment is to find out whether the instruction is still a success after a year, for example, and whether the way you're teaching is still on point. You could say that a confirmative assessment is an extensive form of summative assessment.
5. Norm-referenced assessment
This compares a student's performance against an average norm. This could be the average national norm for the subject History, for example. Another example is when a teacher compares the average grade of his or her students against the average grade of the entire school.
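A norm-referenced comparison is often expressed as a standard (z) score: the student's raw score minus the norm mean, divided by the norm's standard deviation. A minimal sketch, using a hypothetical norm mean and standard deviation:

```python
# Sketch of a norm-referenced comparison: express a raw score as a
# z-score against a hypothetical national norm.
norm_mean = 70.0  # hypothetical average national score
norm_sd = 10.0    # hypothetical standard deviation of the norm group

def z_score(raw: float) -> float:
    """How many standard deviations the raw score sits from the norm."""
    return (raw - norm_mean) / norm_sd

print(z_score(85.0))  # 1.5 -> one and a half SDs above the norm
```

A positive z-score means the student scored above the norm group's average; a negative one, below it. The comparison says nothing about what the student can actually do, only where they stand relative to others.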
6. Criterion-referenced assessment
This measures a student's performance against a fixed set of predetermined criteria or learning standards, rather than against the performance of other students.
7. Ipsative assessment
This measures the performance of a student against that student's previous performances. With this method students try to improve by comparing themselves with their own previous results rather than with other students, which can be better for their self-confidence.