
Process of Assessment:

1. Program objectives
2. Design
3. Collection of data
4. Analysis of data
5. Reporting of results
6. Use of results

DIFFERENCE BETWEEN MEASUREMENT AND ASSESSMENT / EVALUATION
Measurement:
1) Measurement is the process of obtaining a numerical description of the
degree to which an individual possesses particular characteristics.
2) It answers the question "How much?"
3) Measurement is limited to quantitative description only.
4) It is concerned with tests and examinations.
5) It involves the collection of data.
6) It is only a quantitative assessment of instructional outcomes.
7) Measurement describes a situation.
8) It is a basic and limited concept.
9) Measurement is a component of evaluation.
10) It provides results in numeric form.
11) It provides a percentage.
Evaluation /Assessment:
1. Evaluation is the systematic process of collecting, analysing and
interpreting information to determine the extent to which pupils have
achieved instructional objectives.
2. It answers the question "How good?"
3. It includes:
(i) Quantitative description (80/100 marks)
(ii) Qualitative description (non-measurement)
(iii) Value judgment (Asjad is making good progress in Maths)
4. It is concerned with the whole process of education.
5. It involves the evaluation of data.
6. It is a continuous process which covers every aspect of an individual's
achievement in an educational program.
7. Evaluation judges worth or value.
8. It is a comprehensive and vast concept.
9. It interprets numeric results.
10. It evaluates percentages.
TAXONOMIES OF EDUCATIONAL OBJECTIVES

Bloom’s Taxonomy:
The educational objectives are divided into three domains according to this
taxonomy. These are
a) Cognitive domain
b) Affective domain
c) Psychomotor domain

A. Cognitive Domain:
It deals with thinking processes and mental faculties. It was further divided
into six subgroups by Bloom in 1956.

1. Knowledge
i. Knowledge is defined as the remembering of previously learned
material.
ii. This is the lowest level of learning.
iii. It ranges from the recall of simple facts to that of complete theories.
iv. This includes:
a) Knowledge of facts, e.g., solids expand on heating.
b) Knowledge of terms, e.g., abiogenesis.
c) Knowledge of principles, e.g., Boyle's law, Charles's law.
d) Knowledge of concepts, e.g., force, solubility.
e) Knowledge of methods and procedures, e.g., the scientific method.

2. Comprehension:
i. Comprehension is defined as the ability to grasp the meaning of
material.
ii. This may be shown by:
a) Translating material
b) Interpreting material
c) Estimating future trends
iii. The level of learning is one step higher than that of knowledge.
iv. Example: Explain Newton's third law of motion.

3. Application:
i. Application refers to the ability to use learned material in new and
concrete situations.
ii. This includes the application of rules, methods, concepts, principles,
laws and theories.
iii. This requires a higher level of understanding.
iv. Example: Why does a water pipe burst when the temperature falls to 0 °C?

4. Analysis:
i. Analysis refers to the ability to break down material into its component
parts so that its organizational structure may be understood.
ii. This involves:
a) Identification of parts.
b) Analysis of relationships between parts.
c) Recognition of organizational principles.
iii. The level of learning is higher than that of comprehension and application.
iv. Example: Analyze the organizational structure of a work of art, music
or writing.
5. Synthesis:
i. Synthesis refers to the ability to put parts together to form a new
whole.
ii. This may involve:
a) Production of a new speech or theme.
b) Production of a research proposal.
c) Production of a scheme for classifying information.
iii. This leads to the formulation of new patterns or structures.
iv. Example: Write a creative short story.

6. Evaluation:
i. Evaluation is concerned with the ability to judge the value of material,
such as a novel, report or poem, for a given purpose.
ii. Judgments are based on definite criteria.
iii. This is the highest level of learning outcome in the cognitive hierarchy.
iv. Example: Evaluate the poetry of Allama Iqbal.

B. Affective Domain:
It deals with attitudes, likings, dislikings, habits, values and
feelings. It was further divided into five subgroups by Krathwohl in 1964.

1. Receiving:
i. Receiving refers to a student's willingness to attend to a
particular phenomenon.
ii. This is the lowest level of learning outcome in the affective domain.
iii. Examples:
a) Attending classroom activities
b) Listening to textbook reading
c) Listening to music

2. Responding:
i. Responding refers to active participation on the part of the
students.
ii. This involves:
a) Reading assigned material
b) Reading voluntarily
c) Reading for pleasure
iii. This includes objectives concerned with interest and
enjoyment.
iv. Examples: A student
a) Completes assigned homework
b) Obeys school rules
c) Participates in classroom discussions
d) Completes laboratory work

3. Valuing:
i. Valuing is concerned with the worth or value a student attaches to
a particular object or phenomenon.
ii. This ranges from simple acceptance of a value to a more complex
level of commitment.
iii. This includes objectives concerned with attitudes and appreciation.
iv. Examples:
a) Appreciating good literature or music
b) Appreciating science
c) Showing concern for the welfare of others
d) Demonstrating a problem-solving attitude

4. Organization:
i. Organization is concerned with:
a) Bringing together different values
b) Resolving conflicts between them
c) Building an internally consistent value system
ii. This includes objectives concerned with the development of a philosophy
of life.
iii. Examples:
a) Recognizing the need for a balance between freedom and
responsibility in a democracy.
b) Understanding and accepting one's own strengths and limitations.

5. Characterization:
i. At this level, the individual has a value system that controls
the behaviour for a sufficiently long time to develop a
characteristic life style.
ii. Learning outcomes indicate the typical behaviour of the
student.
iii. Examples:
a) Displaying safety consciousness.
b) Using an objective approach in problem solving.
c) Maintaining good health habits.
d) Practicing co-operation in group activities.

C. Psychomotor Domain:
It deals with the development of psychomotor skills. It was further
divided into seven subgroups by Simpson in 1972.

1. Perception:
It is concerned with the use of the sense organs to obtain cues that
guide motor activity.
Example: Observing a computer in order to operate it.

2. Set:
i. Set refers to readiness to take a particular type of action.
ii. This includes:
a) Mental readiness to act.
b) Physical readiness to act.
c) Emotional readiness (willingness) to act.
iii. Example: Showing a desire to compose.

3. Guided Response:
i. Guided response includes the early stages of learning a skill.
ii. It includes:
a) Imitation
b) Trial and error
iii. Examples:
a) Performing experiments as demonstrated
b) Applying first aid bandages as demonstrated
c) Operating a microscope as demonstrated
4. Mechanism:
i. Learned responses become habitual.
ii. Movements can be performed with some confidence and
proficiency.
iii. Movement patterns are less complex.
iv. Examples:
a) Setting up laboratory equipment
b) Writing smoothly and legibly
c) Operating a slide projector

5. Complex Overt Responses:
i. This involves complex movement patterns.
ii. Learned responses are performed skilfully.
iii. Proficiency is indicated by a quick, smooth, accurate
performance requiring a minimum of energy.
iv. Performance is made without hesitation.
v. Movements are made with ease and good muscle control.
vi. Learning outcomes include coordinated motor activities.
vii. Examples:
a) Operating a computer skilfully.
b) Demonstrating skill in driving an automobile.
c) Demonstrating correct form in swimming.

6. Adaptation:
i. At this stage, skills are so highly developed that an individual can
modify movement patterns.
ii. Examples:
a) Adjusting tennis play to counter an opponent's style.
b) Modifying swimming strokes to fit the roughness of the water.

7. Origination:
i. It refers to the creation of new movement patterns to fit a specific
problem.
ii. Learning outcomes emphasize creativity.
iii. Examples:
a) Creating a musical composition
b) Designing a new dress style.

SOLO TAXONOMY
SOLO, which stands for Structure of the Observed Learning Outcome,
provides a systematic way of describing how a learner's performance grows in
complexity when mastering many tasks, particularly the sort of tasks
undertaken in school.

1. Pre-Structural
The task is not attacked appropriately; the student has not really
understood the point and uses too simple a way of going about it.

2. Unistructural
One aspect of a task is picked up or understood serially, and there is
no relationship of facts or ideas.

3. Multistructural
Two or more aspects of a task are picked up or understood serially,
but are not interrelated.

4. Relational
Two or more aspects of a task are integrated so that the whole has a
coherent structure and meaning.

5. Extended Abstract
That coherent whole is generalized to a higher level of abstraction.
The two surface-level responses involve understanding of ideas or facts.
Unistructural responses and questions require the knowledge or use of only
one piece of given information, a fact, or an idea, obtained directly from the
problem. With an increase in quantity, multistructural responses or items
require knowledge or use of more than one piece of given information, facts or
ideas, each used separately, or in two or more distinct steps, with no
integration of the ideas. In contrast, the two deep processes constitute a change
in the quality of thinking that is cognitively more challenging than surface questions.
Relational responses or questions require the integration of at least two separate
pieces of given knowledge, information, facts, or ideas, which when working
together answer the question. In other words, relational items require learners to
impose an organizing pattern on the given material.
The highest level of the SOLO taxonomy, extended abstract, requires the
respondent to go beyond the given information, knowledge, or ideas
and deduce a more general rule or proof that applies to all cases. In this latter
case, the learner is forced to think beyond the given and bring in related prior
knowledge, ideas, or information in order to create an answer, prediction, or
hypothesis that extends the given to a wider range of situations.

1. Preparing the Two-Way Chart

The final step in building a table of specifications is to prepare a two-way
chart that relates the instructional objectives to the instructional
content, thus specifying the nature of the sample of test items and
assessment tasks. An example of the chart for our middle school
weather unit is presented in the table below.

Table of specifications for a weather unit in a middle school class

                          Knows                        Understands         Interprets
Content           Basic   Weather   Specific   Influence of Each      Weather   Total     Percent
                  Terms   Facts     Symbols    Factor on Weather      Maps      Number    of
                                               Formation                        of Items  Items
Air pressure        1       1         1               3                  3         9        15
Wind                1       1         1              10                  2        15        25
Temperature         1       1         1               4                  2         9        15
Humidity and        1       1         1               7                  5        15        25
precipitation
Clouds              2       2         2               6                  -        12        20
Total number        6       6         6              30                 12        60        -
of items
Percent of         10      10        10              50                 20        -        100
items
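The marginal figures in such a chart are easy to get wrong when items are added or dropped, so it is worth checking that each row sums to its total and that the percentages follow from the grand total. A minimal sketch in Python (the data simply re-enters the weather-unit chart; the code itself is not part of the original notes):

```python
# Item counts per content row, in column order:
# (basic terms, weather facts, specific symbols,
#  influence of each factor on weather formation, weather maps)
chart = {
    "Air pressure":               (1, 1, 1,  3, 3),
    "Wind":                       (1, 1, 1, 10, 2),
    "Temperature":                (1, 1, 1,  4, 2),
    "Humidity and precipitation": (1, 1, 1,  7, 5),
    "Clouds":                     (2, 2, 2,  6, 0),
}

row_totals = {topic: sum(cells) for topic, cells in chart.items()}
grand_total = sum(row_totals.values())                       # 60 items in all
row_percents = {t: round(100 * n / grand_total) for t, n in row_totals.items()}
col_totals = [sum(col) for col in zip(*chart.values())]      # per-objective totals
```

Running this reproduces the chart's margins: a grand total of 60 items, row totals such as 15 for Wind (25 percent), and column totals of 6, 6, 6, 30 and 12.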

1.3 SELECTING APPROPRIATE TYPES OF TEST ITEMS

The tests constructed by teachers may be classified as objective tests
or essay tests, which may be subdivided into the following basic types
of test items.

Objective Tests:

A. Supply type:

1) Short answer

2) Completion

B. Selection type:

1) True-false or alternative response

2) Matching

3) Multiple choice

Essay Test:

A. Extended response

B. Restricted response

Objective tests present pupils with a highly structured task which limits
their response to supplying a word, brief phrase, number, or symbol, or to
selecting the answer from among a given number of alternatives. The
essay test permits pupils to respond by selecting, organizing, and
presenting those facts they consider appropriate. Both types of tests serve
useful purposes in measuring pupil achievement. The type to use in a
particular situation is best determined by the learning outcomes to be
measured and by the advantages and limitations of each type. A common
practice is to include both objective test items and essay questions in
classroom tests.

So the selection of an appropriate test item depends on:

1) The nature of the learning outcome to be measured.

2) The advantages and limitations of each type.

3) The common practice of using both types, i.e., objective and subjective.

4) The skill with which an item is constructed.

1.4 CONSIDERATIONS IN PREPARING RELEVANT TEST ITEMS AND ASSESSMENT TASKS

a) Matching Items and Tasks to Intended Outcomes

Classroom tests and assessments are most likely to provide a valid
measure of the instructional objectives if the test items and assessment
tasks are designed to measure the performance defined by the specific
learning outcomes. The process of matching test items involves fitting
each item or task as closely as possible to the intended outcome.

b) Obtaining a Representative Sample of Items and Tasks

A test or assessment, no matter how extensive, is almost always a
sample of the many possible test items or tasks that could be included.
We expect students to know thousands of facts, but we can test for only
a limited number of them.

Our sampling is most likely to be representative when the preparation
of a test or assessment is guided by a carefully prepared set of
specifications. Unless a table of specifications, or some similar device, is
used as a guide in construction, there is a tendency to overload the test
with items measuring knowledge of isolated facts and to neglect more
complex learning outcomes.

Number of items and tasks. The number of items and tasks is, of
course, an important factor in obtaining a representative sample. The
number of items and the number of performance tasks are determined
when the set of specifications is built and depend on such factors as the
purpose of measurement, the type of test items and assessment tasks
used, the age of the students, and the level of reliability needed for
effective use of the test or assessment results. Thus an assessment over
a third-grade social studies unit might contain 30 objective items,
whereas a survey test over a tenth-grade social studies course might
contain more than 100 objective items and several essay questions.

c) Eliminating Irrelevant Barriers to the Performance

i) Difficult Sentence Structure and Vocabulary:

If students have achieved a particular learning outcome (e.g.,
knowledge of terms), we would want them to answer correctly
those test items that measure the attainment of that learning
outcome. We would be very unhappy (and so would they) if
students answered such test items incorrectly merely because the
sentence structure was too complex, the vocabulary too difficult,
or the type of response called for was unclear.

ii) Prerequisite Skills:

One way to eliminate factors that are extraneous to the purpose
of a measurement is to be certain that all students have the
prerequisite skills and abilities needed to make the response.
These have been called enabling behaviors because they enable
the student to make the response but are not meant to be critical
factors in the measurement. That is, they are a necessary but not
sufficient condition for responding correctly or performing a task
well.

iii) Differences in Ancillary Abilities:

Differences in reading ability, computational skill, and the like
should not influence the students' responses unless such
outcomes are specifically being measured. The only functional
difference between the students who perform well and those who
perform poorly should be the possession of the knowledge,
understanding, or other learning outcome being measured by the
task. All other differences are extraneous to the purpose of the
task, and their influence should be eliminated or controlled for
valid results.

iv) Ambiguity:

A special problem in preventing extraneous factors from
distorting test and assessment results is avoiding ambiguity.
Objective test items are especially subject to misinterpretation
when long, complex sentences are used, when the vocabulary is
unnecessarily difficult, and when words that lack precise meaning
are used.

v) Racial, Ethnic or Gender Bias:

An effort should also be made to avoid any racial, ethnic, or
gender bias in preparing the test items and performance
assessment tasks. The vocabulary and task situations should be
acceptable to various racial and ethnic groups and to both males
and females, and should be free of stereotyping.

d) Avoiding Unintended Clues

Some clues may creep into test items during their
construction. They lead the poor achiever to the correct
answer and thereby prevent the items from functioning as
intended.

Some possible barriers in test items:

• Ambiguous statements

• Excessive wordiness

• Difficult vocabulary

• Complex sentence structure

• Unclear instructions

• Unclear illustrative material

• Racial, ethnic, or gender bias

General Suggestions for Writing Test Items and Assessment Tasks

1. Use your test and assessment specifications as a guide.

2. Write more items and tasks than needed.

3. Write the items and tasks well in advance of the testing date.

4. Write each test item and assessment task so that the task to be
performed is clearly defined and it calls forth the performance described
in the intended learning outcome.

5. Write each item or task at an appropriate reading level.

6. Write each item or task so that it does not provide help in responding to
other items or tasks.

7. Write each item so that the answer is one that would be agreed upon
by experts, or, in the case of assessment tasks, so that the responses
judged excellent would be agreed upon by experts.

Whenever a test item or assessment task is revised, recheck its relevance.

FORMS AND USES OF ESSAY TESTS

A) RESTRICTED RESPONSE TEST:
The restricted response question usually limits both the content and
the response. The content is usually restricted by the scope of the topic to be
discussed; limitations on the form of the response are generally indicated in
the question.
Another way of restricting responses in essay tests is to base the questions on
specific problems. For this purpose, introductory material like that used in
interpretive exercises can be presented. Such items differ from objective
interpretive exercises only by the fact that essay questions are used instead of
multiple-choice or true-false items.
Because the restricted response question is more structured, it is more useful
for measuring learning outcomes requiring the interpretation and application of
data in a specific area. In fact, any of the learning outcomes measured by an
objective interpretive exercise can also be measured by a restricted response
essay question. The difference is that the interpretive exercise requires pupils
to select the answer, whereas the restricted response question requires pupils
to supply it. In some situations the interpretive exercise is preferred because of
its ease and reliability of scoring; in other situations the restricted response
question is better because of its more direct relevance to the learning outcomes
(e.g., the ability to formulate valid conclusions).

Merits:
1. It measures more specific learning outcomes.
2. It is more objective than the extended response question.
3. It is more reliable than the extended response question.
4. Scoring is easy compared to the extended response question.
5. It can sample a wide range of content.

De-Merits:
1. Students feel restricted in expressing their ideas.
2. It is less objective and reliable than objective type questions.
3. Writing abilities cannot be assessed.
4. It is more difficult to construct, as the number of questions needed is
greater than for the extended response question.
5. Critical ideas and problem-solving skills cannot be asked about and measured.

B) Extended Response Test:

The extended response question allows pupils to select any factual
information that they think is pertinent, to organize the answer in accordance
with their best judgement, and to integrate and evaluate ideas as they deem
appropriate. This freedom enables them to demonstrate their ability to select,
organize, integrate and evaluate ideas.
It seems more sensible to identify the complex behaviours we want to measure,
formulate questions that elicit these behaviours, evaluate the results as reliably
as we can, and then use these admittedly inadequate data as the best evidence
we have available.
Merits:
1. Students feel complete freedom to express their ideas.
2. It is used to present, integrate and evaluate ideas.
3. It measures complex learning outcomes that cannot be measured by other
types of questions.
4. It is easy to construct, as few questions are needed.
5. Copying is difficult.

De-Merits:
1. It cannot measure specific learning outcomes.
2. Scoring is subjective and unreliable.
3. Scoring is difficult.
4. Sampling of the content is inadequate.
5. Success or failure may be due to the choice of questions.

5.2 Constructing Essay Items:

1. Restrict the use of essay questions to those learning outcomes that cannot
be measured satisfactorily by objective items.
2. Construct questions that will call forth the skills specified in the learning
standard.
3. Phrase the questions so that the student's task is clearly indicated.
4. Indicate an approximate time limit for each question.
5. Avoid the use of optional questions.

5.3 Evaluating and Scoring Essay Tests:

When the necessary preliminary steps have been taken in
constructing essay questions, the following suggestions can be used effectively
to increase the reliability of the scoring.

1. Prepare an outline of the expected answer in advance:

This should contain the major points to be included, the characteristics of the
answer to be evaluated, and the amount of credit to be allotted to each.

2. Use the scoring rubric that is most appropriate:

As discussed above, two types of scoring rubrics, analytic and holistic, are
commonly used for essay questions. Analytic rubrics focus on one characteristic
at a time. Holistic rubrics are likely to be more useful when the focus of the
assessment is on overall content understanding rather than writing skills.

3. Evaluate all responses to one question before going on to the next one:

Then score all answers to the second question, and so on until all the questions
have been scored. A more uniform standard can be maintained with this
procedure because it is easier to remember the basis for judging each answer,
and answers of various degrees of quality can be more easily compared.

4. Decide how to handle factors that are irrelevant to the learning outcomes
being measured:

Several factors that are not directly pertinent to the purpose of the
measurement influence our evaluation of answers. Prominent among these are
legibility of handwriting, spelling, sentence structure, punctuation and neatness.

5. When possible, evaluate the answers without looking at the student's name:

The general impression we form about each student during our teaching is also
a source of bias in evaluating essay questions. It is not uncommon for a teacher
to give a high score to a poorly written answer by rationalizing that "the student
is really capable even though he did not express himself clearly".

5.4 Developing Scoring Rubrics

A scoring rubric is a set of guidelines for the application of performance criteria
to the responses and performances of students. A scoring rubric typically
consists of verbal descriptions of performance, or aspects of student responses,
that distinguish between advanced, proficient, partially proficient and beginning
levels of performance.

Levels of Rubrics
The number of levels and the verbal descriptions used to guide the scoring may
vary from situation to situation. For the hands-on science task involving the
floating pencils, as shown in the figure, separate scoring rubrics were used for
each part of the response. For the part of the task where the students were
supposed to identify the mystery water and explain how they could "tell what
the mystery water is", student responses were scored using a rubric with three
levels.
Complete
The student stated that the mystery water was fresh and gave a satisfactory
explanation that referred to observations made during the hands-on task.

Partial
The student stated that the mystery water was fresh but did not support the
choice by direct reference to observations from the hands-on task.

Incorrect
The student gave the wrong answer or gave a contradictory explanation for the
choice of the correct answer of fresh water.

Example task
First-grade children are asked to arrange four pictures of trees in order of the
seasons by pasting them in four boxes and printing each season's name in the box.

Scoring rubric
Two points: The student arranges the pictures in the right order.
One point: The student begins the task but does not complete the arrangement.
Zero points: The student does not respond appropriately.
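A point-based rubric like this is mechanical enough to express directly in code. The Python sketch below is illustrative only (the season names, their order, and the list representation of a child's response are assumptions, not part of the notes):

```python
SEASONS = ["spring", "summer", "autumn", "winter"]  # assumed target order

def score_season_task(placed):
    """Apply the two/one/zero rubric to the list of season names
    in the order the child pasted the pictures."""
    if placed == SEASONS:
        return 2        # pictures arranged in the right order
    if 0 < len(placed) < len(SEASONS):
        return 1        # task begun but the arrangement not completed
    return 0            # no response, or a full but inappropriate arrangement
```

Encoding a rubric this way forces the level descriptions to be unambiguous, which is exactly the property that makes a rubric reliable across scorers.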

IMPORTANT TEST CHARACTERISTICS

1. Validity
Validity is the degree to which a test measures what it is supposed to
measure.
A meter is used for the purpose of measuring length, while a scale is used for
the purpose of measuring weight. If a meter is used for measuring length, it is
valid because it is supposed to measure length; if it is used for measuring
weight, it is not valid.
Tests are designed for a variety of purposes. A test designed for measuring
achievement in the subject of biology will not be valid for measuring
personality. In the same way, a test designed for measuring achievement in the
subject of biology for class 5 will not be valid for measuring achievement in the
subject of biology for class 8.
Types of Validity

1. Content Validity
"Content validity is the degree to which a test measures an intended content
area."
It is concerned with the ability of a test to cover all the content. If a test is
designed to measure the concepts of biology and the items of the test deal with
only five out of ten chapters, then the test will show poor content validity.
 A test with good content validity covers all the content.
 Content validity is of prime importance for achievement tests.
Content validity is measured by expert judgment. It cannot be computed by any
formula and cannot be expressed quantitatively. Usually, experts are asked
to assess the content validity of a test. However, developing a table of
specifications for item development ensures content validity; a table of
specifications is a device which results in balanced coverage of the content.

2. Construct Validity
A construct is a non-observable trait, such as learning, anxiety, creativity,
scientific attitude or intelligence, which explains behaviour.
Construct validity is the degree to which a test measures an intended
hypothetical construct.
A construct cannot be seen; only its effects can be observed. For example,
intelligence cannot be seen, but we observe that some students learn faster
than others. To explain this difference, a theory of intelligence was developed;
it was hypothesized that intelligence is related to learning, and a test was
developed to measure intelligence. Students having high IQs tend to learn
faster.
Construct validity involves testing hypotheses deduced from a theory
concerning the construct. For example, if a theory of intelligence hypothesizes
that students with high IQs learn faster, and students achieving high scores on
a test designed to measure intelligence do indeed learn faster, this is evidence
in support of the construct validity of the test.
3. Concurrent Validity
Concurrent validity is the degree to which the scores on a test are related to
the scores on another, already established test administered at the same time.
Often a test is developed that claims to do the same job as other tests; e.g., a
paper and pencil test that does the same job as a performance test will be
preferred. In the same way, a shorter test doing the same job as a longer test
will be preferred. In these cases, the concurrent validity of the paper and
pencil test or the shorter test needs to be established.
Steps:
1. Administer the new test.
2. Administer an already developed valid test to the same group at the
same time or shortly thereafter.
3. Correlate the two sets of scores.
4. A high correlation coefficient will indicate good concurrent validity,
and vice versa.
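Step 3, correlating the two sets of scores, usually means computing a Pearson correlation coefficient. A self-contained sketch in Python (the score lists are invented for illustration; the notes do not prescribe a particular formula):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores of one group on a new paper-and-pencil test and
# on an already established performance test, taken at the same time:
new_test    = [55, 62, 70, 48, 81, 66]
established = [58, 60, 73, 50, 79, 68]
r = pearson_r(new_test, established)   # near +1: good concurrent validity
```

A coefficient near +1 supports concurrent validity, while a value near 0 means the two tests are not measuring the same thing. The identical computation serves step 3 of predictive validity, with final-exam scores in place of the established test.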

4. Predictive Validity
Predictive validity is the degree to which a test can predict how well an
individual will do in the future.
A mathematics aptitude test that has high predictive validity will accurately
predict which students will do well in mathematics and which will not.
Predictive validity is of prime importance for tests which are used for the
purpose of admission or selection of students. If the admission or selection test
has high predictive validity, then the students selected will get high scores in
the future.

Steps:
1. Administer the test (e.g., a math aptitude test).
2. Obtain scores in the final exams.
3. Correlate the two sets of scores.
4. A high correlation coefficient will indicate high predictive validity of the
test administered.
FACTORS AFFECTING VALIDITY:

1. Unclear Directions:
Directions that do not clearly indicate to pupils how to respond to
the items, whether it is permissible to guess, and how to record the
answers will tend to reduce validity.

2. Reading Vocabulary and Sentence Structure Too Difficult:
Vocabulary and sentence structure that are too complicated for the
pupils taking the test will result in the test measuring reading
comprehension and aspects of intelligence, which will distort the
meaning of the test results.

3. Inappropriate Level of Difficulty of the Test Items:
In norm-referenced tests, items that are too easy or too difficult will not
provide reliable discriminations among pupils and will therefore lower
validity. In criterion-referenced tests, the failure to match the difficulty
specified by the learning outcomes will lower validity.

4. Poorly Constructed Test Items:

Tests that unintentionally provide clues to the answer will tend
to measure the pupils' alertness in detecting clues as well as those
aspects of pupil performance that the test is intended to measure.

5. Ambiguity:
Ambiguous statements in test items contribute to
misinterpretations and confusion. Ambiguity sometimes confuses the
better pupils more than it does the poor pupils, causing the items to
discriminate in a negative direction.

6. Test Too Short:

A test is only a sample of the many questions that might be asked. If a
test is too short to provide a representative sample of the performance
we are interested in, its validity will suffer accordingly.

7. Identifiable Pattern of Answers:

Placing correct answers in some systematic pattern (e.g., T, T, F, F or
A, B, C, D, A, B, C, D) will enable pupils to guess the answers to some
items more easily, and this will lower validity.
2. RELIABILITY
Reliability is the degree to which a test consistently measures whatever
it measures. A reliable test gives the same score when administered and
readministered, while an unreliable test does not give the same score.
If an intelligence test were unreliable, then a student scoring an IQ of 120 today
might score an IQ of 140 tomorrow and 95 the day after tomorrow. If the test
were reliable and a student's IQ was 110, we would not expect large fluctuations
in score: a score of 105 would not be unusual, but a score of 145 would be very
unlikely.
A valid test is always reliable, but a reliable test is not necessarily valid.

FACTORS INFLUENCING RELIABILITY:

1. Length of Test:
The longer a test is, the higher its reliability will be. This is because a longer
test measures a more adequate sample of behaviours, and scores are less
affected by chance factors or guessing.
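The standard way to quantify this length effect is the Spearman-Brown prophecy formula (not named in these notes, but the usual tool): if a test with reliability r is made k times as long with comparable items, the predicted reliability is kr / (1 + (k - 1)r). A small sketch in Python:

```python
def spearman_brown(r, k):
    """Predicted reliability when a test of reliability r is made k times longer
    (Spearman-Brown prophecy formula)."""
    return k * r / (1 + (k - 1) * r)

# Doubling a test whose current reliability is 0.60:
new_r = spearman_brown(0.60, 2)   # 1.2 / 1.6 = 0.75
```

Doubling a test with reliability 0.60 is thus predicted to raise it to 0.75, which illustrates why longer tests are more reliable, as stated above.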

2. Spread of Scores:
The larger the spread of scores, the higher the estimate of reliability will be.
This is because a larger reliability coefficient results when each individual's
position remains the same from one testing to another; greater differences
between scores reduce the possibility of shifting positions.

3. Difficulty of test:
Tests that are too easy or too difficult have low reliability. This is
because both easy and difficult tests result in a restricted spread of
scores: for an easy test the scores are grouped together at the top, and
for a difficult test they are grouped together at the bottom. Since the
differences among individuals are small in either case, the scores tend
to be unreliable.
4. Objectivity:
Standardized tests, which are high in objectivity, have high
reliability. A test is said to be objective when scoring is not affected
by the personal opinion of the scorer. Objective-type tests therefore
have high reliability, while essay-type tests have low reliability,
because essay scoring is affected by the scorer's personal opinion.

3. USABILITY:
It is the characteristic of a test that it fulfils the following
practical considerations.

1. Time for administration:
Shorter tests are favoured, but they have low reliability. A safe
procedure is to allot as much time as necessary; somewhere between 20
and 60 minutes of testing time is probably a fairly good guide.

2. Ease of administration:
A test will be easy to administer when:
• Directions will be simple and clear.

• Sub test will be few.

• Time of test will be suitable.

3. Ease of scoring:
Those tests are favoured that offer ease and economy of scoring without
sacrificing scoring accuracy.

4. Ease of interpretation:
When the results are presented to the pupils or parents, ease of
interpretation and application are especially important. If results are
correctly interpreted, they contribute to important educational decisions.

5. Availability of equivalent forms:
Equivalent forms of a test measure the same aspects of behaviour by
using test items that are alike in content, difficulty level, and other
characteristics.

6. Cost of testing:
Tests should be economical, but sacrificing a valid and reliable test
because of its high cost and selecting a cheaper one is false economy.

4. OBJECTIVITY:
The objectivity of a test refers to the degree to which equally
competent scorers obtain the same results. Most standardized tests of
aptitude and achievement are high in objectivity: the test items are of
objective type (e.g., multiple choice), and the resulting scores are not
influenced by the scorer's judgement or opinion. In fact, such tests are
usually constructed so that they can be accurately scored by trained
clerks and scoring machines. For classroom tests constructed by
teachers, however, objectivity may play an important role in obtaining
reliable measures of achievement.

6.3 Benefits of item analysis:

The effectiveness of each test item can be determined by analyzing
student responses to it. Item analysis is generally associated with a
norm-referenced perspective; selection on these grounds is not relevant
from a criterion-referenced perspective.
Item analysis is usually designed to answer questions such as the following:
1. Did the item function as intended?
2. Were the test items of appropriate difficulty?
3. Were the test items free of irrelevant clues and other defects?
4. Was each of the distracters effective in the multiple-choice items?

1. Item analysis data provide a basis for efficient class discussion of the test
results.
Knowing how effectively each item or task functioned in measuring
achievement makes it possible to confine the discussion to those areas
most helpful to students.

2. Item analysis data provide a basis for remedial work.


Although discussing the test results in class can clarify and correct many
specific points, item analysis frequently brings to light general areas of
weakness requiring more extended attention.

3. Item analysis data provide a basis for the general improvement of
classroom instruction.
In addition to the preceding uses, item analysis data can assist in
evaluating the appropriateness of the learning outcomes and course
content for the particular students being taught.
4.Item analysis procedures provide a basis for increased skill in test
construction.
Item analysis reveals ambiguities, clues, ineffective distracters, and
technical defects that were missed during test preparation. This
information is used directly in revising the test items for future use.

6.4 Item analysis


It includes:
1. Item difficulty
2. Discrimination power
3. Effectiveness of distracters

a) Item difficulty:
Item difficulty deals with how difficult a test item is. It is indicated
by the percentage of pupils who got the item right. It is recommended
that an item be neither too easy nor too difficult.
Steps:
1. Arrange the papers in order from the highest to the lowest score (say
80 papers).
2. Select the 25% of papers with the highest scores (high achievers): 20 papers.
3. Select the 25% of papers with the lowest scores (low achievers): 20 papers.
4. The 50% of papers in the middle (40 papers) are not taken into account.
5. Calculate the correct responses of high achievers and low achievers
on each test item.
6. Apply the formula and calculate F (facility index):
F = (NR / NT) × 100
where NR = number of students who got the item right, and NT = total
number of students analysed.
7. F is acceptable when it ranges from 30% to 70%.
8. A value of more than 70% indicates that the item is very easy.
9. A value of less than 30% indicates that the item is very difficult.
Example:
Total papers: 80
High achievers: 20
Low achievers: 20

                       N    a    b    c    d    Omit
25% high achievers     20   5    10   0    5    0
25% low achievers      20   4    2    0    14   0

"b" is the correct answer, so NR = 10 + 2 = 12 and NT = 40.

F = (NR / NT) × 100 = (12 / 40) × 100 = 30%
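The steps above can be sketched in code. The function name is an illustrative assumption; the figures restate the worked example.

```python
def facility_index(right_high, right_low, n_high, n_low):
    """Facility (difficulty) index: percentage of analysed pupils
    (high + low achiever groups) who answered the item correctly."""
    nr = right_high + right_low   # correct responses in both groups
    nt = n_high + n_low           # total pupils analysed
    return nr / nt * 100

# Example from the text: 10 high achievers and 2 low achievers chose
# the correct option "b", out of 20 + 20 analysed papers.
f = facility_index(10, 2, 20, 20)
print(f)  # 30.0 -> acceptable, since F should range from 30% to 70%
```

A value above 70 would flag the item as very easy; below 30, as very difficult.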

b) Discrimination Power:
It refers to the degree to which a test item discriminates between
pupils with high and low achievement. One purpose of testing is to
discriminate between high and low achievers.
Steps:
1. Arrange the papers in order from the highest to lowest score (say
80 papers).
2. Select 25% papers with the highest scores (high achievers)-20
papers.
3. Select 25% papers with the lowest scores (low achievers)-20
papers.
4. 50% papers in the middle (40 papers) would not be taken in
account.
5. Calculate the correct responses of high achievers and low
achievers on each test item.
6. Apply the formula and calculate D:
D = (NH − NL) / n
Where
n = number of high (or low) achievers,
NH = number of high achievers who got the item right,
NL = number of low achievers who got the item right.
7. D is acceptable when its value ranges from 0.30 to 1.
8. A value of 1 indicates 100% discrimination.
9. A value of less than 0.30 indicates that the item is incapable of
discriminating.
Example:
Total papers = 80
High achievers = 20
Low achievers = 20

                       N    a    b    c    d    Omit
25% high achievers     20   5    10   0    5    0
25% low achievers      20   4    2    0    14   0

"b" is the correct answer, so NH = 10 and NL = 2.

D = (NH − NL) / n = (10 − 2) / 20 = 0.40

Since 0.40 is above 0.30, the item discriminates acceptably.
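The discrimination index from the steps above can likewise be checked in code; the function name is an illustrative assumption and the figures mirror the worked example.

```python
def discrimination_power(nh, nl, n):
    """Discrimination index D = (NH - NL) / n, where n is the size of
    the high (or low) achiever group."""
    return (nh - nl) / n

# Example from the text: 10 high achievers and 2 low achievers got the
# item right, with 20 pupils in each group.
d = discrimination_power(10, 2, 20)
print(d)  # 0.4 -> acceptable, since D should be at least 0.30
```

A D of 1 would mean every high achiever and no low achiever answered correctly; a negative D would mean the item discriminates in the wrong direction.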

c) Effectiveness of Distracters:
How well a distracter is operating can be determined by inspection, so
there is no need to calculate an index of effectiveness, although the
formula for discriminating power can be used for this purpose. In
general, a good distracter attracts more students from the lower group
than from the upper group. Thus, it should discriminate between the
upper and lower groups in a manner opposite to that of the correct
alternative. An examination of the following item-analysis data will
illustrate the ease with which the effectiveness of distracters can be
determined by inspection. Alternative A is the correct answer.
Alternatives     A    B    C    D    Omits
Upper 10         5    4    0    1    0
Lower 10         3    2    0    5    0
First, note that the item discriminates positively, because 5 in the
upper group and 3 in the lower group got the item right. The index of
discriminating power is fairly low, however (D = 0.20), and this may be
partly due to the ineffectiveness of some of the distracters.
Alternative B is a poor distracter because it attracts more students
from the upper group than from the lower group. This is most likely due
to some ambiguity in the statement of the item.
Alternative C is evidently not a plausible distracter because it attracted
no one.
Alternative D is functioning as intended, for it attracts a larger
proportion of students from the lower group.
Thus, the discriminating power of this item can probably be improved by
removing any ambiguity in the statement of the item and revising or
replacing alternatives B and C. The specific changes must, of course, be
based on an inspection of the test item itself; item-analysis data
merely indicate poorly functioning items, not the cause of the poor
functioning.
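The inspection rules just described can be expressed as a small routine. The function and its labels are illustrative assumptions; the dictionaries mirror the item-analysis table above.

```python
def check_distracters(upper, lower, key):
    """Flag distracters by inspection: a good distracter attracts more
    lower-group than upper-group pupils, and one chosen by nobody is
    implausible. `upper` and `lower` map option letters to counts;
    `key` is the correct alternative, which is skipped."""
    report = {}
    for option in upper:
        if option == key:
            continue  # the correct alternative is not a distracter
        u, l = upper[option], lower[option]
        if u + l == 0:
            report[option] = "implausible (attracted no one)"
        elif u >= l:
            report[option] = "poor (attracts the upper group)"
        else:
            report[option] = "functioning"
    return report

# Item-analysis data from the text (A is the correct answer).
upper = {"A": 5, "B": 4, "C": 0, "D": 1}
lower = {"A": 3, "B": 2, "C": 0, "D": 5}
print(check_distracters(upper, lower, "A"))
```

Run on the table's data, it marks B poor, C implausible, and D functioning, matching the inspection in the text.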
Preparing a Test Item File:
A file of effective items and tasks can be built and maintained easily
if items and tasks are recorded on cards. By indicating on the card both
the objective and the content area being measured, it is possible to
file the cards under both headings. Course content can supply the major
categories, with the objectives forming the subcategories.
This type of filing system makes it possible to select items or tasks in
accordance with any table of specifications in the particular area covered by
the file.
Building a file of effective items and tasks is a little like building a
bank account. The first several years are concerned mainly with making
deposits; withdrawals must be delayed until a sufficient reserve is
accumulated. Thus, items and tasks are recorded on cards as they are
constructed, and information from analyses of student responses is added
after each use. In time there are enough effective items and tasks to
assemble any test or assessment from the file without repeating them too
frequently. To prevent using a test item or assessment task too often,
record on the card the date it is used.
A file of effective items and tasks assumes increasing importance as we
shift from test items that measure knowledge of facts to items and tasks
that measure understanding, application, and thinking skills. Items and
tasks in these areas are difficult and time-consuming to construct. With
all of the other demands on our time, it is nearly impossible to
construct effective test items or assessment tasks in these areas each
time we prepare a new test or assessment. We have two alternatives:
either we neglect the measurement of learning outcomes in these areas
(which, unfortunately, has been the practice), or we slowly build a file
of effective items and tasks in these areas. If the quality of student
learning is our major concern, the choice is obvious.
Rating scales:
Rating scales have long been used to measure the personality of an individual.
According to Wrightstone, a rating scale is a selected list of words,
phrases, sentences, or paragraphs, following which an observer records a
value or rating based upon some objective scale of values.
In the words of A. S. Barr and his colleagues: "Rating is a term applied
to an expression of opinion or judgment regarding some situation,
object, or character. Opinions are usually expressed on a scale of
values. Rating techniques are devices by which such judgments may be
quantified."
A rating scale is nothing but a means of quantifying the essence of
facts evaluated through classification.
Types of Rating Scales:
Rating scales are of different types. The main ones are as follows:
a) Numerical rating scale:
One of the simplest types of rating scale is that in which the rater
checks or circles a number to indicate the degree to which a
characteristic is present. Typically, each of a series of numbers is
given a verbal description that remains constant from one characteristic
to another. In some cases, it is merely indicated that the largest
number is high, 1 is low, and the other numbers represent intermediate
values.
Example:
1. To what extent does the student participate in group discussion?
   1   2   3   4
2. To what extent are the comments related to the topic under discussion?
   1   2   3   4
b) Graphic rating scale:
The distinguishing feature of the graphic rating scale is that each
characteristic is followed by a horizontal line. The rating is made by
placing a check on the line. A set of categories identifies specific
positions along the line, but the rater is free to check between these
points.
Example:
Directions: Indicate the degree to which this student contributes to a
group problem-solving task by placing an X anywhere along the horizontal line
under each item.
1. To what extent does the student participate in group discussion?

Never Seldom Occasionally Frequently Always


2. To what extent are the comments related to the topic under discussion?

Never Seldom Occasionally Frequently Always


c) Descriptive graphic rating scale:
The descriptive graphic rating scale uses descriptive phrases to
identify the points on the graphic scale. The descriptions are thumbnail
sketches of how students behave at different steps along the scale. In
some scales, only the centre and end positions are defined; in others,
each point has a descriptive phrase. A space for comments is frequently
provided to enable the rater to clarify the rating.
Examples:
Directions: Make your rating on each of the following characteristics
by placing an X anywhere along the horizontal line under each item. In
the space for comments, include anything that helps clarify your rating.
1. To what extent does the student participate in group discussion?

   Never participates;      Participates as much      Participates more than
   quiet, passive           as other group members    any other group member
2. To what extent are the comments related to the topic under discussion?

   Comments ramble,         Comments usually          Comments always
   distract from topic      pertinent, occasionally   related to topic
                            wander from topic

ADVANTAGES OF RATING SCALES

a) This method acquaints the teacher with a student's work.
b) By compiling a student's progress report, it helps parents to know
about their child's abilities. Further, on its basis, a student's rank
in the class can be determined.
c) On the basis of the conclusions drawn by this method, a student
becomes aware of his shortcomings and is inspired to overcome them.
d) Conclusions based on this method help the administration in taking
appropriate decisions in matters of appointments, transfers,
promotions, etc.
e) This method also helps in selecting children for admission.
DISADVANTAGES OF RATING SCALES
There are certain drawbacks of rating scales. They are as follows:
a) The error of leniency:
If the evaluator is acquainted with the individuals being evaluated,
there can be leniency in judgment.
b) Halo effect:
According to this effect, once a person gets influenced by someone, the
impression persists.
c) Logical errors:
When an evaluator finds similarity between the performance of two
youngsters, he gives them the same grading. This type of evaluation
cannot be impartial.
d) Difference between the quality of judgment of two different evaluators:
It is natural that there would be differences between the evaluations
done by two different evaluators, because, being two different
individuals, their evaluation capabilities are bound to differ from
each other.

These ratings are subjective, and hence are not fully reliable.

FUNCTIONS OF MARKS AND PROGRESS REPORTS:
1. Uses of reports to pupils:
i. They facilitate the pupil's learning and development.
ii. There is a need for a periodic summary of progress.
iii. Reports also give pupils a basis for checking the adequacy of
their own self-estimates of learning progress.
2. Uses of reports to parents:
i. Reports to parents inform them of the school's objectives and the
progress their children are making toward those objectives.
ii. They give parents a basis for helping their children make sound
educational plans.
3. Uses of reports by teachers and counselors:
i. They contribute to guidance by providing more information about
pupils.
ii. Reports supplement and complement test scores and other evaluative
data in the cumulative records.
iii. With the help of reports we can better understand pupils' present
strengths and weaknesses and can better predict the areas in which they
are likely to be successful.
iv. Counselors use the reports, along with other information, to help
pupils develop better self-understanding and make more realistic
educational and vocational plans.
v. Reports are useful in counseling pupils with emotional problems.
4. Uses of reports by administrators:
The reports are used for determining promotion and graduation, awarding
honors, determining athletic eligibility, and reporting to other
schools and prospective employers.

PRINCIPLES OF MARKING AND REPORTING SYSTEM:

According to Chand, the basic principles of a good marking and reporting
system are as under:
1) A marking and reporting system should be realistic, reasonable, and
as true to human life patterns as possible.
2) A marking system should provide a sufficient range of grades so that
various degrees of attainment can be indicated reliably.
3) Marking should be based on objective measures or standards that can
be checked objectively or rated consistently with a high degree of
reliability.
4) Marking should utilize statistical procedures in converting scores
into grades.
5) A marking system must be used as a means to an end and not as an end
in itself.

SUGGESTIONS FOR IMPROVING MARKING AND REPORTING:


1) The marking and reporting system should be carefully planned and
guided by stated objectives, such as school-related motivation; student,
parent, and teacher understanding; and home-school cooperation.
2) Students, parents, teachers, and administrators, usually with the aid
of a technical expert, should develop the reporting system and forms.
3) Informal teacher-student reporting and direct communication should be
an ongoing process. Student-teacher conferences should not be a "last
resort" and should be encouraged as a normal part of the reporting
system.
4) Parent-teacher conferences can be very effective. Released time for
teachers is often necessary for this activity to be practicable. It is
very difficult to make parent conferences practical for every student at
the secondary school level, but the conferences are especially desirable
for students whose academic performance begins to decline. Report forms
for parent conferences, with copies for each party, are desirable.
5) The reporting system should include feedback on school behavior,
attitudes, work habits, and attendance, as well as describe performance
in school subjects.
6) Marks and parent conferences are necessary to describe performance
fully, at least at the upper elementary and secondary levels. Reporting
systems in the primary grades can be less standardized, with greater
reliance on parent conferences. Some parents need time to adjust to the
reality of their child's ability.
7) Failing marks are rarely justified or needed in the primary or even
upper elementary grades; a failing mark should rarely be given in
elementary or middle school grades.

TYPES OF MARKING AND REPORTING SYSTEMS


1. Traditional Marking Systems
The traditional method of reporting pupil progress, which is still in wide use
today, is to assign a single letter grade (e.g., A, B, C, D, F) or a single number
(e.g., 5, 4, 3, 2, 1) to represent a pupil's achievement in each subject.
2. Pass-Fail System
It is a two-category marking system (e.g., satisfactory-unsatisfactory, pass-fail)
3. Checklists of Objectives
For more informative progress reports, some schools have replaced or
supplemented the traditional marking system with a list of objectives to be
checked or rated. These reports, which are most common at the
elementary school level, typically include ratings of progress toward the
major objectives in each subject matter area. The following statements for
reading and arithmetic are examples:
(i)Reading:
1. Reads with understanding.
2. Works out meaning and use of new words.
3. Reads well to others.
4. Reads independently for pleasure.
(ii)Arithmetic:
1. Uses fundamental processes.
2. Solves problems involving reasoning.
3. Is accurate in work.
4. Letters to Parents
For greater flexibility in reporting pupil progress to parents, some schools have
turned to using informal letters, enabling them to report on the strengths,
weaknesses, and learning needs of each pupil and to suggest plans for
improvement. In addition, the report can include as much detail as is needed
to pinpoint the pupil's progress in all areas of development.
5. Parent-Teacher Conferences
To overcome the limited information supplied by the traditional report card
and to establish better cooperation between teachers and parents, some
schools regularly schedule parent-teacher conferences. This reporting method
has been most widely used at the elementary level, with its greatest use
in the primary grades.
6. Multiple Marking and Reporting Systems
The typical multiple reporting system retains the use of traditional marking
(letter grades or numbers) and supplements the marks with checklists of
objectives. In some cases, two marks are assigned to each subject: one for
achievement and the other for effort, improvement, or growth.
GUIDELINES FOR DEVELOPING A MULTIPLE MARKING AND REPORTING
SYSTEM
1. The marking and reporting system should be developed cooperatively by
parents, pupils, and school personnel.
2. The marking and reporting system should be based on a clear statement
of educational objectives.
3. The development of the marking and reporting system should be guided
by the functions to be served.
4. The marking and reporting system should be detailed enough to be
diagnostic and yet compact enough to be practical.
5. Interpretation of letter grades (A, B, C) is necessary.
6. Adopt the same policy for assigning grades.
7. The marking and reporting system should provide parent-teacher
conferences, as needed.
8. The marking and reporting system should be based on adequate
evaluation.
Conducting Parent-Teacher Conferences
Regardless of the type of marking and reporting system used in the
school, parent-teacher conferences are an important supplement to
written reports of pupil progress. The face-to-face conference makes it
possible:
1) To share information with parents.
2) To overcome any misunderstanding between home and school.
3) To plan cooperatively a programme of maximum benefit to the pupil.
At the elementary school level, parent-teacher cooperation is most
important, and conferences with parents should be regularly scheduled.
At the secondary level, the parent-teacher conference is typically used
only when some special problem situation arises.
Conferences with parents are most likely to be productive when they are
preceded by careful planning and the teacher has skill in conducting such
conferences. Many schools offer in-service training for teachers to help
them develop effective conference techniques. Typically such training
includes knowledge of how to conduct a parent-teacher conference and
role playing to practice the use of conference skills. The following
guidelines list the types of things that contribute to the effective use
of parent-teacher conferences for reporting pupil progress.
Preparing for the Conference
1) Have a clear grasp of the purpose of the conference.
2) Review the pupil's school records for general background information.
3) Assemble a folder of specific information concerning the pupil's present
learning progress along with other curricular and co-curricular activities and
work habits.
4) Organize the information to be presented to parents in a systematic
manner.
5) Make a tentative list of questions to ask the parents.
6) Anticipate parents' questions.
7) Provide a comfortable, informal setting, free from interruption.
8) Prepare a written plan of activities for the conference.
9) Encourage two-way communications.
10) Accept some of the responsibility for problems.
11) Conclude conference with an overall summary.
12) Keep a written record of the conference, listing problems and
suggestions, with a copy for the parents.
Establishing and Maintaining Rapport during the conference
1) Create a friendly, informal atmosphere.
2) Be professional and maintain a positive attitude.
3) Use language that is understandable to parents.
4) Be willing to listen to parents.
5) Be honest and sincere with parents and do not betray confidences.
Sharing information with Parents during the Conference
1) Begin by describing the pupil's strong points.
2) Describe the area needing improvement in a positive and tactful manner.
3) Encourage parents to participate in the conference; be cautious about
giving advice.
Planning a Course of Action with Parents during Conference
1) Begin the concluding phase of the conference with a brief overall
summary.
2) Have parents participate in planning a course of action.
3) Review your conference notes with parents.
4) End the conference on a positive note.
Bailard and Strang suggest a list of important don'ts:
1) Don't blame the parents.
2) Don't put the parent on the defensive about anything.
3) Don't talk about other children or compare this child with other
children. It is unprofessional.
4) Don't talk about other teachers in an uncomplimentary manner.
5) Don't belittle the administration or make derogatory remarks about
the school district.
6) Don't argue with the parent.
7) Don't try to outtalk a parent.
8) Don't interrupt the parent to make your own point.
9) Don't go too far with a parent who is not ready and able to
understand your purpose.
10) Don't ask parents questions which might be embarrassing to them.
Only information pertinent to the child's welfare is important;
questions asked out of mere curiosity are unforgivable. After the
conference, don't repeat any confidential information which the parent
may volunteer. It is most unprofessional and can be very damaging to the
parent and the child.
GRADING
A grade is an alphabetical or numerical symbol, or mark, that indicates
the degree to which intended outcomes have been achieved.
• The major purpose of grades is to communicate how well a student is
doing in the various subject areas.
• Another purpose often attributed to grades is that they serve as a
motivator for student performance.
• Grades also serve as an indication of achievement to be expected in
the future. Past performance is the best single predictor of future
performance.
• The major objection to grades is that there is considerable
variability in the meaning of a given grade; further, there are so many
different methods of grading and such diversity of symbols that it is
difficult to interpret exactly what a given set of grades means.
PURPOSES OF GRADING:
Purposes as given by Karmel and Karmel (1987) are as under:
1) They provide data for parents on their children's progress.
2) They certify promotional status and graduation.
3) They serve as an incentive to do school lessons.
4) They help in educational and vocational guidance by presenting a
realistic basis for future choices.
5) They serve as a reference point for personal development.
6) They provide a basis for awarding honors.
7) They enable the school to ascertain the amount of extracurricular
activities, if any, in which the students should participate.
8) They may be used as a source for communication to prospective
employers.
9) They provide information for curriculum research.
10) They provide data to a school that the student may later attend
through transfer or graduation.
TYPES OF GRADING
1. Percent grading
Percent grading involves averaging scores and converting them to a
percent.

2. Norm-Referenced Grading:


Norm-referenced grading involves rank ordering students and expressing a
given student's achievement in relation to the achievement of the class;
in essence, the rest of the class serves as the norm group.

3. Normal Curve Grading:


(i) In its extreme form, norm-referenced grading is based on the
assumption of a normal distribution, and a fixed percentage of students
receive each grade.
(ii) Some students must receive failing grades regardless of their
actual level of achievement.
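Normal curve grading can be illustrated with fixed grade quotas. The percentages below are illustrative assumptions, not a standard scale, and the function name is hypothetical.

```python
def curve_grades(n_students,
                 quotas=(("A", .10), ("B", .20), ("C", .40),
                         ("D", .20), ("F", .10))):
    """Fixed-percentage 'grading on the curve': assign grades to a
    class of n_students from best rank to worst, so some students
    fail regardless of their actual achievement. Quotas are
    illustrative assumptions."""
    grades = []
    for grade, share in quotas:
        grades += [grade] * round(share * n_students)
    # trim or pad (rounding can leave the list slightly off-length)
    return grades[:n_students] + ["F"] * (n_students - len(grades))

print(curve_grades(10))  # ['A', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D', 'F']
```

Note that the lowest-ranked pupil receives an F by quota alone, which is exactly the objection raised in point (ii).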

4. Pass-Fail Grading:

(i) A pass-fail system usually results in a reduction of achievement
level; quite naturally, students are less motivated to do well in such
courses and devote most of their energies to those courses in which they
will receive a letter grade.
(ii) Pass-fail grading does not fulfill any of the purposes of grades:
communication, motivation, and prediction.
5. Criterion-Referenced Grading:
Criterion-referenced grading involves expressing a student's achievement
in relation to pre-specified criteria rather than the achievement of
others in the class.
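Criterion-referenced grading can be sketched as a mapping from a pupil's percent score to pre-specified cut-offs. The cut-off values and function name below are illustrative assumptions, not a prescribed scale.

```python
def criterion_grade(percent, criteria=None):
    """Assign a letter grade from pre-specified cut-offs, independent
    of how the rest of the class performed. The default cut-offs are
    illustrative assumptions."""
    if criteria is None:
        criteria = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
    for cutoff, grade in criteria:
        if percent >= cutoff:
            return grade
    return "F"  # below every cut-off

print(criterion_grade(84))  # B
print(criterion_grade(55))  # F
```

Unlike the norm-referenced schemes above, every pupil who meets a criterion earns the corresponding grade, so in principle the whole class could earn an A.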
