After a test is administered and scored, it is usually put aside until it is used
again. Much of the effort spent in planning and constructing the test is usually
not evaluated. A better procedure is to analyze the test items and find out
whether each item should be rejected, revised, or retained for future use. This
process is called item analysis. Item analysis is a process by which each item in
the test is carefully examined (e.g., for its difficulty, discriminating ability, and
the effectiveness of each alternative). Recall from the early part of this chapter
that in building a test item bank, certain item statistics should be sought so that
teachers can build a pool of high-quality test items in the subjects or courses
they handle.
In a standardized test, item analysis is usually conducted right after the
item try-out. Item try-out refers to the field testing of the test items. In an item
try-out, the goal is to let students answer the test items and then examine the
students' responses to each item. This is not to be confused with the actual and
formal administration of the test.
There are different ways of conducting item analysis depending on
whether the test is criterion-referenced or norm-referenced, since the item
analysis procedures for these two types of test proceed from different
perspectives. Norm-referenced test item analysis is a process of analyzing each
item based on its ability to distinguish between high- and low-performing
students, which gives prime emphasis to the difficulty of the test items. This is
not the case in a criterion-referenced test: if the learning outcome being
measured by an item is hard, the test item would naturally be hard; likewise, if
the learning outcome is easy, the test item would also be easy. The following are
the steps in conducting norm-referenced test item analysis.
Example. Compute the difficulty index of the first five test items.

Solutions:

Item No. 1: Difficulty = R1/T = 19/22 = 0.86
Item No. 2: Difficulty = R2/T = 12/22 = 0.55
Item No. 3: Difficulty = R3/T = 13/22 = 0.59
Item No. 4: Difficulty = R4/T = 20/22 = 0.91
Item No. 5: Difficulty = R5/T = 5/22 = 0.23
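The difficulty computations above can be sketched in a short Python snippet. (The function and variable names are illustrative only; the data are the correct-response counts from the worked example, with 22 students in total.)

```python
# Difficulty index: the proportion of students who answered an item correctly.

def difficulty_index(correct: int, total: int) -> float:
    """Difficulty = R / T, where R is the number of correct responses
    and T is the total number of students."""
    return correct / total

# Correct-response counts for the five sample items (T = 22 students).
correct_counts = [19, 12, 13, 20, 5]
total_students = 22

for item, r in enumerate(correct_counts, start=1):
    print(f"Item {item}: difficulty = {difficulty_index(r, total_students):.2f}")
```

Running the loop reproduces the five values computed by hand above (0.86, 0.55, 0.59, 0.91, and 0.23).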
Discrimination Index = (RU - RL) / (½T)

Where:
RU = number of students in the upper group who answered the item correctly
RL = number of students in the lower group who answered the item correctly
½T = one-half of the total number of students included in the analysis
     (the number of students in one of the two groups)
Example. Compute the discrimination index of the given first five test items.
Table 3. Discrimination Index of the Sample Test Items

ITEM   UPPER   LOWER   DIFFICULTY   DISCRIMINATION   VERBAL INTERPRETATION
NO.    GROUP   GROUP   INDEX        INDEX            (DISCRIMINATION INDEX)
1      10      9       0.86         0.09             Poor
2      8       4       0.55         0.36             Good
3      8       5       0.59         0.27             Moderate
4      10      10      0.91         0                Poor
5      5       0       0.23         0.45             High
Solutions:

Item No. 1: Discrimination Index = (RU1 - RL1) / (½T) = (10 - 9) / 11 = 0.09
Item No. 2: Discrimination Index = (RU2 - RL2) / (½T) = (8 - 4) / 11 = 0.36
Item No. 3: Discrimination Index = (RU3 - RL3) / (½T) = (8 - 5) / 11 = 0.27
Item No. 4: Discrimination Index = (RU4 - RL4) / (½T) = (10 - 10) / 11 = 0
Item No. 5: Discrimination Index = (RU5 - RL5) / (½T) = (5 - 0) / 11 = 0.45
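The discrimination computations can likewise be sketched in Python. The cut-off points used in interpret() below are inferred from the verbal interpretations in Table 3; they are an assumption for illustration, not a scale stated in this section.

```python
# Discrimination index: (RU - RL) / (half of T), following the formula above.

def discrimination_index(upper_correct: int, lower_correct: int,
                         half_total: int) -> float:
    """D = (RU - RL) / (1/2 T), where RU and RL are the correct-response
    counts in the upper and lower groups."""
    return (upper_correct - lower_correct) / half_total

def interpret(d: float) -> str:
    """Verbal interpretation; cut-offs inferred from Table 3 (assumed)."""
    if d >= 0.40:
        return "High"
    if d >= 0.30:
        return "Good"
    if d >= 0.20:
        return "Moderate"
    return "Poor"

# Correct-response counts per group for the five sample items (Table 3),
# with 11 students in each group (half of T = 22).
upper = [10, 8, 8, 10, 5]
lower = [9, 4, 5, 10, 0]
half_t = 11

for item, (ru, rl) in enumerate(zip(upper, lower), start=1):
    d = discrimination_index(ru, rl, half_t)
    print(f"Item {item}: D = {d:.2f} ({interpret(d)})")
```

The loop reproduces the five indices and interpretations in Table 3 (0.09 Poor, 0.36 Good, 0.27 Moderate, 0 Poor, 0.45 High).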
Example. A table for analyzing the effectiveness of the alternatives in the first five
items of the sample test is shown in Table 4. (The columns that correspond to the
correct answers are highlighted.)