
ED 108 – Assessment of Learning 1

Module 4. Item Analysis and Validation

Objectives:
 Explain the meaning of item analysis, item validity, reliability, item difficulty, and discrimination index.
 Determine the validity and reliability of given test items.

Introduction

The teacher normally prepares a draft of the test. Such a draft is subjected to item
analysis and validation in order to ensure that the final version of the test will be useful and
functional. First, the teacher tries out the draft test on a group of students with characteristics
similar to those of the intended test takers (try-out phase). From the try-out group, each item is
analyzed in terms of its ability to discriminate between those who know and those who do not
know, and in terms of its level of difficulty (item analysis phase). The item analysis provides
information that allows the teacher to decide whether to revise or replace an item (item revision
phase). Finally, the final draft of the test is subjected to validation if the intent is to use the test
as a standard test for the particular unit or grading period.

Item Analysis

There are two important characteristics of an item that will be of interest to the teacher. These are:
a) Item difficulty, and
b) Discrimination index

We shall learn how to measure these characteristics and apply our knowledge in making a decision about the item in question.

What is Item Difficulty?

Item difficulty, or the difficulty of an item, is defined as the number of students who are able to answer the item correctly divided by the total number of students. Thus:

Item Difficulty = Number of students with correct answer / Total number of students

The item difficulty is usually expressed as a percentage.



Example: What is the item difficulty index of an item if 25 students are unable to answer it correctly while 75 answered it correctly?

Here, the total number of students is 100; hence, the item difficulty index is 75/100 or 75%.
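This computation is simple enough to script. Below is a minimal Python sketch (the function name is ours, for illustration only):

```python
def item_difficulty(num_correct, num_students):
    """Proportion of students who answered the item correctly."""
    return num_correct / num_students

# The example above: 75 of 100 students answered correctly.
print(item_difficulty(75, 100))  # 0.75, i.e. 75%
```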

One problem with this type of difficulty index is that it may not actually indicate that the
item is difficult (or easy). A student who does not know the subject matter will naturally be
unable to answer the item correctly even if the question is easy. How do we decide on the basis
of this index whether the item is too difficult or too easy?

Range of Difficulty Index   Interpretation     Action
0 – 0.25                    Difficult          Revise or discard
0.26 – 0.75                 Right difficulty   Retain
0.76 and above              Easy               Revise or discard
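The table translates directly into a decision rule. Here is a minimal Python sketch of it (boundary handling follows the table; the function name is ours):

```python
def interpret_difficulty(p):
    """Classify a difficulty index using the table above."""
    if p <= 0.25:
        return "Difficult - revise or discard"
    elif p <= 0.75:
        return "Right difficulty - retain"
    else:
        return "Easy - revise or discard"

print(interpret_difficulty(0.75))  # Right difficulty - retain
```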

What is Discrimination Index?


 Discrimination Index - The discrimination index is a basic measure of the
validity of an item. It is a measure of an item's ability to discriminate between those
who scored high on the total test and those who scored low. Though there are several
steps in its calculation, once computed, this index can be interpreted as an indication
of the extent to which overall knowledge of the content area or mastery of the skills is
related to the response on an item. Perhaps the most crucial validity standard for a test
item is that whether a student got an item correct or not is due to their level of
knowledge or ability and not due to something else such as chance or test bias.

An easy way to derive such a measure is to measure how difficult an item is with
respect to those in the upper 25% of the class and how difficult it is with respect to those in the
lower 25% of the class. If the upper 25% of the class found the item easy yet the lower 25% of
the class found it difficult, then the item can discriminate properly between these two groups.

Index of discrimination = DU – DL, where DU and DL are the difficulty indices of the upper 25% and lower 25% of the class, respectively.

Example: Obtain the index of discrimination of an item if the upper 25% of the class had a difficulty index of 0.60 (i.e., 60% of the upper 25% got the correct answer) while the lower 25% of the class had a difficulty index of 0.20.

Here, DU = 0.60 while DL = 0.20; thus, index of discrimination = 0.60 – 0.20 = 0.40.
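A minimal Python sketch of the same computation (names are ours):

```python
def group_difficulty(num_correct, group_size):
    """Difficulty index within one group (e.g., the upper or lower 25%)."""
    return num_correct / group_size

def discrimination_index(du, dl):
    """Difficulty in the upper group minus difficulty in the lower group."""
    return du - dl

# The example above: DU = 0.60, DL = 0.20.
print(discrimination_index(0.60, 0.20))  # 0.40
```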

Theoretically, the index of discrimination can range from -1.0 (when DU = 0 and DL =
1) to 1.0 (when DU = 1 and DL = 0). When the index of discrimination is equal to -1, this
means that all of the lower 25% of the students got the correct answer while all of the upper 25%
got the wrong answer. In a sense, such an index discriminates correctly between the two groups,
but the item itself is highly questionable. Why should the bright ones get the wrong answer and
the poor ones get the right answer? On the other hand, if the index of discrimination is 1.0, then
this means that all of the lower 25% failed to get the correct answer while all of the upper 25%
got the correct answer. This is a perfectly discriminating item and is the ideal item that should be
included in the test.

Rule of Thumb for the Index of Discrimination

Index Range      Interpretation                              Action
-1.0 – -0.50     Can discriminate but item is questionable   Discard
-0.49 – 0.45     Non-discriminating                          Revise
0.46 – 1.0       Discriminating item                         Include
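As with difficulty, this rule of thumb can be expressed as a small decision function. A sketch in Python, assuming the cut-offs in the table above (the function name is ours):

```python
def interpret_discrimination(d):
    """Classify an index of discrimination using the rule of thumb above."""
    if d <= -0.50:
        return "Can discriminate but item is questionable - discard"
    elif d <= 0.45:
        return "Non-discriminating - revise"
    else:
        return "Discriminating item - include"

print(interpret_discrimination(0.50))  # Discriminating item - include
```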

Consider a multiple-choice type of test for which the following data were obtained:

Item 1       Options
             A      B*     C      D
Total        0      40     20     20
Upper 25%    0      15     5      0
Lower 25%    0      5      10     5

The correct response is B. Let us compute the difficulty index and index of
discrimination:

Difficulty index = Number of students with correct answer / Total number of students
                 = 40/100 = 0.40 = 40%, within the range of a "good item"

The discrimination index can be similarly computed:

DU = No. of students in upper 25% with correct response / No. of students in the upper 25%
   = 15/20 = 0.75 = 75%

DL = No. of students in lower 25% with correct response / No. of students in the lower 25%
   = 5/20 = 0.25 = 25%

Discrimination Index = DU – DL
= 0.75 – 0.25
= 0.50 or 50%.
Thus, the item also has a “good discriminating power”.

It is also instructive to note that distracter A is not an effective distracter since it
was never selected by the students. Distracters C and D appear to have good appeal as distracters.
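Pulling the worked example together, the following Python sketch reproduces the figures above from the option counts (variable names are ours; the total of 100 examinees follows the module's computation):

```python
# Option counts from the table above; B is the keyed (correct) answer.
upper = {"A": 0, "B": 15, "C": 5, "D": 0}   # upper 25% group
lower = {"A": 0, "B": 5, "C": 10, "D": 5}   # lower 25% group
total_correct = 40                          # examinees choosing B overall
n_students = 100                            # total examinees per the module

difficulty = total_correct / n_students     # 40/100 = 0.40
du = upper["B"] / sum(upper.values())       # 15/20 = 0.75
dl = lower["B"] / sum(lower.values())       # 5/20  = 0.25
discrimination = du - dl                    # 0.50

# A distracter never chosen by either group (here, A) is ineffective.
ineffective = [opt for opt in "ACD" if upper[opt] + lower[opt] == 0]
print(difficulty, discrimination, ineffective)  # 0.4 0.5 ['A']
```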

Basic Item Analysis Statistics


The Michigan State University Measurement and Evaluation Department reports a
number of item statistics which aid in evaluating the effectiveness of an item. The first of these is
the index of difficulty, which MSU (http://www.msu.edu/dept/) defines as the proportion of the
total group who got the item wrong. "Thus, a high index indicates a difficult item and a low
index indicates an easy item. Some item analysts prefer an index of difficulty which is the
proportion of the total group who got an item right. This index may be obtained by marking the
PROPORTION RIGHT option on the item analysis header sheet. Whichever index is selected is
shown as the INDEX OF DIFFICULTY on the item analysis print-out. For classroom
achievement tests, most test constructors desire items with indices of difficulty no lower than 20
nor higher than 80, with an average index of difficulty from 30 or 40 to a maximum of 60.

The INDEX OF DISCRIMINATION is the difference between the proportion of the
upper group who got an item right and the proportion of the lower group who got the item right.
This index is dependent upon the difficulty of an item. It may reach a maximum value of 100 for
an item with an index of difficulty of 50, that is, when 100% of the upper group and none of the
lower group answer the item correctly. For items of less than or greater than 50 difficulty, the
index of discrimination has a maximum value of less than 100. The Interpreting the Index of
Discrimination document contains a more detailed discussion of the index of discrimination."
(http://www.msu.edu/dept/)

More Sophisticated Discrimination Index

Item discrimination refers to the ability of an item to differentiate among students on the
basis of how well they know the material being tested. Various hand-calculation procedures have
traditionally been used to compare item responses to total test scores using high and low scoring
groups of students. Computerized analysis provides a more accurate assessment of the
discriminating power of items because it takes into account the responses of all students rather
than just the high and low scoring groups.

The item discrimination index provided by ScorePak® is a Pearson product-moment
correlation between student responses to a particular item and total scores on all other items on
the test. This index is the equivalent of a point-biserial coefficient in this application. It provides
an estimate of the degree to which an individual item is measuring the same thing as the rest of
the items.
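To make the idea concrete, here is a minimal Python sketch of such a corrected item-total correlation; the data and function name are hypothetical illustrations, not ScorePak's actual code:

```python
import statistics

def corrected_item_total(item_scores, total_scores):
    """Pearson correlation between an item (scored 0/1) and the total
    score on all OTHER items (a point-biserial coefficient here)."""
    rest = [t - i for i, t in zip(item_scores, total_scores)]
    mi, mr = statistics.mean(item_scores), statistics.mean(rest)
    cov = sum((i - mi) * (r - mr) for i, r in zip(item_scores, rest))
    var_i = sum((i - mi) ** 2 for i in item_scores)
    var_r = sum((r - mr) ** 2 for r in rest)
    return cov / (var_i ** 0.5 * var_r ** 0.5)

# Hypothetical data: five students' scores on one item and on the whole test.
item = [1, 1, 0, 1, 0]
total = [9, 8, 4, 7, 3]
print(round(corrected_item_total(item, total), 2))  # 0.92
```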

Validation

After performing the item analysis and revising the items which need revision, the next
step is to validate the instrument. The purpose of validation is to determine the characteristics of
the whole test itself, namely, the validity and reliability of the test. Validation is the process of
collecting and analyzing evidence to support the meaningfulness and usefulness of the test.

What is Validity?

Validity is the extent to which a test measures what it purports to measure, or refers to
the appropriateness, correctness, meaningfulness, and usefulness of the specific decisions a
teacher makes based on the test results. These two definitions of validity differ in the sense that
the first refers to the test itself while the second refers to the decisions made by the teacher based
on the test. A test is valid when it is aligned to the learning outcome.

A teacher who conducts test validation might want to gather different kinds of evidence.
There are three main types of evidence that may be collected: content-related evidence of
validity, criterion-related evidence of validity, and construct-related evidence of validity.

Content-Related Evidence of Validity – Refers to the content and the format of the instrument. How appropriate is the content? How comprehensive? Does it logically get at the intended variable? How adequately does the sample of items or questions represent the content to be assessed?

Criterion-Related Evidence of Validity – Refers to the relationship between scores obtained using the instrument and scores obtained using one or more other tests (often called the criterion). How strong is this relationship? How well do such scores estimate present performance or predict future performance of a certain type?

Construct-Related Evidence of Validity – Refers to the nature of the psychological construct or characteristic being measured by the test. How well does a measure of the construct explain differences in the behavior of individuals or their performance on certain tasks?

The usual procedure for determining content validity may be described as follows: The
teacher writes out the objectives of the test based on the table of specifications and then gives
these, together with the test, to at least two (2) experts along with a description of the intended
test takers. The experts look at the objectives, read over the items in the test, and place a check
mark in front of each question or item that they feel does not measure one or more objectives.

In order to obtain evidence of criterion-related validity, the teacher usually compares
scores on the test in question with scores on some other independent criterion test which
presumably already has high validity. For example, if a test is designed to measure the
mathematics ability of students and it correlates highly with a standardized mathematics
achievement test (external criterion), then we say we have high criterion-related evidence of
validity. In particular, this type of criterion-related validity is called concurrent validity. Another
type of criterion-related validity is called predictive validity, wherein the test scores in the
instrument are correlated with scores on a later performance (criterion measure) of the students.
For example, the mathematics ability test constructed by the teacher may be correlated with the
students' later performance in a Division-wide mathematics achievement test.
Apart from the use of the correlation coefficient in measuring criterion-related validity,
Gronlund suggested using the so-called expectancy table. This table is easy to construct and
consists of test (predictor) categories listed on the left-hand side and criterion categories listed
horizontally along the top of the chart. For example, suppose that a mathematics achievement
test is constructed and the scores are categorized as high, average, and low. The criterion
measure is the final average grade of the students in high school: Very Good, Good, and Needs
Improvement. The two-way table lists the number of students falling under each of the possible
pairs (test, grade), as shown below:
Test Score   Grade Point Average
             Very Good   Good   Needs Improvement
High         20          10     5
Average      10          25     5
Low          1           10     14

The expectancy table shows that there were 20 students who got high test scores and
were subsequently rated Very Good in terms of their final grades; 25 students got average scores
and were rated Good in their finals; and finally, 14 students obtained low test scores and were
later graded as Needing Improvement. The evidence for this particular test tends to indicate that
students getting high scores on it would later be rated Very Good; students getting average
scores on it would later be rated Good; and students getting low scores on the test would later be
graded as Needing Improvement.
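A minimal Python sketch of how such a two-way table can be tallied; the student data here are hypothetical, not the module's:

```python
from collections import Counter

# Hypothetical (test category, grade category) pair for each student.
pairs = [("High", "Very Good"), ("High", "Good"), ("Average", "Good"),
         ("Average", "Very Good"), ("Low", "Needs Improvement"), ("Low", "Good")]

table = Counter(pairs)  # count of students in each (row, column) cell
for row in ("High", "Average", "Low"):
    cells = [table[(row, col)] for col in ("Very Good", "Good", "Needs Improvement")]
    print(f"{row:<8}", cells)
```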

Reliability
Reliability refers to the consistency of the scores obtained – how consistent they are for
each individual from one administration of an instrument to another and from one set of items to
another. For internal consistency, for instance, we could use the split-half method or the
Kuder-Richardson formulae (KR-20 or KR-21).
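To illustrate the internal-consistency idea, here is a minimal sketch of the KR-20 computation in Python (the data are hypothetical and the function name is ours):

```python
def kr20(scores):
    """Kuder-Richardson formula 20 for a matrix of 0/1 item scores:
    scores[s][i] is student s's score on item i."""
    k = len(scores[0])                                    # number of items
    n = len(scores)                                       # number of students
    totals = [sum(row) for row in scores]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n    # population variance
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in scores) / n             # proportion correct on item i
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

# Hypothetical 4-item test taken by 5 students (1 = correct, 0 = wrong).
data = [[1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 0, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0]]
print(round(kr20(data), 2))  # 0.8
```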

Reliability and validity are related concepts. If an instrument is unreliable, it cannot yield
valid outcomes. As reliability improves, validity may improve (or it may not). However, if an
instrument is shown scientifically to be valid, then it is almost certain that it is also reliable.

The following table is a standard followed almost universally in educational tests and
measurement.

Reliability       Interpretation
0.90 and above    Excellent reliability; at the level of the best standardized tests
0.80 – 0.90       Very good for a classroom test
0.70 – 0.80       Good for a classroom test; in the range of most classroom tests. There are probably a few items which could be improved.
0.60 – 0.70       Somewhat low. This test needs to be supplemented by other measures (e.g., more tests) to determine grades. There are probably some items which could be improved.
0.50 – 0.60       Suggests need for revision of the test, unless it is quite short (ten or fewer items). The test definitely needs to be supplemented by other measures (e.g., more tests) for grading.
0.50 or below     Questionable reliability. This test should not contribute heavily to the course grade, and it needs revision.

Additional Readings:
 https://assessment.tki.org.nz/Using-evidence-for-learning/Working-with-data/Concepts/Reliability-and-validity
 https://fcit.usf.edu/assessment/basic/basicc.html
 https://www.washington.edu/assessment/scanning-scoring/scoring/reports/item-analysis/

References:
1. http://www.specialconnections.ku.edu/?q=assessment/quality_test_construction/teacher_tools/item_analysis
2. Navarro, R.L., Santos, R.G., & Corpus, B.B. (2017). Assessment of Learning 1. LORIMAR Publishing Inc.

PERFORMANCE EVALUATION SHEET


(ED 108 – Assessment of Learning 1)

Name: _____________________________________ Course & Year: _______ Score: ____


Date of Submission: ______________________ Section: ______________

A. Find the index of difficulty in each of the following situations:


1. N = 60, number of wrong answers: upper 25% = 2; lower 25% = 6
Answer: ____________
2. N = 80, number of wrong answers: upper 25% = 2; lower 25% = 9
Answer: ____________
3. N = 30, number of wrong answers: upper 25% = 1; lower 25% = 6
Answer: ____________
4. N = 50, number of wrong answers: upper 25% = 3; lower 25% = 8
Answer: ____________
5. N = 70, number of wrong answers: upper 25% = 4; lower 25% = 10
Answer: ____________

B. Which of the items in Exercise A are found to be the most difficult?


______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
C. Compute for the discrimination index for each of the items in Exercise A.
1. Answer:_____________
2. Answer:_____________
3. Answer:_____________
4. Answer:_____________
5. Answer:_____________
D. What is the relationship between validity and reliability? Can a test be reliable and yet not valid?
Illustrate.
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________

Good luck! God Bless!


AJA!!!!
