
Chapter 6
Item Analysis and Validation

Presenters:
Glenn Mark Reyes
Maria Paloma Bayran
What is Item Analysis?
- A statistical technique used for selecting or rejecting the items of a test on the basis of their difficulty value and discriminating power.

Purpose of Item Analysis
• Evaluates the quality of each item.
• Rationale: the quality of the items determines the quality of the test (i.e., its reliability and validity).
• May suggest ways of improving the measurement of a test.
• Can help explain why certain tests predict some criteria but not others.
A draft test is subjected to item analysis and validation.

PHASES
• Try-out phase
• Item analysis phase (level of difficulty)
• Item revision phase
Item Analysis
Two Characteristics:

(a) Item difficulty
- Defined as the number of students who are able to answer the item correctly divided by the total number of students. Thus:

• Item difficulty = (number of students with correct answer) / (total number of students)

• The item difficulty is usually expressed as a percentage.


Example:
What is the item difficulty index of an item if 25 students are unable to answer it correctly while 75 answered it correctly?

Here the total number of students is 100; hence, the item difficulty index is 75/100 or 75%.
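The computation in the example can be sketched in Python; the function name below is illustrative, not part of the chapter:

```python
def item_difficulty(num_correct, num_students):
    """Proportion of students who answered the item correctly."""
    return num_correct / num_students

# Example from the text: 75 of 100 students answered correctly.
difficulty = item_difficulty(75, 100)
print(f"{difficulty:.0%}")  # prints 75%
```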
Item Analysis
Two Characteristics:

(b) Index of Discrimination
• Tells whether an item can discriminate between those who know and those who do not know the answer.
• Computed from the difficulty index in:
- the upper 25% of the class
- the lower 25% of the class
Index of discrimination = DU - DL

Example: Obtain the index of discrimination of an item if the upper 25% of the class had a difficulty index of 0.60 (i.e., 60% of the upper 25% got the correct answer) while the lower 25% of the class had a difficulty index of 0.20.

DU = 0.60 while DL = 0.20, thus index of discrimination = 0.60 - 0.20 = 0.40.
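The worked example above can be expressed in Python; the function name is illustrative:

```python
def discrimination_index(du, dl):
    """Index of discrimination: difficulty index of the upper 25%
    of the class minus that of the lower 25%."""
    return du - dl

# Example from the text: DU = 0.60, DL = 0.20
index = discrimination_index(0.60, 0.20)
print(round(index, 2))  # prints 0.4
```

A positive index means the item favors the upper group, which is what a good item should do.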
Index of Difficulty
• The proportion of the total group who got the item wrong.
Index of Item Discriminating Power
• The difference between the proportion of the upper group and the proportion of the lower group who answered the item correctly (DU - DL).

Validation
Three Main Types of Evidence that may be collected:
• Content-related evidence of validity
- refers to the content and format of the instrument.

• Criterion-related evidence of validity
- refers to the relationship between scores obtained using the instrument and scores obtained using one or more other tests (often called the criterion).

• Construct-related evidence of validity
- refers to the nature of the psychological construct or characteristic being measured by the test.
Thank you!
