
Item Analysis and

Validation
Explain item analysis, validation, and reliability
Apply the formulas for item difficulty and the index of discrimination
Analyze a sample test using the index of discrimination
Review
Item Analysis
• Try-out Phase > Item Analysis Phase > Item Revision Phase > Validation
• Item Difficulty: the proportion of examinees who answer an item correctly; the higher the value, the easier the item.
• Discrimination Index: how well an item separates high-scoring examinees from low-scoring ones.
Item Difficulty
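The slide does not reproduce the formula, but the standard difficulty index is p = R / N, where R is the number of examinees who answered the item correctly and N is the total number of examinees. A minimal sketch (the numbers are illustrative, not from the source):

```python
def item_difficulty(correct_count, total_examinees):
    """Difficulty index p = R / N: the proportion of examinees
    who answered the item correctly (higher = easier)."""
    return correct_count / total_examinees

# Example: 18 of 40 students answered the item correctly.
p = item_difficulty(18, 40)
print(p)  # 0.45 -- a moderately difficult item
```

Values near 0.5 are often considered ideal for norm-referenced tests, since very easy or very hard items tell the teacher little about individual differences.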
Discrimination Index
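Again, the formula itself is not shown in the outline. A commonly used form is D = (U − L) / n, where U and L are the numbers of correct responses in the upper and lower scoring groups (often the top and bottom 27% of examinees) and n is the size of one group. A sketch under those assumptions:

```python
def discrimination_index(upper_correct, lower_correct, group_size):
    """D = (U - L) / n: difference in correct-response counts
    between the upper and lower groups, divided by group size."""
    return (upper_correct - lower_correct) / group_size

# Example: in groups of 10, 9 upper-group and 3 lower-group
# students answered the item correctly.
D = discrimination_index(9, 3, 10)
print(D)  # 0.6 -- the item discriminates well
```

A positive D means stronger students got the item right more often than weaker ones; a negative D is a red flag, often pointing to the item-making errors listed below (a miskeyed answer or an ambiguous question).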
• Teachers should interpret item discrimination results based on the context
of the test.
• Here are some common errors in item-making:
• Wrong input of answer key for an item
• Ambiguous questions
Analysis Overview
Benefits of Item Analysis
• Provides statistical data that can improve teaching and assessment
• Identifies the specific topics that learners find difficult
• Gives teachers insight into effective item-making
Validation
• The measure of how meaningful and useful a test is.
• It covers not only the test itself but also the specific decisions a teacher makes based on the test results.
• There are three main types of validity evidence.
Validation
• Content-Related Evidence of Validity
• Usually evidence provided by a subject-matter expert.
• Answers the questions:
• How appropriate is the content? How comprehensive?
• Does the test logically get at the intended variable?
• How adequately does the sample of items or questions represent the content
to be assessed?
Validation
• Criterion-Related Evidence of Validity:
• Usually based on another instrument or test.
• Answers the questions:
• How strong is the relationship between your test and other related tests?
• How well do such scores estimate present or predict the future performance
of a certain student?
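The "strength of relationship" in criterion-related validity is typically expressed as a Pearson correlation between scores on the test and scores on the criterion measure. A minimal sketch with made-up scores (the data below are illustrative, not from the source):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: five students' scores on the new test
# and on an established criterion test.
test_scores = [70, 85, 60, 90, 75]
criterion_scores = [72, 88, 65, 91, 70]
print(round(pearson_r(test_scores, criterion_scores), 2))  # 0.95
```

A high positive r is evidence that the test estimates present standing on the criterion (concurrent validity) or predicts future performance (predictive validity).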
Validation
• Construct-Related Evidence of Validity:
• Usually based on the psychological construct the test is intended to measure.
• Mostly appropriate for affective domain tests.
• Explains how well a measure of the construct explains differences in the
behavior of the individuals or their performance on a certain task.
Validation and Reliability
• Reliability is the consistency of learners’ scores across administrations of a test.
• It is estimated with statistical tools such as the Kuder-Richardson formulae (KR-20 or KR-21).
• Validity and reliability are related as follows:
• If an instrument is unreliable, it cannot yield valid results.
• As reliability improves, validity may improve (or it may not).
• If an instrument is shown scientifically to be valid, then it is almost certain that it is also reliable.
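KR-20 estimates reliability from a single administration of a test scored 0/1 per item: KR-20 = (k / (k − 1)) · (1 − Σp·q / σ²), where k is the number of items, p and q are the proportions of correct and incorrect responses per item, and σ² is the variance of the total scores. A sketch with a small illustrative dataset (not from the source):

```python
def kr20(item_scores):
    """Kuder-Richardson 20 reliability estimate.
    item_scores: one list per examinee of 0/1 item scores."""
    k = len(item_scores[0])            # number of items
    n = len(item_scores)               # number of examinees
    totals = [sum(row) for row in item_scores]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # Sum of p*q over items, where p = proportion correct.
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_scores) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

# Five examinees, four dichotomously scored items.
scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(kr20(scores), 2))  # 0.8
```

KR-21 is a simplified variant that assumes all items have equal difficulty, trading accuracy for ease of hand computation.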
