
ITEM ANALYSIS AND VALIDATION
LEARNING OUTCOMES
▪ Explain the meaning of item analysis, item validity,
reliability, item difficulty, discrimination index
▪ Determine the validity and reliability of given test
items
▪ Determine the quality of a test item by its difficulty
index, discrimination index and plausibility of
options (for selected-response test)
Phases of Preparing a Draft of the Test
Try-out Phase
The first phase, in which the teacher administers the draft test to a group of students with characteristics similar to those of the intended test takers.
Item Analysis Phase
Each item is analyzed in terms of its ability to discriminate between those who know and those who do not know, as well as its level of difficulty.
Item Revision Phase
The item analysis provides information that allows the teacher to decide whether to revise or replace an item.
Final Draft Phase
The final draft of the test is subjected
to validation if the intent is to make
use of the test as a standard test for
the particular unit or grading period.
ITEM ANALYSIS:
DIFFICULTY INDEX &
DISCRIMINATION INDEX
TWO IMPORTANT
CHARACTERISTICS OF AN ITEM

1. Item Difficulty
2. Discrimination Index
1. Item Difficulty – defined as the
number of students who answer the
item correctly divided by the total
number of students:

Item Difficulty = (no. of students with correct answer) / (total no. of students)

✓ Item difficulty is usually expressed as a percentage.
EXAMPLE
What is the item difficulty
index of an item if 25 students
are unable to answer it correctly
while 75 answered it correctly?

Item Difficulty = 75/100 = 75%


• A high percentage indicates an easy item/question.
• A low percentage indicates a difficult item.
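The difficulty formula above can be sketched in a few lines of Python (the function name is illustrative, not from the slides):

```python
# A minimal sketch of the item-difficulty formula (illustrative helper name).
def item_difficulty(num_correct, num_students):
    """Proportion of students who answered the item correctly."""
    return num_correct / num_students

# The slides' example: 75 of 100 students answer the item correctly.
print(f"{item_difficulty(75, 100):.0%}")  # prints 75%
```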
ARBITRARY RULE OFTEN USED IN THE LITERATURE

INDEX RANGE   DIFFICULTY LEVEL   ACTION
0 – 0.25      Difficult          Revise or discard
0.26 – 0.75   Right difficulty   Retain
0.76 – 1.00   Easy               Revise or discard
2. Index of Discrimination – a basic
measure of the validity of an item.
It measures an item's ability to
discriminate between those who
scored high on the total test and
those who scored low.
Index of Discrimination = DU – DL

where: DU = difficulty index of the upper group
       DL = difficulty index of the lower group
EXAMPLE
Obtain the index of discrimination of
an item if the upper 25% of the class had
a difficulty index of 0.60 while the lower
25% of the class had a difficulty index of
0.20.
Index of Discrimination = DU – DL
                        = 0.60 – 0.20
                        = 0.40
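The computation and the interpretation rule on the next slide can be sketched together in Python (helper names are illustrative; the cut-offs follow the slides' rule of thumb):

```python
# Sketch of the discrimination index and its interpretation rule
# (cut-offs taken from the slides' table; helper names are illustrative).
def discrimination_index(du, dl):
    """D = DU - DL: upper-group difficulty minus lower-group difficulty."""
    return du - dl

def interpret(d):
    if d >= 0.46:
        return "Discriminating item - include"
    if d <= -0.50:
        return "Questionable item - discard"
    return "Non-discriminating - revise"

# The slides' example: DU = 0.60, DL = 0.20.
d = round(discrimination_index(0.60, 0.20), 2)
print(d, "->", interpret(d))
```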
RULE FOR INDEX OF DISCRIMINATION

INDEX RANGE     INTERPRETATION                                    ACTION
-1.00 – -0.50   Discriminates, but in the wrong direction         Discard
-0.49 – 0.45    Non-discriminating                                Revise
0.46 – 1.00     Discriminating item                               Include


EXAMPLE
Consider a multiple-choice test item for which the following data were obtained (B* is the correct answer):

Option       A    B*   C    D
Total        0    40   20   20
Upper 25%    0    15   5    0
Lower 25%    0    5    10   5

Within the two groups, DU = 15/20 = 0.75 and DL = 5/20 = 0.25, so the index of discrimination is 0.75 – 0.25 = 0.50 (a discriminating item). Option A attracted no examinees, so it is an implausible distractor and should be revised.
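The analysis of the option-count table above can be sketched programmatically (`analyze_item` is a hypothetical helper, not part of any library):

```python
# Sketch: analyze a multiple-choice item from its option counts
# (hypothetical helper, written to mirror the slides' example).
def analyze_item(upper, lower, key):
    """upper/lower map each option to its count in the upper/lower 25% group;
    key is the correct option."""
    n_upper, n_lower = sum(upper.values()), sum(lower.values())
    du = upper[key] / n_upper      # difficulty index within the upper group
    dl = lower[key] / n_lower      # difficulty index within the lower group
    difficulty = (upper[key] + lower[key]) / (n_upper + n_lower)
    discrimination = du - dl
    # Distractors that no examinee chose are implausible and need revision.
    implausible = [o for o in upper if o != key and upper[o] + lower[o] == 0]
    return difficulty, discrimination, implausible

# Data from the slides' example (B is the correct answer).
result = analyze_item(
    upper={"A": 0, "B": 15, "C": 5, "D": 0},
    lower={"A": 0, "B": 5, "C": 10, "D": 5},
    key="B",
)
print(result)  # (0.5, 0.5, ['A'])
```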
THE ITEM-ANALYSIS PROCEDURE
PROVIDES THE FF. INFORMATION:

▪ The difficulty of the item;
▪ The discriminating power of the item; and
▪ The effectiveness of each alternative.
SOME BENEFITS DERIVED
FROM ITEM ANALYSIS
1. It provides useful information for class
discussion of the test.
2. It provides data which help students improve
their learning.
3. It provides insights and skills that lead to the
preparation of better tests in the future.
VALIDATION
AND
VALIDITY
The purpose of validation is to
determine the characteristics of
the whole test itself, namely the
validity and the reliability of the
test.
Validation is the process of
collecting and analyzing
evidence to support the
meaningfulness and usefulness
of the test.
Validity is the extent to which
a test measures what it is
intended to measure; it refers to
the appropriateness, correctness,
meaningfulness, and usefulness
of the specific decisions a teacher
makes based on the test results.
A test is valid when
it is aligned with the
learning outcomes.
A teacher who
conducts test
validation might want
to gather different
kinds of evidence.
3 MAIN TYPES OF EVIDENCE
1. Content-related evidence of validity
2. Criterion-related evidence of validity
3. Construct-related evidence of validity
CONTENT-RELATED EVIDENCE
OF VALIDITY- refers to the content and
format of the instrument.

✓ How appropriate is the content?
✓ How comprehensive is it?
✓ Does it logically get at the intended variable?
✓ How adequately does the sample of questions
represent the content to be assessed?
USUAL PROCEDURE

The teacher writes out the objectives
of the test based on the Table of
Specifications and then gives these,
together with the test, to at least two (2)
experts, along with a description of the
intended test takers.
CRITERION-RELATED EVIDENCE
OF VALIDITY- refers to the relationship
between scores obtained using the
instrument and scores obtained
using one or more other tests.

✓ How strong is the relationship?
✓ How well do such scores estimate present
performance or predict future performance
of a certain type?
TYPES OF CRITERION VALIDITY

1. Concurrent validity
2. Predictive validity

Concurrent validity refers to a
comparison between the measure in
question and an outcome assessed
at the same time.

Another type of criterion-related validity
is called predictive validity wherein the
test scores in the instrument are
correlated with scores on a later
performance (criterion measure) of the
students.
Criterion-related validity is also
known as concrete validity
because criterion validity refers to a
test’s correlation with a concrete
outcome.
CONSTRUCT-RELATED EVIDENCE
OF VALIDITY- refers to the nature of
the psychological construct or
characteristic being measured by the
test.

✓ How well does a measure of the construct explain
differences in the behavior of individuals or in
their performance on a certain task?
RELIABILITY
➢Refers to the consistency of the scores
obtained – how consistent they are for each
individual from one administration of an
instrument to another and from one set of
items to another.

• Reliability and validity are related concepts:
if an instrument is unreliable, it cannot yield
valid results.
Thank You!
