
ASSESSMENT IN LEARNING 2
MODULE 3
Prepared by: Vincent L. Banot, MS Math

NOTE:
Activity 2: Deadline is June 11, 2021. Submit through Edmodo.
Edmodo Exam for Module 3 is on June 11, 2021.
Google Meet: June 10, 2021 (review for Module 3 before the exam).

TOPICS:

1. Item Analysis
2. Criterion-Referenced Assessment
3. Norm-Referenced Assessment

Objective: At the end of this module, the students will be able to differentiate criterion-referenced
from norm-referenced assessment and to conduct an item analysis.
Sources:
https://www.turnitin.com/blog/what-is-item-analysis-and-other-important-exam-design-
principles#:~:text=Item%20analysis%20is%20the%20act,unconsciously%20on%20a%20regular%20basis.
https://www.edglossary.org/criterion-referenced-
test/#:~:text=Criterion%2Dreferenced%20tests%20and%20assessments,specific%20stage%20of%20their%20education.
https://www.edglossary.org/norm-referenced-test/

_________________________________________________________________________________________________________

What is item analysis?

Item analysis is the act of analyzing student responses to individual exam
questions with the intention of evaluating exam quality. It is an important tool to
uphold test effectiveness and fairness.

Item analysis is likely something educators do both consciously and
unconsciously on a regular basis. In fact, grading literally involves studying
student responses and the pattern of student errors, whether to a particular
question or particular types of questions.

But when the process is formalized, item analysis becomes a scientific method
through which tests can be improved, and academic integrity upheld.

Item analysis brings to light test quality in the following ways:

• Item Difficulty -- is the exam question (aka “item”) too easy or too hard?
When every student gets an item either right or wrong, the item decreases
the exam’s reliability. If everyone answers a particular item correctly,
there is little way to tell who really understands the material in depth;
conversely, if everyone answers it incorrectly, there is no way to
differentiate those who have learned the material deeply.
• Item Discrimination -- does the exam question discriminate between
students who understand the material and those who do not? Exam
questions should reflect the varying degrees of knowledge students have
of the material, as shown by the percentage of correct responses.
Desirable discrimination can be shown by comparing correct answers on
the item to students’ total test scores--i.e., do students who scored high
overall have a higher rate of correct answers on the item than those who
scored low overall? If you separate the top scorers from the bottom
scorers, which group is getting the item correct?
• Item Distractors -- for multiple-choice exams, distractors play a
significant role. Do the incorrect options effectively draw test takers away
from the correct answer? For example, if a multiple-choice question has
four possible answers and two of them are obviously incorrect, the
question is reduced to a 50/50 guess between the remaining two options.
When distractors are obviously incorrect rather than plausibly disguised,
they become ineffective in assessing student knowledge. An effective
distractor will attract test takers with a lower overall score rather than
those with a higher overall score (a small tallying sketch in Python
follows this list).
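
If you want to run this kind of distractor check for a whole class, the tally can be automated. The sketch below is an illustration only, not part of this module's required procedure; the function name, the (total score, chosen option) record format, and the default 27% split are assumptions on my part.

from collections import Counter

def distractor_tally(records, group_fraction=0.27):
    # records: list of (total_score, chosen_option) pairs, one per student.
    # Rank students by total score, highest first.
    ranked = sorted(records, key=lambda r: r[0], reverse=True)
    # Take the top and bottom 27% of the class (rounded to whole students).
    n_group = round(len(ranked) * group_fraction)
    # Count how often each option was chosen within each group.
    upper = Counter(option for _, option in ranked[:n_group])
    lower = Counter(option for _, option in ranked[-n_group:])
    return upper, lower

A distractor chosen mainly by the lower group is doing its job; one that attracts almost no one, or mostly attracts the upper group, is a candidate for revision.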

Item analysis entails noting the pattern of student errors to various questions in
all the ways stated above. This analysis can provide distinct feedback on exam
efficacy and support exam design.

How can item analysis inform exam design?

Shoring up student learning can be done through feedback, but also through exam
design. The data from item analysis can drive the way in which you design
future tests. As noted previously, if assessment of student knowledge is the bridge
between teaching and learning, then exams ought to measure the student
learning gap as accurately as possible.

Item analysis should bring to light both questions and answers as you revise or
omit items from your test.

• Is the item difficulty level appropriate?
• Does the item discriminate appropriately?
• Are the distractors effective?

In doing so, item analysis can increase the efficacy of your exams by testing
knowledge accurately. Knowing exactly what students know and what they
don’t know helps both student learning and instructor efficacy.

How can item analysis inform course content or the curriculum?

Not only can item analysis drive exam design, but it can also inform course
content and curriculum.

When it comes to item difficulty, it’s important to note whether errors indicate a
misunderstanding of the question or of the concept the item addresses. When a
large number of students answer an item incorrectly, it’s notable. It may be a
matter of fine-tuning a question for clarity; is the wording of the question
confusing? Are the answers clear?

Or it could be that the material may have to be reviewed in class, possibly with
a different learning approach.

Item distractor analysis is also helpful in that it can help identify
misunderstandings students have about the material. If the majority of students
selected the same incorrect multiple-choice answer, then that provides insight
into student learning needs and opportunities. (Also--congrats on a great
distractor that highlights student learning gaps and discriminates student
knowledge.)

Whether you employ item analysis manually or via software, we think data-
driven exams and curricula are a great thing. And we hope this helps you out on
your pedagogical journey.

STEPS FOR INDEX OF DIFFICULTY

1. Arrange the scores from highest to lowest.
2. Separate the top 27% and the bottom 27% of the cases.
3. Tally the number of cases from each group who got the item correct.
4. Compute the difficulty index using the formula:

   Index of difficulty = (Rᵤ + Rₗ) / N x 100

   Where:
   Rᵤ = number of students in the upper group who got the correct answer
   Rₗ = number of students in the lower group who got the correct answer
   N = total number of students from the upper and lower groups

5. Write the interpretation.
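
Before working through the example by hand, here is a small Python sketch of the same five steps. It is an illustration only, not part of the module's prescribed procedure; the assumption that each student record is a (total score, chosen answer) pair, and the helper name itself, are mine.

def difficulty_index(records, correct_option, group_fraction=0.27):
    # Step 1: arrange the scores from highest to lowest.
    ranked = sorted(records, key=lambda r: r[0], reverse=True)
    # Step 2: separate the top 27% and the bottom 27% of the cases.
    n_group = round(len(ranked) * group_fraction)
    upper, lower = ranked[:n_group], ranked[-n_group:]
    # Step 3: tally the cases in each group that got the item correct.
    r_u = sum(1 for _, answer in upper if answer == correct_option)
    r_l = sum(1 for _, answer in lower if answer == correct_option)
    # Step 4: index of difficulty = (Ru + Rl) / N x 100,
    # where N is the combined size of the two groups.
    return (r_u + r_l) / (len(upper) + len(lower)) * 100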

EXAMPLE
Step 1: Arrange the scores from highest to lowest.

Score  Answer          Score  Answer
95 C 73 D
93 C 73 A
92 C 72 E
91 C 71 E
90 C 70 E
89 C 70 C
88 C 69 C
87 C 68 C
86 C 67 C
85 A 66 D
83 D 65 C
82 B 64 D
81 B 63 A
80 E 61 B
79 C 60 B
78 C 58 E
78 A 57 E
77 C 56 D
76 B 55 A
75 C 53 A
74 D 50 D

STEP 2: Separate the top 27% and the bottom 27% of the cases.

42 x 0.27 = 11.34 ≈ 11

Upper 27% (top 11 students)        Lower 27% (bottom 11 students)
Score  Answer                      Score  Answer
95     C                           65     C
93     C                           64     D
92     C                           63     A
91     C                           61     B
90     C                           60     B
89     C                           58     E
88     C                           57     E
87     C                           56     D
86     C                           55     A
85     A                           53     A
83     D                           50     D
STEP 3: Tally the number of cases from each group who got the item correct (the keyed
correct answer for this item is C).

Rᵤ = 9
Rₗ = 1

STEP 4: Compute the difficulty index using the formula:

Index of difficulty = (Rᵤ + Rₗ) / N x 100

Where:
Rᵤ = number of students in the upper group who got the correct answer
Rₗ = number of students in the lower group who got the correct answer
N = total number of students from the upper and lower groups

Solution:

Index of difficulty = (9 + 1) / 22 x 100

Index of difficulty = 45.45

STEP 5: Write the interpretation.

Interpretation: The index of difficulty is 45.45 percent (45.45%). This value falls within the
26% - 75% range; hence, Item 18 is of the right difficulty and must be retained.

Range of difficulty index     Interpretation        Action
0% - 25%                      Difficult             Revise or discard
26% - 75%                     Right difficulty      Retain
76% and above                 Easy                  Revise or discard
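
To double-check the hand computation and the reading of the table above, the same arithmetic can be run directly; the classify_difficulty helper below is hypothetical and simply mirrors the interpretation table.

def classify_difficulty(index_pct):
    # Map a difficulty index (in percent) to the interpretation table above.
    if index_pct <= 25:
        return "Difficult - revise or discard"
    if index_pct <= 75:
        return "Right difficulty - retain"
    return "Easy - revise or discard"

# Values from the worked example: Ru = 9, Rl = 1, N = 11 + 11 = 22.
index = (9 + 1) / 22 * 100
print(round(index, 2), classify_difficulty(index))   # 45.45 Right difficulty - retain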
STEPS FOR INDEX OF DISCRIMINATION

1. Arrange the scores from highest to lowest.
2. Separate the top 27% and the bottom 27% of the cases.
3. Tally the number of cases from each group who got the item correct.
4. Compute the discrimination index using the formula:

   Index of discrimination = (Rᵤ - Rₗ) / NG

   Where:
   Rᵤ = number of students in the upper group who got the correct answer
   Rₗ = number of students in the lower group who got the correct answer
   NG = number of students in the upper group or the lower group (the size of one group)

5. Write the interpretation.
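
The same sketch idea used for the difficulty index applies here; only the final formula changes. Again this is only an illustration under the assumption that each student record is a (total score, chosen answer) pair, and the function name is mine.

def discrimination_index(records, correct_option, group_fraction=0.27):
    # Steps 1-2: rank the scores and split off the top and bottom 27%.
    ranked = sorted(records, key=lambda r: r[0], reverse=True)
    n_group = round(len(ranked) * group_fraction)
    upper, lower = ranked[:n_group], ranked[-n_group:]
    # Step 3: tally the correct answers in each group.
    r_u = sum(1 for _, answer in upper if answer == correct_option)
    r_l = sum(1 for _, answer in lower if answer == correct_option)
    # Step 4: index of discrimination = (Ru - Rl) / NG,
    # where NG is the size of one group.
    return (r_u - r_l) / n_group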


EXAMPLE
Step 1: Arrange the scores from highest to lowest.

Score  Answer          Score  Answer
95 C 73 D
93 C 73 A
92 C 72 E
91 C 71 E
90 C 70 E
89 C 70 C
88 C 69 C
87 C 68 C
86 C 67 C
85 A 66 D
83 D 65 C
82 B 64 D
81 B 63 A
80 E 61 B
79 C 60 B
78 C 58 E
78 A 57 E
77 C 56 D
76 B 55 A
75 C 53 A
74 D 50 D

STEP 2: Separate the top 27% and the bottom 27% of the cases.

42 x 0.27 = 11.34 ≈ 11

Upper 27% (top 11 students)        Lower 27% (bottom 11 students)
Score  Answer                      Score  Answer
95     C                           65     C
93     C                           64     D
92     C                           63     A
91     C                           61     B
90     C                           60     B
89     C                           58     E
88     C                           57     E
87     C                           56     D
86     C                           55     A
85     A                           53     A
83     D                           50     D
STEP 3: Tally the number of cases from each group who got the item correct (the keyed
correct answer for this item is C).

Rᵤ = 9
Rₗ = 1

STEP 4: Compute the discrimination index using the formula:

Index of discrimination = (Rᵤ - Rₗ) / NG

Where:
Rᵤ = number of students in the upper group who got the correct answer
Rₗ = number of students in the lower group who got the correct answer
NG = number of students in the upper group or the lower group (the size of one group)

Solution:

Index of discrimination = (9 - 1) / 11

Index of discrimination = 0.727

STEP 5: Write the interpretation.

Interpretation: The discrimination index is 0.727, which means the item is very good and must be retained.
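
As a quick arithmetic check on the example (an illustration only; the three-decimal rounding is mine):

# Values from the worked example: Ru = 9, Rl = 1, NG = 11.
index = (9 - 1) / 11
print(round(index, 3))   # prints 0.727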
ACTIVITY 2

INSTRUCTION: THIS IS AN INDIVIDUAL WORK. SUBMIT THROUGH EDMODO AS A
PDF, WORD FILE, OR PICTURE (make sure it is very clear).

Find the INDEX of DIFFICULTY and INDEX of DISCRIMINATION. Show the steps and
interpretation.

Result of Item 2, taken by 36 Grade 8 students in a Mathematics Achievement Test, subject for
item analysis.
