Group j: m_j = 6 examinees, r_j = 4 correct responses
p(θ_j) = r_j / m_j = 4/6 ≈ 0.67
Observed proportion of correct responses as a function of ability (Baker, 2001)
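The observed proportion for a group is just the fraction of examinees in that group who answered correctly. A minimal sketch of the computation above, using the example values m_j = 6 and r_j = 4:

```python
# Observed proportion of correct responses for one ability group:
# p(theta_j) = r_j / m_j, where m_j is the number of examinees in
# group j and r_j is the number who answered the item correctly.
def observed_proportion(r_j: int, m_j: int) -> float:
    return r_j / m_j

# Worked example from the text: 4 correct out of 6 examinees.
p_j = observed_proportion(r_j=4, m_j=6)
print(round(p_j, 2))  # 0.67
```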
Under this approach, initial values for the item
parameters, such as b = 0.0 and a = 1.0, are
established a priori. Then, using these estimates,
the value of P(θ_j) is computed at each ability
level via the equation for the item characteristic
curve model. The agreement between the observed
values p(θ_j) and the computed values P(θ_j) is
then assessed across all ability groups.
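This agreement check can be sketched in a few lines. The sketch below assumes the two-parameter logistic item characteristic curve P(θ) = 1 / (1 + exp(-a(θ - b))), consistent with the initial values a = 1.0 and b = 0.0 mentioned above; the ability levels and observed proportions are illustrative placeholders, not data from the text.

```python
import math

def icc(theta: float, a: float = 1.0, b: float = 0.0) -> float:
    """Computed probability of a correct response under a 2PL
    item characteristic curve with discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative ability levels theta_j and observed proportions p(theta_j).
ability_levels = [-2.0, -1.0, 0.0, 1.0, 2.0]
observed = [0.20, 0.30, 0.50, 0.70, 0.85]

# One simple measure of agreement across all ability groups:
# the sum of squared differences between p(theta_j) and P(theta_j).
residual = sum((p - icc(t)) ** 2 for t, p in zip(ability_levels, observed))
print(f"sum of squared residuals: {residual:.4f}")
```

In practice an estimation procedure would adjust a and b to shrink this discrepancy, then repeat until the computed curve agrees with the observed proportions as closely as possible.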
Item response theory (IRT) - Model
IRT attempts to model student ability using
question level performance instead of
aggregate test level performance.
Instead of assuming that all questions contribute
equally to our understanding of a student's
ability, IRT provides a more nuanced view of
the information each question provides about a
student.
The features? Let's see some examples.
First, think back to the previous example.
In the traditional grading paradigm, a correct
answer on the first section counts just as
much as a correct answer on the final section,
despite the fact that the first section is easier
than the last!
In other words, the traditional grading scheme
completely ignores each question's difficulty
when grading students.
The one-parameter logistic (1PL) IRT model
addresses this by giving each question its own
difficulty parameter. It models the probability
of a correct answer using the following logistic
function:
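A minimal sketch of the 1PL model: each question i has a difficulty b_i, and the probability that a student with ability θ answers it correctly is a logistic function of (θ - b_i). The specific ability and difficulty values below are illustrative.

```python
import math

def p_correct_1pl(theta: float, b: float) -> float:
    """1PL (Rasch) probability of a correct answer for a student
    with ability theta on a question with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A harder question (larger b) lowers the probability of a correct
# answer for the same student, which is exactly the nuance the
# traditional grading scheme ignores.
print(p_correct_1pl(theta=0.0, b=-1.0))  # easier question
print(p_correct_1pl(theta=0.0, b=1.0))   # harder question
```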