Understanding a Scaled Score

What is a scaled score?

A scaled score is a representation of the total number of questions a candidate has
answered correctly (the raw score), converted onto a consistent and standardized scale.
Scaled scoring is a certification industry best practice for reporting high-stakes exam
scores, used to account for potential differences in difficulty across unique exam forms.

For APICS Certification exams, the converted raw passing score is 300 on a scale of 200
to 350.

Why use scaled scoring?

For fair and consistent decisions to be made on exam results, scores should be
comparable. This means that scores from different forms of a test should indicate the
same level of performance regardless of which exam form a candidate has received.
Scaled scoring accounts for the potential variability in difficulty between unique exam forms.

Test developers adhere to strict test specifications when developing multiple exam forms
to ensure that they are similar in difficulty. Because of the variability in difficulty of
individual questions, though, the forms are rarely equal in difficulty. For this reason,
percent-correct scores do not always represent a fair comparison of different forms. For
example, a candidate scoring 50% correct on a very difficult exam form would likely have
more knowledge and skills than a candidate scoring 60% on an easier form. For the same
reason, raw scores cannot be used, as two candidates with the same raw score on different
forms may have demonstrated different levels of performance relative to the difficulty of
the individual exam forms. A scaled score provides a standard range for candidates and
allows direct and fair comparisons of results from one exam form to another.

How is the passing score set?

The passing score for certification programs is established by the APICS Certification
Committee through a psychometrically valid standard-setting process. During this process,
a task force of subject matter experts discusses the minimum level of competence that is
required for passing the examination and obtaining the credential. After the task force
evaluates and analyzes the difficulty of each question, as well as the specific knowledge,
skills, and abilities that qualified practitioners possess, a raw cut score, or passing score,
is set for that particular exam form. This becomes the standard for the program. As new
exam forms are created, equating is done to adjust the passing score as needed to account
for any differences in form difficulty.

How are differences between test forms handled?

A statistical process called equating assures fairness to candidates when form difficulty
varies. Equating procedures measure the difficulty of each exam form and adjust the
passing score as needed so that the same level of candidate performance is reflected in the
passing score regardless of the difficulty of the form. By using equating procedures, an
equivalent passing standard for each form is maintained. Candidates who happen to take a
slightly more difficult exam form are not penalized. Likewise, candidates who take the slightly
easier exam form are not given an unfair advantage.

The table below shows an example of scaled scores associated with different raw scores
for two different exam forms, Form 1 and Form 2.

Raw Score      Scaled Score
               Form 1    Form 2
100            305       305
99             304       303
98             303       301
97             302       300
96             301       299
95             300       298
Etc.           Etc.      Etc.

As you can see, Form 1 is the more difficult form because it requires fewer correct
answers to achieve a passing score of 300. The minimum raw score required to achieve
a scaled score of 300 on Form 1 (95) is not the same as the minimum raw score required
for Form 2 (97). The passing score on each of these two forms, however, is reported as
the same number - 300. It is very important to note that the questions and the difficulty of
each of these forms are independent of each other.
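To make the mechanics concrete, the comparison above can be sketched as a simple lookup: each form carries its own raw-to-scaled conversion table, and a candidate passes when the converted score reaches 300. This is a minimal illustration using the example table's numbers, not the actual APICS scoring algorithm; the table data and function names here are purely illustrative.

```python
# Raw-to-scaled conversion tables from the example table above
# (illustrative data only, not real APICS conversion tables).
FORM_1 = {100: 305, 99: 304, 98: 303, 97: 302, 96: 301, 95: 300}
FORM_2 = {100: 305, 99: 303, 98: 301, 97: 300, 96: 299, 95: 298}

PASSING_SCALED = 300  # the scaled passing score, on a 200-350 scale


def scaled_score(form: dict, raw: int) -> int:
    """Look up the scaled score for a given raw score on a given form."""
    return form[raw]


def passes(form: dict, raw: int) -> bool:
    """A candidate passes when the scaled score reaches 300."""
    return scaled_score(form, raw) >= PASSING_SCALED


# The same raw score of 95 passes on the harder Form 1 but not on Form 2;
# on Form 2 a raw score of 97 is needed to reach the same scaled 300.
print(passes(FORM_1, 95))  # True
print(passes(FORM_2, 95))  # False
print(passes(FORM_2, 97))  # True
```

Because equating has already been folded into each form's conversion table, the passing decision is the same comparison against 300 on every form, regardless of that form's difficulty.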
