
Criterion-related evidence

The validity of a test can be measured through a second form of evidence called
criterion-related evidence, also known as criterion-related validity: the extent to which
the "criterion" of the test has actually been reached. Criterion-related evidence is best
demonstrated through a comparison of the results of an assessment with the results of some
other measure of the same criterion. For example:
Course objective: orally produce voiced and voiceless stops in phonetic environments. The
results of one teacher's unit test might be compared with an independent assessment
(possibly from a textbook) of the same phonetic proficiency.
Criterion-related evidence usually falls into one of two categories: concurrent and
predictive validity. A test has concurrent validity if its results are supported by other
concurrent performance beyond the assessment itself. For example, the validity of a high
score on the final exam of a foreign language course will be substantiated by actual
proficiency in the language. The predictive validity of an assessment becomes important in
the case of placement tests, admissions assessment batteries, and achievement tests
designed to determine students' readiness to "move on" to another unit.

Criterion-related evidence can be either predictive of later behaviour or a concurrent
measure of behaviour or knowledge. Predictive validity refers to the "power" or usefulness
of test scores to predict future performance. Examples of such future performance may
include academic success (or failure) in a particular course, good driving performance (if
the test was a driver's exam), or aviation performance (predicted from a comprehensive
piloting exam). This type of predictive validity is useful when schools use standardized test
scores as part of their admission criteria for enrolment or for admittance into a specific
program.
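Whether concurrent or predictive, the comparison described above is commonly quantified as a correlation between test scores and the criterion measure, often called a validity coefficient. The sketch below uses hypothetical scores (a teacher's unit test versus an independent rating of the same skill); the data and function name are illustrative, not drawn from the text:

```python
from math import sqrt

def validity_coefficient(test_scores, criterion_scores):
    """Pearson correlation between test scores and an independent
    criterion measure -- one common way to express criterion-related
    evidence numerically. Values near 1.0 indicate strong agreement."""
    n = len(test_scores)
    mean_x = sum(test_scores) / n
    mean_y = sum(criterion_scores) / n
    # Covariance term: how the two sets of scores vary together
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(test_scores, criterion_scores))
    # Spread of each set of scores
    sx = sqrt(sum((x - mean_x) ** 2 for x in test_scores))
    sy = sqrt(sum((y - mean_y) ** 2 for y in criterion_scores))
    return cov / (sx * sy)

# Hypothetical data: five students' unit-test scores vs. an
# independent assessment of the same phonetic proficiency
unit_test = [72, 85, 90, 64, 78]
independent = [70, 88, 93, 60, 80]
r = validity_coefficient(unit_test, independent)
print(f"validity coefficient r = {r:.3f}")
```

A coefficient this close to 1.0 would suggest the unit test and the independent measure are capturing the same criterion; in practice, validity coefficients for real assessments are considerably lower.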
