Topic: Meaning and Types of Validity
Meaning of Validity
Validity refers to the degree to which an instrument measures what it claims to measure.
The APA defines validity as “the degree to which evidence and theory
support the interpretations of test scores entailed by proposed
uses of tests.”
Validity has also been defined as the appropriateness,
correctness, meaningfulness, and usefulness of the specific
inferences researchers make from the data they collect.
Validity can also be thought of as utility: an instrument is considered valid only for a particular purpose.
A valid test is assumed to be reliable and consistent, but a reliable
test is not necessarily valid; it may be valid only for a specific purpose.
There are three basic types of validity, according to the Standards for
Educational and Psychological Testing (1999):
content validity,
criterion-related validity,
and construct validity.
A fourth, face validity, is also sometimes discussed.
CONTENT VALIDITY
Content validity describes how well an instrument measures a representative
sample of behaviors and content domain about which inferences are to be
made.
Content validity indicates whether a measuring instrument provides adequate coverage of the topic under study.
In order to establish the content validity of a test, its items are examined
and compared to the content of the unit to be tested, or to the behaviors and
skills to be measured.
It is especially important to assess the content validity of achievement tests,
which depends on the match between the test items and the content they are designed to measure.
Content validity is usually determined by a panel of judges who rate how well the
measuring instrument meets the standards; there is no numerical way to
express it.
CRITERION-RELATED VALIDITY
Criterion-related validity of a measure involves collecting evidence to
determine the degree to which the performance on a measuring
instrument is related to the performance on some other external measure.
The external measure is labeled as the criterion.
To validate an instrument, test developers correlate scores on it with an appropriate criterion.
The resulting correlation coefficient, called the validity coefficient,
indicates the strength of the relationship between the instrument and
the criterion.
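In practice, the validity coefficient is simply the Pearson correlation between scores on the instrument and scores on the criterion. The sketch below computes it from scratch; the scores are hypothetical, invented only to illustrate the arithmetic:

```python
import math

def validity_coefficient(test_scores, criterion_scores):
    """Pearson correlation between instrument scores and an external criterion."""
    n = len(test_scores)
    mean_x = sum(test_scores) / n
    mean_y = sum(criterion_scores) / n
    # Sum of cross-products of deviations from the means
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(test_scores, criterion_scores))
    # Square roots of the sums of squared deviations
    ss_x = math.sqrt(sum((x - mean_x) ** 2 for x in test_scores))
    ss_y = math.sqrt(sum((y - mean_y) ** 2 for y in criterion_scores))
    return cov / (ss_x * ss_y)

# Hypothetical scores for five examinees on the new test and on the criterion
new_test = [52, 61, 70, 78, 85]
criterion = [55, 60, 68, 80, 83]
print(round(validity_coefficient(new_test, criterion), 3))
```

A coefficient near 1.0 indicates a strong relationship between the instrument and the criterion; a coefficient near 0 indicates the instrument tells us little about the criterion.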
There are two types of criterion-related validity: concurrent validity and
predictive validity.
Concurrent Validity
Concurrent validity is concerned with the evaluation of how well the test
we wish to validate correlates with another well-established instrument
that measures the same thing.
The well-established instrument is designated as the criterion.
In order to establish concurrent validity, the two measures are
administered to the same group of people, and the scores on the two
measures are correlated.
The correlation coefficient serves as an index of concurrent validity.
Predictive Validity
Predictive validity describes how well a test predicts some future performance.
This type of validity is especially useful for aptitude and readiness tests that are
designed to predict some future performance.
The test to be validated is the predictor (e.g., the Scholastic Aptitude Test or the ACT
test) and the future performance is the criterion (e.g., GPA of college freshmen).
Data are collected for the same group of people on both the predictor and the
criterion, and the scores on the two measures are correlated to obtain the validity
coefficient.
Unlike concurrent validity, where both instruments are administered at about the same
time, predictive validity involves administering the predictor first and the criterion later.
Construct Validity
Construct validity is the most complex and abstract type of validity.
A measure is said to possess construct validity to the degree that it conforms
to predicted correlations with other theoretical propositions.
Construct validity is the degree to which scores on a test can be accounted
for by the explanatory constructs of a sound theory.
Construct validity is the extent to which an instrument measures and provides
accurate information about a theoretical trait or characteristic.
For example, an instrument designed to measure test anxiety has construct validity if its scores behave as the theory of test anxiety predicts.
Criteria are specific standards used to evaluate something, while constructs are
abstract concepts used to describe or explain something.
Factors affecting Validity
Representativeness of content, criteria, and constructs
Sampling bias
Response bias
Test administration
Test format
Sample size