
VALIDITY

Fauzana Putri

Valid means correct or appropriate. We often hear people say, "this instrument is not valid." That statement is not entirely accurate, since it is not the instrument itself that is valid; rather, the instrument is the source of validity evidence. When we say the result of an assessment is valid, we mean we are confident that the result represents what it is supposed to represent. If we want to know students' writing ability, for example, we use an instrument that really assesses students' ability in writing. As a result, the conclusion drawn from the assessment will correctly reflect students' writing ability, and we can say that the result obtained from the instrument is valid. On the other hand, if we want to know students' speaking ability but assess it by asking the students to write a text, the result of the assessment will not be valid, because we have used the wrong instrument. The result does not reflect students' speaking ability, and the assessment instrument has a validity problem.

Besides the use of wrong or inappropriate instruments, validity problems occur when the steps of the assessment procedure are not carried out correctly or when the sample does not represent the population. However, when we have conducted the assessment according to the proper procedures, we can be confident that the result obtained from the instrument does not have a validity problem. The instrument can serve as a source of supporting validity evidence if the task it requires correctly reflects the skill to be measured.
Although validity itself is abstract, we can estimate it by providing supporting validity evidence. There are four kinds of supporting validity evidence: construct validity evidence, content validity evidence, concurrent validity evidence, and predictive validity evidence. The first two can be collected from the assessment instrument itself, while the other two are collected from empirical, criterion-related data.

Construct validity evidence is evidence that shows the match between the skill to be assessed and the task required by the instrument. For example, if we want to assess students' speaking skill, the assessment instrument must require the students to perform a speaking activity. If we gave a multiple-choice paper-and-pencil test to assess their speaking skill, there would be a mismatch between the skill to be assessed and the task required by the instrument. The absence of this match indicates a construct validity problem. In short, the tasks the instrument requires of the students become the evidence of this validity; an instrument that does not provide construct validity evidence has a construct validity problem.

Even when an assessment instrument provides construct validity evidence, it does not necessarily provide content validity evidence; construct validity evidence is a prerequisite for content validity evidence. Content validity evidence is evidence that shows the match between the skill to be assessed and the coverage of the tasks in the instrument. If we want to assess students' writing skill, for instance, the assessment instrument must cover all components of writing skill that the students have learned. If we assess only some components of the skill, the scores suffer from a content validity problem. In short, the coverage of the tasks given to the students becomes the evidence of this validity; an instrument whose tasks fail to cover the whole skill has a content validity problem.

The next type of supporting validity evidence is concurrent validity evidence. The score from an English competency test conducted in a classroom, as an illustration, can be compared to the score from IELTS, because the two tests have the same purpose: measuring English proficiency. When the two sets of scores are correlated, the correlation shows the level of the validity of the scores. A high, positive correlation between the two sets of scores provides concurrent validity evidence. On the contrary, a low correlation between them shows that the results of the tests have a concurrent validity problem.

Unlike concurrent validity evidence, where a test result is compared with another obtained at almost the same time, predictive validity evidence concerns how well a test result predicts another that takes place later, after the initial test. If a student gets a high score on an admission test, he is predicted to perform well in his subsequent study. When he indeed succeeds in his study, the admission-test score has made a good prediction and is supported by predictive validity evidence. On the contrary, if the student got a high admission-test score but does not perform well during the learning process, the score has a predictive validity problem.
