1. Content Validity
A coordinator in science is checking the science test paper for grade 4. She asked the grade 4
science teacher to submit the table of specifications containing the objectives of the lesson and the
corresponding items. The coordinator checked whether each item is aligned with the objectives.
How are the objectives used when creating test items?
o In creating a test item, it is very important that we know the objectives we are
aiming for, so that we can determine whether the students learned from the lesson. Objectives are
usually used to write multiple-choice items: each item consists of a stem, which is a question or a
problem, followed by several response options. The options include the correct or
best answer and several incorrect or inadequate answers to the stem.
How is content validity determined when given the objectives and the items in a test?
o Content validity is determined when you create a good test that covers the content and the
objectives that were actually taught.
What should be present in a test table of specifications when determining content validity?
o In creating a test table of specifications, we should follow these steps to determine its
content validity. The first step is to identify the test objectives. Next is to determine the
contents of the test. Third is to calculate the weight for each topic. Fourth, determine the
number of items for the whole test. And last is to determine the number of items per
topic.
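The five steps above can be sketched in code. A minimal Python sketch using hypothetical topics, instructional hours, and a 40-item test (all names and numbers below are assumptions for illustration, not from the source):

```python
def items_per_topic(hours_per_topic, total_items):
    """Allocate test items to topics in proportion to instructional time.

    Weight per topic = hours taught on the topic / total hours;
    items per topic = weight * total number of items, rounded.
    """
    total_hours = sum(hours_per_topic.values())
    return {
        topic: round(hours / total_hours * total_items)
        for topic, hours in hours_per_topic.items()
    }

# Hypothetical grade 4 science topics and the hours spent teaching each.
hours = {"Matter": 6, "Living Things": 9, "Force and Motion": 6, "Earth and Space": 9}
allocation = items_per_topic(hours, total_items=40)
print(allocation)  # Matter and Force and Motion get 8 items each; the others get 12
```

Rounding can make the allocated items sum slightly above or below the intended total, so the final table is usually adjusted by hand.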
Who checks the content validity of items?
o The content validity is checked by the Head Teacher or by the Coordinator.
2. Face Validity
The assistant principal browsed the test paper made by the math teacher. She checked if the
contents of the items are about mathematics. She examined if instructions are clear. She browsed
through the items if the grammar is correct and if the vocabulary is within the students' level of
understanding.
What can be done in order to ensure that the assessment appears to be effective?
o To ensure that the assessment is effective, the teacher or creator of the assessment must
always consider the reliability, validity, inclusivity, practicality, and objectives of the
assessment they will create. And it should be reviewed thoroughly.
What practices are done in conducting face validity?
o In conducting face validity, test items should be reviewed and tried out on a small
group of respondents.
Why is face validity the weakest form of validity?
o Because face validity is a subjective measure. It is considered a superficial check:
unlike content validity and the other forms of validity, it has no standard procedures or
measures behind it. A reviewer simply judges, at face value, whether the test appears to
measure what it claims to measure.
3. Predictive Validity
The school admissions office developed an entrance examination. The officials wanted to
determine whether the results of the entrance examination were accurate in identifying good students,
so they took the grades of the students accepted for the first quarter. They correlated the entrance
exam results with the first-quarter grades and found a significant, positive correlation between the
entrance examination scores and the grades: the entrance examination results predicted the grades of
students after the first quarter. Thus, there was predictive validity.
Why are two measures needed in predictive validity?
o Predictive validity needs two measures because it is established by correlating them: one
serves as the predictor, taken first, and the other as the criterion, taken later.
What is the assumed connection between these two measures?
o It is assumed that there is a significant, positive correlation between the entrance examination
scores and the grades: the entrance examination results predict the grades of the students
after the first quarter.
How can we determine if a measure has predictive validity?
o We can determine if the measure has predictive validity if the test score or other
measurement correlates with a variable that can only be assessed at some point after the
test has been administered.
What statistical analysis is done to determine predictive validity?
o A correlation coefficient is obtained where the x-variable is used as the predictor and y-
variable as the criterion.
How are the test results of predictive validity interpreted?
o Predictive validity is determined by calculating the correlation coefficient between the
results of the assessment and the subsequent targeted behavior. The stronger the
correlation between the assessment data and the target behavior, the higher the degree of
predictive validity the assessment possesses.
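The correlation coefficient described above can be computed directly. A minimal Python sketch with made-up entrance-exam scores (the predictor, x) and first-quarter grades (the criterion, y); the data are hypothetical, and in practice a significance test would accompany the coefficient:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between predictor x and criterion y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical entrance-exam scores and the same students' later
# first-quarter grades.
exam = [78, 85, 92, 60, 70, 88]
grades = [80, 86, 90, 65, 72, 91]
print(round(pearson_r(exam, grades), 2))  # a strong positive r, near 1
```

A coefficient near +1 would indicate, as in the scenario, that the entrance exam has high predictive validity for first-quarter grades.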
4. Concurrent Validity
A school guidance counselor administered a math achievement test to grade 6 students. She also
had a copy of the students' grades in math. She wanted to verify whether the math grades of the students were
measuring the same competencies as the math achievement test. The school counselor correlated the
math achievement scores and math grades to determine if they are measuring the same
competencies.
What needs to be available when conducting concurrent validity?
o Concurrent validity is demonstrated when a test correlates well with a measure that has
previously been validated. The two measures may be for the same construct, but they are
more often for different, though presumably related, constructs. The two measures in the
study are taken at the same time.
At least how many tests are needed for conducting concurrent validity?
o Two tests
What statistical analysis can be used to establish concurrent validity?
o The concurrent validity is often quantified by the correlation coefficient between the two
sets of measurements obtained for the same target population - the measurements
performed by the evaluating instrument and by the standard instrument.
How are the results of a correlation coefficient interpreted for concurrent validity?
o If one assessment is new while the other is well established and has already been
proven valid, then concurrent validity is shown when the results of the new assessment
are the same as, or correlated with, the results of the proven one.
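One way to read the resulting coefficient is with rough strength bands. The cut-offs below follow a common rule of thumb; they are an assumption for illustration, not a standard fixed by the source, and acceptable values vary by purpose and field:

```python
def interpret_validity_coefficient(r):
    """Rough interpretation bands for a validity coefficient.

    Rule-of-thumb cut-offs (hypothetical here); actual standards
    depend on the field and the stakes of the decision.
    """
    strength = abs(r)
    if strength >= 0.50:
        return "high"
    if strength >= 0.30:
        return "moderate"
    if strength >= 0.10:
        return "low"
    return "negligible"

# e.g. a new math achievement test correlating 0.62 with existing math grades
print(interpret_validity_coefficient(0.62))  # high
```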
5. Construct Validity
A science test was made by a grade 10 teacher composed of four domains: matter, living things,
force and motion, and earth and space. There are 10 items under each domain. The teacher wanted to
determine if the 10 items made under each domain really belonged to that domain. The teacher
consulted an expert in test measurement. They conducted a procedure called factor analysis. Factor
analysis is a statistical procedure done to determine whether the items written will load on the
domains to which they belong.
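A simplified illustration of the idea: with synthetic scores for six items written under two domains, an eigendecomposition of the item correlation matrix (a principal-axis-style stand-in for full factor analysis, not the exact procedure the teacher would use) recovers two factors, and each item loads most strongly on the factor for its own domain. All data below are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_students = 200

# Two latent domain abilities (say, "matter" and "force and motion").
ability = rng.normal(size=(n_students, 2))

# Items 0-2 are written for domain 0, items 3-5 for domain 1.
loadings_true = np.array([[0.8, 0.0], [0.7, 0.0], [0.9, 0.0],
                          [0.0, 0.8], [0.0, 0.7], [0.0, 0.9]])
items = ability @ loadings_true.T + 0.4 * rng.normal(size=(n_students, 6))

# Eigendecomposition of the item correlation matrix; the two largest
# factors are rescaled into loadings.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]                     # largest factors first
loadings = eigvecs[:, order[:2]] * np.sqrt(eigvals[order[:2]])

# Each row is an item; each item should load most strongly on the
# factor corresponding to its own domain.
print(np.round(np.abs(loadings), 2))
```

If an item loaded more strongly on the other domain's factor, that would be evidence it does not belong where it was written, which is exactly what the teacher wanted to check.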
What type of test requires construct validity?
o Multiple Choice
What should the test have in order to verify its constructs?
o Construct validity is usually verified by comparing the test to other tests that measure
similar qualities to see how highly correlated the two measures are.
What are constructs and factors in a test?
o Domains and test items
How are these factors verified as appropriate for the test?
o When planning the assessment or test, consider examinees’ age, stage of development,
ability level, culture, etc. These factors will influence construction of learning targets or
outcomes, the types of item formats selected, how items are actually written, and test
length.
What results come out in construct validity?
o Convergent construct validity tests the relationship between the construct and a similar
measure; this shows that constructs which are meant to be related are related.
How are the results in construct validity interpreted?
o In order to demonstrate construct validity, evidence that the test measures what it
purports to measure (in this case basic algebra) as well as evidence that the test does not
measure irrelevant attributes (reading ability) are both required.
The construct validity of a measure is reported in journal articles. The following are guide questions
used when searching for the construct validity of a measure from reports: