
COLLEGE OF EDUCATION

Name: Suson, Jessa Mae O. Yr. & Sec: 3A1


Instructor: Date: November 11, 2021
E-Portfolio # 1: Midterm
Cases are provided for each type of validity that illustrate how it is conducted. After reading
the cases and references about the different kinds of validity, answer the following questions and put
them on your e-portfolio.

1. Content Validity
A coordinator in science is checking the science test paper for grade 4. She asked the grade 4
science teacher to submit the table of specifications containing the objectives of the lesson and the
corresponding items. The coordinator checked whether each item is aligned with the objectives.
 How are the objectives used when creating test items?
o In creating test items, it is very important to know the objectives we are aiming for, so
we can tell whether the students learned from the lesson. Objectives are usually assessed
with multiple-choice items; each item consists of a stem, which is a question or a
problem, followed by several response options. The options include the correct or best
answer and several incorrect or inadequate answers to the stem.
 How is content validity determined when given the objectives and the items in a test?
o Content validity is determined by checking whether the test covers the content and the
objectives that were actually taught.
 What should be present in a test table of specifications when determining content validity?
o In creating a test table of specifications, we should follow these steps to determine its
content validity. The first step is to identify the test objectives. Next is to determine the
contents of the test. Third is to calculate the weight for each topic. Fourth, determine the
number of items for the whole test. And last is to determine the number of items per
topic. (A small worked sketch of the last three steps appears at the end of this section.)
 Who checks the content validity of items?
o The content validity is checked by the Head Teacher or by the Coordinator.
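
To make the last three steps concrete, here is a minimal Python sketch. The topics, class hours, and test length are invented for illustration only, not taken from the grade 4 case above.

# Hypothetical topics mapped to the class hours spent on each.
topics = {"Matter": 10, "Living Things": 6, "Force and Motion": 4}
total_items = 40  # step 4: number of items for the whole test

total_hours = sum(topics.values())
for topic, hours in topics.items():
    weight = hours / total_hours          # step 3: weight for each topic
    items = round(weight * total_items)   # step 5: items allotted per topic
    print(f"{topic}: weight {weight:.0%}, {items} items")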
2. Face Validity
The assistant principal browsed the test paper made by the math teacher. She checked if the
contents of the items are about mathematics. She examined if the instructions are clear. She browsed
through the items to check if the grammar is correct and if the vocabulary is within the students'
level of understanding.
 What can be done in order to ensure that the assessment appears to be effective?
o To ensure that the assessment is effective, the teacher or creator of the assessment must
always consider the reliability, validity, inclusivity, practicality, and objectives of the
assessment they will create, and the assessment should be reviewed thoroughly.
 What practices are done in conducting face validity?
o In conducting face validity, test items should be reviewed and tried out on small
groups of respondents.
 Why is face validity the weakest form of validity?
o Because face validity is a subjective measure. It is considered superficial because, unlike
content validity and the other types, it has no standard procedure or measure for testing
validity; it only checks whether the test appears, on the surface, to measure what it is
supposed to measure.
3. Predictive Validity
The school admissions office developed an entrance examination. Because the officials wanted to
determine if the results of the entrance examination are accurate in identifying good students, they
took the grades of the students accepted for the first quarter. They correlated the entrance exam
results and the first-quarter grades and found significant, positive correlations between the
entrance examination scores and grades: the entrance examination results predicted the grades of
students after the first quarter. Thus, there was predictive validity.
 Why are two measures needed in predictive validity?
o Two measures are needed because predictive validity is established by correlating a
predictor measure taken now with a criterion measure taken later.
 What is the assumed connection between these two measures?
o The assumed connection is a significant, positive correlation between the entrance
examination scores and the grades: the entrance examination results predict the grades of
the students after the first quarter.
 How can we determine if a measure has predictive validity?
o We can determine if the measure has predictive validity if the test score or other
measurement correlates with a variable that can only be assessed at some point after the
test has been administered.
 What statistical analysis is done to determine predictive validity?
o A correlation coefficient is obtained where the x-variable is used as the predictor and y-
variable as the criterion.
 How are the test results of predictive validity interpreted?
o Predictive validity is determined by calculating the correlation coefficient between the
results of the assessment and the subsequent targeted behavior. The stronger the
correlation between the assessment data and the target behavior, the higher the degree of
predictive validity the assessment possesses.
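
As a rough illustration of this analysis, the Python sketch below correlates an invented set of entrance exam scores (the x-variable, or predictor) with invented first-quarter grades (the y-variable, or criterion); the numbers are made up, not real data from the case.

from scipy import stats

# Hypothetical predictor (x) and criterion (y) values for eight students.
entrance_exam = [78, 85, 92, 70, 88, 95, 81, 76]
first_quarter_grades = [80, 84, 90, 72, 87, 93, 83, 75]

r, p = stats.pearsonr(entrance_exam, first_quarter_grades)
# A significant, positive r supports predictive validity.
print(f"r = {r:.2f}, p = {p:.3f}")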
4. Concurrent Validity
A school guidance counselor administered a math achievement test to grade 6 students. She also
has a copy of the students' grades in math. She wanted to verify if the math grades of the students are
measuring the same competencies as the math achievement test. The school counselor correlated the
math achievement scores and math grades to determine if they are measuring the same
competencies.
 What needs to be available when conducting concurrent validity?
o Concurrent validity is demonstrated when a test correlates well with a measure that has
previously been validated. The two measures may be for the same construct, but they are
more often used for different, but presumably related, constructs. The two measures in the
study are taken at the same time.
 At least how many tests are needed for conducting concurrent validity?
o At least two tests, taken at about the same time.
 What statistical analysis can be used to establish concurrent validity?
o The concurrent validity is often quantified by the correlation coefficient between the two
sets of measurements obtained for the same target population: the measurements
performed by the evaluating instrument and by the standard instrument.
 How are the results of a correlation coefficient interpreted for concurrent validity?
o If one assessment is new while the other is well established and has already been
proven to be valid, then concurrent validity is shown when the results of the new
assessment match or correlate with those of the proven one.
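
A minimal sketch of this computation, assuming invented achievement scores and math grades collected at the same time for the same grade 6 students:

import numpy as np

# Hypothetical scores from the math achievement test and the students'
# existing math grades (the previously validated measure).
achievement = np.array([65, 72, 80, 58, 90, 77, 84, 69])
math_grades = np.array([68, 75, 82, 60, 88, 79, 85, 70])

r = np.corrcoef(achievement, math_grades)[0, 1]
# A high positive r suggests the two measures tap the same competencies.
print(f"r = {r:.2f}")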
5. Construct Validity
A science test was made by a grade 10 teacher composed of four domains: matter, living things,
force and motion, and earth and space. There are 10 items under each domain. The teacher wanted to
determine if the 10 items made under each domain really belonged to that domain. The teacher
consulted an expert in test measurement. They conducted a procedure called factor analysis. Factor
analysis is a statistical procedure done to determine if the items written will load under the domain
they belong to.
 What type of test requires construct validity?
o Multiple Choice
 What should the test have in order to verify its constructs?
o Construct validity is usually verified by comparing the test to other tests that measure
similar qualities to see how highly correlated the two measures are.
 What are constructs and factors in a test?
o Domains and test items
 How are these factors verified as appropriate for the test?
o When planning the assessment or test, consider examinees’ age, stage of development,
ability level, culture, etc. These factors will influence construction of learning targets or
outcomes, the types of item formats selected, how items are actually written, and test
length.
 What results come out in construct validity?
o Convergent construct validity tests the relationship between the construct and a similar
measure; this shows that constructs which are meant to be related are related.
 How are the results in construct validity interpreted?
o In order to demonstrate construct validity, evidence that the test measures what it
purports to measure (in this case basic algebra) as well as evidence that the test does not
measure irrelevant attributes (reading ability) are both required.

The construct validity of a measure is reported in journal articles. The following are guide questions
used when searching for the construct validity of a measure from reports:

 What was the purpose of construct validity?
o Construct validity is used to determine how well a test measures what it is supposed to
measure.
 What type of test was used? What are the dimensions or factors that were studied using construct
validity?
o Multiple choice
 What procedure was used to establish the construct validity?
 What statistics were used for the construct validity?
o The Pearson r can be used to correlate items for each factor.
 What were the results of the test's construct validity?
o Whether or not the items are highly correlated with the factors they were written for.
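
For illustration, here is a minimal factor-analysis sketch using scikit-learn. The response matrix is randomly generated, so it only demonstrates the mechanics, not a real result for the science test in case 5.

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Stand-in data: 200 students x 40 right/wrong items (random, demo only).
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(200, 40)).astype(float)

fa = FactorAnalysis(n_components=4, random_state=0).fit(scores)
loadings = fa.components_.T  # one row per item, one column per factor
# Each item is assigned to the factor with its largest absolute loading;
# items written for the same domain should load on the same factor.
print(np.abs(loadings).argmax(axis=1))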
6. Convergent Validity
A math teacher developed a test to be administered at the end of the school year, which measures
number sense, patterns and algebra, measurement, geometry, and statistics. It is assumed by the math
teacher that students' competencies in number sense improve their capacity to learn patterns and
algebra and other concepts. After administering the test, the scores were separated for each area, and
these five domains were inter-correlated using Pearson r. The positive correlation between number
sense and patterns and algebra indicates that, when number sense scores increase, the patterns and
algebra scores also increase. This shows that students' learning of number sense scaffolds their
patterns and algebra competencies.
 What should a test have in order to conduct convergent validity?
o Convergent validity is usually accomplished by demonstrating a correlation between the
two measures, although it's rare that any two measures will be perfectly convergent.
 What is done with the domains in a test of convergent validity?
o The domains are inter-correlated to test the convergent validity.
 What analysis is used to determine convergent validity?
o To determine the convergent validity, correlate the scores between two assessment tools
or tools' sub-domains that are considered to measure the same result.
 How are the results in convergent validity interpreted?
o It is interpreted as follows: the more hypotheses are tested and supported, the stronger
the evidence that the instrument is valid.
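
A minimal sketch of inter-correlating the five domain scores with Pearson r. The columns below are randomly generated stand-ins; the first two share a common component so that they correlate positively, the way number sense and patterns and algebra are described above.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
shared = rng.normal(size=100)  # shared ability driving the first two domains
domains = pd.DataFrame({
    "number_sense": shared + rng.normal(scale=0.5, size=100),
    "patterns_algebra": shared + rng.normal(scale=0.5, size=100),
    "measurement": rng.normal(size=100),
    "geometry": rng.normal(size=100),
    "statistics": rng.normal(size=100),
})
# Positive off-diagonal coefficients indicate convergence between domains.
print(domains.corr(method="pearson").round(2))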
7. Divergent Validity
An English teacher taught metacognitive awareness strategy to comprehend a paragraph for
grade 11 students. She wanted to determine if the performance of her students in reading
comprehension would reflect well in the reading comprehension test. She administered the same
reading comprehension test to another class which was not taught the metacognitive awareness
strategy. She compared the results using a t-test for independent samples and found that the class
that was taught metacognitive awareness strategy performed significantly better than the other
group. The test has divergent validity.
 What conditions are needed to conduct divergent validity?
o To conduct divergent validity, there must be a context where the two groups or measures
are tested with the expectation of a negative or null result, just as in the case above. There
is no reason to expect the two classes to have the same level of skill in metacognitive
awareness, because the other class was never taught it. Therefore, there is divergent
validity when the two tests do not measure the same thing.
 What assumption is being proved in divergent validity?
o The assumption is that the results collected do not correlate strongly with the other measure.
 What statistical analysis can be used to establish divergent validity?
o Correlation can be used to establish divergent validity.
 How are the results of divergent validity interpreted?
o Divergent validity is shown when two measures that are not supposed to be related turn
out to be dissimilar or unrelated.
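
To show the mechanics of the t-test for independent samples described in the case, here is a small sketch with invented reading-comprehension scores for the two classes:

from scipy import stats

# Hypothetical scores: the class taught the metacognitive strategy
# versus the class that was not (both sets are made up).
taught = [88, 91, 85, 90, 87, 93, 89, 86]
not_taught = [75, 80, 72, 78, 74, 79, 76, 73]

t, p = stats.ttest_ind(taught, not_taught)
# A significant difference (small p) favors the taught group.
print(f"t = {t:.2f}, p = {p:.3f}")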
