
Genesis Abrasaldo

Assessment in Learning

Activity 2

How do you interpret the statement “Is a valid test always valid?” Explain
your answer and give an example.

For me, the statement “Is a valid test always valid?” is not true. For example,
suppose you design a questionnaire to measure self-esteem, and one of your
students is genuinely shy and quiet, yet the test result says the student is
an outgoing person. The result contradicts what you observe of the student's
personality, so the test is not measuring what it claims to measure. This
shows that a test labeled valid is not always valid in practice.

Is a reliable test always valid? Why? Give an example.

The statement “Is a reliable test always valid?” is not true. Reliability means
the test gives consistent results; validity means it gives accurate ones. For
example, a thermometer that consistently reads two degrees higher than the
true temperature is reliable, because it produces the same reading every time
under carefully controlled conditions, but it is not valid, because its
readings are inaccurate.
So when is a reliable test considered valid? A reliable thermometer is also
valid when its consistent readings match the true temperature, no matter how
many times you take a measurement under the same conditions.
Discuss briefly the “morality or ethics in assessment”.

Moral assessment is the judgment a human subject makes in deciding what he
or she ought to do.


The very notion of moral assessment conflicts with any purely arbitrary
determination of right and wrong. The assessment process should be fair and
transparent, and must not discriminate according to gender, sexual
orientation, ethnicity, religion or belief, age, class or disability.

As a teacher how do you perform “justness” or “fairness” in assessing the
achievements of your students?
As a teacher, I perform “justness” or “fairness” in assessing the
achievements of my students by:
1. Don't rush. (Assessments that are thrown together at the last minute
invariably include flaws that greatly affect the fairness, accuracy, and
usefulness of the resulting evidence.)
2. Plan your assessments carefully. (Aim not only to assess your key learning
goals but to do so in a balanced, representative way. If your key learning
goals are that students should understand what happened during a certain
historical period and evaluate the decisions made by key figures during that
period, for example, your test should balance questions on basic conceptual
understanding with questions assessing evaluation skills.)
3. Aim for assignments and questions that are crystal clear. (If students find
the question difficult to understand, they may answer what they think is the
spirit of the question rather than the question itself, which may not match
your intent.)
4. Guard against unintended bias. (Ask a variety of people with diverse
perspectives to review assessment tools.)
5. Try out large-scale assessment tools. (If you are planning a large-scale
assessment with potentially significant consequences, try out your assessment
tool with a small group of students before launching the large-scale
implementation. Consider asking some students to think out loud as they
answer a test question; their thought processes should match up with the ones
you intended. Read students' responses to assignments and open-ended survey
questions to make sure their answers make sense, and ask students if anything
is unclear or confusing.)
6. Ask a variety of people with diverse perspectives to review assessment
tools. (This helps ensure that the tools are clear, that they appear to assess
what you want them to, and that they don't favor students of a particular
background.)

It is important to ensure that the learner is informed about, understands,
and is able to participate in the assessment process, and agrees that the
process is appropriate. It also includes an opportunity for the person being
assessed to challenge the result of the assessment and to be reassessed if
necessary.

Discuss briefly the four types of validity. Give example of each type.
a. Construct validity
Construct validity refers to the degree to which a test or other measure
assesses the underlying theoretical construct it is supposed to measure. The
test is measuring what it is purported to measure. For example, a test of
reading comprehension should not require mathematical ability.

b. Content validity
The extent to which a test measures a representative sample of the subject
matter or behavior under investigation. For example, if a test is designed to
survey arithmetic skills at a third-grade level, content validity indicates how
well it represents the range of arithmetic operations possible at that level.

c. Predictive validity
Predictive validity is the extent to which performance on a test is related to
later performance that the test was designed to predict. For example, the
SAT test is taken by high school students to predict their future performance
in college (namely, their college GPA).

d. Concurrent validity
Concurrent validity indicates the amount of agreement between two different
assessments. Generally, one assessment is new while the other is well
established and has already been proven to be valid. For example, let's say a
group of nursing students takes two final exams to assess their knowledge.
One exam is a practical test and the second exam is a paper test. If the
students who score well on the practical test also score well on the paper
test, then the new exam demonstrates concurrent validity.
