
Imee Joy A. Dayaan                                                October 08, 2019

EDUC 107: Assessment in Learning 1: TTH: 7:30 – 9:00 am

Activity Sheet No. 18

Appropriateness of Assessment Methods

Assessment Method — Appropriateness

Selected-Response (Kubizyn, 2003)
Students select from a given set of options to answer a question or a problem. Because there is only one correct or best answer, selected-response items are objective and efficient. The items are easy to grade, and the teacher can assess and score a great deal of content quickly (Kubizyn, 2003).

Constructed-Response Format (Kubizyn, 2003)
More useful in targeting higher levels of cognition; demands that students create or produce their own answers in response to a question, problem, or task. Items of this type may fall under any of the following categories: brief constructed-response items, performance tasks, essay items, or oral questioning (Kubizyn, 2003).

Teacher Observations (Kubizyn, 2003)
A form of ongoing assessment, usually done in combination with oral questioning. By watching how students respond to oral questions and behave during individual and collaborative activities, the teacher can gather information on whether learning is taking place in the classroom. Teachers have to be watchful for students who are losing attention, misbehaving, or appear non-participative in classroom activities (Kubizyn, 2003).

Student Self-Assessment (Kubizyn, 2003)
One of the standards of quality; a process in which students are given a chance to reflect on and rate their own work and judge how well they have performed in relation to a set of criteria. Self-monitoring techniques include activity checklists, diaries, and self-report inventories. Self-assessment gives students an opportunity to reflect on their performance and monitor their learning progress, motivates them to do well, and gives the teacher feedback that can be used to improve the subject/course (Kubizyn, 2003).
References:

Kubizyn, T. (2003). Educational Testing and Measurement: Classroom Application and Practice. John Wiley & Sons, Inc.; India: Replika Press.

Activity Sheet No. 19

Definition of Validity

Validity — Classroom Illustration

Validity is a term derived from the Latin word "validus," meaning strong. In assessment, a measure is deemed valid if it measures what it is supposed to measure. Validity is not a property of the test itself; it pertains to the accuracy of the inferences teachers make about students based on the information gathered from an assessment. This implies that the conclusions teachers come up with in their evaluation are valid only if there is strong and sound evidence of the extent of students' learning.

The degree to which a test measures what it is supposed to measure. The quality of a test depends on its validity; it is the most central and essential quality in the development, interpretation, and use of educational measures (Asaad, 2004).

The most important quality of a good measuring instrument; it refers to the degree to which a test measures what it intends to measure (Raagas, 2010).

Classroom Illustration: A Mathematics test is administered twice to a group of first-year high school students. Student A's answer to Item 7, "How many meters are there in 9 kilometers?", is 9,000 meters, and in the second administration his answer to Item 7 is still 9,000 meters. Hence, the student's answer is valid because of the truthfulness of his answer (Calmorin, 2004).

References:
Asaad, A. S. (2004). Measurement and Evaluation: Concepts and Application (3rd ed.). Manila: Rex Bookstore, Inc.
Calmorin, L. (2004). Measurement and Evaluation (3rd ed.). Mandaluyong City: National Bookstore, Inc.
Raagas, E. L. (2010). Measurement (Assessment) and Education: Concepts and Application (3rd ed.). Cagayan de Oro City.

Activity Sheet No. 20

Factors Affecting Validity

Factors How it Affects Validity


Inappropriateness of the Test Items
Measuring understanding, thinking skills, and other complex types of achievement with test forms that are appropriate only for measuring factual knowledge will invalidate the results (Asaad, 2004).

Directions of the Test Items
Directions that do not clearly state how the students should respond to the items and record their answers tend to lessen the validity of the test items (Asaad, 2004).

Reading Vocabulary and Sentence Structure
Vocabulary and sentence structures that do not match the level of the students will result in the test measuring reading ability instead (Asaad, 2004).

Level of Difficulty of the Test Items
When the test items are too easy or too difficult, they cannot discriminate between the bright and the poor students, which lowers the validity of the test (Asaad, 2004).

Poorly Constructed Test Items
Test items which unintentionally provide clues to the answer will tend to measure the students' alertness in detecting clues, and the important aspects of student performance that the test is intended to measure will be affected (Asaad, 2004).

Length of the Test
A test should have a sufficient number of items to measure what it is supposed to measure. If a test is too short to provide a representative sample of the performance to be measured, validity will suffer accordingly (Asaad, 2004).

Arrangement of the Test Items
Test items should be arranged in increasing difficulty. Placing difficult items early in the test may cause mental blocks and may take up too much of the students' time, preventing them from reaching items they could easily answer. Improper arrangement may therefore also affect validity by having a detrimental effect on students' motivation (Asaad, 2004).

Pattern of the Answers
A systematic pattern of correct answers — one that students can detect and exploit — will also lower the validity of the test (Asaad, 2004).

Ambiguity
Ambiguous statements in test items contribute to misinterpretation and confusion. Ambiguity sometimes confuses the bright students more than the poor ones, causing the items to discriminate in a negative direction (Asaad, 2004).

Reference:

Asaad, A. S. (2004). Measurement and Evaluation: Concepts and Application (3rd ed.). Manila: Rex Bookstore, Inc.

Activity Sheet No. 21

Types of Validity

Content Validity
Definition: It is related to how adequately the content of the test samples the domain about which inferences are to be made (Calmorin, 2004). Content validity is established through logical analysis; adequate sampling of the test items is usually enough to assure that a test has content validity (Oriondo, 1984).
Classroom Illustration: A teacher wishes to validate a test in Mathematics. He requests experts in Mathematics to judge whether the items or questions measure the knowledge, skills, and values they are supposed to measure (Oriondo, 1984).

Face Validity
Definition: This is done by examining the test to find out if it is a good one. There is no common numerical method for establishing face validity (Asaad, 2004).
Classroom Illustration: Calculating the area of a rectangle when the given length and width are 4 feet and 6 feet, respectively (Raagas, 2010).

Construct Validity
Definition: The extent to which a test measures a theoretical trait. This involves such tests as those of understanding and interpretation of data.
Classroom Illustration: A teacher might determine whether an educational program increases artistic ability among pre-school children. Construct validity is a measure of whether the research actually measures artistic ability, a slightly abstract label (Calmorin, 2004).

Criterion-Related Validity (Predictive Validity)
Definition: Refers to the degree of accuracy with which a test predicts performance on some subsequent outcome (Asaad, 2004).
Classroom Illustration: Mr. Celso wants to know the predictive validity of the test he administered the previous year by correlating the scores with the grades the same students obtained at a later date. Their scores and grades are presented below:

Grade (x)   Test (y)      xy       x²       y²
   89          40        3560     7921     1600
   85          37        3145     7225     1369
   90          45        4050     8100     2025
   79          25        1975     6241      625
   80          27        2160     6400      729
   82          35        2870     6724     1225
   92          41        3772     8464     1681
   87          38        3306     7569     1444
   81          29        2349     6561      841
   84          37        3108     7056     1369
  849         354       30295    72261    12908

r = [10(30295) − (849)(354)] / √{[10(72261) − (849)²][10(12908) − (354)²]}

r = 0.92

A coefficient of correlation of 0.92 indicates that his test has high predictive validity (Asaad, 2004).

Criterion-Related Validity (Concurrent Validity)
Definition: It refers to the degree to which the test correlates with a criterion, which is set up as an acceptable measure or standard other than the test itself. The criterion is always available at the time of testing (Asaad, 2004).
Classroom Illustration:

   x     y      xy      x²      y²
  34    30    1020    1156     900
  40    37    1480    1600    1369
  35    25     875    1225     625
  49    37    1813    2401    1369
  50    45    2250    2500    2025
  38    29    1102    1444     841
  37    35    1295    1369    1225
  47    40    1880    2209    1600
  38    35    1330    1444    1225
  43    39    1677    1849    1521
 411   352   14722   17197   12700

r = [10(14722) − (411)(352)] / √{[10(17197) − (411)²][10(12700) − (352)²]}

r = 0.83

A coefficient of correlation of 0.83 indicates that the test has high concurrent validity (Asaad, 2004).
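The two correlation coefficients above (r = 0.92 and r = 0.83) can be double-checked with a short Python sketch of the Pearson r formula. This is only an illustration: the function name `pearson_r` is invented here, and the data are the grade/test pairs from the tables above.

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two lists of scores."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))        # Σxy
    sx2 = sum(a * a for a in x)                   # Σx²
    sy2 = sum(b * b for b in y)                   # Σy²
    num = n * sxy - sx * sy
    den = math.sqrt((n * sx2 - sx ** 2) * (n * sy2 - sy ** 2))
    return num / den

# Predictive validity data: grades vs. later test scores
grades = [89, 85, 90, 79, 80, 82, 92, 87, 81, 84]
tests  = [40, 37, 45, 25, 27, 35, 41, 38, 29, 37]
print(round(pearson_r(grades, tests), 2))  # 0.92

# Concurrent validity data
x = [34, 40, 35, 49, 50, 38, 37, 47, 38, 43]
y = [30, 37, 25, 37, 45, 29, 35, 40, 35, 39]
print(round(pearson_r(x, y), 2))  # 0.83
```

Both results agree with the hand computations in the tables above.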

References:

Asaad, A. S. (2004). Measurement and Evaluation: Concepts and Application (3rd ed.). Manila: Rex Bookstore, Inc.
Calmorin, L. (2004). Measurement and Evaluation (3rd ed.). Mandaluyong City: National Bookstore, Inc.
Oriondo, L. (1984). Evaluating Educational Outcomes. Manila.
Raagas, E. L. (2010). Measurement (Assessment) and Education: Concepts and Application (3rd ed.). Cagayan de Oro City.

Activity Sheet No. 24

Definition of Reliability
Definition — Classroom Illustration

Reliability is a factor of validity. It refers to the consistency of the test results (Buendicho, 2010).
Illustration: For teacher-made tests, a reliability index of 0.50 and above is acceptable (Buendicho, 2010).

Reliability is defined as the consistency of test results (Rico, 2011).
Illustration: If you create a quiz to measure students' ability to solve quadratic equations, you should be able to assume that if a student gets some items correct, he or she will get other similar items correct.

References:

Buendicho, F. (2010). Assessment of Students Learning 1. Manila: Rex Bookstore, Inc.
Rico, A. (2011). Assessment of Students Learning (A Practical Approach). Manila: Anvil Publishing, Inc.

Activity Sheet No. 25

Factors Affecting Reliability


Factors — How Each Factor Affects Reliability

Length of the Test (Asaad, 2004)
A longer test provides a more adequate sample of the behavior being measured and is less disturbed by chance factors like guessing (Calmorin, 2004).

Moderate Item Difficulty (Asaad, 2004)
Items of moderate difficulty spread the scores over a greater range than a test composed of very difficult or very easy items (Calmorin, 2004).

Objectivity (Asaad, 2004)
Objective scoring eliminates the biases, opinions, or judgments of the person who checks the test (Calmorin, 2004).

Heterogeneity of the Student Group (Calmorin, 2004)
Reliability is higher when test scores are spread out over a range of abilities (Calmorin, 2004).

Limited Time (Calmorin, 2004)
A test in which speed is a factor is more reliable than a test conducted over a longer time (Calmorin, 2004).

References:
Asaad, A. S. (2004). Measurement and Evaluation: Concepts and Application (3rd ed.). Manila: Rex Bookstore, Inc.
Calmorin, L. (2004). Measurement and Evaluation (3rd ed.). Mandaluyong City: National Bookstore, Inc.

Activity Sheet No. 26

Methods of Establishing the Reliability of a Good Measuring Instrument

Method: Test-Retest Method (Asaad, 2004)
Definition: In this method, the same test is administered twice to the same group of students after a time interval (Asaad, 2004).
Estimate of Reliability: Measure of stability (Asaad, 2004).
Name of Statistical Tool (Formula): Pearson r,

r = [n(Σxy) − (Σx)(Σy)] / √{[n(Σx²) − (Σx)²][n(Σy²) − (Σy)²]}  (Asaad, 2004)

References:

Asaad, A. S. (2004). Measurement and Evaluation: Concepts and Application (3rd ed.). Manila: Rex Bookstore, Inc.

Method: Equivalent/Parallel Form Method (Asaad, 2004)
Definition: In this method, two sets of tests that are similar in content, type of items, difficulty, and other respects are administered in close succession to the same group of students (Asaad, 2004).
Estimate of Reliability: Measure of equivalence.
Name of Statistical Tool (Formula): Pearson r,

r = [n(Σxy) − (Σx)(Σy)] / √{[n(Σx²) − (Σx)²][n(Σy²) − (Σy)²]}  (Asaad, 2004)

References:

Asaad, A. S. (2004). Measurement and Evaluation: Concepts and Application (3rd ed.). Manila: Rex Bookstore, Inc.

Method: Split-Half Method (Asaad, 2004)
Definition: In this method, a test is administered once and the results are split into two halves (Asaad, 2004).
Estimate of Reliability: Internal consistency (Asaad, 2004).
Name of Statistical Tool (Formula): Pearson r between the two halves,

r = [n(Σxy) − (Σx)(Σy)] / √{[n(Σx²) − (Σx)²][n(Σy²) − (Σy)²]}

then the Spearman-Brown formula for the whole test,

rt = 2roe / (1 + roe)  (Asaad, 2004)
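The Spearman-Brown step-up described above can be sketched in Python. A minimal illustration, assuming the Pearson r between the two halves (roe) has already been computed; the function name and the sample value 0.60 are invented.

```python
def spearman_brown(r_oe):
    """Step the half-test correlation r_oe up to a whole-test
    reliability estimate: rt = 2*r_oe / (1 + r_oe)."""
    return 2 * r_oe / (1 + r_oe)

# e.g., if the odd- and even-item halves correlate at 0.60 (hypothetical),
# the full-length test's estimated reliability is higher:
print(round(spearman_brown(0.60), 2))  # 0.75
```

The correction reflects that a full-length test samples more behavior than either half alone, so the whole-test estimate always exceeds the half-test correlation.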

References:

Asaad, A. S. (2004). Measurement and Evaluation: Concepts and Application (3rd ed.). Manila: Rex Bookstore, Inc.

Method: Internal Consistency Methods (Asaad, 2004)
Definition: This is the last method of establishing the reliability of a test. Like the split-half method, the test is administered only once. This method assumes that all items are of equal difficulty (Asaad, 2004).
Estimate of Reliability: This measures the homogeneity of the instrument (the pattern of the percentages of correct and wrong responses of the students) (Asaad, 2004).
Name of Statistical Tool (Formula):

Kuder-Richardson Formula 21 (Asaad, 2004)
Kuder-Richardson Formula 20 (Gabuyo, 2013)

Mean: x̄ = ΣX / N

Standard deviation: SD² = Σ(X − x̄)² / (N − 1)  (Calmorin, 2004)

Variance: s² = [n(Σx²) − (Σx)²] / [n(n − 1)]  (Gabuyo, 2013)
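KR-21 can be sketched in Python using the mean and standard deviation defined above. This is a hedged illustration: the source names KR-21 but does not print its formula, so the standard form KR-21 = (k/(k−1))·(1 − x̄(k − x̄)/(k·SD²)) is assumed here, with k the number of items and SD² the sample variance; the score list and the 30-item test length are invented.

```python
def kr21(scores, k):
    """KR-21 reliability from students' total scores and item count k.
    Assumes all k items are of roughly equal difficulty."""
    n = len(scores)
    mean = sum(scores) / n
    # sample variance, matching the SD formula above (divisor N - 1)
    var = sum((x - mean) ** 2 for x in scores) / (n - 1)
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * var))

# Invented total scores of ten students on a hypothetical 30-item test
scores = [18, 22, 25, 17, 30, 28, 21, 24, 26, 19]
print(round(kr21(scores, 30), 2))  # 0.74
```

Because KR-21 needs only the mean, variance, and item count, it is quicker to compute by hand than KR-20, at the cost of the equal-difficulty assumption.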

References:

Asaad, A. S. (2004). Measurement and Evaluation: Concepts and Application (3rd ed.). Manila: Rex Bookstore, Inc.
Calmorin, L. (2004). Measurement and Evaluation (3rd ed.). Mandaluyong City: National Bookstore, Inc.
Gabuyo, Y. (2013). Assessment of Learning 1 (Textbook and Reviewer). Manila: Rex Bookstore.

Activity Sheet No. 27

Types of Consistency of Test

Type Its Consistency


Internal Reliability Assesses the consistency of results across
(de Guzman, E., Adamos J., 2005) items within a test (de Guzman, E., Adamos
J., 2005)
External Reliability Gauges the extent to which a measure varies
(de Guzman, E., Adamos J., 2005) from one use to another (de Guzman, E.,
Adamos J., 2005)

References:

de Guzman, E., & Adamos, J. (2005). Assessment of Learning 1. Manila: Ariadna Publishing Co., Inc.
