Final Examination
Summer 2018
Prepared by
Student
Submitted to:
a child develops an interest in studying; through the feedback that a test provides, a
child's efforts in his or her studies come to fruition.
2. Testing is also a path to thorough learning. Children learn, and the test results
show what a child has fully learned and which
• Validity
• Reliability
• Administrability
• Objectivity, etc.
VALIDITY – It refers to the extent to which the test measures what it intends to measure.
For example, when an intelligence test is developed to assess the level of intelligence, it
should assess the intelligence of the person, not other factors. It means that it measures
what it is supposed to measure; it tests what it ought to test. A good test which measures
control of grammar should have no difficult lexical items. Validity tells us whether the test
fulfils the objective of its development. There are many methods to assess the validity of
a test.
RELIABILITY – This refers to the extent to which the obtained results are consistent or
reliable. When the test is administered to the same sample more than once, with a
reasonable gap of time, a reliable test will yield the same scores. It means the test is
trustworthy. There are many methods of testing the reliability of a test, and different
theories explain the concept of reliability in a scientific way. The first and simplest: a test
is reliable if we get the same results repeatedly. Second: a test is reliable when it gives
consistent results. Third: reliability is the ratio of true score variance to observed score
variance.
It measures the consistency of what the test intends to assess, based on item analysis,
a statistical tool. It also measures whether the results will be the same when the test is
given to the same group on another occasion, or to a different group with the same
characteristics as the first. A test should be consistent and dependable, i.e., usable
again in the future. There are types of reliability: the reliability coefficient and the
coefficient of equivalence.
USABILITY - Usability means the degree to which a test can be used without much
expenditure of time, money and effort; it also means practicability. Factors that determine
usability are administrability, scorability, interpretability, economy, and the proper
mechanical makeup of the test.
ADMINISTRABILITY- means that the test can be administered with ease, clarity and
uniformity. Directions must be simple, clear and concise. Time limits, oral instructions
and sample questions are specified. Provisions for the preparation, distribution, and
collection of test materials must be definite. Scorability is concerned with the scoring of
the test. A good test is easy to score: the scoring directions are clear, the scoring key is
simple, answer sheets are available, and machine scoring is made possible as much as
possible. Test results become useful only if they are interpreted after evaluation. Correct
interpretation and application of test results is very useful for sound educational
decisions.
Objective measurement is the repetition of a unit amount that maintains its size, within
an allowable range of error, no matter which instrument (intended to measure the
variable of interest) is used and no matter which relevant person or thing is measured.
ADVANTAGES
1. Easy to construct
2. Requires the student to supply the answer
3. Faster to answer
4. Provides better diagnostic information
5. Quick and easy to grade
DISADVANTAGES
1. Limited to measuring recall of information
2. Handwriting and spelling become a bit of an issue
3. Students fill the blank with unintended responses
4. More prone to scoring errors than objectively scored formats
5. Encourage students to memorize terms and details, so that their understanding
of the content remains superficial
Writing an essay
These are five problems commonly experienced even when we are familiar with the
topic of an essay:
1. I know the topic, but I cannot keep my ideas from becoming repetitive within my
essay.
2. I know the topic, but I lose focus while writing the essay.
3. I know the topic, but I have a hard time organizing my ideas.
4. I know the topic, but I do not know which of the things I know about it to include
in the essay.
5. I know the topic, but I do not know how to begin or how to end.
Essay questions
Advantages
• Can be used to develop student writing skills, particularly the ability to formulate
arguments supported with reasoning and evidence
Disadvantages
Multiple-choice questions
Disadvantages
• Often test literacy skills: “if the student reads the question carefully, the answer is
easy to recognize even if the student knows little about the subject” (p. 194)
• Provide unprepared students the opportunity to guess, and with guesses that are
right, they get credit for things they don’t know
• Expose students to misinformation that can influence subsequent thinking about
the content
• Take time and skill to construct (especially good questions)
True-false questions
Advantages
Disadvantages
• Often written so that most of the statement is true save one small, often trivial bit
of information that then makes the whole statement untrue
Item analysis is a process which examines student responses to individual test items
(questions) in order to assess the quality of those items and of the test as a whole. Item
analysis is especially valuable in improving items which will be used again in later tests,
but it can also be used to eliminate ambiguous or misleading items in a single test
administration. In addition, item analysis is valuable for increasing instructors’ skills in
test construction, and identifying specific areas of course content which need greater
emphasis or clarity. Separate item analyses can be requested for each raw score 1
created during a given ScorePak® run.
A basic assumption made by ScorePak® is that the test under analysis is composed of
items measuring a single subject area or underlying ability. The quality of the test as a
whole is assessed by estimating its “internal consistency.” The quality of individual items
is assessed by comparing students’ item responses to their total test scores.
Item Statistics
Item statistics are used to assess the performance of individual test items on the
assumption that the overall quality of a test derives from the quality of its items. The
ScorePak® item analysis report provides the following item information:
Item Number
This is the question number taken from the student answer sheet, and the ScorePak®
Key Sheet. Up to 150 items can be scored on the Standard Answer Sheet.
The mean is the “average” student response to an item. It is computed by adding up the
number of points earned by all students on the item, and dividing that total by the
number of students.
The standard deviation, or S.D., is a measure of the dispersion of student scores on that
item. That is, it indicates how “spread out” the responses were. The item standard
deviation is most meaningful when comparing items which have more than one correct
alternative and when scale scoring is used. For this reason it is not typically used to
evaluate classroom tests.
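The item mean and standard deviation described above can be sketched in plain Python; the scores below are hypothetical:

```python
import math

# Hypothetical points earned by six students on a one-point item
item_scores = [1, 0, 1, 1, 0, 1]

# Mean: total points earned by all students divided by the number of students
mean = sum(item_scores) / len(item_scores)

# Standard deviation: how "spread out" the responses are around the mean
variance = sum((s - mean) ** 2 for s in item_scores) / len(item_scores)
sd = math.sqrt(variance)

print(round(mean, 3), round(sd, 3))  # 0.667 0.471
```

Note that for a one-point item the mean is also the proportion of students answering correctly.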
Item Difficulty
For items with one correct alternative worth a single point, the item difficulty is simply
the percentage of students who answer an item correctly. In this case, it is also equal to
the item mean. The item difficulty index ranges from 0 to 100; the higher the value, the
easier the question. When an alternative is worth other than a single point, or when
there is more than one correct alternative per question, the item difficulty is the average
score on that item divided by the highest number of points for any one alternative. Item
difficulty is relevant for determining whether students have learned the concept being
tested. It also plays an important role in the ability of an item to discriminate between
students who know the tested material and those who do not. The item will have low
discrimination if it is so difficult that almost everyone gets it wrong or guesses, or so
easy that almost everyone gets it right.
To maximize item discrimination, desirable difficulty levels are slightly higher than
midway between chance and perfect scores for the item. (The chance score for five-
option questions, for example, is 20 because one-fifth of the students responding to the
question could be expected to choose the correct option by guessing.) Ideal difficulty
levels for multiple-choice items in terms of discrimination potential are:
Five-response multiple-choice 70
Four-response multiple-choice 74
Three-response multiple-choice 77
(From Lord, F.M. “The Relationship of the Reliability of Multiple-Choice Test to the
Distribution of Item Difficulties,” Psychometrika, 1952, 18, 181-194.)
ScorePak® arbitrarily classifies item difficulty as “easy” if the index is 85% or above;
“moderate” if it is between 51 and 84%; and “hard” if it is 50% or below.
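A minimal sketch of the difficulty index and the ScorePak-style classification above, using the stated cutoffs; the data and function names are my own, not ScorePak's:

```python
def item_difficulty(scores, max_points=1):
    # Difficulty index: average item score divided by the highest possible
    # points for the item, expressed as a percentage (0-100).
    return 100 * (sum(scores) / len(scores)) / max_points

def classify(difficulty):
    # ScorePak-style cutoffs: "easy" at 85% or above, "hard" at 50% or below.
    if difficulty >= 85:
        return "easy"
    if difficulty <= 50:
        return "hard"
    return "moderate"

# Hypothetical one-point item answered correctly by 8 of 10 students
d = item_difficulty([1, 1, 1, 0, 1, 1, 0, 1, 1, 1])
print(d, classify(d))  # 80.0 moderate
```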
Item Discrimination
Item discrimination refers to the ability of an item to differentiate among students on the
basis of how well they know the material being tested. Various hand calculation
procedures have traditionally been used to compare item responses to total test scores
using high and low scoring groups of students. Computerized analyses provide more
accurate assessment of the discrimination power of items because they take into
account responses of all students rather than just high and low scoring groups.
The item discrimination index provided by ScorePak® is a Pearson Product Moment
correlation2 between student responses to a particular item and total scores on all other
items on the test. This index is the equivalent of a point-biserial coefficient in this
application. It provides an estimate of the degree to which an individual item is
measuring the same thing as the rest of the items.
Because the discrimination index reflects the degree to which an item and the test as a
whole are measuring a unitary ability or attribute, values of the coefficient will tend to be
lower for tests measuring a wide range of content areas than for more homogeneous
tests. Item discrimination indices must always be interpreted in the context of the type of
test which is being analyzed. Items with low discrimination indices are often
ambiguously worded and should be examined. Items with negative indices should be
examined to determine why a negative value was obtained. For example, a negative
value may indicate that the item was mis-keyed, so that students who knew the material
tended to choose an unkeyed, but correct, response option.
Tests with high internal consistency consist of items with mostly positive relationships
with total test score. In practice, values of the discrimination index will seldom exceed
.50 because of the differing shapes of item and total score distributions. ScorePak®
classifies item discrimination as “good” if the index is above .30; “fair” if it is between .10
and .30; and “poor” if it is below .10.
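The computation described above (a correlation between one item's responses and total score on all other items) can be sketched as follows, with a small hypothetical 0/1 response matrix and a plain-Python Pearson correlation standing in for ScorePak's internal routine:

```python
import math

def pearson(x, y):
    # Pearson product-moment correlation between two equal-length lists
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def discrimination(item_matrix, item_index):
    # Correlate responses to one item with total score on all OTHER items
    item = [row[item_index] for row in item_matrix]
    rest = [sum(row) - row[item_index] for row in item_matrix]
    return pearson(item, rest)

# Hypothetical 0/1 response matrix: rows = students, columns = items
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(round(discrimination(responses, 0), 2))  # 0.72 -- "good" by the cutoffs above
```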
Alternate Weight
This column shows the number of points given for each response alternative. For most
tests, there will be one correct answer which will be given one point, but ScorePak®
allows multiple correct alternatives, each of which may be assigned a different weight.
Means
The mean total test score (minus that item) is shown for students who selected each of
the possible response alternatives. This information should be looked at in conjunction
with the discrimination index; higher total test scores should be obtained by students
choosing the correct, or most highly weighted alternative. Incorrect alternatives with
relatively high means should be examined to determine why “better” students chose that
particular alternative.
The number and percentage of students who choose each alternative are reported. The
bar graph on the right shows the percentage choosing each response; each “#”
represents approximately 2.5%. Frequently chosen wrong alternatives may indicate
common misconceptions among the students.
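A frequency report like the one described above can be imitated in a few lines; the answer data are hypothetical, with each “#” standing for roughly 2.5% as in the ScorePak® report:

```python
from collections import Counter

# Hypothetical responses of 40 students to one five-option question
answers = ["A"] * 22 + ["B"] * 10 + ["C"] * 4 + ["D"] * 3 + ["E"] * 1

counts = Counter(answers)
n = len(answers)
for option in "ABCDE":
    pct = 100 * counts[option] / n
    bar = "#" * round(pct / 2.5)  # each "#" represents approximately 2.5%
    print(f"{option}: {counts[option]:3d}  {pct:5.1f}%  {bar}")
```

A heavily chosen wrong alternative would show up as a long bar next to an incorrect option.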
Test Statistics
Two statistics are provided to evaluate the performance of the test as a whole.
Reliability Coefficient
The reliability of a test refers to the extent to which the test is likely to produce
consistent scores. The particular reliability coefficient computed by ScorePak® reflects
three characteristics of the test:
• Intercorrelations among the items — the greater the relative number of positive
relationships, and the stronger those relationships are, the greater the reliability.
Item discrimination indices and the test’s reliability coefficient are related in this
regard.
• Test length — a test with more items will have a higher reliability, all other things
being equal.
• Test content — generally, the more diverse the subject matter tested and the
testing techniques used, the lower the reliability.
Reliability coefficients theoretically range in value from zero (no reliability) to 1.00
(perfect reliability). In practice, their approximate range is from .50 to .90 for about 95%
of the classroom tests scored by ScorePak®. High reliability means that the questions
of a test tended to “pull together.” Students who answered a given question correctly
were more likely to answer other questions correctly. If a parallel test were developed
by using similar items, the relative scores of students would show little change. Low
reliability means that the questions tended to be unrelated to each other in terms of who
answered them correctly. The resulting test scores reflect peculiarities of the items or
the testing situation more than students’ knowledge of the subject matter.
Reliability      Interpretation
.90 and above    Excellent reliability; at the level of the best standardized tests.
.70 – .80        Good for a classroom test; in the range of most. There are probably a
                 few items which could be improved.
.50 – .60        Suggests need for revision of the test, unless it is quite short (ten or
                 fewer items). The test definitely needs to be supplemented by other
                 measures (e.g., more tests) for grading.
The measure of reliability used by ScorePak® is Cronbach’s Alpha. This is the general
form of the more commonly reported KR-20 and can be applied to tests composed of
items with different numbers of points given for different response alternatives. When
coefficient alpha is applied to tests in which each item has only one correct answer and
all correct answers are worth the same number of points, the resulting coefficient is
identical to KR-20.
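A minimal sketch of coefficient alpha as described above, using a hypothetical 0/1 response matrix (so the result also equals KR-20); population variances are used on both sides of the ratio:

```python
def variance(xs):
    # Population variance (divide by n, not n - 1)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_matrix):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = len(item_matrix[0])
    item_vars = [variance([row[i] for row in item_matrix]) for i in range(k)]
    total_var = variance([sum(row) for row in item_matrix])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [  # hypothetical 0/1 matrix: rows = students, columns = items
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(responses), 2))  # 0.8
```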
The standard error of measurement is directly related to the reliability of the test. It is an
index of the amount of variability in an individual student’s performance due to random
measurement error. If it were possible to administer an infinite number of parallel tests,
a student’s score would be expected to change from one administration to the next due
to a number of factors. For each student, the scores would form a “normal” (bell-
shaped) distribution. The mean of the distribution is assumed to be the student’s “true
score,” and reflects what he or she “really” knows about the subject. The standard
deviation of the distribution is called the standard error of measurement and reflects the
amount of change in the student’s score which could be expected from one test
administration to another.
Whereas the reliability of a test always varies between 0.00 and 1.00, the standard error
of measurement is expressed in the same scale as the test scores. For example,
multiplying all test scores by a constant will multiply the standard error of measurement
by that same constant, but will leave the reliability coefficient unchanged.
A general rule of thumb to predict the amount of change which can be expected in
individual test scores is to multiply the standard error of measurement by 1.5. Only
rarely would one expect a student’s score to increase or decrease by more than that
amount between two such similar tests. The smaller the standard error of measurement,
the more accurate the measurement provided by the test.
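The text does not give the formula, but the standard relationship is SEM = SD × √(1 − reliability); a sketch of it, together with the 1.5 × SEM rule of thumb above, with made-up score figures:

```python
import math

def standard_error_of_measurement(score_sd, reliability):
    # Standard formula: SEM = SD * sqrt(1 - reliability)
    return score_sd * math.sqrt(1 - reliability)

# Hypothetical test: score SD of 10 points, reliability of .84
sem = standard_error_of_measurement(score_sd=10.0, reliability=0.84)
print(round(sem, 1))        # 4.0 points
print(round(1.5 * sem, 1))  # 6.0 -- expected band of change between parallel tests
```

On this hypothetical test, a student's score would only rarely change by more than about 6 points between two similar administrations.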
Each of the various item statistics provided by ScorePak® provides information which
can be used to improve individual test items and to increase the quality of the test as a
whole. Such statistics must always be interpreted in the context of the type of test given
and the individuals being tested. W. A. Mehrens and I. J. Lehmann provide the following
set of cautions in using item analysis results (Measurement and Evaluation in Education
and Psychology. New York: Holt, Rinehart and Winston, 1973, 333-334):
• Item analysis data are not synonymous with item validity. An external criterion is
required to accurately judge the validity of test items. By using the internal criterion
of total test score, item analyses reflect internal consistency of items rather than
validity.
• The discrimination index is not always a measure of item quality. There is a variety
of reasons an item may have low discriminating power: (a) extremely difficult or
easy items will have low ability to discriminate, but such items are often needed to
adequately sample course content and objectives; (b) an item may show low
discrimination if the test measures many different content areas and cognitive
skills. For example, if the majority of the test measures “knowledge of facts,” then
an item assessing “ability to apply principles” may have a low correlation with total
test score, yet both types of items are needed to measure attainment of course
objectives.
• Item analysis data are tentative. Such data are influenced by the type and number
of students being tested, instructional procedures employed, and chance errors. If
repeated use of items is possible, statistics should be recorded for each
administration of each item.
1 Raw scores are those scores which are computed by scoring answer sheets against a
ScorePak® Key Sheet. Raw score names are EXAM1 through EXAM9, QUIZ1 through
QUIZ9, MIDTRM1 through MIDTRM3, and FINAL. ScorePak® cannot analyze scores
taken from the bonus section of student answer sheets or computed from other scores,
because such scores are not derived from individual items which can be accessed by
ScorePak®. Furthermore, separate analyses must be requested for different versions of
the same exam.
2 A correlation is a statistic which indexes the degree of linear relationship between two
variables. If the value of one variable is related to the value of another, they are said to
be “correlated.” In positive relationships, the value of one variable tends to be high when
the value of the other is high, and low when the other is low. In negative relationships,
the value of one variable tends to be high when the other is low, and vice versa. The
possible values of correlation coefficients range from -1.00 to 1.00. The strength of the
relationship is shown by the absolute value of the coefficient (that is, how large the
number is whether it is positive or negative). The sign indicates the direction of the
relationship (whether positive or negative).
Make sure that:
1. Each item tests one skill from the table of specifications.
2. Each item type suits the skill being tested.
3. What the item requires is clearly stated.
4. The item's difficulty is right for those who will take the test.
5. The item contains no unnecessary words or clues.
6. The item does not favor any one culture or religion.
7. Items were not lifted from a textbook or workbook.
8. There are enough items for every objective in the table of specifications.
If the test is long and made up of different item types, write general directions in
addition to the specific directions for each type of test (subtest). The general directions
should provide the following information.
E. Prepare the test answer key. Make sure that each item has only one correct
answer.
I. Choose the letter of the correct answer. Write the letter before the number. (The
sample sentences are kept in Filipino, since the items test Filipino grammar.)
1. When the predicate comes first and the subject comes last in the sentence.
A. Karaniwang ayos (natural order)  B. Di-karaniwang ayos (inverted order)
2. When the predicate comes last and the subject comes first in the sentence; the
marker "ay" is also used.
A. Karaniwang ayos  B. Di-karaniwang ayos
3. Ang balita tungkol kay Berto ay walang katotohanan.
A. Karaniwang ayos  B. Di-karaniwang ayos
4. Walang katotohanan ang balita tungkol kay Berto.
A. Karaniwang ayos  B. Di-karaniwang ayos
5. Maagang dumating sa bahay ang mga anak-pawis.
A. KA  B. DKA
6. Si Loleng ay nagsaing ng sampung gatang.
A. KA  B. DKA
7. Put the sentence in natural order (karaniwang ayos): Si Loleng ay mapagmahal na
kapatid.
A. Mapagmahal na kapatid si Loleng.  B. Kapatid na mapagmahal si Loleng.
8. Put the sentence in natural order: Sina Loleng at Ingga ay walang hilig sa
tsismisan.
A. Walang hilig sa tsismisan sina Loleng at Ingga.
B. Sa tsismisan walang hilig sina Loleng at Ingga.
9. Put the sentence in inverted order (di-karaniwang ayos): Masarap kausap ang
dalagitang si Loleng.
A. Di masarap kausap ang dalagitang si Loleng.
B. Ang dalagitang si Loleng ay masarap kausap.
10. Put the sentence in inverted order: Galing sa Saudi si Berto.
A. Si Berto ay galing sa Saudi.  B. Sa Saudi galing si Berto.
11. Used as the term for the common name of a person, thing, animal, place, or event.
A. Pangngalan (noun)  B. Pantangi (proper)  C. Pambalana (common)
12. The part of speech that refers to the name of a person, thing, animal, place, or
event.
A. Pangngalan  B. Pantangi  C. Pambalana
13. Used as the term for the particular name of a person, thing, animal, place, or
event.
A. Pantangi  B. Pangngalan  C. Pambalana
14. The kind of common noun (pangngalang pambalana) that refers to a group of
people, things, animals, places, or events.
A. Pantangi  B. Pambalana  C. Langkapan
15. The part of speech that refers to action.
A. Pangngalan  B. Pandiwa (verb)  C. Pang-abay (adverb)
16. The part of speech that takes the place of the name of a person, thing, animal,
and so on.
A. Pantangi  B. Pambalana  C. Panghalip (pronoun)
47. tunay
48. wala
49. gaya
50. lalo