
ASSESSMENT OF STUDENT LEARNING

Components of the Educational Process

 Objectives
 Learning Experiences
 Measurement and Evaluation

First Component of the Educational Process: Objectives

Instructional Goals are statements that describe, in general terms, what learners should be able to DO forever after
experiencing a distinct unit of instruction. A goal is broad in nature as an instructional intervention and often
serves as a direct solution to an instructional need.

Learning Outcomes/Instructional Objectives are statements of what students will learn in a class or in a class
session. The statements focus on student learning (What will students learn today?) rather than on
instructor teaching (What am I going to teach today?). These statements should include a verb phrase
and an impact ("in order to") phrase – what students will do or be able to do and how they will apply that
skill or knowledge. Objectives form the backbone of the lesson and are specific in nature, as interventions to
develop specific knowledge and skills.

Examples:

1. To explain the causes of World War I
2. To compare nationalism, colonialism, and militarism
3. To distinguish between propaganda and facts

Corresponding guiding questions:

What were the causes of World War I?
How are nationalism, colonialism, and militarism related?
How can we distinguish between propaganda and facts?

Classifications of Educational Objectives

 Cognitive Domain – development of intellectual abilities and skills.
 Affective Domain – development of interests, attitudes, values, and appreciation.
 Psychomotor Domain – acquisition of manipulative motor skills.

Cognitive Domain

1. Remembering – this level includes objectives related to (a) knowledge of specifics, such as terminology and
facts; (b) knowledge of ways and means of dealing with specifics, such as conventions, trends and sequences,
classifications and categories, criteria and methodology; and (c) knowledge of universals and abstractions,
such as principles, generalizations, theories, and structures.
Example: To identify the capital of a particular country.

2. Understanding – Objectives of this level relate to (a) translation, (b) interpretation, and (c) extrapolation of
materials. Example: To interpret a table showing population density of the world.

3. Applying – Objectives at this level relate to the use of abstractions in particular situations. Example: To predict
the probable effect of a change in temperature on a chemical reaction.

4. Analyzing – Objectives relate to breaking a whole into parts and distinguishing (a) elements, (b)
relationships, and (c) organizational principles. Example: To deduce facts from a hypothesis.

5. Evaluating – Objectives relate to judging in terms of (a) internal evidence or logical consistency and (b)
external evidence or consistency with facts developed elsewhere. Example: To recognize fallacies in an
argument.

6. Creating – This is the highest level of complexity and includes objectives related to putting parts together in a
new form, such as (a) a unique communication, (b) a plan of operation, and (c) a set of abstract relations.
Example: To produce an original piece of art.

AFFECTIVE DOMAIN

1. Receiving – These objectives are indicative of the learner’s sensitivity to the existence of stimuli and include
(a) awareness, (b) willingness to receive, and (c) selective attention. Example: To identify musical
instruments by their sound.

2. Responding – This includes active attention to stimuli such as (a) acceptance, (b) willingness to respond, and (c)
feeling of satisfaction. Example: To contribute to group discussions by asking questions.

3. Valuing – This includes objectives regarding beliefs and evaluations in the form of (a) acceptance, (b)
preference, and (c) commitment. Example: To argue over an issue involving health care.

4. Organization – This level involves (a) conceptualization of values and (b) organization of a value system.
Example: To organize a meeting concerning a neighborhood’s housing integration plan.

5. Characterization – This is the level of greatest complexity and includes behavior related to (a) a generalized
set of values and (b) a characterization of philosophy of life. Example: To demonstrate in front of a
government building on behalf of a cause or idea.

PSYCHOMOTOR DOMAIN

1. Reflex movements – Objectives relate to (a) segmental reflexes and (b) intersegmental reflexes (involving
more than one spinal segment).
Example: To contract a muscle.

2. Fundamental movements – Objectives relate to (a) walking, (b) running, (c) jumping, (d) pushing, (e)
pulling, and (f) manipulating. Example: To run a 100-yard dash.

3. Perceptual abilities – Objectives relate to (a) kinesthetic, (b) visual, (c) auditory, (d) tactile, and (e)
coordination abilities. Example: To distinguish distant and close sounds.

4. Physical abilities – Objectives relate to (a) endurance, (b) strength, (c) flexibility, (d) agility, (e) reaction-
response time, and (f) dexterity. Example: To do five sit-ups.

5. Skilled movements – Objectives relate to (a) games, (b) sports, (c) dances; and (d) the arts. Example: To
dance the basic steps of the waltz.

6. Nondiscursive communication – Objectives relate to expressive movement through (a) posture, (b)
gestures, (c) facial expressions, and (d) creative movements. Example: To act a part in a play.

How to write Learning Objectives?


Bloom's Taxonomy of Educational Objectives (published in 1956 and revised in 2001) gives a way to express
learning outcomes in a way that reflects cognitive skills.

There are six levels (lowest to highest cognitive skills):

 Knowledge/remembering

 Comprehension/understanding
 Application/applying
 Analysis/analyzing
 Evaluation/evaluating
 Synthesis/creating

Bloom's taxonomy can be used to identify verbs to describe student learning. Examples of learning outcomes verbs
for library instruction include:

 Knowledge/Remembering: define, list, recognize


 Comprehension/Understanding: characterize, describe, explain, identify,
locate, recognize, sort
 Application/Applying: choose, demonstrate, implement, perform
 Analysis/Analyzing: analyze, categorize, compare, differentiate
 Evaluation/Evaluating: assess, critique, evaluate, rank, rate
 Synthesis/Creating: construct, design, formulate, organize, synthesize

There are some verbs to avoid when writing learning outcomes. These verbs are vague and often not observable
or measurable. For example, how would you measure whether someone has "become familiar with" a particular
tool? Use a more specific verb. If you want students to "understand" something, think more closely about what
you want them to be able to do or produce as a result of their "understanding."

Verbs to avoid:

 Understand
 Appreciate
 Know about
 Become familiar with
 Learn about
 Become aware of
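The guidance above can be turned into a rough automated self-check for draft outcomes. The sketch below is purely illustrative (the function name and messages are invented here); it only flags outcomes that open with one of the vague verbs listed above:

```python
# Hypothetical sketch: flag vague verbs in draft learning outcomes.
# The vague-verb list mirrors the "verbs to avoid" above; names are illustrative.
VAGUE_VERBS = {"understand", "appreciate", "know about",
               "become familiar with", "learn about", "become aware of"}

def check_outcome(outcome: str) -> str:
    """Return a warning if an outcome starts with a vague, unmeasurable verb."""
    text = outcome.lower().removeprefix("to ").strip()
    for verb in VAGUE_VERBS:
        if text.startswith(verb):
            return f"Vague verb '{verb}': choose an observable verb instead."
    return "OK: verb appears observable."

print(check_outcome("To understand World War I"))          # flagged
print(check_outcome("To list the causes of World War I"))  # passes
```

A filter like this cannot judge whether an objective is truly measurable; it only catches the most common vague openings.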

How to Use Learning Outcomes in Teaching?

Learning outcomes can be used to assess student learning through worksheets or one-minute papers in which
students demonstrate that they have achieved the learning outcome.

Third Component of the Educational Process: Assessment, Measurement, and Evaluation

Classroom Assessment

 is an ongoing process of gathering and analyzing evidence of what the student can do. (Kay Burke)
 is a formal attempt to determine student’s status with respect to educational variables of interest. (W.
James Popham)
 is concerned with obtaining information about the skills and potentials of individuals, with the dual goals of
providing useful feedback to the individuals and useful data to the surrounding community. (Gardner)
 is a word that embraces diverse kinds of tests and measurements.(W. James Popham)

Examples of variables of interest

 Performance of students in a subject.


 Skills of students in performing long division of real numbers.
 Attitude of students toward a subject.
 Math anxiety of students
 Written communication skills of students.

Measurement

is the assigning of marks, numbers, or ratings to certain characteristics of an individual. This process takes
place before evaluation.

Evaluation
is the process of interpreting the evidence and making judgments and decisions based on the evidence.

[Diagram: relationship among measurement, assessment, and evaluation]

Standardized Test
is an instrument that contains a set of items that are administered and scored according to uniform
scoring standards. The test has been pilot tested and administered to a representative population of similar
individuals to obtain normative data.

Standardized in four areas


 Format
 Questions
 Instructions
 Time Allotment

Two Standardized Assessment Strategies

Norm-referenced Test (NRT)


- measures a student’s level of achievement at a given point in time compared to that of other students.

Criterion referenced Test (CRT)


- assesses a student’s proficiency on the basis of a predetermined standard. The performance of the student is
related to a set of behavioral objectives or references. The test is used to determine what students know
or can do in a specific domain of learning.

Types of Standardized Tests

Intelligence Tests - are used for specific testing or placement of students.


Examples: Stanford Binet (SB) Intelligence Test
Wechsler Intelligence Scale for Children (WISC)
Achievement Tests - are tests that measure the knowledge and skills of students. They measure the amount
learned over a certain period of time.
Examples: Diagnostic Tests and Competency Tests

Aptitude Tests - are tests used to predict achievement and to provide information for college
admission/entrance decisions.

Personality Tests - are generally used for special placement of students with learning problems or
adjustment problems.

Non-standardized Test
is usually referred to as a teacher-made test or classroom test. Such tests have not been pilot tested on
several sample populations and therefore are not accompanied by normative data.

Types of Evaluation

1. Placement Evaluation – helps to determine student placement or categorization before instruction begins. It is
sometimes called pre-assessment; hence, it takes place before instruction.

2. Diagnostic Evaluation - is often administered at the beginning of a course, quarter, semester, or year to
assess the skills, abilities, interests, levels of achievement, or difficulties of one student or a class. It can be
done formally or informally and is not included in the grade. Diagnostic tools include items such as pre-tests,
writing samples, problem-solving exercises, skills tests, attitude surveys, or questionnaires.

Uses of Diagnostic Evaluations

 Modify programs
 Determine causes of learning difficulties
 Determine the appropriate level of placement of a student
 Determine how far the student has progressed after a period of one year.
 Use as baseline data to find out where students are before a teacher tries a new intervention to produce
desired results

3. Formative Evaluation – monitors progress during the learning process and provides meaningful and immediate
feedback as to what students have to do to achieve standards. It takes place at various points during the
teaching-learning process, while modifications can still be made. Instruction can be modified, based on the
feedback that formative evaluation yields, to move ahead more rapidly.

4. Summative Evaluation – takes place at the end of an instructional unit or course. It is designed to determine the
extent to which the instructional objectives have been achieved by students and it is used primarily to certify or
to grade outcomes. It is also used to judge the effectiveness of a teacher or a particular curriculum or program.

Principles of High Quality Classroom Assessment/Evaluation

Principle 1. Clarity of Learning Targets

 A sound assessment reflects the learning targets, the degree of emphasis given to different topics, and the
difficulty of the learning targets
 Targets should be clear and appropriate
 Targets should reflect what the students should know and be able to do.
 Targets should be specific/observable
 Targets should center on what is truly important

Principle 2. Appropriateness of assessment methods

 Objective tests
Supply Answers (short answer and completion)
Objective Selection (multiple choice, matching type and true or false)
Essay (restricted response and extended response)

 Performance
Products: papers, projects, journals, exhibitions, portfolios, reflections, graphs, tables, illustrations,
spreadsheets, web pages, etc.
Skills: speeches, demonstrations, debates, recitals, dramatic reading, athletics, keyboarding, etc.
 Oral questioning
Informal questioning, examinations, conferences, interviews, etc.
 Observations
Informal and formal
 Self-reports
Attitude survey, Sociometric devices, questionnaires, inventories, etc.

Modes of Assessment
Mode: Traditional
Description: The paper-and-pencil test used in assessing knowledge and thinking skills.
Examples: Standardized and teacher-made tests.
Advantages: Scoring is objective; administration is easy because students can take the test at the same time.
Disadvantages: Preparation of the instrument is time consuming; prone to guessing and cheating.

Mode: Performance
Description: A mode of assessment that requires actual demonstration of skills or creation of products of learning.
Examples: Practical tests, oral and aural tests, projects, etc.
Advantages: Preparation of the instrument is relatively easy; measures behavior that cannot be faked.
Disadvantages: Scoring tends to be subjective without rubrics; administration is time consuming.

Principle 3. Balance

 A balanced assessment sets targets in all domains of learning (cognitive, affective, and psychomotor) or
domains of intelligence (verbal-linguistic, logical-mathematical, bodily-kinesthetic, visual-spatial, musical-
rhythmic, interpersonal-social, intrapersonal-introspection, physical world-natural, existential-spiritual).
 involves all students in the class (both those who volunteer and those who don’t, high and low ability,
students who are near to and far from the teacher, and both male and female).
 makes use of both traditional and alternative assessment.

Principle 4. Validity

Validity – is the degree to which the assessment instrument measures what it intends to measure. It is a
characteristic that refers to the appropriateness of the inferences, uses, and consequences that result
from the assessment. It is the most important criterion of a good assessment instrument.

Ways of establishing validity

Face Validity – this is done by examining the physical appearance of the instrument.

Content Validity – this is done through a careful and critical examination of the objectives of assessment so
that it reflects the curricular objectives.

Criterion-related Validity – this is established statistically such that the set of scores revealed by the measuring
instrument is correlated with the scores obtained from another external predictor or measure. It has two
forms:

Concurrent Validity – describes the present status of the individual by correlating the sets of scores
obtained from two measures given concurrently.

Predictive Validity – describes the future performance of an individual by correlating the sets of scores
obtained from two measures given at a longer time interval.

Construct Validity – this is established statistically by comparing psychological traits or factors that
theoretically influence scores in a test.

Convergent Validity – established if the instrument correlates with measures of traits similar to the one it is
intended to measure (e.g., a Critical Thinking Test may be correlated with a Creative
Thinking Test).

Divergent Validity – established if the instrument describes only the intended trait and not other traits
(e.g., a Critical Thinking Test may not be correlated with a Reading Comprehension Test).
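Criterion-related validity, as described above, comes down to correlating two sets of scores. A minimal sketch using the Pearson product-moment correlation (the score lists below are invented for illustration):

```python
import math

# Sketch of criterion-related validity: correlate scores on a new test with
# scores on an external criterion measure. All scores below are invented.

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

new_test  = [12, 15, 18, 20, 25, 28, 30, 33]   # scores on the new instrument
criterion = [40, 44, 50, 55, 60, 66, 70, 78]   # scores on the criterion measure
print(round(pearson_r(new_test, criterion), 3))  # close to 1: strong evidence
```

Given concurrently, such a correlation supports concurrent validity; given at a longer time interval, predictive validity.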
Principle 5. Reliability

Reliability
A test is reliable if similar results are obtained after administering it on two different occasions; that is,
consistent scores are obtained by the same person when retested using the same instrument.

Types of Reliability

 Test of stability
 Test of equivalence
 Test of internal consistency
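For tests scored dichotomously (1 = correct, 0 = wrong), one common internal-consistency estimate is the Kuder-Richardson formula 20 (KR-20). The sketch below is illustrative; the response matrix is invented:

```python
# Sketch of one internal-consistency estimate, KR-20, for a test scored
# 1 (correct) / 0 (wrong). The response matrix below is invented.

def kr20(responses):
    """responses: list of examinees, each a list of 0/1 item scores."""
    n_items = len(responses[0])
    totals = [sum(person) for person in responses]
    mean_t = sum(totals) / len(totals)
    var_t = sum((t - mean_t) ** 2 for t in totals) / len(totals)
    pq = 0.0
    for i in range(n_items):
        p = sum(person[i] for person in responses) / len(responses)  # item difficulty
        pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - pq / var_t)

data = [[1, 1, 1, 1], [1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]]
print(round(kr20(data), 3))  # -> 0.8 for this strongly ordered response pattern
```

Values nearer 1 indicate that the items behave consistently with one another.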

Principle 6. Fairness

 Students have knowledge of the learning targets and assessments


 Students have equal opportunity to learn
 Students possess the prerequisite knowledge and skills

 Students are free from teacher stereotypes
 Students are free from bias in assessment tasks and procedures

Principle 7. Practicality and Efficiency

 Teacher familiarity with the method


 Time required
 Complexity of administration
 Ease of scoring
 Ease of interpretation

Principle 8. Assessment should be a continuous process

 Takes place in all phases (before, during & after)

Activities occurring prior to instruction


o Understanding of students’ backgrounds, interests, skills, and abilities as they apply across a range
of learning domains and/or subject areas.
o Understanding students’ motivations and their interests in specific class content
o Clarifying & articulating the performance outcomes expected of pupils; and
o Planning instruction for individuals or groups of students

Activities occurring during instruction


o Monitoring pupils’ progress toward instructional objectives
o Identifying gains and difficulties pupils are experiencing in learning and performing
o Adjusting instruction
o Giving contingent, specific, & credible praise and feedback;
o Motivating students to learn; and
o Judging the extent of pupil attainment of instructional outcomes

Activities occurring after the appropriate instructional segment

o Describing the extent to which each student has attained both short & long-term instructional goals.
o Communicating strengths and weaknesses based on assessment results to students, & parents or
guardians.
o Recording & reporting assessment results for school-level analysis, evaluation, & decision-making.
o Analyzing assessment information gathered before & during instruction to understand each student’s
progress to date and to inform future instructional planning.
o Evaluating the effectiveness of the curriculum and materials in use.

Principle 9. Authenticity

Features of Authentic Assessment


o Meaningful performance task
o Clear standards and public criteria
o Quality products and performance
o Positive interaction between the assessee and assessor
o Emphasis on meta-cognition and self-evaluation
o Learning that transfers

Principle 11. Positive consequences

 On Student
Assessment should have positive consequences for students; that is, it should
motivate them.
 On teachers
Assessment should have positive consequences for teachers; that is, it should help them improve the
effectiveness of their instruction

Principle 12. Ethics

 Teachers should free the students from harmful consequences of misuse and overuse of various
assessment procedures such as embarrassing them and violating their right to confidentiality
 Teachers should be guided by laws and policies that affect their classroom assessment.
 Administrators and teachers should understand that it is inappropriate to use standardized student
achievement tests to measure teaching effectiveness.

Test Construction

General Considerations in Writing Test Items

1. Carefully define your instructional objectives.


2. Prepare a test plan
 Prepare a table of specification
 Select the type of test to be used
3. Prepare more items than actually needed.
4. Prepare the items well in advance to have time to review and edit.
5. Prepare scoring key in advance.
6. Control the test length so that it is a power test rather than a speed test.
7. Construct tests that are clear and unambiguous. Items should be written at a low reading level.
8. Write test items so that one item does not provide a clue to the answer of another. Each test item should
be independent of all other items.
9. Write a test item whose answer is one that would be agreed upon by the experts.
10. Items should be arranged in ascending order of difficulty.
11. Be sure that each item deals with an important aspect of the content area and not with trivia.
12. Avoid replication of the textbook in writing test items; don’t quote directly from textual materials. The test
is not meant to test how well the student memorized the text.
13. Ordinarily, half or more of the items should be above the knowledge level of the taxonomy. Include items
that require higher levels of thinking.
14. Directions must be clear and simple.
15. Perform item analysis to evaluate individual item.
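Item analysis (point 15) is often done with two classical indices: a difficulty index (the proportion answering correctly) and a discrimination index (upper-group proportion minus lower-group proportion). The sketch below is illustrative; the response data are invented:

```python
# Sketch of classical item analysis: a difficulty index (proportion correct)
# and a discrimination index (upper group minus lower group). Data invented.

def item_analysis(upper, lower):
    """upper/lower: 0/1 scores on one item for the top and bottom scorers."""
    p_upper = sum(upper) / len(upper)
    p_lower = sum(lower) / len(lower)
    difficulty = (p_upper + p_lower) / 2   # share answering correctly
    discrimination = p_upper - p_lower     # should be clearly positive
    return difficulty, discrimination

# 10 top scorers vs 10 bottom scorers on one item
upper = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
lower = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
diff, disc = item_analysis(upper, lower)
print(diff, disc)  # 0.6 0.6: moderately difficult, discriminates well
```

Items with near-zero or negative discrimination are the ambiguous or miskeyed ones that revision should target.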

Power Tests – are tests in which every student has adequate time to complete each item arranged in increasing
difficulty, designed to measure level of performance under ample time conditions.

Speed Tests – are tests in which not all the examinees have enough time to respond to all questions. It is
designed to measure the number of items an individual can complete in a given time.

General Considerations in Writing Objective Test Items


1. Test for facts and knowledge
2. Tailor the questions to fit the examinee’s age and ability level as well as the purpose of the test.
3. Write item as clearly as possible.
4. Avoid lifting statements directly from the textbook.
5. Avoid using interrelated items.
6. There should be only one correct answer
7. Avoid a regular pattern of correct responses.

Multiple-Choice Items
are unique among objective test items because they can measure the higher levels of the Taxonomy of
Educational Objectives (Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating).

Suggestions for Writing Multiple-Choice Items


1. The stem of the item may be constructed in question form.
2. The stem should be clear. Avoid awkward stems.
3. The question should not be trivial. There should be consensus on its answer.
4. Questions that tap only rote learning and memory should be avoided.
5. Avoid lifting statements directly from the textbook.
6. There should be one and only one correct or clearly best answer.
7. Options must be plausible; that is, they should be as closely related to each other as possible.

8. Eliminate unintentional grammatical clues, and keep the length and form of all the answer choices equal.
Grammatical inconsistencies such as a or an give clues to the correct answer and will help those students
who are not well prepared for the test.
9. Rotate the position of the correct answer from item to item randomly.
10. Options should be arranged according to length; from shortest to longest or longest to shortest.
11. Include from three to five options to optimize testing of knowledge rather than encouraging guessing.
12. Increase the similarity of contents among the options.
13. Use the option “none of the above” sparingly and only when the keyed answer can be classified
unequivocally as right or wrong.
14. Avoid using “all of the above.”

Suggestions for Writing True-False Items


1. Statements should be brief and simple.
2. Avoid exact textbook wording.
3. Construct statements that are definitely true or definitely false, without additional qualifications.
4. Keep the numbers of true and false statements approximately equal.
5. Avoid using double-negative statements.
6. Avoid trick questions.
7. Start with a false statement, since it is a common observation that the first statement in this type of test
is almost always true.
8. Avoid patterns in the correct answers.

Suggestions for Writing Matching Items


1. Keep both the list of items and the list of options fairly short and homogeneous.
2. Place all the items and options for one matching exercise on the same page.
3. The list of items on the left should contain the longer phrases or statements, whereas the options on the
right side should consist of short phrases, words, or symbols.
4. Limit to not more than 15 items per exercise.
5. Each item in the list should be numbered and the list of options should be identified by letter.
6. Place items on the left and options on the right.
7. Include more options than items.

Suggestions for Writing Completion Items


1. Items should require a single-word answer or a brief and definite statement.
2. Avoid over-mutilated statements (statements with too many blanks).
3. Omit only key or important words; don’t eliminate so many elements that the sense of the content is
impaired.
4. Word the statement such that the blank is near the end of the sentence rather than near the beginning.
This will prevent awkward sentences.
5. If the problem requires a numerical answer, indicate the units in which it is to be expressed.
6. Avoid lifting statements directly from the textbook.
7. Avoid grammatical clues to the correct answer.
8. Use this type of test only when there is a clearly identifiable answer.

Advantages and Disadvantages of Objective Type of Tests

Advantages of True-False Test

1. Because questions tend to be short, more material can be covered than with any other item format. Thus,
T-F items tend to be used when a great deal of content has been covered.
2. Questions take less time to construct, but one should avoid taking statements directly from the text and
modifying them slightly to create an item.
3. Scoring is easier with T-F questions, but one should avoid having students write out “true” or “false”;
instead, provide a “T” and an “F” to circle for each item.

Disadvantages of True-False Test

1. Questions tend to emphasize rote memorization of knowledge, although sometimes complex questions can
be asked using T-F items.
2. Presume that the answer to the question or issue is unequivocally true or false. It is unfair to ask
the student to guess at the teacher’s criteria for evaluating the truth of a statement.

3. Allow for and sometimes encourage a high degree of guessing. Generally, longer examinations are
needed to compensate for this.

Advantages of Matching Test


1. Usually simple to construct and score.
2. Ideally suited to measure associations between facts.
3. Can be more efficient than multiple-choice questions because they avoid repetition of options in measuring
associations.
4. Reduce the effects of guessing

Disadvantages of Matching Test


1. Sometimes tend to ask students for trivial information.
2. Emphasize memorization.
3. Most commercial answer sheets can accommodate no more than five options, thus limiting the size of any
particular matching item.

Advantages of Multiple-Choice Test


1. Have considerable versatility in measuring objectives from the knowledge to the evaluation level.
2. Since writing is minimized, a substantial amount of course material can be sampled in a relatively short
time.
3. Scoring is highly objective, requiring only a count of the number of correct responses.
4. Can be written so that students must discriminate among options that vary in degree of correctness. This
allows the students to select the best alternative and avoids the absolute judgment found in T-F tests.
5. Effects of guessing are reduced.
6. Amenable to item analysis which permits a determination of which items are ambiguous or too difficult.

Disadvantages of Multiple-Choice Test


1. Multiple-choice questions can be time consuming to write.
2. If not carefully written, it can sometimes have more than one defensible correct answer.

Advantages of Completion Test


1. Construction is relatively easy.
2. Guessing is eliminated since the questions require recall.
3. It takes less time to complete than multiple-choice items, so a greater proportion of content can be
covered.

Disadvantages of Completion Test


1. Usually encourage a relatively low level of response complexity.
2. The response can be difficult to score since the stem must be general enough so as not to communicate
the correct answer. This can unintentionally lead to more than one defensible answer.
3. The restriction of an answer to a few words tends to measure the recall of specific facts, names, places
and events as opposed to mastery of more complex concepts.

Suggestions for Writing Good Essay Tests


1. Restrict the use of essay questions to those learning outcomes that cannot be satisfactorily measured by
objective items.
2. Focus the question.
3. Avoid optional questions.
4. Indicate the approximate time limit or the number of points for each question.
5. Several shorter questions are preferable to one long one, but avoid questions that merely call for recall,
such as those beginning with “list,” “who,” “what,” or “where.”
6. Prepare a scoring key in advance.
6. Prepare a scoring key in advance.

Suggestions for Improving the Objectivity of Scoring Essay Tests


1. Score one question at a time for all examinees.
2. Make a list of the points you expect to find before scoring.
3. Keep the identity of the examinee anonymous while scoring
4. Assign specific values to key points expected.

Limitations of the Essay Test


1. Reader Unreliability – a major problem with essay examinations is the lack of consistency in judgments
among competent raters, that is, the lack of scorer or reader reliability. Some readers
are more lenient in grading than others. Raters differ in standards and also in the distribution of grades
throughout the scale.
2. The Halo Effect – the tendency, in evaluating the essay, to be influenced by another characteristic of the
writer or by one’s general impression of that person.
3. Item-to-item Carryover Effects – the rater acquires an impression of the student’s knowledge on the
initial item that “colors” his or her judgment of the second item. In other words, the student’s response to
the first question influences the rater’s judgment of the response to the second question.
4. Test-to-test Carryover Effects – the grade assigned to a paper tends to be influenced greatly by the
grade given to the immediately preceding paper “A C paper may be graded B if it is read after an illiterate
theme but if it follows an A paper, it seems to be of D caliber.”
5. Order Effects – the order in which a paper appears also affects scores. Several studies observed that
papers read earlier tended to receive higher ratings than those read near the end of the sequence.
Perhaps readers become weary, and in this physical and mental condition nothing looks quite as good
as it otherwise might.
6. Language Mechanics Effects – investigators have found that teachers were unable to rate essay
responses to social studies questions on content alone, independent of errors in spelling, punctuation,
and grammar. Good and poor handwriting were also found to affect the ratings of essay responses.

Table of Specification
 It is a plan prepared by a classroom teacher as a basis for test construction, especially a periodic test.
 It can also provide assurance that the test will measure a representative sample of the instructional
objectives and the content included in instruction.

Two kinds of Commonly Prepared Tables of Specification


1. One-way Table of Specification – only the contents are listed down the left side, together with the number
of class sessions, the number of items, and the item placement.
2. Two-way Table of Specification – the instructional objectives are listed across the top and, just like in the
one-way table, the content areas are listed down the left side.

ONE-WAY TABLE OF SPECIFICATION

Example: Subtraction of Whole Numbers


CONTENTS                                    No. of Class Sessions   No. of Items   Item Placement
1. Subtraction Concepts                               4                   5             1-5
2. Subtraction as the Inverse of Addition             4                   5             6-10
3. Subtraction without Regrouping                     8                  10            11-20
4. Subtraction with Regrouping                        5                   6            21-26
5. Subtraction Involving Zeros                        8                  10            27-36
6. Mental Computation through Estimation              5                   6            37-42
7. Problem Solving                                    6                   8            43-50
TOTAL                                                40                  50             1-50

Example: Two-way table of specifications in high school Physics, indicating content, number of class sessions
and objectives to be tested.

CONTENTS CS R U Ap An Ev C T Item Placement

1. Conversion of Units 1 1 1 1 1 1 6 1, 2, 4, 20, 21 & 30
2. Speed and Velocity 1 1 2 1 1 6 3, 5, 10, 27, 29 & 31
3. Acceleration 1 1 1 1 1 4 6, 7, 19 & 25
4. Free Falling Body 1 1 2 8, 9
5. Projectile 1 1 2 10, 11
6. Force 1 1 2 13, 14
7. Vector 1 1 1 1 4 15, 16, 17 & 18
8. Work, Energy, and Power 1 2 1 1 1 6 22, 23, 26, 32 & 33
9. Conservation of Energy 1 1 1 1 4 24, 34, 35 & 36
10. Conservation of Momentum 1 1 1 1 4 37, 38, 39 & 40
TOTAL 4 4 5 7 8 7 6 40

 Legend:
CS = number of class sessions
R = remembering
U = understanding
Ap = applying
An = analyzing
Ev = evaluating
C = creating
T = total numbers of items

Steps in Preparing Table of Specifications

1. Make a content outline of the subject matter that has been taught over a definite period of time.
2. Determine the objectives, specific knowledge or skills to be tested.
3. Decide the number of items to be constructed.
4. Find the number of class sessions.
5. Find out the total number of items for each subject matter.
Tt = (Nr × Td) / Tr

Where: Tt = Total number of items for each topic


Nr = Number of recitations/sessions
Tr = Total number of recitations/sessions
Td = Total number of items desired
6. Distribute the number of items.
7. Construct the test items by observing the principles of test construction.
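Step 5's formula can be checked mechanically. The sketch below (function name invented) applies Tt = (Nr × Td) / Tr to the subtraction-unit data from the one-way table of specification above; note that Python's round() uses half-to-even ("banker's") rounding, so 7.5 rounds to 8 here:

```python
# Sketch of step 5: allocate items to topics with Tt = (Nr × Td) / Tr,
# using the subtraction-unit data from the one-way table above.

def allocate_items(sessions, total_items):
    """Round each topic's share of the desired total number of items."""
    total_sessions = sum(sessions)  # Tr
    return [round(n * total_items / total_sessions) for n in sessions]

sessions = [4, 4, 8, 5, 8, 5, 6]     # class sessions per topic, Tr = 40
print(allocate_items(sessions, 50))  # [5, 5, 10, 6, 10, 6, 8], totaling 50
```

The result matches the item counts in the one-way table, which is exactly the check a teacher performs before distributing the items (step 6).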

