Objectives
Learning Experiences
Measurement and Evaluation
Instructional Goals are statements that describe in general what learners should be able to DO after
experiencing a distinct unit of instruction. A goal is broad in nature as an instructional intervention and
often serves as a direct solution to an instructional need.
Learning outcomes/Instructional Objectives are statements of what students will learn in a class or in a class
session. The statements are focused on student learning (What will students learn today?) rather than
instructor teaching (What am I going to teach today?). These statements should include a verb phrase
and an impact ("in order to") phrase -- what students will do/be able to do and how they will apply that
skill or knowledge. They form the backbone of the lesson and are specific in nature, as interventions to develop
specific knowledge and skills.
Examples:
COGNITIVE DOMAIN
1. Remembering – this level includes objectives related to (a) knowledge of specifics, such as terminology and
facts; (b) knowledge of ways and means of dealing with specifics, such as conventions, trends and sequences,
classifications and categories, criteria and methodology; and (c) knowledge of universals and abstractions,
such as principles, generalizations, theories, and structures.
Example: To identify the capital of a particular country.
2. Understanding – Objectives of this level relate to (a) translation, (b) interpretation, and (c) extrapolation of
materials. Example: To interpret a table showing population density of the world.
3. Applying – Objectives at this level relate to the use of abstraction in particular situations. Example: To predict
the probable effect of a change in temperature on a chemical.
4. Analyzing – Objectives relate to breaking a whole into parts and distinguishing (a) elements, (b)
relationships, and (c) organizational principles. Example: To deduce facts from a hypothesis.
5. Evaluating – Objectives relate to judging in terms of (a) internal evidence or logical consistency and (b)
external evidence or consistency with facts developed elsewhere. Example: To recognize fallacies in an
argument.
6. Creating – This is the highest level of complexity and includes objectives related to putting parts together in a
new form such as (a) a unique communication, (b) a plan of operation, and (c) a set of abstract relations.
Example: To produce an original piece of art.
AFFECTIVE DOMAIN
1. Receiving – These objectives are indicative of the learner’s sensitivity to the existence of stimuli and include
(a) awareness, (b) willingness to receive, and (c) selective attention. Example: To identify musical
instruments by their sound.
2. Responding – This includes active attention to stimuli such as (a) acceptance, (b) willingness to respond, and (c)
feeling of satisfaction. Example: To contribute to group discussions by asking questions.
3. Valuing – This includes objectives regarding beliefs and evaluations in the form of (a) acceptance, (b)
preference, and (c) commitment. Example: To argue over an issue involving health care.
4. Organization – This level involves (a) conceptualization of values and (b) organization of a value system.
Example: To organize a meeting concerning a neighborhood’s housing integration plan.
5. Characterization – This is the level of greatest complexity and includes behavior related to (a) a generalized
set of values and (b) a characterization of philosophy of life. Example: To demonstrate in front of a
government building on behalf of a cause or idea.
PSYCHOMOTOR DOMAIN
1. Reflex movements – Objectives relate to (a) segmental reflexes (involving one spinal segment) and (b)
intersegmental reflexes (involving more than one spinal segment). Example: To contract a muscle.
2. Fundamental movements – Objectives relate to (a) walking, (b) running, (c) jumping, (d) pushing, (e)
pulling, and (f) manipulating. Example: To run a 100-yard dash.
3. Perceptual abilities – Objectives relate to (a) kinesthetic, (b) visual, (c) auditory, (d) tactile, and (e)
coordination abilities. Example: To distinguish distant and close sounds.
4. Physical abilities – Objectives relate to (a) endurance, (b) strength, (c) flexibility, (d) agility, (e) reaction-
response time, and (f) dexterity. Example: To do five sit-ups.
5. Skilled movements – Objectives relate to (a) games, (b) sports, (c) dances; and (d) the arts. Example: To
dance the basic steps of the waltz.
6. Non-discursive communication – Objectives relate to expressive movement through (a) posture, (b)
gestures, (c) facial expressions, and (d) creative movements. Example: To act a part in a play.
Knowledge/remembering
Comprehension/understanding
Application/applying
Analysis/analyzing
Evaluation/evaluating
Synthesis/creating
Bloom's taxonomy can be used to identify verbs that describe student learning, including learning-outcome
verbs for library instruction.
There are some verbs to avoid when writing learning outcomes. These verbs are vague and often not observable
or measurable. For example, how would you measure whether someone has "become familiar with" a particular
tool? Use a more specific verb. If you want students to "understand" something, think more closely about what
you want them to be able to do or produce as a result of their "understanding."
Verbs to avoid:
Understand
Appreciate
Know about
Become familiar with
Learn about
Become aware of
Learning outcomes can be used to assess student learning through worksheets or one-minute papers in which
students demonstrate that they have achieved the learning outcome.
Classroom Assessment
is an ongoing process of gathering and analyzing evidence of what the student can do. (Kay Burke)
is a formal attempt to determine student’s status with respect to educational variables of interest. (W.
James Popham)
is concerned with obtaining information about the skills and potentials of individuals, with the dual goals of
providing useful feedback to the individuals and useful data to the surrounding community. (Gardner)
is a word that embraces diverse kinds of tests and measurements.(W. James Popham)
Measurement
is the assigning of marks, numbers or ratings to certain characteristics of an individual. This process takes
place before evaluation.
Evaluation
is the process of interpreting the evidence and making judgments and decisions based on the evidence.
Measurement
Assessment
Evaluation
Standardized Test
is an instrument that contains a set of items that are administered and scored according to uniform
standards. The test has been pilot tested and administered to a representative population of similar
individuals to obtain normative data.
Aptitude Tests - are tests used to predict achievement and to provide information for college
admission/entrance decisions.
Personality Tests - are generally used for special placement of students with learning problems or
adjustment problems.
Non-standardized Test
is usually referred to as a teacher-made test or classroom test; it has not been pilot tested on several
sample populations and therefore is not accompanied by normative data.
Types of Evaluation
1. Placement Evaluation – helps to determine student placement or categorization before instruction begins. It is
sometimes called pre-assessment because it takes place before instruction.
2. Diagnostic Evaluation - is often administered at the beginning of a course, quarter, semester, or year to
assess the skills, abilities, interests, levels of achievement, or difficulties of one student or a class. It can be
done formally or informally and is not included in the grade. Diagnostic tools include items such as pre-tests,
writing samples, problem-solving exercises, skills tests, attitude surveys, or questionnaires.
Modify programs
Determine causes of learning difficulties
Determine the appropriate placement level of a student
Determine how far the student has progressed after a period of one year
Use as baseline data to find out where students are before a teacher tries a new intervention to produce
desired results
3. Formative Evaluation – monitors progress during the learning process and provides meaningful and immediate
feedback on what students have to do to achieve standards. It takes place at various points during the
teaching-learning process, while modifications can still be made. Instruction can be modified, based on the feedback
that formative evaluation yields, to move ahead more rapidly.
4. Summative Evaluation – takes place at the end of an instructional unit or course. It is designed to determine the
extent to which the instructional objectives have been achieved by students and it is used primarily to certify or
to grade outcomes. It is also used to judge the effectiveness of a teacher or a particular curriculum or program.
A sound assessment reflects the learning targets, the degree of emphasis given to different topics, and the
difficulty of the learning targets
Targets should be clear and appropriate
Targets should reflect what the students should know and be able to do.
Targets should be specific/observable
Targets should center on what is truly important
Objective tests
Supply Answers (short answer and completion)
Objective Selection (multiple choice, matching type and true or false)
Essay (restricted response and extended response)
Performance
Products: papers, projects, journals, exhibitions, portfolios, reflections, graphs, tables, illustrations,
spreadsheets, web pages, etc.
Skills: speeches, demonstrations, debates, recitals, dramatic reading, athletics, keyboarding, etc.
Oral questioning
Informal questioning, examinations, conferences, interviews, etc.
Observations
Informal and formal
Self-reports
Attitude survey, Sociometric devices, questionnaires, inventories, etc.
Modes of Assessment
Mode: Traditional
Description: The paper-and-pencil test used in assessing knowledge and thinking skills
Examples: Standardized and teacher-made tests
Advantages: Scoring is objective; administration is easy because students can take the test at the same time
Disadvantages: Preparation of the instrument is time consuming; prone to guessing and cheating

Mode: Performance
Description: A mode of assessment that requires actual demonstration of skills or creation of products of learning
Examples: Practical tests; oral and aural tests; projects, etc.
Advantages: Preparation of the instrument is relatively easy; measures behavior that cannot be faked
Disadvantages: Scoring tends to be subjective without rubrics; administration is time consuming
Principle 3. Balance
A balanced assessment sets targets in all domains of learning (cognitive, affective, and psychomotor) or
domains of intelligence (verbal-linguistic, logical-mathematical, bodily-kinesthetic, visual-spatial, musical-
rhythmic, interpersonal-social, intrapersonal-introspection, physical world-natural, existential-spiritual), and
requires the involvement of all students in the class (both those who volunteer and those who don't, high and
low ability, students who are near and far from the teacher, and both male and female).
makes use of both traditional and alternative assessment.
Principle 4. Validity
Validity – is the degree to which the assessment instrument measures what it intends to measure. It is a
characteristic that refers to the appropriateness of the inferences, uses, and consequences that result
from the assessment. It is the most important criterion of a good assessment instrument.
Face Validity – this is done by examining the physical appearance of the instrument.
Content Validity – this is done through a careful and critical examination of the objectives of assessment so
that it reflects the curricular objectives.
Criterion-related validity – this is established statistically such that a set of scores revealed by the measuring
instrument is correlated with the scores obtained from another external predictor or
measure. It has two types:
Concurrent Validity – it describes the present status of the individual by correlating the sets of scores obtained
from two measures given concurrently
Predictive Validity – this describes the future performance of an individual by correlating the sets of
scores obtained from two measures given at a longer time interval.
Construct Validity – this is established statistically by comparing psychological traits or factors
that theoretically influence scores in a test
Convergent Validity – this is established if the instrument correlates with another measure of a similar trait
(e.g., a Critical Thinking Test may be correlated with a Creative
Thinking Test)
Divergent Validity – this is established if an instrument can describe only the intended trait and not
other traits (e.g., a Critical Thinking Test may not be correlated with a Reading
Comprehension Test)
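Since criterion-related, convergent, and divergent validity are all established by correlating two sets of scores, the computation can be sketched in Python. The score lists below are made-up illustration data, not results from any real test:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two aligned score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: a new test correlated with an established criterion measure
new_test = [12, 15, 9, 20, 17, 11]
criterion = [14, 16, 10, 19, 18, 12]
print(round(pearson_r(new_test, criterion), 2))  # → 0.98
```

A coefficient near 1 would support concurrent validity; for divergent validity one would instead expect a coefficient near 0 against a measure of an unrelated trait.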
Principle 5. Reliability
Reliability
A test is reliable if similar results are obtained after administering it on two different occasions; that is,
consistent scores are obtained by the same person when retested using the same instrument.
Types of Reliability
Test of stability
Test of equivalence
Test of internal consistency
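One common index of internal consistency is Cronbach's alpha, which compares the sum of the item variances with the variance of examinees' total scores. Alpha is not named in the list above, so this is an illustrative assumption; the 0/1 item scores below are hypothetical:

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, aligned by examinee."""
    k = len(item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # each examinee's total
    item_var = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical: 4 items scored 0/1 for 6 examinees
items = [
    [1, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 1, 1, 1, 1, 0],
]
print(round(cronbach_alpha(items), 2))  # → 0.82
```

Higher alpha (closer to 1) indicates that the items hang together as a measure of one trait.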
Principle 6. Fairness
Students are free from teacher stereotypes
Students are free from bias in assessment tasks and procedures
o Describing the extent to which each student has attained both short & long-term instructional goals.
o Communicating strengths and weaknesses based on assessment results to students, & parents or
guardians.
o Recording & reporting assessment results for school-level analysis, evaluation, & decision-making.
o Analyzing assessment information gathered before & during instruction to understand each student's
progress to date and to inform future instructional planning.
o Evaluating the effectiveness of the curriculum and materials in use.
Principle 9. Authenticity
On Students
Assessment should have positive consequences for students; that is, it should
motivate them.
On Teachers
Assessment should have positive consequences for teachers; that is, it should help them improve the
effectiveness of their instruction.
Principle 12. Ethics
Teachers should free the students from harmful consequences of misuse and overuse of various
assessment procedures such as embarrassing them and violating their right to confidentiality
Teachers should be guided by laws and policies that affect their classroom assessment.
Administrators and teachers should understand that it is inappropriate to use standardized student
achievement results to measure teaching effectiveness.
Test Construction
Power Tests – are tests in which every student has adequate time to complete each item arranged in increasing
difficulty, designed to measure level of performance under ample time conditions.
Speed Tests – are tests in which not all the examinees have enough time to respond to all questions. It is
designed to measure the number of items an individual can complete in a given time.
Multiple-Choice Items
unique among objective test items because they can measure higher levels of the Taxonomy of Educational
Objectives (Remembering, Understanding, Applying, Analyzing, Evaluating and Creating).
8. Eliminate unintentional grammatical clues, and keep the length and form of all the answer choices equal.
Grammatical inconsistencies such as a or an give clues to the correct answer and will help those students
who are not well prepared for the test.
9. Rotate the position of the correct answer from item to item randomly.
10. Options should be arranged according to length; from shortest to longest or longest to shortest.
11. Include from three to five options to optimize testing of knowledge rather than encouraging guessing.
12. Increase the similarity of contents among the options.
13. Use the option “none of the above” sparingly and only when the keyed answer can be classified
unequivocally as right or wrong.
14. Avoid using the option "all of the above."
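Guideline 9 (randomly rotating the position of the correct answer) is easy to automate when assembling a test. A minimal sketch, assuming each item stores its option list and the index of the keyed answer:

```python
import random

def shuffle_options(options, key_index, rng=random):
    """Return the options in random order plus the new index of the keyed answer."""
    order = list(range(len(options)))
    rng.shuffle(order)
    shuffled = [options[i] for i in order]
    return shuffled, order.index(key_index)

# Hypothetical item: the keyed answer starts at index 0
options = ["Manila", "Cebu", "Davao", "Baguio"]
shuffled, new_key = shuffle_options(options, key_index=0)
assert shuffled[new_key] == "Manila"  # the key follows its option wherever it lands
```

Tracking the key through the shuffle avoids the common error of randomizing options while the answer sheet still points at the old positions.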
1. Because questions tend to be short, more material can be covered than with any other item format. Thus T-F
items tend to be used when a great deal of content has been covered.
2. Questions take less time to construct, but one should avoid taking statements directly from the text and
modifying them slightly to create an item.
3. Scoring is easier with T-F questions, but one should avoid having students write "true" or "false"; instead,
have them circle a "T" or "F" provided for each item.
1. Questions tend to emphasize rote memorization of knowledge, although sometimes complex questions can
be asked using T-F items.
2. Each item presumes that the answer to the question or issue is unequivocally true or false. It is unfair to ask
the student to guess at the teacher's criteria for evaluating the truth of a statement.
3. T-F items allow for, and sometimes encourage, a high degree of guessing. Generally, longer examinations are
needed to compensate for this.
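Why longer examinations compensate for guessing can be checked directly: on an n-item true-false test, the number of items answered correctly by pure guessing follows a binomial distribution, and the chance of reaching any fixed percentage score shrinks as n grows. A sketch (the 60% cutoff is an arbitrary illustration):

```python
from math import comb

def prob_guess_at_least(n_items, cutoff):
    """P(at least `cutoff` correct on n_items true-false items by pure guessing)."""
    return sum(comb(n_items, k) for k in range(cutoff, n_items + 1)) / 2 ** n_items

# Chance of scoring 60% or better by guessing alone, for tests of increasing length
for n in (10, 20, 50):
    print(n, round(prob_guess_at_least(n, int(0.6 * n)), 3))
```

The probability falls from roughly 38% on a 10-item test to about 10% on a 50-item test, which is the quantitative basis for the advice above.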
1. Rater Differences – some raters are more lenient in grading than others. Raters differ in standards and also
in the distribution of grades throughout the scale.
2. The Halo Effect – the halo effect is the tendency in evaluating the essay, to be influenced by another
characteristic or by one’s general impression of that person.
3. Item-to-item Carryover Effects – the rater acquires an impression of the student’s knowledge on the
initial item that “colors” his or her judgment of the second item. In other words, the student’s response to
the first question influences the rater’s judgment of the response to the second question.
4. Test-to-test Carryover Effects – the grade assigned to a paper tends to be influenced greatly by the
grade given to the immediately preceding paper: "A C paper may be graded B if it is read after an illiterate
theme, but if it follows an A paper, it seems to be of D caliber."
5. Order Effects – the order in which a paper appears also affects scores. It was observed in several studies
that papers read earlier tended to receive higher ratings than those read near the end of the sequence.
Perhaps readers become weary, and in this physical and mental condition nothing looks quite as good
as it otherwise might.
6. Language Mechanics Effects – investigators have found that teachers were unable to rate essay
responses to Social Studies questions on content alone, independent of errors in spelling, punctuation and
grammar. It was also found that good and poor handwriting affect ratings of essay responses.
Table of Specification
It is a plan prepared by a classroom teacher as a basis for test construction especially a periodic test.
It can also provide assurance that the test will measure a representative sample of the instructional
objectives and the content included in instruction.
Example: Two-way table of specifications in high school Physics, indicating content, number of class sessions
and objectives to be tested.
[Only the last row and the totals of the sample table survive: topic 10, Conversion of Momentum, is allotted 4
items (item placement 37, 38, 39 & 40); the totals row shows 40 items in all.]
Legend:
CS = number of class sessions
R = remembering
U = understanding
Ap = applying
An = analyzing
Ev = evaluating
C = creating
T = total number of items
1. Make a content outline of the subject matter that has been taught over a definite period of time.
2. Determine the objectives, specific knowledge or skills to be tested.
3. Decide the number of items to be constructed.
4. Find the number of class sessions.
5. Find out the total number of items for each subject matter.
Tt = (Nr x Td) / Tr

where Tt = number of test items for the topic, Nr = number of class sessions for the topic, Td = total number
of items desired, and Tr = total number of class sessions.
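Step 5 can be sketched as a short computation, assuming the formula allocates items in proportion to class sessions (Tt = Nr x Td / Tr, with Nr the sessions for a topic, Td the total items desired, and Tr the total sessions). The Physics topics and session counts below are hypothetical:

```python
def items_per_topic(sessions, total_items):
    """Allocate test items in proportion to class sessions: Tt = Nr * Td / Tr."""
    total_sessions = sum(sessions.values())  # Tr
    return {topic: round(nr * total_items / total_sessions)
            for topic, nr in sessions.items()}

# Hypothetical session counts for a Physics unit, with a 40-item test desired
sessions = {"Kinematics": 6, "Forces": 4, "Work and Energy": 6, "Momentum": 4}
print(items_per_topic(sessions, total_items=40))
# → {'Kinematics': 12, 'Forces': 8, 'Work and Energy': 12, 'Momentum': 8}
```

Because of rounding, the per-topic counts may not always sum exactly to the desired total; when they do not, adjust the topic with the largest rounding remainder by hand.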