
BLOOM'S TAXONOMY OF EDUCATIONAL OBJECTIVES

1. COGNITIVE DOMAIN calls for outcomes of mental activity such as memorizing, reading, problem solving, analyzing, synthesizing, and drawing conclusions.
2. AFFECTIVE DOMAIN refers to a person's awareness and internalization of objects and stimulation; it focuses on emotions.
3. PSYCHOMOTOR DOMAIN focuses on the physical and kinesthetic skills of the learner. This domain is characterized by progressive levels of behavior from observation to mastery of physical skills.

Bloom's Cognitive Taxonomy

Bloom identified six levels within the cognitive domain, from simple recall of facts as the lowest level, through increasingly complex and abstract mental levels, to the highest level, which is classified as evaluation. Verb samples for stating specific learning outcomes that represent intellectual activity at each level are presented here.

1. Knowledge – recognizes students' ability to use rote memorization and recall facts. Verb samples: define, name, recognize, repeat, list, label, memorize, select, cite, reproduce, state.
2. Comprehension – involves students' ability to read subject matter, extrapolate and interpret important information, and put ideas into their own words. Verb samples: describe, classify, explain, discuss, express, identify, translate, restate, review, give examples, interpret, summarize. Test questions should focus on the use of facts, rules, and principles.
3. Application – students take a new concept and apply it to another situation. Verb samples: construct, arrange, compute, discover, show, relate, produce, prepare, predict, solve, dramatize, interpret. Test questions focus on applying facts or principles.
4. Analysis – students have the ability to take new information and break it down into parts to differentiate between them. Sample verbs: determine, differentiate, distinguish, estimate, point out, discriminate, categorize, compare, criticize, examine, experiment, debate. Test questions focus on the separation of a whole into its component parts.
5. Synthesis – creating a pattern where one did not previously exist.
Sample verbs: assemble, compose, create, formulate, plan, prepare, design,
reorganize, rewrite, rearrange, propose, set up.
6. Evaluation – involves students' ability to look at someone else's ideas or principles and see the worth of the work and the value of the conclusions. Sample verbs: conclude, justify, criticize, assess, judge, predict, rate, evaluate, select, choose, support, compare, argue, appraise. Test questions focus on developing opinions, judgments, or decisions.

Krathwohl's Affective Taxonomy refers to a person's awareness and internalization of objects and stimulation.

Anderson and Krathwohl (2001) revised Bloom's original taxonomy by combining the cognitive process and knowledge dimensions. The levels of the affective domain, from lowest to highest, are as follows.

1. Receiving – listens to ideas. Verb samples: identify, select, give, listen to ideas
2. Responding – answers questions about ideas. Verb samples: recite, select, tell, write, assist, present
3. Valuing – thinks about how to take advantage of ideas and is able to explain them well. Verb samples: explain, follow, initiate, justify, propose
4. Organizing – commits to using ideas and incorporates them into activity. Verb samples: prepare, follow, explain, relate, synthesize, integrate, join, generalize
5. Characterizing – incorporates ideas completely into practice and is recognized by their use. Verb samples: solve, verify, propose, modify, practice, qualify

Psychomotor Domain

This domain is characterized by progressive levels of behavior, from observation to mastery of physical skills. The levels, from lowest to highest, are as follows.

1. Observing – active mental attending of a physical event.
2. Imitating – attempted copying of a physical behavior.
3. Practicing – trying a specific physical activity over and over.
4. Adapting – fine-tuning; making minor adjustments in the physical activity in order to perfect it.

Factors to Consider when Constructing Good Test Items

A. VALIDITY is the degree to which the test measures what it is intended to measure. It is the usefulness of the test for a given purpose. A valid test is always reliable.
B. RELIABILITY refers to the consistency of scores obtained by the same person when retested using the same instrument or one that is parallel to it.
C. ADMINISTRABILITY – the test should be administered uniformly to all students so that the scores obtained will not vary due to factors other than differences in the students' knowledge and skills. There should be clear instructions for the students, the proctors, and even the one who will check the test (the scorer).
D. SCORABILITY – the test should be easy to score: directions for scoring are clear, and an answer sheet and answer key are provided.
E. APPROPRIATENESS – the test items that the teacher constructed must assess the exact performances called for in the learning objectives. Each test item should require the same performance of the student as specified in the learning objectives.
F. ADEQUACY – the test should contain a wide sampling of items to determine the educational outcomes or abilities, so that the resulting scores are representative of the total performance in the areas measured.
G. FAIRNESS – the test should not be biased against any examinee. It should not be offensive to any examinee subgroup. A test can only be good if it is also fair to all test takers.
H. OBJECTIVITY represents the agreement of two or more raters or test administrators concerning the score of a student. If two raters who assess the same student do not agree on the score, the judgment lacks objectivity; lack of objectivity reduces test validity in the same way that lack of reliability influences validity.

Factors Affecting the Validity of a Test Item

1. The test itself.
2. The administration and scoring of the test.
3. Personal factors influencing students' responses to the test.
4. Validity is always specific to a particular group.

Factors that Reduce the Validity of a Test Item

1. Poorly constructed test items
2. Unclear directions
3. Ambiguous items
4. Reading vocabulary that is too difficult
5. Complicated syntax
6. Inadequate time limit
7. Inappropriate level of difficulty
8. Unintended clues
9. Improper arrangement of items

Test Design to Improve Validity

1. What is the purpose of the test?
2. How well do the instructional objectives selected for the test represent the instructional goals?
3. Which test item format will best measure achievement of each objective?
4. How many test items will be required to measure performance adequately on each objective?
5. When and how will the test be administered?

Reliability of a Test

Reliability refers to the consistency of measurement, that is, how consistent test results or other assessment results are from one measurement to another. The reliability of a test can be determined by means of the Pearson Product-Moment Correlation Coefficient, the Spearman-Brown Formula, and the Kuder-Richardson Formula.

Factors Affecting the Reliability of a Test

1. Length of the test
2. Moderate item difficulty
3. Objective scoring
4. Heterogeneity of the student group
5. Limited time

Four Methods of Establishing Reliability

1. Test-Retest Method. A type of reliability determined by administering the same test twice to the same group of students, with a time interval between the two administrations. The resulting scores are correlated using the Pearson Product-Moment Correlation Coefficient (r), and this correlation coefficient provides a measure of reliability (see the formula sketch after this list). It indicates how stable the test results are over a period of time.
2. Equivalent-Form Method. A type of reliability determined by administering two different but equivalent forms of the test (also called parallel or alternate forms) to the same group of students in close succession. The equivalent forms are constructed from the same set of specifications, that is, similar in content, type of items, and difficulty. The resulting test scores are correlated using the Pearson Product-Moment Correlation Coefficient (r), and this correlation coefficient provides a measure of the degree to which generalization about the performance of students from one assessment to another is justified. It measures the equivalence of the tests.
3. Split-Half Method. Administer the test once and score two equivalent halves of the test. To split the test into equivalent halves, the usual procedure is to score the even-numbered and the odd-numbered items separately. This provides two scores for each student. The two sets of scores are correlated, the Spearman-Brown formula is applied to the resulting correlation, and the final coefficient provides a measure of internal consistency. It indicates the degree to which consistent results are obtained from the two halves of the test.
4. Kuder-Richardson Formula. Administer the test once, score the total test, and apply the Kuder-Richardson formula. The Kuder-Richardson formula is applicable only in situations where students' responses are scored dichotomously, and it is therefore most useful with traditional test items that are scored as right or wrong. The KR-20 estimate of reliability provides information about the degree to which the items in the test measure the same characteristic; a simpler variant of the formula assumes that all items are of equal difficulty. (It is a statistical procedure used to estimate coefficient alpha; a correlation coefficient is obtained.)
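
The formulas named above can be sketched as follows. These are the standard textbook forms; the symbols (n, X, Y, k, p, q, σ) are introduced here only for illustration and do not come from the original handout.

\[
r = \frac{n\sum XY - \left(\sum X\right)\left(\sum Y\right)}{\sqrt{\left[n\sum X^{2} - \left(\sum X\right)^{2}\right]\left[n\sum Y^{2} - \left(\sum Y\right)^{2}\right]}}
\]

where X and Y are the two sets of scores being correlated (first and second administration, or the two equivalent forms) and n is the number of students. This is the Pearson Product-Moment Correlation Coefficient used in the test-retest and equivalent-form methods.

\[
r_{\text{whole test}} = \frac{2\,r_{\text{half}}}{1 + r_{\text{half}}}
\]

where r_half is the correlation between the scores on the two halves of the test. This is the Spearman-Brown formula used in the split-half method.

\[
KR_{20} = \frac{k}{k - 1}\left(1 - \frac{\sum p_{i}q_{i}}{\sigma^{2}}\right)
\]

where k is the number of items, p_i is the proportion of students answering item i correctly, q_i = 1 − p_i, and σ² is the variance of the total test scores.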

Descriptive Statistics of Test Scores

Statistics play a very important role in describing the test scores of students. Teachers should have a background in statistical techniques in order to analyze and describe the results of measurement obtained in their own classrooms, understand the statistics used in test and research reports, and interpret the types of scores used in testing.

Descriptive statistics is concerned with collecting, describing, and analyzing a set of data without drawing conclusions or inferences about a larger group; the data are summarized in terms of tables, graphs, or a single number (for example, the average score of the class in a particular test).

Inferential statistics is concerned with the analysis of a subset of data leading to predictions or inferences about the entire set of data or population.

We shall discuss the statistical techniques used in describing and analyzing test results:

1. Measures of Central Tendency (Averages)
2. Measures of Variability (Spread of Scores)
3. Measures of Relationship (Correlation)
4. Skewness

A measure of central tendency is a single value used to identify the center of the data; it is thought of as the typical value in a set of scores. It tends to lie at the center when the scores are arranged from lowest to highest or vice versa. There are three commonly used measures of central tendency: the mean, the median, and the mode.
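
As a brief illustration (the five scores below are invented solely for this example):

\[
\text{Scores (arranged from lowest to highest): } 70,\ 75,\ 75,\ 80,\ 90
\]
\[
\text{Mean } \bar{x} = \frac{\sum x}{n} = \frac{70 + 75 + 75 + 80 + 90}{5} = \frac{390}{5} = 78
\]
\[
\text{Median} = 75 \ (\text{the middle score}), \qquad \text{Mode} = 75 \ (\text{the most frequent score})
\]
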
REVISED BLOOM’S TAXONOMY (RBT)

Changes in terminology between the two versions are perhaps the most obvious differences and can also cause the most confusion. Basically, Bloom's six major categories were changed from noun to verb forms. Additionally, the lowest level of the original, knowledge, was renamed remembering. Finally, comprehension and synthesis were retitled understanding and creating. In an effort to minimize the confusion, a comparison appears below.

Caption: Terminology changes. "The graphic is a representation of the NEW verbiage associated with the long familiar Bloom's Taxonomy. Note the change from Nouns to Verbs [e.g., Application to Applying] to describe the different levels of the taxonomy. Note that the top two levels are essentially exchanged from the Old to the New Version." (Schultz, 2005) (Evaluation moved from the top to Evaluating in the second position from the top; Synthesis moved from second from the top to the top as Creating.) Source: http://www.odu.edu/educ./11schult/bloomstaxonomy.htm

Old taxonomy – evaluation was the highest level, higher than synthesis. In the past, educators accepted that critical thinking was higher than creative thinking.

Revised taxonomy – creating, which corresponds to synthesis, is considered the highest level of the cognitive process. This means that the ability to create new knowledge is now believed to impose the greatest cognitive demand.

Cognitive processes, their definitions, and examples

Remembering
Definition: Retrieving, recognizing, and recalling relevant knowledge from long-term memory.
Examples: Vocabulary, terms, reference, terminology, meaning(s), facts, factual information.

Understanding
Definition: Constructing meaning from oral, written, and graphic messages through interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining.
Examples: Meaning(s), abstractions, representations, words, phrases, and comparisons.

Applying
Definition: Carrying out or using a procedure through executing or implementing.
Examples: Principles, laws, conclusions, methods, theories, abstractions, generalizations, processes, and selection.

Analyzing
Definition: Breaking material into constituent parts, determining how the parts relate to one another and to an overall structure or purpose through differentiating, organizing, and attributing.
Examples: Hypotheses, conclusions, assumptions, relationships, cause-effects, attributes.

Evaluating
Definition: Making judgments based on criteria and standards through checking and critiquing.
Examples: Accuracy, consistency, reliability, critique.

Creating
Definition: Putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure through generating, planning, or producing.
Example: Design an appropriate sampling procedure for a given research proposal.

BLOOM’S TAXONOMY

The Cognitive Process Dimension


(Rows are the Knowledge Dimension; columns are the cognitive process levels Remember, Understand, Apply, Analyze, Evaluate, and Create.)

Factual knowledge: List, Summarize, Classify, Order, Rank, Combine
Conceptual knowledge: Describe, Interpret, Experiment, Explain, Assess, Plan
Procedural knowledge: Tabulate, Predict, Calculate, Differentiate, Conclude, Compose
Meta-cognitive knowledge: Appropriate use, Execute, Construct, Achieve, Action, Actualize

Samples of Objective-Test Item Relationship

Knowledge Level
Objective : The student will be able to identify various types of schizophrenia
Test Item : Which of the following is not a type of schizophrenia?

a. hebephrenic b. catatonic c. paranoid d. autistic

Comprehension Level
Objective : The student will be able to provide examples of several psychological
concepts.
Test Item : Which of the following would be an instance of compliance?
a. moving away from the group norm regardless of prior opinions
b. maintaining one’s belief in the face of group pressure to change those
beliefs.
c. following group norms in overt behavior but not cognitively
d. seeking group consensus without carefully considering all possible
arguments.

Application Level
Objective : The student will be able to calculate the expected date of delivery.
Test Item : If Cathy’s cycle is regular and her last menstrual period is April 20, 2010,
when is the expected date of delivery?
a. January 20, 2011
b. January 27, 2011
c. January 13, 2011
d. January 30, 2011
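
(For context, an item like this is answered with Naegele's rule, which is background knowledge assumed by the item rather than stated in this handout: add 7 days to the first day of the last menstrual period, subtract 3 months, and add 1 year.)

\[
\text{April 20, 2010} + 7\ \text{days} - 3\ \text{months} + 1\ \text{year} = \text{January 27, 2011}
\]
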
Analysis Level

Objective : The student will be able to make generalizations about the research
results.
Test Item : Nurse Bryan has carried out a number of experiments on the bargaining
process. Generalizing from his results, we can conclude that

a. The more threatening the weapons that bargainers acquire, the more likely they are to compete.
b. The more threatening the weapons that bargainers have, the more
likely they are to cooperate.
c. Threatening weapons do not influence bargaining as much as saving
face does.
d. We cannot accurately conclude any of the above

Evaluation Level
Objective : The student can recognize the values and points of view used in a
particular judgment of a work.
Test Item : Which of the following statements can correctly be made about Organization and Management?

a. An organization (or company) is people. Values make people persons; values give vitality, meaning, and direction to a company. As the people of an organization value, so the company becomes.
b. Management is the process by which administration achieves its mission, goals, and objectives.
c. Management effectiveness can be measured in terms of accomplishment of the purpose of the organization, while management efficiency is measured in terms of the satisfaction of individual motives.
d. Management principles are universal; therefore, one need not be concerned about people's culture, values, traditions, and human relations.
