
In 1985, L.H. Bradley wrote a handbook on Curriculum Leadership and Development, which provides
indicators that can aid in measuring the effectiveness of a developed or written curriculum. For the
purposes of classroom teachers, some of the statements were simplified. First, identify the curriculum
you will evaluate. Next, determine whether the curriculum you are evaluating answers Yes or No to each
question. Answering Yes to all the questions means the curriculum is good, as described by Bradley.
Bradley's Effectiveness Model for Curriculum Development includes the following indicators: Vertical
Curriculum Continuity, Horizontal Curriculum Continuity, Instruction Based on Curriculum, Broad
Involvement, Long-Range Planning, Positive Human Relations, Theory-Into-Practice, and Planned Change.
Each of these indicators has a corresponding descriptive question answerable by YES or NO. If any
indicator is answered with a "NO", action should be taken to make it a "YES".

Michael Scriven introduced the Consumer-Oriented Evaluation model in 1967, among many others, at a
time when educational products flooded the market. Consumer-oriented evaluation is often used by
consumers of the educational products needed to support an implemented curriculum. These products are
used in schools and require a purchasing decision; they include textbooks, modules, educational
technology such as software, and other instructional materials. Even teachers and schools themselves
nowadays write and produce these materials for their own purposes. Consumer-oriented evaluation uses
criteria and a checklist as tools for either formative or summative evaluation purposes. The use of
criteria and checklists was proposed by Scriven for adoption by educational evaluators. The following
codes are used to rate the material: + means yes or good quality; - means no or poor quality; o means
all right but not of good quality; and NA means not applicable. Using the checklist for instructional
material review or evaluation may help any curricularist decide which textbooks, modules, or other
instructional support materials will be used, revised, modified, or rejected.
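The rating codes above can be tallied in a simple way. The sketch below is only illustrative: the criteria named in it are hypothetical examples, not Scriven's own checklist items, and the function name is ours.

```python
# Illustrative sketch of tallying a consumer-oriented evaluation checklist.
# Rating codes follow the text: "+" (yes/good quality), "-" (no/poor quality),
# "o" (all right but not of good quality), "NA" (not applicable).
# The criteria below are hypothetical examples, not Scriven's actual list.

def summarize_checklist(ratings):
    """Count how many criteria received each rating code, and how many
    criteria were applicable at all."""
    counts = {"+": 0, "-": 0, "o": 0, "NA": 0}
    for criterion, code in ratings.items():
        counts[code] += 1
    applicable = sum(v for k, v in counts.items() if k != "NA")
    return counts, applicable

# Hypothetical review of one textbook:
textbook_ratings = {
    "Content accuracy": "+",
    "Alignment with curriculum": "+",
    "Readability": "o",
    "Cost": "-",
    "Teacher's manual provided": "NA",
}

counts, applicable = summarize_checklist(textbook_ratings)
print(counts)      # tally of each rating code
print(applicable)  # number of applicable criteria
```

A curricularist could then set a decision rule over the tally (for instance, reject when "-" ratings dominate), though the text leaves that judgment to the evaluator.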

Stake's Responsive Model is oriented more directly to program activities than to program intents;
evaluation focuses on the activities rather than the purposes. Robert Stake (1975) recommends the
following steps for the curriculum evaluator:

1. Meet with stakeholders to identify their perspectives and intentions regarding curriculum evaluation.
2. Draw from the Step 1 documents to determine the scope of the evaluation.
3. Observe the curriculum closely to get a sense of its implementation and to identify any deviations from announced intents.
4. Identify the real purposes of the program and its various audiences.
5. Identify the problems of the curriculum evaluation at hand and an evaluation design with the needed data.
6. Select the means needed to collect the data or information.
7. Implement the data collection procedure.
8. Organize the information into themes.
9. Decide with stakeholders on the most appropriate formats for the report.

The Context, Input, Process, Product (CIPP) Model of Curriculum Evaluation was a product of the Phi
Delta Kappa committee chaired by Daniel Stufflebeam. The model emphasizes that the results of
evaluation should provide data for decision making. There are four stages of program operation:
(1) CONTEXT EVALUATION, (2) INPUT EVALUATION, (3) PROCESS EVALUATION, and (4) PRODUCT EVALUATION.
However, an evaluator may take any one of the four stages as the focus of evaluation. For all four
stages, six steps are suggested: Step 1: Identify the kind of decision to be made. Step 2: Identify
the kinds of data needed to make that decision. Step 3: Collect the data needed. Step 4: Establish
the criteria to determine the quality of the data. Step 5: Analyze the data based on the criteria.
Step 6: Organize the needed information for decision makers.

Ralph Tyler in 1950 proposed the Objectives-Centered Model, a curriculum evaluation model which
continues to influence many curriculum assessment processes to this day. His monograph was entitled
Basic Principles of Curriculum and Instruction. In using Tyler's model, the following curriculum
components and processes are identified in curriculum evaluation: (1) Objectives/Intended Learning
Outcomes; (2) Situation or Context; (3) Evaluation Instruments/Tools; (4) Utilization of Tools; (5)
Analysis of Results; and (6) Utilization of Results. Evaluation processes were also determined for
each of the components. Using all the steps to evaluate the curriculum and obtaining all YES answers
would mean the curriculum has PASSED the standards. Tyler's model of evaluating the curriculum is
relatively easy to understand, which is why many teachers can follow it.

The levels of the learning outcomes are: (1) Knowledge, (2) Process or Skills, (3) Understanding, and
(4) Products or Performance.

The levels of assessment also follow the levels of thinking skills, from lower level to higher level.
In the first level (Knowledge), the following are assessed through paper-and-pencil or
non-paper-and-pencil types of assessment: Who, What, When, How, and Why. In the second level (Process
or Skills), meaning constructed from knowledge is assessed through paper-and-pencil or
non-paper-and-pencil types of assessment. In the third level (Understanding), the following are
assessed through paper-and-pencil types of assessment: explanations, interpretations, applications,
empathy, perspective, self-knowledge, big ideas, principles, and generalizations.

The numerical grades describe the different levels of proficiency in the different competencies set
in the subject areas. The following are the proficiency-level descriptors with their respective
grading scales: (1) Advanced - 90% and above, (2) Proficient - 85%-89%, (3) Approaching Proficiency -
80%-84%, (4) Developing - 75%-79%, and (5) Beginning - 74% and below.
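Since the descriptors map directly onto numeric ranges, the scale can be sketched as a small lookup. The function name below is ours, not from the source; the thresholds follow the ranges given above.

```python
def proficiency_level(grade):
    """Map a numerical grade (percentage) to its proficiency descriptor,
    following the grading scale given in the text."""
    if grade >= 90:
        return "Advanced"
    elif grade >= 85:
        return "Proficient"
    elif grade >= 80:
        return "Approaching Proficiency"
    elif grade >= 75:
        return "Developing"
    else:
        return "Beginning"

print(proficiency_level(92))  # Advanced
print(proficiency_level(84))  # Approaching Proficiency
print(proficiency_level(70))  # Beginning
```

Note that the boundaries are inclusive at the lower end of each band, so a grade of exactly 85 is Proficient and exactly 75 is Developing.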

The types of learning outcomes at the different levels can be assessed in many ways with the use of
appropriate tools. The tests that measure knowledge, process, and understanding include objective
tests (paper-and-pencil tests such as simple recall, alternative response tests, multiple choice
tests, and matching type tests) and subjective tests such as essays (e.g., restricted response items
and extended response items). The assessment tools that measure authentic learning performance and
products (KPUP) include checklists, rating scales, and rubrics for portfolios.

The periodical exam serves as a basis for determining whether the students have learned something,
what they do not understand, and what they are good at. It can also show which parts of the topic the
students find hard to understand, so that teachers can give further explanations.

On one hand, many infer that the results gathered from a periodical test do reflect on the evaluation
of a curriculum, in terms of reassessing its effectiveness through the students' performance. In
theory, a set of subjects within a given curriculum should be readily learned by the students, so
something must be wrong when the results of a periodical examination come back with a high percentage
of failures. The students can be blamed to a certain degree, but isn't it the job of the school to
gain most, if not all, of the learners' interest? It is up to the institution's thinkers to design,
adjust, and readjust techniques to provide the best learning tools in terms of environment,
activities, and curriculum. If most of the learners constantly fail their periodical exams, then
maybe it is time to look for adjustments in the way things are done.

On the other hand, it is also supposed that test results should be used as only one piece of evidence
in evaluation. They should not be considered the sole reflection of the effectiveness of a curriculum
by and large, because they are only a fragment of the evaluation process. Moreover, there exist other
designed evaluation instruments or tools, aside from the periodical exam, which can also appropriately
become the basis of curriculum evaluation. Aside from test results, evaluators must also look into
other essential curriculum components, such as products (which include textbooks, modules, educational
technology like software, and other instructional materials) and program activities.


Since curriculum development is a continuous process, it can also be viewed as a PIE. Planning,
Implementing, and Evaluating (PIE) is a cyclical process, which means that after evaluating, the
process of planning starts again. Planning, implementing, and evaluating are three processes in
curriculum development that are taken separately but are connected to each other. The cycle
continues, as each is embedded in the dynamic change that happens in curriculum development. For
curricularists, these guiding ideas clarify our understanding that one cannot assess what was not
taught, nor implement what was not planned: PLAN, then IMPLEMENT, then EVALUATE, and the next cycle
begins.

Teaching is fundamentally a process that includes planning, implementation, evaluation, and revision.
Planning and teaching a class are familiar ideas to most instructors; more often overlooked are the
steps of evaluation and revision. Without classroom assessments or some other means of receiving
feedback on a regular basis, it is surprisingly easy to misjudge whether a particular teaching method
or strategy has been effective. A teacher can create an environment of mutual trust and respect by
relying on students for feedback; students can be a valuable resource for verifying whether the class
pedagogy is (or isn't) working. Self-examination, together with feedback from your students, is key
to improving your teaching.
As a future educator, I believe that planning, implementing, and evaluating will truly improve my
teaching. Planning covers what you intend to teach and allows you to think about and address any
shortcomings you may have in the lesson. The cycle also improves your teaching practice because you
can see what worked, what didn't work, and what could be improved. Without these three processes, it
is hard to improve something you do not understand; together they complete the whole of teaching.
Planning is setting the objectives for the lessons. Implementing needs to be flexible, with the
objectives in mind. Evaluating means assessing whether the content objectives were met.
