
Lesson 4.1: The Role of Evaluation in Curriculum Development

Lesson Summary
In the context of curriculum development, evaluation plays the crucial role of
formally examining the existing curriculum for particular purposes. How it is
done depends on its scope, objectives and time of implementation. There are
different curriculum evaluation models that can aid us in undertaking such a
process in a systematic and effective way.

Learning Outcomes
1. Discuss what curriculum evaluation is, including its types and purposes
2. Analyze various curriculum evaluation models
3. Perform the roles of a curriculum evaluator

Motivation Question
What does the news headline below tell us about the curriculum?

Figure 1. Review of K to 12 Curriculum (Source: https://newsinfo.inquirer.net/1014464/deped-will-review-k-to-12-curriculum-briones)

Discussion
The news cited above from the Inquirer was released in July 2018,
informing us that the K to 12 curriculum would undergo a review. That
curriculum review has already begun, and there are initial results (Malipot,
2020). But, according to Undersecretary for Curriculum and Instruction
Diosdado San Antonio, it “will take some time” to be completed because certain
phases or stages have to be followed (Malipot, 2020). Nevertheless, we can
expect that the process will critically examine the K to 12 curriculum to
determine areas of accomplishment and improvement (University of Calgary,
n.d.). This will later inform the steps and decisions for moving forward toward
enhanced curriculum effectiveness and student achievement (UNESCO-IBE,
2013b; University of Calgary, n.d.).
This curriculum review we are talking about is a process that is closely
tied to one of the four general phases of curriculum development, which is
curriculum evaluation. If you can still remember our lesson on “Processes and
Models of Curriculum Development” in Module 1, any curriculum that has been
implemented will later have to undergo evaluation. It is a process that responds
to the school’s public responsibility and accountability (Bilbao et al., 2020).

Curriculum Evaluation: Definition and Purposes

(Bilbao et al., 2020; Pawilen, 2019; Villena et al., 2015)


Can you still remember how we defined curriculum evaluation earlier in
this subject? Is it similar to the general definition of evaluation, which is the
process of making judgments based on a set of standards or criteria? Let’s
take a look at how various curricularists defined curriculum evaluation in the
infographic below:

Figure 2. Definitions of curriculum evaluation from different experts.

From the definitions given, we can say that curriculum evaluation is a
necessary process in ensuring a good quality curriculum. It answers the
important question, “Did we do what we want to do?” (Wiles, 1989 as cited
in Bago, 2001).
Based on what has been introduced to you, it should be easy to identify
the purposes or reasons why we evaluate the curriculum. All of these
will eventually boil down to the ultimate goal of improved student learning
(UNESCO-IBE, 2013b).
✓ Needs assessment. The strengths and weaknesses of any
implemented curriculum must be identified. These will serve as bases
for subsequent planning, designing and implementation.
✓ Monitoring. While the curriculum is being implemented, it must be
examined regularly whether it is producing the expected results. This
answers the question “How are we doing?” (Bilbao et al., 2008).
Monitoring looks into issues like relevance, responsiveness, alignment,
and sustainability (UNESCO-IBE, 2013a).
✓ Terminal assessment. To determine quality, the information gathered
about the curriculum’s extent of achievement is compared to its own
objectives or to certain standards or criteria. This will reveal whether
the objectives or standards have been satisfied or even exceeded.
✓ Decision-making. The results from curriculum evaluation are useful to
school personnel and other stakeholders in making policy
recommendations, like curricular or instructional changes and revised
allocation of resources for cost-effectiveness (Bago, 2001).
Curriculum evaluation is based on the need for alignment. This
alignment specifically refers to the coherence or matching between what
learning outcomes are expected to be accomplished (intended curriculum),
how the curriculum was delivered through the learning experiences
(implemented or taught curriculum) and what was actually attained or learned
by the students (achieved curriculum) (Bilbao et al., 2008; UNESCO-IBE, 2013b).
Examining these three types of curriculum is also used in the ongoing
curriculum review of DepEd (Malipot, 2020).

Figure 3. Aligning the curriculum (modified and adapted from Bilbao et al., 2008). Note: Some
references would use the assessed curriculum in place of the achieved curriculum.
Types or Forms of Curriculum Evaluation

(Bago, 2001; Bilbao et al., 2020; Navarro et al., 2019)


There are two ways of seeing curriculum evaluation. These are given in the
comparison below.

Figure 4. Two ways of looking at curriculum evaluation.

Curriculum program evaluation is broader in scope. It requires analysis of the


various components of an education system (i.e., inputs, outputs, and
processes) that affect how the curriculum is implemented.
Curriculum evaluation can also be classified depending on when it will be done
and what it intends to do. These are given in the table below.
Table 1. Forms of Evaluation

Formative evaluation
✓ It intends to modify and enhance the curriculum design prior to its full
implementation, specifically in its designing and pilot-testing stages.
✓ It also seeks to identify problematic areas in the curriculum during its
implementation to make necessary revisions or introduce appropriate
interventions.
✓ At the classroom level, it tries to improve the delivery of the lesson
through the information gathered by the teacher in quizzes, class
recitation, etc.
✓ It focuses on the process.

Summative evaluation
✓ It presents the big picture of the curriculum’s quality after its wide-scale
implementation to give judgment on whether it has achieved its
expressed objectives.
✓ It enables curricularists to gather data in knowing how well or how badly
the curriculum worked and what decisions have to be made.
✓ At the classroom level, it “sums up” all the information about the
performance of every learner for grading and promotion purposes.
✓ It focuses on the outcomes.
Curriculum Evaluation at the Classroom and School Levels

(Bilbao et al., 2008; Pawilen, 2019)


The classroom is considered the first site of gathering data for
curriculum evaluation (Doll, 1997). With that, the results of the various
assessment methods used to determine the learners’ progress and level of
mastery are important inputs for curriculum evaluation. Depending on the
learning outcomes and other important considerations, the assessment
methods that can be used include pen-and-paper tests, performance
demonstrations, projects, observations, portfolios, rating scales, checklists,
oral reports, etc. The assessments that teachers use describe or validate
what has been truly learned by the students, i.e., the achieved curriculum.
Results from the assessment of student learning are essential because they
serve as reference data on what still needs to be done for improvement.
At the bigger school system level, more data collection methods are
used. These include opinion polls, surveys, past curricular reports, focus group
discussions, department meetings, follow-up studies, documentary or content
analysis and district or national test results. At the college level, curriculum
evaluation comes in the form of accrediting degree programs. Evaluation may
also include assessing the effectiveness of teachers so that they stay current
and continually improve as learning facilitators. Teacher evaluation may be
done by the learners or by their colleagues through observation and feedback.

Curriculum Evaluation: Processes and Models

(Bago, 2001; Bilbao et al., 2020; Glatthorn et al., 2018; Pawilen, 2019;
Villena et al., 2015)
Just like in curriculum development and curriculum implementation,
there are models for evaluating a curriculum. These evaluation models are
made by curriculum experts based on their understanding of how to assess
the quality or value of a particular curriculum. There are a number of curriculum
evaluation models, but we will only study six of them.
A. Bradley Effectiveness Model
In his Curriculum Leadership and Development Handbook, L.H. Bradley
(1985) provided key indicators that are useful in measuring the effectiveness
of a curriculum. First, you need to choose a particular curriculum to evaluate.
Then, assess the curriculum using the indicators in the table below by
responding with a Yes or No. (Some original descriptions were simplified)
Table 2. Bradley's Curriculum Development Indicators (adapted from Bilbao et al., 2020 and
Glatthorn et al., 2018)

Indicators and Descriptive Questions (answer each with Yes or No)

1. Vertical Curriculum Continuity. Does the curriculum reflect the format (i.e.,
K to 12, OBE, etc.) that enables the teachers to have quick access to what is
being taught in the grade/year levels below or above the current level? Does it
avoid useless curricular repetitions? (Example: If you are looking at Math 5,
“below” means Math 4 and “above” means Math 6.)
2. Horizontal Curriculum Continuity. Does the curriculum provide common
content and objectives to all classrooms within the same grade level?
(Example: Humanities 11 is common to all 1st year college students.)
3. Instruction Based on Curriculum. Are the lesson plans or syllabi derived
from the curriculum? Are the utilized materials correlated with the content,
objectives and activities?
4. Broad Involvement. Is there evidence that the various curriculum
stakeholders are represented or involved in the planning, designing,
implementation and evaluation of the curriculum?
5. Long-Range Planning. Does the curriculum follow a regular sequence and
review cycle within its planning and implementation period?
6. Positive Human Relations. Did the initial thoughts of the curriculum come
from the teachers, school heads, and other stakeholders? Are participating
members willing to risk disagreeing without breaking down communication
lines?
7. Theory-Into-Practice Approach. Is there clarity and consistency in the
philosophy, vision, mission, goals and objectives, graduation outcomes,
learning outcomes and authentic tasks of the curriculum?
8. Planned Change. Is there tangible evidence that shows that the internal and
external publics accept the developed curriculum? Is the process of program
development focusing on how to do it better?

What if any indicator is answered with a “No”? Efforts should be made to turn
it into a “Yes”.

B. Tyler’s Objective-Centered Model


With his vast contributions to the field of educational evaluation, did you
know that Ralph Tyler earned the reputation of being called the Father of
Evaluation? In 1950, Tyler proposed one of the earliest evaluation models,
which is said to have influenced many curriculum assessment processes until
now. His model is advantageous in the sense that it is relatively easy to
understand and apply. The components and steps in Tyler’s evaluation model
are shown in the following table.
Table 3. Tyler's Evaluation Model (adapted from Bilbao et al., 2020 and Glatthorn et al., 2018)

Components and Evaluation Steps

1. Objectives / Desired Learning Outcomes
Step 1: Establish the objectives or desired learning outcomes. The objectives
should specify the content to be learned and the expected behavior.
2. Situation or Context
Step 2: Identify the situations or contexts that will provide the opportunity for
students to attain or achieve the objectives.
3. Evaluation Tools/Instruments
Step 3: Select and develop appropriate evaluation tools, then check their
validity, reliability and objectivity.
4. Utilization of the Tool
Step 4: Use the instrument to collect data/results.
Step 5: Compare the data obtained from several instruments before and after
to determine the extent of change (or compare the data with the stated
objectives).
5. Results Analysis
Step 6: Study the obtained results to determine the strengths and weaknesses
of the curriculum. Give possible explanations for these strengths and
weaknesses.
6. Utilization of Results
Step 7: Use the results in making the necessary revisions or modifications in
the curriculum.

Undertaking all the steps can mean satisfying the standards. Once the
seven steps are completed, the objectives set at the beginning may be revised;
thus, Tyler’s evaluation model is cyclical. This ensures a continuing cycle of
evaluation, analysis and then improvement.

C. Provus’ Discrepancy Evaluation Model or DIPP Model


Developed by Malcolm Provus in 1971, this model has four components.
According to Doll (1997), it is called the discrepancy model because it
compares the actual performance of the curriculum with the standards to
identify any discrepancy or difference between the two. With that, curriculum
evaluators and school administrators will clearly see whether the collected
data or evidence satisfy the standards.

Figure 5. Four components of Provus' Discrepancy Model (adapted from Pawilen, 2019)

Program performance is specifically evaluated in terms of four aspects:


Stage I – Design. Is the program adequately designed?
Stage II – Installation. Is the program implemented as stated in its
design?
Stage III – Processes. Are the persons involved accomplishing what is
expected from the program design? Are the resources and techniques
used aligned with the program objectives?
Stage IV – Products. Are the end products (student learning,
productivity level, etc.) congruent with what is anticipated from the
program design?

D. Stufflebeam’s Context, Input, Process, Product (CIPP) Model


Composed of four stages, this model was developed by the Phi Delta
Kappa National Study Committee on Evaluation, chaired by Daniel L.
Stufflebeam. According to Braden (1992), the CIPP model can be used for both
formative and summative evaluation activities. This means it does not just look
into the conclusion of the program but also at various stages of program
implementation. The CIPP model is appealing to evaluators and educational
leaders because it provides a wide range of data for decision making.

Figure 6. Stufflebeam's CIPP Model: context evaluation, input evaluation, process evaluation
and product evaluation (adapted from Pawilen, 2019)

1. Context Evaluation – assesses the environment or setting of the
curriculum, revealing the needs, problems, opportunities and
challenges that affect the curriculum. This provides decision makers
with a strong basis for determining the objectives of the curriculum.
2. Input Evaluation – provides information on how available resources are
used to achieve curriculum goals. It helps evaluators in assessing
alternative ways for attaining goals to choose the optimal means. It
also considers the different designs for implementing the curriculum.
3. Process Evaluation – monitors the implementation of the curriculum to
ensure that what is planned is actually delivered. It aims to detect
problems in its initial design and implementation for revisions, provide
information for decisions, and maintain a record of procedures done.
4. Product Evaluation – focuses on gathering data to determine whether
the actual curriculum outcomes satisfactorily meet the objectives in its
design. It provides evaluators with information that will enable them to
decide whether to continue, terminate, or modify the new curriculum.

While it is ideal to do all four phases of evaluation in using this model,
any evaluator can choose to focus on only one or two phases.
Table 4. Steps for each stage in the CIPP Model (adapted from Bilbao et al., 2020 and Glatthorn
et al., 2018)

For each of the four stages (context, input, process and product evaluation),
the following steps are taken:
Step 1: Identify the kinds of decisions to be made.
Step 2: Identify the kinds of data needed to make those decisions.
Step 3: Gather the needed data.
Step 4: Establish the criteria for determining the quality of the gathered data.
Step 5: Analyze the data based on the criteria.
Step 6: Organize and provide the information needed by decision makers.

E. Stake’s Responsive or Stakeholder-Centered Model


Developed by Robert Stake (1975), this model is oriented more directly
toward program activities than program intents. The main advantage of this
model is its sensitivity to the concerns and values of clients and stakeholders.
By adapting to their needs and involving them in the evaluation process,
stakeholders will find the evaluation results highly relevant and useful. The
following are the recommended steps to be taken by curriculum evaluators
when adopting the responsive evaluation model (Glatthorn, 1987):
Table 5. Steps in Stake's Responsive Model (adapted from Bilbao et al., 2020 and Glatthorn et al.,
2018)

Step 1: Meet with clients and stakeholders to understand their perspectives
and intentions about the evaluation process.
Step 2: Determine the scope of the evaluation project based on the
discussions and documents in Step 1.
Step 3: Observe closely the operation of the curriculum to note any unintended
deviations or nonconformities from the announced intents.
Step 4: Identify the stated and real purposes of the program, as well as the
concerns of its stakeholders about it and the evaluation process.
Step 5: Identify the problems or issues that the evaluation process should
address. For each problem, develop an evaluation design and specify the data
needed.
Step 6: Select the means for collecting the needed data. Most of the time,
judges or evaluators are chosen.
Step 7: Implement the data collection process.
Step 8: Organize the gathered information into themes and prepare how to
present or communicate them.
Step 9: Decide which of the stakeholders require which report and choose the
most appropriate formats for the report.
Prior to his responsive model, Stake had an earlier approach to
evaluation called the Congruence-Contingency Evaluation Model (Stufflebeam
& Coryn, 2014). As the name implies, it analyzes the matching or congruence
between the intended results and the actual results of the implementation
process. Such analysis is done in terms of three major areas, namely (1)
antecedents, (2) transactions and (3) outcomes. The relationship or
contingency between these data is also examined.
1. Antecedents – any condition that exists before the implementation
process (teaching-learning) has taken place, such as students and
teachers’ profiles, school community context, etc.
2. Transactions – the activities during implementation, these are the
interactions between and among students, teachers, other school
personnel and the various aspects of the learning environment
3. Outcomes – results/impact of the delivery of instruction, e.g., level of
students’ learning, effects of the curriculum on the teachers,
administrators, the school and the community.

F. Scriven’s Consumer Oriented Evaluation Model


Educational products have increasingly flooded the market over the
years. With that, there is a need for consumers to properly evaluate the
products they will buy, especially those who are tasked with purchasing
instructional resources to support an implemented curriculum. These
resources include both printed materials and modern educational technologies
like textbooks, modules, educational applications and productivity software.
In 1967, Michael Scriven formally introduced the consumer-oriented
evaluation approach or model, which is based on the premise that the
evaluation process must serve the interests of the consumer, who is the end
user of any program, curriculum, service or product (Bilbao et al., 2020; Frey,
2018). The consumer-oriented model utilizes criteria and a checklist as tools
for either formative or summative purposes. The criteria must be deemed
meaningful or valuable to the consumers, or else the evaluation is useless.
The Instructional Material Review Form by Marvin Patterson of Florida State
University is adapted below to help you better understand this model.
Instructional Materials Review Form
Table 6. Preliminary information and recommendation (adapted from Bilbao et al., 2020)

Preliminary Information:
Title:
Author(s):
Publisher:
Copyright date:
Name of evaluator:

Recommendation:
____ Retain for further review
____ Reject (Comments)
The codes below are used in the following checklist to rate the instructional
material.
(+) yes or good quality (-) no or poor quality
(0) all right but not of good quality (NA) not applicable

Table 7. Instructional Material Review Form Checklist (adapted from Bilbao et al., 2020 and
Marvin Patterson Center for Studies in Vocational Education, FSU)

Criteria (rate each with +, -, 0, or NA)
1. Does the content cover a
significant portion of the
program competencies?
2. Are the contents up-to-
date?
3. Is the reading level
appropriate for most
students who will use the
material?
4. Are intended learning
outcomes or competencies
stated?
5. Are formative and
summative assessments
included?
6. Are varied and experiential
activities provided to meet
the needs of students?
If, up to this point, the instructional material appears worth selecting, proceed
with the review. Otherwise, stop the review if the material appears too poor.
7. Is a teacher’s guide (TG)
included to offer
management suggestions?
8. Is the material presented in
logical order?
9. Are objectives,
competencies and/or
tasks of good quality?
10. Do learning activities
match with the intended
learning outcomes?
11. Are test items of good
quality and do they match
with the intended learning
outcomes?
12. Are performance
checklists of good quality
and do they match with the
intended learning
outcomes?
13. Are the directions of good
quality in guiding students
on how to proceed through
the materials?
14. Are drawings, photographs
and other visuals of good
quality?
15. How is the quality of the
overall design of the
learning activities for
individualized instruction?
16. Is there emphasis on
safety practices (when
needed)?
17. To what degree is the
material free from bias with
respect to age, sex, race,
religion, nationality, etc.?
18. Are the management
procedures in the TG of
good quality?
19. Is there a list of course
map competencies
covered by the material?
(optional)
Comments:

After looking into the results of this checklist, can we expect that any
curricularist will be guided in deciding whether a particular instructional
support material will be used, revised or rejected? Yes, of course.

Curriculum Evaluation: A Wrap-Up

Now, let’s try to wrap up your learnings in this lesson. Regardless of the
curriculum evaluation model to be utilized, the Association for Supervision and
Curriculum Development (1983) recommends the steps in the following table
in conducting curriculum evaluation. Relatedly, Stabback (2016) of
UNESCO-IBE emphasized that clearly defining the purpose and the scope of
the evaluation is the first task of evaluators. This will determine how narrow
or broad the coverage of the evaluation is (e.g., teaching in a single subject
area or the whole school system).
Table 8. ASCD (1983) Steps in Curriculum Evaluation (with modifications)

Steps and What to Consider

1. Identify primary audiences/stakeholders – curriculum program sponsors,
school heads, managers and administrators, teachers, students, content
specialists and others
2. Identify critical problems/issues – outcomes (desired or intended), process
(implementation), inputs (resources)
3. Identify data sources – teachers, students, parents, curriculum developers
and other relevant persons, existing documents and records, evaluation
studies
4. Identify data collection techniques – informal tests, standardized tests,
samples of students’ work, interviews, observations, questionnaires,
checklists, etc.
5. Identify established standards and criteria – standards set by professional
organizations or agencies (DepEd, CHEd, etc.)
6. Identify data analysis techniques – document and content analysis, process
analysis, statistics, comparison and evaluation
7. Prepare the evaluation report – written or oral; progress or final summary;
descriptive, illustrative and evaluative (judgmental); with recommendations
8. Prepare modes of display – test scores summary, multimedia presentation,
case studies, testimonies, exhibits, technical report

Based on the steps given, do you find any similarities or differences
with the previously discussed models? Can they be done in the Philippine
context?
With how evaluation critically examines the curriculum, there is no doubt
that its main function is to judge the curriculum’s quality and make the
necessary improvements. According to Stabback (2016) of UNESCO-IBE,
curriculum evaluation must be conducted on a regular basis, but in
consideration of the resources available, the feedback received and other
factors. If you plan a large-scale evaluation, you will need more funds and
human support for it. In addition, curriculum evaluation should be conducted
by qualified and experienced people. It is only fitting that people with a deep
understanding of the field of curriculum and education lead the evaluation
process. This will ensure that curriculum evaluation is done objectively and
that the results are reported in a professional, valid and transparent manner.
