
Prescriptive Evaluation Model

Kirkpatrick: Four-Level Training Evaluation Model
Suchman: Experimental Evaluation Model
Stufflebeam: CIPP
Prescriptive model evaluation involves providing recommendations or prescriptions for improvement based on the findings of an evaluation. It goes beyond simply identifying strengths and weaknesses to offering specific actions or strategies for enhancing the effectiveness, efficiency, or impact of a program, intervention, or process.
In prescriptive evaluation, evaluators not only
assess the current state of the program but
also offer guidance on how to address any
identified issues or capitalize on strengths.
This may involve suggesting changes to
program design, implementation methods,
resource allocation, or monitoring and
evaluation strategies.
Prescriptive evaluation typically follows a structured process that includes:

1. Analysis of Findings: Reviewing the results of the evaluation to identify key findings, trends, and areas for improvement.

2. Identification of Needs: Determining the specific needs or areas where changes or interventions are required to enhance program effectiveness.

3. Development of Recommendations: Formulating actionable recommendations based on the evaluation findings and needs analysis. These recommendations are tailored to address identified weaknesses or capitalize on strengths.

4. Consultation and Collaboration: Engaging stakeholders, program staff, and other relevant parties in the development of recommendations to ensure buy-in and feasibility.

5. Prioritization and Implementation Planning: Prioritizing recommendations based on their potential impact and feasibility, and developing plans for implementing them effectively (see the sketch after this list).

6. Monitoring and Follow-Up: Tracking the implementation of recommendations over time and assessing their impact on program outcomes. Adjustments may be made as needed based on ongoing monitoring and feedback.
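To make step 5 concrete, here is a minimal sketch in Python of one way to rank recommendations by impact and feasibility. The recommendations, the 1-to-5 scores, and the impact-times-feasibility weighting are all hypothetical illustrations, not something the prescriptive model itself prescribes.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    description: str
    impact: int       # expected effect on program outcomes, 1 (low) to 5 (high)
    feasibility: int  # ease of implementation given resources, 1 (low) to 5 (high)

    @property
    def priority(self) -> int:
        # Hypothetical composite score; a real evaluation would weight
        # these dimensions according to stakeholder input.
        return self.impact * self.feasibility

# Hypothetical recommendations from an evaluation report.
recommendations = [
    Recommendation("Revise the facilitator guide", impact=4, feasibility=5),
    Recommendation("Add follow-up coaching visits", impact=5, feasibility=2),
    Recommendation("Shorten the pre-training survey", impact=2, feasibility=5),
]

# Print highest-priority recommendations first.
for rec in sorted(recommendations, key=lambda r: r.priority, reverse=True):
    print(f"{rec.priority:>2}  {rec.description}")
```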
Prescriptive model evaluation is valuable for
organizations and program managers seeking
actionable insights to improve the performance
and outcomes of their initiatives. It helps ensure
that evaluation findings lead to meaningful
changes and enhancements that contribute to
the overall success of the program or
intervention.
Kirkpatrick's Four-Level Training Evaluation Model

“Capacity Building Interventions: How do we know what difference we are making?”
Capacity Building via Training

Participant Training is:
The transfer of knowledge, skills, or attitudes (KSAs), as well as ideas and
sector context, through structured learning and follow-up activities to
solve job performance problems or fill identified performance gaps.
ADS Chapter 253
Evaluating Training

Evaluating training is recommended as a best practice, and it aligns with USAID’s policy on evidence-based decision-making.
ADS Chapter 253
Kirkpatrick's Four-Level Training Evaluation Model

If you deliver training, then you probably know how important it is to measure its effectiveness. After all, you don't want to spend time or money on training that doesn't provide a good return.
The four levels are:

1. Reaction
2. Learning
3. Behavior
4. Results

Each level is important and has an impact on the next level. As you move from one level to the next, the evaluation process becomes more difficult and time-consuming, but it also provides more valuable information.
Kirkpatrick's Four-Level Training Evaluation Model

Level 1: Reaction
This level measures how your trainees (the people being trained) reacted to the training. Obviously, you want them to feel that the training was a valuable experience, and you want them to feel good about the instructor, the topic, the material, its presentation, and the venue.
Why?
• Gives us valuable feedback that helps us to evaluate the program.
• Tells trainees that the trainers are there to help them do their job better and that
they need feedback to determine how effective they are.
• Provides trainers with quantitative information that can be used to establish
standards of performance for future programs.
How?
• Satisfaction Survey
Kirkpatrick's Four-Level Training Evaluation Model

Level 1: Reaction measures:
• CUSTOMER SATISFACTION: measures participants' satisfaction with the training.
  • Taking this program was worth my time.
• ENGAGEMENT: measures the involvement and contribution of participants.
  • My learning was enhanced by the facilitator.
• RELEVANCE: measures participants' opportunity to apply what they learned in training on the job.
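As a rough illustration of how Level 1 survey responses might be tabulated, the sketch below averages 1-to-5 ratings for one item per dimension. The first two items echo the examples above; the relevance item and all response data are invented for illustration.

```python
from statistics import mean

# Hypothetical 1-5 ratings from five participants, one item per
# Level 1 dimension (satisfaction, engagement, relevance).
responses = {
    "Taking this program was worth my time.": [5, 4, 4, 5, 3],
    "My learning was enhanced by the facilitator.": [4, 4, 5, 4, 4],
    "I can apply what I learned on the job.": [3, 4, 4, 3, 5],
}

# Report the mean rating per item.
for item, scores in responses.items():
    print(f"{mean(scores):.2f}  {item}")
```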

Kirkpatrick's Four-Level Training Evaluation Model

Level 2: Learning
At Level 2, you measure what your trainees have learned. How much has their knowledge increased as a result of the training?

When?
• After the training has been conducted.

How?
• By evaluating both before and after the training program.
• Before training commences, test trainees to determine their knowledge, skills, and attitudes.
• After training is completed, test trainees a second time to determine whether there is any improvement.
• By comparing the two results, you can determine whether the learning was successful, as sketched below.
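One common way to formalize this before/after comparison is a paired t-test. The sketch below assumes SciPy is available; the test scores and the conventional 0.05 threshold are illustrative, not part of Kirkpatrick's model.

```python
from scipy import stats

# Hypothetical test scores for eight trainees, before and after training.
pre  = [52, 61, 48, 70, 55, 63, 58, 66]
post = [68, 75, 60, 82, 71, 70, 74, 80]

# Paired t-test: did scores improve more than chance would suggest?
t, p = stats.ttest_rel(post, pre)
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"mean gain = {mean_gain:.1f} points, t = {t:.2f}, p = {p:.4f}")
# A small p-value (conventionally < 0.05) suggests the improvement
# is unlikely to be due to chance alone.
```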
Kirkpatrick's Four-Level Training Evaluation Model

Level 2: Learning measures:
• Knowledge (“I know it.”): measured primarily with formative exercises during the session or a quiz near the end.
• Skills (“I can do it right now.”): measured with activities and demonstrations during the session that show that participants can perform the skill.
• Attitude (“I believe this will be worthwhile to do on the job.”): measured with rating-scale questions.
• Confidence (“I think I can do it on the job.”): measured with rating-scale questions.
• Commitment (“I intend to do it on the job.”): measured with rating-scale questions.

Kirkpatrick's Four-Level Training Evaluation Model

Level 3: Behavior
At this level, you evaluate how far your trainees have changed their behavior based on the training they received. Specifically, this looks at how trainees apply the information.

How?
• Allow time for behavior change to take place,
• Use a control group if practical (see the sketch after this list),
• Evaluate both before and after the program,
• Survey and/or interview one or more of the following: trainees, their immediate supervisors, and others who often observe their behavior,
• Repeat the evaluation at appropriate times,
• Consider cost versus benefits.
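Where a control group is practical, the before/after comparison can be summarized as a difference-in-differences: the change in the trained group minus the change in the untrained group. A minimal sketch, with invented 1-to-5 supervisor ratings of on-the-job behavior:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical supervisor ratings (1-5) of on-the-job behavior.
trained_before = [2.1, 2.4, 2.0, 2.6]
trained_after  = [3.8, 4.0, 3.5, 4.2]
control_before = [2.2, 2.3, 2.1, 2.5]
control_after  = [2.4, 2.5, 2.2, 2.6]

# Difference-in-differences: the trained group's change minus the
# control group's change, netting out trends unrelated to training.
trained_change = mean(trained_after) - mean(trained_before)
control_change = mean(control_after) - mean(control_before)
print(f"change attributable to training: {trained_change - control_change:.2f}")
```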
Kirkpatrick's Four-Level Training Evaluation Model

Level 3: Behavior

Examples of interview questions:
• Did the trainees put any of their learning to use?
• Are trainees able to teach their new knowledge, skills, or attitudes to other people?
• Are trainees aware that they've changed their behavior?
Kirkpatrick's Four-Level Training Evaluation Model

Level 4: Results
At this level, you analyze the final results of your training. This includes outcomes that you or your organization have determined to be good for business, good for the employees, or good for the bottom line.

When?
If your programs aim at tangible results rather than teaching management concepts, theories, and principles, then it is desirable to evaluate in terms of results.

How?
• Search for evidence.
Kirkpatrick's Four-Level Training Evaluation Model

Level 4: Results

Examples of interview questions:
• What results have you seen since attending this training?
• Please give an example of a success you have achieved since attending this training.
Suchman: Experimental Evaluation Model

The Suchman Experimental Evaluation Model, developed by Edward A. Suchman, is a framework used in social science research to assess the effectiveness of interventions or programs.
This model typically involves several stages:

1. Design: This phase involves planning the intervention or program and designing the evaluation process. Researchers need to define clear objectives, identify the target population, and select appropriate methodologies for data collection and analysis.

2. Implementation: During this stage, the intervention or program is put into action according to the design plan. It's essential to follow the implementation plan closely to ensure consistency and fidelity to the intended intervention.

3. Data Collection: Researchers gather data on various aspects of the intervention, such as its impact on participants, changes in behavior or attitudes, and any other relevant outcomes. This often involves a combination of qualitative and quantitative methods, such as surveys, interviews, observations, or experiments.

4. Analysis: In this phase, researchers analyze the collected data to assess the effectiveness of the intervention. Statistical techniques are commonly used to determine whether any observed changes are statistically significant and to identify patterns or trends in the data (a minimal sketch follows this list).

5. Interpretation: Researchers interpret the findings of the evaluation, considering the implications for theory, practice, and policy. They may also assess the strengths and limitations of the intervention and offer recommendations for future improvements or research.

6. Reporting: Finally, the results of the evaluation are communicated to stakeholders, such as policymakers, practitioners, and the general public. Clear and transparent reporting is crucial to ensure that the findings are understood and can inform decision-making effectively.
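As a minimal illustration of the analysis stage (step 4), the sketch below runs an independent-samples t-test comparing outcome scores between an intervention group and a comparison group. SciPy is assumed available, and all numbers are invented.

```python
from scipy import stats

# Hypothetical outcome scores for intervention and comparison groups.
intervention = [74, 81, 69, 88, 77, 83, 72, 79]
comparison   = [70, 72, 65, 74, 68, 71, 66, 73]

# Independent-samples t-test: is the difference in means significant?
t, p = stats.ttest_ind(intervention, comparison)
print(f"t = {t:.2f}, p = {p:.4f}")
# A p-value below the chosen significance level (commonly 0.05) would
# conventionally be read as evidence that the intervention made a difference.
```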
The Suchman Experimental Evaluation Model provides a systematic approach to evaluating interventions or programs, helping researchers to generate reliable evidence about their effectiveness and impact.
Stufflebeam: CIPP
The CIPP Model, developed by
Daniel Stufflebeam, is a
comprehensive framework used for
evaluating programs or interventions.
The acronym stands for Context,
Input, Process, and Product.
The CIPP Model includes four components:

1. Context Evaluation: This involves understanding the environment in which the program operates. It examines factors such as the needs of the target population, the resources available, and any external influences that may affect the program.

2. Input Evaluation: Input evaluation focuses on the resources invested in the program, including personnel, funding, materials, and technology. It aims to assess whether these resources are adequate and appropriate for achieving the program's objectives.

3. Process Evaluation: Process evaluation looks at how the program is implemented. It examines the activities, procedures, and interactions involved in delivering the program to determine whether they are being carried out as planned and whether they are effective in achieving the desired outcomes.

4. Product Evaluation: Product evaluation assesses the outcomes or results of the program. This includes both intended and unintended outcomes, as well as the overall impact of the program on the target population or the broader community.
By addressing these four components,
the CIPP Model provides a
comprehensive framework for evaluating
programs at various stages of
development and implementation,
helping stakeholders make informed
decisions about program improvement
and future planning.
Proverbs 1:5

“Let the wise hear and increase in learning, and the one who understands obtain guidance.”

Thank you and God bless!
