
Program Evaluation

Objectives

• Describe why program evaluation is important.
• Identify and describe the types of program evaluation.
• Define evaluation terms.
• Design a program evaluation instrument.
Introduction

• Evaluating programs is a critical step in public health initiatives.
• Without evaluating both the process and the outcomes of a program, its benefits cannot be shared and its effectiveness remains unknown.

PURPOSE OF PROGRAM EVALUATION


• Public health professionals strive to improve health.
• They design programs and interventions, such as
• tobacco cessation classes,
• fluoride mouth rinse programs, and
• sealant programs.
• Program evaluation extends the common-sense practice of checking whether such efforts work to organized settings and programs.
• The most important purpose for program evaluation is the contribution to the provision of quality services to
people in need.
• Evaluation is important for several additional reasons: it develops good practice, makes the best use of scarce resources, provides feedback to staff and participants, and shapes policy development.3
• The results of the interventions are measured against the program objectives.4
• The evaluation answers whether the program was successful in reducing or eliminating the identified need or problem.
Questions one might ask about programs are as follows:
• Did the program accomplish what it was designed to do?
• Did the program work better than other similar programs?
• Did the program reduce health costs?
• How could the program be improved?
• Should the program be continued?
• Does the program merit continued funding?
• Should the program be expanded?
Types of Evaluation (EVALUATION TIMING)

• Formative evaluation (also referred to as process evaluation) occurs during the implementation process, and
• Summative evaluation (also referred to as outcome evaluation) occurs after the intervention.

• Formative evaluation
• Formative evaluations help point out problems and identify opportunities to make improvements.
• This type of program evaluation is tied to routine operations with practical, ongoing measurement of processes and outcomes
involving program staff and stakeholders.

In the implementation of a school sealant program the evaluator would want to know the answers to questions such as:
• How many children are being served?
• How many permission slips have been returned?

Formative evaluation may answer several questions, such as:


• What is the nature of the people being served?
• Is the program operating as expected?
• Do the activities match the plans for the program?
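The process questions above can be reduced to a few simple indicators. The sketch below is a hypothetical example; the function name, figures, and indicator names are invented for illustration, not taken from the source.

```python
# Hypothetical formative-evaluation sketch: track process metrics for a
# school sealant program while it is running. All figures are invented.

def process_metrics(enrolled, permission_slips_sent,
                    permission_slips_returned, children_served):
    """Return simple process indicators used in formative evaluation."""
    return {
        # Share of permission slips that came back
        "permission_return_rate": permission_slips_returned / permission_slips_sent,
        # Share of enrolled children actually served by the program
        "participation_rate": children_served / enrolled,
    }

metrics = process_metrics(enrolled=300, permission_slips_sent=300,
                          permission_slips_returned=240, children_served=210)
print(metrics["permission_return_rate"])  # 0.8
print(metrics["participation_rate"])      # 0.7
```

Tracked at regular intervals, such indicators show whether the program is operating as planned and where mid-course corrections are needed.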

• Summative evaluation
• In a summative evaluation, the results of the program are compared with the goals and objectives and used to
determine the impact of the program on the community’s health.
• This is like reevaluating an individual patient after treatment to determine the effectiveness of scaling on the
health of the tissue.
Summative evaluation answers questions such as:
• Has oral disease been reduced?
• Has tobacco use changed?
• How many dental sealants have been placed?
• What are the retention rates of sealants at various intervals?
• How many sealants failed?
• How many of those teeth decayed?
• How much did a sealant program cost and how does this compare to the cost of restorative treatment if the program did not exist?
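Two of the summative questions above, retention rates and cost comparison, lend themselves to simple calculations. The sketch below is a hypothetical example; all counts and dollar amounts are invented for illustration.

```python
# Hypothetical summative-evaluation sketch: sealant retention at a
# follow-up interval and a cost comparison with restorative treatment.
# All counts and costs are invented.

def retention_rate(placed, retained_at_followup):
    """Fraction of placed sealants still retained at follow-up."""
    return retained_at_followup / placed

def cost_comparison(sealants_placed, cost_per_sealant,
                    decayed_without_program, cost_per_restoration):
    """Program cost versus the restorative cost averted by the program."""
    program_cost = sealants_placed * cost_per_sealant
    averted_cost = decayed_without_program * cost_per_restoration
    return program_cost, averted_cost

print(retention_rate(placed=500, retained_at_followup=450))  # 0.9
program, averted = cost_comparison(500, 40.0, 150, 180.0)
print(program, averted)  # 20000.0 27000.0
```

With these invented figures the program would cost less than the restorative treatment it averts, which is exactly the kind of comparison a summative evaluation reports to stakeholders.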

• Summative evaluation helps all interested parties make decisions about the value and possible continuance of
programs.
EVALUATION FOCUS

• The most appropriate focus for most evaluations is on improvement of processes, implementation, efficiency, or anything that makes a program more organized and cost-effective.
• Scheetz and Gholston use an evaluation model that asks several questions (Box 7-1).
• A combination of quantitative and qualitative methods can be used to specify and measure identifiable
objectives.
• Together, they lend numbers and traits to tell stakeholders whether, and by how much, a program had an
impact.
• Qualitative methods are helpful when long-term changes are expected. They are more likely to tell us why something changed and what factors were involved, and they contribute to program improvement more readily than quantitative methods.
• For example, quantitative methods may tell us that a certain proportion of people in a population received
fluoride varnishes.
• Qualitative methods could tell us what people liked or did not like about the product or process and lend
information that leads to better processes, acceptance, and outcomes.
ASSIGNING VALUE TO PROGRAM ACTIVITIES

• Programs are judged on several criteria, including their merit or quality, worth or cost-effectiveness, and significance or importance.
• A program can have merit but not be worth its cost.
• Before assigning value and making judgments regarding programs, the following questions must be answered:
• What will be evaluated?
• What aspects of the program will be considered when judging a program’s performance?
• What standards must be reached for the program to be considered successful?
• What evidence will be used to indicate how the program has performed?
• What conclusions regarding program performance are justified by comparing the available evidence to the selected standards?
• How will the lessons learned be used to improve public health effectiveness?
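Comparing the available evidence to the selected standards can be thought of as checking each indicator against its target. The sketch below is a hypothetical illustration; the indicator names and target values are invented, not from the source.

```python
# Hypothetical sketch of judging program performance: each indicator has
# a target ("standard"), and the program is considered successful on an
# indicator when the measured value ("evidence") meets that target.
# Indicator names and targets are invented.

def judge(evidence, standards):
    """Map each indicator to True if the evidence meets the standard."""
    return {name: evidence[name] >= target
            for name, target in standards.items()}

standards = {"sealant_retention_12mo": 0.85, "participation_rate": 0.60}
evidence = {"sealant_retention_12mo": 0.90, "participation_rate": 0.55}
print(judge(evidence, standards))
# {'sealant_retention_12mo': True, 'participation_rate': False}
```

The conclusions justified by such a comparison then feed the final question above: how the lessons learned will be used to improve public health effectiveness.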
FRAMEWORK FOR PROGRAM EVALUATION
• The Centers for Disease Control and Prevention describes six steps in public health program evaluation.
Table 7-1 summarizes those steps.

• 1. Engage Stakeholders
• 2. Describe the Program
• 3. Focus the Evaluation Design
• 4. Gather Credible Evidence
• 5. Justify Conclusions
• 6. Ensure Use and Share Lessons Learned
Understanding the Standards: 1. Utility, 2. Feasibility, 3. Propriety, 4. Accuracy

• Utility ensures that the information needs of intended users are met. Who needs the evaluation findings? What do the users of the evaluation need? Will the evaluation provide relevant (useful) information in a timely manner?
• Feasibility ensures that the evaluation is realistic, prudent, diplomatic, and frugal. Are the planned evaluation activities realistic given the time, resources, and expertise at hand?
• Propriety ensures that the evaluation is conducted legally, ethically, and with due regard for the welfare of those involved and those affected. Does the evaluation protect the rights and welfare of the individuals involved? Does it engage those most directly affected by the program and by changes in the program, such as participants or the surrounding community?
• Accuracy ensures that the evaluation reveals and conveys technically accurate information. Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results?
Conclusion
• Evaluation should be seen as an integral component of any program.
• An evaluation plan needs to be determined at the beginning of program development, so that data collection for this purpose is built into project implementation.
• Information on the effectiveness of a program will help identify any problems in running it.
• The aim of the program must be reflected in the evaluation plan and in the performance indicators selected; this will help decide how best to carry out the evaluation.
• Results of the evaluation should be shared with the appropriate parties and used in planning future programs.
