RAND Corporation is collaborating with JSTOR to digitize, preserve and extend access to
Challenges in Program Evaluation of Health Interventions in Developing Countries
Using the log frame approach, time- and resource-heavy impact assessments are completed only upon prior demonstration of sufficient inputs and outputs to plausibly generate the desired outcomes. Most projects do not warrant an evaluation of the effectiveness of the final impact of their activities because the causal chain is broken at some point.
Three stages of post-implementation evaluation can be conceived using a prospective evaluation design:
Stage One
The first stage involves process evaluation, which assesses the content, scope, or coverage of the intervention, together with the quality and integrity of implementation. If the process evaluation finds that the project is not being implemented as planned (the right material and process inputs are missing and/or the desired outputs are insufficiently in evidence), there is little reason to proceed with the evaluation of short-term outcomes; instead, improvements should be made to the intervention design (Andrews, 2004).
Stage Two
If progress is made at the process-evaluation stage, then the next phase—short-term outcome evaluation—can be attempted. For example, in the UNAIDS framework for indicator selection for HIV/AIDS prevention programs, HIV-related behaviors (along with knowledge, attitudes, and beliefs about HIV) have been considered to be outcomes. To be considered an outcome evaluation of the intervention's effectiveness, the link to causality would need to be well defined and the nonintervention factors accounted for. If no positive changes occur in the short-term outcome measures, such as risky sexual or drug-taking behaviors, there is little point in going to the third phase of evaluation and looking at longer-term impact measures, such as HIV or sexually transmitted infection (STI) prevalence in the target population. (There would, however, be advantages to further evaluation to determine why the intervention did not work.) If short-term outcome indicators are plausibly changing due to the intervention, an impact evaluation may be warranted, one that could attribute those changes to the intervention.
Stage Three
In a well-designed intervention with pre-existing trend data and a baseline survey, the monitoring of program data and surveillance of disease incidence or associated health indicators during the life of the intervention should provide enough information to fulfill the needs of the program-logic model. Changes in the trend data for disease prevalence and mortality when process and outcome indicators have been positive may be sufficient to suggest whether the intervention makes a difference.
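The stage-gated logic described above—proceed to the next, more expensive evaluation only if the previous stage shows progress—can be sketched as a small decision routine. This is an illustrative sketch only; the function, field names, and status messages below are assumptions for demonstration and are not part of the monograph's framework.

```python
from dataclasses import dataclass

@dataclass
class StageResult:
    name: str
    passed: bool
    note: str

def run_staged_evaluation(process_ok: bool,
                          outcomes_improved: bool,
                          impact_criteria_met: bool) -> list[StageResult]:
    """Walk the three post-implementation stages, stopping at the first failed gate."""
    stages = []

    # Stage One: process evaluation (content, coverage, implementation integrity).
    stages.append(StageResult("process", process_ok,
                              "proceed" if process_ok
                              else "revise intervention design"))
    if not process_ok:
        return stages

    # Stage Two: short-term outcome evaluation (e.g., behavior change).
    stages.append(StageResult("outcome", outcomes_improved,
                              "proceed" if outcomes_improved
                              else "investigate why it did not work"))
    if not outcomes_improved:
        return stages

    # Stage Three: impact evaluation (e.g., HIV/STI prevalence trends).
    stages.append(StageResult("impact", impact_criteria_met,
                              "effectiveness study warranted" if impact_criteria_met
                              else "continue monitoring trend data"))
    return stages
```

A failed process evaluation, for instance, short-circuits the sequence: the costly impact stage is never reached, mirroring the log-frame rationale that a broken causal chain makes a final-impact evaluation unwarranted.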
The final phase of the evaluation then becomes the effectiveness study when the criteria for an impact evaluation are met. The advantage of such "prospective" impact evaluations—which are part of the design of the project itself from the beginning—is that the evaluation is a dynamic process that does not have to take the program implementation as given. If a process evaluation identifies problems with how the project is being implemented, the design and implementation can be changed to modify the processes, inputs, and outputs that affect the ultimate outcomes. Another advantage of the prospective evaluation is that it can begin to trace program effects regardless of where in the evolution of the health care intervention the evaluation begins.
There are, however, several potential disadvantages to "prospective" impact evaluations. First, there is a trade-off between making changes to improve the intervention and understanding why the intervention worked or did not work. Mid-course modifications in pro-
Figure 2.1
Lineal Cause-and-Effect Model for System-Impact Assessments
[Figure: Health program (prevention or treatment) → Health outcome → Overall development]
Figure 2.2
Conceptual Framework for Evaluating an HIV/AIDS Intervention
[Figure: Prevention activities (education, condom distribution, STI treatment) and treatment activities (providing drugs, training staff and setting up clinics) act on HIV infection, which in turn affects development goals: economic (workforce), political infrastructure, and health infrastructure]