
PROFESSOR: In addition to the process evaluation,
the collected outcome data was also
used to estimate the impact of the program.
To estimate the impact, we compared the average outcomes
of interest, such as attendance, between the two groups.
Recall, due to random assignment,
any difference in outcomes between the two groups
can be attributed to the program, that
is, the introduction of the biometric tracking system.
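To make that comparison concrete, here is a minimal, purely illustrative sketch in Python of a difference-in-means estimate under random assignment. It is not the study's actual analysis code: the simulated presence records, sample sizes, and random seed are assumptions for illustration, and only the rounded group averages (about 37% and 40%) come from the lecture.

```python
# Illustrative sketch: estimating program impact as the difference in mean
# attendance between randomly assigned treatment and control groups.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical presence records from random checks (1 = present, 0 = absent),
# simulated so the group means roughly match the averages quoted in the lecture.
control = rng.binomial(1, 0.37, size=1000)    # control-group checks
treatment = rng.binomial(1, 0.40, size=1000)  # treatment (biometric) group checks

# Because assignment was random, the difference in group means is an
# unbiased estimate of the program's average impact on attendance.
impact = treatment.mean() - control.mean()
print(f"Control attendance:   {control.mean():.1%}")
print(f"Treatment attendance: {treatment.mean():.1%}")
print(f"Estimated impact:     {impact:+.1%} (percentage points)")
```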
Here is what we found.
First, we found a small increase in the attendance
of medical staff.
On the y-axis in this graph, you see the percentage
of health staff present on average
during our random checks.
This graph includes doctors, nurses, lab techs,
and pharmacists.
The blue bar shows that the average attendance
of the health staff in the control group was 37%.
And the orange bar shows that the attendance
of health staff in the treatment group during our study
was about 40%.
So there was a small 3 percentage point increase in the attendance
of medical staff in the treatment group versus the comparison group,
also known as the control group.
But the difference is very small.
And the overall attendance level remained
well below what the government had expected
this program to achieve.
Digging a little bit deeper, we find
that this small increase is due to nurses,
lab techs, and pharmacists.
See the left graph, where attendance was 45% versus 40%.
But if you now look at the right-hand-side graph,
you see that, disappointingly, we
do not find any effect on the attendance of doctors.
In fact, doctor attendance was 29%
in the treatment group versus 31% in the comparison
group; that is, doctors in the treatment areas
were no more likely to be present than doctors
in the comparison group.
Looking at data from different follow-ups,
or random checks, separately, we also
see that the effect on attendance
appears in the fifth and the sixth follow-up
but goes away after that.
So it seems that even the small effect on all health care
workers, on average, fades away over time.
The government was fully ready to scale this program up
throughout the 2,200-plus primary health centers across
the state, including a massive rollout of trainings
and motivation camps.
Yet, since the impact evaluation showed that the program did not
improve doctor attendance, the government decided to, in fact,
scale down the program.
This saved at least US $1 million of taxpayer money
in just the first year, and in fixed costs alone,
compared to a total evaluation cost of about $200,000.
Kudos again to the government for piloting this program
and evaluating it, rather than scaling it out in one shot.
We just went through one example of how results from an impact
evaluation informed a government's decision to scale down a program.
But this is, in fact, not the usual story you will hear,
and that is why I thought this might be
an interesting example to start with.
There are, in fact, numerous examples
in the opposite direction, of how
programs that have been evaluated and found
to be impactful have, in fact, been scaled up,
including programs that were originally
evaluated in one context, that have been adapted and scaled
in other contexts.
And even that is just one pathway from evidence
that comes from impact evaluations to policy change.
I will not go into detail here but refer you
to our website linked in the lecture
notes for other pathways.