MCC and Impact Evaluations: A Focus on Results

Franck Wiebe
Chief Economist, MCC
wiebef@mcc.gov

January 29, 2009
Outline of Comments
 MCC’s approach to Aid Effectiveness

 MCC’s investment in Impact Evaluation

 Limits to Impact Evaluation

 Lessons Learned

 Demonstration of IE website
The MCC Model
 Established 5 years ago to deliver aid differently
 Explicit focus on reducing poverty: raising incomes of the poor
 Reflecting generally accepted “best practice” principles for aid
effectiveness

 Fundamental principles of MCC
 Growth matters → Core element of poverty reduction strategy
 Good governance matters → 17 indicators determine eligibility
 Country ownership matters → align aid with country priorities
 Country partners develop proposal with broad participation
 Country partners responsible for implementing the program
 Results matter
 Objective measures of impact
 Transparent use of information in decision-making
Enhancing Aid Effectiveness
requires a Focus on Results
 What are we trying to do?

“Aid effectiveness is the effectiveness of development aid
in achieving economic development (or development targets).”

 Thank you, Wikipedia! This lack of clarity is part of the problem.

 “Reducing poverty through growth” creates a different context
 Income-metric directly linked with measured poverty
 Allows project appraisal across sectors based on same objective
 Enables MCC to avoid the “some positive impact” trap and address the core
question: “does the size of the impact justify the use of funds in this way?”
 Importantly, the absence of evidence enables persistence of bad activities
The “Cold Chain” of
Aid Effectiveness at MCC
MCC approach designed around a presumption of success (given the process) but a requirement of proof, both before and after

 Pre-Investment Analyses
 Constraints Analysis → a form of Growth Diagnostic
 Benefit-Cost Analysis
 Pre-investment estimate of impact based on evidence
 Incorporates promised institutional and policy reforms
Enables a comparison across sectors (a stylized sketch follows this list)

 Monitoring and Assessment in Implementation
 Baseline surveys
 Implementation performance against BCA expectations

 Rigorous Impact Evaluations, as appropriate
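
The benefit-cost comparison noted above can be made concrete with a small calculation. What follows is a minimal, hypothetical sketch of comparing two candidate activities from different sectors on the same income metric, using net present value against a hurdle rate; the activity names, net-benefit streams, and the 10% rate are placeholder assumptions, not MCC figures.

```python
# Hypothetical sketch: comparing candidate activities across sectors on a
# common income metric via net present value (NPV) at a hurdle rate.
# All numbers below are illustrative assumptions, not MCC estimates.

def npv(rate, cash_flows):
    """Net present value of a stream of annual net benefits (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Net benefit streams in $ millions (costs negative, income gains positive).
candidates = {
    "roads_rehab": [-50, -30, 10, 15, 20, 25, 25, 25, 25, 25],
    "farmer_training": [-20, -10, 5, 8, 10, 10, 10, 10, 10, 10],
}

HURDLE_RATE = 0.10  # illustrative economic hurdle rate

for name, flows in candidates.items():
    value = npv(HURDLE_RATE, flows)
    verdict = "passes" if value > 0 else "fails"
    print(f"{name}: NPV at {HURDLE_RATE:.0%} = {value:,.1f}M ({verdict} the hurdle)")
```

Because both streams are expressed in the same income terms, the resulting NPVs (or the implied economic rates of return) can be ranked directly against each other, which is what allows appraisal across sectors.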
Impact Evaluations at MCC
 IEs provide independent and rigorous measurement of MCC-funded programs.
 Explicit statement of the counterfactual – what would have happened w/o project?
Use of a control group to document the counterfactual, with randomized assignment where possible (sketched after this list)

 MCC uses IEs to provide highly credible evidence for:
 testing implementation approaches
 scaling up programs that work
 making future funding decisions

 MCC contracts IEs for roughly 15% of activities
 Independent third parties contracted by MCC or MCAs
 10 IEs under contract, 11 more without randomization, 26 under consideration
 Method depends on feasibility, cost, potential learning
MCC seeks to use the most rigorous method possible
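
As a concrete illustration of the counterfactual logic above, here is a minimal sketch, using entirely simulated and hypothetical data, of how randomized assignment lets a control group stand in for what would have happened without the project, so impact can be estimated as a difference in mean outcomes.

```python
import random
import statistics

random.seed(42)  # fix the illustration's randomness

# Hypothetical eligible population, randomly split into treatment and control.
N = 2000
units = list(range(N))
random.shuffle(units)
treated_ids = set(units[: N // 2])

def outcome(is_treated):
    """Simulated household income: noisy baseline plus a true effect of 120."""
    return random.gauss(1000, 300) + (120 if is_treated else 0)

treated_y = [outcome(True) for u in units if u in treated_ids]
control_y = [outcome(False) for u in units if u not in treated_ids]

# Because assignment was random, the control-group mean approximates the
# counterfactual for the treated group; the difference estimates impact.
impact = statistics.mean(treated_y) - statistics.mean(control_y)
se = (statistics.variance(treated_y) / len(treated_y)
      + statistics.variance(control_y) / len(control_y)) ** 0.5

print(f"Estimated impact: {impact:.1f} (SE {se:.1f})")
```

In practice the estimate would come from the baseline and follow-up surveys noted earlier, with adjustments for attrition and covariates, but the identifying idea is the same.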
Limits to Impact Evaluation
 When not to consider an IE
 When a lot of relevant IE evidence already exists
 When activities have purely unquantifiable aims
 When IE is technically not feasible
 When evidence of impact will not shape future design decisions
 When implementers/recipients have more authority than the funder

 Possible Explanations
 Institutional Politics
 International Politics

 But these should be limited
 A lot of “proven successes” remain untested
 How much effort should be devoted to unquantifiable outcomes?
 Rarely are technical issues insurmountable (except for timing)
 Despite reluctance, rarely are decision-makers impervious to evidence
 Any discussion of “limits to IE” needs to close with “… but more are needed”
Lessons Learned (1)
 Everyone has prior beliefs and many test them with reluctance
 Consider proof for land titles, scholarships, training, empowerment, BDS, MFI, ICT
 Strong and sometimes reasonable incentives exist within the development community to fund
and implement ineffective programs

 Something for these people is better than nothing for them (country bias)
 This thing is better than that thing (sector bias)
 Professional reputations depend on it working this way (personal bias)

Evaluators need to be able to answer a core question: Did this program have a sufficiently large
impact to justify the use of scarce resources?
 Answer will only be compelling if the design reflects sincere skepticism – because we now
operate in an external environment of widespread and generally well-founded skepticism

 Starting with the Null Hypothesis
 Expect to find no cost-effective impact and prove the opposite if you can
 Control population needed to understand changes not caused by program
Need to control for all potential sources of bias
 Impose quantitative assessments that can be reproduced (including qualitative variables) and
demonstrate that alternative explanations were tested
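
The “start with the null hypothesis” discipline above maps onto a standard two-sample test: assume zero impact and ask whether the treatment-control difference is large enough to reject that assumption. The sketch below is a generic, hypothetical illustration; NumPy and SciPy are assumed tooling for the example, not anything MCC prescribes, and the data are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder outcome data for a hypothetical evaluation.
control = rng.normal(loc=100.0, scale=20.0, size=500)  # no program
treated = rng.normal(loc=103.0, scale=20.0, size=500)  # program plus possible effect

# Null hypothesis: the program had no impact (equal means).
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
effect = treated.mean() - control.mean()

print(f"Estimated effect: {effect:.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")

# Reject the null only on strong evidence; otherwise report "no demonstrated
# impact" rather than assuming success. Even a rejected null still leaves the
# cost-effectiveness question: is the effect large enough to justify the funds?
if p_value < 0.05:
    print("Reject H0: evidence of impact (size must still justify the cost).")
else:
    print("Fail to reject H0: no demonstrated impact.")
```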
Lessons Learned (2)
 In IE, Evidence matters, not Capacity Building
Too often, activities are undermined by multiple objectives
If agencies need the evidence, they must use the best methods
IE work will always include opportunities to build capacity, too
Seek to use the most rigorous method possible; sacrifice rigor only with
great reluctance and increasing skepticism of the results

 Transparency is critical – probably essential
Provides a history and documents failures of IEs
 Rationale, workplan, timeline statements build commitment
 Public agreements reduce risk of cold feet
Lessons from Poor Evaluations
(to remain anonymous)

Impact of Globalization on Women

Rural ICT

Private Sector Promotion
MCC’s Open Access Approach
to Impact Evaluations

 MCC is posting public documentation related to IEs on its website

http://www.mcc.gov/programs/impactevaluation/index.php

MCC Impact Evaluation Practice Group (20+ members) is eager to encourage
broader external technical exchange on current practices
 Discuss strategy and evaluation methodologies
 Share lessons learned
 Contact at impact-eval@mcc.gov
MCC Evaluation Example: Burkina Faso girl-friendly schools (Threshold Program)
 USAID administered $12.9 million for 132 schools
 IE by Mathematica using regression discontinuity
 Significant improvement in enrollment (15-20 ppt) and scores (0.4 SD)
 Improvement in enrollment higher for girls (5 ppt)

 What we learned:
 Donor funds could effectively accelerate government construction program
 New schools, materials would significantly increase attendance and scores
 Investments reduced gender gap by half
Regression Discontinuity Design
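
This slide presumably displayed the regression discontinuity chart from the Burkina Faso evaluation. As a stand-in for the missing figure, here is a minimal sketch of the generic RDD logic: units scoring above an eligibility cutoff receive the program, and the impact is read off as the jump in outcomes at the cutoff. The cutoff, bandwidth, effect size, and data below are simulated placeholders, not the Mathematica analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated running variable (e.g., a village eligibility score) and cutoff.
n, cutoff, true_jump = 3000, 50.0, 15.0
score = rng.uniform(0, 100, n)
treated = score >= cutoff

# Outcome (e.g., enrollment rate) trends smoothly in the score, with a jump
# at the cutoff for treated units plus noise.
outcome = 20 + 0.3 * score + true_jump * treated + rng.normal(0, 5, n)

# Local linear fit on each side of the cutoff within a bandwidth, then
# compare the two fitted values at the cutoff itself.
bandwidth = 10.0
left = (score >= cutoff - bandwidth) & (score < cutoff)
right = (score >= cutoff) & (score <= cutoff + bandwidth)

fit_left = np.polyfit(score[left], outcome[left], 1)
fit_right = np.polyfit(score[right], outcome[right], 1)

estimate = np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)
print(f"RDD estimate of the jump at the cutoff: {estimate:.1f} (true value: {true_jump})")
```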
Thank You for the Invitation!

Follow-up Questions and Comments
are Welcome:

wiebef@mcc.gov