
Leveraging Learning

in Evaluation

Olivier Serrat

2010

The views expressed in this presentation are the views of the author/s and do not necessarily reflect the views or policies of the Asian
Development Bank, or its Board of Governors, or the governments they represent. ADB does not guarantee the accuracy of the data included
in this presentation and accepts no responsibility for any consequence of their use. The countries listed in this presentation do not imply any
view on ADB's part as to sovereignty or independent status or necessarily conform to ADB's terminology.
Overview

• Introduction to monitoring and evaluation, and its results chain and life cycle
• Introduction to independent evaluation
• Distinction between evaluation for accountability
and evaluation for learning
• Considerations in evaluation capacity
development
• Description of areas of competence for knowledge
management and learning
• Overview of tools for evaluation for learning
Monitoring and Evaluation

Monitoring is the continuous collection of data and information on specified indicators to assess the implementation of a development intervention in relation to activity schedules and expenditure of allocated funds, and progress and achievements in relation to its intended outcome.
• It involves day-to-day follow-up of activities during
implementation to measure progress and identify
deviations.
• It requires routine follow-up to ensure activities are
proceeding as planned and are on schedule.
• It continuously assesses activities and results, answering the question, "what are we doing?"
Monitoring and Evaluation

Evaluation is the periodic assessment of the design, implementation, outcome, and impact of a development intervention. It should assess the relevance and achievement of the intended outcome, the performance of implementation in terms of effectiveness and efficiency, and the nature, distribution, and sustainability of impact.
• It is a systematic way of learning from experience to
improve current activities and promote better planning
for future action.
• It is designed specifically with the intention to attribute
changes to the intervention itself.
• It answers the question, "what have we achieved and
what impact have we had?"
Monitoring and Evaluation

The Results Chain
[Figure: Inputs → Activities → Outputs → Outcome → Impact]
Monitoring and Evaluation

The Results Chain Explained
[Figure: needs and objective frame the chain of inputs, activities, outputs, outcome, and impact; the associated evaluation criteria are relevance, efficiency, effectiveness, and sustainability]
Monitoring and Evaluation

Life Cycle of Monitoring and Evaluation
[Figure: ex-ante (EA), mid-term (MT), and ex-post (EP) evaluations recur across successive, overlapping project cycles]
Monitoring and Evaluation

Challenges and Limits to Management
[Figure: inputs, activities, and outputs are within the direct control of the development intervention's management; the outcome is what the development intervention can be expected to achieve and be accountable for; the impact is what the development intervention is expected to contribute to. Moving up the results chain, control decreases and the difficulty of monitoring and evaluation increases]
Independent Evaluation

• Contributes to decision making throughout the project cycle and in agencies as a whole
• Demands that the relevance and usefulness of evaluation findings to core audiences be enhanced
• Requires that units improve the timeliness of their evaluations, strengthen the operational bearing of the findings, and increase access to and exchange of lessons
Increasing Value Added
from Independent Evaluation
1. Adhere to strategic principles.
2. Sharpen evaluation strategies.
3. Distinguish recommendation typologies.
4. Make better recommendations.
5. Report evaluation findings.
6. Track action on recommendations.
Some Criticisms of
Evaluation for Accountability
• Provides for control of the organization.
• Implies lack of utility.
• Diverts resources.
• Focuses on justification rather than improvement.
• Distorts program activities.
• May promote incentives to lie, cheat, and distort.
• Misplaces accent on control.
• Emphasizes results orientation while maintaining
traditional checks on use of inputs and
compliance with procedures.
Learning

• Learning is the acquisition of knowledge or skills through instruction, study, and experience.

• Learning is driven by organization, people, knowledge, and technology working in harmony—urging better and faster learning, and increasing the relevance of an organization.

• Learning is an integral part of knowledge management and its ultimate end.

[Figure: the continuum from data to information to knowledge to wisdom moves from know-what to know-how to know-why, and from reductionist to systemic to holistic perspectives]


Learning from Experience

• Evaluation for learning can serve as an important foundation block of a learning organization.
• Researchers now recognize learning from experience as the greatest need today and tomorrow.
Evaluation for Accountability and Evaluation for Learning

• Basic Aim. Accountability: to find out about the past. Learning: to improve future performance.
• Emphasis. Accountability: on the degree of success or failure. Learning: on the reasons for success or failure.
• Favored by. Accountability: parliaments, treasuries, media, pressure groups. Learning: development agencies, developing countries, research institutions, consultants.
• Selection of Topics. Accountability: topics are selected based on random samples. Learning: topics are selected for their potential lessons.
• Status of Evaluation. Accountability: evaluation is an end product. Learning: evaluation is part of the project cycle.
• Status of Evaluators. Accountability: evaluators should be impartial and independent. Learning: evaluators usually include staff members of the aid agency.
• Importance of Data from Evaluations. Accountability: data are only one consideration. Learning: data are highly valued for the planning and appraising of new development activities.
• Importance of Feedback. Accountability: feedback is relatively unimportant. Learning: feedback is vitally important.

Source: Adapted from Cracknell, B. 2000. Evaluating Development Aid: Issues, Problems, and Solutions. East Sussex: Sage Publications.
Types of Learning Failure

Preparation
• Failures of intelligence: not knowing enough at the early stages of project formulation, resulting in crucial aspects of the project context being ignored.
• Failures of decision making: drawing false conclusions or making wrong choices from the data that are available, and underestimating the importance of key pieces of information.
Implementation
• Failures of implementation: bad or inadequate management of one or more important aspects of the project.
• Failures of reaction: inability or unwillingness to modify the project in response to new information or changes in conditions that come to light as the project proceeds.
Evaluation
• Failures of evaluation: not paying enough attention to the results.
• Failures of learning: not transferring the lessons into future plans and procedures.

Source: Adapted from Nolan, R. 2002. Development Anthropology: Encounters in the Real World. Boulder, Colorado: Westview Press.
Learning in Nongovernment Organizations: Food for Thought

Who should be learning, and what should they be learning?
• Field Staff: participation in practice; effective empowerment; local-level collaboration with government and other nongovernment organizations; gender dimensions of local development.
• Technical Specialists: good practice in their area of expertise; ways of integrating with other disciplines; how to improve cost-effectiveness; how existing internal and external policies affect performance.
• Operational Managers: what factors make interventions and projects work well or badly, for example, funding conditions; how to be more cost-effective; how to coordinate internally and externally.
• Fund-Raisers and Development Educationalists: principles and insights to be used in negotiation with professional donors; new messages to get across to private contributors; examples of impact and what made things work or fail.
• Leaders: how policy choices and strategies work out in practice; how to make external relationships more effective; how best to exert influence; what environmental factors have had unforeseen effects and must be taken into account.
• Governors: the quality and costs of donors; the degree of stakeholder satisfaction; consistency between mission, strategy, and impact; improving the social standing and credibility of the organization.

Source: Adapted from Fowler, A. 1997. Striking a Balance: A Guide to Enhancing the Effectiveness of Non-Governmental Organizations in International Development. London: Earthscan.
Reinterpreting Work Programs to Emphasize Organizational Learning

[Table template: for each organizational level (corporate, policy, strategy, operations), specify the strategic driver, primary responsibility, user and uses, reporting mechanism, content/focus, and timing]

Note: The strategic drivers might be (i) developing evaluation capacity, (ii) informing corporate risk assessments by offices and departments, (iii) conducting evaluations in anticipation of known upcoming reviews, (iv) monitoring and evaluating performance, (v) critiquing conventional wisdom about development practice, and (vi) responding to requests from offices and departments.

Source: ADB. 2007. Acting on Recommendations and Learning from Lessons in 2007. Manila: ADB. Available: www.adb.org/documents/pers/rpe-oth-2007-15.asp
Making Evaluation Reports Effective

Evidence
• Persuasive Argument: clear purpose; cohesive argument; quality of evidence; transparency of the evidence underpinning policy recommendations (e.g., a single study or a synthesis of available evidence).
• Authority: clear purpose; cohesive argument; quality of evidence; transparency of the evidence underpinning recommendations (e.g., a single study or a synthesis of available evidence).
Context
• Audience Context Specificity: addresses the specific context (e.g., national, local) and the needs of the target audience (e.g., social, economic policy).
• Actionable Recommendations: information linked to specific processes; clear and feasible recommendations on steps to be taken.
Engagement
• Presentation of Evidence-Informed Opinions: presentation of the author's own views about the implications of findings; clear identification of argument components that are opinion based.
• Clear Language and Writing Style: easily understood by educated nonspecialists.
• Appearance and Design: visually engaging; presentation of information through charts, graphs, and photographs.

Source: Adapted from Jones, N., and C. Walsh. 2008. Policy Briefs as a Communication Tool for Development Research. Overseas Development Institute Background Note. May. Available: www.odi.org.uk/publications/background-notes/0805-policy-briefs-as-a-communication-tool.pdf
Evaluation Capacity Development

Capacity is the ability of people, organizations, and society to manage their affairs successfully.

Capacity to undertake effective monitoring and evaluation is a determining factor of aid effectiveness.

Evaluation capacity development is the process of reinforcing or establishing the skills, resources, structures, and commitment to conduct and use monitoring and evaluation over time.
Evaluation Capacity Development

Stronger evaluation capacity will help development agencies to
• Develop as learning organizations
• Take ownership of their visions for poverty reduction, if the evaluation vision is aligned with them
• Profit more effectively from formal evaluations
• Make self-evaluation an important part of their activities
• Focus on quality improvement efforts
• Increase the benefits and decrease the costs associated with their operations
• Augment their ability to change programming midstream and adapt in a dynamic, unpredictable environment
• Build evaluation equity, if they become better able to conduct more of their own self-evaluations instead of contracting them out
• Shorten the learning cycle
Evaluation Capacity Development

In starting to develop evaluation capacity internally, consider key decisions in
• Architecture – Locating and structuring evaluation functions and their coordination.
• Strengthening Evaluation Demand – Ensuring that there is an effective and well-managed demand for evaluations.
• Strengthening Evaluation Supply – Making certain that the skills and competencies are in place, with appropriate organizational support.
• Institutionalizing Evaluations – Building evaluation into policy-making systems.
Competencies for Knowledge
Management and Learning
• Strategy Development
A strategy is a long-term plan of action to achieve a
particular goal.
• Management Techniques
Leadership is the process of working out the right things
to do. Management is the process of doing things
right.
• Collaboration Mechanisms
When working with others, efforts sometimes turn out to
be less than the sum of the parts. Too often, not enough
attention is paid to facilitating effective collaborative
practices.
Competencies for Knowledge
Management and Learning
• Knowledge Sharing and Learning
Two-way communications that take place simply and effectively build knowledge.
• Knowledge Capture and Storage
Knowledge leaks in various ways at various times.
Tools for Evaluation for Learning

Strategy Development
• Linking Research to Practice
• Auditing Knowledge
• Outcome Mapping
• The Most Significant Change Technique
• Learning Lessons with Knowledge Audits
• From Strategy to Practice

Management Techniques
• Output Accomplishment and the Design and Monitoring Framework
• Focusing on Project Metrics
• Value Cycles for Development Outcomes
• Understanding Complexity
• The Perils of Performance Measurement

Collaboration Mechanisms
• Appreciative Inquiry
• Learning in Strategic Alliances
Tools for Evaluation for Learning

Knowledge Capture and Storage
• Conducting Exit Interviews
• Monthly Progress Notes
• Assessing the Effectiveness of Assistance in Capacity Development
• Showcasing Knowledge
• Harvesting Knowledge
• The Critical Incident Technique

Knowledge Sharing and Learning
• Conducting After-Action Reviews
• Posting Research Online
• Storytelling
• Identifying and Sharing Good Practices
• Disseminating Knowledge Products
• Learning from Evaluation
• Asking Effective Questions
• Embracing Failure
• Enriching Policy with Research
Organizational Competencies for Knowledge Management and Learning

[Table template: for each level, assess the organization's competence in strategy development, management techniques, collaboration mechanisms, knowledge sharing and learning, and knowledge capture and storage]
Learning from Evaluation

[Figure: a learning loop linking nine questions: How is the need for the information generated? How is learning generated in the organization? Is the information relevant and useful? Are users involved in generating the information? Is the information easily accessible? Is the information channeled to the right people at the right time? Are users capable of using the information? Are users open to the information? Is the information used in decisions?]

Source: Adapted from Scanteam International. 1993. Internal Learning from Evaluations and Reviews. Report No. 193. Oslo: Royal Norwegian Ministry of Foreign Affairs.
Key Learning Points

• Monitoring and evaluation are closely interrelated.


• The role of independent evaluation has grown in
recent years. It is now the focus of efforts to raise its
contribution to development effectiveness.
• Evaluation for accountability should be supplemented
by evaluation for learning.
• Many shortcomings stem from learning failures.
• Evaluation capacity development is an intuitive area
for investment.
• Notions of knowledge management have a role to play
in evaluation.
Knowledge Management Center

Olivier D. Serrat
Principal Knowledge Management Specialist
Knowledge Management Center
Regional and Sustainable Development Department
Asian Development Bank
knowledge@adb.org
www.adb.org/knowledge-management/
