Monitoring, Evaluation, Reporting, and Learning (MERL) for Peacebuilding Programs
Pact is a promise of a better tomorrow for all those who are poor and marginalized. Working
in partnership to develop local solutions that enable people to own their own future, Pact
helps people and communities build their own capacity to generate income, improve access to
quality health services, and gain lasting benefit from the sustainable use of the natural
resources around them. At work in more than 30 countries, Pact is building local promise
with an integrated, adaptive approach that is shaping the future of international
development. Visit us at www.pactworld.org.
September 2016
Disclaimer:
Portions of this module have been made possible by the generous support of the American
people through the U.S. Agency for International Development (USAID). The entirety of the
contents of this module are the responsibility of Pact and do not necessarily reflect the views
of USAID or the United States Government.
Recommended citation:
Pact. 2016. Monitoring, Evaluation, Reporting, and Learning (MERL) for Peacebuilding
Programs. Washington, D.C., U.S.: Pact.
Contact:
Contents
Abbreviations and Acronyms ............................................................................................ i
Acknowledgements .......................................................................................................... ii
Introduction ..................................................................................................................... 1
Chapter 1: Overview of MERL Principles ......................................................................... 2
What is MERL? ................................................................................................................................. 2
What is peacebuilding? .................................................................................................................... 3
Why is MERL important for peacebuilding? .................................................................................. 3
Definitions ........................................................................................................................................ 4
Who is on a MERL team? ................................................................................................................. 8
Key Reasons for Developing Quality MERL Systems ..................................................................... 8
MERL in the Project Cycle ............................................................................................................. 10
MERL in Summary .......................................................................................................................... 11
Chapter 2: Peacebuilding Approaches: Defining What Peace Looks Like ....................... 13
What are peacebuilding approaches? ............................................................................................ 13
Approach 1: Incentives for Peace ................................................................................................... 14
Approach 2: Addressing Root Causes of Injustice ........................................................................ 14
Approach 3: Addressing Individual Attitudes and Relationships and Building Trust ............... 14
Chapter 3: Situation Analysis .......................................................................................... 16
What is situation analysis?............................................................................................................. 16
Types of Situation Analysis ............................................................................................................ 16
Disseminating Situation Analysis Findings .................................................................................. 19
Chapter 4: Theories of Change and Developing Project Goals........................................22
What are Theories of Change? ....................................................................................................... 22
Theories of Change in the Context of Peacebuilding Programs ................................................... 23
Develop Your Peacebuilding Project’s Theory of Change ............................................................ 25
Develop Project Goals and Outcomes from Your Theory of Change ........................................... 26
Tying Situation Analysis and Theory of Change to MERL System Design ................................. 28
Chapter 5: Conceptual Frameworks and Assumptions ...................................................32
What are conceptual frameworks? ................................................................................................ 32
Advantages and Challenges of Using Conceptual Frameworks ................................................... 33
The Results Framework ................................................................................................................. 34
Logical Frameworks ....................................................................................................................... 39
Basic Logframe Outline (Hierarchy of Objectives) ....................................................................... 39
The Role of Assumptions ............................................................................................................... 40
Chapter 6: Indicators ..................................................................................................... 43
Overview ......................................................................................................................................... 43
How to Develop Indicators ............................................................................................................ 45
Qualities of a Good Indicator ......................................................................................................... 47
Indicator-Related Terms ................................................................................................................ 48
Indicator Definitions ...................................................................................................................... 49
Collecting Indicator Data ............................................................................................................... 49
Chapter 7: Monitoring ....................................................................................................52
What is monitoring? ....................................................................................................................... 52
Conflict and Context Monitoring ................................................................................................... 53
Implementation Monitoring .......................................................................................................... 54
Assumption Monitoring ................................................................................................................. 56
Monitoring Plans ............................................................................................................................ 57
Acknowledgements
The initial draft of this module was compiled by Lynn McCoy, Hannah Kamau, and Margaret
Elise during their time with Pact.
Special thanks are given to Pact’s PEACE III project MERL team—Jacqueline Ndirangu,
Lauren Serpe, Josiah Imbayi Mukoya, Michael Kahindi, and Christopher Kinyua—for
customizing this module for use in Pact Kenya’s PEACE III program and for broader
publication to share with other peacebuilding programs.
Many other Pact staff contributed to the development of the peacebuilding MERL module.
Gratitude goes to Nanette Barkey (technical oversight), Leighton Clark (technical
contributions), Maggie Dougherty (graphic design), Rachel Elrom (editing and layout), and
Mason Ingram (technical contributions).
Introduction
Robust monitoring, evaluation, reporting, and learning (MERL) are critical components of
successful programming. The MERL components enable program stakeholders to monitor
progress and evaluate the achievement of expected results. Reporting processes and timelines
should be clearly defined and tailored to meet the needs of key audiences and stakeholders,
and provision should be made for the program to continually reflect and learn from
experiences gained during implementation.
Measuring the success of peacebuilding programs poses specific challenges that are unique to
this program area. This module was developed to guide PEACE III local program partners—
peacebuilding practitioners—through the development and implementation of effective and
practical MERL systems for their projects. This five-year cross-border peacebuilding program
is implemented by Pact in partnership with Mercy Corps and a range of local partners with
activities in Ethiopia, Kenya, Somalia, South Sudan, and Uganda. PEACE III aims to
strengthen cross-border conflict management in the Horn of Africa and is pursuing two
related objectives: 1) to strengthen local cross-border conflict management and 2) to improve
the responsiveness of regional and national institutions to cross-border conflict.
This manual was created to support and provide examples to peacebuilding practitioners and
is an addition to Pact’s existing MERL Modules.1 Other useful examples of MERL training
manuals for peacebuilding programs exist. This module does not seek to replicate those
manuals, but rather draws on them and integrates their expertise here. Parts of this manual
were also drawn from Pact’s MERL Modules but customized with practices and examples
relevant for peacebuilding programs. It is the authors’ hope that this module can serve as an
introduction to MERL for peacebuilding practitioners and can point them to other relevant
resources in the field. Each chapter begins with an outline of the learning objectives, includes
learning activities throughout the chapters, and ends with a summary of key points and
learning.
1 Pact has developed four core training modules to guide MERL staff and to provide a framework for Pact’s
partners on M&E basics. These are: Module 1: Building Basic Monitoring and Evaluation Systems; Module 2: Field
Guide to Data Quality Management; Module 3: Field Guide for Evaluation; Module 4: Mobile Technology
Handbook. All modules can be found in Pact’s Resource Library: http://www.pactworld.org/library.
Chapter 1:
Overview of MERL Principles
Chapter Objectives
In this chapter you will learn:
The four key interrelated processes found in a monitoring, evaluation, reporting, and learning
(MERL) system
Key reasons for having a high quality MERL system
How MERL is embedded in the project cycle
Learning Activities
Defining peacebuilding for your project
What do you want to achieve with your peacebuilding project’s MERL system?
Checking and improving the project cycle
What is MERL?
MERL is an expanded version of the commonly used abbreviation M&E. The “M” and “E” are the
same: monitoring and evaluation. The “R” and the “L” stand for reporting2 and learning.
Pact uses this more-comprehensive terminology to emphasize that the four components are
inherently linked. Without reporting on and learning from results, monitoring and evaluating
programs is pointless. M&E as a standalone activity is like discovering that the brakes on your
car do not work, but not telling anyone or getting them fixed. Learning about something is
only useful if you apply what you learn.
MERL is:
A very important component of effective peacebuilding
A way for you to learn about what in your program works and what does not work
Even more important in conflict-affected development initiatives than in stable environments: As the conflict and environment change, so must peacebuilding activities and foci. MERL systems provide key insights into what has changed and what needs to be changed.
Ideally conceived at the same time as your peacebuilding approaches are developed; however, even if you are already in the midst of program activities, you can still develop useful MERL tools to help guide, improve, and assess your program
2Of note, the “R” in MERL also can represent “Research;” however, for the purposes of this manual, the “R” will
represent “Reporting.”
MERL is not:
A system separate from your programs and projects; it is closely interwoven with your
project planning and implementation
Just for funder reporting
Meant to be a burden to program staff: Even when certain MERL activities are required
by funders and parent organizations, your team should always work with them to develop
processes that are mutually beneficial.
The job of an M&E specialist alone: MERL is a team effort and involves stakeholders from
within your organization and the communities where you strive to build peace.
What is peacebuilding?
Have you ever tried to define the term peacebuilding? Defining peacebuilding can be
challenging, mirroring the complexity and range of activities needed to bring about peace.
Despite the term’s complexities, most people working on this issue have a similar
understanding of the change they want to bring about. Ending fighting, reducing potential
for future conflict, improving relations between peoples and communities, and reducing
injustice are common goals of peacebuilding programs.
Peacebuilding approaches can address the effects of violence or the root causes or
drivers of conflict. Activities can range from direct interventions to stop fighting to
initiatives aimed at improving education, health, and prosperity. In fact, peacebuilding can
take place where there has not been violence: helping individuals understand how they can
better live with their neighbors, creating just societies, and collaborating to make
communities prosper are all part of peacebuilding.
Areas besieged by conflict are subject to frequent changes, in the conflict itself and in the
overall environment. Peacebuilding programs likewise need to be adaptable to meet the
changing needs of people and institutions. But in order to do so, peacebuilding programs
must have access to timely and reliable information, not just about the effects of their direct
interventions, but about the evolving situation around them.
Developing and implementing a high quality MERL system allows program teams to access
information to make good decisions before, during, and after program implementation.
Through MERL, a program team can see if they are achieving their targets, whether their
work is having its desired (or other) effects, and whether it is necessary to make corrections
to the initiative. Also, programs can learn if cumulative efforts of multiple activities are
bringing about the peacebuilding changes sought.
It is just as important to learn about what is not working as it is to learn about what is
working. Monitoring helps to identify what is not working early so that program
managers can make adjustments throughout the life of the program. Periodic formative
evaluations have a similar purpose and can help programs understand why they see the
results they see.
Definitions
A MERL system is composed of four separate but interrelated processes: monitoring,
evaluation, reporting, and learning.
Monitoring data can be used to measure how productively inputs (money, time, equipment,
personnel, etc.) are used in the creation of outputs (products, outcomes, results) so a program
team can compare what they planned to accomplish in a given period to what actually
occurred in that time period.
Monitoring also can be used to keep track of the context and, specifically, the conflict in
which program activities are occurring, as well as assumptions on which program activities
are based. This type of monitoring is particularly important for program effectiveness
because changes in conflict and context often necessitate changes in program activities in
order to meet desired outcomes.
Monitoring data helps project teams answer key questions, such as the following.
What activities have been completed (compared to those planned)?
At what cost and in what timeframe are activities being accomplished and how does this
compare to what we planned for a given period?
What short term results have occurred, for example, how many people have been
mobilized, trained, or reached in a given time period?
Have key benchmark activities been completed as planned?
Does data tell us anything about differences between activities and/or program sites?
What has changed around us? Are there reasons to change our upcoming plans?
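The planned-versus-actual comparison behind several of these questions can be sketched in a few lines of code. This is a minimal illustration only; all activity names and figures below are hypothetical:

```python
# Hypothetical monitoring data: planned vs. actual outputs for one quarter.
planned = {"dialogue_sessions": 12, "youth_trained": 150, "radio_broadcasts": 8}
actual = {"dialogue_sessions": 10, "youth_trained": 165, "radio_broadcasts": 8}

def variance_report(planned, actual):
    """Return each activity's achievement as a percentage of its target."""
    report = {}
    for activity, target in planned.items():
        achieved = actual.get(activity, 0)
        report[activity] = round(achieved / target * 100, 1)  # percent of target
    return report

print(variance_report(planned, actual))
# → {'dialogue_sessions': 83.3, 'youth_trained': 110.0, 'radio_broadcasts': 100.0}
```

A report like this makes shortfalls (83.3% of planned dialogue sessions) visible early enough for managers to adjust plans, which is the core purpose of implementation monitoring.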
Like monitoring processes, evaluations can be conducted periodically throughout the project
or program. And, while monitoring activities focus on what happened, evaluations look more
in depth at how well things happened and whether they resulted in desired outcomes or are
making progress toward achieving them. Evaluations often can be broken down into three
phases:
Baseline: collecting values at the start of a program or intervention to be compared
against at later points
Midterm: checking status against baseline figures and planning for changes to the
approach, if necessary
Endline: measuring changes from beginning of the project to the end
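The baseline-to-endline comparison at the heart of these three phases can be expressed very simply. The indicator and values below are hypothetical, included only to illustrate the arithmetic:

```python
# Hypothetical indicator: % of respondents reporting improved security,
# measured at each evaluation phase.
baseline, midterm, endline = 34.0, 41.0, 52.0

def change_from_baseline(value, baseline):
    """Percentage-point change relative to the baseline measurement."""
    return round(value - baseline, 1)

print(change_from_baseline(midterm, baseline))   # → 7.0 points at midterm
print(change_from_baseline(endline, baseline))   # → 18.0 points at endline
```

The midterm check (here, a 7-point gain) is what allows a team to decide whether the approach needs adjusting before the endline.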
3 It is important to note that in this context, impact assessment implies looking at long-term, broader-scale
changes in the community that result from activities and processes among smaller populations within that
community. This distinction is important because the term “impact assessment” is sometimes used to describe an
evaluation whose goal is attributing change to a particular intervention. Such studies are highly controlled, costly,
and difficult to implement, even in stable environments.
Evaluation data can help program teams answer key questions, such as the following.
Are activities effective in achieving desired outcomes (short- and long-term)? Are we
heading in the right directions?
Are our approaches as efficient at achieving outcomes as they could be (or is there a better
way)?
What are the secondary- and highest-level effects of our interventions? Are our efforts
resulting in changes that are leading to stability and peacebuilding?
Will changes persist after our direct influence has left? Will our interventions’ positive
changes and effects be able to adapt over time, as needed, to fit the local context?
What unintended outcomes have occurred?
Program teams can use evaluation data to analyze what is working well to maintain, expand,
or replicate it and to review what has not worked well to make program course corrections,
avoid repeating mistakes, and improve the quality of their initiatives.
Reporting provides regular feedback that helps organizations inform themselves and others
(stakeholders, partners, funders, etc.) on the progress, problems, successes, and lessons of
program implementation. M&E report writers have the responsibility of conveying relevant
information to various audiences in effective ways. The challenge for MERL teams is to turn
raw data into useful knowledge, then communicate that information to the different program
audiences (community members, program staff, partners, etc.) in ways that will most
effectively meet their needs and enable them to use it as needed.
MERL teams should think of reporting as broader than simply providing written updates to a
funder. Rather, the role of reporting in a MERL system is to ensure that data and analysis are
being distributed to those who can use the information to better manage the program,
provide oversight, or reflect on the events and outcomes. Effective reporting at regular,
strategic intervals is critical for informing program staff, funders, partners, program
participants, and other stakeholders of the progress of peacebuilding interventions. Reports
can take many forms, can be formal (reports, spreadsheets) or informal (phone calls, quick
briefs), and should communicate successes and failures, positive and negative results, and
any and all lessons learned.
When creating reports and a reporting schedule, MERL team members should consider:
Who needs it
What specifically they need and why
When and how often they need it
Developing a matrix that details this information is immensely helpful to appropriately plan
for reporting. An example of this matrix is presented in Table 1.1.
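One lightweight way to capture such a matrix is as a small table in code. The audiences, contents, and frequencies below are hypothetical placeholders, not drawn from Table 1.1:

```python
# Hypothetical reporting matrix: who needs what, why, and how often.
reporting_matrix = [
    {"audience": "Funder", "content": "Quarterly progress report", "frequency": "Quarterly"},
    {"audience": "Community partners", "content": "Brief on local results", "frequency": "Monthly"},
    {"audience": "Program staff", "content": "Activity dashboard", "frequency": "Weekly"},
]

# Print the matrix as a simple reporting schedule.
for row in reporting_matrix:
    print(f'{row["audience"]}: {row["content"]} ({row["frequency"]})')
```

Even a plain spreadsheet with these three columns serves the same purpose; the point is that every report has a named audience and a schedule before data collection begins.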
Learning is a deeper process than simply reading reports; it involves interpreting M&E
results and taking action based on what is learned. Learning is the systematic review of
achievements against desired results and is used to change course or modify approaches if
necessary. Learning benefits the current project and future projects.
Project technical specialists and MERL team members must participate in planning M&E
activities because they all are responsible for learning from the results, including successes
and failures, and must be willing to accept unexpected results and take action to resolve problems
that arise. It is also important to involve other MERL audiences in planning and/or review of
MERL plans: program staff, funders, community partners, and others.
During the planning phase, it is important to think about who will be responsible for
decision-making regarding changes to be made as a result of learning. Developing a learning
schedule also can be helpful to ensure project staff hold regular learning reviews within
your organization and externally with relevant stakeholders. Also think about your program
design: If we wanted to change something in the future, could we? How easy or difficult
would this change be? Is there a point at which there’s “no turning back” (implying you
should learn about the program’s effectiveness before then)?
Other benefits of MERL include ensuring that our projects, services, and activities:4
Meet community needs
Reach those they are intended to reach
Make adequate, timely progress toward our objectives
Make efficient use of resources (capital, human, and other)
Yield results that are in line with our efforts
Are sustainable5
Are appropriate for the changing environment (political, social, conflict context, etc.)
Effective MERL also benefits institutions in developing future programs in the same or
another context. Learnings from a particular project can be used to develop future initiatives
during planning, design, and implementation and even at the proposal writing stage. Funders
may use past MERL results to evaluate the suitability of your organization or your particular
approach to a proposed activity.
Commonly, MERL is viewed as complex, time intensive, and costly. Some organizations also
consider MERL to be simply a requirement of funders and thus see MERL as an external
rather than internal necessity. To counter these perceptions, an effective MERL system should be
organized and focused, affordable, time efficient, and useful to program staff, funders, and
other key program stakeholders.
1) Project identification
At this stage, as someone who works on peacebuilding, you
probably are looking at the conflict in an area and hoping to find a way to improve the
existing situation. You develop your visions of peace and peacebuilding. You analyze the
conflict, context, and most pressing needs.6 You start to identify the goals and objectives you
will ultimately try to achieve and which will be the focus of your MERL system.
Once you understand what is happening and what is needed, you determine what your
organization is best suited to do about it. This will likely be done in collaboration with other
local organizations, government offices, or other partners. Together you will decide what your
intervention will be.
2) Project design
In the design phase you take your idea of what to do and determine more specifically how to
do it. Project design is about coming up with a good Theory of Change (see Chapter 4) and
developing specific strategies (and activities) for bringing about the desired changes. Project
design includes identifying resource needs (funds, time, personnel, equipment, etc.) to
complete the program successfully and establishing the project’s basic framework and MERL
system.
3) Project implementation
The majority of project activities will take place in this stage. A baseline study is usually
conducted just before a project begins to establish your starting point and for later
comparison. Monitoring activities take place regularly throughout the life of the project to
track progress and resource allocation.
4) Project evaluation
Periodic evaluations can be used to assess ongoing progress toward peacebuilding goals.
What you learn from M&E activities leads you back to program design and implementation to
make adjustments or initiate new activities as indicated. A comprehensive project evaluation
is an important activity to carry out once a project has concluded. It is a way for you to learn
what ultimately worked and what did not work, to inform others who remain (if your activity
has ended), and to guide future initiatives.
6 Conflict, context, and needs assessments are discussed in more detail in Chapter 3.
MERL in Summary
Figure 1.4 summarizes the MERL processes that will be covered throughout this module.
Chapter 2:
Peacebuilding Approaches:
Defining What Peace Looks Like
Chapter Objectives
In this chapter you will learn:
Why defining your peacebuilding approach is critical to program design and monitoring,
evaluation, reporting, and learning (MERL)
About three specific approaches to peacebuilding that your project might employ
Pact’s general approach to peacebuilding
Learning Activity
Identify and write your peacebuilding approach
As noted in the previous chapter, peacebuilding approaches can address the effects
of violence and/or the root causes or drivers of conflict. Activities can range from
direct interventions to stop fighting to initiatives aimed at increasing education, health, and
prosperity. In fact, peacebuilding also can take place where there has not been violence:
helping individuals understand how they can better live with their neighbors, creating just
societies, and collaborating to make communities prosper are all part of peacebuilding.
Why is this a starting point for talking about MERL? Because we can’t measure our progress
toward achieving ultimate goals without setting them. Defining your approach to
peacebuilding effectively declares what peace should “look like,” thus giving you goals to
monitor progress toward throughout the life of your project.
Below are three examples of approaches to peacebuilding. Your intervention may fit one of
these approaches or something different. Consider both individual programs (“What is the
ultimate peacebuilding goal that this effort hopes to achieve?”) and the collective efforts of
your organization in a particular region (“Are our programs united by a singular view of how
they will contribute to peacebuilding here?”).
If a program defines peacebuilding as addressing root causes, then MERL systems would
focus heavily on capturing social and structural changes in society and people’s lives. Some
example changes we might study include the number of people reporting an improvement in
the level of security, perceived or legislated level of independence of the judiciary, the number
of arms passing through border points, or the number of development projects in resource-
poor or marginalized communities.
It is critical to understand the specific conflict, context, and needs of the communities in
question to avoid misguided program activities or poor indicators for your project. If program
activities are misguided, then the indicators developed to measure the results of these
activities will not be well aligned to the project objectives and cannot be used to demonstrate
achievement of results. Because of the importance of these preliminary situation analyses, we
review them here, with special attention paid to how they relate to MERL processes.
By the time you are done carrying out a situation analysis, you will have a clearer picture of
the situation on the ground in the area you hope to affect, as well as the larger national or
regional scope of the situation. And, you should be keenly aware of what others are doing to
address the conflict.

It’s never too late to do situation analysis. Even if you are in the middle of program activities
and have never taken a formal, close look at the conflict around you, take some time to update
yourself and your organization on the current state of things.
Using this information, the project team can develop
strategies (and related activities) to bring about the changes sought. Some basic questions for
reflection are:
What would make the challenges to peace diminish?
What can we do (with our partners and stakeholders) to increase successful
peacebuilding?
Table 3.1 summarizes the types of situation analyses, when they are conducted, and how to
carry out a situation analysis. Detailed descriptions of each type of analysis follow the table.
Conflict analysis examines the root causes, drivers, nature, and primary
actors of a conflict with the goal of gaining an in-depth understanding of the
conflict’s dynamics.
Some of the key questions often incorporated into conflict analysis are:
Who are the key actors/stakeholders in the conflict?
What are the main causes of the conflict? What is dividing certain groups and uniting
others? How did it start? Are there different driving factors now?
What are the economic, political, and socio-cultural contexts of the conflict at the local,
national, regional, and international levels?
What are the current conflict trends? What is happening?
What are the current windows of opportunity to address the conflict?
What efforts are currently in place to build peace/end the conflict?
What has been successful in reducing the conflict? What has failed?
Understanding what caused the conflict to start and what is causing it to continue is
important if we are to find ways to mitigate it and build peace. However, because of the
dynamic nature of conflict, it is important to regularly re-assess the conflict to see if the
drivers have changed or if new barriers to peace have emerged. Ongoing conflict assessment
focused on the factors most likely to change should be part of the MERL process.
7 One type of context analysis is political economy analysis (PEA). The Organization for Economic Co-operation
and Development (OECD) defines PEA as being concerned with the interaction of political and economic
processes in a society: the distribution of power and wealth between different groups and individuals and the
processes that create, sustain, and transform these relationships over time. PEA is used to understand the explicit
legal, policy, and economic frameworks and the implicit and unwritten norms, values, and interests that help
determine how individual and group actors behave. See DFID. July 2009. Political Economy Analysis How to
Note. Available at https://www.odi.org/sites/odi.org.uk/files/odi-assets/events-documents/3797.pdf.
8 Pact’s Applied Political Economy Analysis (APEA) aims to provide project teams with practical help in
identifying and mapping the political, economic, and social incentives that influence key stakeholders’ actions and
decisions. This project- or problem-based methodology is designed to directly support project decision-making. A
full description of the approach is available at http://www.pactworld.org/library/applied-political-economy-analysis-tool-analyzing-local-systems.
Conflict analyses look at these same systems and structures, but with a focus on those directly
affecting the conflict. Context analysis looks more broadly at these systems, even if they do
not have a direct role in propelling the conflict. Such factors are important because they
impact the way people live their daily lives and are subject to being brought into or affected
by the conflict. Assuming your intervention will affect people’s daily lives, it is important to
understand all of the factors that impact their habits, activities, interactions, and livelihoods.
Some of the key questions often incorporated into a context analysis are:
- What policies, systems, structures, and forces exist inside and outside the immediate local context (village, district, province, nation) of the conflict?
- How can the policies, systems, structures, and forces outside the immediate local context be addressed?
- What kind of cooperation or linkages between the local, national, regional, and international levels need to be made?
- Who are the key actors that can be used to leverage this cooperation (who could be the project's "champions")? Who may want to see the project fail (who may be the project's "spoilers")?
- Where does our organization fit within the context? On what systems do we rely? With whom are we directly aligned?
Like conflict analysis, context analysis needs to be regularly updated to monitor changes.
Context analysis often includes looking at systems not directly linked to the conflict. As such,
it is important for peacebuilding programs to continuously monitor the emergence of new
conflict drivers and how these affect their program context.
Needs assessment involves finding out what services or systems are missing
or require improvement in a given context.
Needs can be assessed on a large scale (national level) or more locally, depending on the
scope of your intervention, keeping in mind the impacts local changes have on the larger
scale. Such assessments help to determine what has to happen to move from the current
toward the desired state of things. The project team’s goal is to develop strategies (and related
activities) to fill the gaps and meet needs.
Some of the key questions often incorporated into a needs assessment are:
- What are the gaps/needs for the community or target group?
- What action is being taken to address them? What more is needed?
- How is the community currently trying to meet its needs?
- How can the challenges be addressed?
- What resources are currently available within the community to address needs and gaps?
- What barriers have prevented these needs from being met before?
Ongoing monitoring can help program staff track when needs have been met, changed, or
reprioritized. As with other analyses, this requires close collaboration with multiple
community members, leaders, experts, and others with direct local knowledge.
Institutional capabilities
Alongside your situation analysis, it is important to constantly consider your place: your own
institution’s capabilities, strengths, and potential role in peacebuilding. Consider your
organization’s history in the community and region, partnerships, staff capacity, interests,
expertise, and any other factors you feel would strengthen or weaken your institution’s ability
to carry out successful peacebuilding efforts.
9 This table was taken from Church & Rogers, 2006, pp. 18–19, which was adapted from M.Q. Patton. 1997. "Outcome Examples." Utilization-Focused Evaluation (3rd ed.). Thousand Oaks, CA: Sage.
Chapter 4:
Theories of Change and
Developing Project Goals
Chapter Objectives
In this chapter you will:
- Learn what Theories of Change (TOCs) are and how to develop them
- Review several TOCs that can be used in evaluating peacebuilding programs
- Review several conceptual models
- Learn the importance of testing assumptions
- Understand the relevance of conceptual models to program design and monitoring, evaluation, reporting, and learning (MERL)
Learning Activity
- Describe your project's TOC
TOCs help guide thinking during the project design and evaluation stages. Peacebuilding practitioners select project goals, methods, approaches, and activities based on underlying theories of how peace can be achieved in a specific context. Effective projects usually clarify their TOCs early in the life of the project and continually test them against on-the-ground realities.

Definition of "theory": An assumption about how something works, or a prediction of what will happen as a result of an action (Lederach et al., 2007, p. 4).
11 See Church & Rogers, 2006, pp. 14–15. The text on those pages was used to create Table 4.1.
There are several reasons why monitoring, evaluating, and reporting on peacebuilding projects is more difficult than on other types of development projects. These challenges include the differing definitions of peacebuilding, the competing theories of what causes conflict and brings about peace, the complexity of conflict itself, and the need to understand the conflict before a peacebuilding project can start. In addition, the general lack of security, and people's resulting distrust of anyone interviewing them about issues in their community, poses a further challenge for these types of programs.
When monitoring and evaluating a peacebuilding project, the difficulties named above lead to
a greater problem: frequent disagreement about the ultimate ideal outcome(s). Not all people
agree on what peace looks like, thus it may be difficult to come to consensus on your ultimate
goals. If your ultimate goals are not clear, it is impossible to assess whether you have
reached them.
Despite these challenges, there are steps you can take to help develop your TOC. Conducting a situation analysis, for instance, can help identify what other factors in the environment are likely to affect your TOC and how the project will account for them.
In this context, a TOC describes the processes that you believe will lead to peace.12,13
For example, suppose that the situation analysis identified tensions between ethnic groups in
a particular community as being a driving force of conflict. Because these groups cannot get
along (manifesting in a number of problems with service delivery/provision and violence),
your peacebuilding strategy involves changing attitudes of and relationships between
individuals in these groups. You believe that changing people’s attitudes about people of “the
other” ethnicity through education will encourage peace on a larger scale. Your TOC clearly
spells out what you perceive to be the link between what you plan to do (educate) and what
you expect to happen (peace).
Your TOC can be articulated in various formats depicting “IF we do this, THEN this will
happen.” Here, we depict this TOC as a model and as a sentence.
The above TOC example falls under the “Individual Change Theory.” As is always the case
when adapting ideas from general frameworks, it is important to base your TOC within the
specific context in which you are working.
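To make the "IF/THEN" structure concrete, the hypothetical education TOC above could be captured as structured data. This is only an illustrative sketch, not part of any standard MERL toolkit; the class and field names are our own invention.

```python
from dataclasses import dataclass

@dataclass
class TheoryOfChange:
    """A simple 'IF we do X, THEN Y will happen, BECAUSE Z' statement."""
    if_we: str     # the intervention
    then: str      # the expected change
    because: str   # the underlying assumption linking the two

    def as_sentence(self) -> str:
        return (f"IF we {self.if_we}, THEN {self.then}, "
                f"BECAUSE {self.because}.")

# The education TOC from the example above, expressed as a sentence.
toc = TheoryOfChange(
    if_we="educate youth about 'the other' ethnic group",
    then="levels of conflict and violence in the community will decrease",
    because="changed attitudes lead to changed relationships and behavior",
)
print(toc.as_sentence())
```

Writing the theory down in one explicit IF/THEN/BECAUSE statement makes the "because" clause, the part most likely to be challenged, visible and testable.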
Points to remember
1. Your TOC does not have to reference specific program activities at the outset, though the
program activities you later choose should fit into the TOC you have established. In the
above example, “Changes in attitudes of educators and youth views of ‘the other’” will lead
to “decreased levels of conflict and violence in the community,” among other effects.
12 See Church & Rogers, 2006, p. 118. There are also others who look at a TOC as a conceptual model and use it much like a results framework or logical framework. This is not the meaning used in this chapter.
13There are a number of resources available online with more detail and examples of TOCs. One such resource is
the Center for Theory of Change: http://www.theoryofchange.org/.
2. Being able to write your TOC in a clear and convincing manner is important and may take time. Prepare to have your TOC challenged. In the previous example, people may take exception to the third part of the theory, that teaching young people to be more tolerant will in turn make their adult family members more tolerant. Similarly, if your TOC does not reference specific or general program activities, you will eventually need to include more details about how change will occur. When confronted, you must be prepared to defend your TOC and/or amend it so that it is clear and convincing to project stakeholders. One way to defend your TOC is to cite existing evidence supporting the logic it applies. A defensible TOC should also have plans for collecting data that tests your theory; if the data disproves your theory, you should be ready to adapt your programming to reflect reality.

3. You should make sure your TOC is reflected throughout your project.14 Your activities and events, and their outputs and outcomes, should be aligned to your theory of how peace will be built. If your project has several objectives, you will likely need multiple TOCs to describe the expected changes under each objective. As your program activities develop, return to your TOCs to make sure they fit. If not, develop new TOCs or adjust program activities to be aligned. It may seem obvious, but it also is important to make sure you do not adopt conflicting TOCs.

Developing and explaining a TOC: This is often overlooked in peacebuilding initiatives, yet it is essential both for developing initiatives with a clear peacebuilding aim and for monitoring your progress toward achieving the intended change.
16 The goals stated here include specific targets (how much will change by when). Your initial goals do not have to have specific targets. They can be added (or at least refined) after your baseline study (evaluation that tells you your starting point). Targets should not be chosen arbitrarily, but based on what is reasonable and feasible (as opposed to idealistic or unrealistic).
17 From what we know about this (hypothetical) TOC, this effort also could have a complementary TOC about changes in attitudes leading to improved relationships, leading to peace. In this case, "relationship" would be the "type" of change we are promoting.
18 Church & Rogers, 2006, pp. 18–23, provides examples of how to align specific types of change with specific peacebuilding TOCs.
These goals won’t be achieved for some time (several years). However, outcomes can be
evaluated periodically. If we see a marked decrease in community members expressing fear
after year one, we might know we are on track toward meeting our goal. Or the opposite could
be true. Either way, it is better to know when we are on or off track with meeting our goal
than to wait until the intervention is over and “hope” we have been successful.
These outcomes will be very important for moving to the next stage. They will be integrated
into your chosen conceptual model.
At this stage in the process, if you do not yet have specific project activities in mind,
establishing goals and objectives will help you develop your project activities.
The examples of key changes sought propose even more specific types of changes the programs will seek. The first is an output (simply getting people to the trauma centers) and the second is an outcome (improved emotional wellbeing) that should result from applying the knowledge and experiences gained from the trauma counseling.
Chapter 5:
Conceptual Frameworks and Assumptions
Chapter Objectives
In this chapter readers will:
- Understand what conceptual frameworks are and how to develop them
- Review examples of conceptual frameworks for peacebuilding programs (results framework, logical framework)
- Understand the relevance of conceptual frameworks to program design and monitoring, evaluation, reporting, and learning (MERL)
- Learn the role of assumptions
Learning Activities
- Create a results framework
- Create a logical framework
Conceptual frameworks enable projects to explain the ideas and reasoning behind their
Theory of Change (TOC) and link project activities to overall peacebuilding goals.
Describing your projects through conceptual frameworks helps you and your organization
think about (and design/present) project activities and their results as contributing to larger
peacebuilding efforts. Conceptual frameworks can help you expand on peacebuilding
approaches to further define goals that relate to your TOC and proposed outcomes.
Conceptual frameworks can be used at any stage of the project cycle, even if your project is
already in the midst of program activities, but has never clearly laid out how they relate to
peacebuilding goals. You can start by populating the framework with the information you do
have and fill in the pieces you do not.
Also, revisiting a conceptual framework periodically during the project cycle is a good way to
make sure you are still on track as minor (or major) changes are made to programs,
evaluations, objectives, etc. throughout the life of the project.
There are many types of conceptual frameworks used by development and peacebuilding
initiatives. This chapter presents two that you may find useful:
- Results framework
- Logical framework (also called a "logframe")
Each of these models, while different in appearance and structure, includes the same four key components,19 as listed in Figure 5.1. Though the components may be named differently in different models (such as "outcomes" instead of "objectives"), for clarity we will label them "goals," "objectives," "outputs," and "activities" in their corresponding sections in the frameworks we are about to describe.
In Chapter 4’s learning activity, we wrote program goals and objectives based on our
TOC. Using these as a starting point, we can begin to list short-term program outputs and
define the activities that will shape them. Once we’ve established clear goals (keeping in mind
that every stage of this process should be collaborative, i.e., goals of all stakeholders should
be considered), it is easier to define specific activities and the resulting outputs.
20 Adapted from International Bank for Reconstruction and Development & World Bank, 2012, pp. 8–10, 41.
In the previous chapter, we developed desired goals and outcomes from our TOC. The results
framework can be used to build on them (and add more or make adjustments) and to tie in
the related activities (inputs and processes) and outputs. Below we describe in more detail the
four components of the results framework.
Inputs and processes (analogous to project “activities” in Figure 5.1) are the resources
and methods employed to conduct an activity, project, and/or program.
Processes are the methods or courses of action selected to conduct the work, such as
training, organizing, publishing, lobbying, service provision, and message promotion. Direct
results from inputs and processes are generally seen quickly (0–2 years) and often are
measured through monitoring activities.
21 Inputs and processes are sometimes segregated into two separate levels.
Outputs are the direct, tangible results of inputs and processes, the expectation being that if you train people they will increase their knowledge on a given subject.22 Outputs usually reflect a result achieved in a relatively short time period (0–2 years) and are often measured through monitoring activities.
Outcomes are broad changes in development conditions. Outcomes help us answer the "so what?" question. For example: We trained 100 people and increased their knowledge, but did they change their behavior? Outcomes often reflect changes in attitudes and/or perceptions, in the emotional well-being of individuals, in household income, or in access to rangeland or water for marginalized communities, and they help us analyze how our activities and projects scale up or contribute toward these development conditions. Outcomes usually reflect a result achieved over an intermediate time period (2–5 years).
Impacts are the overall and long-term effects of an intervention. Impacts are the ultimate
result, such as peaceful co-existence of previously conflicting communities. Impacts usually
reflect a result achieved over a longer time period (5–10+ years).
A results framework organizes these four levels of results so you can see how inputs and
processes lead to outputs, outputs lead to outcomes, and outcomes lead to impacts. If you
choose to use this framework, you will later construct/select indicators (in Chapter 6) to help
you measure attainment of each of these stages. Generally, a results framework will be
accompanied by a separate MERL and indicator plan. Table 5.1 describes the type of
information normally present in a results framework and how it is structured, and Table 5.2
shows the PEACE II results framework as an example.
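To make the four-level logic concrete, the results chain can be sketched as a simple ordered structure in which each level feeds the next. The entries below are invented for illustration and are not drawn from any actual results framework.

```python
# A results chain: inputs/processes -> outputs -> outcomes -> impacts.
# Entries are illustrative only (hypothetical, not from PEACE).
results_chain = {
    "inputs_and_processes": ["train 100 community mediators"],           # 0-2 years
    "outputs": ["100 mediators with increased conflict-resolution knowledge"],  # 0-2 years
    "outcomes": ["mediators resolve local disputes without violence"],   # 2-5 years
    "impacts": ["peaceful co-existence of previously conflicting communities"],  # 5-10+ years
}

# Reading the chain top-down shows the "leads to" logic of the framework.
for level, results in results_chain.items():
    print(f"{level}: {'; '.join(results)}")
```

Laying results out this way makes it easy to check, level by level, that every output plausibly leads to an outcome and every outcome to an impact.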
22 However, it is worth noting that some may consider "number of people trained" an output indicator, with "knowledge level increased" a shorter-term outcome indicator. The level at which you place an indicator depends on several factors, such as project length, overall goal/objective, and donor definitions.
Objective: Strengthened cross-border conflict management

Outcome 1.1: Communities are more open to social reconciliation
  Activities:
  - Conduct participatory learning and action workshops with community representatives to identify and prioritize peace dividends
  - Create space for dialogues on cultural norms and practices
  Outputs:
  - Peace dividends constructed
  - Communities sensitized on the role cultural practices play in conflict and peacebuilding

Outcome 1.2: Communities' peacebuilding capacities mobilized
  Activities:
  - Identify and train trauma healing (TH) counselors
  - Conduct TH sessions
  - Assess community peacebuilding capacities to identify capacity building needs
  - Facilitate formation and/or strengthening of peace networks
  Outputs:
  - Cadre of trauma healers with knowledge and insight who can assist communities withstand conflict shocks
  - Targeted capacity support extended to communities
  - At least one peace network per corridor established and strengthened

Outcome 1.3: Local governments partner with their cross-border counterparts and with communities in conflict management
  Activities:
  - Conduct Appreciative Inquiry/strengths, weaknesses, opportunities, and threats (SWOT) analysis with local government peace institutions
  - Targeted capacity building interventions for government peace institutions
  Outputs:
  - Joint visioning and planning conducted and joint activity plans developed
  - Increased number of cross-border mechanisms established or strengthened
  - Increased local government participation in peacebuilding initiatives (initiated by PEACE III)

Outcome 1.4: Capacity of Horn of Africa NGO partners to support cross-border conflict management increased
  Activities:
  - Develop customized capacity assessment tool for PEACE III partners
  - Identify capacity building priorities for PEACE III partners
  Outputs:
  - Organizational Capacity Assessment conducted for local partners
  - Targeted capacity support extended to implementing partners
Logical Frameworks
A logical framework or logframe is similar to a results framework in that it explains
how a project’s day-to-day activities will achieve results. One main difference between a
logframe and a results framework is that the latter describes the results as if the program had
already been completed while a logframe describes what will happen in the future.23
A logframe promotes good project design by clearly stating the defined project logic and components. The logframe is usually formed as a chart that shows a hierarchy of:
- Four levels of the causal relationship: activities, outputs, purpose, and goal
- Indicators24 of performance
- Means of verifying the indicators
- Important risks and assumptions
The logframe provides a summary of what the project aims to achieve and how, what the
main assumptions are, and a basis for developing the activity’s MERL system. Progression
from one level to the next is based on “if/and/then” logic; for example, if the activity at the
lowest level takes place and the assumptions hold true (or the risks are not realized), then the
expected output at the next level can be achieved.
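The if/and/then progression can be sketched as a tiny check: a level is reached only if the activity below it took place and its assumptions held. This is our own hypothetical illustration of the logic, not part of any logframe standard.

```python
def next_level_achievable(activity_done: bool, assumptions_hold: bool) -> bool:
    """IF the activity takes place AND the assumptions hold (i.e., the
    risks are not realized), THEN the output at the next level can be
    achieved."""
    return activity_done and assumptions_hold

# Example: the training happened, but partners did not make time for it,
# so the assumption about institutional strengthening failed.
print(next_level_achievable(activity_done=True, assumptions_hold=False))  # False
```

The point of the sketch is that both conditions must be true: delivering an activity while its assumptions fail does not move the project up the hierarchy.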
Examples of things we assume will (or will not) take place include:
- The funder's approval of the community-based organizations (CBOs) to be funded under the program
Assumptions that we take for granted or believe to be true—or we believe are very likely to be true—that affect results include:
- Partner organizations will value capacity building and will make time for institutional strengthening activities
- Enough people will be interested in supporting peacebuilding efforts
- The program will mobilize a critical mass to effectively advance peacebuilding initiatives
We can get off track in our planning, budgeting, and problem solving if we don't list our assumptions or if our assumptions are wrong. In short:
- We need to think deeply about the things we are "assuming" because they can affect the program results.
- We need to write down our most important assumptions.
- We need to test our assumptions to be sure they are based in reality.
You can assess the risks coming from assumptions using Figure 5.3. Note that if an
assumption is very likely to prevent the program from taking off or advancing, you either
need to redesign the program or address the assumption directly to make it true.
Redesign need not happen only at the start of a program. Programs also can be redesigned during implementation if an assumption becomes increasingly likely to affect the intended outcome. In other words, assumptions can give grounds for redesigning programs at any point.
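The decision rule behind this kind of assessment can be sketched as a simple function. This is our own simplification in the spirit of Figure 5.3, with invented category names, not a reproduction of the figure itself.

```python
def assumption_action(likely_to_block: str) -> str:
    """Rough decision rule for an assumption, given how likely its failure
    is to block the program: 'low', 'medium', or 'high' (our categories).
    """
    if likely_to_block == "high":
        # Very likely to prevent the program from taking off or advancing.
        return "redesign the program or address the assumption to make it true"
    if likely_to_block == "medium":
        return "monitor the assumption closely and plan a mitigation"
    return "record the assumption and revisit it periodically"

print(assumption_action("high"))
```

Running the rule over each written-down assumption forces an explicit choice: redesign, mitigate, or simply monitor.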
(Figure: the logframe hierarchy of goal, purpose, outputs, and activities.)
Chapter 6: Indicators
Chapter Objectives
In this chapter you will:
- Learn how indicators are used to answer questions about your project's outputs, outcomes, and impact
- Learn how to develop and assess indicators
- Learn how to use an indicator plan
- Get familiar with the U.S. Government Foreign Policy Indicators for Conflict Prevention and Mitigation
- Be introduced to the PEACE III indicator protocols to help determine indicators that apply to your project
Learning Activity
- Indicator plan
Overview
Indicators are the building blocks of monitoring and evaluation. Indicators signal change.
The Organization for Economic Cooperation and Development (OECD)28 defines an indicator
as a “quantitative or qualitative factor or variable that provides a simple and reliable means to
measure achievement, to reflect the changes connected to an intervention, or to help assess
the performance of a development actor.”
Indicators are not just anything your project wants to measure, and every measure is not an
indicator. Indicators are meant to reduce a large amount of data down to its simplest form.
For example, when you want to buy a car, unless the engine has been replaced, the best
indicator of a car’s condition is its odometer because it tells you how far the car has driven
and, therefore, how much wear and tear the engine has undergone.
Indicators also are not the same as goals, objectives, results, or targets. They are developed to
measure results at the four levels of results depicted in conceptual models: activities/input,
output, outcome/objectives/results/purpose, and goals/impact. Indicators do not specify a
particular level of achievement; thus, words like improved, increased, gained, and decreased
do not normally belong in an indicator. Rather, indicators describe the unit of information to
be measured over time, for example, “change in knowledge,” which could increase or
decrease, but this is further specified in the indicator target. When compared with targets,
indicators may signal the need for management action and help us determine if objectives are
being met.
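Because indicators are direction-neutral, it is the comparison against a target that signals whether management action may be needed. The sketch below is a minimal, hypothetical illustration of that comparison; the function name and figures are invented.

```python
def needs_management_action(indicator_value: float, target: float,
                            higher_is_better: bool = True) -> bool:
    """An indicator such as 'change in knowledge' carries no direction;
    the target supplies it. Flag the indicator when we fall short."""
    if higher_is_better:
        return indicator_value < target
    return indicator_value > target

# Hypothetical example: 62% of trainees passed the knowledge test
# against a target of 75%, so the result flags a need for follow-up.
print(needs_management_action(indicator_value=62, target=75))  # True
```

Keeping the direction in the target, not in the indicator wording, lets the same indicator register both increases and decreases over time.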
Indicators can be especially useful when straightforward data cannot tell us what we need to
know. Instead, we can ask a series of questions that indicate the answer to our question. In
other words, when a question cannot be answered directly by one piece of data, we collect one
or more indicators instead that give us a good approximation of what the answer may be.
Table 6.1 provides an example of overall project questions and their illustrative indicators.
28 OECD 2002.
Indicator 1 in Table 6.1, about the number of trainings held, is straightforward because the
question it answers is straightforward. This indicator represents output data we can track
through training sign-in sheets or registration records. Questions 2 and 3 are less
straightforward. Often a single indicator is not sufficient to answer our questions. Sometimes
we want to look at the issue from multiple perspectives; other times the issue at hand is too
complex to be addressed with a single line of inquiry. For example, indicators 3a and 3b could
be used to answer the question “Have business opportunities increased for people in group
B?” In fact, if we see positive change in these indicators, they might have little to do with trust
building and more to do with increased business opportunities or a better enabling
environment.
This brings up another point: indicators do exactly as their name implies, they indicate the answer to a question that cannot be answered directly. When selecting indicators and analyzing data, it is important to consider carefully whether the result an indicator shows is driven, at least in part, by project-sponsored activities or whether it stems from factors outside the project.
We ask a particular question because we think it will tell us something about our intended
program outputs, outcomes, or impacts. For example, we would ask a question about trust
building if a goal of our program is to increase trust among opposing groups. This also
presumes that we did something specific in our program that aimed to build trust (directly or
indirectly). But, even if the indicators we choose all show positive results (there is an increase
in people doing business across groups and there are more businesses in region A operated by
people from group B), we still can’t be certain that those changes happened because of
increased trust.
Also, even if they are the result of increased trust among groups, we can’t be certain that our
program efforts directly led to that change. Perhaps there was a government-run program
operating in the same region with the same goal of trust building that was actually
responsible for the change. Perhaps a sub-group or leader within the community that had
been supporting division and separation of the groups is no longer active. The point is that it
is very difficult to control for all the factors that influence change in individuals and
communities.
One way of looking at the difference between quantitative and qualitative indicators is that quantitative data tells us what happened and how much or how often it happened, while qualitative data tells us what people thought about it and what impact it had on them. It is highly recommended to use a mixture of qualitative and quantitative indicators to measure the attainment of your objectives.
Table 6.2: Examples of qualitative and quantitative indicators and data sources

Qualitative indicators:
- Community openness to social reconciliation (data source: focus groups)
- Improved sense of security among TH participants (data source: key informant interviews)
- TH participants less inclined to partake in violence (data source: key informant interviews)

Quantitative indicators:
- Number of peace dividend initiatives benefitting two or more conflicting communities (data source: project monitoring documents)
- Performance of community conflict management structures (data source: Pact's Community Performance Index)
- Number of initiatives led by community peace actors to address local conflicts (data source: project monitoring documents)
- Number of cross-border peace initiatives that involve local government (data source: project monitoring documents)
- Percentage of community peace initiatives that receive tangible local-government support (data sources: project monitoring documents; review of government actions)
- Number of new linkages among local peacebuilding organizations (data source: Pact's Organizational Network Analysis)
- Percentage of community respondents who perceive effectiveness in local conflict management institutions (data source: community-wide, face-to-face survey)
The U.S. Department of State’s Standard Foreign Assistance Indicators31 are a good place to
search for indicators that have been tested and used. Most U.S. Government-funded projects
will be required to use some of these indicators. PEACE III has adopted several of them for
use in the program.
Developing or choosing the right indicators is the heart of designing a practical MERL
system. This process can be tedious and exacting, so you cannot expect to sit down in one
afternoon and develop all the indicators your project needs. It should be an iterative process
involving brainstorming a range of possibilities, seeking opinions from various staff and
stakeholders, and assuming different perspectives as you develop indicators.
Reliability is the extent to which one can reasonably expect to acquire correct or accurate information. If two or more people seeking the same information are likely to come back with different results/answers (repeatability), the indicator may not be reliable. An indicator may be deemed unreliable if:
- It comes from generally unreliable sources (e.g., subjective assertions)
- The question asked can be interpreted in multiple ways (e.g., the act of "participating" in an activity can be interpreted differently by different people)
- Factors such as time of day or time of year of collection are likely to influence answers (e.g., during the dry season people will be supportive of activities that involve encroaching on other communities' grazing land)
- People are prone to lie about it, not know the answer, or give an inaccurate answer (e.g., which individuals were involved in the cattle raid?)
Feasibility33 is the extent to which the information needed can be readily acquired. It requires that you be able to access the source of information and the specific data you need. Barriers to access include:
- Security or safety concerns
- Sensitivity of the topics
- Confidentiality of information sought
- Unwillingness of sources to participate
- Physical barriers (e.g., information is in a location that is difficult to get to or that is prone to severe weather)
Utility is the extent to which the information is actually useful. How much will what we learn
help us to make or adjust programmatic or strategic decisions? When considering whether an
indicator has utility, imagine having the results in front of you and asking yourself, “What will
I do with this information?" Individual indicators might not, by themselves, inform decision-making, but may be considered collectively for utility in some instances.
Another way projects assess the quality of indicators is against five criteria: Specific, Measurable, Achievable, Relevant, and Time-bound (SMART). Table 6.3 describes the criteria for SMART indicators.
Table 6.4 provides examples of indicators that are well formed (“good”) and poorly formed
(“bad”). Note that indicator wording does not need to capture the direction of change you are
hoping for. Rather, this would be visible in the targets set—the targets would show
increases—and the actual achieved figures could be increases or decreases.
Indicator-Related Terms
Table 6.5: Other terms used in relation to indicators

Baseline: A record of what exists in an area prior to the start of a project. It is primarily a benchmark for the future. The baseline values establish the starting point from which change can be measured.

Targets: Values against which the actual program/project achievements are measured. They describe the magnitude or level of results expected to be achieved. Targets should be realistic and, where possible, informed by past achievements.

Proxy indicators: Alternate measures used to stand in for another indicator when obtaining direct information is too difficult, time consuming, or sensitive. For example, household consumption of maize could be a proxy indicator for household income.

Indicator protocols: Instruction sheets that describe the indicator in precise terms and identify the plans for data collection, analysis, reporting, and review. Examples of precise information contained in an indicator protocol are indicator definition; unit of measure; disaggregation; data collection, collation, analysis, reporting, storage, and use; data quality; and targets.

Sources of verification: Sources you have or use to confirm or substantiate the data, for example attendance lists and event reports.

Unit of measure: What exactly the indicator will measure or count, for example number of events funded by the U.S. Government or number of people trained.

Disaggregation: How the indicator data will be broken down at the time of reporting to stakeholders, for example by location, gender, or age.
35 This may be considered a "bad indicator" from an early warning perspective. This indicator has less value in terms of informing program action to "save lives" when compared to the number of incidents reported. The assumption here is that tracking the occurrence of incidents can help a program take timely action to focus on hotspots, thereby reducing the likelihood of escalation and loss of life.
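In practice, disaggregation often amounts to grouping raw monitoring records by a chosen dimension before reporting. The sketch below is illustrative only; the records and field names are made up, not drawn from any real project data.

```python
from collections import Counter

# Hypothetical participant records; names and fields are made up for illustration.
records = [
    {"participant": "A", "location": "Moroto", "gender": "F"},
    {"participant": "B", "location": "Moroto", "gender": "M"},
    {"participant": "C", "location": "Kotido", "gender": "F"},
]

def disaggregate(rows, dimension):
    """Count records per value of one disaggregation dimension."""
    return Counter(row[dimension] for row in rows)

print(disaggregate(records, "location"))  # Counter({'Moroto': 2, 'Kotido': 1})
print(disaggregate(records, "gender"))    # Counter({'F': 2, 'M': 1})
```

The same records can be disaggregated along any dimension named in the indicator protocol, which is why protocols should fix the dimensions before data collection begins.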
Indicator Definitions
In order to ensure that our indicators are not subject to misinterpretation and are clearly
understood by everyone who will use them, it is important for us to define each indicator. An
indicator definition clarifies the key terminology used within the indicator so that data collectors and users measure and interpret the same thing. Using the example indicator that we
developed earlier in the chapter, “percentage of cattle herders in Karamoja who report
disputes over grazing land to their local council one or more times per month by December
2016,” it is important to define what the project means by “cattle herders in Karamoja” and
“report disputes to the local council.” Doing so will help ensure there is common
understanding of which group of people within the Karamoja community the project will
count as “cattle herders” and what the project will count as an “incident” that has been
“reported” to the “local council.” Below is an example of an indicator definition.
This indicator will count cattle herders from among the Pokot and Turkana
communities who are residents and derive their livelihood from keeping and
grazing cows within the administrative boundaries of Pokot and Turkana
counties. A dispute refers to a disagreement between two cattle-keeping
communities over issues of grazing land that may or may not result in
fighting between the two communities. Reporting to a local council means
formally informing the local authority responsible for ensuring equitable use
of grazing land within these communities about a dispute over grazing land
with members of the other community.
In addition to defining the indicator, it is important to specify the disaggregation that will
be used when reporting on each indicator, for example location, name of community, and
type of local authority.
All of the information about the indicator is tracked in the project’s indicator plan. Table 6.6
provides an example from a peacebuilding program.
Note: Implementers of U.S. Agency for International Development (USAID) programs will
be asked to complete Performance Indicator Reference Sheets (PIRSs) as part of their MERL
plan.36 A PIRS is a short table for each indicator detailing the indicator definition, unit of measure, source, disaggregation, and other key pieces of information. PIRSs are invaluable resources that help all MERL and program staff understand exactly what has been committed to be measured and how to follow the established protocols.
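One practical use of a PIRS-style record is as a completeness check before an indicator enters the MERL plan. The sketch below is an assumption-laden illustration: the field names are invented for this example and are not the official USAID PIRS template.

```python
# Illustrative sketch only: field names are assumptions, not the official
# USAID PIRS template. It shows how a PIRS-style record can be stored and
# checked for completeness before it goes into a MERL plan.

REQUIRED_FIELDS = [
    "indicator", "definition", "unit_of_measure",
    "data_source", "disaggregation", "reporting_frequency",
]

def missing_pirs_fields(pirs: dict) -> list:
    """Return the names of required fields that are empty or absent."""
    return [f for f in REQUIRED_FIELDS if not pirs.get(f)]

example_pirs = {
    "indicator": ("% of cattle herders in Karamoja who report disputes "
                  "over grazing land to their local council monthly"),
    "definition": "Counts Pokot and Turkana herders resident in the two counties.",
    "unit_of_measure": "Percentage of cattle herders",
    "data_source": "Household survey",
    "disaggregation": ["location", "community", "type of local authority"],
    "reporting_frequency": "",  # left blank on purpose to show the check
}

print(missing_pirs_fields(example_pirs))  # -> ['reporting_frequency']
```

A simple check like this, whether in code or on paper, helps catch indicators that are committed to a funder before anyone has decided how often they will be reported.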
Chapter 7: Monitoring
Chapter Objectives
In this chapter you will learn:
What monitoring is and how it helps peacebuilding efforts
Three types of monitoring
How to develop a monitoring plan
Learning Activity
Develop a monitoring plan
What is monitoring?
Monitoring is an activity carried out throughout the life of the intervention that provides
project stakeholders with concrete information from which to make strategic decisions. In
peacebuilding, we often monitor three specific areas: the conflict/context, our program
implementation, and our assumptions. In other words, we use monitoring to periodically
answer the following questions.
Has the conflict context changed?
Have we done what we planned to do by this point?
Has what we’ve done had the effect we expected it to?
Let’s look at our example Theory of Change (TOC) and goals from Chapter 4, with added
sample indicators, as shown in Table 7.1.
Table 7.1: Example TOC with goals, outcomes, and sample indicators
TOC: IF young people have a better understanding of the need to be tolerant of other ethnic
groups, THEN their attitudes and views of the “other” communities will change and have a
trickle-down effect on adult family members. THEN communities will be more tolerant of
one another and the levels of conflict and violence will decrease.
Step 1, Goal derived from TOC: a. Reduced reports of conflict between groups to community leadership from 100 to 10 per month by 2016
Step 2, Type of change:37 Attitudes
Step 3, Anticipated outcomes, with sample indicators (more and others are possible):
Decreased fear of "the other": % change in # of members of Group A who self-report fear of members of Group B
Reduction in distaste for customs and traditional practices of "the other": % change in # of members of Group A who self-report that they disagree with the customs or traditions of Group B
Increase in acceptance of "the other" as part of the community: % change in # of members of Group A who, when asked to describe their community, mention individuals or groups from Group B
Decrease in blaming "the other" for past grievances: % change in # of members of Group A who self-report that past conflict with members of Group B was at least in some part their own fault
37 Church & Rogers, 2006, provides examples of how to align specific types of change with specific peacebuilding
Step 1, Goal derived from TOC: b. Increased tolerance for customs and traditions of "the other" among youth and their family members
Step 2, Type of change: Knowledge and attitudes
Step 3, Anticipated outcomes, with sample indicators (more and others are possible):
Young people are familiar with specific customs of "the other": % change in # of young people of Group A who can name 2–3 specific customs of Group B
Increase in understanding of why it is important to respect customs and traditions of "the other" community: % change in # of members of Group A who self-report that they understand the importance of respecting the customs and traditions of Group B
Young people and family members are more accepting and respectful of customs and traditions of "the other" community: % change in # of members of Group A who self-report that they do respect customs and traditions of Group B
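Most of the sample indicators in Table 7.1 are phrased as a percent change in a count, measured against the baseline. As a quick illustration of the underlying arithmetic (the figures below are hypothetical, not from the source):

```python
def percent_change(baseline: float, current: float) -> float:
    """Percent change of a measured value relative to its baseline."""
    if baseline == 0:
        raise ValueError("Baseline of zero: percent change is undefined.")
    return (current - baseline) / baseline * 100.0

# E.g., if 120 members of Group A self-reported fear of Group B at baseline
# and 90 report fear at the midline survey, the indicator value is -25%.
print(percent_change(120, 90))  # -> -25.0
```

Note that the sign of the result carries the direction of change, which is why, as Chapter 6 notes, the indicator wording itself does not need to state the hoped-for direction.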
How might we monitor progress toward our goals? We would probably track program
participation (e.g., monthly, quarterly) and resources being used (e.g., cost, staffing) to be
sure we are not currently in danger of surpassing budgets. Tracking progress toward specific
indicators is likely to be done through process (periodic) evaluations (see Chapter 8).
Deciding when and how to monitor these changes depends on resources available (does
someone have time to monitor this monthly? quarterly?) and your perception of how quickly
the context may change. How fragile is the situation? Are there particular triggers or
flashpoints (specific events that would impact the conflict positively or negatively) that are
likely to occur? Are there patterns to watch for (e.g., movement of populations, frequency of
aid delivery) that would indicate a change in the conflict? In ideal circumstances, how often
should we monitor the situation? Focusing on key contextual elements (rather than trying to
cover everything) will help you design a context-relevant monitoring plan.
Quantitative methods
- Primary data: information that can be collected and quantified (counted) related to security, economic development, living conditions, etc.
- Survey or polling data:
  - Primary sources: public opinion surveys, including surveys of local program staff
  - Secondary sources: datasets or information from international organizations that monitor crises and publish relevant information publicly (keep in mind that these broader monitoring tools cannot replace the valuable information you can learn through monitoring your specific context locally)
Implementation Monitoring
Regardless of the duration of the intervention, it is important to monitor step-by-step
progress toward achieving ultimate goals in terms of project administration, inputs,
processes (activities), and outputs.
Example questions implementation monitoring answers include: How many trainings were
held and how many people completed the training? How much money was spent on each of X
activities? How many times did Y happen?
For example, consider a project that has a plan to deliver 20 trainings to community leaders in
years 2–4 of a five-year project. At the end of year 2, monitoring data might show that the
project has delivered two trainings, which may be on target with the project plan or may
indicate that the team is behind on holding these trainings. Or perhaps 15 trainings have been held by the end of year 2, which could indicate higher than anticipated demand for the activity, in which case the organization may want to adjust its plans or seek additional funding, staffing, or opportunities to increase the number of trainings it can hold in subsequent years.
The previous example was of monitoring the magnitude of outputs (how many of this,
how much of that). We can similarly monitor progress toward achieving changes.
If a program’s target is to increase “something” from 10% to 75% over 5 years, we may use
monitoring tools to look for incremental improvements every six months or a year, for
example, increasing to 20% after year 1 or 35% after year 2.
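Interim milestones like the 20% and 35% figures above are usually set by program judgment rather than by formula, but where no milestones have been planned, evenly spaced (linear) interim targets can serve as a rough default. The sketch below is a hypothetical illustration, not guidance from the source.

```python
def interim_targets(baseline: float, final: float, years: int) -> list:
    """Evenly spaced annual milestones from the baseline to the final target.

    A deliberately simplistic default; real programs often front-load or
    back-load change, as in the 20%/35% example in the text.
    """
    step = (final - baseline) / years
    return [round(baseline + step * y, 1) for y in range(1, years + 1)]

# A hypothetical target of moving "something" from 10% to 75% over 5 years:
print(interim_targets(10.0, 75.0, 5))  # -> [23.0, 36.0, 49.0, 62.0, 75.0]
```

Comparing monitored values against milestones like these shows early whether a program is on pace, well before the end-of-project target is due.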
Achievement of planned project stages also is important to monitor. Interventions often are
planned incrementally (e.g., first introduce a savings product, then offer business
development training, then offer business loans) and subsequent activities may each require
significant planning and preparation. Monitoring the implementation of each project
stage is important for informing how and when to plan and adjust future stages. In the
examples in Table 7.3, it may be wise to monitor each activity even more closely, such as
every four to six months, to check progress toward each specific activity’s target.
Personal narratives, stories, and anecdotes also can provide valuable monitoring
information, including for report writing, adding a human element to complement
quantitative data. However, make sure your program does not overreact to a small number of accounts. If one program officer constantly complains about a particular problem (or success), it may or may not be worth looking into; several program officers reporting the same problem (or success), with or without prompting, is much stronger evidence that something needs to be addressed.
Funders often have requirements for monitoring and expect regular reports (e.g., every three
or six months), such as budgetary reports and status updates. While they may provide you
with a set of metrics on which they want to see reports, most funders will be willing to work
with you to come up with metrics that meet both their needs and the project's needs.
Assumption Monitoring
In Chapter 5, we talked about the importance of collecting information about assumptions in
the baseline study to help ensure that programs are appropriately designed from the outset.
Regular monitoring of these important assumptions (e.g., about the population, governing
regulations) is particularly important in fragile states because of how quickly and
unexpectedly changes can occur. Examples of such changes include population migration
patterns, leadership (which can dictate changes in laws), food security, economic landscape,
and allegiances. Many of the methods previously described for monitoring conflict can be
used to monitor assumptions. Table 7.4 uses several examples to illustrate assumptions that
could change and how they might affect our interventions. Project staff should be part of the
monitoring process and the learning that goes along with it (via, e.g., reports, staff meetings,
bulletins) in order to identify the appropriate responses to changes in assumptions.
It is important to note that monitoring can tell us when changes in our assumptions occur,
but it does not necessarily tell us if and how to react (as in the “We might have to…” column
in the table not being labeled “We should…” or “We must…”).
Monitoring Plans
In order to appropriately monitor progress against targets, each project should have a monitoring, evaluation, reporting, and learning plan. A monitoring plan should include the elements in the table below: type of monitoring (it is a good idea to include all types of monitoring), indicators, decisions to be informed by data, how often to monitor each data point, and how data is collected and by whom. For crafting indicators, please refer to Chapter 6.
Table 7.5: Monitoring plan outline with examples of the three monitoring types
discussed previously
Type of monitoring: Conflict
Monitoring data to collect (indicators): # of cross-border raids reported
Decisions to be informed by data: What level of security should we maintain? Should we maintain or postpone any key activities?
How often to monitor: Weekly; monthly
How data is collected and by whom: Interns monitoring specific news sources; meetings with local hospital administrator

Type of monitoring: Implementation
Monitoring data to collect (indicators): # of local leaders trained; % of those involved deeming training a "success"
Decisions to be informed by data: Should trainings continue and at what rate? Should training curricula be adjusted?
How often to monitor: Quarterly
How data is collected and by whom: Written evaluations completed by participants after training and read by program staff; written evaluations completed by training facilitators

Type of monitoring: Assumptions
Monitoring data to collect (indicators): # of people from "the other" community living in the target area
Decisions to be informed by data: Should we prepare for increased incidences of violence between the two communities? Should we increase programming efforts related to this violence?
How often to monitor: Quarterly
How data is collected and by whom: Staff conversations with community elders about community participation in peacebuilding initiatives
Chapter 8: Evaluation
Chapter Objectives
In this chapter you will learn:
The definition of evaluation and why it is important
Different types of evaluations
Evaluation frameworks and questions
Components of an evaluation Terms of Reference (TOR)
Learning Activities
Create an evaluation plan
Measuring success in peacebuilding programs requires going beyond just counting activities,
such as the number of trauma healing (TH) activities held. Rather, we must understand what
outcomes have occurred as a result of conducting TH activities in the project communities.
Evaluation can focus on the change we created in the lives and minds of others.
Evaluation is most valuable when a project wants to look not only at results on a cursory
level, but seeks to understand the underlying reasons why change is occurring or
not occurring in the field, then uses that information to learn and adapt both its actions
and its conceptual framework.
Defining Evaluation
There is no universal definition for the term evaluation. British mathematician and
academician Michael Scriven (1991), one of the founders of evaluation as a field, noted nearly
60 different synonyms, based on such verbs as appraise, assess, critique, examine, grade,
inspect, and judge.
40 This chapter is merely an introduction to evaluation. To read about evaluation in much greater detail before
planning for an evaluation, please review Pact’s MERL Module 3, Field Guide for Evaluation: How to Develop
Effective Terms of Reference, available at http://www.pactworld.org/library/field-guide-evaluation-how-develop-
effective-terms-reference. Portions of this chapter are taken directly from Module 3.
As managers and leaders of evaluations, it is important for you to understand how others may
understand the term. A common language for evaluation helps us all communicate better.
In this chapter, we will present several common definitions. None is particularly better than
another. Instead, each emphasizes a different aspect of evaluation, including its purpose and
utility. Understanding the similarities and differences among these definitions will directly
help us manage evaluation work in our communities.
According to Michael Quinn Patton (1997, p. 23), a leader in the field of program
evaluation, evaluation is “the systematic collection of information about the activities,
characteristics, and results of programs to make judgments about the program, improve
or further develop program effectiveness, inform decisions about future programming,
and/or increase understanding.”
According to the Organization for Economic Co-operation and Development
(OECD; 2002, pp. 21–22), evaluation is “the systematic and objective assessment of an
ongoing or completed project, programme or policy, its design, implementation and
results. The aim is to determine the relevance and fulfillment of objectives, development
efficiency, effectiveness, impact, and sustainability.”
According to the U.S. Agency for International Development (USAID; 2011, p. 2),
evaluation is “The systematic collection and analysis of information about the
characteristics and outcomes of programs and projects as a basis for judgments, to
improve effectiveness, and/or inform decisions about programming.”
Specific
Next, evaluation is specific to a program or project; this is what distinguishes evaluation from
research (see more below). For example, someone might investigate whether children who
live near the garbage dump get sick more often than children who live far from the dump.
This is research, but it is not evaluation. Another person could study whether children who
attend a certain nutrition program get sick less often. Both studies are research, but only the
second example is evaluation because it is specific to a project and the first is not.
Versatile
Last, the three definitions also show that evaluation can answer many different types of
questions. For instance, an evaluation may ask:
Did the program improve the well-being of community residents?
Were resources used effectively?
What factors were most important to the success of the intervention?
Why did the program fail?
Why evaluate?
Evaluation demands time and resources, often competing with the resources needed to implement programs or deliver services. Many managers ask the question, "Why evaluate?"
The Patton, OECD, and USAID definitions, above, suggest clear reasons:
To measure a program’s value or benefits
To improve a program or make it more effective
To better understand a program
To inform decisions about future programs
Knowing why a program is being evaluated is essential to the evaluation’s success. After
all, evaluations are meant to be used. How an evaluation is used depends on what questions
have been asked, the reasons for evaluating the program, funder requirements, timing, and
other factors. Evaluation reports sometimes sit on shelves gathering dust. However, if we are
clear about an evaluation’s purpose, if the evaluation is conducted systematically, and if the
right questions have been asked about the program during the evaluation, the results should
be useful and actionable.
Types of Evaluation
Types of evaluation vary by purpose and program stage. The five main types are:
Formative
Summative
Process
Outcome
Impact
Most useful during program design and early in the implementation phase, formative
evaluations examine how a program, policy, or project is implemented, whether or not the
program Theory of Change (TOC) corresponds with its actuality, and what immediate
consequences the implementation produces.
The most rigorous types of evaluation are impact evaluations, which use statistical
methods and comparison groups to attribute change to a particular project or intervention.
USAID defines impact evaluations as evaluations that:
Measure the change in a development outcome that is attributable to a defined
intervention; impact evaluations are based on models of cause and effect and
require a credible and rigorously defined counterfactual [examination of what
would have happened if the intervention did not exist] to control for factors
other than the intervention that might account for the observed change.45
The counterfactual is what differentiates an outcome evaluation from an impact evaluation.
Internal evaluations may allow for a more complex, multi-stage evaluation design and can
take advantage of in-house staff members’ understanding of the project, either to produce the
evaluation more efficiently or to yield more nuanced findings.
External evaluations can be (or can be perceived as) more objective and can bring additional
expertise and external perspective that can add value to the evaluation.
Which type of evaluation uses resources the most efficiently depends on an organization’s
capacity. Often, evaluation involves both internal staff and external consultants in a joint
effort that can leverage the strengths of each.
Barriers to Evaluation
If evaluation is important, why does it not always happen or happen well? What stands in the
way? Key factors are:
Lack of time, knowledge, and skills
Lack of resources for evaluation, including a limited budget
Poor project design, for example, evaluation activities not being integrated into project
design
Start-up activities competing with baseline measurement or delaying baseline
measurement
Project capacity overwhelmed by complex or overly ambitious evaluation designs
Fear of the consequences of negative findings
The perception of M&E as “police work” or “auditing,” that is, a fault-finding exercise
Arguments by stakeholders that M&E resources would be better spent on program
expansion
Difficulty convincing others how useful evaluation will be as a learning exercise
The perception that because no baseline data was collected, it is too late to evaluate
Barriers to program evaluation are worth overcoming: learning what works and what does not enables us to better serve the needs of our communities.
46 CDC, 1999.
47 Patton, 1997, p. 44.
There are several useful frameworks available to help you choose your priorities and
evaluation questions in a systematic way. Framework 1 is more common across a range of
different types of development programs; however, Framework 2 is applicable specifically for
peacebuilding programs.
Relevance “The extent to which the aid activity is suited to the priorities and policies of the
target group, recipient and funder.” This criterion is used to determine if (and to what
degree) the activity is (still) aligned with ultimate goals and objectives (near- and long-term)
and whether those objectives have changed (as the situation has evolved) or remained the
same.
Example question: To what extent does the concentration of aid on peacebuilding
correspond to the needs in the communities?
Effectiveness “A measure of the extent to which an aid activity attains its objectives.” This
criterion is focused on determining if and why or why not the activity is (or is not) working in
a way such that the activity’s goals have been or are on the right track to being achieved.
Example question: To what extent has the aid contributed to reduced incidents of
violence?
Sustainability Measures “whether the benefits of an activity are likely to continue after
funder funding has been withdrawn. Projects need to be environmentally as well as
financially sustainable.” This criterion takes a longer-term view than even impact to look at
whether interventions resulted in perpetual long-term changes. This is the most difficult of
the criteria in this model to assess.
Example question: To what extent has the aid contributed to durable peace in the
communities?
48 Framework information and all quotes taken from OECD Development Assistance Committee, n.d.
49Church & Rogers, 2006, as modified from Church, C., & J. Shouldice. 2002. The Evaluation of Conflict
Resolution Interventions: Framing the State of Play. Belfast, U.K.: INCORE at Ulster University. p. 100.
Theme 1: Goals and assumptions This theme explores the justifications for specific
interventions and their methodologies.
Relevance
Attempts to discover whether a given intervention is appropriate for the situation (context
and conflict), which has likely changed since initial program design
Assesses whether the intervention is still the appropriate means by which to achieve
ultimate peacebuilding goals
Closely related to conflict and context assessment and monitoring
Important in highly volatile environments
Should be paired with outcome identification (Theme 3) to truly understand the intervention's effectiveness
Is more often included as part of formative evaluations than summative evaluations
Strategic Alignment
Determines whether program activities and intended outcomes are aligned with
organizational values and mission
Primarily used to validate internal assumptions about whether a project fits within an
organization’s mandate (at a local or international level) or if it has gone off course
Is distinct from looking at an intervention’s alignment with national strategic
peacebuilding efforts
Theme 2: Effectiveness and efficiency This theme looks at what was done and the
extent to which it was done efficiently.
Cost accountability
Examines the cost-effectiveness of the intervention
Asks if resources were spent wisely (efficiently) and were accounted for appropriately
Practical applications include budget management, funder reporting, and cost projecting
Goal is not simply spending as little money as possible, but the best possible use of
resources given the challenges and realities of operating in a time of conflict
Theme 3: Range of results This theme measures outputs, outcomes, and impacts of
the intervention(s).
Output identification
Measures near-term, (often) tangible results
Determines what happened and to what degree (how much/many)
Very commonly collected data; useful for meeting funder requirements and monitoring
specific, countable targets
Can have reasonable and reliable information earlier in project timeline than outcome or
impact assessments
Outcome identification
Assesses the changes brought about by the intervention
Steps beyond “what happened” to “what happened next” and “to whom”
Looks at both positive and negative, intentional and unintentional outcomes
Essential to look both at outcomes related to program objectives and other possible
outcomes not initially considered50
Impact assessment51
Evaluates the role of an intervention’s outcomes in the larger conflict
Determines the extent to which the outcomes of activities and processes lead to changes
in the overall conflict.
Should include both positive and negative, direct and indirect, intentional and
unintentional consequences
Adaptability of change
Explores the extent to which change is both sustainable and adaptable
Asks whether the initial changes seen will be able to withstand and adjust with the
changing environment52
Requires an extended program cycle to observe changes and adaptation to new conflict
scenarios
50Looking only at anticipated program outcomes is a criticism of many M&E practices. Looking for more open-
ended views of change helps ensure that results don’t just include what we looked for or wanted to happen. We can
miss out on other outcomes simply by not asking questions outside the scope of our objectives. See Stave, 2011.
51 It is important to reiterate that in this context impact assessment implies looking at long-term, broader-scale
changes on the community that result from activities and processes among smaller populations within that
community. Identifying impacts does not necessarily prove causality; that is, even if you can prove that a large
change (impact) has occurred, you cannot attribute it to a specific intervention component. The systems that
influence change are vast and complex, and isolating the factors that lead to change is impossible in some
situations. This distinction is important because the term impact assessment is sometimes used to describe an
evaluation whose goal is attribution. Such studies are highly controlled, costly, and difficult to implement, even in
stable environments.
52For example, consider a reconciliation program that is successful in reducing people’s view of “the other” and
lessens incidences of violence and feelings of animosity between the groups. If a new “other” group migrates to the
region, will the same principles be applied and lessen the likeliness that violence and/or animosity will grow with
the new group?
There may be a need to justify the program to policymakers or funders by proving that
resources are being used efficiently. You may want to improve the program or strengthen the
organizations that are a part of it. There are many possible purposes for evaluations.
But, no matter what they are, a clear and well-written purpose statement is important for clarifying the aim of the evaluation, so much so that it is often required when planning evaluations and writing grant proposals.
Another way to write the purpose statement is to complete the blanks in the following
sentence.
We are conducting an evaluation of [name of program] to find
out ________________________________________________
and will use that information in order to __________________.
Evaluation Questions
All managers have questions about the programs they manage. The following questions are the raw material for creating evaluation questions.
Is the program making a difference?
Is the course of action we're following the best way to do things?
Are the participants benefiting from the program as expected?
An evaluation question is different than the evaluation purpose (discussed above), but the evaluation questions should help to fulfill the evaluation purpose.

Evaluation questions used to better understand results of peacebuilding efforts:
Did we meet or exceed our written objectives? If our stated objective was not reached, why not?
Did we develop mechanisms to receive feedback on our progress throughout the project? Which feedback methods allowed us to alter strategies in a timely manner and measure our impact?
Did the campaign result in positive change in the lives of our beneficiaries?
Are there fewer conflicts in project areas?
Is there more cooperation between communities in conflict?
Have perceptions changed? Are people more tolerant or less prone to partake in conflict?
Did our work have any unintended side effects?

For example, if the purpose is to influence policymakers to fund similar programs in other parts of the country, it might be appropriate to ask:
How did the communities that received the program benefit, compared with those that
did not?
How cost-effective was the program?
What elements of the program were most important in creating the desired outcomes?
On the other hand, if the evaluation purpose is to show program staff how to improve the
program, you might ask:
How do participants of the program perceive it?
What are the program’s strengths and weaknesses?
Why did some program sites perform better than others?
Developing the TOR yields a shared understanding of the evaluation’s specific purposes,
questions, objectives/themes, the design and data collection approach, the resources
available, the roles and responsibilities of different evaluation team members, the timelines,
and other fundamental aspects of the evaluation. The TOR facilitates clear communication of
evaluation plans to people inside and outside of the organization/project.
Importantly, if the evaluation will be external, the TOR helps communicate expectations to and manage the consultant(s). Because external evaluators are less familiar with the
project than the individuals commissioning them, it is important to have a TOR that clearly
sets forth all the necessary background, specifically to alert the evaluator to the questions that
are most important to stakeholders.
53 Church & Rogers, 2006 and OECD Development Assistance Committee, 2008.
Component - Description

Example objectives and approach:
Project objective 1: To decrease instances of domestic violence in the community
Evaluation goal: To improve the processes and programs offered by our organization to promote peacebuilding
Evaluation objective 1: To determine whether co-ed educational programs are more effective at decreasing domestic violence than single-sex programs
Evaluation approach: Household survey and key informant interviews with community leaders

Ethical considerations - Requiring informed consent; ensuring data is not traceable to individuals (de-identified data sets)
Implementation plan: schedule and logistics - Describes logistical components of the implementation plan
Evaluation team - Describes what is needed of the evaluator or evaluation team sought, similar to a job description
Reporting and dissemination plan - Explains requirements for report format; explains how the evaluator hands over the raw data; specifies how the report will be disseminated and to whom
Application guidelines - Provides information for external consultants and survey firms who wish to apply to offer their services; explains how to submit applications and by which date
Budget guidelines
Timeline
Contact details
Chapter 9: Learning
Chapter Objectives
In this chapter you will learn:
Some best practices for organization/project learning
The basics of the adaptive management approach
How to develop a learning agenda
Learning Activity
Create a learning agenda
Overview
Learning is an essential component of strong organizations and projects. It directly builds
on monitoring and evaluation (M&E) and uses the findings from M&E activities to strengthen
program design and best practices. Learning is most successful when it:
Comes in the form of regular performance data learning reviews, i.e., scheduling and
holding periodic learning reviews with the technical team and partners to review
performance and discuss any changes in approach that may be necessary
Also occurs post-evaluation and includes action planning (course-correction if necessary)
Involves funders, project beneficiaries, and other stakeholders
Adaptive management is ideal for learning about and understanding complex systems and
structures because it recognizes that systems are inherently changing and unpredictable.54
Adaptive management copes with the uncertainties by monitoring decision-making results
and by re-examining choices in light of these results and based on new information that
becomes available.
54The U.S. Agency for International Development (USAID) has written about the local systems approach in a
paper that explains how development programming can best work within existing systems. See Local Systems: A
Framework for Supporting Sustained Development, available at https://www.usaid.gov/policy/local-systems-
framework.
Projects should create regular opportunities to engage in learning around contextual changes and project outcomes, which will feed directly into project decision-making.
During the design phase, projects should identify key sources of information and create clear
mechanisms for capturing and learning from this information. Practical examples of
information sources are listed below. When considering sources, projects should keep in
mind that the goal is not to gather more information, but to carefully consider what types of information are most useful to inform their decisions.
Consultations with key informants
Basic field visit reports
Project monitoring data
Discrete applied political economy analyses (APEAs) and/or conflict analyses
Beneficiary feedback mechanisms (like community scorecards, social audits, etc.)
Close monitoring of news and social media
As part of this process, projects should consider how frequently managers
want to make substantive pivots to the project. The context of the project also should be
considered. A highly volatile context may need more frequent information review, such as on
a weekly basis, to inform resource deployment, while a more stable context might only
necessitate quarterly learning reviews of key environmental and/or programmatic
information.
The adaptive management approach requires that we periodically evaluate activities and
revisit our results frameworks, hypotheses, and cause-and-effect linkages (the premises on
which an organization selects activities to carry out) to ensure they are still valid based on
what we have learned.
Learning agendas are similar to evaluation plans, but differ in some key ways. While
questions in the evaluation plan may be similar to those in the learning agenda, the latter
enables the program/project/organization to plan for how it will ensure that improvements
are made along the way based on learning from immediate processes, experiences, and
activity outputs. Also, evaluations tend to be more rigorous in their methodology and analysis
because findings are often published for use by stakeholders outside the organization.
Learning agenda questions do not necessarily require as much rigor because findings are
used internally to inform continuous program improvement.
Developing a learning agenda includes the following steps. Table 9.1 provides an example of a
completed learning agenda that includes information for all these steps.
1. Determine the components of your program you want to learn about and identify what
needs to be assessed. To do this, review your implementation plan, deliverables, and
results framework and identify key components, sub-components, or other aspects of
your program that you will analyze in terms of your organization’s efficiency to
implement them and/or their effectiveness in obtaining results.
2. Clarify what you want to learn about each component you identified and determine the
questions you will answer. Also review the planned deliverables, results frameworks, and
indicators for this step. Example questions are:
What has been learned from the project that can contribute to improved program
implementation or to building relevant knowledge in the peacebuilding field?
What do we want to know about the subject?
What was changed as a result of our program?
How do targeted stakeholders perceive our programs?
What sort of reach do we have?
How many home visits are we supporting?
How was the target population affected?
How much money did we spend?
For example, if you determine that the training component should be evaluated, you
might ask what evidence is available to show that the training implemented has resulted
in new ways of doing things or increased participant knowledge and skills.
3. Identify how you will obtain the data. What data do you already have to help analyze this
issue, and what data do you need to be able to answer your questions? For example, will
you need to facilitate a focus group discussion, hire a research consultant, hold a staff
meeting, or use data from specific indicators?
4. Identify who should be involved in answering the questions and in participating in
analysis of the answers.
5. Determine deadlines for obtaining the data and conducting the analysis. Do you need the
information every month, each quarter, at the end of the project, etc.?
6. Identify how you plan to document the things you have learned, disseminate findings,
adapt your program activities, and/or update underlying premises or results frameworks,
thus altering the program design.
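The six steps above can be captured in a simple structured record, one per learning agenda row. The sketch below is illustrative only; the field names and example values are invented, not a standard format.

```python
# A minimal sketch of a learning agenda entry as structured data,
# mirroring steps 1-6 above. All field names are illustrative.
from dataclasses import dataclass

@dataclass
class LearningAgendaItem:
    component: str        # step 1: program component to learn about
    questions: list       # step 2: learning questions to answer
    data_sources: list    # step 3: how the data will be obtained
    who: list             # step 4: who is involved in answering/analysis
    frequency: str        # step 5: deadlines / review cadence
    use_of_findings: str  # step 6: documentation, dissemination, adaptation

item = LearningAgendaItem(
    component="Trauma healing (TH) sessions",
    questions=["Do TH sessions increase understanding of the link "
               "between trauma and conflict?"],
    data_sources=["Pre/post questionnaires", "Most Significant Change stories"],
    who=["Program staff", "TH participants"],
    frequency="Quarterly",
    use_of_findings="Discussed at quarterly program review meetings",
)
print(item.component)
```

Keeping the agenda in a structured form like this makes it easy to generate review-meeting checklists or track which questions are due each quarter.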
Table 9.1: Example learning agenda

Component: The trauma healing (TH) activities across two communities during our five-year project
Learning questions:
Do our TH sessions lead to increased understanding of the relation between trauma and conflict?
What kinds of resolutions do TH participants make that demonstrate willingness to forgive or reconcile with the other community(ies)?
What are we learning about the kind of community members to target for TH sessions?
Data sources and methods: Pre/post questionnaires; Most Significant Change (MSC) stories from TH participants; attendance records for the TH sessions; midterm and endline interviews and surveys with TH participants and community members to establish actions taken towards reconciliation
Who is involved: Program staff; TH participants; other community members
When: Collect pre- and post-test data at the beginning and end of each TH session; collect MSC stories 3-6 months after completing the TH sessions; complete midterm and endline assessments in years 3 and 5
Use of findings: Pre- and post-test training results discussed at quarterly program review meetings; MSC stories analyzed annually and findings discussed at the annual reflection meeting; following the midterm, results framework reviewed for the remaining project

Component: Peace dividends in corridors between two communities
Learning questions:
Are peace dividends benefitting two or more communities?
Are peace dividends contributing to increased interaction among members of two previously conflicting communities?
Are certain peace dividends more inclined to increase interaction between communities than others?
Data sources and methods: Quarterly peace dividend utilization reports; MSC stories resulting from the peace dividend; context updates from program staff and implementing partners through periodic context analysis; observation of how members from different communities are accessing the peace dividend; midterm and endline interviews and surveys with participants to establish whether peace dividends are changing attitudes and behaviors (Are communities more willing to reconcile? What reconciliation actions have they taken?)
Who is involved: Program staff; community members
When: Quarterly, during the quarterly review meetings; collect MSC stories 6-12 months after completing construction of the peace dividend; complete midterm and endline evaluations in years 3 and 5
Use of findings: Peace dividend utilization reports and context updates discussed at quarterly program review meetings; MSC stories analyzed annually and findings discussed at the annual reflection meeting; following the midterm, results framework reviewed for the remaining project
Chapter 10: Reporting
Overview
Reporting, or communicating your project results, milestones, and successes to stakeholders, is essential to ensuring that the information you collect is put to its best possible use. It is important to know who can best use the data and how. You also need to
know how to turn the data into useful knowledge that can be used by decision-makers. This
chapter will show you how to easily turn your monitoring, evaluation, and learning data into
reports and communication tools that can be broadly distributed among a wide range of
audiences and that are useful both for your management and your funders.
Introduction to Reporting
A report is a compilation of descriptive information. It is a communication tool to present
monitoring, evaluation, and research results by presenting raw data and information as
knowledge. A report is an opportunity for project implementers to inform themselves and
others (stakeholders, partners, funders, etc.) on the progress, difficulties encountered,
successes, and lessons learned during implementation of programs and activities.
Reporting enables assessment of progress against work plans and helps focus audiences on
the results of activities, enabling the improvement of subsequent work plans. Reporting helps
form the basis for decision-making and learning at the program level. Reporting also
communicates how effectively and efficiently the program is meeting its objectives.
A good report:
Focuses on results and accomplishments within the context of the project
Assesses performance over the past reporting period, using established indicators,
schedules, baselines, and targets
States explicitly whether and how much progress or results surpassed, met, or fell short of
expectations and why
Specifies actions to overcome problems and accelerate performance, where necessary
Explains the influence of comparative performance by objectives on the resources needed
Identifies the need to adjust resource allocations, indicators, or targets, where necessary
Discusses the way forward for programming in light of the findings; annual or final
reports also may address prospects of successful program closeout and expected
sustainability of results
However, reporting should not end with funder requirements. Other types of communication
tools you could use include:
Oral presentations/lectures
Discussion sessions/community meetings
Informal contacts and conversations
Press and media releases
Brochures and pamphlets
Formal academic papers and books
Visual presentations (e.g., videos, photos)
Internet, email, and websites
Plays, music, and dances
Use the following steps to determine the most appropriate communication tools for your
organization or project’s needs.
1. Identify your audiences’ information needs. For each audience, ask yourself what key
information you want to communicate.
2. Determine how you will report to each audience by selecting a tool/format that best suits
the information you want to convey. Think about their primary interest in the
organization/project to help you decide on the data to report.
3. Review your information database and identify what data you have to address that
interest.
Reporting Schedule
Table 10.1 illustrates one way to manage various reports required by funders, internal staff,
and other stakeholders. The process for writing, compiling, and submitting reporting, as
shown in the table, occurs in order from left to right.
2. Introduction/program background
This section should include standard language in one to two paragraphs about the project’s
objectives, beneficiaries, and funders and what the report includes.
This section also can include agreements between you and your funder to take specific
actions. If decisions are made (in consultation with the funder) to either change the
geographic location or the strategic elements being emphasized in the activity, they should be
noted here for the official record. If contract modification or amendments are needed, they
can be described in the administrative review section.
7. Administrative review
In a maximum of one page, discuss the status of your program administration. During the
reporting period, were there any changes in staffing/management, institutional
strengthening plan actions, contract modifications or amendments to the program?
Table 10.2: Example projected work plan for the following quarter
January to March 2015
Activity January February March
Program start-up and cross-cutting issues
Cross-cutting and start-up activities
Result 1.1: Communities more open to social reconciliation
Activity 1.1.1: Trauma healing
Activity 1.1.2: Peace dividends
Activity 1.1.3: Cultural adaptation
Result 1.2: Communities peacebuilding capacities mobilized
Activity 1.2.1: Promote local peacebuilding leadership
Activity 1.2.2: Promote youth leadership and engagement
opportunities
Activity 1.2.3: Expand the impact of peacebuilding
organizations
Activity 1.2.4: Establish/strengthen peace networks
Result 1.3: Local governments partner with their cross-border counterparts and
communities in conflict management
Activity 1.3.1: Promote emergence of local-government
peacebuilding leadership
Activity 1.3.2: Expand impact of local-government
peacebuilding initiatives
Activity 1.3.3: Promote partnership between local
government and peace networks
1. Executive summary
This section captures the essence of the report and provides an overview of its contents. It is the last section to be written and should not exceed two pages.
2. Introduction
This section presents a very concise overview of the need for and history of this program in a
couple paragraphs. It describes the results, objectives, context, and activities that were
anticipated under the program during the period of agreement.
4. Results/impact
This section compares planned versus actual achievements. In this section, you should:
Summarize program accomplishments or failings
Present findings as to why progress toward planned results was unexpectedly positive or
negative
Present findings on how well needs of different customers were met (e.g., by gender, age,
ethnic group)
Present indicator results/tables and anecdotal information to support findings
Assess the value of the program’s contribution and clarify exactly how the achievement of
your objectives contributed to the development outcome and impact
Review the validity of hypotheses and assumptions underlying the results framework
based on lessons learned in implementation
Describe mitigating factors that disrupted what was planned and the organization’s
response to the disruption
Describe the facilitating factors that helped spur results
Identify and analyze unintended consequences and effects of assistance activities
6. Review of deliverables
Review the deliverables submitted, preferably in table format for easier reading. Explain any
unfinished work and recommend whether and how it should be completed.
Photographs are images that convey events or ideas. Photographs are an important
component of success stories and help bring them to life.
Chapter 11:
Data Quality and Ethics
Chapter Objectives
In this chapter, you will learn:
Key data quality management concepts and assessment criteria
Steps for addressing common anticipated data quality issues during routine data
management processes
Factors or tips to remember when creating a good data management system
Learning Activities
Review key data quality issues and indicator plans
We can select the best indicators and write the best protocols, but if tools are not properly used and if the organization's reporting standards for quality and timeliness are not respected, the data may be of little value and/or may be over-, under-, or miscounted.
Issues and risks relating to data quality need to be thought through and documented to
ensure quality standards are developed and maintained. Thus each organization or project
needs to develop and document how it checks the following.
2. If the data collection processes are stable and consistent over time
and are thus reliable
Are data collection procedures consistent (e.g., from one reporting period to the other, from
location to location, is the same instrument being used)? Are we checking the data to ensure
it is correct and free from errors? Are data problems reported? Would we get the same results
if someone else went out again to collect the same data in the same way?
3. If the data is collected frequently enough and is current, and thus timely
Is data coming in frequently enough to inform program management decisions? For example, are you receiving training feedback data before planning the next round of training (so you can use learning from those trainings to shape the upcoming training)? Do people report when they are supposed to? Data that is required today but not received until tomorrow may no longer be helpful.
4. If the data have an acceptable margin of error and are thus precise
Is the margin of error less than the expected change being measured?
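One rough way to check this is to compare the survey's margin of error with the smallest change you expect to detect. The sketch below assumes a simple random sample and a proportion-type indicator at a 95% confidence level; the baseline value, sample size, and expected change are invented for illustration.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion from a
    simple random sample (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative numbers: baseline 40% report cooperation, n = 400
# respondents, and the program expects a 10-percentage-point change.
moe = margin_of_error(0.40, 400)
expected_change = 0.10
print(f"Margin of error: {moe:.3f}")   # about +/- 0.048
print("Precise enough:", moe < expected_change)
```

If the margin of error were larger than the expected change, any measured difference would be indistinguishable from sampling noise, signaling a need for a larger sample or a different method.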
Data Management
The data management process involves collecting, collating, analyzing, reporting, using, and
storing project activity information. There can be no monitoring, evaluation, reporting, and
learning (MERL) without a good data management system, with the components described
below.
1. Data source
2. Data collection
Data collection is the process of gathering data generated from the various activities
implemented by an organization and relevant to an organization’s MERL framework. Data
collection involves obtaining data from original sources and transferring it into tools (paper
or electronic) from which it can be analyzed or transferred to another data analysis system for
analysis and reporting. Many times, data quality is compromised at this stage; therefore, you
should exercise caution during data entry—garbage in, garbage out! See Chapter 12 for tips on
collecting data through electronic means.
3. Data collation
Data collation refers to aggregation of data from different sources, such as different program
sites or different field workers who collect the data, into summarized formats. Collation can
be done electronically using MS Excel spreadsheets or databases or manually using paper-
based systems. For example, you likely want to sum up female and male participants of
different dialogue sessions across regional sites/partners in the quarter to get the total
dialogues conducted and total female and male participants who attended peace dialogues
implemented by the organization in the quarter.
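The running example above can be sketched in a few lines of code. The site names and counts below are invented for illustration.

```python
# A small sketch of collating dialogue-session counts from several
# sites into quarterly totals; site names and numbers are invented.
from collections import Counter

site_reports = [
    {"site": "North", "dialogues": 4, "female": 35, "male": 28},
    {"site": "South", "dialogues": 3, "female": 22, "male": 31},
    {"site": "East",  "dialogues": 5, "female": 41, "male": 37},
]

totals = Counter()
for report in site_reports:
    totals["dialogues"] += report["dialogues"]
    totals["female"] += report["female"]
    totals["male"] += report["male"]

print(dict(totals))  # {'dialogues': 12, 'female': 98, 'male': 96}
```

The same aggregation is equally at home in a spreadsheet; the point is that collation reduces many site-level records to one organization-level summary per reporting period.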
4. Data analysis
Data analysis involves reviewing and manipulating data to assess progress made toward
desired objectives and targets. Analysis enables data users to:
Associate variables (test underlying theories or assumptions)
Predict relationships (cause and effect/outcomes)
Indicate confidence in results
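As a toy illustration of "associating variables", the sketch below computes a Pearson correlation between sessions attended and a tolerance score; all data values are invented, and in practice you would likely use a statistics package rather than hand-rolled formulas.

```python
import math

# Invented illustrative data: dialogue sessions attended vs. a tolerance score.
sessions = [1, 2, 3, 4, 5, 6]
tolerance = [2.0, 2.5, 3.1, 3.4, 4.2, 4.8]

n = len(sessions)
mean_x = sum(sessions) / n
mean_y = sum(tolerance) / n
# Pearson r = covariance / (std_x * std_y), computed from raw sums.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(sessions, tolerance))
sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in sessions))
sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in tolerance))
r = cov / (sd_x * sd_y)
print(f"Pearson r: {r:.2f}")  # close to 1: strong positive association
```

Remember that correlation only establishes association, not the cause-and-effect relationships mentioned above; those require evaluation designs such as comparison groups.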
5. Reporting
Reporting entails compiling descriptive data and presenting raw data and information
generated from data analysis as useful knowledge. Reporting provides an opportunity for
project implementers and others to learn about the progress, problems, difficulties
encountered, successes, and lessons from implementing program/project activities.
Remember: if it is not reported, it never happened, because others do not know about it. See
Chapter 10 for more on reporting.
6. Data use
Data use refers to making timely data-driven decisions that relate to program/project implementation. Information needs to be available when required so it serves the purpose for which it was collected and can then be used for learning.
7. Data storage
Data storage entails securely keeping and maintaining information or data collected for
current and future reference. Data storage is done best when both electronic and hard copy
(original data source documents) are well organized and kept safe.
To get it right from the start, it is absolutely essential that all indicators are defined upfront to
avoid potential problems with data validity or reliability that could arise from indicator
definitions being unclear. Indicator protocols allow programs to address common data
validity and reliability problems by providing standard definitions of program indicators and
related units of measure.
Staff responsible for MERL must ensure common indicator understanding and interpretation
by program team members who would be involved in collecting or analyzing indicator data
coming through the system at any given time. This could best be done through a team
meeting or partners/stakeholders MERL training. It is also important for the program
manager(s) to consider and determine the likely risks to managing the individual indicator
data right from source to data use.
The "data quality issues" section of the standard USAID performance indicator reference sheet (PIRS)57 allows programs to document all key data-management processes and the anticipated data quality risks that may affect data use. This also allows the program MERL and/or program managers to pay attention to key data quality considerations, such as timeliness of the data, which is addressed by detailing the frequency and timing of data acquisitions and analysis. Data integrity also is addressed to some extent by the provision on the indicator protocol to assign responsibility for data storage, along with the protocols and systems necessary to ensure that data remains secure and uncorrupted. However, additional issues, such as policies addressing data ethics within the organization, are still required to ensure that data integrity is addressed comprehensively.

10 tips for developing a good data management system
1. Keep it simple
2. Make the interface attractive
3. Remember only techies read the users' guide!
4. Craft well-defined indicators
5. Consider user needs
6. Ensure compatibility of systems
7. Ensure quality of reports generated
8. Assign a dedicated database administrator
9. Watch out for accessibility issues
10. Conduct backup and maintenance
Develop standard operating procedures (SOPs) for managing the collected data (moving
data from one point to the next)
Develop SOPs for revising the collection tool
Communicate the process and establish processes for supportive supervision of data
collectors
Conduct on-site reviews during the data collection process
Hard copy or primary information sources need to be well filed for future reference and possible DQAs
Stored data access needs to be available only to authorized team members as per the data
management protocol; this ensures data remain safely secured
Consider implementing basic system maintenance tools like “restore” and “check data
integrity” commands
Data Ethics
Data ethics refers to the rules or standards governing the conduct of a person collecting,
collating, reporting, or using data. Common ethical considerations that programs should
make in relation to data include the following.
Ensure that program participants/beneficiaries are provided with sufficient information
to enable them to make informed decisions about their participation in data collection
efforts. No participant/beneficiary should be coerced to give information.
Participants need to be made aware of how much privacy/confidentiality/anonymity they
can expect for their responses. This is especially important for peacebuilding programs
where safety is of the utmost concern.
Programs should take steps to ensure that data is not misrepresented or falsified by
anyone involved in the data-management process.
Learning Activity: Review key data quality issues and indicator plans
Divide participants into groups and ask them to discuss and identify key data quality issues
likely to affect their indicator data reporting.
Review partner indicator plans to ensure they include measures for addressing any data
quality issues moving forward and have strong indicator definitions.
Chapter 12:
Mobile Technology for Data Collection
Chapter Objectives
In this chapter, you will learn:
About short message service (SMS)-based data collection versus general packet radio service
(GPRS) data collection
The benefits of using mobile technology
How to develop a mobile technology plan
When to use and not use mobile technology
How to capture GPS coordinates
Learning Activities
Brainstorming on mobile technology for data collection
Evaluating experience with mobile data collection
Overview
Pact’s MERL Module 4: Mobile Technology Handbook58 covers in great detail the use of
mobile technology for data collection. This chapter is an overview of the different types of
mobile data collection platforms available and considerations to make when deciding how to
use mobile technology in a peacebuilding project. Much of this chapter is drawn directly from
MERL Module 4.
Mobile technology already has proven itself a powerful and efficient tool that accelerates
achievement of project objectives and, ultimately, of development goals. Efficiency and
data quality gains have been accepted as the norm for many applications, and the
frontier of possibilities expands daily.
When developing a strategy, it is important to involve all stakeholders, including those who
will collect the data, use or analyze data, and manage the process. Chapter 2 of Pact’s MERL
Module 4 discusses how to get started and what questions you should be asking stakeholders
as you develop the strategic plan.
Use the following steps to help you begin to develop your organization or project’s mobile
technology strategy.
1. Brainstorm the areas where mobile technology can improve your project, keeping in mind
the sensitivity of the context and logistical limitations in which it operates.
2. Identify the data you would like to collect.
3. Identify and understand your mobile technology users.
4. Consider data-management processes.
5. Conduct a mobile technology feasibility scan.
6. Decide on the appropriate mobile technology platform.
7. Develop your mobile technology strategy document.
Goal:
Problem:
Solution:
Remember: Mobile technology should simplify your life. For example, think of the large volume of data waiting to be entered, or of the long distances people must travel to submit paper survey forms.
Involving end users early helps ensure the success of your rollout. Exposing end users to the technology early on will test
any assumptions you have made about the appropriateness of mobile technology. Early
user involvement also has the potential to greatly improve the level of participation and,
ultimately, the speed and quality of implementation. It also is highly important to ensure
gender equity when introducing technology. Obtaining buy-in from both male and female
users is crucial. Additionally, sensitization to the technology may require different
techniques with men and women, depending on existing usage levels.
With so many different mobile devices on the market, each with different plans and features,
choosing a mobile device can be overwhelming. To quickly narrow your options, consider the
nature of your mobile initiative (one-off versus long-term), data requirements, method of
transmission, power sources, and budget. Table 12.1 outlines the types of mobile devices you
could purchase and the functionality of each.
Table 12.1 (excerpt): Cons and manufacturers by device type

Basic phones
Cons: Moving parts of flip phones and slider phones affect reliability and durability; small screen; creating texts on a basic keyboard can be challenging
Manufacturers: Nokia

Feature (Java) phones
Cons: Not all have GPS; language compatibility issues
Manufacturers: Nokia, Siemens, Sony Ericsson (Java phones)

Smartphones
Cons: Heavy power users, need recharging frequently; might be targeted by thieves; some are more expensive than other mobile devices
Manufacturers: Samsung, LG, ZTE, Vodaphone, HTC, Apple

Tablets
Cons: Heavy power users, need recharging frequently; might be targeted by thieves
Manufacturers: Aakash, Ubistlate, Google Nexus, Samsung, Apple
A number of mobile platforms offer a broad range of features and different pricing plans;
several open-source options come at no cost.
To determine which platform is right for your mobile initiative, consider your needs and
overall project budget. To a certain extent, your project’s data needs and the frequency and
type of data collection outlined in your mobile strategy will help you narrow your options.
Please keep in mind that platforms change rapidly and that features and price structures
evolve over time; the latest information is always available on the platform’s website. When
deciding among mobile platforms, consider phone requirements; the data entry interface and
transmission methods permitted; data storage, analysis, and reporting features;
miscellaneous features; and the pricing structure.
Table 12.2 shows the most commonly used mobile platforms on the market today; however,
new options open daily. For additional details on the different mobile platforms, please see
Chapter 3 of MERL Module 4.
It is important to consider the type of data you are collecting and who will be collecting it to
make sure that the use of mobile technology is appropriate for the context.
Develop a Budget
There are costs associated with mobile data technologies. The most cost-effective platform
depends on the amount of data you collect, number of users, and number of survey items.
Costs of mobile versus paper data collection should be considered when deciding on the mode
of data collection you will use. Table 12.3 outlines the considerations for both types of data
collection mode.
Table 12.3: Data collection comparison: mobile technology vs. traditional paper

Cost of mobile data collection:
Number of mobiles (e.g., total number of data collectors plus backups or replacement mobiles)
Cost of submitting data (e.g., per SMS text message or, if over the internet, per gigabyte)
Charging devices

Cost of paper-based data collection:
Printing costs
Pens and paper
Transportation of paper surveys
Training of data collectors
Human resources for, e.g., data entry, transportation, supervision of data collectors
Costs vary by platform: some charge an annual fee for unlimited use (e.g., iFormBuilder), while others charge per data field collected (for example, per survey question, as in Mobenzi's model). Note that there are many more mobile software options than Mobenzi and iFormBuilder, and projects are encouraged to investigate and produce cost estimates for a variety of platforms before making their decision.
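As a rough sketch of how these two pricing models compare, the snippet below computes the annual cost of a flat-fee platform versus a per-field platform at a given survey volume. All fees and volumes are invented placeholders for illustration, not actual iFormBuilder or Mobenzi rates.

```python
# Break-even sketch for two common mobile-platform pricing models.
# All rates below are hypothetical placeholders, NOT real vendor prices.

def annual_cost_flat(annual_fee: float) -> float:
    """Flat-fee model: cost is independent of data volume."""
    return annual_fee

def annual_cost_per_field(price_per_field: float, surveys_per_year: int,
                          questions_per_survey: int) -> float:
    """Per-field model: cost scales with the number of fields collected."""
    return price_per_field * surveys_per_year * questions_per_survey

# Hypothetical project: 2,000 surveys per year, 40 questions each.
flat = annual_cost_flat(1000.00)                    # $1,000/year, unlimited use
per_field = annual_cost_per_field(0.01, 2000, 40)   # $0.01 per field collected

print(f"Flat-fee platform:  ${flat:,.2f}")
print(f"Per-field platform: ${per_field:,.2f}")
```

At these placeholder rates the per-field model is cheaper ($800 vs. $1,000), but the flat fee wins once volume passes 2,500 surveys per year; rerunning the comparison with real quotes is a quick way to narrow your options.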
Costs related to powering mobiles may need to be budgeted as well. Some options to
ensure reliable power include:
Chargers that plug into car cigarette lighter sockets
External battery sources
Spare internal batteries
Solar chargers
Appendix 1:
Success Stories Guide
Partners should report stories that meet one or more of the following criteria:
1. Show broad-scale sustainable peace activities/projects, i.e., no one-offs and nothing that
is not being implemented at scale; the writer may focus on an individual beneficiary to
highlight their work, but the greater project must impact more than an individual or small
group of people
2. Relate to the peacebuilding project they work on
3. Highlight new, cutting-edge development innovations
4. Highlight any contributions to funder priorities, for example Value for Money (DFID) or
localization of foreign aid (USAID).
For more information on the USAID Reform Agenda, visit http://forward.usaid.gov. For the DFID Value for Money policy, see https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/67479/DFID-approach-value-money.pdf.
Photographs
Both the process of taking a photograph and the quality of the final image can make or break a story. Consider the following when using a photograph in a success story.
Digital photos should be shot with at least a 3-megapixel resolution and, when possible,
maintain at least 300 dpi (dots per inch).
Send only graphics files (preferably JPEG/.jpg). Do not attach an MS Word document
with the photo pasted into it.
Do not alter, compress, or crop photographs. Send only original images.
Do not scan images from publications or other printed materials.
Generally speaking, the larger the file, the better the quality and final result.
Include a caption that briefly summarizes what is occurring in the photograph: who,
what, when, and where.
The photo should be colorful, depict action, and feature the main story character.
Ask permission before taking a photo of someone. Check funder, country, and your
organization’s requirements for taking and using photographs of human subjects.
Play with different angles and backgrounds instead of capturing the subject straight on. The most interesting and visually pleasing photographs are often those that capture the subject at unusual angles.
Use available light instead of flash whenever possible.
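The resolution guidance above can be checked mechanically: a photo's megapixel count is its pixel area, and its maximum print size at 300 dpi is its pixel dimensions divided by 300. The snippet below is a small illustration of that arithmetic; the sample dimensions are arbitrary.

```python
# Check a photo against the guidance above: at least 3 megapixels, and
# printable at 300 dpi. The sample dimensions are arbitrary.

def megapixels(width_px: int, height_px: int) -> float:
    """Megapixel count from pixel dimensions."""
    return width_px * height_px / 1_000_000

def max_print_size_inches(width_px: int, height_px: int, dpi: int = 300):
    """Largest print size (width, height) in inches at the given dpi."""
    return width_px / dpi, height_px / dpi

w, h = 2048, 1536  # a roughly 3.1-megapixel image
print(f"{megapixels(w, h):.1f} MP; meets 3 MP minimum: {megapixels(w, h) >= 3.0}")
print("Max print at 300 dpi: %.1f x %.1f inches" % max_print_size_inches(w, h))
```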
Model Stories
Before sitting down to write, consider reviewing the model stories you have come across.59
This will give you a sense of what makes for a successful submission.
Appendix 2:
MERL Plan Template
Name of Organization/Project
Contents
Abbreviations and Acronyms………………………………………………………………………………………….. 1
1. Introduction……………………………………………………………………………………………………………… 2
1.1. Mission and Vision……………………………………………………………………………………….. 2
1.2. Purpose of the MERL Plan……………………………………………………………………………. 2
1.3. Overview of Programs……………………………………………………………………………………2
1.4. MERL and the Project Cycle………………………………………………………………………….. 3
1.5. MERL Team………………………………………………………………………………………………… 3
1.6. Audience Analysis………………………………………………………………………………………… 4
2. Conceptual Framework……………………………………………………………………………………………….5
2.1. Organizational/Project Theory of Change………………………………………………………..6
3. Indicator Definitions, Data Collection, and Reporting Plan…………………………………………….7
4. Reporting…………………………………………………………………………………………………………………. 8
5. Data Quality and Data Verification Procedures…………………………………………………………….. 9
6. Monitoring Tools………………………………………………………………………………………………………10
7. Deliverables Schedule……………………………………………………………………………………………….. 11
8. Evaluation Plan……………………………………………………………………………………………………….. 12
Annex 1. Work Plan……………………………………………………………………………………………………… 13
Annex 2. Indicator Targets…………………………………………………………………………………………… 14
1. Introduction
1.1. Mission and Vision
State your vision and mission here. Explain each as needed for better understanding by
users of the MERL plan and to show how these inform the goal of this MERL framework.
page 2
Appendix 2: MERL Plan Template | page 112
Monitoring, Evaluation, Reporting, and Learning (MERL) for Peacebuilding Programs
2. Conceptual Framework
Use either the results framework or the logical framework to demonstrate your expected project results. These frameworks show the logical connections between what your organization/project hopes to achieve and the activities it conducts, giving the reader of the plan an overview of why you do what you do.
4. Reporting
Describe the types of reports the organization/project is required to prepare periodically for your MERL audiences, such as weekly updates,
monthly reports, quarterly reports, and annual reports. You can use the following template to develop a reporting schedule to show when
each type of report is due, who is responsible for preparing it, and to whom the reports are submitted. Annex all relevant reporting templates
to this plan (e.g., those from your current funders).
Table columns: Audience | Communication tool selected for reporting | Schedule for reporting | Responsibility for preparing report | To whom is the report submitted

EXAMPLE. Funder: Pact
- Quarterly report: due April 5, July 5, October 5, and January 5; prepared by the program manager; submitted to the Pact regional manager and MERL officer
- Soft copy of event and incident reports: due monthly by the 5th of the following month; prepared by the program officer; submitted to the Pact regional manager and MERL officer
- Financial reports: due monthly by the 5th of the following month; prepared by the finance manager; submitted to the Pact grants manager
- Final report: due 30 days after the end of the grant; prepared by the program manager; submitted to the Pact regional manager, grants manager, and MERL officer
- Final financial report and closeout audit: due 30 days after the end of the grant; prepared by the finance manager; submitted to the Pact grants manager
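A reporting schedule like the example above can also be turned into concrete calendar dates so that reminders are generated rather than tracked by hand. The sketch below encodes the example row's deadlines; the year and function names are assumptions for illustration.

```python
# Turn the example reporting schedule into concrete due dates.
# The dates follow the example row above; the year is an arbitrary assumption.

from datetime import date

def quarterly_due_dates(year: int) -> list[date]:
    """Quarterly report deadlines; the January report falls in the next year."""
    return [date(year, 4, 5), date(year, 7, 5), date(year, 10, 5),
            date(year + 1, 1, 5)]

def monthly_due_date(year: int, month: int) -> date:
    """Monthly reports are due by the 5th of the following month."""
    next_month = month % 12 + 1
    next_year = year + 1 if month == 12 else year
    return date(next_year, next_month, 5)

for d in quarterly_due_dates(2016):
    print("Quarterly report due:", d.isoformat())
print("March 2016 monthly report due:", monthly_due_date(2016, 3).isoformat())
```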
5. Data Quality and Data Verification Procedures
After drawing the graphic, describe the procedures your organization is carrying out to
ensure data is of good quality at each of the following data management stages: source,
collection, collation, analysis, reporting, use, and storage. If no verification and data
quality management procedures have been specified before, use this plan to develop some
that will guide the organization/project going forward.
6. Monitoring Tools
Include a matrix of the monitoring tools that you will use to track the progress of your
organization/project. Remember that these are not the tools for collecting data, but are
related to monitoring the organizational and operational aspects of program/project rollout.
These tools include things like conflict/context monitoring tools, work plan implementation
monitoring, and assumptions monitoring. It is important to state in your matrix how often the
information captured in the tools is collected and/or will be updated. Remember that these
data collection processes must inform operational issues, and thus must be updated as often
as the information remains useful for management. Do not waste time, money, and effort on tools whose content, once reviewed, has no operational impact. Include blank examples of your monitoring tools as an annex to this plan.
Table columns: Operational area of concern | Name of monitoring tool | Data collection frequency | Update frequency

Operational areas of concern to complete:
- Conflict monitoring
- Project implementation and expenditure
- Assumptions monitoring
7. Deliverables Schedule
Refer to your award document and identify all the contractual deliverables that are due to
your funder(s). Then, complete the following matrix. Update it quarterly, especially to
record the actual delivery dates.
8. Evaluation Plan
Include a basic evaluation plan that enables you to evaluate why you have or have not achieved the goal/results that were set. This plan
allows you to look at consequences (intended or unintended), effectiveness, efficiency, outcomes, and sustainability of initiatives. Remember
that evaluation looks at the overall program/project, the operations, governance, and deliverables! Basically, it helps you identify the lessons
learned and what you would do better next time. Use a simple tool such as the table below to help you evaluate your overall program/project.
Table columns: Type of evaluation | What do we need to evaluate? | What questions do we need to ask? | How will we obtain the data? | When will we get the data? | Who will do this?

EXAMPLE. Outcome evaluation
- What to evaluate: progress made in reducing violence in conflict areas
- Questions to ask: Have people changed their behaviors, the way they interact with other communities, and/or their underlying attitudes?
- How the data will be obtained: one-to-one interviews with key informants; survey questionnaires
- When the data will be collected: semi-annually and annually
- Who will do this: MERL officer, program staff
Annex 1. Work Plan

Annex 2. Indicator Targets
Appendix 3: References
Argyris, C., R. Putnam, & D. McLain Smith. 1985. Action Science: Concepts, Methods, and Skills for Research and Intervention. San Francisco: Jossey-Bass.
Channel Research. 2008. Evaluation of Conflict Prevention and Peace Building. Handout for
INCORE University of Ulster Summer School.
Church, C., & M. Rogers. 2006. Designing for Results: Integrating Monitoring and
Evaluation in Conflict Transformation Programs. Washington, D.C.: Search for
Common Ground.
Church, C., & J. Shouldice. 2002. The Evaluation of Conflict Resolution Interventions:
Framing the State of Play. Londonderry: Incore.
---. 2003. The Evaluation of Conflict Resolution Interventions. Part II: Emerging Practice &
Theory. Londonderry: Incore.
Dziedzic, M., B. Sotirin, & J. Agoglia (Eds.). 2008. Measuring Progress in Conflict Environments (MPICE): A Metrics Framework for Assessing Conflict Transformation and Stabilization. Washington, DC: United States Institute of Peace. Available at http://www.usip.org/files/resources/MPICE%20Aug%2008.pdf.
Igbo, A. 2013, Sept. 3. Ten Tips for Developing a Data Management System for Monitoring
and Evaluation (M&E). The Communication Initiative Network. Available at
http://www.comminit.com/job_vacancies/content/10-tips-developing-data-
management-system-monitoring-and-evaluation-me.
International Bank for Reconstruction and Development & World Bank. 2012. Designing A
Results Framework for Achieving Results: A How-To Guide. Washington DC:
Independent Evaluation Group. Available at
http://siteresources.worldbank.org/EXTEVACAPDEV/Resources/designing_results_
framework.pdf.
Jaszczolt, K., T. Potkański, & S. Alwasiak. 2003. Internal Project M&E System and
Development of Evaluation Capacity: Experience of the World Bank-funded Rural
Development Program.
Kusek, J.Z., & R.C. Rist. 2004. Ten Steps to a Results-Based Monitoring and Evaluation System: A Handbook for Development Practitioners. Washington, DC: World Bank.
Lederach, J.P., et al. 2007. Reflective Peacebuilding. Mindanao, Philippines: John B. Kroc
Institute for International Studies.
OECD. 2002. Glossary of Key Terms in Evaluation and Results Based Management.
Available at http://www.oecd.org/dataoecd/29/21/2754804.pdf.
OECD Development Assistance Committee. n.d. DAC Criteria for Evaluating Development
Assistance. Available at http://www.oecd.org/dataoecd/42/6/49756382.pdf.
Patton, M.Q. 1997. Utilization-focused evaluation: The new century text (3rd ed.). Thousand
Oaks: SAGE Publications, Inc.
Pact. 2008. Building Monitoring and Evaluation Systems in Civil Society Advocacy
Organizations (3rd ed.). MERL Module 1. Washington, DC: Pact.
---. 2014. Field Guide for Data Quality Management. MERL Module 2. Available at
http://www.pactworld.org/sites/default/files/DQM%20Manual_FINAL_Novemb
er%202014.pdf.
Sartorius, R., & C. Carver. n.d. Monitoring, Evaluation and Learning for Fragile States and Peacebuilding Programs. Arlington, VA: Social Impact.
Segone, M. (Ed.). 2009. Country-Led Monitoring and Evaluation Systems: Better evidence,
better policies, better development results. Available at
http://mymande.org/sites/default/files/images/Country_ledMEsystems.pdf.
Scriven, M. 1991. Evaluation Thesaurus (4th ed.). Newbury Park, CA: SAGE Publications, Inc.
Social Impact. 2009. Strengthening Monitoring and Evaluation for Fragile States and Peace Building Programs [PowerPoint presentation].
U.K. Department for International Development. 2015. Humanitarian Response Funding Guidelines. Available at https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/404342/Humanitarian-Response-Funding-Guidelines-2015.pdf.
U.S. Government. n.d. Plain Language.gov: Improving Communication from the Federal
Government to the Public [website]. Available at
http://www.plainlanguage.gov/index.cfm.
University of Reading Statistical Services Center. 1998. Project Data Archiving: Lessons
from a Case Study.
U.S. Centers for Disease Control and Prevention. 1999. Morbidity and Mortality Weekly
Report (Vol. 48). Atlanta: Centers for Disease Control and Prevention.
World Bank. 2007. International Program for Development Evaluation Training (IPDET)
Handbook. Washington, DC: World Bank.
Collier, P., & A. Hoeffler. 2004. Greed and Grievance in Civil War. Oxford Economic Papers
56(4): 563–595.
European Union External Action. n.d. Conflict Prevention, Peace Building and Mediation [web page]. Available at http://eeas.europa.eu/cfsp/conflict_prevention/index_en.htm.
Fearon, J., & D. Laitin. 2003. Ethnicity, Insurgency, and Civil War. American Political
Science Review 97(1): 75–90.
Le Billon, P. 2005. Fuelling War: Natural resources and armed conflict. The Adelphi series
edition no. 373. Available at
https://www.iiss.org/en/publications/adelphi/by%20year/2005-1a3b/fuelling-war--
natural-resources-and-armed-conflict-ebea.
United Nations. 2008. United Nations Peacekeeping Operations Principles and Guidelines.
Available at http://www.un.org/en/peacekeeping/documents/capstone_eng.pdf.
United Nations Development Programme. n.d. Bureau for Crisis Prevention and Recovery
Overview [web page]. Available at
http://www.undp.org/content/undp/en/home/ourwork/crisispreventionandrecovery
/overview.html.
United Nations General Assembly and United Nations Security Council. 2007. Report of the
Peacebuilding Commission On Its First Session, June 2006 - June 2007, A/62/137-
S/2007/458, 4. Available at
http://www.securitycouncilreport.org/atf/cf/%7B65BFCF9B-6D27-4E9C-8CD3-
CF6E4FF96FF9%7D/PBC%20A62137-S2007458.pdf.
Zack-Williams, A.B. 1999. Sierra Leone: The Political Economy of Civil War, 1991–98. Third World Quarterly 20(1): 143–162.