

Position Statement on
Demonstrating NGO Effectiveness

Approved by the InterAction Board of Directors
September 20, 2005

The Working Group on Evaluation and Program Effectiveness
Amy Coen and Chris Dunford, Co-chairs

InterAction would like to thank the Hewlett Foundation for its support of this initiative.
This document and related activities are the collective products of InterAction Members
and staff through the Working Group on Evaluation and Program Effectiveness, with
invaluable guidance, facilitation and writing by the Working Group’s consultant advisor,
Darcy Ashman. Including InterAction staff, more than 40 people from almost as many
InterAction member organizations participated in the meetings to initiate the Working
Group and produce this statement. The most active members are listed below; the starred
names (*) are recognized for their additional contributions to drafting and reviewing the
statement and background document.

*Amy Coen, CEO, Population Action International, Working Group Co-chair
*Chris Dunford, CEO, Freedom from Hunger, Working Group Co-chair
Barnett, Kathleen, International Center for Research on Women
Bilinsky, Paula, Academy for Educational Development
Bloom, Evan, Pact
Castrillo, Carolina, Catholic Relief Services, El Salvador
Getman, Christie, Winrock International
Henderson, Laura, Christian Children’s Fund USA
Majarowitz, Paul, Mercy Corps
*McLaughlin, Patricia, Red Cross
*Ogborn, Tim, Heifer International
Roper, Laura, Oxfam America
*Rugh, Jim, CARE
Santos, Rick, Church World Service
Schlangen, Rhonda, Population Action International
Seims, La Rue, Save the Children US
Steele, Roger, World Vision
*Steinke, Megan, Save the Children US
Whisenant, Jeff, Pact

InterAction Staff and Consultant:

Mohammad N. Akhter, M.D., M.P.H., President and CEO, InterAction
*Darcy Ashman, Advisor, Working Group on Evaluation and Program Effectiveness
Rebecca Cathcart, Program Associate, InterAction
Kenneth Giunta, Director, Membership and Standards
Beth Newman, Special Projects Associate, Membership and Standards

Special thanks also to Mary McClymont, past President of InterAction, and John
Zarafonetis, immediate past Director, Committee on Development Policy and Practice.

Glossary of Key Terms

Effectiveness
“Producing a decided, decisive or desired effect.” There are many approaches to
defining and measuring nonprofit organizational effectiveness. Common criteria include
(a) the extent to which the organization’s major goals, usually stated in the mission and
strategy, are achieved, (b) the extent to which key stakeholders (donors and other groups
or individuals with a major stake in the organization and its activities) are satisfied with
the organization and its results, and (c) the extent to which the organization is able to
attract resources to continue its activities. Many agencies also include the extent to
which they are responsive to communities or their other primary constituencies.

Program
For the purpose of this statement, program includes the entire set of goals and activities
that an NGO implements in pursuit of its mission. Program activities may include
services, advocacy, research, training, etc. They are often divided into geographic or
thematic areas, such as health, education, or governance. Programs are distinguished
from projects and from other essential organizational activities like fundraising,
marketing, human resources, etc.

Program Participants
Those who are involved in and intended to benefit from program activities. NGOs use
many different terms to describe such persons and groups or organizations, including
beneficiaries, citizens, clients, members, communities or community-based organizations,
families, women, marginalized groups, local government, etc. Some member programs
may bring indirect benefits to populations, such as those focused on policy change.
Approaches to evaluating benefits and satisfaction should fit the program activities.

Strategy
“A strategy is … a pattern of purposes, policies, programs, actions, decisions, and/or
resource allocations that define what an organization is, what it does and why it does
it.… Effective strategy formulation and implementation will link rhetoric, choices and
actions into a coherent and consistent pattern across levels, [units] and time.” To be
strategic: “of great importance within an integrated whole or to a planned effect.”

Evaluation
“To determine … the value of.” In this statement, the term ‘evaluation’ includes a broad
range of activities that enable agencies to collect evidence to assess program progress and
achievements, including monitoring, inquiries, reviews, surveys, research (participatory
action, applied), etc.

These definitions come from either Webster’s Dictionary or top resources on nonprofit leadership and
management. Nonprofit effectiveness ideas are from Herman, R.D. & Renz, D.O. (1999) Theses on
Nonprofit Organizational Effectiveness. Nonprofit and Voluntary Sector Quarterly 28(2), 107-126.
Strategic planning ideas are from Bryson, J. (1994) Strategic Planning and Action Planning in Nonprofit
Organizations. In The Jossey-Bass Handbook of Nonprofit Leadership and Management, eds. R.D. Herman
& Associates. San Francisco: Jossey-Bass.

InterAction Position Statement on
Demonstrating NGO Effectiveness

Table of Contents

I. InterAction Position Statement on Demonstrating NGO Effectiveness

II. Background to the Position Statement on Demonstrating NGO Effectiveness
1. Preamble
1.1. Commitment to demonstrating the effectiveness of our work
1.2. Benefits of voicing a common perspective

2. InterAction Member Perspectives on Demonstrating Effectiveness
2.1. Effective NGOs bring about meaningful change in people’s lives
2.2. NGOs demonstrate effectiveness by providing evidence of change
2.3. A strategic approach
2.4. Applying relevant and sound evaluation throughout agencies
2.4.1. Broad and flexible concept of evaluation
2.4.2. Mainstream evaluation in agency policy, systems and culture
2.5. Allocate adequate resources for strategic evaluation needs
2.6. Collaborate actively and flexibly with partners and stakeholders

3. Actions by InterAction Members and Committees
3.1. Member actions
3.1.1. Articulate criteria for success.
3.1.2. Regularly evaluate progress.
3.1.3. Mainstream monitoring and evaluation in policy, systems, and culture.
3.1.4. Allocate adequate resources for strategic evaluation needs.
3.1.5. Collaborate with partners and stakeholders.

3.2. InterAction Committee actions
3.2.1. Standards Committee.
3.2.2. EPE Working Group: Practical learning activities.
3.2.3. EPE Working Group: Communication and advocacy with donors.

Appendix: Selected Examples to Illustrate Member Actions 1 – 5

InterAction Statement on Demonstrating NGO Effectiveness
Prepared by the InterAction
Working Group on Evaluation and Program Effectiveness

All InterAction members join in making this commitment to demonstrating the
effectiveness of our work to ourselves, our stakeholders and the broader public. As
we show how our work leads to the changes we seek to bring about in the world, we
enhance our value as NGOs in international development and humanitarian assistance.

An effective NGO brings about meaningful changes in people’s lives. Our activities
should lead to development or relief results aligned with the goals and aspirations of our
mission and vision statements. The communities or other constituencies we seek to serve
should value our work as effective for them.

In order to share our achievements with our stakeholders and the wider public
globally, we must provide evidence of our progress and results. Evidence of the ways
in which our activities are changing lives can be provided by a strategic approach to
applying relevant and sound evaluation throughout our organizations. Measures like
program-to-administration ratios and budget size provide very limited evidence of
effective development or relief.

A strategic approach to evaluation enables NGOs to communicate our overall
success. By focusing on the strategic level, agencies can provide evidence of progress
towards our missions and major program goals. Each NGO should develop an evaluation
system to meet its specific accountability, management and communication requirements.
A strategic approach enables each NGO to selectively allocate its scarce resources to
evaluate high priority activities at appropriate times.

Relevant and sound evaluation includes a broad and flexible variety of methods,
such as inquiries, reviews, surveys, routine monitoring, special studies, etc. The
evaluation of international development and relief today is a dynamic professional field;
we can establish meaningful criteria for success, collect reasonable and credible
qualitative and quantitative evidence, and report findings in useful formats.

Each InterAction member will develop its own approach to demonstrating
effectiveness, consistent with a common framework suitable for NGOs as outlined in
the following five actions. Policy influence and research work require different
methods than service delivery. Smaller agencies may have fewer resources to allocate
than larger institutions. Yet all share a common history and identity as NGOs in
international development and humanitarian assistance. The following common actions
will ensure that each InterAction member can demonstrate its own effectiveness.

See the background section for further explanation of member perspectives on demonstrating
effectiveness and the appendix for illustrative examples of each action. Each member will assess its current
capacity vis-à-vis these commitments and establish a reasonable timeline for fulfilling them.

Each InterAction member commits to:

1. Articulate its own criteria for success in bringing about meaningful changes in
people’s lives, in terms of its mission and major program goals.

2. Regularly evaluate its progress towards such success.

3. Mainstream relevant monitoring and evaluation in agency policy, systems and culture.

4. Allocate adequate financial and human resources for its strategic evaluation needs.

5. Collaborate with partners and stakeholders in developing mutually satisfying goals,
methods, and indicators for project and program activities.

Background to the InterAction Position Statement on
Demonstrating NGO Effectiveness
September 1, 2005

1. Preamble

1.1. As InterAction member NGOs, we affirm our commitment to bringing about
meaningful changes in people’s lives through our programs in international
development and relief. This statement addresses the importance of demonstrating
the effectiveness of our work to ourselves, our stakeholders and the broader public.

1.2. By voicing our common commitments to achieving and demonstrating
effectiveness, we expect to add significant value to the individual efforts of
member NGOs. The statement provides a common foundation of principles and
standards for the InterAction community. As a peer-driven statement, it sets
meaningful and achievable benchmarks. The statement has been developed by a
member-led Working Group. It has been informed by current member initiatives
and a keen awareness of our diversity. It will be useful for further developments
by spreading best practices, speeding up learning curves, and generating
confidence that one’s peers are also making these critical investments.

The statement will be useful in strengthening programs. Members can draw on
the shared principles and standards in designing effective programs, building
capacity and negotiating program funding, management and reporting
arrangements. It will also be valuable in providing a basis for collective
communication and advocacy on matters of effectiveness and evaluation affecting
the entire InterAction community.

1.3. We articulate this position statement in the context of several trends currently
affecting public attitudes towards NGOs in the US and overseas, including:

Growing societal concerns in the US and abroad about the transparency,
accountability, legitimacy and effectiveness of non-profits. As a popular
financial magazine recently reported, “Giving is up but public confidence in
charities is down” (Money, December 2004). Our public supporters are justified in expecting that their
money will be used both efficiently and effectively. Our reports should
provide evidence of progress and achievements towards our major program
goals, as well as of the remaining challenges in reaching them.

Widespread demands by donor agencies, Congress, the US administration and
private foundations for ‘results-based’ or ‘outcomes-oriented’ management of
development and relief programs. Although a variety of approaches and tools
are used by different agencies, all share a common underlying rationale.

While avoiding the limitations of the ‘blueprint approach’ of the 1960s and
1970s, most development and relief programs today state (a) program goals
and objectives, (b) the causal relationship(s) linking program activities with
the goals, and (c) criteria for success or measures of impact. Systematic
monitoring and evaluation is essential to keep programs on track, report
achievements and challenges to stakeholders, and foster learning within and
across projects and programs.

Vigorous debates about the most effective approaches to international
development and humanitarian assistance are shaping new policies and
programs globally. NGOs with credible program-based evidence have a
stronger base from which to influence development and relief policies. When
competing for scarce donor funds, NGOs must articulate the advantages of
their programs and demonstrate their record of achievements.

In adopting this statement, InterAction members join other international
development agencies, NGOs and resource agencies in voicing their perspectives
and raising common standards to achieve effectiveness and accountability
globally in international development and humanitarian relief.

2. InterAction Member Perspectives on Demonstrating Effectiveness

2.1. As effective NGOs in international development and humanitarian relief, we
bring about meaningful changes in people’s lives. Our activities should lead to
development or relief results aligned with the goals and aspirations of our mission
and vision statements. The communities or other constituencies we seek to serve
should value our work as effective for them.

2.2. In order to share our success with our stakeholders and the wider public, we must
provide evidence of our progress and achievements. As often acknowledged, it is
deceptively difficult to provide meaningful evidence of progress and impact in the
fields of international development and relief.

A strategic approach to applying relevant and sound evaluation throughout our
organizations can provide useful evidence of the ways in which our activities are
changing lives. Measures such as lean administrative-to-program ratios and large
total budget size are very limited indicators of effective development or relief. We
know of no common measures of effectiveness that could be applied by all
InterAction members.

Depending on the size and complexity of the agency, results may be directly or loosely aligned with the
mission, but agencies should be able to articulate a clear and compelling link between the changes we
foster and our missions.
See, for example, Smillie, I. (1995) The Alms Bazaar: Altruism Under Fire – Non-Profit Organizations
and International Development. IT Publications; and Carothers, T. (1999) Aiding Democracy Abroad:
The Learning Curve. Carnegie Endowment for International Peace.

2.3. By a strategic approach, we mean that each member agency should regularly
assess its achievement of its mission and major programmatic goals (often
articulated as a strategic framework or plan), in addition to the success of
individual projects. Clear strategic program priorities enable agencies to position
themselves to allocate scarce resources to evaluation needs.

High quality program strategies to achieve impacts include two components: (a) a
clear program theory of change and (b) meaningful evidence or measures of change.

Program theory of change. Each agency should be able to articulate clear causal
links between major program activities, impacts and mission. Its underlying
hypothesis about how its activities will lead to desired changes should be explicit.
An agency with a single program may use one theory to describe its approach;
others may find they use several theories of change for different major program
areas or sectors. Each program theory or hypothesis should clearly link to the mission.

Meaningful evidence of change. Agencies consider many factors when defining
program success and the kinds of evidence to measure and demonstrate it.
Important influences include prevailing norms within sectors or sub-field(s) of
development or relief, the particular goals of communities or other constituencies,
and donor-preferred indicators. Several sub-fields of development practice have
recently defined common standards, e.g. child sponsorship, humanitarian
assistance, microfinance, etc.

Members should provide at least two kinds of evidence, e.g. program changes
(outcomes or impacts) and participants’ (beneficiaries, clients, etc.) views of such changes.

At the strategic program level, evidence of progress and impacts may be captured
in broader measures than specific sectors, themes or individual project activities.
While InterAction is not prescribing a common set of measures of effectiveness
for use at the strategic level, practical categories of evidence for management and
communication purposes may include:

Traditional approaches to strategic planning for NGOs have failed to include sufficient attention to the
social change processes of sustainable development. Perhaps because the planning approaches have been
adapted from business and government sectors, they are designed around products and services rather than
engaging people in transformative social change. Programs are often cast as more about organizational
values and philosophy than designs for innovating and expanding successful approaches to social change.
While there is much of value in these approaches for nonprofit organization survival and growth, new
attention is needed on developing strategies that guide programs to achieve their desired impacts. See S.
Colby, et al, Zeroing in on Impact, Stanford Social Innovation Review. Fall 2004.
See InterAction’s Sponsorship Group Standards, SPHERE/ALNAP, SEEP, others.
These criteria are synthesized from several published practical studies of development program success.
See Uphoff, N., et al. (1998) Reasons for Success. Kumarian Press, CT; Brown, L.D. & Ashman, D. (1996)
Participation, Social Capital, and Intersectoral Problem Solving: African and Asian Cases. World
Development 24(9); and Atack, I. (1999) Four Criteria of Development NGO Legitimacy. World Development.
• Positive changes, e.g. type and scope of benefits, whether material,
human/social, organizational, civic, policy, governance, environmental, or
other. Evidence of participants’ satisfaction with such changes should be included.
• Reach, e.g. number of people, communities, organizations, regions, etc.; depth
of poverty or marginalization; number of partnerships and alliances.
• Efficiency of delivery, e.g. timeframe for implementation and results; costs.
• Resources for sustainability, e.g. commitment by participants to continue
activities or benefits, new resources, external stakeholder support, enabling
policy environment.
• Post-project gains, e.g. replication, expansion, policy change, etc.

2.4. Applying relevant and sound evaluation throughout each organization involves
two elements: (1) a broad and flexible concept of evaluation and (2)
mainstreaming evaluation comprehensively in agency policy, systems, culture and resources.

2.4.1. A broad and flexible concept of evaluation embraces a variety of approaches and
methods. The field has moved away from traditional approaches in which number-crunching
experts using expensive research methods judged an NGO against
externally imposed criteria and reported findings in tomes destined for the dusty shelf.
• Useful forms of evaluation may include strategic reviews, inquiries, surveys,
monitoring of implementation, participatory appraisals, etc.
• Findings are reported in accessible and user-friendly formats to NGOs and
their stakeholders.
• Methods selected to carry out evaluations are appropriate to the program.
They may mix qualitative and quantitative tools.

2.4.2. Mainstreaming monitoring and evaluation means integrating evaluation thinking
and practice throughout each agency as best suited to the individual agency.
Agencies must address their policies, systems, cultures and resources. Evaluation
is a direct cost of doing business and can be allocated to program budgets. When
well managed, evaluation becomes part of regular cycles of planning,
implementing, and monitoring at strategic and operational levels.

Institutional policy on evaluation for effectiveness. Policies should guide the use
of evaluation in the organization for governance, program management, financial
management, and other functions. Individual member policies will differ with
their unique needs, but each should address the following points:
• Why evaluation is being carried out by the organization, e.g. how it will
address program effectiveness, accountability, and communication requirements.

See the InterAction PVO Standards for guidance on accountability and communication.

• Since different methodologies and evidence may be needed for each of the
three objectives of accountability, program improvement, and learning,
agencies may need to set up evaluation systems that include separate
components to address each objective.
• Some agencies may wish their policies to specify the situations when
evaluations will be conducted by staff internally and when they will seek to
work with external evaluators. Choices will be influenced by organizational
philosophies of evaluation, stakeholder requirements, and resources.
• Key principles and practices defining the organization’s overall approach to evaluation.
• Incentives to foster an organizational culture that values regular reviews of
progress, using evidence to improve performance, and integrating new
practical knowledge into programs.

2.5. Allocate adequate financial and human resources for strategic evaluation needs.
Cost-effective evaluations balance quality, usefulness, and cost. They should
provide clear value to the organization and its stakeholders for the resources
expended. Insufficient resources and expertise for particular needs can do more
harm than good.

Once a policy and strategic approach have been outlined, agencies should plan
how to fund their evaluation needs, which might include reprioritizing current
uses of their resources and building evaluation into fundraising approaches.

2.6. Collaborate actively and flexibly with partners and stakeholders in developing
mutually satisfying methods, tools, and indicators for monitoring and evaluation.
Most NGOs work with others to achieve desired impacts, either in intermediary
roles with donors, partners, and communities or as members of alliances to
influence policy or strengthen civil society. Each NGO must define its terms of
success mindful of establishing shared definitions and sound working
relationships with its partners and stakeholders. When one or more agencies,
especially those closest to the funding end of the relationship, establish fixed
requirements, it becomes difficult, if not impossible, at the implementing end to
engage in the kind of mutual development and relief partnerships that lead to
successful performance.

Although most NGOs claim to implement a participatory approach, in practice it
is often very difficult to do so. InterAction members whose program theory of
change involves active participation by communities or other constituencies

Some InterAction members have expressed interest in establishing funding guidelines for evaluation,
e.g. a recommended proportion of agency or program budgets. No such guidelines presently exist and it
may not be productive to try to establish them. However, research has been carried out in some sectors to
suggest guidelines for projects, e.g. Title II Food Aid (Schmidt, I., 2003, Evaluation Costs and Associated
Factors in USAID PL 480 Title II-funded Projects; PDF file to be included in Working Group resource
materials).
should have clear policies and procedures to engage the active participation of
communities and partners in program design, planning, monitoring, evaluation
and learning. However, regardless of their change theories, all InterAction
member agencies should regularly assess the satisfaction of those they seek to serve.

3. Actions by InterAction Members and Committees

3.1. Member actions

3.1.1 Articulate criteria for success in bringing about meaningful changes in terms of its
mission and major program goals.

3.1.2 Regularly evaluate progress towards such success.

3.1.3 Mainstream monitoring and evaluation in policy, systems and culture.

3.1.4 Allocate adequate financial and human resources for strategic evaluation needs.

3.1.5 Collaborate with partners and stakeholders in developing mutually satisfying
goals, methods, and indicators for project and program activities.

3.2. InterAction Committee actions

3.2.1. The Standards Committee will review the PVO standards and revise them to be
coherent and consistent with the statement, especially the five actions.
Duplication and contradiction among the various standards should be avoided.
Any points on which there is disagreement should be sent to the full board for resolution.
As examples, specific sections which should be noted in the review include all of
Section 7 on Programs, Sections 6.1 and 6.2 on management and planning, and
‘Guidelines 1.’
• It would be easy to focus only on 7.1.9, which deals with evaluation. However,
the WG explicitly decided against attempting to specify program quality, as is
done in Sections 7.1.1 – 7.1.8, so discussion is needed to explain the need for
those statements. The WG felt that InterAction should avoid such kinds of
standards for members due to the diversity of member programs and in
recognition of the many sub-fields that have recently re-defined their own sets
of standards, e.g. microfinance, humanitarian relief and child sponsorship.
• Where the WG did discuss some of the current standards (e.g., participation),
consensus could not be reached.
• In Section 6, the Standards include sections on strategic planning and
management together with Human Resources. In comparison, this Position
Statement links strategic planning, management and evaluation – at strategic,
operational and project levels – in continuous cycles to produce quality programs.
• ‘Guidelines 1’ notes the importance of benchmarking member practices to the
Better Business Bureau. Its current standard is to have a board-level policy
to evaluate the organization’s progress every two years. This is in some ways
a higher standard than is outlined in the Position Statement.

3.2.2. The Working Group on Evaluation and Program Effectiveness will initiate a
series of practical learning activities to assist members in taking the action steps.
Possible examples include: workshops on major topics of demonstrating
effectiveness, compiling a list of resources, facilitating processes for self- or peer-
assessment of organizational capacity, etc. The activities will be facilitated to
encourage members to become a true community of practice, sharing experiences,
knowledge, and tools.

3.2.3. The Working Group will also coordinate communication and advocacy initiatives
with key donors. It will seek to create opportunities to open dialogue and
discussion as a community with various stakeholders in order to improve mutual
understanding and overcome constraints in the current environment, as noted below.

Current constraints in the expectations, formats, and procedures of some of our
funding partners include:
• A lack of fit between our programs and expected methods, indicators, and
time frames for evaluation. It can be difficult to demonstrate significant
quantitative impacts in the short term when program success involves
qualitative processes, short-term uncertainties and long-term timeframes to
see results.
• Demands to carry out extensive impact evaluations for program activities that
have already been shown to lead to desired results.
• Requirements to adopt externally defined indicators for program success that
may not suit the partners and communities who are implementing and
participating in the programs.
• Expectations to produce evidence of program progress and results without
adequate funding for necessary evaluation resources.
• Changes by the donor in program goals and/or priorities after the program
has already started.

The aim of the communication and advocacy strategy with key donors for
InterAction member agencies will be to improve mutual understanding and ability
to develop effective programs and select meaningful measures and methods for
monitoring and evaluation.

Appendix: Selected Examples to Illustrate Member Actions 1 – 5

Member Action 1: Articulate criteria for success in bringing about meaningful changes in
terms of its mission and major program goals.

Ex. 1 Our goal is to achieve a sustainable end to chronic hunger—food security—for
nearly 3 million families over the next five years.

Ex. 2 We developed Vision and Mission Statements that were agreed to by all members
of the international NGO consortium. These were then supported by common
Programming Principles and Project Standards (best practices for Design,
Monitoring and Evaluation). There is also a self-assessment tool (the Project
Standards Measurement Instrument) that is used for projects to confirm their
compliance with the Principles and Standards. We also have Strategic Plans with
specific, measurable goals.

Member Action 2: Regularly evaluate progress towards such success.

Ex. 1 A recently adopted evaluation policy states its objectives:
“The policy is designed to promote:
• Strategic and systematic collection, documentation and dissemination –
both internally and externally – of lessons learned and impacts of NGO
projects and programmes;
• Opportunities for stakeholders, especially the poor with whom NGO
works, to present their honest perceptions and assessments of NGO programmes;
• Opportunities for NGO staff to reflect upon and share experience and learning;
• Transparent sharing of evaluations with all stakeholders in forms and
formats amenable to their needs; and
• Examination of progress/set-backs in achieving strategic priorities to
achieve better organizational results.”

Member Action 3: Mainstream monitoring and evaluation in policy, systems and culture.

Ex. 1 We established an organizational planning and evaluation function at the
executive level to work closely with a chief operating officer and executive
leadership to do all of the above. Historically, planning, monitoring and
evaluation (PME) had been primarily focused on programs, specifically at the
project level. It follows that many in the agency have viewed PME more as
something that project staff do in the field, as opposed to something that VPs,
directors and their staff do to plan for and monitor the performance of their own
department's work.

In the past two years we have taken significant steps to begin changing our
organizational culture, placing greater emphasis and value on planning and
accountability throughout the organization and linking PME at the strategic, global
program and country project levels, including:

• Implemented tightly linked strategic and operational planning and budgeting
processes throughout the agency;
• Strengthened the quality of strategic and operational plans throughout the
agency (including areas like policy, communications, and finance);
• Used assessments of strategic performance and progress in making budgeting
decisions;
• Worked with program technical and monitoring and evaluation staff to ensure
results are linked at the project, global program and strategic levels (ongoing);
• Developed and implemented a system to track, monitor and report on the
organization’s progress against its strategic plan to all of its stakeholders.
Member Action 4: Allocate adequate financial and human resources for strategic
evaluation needs.

Ex. 1 Our research and evaluation specialist sits in the President’s Office. It is a full-time
permanent position. The associated costs are covered through program
budgets as much as possible. When such funds are not available, we cover the
costs through undesignated funds in order to meet our strategic evaluation needs.
Member Action 5: Collaborate with partners and stakeholders in developing mutually
satisfying goals, methods, and indicators for project and program activities.

Ex. 1 One agency revised its monitoring and evaluation system to enable it to be more
responsive and accountable to communities. One decision was not to require
reports to be submitted upwards through the organizational channels, in order to
facilitate community-based learning.

Ex. 2 One Planning and Learning Department has developed Planning and Evaluation
Frameworks that are based on the agency’s strategic aims and include menus of
sample questions and possible measures. The frameworks inform and guide
discussions between the agency and its partners; they are flexible to allow field
staff and partners to develop mutually satisfying plans.