
Joint Humanitarian Evaluations: Opportunities and Challenges

Scott Green United Nations Office for the Co-ordination of Humanitarian Affairs (OCHA)

Drivers for Joint Humanitarian Evaluations

ALNAP research finds joint evaluations to be of higher quality than single-agency evaluations. Delivering As One Report:

Stresses the importance of closer partnerships between the United Nations and NGOs
Identifies the need for more accountability to both affected people and donors
Defines evaluation as a major driver for building system-wide coherence
Panel calls for the humanitarian system to become more adept at using joint evaluations, and for regular independent assessments of the performance of the humanitarian system in responding to humanitarian emergencies

Humanitarian system becoming more adept at commissioning and using joint evaluations; OCHA playing a key role in moving the process forward.

IASC Real-Time Evaluations

Approach initially piloted (2007-2009) in Pakistan, Mozambique and Myanmar.

2009 IASC Review endorses the approach for scale-up and regular implementation across the system
Approach now routinely implemented across all major humanitarian crises through an automatic trigger mechanism: applies where the affected population exceeds 1 million and the appeal exceeds US$50 million
OCHA serves as lead managing agent; IASC RTEs are jointly funded by member agencies
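The automatic trigger described above can be sketched as a simple predicate. This is an illustration only: the function name is hypothetical, the two thresholds are the ones stated on the slide, and in practice the IASC decision involves more than this mechanical check.

```python
# Hypothetical sketch of the IASC RTE automatic trigger described in this
# presentation. Thresholds are those stated on the slide; the actual
# decision process involves additional inter-agency judgement.

def rte_triggered(affected_population: int, appeal_usd: float) -> bool:
    """True when both slide-stated thresholds are exceeded."""
    return affected_population > 1_000_000 and appeal_usd > 50_000_000

print(rte_triggered(2_500_000, 660_000_000))  # both thresholds exceeded
print(rte_triggered(400_000, 20_000_000))     # neither threshold exceeded
```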

Key Characteristics

IASC RTEs are rapid assessments that use participatory methodologies, identify operational bottlenecks around co-ordination issues, and seek out the views of affected people.
Management Structure
Global-level Policy and Steering Group
Ad hoc Management Committees
In-country Advisory Groups

IASC RTEs: Key Characteristics

Teams deployed no later than 3 months after disaster

Content based on an agreed global IASC RTE assessment framework; processes implemented based on an agreed set of standard operating procedures
Real-time feedback used to support remedial action planning for immediate implementation; action plans developed with assigned roles and accountabilities for follow-up
Major IASC RTEs: Haiti (2010 & 2011), Pakistan (2010 & 2011), Kenya (2011)

Important lessons from IASC RTE experience

Create the necessary political space for joint evaluation work: it is important to have a clear sense of the purpose and use of joint evaluation work and to make it a routine part of the culture
Have an agreed assessment framework in place: there needs to be a strong concept of what is being used as a baseline for the assessment
Maintain a narrow scope to be most useful: focus on issues of broad cross-cutting concern rather than in-depth assessments by sector

Ensure good communication throughout the process.

Ongoing Challenges

Ensuring RTE results get used at both strategic and operational levels
Ensuring real-time deployments: significant demand for IASC RTEs to be deployed earlier for greater impact
How to assess joint humanitarian impact: the system is still unable to deliver multi-agency impact assessments. What is the value added, if any?

JHIE Consultations
To assess the potential for future inter-agency impact evaluations, OCHA led a series of consultations.
Objective: to define feasible approaches to undertaking joint impact assessments, with the possibility of some pilot evaluations (group interviews held on issues of scope, focus, purpose, use and methodologies)
Consultations were held with:
The affected population in 15 communities in Sudan, Bangladesh and Haiti (including women, men, children, disabled people and ethnic groups), mainly in focus groups
Local government and local NGOs in the same countries
National governments and international humanitarian actors in Haiti and Bangladesh
67 international humanitarian actors, donors and evaluators in seven meetings in New York, Rome, Geneva, London and Washington

Systematic attempt to consult with national governments and with disaster-affected people during the design phase of a major evaluative exercise


A new approach to evaluation design is warranted because of the scale of, and resources required for, JHIEs, but also to model a participatory approach to designing impact evaluations
Evidence shows it is necessary and useful to consult with the affected population during evaluation design

Results and Implications: Possibility of Pilot JHIEs

95 percent of respondents were supportive and 75 percent were strongly supportive
Differences concerned what JHIEs should look like, not their relevance
Main advantages: the potential to offer a more comprehensive picture of impact and to look at areas that cannot be covered by single-agency evaluations

Implications: There is a strong basis to proceed with some pilot JHIEs in the future

Results & Implications: Questions of Purpose, Focus and Use

Strongest support was for generalizable knowledge and accountability to affected populations
Main focus should be on changes in the quality of life of affected populations
Focus may need to differ between sudden-onset natural disasters and complex emergencies
Concern expressed over the limited use of current system-wide humanitarian evaluations at country level (e.g. IA RTEs and other system-wide evaluations such as TEC)
Strong support for national governments to become primary users

Results & Implications: Questions of Purpose, Focus and Use

Implications:
Focus on making JHIEs useful first at the national level by building knowledge that is useful for national and local governments in natural disaster situations; get better at engaging with governments on evaluation issues
Pilot JHIEs should not be used to support lesson learning for ongoing programme modification or to meet upward accountability needs
Consider a pilot JHIE with the primary purpose of supporting accountability to affected populations
Pilots should avoid mixing purposes, as most current evaluations do: one option would be to design different JHIE pilots around different purposes

Results & Implications: Meaning of impact in humanitarian context

Little interest in developing a common definition of humanitarian impact
Both short- and longer-term changes are relevant
Common indicators would be useful, based on a classification of phases: relief, transition and recovery

Implication: JHIE steering committees will need to decide on a case-by-case basis what humanitarian impact means. There should be an effort to focus on both short- and longer-term changes, using common indicators.

Results & Implications: Questions of evaluation methodology

Strong emphasis on the need for consultations with communities ahead of time; do not assume communities will always want to be involved
Evaluation design should build in learning for affected populations
Strong emphasis on follow-up and feedback mechanisms at the community level
No support for experimental design; only a minority supported the use of formal quantitative population surveys
A strong majority supported the use of qualitative methodologies; participatory impact assessment tools are likely to be preferable

Implications:
The current fly-in, fly-out model of humanitarian evaluation is not suitable
Longer time frames are needed for research, with multiple visits to communities
Goal-free evaluation might offer a better model than theory-based evaluation combined with quasi-experimental design (QED)

Results and Implications: linkages to broader M&E efforts and management arrangements

Consultations emphasized the need to develop a common set of indicators that could be tracked from a baseline
Consultations supported a two-tier management structure: a steering committee made up of key stakeholders, and a management group
In Haiti, communities expressed doubts that agencies could effectively work together to assess joint impact

[Diagram: humanitarian programme cycle, including Contingency Planning & Preparedness; Preliminary Scenario Definition; Flash Appeal & Multi-cluster Rapid Assessment; Recovery Assessment; Impact Monitoring & Learning]

Implications:
JHIEs should build a small set of quantitative and qualitative indicators in key intervention areas that can be tracked throughout the JHIE process
Develop JHIEs as part of a broader effort to enhance M&E and a common programme cycle

[Diagram continued: cluster-level monitoring and reporting; revision of Flash Appeal; coordinated sectoral assessments; performance monitoring & RTEs; RTE / monitoring; Humanitarian Dashboard updates]