sponses is challenging. It is also difficult to understand the downside of a project, since evaluations focus on those who have benefitted from certain programs and do not take into account those who do not benefit or are worse off.

Following the panel discussion, workshop participants broke up into small groups and were asked to answer the following questions:

1. What do we mean by impact evaluation?
2. When is it appropriate to do impact evaluation in a relief or humanitarian context?
3. Why do we undertake impact evaluation in a humanitarian response context?
4. How do we undertake impact evaluation in a humanitarian response?
5. Who undertakes impact evaluation in a humanitarian response context?

Small group discussion highlights
The humanitarian community should take advantage of the current excitement over monitoring and evaluation to encourage greater coordination. Since it is difficult to set and determine baselines in an emergency, one group should act as the lead evaluator and collect and coordinate data to reconstruct baselines for all groups to use.
There are pros and cons to NGO collaboration with academic institutions for impact assessments. More methods could be explored, and there would be a greater variety of skill sets and capacities. While it is important to learn from evaluations and share results, there may be some discomfort in publishing results. External collaboration can lead to more risk, and there is a concern about being penalized for producing different results than what was originally expected.
NGOs undertake impact evaluation because there is an increased need to understand what happened and why. External evaluators are useful and provide a certain level of expertise, but they are often unchecked and may have a certain bias. There also needs to be internal support within an organization and greater senior-level buy-in.
Impact evaluation is not easy. When comparing development work and humanitarian action, evaluation may be more important in humanitarian action. Better management is needed to measure how well we are doing.
There is an inherent tension in conducting an impact evaluation depending on whether it has been mandated by a donor or undertaken for organizational learning. Donors require one thing, which may contradict what an organization is trying to learn from the evaluation. Organizations are trying to create global standards and indicators and to conduct an impact evaluation when they pilot a new approach or enter a new sector.
Impact evaluation is just one type of evaluation. One aspect of impact evaluation is the management of beneficiaries' and donors' expectations.
Impact evaluation as a whole can lock evaluators into a theory of causal relationships, which may blind them to unintended consequences of the project. Most evaluations examine whether what the organization set out to do was achieved or not. As a result, the "do no harm" principle and the difficulty in reaching certain groups are ignored, because organizations are focused on demonstrating to donors that they did what they were asked to do.
Having the learning process driven by local actors is both key and ideal. Learning should not be imposed from the top or by donors, but rather should result from wrestling with key questions that will have a meaningful impact on the next phase of implementation. This may or may not be impact evaluation.
There was general agreement that it has been a struggle to improve and strengthen capacity on evaluation. One contributing factor mentioned was high staff turnover.