Appendix 2: Framework for analysing an evaluation and assessing evidence

Title of the evaluation

Type of evaluation
Design: type 1 (descriptive); type 2 (audit); type 3 (outcome); type 4 (comparative experimental); type 5 (randomized controlled experimental); type 6a or 6b (intervention to service provider: (a) impact on providers, (b) impact on patients).
Intervention: treatment; service; health promotion; intervention to provider; policy; reform.
Perspective: experimental; economic; developmental; managerial.

1 Target of the intervention
Who or what does the intervention which is evaluated aim to change? (e.g. patients, populations, providers)

2 Description of the intervention
Are the elements of the intervention precisely described, and the boundaries of the intervention defined? (What is in the box, and what is not evaluated?)

3 Users
Who was the evaluation done for, or who might be users of the evaluation?

4 Value criteria and perspective
Are explicit criteria used to judge the value of the intervention, or are these implied? Which value criteria are used, and from whose perspective?

5 Evaluation question(s) or hypotheses for testing

6 Type of evaluation design (draw it)
Note in the diagram the data gathering or measures of the target of the intervention:
• outcome measures or data gathering, and when and how often these were made (timing and frequency);
• any 'before' measures or data gathering, and when and how often these were made;
• how many people were studied undergoing the intervention, and how many people were 'drop-outs';
• what else was measured or data gathered about (e.g. about health workers, or about the intervention)?

7 Data sources and collection methods – details
Would others using the same methods get the same results? Could there be errors in the data (systematic bias) produced by the data-gathering method or design? What is the general reliability of this method, and what precautions were used in the evaluation to ensure and maximize reliability (e.g. interviewer training)? Where do the data come from (data sources), and are the data or measures valid for the concepts they are supposed to represent?

8 Validity of conclusions
Did the evaluation prove that the intervention did or did not make any difference to the targets of the intervention, if that was one of the purposes of the evaluation? Is there sufficient evidence to support the conclusions?

9 Practical conclusions and actions resulting from the evaluation

10 Strengths and weaknesses for the purpose
Is it clear or implied who is the actual or intended user of the evaluation? Is it clear which decisions and actions the evaluation is intended to inform?
Strengths and weaknesses of the design for the purpose: was there bias in the sample, in selection before, and in the population measured after (i.e. drop-outs)? Would the study have detected possibly important unintended effects? What changes might the evaluation itself have produced which reduce the validity or reliability of the findings?
Strengths and weaknesses of the data-gathering methods/measures for the purpose: were all the limitations described? Were the conclusions justified by the results? Could some people be misled by the report? Would the conclusions be credible to the audience for the evaluation (users)? Were there any unethical aspects? Could the purpose have been achieved with fewer resources or in a shorter time?
What can we learn from this example about how to do and how not to do an evaluation?

11 Other comments
