The Case For and Against Experimental Methods in Impact Evaluation

By Dexter N. Pante

Introduction

For thousands of years, human beings have been making causal inferences about their environment. The discovery of fire by early humans suggests that even with their primitive minds, they understood that repeatedly striking stone flints causes them to spark (Cook & Campbell, 1979). It was through failures, coupled with reasoning and observation, that Homo sapiens produced the knowledge that enabled them to make sense of, and then survive in, their environment (Scriven, 2013b). From ancient times to the present, the dominant paradigm in the discussion of causation has been the scientific tradition, which argues that the experimental method is the best tool for discovering causal relations in the world (Beebee, Hitchcock, & Menzies, 2009; Losee, 2011). The successes of experiments in the natural sciences convinced social scientists that the same logic could be applied to arrive at causal inferences in the social sciences and in the evaluation discipline (Clarke, 2006; Donaldson, Christie, & Mark, 2009; King, Keohane, & Verba, 1994; Tacq, 2011).

In this paper, I argue that there are many languages for understanding causation, and experimentation is only one of them. Like any language, the language of experimentation is very useful when addressing the communication needs that exist within its context. I shall also argue that the strength of this language is also its limitation. Finally, I will discuss emerging communication needs in the area of impact evaluation which are better addressed by other languages, and show how these languages could expand the repertoire of evaluation methods for understanding causation.

The language of experimentation: Its logic, vocabulary and notation

The word 'language' in this paper is used in a very liberal sense: a language is any heuristic device used to understand the world. In the Philosophical Investigations, Wittgenstein (1958, p. 20) wrote, "For a large class of cases, the meaning of a word is its use in the language". For Wittgenstein, one can only understand the meaning of an act through its expression in the language (Martland, 1975). In the case of causation, an entire language has been developed to understand this concept. According to Cook and Campbell (1979), this is the language of experimentation, which has its own logic, vocabulary and notation.

The underlying logic of experimentation proceeds as follows: 1) take two identical groups and subject both to a 'pre-test'; 2) administer the intervention or treatment to one of the groups; 3) subject both groups to a 'post-test'; and 4) compute the difference between the observed changes of the two groups over time (Astbury, 2012; Bryman, 2012; Gertler, Martinez, Premand, Rawlings, & Vermeersch, 2011; Pawson & Tilley, 1997; Shadish, Cook, & Campbell, 2002). The resulting 'difference of differences' serves as an estimate of the effect of the treatment (Gertler et al., 2011).

According to Campbell and Stanley (1969), there are two types of experiments: true experiments and quasi-experiments. In brief, both follow the logic described above; the only difference is the presence of randomisation in the allocation of treatment in the former, which is absent in the latter (Cook & Campbell, 1979; Gertler et al., 2011). From these two types of experiments, different research design variations were developed (Campbell & Stanley, 1969; Cook & Campbell, 1979). Experimentation also introduced terms like 'internal' and 'external validity', 'bias', 'randomisation', 'counterfactual' and 'control', among others.
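To make the 'difference of differences' logic concrete, the short Python sketch below works through the four steps with invented numbers (the scores are purely illustrative and come from no study cited in this paper): both groups are pre-tested, one receives the treatment, both are post-tested, and the control group's change over time is subtracted from the treatment group's change.

    # Worked example of the experimental 'difference of differences' logic.
    # All scores are invented for illustration only.

    treatment_pre  = [52, 48, 50, 51]   # step 1: pre-test of the group to be treated
    control_pre    = [49, 51, 50, 50]   # step 1: pre-test of the comparison group
    treatment_post = [61, 58, 60, 63]   # step 3: post-test after the intervention (step 2)
    control_post   = [53, 54, 52, 55]   # step 3: post-test with no intervention

    def mean(scores):
        return sum(scores) / len(scores)

    # Step 4: compute each group's change over time...
    treatment_change = mean(treatment_post) - mean(treatment_pre)  # program effect plus everything else
    control_change   = mean(control_post) - mean(control_pre)      # everything else only

    # ...and the difference of those differences estimates the treatment effect,
    # because the control group's change stands in for what would have happened anyway.
    effect_estimate = treatment_change - control_change
    print(f"Estimated treatment effect: {effect_estimate:.2f}")  # 6.75 with these numbers

Randomisation is what licenses reading the control group's change as the counterfactual; in a quasi-experiment the same arithmetic applies, but the initial equivalence of the groups cannot be assumed.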
Experimental language also has its own notation to communicate the type of research design being implemented, including the use of 'O' for observation, 'X' for treatment, 'R' for randomisation, subscripts to refer to sequential order, and a dashed line to indicate the absence of randomisation (Campbell & Stanley, 1969; Cook & Campbell, 1979; Shadish et al., 2002). In this notation, for example, a randomised pre-test/post-test control group design is written as two rows, 'R O1 X O2' over 'R O1 O2'. For an exhaustive discussion of the different experimental and quasi-experimental designs, please refer to Shadish et al. (2002), Cook and Campbell (1979) and Campbell and Stanley (1969); due to space limitations, only the relevant terms are explained here.

The uses and constraints of the language of experimentation

Different languages have different purposes and uses; a language might be useful in one context but not in another. The use of the mother tongue for teaching is beneficial where the objective is to facilitate learning (UNESCO, 2012), but it is not useful in other contexts, such as international business and research, where English is the accepted medium (Smits, Huisman, & Kruijff, 2008). This principle applies to experimentation as well. Experimentation is the most appropriate tool when the requirements in impact evaluation are methodological rigour, accountability and knowledge accumulation. In this section, I will enumerate these three uses of experimentation and discuss their constraints.

First, many experts agree that experimentation is the most rigorous evaluation design for answering the question, 'Did the program have an effect on the beneficiaries?' (Bamberger, Rugh, & Mabry, 2012; Mohr, 1999; St. Pierre, 2004; USAID, n.d.; White & Phillips, 2012). In this language, rigour is usually understood as the quality of a method that provides warranted causal inference and an estimate of the program effect while rendering implausible the alternative explanations which may explain the occurrence of the effect (Gertler et al., 2011; Stern et al., 2012). In sum, rigour is synonymous with internal validity, which refers to inferences that the observed covariation between variables is causal (Campbell & Stanley, 1969; Shadish et al., 2002). Gertler et al. (2011) argued that the purpose of any rigorous impact evaluation is basically to answer whether the changes in the outcomes were directly caused by the program.

The notion that causation is proved through evidence of association and the elimination of competing explanations has its roots in David Hume's regularity theory in the eighteenth century. According to Hume (1777, p. 912), "A cause is an object followed by another, and where all the objects similar to the first are followed by objects similar to the second. Or in other words, where, if the first object had not been, the second never had existed". It is important to note that Hume's theory involves two concepts: constant conjunction and the counterfactual. With the first concept, causation is understood as a mere constant conjunction of events or objects (Cook & Campbell, 1979; Losee, 2011). Causal necessity is not observable in the regularity theory; the necessary connection in any perceived causal relationship is a mental construct formed through habit, custom or experience of such conjunction (Garrett, 2009; Losee, 2011). In simple terms, causation is understood as a high correlation (Cook & Campbell, 1979), and the problem with this is that correlation is not causation (Shadish et al., 2002). With the second concept, there must be a necessary connection between two events such that without the first, the second event would not occur (Losee, 2011); the counterfactual is the 'what if' situation in which no intervention was implemented (Cummings, 2006). This is how counterfactual reasoning originated.

Together, constant conjunction and counterfactual dependence permit stronger causal inferences because, in combination, they control for other possibilities (Rihoux & Ragin, 2009). These two concepts were the basis of John Stuart Mill's methods of agreement and difference, which in turn became the foundation of the logic of experimentation (Copi & Cohen, 1990; Losee, 2011).

The limitation of this causal language lies in its ontological view of causation. There are philosophers, and even some psychologists, who disagree with Hume's causal regularity and with the notion that causation is inferred and not observed (Anscombe, 1981; Danks, 2009; Mumford, 2009; Scriven, 2009). Harre and Madden, for example, argued that reality is not only composed of events or objects but also of 'powerful particulars' (Losee, 2011). Humans are an example of powerful particulars: we are not just an amalgamation of atomic particles; we are also endowed with intrinsic powers, such as intention, motivation and self-reflection. This view, known as generative theory or causal power, sees causation as acting internally as well as externally; internal causation describes the transformative potential of phenomena (Pawson & Tilley, 1997). From this view arose the realist evaluation language, which provides another way of understanding causation. Unlike Hume, the realists require an understanding of the internal mechanisms producing the effect, and the realist formula for explaining the outcome of an act consists of context and mechanism (Pawson & Tilley, 1997). It must be pointed out, however, that this dual aspect of causation is also present in the language of experimentation: the concepts of molar and micromediational causation correspond to the external and internal aspects in the realist language, respectively (Cook & Campbell, 1979).

Secondly, experiments are useful when people are discussing accountability because they allow identification of the cause and of the magnitude of its effects. According to Owen (2006), accountability means being answerable for expenditures, decisions and one's actions. This definition of accountability is aligned with economic rationalism, the view that the objective of government or programmatic intervention should be efficiency, that is to say, getting more with less. Being able to pinpoint what contributed to the effect is a very strong incentive for managers because of the expectation to demonstrate performance. This is the reason White (2010) argued that one of the advantages of experiments is that they allow cost-effectiveness or cost-benefit analysis when the evaluation is able to report an estimate of the effect. This assertion is supported by the fact that those pushing for experiments in impact evaluation tend to be those working for or with policymakers or development agencies that make decisions about resource allocation and require evidence of the returns on investments.

However, accountability is not the sole purpose of impact evaluation, and when there is too much accountability, organizational learning suffers. It is also important to unpack the causal processes within the program so that implementers learn the various factors affecting the program's success or failure (Cronbach et al., 1980; Shadish, Cook, & Leviton, 1991). In their report on the impact of the conditional cash transfer program in the Philippines, for example, Chaudhury, Friedman, and Onishi (2013) concluded that while the evaluand had an impact on the identified outcomes of interest, an in-depth qualitative study was needed to explain the causal factors behind the varied responses of the treatment group. This need is something beyond what the language of experimentation can provide, and it reflects a growing consensus among evaluation experts that causal explanation is better handled by alternative languages of causation, such as theory-based and realist evaluations (Astbury & Leeuw, 2010; White, 2010).

Another flaw in the argument for accountability is the premise, articulated by economists, that government strives for economic efficiency and exactitude in decision-making (Cronbach et al., 1980). In actual practice, optimization is not always the goal when individuals or organizations make decisions; decisions are sometimes made without regard to achieving the optimal scenario (Cronbach et al., 1980). According to Simon (1990), a lot of decisions are made under a variety of constraints, including political, data and time constraints. In the context of decision-making in developing countries, furthermore, it sometimes suffices that a decision is well-argued, popularly supported and 'good enough'. This form of decision-making is described as bounded rationality (Simon, 1985, 1990).

Finally, another benefit claimed for experiments is that they enable policymakers to separate good programs from poor ones on the basis of proven scientific knowledge. Often referred to as evolutionary epistemology, this thinking of confirming and falsifying knowledge is entrenched in the language of experimentation and in science (Campbell, 1971; Cook & Campbell, 1979). Evolutionary epistemology offers the idealized view that human progress can be achieved by storing all knowledge that has been proven by experiments in data banks (Campbell, 1971). This is the rationale behind the establishment of knowledge database systems of evidence from RCTs and quasi-experiments, such as the International Initiative for Impact Evaluation (3ie) and the Campbell and Cochrane Collaborations (Davies, 2012; Donaldson, 2009).

The criticism against evolutionary epistemology is its privileging of certain knowledge claims at the expense of others, such as oral testimonies, group consensus and the opinions of elders. In multicultural settings, the idea of sidelining nonexperimental evidence might disempower beneficiaries who belong to different contexts and cultures (Wehipeihana, 2013). Another objection to this privileging is its normative implications. This was the issue in 2003, when the Institute of Education Sciences released a policy that only RCTs and quasi-experimental designs would be funded (Cook, Scriven, Coryn, & Evergreen, 2010). According to Davidson (2006), this policy would actually produce the opposite of knowledge accumulation: if no formative evaluation is conducted, and if evaluands which cannot be evaluated using experiments are not evaluated at all, then there would be a very serious problem with knowledge creation in society. Furthermore, this epistemological model might conflict with democratic values, which posit that it is more important to build consensus than to 'eliminate' views that are unproven (Cronbach et al., 1980). Finally, even if the program impact is measured precisely, there is still no advancement of knowledge if the evaluation is not relevant and is not used (Patton, 2004, 2008).

In summary, all the arguments forwarded thus far support the thesis that the benefits of the language of experimentation fit only within the boundaries and communication needs of this language; these benefits come with their own constraints. In the next section, I will provide some initial ideas on future trends in impact evaluation which have been shaped by the debates on experimentation.

Future Directions of Impact Evaluation

The debates on causation have had the effect of clarifying the important issues in impact evaluation. In the past, impact evaluation was narrowly defined as determining the causal relationship between a program and an outcome of interest (Gertler et al., 2011; White, 2010). Now, however, more questions are being raised regarding causation and its effects than before: it is no longer sufficient that a precise answer to the causal question is achieved without addressing the scope and relevance of the causal relationship (Cronbach, 1982; Cronbach et al., 1980). Impact evaluation must be able to provide answers to the questions, 'What works?' and 'When it does work, how does it work? Why does it work? And does it work equitably?' (Bamberger & Segone, 2011). On a positive note, the alternative views provide a more comprehensive and richer understanding of causation, including its effects, trajectories and mechanisms. In this section, I will discuss three alternative perspectives on causation and describe how these are gaining acceptability among evaluation theorists and practitioners.

The first alternative language, which has already been discussed thoroughly, is the realists' view of causation. While experiments aim to answer the question of causal attribution and effect, the realist approach seeks to unpack the causal mechanism in order to explain the complexity and dynamics occurring within the cause and its context. The main benefit of this language is that it explains the causal processes and connections that occur inside the 'black box' (Astbury & Leeuw, 2010; Pawson & Tilley, 1997). This causal explanation facilitates learning for program implementers and even for beneficiaries because it demonstrates the interaction between the pre-existing context in which the program works and the mechanisms which might be deeply entrenched in individual, organisational and social relationships. In impact evaluation, realist evaluation provides a basis for identifying the assumptions in logic models, for fidelity assessment, and for the use of case studies.

The second is a common-sense view of causation, which accepts as a basic proposition the notion that causal relationships are perceivable. Scriven adopts a somewhat pragmatic stance in his arguments: for him, causation is fundamental to our thinking and our language because of its survival value (Scriven, 2013a). The evaluative judgment gained by early humans, that hitting the stone flints together is the cause that ignited the fire, was 'beyond reasonable doubt' for them because of that knowledge's functional value. On this view, causation could be established even in a single case. There are two arguments in support of this, one psychological and the other philosophical. The psychological argument has its roots in the work of Albert Michotte on causal perception (Danks, 2009; Wagemans, van Lier, & Scholl, 2006). Studies show that babies perceive causal relations through the launching effect (Danks, 2009; Leslie, 1987), and the findings support the hypothesis that humans are causal cognizers even without habits, customs and experiences; evaluative judgment of causality is deeply intertwined with observation (Danks, 2009; Scriven, 2009). The philosophical argument rejects the notion that cause is related to necessity or universality (Anscombe, 1981). The rejection of necessity in causal relationships implies that counterfactual dependence, from which the regularity theory derives its explanatory power, is also unnecessary. The perceptibility of causation lends credence to qualitative and mixed-method studies, which rely on observation to establish causation. The linguistic meaning of cause in ordinary language further illustrates the diversity of this concept. According to Anscombe (1981), when the word 'cause' is used, it is sometimes used as a 'container' for other sorts of causal meaning, such as igniting, running or eating; it would be difficult to understand cause if there were no other causally related concepts in human language.

Finally, the last view is causal pluralism, which holds that causation is not a single monolithic concept that explains the connection between things in reality (Godfrey-Smith, 2009; Losee, 2011). The word 'cause', for example, is used in the two following senses: effects of causes and causes of effects. These correspond, respectively, to the dichotomy of causal description and causal explanation. What causal pluralism tells us is that diversity is a characteristic of causation and, furthermore, that the best way to understand it is to employ diverse heuristic devices. This goes to show that there is a need for other languages to understand causation holistically.

As a concluding remark, the direction of this analysis leans towards a critical pluralist stance in impact evaluation. By critical, I mean constantly challenging our understanding of causal concepts. A pluralist stance, furthermore, means keeping an open mind about the diversity of tools available to answer various causal questions, and about the need for dialogue between different methods and languages. The hope is that by being both critical and pluralist, a more holistic understanding of the world could be approximated.

References

Anscombe, G. E. M. (1981). Metaphysics and the Philosophy of Mind. Oxford: Basil Blackwell.

Astbury, B. (2012). Using Theory in Criminal Justice Evaluation. In E. Bowen & S. Brown (Eds.), Advances in Program Evaluation (pp. 3-27). Emerald Group Publishing Limited.

Astbury, B., & Leeuw, F. L. (2010). Unpacking Black Boxes: Mechanisms and Theory Building in Evaluation. American Journal of Evaluation, 31(3), 363-381.

Bamberger, M., Rugh, J., & Mabry, L. (2012). Real World Evaluation: Working Under Budget, Time, Data and Political Constraints (2nd ed.). Los Angeles: Sage Publications.

Bamberger, M., & Segone, M. (2011). How to Design and Manage Equity-Focused Evaluations. New York: UNICEF. Retrieved from http://mymande.org/sites/default/files/EWP5_Equity_focused_evaluations.pdf

Beebee, H., Hitchcock, C., & Menzies, P. (Eds.). (2009). The Oxford Handbook of Causation. Oxford: Oxford University Press.

Bryman, A. (2012). Social Research Methods (4th ed.). Oxford: Oxford University Press.

Campbell, D. T. (1971). Reforms as Experiments. Urban Affairs Review, 7(133).

Campbell, D. T., & Stanley, J. C. (1969). Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally and Co.

Chaudhury, N., Friedman, J., & Onishi, J. (2013). Philippines Conditional Cash Transfer Program: Impact Evaluation 2012. Washington: World Bank.

Clarke, A. (2006). Evidence-Based Evaluation in Different Professional Domains: Similarities, Differences and Challenges. In I. Shaw, J. Greene & M. Mark (Eds.), The Sage Handbook of Evaluation. London: Sage Publications.

Cook, T. D., & Campbell, D. T. (1979). Quasi-Experimentation: Design and Analysis Issues for Field Settings. Boston: Houghton Mifflin Co.

Cook, T. D., Scriven, M., Coryn, C. L. S., & Evergreen, S. D. H. (2010). Contemporary Thinking About Causation in Evaluation: A Dialogue with Tom Cook and Michael Scriven. American Journal of Evaluation, 31(1), 105-117.

Copi, I., & Cohen, C. (1990). Introduction to Logic (8th ed.). New York: Macmillan Publishing Co.

Cronbach, L. J. (1982). Designing Evaluations of Educational and Social Programs. San Francisco: Jossey-Bass Publishers.

Cronbach, L. J., Ambron, S. R., Dornbusch, S. M., Hess, R. D., Hornik, R. C., Phillips, D. C., Walker, D. F., & Weiner, S. S. (1980). Toward Reform of Program Evaluation. San Francisco: Jossey-Bass Publishers.

Cummings, R. (2006). What If: The Counterfactual in Program Evaluation. Evaluation Journal of Australasia, 6(2), 6-15.

Danks, D. (2009). The Psychology of Causal Perception and Reasoning. In H. Beebee, C. Hitchcock & P. Menzies (Eds.), The Oxford Handbook of Causation. Oxford: Oxford University Press.

Davidson, E. J. (2006). Editorial. Journal of MultiDisciplinary Evaluation, 3(6).

Davies, P. (2012). 3ie and the Funding of Impact Evaluation: A Discussion Paper for 3ie's Members. Canberra: AusAID.

Donaldson, S. I. (2009). In Search of the Blueprint for an Evidence-Based Global Society. In S. I. Donaldson, C. A. Christie & M. M. Mark (Eds.), What Counts as Credible Evidence in Applied Research and Evaluation Practice? California: Sage Publications Inc.

Donaldson, S. I., Christie, C. A., & Mark, M. M. (Eds.). (2009). What Counts as Credible Evidence in Applied Research and Evaluation Practice? California: Sage Publications Inc.

Ferraro, P. (2009). Counterfactual Thinking and Impact Evaluation in Environmental Policy. In M. Birnbaum & P. Mickwitz (Eds.), Environmental Program and Policy Evaluation: Addressing Methodological Challenges. New Directions for Evaluation, 122, 75-84.

Garrett, D. (2009). Hume. In H. Beebee, C. Hitchcock & P. Menzies (Eds.), The Oxford Handbook of Causation. Oxford: Oxford University Press.

Gertler, P., Martinez, S., Premand, P., Rawlings, L., & Vermeersch, C. (2011). Impact Evaluation in Practice. Washington: World Bank.

Godfrey-Smith, P. (2009). Causal Pluralism. In H. Beebee, C. Hitchcock & P. Menzies (Eds.), The Oxford Handbook of Causation. Oxford: Oxford University Press.

Hume, D. (1777). An Enquiry Concerning Human Understanding (2nd ed.). Retrieved from www.gutenberg.net

King, G., Keohane, R., & Verba, S. (1994). Designing Social Inquiry. New Jersey: Princeton University Press.

Leslie, A. M., & Keeble, S. (1987). Do Six-Month-Old Infants Perceive Causality? Cognition, 25, 265-288.

Losee, J. (2011). Theories of Causality: From Antiquity to the Present. New Brunswick, New Jersey: Transaction Publishers.

Martland, T. R. (1975). On "The Limits of My Language Mean the Limits of My World". The Review of Metaphysics, 29(1).

Mohr, L. B. (1999). The Qualitative Method of Impact Analysis. American Journal of Evaluation, 20(1), 69-84.

Mumford, S. (2009). Causal Powers and Capacities. In H. Beebee, C. Hitchcock & P. Menzies (Eds.), The Oxford Handbook of Causation. Oxford: Oxford University Press.

Owen, J. M. (2006). Program Evaluation: Forms and Approaches (3rd ed.). St Leonards, Australia: Allen & Unwin.

Patton, M. Q. (2004). The Roots of Utilization-Focused Evaluation. In M. Alkin (Ed.), Evaluation Roots (pp. 276-292). Thousand Oaks: Sage Publications.

Patton, M. Q. (2008). Utilization-Focused Evaluation (4th ed.). Thousand Oaks: Sage Publications.

Pawson, R., & Tilley, N. (1997). Realistic Evaluation. London: Sage Publications.

Rihoux, B., & Ragin, C. C. (Eds.). (2009). Configurational Comparative Methods. Thousand Oaks: Sage Publications.

Scriven, M. (2009). Demythologizing Causation and Evidence. In S. I. Donaldson, C. A. Christie & M. M. Mark (Eds.), What Counts as Credible Evidence in Applied Research and Evaluation Practice? California: Sage Publications Inc.

Scriven, M. (2013a). The Foundation and Future of Evaluation. In S. I. Donaldson (Ed.), The Future of Evaluation in Society: A Tribute to Michael Scriven. Charlotte: Information Age Publishing.

Scriven, M. (2013b). The Past, Present and Future of Evaluation. Paper presented at the Australasian Evaluation Society 2013 International Conference, Brisbane, Australia.

Scruton, R. (1994). Modern Philosophy. New York: Penguin Press.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin Co.

Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of Program Evaluation: Theories of Practice. Newbury Park, CA: Sage Publications.

Simon, H. A. (1985). Human Nature in Politics: The Dialogue of Psychology with Political Science. The American Political Science Review, 79(2), 293-304.

Simon, H. A. (1990). Invariants of Human Behavior. Annual Review of Psychology, 41, 1-19.

Smits, J., Huisman, J., & Kruijff, K. (2008). Home Language and Education in the Developing World. Bangkok: UNESCO. Retrieved from http://unesdoc.unesco.org/images/0017/001787/178702e.pdf

St. Pierre, R. G. (2004). Using Randomized Experiments. In J. S. Wholey, H. P. Hatry & K. E. Newcomer (Eds.), Handbook of Practical Program Evaluation (2nd ed.). San Francisco: Jossey-Bass.

Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the Range of Designs and Methods for Impact Evaluation. Retrieved from https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/67427/design-method-impact-eval.pdf

Tacq, J. (2011). Causality in Qualitative and Quantitative Research. Quality & Quantity, 45(2), 263-291.

UNESCO. (2012). Language Matters for the Millennium Development Goals. Bangkok, Thailand: UNESCO.

USAID. (n.d.). Performance Monitoring and Evaluation Tips: Rigorous Impact Evaluation. Retrieved March 18, 2013, from http://pdf.usaid.gov/pdf_docs/pnadw119.pdf

Wagemans, J., van Lier, R., & Scholl, B. J. (2006). Introduction to Michotte's Heritage in Perception and Cognition Research. Acta Psychologica, 123(1-2), 1-19.

Wehipeihana, N. (2013). Indigenous Evaluation: A Metaphor for Social Justice, Inclusion and Equity. Paper presented at the Australasian Evaluation Society 2013 International Conference, Brisbane, Australia.

White, H. (2010). A Contribution to Current Debates in Impact Evaluation. Evaluation, 16(2), 153-164.

White, H., & Phillips, D. (2012). Addressing Attribution of Cause and Effect in Small n Impact Evaluations: Towards an Integrated Framework. 3ie Working Paper Series. Retrieved from http://www.3ieimpact.org/en/evaluation/working-papers/working-paper15/

Wittgenstein, L. (1958). Philosophical Investigations. Oxford: Basil Blackwell Ltd.
