A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research
Author(s): James Mahoney and Gary Goertz
Source: Political Analysis, Vol. 14, No. 3, Special Issue on Causal Complexity and Qualitative Methods (Summer 2006), pp. 227-249
Published by: Oxford University Press on behalf of the Society for Political Methodology
Stable URL: http://www.jstor.org/stable/25791851
Political Analysis (2006) 14:227-249
doi:10.1093/pan/mpj017
A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research

James Mahoney (corresponding author)
Departments of Political Science and Sociology, Northwestern University, Evanston, IL 60208-1006

Gary Goertz
Department of Political Science, University of Arizona, Tucson, AZ 85721
The quantitative and qualitative research traditions can be thought of as distinct cultures marked by different values, beliefs, and norms. In this essay, we adopt this metaphor toward the end of contrasting these research traditions across 10 areas: (1) approaches to explanation, (2) conceptions of causation, (3) multivariate explanations, (4) equifinality, (5) scope and causal generalization, (6) case selection, (7) weighting observations, (8) substantively important cases, (9) lack of fit, and (10) concepts and measurement. We suggest that an appreciation of the alternative assumptions and goals of the traditions can help scholars avoid misunderstandings and contribute to more productive "cross-cultural" communication in political science.
Introduction

Comparisons of the quantitative and qualitative research traditions sometimes call to mind religious metaphors. In his commentary for this issue, for example, Beck (2006) likens the traditions to the worship of alternative gods. Schrodt (2006), inspired by Brady's (2004b, 53) prior casting of the controversy in terms of theology versus homiletics, is more explicit: "while this debate is not in any sense about religion, its dynamics are best understood as though it were about religion. We have always known that, it just needed to be said."

We prefer to think of the two traditions as alternative cultures. Each has its own values, beliefs, and norms. Each is sometimes privately suspicious or skeptical of the other though usually more publicly polite. Communication across traditions tends to be difficult and marked by misunderstanding. When members of one tradition offer their insights to
Authors' note: Both authors contributed equally to this article. Mahoney's work on this project is supported by the National Science Foundation (Grant No. 0093754). We would like to thank Carles Boix, Bear Braumoeller, David Collier, Scott Desposato, Christopher Haid, Simon Hug, Benjamin I. Page, Charles C. Ragin, Dan Slater, David Waldner, Lisa Wedeen, and the anonymous reviewers for comments on earlier drafts. We also thank students at the 2006 Arizona State Qualitative Methods Training Institute for feedback on a presentation of much of this material. © The Author 2006. Published by Oxford University Press on behalf of the Society for Political Methodology. All rights reserved.
members of the other community, the advice is likely to be viewed (rightly or wrongly) as unhelpful and even belittling. As evidence, consider the reception of Ragin's The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies (1987) and King, Keohane, and Verba's Designing Social Inquiry: Scientific Inference in Qualitative Research (1994). Although Ragin's book was intended to combine qualitative and quantitative methods, it was written from the perspective of a qualitative researcher, and it became a classic in the field of qualitative methodology. Statistical methodologists, however, largely ignored Ragin's ideas, and when they did engage them, their tone was often quite dismissive (e.g., Lieberson 1991, 1994; Goldthorpe 1997). For its part, the famous work of King, Keohane, and Verba was explicitly about qualitative research, but it assumed that quantitative researchers have the best tools for making scientific inferences, and hence that qualitative researchers should attempt to emulate these tools to the degree possible. Qualitative methodologists certainly did not ignore the book. They reacted by scrutinizing it in great detail, poring over each of its claims and sharply criticizing many of its conclusions (e.g., see the essays in Brady and Collier 2004).

In this essay, we tell a tale of these two cultures. We do so from the perspective of qualitative researchers who seek to communicate with quantitative researchers. Our goal is to contrast the assumptions and practices of the two traditions toward the end of enhancing cross-tradition communication. Like Brady and Collier (2004), we believe that qualitative and quantitative scholars share the overarching goal of producing valid descriptive and causal inferences. However, we also believe that these scholars pursue different specific research goals, which in turn produce different norms about research practices. Hence, we emphasize here to a greater degree than Brady and Collier the distinctiveness in basic goals and practices of the two traditions. Having said this, we wish to stress that our intention is not to criticize either quantitative or qualitative researchers. Instead, we argue throughout that the dominant practices of both traditions make good sense given their respective goals.

We adopt a criterial approach (Gerring 2001) to thinking about differences between the two traditions and contrast them across 10 areas: (1) approaches to explanation, (2) conceptions of causation, (3) multivariate explanations, (4) equifinality, (5) scope and causal generalization, (6) case selection practices, (7) weighting observations, (8) substantively important cases, (9) lack of fit, and (10) concepts and measurement. Table 1 provides a guide to the discussion that follows. Our comparison of the two traditions focuses centrally on issues related to causal analysis. There are certainly other differences across the two traditions (Footnote 1: Other potential differences concern level of measurement, type of probabilistic approach, importance of path dependence, and rationales for being concerned about omitted variables. Quantitative and qualitative methodologies may also have affinities for distinct theoretical orientations, a topic which we do not explore here.), but our experience has been that these areas are especially important in generating misunderstandings and miscommunication.

Before proceeding, we should note that our discussion presents a stylized view of both qualitative and quantitative research. Our characterizations are intended to describe dominant norms and practices. It is easy to find examples of research in one tradition in which the analyst carries out practices that characterize the other tradition, and, as with all cultures, there will be some individuals who have fairly strong attachments to both traditions. However, we suggest that most researchers in political science will locate themselves predominantly in one column of Table 1. We should also be upfront that we do not consider qualitative research cultures in political science, such as descriptive case studies, critical and postmodern theories, and some strands of interpretive analysis, in which causal analysis is not the leading goal. Consideration of these research orientations would necessitate new columns and new criteria in Table 1.
Table 1  Contrasting quantitative and qualitative research

1. Approaches to explanation
   Quantitative: "Effects-of-causes" approach; estimate average effects of causes across all cases analyzed.
   Qualitative: "Causes-of-effects" approach; explain outcomes in individual cases.
2. Conceptions of causation
   Quantitative: Correlational causes; probability/statistical theory.
   Qualitative: Necessary and sufficient causes; mathematical logic; INUS causation.
3. Multivariate explanations
   Quantitative: Additive; individual effects of independent variables, occasional interaction terms.
   Qualitative: Combinations of jointly sufficient causes; occasional individual effects.
4. Equifinality
   Quantitative: Concept absent; implicitly a large number of causal paths.
   Qualitative: Core concept; few causal paths.
5. Scope and causal generalization
   Quantitative: Adopt a broad scope to maximize statistical leverage and generalization.
   Qualitative: Adopt a narrow scope to avoid causal heterogeneity.
6. Case selection practices
   Quantitative: Random selection (ideally) on independent variables.
   Qualitative: Oriented toward cases with positive outcomes on the dependent variable.
7. Weighting observations
   Quantitative: All observations are a priori equally important.
   Qualitative: Theory evaluation sensitive to individual observations; one misfit can have an important impact.
8. Substantively important cases
   Quantitative: Substantively important cases are not given special attention.
   Qualitative: Substantively important cases must be explained.
9. Lack of fit
   Quantitative: Nonsystematic causal factors are treated as error.
   Qualitative: Nonconforming cases are closely examined and explained.
10. Concepts and measurement
   Quantitative: Indicators and measurement are the center of attention; measurement error is modeled and/or new indicators identified.
   Qualitative: Concepts are the center of attention; measurement error leads to concept revision.
1 Approaches to Explanation

A core goal of qualitative research is the explanation of outcomes in individual cases. For example, qualitative researchers attempt to identify the causes of World War I, exceptional growth in East Asia, the end of the Cold War, the creation of especially generous welfare states, and the rise of neopopulist regimes. A central purpose of research is to identify the causes of these specific outcomes for each and every case that falls within the scope of the theory under investigation. For instance, Skocpol's (1979) famous theory is intended to explain adequately all cases of social revolution among agrarian-bureaucratic states that were not formally colonized, a universe which corresponds to France, Russia, and China. Good theories must ideally explain the outcome in all the cases within the population, and the assessment of the theory, in turn, is based primarily on how well it succeeds at this research objective.

By starting with cases and their outcomes and then moving backward toward the causes, qualitative analysts adopt a "causes-of-effects" approach to explanation. From the qualitative perspective, this approach to asking and answering questions is consistent with normal science as conventionally understood. Indeed, most natural scientists would find it odd that their theories cannot be used to explain individual events; the fields of evolutionary biology and astronomy often seek to identify the causes of particular outcomes. (Footnote 2: For example, Richard Feynman, when testifying in front of Congress, did not think the question "Why did the space shuttle Challenger explode?" to be nonsensical or nonscientific (Vaughan 1986). Questions such as this are a request for a cause of an effect.)

In contrast, the explanation of specific outcomes in particular cases is not a central concern of quantitative researchers. Instead, quantitative researchers formulate questions such as "What is the effect of economic development on democracy?" or "What effect does a given increase in foreign direct investment have on economic growth?" They do not normally ask questions such as "Was economic crisis necessary for democratization in the Southern Cone of Latin America?" or "Were high levels of foreign investment in combination with soft authoritarianism and export-oriented policies sufficient for the economic miracles in South Korea and Taiwan?"

Rather, statistical approaches to explanation usually use the paradigm of the controlled experiment, going back at least to work by Fisher (1918, 1925) and Neyman (1923) on agricultural experiments. With a controlled experiment, one does not know the outcome until the treatment has been applied; indeed, one might say that the whole point of the experiment is to observe the effect (if any) of the treatment. Statistical approaches attempt to reproduce this paradigm in the context of an observational study. Although there are important and well-known difficulties in moving from controlled experiment to observational study (e.g., the absence of true randomization and manipulation), for our purposes the crucial point is that statistical researchers follow the "effects-of-causes" approach employed in experimental research: with a statistical research design, one seeks to estimate the average effect of one or more causes across a population of cases. Hence, Angrist, Imbens, and Rubin (1996, 144) assert that "causal inference in statistics" is fundamentally based on the randomized experiment (see also Kempthorne 1952 and Cox 1958).

Methodologists working in the statistical tradition have thus seen clearly the difference between the causes-of-effects approach, in which the research goal is to explain particular outcomes, and the effects-of-causes approach, in which the research goal is to estimate average effects.
Quantitative researchers may have difficulty appreciating the concern of qualitative researchers with explaining outcomes in particular cases. For example, the idea that Skocpol (1979) would really want to write a whole book that is primarily an effort to explain the occurrence of social revolution within a scope that includes as positive cases only France, Russia, and China may seem puzzling within the statistical culture. "Real science should seek to generalize about causal effects" might be a typical reaction. Yet, from the qualitative perspective, science can precisely be used in service of explaining outcomes in particular cases.

When methodologists in the statistical tradition have considered the causes-of-effects approach, they have expressed skepticism about it. Interestingly, in his response to a series of comments from several distinguished statisticians, Dawid expresses surprise that his analysis of causes of effects provoked so little discussion, since he thought it would be controversial: "I am surprised at how little of the discussion relates to my suggestions for inference about 'causes of effects,' which I expected to be the most controversial" (Dawid 2000, 446). Dawid (2000) does not go so far as to reject the causes-of-effects approach; rather, he treats it as a special case of causation, and his basic view advocates looking for the effects of causes. King, Keohane, and Verba (1994) follow Holland quite closely, and they explicitly define causality in terms of the effects-of-causes approach; they do not consider or discuss the causes-of-effects approach. (Footnote 3: Holland responded to comments on his article as follows: "I must disagree with Glymour's paraphrasing of my (i.e., Rubin's) analysis, and with the counterfactual analysis of causation of Lewis described by Glymour. Both wish to give meaning to the phrase 'A causes B.' Lewis does this by interpreting 'A causes B' as 'A is a cause of B.' Rubin's model interprets 'A causes B' as 'the effect of A is B.' I believe that there is an unbridgeable gulf between Rubin's model and Lewis's analysis" (Holland 1986b, 970).)

Much misunderstanding between the two traditions seems to derive from these different approaches to explanation. In general, scholars from either tradition may start their research with a general question such as "What causes democracy?" To address this question, however, researchers will typically translate it into a new question according to the norms of their culture. Qualitative researchers will rephrase the research question as "What causes democracy in one or more particular cases?" Quantitative researchers will translate it differently: "What is the average causal effect of one or more independent variables on democracy?"

The distinction between causes of effects and effects of causes arises several times in the symposium on Brady and Collier (2004) in this special issue. For example, Shively (2006) suggests that scholars who work with a small number of cases "devote their efforts predominately to process-tracing, not to quasi statistical generalization." In his contribution, Beck (2006) believes it is essential to be clear "whether our interest is in finding some general lawlike statements or in explaining a particular event." His view of causation, too, is from the effects-of-causes tradition: in the case of Stokes's (2001) and Brady's (2004a) work, he concedes that "the qualitative analysis is helpful for understanding one specific case," but his basic view is that one should look for effects across large populations.

We believe that both approaches are of value; in fact, they complement one another. An explanation of an outcome in one or a small number of cases leads one to wonder if the same factors are at work when a broader understanding of scope is adopted, stimulating a larger-N analysis in which the goal is less to explain particular cases and more to estimate average effects. Likewise, when statistical results about the effects of causes are reported, it seems natural to ask if these results make sense in terms of the history of individual cases; ideally, one wishes to try to locate the effects in specific cases.
This complementarity is one reason why mixed-method research is possible (for recent discussions of mixed-method research strategies, see George and Bennett 2005; Lieberman 2005; Coppedge forthcoming).

2 Conceptions of Causation

In order to explain outcomes in particular cases, qualitative researchers often think about causation in terms of necessary and/or sufficient causes. The adoption of this understanding of causation can be seen clearly in the kinds of comparative methods employed by qualitative researchers: Mill's methods of difference and agreement, explanatory typologies, and Ragin's qualitative comparative methods are all predicated in one way or another on necessary and/or sufficient causation (see Ragin 1987, 2000; Mahoney 2000; Goertz and Starr 2003; George and Bennett 2005; Elman 2005). Likewise, as various methodologists point out, this understanding of causation is common in historical explanation: if some event A is argued to have been the cause of a particular historical event B, there seems to be no alternative but to imply that a counterfactual claim is true, namely that if A had not occurred, the event B would not have occurred (Fearon 1996; see also Gallie 1955; Nagel 1961). And classical qualitative methodologists, such as Weber (1949), Aron (1986), and Honore and Hart (1985), in fact going back at least to David Hume, think about causation in individual cases in terms of a necessary condition counterfactual: if not-X, then not-Y. X is a cause of Y because without X, Y would not have occurred.

When the scope of their theory encompasses a small or medium N, qualitative researchers often adopt the "INUS" approach to causation (Mackie 1980; Ragin 1987, 2000; Mahoney 2000). An INUS cause is neither individually necessary nor individually sufficient for an outcome; instead, it is one cause within a combination of causes that are jointly sufficient for an outcome. (Footnote 4: An INUS condition is "an insufficient but nonredundant part of an unnecessary but sufficient [combination of conditions]" (Mackie 1980, 62).) With this approach, scholars seek to identify combinations of variable values that are sufficient for outcomes of interest. The approach assumes that distinct combinations may each be sufficient, such that there are multiple causal paths to the same outcome (this is sometimes called equifinality; see subsequently). Research findings with INUS causes can often be formally expressed through Boolean equations such as Y = (A AND B AND C) OR (C AND D AND E). This approach to causation corresponds to the preference of most qualitative analysts for expressing their theories using logic and set-theoretic terms. From the qualitative perspective, the assessment of necessary and/or sufficient causation seems quite natural and fully consistent with logic and good science.

The situation is quite different on the quantitative, statistical side. Here the analyst typically seeks to identify causes that, on average, affect (e.g., increase or decrease) the values on an outcome across a large population. For convenience, we call this the correlational approach to causation. Using the framework and notation of King, Keohane, and Verba (1994, 78-9), one can define this approach to causation for a single case in terms of a counterfactual: the difference between the treatment (T) and control (C) for the same unit. Thus, for an individual case we have:

    Causal effect = y_i^T - y_i^C,    (1)

where T = treatment and C = control. This equation represents what King, Keohane, and Verba (1994, 78-9) call the "realized causal effect" for unit i (Dawid 2000 calls this the "individual causal effect").
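The INUS logic discussed above can be made concrete with a small sketch. This is purely hypothetical (the cases and condition names below are invented, not drawn from the article): it evaluates a Boolean equation of the form Y = (A AND B AND C) OR (C AND D AND E), in which each combination is jointly sufficient for the outcome but no single condition is necessary or sufficient on its own.

```python
# Hypothetical illustration of an INUS-style Boolean model:
# Y = (A AND B AND C) OR (C AND D AND E).
# Each conjunction is a jointly sufficient causal path to the outcome.

def inus_outcome(case):
    """Return True if either jointly sufficient combination of conditions holds."""
    path1 = case["A"] and case["B"] and case["C"]
    path2 = case["C"] and case["D"] and case["E"]
    return path1 or path2

cases = [
    {"A": True,  "B": True,  "C": True,  "D": False, "E": False},  # path 1 holds
    {"A": False, "B": False, "C": True,  "D": True,  "E": True},   # path 2 holds
    {"A": True,  "B": True,  "C": False, "D": True,  "E": True},   # neither path
]

print([inus_outcome(c) for c in cases])  # → [True, True, False]
```

Note that in the third case every condition except C is present, yet the outcome is absent: no individual condition drives the outcome by itself, which is exactly the INUS idea.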
Unlike the logical and set-theoretic focus of qualitative research, within the statistical tradition it is an easy step to consider causal effects as being the βs one estimates in statistical models. (Footnote 5: Actually, King, Keohane, and Verba (1994) use β to refer to the mean causal effect for unit i.) When the quantitative approach moves from the individual case to multiple cases, the understanding of causal effect as an (unobservable) contrast between control and treatment for an individual observation becomes the causal effect for multiple observations through the comparison of groups. Again using the basic notation of King, Keohane, and Verba:

    Mean causal effect = μ^T - μ^C,    (2)

where T = treatment and C = control. Instead of the y_i in equation (1) for an individual case, in equation (2) we have μ, which represents the mean of the group of cases receiving T or C, over many units i = 1, . . ., N. Thus, as Dawid (2000) notes, the statistical approach replaces the impossible-to-observe causal effect of T on a specific unit with the possible-to-estimate average causal effect of T over a population of units (Holland 1986a, 947). This quantity is variously called the "mean causal effect" (Holland 1986a), "average causal response" (Angrist and Imbens 1995), "average treatment effect" (Sobel 2000), or "average causal effect" (Dawid 2000), which we would notate as β. The quantitative approach thus uses an additive criterion, y_i^T - y_i^C, to define cause. (Footnote 6: Not surprisingly, one does not have to define causal effects in additive terms; see below.)

The reaction of statistical researchers to the qualitative approach to causation is often one of profound skepticism. This skepticism may be grounded in the belief that there are no necessary and/or sufficient causes of social phenomena, that these kinds of causes make untenable deterministic assumptions, or that these kinds of causes must be measured as dichotomies. Qualitative researchers have responded systematically to these kinds of concerns (e.g., Goertz and Starr 2003; Mahoney 2004). Statistical researchers may nevertheless choose to dismiss out of hand qualitative hypotheses that assume necessary/sufficient causation. Alternatively, they may choose to reinterpret them as representing implicit correlational hypotheses.

Given these different conceptualizations of causation, there is real potential for misunderstanding and miscommunication. For one thing, the kinds of hypotheses developed in the two traditions are not always commensurate. For example, consider Waldner's (1999) hypotheses about state building and economic development in Turkey, Syria, Taiwan, and Korea: low levels of elite conflict and a narrow state coalition are both necessary for a developmental state, and a developmental state in turn is necessary and sufficient for sustained high growth. His argument focuses on necessary and sufficient causes, and it cannot be unproblematically translated into the language of correlational causation. It is not clear how a scholar working within the statistical framework would evaluate or understand these causal claims. Possibly, she would translate the hypotheses into a language that she is familiar with: she might assume that Waldner hypothesizes that (1) elite conflict and coalitional shallowness are positively associated with the presence of a developmental state and (2) a developmental state is positively associated with economic development. But Waldner does not in fact develop (or necessarily agree with) these hypotheses.

Our view is that it is a mistake to reject in toto alternative understandings and definitions of cause. There are in fact different mathematical models for representing the idea of cause within each tradition.
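The contrast between equations (1) and (2) can be illustrated with a small numerical sketch. The potential outcomes below are invented for illustration only; in real data one observes only one of y_i^T and y_i^C for each unit, which is why the individual causal effect in equation (1) is unobservable while the group comparison in equation (2) can be estimated.

```python
# Hypothetical sketch: individual causal effects, equation (1), versus the
# mean causal effect, equation (2). The full pairs of potential outcomes
# below are invented; real data reveal only one member of each pair.

y_T = [5.0, 3.0, 6.0, 4.0]  # outcome for each unit i under treatment (y_i^T)
y_C = [2.0, 3.5, 1.0, 2.5]  # outcome for the same units under control (y_i^C)

individual_effects = [t - c for t, c in zip(y_T, y_C)]          # equation (1)
mean_causal_effect = sum(y_T) / len(y_T) - sum(y_C) / len(y_C)  # equation (2)

print(individual_effects)  # → [3.0, -0.5, 5.0, 1.5]
print(mean_causal_effect)  # → 2.25
```

Note that the second unit has a negative individual effect even though the mean effect is positive, which previews the later point that an average effect in a population need not hold for particular cases or subsets.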
5) or "every general proposition of the form 'C causes E' is equivalent to a proposition of the form 'whenever C. .234 James Mahoney and Gary Goertz as Braumoeller (2006) suggests. The ways inwhich these two equations are similar and different are not obvious. Cn obtain. then E'" (Ayer 1946.. however. Dion 1998. one often focuses primarily on the impact of combinations of variables and only occasionally focuses on the effects of individual variables. How ever. unless a variable is a necessary cause 7Many works of qualitative analysis at least implicitly employ continuous measurement. Hall 2003). e. Scott R. the tendency in political science has too often been to dismiss certain understandings of causation or to use methods that assume an understanding that is not congruent with the theoryunder investigation (see. the + = symbol represents the logical OR. equation (4) is a standard statistical model that includes an in teraction term. Take perhaps themost common. the desire to explain leads to a multivariate focus. In this equation. the symbol represents the logical AND.. More generally. For a receding and reanalysis of Skocpol (1979) with continuous fuzzy-set variables. Ragin 2000. Also. this can be seen with the assumption that individual events do not have a cause. Explanations 3 Multivariate controlling for relevant variables. For example. one could one could use or yj/yf \og(yf/yf). one normally assumes that it is impossible to estimate average effectswithout F=(A*?*c) Y = P0+Ml + (A*C*D*?) + P2*2+ P3*3+ + M * + B.unpublished manuscript).. In qualitative research. modal. rather one must include a variety of casually relevant factors. scholars should be open toworking with different conceptions of causation. Although to some thismay seem self-evident. and lowercase letters indicate the negation of a variable. model causal effects as appearing in the variance rather than themean. whereas the quantitative one does not. In fact. .. Braumoeller and Goertz 2000. 
one might assume that the lack of an error term in the qualitative equation means that themodel must be tested under deterministic assumptions.. the model could be tested using one of several procedures that have been developed over the last 10 years for analyzing probabilistic necessary and sufficient causes (e. model in each tradition: In all causal research. THEN always E" (Elster 1999. There are real differences between the two equations.g.g.7 Likewise. By contrast. the symbol indicates sufficiency and implies a if-then statement. given that theories regularly posit alternative notions of cause. see Goertz and Mahoney (2005). X2 (3) (4) Equation (3) represents a typical set-theoretic Boolean model based on the INUS ap * proach to causation. Yet the typical multivariate model of each tradition varies in quite importantways. Indeed. Eliason and Robin Stryker. In quantitative research. 55). The logical equation identifies two different combinations of variables that are sufficient for the outcome. one might believe that the equations are different in that the qualitative model necessarily assumes dichotomous variables. of course. one could think of causation in singular cases in terms of sufficiencywithout necessity: "a [covering. equation (3) can be readily estimated with continuously coded variables (Ragin 2000). In the qualitative tradition. In the qualitative tradition. scientific] law has the form IF conditions Cl. C2.
i.e. In the quantitative tradition. AC. they may try to translate it into the familiar terms of interaction effects.8 When scholars not familiarwith qualitative methods see a Boolean model like equation (3). when scholars use interaction terms they still ask about the individual impact of X (see Braumoeller 2004 for examples and critique). and Ray (2005).. King.. see. Clark. exploring carefully their interactions. This advice pushes quantitative research in the direction of standard qualitative practice. and Golder 2006. rather than including all possibly relevant independent variables. To estimate the model. Although there are very good statistical reasons for this practice. one can include interaction terms in a statistical model (as we have done). all cases where the outcome is present are contained within a larger population of cases where the necessary cause is present. This is not a completely unreasonable view (Clark. the logical AND in equation (3) is not the same as multiplication in equation (4). see also Achen 2005b) illustrate that the individual effect ap Hence. In fact.g. We believe that a failure to recognize these differences contributes to substantial confusion across the two traditions.. other variable values) inwhich B appears. e.With a sufficient present are a superset. one ismore likely to be focused on estimating or individually sufficientfor an outcome. Achen (2005a). One way to illustrate the point is by considering the set-theoretic underpinnings of necessary and sufficientcauses (see Ragin 2000. Nor is the logical OR in equation (3) the same as addition in equation (4). For example. With a necessary cause. In particular. These same methodologists may as suggest that researchers might profit by focusing on particular subsets of cases rather than the population a whole. Gilligan. 
"what is the effect of cause CT Because C sometimes has a positive effect and sometimes a negative effect depending on the other variable values with which it appears. one is centrally concerned with estimating the net effect of each individual variable. Keohane. B matters in thepresence ofA and c but in other settings ithas no effect on the outcome. Clarke (2005). Likewise. it is not useful to generalize about the overall effectof B without saying something about the context (i. 87-9. This set-theoretic logic ensures that there is a consistent relationship at the superset and subset levels for findings that are expressed with the logical AND.A Tale ofTwo Cultures 235 the effect of individual causes. Goertz and Starr 2003). . cases in which a necessary cause is = 1 cases are a subset of this superset. a good statisticianwould almost never actually estimate equation (3). asking about its net effect is not a fruitfulapproach. cases where a sufficientcause is present are one subset of a larger superset of Y = 1 cases.g. Thus. Seawright 2005). Brambor. in equation (3) the qualitative researcher would certainly point out that variable A is necessary for the outcome. and Golder 2006. But itmakes virtually no sense to ask. the qualitative researcher will usually make no effort to estimate its net effect. Hence. defends at length this translation). For instance. have suggested that quantitative practice would be 8We are also aware that some statistical methodologists improved if analysts were to focus on a smaller number of independent variables. Gilligan.e. For example. it causes quantitative scholars to believe that a Boolean model is a set of interaction terms thatcould easily be translated into statistical language (e. all cases where the sufficient population where the outcome is present. in the causal model represented by equation (4). Typically. and the Y cause is present are contained within the larger cause. and Verba 1994.. for the logical AND is a first However. 
the individual Xt. statisticalpractice suggests thatone should include all lower order terms such as A. proach continues to be the norm in statistics as actually practiced in the social sciences. Clark.by contrast. and Golder's article in this special issue cousin of multiplication. and AD. To be sure. in Boolean models these reasons do not exist because one is dealing with logic and set theory. 2004. AB. recent articles on themethodology of statistical interaction terms (Braumoeller 2003. Nevertheless.
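To make the contrast concrete, here is a minimal sketch of a Boolean model in the spirit of equation (3). The helper name `outcome` and the enumeration are ours, not the authors'; it shows why a "net effect" of C is not a useful quantity and why set-theoretic findings carry over to every subset.

```python
import random
from itertools import product

# Boolean model in the spirit of equation (3), with two causal paths:
#   Y = (A AND B AND not-C)  OR  (A AND C AND D AND E)
# Uppercase letters mean a factor is present, lowercase that it is absent.
def outcome(a, b, c, d, e):
    return bool((a and b and not c) or (a and c and d and e))

cases = list(product([0, 1], repeat=5))  # all 32 configurations

# A is necessary: the outcome never occurs when A is absent.
assert all(not outcome(*case) for case in cases if case[0] == 0)

# The "net effect" of C is context dependent: C blocks the first path
# but helps complete the second, so no single estimate summarizes it.
print(outcome(1, 1, 0, 0, 0), outcome(1, 1, 1, 0, 0))  # True False
print(outcome(1, 0, 0, 1, 1), outcome(1, 0, 1, 1, 1))  # False True

# A sufficient combination stays sufficient in every subset of cases.
random.seed(0)
for _ in range(100):
    subset = random.sample(cases, 8)
    assert all(outcome(*c) for c in subset if c[0] and c[1] and not c[2])
```

The same subset check would not hold for a regression coefficient, whose sign within a subset is a contingent, empirical matter.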
e. suppose for a population that must be a necessary cause for A is a necessary cause of Ffor thispopulation. Since the set of war dyads is a subset of all nondemocratic dyads. i. is "no. Does this Xx cannot have a stronglynegative impact for a particular subset of cases? The answer.But from theperspective of a dialogue between cultures. In short. Stable Unit Treatment Value (see Brady and Seawright 2004 for a nice discussion). take the classic democratic peace any subset of thepopulation.. which points to real differences across the traditions.. A* b* c might not be present in all subsets (e.e. if the combination A * b * c is sufficient must be sufficientfor the outcome in any subset for the outcome in thepopulation. then it of the population.We for social science research. and it plays a key role in how many qualitative scholars think about causal relationships. Imagine that in a statistical study mean that the impact ofXx is stronglypositive in thepopulation. In terms of multivariate explanations. Of course. . the vocabulary. wars. Likewise. But the point is that ifA present in a subset.g. The logical approach of qualitative research can be contrasted with the relationship between populations and subsets in statistical research." the concept of equifinality is strongly associated with the qualitative com parative analysis approach developed by Ragin (1987). Also referred to as "multiple. conjunctural causation" or just "multiple Equifinality is the idea that therearemultiple causal paths to the same outcome. the logic and set theory thatform thebasis of thequalitative view of cause and causal statisticalmodels. the estimate of the parameter Pi^Xi X2 could change dramat whole population to a subset.9 Similarly. 
what is a mathematical icallywhen moving from the truthinBoolean logic?the consistency of necessary/sufficient causal relationships across a superset and its subsets?is a contingent relationship in the parameter estimates of we have a Boolean model such as Y = (A * b * c) + (A *Q. there is no mathematical reason why X\ could not be negatively related to the outcome in particular subsets. e. discussions of equifinality are absent in quantitative work.236 James Mahoney and Gary Goertz to any subset of the population.. Another indicator of differences between the qualitative and quantitative traditions is the importance or lack thereof attributed to the concept of "equifinality" (George and Bennett 2005).g. as we have seen. theA * C * b * c is one). the stability of parameter estimates is a contingent * phenomenon. parameter instability remains a real possibility. equifinality is expressed using the INUS 9The assumptions associated with unit homogeneity and unit independence. If one were to read only large-Af quantitative word "equifinality" (or its synonyms)would not be part of one's methodological work. In contrast. of course. nondemocratic dyads) is a necessary cause of war.findings thatapply at thepopulation levelmust as a mathematical fact also apply who is rightor better. The twomodels represented in equations (3) and (4) are thus inmany ways difficult to compare. The hypothesis can be phrased in termsof hypothesis: democratic dyads do not fight a necessary cause: nondemocracy (i. this hypothesis will remain truefor any subset ofwar dyads. then Y will also be present. In short. then it Since For a substantive example. therefore see the two approaches as each viable 4 Equifinality causation.. are designed to prevent this parameter Assumption instability from occurring. Indeed. In practice." The impact ofX\ as one moves from a superset to subsets is always contingent in statistical models. 
it is better to understand the differences than to fight over complexity are not more or less rigorous than the probability and statisticsused by quantitative scholars.
analysts will normally assign cases to causal paths. 87-9). the goal of research here is not to explain any case but rather to generalize about individual causal effects.By contrast. scholars usually define their scope more broadly and seek to make generalizations about large numbers of cases.g. the fact that causal paths are "conjunctural" in nature.A Tale ofTwo Cultures 237 * * OR * * * c) (A C D E)\ approach. of potential paths to a particular outcome. it is not surprising that Mackie's (1980) chapter on INUS models is called "causal regularities. In contrast. one may believe that equifinality is simply a way of talking about interaction terms. To cite another example.. it is common for investigators to define the scope of their theories narrowly such that inferences are generalizable to only a limited range of cases. it does not seem useful to group cases according to common causal configurations on the independent variables. Keohane. the goal is to identify all the causal paths present in the population. Although one could do this. Moore's (1966) famous work identifies three modern world. one particular speaks about the population as a whole and does not discuss the particular pathways that individual cases follow to arrive at their specific values on the dependent variable. Hicks.One has equifinality in spades. in quantitative research. but there are not very many of them.Within the typically more limited scope conditions of qualitative work (see below). The right-hand side of the statistical equation essentially represents a weighted sum. implicit in statistical models such as equation (4) are thousands. differentpaths to the and the specific countries that follow each path are clearly identified. . as do King. these causal paths can play a key organizing role for general theo retical knowledge. Each path is a specific conjunction of factors. e. Qualitative 10Given that equifinality often organizes causal generalization in qualitative research. 
In equation (3). Quantitative universe. Indeed. Since the overall research goal is to explain cases. Misra. In qualitative research.g. and as long as that weighted sum is greater than the specified threshold?say in a logit setting?then the outcome should (on average) occur. In this context. in some qualitative works. What actually makes equifinality distinctive in qualitative work is the fact that there are only a few causal paths to a particular outcome. We thinkthat much of thediscussion of equifinality inappropriately views itsdistinctive aspect as the representation of causal paths through combinations of variable values.2000). it is not a practice within the tradition. there are two causal paths (A B either one is sufficient to attain the outcome. Again. each case may belong to a larger set of cases that follow the same causal path. Scope and Causal Generalization 5 In qualitative research. therewill be countless ways that theweighted sum could exceed the threshold. one does so by identifying the specific causal path thateach case follows. For example.10 Within quantitative research. Ragin 1987. and Ng (1995) conclude that there are three separate paths to an early welfare state.'' With an INUS model. each defined by a particular combination of variables. If one focuses mainly on this compo nent using a statistical perspective. if not millions. scholars often view the cases theyanalyze simply as a sample of a potentially much larger The narrower scope adopted in qualitative analysis grows out of the conviction that causal heterogeneity is thenorm for large populations (e.. and theiranalysis allows one to identify exactly which cases followed each of the threepaths (see also Esping-Andersen 1990). and Verba (1994. INUS models thus form a series of theoretical generalizations. the cases analyzed in the study represent the full scope of the theory. In qualitative research.Within this framework.
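The scale contrast behind "equifinality in spades" can be made concrete with a small enumeration. This is a sketch under invented assumptions: the ten-variable logit-style rule, its weights, and its threshold are ours, chosen only for illustration.

```python
from itertools import product

# Boolean model in the spirit of equation (3): only two sufficient paths.
def boolean_y(a, b, c, d, e):
    return bool((a and b and not c) or (a and c and d and e))

# Hypothetical logit-style rule: the outcome is expected whenever a
# weighted sum of ten binary variables crosses a threshold (all numbers
# invented for illustration).
weights = [0.9, 0.7, 0.6, 0.5, 0.4, 0.4, 0.3, 0.3, 0.2, 0.2]
threshold = 1.5

def logit_y(xs):
    return sum(w * x for w, x in zip(weights, xs)) > threshold

boolean_configs = [cfg for cfg in product([0, 1], repeat=5) if boolean_y(*cfg)]
logit_configs = [cfg for cfg in product([0, 1], repeat=10) if logit_y(cfg)]

print(len(boolean_configs))  # 6 configurations, organized into two paths
print(len(logit_configs))    # hundreds of distinct ways to cross the threshold
```

The handful of Boolean configurations group neatly into two theoretically meaningful paths, whereas the threshold rule is satisfied by so many configurations that "paths" do no organizing work.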
As we saw in the previous section, causal generalization in qualitative research often takes the form of specifying a few causal paths that are each sufficient for the outcome of interest. One key implication of this difference is that causal generalizations in qualitative work are more fragile than those in large-N statistical analyses. Whereas findings from qualitative research tend to be more stable than findings from quantitative research when one moves from a superset to particular subsets, quantitative findings tend to be more stable than qualitative findings when one moves from a subset to a superset.12

For qualitative researchers, heterogeneity of various sorts (e.g., in concepts, measurement, and model) poses a major problem (e.g., Lieberson 1985). Given this approach, expanding the scope of a study can easily risk introducing causal heterogeneity. It might be that the new cases do not fit the current set of causal paths, and enlarging the scope might mean that the new cases require the addition of a third or fourth causal path. It can also arise that the new cases make existing causal paths problematic: even though the existing paths are sufficient for the outcome of interest in the original cases analyzed, a path such as (A * B * c) may not be sufficient for the outcome of interest in the new cases, even though the model works perfectly well for the original cases. Qualitative researchers assume that as the population size increases, even modestly, the potential for key causal relationships to be missing from or misspecified in their theories increases dramatically. These researchers thus believe that the addition of each case to the analysis stands a good chance of necessitating substantial modifications to the theoretical model, which in turn makes qualitative scholars particularly likely to restrict the domain of their argument. Insofar as these modifications produce major complications, qualitative researchers believe that it is better to develop an entirely separate theory for the additional cases. For example, Skocpol develops separate theories for the great historical social revolutions and for the more contemporary social revolutions in the Third World (Skocpol 1979; Goodwin and Skocpol 1989).

Research practices are quite different in the quantitative tradition. Statistical analyses are often robust and will not be dramatically influenced by modest changes in scope or population. Here, of course, researchers need to have a large number of observations to use most statistical techniques, which may encourage a broad understanding of theory scope. But more importantly, the very conception of causation used in quantitative research means that the concerns of causal heterogeneity are cast in different terms. If your goal is to estimate an average effect of some variable or variables, where adequate explanation does not require getting the explanation right for each case, analysts can omit minor variables to say something more general about the broader population. Independent variables that are important for only a small subset of cases may be appropriately considered "unsystematic" and relegated to the error term; indeed, the error term of a typical statistical model may contain a number of variables that qualitative researchers regard as crucial causes in individual cases. In particular, the exclusion of certain variables associated with new cases is not a problem as long as assumptions of conditional independence still hold.11

These differences are important, but they should not form the basis for criticism of either tradition; they are simply logical implications of the kinds of explanation pursued in the two traditions.

11 Of course, some statistical methodologists do not believe that these assumptions usually hold outside of natural experiments (e.g., Freedman 1991). Yet this concern raises a separate set of issues that are best debated from within the statistical tradition itself.
12 In this sense, this implication is the mirror image of what we saw in the last section.
6 Case Selection Practices

Qualitative researchers usually start their research by selecting cases where the outcome of interest occurs (these cases are often called "positive" cases). This practice is not surprising when we recall that their research goal is the explanation of particular outcomes: if you want to explain certain outcomes, it is natural to choose cases that exhibit those outcomes. Although sometimes qualitative researchers may only select positive cases, quite commonly they also choose "negative" cases to test their theories (see Mahoney and Goertz 2004). In the typical small-N study, qualitative scholars include negative outcome cases for the purposes of causal contrast and inference (e.g., in addition to her three positive cases of social revolution, Skocpol also examines six negative cases where social revolution did not occur).

In quantitative research, by contrast, researchers generally select cases without regard for their value on the dependent variable, for well-understood reasons: selecting cases based on their value on the dependent variable can bias findings in statistical research (e.g., Heckman 1976). In a standard experimental design, one can manipulate cases such that they assume the four possible combinations of values on the independent variables and then observe their values on the dependent variable. In statistical analysis, the selection of a large number of cases without regard for their value on the dependent variable has the effect of approximating this experimental design. Quantitative researchers therefore ideally try to choose populations of cases through random selection on independent variables.

These basic differences in case selection have stimulated debate across the two traditions. In the late 1980s and early 1990s, Achen and Snidal (1989) and Geddes (1991) criticized qualitative research designs on the subject of selecting on the dependent variable. This foreshadowed the well-known discussion of the issue by King, Keohane, and Verba (1994), which was especially critical of research designs that lack variance on the dependent variable (i.e., no-variance designs). By the late 1990s, however, a number of scholars responded to these criticisms. Regarding no-variance designs, methodologists pointed out that if the hypothesis under consideration postulates necessary causes, the design is appropriate (e.g., Dion 1998; Braumoeller and Goertz 2000; Ragin 2000; Harvey 2003).13 Likewise, other methodologists (e.g., Collier, Mahoney, and Seawright 2004) insisted that within-case analysis, which relies on causal-process observations (discussed below), provides substantial leverage for causal inference even when the N equals 1.

To highlight other differences in case selection in the two traditions, an example is helpful. In Table 2, there are two independent variables and one dependent variable; all variables are measured dichotomously.

Table 2  Case selection

            X1    X2
  Y = 1      1     1
             0     1
             1     0
             0     0
  Y = 0      1     1
             0     1
             1     0
            *0     0

  * The heavily populated (0,0,0) cell discussed in the text.

The case selection practices of qualitative researchers have two characteristics that are somewhat distinctive. The first is that there are usually very few cases of 1 on the dependent variable. This is true because the positive cases of interest (i.e., cases where Y = 1) in qualitative research are generally rare occurrences (e.g., revolutions, wars, growth miracles), whereas the negative cases (e.g., nonrevolutions, nonwars, non-growth miracles) are potentially almost infinite in size. Hence, in terms of Table 2, the top half of the table is much less populated than the bottom half. Of course, the same can be true in experimental or statistical research when analysts study rare events (see Goldstone et al. 2000; King and Zeng 2001), though as a generalization we can say that the study of exceptional outcomes is more common in qualitative research.

The second and more important distinctive trait of qualitative analysis is that, in the heavily populated bottom half of the table, not all negative cases are equally useful for testing theories. To see why, assume that the causal model being tested in Table 2 is Y = X1 AND X2. Negative cases in the (1,1,0) cell are extremely useful because they disconfirm or at least count against this theory (i.e., both causes are present, but the outcome is absent); qualitative researchers are highly attuned to finding these cases. Likewise, the (1,0,0) and (0,1,0) cells help qualitative researchers illustrate how X1 and X2 are not individually sufficient for the outcome. By contrast, the (0,0,0) cell, where both causes and the outcome are absent, is particularly heavily populated and problematic. In practice, qualitative researchers rarely choose cases (or case studies) from the (0,0,0) cell; one will almost never see a qualitative scholar doing a case study on an observation from this cell. Another problem confronting the qualitative scholar is that the (0,0,0) cases are so numerous and ill-defined that it is difficult to select only a few for intensive analysis, whereas selecting a large number of these cases is not a realistic option. More fundamentally, the (0,0,0) cases are less useful for testing theories when compared to cases taken from the other cells, since they provide less leverage for causal inference (Braumoeller and Goertz 2000). Indeed, in most of these cases the outcome of interest is not even possible, and thus the cases are regarded as irrelevant (Mahoney and Goertz 2004).

Within a statistical framework, by contrast, it makes sense to include all types of negative cases, such as cases from the (0,0,0) cell, and treat them as equally important for drawing conclusions about causal effects. Given a large N, having a lot of cases is desirable, and computation of statistical results is not hindered but helped by having many cases in each cell. Likewise, increasing variance reduces the standard error and thus is pursued when possible; one would normally wish to include cases distant from 1 on the independent variables.

In short, the two traditions differ in how they approach case selection on both the dependent and independent variable sides of the equation. If your goal is to estimate average causal effects for large populations of cases, it makes sense to avoid selecting on the dependent variable, and random selection on the independent variables would be a good way of accomplishing this. But if your goal is to explain outcomes in particular cases, it does not make sense to select cases without regard for their value on the outcome. Nor does it make sense to treat all negative cases that lack the outcome of interest as equally relevant to the analysis. In these important ways, we are convinced that both traditions have good reasons for doing what they do.

13 Although there is mostly consensus on this point, Braumoeller and Goertz (2000) argue that no-variance designs do not permit one to distinguish trivial from nontrivial necessary causes. For a different view, see Seawright (2002), who argues for the use of "all cases," and not merely those where Y = 1, when testing necessary condition hypotheses.
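The logic of Table 2 can be sketched in a few lines. The cell counts below are invented for illustration; only the structure matters.

```python
from collections import Counter

# Hypothetical case data keyed by (X1, X2, Y) triples. The counts are
# invented; note how the (0,0,0) cell dominates the population.
cases = Counter({
    (1, 1, 1): 8,    # theory-conforming positive cases
    (1, 0, 0): 5,    # X1 alone is not sufficient
    (0, 1, 0): 6,    # X2 alone is not sufficient
    (1, 1, 0): 2,    # disconfirming: both causes present, outcome absent
    (0, 0, 0): 500,  # heavily populated, low-leverage cell
})

# Theory under test: Y = X1 AND X2.
def predicted(x1, x2):
    return int(x1 and x2)

disconfirming = {cell: n for cell, n in cases.items()
                 if predicted(cell[0], cell[1]) != cell[2]}
print(disconfirming)  # {(1, 1, 0): 2}
```

Every case in the (0,0,0) cell matches the theory's prediction trivially, which is one way to see why the hundreds of cases there offer little leverage compared with the two disconfirming (1,1,0) cases.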
7 Weighting Observations

Qualitative researchers are in some ways analogous to criminal detectives: they solve puzzles and explain particular outcomes by drawing on detailed fact gathering, experience working with similar cases, and knowledge of general causal principles. From the standpoint of this "detective" method (Goldstone 1997; see also Van Evera 1997; McKeown 1999; George and Bennett 2005), not all pieces of evidence count equally for building an explanation. Rather, certain observations may be "smoking guns" that contribute substantially to a qualitative researcher's view that a theory is valid. By the same token, a single piece of data can radically affect posterior beliefs, much like a detective whose initial hunch about a murder suspect can be undermined by a single new piece of evidence (e.g., an air-tight alibi). Hence, a new fact can lead qualitative researchers to conclude that a given theory is not correct even though a considerable amount of evidence suggests that it is. With this approach, a theory is usually only one critical observation away from being falsified. And yet, researchers sometimes build enough evidence to feel quite confident that the theory is valid and that no falsifying evidence will ever be found.

Also like detectives, qualitative researchers do not view themselves as approaching observations in a theoretically neutral way (Goldstone 2003; George and Bennett 2005). Rather, these researchers in effect ask: "Given my prior theoretical beliefs, how much does this observation affect these beliefs?" When testing some theories, the crucial data could show that a key variable was incorrectly measured. We see this with the theory that held that China performed better than India on key social indicators before 1980 because of its higher level of GDP per capita. When researchers introduced a new measure of economic development, which addressed problems with the previous GDP per capita estimate and showed similar levels of development in the two countries, the theory no longer made sense, and the whole theory was called into question and rejected (Dreze and Sen 1989). The decisive data need not involve a measurement problem, however. For instance, consider the theory that the combination of a weak bourgeoisie, a divided peasantry, and a powerful landed elite is sufficient for fascism in interwar Europe (Moore 1966). This theory is challenged by the observations that powerful landed elites in the fascist cases either could not deliver large numbers of votes or were actually supporters of liberal candidates (Luebbert 1991, 308-9). When one takes this information into consideration, the theory seems deeply problematic, despite the fact that it is plausible in other ways (for other examples, see McKeown 1999).

By contrast, quantitative researchers usually weight a priori all observations equally; they generally make no assumptions that some pieces of evidence, i.e., particular observations, should count more heavily than others. Researchers then establish a pattern of conforming observations against a null hypothesis. With this approach, a single observation cannot lend decisive support to or critically undermine a theory; rather, only a pattern of many observations can bolster or call into question a theory. Indeed, statistical results that draw too heavily on a few specific observations (often outliers) are suspect (Western 2001).

These different uses of data correspond to the distinction of Brady and Collier (2004, chap. 2) between "causal-process" and "data-set" observations. A data-set observation is simply a row in a standard rectangular data set and is ordinarily what statistical researchers call a case or an observation. Data-set observations provide analytic leverage because they show or do not show statistically significant patterns of association between variables as well as allow for the estimation of the size of effects.
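The detective-style weighting of evidence described above can be caricatured in Bayesian terms. This is a toy sketch with invented likelihood ratios: many weak conforming observations push belief in a theory toward certainty, yet a single decisive observation, the air-tight alibi, collapses it.

```python
# Toy Bayesian updating: posterior odds = prior odds * likelihood ratio.
# All likelihood ratios below are invented for illustration.

def update(prior, likelihood_ratio):
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

belief = 0.5
for _ in range(10):            # ten mildly supportive observations (LR = 2)
    belief = update(belief, 2)
print(round(belief, 3))        # 0.999: near-certain after many weak clues

belief = update(belief, 1e-4)  # one decisive disconfirming observation
print(round(belief, 3))        # 0.093: belief collapses
```

A quantitative analyst who weights all observations equally has, in effect, fixed every likelihood ratio in advance; the qualitative "smoking gun" is precisely an observation allowed to carry an extreme ratio.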
Here "substantively important"means of special normative interestbecause of a past or current major role in domestic or international politics. whereas data-set observations are especially helpful when one wishes to generalize about average causal effects for a large population.' It gives insight into causal mechanisms. we believe thatboth kinds of evidence can be useful. And techniques have long existed for identifyingand analyzing these kinds of cases (e.242 James Mahoney and Gary Goertz show or do not show statistically significant patterns of association between variables as well as allow for the estimation of the size of effects.. For example. In addition. Substantively Important Cases 8 Qualitative and quantitative scholars have different perspectives on what constitutes an "important" case. Causal-process observations are crucial for theory testing in a qualitative setting precisely because one sorts through the data with preexisting theoretical beliefs (including common sense). It does not . By contrast. "A causal-process observation is an insight or piece of data that provides information about context or mechanism and contributes a differentkind of leverage in causal inference. Each case carries equal weight. Collier 1993. a necessarily do so as part of a larger. George and Bennett 2005). there are no ex ante important cases. just as was true for specific pieces of evidence. insight that is essential to causal assessment and is an indispensable alternative and/or supplement to correlation-based causal inference" (Brady and Collier 2004. move back and forth makes sense to Thus. e. systematized array of observations causal-process observation may be like a 'smoking gun. qualitative scholars might have serious doubts about a theory of American elections that failed miserably for California and New York even if itworked well for some smaller states. 
The upshot is that quantitative researchers should not primarily seek out causal-process observations anymore than qualitative researchers should primarily study data-set observations. in fact. Like Brady and Collier.g. one should normally assume a more strictdifferentiation between theory and data and one should not move as freely back and forthbetween theory and data (though specifica tion searches and other data probes may be consistent with good practice). they are aware of and concerned with cases that are regarded as substantively important. itdoes notmake sense to carry out a single pass of thedata or to avoid all ex post model revisions (though researchers must still be sensitive to simply fittinga theory to the data). Bollen and Jackman 1985). the exchange between Brooks andWohlforth For example. In the field of international relations. a failure of realism to explain this single case represents a major strike against thewhole paradigm. In a typical large-Af analysis.. . scholars in security studies believe that the ability of realism to explain the end of Cold War is absolutely crucial. some cases are more "important" than others. it between theoryand thedata. These kinds of research designs assume that the research necessarily treat all cases as equal. 252-3)." and "critical" case study research designs (Przeworski and Teune 1970. For some social constructivists. Realists seem to agree and work hard to explain the end of Cold War (there is a massive literatureon this debate. see. if one seeks to estimate average causal effects. In contrast.g.. in thequalitative tradition.. qualitative scholars do not makes certain cases especially interesting community has prior theoretical knowledge that and theoretically important.We would simply add that causal-process observations are especially useful when one seeks to explain specific outcomes in particular cases. By contrast. because qualitative researchers are interested in individual cases. 
Ex post one can and should examine outliers and observations that have large leverage on the statistical results.researchers explicitly pursue "most likely." "least likely. ifyour goal is to explain particular outcomes.
The qualitative concern with important cases is puzzling for a quantitative scholar. In quantitative research, there is no real reason why substantively or historically important cases are the best ones when evaluating a theory; it could well be that an obscure case has the key characteristics needed to test a theory. From this perspective, the French Revolution does not count extra for falsifying a theory. If many other cases conform, the nonconformity of the French Revolution is not a special problem (or at least no more of a problem than, say, the Bolivian Revolution would be). Hence, it is unclear why important cases should count for more in evaluating theories. If theoretical and empirical scope statements are important, which we believe they are in both qualitative and quantitative research, then it would be better to explain more cases than to evaluate the theory primarily against what might be very important cases.

Our view is that qualitative researchers almost instinctively understand the requirement of getting the "big" cases right and worry when it is not met. The general point is nicely illustrated with Goldstone's discussion of the consequences for Marxist theory of a failure to adequately explain the French Revolution: "It might still be that the Marxist view held in other cases, but finding that it did not hold in one of the historically most important revolutions (that is, a revolution in one of the largest, most influential, and most imitated states of its day and a frequent exemplar for Marxist theories) would certainly shake one's faith in the value of the theory" (2003, 45-6).

9 Lack of Fit

In qualitative research, the investigator is normally quite familiar with each case under investigation. As a consequence, a particular case that does not conform to the investigator's causal model is not simply ignored. Instead, the researcher seeks to identify the special factors that lead this case to follow a distinctive causal pattern; the qualitative researcher seeks to understand exactly why the particular case did not conform to the theoretical expectation (Ragin 2004, 135-8). These special factors may not be considered part of the central theoretical model, but they are explicitly identified and discussed.

Within the quantitative framework, by contrast, the failure of a theoretical model to explain particular cases is not a problem as long as the model provides good estimates of parameters for the population as a whole. Many idiosyncratic factors may matter for particular cases, but these factors are not important for more general theory, and therefore they are not of great concern. Such idiosyncratic variables are relegated to the error term. The exclusion of idiosyncratic factors does not bias the parameter estimates of the model, given that these factors are often not systematically correlated with the variables that are included.14

14 The view of statistical researchers on this issue is nicely captured by the one effort of King, Keohane, and Verba (1994) to discuss causal explanation for an individual case. The authors describe a research project in which the goal is to evaluate the effect of incumbency on elections. King, Keohane, and Verba realize that other "nonsystematic" variables might come into play, but these are relegated to the error term and are of no particular interest: "[W]e have argued that social science always needs to partition the world into systematic and nonsystematic components.... To see the importance of this partitioning, think about what would happen if we could rerun the 1998 election campaign in the Fourth District of New York, with a Democratic incumbent and a Republican challenger. A slightly different total would result, due to nonsystematic features of the election campaign, aspects of politics that do not persist from one campaign to the next, even if the campaigns begin on identical footing. Some of these nonsystematic features might include a verbal gaffe, a surprisingly popular speech or position on an issue.... We can therefore imagine a variable that would express the values of the Democratic vote across hypothetical replications of this same election" (King, Keohane, and Verba 1994, 79).
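King, Keohane, and Verba's thought experiment of rerunning the same election can be mimicked with a toy simulation. The vote share and noise scale below are invented for illustration: the systematic component stays fixed while a nonsystematic shock varies across hypothetical replications.

```python
import random

random.seed(42)

# Hypothetical replications of one election: a fixed systematic component
# (incumbency, district partisanship) plus a nonsystematic shock (gaffes,
# surprisingly popular speeches). All numbers are invented.
SYSTEMATIC_SHARE = 0.54

def rerun_election():
    return SYSTEMATIC_SHARE + random.gauss(0, 0.02)

replications = [rerun_election() for _ in range(10_000)]
mean_share = sum(replications) / len(replications)
spread = max(replications) - min(replications)

print(round(mean_share, 2))  # the systematic component: about 0.54
print(spread > 0)            # individual replications differ: True
```

Each replication produces a "slightly different total," yet averaging across replications recovers the systematic component, which is the quantity quantitative research aims to estimate.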
Moreover, within the quantitative framework, the lack of fit of a theoretical model may be due not simply to omitted variables but also to randomness and nonsystematic measurement error, problems which again do not bias results.

These different approaches to dealing with a lack of fit provide ample ground for misunderstandings. Statistical researchers may be perplexed when qualitative researchers spend a great deal of energy attempting to identify the factors at work in nonconforming cases. They may wonder, "Why use up valuable time on research that does not lead to generalizable findings?" Indeed, they may view the effort of fully explaining the outcome of interest as a deterministic trap or a Utopian goal. For their part, qualitative researchers may be troubled by statistical models that explain only a small portion of the variation of interest. They may ask, "What are the various factors that comprise the error term?" If the overall fit of the statistical model is not very good, they may be unconvinced by the argument that the error term contains only minor variables (or measurement error or inherent randomness). Qualitative researchers believe that prediction error "should be explained, rather than simply acknowledged" (Ragin 2004, 138).

Yet, when one appreciates the different research goals pursued by qualitative and quantitative analysts, it is hard to condemn either viewpoint. If you really want to estimate average causal effects, you should not be in the business of trying to hunt down each causal factor that might affect outcomes in particular cases; you should concentrate on the variables specified in the model, leaving the rest to the error term. But if you really want to explain outcomes in particular cases, it makes good sense to be in this business.

10 Concepts and Measurement

It is common in qualitative analysis for scholars to spend much time and energy developing clear and precise definitions for concepts that are central to their research. They do so because they are concerned with conceptual validity. When analyzing multiple cases, these researchers especially try to avoid conceptual stretching, or the practice of applying a concept to cases for which it is not appropriate (Sartori 1970; Collier and Mahon 1993), and they believe that the failure to address this concern is a major source of measurement error. Debates about measurement validity in this research tradition are therefore often focused on the logical structure and content of specific concepts (see Gerring 2001; Goertz 2006).

In quantitative research, by contrast, the focus is less on measurement error deriving from the definition and structure of concepts. Instead, this research tradition is more concerned with operationalization and the use of indicators. Measurement error typically occurs at the level of indicators, not the level of concepts, and methodological discussions of measurement error therefore concentrate on modeling measurement error and modifying indicators, with little concern for concept revision. In fact, some (though certainly not all) quantitative researchers would go so far as to say that a concept is defined by the indicators used to measure it, a position that qualitative researchers would almost never endorse.

We can see these differences clearly in comparative research on democracy. In the qualitative research tradition, debates over the (mis)measurement of democracy often focus on the stretching of this concept to cases that are not really democracies (or are special kinds of democracies). Solutions to the problem are proposed at the conceptual level, e.g., developing appropriate subtypes of democracy that will simultaneously allow researchers to capture diverse forms of democracy and avoid stretching the concept (Collier and Levitsky 1997). By contrast, discussions about the (mis)measurement of democracy in quantitative research are concerned with the properties of indicators and the statistical measurement model, including error (e.g., Bollen 1980, 1993; Bollen and Paxton 1998).

For quantitative researchers, measurement error is something that is unavoidable but not devastating so long as it can be adequately modeled. Systematic measurement error (i.e., bias) is of course important, and procedures exist to identify it. And when systematic measurement error is discovered, quantitative researchers will normally seek out better indicators for the concept being measured or better ways to model error.

Qualitative researchers, by contrast, are often skeptical of this approach. Indicators that on average do a reasonable job of measurement will be problematic because they mismeasure many particular cases. Qualitative researchers may feel that statistical indicators do not measure the same thing across diverse contexts and thus that significant unrecognized conceptual heterogeneity is present in quantitative research. This skepticism ultimately emanates from the goal of qualitative researchers to develop adequate explanations of each particular case, which means that they must try to measure all key variables correctly for each case, for theory falsification might occur with a change in the value of one or a small number of variables.

In short, there is a qualitative strand that focuses on concepts and measurement validity, and a quantitative strand that focuses on indicators and that seeks to model measurement error and avoid systematic error. Scholars in both strands study measurement, but cross-cultural communication between the two is relatively rare (though see Adcock and Collier 2001; Bowman, Lehoucq, and Mahoney 2005).

The possibilities for misunderstanding are manifold. Scholars associated with either tradition tend to react defensively and in exaggerated ways to criticisms or perceived mischaracterizations of their assumptions. Misunderstanding is enhanced by the fact that the labels "quantitative" and "qualitative" do a poor job of capturing the real differences between the traditions; in terms of the differences discussed here, we believe that better labels might be statistics versus logic.
Conclusion Comparing differences in qualitative and quantitative research in contemporary political science entails traversing sensitive ground. Bollen and Paxton 1998). It is standard in this research tradition to believe that many measurement result from the use of poor or biased indicators of democracy.e. qualitative researchers sometimes believe that the indicators used in statistical research are simplisticmeasures thatomit key elements (or include inappropriate elements) of the concept being studied (Coppedge 1999.g. In qualitative research. measurement error needs to be addressed and eliminated completely. will incorrectly For quantitative researchers.Although we have no illusions about changing prevailing terminology. effect describing the two kinds of research analyzed here would be statistics . it is appropriate to speak of two separate strands in themeth odological literature on measurement error in political science: a qualitative strand that focuses on concepts and conceptual validity and that is centrally concerned with elimi mea nating measurement error. goals. Both literatures are hugely influentialwithin their respective cultures. and practices. and Mahoney 2005).But it is still often quite possible to generate good estimates of average causal effectswhen nonsystematic measurement error is present. problems These differences contribute to skeptical views across the traditions.g. many qualitative techniques in fact require quantitative information. ifpossible. in fact. In the qualitative tradition. Given these differences. Quantitative analysis inherently involves the use of numbers. scholars actively discuss and debate the scoring of particular variables for specific cases. The stakes of such discussions may be high. but all statistical analyses also rely Qualitative studies quite frequently employ numerical heavily on words for interpretation. Systematic measurement error (i. Lehoucq. Munck and Verkuilen 2002.
Sociological Methods and Research 13:510-42. American Journal of Bollen. Kenneth A. -. 1985. 1996.. Kirk. Political 1993. 2005. Paris: Gallimard. Journal of theAmerican Statistical Association 91:444?55. New York: Dover. and Peace Science 22:327-39. 1986 (1938). and Donald Rubin. versus outcome James Mahoney and Gary Goertz or versus case-oriented explanation. 1946. E. Brady and D. Fabrice Lehoucq. Identification of causal mental variables. E. Measuring Bowman. Rational deterrence theory and comparative case studies. E. 1998. MD: Rowman and inquiry: Diverse Littlefield. observations: The 2000 U. American Political Science Review 95:529-46. Is causal-process observation an oxymoron? Political Analysis 10. and Central America. factors in cross-national measures. political democracy: data adequacy. Detection American Sociological Review 63:465-78. H. population-oriented This article is not as an effort to advise researchers about how they should carry out work within their tradition. presidential Brady. Bollen. Collier. Journal of the American Statistical Association 90: 431^12. -. Robert. and Duncan World Politics 41:143-69.We especially hope that scholars will not read the article with the goal of noting how the assumptions of the other side are deeply flawed fromwithin their own culture. it necessarily follows thatwhat is good advice and good practice in statistical research might be bad advice and bad practice in qualitative research and vice versa. and Guido W.We thushope that scholars will read this articlewith thegoal of learningmore about how the "other side" thinksabout research. American Sociological Review 45:370-90. 2004b. the research practices we have described make good sense.S.. Language.1093/pan/mpj015. Lanham. Achen. Data-set observations versus causal-process election. Case expertise. Imbens. 2001. de Vhistoire: effects using instru essai sur les limites de Vobjectivite truth and logic. Liberal democracy: Validity and method Science 37:1207-30. 
Guido W.246 estimation approaches. and respectful dialogue. 2004a. and David Collier.. Let's put garbage-can regressions and garbage-can probits where they belong. they can productively communicate with one another. Adcock. Comparative Political Studies 38:939-70. Christopher H. H. 2nd ed. Snidal. Alfred Jules. Lanham. In Rethinking social inquiry: Diverse tools. In this framework. Doing good and doing better: How far does the quantitative template get us? In Rethinking tools. Nathaniel. Misunderstandings across the two traditions are not inevitable. Jackman. Studies International Development 27-32. and determinants of bias in subjective measures. H. shared standards.. 2006. It is also not an effort to criticize research practices?within the assumptions of each tradition.We hope that our listing of differences across the two traditionsmight contribute to this kind of References Achen. Issues in the comparative measurement of political democracy. Collier. shared standard for qualitative and quan least squares estimation of average causal ef Angrist. . shared standards. Two-stage fects in models with variable treatment intensity. Angrist. ed. Kenneth A. and Robert W. 1995. Insofar as scholars are conversant in the language of the other tradition and interested in exploring a peaceful productive communication. social -. and Pamela Paxton. Beck. 2005a. historique. Measurement validity: A titative research. Kenneth A. Ayer. Christopher H. Two cheers for Charles Ragin. Bollen. Introduction a la philosophie Aron. Raymond. Joshua D. 1989. Regression diagnostics: An expository treatment of outliers and influential cases. it is not helpful to condemn research practices without taking into consideration basic research goals. Brady and D. Conflict Management in Comparative 40: 2005b. Joshua D. MD: Rowman and Littlefield. ed. and James Mahoney. Given the different assumptions and research goals un derlying the two traditions. Imbens. 1980.
2000. American Stephen G. and William C. in history and the genetic sciences. George. Belkin. 58:807-20. Collier. Jr. Mind 64:160-80. Collier. -. W. DC: Omitted variable bias in econometric research. 2006. Explanatory typologies in qualitative studies of international politics. American Political Science Review 87:845-55. E. David. J. Alchemies of themind: rationality and the emotions. globalization. E. Paper presented at the annual meetings of the American Political Science Association. Esping-Andersen. and Gary Goertz. inference without counterfactuals (with discussion). Barbara. International Organization 2006. Cambridge: English. Clarke. Press. Explanations Geddes. Political Analysis 14:63-82. NJ: Princeton University Press. and the end of the Cold War: Reevaluating a landmark case for ideas. In Counter/actual thought experiments inworld politics. Case Cambridge. 2004. H. Alexander L. Comparative and applications. The comparative method. R. issues in comparative macrosociology. 2001. Gilligan. In Rethinking social inquiry: Diverse tools. Methodological 16:121-32. Finifter. International Security 26:93-111. Michael. Explaining mpj009. 2000.. Collier. In Political science: The state of the discipline American Political Science Association. Statistical models and shoe leather. Science Bear R. Causes and counterfactuals in social science: Exploring an analogy between cellular automata and historical processes. Thickening thin concepts and theories: Combining Politics 31:465-76. Political Brooks. P. and new evidence on the Cold Wars Cambridge University Press. politics. Power. How methodology. 2005. conditions. A. 1991. Wohlforth. Collier. Braumoeller. Gerring. W. Political Analysis 11:209-33. Oxford: Clarendon Press. and Harvey Starr. Douglas. David A. John. M. 44:844-58. Claiming too much: Warnings about selection bias. ed. Gallie. and James Mahoney. 1997. and Jason Seawright. International Security 26:70-92. 
The methodology of necessary variance: Exploring the neglected second moment. J. Comparative Dawid. 1998. Vol. methodology. and Amartya Sen. James Mahoney. Freedman. Two-level theories and fuzzy-set analysis. Coppedge. 2004. 2004. ed. 1955. David. Hypothesis testing and multiplicative interaction terms. A. 49:430-51. P. Social science concepts: A user's guide. Goldstone. Braumoeller. Democracy with adjectives: Conceptual II. David. 2003. 2. Power. David. Goertz.. MA: MIT Press. Comparative Politics 30:127-45. James D. Gary. Golder. research. and Andrew Bennett. studies and theory development in the social sciences. shared standards. Colin. H. Political Analysis 10. innovation in comparative and James E. G0sta. E. Tetlock and A. . MD: Rowman and Littlefield. and M. Brambor. Princeton. politics. Elman. Conceptual stretching revisited: Adapting categories in compar ative analysis. Jean. 1991. 2005. -. Robert D. end: A reply to Brooks and Wohlforth. Marsden. 2000. 2006. H. The phantom menace: agement and Peace Science 22:341-52. Elster. 2002.1093/pan/mpj018. 1999. 2004. 1989. Causal complexity and the study of politics. ed.. W. Stimson. Necessary Lanham. Ann Arbor: University of Michigan Press. Social science methodology: A criterialframework. From old thinking to new thinking in qualitative research. Rethinking social MD: Rowman and Littlefield. J. A simple multivariate test for asymmetric hypotheses. Brady and D. Cambridge: Polity Press. eds. the cases you choose affect the answers you get: Selection bias in comparative In Political Analysis. Conflict Man 1993. P. 2006. T. In Sociological San Francisco: Jossey-Bass. Clark. International Security 25:5-53. 2005. shared standards. 2003.1093/pan/ Journal of -. Political Analysis 10. and David Collier. eds. 2002. Bear F. Cambridge: Cambridge University Goertz. NJ: Princeton University Press. J. ed. and M. and Jason Seawright. Washington.A Tale ofTwo Cultures 247 Brady.. Collier. 
Evidence and inference in the comparative case study. 1990. Fearon. MD: Rowman and Littlefield. Clark. Lanham. Golder. 2005. Framing social inquiry: From models of causation to statistically based causal inference. Hunger and public action.. International Orga nization 59:293-326. 1996. inquiry: Diverse tools. 1999. 1993. 1997. Kevin A. Dreze.A. ed. Princeton. The three worlds of welfare capitalism. World Politics and Steven Levitsky. Sociological Methods and Research Goertz. Understanding interaction models: Improving empirical analyses. ideas. conditions: Theory. Brady. Social Research Gary. Mahon. Gary. 33:497-538. large-N and small in comparative Journal of theAmerican Statistical 95:407-50.. Causal Association Dion. Lanham.
1986a. Bates. 1994. Sociological Paul W. in comparative research. research. and Langche Zeng. Journal of the American Statistical Association 81: and Sidney Verba. Levy. Keohane. James. J. Strategies of causal 28:387-424. Statistics 81:968-70. Causation in the law.A. fascism. B. John Leslie. sample selection and limited of Economic and Social Measurement JoyaMisra. Social 1991. 1987. New York: Oxford University Press. Honore. In 2003. Comparative J. Politics and Society 17:489-509. and applications. The cement of the universe: A study of causation. Harff. The logic of comparative social inquiry. State failure task report: Phase III findings. The common structure of statistical models and a simple estimator for such models. Epstein. 1970. -. International Organization 53:161-90. The comparative Berkeley: University of California Press. ed. Review 60:329-50. Hicks. J. Gurr.248 JamesMahoney and GaryGoertz -. R. 1997. C. qualitative research. Statistics and causal inference. 2000. Gary. N. 2004. or social democracy: Social classes and thepolitical origins of regimes in interwar Europe.Berkeley: University of the reasoning in comparative methods in small-N comparative studies based on studies. Current Social Research issues in comparative macrosociology: A debate on methodological issues.. American Holland. Gregory M. 1999. Barrington 1966. 945-60. Mackie. and Gary Goertz. A. 1995. VA: International Corporation. Lieberson. 2001. -. Gerardo L. The Harcourt. Comparative historical analysis and knowledge accumulation and D. Heckman. 1961. and JayVerkuilen. H. H. indices. Harvey. Practicing coercion: Revisiting successes and failures using Boolean logic and compar ative methods. McLean. King. Comparative Goldstone.. Cambridge: historical analysis in the social sciences. Political Analysis 9:137-63. Princeton. Liberalism. Nagel. G. Peter A. Gary. 35:5-34. Ng. Alexander M. 16:1-26. Hart. Forces itcount: The improvement of social research and theory. Unger. and T. Ulfelder. 
Stanley. 2005. Nested analysis as a mixed-method strategy for comparative research. Evan S. American Political Science Review 98:653-69.. More 72:1225-37. D. J. beyond qualitative strategies. 2nd ed. Sociological Methods and Research 2004. Conceptualizing Problems and measuring democracy: Evaluating alternative New York: structure of science: in the logic of scientific explanation. 1985. Moore. ed. Comparative-historical methodology. Goldthorpe. and Verba's Social Inquiry. Designing social inquiry: Scientific inference in King. James. Marshall. Mahoney. 1989. T. Sons. methodology. In Necessary conditions: Theory.Mahoney Press. James J. J.. Logistic regression in rare events data. Science Applications Surko. Annual Review of Sociology 30:81-101. Boston: Beacon Press. Case studies and the statistical worldview: Review of King. 1985. Ragin.New York: John Wiley and quantitative and 2002. P. C. 1991. Oxford: Oxford University Press. inference in small-N research.Mahoney Cambridge University Press. method: Moving and World. Social Forces 70:307-20. Munck. Lieberman. 1994. 1980. Start New York: Rowman and Littlefield. 2003. Oxford: Oxford Univer and causal inference: Rejoinder. in the study of revolutions. M. J. Goertz and H. -. Rueschemeyer. Small Ns and big conclusions: An examination a small number of cases. 2000. -. Cambridge: Cambridge University analysis in the social sciences. and Henry Teune. Aligning ontology and methodology and D. Frank P. The possibility principle: Choosing negative cases Mahoney. and Theda Skocpol. Designing in comparative Timothy J. Keohane. The social origins of dictatorship and democracy: Lord and peasant in themaking of themodern world. . Goodwin. 1986b. 2003. NJ: Princeton University Press. 1976. Adam. G. The programmatic emergence of the social security state. R. Kahl. Tony. Robert O. Rueschemeyer. In Comparative historical Hall. Charles C. American Political Science Review 99:435-52. and Herbert Lionel Adolphus sity Press. 
on the uneasy case for using Mill-type Luebbert. Making of California Press. Explaining revolutions in the contemporary Third World. Journal of the American Statistical Association Annals of truncation. M. N. L. Brace Przeworski. ed. dependent variables 5:475-92. McKeown. and A. T. Ernest.
Beyond the linear frequentist orthodoxy. Mandates Cambridge University Press. Giovanni. 1970.1093/ 1979. 2006. -. Theda. E.A Tale ofTwo Cultures 249 -. 64:1033-53. Sartori. Studies in Comparative Development 40:3-26. Testing for necessary and/or sufficient causation: Which discussion) Political Analysis 10:178-93. Ithaca. 1997. Fuzzy-set social science. Turning the tables: How case-oriented research challenges variable-oriented tools. the social sciences. Cambridge: Cambridge University Press. Russia. Waldner. Guide and democracy: Neoliberalism by surprise science. Phillips. Bruce. Concept misformation (of dangerous dyads). Conflict Management Political Science in comparative politics. 2006. Vaughan. shared standards. Sobel. W. Case selection: Insights from Rethinking Social Inquiry. James Lee. Journal of theAmerican Statistical Association Skocpol. American Journal of Sociology 107:352-78. In Re Lanham. Chicago: University of Chicago Press. 1999. -. Max. NY: Cambridge: pan/mpj007. Causal inference in the social sciences. Van Evera. Press. 2001. The methodology Weber. and China. Seawright. Ithaca. States and social revolutions: A comparative analysis of France. 1949. Susan C. 2004. Shively. NY: Cornell University Press. MD: and thinking social inquiry: Diverse Rowman and Littlefield. Bayesian thinking about macrosociology. Political Analysis Jason. Philip A. Stephen. Qualitative International comparative analysis vis-a-vis regression. analyses ed. Michael E. Political Analysis 10. State building and late development. of . The Challenger launch decision. David. Collier. H. in Latin America. Chicago: University of Chicago Press. 2005. 2002. research. New York: Free Press. Brady and D. Western. 95:647-51. to methods for students of political Cornell University 1986. 2000.1093/pan/mpj013. American Review Schrodt. Ray. 10. 2000. Constructing multivariate Peace Science 22:277-92. cases are relevant? (with 2005. Diane. Stokes. 
Objective possibility and adequate causation in historical explanation. 2001.