
Qualitative Methods

John Gerring
Department of Government, University of Texas, Austin, Texas 78712;
email: jgerring@austin.utexas.edu

Annu. Rev. Polit. Sci. 2017. 20:15–36

First published online as a Review in Advance on January 11, 2017

The Annual Review of Political Science is online at polisci.annualreviews.org

https://doi.org/10.1146/annurev-polisci-092415-024158

Copyright © 2017 by Annual Reviews. All rights reserved

Keywords

case selection, causal inference, multimethod research, process tracing, qualitative methods

Abstract

One might argue that political science has gone further than any other social science in developing a rigorous field of study devoted to qualitative methods. This review article begins by discussing the time-honored qualitative/quantitative distinction. What is qualitative data and analysis, and how does it differ from quantitative data and analysis? I propose a narrow definition of “qualitative” and explore its implications. I also explore in a speculative vein some of the factors underlying the ongoing Methodenstreit between scholars who identify with quantitative and qualitative approaches to social science. In the remainder of the article I discuss areas of qualitative research that have been especially fecund over the past decade. These include case selection, causal inference, and multimethod research.


INTRODUCTION
Qualitative methods, broadly construed, extend back to the very beginnings of social and political
analysis. Self-conscious reflection on those methods, however, is comparatively recent. The first
methodological statements of contemporary relevance grew out of the work of logicians, philoso-
phers, and historians in the nineteenth century, most importantly John Stuart Mill (1843/1872).
To be sure, these scholars were in a quest for science, which they understood as a unified venture;
the notion of a method that applies only to qualitative data would have made little sense to them.
At the turn of the twentieth century, a bifurcation appeared between quantitative and qualita-
tive methods (Platt 1992). The natural sciences, along with economics, moved fairly quickly and
without much fuss into the quantitative camp, whereas the humanities remained largely quali-
tative in orientation. The social sciences found themselves in the middle: Scholars aligned with
either camp, and some embraced both. For this reason, the qual/quant distinction has assumed
considerable importance in these fields, and very little importance outside of them.
Perhaps it is not coincidental that the quest for a method for qualitative inquiry has proceeded
further in the social sciences than in the humanities, and among the social sciences, one might argue
that political science has gone further than any other in developing a field of qualitative methods.
Accordingly, this review article focuses primarily on work produced by political scientists, with an
occasional glance at neighboring disciplines.
I begin by discussing the time-honored qual/quant distinction. What is qualitative data and
analysis, and how does it differ from quantitative data and analysis? I propose a narrow definition
of “qualitative” and explore its implications. I also explore in a speculative vein some of the
factors underlying the ongoing Methodenstreit between scholars who identify with quantitative
and qualitative approaches to social science. In the remainder of the article I discuss areas of
qualitative research that have been especially fecund over the past decade. These include case
selection, causal inference, and multimethod research.
In treating these subjects, I try to represent the current state of the field. However, representing
every position in every debate is not possible in the short space of this review. There are simply
too many ways to cut the cake. My selection incorporates work conducted by many scholars but—
inevitably—imposes my own views on the subject matter and neglects many important subjects.
I do not address concept formation (Collier & Gerring 2009, Goertz 2005); typological methods
(Collier et al. 2012, Elman 2005, George & Bennett 2005); set theory and qualitative comparative
analysis (Mahoney 2010, Mahoney & Sweet Vanderpoel 2015, Rihoux 2013); data archiving,
transparency, and replication (Elman & Kapiszewski 2014, Elman et al. 2010, Lieberman 2010);
comparative historical analysis (Mahoney & Thelen 2015); path dependence (Bennett & Elman
2006a, Boas 2007, Page 2006); the organizational features of qualitative methods (Collier & Elman
2008); interpretivism and ethnography (Schatz 2009, Yanow & Schwartz-Shea 2013); or other
methods of data collection grouped under the rubric of field research (Kapiszewski et al. 2015).
Fortunately, these topics are amply covered in recent work, as the foregoing citations attest. I
should also specify that the following discussion pertains mostly to causal inference, leaving aside
many knotty questions pertaining to descriptive inference (Gerring 2012).
Having acknowledged my biases and omissions—especially important in a review focused on
a subject as contested as qualitative methods—let us begin.

QUAL AND QUANT


Although the qual/quant distinction is ubiquitous, it is viewed differently by scholars identified
with each camp. As a rule, scholars whose work is primarily quantitative view science as a unified
endeavor that follows similar rules and assumptions. The naturalistic ideal centers on goals such
as replication, cumulation, and consensus, all of which point toward a single logic of inference
(Beck 2006, 2010; King et al. 1994).
By contrast, scholars whose work is primarily qualitative tend to view the two modes of inquiry
as distinctive, perhaps even incommensurable. They are more likely to believe that knowledge of
the world is embedded in theoretical, epistemological, or ontological frameworks from which we
can scarcely disentangle ourselves. They may also identify with the phenomenological idea that all
human endeavors, including science, are grounded in human experience. Given that experiences—
which are inevitably couched in positions of differential power and status—vary, one can reasonably
expect that the methods and goals of social science will also vary. The apparent embeddedness
of knowledge reinforces qualitative scholars’ predilection for pluralism, because it suggests that
there are fundamentally (and legitimately) different ways of going about business (Ahmed & Sil
2012; Bennett & Elman 2006b, pp. 456–57; Goertz & Mahoney 2012; Hall 2003; Mahoney &
Goertz 2006; Shapiro et al. 2004; Sil 2000; Yanow & Schwartz-Shea 2013).
Following the axiom that where one sits determines where one stands, we must also consider
the stakes in this controversy. Over the past century, quantitative work has been ascendant and
qualitative work has been cast in a defensive posture. Qualitative researchers are at pains to explain
their work in ways that those in the quantitative tradition can understand. Uncomfortable with the
prospect of absorption into a quantitative template, one may surmise that many qualitative scholars
have sought to emphasize the distinctiveness of what they do for strategic reasons—establishing a
nature preserve for endangered species, as it were.
Whatever its intellectual and sociological sources, the question of unity or disunity depends
upon how one chooses to define similarity and difference. Any two objects will share some charac-
teristics and differ in others. It follows that they may be either compared or contrasted, depending
upon the author’s point of view. Quantitatively inclined scholars may choose to focus on similari-
ties, whereas qualitatively inclined scholars may choose to focus on differences. Both are correct,
as far as they go. The half-empty/half-full conundrum seems difficult to overcome in this partic-
ular context.1 To put the matter in a more specific frame: Most political scientists probably agree
with Brady & Collier (2010) that there are “diverse tools” (the pluralistic angle) as well as “shared
standards” (the monist angle).2 But they do not necessarily agree on what those shared standards
are, or to what extent they should discipline the work of social science.
Any attempt to resolve the monism/pluralism question that begins with high-level concepts
(e.g., monism and pluralism, logic of inquiry, epistemology, commensurability, naturalism, inter-
pretivism) is probably doomed to failure. These words are loaded, and once they have been uttered
the die is cast. Those who identify with either camp are likely to dig in their heels.
I propose, therefore, to take a ground-level approach that seeks to avoid diffuse and loaded
concepts from the philosophy of science, focusing instead on matters of definition. What, exactly,
are qualitative data? And what, by contrast, are quantitative data? We shall then explore the
repercussions of this distinction, working toward some tentative conclusions that may resolve
some (though not all) aspects of the qual/quant debate.

Definitions
Qualitative and quantitative are usually understood as antonyms. The resulting polar concepts
may be viewed as a continuum (a matter of degrees) or as a set of crisp concepts (with clear-cut

boundaries). In either case, the two terms are defined in opposition to each other. Let us consider
some of the attributes commonly associated with these contrasting approaches.

1. This is nicely illustrated in recent arguments about causation (Reiss 2009).

2. A more radical pluralist view, associated with poststructuralism (Rosenau 1992), denies the existence of shared standards. I suspect that few political scientists hold that view.
Qualitative work is expressed in natural language, whereas quantitative work is expressed in
numbers and in statistical models. Qualitative work employs small samples, whereas quantitative
work is based on large-N analysis. Qualitative work draws on cases chosen in an opportunistic or
purposive fashion, whereas quantitative work employs systematic (random) sampling. Qualitative
work is often focused on particular individuals, events, and contexts, lending itself to an idiographic
style of analysis. Quantitative work is more likely to be focused on features that (in the researcher’s
view) can be generalized across a larger population, lending itself to a nomothetic style of analysis.
I shall suppose that all of the foregoing contrasts contain some truth; that is, they describe
patterns found in the work of social scientists, even if there are many exceptions. Let us further
suppose that these contrasts resonate with common usage of these terms, as reflected in existing
work on the subject [e.g., Bennett & Elman 2006b, Brady 2010, Caporaso 2009, Collier & Elman
2008, Glassner & Moreno 1989, Goertz & Mahoney 2012, Hammersley 1992, King et al. 1994,
Levy 2007, McLaughlin 1991, Morgan 2012, Patton 2002, Schwartz & Jacobs 1979, Shweder 1996,
Snow 1993 (1959), Strauss & Corbin 1998]. If so, we have usefully surveyed the field, but we have
not provided anything more than a semantic map of this rugged terrain. And because the foregoing
attributes are multidimensional, the subject remains elusive. We cannot bring methodological
clarity to it because “it” remains ambiguous.
My goal is to arrive at a minimal definition that bounds our subject in a fairly crisp fashion,
that resonates with current understandings (subsuming many of the meanings contained in the
passage above), and that does not trespass on other well-established terms. For example, it would be
inefficient, semantically speaking, to conflate qualitative with idiographic, ethnographic, or some
other term in this family of concepts. In addition, it would be helpful if the proffered definition
could (in a loosely causal sense) account for the various attributes commonly associated with the
terms “qualitative” and “quantitative” as surveyed above.
With these goals in mind, I propose that the defining feature of qualitative work is its use of
noncomparable observations—observations that pertain to different aspects of a causal or descrip-
tive question. As an example, one may consider the clues in a typical detective story. One clue
concerns the suspect’s motives; another concerns the suspect’s location at the time the crime was
committed; a third concerns a second suspect; and so forth. Each observation, or clue, draws from
a different population. This is why they cannot be arrayed in a matrix (rectangular) data set and
must be dealt with in prose (aka narrative analysis). It is also why we have difficulty counting such
observations. The time-honored question of quantitative research—What is the N?—is impossi-
ble to answer in a definitive fashion. Likewise, styles of inference based on qualitative data operate
somewhat differently from styles of inference based on quantitative data.
I therefore define quantitative observations as comparable (along whatever dimensions are
relevant) and qualitative observations as noncomparable, regardless of how many there are.
When qualitative observations are employed for causal analysis they may be referred to as
causal-process observations (Brady 2010), though I shall continue to employ the more general
and less bulky concept of qualitative observation, which applies to both descriptive and causal
inferences.
The notion of a qualitative or quantitative analysis is, accordingly, an inference that rests on
one or the other sort of data. If the work is quantitative, it enlists patterns of covariation found
in a matrix of observations and it usually analyzes them within a formal model (e.g., set theory/
qualitative comparative analysis, frequentist statistics, Bayesian probabilities, randomization in-
ference, synthetic control) to reach a descriptive or causal inference. If the work is qualitative, the
inference is based on bits and pieces of noncomparable observations that address different aspects
of a problem. Traditionally, these are analyzed in an informal fashion, an issue taken up below.
Some strategies of data collection seem inherently qualitative—e.g., unstructured interviews,
participant observation (ethnography), and archival work. This is because researchers are likely to
incorporate a wide variety of clues drawn from different sources and addressing different aspects of a
problem. The heterogeneity of the evidence makes the data noncomparable and hence qualitative.
Other data collection strategies such as standardized surveys are inherently quantitative, because
they involve counting large numbers of observations that are comparable by assumption. Of course,
they might not be comparable; we are speaking here of assumptions about the data generating
process, not about the Truth with a capital T. However, we cannot avoid assumptions about the
world, and these assumptions quite rightly lead researchers to adopt one or the other method of
apprehending reality.
It may seem that I am defining our subject too narrowly. After all, some studies that are com-
monly regarded as qualitative select multiple cases that are causally comparable to each other, as
discussed below. Indeed, this is absolutely critical to the closely related traditions of structured,
focused comparison (George 1979); comparative historical analysis (Mahoney & Thelen 2015);
and most-similar/most-different analysis (Mill 1843/1872). Yet, if studies conducted in these re-
search traditions rested solely on covariation across cases, they would be reducible to a qualitative
comparative analysis algorithm. And if they also incorporated variation through time, they would
be equivalent to a panel analysis (e.g., difference in differences, fixed effect, or synthetic control).
What makes studies in these traditions qualitative is their employment of noncomparable observa-
tions drawn from the chosen cases, often referred to as within-case evidence. This sort of evidence
is not reducible to an algorithm.
Evidently, in order to say anything about our subject one needs to circumscribe it. In doing
so, one defines in some phenomena and defines out other phenomena. There is no getting around
the stipulative quality of definitions. However, the choices made here are non-arbitrary insofar as
they resonate with everyday usage of the term (“qualitative”) and make sense of current practices.
So let us now turn to the payoff: What might we learn about qualitative and quantitative research
as defined in this rather narrow manner?

Converting Words to Numbers


No qualitative observation is immune from quantification. Interviews, pictures, ethnographic
notes, and texts drawn from other sources may be coded, either through judgment exercised by
coders or through mathematical algorithms (Grimmer & Stewart 2013). By coding I refer to the
systematic measurement of the phenomenon at hand—that is, reducing the available information
to a small number of dimensions, consistently defined across the units of interest. All that is
required, following our definition, is that multiple observations of the same kind be produced and,
voilà, quantitative observations are born. These may then be represented in the matrix format
familiar to those who work with rectangular data sets.
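
To make the notion of coding concrete, consider a minimal sketch in Python. Everything in it—the interview snippets, the coded dimensions, the keyword rules—is a hypothetical illustration, not a recommended procedure; the point is simply that once consistent rules are applied across sources, a rectangular data set appears.

```python
# A minimal sketch of "coding": reducing free-form qualitative notes to a
# small set of consistently defined dimensions. The interview snippets,
# dimensions, and keyword rules below are all hypothetical illustrations.
import pandas as pd

interviews = {
    "informant_1": "The reform felt rushed; business groups were furious.",
    "informant_2": "Most colleagues supported the reform, though quietly.",
    "informant_3": "Business was furious, but the public mood favored change.",
}

def code_text(text: str) -> dict:
    """Apply simple, consistent coding rules to one piece of text."""
    text = text.lower()
    return {
        "mentions_business_opposition": int("furious" in text),
        "mentions_support": int("support" in text or "favored" in text),
    }

# Each row is one source, each column one dimension: a rectangular data set.
matrix = pd.DataFrame({k: code_text(v) for k, v in interviews.items()}).T
print(matrix)
```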
Of course, there are often practical obstacles to quantification. Perhaps additional sources
(informants, pictures, texts) are unavailable. Perhaps, if available, they are not really comparable,
or they introduce problems of causal identification (e.g., heterogeneity across cases that could
pose a problem of noise or confounding). Alternatively, it may be possible to generate additional
comparable observations but not worthwhile, for example because the first observation is sufficient
to prove the point at issue. Sometimes, one clue is decisive. Nonetheless, in principle, if the
researcher’s assumptions of comparability are justified, qualitative data can become quantitative
data. The plural of anecdote is data.


Something is generally lost in the process of reducing qualitative information to quantitative
data. One must ignore the unique aspects of each qualitative observation to render them compa-
rable. If one wishes to generalize across a population, ignoring idiosyncratic features of the data
is desirable; but if one wishes to shed light on those heterogeneous features, the conversion of
qualitative to quantitative data will iron out the ruggedness of the landscape, obscuring variations
of theoretical interest. Information loss must be reckoned with.3
Finally, and perhaps most importantly, there is an asymmetry between qualitative and quanti-
tative data. One can convert qualitative data to quantitative data but not the reverse. It is a one-way
street. Once a piece of information is rendered in a matrix template, whatever unique aspects may
have adhered to that observation have been lost. Data reduction is possible, but not expansion.
The singular of data is not anecdote, which is to say one can never recover an anecdote from a
data point. Of course, one may return to the original source in order to explore a subject in a rich,
qualitative manner; but this exploration will be based on evidence other than that provided by a
matrix.

Contrasting Affinities
It follows from our discussion that the utility of qualitative and quantitative data is likely to vary
according to the researcher’s goals. In particular, I argue that qualitative inquiry is often especially
fruitful when research is at an exploratory stage and when it is case based. Other features of a
theory or an analysis do not seem to have a direct bearing on the relative utility of these varying
approaches.4
First, qualitative data are likely to be more important when not much is known about a subject
and when the goal of the researcher is to develop a new concept, uncover a new hypothesis, or shed
light on unknown causal mechanisms. Qualitative data are ideal for exploratory analysis. More
generally, one might argue that social science knowledge typically begins at a qualitative level and
then (sometimes) proceeds to a quantitative level. This is implicit in the notion that data can be
converted from qual to quant but not the reverse.
Granted, qualitative analysis may follow quantitative analysis, and occasionally it may help
to confirm patterns found in the quantitative analysis. More typically, however, the goal of a
qualitative inquiry that follows a quantitative analysis is to inform us about a different aspect of
a relationship where little is known a priori. Frequently, quantitative analysis establishes a causal
relationship between X and Y but leaves open-ended the mechanisms that might connect X to
Y. This missing information is often referred to as a black box, signifying how little we know
about this important feature of a causal relationship. In this context, qualitative inquiry plays an
exploratory role, suggesting potential mechanisms that may be tested subsequently in a quantitative
analysis.
Second, qualitative data are likely to be more useful insofar as a study is focused on a single
case (or event) or a small number of cases (or events). Such investigations bear close resemblance,
methodologically speaking, to a detective’s quest to explain a crime, which may be thought of
as a single event or a small number of associated events (e.g., in the case of a string of crimes
committed by the same person or group). The reason that these investigations often rest on
qualitative data is that the researcher wishes to know a lot about the chosen case/event, and this
requires a supple mode of investigation that allows one to draw different kinds of observations
from different populations.

Whether case-level analysis is warranted may rest on other, more fundamental aspects of the
analysis. For example, case-level analysis is more plausible if the cases of theoretical interest are
heterogeneous and scarce (e.g., nation-states) rather than homogeneous and plentiful (e.g., firms
or individuals); if the causal factor cannot be manipulated by the researcher; if the causal factor or
outcome is extremely rare; if the theory is focused on a single case or a small set of cases; and so
forth.

3. Of course, any rendering of a complex phenomenon involves some loss of information. This is true even for the most faithful and detailed descriptions of reality, such as those produced by ethnomethodologists (Garfinkel 1967).

4. For example, the long-standing distinction between research that seeks a complete explanation of an outcome (causes-of-effects) and research that narrows its scope to a single hypothesis (effects-of-causes) seems to bear ambivalently on the qual/quant divide. Note that a causes-of-effects explanation may be provided solely on the basis of quantitative data, e.g., a full regression model. Likewise, an effects-of-causes explanation may be provided based solely on qualitative data, e.g., a process-tracing analysis.

CASE SELECTION
We have observed that case-based analysis invariably contains qualitative observations (even if it
also incorporates quantitative observations). Consequently, the question of case selection—how
a case, or a small number of cases, is chosen from a large number of potential cases—is central to
qualitative analysis.
Quite a number of case-selection typologies have been proposed over the years, with a notice-
able acceleration in the past decade. Mill (1843/1872) proposes the method of difference (a.k.a.
most-similar method) and method of agreement (a.k.a. most-different method), along with sev-
eral others that have not gained traction. Lijphart (1971, p. 691) proposes six case study types:
atheoretical, interpretative, hypothesis generating, theory confirming, theory infirming, and de-
viant. Eckstein (1975) identifies five species: configurative idiographic, disciplined configurative,
heuristic, plausibility probe, and crucial case. Skocpol & Somers (1980) identify three logics of
comparative history: macrocausal analysis, parallel demonstration of theory, and contrast of con-
texts. Gerring (2007) and Seawright & Gerring (2008) identify nine techniques: typical, diverse,
extreme, deviant, influential, crucial, pathway, most similar, and most different. Levy (2008) iden-
tifies five case study research designs: comparable, most and least likely, deviant, and process
tracing. Rohlfing (2012, ch. 3) identifies five case types—typical, diverse, most likely, least likely,
and deviant—which are applied differently according to the purpose of the case study. Blatter
& Haverland (2012, pp. 24–26) identify three explanatory approaches—covariational, process
tracing, and congruence analysis—each of which offers a variety of case-selection strategies.
Building on these efforts, Gerring & Cojocaru (2016) propose a new typology that arguably
qualifies as the most comprehensive to date, incorporating much of the foregoing literature. Its
organizing feature is the goal that a case study is intended to serve, identified in column 1 of
Table 1. Column 2 specifies the number of cases (N) in the case study. Note that case studies
enlist a minimum of one or two cases, with no clearly defined ceiling (though at a certain point the
defining goal of a case study, the intensive analysis of a case, becomes dubious). Column 3 clarifies
which dimensions of the case are relevant for case selection: descriptive features (D), causal factors
of theoretical interest (X), background factors (Z), and/or the outcome (Y ). Column 4 specifies
the criteria used to select one or more cases from a universe of possible cases. Column 5 offers an
example of each case-selection strategy. In what follows, I offer a brief summary of the resulting
typology.
Before beginning, it is worth pointing out that the process of case selection is quantitative
(according to the proposed definition) insofar as it strives to select cases that are comparable
to each other (if there is more than one case) and to a larger population of theoretical interest.
The qualitative aspect of case-study research is encountered in the analysis of the chosen case(s),
sometimes referred to as process tracing.


Table 1  Case-selection strategies

Goals/strategies | N | Factors | Criteria for cases | Examples

I. Descriptive (to describe Y)
Typical | 1+ | D | Mean, mode, or median of D | Lynd & Lynd (1956 [1929])
Diverse | 2+ | D | Typical subtypes | Fenno (1977, 1978)

II. Causal (to explain Y)

a. Exploratory (to identify HX)
Extreme | 1+ | X or Y | Maximize variation in X or Y | Skocpol (1979)
Index | 1+ | Y | First instance of Y | Pincus (2011)
Deviant | 1+ | Z, Y | Poorly explained by Z | Alesina et al. (2001)
Most similar | 2+ | Z, Y | Similar on Z, different on Y | Epstein (1964)
Most different | 2+ | Z, Y | Different on Z, similar on Y | Karl (1997)
Diverse | 2+ | Z, Y | All possible configurations of Z (assumption: X ∈ Z) | Moore (1966)

b. Estimating (to estimate HX)
Longitudinal | 1+ | X, Z | X changes, Z constant or biased against HX | Friedman & Schwartz (1963)
Most similar | 2+ | X, Z | Similar on Z, different on X | Posner (2004)

c. Diagnostic (to assess HX)
Influential | 1+ | X, Z, Y | Greatest impact on P(HX) | Ray (1993)
Pathway | 1+ | X, Z, Y | X→Y strong, Z constant or biased against HX | Mansfield & Snyder (2005)
Most similar | 2+ | X, Z, Y | Similar on Z, different on X and Y, X→Y strong | Walter (2002)

Abbreviations: D, descriptive features (other than those to be described in the case study); HX, causal hypothesis of interest; P(HX), the probability of HX; X, causal factor(s) of theoretical interest; X→Y, apparent or estimated causal effect; Y, outcome of interest; Z, vector of background factors that may affect X and/or Y.

Many case studies are primarily descriptive, which is to say they are not organized around
a central, overarching causal hypothesis. Although writers are not always explicit about their
selection of cases, most of their decisions might be described as following a typical or diverse case
strategy. That is, they aim to identify a case or cases that exemplify a common pattern (typical) or
patterns (diverse). This follows from the minimal goals of descriptive analysis. Where the goal is
to describe, there is no need to worry about the more complex desiderata that might allow causal
leverage on a question of interest.
Other case studies are oriented toward causal analysis. A good case (or set of cases) for purposes
of causal analysis is generally one that exemplifies quasi-experimental properties, replicating the
virtues of a true experiment even in the absence of a manipulated treatment (Gerring & McDermott
2007). Specifically, for a given case (observed through time) or for several cases (compared to each
other), variation in X should not be correlated with other factors that are also causes of Y, which
might serve as confounders (Z) and generate a spurious (noncausal) relationship between X and Y.
Exploratory case studies aim to identify a hypothesis. Sometimes, the researcher begins with
a factor that is presumed to have fundamental influence on a range of outcomes. The research
question is, what outcomes (Y ) does X affect? More commonly, the researcher works backward
from a known outcome to its possible causes. The research question therefore is, what accounts
for variation in Y? Or, if Y is a discrete event, why does Y occur? The researcher may also have
an idea about background conditions (Z) that influence Y but are not of theoretical interest. The
purpose of the study, in any case, is to identify X, regarded as a possible or probable cause of Y.
Specific exploratory techniques may be classified as extreme, index, deviant, most different, most
similar, or diverse, as specified in Table 1.
Estimating cases aim to test a hypothesis by estimating a causal effect. This might mean a precise
point estimate along with a confidence interval (e.g., from a time-series or synthetic matching
analysis), or an estimate of the sign of a relationship, i.e., whether X has a positive, a negative, or
no relationship to Y. The latter is more common, not only because of the small size of the sample
(at the case level) but also because it is more likely to be generalizable across a population of
cases. In either situation, case selection rests on information about X and Z (not Y ). Two general
approaches—longitudinal and most similar—are viable, as outlined in Table 1.
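
A minimal sketch may help fix ideas about the longitudinal approach. The data below are simulated (not drawn from any study cited here), and the before/after comparison assumes that the background factors Z are constant over the window, precisely the assumption Table 1 flags.

```python
# A minimal sketch of a longitudinal (within-case) estimate of the *sign*
# of X's effect on Y: compare Y before and after X changes, assuming the
# background factors Z are constant over the window. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(20)
x = (t >= 10).astype(float)                      # X switches on at t = 10
y = 2.0 + 1.5 * x + rng.normal(0, 0.5, t.size)   # true effect is positive

effect = y[x == 1].mean() - y[x == 0].mean()
print(f"estimated effect: {effect:+.2f} -> sign: {'positive' if effect > 0 else 'negative'}")
```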
Diagnostic case studies help to confirm, disconfirm, or refine a hypothesis garnered from the
literature on a subject or from the researcher’s own ruminations, and to identify the generative
agent (mechanism) at work in that relationship. All the elements of a causal model—X, Z, and
Y—are generally involved in the selection of a diagnostic case. Specific strategies may be classified
as influential, pathway, or most similar, as shown in Table 1.

Note that virtually all of these case-selection strategies may be executed in an informal, qual-
itative fashion or by employing a quantitative algorithm. For example, a deviant case could be
chosen based on a researcher’s sense about which case is poorly explained by existing theories, or
it might be chosen by looking at residuals from a regression model. Discussion of the pros and
cons of algorithmic case selection can be found in Gerring (2017).
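
To illustrate the algorithmic route, the following sketch selects a deviant case as the observation with the largest absolute residual from a regression of Y on the background factors Z. The data and variable names are hypothetical, and a regression residual is one plausible operationalization of "poorly explained," not the only one.

```python
# A hedged sketch of algorithmic deviant-case selection: fit a model of Y
# on the background factors Z and pick the case with the largest absolute
# residual, i.e., the case most poorly explained by existing covariates.
# Variable names and data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50
Z = rng.normal(size=(n, 2))                      # background factors
y = Z @ np.array([1.0, -0.5]) + rng.normal(0, 1, n)
y[17] += 5.0                                     # plant one deviant case

model = sm.OLS(y, sm.add_constant(Z)).fit()
deviant_case = int(np.argmax(np.abs(model.resid)))
print(f"deviant case index: {deviant_case}")     # should recover case 17
```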

Validation
The reader may wonder, how does one know whether a designated strategy will achieve its intended
goal? Given a research objective, which is the best way to choose cases? Why the strategies listed
in Table 1 and not others? Evidently, there are serious problems of validation to wrestle with.
Several attempts have been made to assess varying case-selection strategies using Monte Carlo
techniques. The approach here is to work with data that have known parameters and then to
see how successful different case-selection strategies are in reproducing aspects of the population
of interest. Herron & Quinn (2016) assess estimating strategies, in which the case is intended
to measure causal effects; Seawright (2016a) assesses diagnostic strategies, in which the case is
designed to help confirm or disconfirm a causal hypothesis.
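
The flavor of these exercises can be conveyed with a stylized sketch. The data-generating process and the two strategies compared below are illustrative assumptions of mine, not reconstructions of the designs used by Herron & Quinn or Seawright; the sketch simply shows how a case-selection rule can be scored against known parameters.

```python
# A minimal Monte Carlo sketch in the spirit of these validation studies:
# simulate data with known case-level causal effects, select one case by
# each strategy, and ask how close the chosen case's true effect is to the
# population average effect. The DGP and strategies are stylized assumptions.
import numpy as np

rng = np.random.default_rng(2)
gaps = {"random": [], "extreme_on_Y": []}

for _ in range(2_000):
    n = 100
    effect = rng.normal(1.0, 0.5, n)      # known case-level effects
    y = effect + rng.normal(0, 1, n)      # observed outcome (all cases treated)
    target = effect.mean()

    picks = {"random": int(rng.integers(n)), "extreme_on_Y": int(np.argmax(y))}
    for name, i in picks.items():
        gaps[name].append(abs(effect[i] - target))

for name, g in gaps.items():
    print(f"{name:>12}: mean |case effect - population effect| = {np.mean(g):.2f}")
```

In runs of this sketch, the extreme-on-Y case is systematically farther from the population effect than a randomly chosen case, which accords with the skepticism about estimating strategies noted below.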
It would take some time to discuss these complex studies, so I shall content myself with sev-
eral summary judgments. First, case-selection techniques have different goals, so any attempt to
compare them must focus on the goals that are appropriate to that technique. A technique whose
purpose is exploratory (to identify a new hypothesis about Y ) cannot be judged by its efficacy in
identifying causal mechanisms, for example. Second, among these goals, estimating causal effects
is the least common—and, by all accounts, the least successful—so any attempt to gauge the effec-
tiveness of case-selection methods should probably focus primarily on exploratory and diagnostic
functions. Third, case-selection techniques are best practiced by taking into account change over
time in the key variables rather than static cross-sectional analyses, as most of the simulation exer-
cises appear to do. Finally, and most importantly, it is difficult and perhaps impossible to simulate
the complex features involved in an in-depth case analysis. The question of interest—which case(s)
would best serve my purpose?—is hard to model without introducing assumptions that prejudge
the results of the case study and are in this respect endogenous to the case-selection strategy.
In my opinion, testing the viability of case-selection strategies in a rigorous fashion would
involve a methodological experiment of the following sort. First, assemble a panel of researchers
with similar background knowledge of a subject. Second, identify a subject deemed ripe for case
study research, that is, one that is not well studied or has received no authoritative treatment and
is not amenable to experimental manipulation. Third, select cases algorithmically, following one
of the protocols laid out in Table 1. Fourth, randomly assign these cases to the researchers with
instructions to pursue all case study goals—exploratory, estimating, and diagnostic. Fifth, assemble
a panel of judges who are well versed in the subject of theoretical focus to evaluate how well each
case study achieved each of these goals. These could be scored on a questionnaire using ordinal,
Likert-style categories. Judges would be instructed to decide independently (without conferring),
though there might be a second round of judgments following a deliberative process in which they
share their thoughts and their preliminary decisions.
Such an experiment would be time consuming and costly (assuming participants receive some
remuneration). Moreover, it would need to be iterated across several research topics and with sev-
eral panels of researchers and judges in order to make strong claims of generalizability. Nonethe-
less, it might be worth pursuing given the possible downstream benefits.5

CAUSAL INFERENCE
Having discussed case selection, we proceed to case analysis, with a focus on the qualitative com-
ponents of that inquiry. Can causal inference be reached with qualitative data? Here, we encounter
the most mysterious and most contested aspect of qualitative methods.
Causal inference in a quantitative context usually refers to the estimation of a fairly precise causal
(treatment) effect. In qualitative contexts, the meaning of causal inference is more complicated.
First, inferences about a causal effect are apt to be looser and less precise (unless the relationship is
deemed to be deterministic). Typically, an author will attempt to determine whether X is a cause
of Y and whether its effect is positive or negative. Sometimes, an attempt will be made to account
for all the causes of an outcome (a causes-of-effects style of research). Invariably, there will be
an attempt to identify a mechanism. Indeed, the latter may form the main focus of analysis, as
it would be in situations where a causal effect is presumed at the outset, perhaps as a product of
quantitative analysis.
For qualitative inquiry, the distinction between internal validity (causal relationships for the
studied cases) and external validity (causal relationships inferred for a broader population) is
especially critical. This is because the studied cases are usually small in number and not chosen
randomly from a known population. In this section we are concerned about causal inferences
drawn for the studied cases, not for a larger population.

Rules of Thumb
Over the past several decades, scholars have attempted to identify a set of loosely framed rules to
guide the process of qualitative inquiry when the goal is causal inference (loosely defined).6 These
may be summarized as follows:

• Analyze sources according to their relevance (the source is pertinent to the question of theoretical interest), proximity (the source is in a position to know what he or she is claiming), authenticity (the source is not fake or reflecting the influence of someone else), validity (the source is not biased), and diversity (collectively, sources represent a diversity of viewpoints on the question at hand).

• When identifying a new causal factor or theory, look for one that (a) is potentially generalizable to a larger population, (b) is neglected in the existing literature on the subject, (c) greatly enhances the probability of an outcome (if binary) or explains a lot of variation on that outcome (if interval level), and (d) is exogenous (not explained by other factors).

• Canvass widely for rival explanations, which also serve as potential confounders. Treat them seriously (not as “straw men”), dismissing them only when warranted. Utilize this logic of elimination, where possible, to enhance the strength of the favored hypothesis.

• For each explanation, construct as many testable hypotheses as possible, paying close attention to within-case opportunities—e.g., mechanisms and alternative outcomes.

• Enlist counterfactual thought experiments in an explicit fashion, making clear which features of the world are being altered and which are assumed to remain the same in order to test the viability of a theory. Also, focus on periods when background features are stable (so they don’t serve as confounders) and minimize changes to the world (the minimal-rewrite rule) so that the alternate scenario is tractable.

• Utilize chronologies and diagrams to clarify temporal and causal interrelationships among complex causal factors. Include as many features as possible so that the timeline is uninterrupted.

These are the loose guidelines that students are taught and that scholars follow (we hope). Because of the informal nature of these rules, qualitative evidence is often regarded with suspicion. It is hard to articulate what a convincing inference might consist of and how to know it when one sees it. Are there methodological standards that apply to qualitative data analysis (e.g., process tracing)?

5. Note, however, that this experiment disregards qualitative judgments by researchers that might be undertaken after an algorithmic selection of cases. These qualitative judgments might serve as mediators. It could be, for example, that some case-selection strategies work better when the researcher is allowed to choose from a set of potential cases that meet the stipulated case-selection criteria based on knowledge of the potential cases. One must also consider the problem of generalizability that stems from the use of algorithmic procedures for selecting cases. It could be that subjects for which algorithmic case selection is feasible (i.e., for which values for X, Z, and Y can be measured across a large sample) are different from subjects for which algorithmic case selection is infeasible. If so, we could not generalize the results of this experiment to the latter genre of case study research.

6. Readers are referred to Beach & Pedersen (2013), Bennett & Checkel (2015), Brady & Collier (2004), Collier (2011), George (1979), Hall (2006), Jacobs (2015), Mahoney (2012), Roberts (1996), Schimmelfennig (2015), and Waldner (2012; 2015a,b).

Inferential Frameworks
To remedy this situation, a number of recent studies try to make sense of qualitative data and
to impose order on the seeming chaos. Proposed frameworks include set theory (Mahoney 2012,
Mahoney & Sweet Vanderpoel 2015), acyclic graphs (Waldner 2015b), or, most commonly,
Bayesian inference (Beach & Pedersen 2013, pp. 83–99; Bennett 2008, 2015; Crandell et al. 2011;
George & McKeown 1985; Gill et al. 2005; Humphreys & Jacobs 2015, 2018; McKeown 1999;
Rohlfing 2012, pp. 180–99).
These efforts have performed an invaluable service to the cause of qualitative inquiry, fitting
it into frameworks that are already well established for quantitative inquiry. It should be no
surprise that there are multiple qualitative frameworks, just as there are multiple frameworks for
quantitative methodology. Scholars may debate whether, or to what extent, these frameworks are
compatible with each other; this important debate is orthogonal to the present topic. The point to
stress is that qualitative inquiry can be understood within the rubric of general causal frameworks.
There is, in this sense, a unifying logic of inquiry.
Thus far, applications of set theory, acyclic graphs, and Bayesian inference to qualitative meth-
ods have focused on making sense of the activity rather than providing a practical guide to research.
It remains to be seen whether these frameworks can be developed in such a way as to alter the
ways that qualitative researchers go about their business. Let me illustrate.
Some years ago, Van Evera (1997) proposed a fourfold typology of tests that has since been
widely adopted (e.g., Bennett & Checkel 2015, p. 17; George & Bennett 2005; Mahoney & Sweet

Vanderpoel 2015; Waldner 2015a). According to this typology, a “hoop” test is necessary (but
not sufficient) for demonstrating HX; a “smoking-gun” test is sufficient (but not necessary) for
demonstrating HX; a “doubly-decisive” test is necessary and sufficient for demonstrating HX; and
a “straw-in-the-wind” test is neither necessary nor sufficient, constituting weak or circumstantial
evidence (Van Evera 1997, pp. 31–32). These concepts, diagramed in Table 2, are useful
for classifying the nature of evidence according to a researcher’s judgment. However, the hard
question—the judgment itself—is elided. When does a particular piece of evidence qualify as a
hoop, smoking-gun, doubly decisive, or straw-in-the-wind test (or something in between)?

Table 2  Qualitative tests and their presumed inferential role

Tests | Necessary | Sufficient
Hoop | ✓ |
Smoking gun | | ✓
Doubly decisive | ✓ | ✓
Straw in the wind | |
Likewise, Bayesian frameworks are useful for combining evidence from diverse quarters in a
logical fashion with the use of subjective assessments—e.g., the probability that a hypothesis is
true ex ante, and assessments of the probability that the hypothesis is true if a piece of evidence
(stipulated in advance) is observed. The hard question, again, is the case-specific judgment. Con-
sider the lengthy debate that has ensued over the reasons for electoral system choice in Europe
(Kreuzer 2010). Humphreys & Jacobs (2015) use this example to sketch out their application of
Bayesian inference to qualitative research. In particular, they explore the “left threat” hypothesis,
which suggests that the presence of a large left-wing party explains the adoption of proportional
representation (PR) in the early twentieth century (Boix 1999). The authors point out that “for
cases with high left threat and a shift to PR, the inferential task is to determine whether they would
have, or would not have, shifted to PR without left threat” (Humphreys & Jacobs 2015, p. 664;
italics original). Bayesian frameworks do nothing to ease this inferential task, which takes the form
of a counterfactual thought experiment. Similar judgments are required by other frameworks: set
theory, acyclic graphs, and so forth.
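
To see both the appeal and the limits of the Bayesian framing, one can express Van Evera's test types as subjective likelihoods, P(E|H) and P(E|not-H), and let Bayes' rule do the cumulating. The numbers below are illustrative assumptions: the framework converts them into a posterior mechanically, but it cannot tell us whether they are the right numbers, which is exactly the case-specific judgment at issue.

```python
# A hedged sketch of how Van Evera's test types map onto Bayesian updating.
# A test is characterized by two subjective likelihoods: P(E|H) and
# P(E|~H). A hoop test has P(E|H) near 1 (failing it sinks H); a smoking-gun
# test has P(E|~H) near 0 (passing it strongly confirms H). All numbers
# below are illustrative assumptions, not estimates from any study.
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5
print("hoop passed:      ", round(posterior(prior, 0.95, 0.50), 2))  # weak boost
print("smoking gun found:", round(posterior(prior, 0.40, 0.02), 2))  # strong boost
print("doubly decisive:  ", round(posterior(prior, 0.95, 0.02), 2))
```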
To get a feel for the level of detail required in qualitative research, let us take a closer look at a
particular inquiry. In her study of how policy makers avoid political backlash when they attempt to
tax economic elites, Fairfield (2013, pp. 55–56; see also Fairfield 2015) provides a scrupulous blow-
by-blow account of the sleuthing required to reach each case-level inference. One of the three
countries in Fairfield’s study is Chile, which is observed during and after a recent presidential
election. Fairfield explains:

During the 2005 presidential campaign, right candidate Lavín blamed Chile’s persistent inequality on
the left and accused President Lagos of failing to deliver his promise of growth with equity. Lagos
responded by publicly challenging the right to eliminate 57 bis, a highly regressive tax benefit for
wealthy stockholders that he called “a tremendous support for inequality.” The right accepted the
challenge and voted in favor of eliminating the tax benefit in congress, deviating from its prior position
on this policy and the preferences of its core business constituency.

The following three hypotheses encompass the main components of my argument regarding why
the right voted in favor of the reform:


Hypothesis 1. Lagos’ equity appeal motivated the right to accept the reform, due to concern over public
opinion.
Hypothesis 2. The timing of the equity appeal—during a major electoral campaign—contributed to its
success.
Hypothesis 3. The high issue-salience of inequality contributed to the equity appeal’s success.

The following four observations, drawn from different sources, provide indirect, circumstantial sup-
port for Hypothesis 1:

Observation 1a . . . : The Lagos administration considered eliminating 57 bis in the 2001 Anti-Evasion
reform but judged it politically infeasible given business-right opposition (interview: Finance Ministry-
a, 2005).
Observation 1b: The Lagos administration subsequently tried to reach an agreement with business to
eliminate 57 bis without success (interview, Finance Ministry-b, 2005).
Observation 1c: Initiatives to eliminate the exemption were blocked in 1995 and 1998 due to right
opposition. (Sources: congressional records, multiple interviews)


Observation 1d: Previous efforts to eliminate 57 bis did not involve concerted equity appeals. Although
Concertación governments had mentioned equity in prior efforts, technical language predominated,
and government statements focused much more on 57 bis’ failure to stimulate investment rather than its
regressive distributive impact (congressional records, La Segunda, March 27, 1998, El Mercurio, April
1, 1998, Interview, French-Davis, Santiago, Chile, Sept. 5, 2005).
Inference: These observations suggest that right votes to eliminate 57 bis would have been highly
unlikely without some new, distinct political dynamic. Lagos’ strong, high-profile equity appeal, in the
unusual context of electoral competition from the right on the issue of inequality, becomes a strong
candidate for explaining the right’s acceptance of the reform.

The appendix continues in this vein for several pages, focusing relentlessly on explaining the
behavior of one particular set of actors in one event, i.e., the motivation of the right wing in
favoring the reform. This event is just one of a multitude of events discussed in connection with
the Chilean case study, to which must be added the equally complex set of events occurring in
Argentina and Bolivia. Clearly, reaching case-level inferences is complicated business.
One may conclude that if researchers agreed on case-level judgments, then general frame-
works could successfully cumulate those judgments into higher-level inferences, accompanied by
a (very useful) confidence interval. But if one cannot assume case-level consensus, conclusions
based on qualitative judgments combined through a Bayesian (or other) framework represent one
researcher’s views, which might vary appreciably from another’s. Readers who are not versed in
the intricacies of Chilean politics will have a hard time ascertaining whether Fairfield’s judgments
are correct.

A Crowd-Based Approach
In principle, this sort of problem could be overcome with a crowd-based approach. Specifically, one
might survey a panel of experts, chosen randomly or with an aim to represent diverse perspectives,
on each point of judgment. One could then cumulate these judgments into an overall inference,
in which the confidence interval reflects the level of disagreement among experts (among other
things). Unfortunately, not just any crowd will do. The extreme difficulty of case study research
derives in no small part from the expertise that case study researchers bring to their task. I cannot
envision a world in which lay coders recruited through Amazon Mechanical Turk or Facebook would replace
that expertise, which is honed through years of work on a particular problem and in a particular
site (a historical period, country, city, village, organization, etc.).
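
To make the aggregation step concrete, here is a minimal sketch under hypothetical assumptions: each expert reports a probability that a case-level claim is true, and the panel is summarized by a mean with a bootstrap interval whose width reflects disagreement. Nothing here addresses the recruitment and elicitation problems discussed below.

```python
# A minimal sketch of cumulating expert judgments: each expert reports a
# probability that a case-level claim is true; the panel is summarized by a
# mean and a bootstrap interval whose width reflects disagreement.
# The panel and its judgments are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
judgments = np.array([0.8, 0.7, 0.9, 0.4, 0.75])  # five experts' P(claim true)

boot = [rng.choice(judgments, judgments.size).mean() for _ in range(5_000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"panel estimate: {judgments.mean():.2f}  (95% interval: {lo:.2f}-{hi:.2f})")
```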
To be credible, a crowd-based approach to the problem of judgment would need to enlist
the small community of experts who study a subject and can be expected to make knowledgeable
judgments about highly specific questions such as the motivations of right-wing Chilean politicians.
In the previous example, it would entail enlisting scholars versed in contemporary Chilean politics.
This procedure is conceivable, but difficult to implement. How would one identify a random, or
otherwise representative, sample? What is the sampling frame? How would one motivate scholars
to undertake the task? How would one elicit honest judgments about the specific questions on a
questionnaire, uncorrupted by broader judgments about the theoretical question at hand (which
the experts would probably be able to infer)?
Likewise, if one goes to the trouble of constructing a common coding frame (a questionnaire),
an online system for recording responses, a system of recruitment, and a Bayesian (or some other)
framework for integrating judgments, the considerable investment of time and money of such a
venture would probably justify extending the analysis to many cases, chosen randomly, so that a
representative sample can be attained and stochastic threats to inference minimized. In this fashion,
procedures to integrate qualitative data into a quantitative framework seem likely to morph from
case studies into cross-case coding exercises. This is not to argue against the idea; it is simply to
point out that any standardization of procedures tends to work against the intensive focus on one
or several cases that (by my definition) characterizes case study research.

MULTIMETHOD RESEARCH
In multimethod research both qual and quant styles of evidence are brought to bear on the same
general research question (Brewer & Hunter 2006, Goertz 2017, Harrits 2011, Lieberman 2005,
Seawright 2016b). Although multimethod research is increasingly common, there are serious
questions about its effectiveness. Doing more than one thing might mean doing multiple things
poorly, by dint of limited time, space, or expertise. Writers have also questioned whether qualitative
and quantitative analyses speak to one another productively (Ahmed & Sil 2012, Lohmann 2007).
In discussing this question it is important not to confuse disagreement with incommensurability.
If qual and quant tests of a proposition are truly independent, there is always the possibility that
they will elicit different, perhaps even directly contradictory, answers. For example, the most
common style of multimethod analysis combines a quantitative analysis of many units with an
in-depth, qualitative (or at least partially qualitative) analysis of a single case or a small set of cases,
which Lieberman (2005) refers to as a nested analysis. Occasionally, these two analyses reach
different conclusions about a causal relationship (though authors might not always bring these
disagreements to the fore). However, the same disagreements also arise from rival quantitative
analyses (e.g., conducted with different samples or specifications) and rival qualitative analyses
(e.g., focused on different research sites or generated by different researchers). Disagreement
about whether X causes Y, or about the mechanisms at work, does not entail that multimethod
research is unavailing. Sometimes, triangulation does not confirm one’s hypothesis. It is still useful
information; and for those worried about confirmation bias, it is critical.
In any case, Seawright (2016b) points out that when qualitative and quantitative evidence are
combined, the analyses are usually oriented toward somewhat different goals. Typically, a large-N
cross-case analysis is focused on measuring a causal effect, whereas a small-N within-case analysis
is focused on identifying a causal mechanism. As such, the two styles of evidence cannot directly
conflict because their objectives are different. They nonetheless inform each other in a useful
fashion.


This leaves open another way of viewing multimethod research. Sometimes, the qualitative
and quantitative aspects of research are profitably united within a larger research cycle that in-
cludes a diversity of methods and authors (Lieberman 2016). This allows scholars with a qual
or quant bent to do what they do best, concentrating their efforts on their particular skill set
and on one particular context they can become intimately acquainted with. The research cycle
also mitigates a presentational problem—stuffing results from myriad analyses into a 10,000-word
article.
Unfortunately, the research cycle approach to multimethod research also encounters obstacles.
In particular, one must wonder whether cumulation can occur successfully across diverse studies
utilizing diverse research methods. Note that political science work is not highly standardized, even
when focused on the same research question and when utilizing the same quantitative method.
This inhibits the integration of findings and helps to account for the scarcity of meta-analyses in
political science. Qualitative studies are even less likely to be standardized in a way that allows
for their integration into an ongoing research trajectory. Inputs and outputs may be defined
and operationalized in disparate ways, or perhaps not clearly operationalized at all. And because
samples are not randomly chosen, any aggregation of studies cannot purport to represent a larger
population in an unbiased fashion.
There is yet another angle on this topic that offers what is perhaps a more optimistic, not to
mention realistic, reading of the multimethod ideal. Rather than conceptualizing qualitative and
quantitative research as separate research designs, we might regard them as integral components
of the same design.
My impression is that purely qualitative studies are nowadays increasingly rare. Even where qualitative data carry the main burden of inference, they are often supplemented by a large-N cross-case analysis or a large-N within-case analysis (in which observations are drawn from a lower level of analysis). Likewise, there are few purely quantitative analyses, because such
studies are usually (always?) accompanied by qualitative observations of one sort or another. At a
minimum, qualitative data are trotted out by way of illustration; at a maximum, qualitative data
are essential to causal inference.
In this vein, a number of recent studies highlight the vital role played by qualitative data even
when the research design is experimental or quasi-experimental. Although we tend to think of these
designs as being quantitative—because they generally incorporate a large number of comparable
units—they may also contain important qualitative components.
There is, to begin with, the problem of research design. Without an ethnographic understanding of the research site and the individuals likely to serve as subjects, it is impossible to design an experiment that adequately tests the hypothesis of interest; a confounder, after all, cannot be defined in the abstract. In-depth case-based understanding is especially important in the
context of field experimentation, where the context is likely to influence how subjects react to a
given treatment.
Second, one must assess potential threats to inference. Where the assignment is randomized,
ex-ante comparability is assured. However, ex-post comparability remains a serious threat to
inference. For example, experiments often face problems of compliance, so it is incumbent on
the researcher to ascertain whether subjects adhered to the prescribed protocol and, if not, which
subjects violated it. When significant numbers of subjects attrit (withdraw from participation), it
is important to determine what motivated their withdrawal and what sort of subjects were inclined
to withdraw. In field experiments, where a significant time lag often separates the treatment and the outcome of theoretical interest, one must try to determine whether the subjects under study communicated with one another, introducing potential problems of interference or contamination (spillover across treatment and control groups).
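
A minimal sketch of these ex-post diagnostics (Python, simulated records, invented column names) might compute compliance and attrition rates by arm and test for differential attrition; the qualitative task of learning why subjects deviated or withdrew is, of course, untouched by such a table.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Simulated post-treatment records; column names are invented.
df = pd.DataFrame({
    "assigned": [1, 1, 1, 0, 0, 0, 1, 0, 1, 0],   # 1 = treatment arm
    "complied": [1, 0, 1, 1, 1, 1, 1, 1, 0, 1],   # followed the protocol?
    "attrited": [0, 1, 0, 0, 0, 1, 0, 0, 1, 0],   # withdrew from the study?
})

# Compliance and attrition rates by experimental arm.
print(df.groupby("assigned")[["complied", "attrited"]].mean())

# Differential attrition threatens ex-post comparability: test whether
# dropout is independent of assignment.
chi2, p, _, _ = chi2_contingency(pd.crosstab(df["assigned"], df["attrited"]))
print(f"differential attrition test: chi2={chi2:.2f}, p={p:.2f}")
```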

Third, there is a question of causal mechanisms: Assuming a treatment effect can be measured
without bias, what is it that accounts for the connection between X and Y?
Finally, there are questions of generalizability. To determine the external validity of an exper-
iment, one must have a good sense of the research site and the subjects who have been studied.
Specifically, one must be able to assess the extent to which these individuals, and this particular
treatment effect, can be mapped across other, potentially quite different, settings.
These issues—research design, inferential threats, causal mechanisms, and generalizability—
are often assessable with qualitative data. Indeed, they may be assessable only by means of a rich, contextual knowledge of a research project as it unfolds at a particular site.
Paluck (2010) argues, further, that experimental designs may be combined with qualitative
measurement to access outcomes that would not be apprehended with traditional quantitative
measures. As an example, she explores Chattopadhyay & Duflo’s (2004) study of women leaders
in India. While praising this landmark study, Paluck (2010, p. 61) points out that participant obser-
vation of women leaders outside of the council settings—such as in their homes, where they visit
with other women—could have revealed whether they were influenced by women constituents in
more informal settings. Intensive interviews could have compared social processes in villages with
female or male council leaders to reveal how beliefs about women leaders’ efficacy shift. For exam-
ple, did other council members, elders, or religious leaders make public statements about female
leaders or the reservation system? Was there a tipping point at which common sentiment in villages with female leaders diverged from that in villages with male leaders? Such qualitatively generated
insights could have enabled this study to contribute more to general theories of identity, leadership,
and political and social change. Moreover, ethnographic work could have compared understand-
ings of authority and political legitimacy in villages with female- and male-led councils. Do the first
female leaders inspire novel understandings of female authority and legitimacy, or are traditional
gender narratives invoked just as frequently to explain women’s new power and position?
Paluck concludes that experiments provide an opportunity for qualitative analysis, one that is
grossly underutilized. Quantitative scholars who are enamored of experiments are well advised to
pursue a parallel investigation along ethnographic lines. Qualitative scholars who wish to under-
stand causal relationships are well advised to conduct experiments to facilitate their analysis.
For example, suppose one is interested in the impact of modernization across a range of mea-
surable outcomes. To address this question, one might construct a field experiment in which an
agent of modernization—e.g., a bridge, road, harbor, or radio tower—is randomized across sites,
allowing for an opportunity to systematically compare treatment and control groups over time.
This design not only allows for unbiased estimates of a causal effect; it also affords an occasion for
participant observation focused on how subjects respond to the treatment, what sense they make
of their changing world, and what mechanisms are at work.
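
A toy version of this hypothetical design (Python, simulated numbers) makes the quantitative half explicit; the participant-observation half has no comparable shorthand.

```python
import numpy as np
import pandas as pd

# Forty hypothetical sites; the "agent of modernization" is randomized.
rng = np.random.default_rng(1)
sites = pd.DataFrame({"treated": rng.permutation([1] * 20 + [0] * 20)})
sites["outcome"] = 2.0 + 0.8 * sites["treated"] + rng.normal(size=40)

# Random assignment licenses a simple difference in means as an
# unbiased estimate of the average treatment effect.
ate = (sites.loc[sites["treated"] == 1, "outcome"].mean()
       - sites.loc[sites["treated"] == 0, "outcome"].mean())
print(f"estimated treatment effect: {ate:.2f}")
```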
Where the treatment is not randomly assigned (i.e., in observational research), there are ad-
ditional issues pertaining to potential assignment (or selection) bias. Here, qualitative data often
come into play (Dunning 2012). For example, Ferwerda & Miller (2014) argue that devolution of
power reduces resistance to foreign rule. They focus on France during World War II, when the
northern part of the country was ruled directly by German forces and the southern part was ruled
indirectly by the Vichy regime headed by Marshal Pétain. The key methodological assumption
of their regression discontinuity design is that the line of demarcation was assigned in an as-if
random fashion. For the authors, and for their critics (Kocher & Monteiro 2015), this assumption
requires in-depth qualitative research—research that promises to uphold, or call into question,
the authors’ entire analysis.
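
The quantitative half of such an as-if-random check is easy to sketch (Python, simulated data, invented covariate names): compare pretreatment covariates for units just on either side of the boundary. What no such table can supply, and what the debate between these authors turns on, is the qualitative account of how the line was actually drawn.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

# Simulated municipalities along a boundary; all names are invented.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "dist_to_line_km": rng.uniform(-50, 50, 200),      # signed distance
    "population_1936": rng.normal(5000, 800, 200),     # pretreatment covariate
})
df["north"] = (df["dist_to_line_km"] > 0).astype(int)  # side of the line

# Balance test within a narrow bandwidth around the boundary.
near = df[df["dist_to_line_km"].abs() < 10]
t, p = ttest_ind(near.loc[near["north"] == 1, "population_1936"],
                 near.loc[near["north"] == 0, "population_1936"])
print(f"balance on population_1936: p = {p:.2f}")  # imbalance flags trouble
```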
As a second example, we may consider Romer & Romer’s (2010) analysis of the impact of tax
changes on economic activity. Because tax changes are nonrandom, and likely to be correlated with
the outcome of interest, anyone interested in this question must be concerned with bias arising
from the assignment of the treatment. To deal with this threat and to elucidate the motivation of
tax policy changes in the postwar era, the authors make use of the narrative record provided by
presidential speeches and congressional reports. This allows them to distinguish policy changes
that might have been motivated by economic performance from those that may be considered
as-if random. By focusing solely on the latter, they claim to provide an unbiased test of the theory
that tax increases are contractionary.
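
In stylized form (Python, simulated data, invented motivation labels), the narrative strategy amounts to a qualitative filter applied before estimation: only tax changes coded as unrelated to current economic conditions enter the regression.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated tax-change events with narrative motivation codes (invented).
rng = np.random.default_rng(3)
events = pd.DataFrame({
    "tax_change": rng.normal(size=60),
    "motivation": rng.choice(["countercyclical", "deficit", "ideological"], 60),
})
events["gdp_growth"] = -0.3 * events["tax_change"] + rng.normal(size=60)

# The qualitative coding does the identifying work: drop changes that
# respond to contemporaneous economic performance.
exog_events = events[events["motivation"] != "countercyclical"]
fit = sm.OLS(exog_events["gdp_growth"],
             sm.add_constant(exog_events["tax_change"])).fit()
print(fit.params)  # slope: effect of as-if-random tax changes on growth
```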

CONCLUSIONS
I began by stipulating a definition for quantitative and qualitative research. If the work is quan-
titative, it enlists patterns of covariation found in a matrix of observations and analyzed within a
formal model to reach a descriptive or causal inference. If the work is qualitative, the inference is
based on bits and pieces of noncomparable observations that address different aspects of a problem
and are traditionally analyzed in an informal fashion. If one accepts this definition, it follows that
one can convert qualitative data to quantitative data (e.g., through coding) but not the reverse.
It also follows that each approach to social science has characteristic strengths and weaknesses.
Qualitative data are generally (but not always) more useful insofar as a study is exploratory and is
focused on a single case or a small number of cases.
In the third section, I presented a typology of case-selection strategies whose organizing feature
is the goal that the case study is intended to serve. I argued (implicitly) that methods of case selection
are considerably more differentiated than existing work suggests.
In the fourth section, I discussed the application of qualitative data to the goal of causal infer-
ence, beginning with loose guidelines and proceeding to general frameworks such as set theory,
acyclic graphs, and Bayesian probability. These general frameworks have demonstrated (at least to
my mind) that qualitative and quantitative observations can be incorporated into a unified frame-
work in the pursuit of causal inference. They have not yet provided practical tools for the conduct
of qualitative inquiry (more on this below).
In the fifth section, I discussed multimethod research, which in this context refers to the combi-
nation of quantitative and qualitative data in the same analysis, the same study, or in various studies
devoted to the same research question (a research cycle). This pluralistic approach seems to offer
the possibility of combining the strengths of both styles of research while avoiding their respective
weaknesses, and it seems to be reflected in current trends within the discipline. Acknowledging the
burdens imposed upon the researchers (who must master a diverse range of skills), the limitations
imposed by journals with stringent word counts, and the problem of cumulating results across
diverse methods, I would argue that the multimethod ideal nonetheless offers a plausible solution
to the pervasive conflict between quantitative and qualitative styles of research.
By way of conclusion to this short review of a very broad subject, I shall invoke the funda-
mental tradeoff in scientific endeavor between a context of discovery (i.e., exploration, innova-
tion) and a context of justification (i.e., appraisal, demonstration, proof, verification/falsification)
(Reichenbach 1938). Although both are acknowledged to be essential to scientific progress, the
field of methodology is strongly aligned with the latter. This is because the task of justification is
amenable to systematic rules that can be presented in academic journals, summarized in textbooks,
and taught in courses. By contrast, the task of discovery is a comparatively anarchistic affair. There
are no rules for finding new things. There may be some informal rules of thumb, analogies, pieces
of advice, but nothing one could build an academic field around.
I am exaggerating, to be sure; but this dichotomy is useful in illustrating a core feature of
qualitative inquiry. If exploratory work is inherently hostile to systematic method (Feyerabend
1975), and if qualitative approaches are uniquely insightful during early stages of research, it may be
a mistake to suppose that systematic rules of method can apply, or should always apply, to this genre
of research. “Soaking and poking” (Fenno 1978) may be useful precisely because it is not hemmed
in by rigid rules of procedure. Indeed, the very features that inhibit the achievement of falsifiability,
replicability, and cumulation may enhance the possibility of discovery. For example, qualitative
work is often derided as providing multiple angles on a subject, all of which are plausible and none
of which can be definitively proven or disproven. This flows from the narrow but intensive manner
of study, which may be summarized as large-K (variables), small-N (observations). Qualitative work
is also seen as post hoc, adjusting theories to fit the facts or adjusting facts to fit the theories (i.e.,
looking for settings in which a theory might be true). These are indeed vices if the researcher’s
goal is to avoid Type I (false-positive) errors. But they are virtues insofar as one wishes to discover new, and
potentially true, things about the world.
I do not wish to ghettoize qualitative inquiry as purely exploratory. Noncomparable bits of
evidence have a vital role to play in confirming and disconfirming theories, as the foregoing
discussion illustrates. However, insofar as qualitative inquiry contributes to the discovery of new
concepts, new hypotheses, and new frameworks of analysis, we must come to terms with the nature
of that inquiry, which is at odds with current trends in social science methodology. To honor the
contributions of qualitative research in social science is to honor the role of exploratory research
in the progress of social science.

DISCLOSURE STATEMENT
The author is not aware of any affiliations, memberships, funding, or financial holdings that might
be perceived as affecting the objectivity of this review.

ACKNOWLEDGMENTS
I am grateful to Colin Elman, Tasha Fairfield, Evan Lieberman, Jim Mahoney, and David Waldner
for comments and suggestions on this manuscript.

LITERATURE CITED
Ahmed A, Sil R. 2012. When multi-method research subverts methodological pluralism—or, why we still
need single-method research. Perspect. Polit. 10(4):935–53
Alesina A, Glaeser E, Sacerdote B. 2001. Why doesn’t the US have a European-style welfare state? Brookings
Pap. Econ. Act. 2:187–277
Beach D, Pedersen RB. 2013. Process-Tracing Methods: Foundations and Guidelines. Ann Arbor: Univ. Mich.
Press
Beck N. 2006. Is causal-process observation an oxymoron? Polit. Anal. 14(3):347–52
Beck N. 2010. Causal process “observations”: oxymoron or (fine) old wine. Polit. Anal. 18(4):499–505
Bennett A. 2008. Process tracing: a Bayesian approach. See Box-Steffensmeier et al. 2008, pp. 702–21
Bennett A. 2015. Disciplining our conjectures: systematizing process tracing with Bayesian analysis. See
Bennett & Checkel 2015, pp. 276–98
Bennett A, Checkel JT, eds. 2015. Process Tracing: From Metaphor to Analytic Tool. Cambridge, UK: Cambridge
Univ. Press
Bennett A, Elman C. 2006a. Complex causal relations and case study methods: the example of path dependence.
Polit. Anal. 14(3):250–67
Bennett A, Elman C. 2006b. Qualitative research: recent developments in case study methods. Annu. Rev.
Polit. Sci. 9:455–76
Blatter J, Haverland M. 2012. Designing Case Studies: Explanatory Approaches in Small-N Research. Basingstoke,
UK: Palgrave Macmillan
Boas TC. 2007. Conceptualizing continuity and change: the composite-standard model of path dependence.
J. Theor. Polit. 19(1):33–54
Boix C. 1999. Setting the rules of the game: the choice of electoral systems in advanced democracies. Am.
Polit. Sci. Rev. 93(3):609–24
Box-Steffensmeier J, Brady H, Collier D, eds. 2008. Oxford Handbook of Political Methodology. Oxford, UK:
Oxford Univ. Press
Brady HE. 2010. Data-set observations versus causal-process observations: the 2000 U.S. presidential election.
See Brady & Collier 2010, pp. 237–42
Brady HE, Collier D, eds. 2004. Rethinking Social Inquiry: Diverse Tools, Shared Standards. Lanham, MD:
Rowman & Littlefield
Brady HE, Collier D, eds. 2010. Rethinking Social Inquiry: Diverse Tools, Shared Standards. Lanham, MD:
Rowman & Littlefield. 2nd ed.
Brewer J, Hunter A. 2006. Foundations of Multimethod Research: Synthesizing Styles. Thousand Oaks, CA: Sage
Caporaso J. 2009. Is there a quantitative-qualitative divide in comparative politics? In The SAGE Handbook of
Comparative Politics, ed. T Landman, N Robinson, pp. 67–83. Thousand Oaks, CA: Sage
Chattopadhyay R, Duflo E. 2004. Women as policy makers: evidence from a randomized policy experiment
in India. Econometrica 72(5):1409–43
Collier D. 2011. Understanding process tracing. PS Polit. Sci. Polit. 44(4):823–30
Collier D, Elman C. 2008. Qualitative and multimethod research: organizations, publications, and reflections on integration. See Box-Steffensmeier et al. 2008, pp. 779–95
Collier D, Gerring J, eds. 2009. Concepts and Method in Social Science: The Tradition of Giovanni Sartori. New
York: Routledge
Collier D, LaPorte J, Seawright J. 2012. Putting typologies to work: concept formation, measurement, and
analytic rigor. Polit. Res. Q. 65(1):217–32
Crandell JL, Voils CI, Chang YK, Sandelowski M. 2011. Bayesian data augmentation methods for the synthesis
of qualitative and quantitative research findings. Qual. Quant. 45:653–69
Dunning T. 2012. Natural Experiments in the Social Sciences: A Design-Based Approach. Cambridge, UK:
Cambridge Univ. Press
Eckstein H. 1975. Case studies and theory in political science. In Handbook of Political Science. Political Science:
Scope and Theory, Vol. 7, ed. FI Greenstein, NW Polsby, pp. 94–137. Reading, MA: Addison-Wesley
Elman C. 2005. Explanatory typologies in qualitative studies of international politics. Int. Organ. 59(2):293–326
Elman C, Kapiszewski D. 2014. Data access and research transparency in the qualitative tradition. PS Polit.
Sci. Polit. 47(1):43–47
Elman C, Kapiszewski D, Vinuela L. 2010. Qualitative data archiving: rewards and challenges. PS Polit. Sci.
Polit. 43(1):23–27
Epstein LD. 1964. A comparative study of Canadian parties. Am. Polit. Sci. Rev. 58:46–59
Fairfield T. 2013. Going where the money is: strategies for taxing economic elites in unequal democracies.
World Dev. 47:42–57
Fairfield T. 2015. Private Wealth and Public Revenue in Latin America: Business Power and Tax Politics. Cambridge,
UK: Cambridge Univ. Press
Fenno RF Jr. 1977. U.S. House members in their constituencies: an exploration. Am. Polit. Sci. Rev. 71(3):883–
917
Fenno RF Jr. 1978. Home Style: House Members in Their Districts. Boston, MA: Little, Brown
Ferwerda J, Miller N. 2014. Political devolution and resistance to foreign rule: a natural experiment. Am. Polit.
Sci. Rev. 108(3):642–60
Feyerabend P. 1975. Against Method. London: New Left Books
Friedman M, Schwartz A. 1963. A Monetary History of the United States, 1867–1960. Princeton, NJ: Princeton
Univ. Press
Garfinkel H. 1967. Studies in Ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall
George AL. 1979. Case studies and theory development: the method of structured, focused comparison. In
Diplomacy: New Approaches in History, Theory, and Policy, ed. PG Lauren, pp. 3–68. New York: Free Press
George AL, Bennett A. 2005. Case Studies and Theory Development. Cambridge, MA: MIT Press
George AL, McKeown TJ. 1985. Case studies and theories of organizational decision-making. In Advances in
Information Processing in Organizations, ed. RF Coulam, RA Smith, pp. 21–58. Greenwich, CT: JAI Press
Gerring J. 2007. Case Study Research: Principles and Practices. Cambridge, UK: Cambridge Univ. Press
Gerring J. 2012. Mere description. Br. J. Polit. Sci. 42(4):721–46
Gerring J. 2017. Case Study Research: Principles and Practices. Cambridge, UK: Cambridge Univ. Press. 2nd ed.
Gerring J, Cojocaru L. 2016. Selecting cases for intensive analysis: a diversity of goals and methods. Sociol.
Methods Res. 45(3):392–423
Gerring J, McDermott R. 2007. An experimental template for case-study research. Am. J. Polit. Sci. 51(3):688–
701
Gill CJ, Sabin L, Schmid CH. 2005. Why clinicians are natural Bayesians. BMJ 330:1080–83
Glassner B, Moreno JD, eds. 1989. The Qualitative-Quantitative Distinction in the Social Sciences. Dordrecht,
Neth.: Springer
Goertz G. 2005. Social Science Concepts: A User’s Guide. Princeton, NJ: Princeton Univ. Press
Goertz G. 2017. Multimethod Research, Causal Mechanisms, and Selecting Cases: The Research Triad. Princeton,
NJ: Princeton Univ. Press
Goertz G, Mahoney J. 2012. A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences.
Princeton, NJ: Princeton Univ. Press
Grimmer J, Stewart BM. 2013. Text as data: the promise and pitfalls of automatic content analysis methods
for political texts. Polit. Anal. 21(3):267–97
Hall PA. 2003. Aligning ontology and methodology in comparative politics. In Comparative Historical Analysis
in the Social Sciences, ed. J Mahoney, D Rueschemeyer, pp. 373–404. Cambridge, UK: Cambridge Univ.
Press
Hall PA. 2006. Systematic process analysis: when and how to use it. Eur. Manag. Rev. 3:24–31
Hammersley M. 1992. Deconstructing the qualitative-quantitative divide. In Mixing Methods: Qualitative and
Quantitative Research, ed. J Brannen. Aldershot, UK: Avebury
Harrits GS. 2011. More than method? A discussion of paradigm differences within mixed methods research.
J. Mixed Methods Res. 5(2):150–66
Herron MC, Quinn KM. 2016. A careful look at modern case selection methods. Sociol. Methods Res. 45(3):458–
92
Humphreys M, Jacobs AM. 2015. Mixing methods: a Bayesian approach. Am. Polit. Sci. Rev. 109(4):653–73
Humphreys M, Jacobs AM. 2018. Integrated Inferences: A Bayesian Integration of Qualitative and Quantitative
Approaches to Causal Inference. Cambridge, UK: Cambridge Univ. Press. In press
Jacobs A. 2015. Process tracing the effects of ideas. See Bennett & Checkel 2015, pp. 41–73
Kapiszewski D, MacLean LM, Read BL. 2015. Field Research in Political Science: Practices and Principles.
Cambridge, UK: Cambridge Univ. Press
Karl TL. 1997. The Paradox of Plenty: Oil Booms and Petro-States. Berkeley: Univ. Calif. Press
King G, Keohane RO, Verba S. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research.
Princeton, NJ: Princeton Univ. Press
Kocher M, Monteiro N. 2015. What’s in a line? Natural experiments and the line of demarcation in WWII occupied
France. Work. Pap., Dep. Polit. Sci., Yale Univ.
Kreuzer M. 2010. Historical knowledge and quantitative analysis: the case of the origins of proportional
representation. Am. Polit. Sci. Rev. 104:369–92
Levy JS. 2007. Qualitative methods and cross-method dialogue in political science. Comp. Polit. Stud.
40(2):196–214
Levy JS. 2008. Case studies: types, designs, and logics of inference. Confl. Manag. Peace Sci. 25:1–18
Lieberman ES. 2005. Nested analysis as a mixed-method strategy for comparative research. Am. Polit. Sci. Rev.
99(3):435–52
Lieberman ES. 2010. Bridging the qualitative-quantitative divide: best practices in the development of his-
torically oriented replication databases. Annu. Rev. Polit. Sci. 13:37–59
Lieberman ES. 2016. Can the biomedical research cycle be a model for political science? Perspect. Polit.
14:1054–66
Lijphart A. 1971. Comparative politics and the comparative method. Am. Polit. Sci. Rev. 65:682–93
Lohmann S. 2007. The trouble with multi-methodism. Newsl. APSA Organ. Sect. Qual. Methods 5(1):13–17
Lynd RS, Lynd HM. 1956 (1929). Middletown: A Study in American Culture. New York: Harcourt Brace
Mahoney J. 2010. After KKV: the new methodology of qualitative research. World Polit. 62(1):120–47
Mahoney J. 2012. The logic of process tracing tests in the social sciences. Sociol. Methods Res. 41(4):566–90
Mahoney J, Goertz G. 2006. A tale of two cultures: contrasting quantitative and qualitative research. Polit.
Anal. 14:227–49
Mahoney J, Sweet Vanderpoel R. 2015. Set diagrams and qualitative research. Comp. Polit. Stud. 48(1):65–100
Mahoney J, Thelen K, eds. 2015. Advances in Comparative-Historical Analysis. Cambridge, UK: Cambridge
Univ. Press
Mansfield ED, Snyder J. 2005. Electing to Fight: Why Emerging Democracies Go to War. Cambridge, MA: MIT
Press
McKeown TJ. 1999. Case studies and the statistical world view. Int. Organ. 53:161–90
McLaughlin E. 1991. Oppositional poverty: the quantitative/qualitative divide and other dichotomies. Sociol.
Rev. 39:292–308
Mill JS. 1843/1872. A System of Logic. London: Longmans, Green. 8th ed.
Moore B Jr. 1966. Social Origins of Dictatorship and Democracy: Lord and Peasant in the Making of the Modern
World. Boston, MA: Beacon Press
Morgan M. 2012. Case studies: one observation or many? Justification or discovery? Philos. Sci. 79(5):655–66
Page SE. 2006. Essay: path dependence. Q. J. Polit. Sci. 1:87–115
Paluck EL. 2010. The promising integration of qualitative methods and field experiments. Ann. Am. Acad.
Polit. Soc. Sci. 628:59–71
Patton MQ. 2002. Qualitative Research and Evaluation Methods. Thousand Oaks, CA: Sage
Pincus S. 2011. 1688: The First Modern Revolution. New Haven, CT: Yale Univ. Press
Platt J. 1992. “Case study” in American methodological thought. Curr. Sociol. 40(1):17–48
Posner D. 2004. The political salience of cultural difference: why Chewas and Tumbukas are allies in Zambia
and adversaries in Malawi. Am. Polit. Sci. Rev. 98(4):529–46
Ray JL. 1993. Wars between democracies: rare or nonexistent? Int. Interact. 18:251–76
Reichenbach H. 1938. Experience and Prediction: An Analysis of the Foundations and the Structure of Knowledge.
Chicago: Univ. Chicago Press
Reiss J. 2009. Causation in the social sciences: evidence, inference, and purpose. Philos. Soc. Sci. 39(1):20–40
Rihoux B. 2013. Qualitative Comparative Analysis (QCA), anno 2013: reframing the comparative method’s
seminal statements. Swiss Polit. Sci. Rev. 19(2):233–45
Roberts C. 1996. The Logic of Historical Explanation. University Park: Pa. State Univ. Press
Rohlfing I. 2012. Case Studies and Causal Inference: An Integrative Framework. London: Palgrave Macmillan
Romer CD, Romer DH. 2010. The macroeconomic effects of tax changes: estimates based on a new measure
of fiscal shocks. Am. Econ. Rev. 100:763–801
Rosenau PM. 1992. Post-Modernism and the Social Sciences: Insights, Inroads, and Intrusions. Princeton, NJ:
Princeton Univ. Press
Schatz E, ed. 2009. Political Ethnography: What Immersion Contributes to the Study of Power. Chicago: Univ.
Chicago Press
Schimmelfennig F. 2015. Efficient process tracing: analyzing the causal mechanisms of European integration.
See Bennett & Checkel 2015, pp. 98–125
Schwartz H, Jacobs J. 1979. Qualitative Sociology: A Method to the Madness. New York: Free Press
Seawright J. 2016a. The case for selecting cases that are deviant or extreme on the independent variable. Sociol.
Methods Res. 45(3):493–525
Seawright J. 2016b. Multi-Method Social Science: Combining Qualitative and Quantitative Tools. Cambridge, UK:
Cambridge Univ. Press
Seawright J, Gerring J. 2008. Case-selection techniques in case study research: a menu of qualitative and
quantitative options. Polit. Res. Q. 61(2):294–308
Shapiro I, Smith R, Masoud T, eds. 2004. Problems and Methods in the Study of Politics. Cambridge, UK:
Cambridge Univ. Press
Shweder RA. 1996. Quanta and qualia: What is the “object” of ethnographic method? In Ethnography and
Human Development: Context and Meaning in Social Inquiry, ed. R Jessor, A Colby, RA Shweder, pp. 175–
82. Chicago: Univ. Chicago Press
Sil R. 2000. The division of labor in social science research: unified methodology or “organic solidarity”? Polity
32(4):499–531
Skocpol T. 1979. States and Social Revolutions: A Comparative Analysis of France, Russia, and China. Cambridge,
UK: Cambridge Univ. Press
Skocpol T, Somers M. 1980. The uses of comparative history in macrosocial inquiry. Comp. Stud. Soc. Hist.
22(2):147–97
Snow CP. 1993 (1959). The Two Cultures. Cambridge, UK: Cambridge Univ. Press
Strauss A, Corbin J. 1998. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded
Theory. Thousand Oaks, CA: Sage
Van Evera S. 1997. Guide to Methods for Students of Political Science. Ithaca, NY: Cornell Univ. Press
Waldner D. 2012. Process tracing and causal mechanisms. In Oxford Handbook of Philosophy of Social Science,
ed. H Kincaid, pp. 65–84. Oxford, UK: Oxford Univ. Press
Waldner D. 2015a. Process tracing and qualitative causal inference. Secur. Stud. 24(2):239–50
Waldner D. 2015b. What makes process tracing good? Causal mechanisms, causal inference, and the com-
pleteness standard in comparative politics. See Bennett & Checkel 2015, pp. 126–52
Walter B. 2002. Committing to Peace: The Successful Settlement of Civil Wars. Princeton, NJ: Princeton Univ.
Press
Yanow D, Schwartz-Shea P, eds. 2013. Interpretation and Method: Empirical Research Methods and the Interpretive
Turn. Armonk, NY: M.E. Sharpe. 2nd ed.
