Security Studies, 24:228–238, 2015
Copyright © Taylor & Francis Group, LLC
ISSN: 0963-6412 print / 1556-1852 online
DOI: 10.1080/09636412.2015.1036619

Using Process Tracing to Improve Policy Making: The (Negative) Case of the 2003 Intervention in Iraq

ANDREW BENNETT

This article argues that applying the Bayesian logic of process tracing can improve intelligence estimates, appraisals of alternative policy options, and reassessments of whether policies are working as planned. It illustrates these points by demonstrating how more systematic use of this logic could have improved each of these three elements of policymaking regarding the 2003 US military intervention in Iraq.

INTRODUCTION

The essays by James Mahoney, Nina Tannenwald, and David Waldner in this
issue’s symposium nicely outline the use of process tracing for the academic
goals of generating and testing historical explanations of individual cases. I
take on a different question: can methods of policy analysis derived from
process tracing contribute to better policymaking?1 I argue that policymakers
can indeed improve on their decision making by using process tracing modes
of analysis more systematically and explicitly.
I focus on three such modes of analysis. First, decision makers can use the same process tracing methods as scholars for descriptive inference, which is often the focus of intelligence estimates. Second, policymakers can engage in “process projecting,” or thinking through theory-based predictions of how alternative policies might play out and paying attention to the base rates of populations that theories indicate are similar to the case at hand. Third, policymakers can make mid-course corrections with the help of “process tracking,” updating expected outcomes in light of new evidence on whether policies are working as planned.

Andrew Bennett is Professor of Government at Georgetown University.

1 This is in the spirit of recurrent efforts to make scholarly research relevant for policymakers, such as Paul C. Avey and Michael C. Desch, “What Do Policymakers Want From Us? Results of a Survey of Current and Former Senior National Security Decision Makers,” International Studies Quarterly 58, no. 2 (June 2014): 227–46. See also the ongoing project at American University’s School of International Service, entitled “Bridging the Gap,” which aims to create a closer and more fruitful interchange between academic research and American foreign policy, and between scholars and policymakers. See the project’s web site at http://www.american.edu/sis/btg/.
I proceed as follows: first I identify the similarities and differences be-
tween academic process tracing and policy work. I then briefly review the
standards for good process tracing that apply to both. Next, I use the exam-
ple of policymaking on the 2003 invasion of Iraq to show how the use of
these standards could have improved intelligence assessments, analyses of
the likely outcomes of alternative policies, and policy reviews. I conclude
that although process tracing is intuitively simple and therefore accessible
to policymakers, its very familiarity can lead to the neglect of systematic
procedures for assessing and predicting hypothesized processes.

ACADEMIC PROCESS TRACING AND POLICY ANALYSIS: SIMILARITIES AND DIFFERENCES

Process tracing is the study of evidence within a single case to assess whether
the observable implications of hypothesized causal processes are borne out
in that case. It can be characterized as following a Bayesian logic: we update
our level of confidence in an explanation of a particular case according to
whether new evidence from that case fits the explanation better than it fits
alternative explanations.2
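In formal notation (a minimal sketch; the symbols are my gloss, not the author’s), Bayes’ rule gives the updated confidence in an explanation H of the case after observing within-case evidence E:

    \[
    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E \mid H)\, P(H) + P(E \mid \lnot H)\, P(\lnot H)}
    \]

Evidence E raises confidence in H relative to an alternative precisely when it fits H better, that is, when the likelihood ratio P(E | H) / P(E | ¬H) exceeds one.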
Academic process tracing and the analysis of security policies have much
in common. Both require descriptive accuracy to work well. Both focus on
particular cases. Policymakers are not very interested in knowing how most
countries usually respond to a policy instrument; rather, they want to know
how country X will respond this time to this instrument. Both academic and
policy-oriented process tracing focus on within-case evidence, and they use
prior cross-case knowledge to help assess the implications of that evidence.
Both kinds of process tracing take alternatives—either alternative explana-
tions or alternative policies—into account. A final similarity is that both need
rigorous methods to forestall well-documented cognitive biases, especially
confirmation bias.
There are important differences as well. Academic process tracing focuses on past cases, and policy analysis assesses current events and contingent futures. Academic process tracing involves working backward from an outcome that has already happened and forward from hypothesized variables to the observable implications of hypothesized processes that either did or did not happen. Policymakers, in contrast, analyze alternative scenarios to project policies and their associated processes forward and to assess the likelihood of as-yet-unrealized outcomes. Both involve theoretical predictions about unknowns, but one set of unknowns is the details of what happened in the past and the other is what might happen in the future.

2 James Mahoney, “Process Tracing and Historical Explanation,” Security Studies 24, no. 2 (April 2015): 200–218; Andrew Bennett, “Disciplining our Conjectures: Systematizing Process Tracing with Bayesian Analysis,” in Process Tracing: From Metaphor to Analytic Tool, ed. Andrew Bennett and Jeffrey Checkel (New York: Columbia University Press, 2015), 276–98.
Another difference is more sociological than logical: most academic process tracing is done by individual scholars, whereas policy work typically involves groups engaged in a mix of collaboration and contestation, usually within a hierarchical authority structure. Recent research suggests that when teams of experts are trained in probabilistic reasoning and share and debate their predictions on political events, they can outperform individuals, simple trend-line projections, and prediction markets in predicting future events.3 On the other hand, in the absence of rigorous procedures for decision making, groups can engage in “groupthink” and arrive at biased or distorted judgments.4

3 Philip Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton, NJ: Princeton University Press, 2006). In an ongoing experiment on forecasting, teams performed somewhat better than prediction markets only when team members’ judgments were weighted by algorithms based on earlier research. In particular, these algorithms took into account team members’ personal characteristics and transformed individual forecasts to push them away from 0.5 (that is, scores were moved away from 50–50 predictions as a way of adjusting for the tendency, demonstrated in earlier experimental studies, toward under-confidence in predicting low-probability events). See Lyle Ungar, Barb Mellers, Ville Satopää, Jon Baron, Phil Tetlock, Jaime Ramos, and Sam Swift, The Good Judgment Project: A Large Scale Test of Different Methods of Combining Expert Predictions, Association for the Advancement of Artificial Intelligence, Philadelphia, PA, 2012. Available at http://www.aaai.org/ocs/index.php/FSS/FSS12/paper/viewFile/5570/5871.

4 Irving L. Janis, Victims of Groupthink: A Psychological Study of Foreign-Policy Decisions and Fiascoes (Boston: Houghton Mifflin, 1972).

STANDARDS FOR GOOD PROCESS TRACING AND POLICY ANALYSIS

Because they both involve Bayesian analysis of evidence from individual cases, there are many shared standards for judging whether process tracing and policy analysis have been done well. I briefly address six such standards here:5

5 Andrew Bennett and Jeff Checkel, “Process Tracing: From Philosophical Roots to Best Practices,” in Process Tracing, 23–31.

1. Cast the net widely for alternative explanations or policies. It is important to avoid prematurely narrowing down to one or a few alternative explanations or policies. One way to avoid this mistake is to consider a standard set of explanations and policy options, including those focused on individual agents, social and material structures, material power, institutional transaction costs, and normative legitimacy.6
2. Be equally tough on the alternative explanations or policies. Experimental evidence indicates that we more readily perceive evidence that fits a concept we favor.7 Bayesian logic reminds us to consider systematically the evidence both for and against an explanation or policy, and also the evidence for and against the alternatives.
3. Identify the observable implications of hypothesized processes that would be true if an explanation is true or a policy is sound. It is tempting to apply a general theory or policy idea to a case and skip over the details, but thinking through exactly how it should apply in a particular case provides greater analytical leverage over whether the hypothesized process did or could produce a particular outcome in the case.
4. Be relentless in gathering diverse and relevant evidence, but make a justifiable decision on when to stop actively searching for evidence, and remain open to new evidence. Diversity of evidence is good because at some point dipping into the same stream of evidence has less capacity to force us to update our views. Deciding when to stop seeking evidence involves a judgment about what threshold of confidence is sufficient to choose a policy, how costly it is to gather more evidence and delay decisive action, and how likely it is that new evidence would overturn an explanation or policy. Remaining open to new evidence is essential, as Bayesianism reminds us that we should never be 100 percent confident in an explanation or policy.
5. Consider the potential biases of evidentiary sources. This includes incen-
tives that a person providing evidence might have to mislead others, as
well as unmotivated biases like confirmation bias and selection biases
in the information to which researchers or respondents are exposed. It
also involves assessments of whether actors had capabilities as well as
incentives for concealing information from outside analysts.
6. Ask: “Imagine that the favored policy or explanation proves wrong. How
would it go wrong, and how would we know it was going wrong?” 8 Simply
thinking through this question can help avoid confirmation bias, tricking
our brains out of easily finding evidence for a favored idea and into easily
finding evidence against it.

6 Andrew Bennett, “The Mother of All Isms: Causal Mechanisms and Structured Pluralism in International Relations Theory,” European Journal of International Relations 19, no. 3 (September 2013): 459–81.

7 Raymond Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology 2, no. 2 (June 1998): 175–220.

8 Robert Jervis, Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War (Ithaca, NY: Cornell University Press, 2011), 192.


Some of these standards are not unique to process tracing; a variety of research methods and approaches to policy analysis include the consideration of alternative explanations or policies, for example. Yet the value added of explicit reliance on the Bayesian logic of process tracing is that it makes more transparent exactly where individuals disagree, and whether and how additional evidentiary tests can narrow those disagreements. Bayesian logic outlines four different ways in which individuals can disagree. First, scholars can disagree on the prior likelihood of alternative explanations, or analogously, policymakers can disagree on whether policies are likely to succeed. Second, individuals can disagree on the likelihood of “true positives,” or the likelihood of evidence that a policy or theory is succeeding when in fact it is. Third, they can disagree on the likelihood of “false positives,” or the likelihood of evidence that a policy is succeeding even when it is failing. Fourth, people can disagree on the reading or measurement of particular pieces of evidence.9
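In the notation of Bayes’ rule sketched earlier (again my gloss rather than the author’s formalization), the first three loci of disagreement correspond to the three quantities that determine the posterior, and the fourth to the observation itself:

    \[
    \underbrace{P(H)}_{\text{prior}}, \qquad
    \underbrace{P(E \mid H)}_{\text{true-positive likelihood}}, \qquad
    \underbrace{P(E \mid \lnot H)}_{\text{false-positive likelihood}}, \qquad
    \underbrace{E}_{\text{the evidence as read}}
    \]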

Making estimates on each of these four issues explicit helps in updating judgments on whether policies are working as planned. Even when individuals disagree on how likely it is that a policy would succeed, if they agree on the likelihood of true positives and false positives, they might be able to agree on what kinds of future evidence would attest to whether a policy is succeeding or failing. In Bayesian terms, their differing priors will converge if the right kind of evidence becomes available. On the other hand, if individuals do not agree on the likelihood of true positives and false positives, they will remain far apart in their policy prescriptions. In the case of Iraq, for example, some opponents of intervention argued that if inspectors turned up no evidence that Iraq had weapons of mass destruction, that would signal that Iraq had no such weapons. Some proponents of intervention, in contrast, argued that any failure of inspections to find evidence of Iraqi weapons of mass destruction (WMDs) would only signal that Iraq was successful at hiding evidence from inspectors. New evidence from inspections was therefore unlikely to create any convergence in the views of these two groups.
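A minimal numerical sketch of this convergence argument (the code and all probabilities are my illustration, not figures from the article) shows why shared likelihoods let divergent priors converge while divergent likelihoods block convergence:

    # Sketch: updating P(H) for H = "Iraq has WMDs" on E = "inspections find nothing".
    # All numbers are hypothetical and chosen only to illustrate the logic.

    def update(prior, p_e_given_h, p_e_given_not_h):
        """Posterior P(H | E) by Bayes' rule."""
        joint_h = prior * p_e_given_h
        joint_not_h = (1 - prior) * p_e_given_not_h
        return joint_h / (joint_h + joint_not_h)

    # Case 1: a skeptic (prior 0.3) and a proponent (prior 0.8) agree that null
    # inspection results are far likelier without WMDs (0.9) than with them (0.2).
    # Each null result pushes both toward "no WMDs," so their views converge.
    for prior in (0.3, 0.8):
        print(round(update(prior, 0.2, 0.9), 2))   # -> 0.09 and 0.47

    # Case 2: the proponent instead believes Iraq can hide its weapons, so null
    # results are almost as likely with WMDs (0.85) as without (0.9). The same
    # evidence barely moves the proponent, and the two camps stay far apart.
    print(round(update(0.8, 0.85, 0.9), 2))        # -> 0.79

Repeated null results drive both posteriors in Case 1 toward zero, while in Case 2 the proponent’s posterior barely moves, which is exactly the deadlock the inspections debate produced.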
Building further on the example of the intervention in Iraq in 2003, the
sections that follow argue that the failure to apply several of the standards
outlined above, rather than incomplete information or bad luck, led to a
far costlier and longer deployment of US troops to Iraq than proponents of
intervention predicted.

9 Bennett, “Disciplining our Conjectures,” 278–80, 295.


PROCESS TRACING AND DESCRIPTIVE INFERENCE: MISTAKEN ESTIMATES ON IRAQ’S WMDs

Given that intelligence assessments involve adversaries with incentives and abilities to hide information, these assessments can be mistaken even if they follow rigorous procedures. The George W. Bush administration’s error in presuming that Iraq had WMDs could have been due largely or solely to the fact that Saddam Hussein had private information on his military capabilities and incentives to misrepresent that information. Yet assessments should be judged harshly if other analysts at the time, with similar access to information, more accurately predicted both outcomes and the processes through which they arose.
In a careful analysis of the intelligence process behind the erroneous estimate of Iraq’s WMDs, Robert Jervis concludes that “while there were errors and analysis could and should have been better, the result would have been to make intelligence judgments less certain rather than to reach fundamentally different conclusions.” Jervis adds, however, that the intelligence estimates could have been better had they used procedures like those advocated here.10 I emphasize this latter point by analyzing the evidence Jervis uncovers in terms of several of the standards listed above.
On two crucial issues, analysts failed to consider adequately alternative
explanations. A key piece of evidence used to justify the conclusion that
Iraq was trying to build WMDs was the fact that Iraq had bought numerous
high-strength aluminum tubes, which analysts argued could be used to build
centrifuges to enrich uranium. This inference failed to consider alternative
uses for such tubes, including the actual use uncovered later: the tubes were
to be used as a component of conventionally armed rockets. Analysts also
overlooked that the Department of Energy (DOE) rejected the idea that the
tubes were intended for use in centrifuges (DOE analysts still agreed that
Iraq was pursuing nuclear weapons, but they did not specify any alternative
route through which it might do so).11 In Bayesian terms, analysts treated the
aluminum tubes as definitive “smoking gun” evidence that Iraq was seeking
nuclear weapons, but the DOE analysis suggested it was not even weakly
determinative “straw in the wind” evidence.
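In likelihood-ratio terms (my gloss, following the Bayesian reading of evidentiary tests that Bennett gives in “Disciplining our Conjectures”), the contrast between the two kinds of evidence is:

    \[
    \text{smoking gun:}\ \frac{P(E \mid H)}{P(E \mid \lnot H)} \gg 1, \qquad
    \text{straw in the wind:}\ \frac{P(E \mid H)}{P(E \mid \lnot H)} \approx 1
    \]

On this reading, the DOE assessment implied that the likelihood ratio for the tube purchases was close to one: the purchases were nearly as probable under conventional-rocket uses as under a centrifuge program.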
Analysts also failed to consider adequately possible alternative reasons why Saddam might have resisted weapons inspections even if he lacked WMDs. We now know that he feared inspections might pinpoint his location for assassination attempts, undercut his domestic authority, and weaken his deterrent posture toward Iran. Yet opponents of the war also failed to predict accurately Saddam’s motives for resisting inspections, so we might judge this error to be a consequence of private information and incentives to misrepresent it.12

10 Jervis, Why Intelligence Fails, 3.

11 Ibid., 143–44.
A second error was the failure to assess the other observable implica-
tions that should have been true if Iraq had WMD programs. A program to
produce nuclear weapons would have required many components, and ana-
lysts failed to give sufficient weight to the fact that there was no evidence of
Iraqi acquisition of these many other elements. Judging the relevance of the
absence of evidence requires an assessment of the target actor’s ability and
motivation to hide evidence—here, Saddam had many motives for secrecy,
but it was unrealistic to think he had the ability to hide all but one of the
many components of a nuclear weapons program. This would have required
the ability to silence or fool the many foreign scientists and Iraqi scientists
who defected, all of whom told American analysts that they had no direct
knowledge of Iraqi WMD programs.13

PROCESS PROJECTING AND PREDICTING POLICY OUTCOMES

More systematic use of Bayesian logic can improve scenario analysis as well as intelligence assessments. The proponents of intervention in Iraq estimated its costs at between one-thirtieth and one-fiftieth of the actual figure (I focus on economic costs, as war proponents did not make predictions on casualties). Secretary of Defense Donald Rumsfeld endorsed an Office of Management and Budget estimate that intervention would cost between fifty and sixty billion dollars, but later estimates of the actual costs range from two to three trillion dollars.14
Had the proponents of intervention more rigorously applied well-developed academic theories, taken into account the relevant base rates for the costs, benefits, and consequences of previous military occupations, and anticipated the opposition of key actors in Iraq and the possibility of unintended consequences, it would have been clear to them that the costs of intervention were likely to be high. A key indication that this was knowable in advance, and not just in retrospect, is that the opponents of intervention more accurately predicted not only the high costs of intervention but also the processes through which those costs would arise. In particular, many analysts inside and outside of the US government drew on extant theoretical and empirical analyses to identify in advance three risks that did indeed materialize and greatly raised the costs of the intervention: the risk of civil conflict, the risk of limited assistance from US allies, and the risk of great difficulties in establishing democracy in Iraq.15

12 Ibid., 128.

13 Ibid., 151–52.

14 Linda Bilmes and Joseph Stiglitz, “The Economic Costs of the Iraq War: An Appraisal Three Years after the Beginning of the Conflict” (working paper no. 12054, National Bureau of Economic Research, Cambridge, MA, February 2006).
In generating an estimate of the prior on the likelihood that the military
occupation of Iraq would succeed, officials should have paid more attention
to the base rates of successes and failures in military occupations in history.
David Edelstein’s systematic study of this issue, although published a year
after the intervention, concludes that of twenty-four military occupations
since 1815, from the point of view of the occupying power, only 29 percent
largely succeeded and 54 percent were mostly failures.16 Edelstein’s analysis
of the Iraqi case in 2004 suggests that it should have been judged at that
time as even less likely to succeed than these historical averages.
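A rough sketch of how such base rates anchor a prior (my illustration; only the 29 percent figure, roughly 7 of Edelstein’s 24 cases, comes from the text, and the discount factor is invented as a stand-in for his case-specific analysis):

    # Hypothetical sketch: anchoring a prior on historical base rates.
    # Edelstein's base rate: about 7 of 24 occupations since 1815 succeeded (~29%).
    base_rate = 7 / 24

    # The case-specific conditions Edelstein flags for Iraq (internal conflict,
    # a population that does not need the occupier to survive, no credible exit
    # commitment) all cut against success, so the case-adjusted prior should sit
    # below the base rate. The discount factor here is invented for illustration.
    iraq_discount = 0.5
    prior_success = base_rate * iraq_discount
    print(f"prior P(occupation succeeds) ~ {prior_success:.2f}")   # -> 0.15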
Regarding civil conflict, it was clear to experts that Iraq had many of the conditions identified in earlier research as being conducive to civil conflict. Iraq’s Shiites and Kurds had deep grievances toward the ruling Sunnis, contending groups were intermingled in ways that would create group security dilemmas, and the wide availability of weapons and the presence of urban and cross-border havens for fighters lowered the opportunity costs for armed conflict.17 This should have led to an estimate of a high likelihood of civil conflict.
As for alliance burden sharing, it was evident that international assistance to the US intervention would be far smaller than that given to the Desert Storm coalition in the 1991 Gulf War. In 1991, the UN Security Council approved the use of force against Iraq, and Germany and Japan contributed due to their heavy reliance on the United States for security in the immediate aftermath of the Cold War.18 In 2003, however, these and other allies were not as dependent on the United States, the UN did not approve of US plans to intervene, and public opinion in most countries ran far more sharply against any help for US intervention than it had in 1991. This should have led to a low prior on the likelihood of large-scale military or economic contributions from US allies.

15 A State Department study before the war, titled “The Future of Iraq Project,” proved prescient regarding the challenges of Iraq but was largely ignored by the Defense Department and the Coalition Provisional Authority. For declassified documents from this study, see “The Future of Iraq Project,” National Security Archive Electronic Briefing Book No. 198, http://www2.gwu.edu/~nsarchiv/NSAEBB/NSAEBB198/.

16 David Edelstein, Occupational Hazards: Success and Failure in Military Occupation (Ithaca, NY: Cornell University Press, 2008), 58. Edelstein’s study finds that in cases like Iraq—where there is an internal conflict, the population does not need help from the occupier to survive, and it is difficult for the occupier to credibly commit that its forces will soon leave—there are no clear cases of success in any military occupations since 1815.

17 For group security dilemmas, see Barry Posen, “The Security Dilemma and Ethnic Conflict,” Survival 35, no. 1 (Spring 1993): 27–47. On lowered opportunity costs for conflict, see James D. Fearon and David D. Laitin, “Ethnicity, Insurgency, and Civil War,” American Political Science Review 97, no. 1 (February 2003): 75–90.

18 Andrew Bennett, Joseph Lepgold, and Danny Unger, “Burden-sharing in the Persian Gulf War,” International Organization 48, no. 1 (Winter 1994): 39–75.


Regarding democratization, Iraq lacked nearly all the factors identified in the academic literature as being conducive to democratic transitions. Indeed, Daniel Byman concluded that Iraq in 2003 had eight of the nine factors that make democratization difficult. The only favorable factor—a strong state able to provide security—was something the United States would erase in Iraq by its intervention.19
Finally, it was clear to many that the United States would engender
fierce opposition in removing not just Saddam but the Sunni-dominated
regime from power. This would alienate a sizable proportion of Iraqi so-
ciety that had the resources and arms to mount an insurgency. Moreover,
by holding elections, the United States would de facto favor the majority
Shiite population that had been oppressed by the Sunnis, creating a security
dilemma between the two groups.
Thus it was clear to many experts at the time, and not just in retrospect, that intervention in Iraq would involve a high risk of civil conflict, that allied contributions would be far smaller than in the 1991 Gulf War, and that Iraqi democratization would face difficult challenges. On these issues and on the overall likelihood of a successful military occupation, greater attention to extant theories and to the base rates evident in relevant populations would have led to much lower expectations about the likelihood of a successful intervention in Iraq.

PROCESS TRACKING AND MAKING POLITICIANS BETTER BAYESIAN UPDATERS

A third problem is that the US government was slow to revise its policies
in Iraq even after it became evident that the post-invasion security situation
was proving far more challenging than the war’s proponents had expected.
Already by the summer of 2003 the failure to prevent looting, the dismissal
of the Iraqi army, and the decision to remove even low-level Baath party
officials from government offices contributed to a growing insurgency, yet
administration officials dismissed attacks by Sunnis as the last gasp of Sad-
dam’s supporters. The bombing of the al-Askari mosque in February 2006
and the violence that followed marked a major escalation in Iraq’s sectarian
conflict. Yet it was not until early 2007, more than three-and-one-half years

19 Daniel Byman, “Constructing a Democratic Iraq: Challenges and Opportunities,” International

Security 28, no. 1 (Summer 2003): 62. Ironically, after identifying these and other challenges, Byman
concluded that although democratization in Iraq would be difficult and would take years of occupation
by over one hundred thousand US soldiers, it was feasible, as the US forces could substitute for the Iraqi
government and provide security. Five years later, he forthrightly expressed a more pessimistic view that
even if the United States had avoided several preventable policy mistakes, the intervention would have
been an “immensely costly, fraught, and dangerous exercise.” See Daniel Byman, “An Autopsy of the
Iraq Debacle: Policy Failure or Bridge Too Far?” Security Studies 17, no. 4 (December 2008): 599–643.
Process Tracing and Policy Making 237

after abundant signs of trouble, that the United States substantially changed
its policies, deploying a “surge” of thirty thousand extra troops and reaching
out to convert Sunni insurgents.
Why was the Bush administration so poor at Bayesian updating? A key challenge here, not unique to President Bush or his administration, is that it is exceptionally difficult, both psychologically and politically, for people to acknowledge the failure of policies for which they were personally responsible.20 There is a strong temptation to resort to counterfactuals that justify the policy—“it will work, it just hasn’t worked yet,” or “it is still better than what would have happened without the policy”—rather than acknowledging error and changing the policy. Observers and voters are not very sympathetic to such claims in the face of ongoing failures, but they are even less sympathetic to admissions of ignorance or incompetence.21
In view of these inhibitions on politicians’ ability and willingness to acknowledge errors and update policies, it is important to put in place accountability systems that make it difficult for political leaders to justify ongoing failures via counterfactual reasoning. Pushing analysts and politicians to specify time-bound benchmarks that will tell whether their favored policies are succeeding or failing is a way to establish greater accountability and foster more realistic updating. This is analogous to the injunction in the literature on process tracing that authors should specify in advance what evidence would count for or against their explanations.22 In the case of Iraq, the Congress failed to push the Bush administration to provide ex ante performance and cost benchmarks, such as expected casualties, fiscal costs, numbers of troops needed, and achievement measures for stability and democracy. The Congress did not move toward establishing such benchmarks until 2005, well after it was already clear that the intervention was proving far costlier than advertised. Had the Congress insisted on benchmarks from the start, it might have created greater and earlier political pressures to update US policies in Iraq.

20 Keith D. Markman and Philip E. Tetlock, “I Couldn’t Have Known: Accountability, Foreseeability, and Counterfactual Denials of Responsibility,” British Journal of Social Psychology 39, no. 3 (2000): 313–25; Kathleen M. McGraw, “Avoiding Blame: An Experimental Investigation of Political Excuses and Justifications,” British Journal of Political Science 20, no. 1 (1990): 119–31.

21 McGraw, “Avoiding Blame.”

22 Nina Tannenwald, “Process Tracing and Security Studies,” Security Studies 24, no. 2 (April 2015): 219–27.

GUARDING AGAINST BIASES THROUGH SYSTEMATIC PROCESS TRACING

Case-study analyses and process tracing are methods that policymakers find
intuitively appealing and useful.23 Yet informal applications of process trac-
ing logic in policy settings are vulnerable to well-known cognitive and orga-
nizational biases. Much of the methodological advice developed by qualita-
tive researchers over the past decade is geared toward circumventing these
biases by applying rigorous and standardized procedures, including proce-
dures derived from Bayesian logic. Applying these procedures in intelligence
assessment, policy analysis, and policy reviews does not guarantee success,
but it makes errors less likely and increases the chances that policymakers
can more quickly recognize when policies are not going as planned.

23 Avey and Desch, “What Do Policymakers Want from Us?”

ACKNOWLEDGMENT

I would like to thank Colin Elman, John Owen, Andy Moravcsik, David Waldner, and the participants of workshops in the fall of 2013 at the American Political Science Association annual conference and the University of Virginia, as well as two anonymous reviewers for Security Studies, for their suggestions on this paper.