University of Leicester
E-tivity 3: Essay
Michael Grieves
different people use different standards, and there is also a lack of understanding of what intelligence analysis is meant to achieve (Marrin, 2012). Key points made are that evaluating intelligence through accuracy and the prevention of surprise sets unattainable standards (Dahl, 2010; Gill, 2007), and that although relevance of the analysis to the decision-maker is necessary, it does not necessarily indicate quality; Marrin therefore proposes that academics and intelligence professionals should confirm the criteria. He presents ideas to prevent intelligence failures, but with the caveat that they will not eliminate all failures.
Overall, Marrin relies heavily on ideas initially formulated by Betts (1978); thus the article does not really offer anything new, as the consensus amongst most intelligence scholars is that failures in intelligence are inevitable (Froscher, 2010; Betts, 1978). Evaluating the quality of intelligence is not easy, and each criterion used in the process of evaluation presents difficulties of its own.
Marrin argues that influencing policy is the role of intelligence analysis. This idea is consistent with earlier studies claiming that intelligence analysis should influence the formulation of policy and, moreover, that a failure to do so is a failure of intelligence analysis (Betts, 1978).
Through intelligence analysis, governments develop and improve policies, which in turn enhances the nature of intelligence analysis (Hedley, 2005; Lonsdale, 2012). It is therefore reasonable to argue that a lack of influence resulting from bad analysis is a failure of intelligence (Arcos, 2016).
Marrin asserts that scholars have not accepted the importance of the ability of decision-makers to assess analytic products. However, Betts (1978), Grabo (2004), Heuer (1999), and Wohlstetter (1962) all point out failures in intelligence due to the cognitive biases of decision-makers, who will likely be unhappy with assessments that contradict those biases. Thus, asking decision-makers to evaluate analytic products is itself problematic.
Marrin argues that determining whether analysis is accurate using a metric is challenging. Hastedt agrees with Marrin, arguing that accuracy standards as a metric for measuring the quality of intelligence allow only two outcomes: accurate or inaccurate (Hastedt, 2009). Jensen (2012) likewise questions the use of accuracy as a measure of the quality of intelligence analysis. Conversely, Friedman and Zeckhauser (2014) argue that assessing
estimative accuracy is feasible. However, somewhat consistent with Marrin’s view, they suggest
inaccuracy alone cannot be equated with intelligence failure. When an intelligence failure results from uncertainty, accuracy cannot be used as a metric: analysts cannot assess, for example, what the ultimate outcome of the political unrest in Syria will be (Friedman & Zeckhauser, 2014). This is because, when intelligence agencies piece together information, there are usually gaps that yield uncertainty, requiring caveats and qualifiers in the finished pieces of intelligence (Friedman & Zeckhauser, 2014; Flanigan, 2011). However, in contrast to Marrin’s claims, a study of the accuracy of 1,514 strategic intelligence forecasts found that applying scoring rules to intelligence forecasts, as a means of outcome-based quality control, can assess accuracy (Mandel & Barnes, 2014).
Marrin argues that one purpose of intelligence analysis is to prevent surprise attacks (Marrin, 2004; Ratter, 2013) and that mechanisms to combat these should be in place (Pillar, 2006). Thus, intelligence analysis fails if it does not assist security agencies in combating surprise attacks (Dujmovic, 2017). Marrin states that preventing surprise is as important as accuracy even though, where international relations are concerned, surprise is inevitable; aiming to prevent surprises does, however, reduce their occurrence (Marrin, 2007; Zhang & Di, 2016). Yet Marrin claims that the prevention of surprise by intelligence analysis is not possible, owing to foreign countries’ secrecy. Betts (1978) concurs, further claiming that surprises are unavoidable and that analysts have cognitive limitations which hamper their intelligence-gathering process.
Regarding Marrin’s claim that good intelligence allows states to exercise “power with greater efficiency…” (Marrin, 2012, p. 827), Jensen (2012) notes such efficiency is only attainable if intelligence is devoid of failures such as inaccurate information, which calls for accuracy. According to Lonsdale (2012), the power of accurate intelligence information transcends local boundaries, which is consistent with Marrin’s idea about the theory of foreign intelligence analysis, in that intelligence analysts alert foreign policy makers to any changes in relationships with other countries.
Marrin proposes that goals should be definite and clear. This notion is supported by Jensen (2012), who affirms the role of accuracy as a critical goal of proper intelligence. Moreover, proper mechanisms that enable the achievement of the best results of intelligence analysis should be put in place.
Marrin states that attainment of accuracy in intelligence analysis is not possible because the intelligence production process is uncertain and the intelligence gathered cannot give a full picture of events in foreign countries. This contention is supported by Betts (1978), who suggests that failures of intelligence, like inaccuracy, are inevitable. Marrin suggests the way forward is to improve intelligence rather than to study failures. He agrees with Coulthart on the need to integrate better training methods (Coulthart, 2016). Bar-Joseph and McDermott (2008) believe there are too few analysts with high intelligence, strong verbal and written skills, and the ability to work under pressure, which hinders intelligence analysis.
Implications
All these concepts further our understanding of the challenges that underlie intelligence analysis evaluation metrics. The paper suggests that the absence of standardised definitions and criteria reflects organisations that operate differently. It implies that those involved must be clear on the purpose of intelligence analysis and how to accomplish it, and, moreover, that assessing accuracy is more than just numbers. The paper makes valuable contributions in recommending practical changes in the field to improve intelligence evaluation and, through that, intelligence itself. Historical literature on intelligence analysis demonstrates contradictions, and the frequently used batting metaphor is flawed, as ‘hit or miss’ cannot establish a batting average since it depends “heavily on the quality…”
Presenting the decision-maker’s evaluative framework as the preferred approach implies that there is a possibility of improving intelligence analysis. The basic idea underlying the evaluative framework is the linking of decision-making with intelligence analysis. When the two work together, they provide improved results, which enhances the accuracy of intelligence analysis (Calcutt, 2008), although both scholars and practitioners have not welcomed this approach. Often, intelligence-gathering exercises are not carried out for a definitive purpose, their results being preserved for later use.
Marrin suggests a reassessment of what intelligence analysis is and a focus on how to improve it rather than studying failures, which is likely to be more practical for future directions in intelligence evaluation. Training by itself is not enough; what the individual analyst requires is achieved by on-the-job learning. Achievement of ideal accuracy is impossible, and surprise will remain inevitable.
Bibliography
Bar-Joseph, U. & McDermott, R. (2008) ‘Change the Analyst and Not the System: A Different Approach to Intelligence Reform’, Foreign Policy Analysis, 4(2), pp. 127-145.
Betts, R. (1978) ‘Analysis, War, and Decision: Why Intelligence Failures Are Inevitable’, World Politics, 31(1), pp. 61-89.
Coulthart, S. (2016) ‘Why do analysts use structured analytic techniques? An in-depth study of an American intelligence agency’, Intelligence and National Security, 31(7), pp. 933-948.
Dahl, E. (2010) ‘Missing the Wake-up Call: Why Intelligence Failures Rarely Inspire Improved Performance’, Intelligence and National Security, 25(6), pp. 778-799.
Flanigan, J. (2011) Intelligence supportability analysis for decision making. Available at:
http://www.spie.org/newsroom/3661-intelligence-supportability-analysis-for-decision-
Friedman, J. & Zeckhauser, R. (2014) ‘Why Assessing Estimative Accuracy is Feasible and Desirable’, Intelligence and National Security.
Gill, P. (2007) ‘Evaluating intelligence oversight committees: The UK Intelligence and Security
Committee and the ‘war on terror’’, Intelligence and National Security, 22(1), pp. 14-37.
Grabo, C. M. (2004) Anticipating Surprise: Analysis for Strategic Warning. Washington, DC: Joint Military Intelligence College.
Hastedt, G. (2009) ‘Intelligence Estimates: NIEs vs. the Open Press in the 1958 China Straits Crisis’.
Heuer, R. J. (1999) Psychology of Intelligence Analysis. Washington, DC: Center for the Study of Intelligence. Available at: https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-
Jensen, M. (2012) ‘Intelligence Failures: What Are They Really and What Do We Do about Them?’, Intelligence and National Security, 27(2), pp. 261-282.
Pillar, P. (2006) ‘Intelligence, Policy, and the War in Iraq’, Foreign Affairs, 85(2), pp. 15-27.
Wohlstetter, R. (1962) Pearl Harbor: Warning and Decision. Stanford, CA: Stanford University Press.
Zhang, H. & Di, W. (2016) ‘Making intelligence more transparent: A critical cognitive