
EVALUATING THE QUALITY OF INTELLIGENCE ANALYSIS

University of Leicester
E-tivity 3: Essay

Michael Grieves

Student number: 179049355

Stephen Marrin, 'Evaluating the Quality of Intelligence Analysis: By What (Mis) Measure?', Intelligence and National Security, 27/6 (2012)

Submission Date: 10 OCTOBER 2018

Word Count: 1244

(Excluding title and bibliography)



Marrin argues that there is no single standard for evaluating intelligence analysis: different people apply different standards, and there is little agreement on what intelligence analysis is meant to achieve (Marrin, 2012). His key points are that evaluating analysis by its accuracy or by its prevention of surprise sets unattainable standards (Dahl, 2010; Gill, 2007), and that although analysis must be relevant to the decision-maker, relevance does not necessarily indicate any influence on decision-making. Marrin also suggests that, before judging intelligence failures, academics and intelligence professionals should first agree on the evaluative criteria. He presents ideas for preventing intelligence failures, with the caveat that they will not eliminate all failures.

Detractions and Merits

Overall, Marrin relies heavily on ideas initially formulated by Betts (1978), so the article offers little that is new: the consensus amongst most intelligence scholars is that failures in intelligence are inevitable (Froscher, 2010; Betts, 1978). Evaluating the quality of intelligence is not easy, and each criterion used in the evaluation process is flawed (Coulthart, 2016).

Marrin argues that influencing policy is the role of intelligence analysis. This idea is consistent with earlier studies claiming that intelligence analysis should influence the formulation of policy and, moreover, that a failure to do so constitutes a failure of intelligence analysis (Betts, 1978). Through intelligence analysis governments develop and improve policies, which in turn enhances the quality of the analysis itself (Hedley, 2005; Lonsdale, 2012). It is therefore reasonable to argue that a lack of influence resulting from bad analysis is a failure of intelligence (Arcos, 2016). Betts (1978) argues, however, that intelligence is commonly disregarded by decision-makers. Marrin asserts that scholars have not accepted the importance of decision-makers' ability to assess analytic products. Yet Betts (1978), Grabo (2004), Heuer (1999), and Wohlstetter (1962) all attribute intelligence failures to the cognitive biases of decision-makers, who are likely to be unhappy with assessments that contradict those biases. Asking decision-makers' opinions of analytic products is therefore unlikely to improve accuracy.

Marrin argues that it is difficult to measure the accuracy of analysis with any metric. Hastedt (2009) agrees, arguing that accuracy standards as a metric for measuring the quality of intelligence allow only two outcomes: accurate or inaccurate. Jensen (2012) likewise acknowledges the practical difficulty of assessing the accuracy or inaccuracy of intelligence analysis. Conversely, Friedman and Zeckhauser (2014) argue that assessing estimative accuracy is feasible. Somewhat consistent with Marrin's view, however, they suggest that inaccuracy alone cannot be equated with intelligence failure. When an intelligence failure results from irreducible uncertainty, accuracy cannot be used as a metric, and analysts cannot estimate accuracy when using verbal probabilities such as 'likely' or 'unlikely'.

Marrin states that it is impossible to reach impartial, statistical conclusions on the accuracy of an estimate. Estimative intelligence regularly centres on questions where certainty is difficult to assess; for example, what will be the ultimate outcome of the political unrest in Syria (Friedman & Zeckhauser, 2014)? This is because, when intelligence agencies piece together information, there are usually gaps that yield uncertainty and so require caveats and qualifiers in the finished intelligence product (Friedman & Zeckhauser, 2014; Flanigan, 2011). In contrast to Marrin's claims, however, a study of the accuracy of 1,514 strategic intelligence forecasts found that applying scoring rules to intelligence forecasts, as a means of outcome-based quality control, can assess accuracy (Mandel & Barnes, 2014), although, as Marrin (2012) notes, the historical literature remains contradictory.
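To illustrate the kind of scoring rule at issue (a sketch added for clarity, not an example drawn from Marrin's article), a quadratic or 'Brier' score of the sort common in forecasting research compares each stated probability with the observed outcome:

\[ B = \frac{1}{N}\sum_{i=1}^{N}\left(p_i - o_i\right)^2 \]

where \(p_i\) is the probability assigned to forecast \(i\), \(o_i\) equals 1 if the forecast event occurred and 0 if it did not, and lower scores indicate more accurate forecasting. For example, a forecast of 0.8 for an event that occurs contributes \((0.8 - 1)^2 = 0.04\), whereas the same forecast for an event that does not occur contributes \((0.8 - 0)^2 = 0.64\).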



Marrin argues that one purpose of intelligence analysis is to prevent surprise attacks (Marrin, 2004; Ratter, 2013) and that mechanisms to combat them should be in place (Pillar, 2006). Thus, intelligence analysis fails if it does not assist security agencies in combating surprise attacks (Dujmovic, 2017). Marrin states that preventing surprise is as important as accuracy, even though, where international relations are concerned, surprise is inevitable; aiming to prevent surprises does, however, reduce their occurrence (Marrin, 2007; Zhang & Di, 2016). Yet Marrin claims that intelligence analysis cannot prevent surprise entirely because of foreign countries' secrecy. Betts (1978) concurs, further claiming that surprises are unavoidable and that analysts have cognitive limitations which hamper the intelligence-gathering process.

Although Marrin suggests that "information or intelligence can enable the application of power with greater efficiency…" (Marrin, 2007, p.827), Jensen (2012) notes that such efficiency is only attainable if intelligence is free of failures such as inaccurate information, which underlines the need for intelligence to optimise resources properly. As Lonsdale (2012) confirms, the power of accurate intelligence transcends national boundaries, which is consistent with Marrin's theory of foreign intelligence analysis: intelligence analysts alert foreign-policy makers to any changes in relationships with other countries.

For intelligence analysis to be more beneficial to security organs and policymakers, Marrin proposes that its goals should be definite and clear. This notion is supported by Jensen (2012), who affirms accuracy as a critical goal of sound intelligence. Moreover, mechanisms that enable intelligence analysis to achieve the best results should be put in place and strictly implemented.



Marrin states that perfect accuracy in intelligence analysis is unattainable because the intelligence production process is uncertain, and the intelligence gathered cannot give a full picture of events in foreign countries. This contention is supported by Betts (1978), who suggests that failures of intelligence, such as inaccuracy, are inevitable. Marrin suggests that the way forward is to improve intelligence rather than to study failures, agreeing with Coulthart (2016) on the need to integrate better training methods. Bar-Joseph and McDermott (2008) believe there are too few analysts who combine high intelligence, strong verbal and written skills, and the ability to work under pressure, and that this shortage hinders intelligence analysis.

Implications

All these concepts further our understanding of the challenges underlying intelligence analysis evaluation metrics. The paper suggests that the absence of standardised definitions and uniform measures for evaluating intelligence analysis means researchers are observing organisations that operate differently. It implies that those involved must be clear about the purpose of intelligence analysis and how to accomplish it, and that assessing accuracy involves more than numbers. The paper makes valuable contributions in recommending practical changes in the field to improve intelligence evaluation and, through that, intelligence itself. The historical literature on intelligence analysis demonstrates contradictions, and the frequently used batting metaphor is flawed: a simple 'hit or miss' tally cannot establish a batting average, since analysts' records depend "heavily on the quality of the pitching they face" (Betts, 1978).

Presenting the decision-maker's evaluative framework as the preferred approach implies that improving intelligence analysis is possible. The basic idea underlying the framework is the linking of decision-making with intelligence analysis: when the two work together they produce better results, which enhances the accuracy of intelligence analysis (Calcutt, 2008), although neither scholars nor practitioners have welcomed this approach. Additionally, it cannot always be implemented, as some intelligence-gathering exercises are not carried out for a definite purpose; their results are preserved solely for possible future benefit.

Marrin suggests a reassessment of what intelligence analysis is and a focus on how to improve it rather than on studying failures, which is likely to be the more practical future direction for intelligence evaluation. Training by itself is not enough; much of what the individual analyst requires is acquired through on-the-job learning. Perfect accuracy is impossible, and surprise will always be encountered in international affairs.



Bibliography

Arcos, R. (2016) ‘Public relations strategic intelligence: Intelligence analysis, communication,

and influence’, Public Relations Review, 42(2), pp. 264-270.

Bar-Joseph, U. & McDermott, R. (2008) ‘Change the Analyst and Not the System: A Different

Approach to Intelligence Reform’, Foreign Policy Analysis, 4(2), pp. 127-145.

Betts, R. (1978) 'Analysis, War, and Decision: Why Intelligence Failures Are Inevitable', World Politics, 31(1), pp. 61-89.

Calcutt, B. (2008) 'The Role of Intelligence in Shaping Public Perceptions of Terrorism', Journal of Policing, Intelligence and Counter Terrorism, 3(1), pp. 31-43.

Coulthart, S. (2016) ‘Why do analysts use structured analytic techniques? An in-depth study of

an American intelligence agency’, Intelligence and National Security, 31(7), pp. 933-948.

Dahl, E. (2010) ‘Missing the Wake-up Call: Why Intelligence Failures Rarely Inspire Improved

Performance’, Intelligence and National Security, 25(6), pp. 778-799.

Dujmovic, N. (2017) 'Playing to the edge: American intelligence in the age of terror', Intelligence and National Security, pp. 1-3.

Flanigan, J. (2011) Intelligence supportability analysis for decision making. Available at: http://www.spie.org/newsroom/3661-intelligence-supportability-analysis-for-decision-making?SSO=1 (Accessed: 9 October 2018).

Friedman, J. & Zeckhauser, R. (2014) ‘Why Assessing Estimative Accuracy is Feasible and

Desirable’, Intelligence and National Security, 31(2), pp. 178-200.

Gill, P. (2007) ‘Evaluating intelligence oversight committees: The UK Intelligence and Security

Committee and the ‘war on terror’’, Intelligence and National Security, 22(1), pp. 14-37.

Grabo, C. M. (2004) Anticipating Surprise: Analysis for Strategic Warning. Washington, DC: Joint Military Intelligence College's Center for Strategic Intelligence Research.

Hastedt, G. (2009) ‘Intelligence Estimates: NIEs vs. the Open Press in the 1958 China Straits

Crisis’, International Journal of Intelligence and Counterintelligence, 23(1), pp. 104-132.

Hedley, J. (2005) ‘Learning from Intelligence Failures’, International Journal of Intelligence

and Counterintelligence, 18(3), pp. 435-450.

Heuer, R. J. Jr. (1999) The Psychology of Intelligence Analysis. Available at: https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology- (Accessed: 9 October 2018).

Jensen, M. (2012) ‘Intelligence Failures: What Are They Really and What Do We Do about

Them?’, Intelligence and National Security, 27(2), pp. 261-282.

Lonsdale, D. (2012) ‘Intelligence Reform: Adapting to the Changing Security

Environment’, Comparative Strategy, 31(5), pp. 430-442.

Mandel, D. R. & Barnes, A. (2014) 'Accuracy of forecasts in strategic intelligence', Proceedings of the National Academy of Sciences, 111(30), pp. 10984-10989.

Marrin, S. (2004) 'Preventing Intelligence Failures by Learning from the Past', International Journal of Intelligence and Counterintelligence, 17(4), pp. 655-672.

Marrin, S. (2007) 'Intelligence Analysis Theory: Explaining and Predicting Analytic Responsibilities', Intelligence and National Security, 22(6), pp. 821-846.

Marrin, S. (2012) 'Evaluating the Quality of Intelligence Analysis: By What (Mis) Measure?', Intelligence and National Security, 27(6), pp. 896-912.

Pillar, P. (2006) 'Intelligence, Policy, and the War in Iraq', Foreign Affairs, 85(2), p. 15.

Ratter, B. (2013) ‘Surprise and Uncertainty—Framing Regional Geohazards in the Theory of

Complexity’, Humanities, 2(1), pp. 1-19.

Wohlstetter, R. (1962) Pearl Harbor: Warning and Decision. Stanford: Stanford University Press.

Zhang, H. & Di, W. (2016) 'Making intelligence more transparent: A critical cognitive analysis of US strategic intelligence reports on Sino-US relation', Journal of Language and Politics, 15(1), pp. 63-93.
