Stephen Marrin
Post-revision draft
18 July 2011
Original draft submitted to Intelligence and National Security on 4 February 2011. Accepted for publication on 24 May 2011 pending minor revision.

Evaluating the Quality of Intelligence Analysis: By What (Mis) Measure?

Dr. Stephen Marrin is a Lecturer in the Centre for Intelligence and Security Studies at Brunel University in London. He previously served as an analyst with the Central Intelligence Agency and US Government Accountability Office. Dr. Marrin has written about many different aspects of intelligence analysis, including new analyst training at CIA's Sherman Kent School, the similarities and differences between intelligence analysis and medical diagnosis, and the professionalization of intelligence analysis. In 2004 the National Journal profiled him as one of the ten leading US experts on intelligence reform.

Abstract: Each of the criteria most frequently used to evaluate the quality of intelligence analysis has limitations and problems. When accuracy and surprise are employed as absolute standards, their use reflects unrealistic expectations of perfection and omniscience. Scholars have adjusted by exploring the use of a relative standard consisting of the ratio of success to failure, most frequently illustrated using the batting average analogy from baseball. Unfortunately, even this relative standard is flawed in that there is no way to determine either what the batting average is or what it should be. Finally, a standard based on the decisionmakers' perspective is sometimes used to evaluate the analytic product's relevance and utility. But this metric, too, has significant limitations. In the end, there is no consensus as to which is the best criterion to use in evaluating analytic quality, reflecting the lack of consensus as to what the actual purpose of intelligence analysis is or should be.

Evaluating the Quality of Intelligence Analysis: By What (Mis) Measure?

Evaluating the quality of intelligence analysis is not a simple matter. Frequently quality is defined not by its presence but rather by its absence. When what are popularly known as intelligence failures occur, sometimes attention focuses on flaws in intelligence analysis as a contributing factor to that failure.

But a closer look at the intelligence studies scholarship reveals that rather than a single meaning of the term 'failure' there are instead many meanings, each reflecting an implicit assumption regarding the purpose of intelligence analysis. Some presume the purpose of intelligence analysis is providing assessments that are accurate, and as a result characterize inaccuracy as failure. An example of the accuracy criterion would be the inaccurate US intelligence estimates regarding Iraqi WMD, subsequently described as an intelligence failure. Others presume the purpose of intelligence analysis is preventing surprise, and characterize surprise as failure. An example of the surprise criterion would be the 1998 Indian nuclear tests, which were a surprise to US decisionmakers and subsequently described as an intelligence failure. Yet others presume that the purpose of intelligence analysis is to influence policy for the better, and characterize lack of influence as failure. Sometimes these failures are described as policy failures instead of intelligence failures. An example of the lack of influence criterion would be CIA's analysis during the Vietnam War, or perhaps CIA's assessments of post-2003 Iraq conflict scenarios after the conventional portion of the war was over, which may have been accurate but not sufficiently compelling as to lead to a change in decision or policy.

So which criterion is most effective as a way to evaluate the quality of intelligence analysis? No one knows precisely, because it depends on how one defines the purpose of intelligence analysis, and as yet no consensus has developed on that issue. Nonetheless, all three criteria (accuracy, preventing surprise, and influence on policy) are employed in retrospective evaluations of intelligence agency performance, even though each has significant limitations and problems.[1]

Accuracy: The Unattainable Ideal

One way to evaluate intelligence analysis is according to an accuracy standard. This is an easy-to-understand, black-and-white absolute standard: the analysis is either accurate, or it is not. When the analysis is not accurate, that must mean there has been an intelligence failure, because the purpose of intelligence analysis is to be accurate and it has failed to achieve that objective. However, while using accuracy as an evaluative criterion is simple in theory, actually comparing the analysis to ground truth and determining whether the analysis was accurate or inaccurate can be very difficult in practice.

First, there is the presence of qualifiers in the analysis. Uncertainty is part of the intelligence production process. Intelligence collected rarely provides analysts with a complete picture of what is occurring in a foreign country. When a CIA analyst taps into all the various data streams the U.S. government funnels into its secure communication system, the first sense is of an overwhelming amount of information about all kinds of topics. When precise information is desired, such as the condition of a foreign country's weapons of mass destruction (WMD) program, a CIA analyst cobbles together bits and pieces of information to form a picture or story and frequently discovers many gaps in the data. As a result, an intelligence analyst's judgment frequently rests on a rickety foundation of assumptions, inferences and educated guesses.

Caveats and qualifiers are necessary in finished intelligence as a way to communicate analytic uncertainty. Intelligence agencies would be performing a disservice to policymakers if their judgments communicated greater certainty than the analysts possessed. Not making every analytic call correctly is just part of the territory in the broader process of governmental learning, but accurately reflecting uncertainties when they exist is crucial. Unfortunately, caveats also complicate assessments of intelligence accuracy. Words such as "probably," "likely" and "may" are scattered throughout intelligence publications and prevent easy assessment of accuracy. For example, if CIA analysts had said Iraq probably had weapons of mass destruction, was that analysis accurate or inaccurate? There is no way to tell, given the use of the word "probably," which qualified the statement to incorporate the analysts' uncertainty.

Removing caveats for the sake of simplicity in assessing intelligence accuracy also unfairly removes the record of analytic uncertainty and, in the end, assesses something with which the analyst never would have agreed. For example, if an analyst says that a coup is likely to occur in a foreign country within six months, and the coup attempt happened 12 months later, would that analysis be accurate or inaccurate? There is no easy way to tell. It was accurate in that a coup occurred, but inaccurate on the timeframe. The determination of accuracy, then, may depend on whether one is most concerned about the occurrence of the event or the timeframe in which it took place. In addition, the analytic judgment could not be considered completely accurate nor completely inaccurate; it is somewhere in between. It is for this reason that the then-Director of Central Intelligence, George Tenet, said, "In the intelligence business, you are almost never completely wrong or completely right."[2] Therefore, accuracy as a metric for evaluating intelligence analysis must be applied very, very carefully.

In addition, even if accurate analysis was produced, a "self-negating prophecy" resulting from analysis produced within a decision cycle could occur. This means that intelligence analysis can help change what may happen in the future, making the analysis inaccurate. Since intelligence analysis can influence what decisionmakers decide to do, and what they do has the potential to prompt or preclude actions of other international actors, an accuracy yardstick would not effectively capture the quality of the analysis. For example, if an intelligence analyst warns that a terrorist bombing is imminent, policymakers implement security procedures to deter or prevent the incident based on that warning, and the terrorists are deterred, then the warning will be inaccurate even though it helped prevent the bombing. This causal dynamic exists for all intelligence issues, including political, economic, and scientific ones, due to the nature of the intelligence mission. Therefore, post-hoc assessment of intelligence accuracy may not provide a true sense of the accuracy of the intelligence.

It is precisely because of these practical difficulties of using accuracy as a criterion that neither the Office of the Director of National Intelligence's analytic integrity and standards staff nor the Undersecretary of Defense for Intelligence uses it as a metric for analytic quality. While accuracy, or perhaps omniscience, is the desired goal of all intelligence analysts, using it as a criterion for reliably evaluating analytic quality is at this point not feasible.[3]
 
Preventing Surprise

Like accuracy, another absolute standard for evaluating analytic quality involves the prevention of decisionmaker surprise.[4] By describing, explaining, evaluating, and forecasting the external environment, intelligence analysts facilitate decisionmaker understanding to the point that decisionmakers are not surprised by the events that take place. When decisionmakers are surprised, by definition there must have been an intelligence failure, since the analysis failed to achieve its objective: preventing surprise.

The problem with this expectation, of course, is that surprise is ever present in international relations.[5] Many surprises are the intentional result of adversaries who employ secrecy to hide their intentions. Secrecy in policy creation and implementation magnifies the effectiveness of power application internationally because, when done successfully, the intended target has little or no time to effectively counter the respective policy. In military terms, this power magnification is known as a "force multiplier," although the concept is applicable to the economic and political arenas as well. Secrecy has thus become a ubiquitous technique in the implementation of most international policies as a way to ensure policy success through surprise.

Accordingly, preventing surprise has become just as necessary and is usually assigned to intelligence organizations because of their ability to uncover secrets. Not all surprises, however, can be prevented by uncovering secrets. Sometimes international forces can produce spontaneous events that surprise everyone involved, such as the fall of the Berlin Wall. These are the mysteries emphasized in some writings on intelligence.[6]
 
