War games and national security with a grain of SALT

The article comments on the excessive reliance by U.S. policymakers on computer-based simulation in crafting decisions on war, military balance, and national defense. The reliance on simulations is misplaced because most military simulations and games lack quality control, are one-sided and not well documented. Furthermore, the scenarios drawn are overstructured and illogical. Computer models do not take into account human limitations and organizational weaknesses. This omission can prove fatal during actual war.
GARRY D. BREWER and BRUCE G. BLAIR

". . . this entire science of long-range massive destruction, of calculated advantage or disadvantage in modern weaponry, has gotten seriously out of hand; . . . the variables it involves are rapidly growing beyond the power of either human mind or computer."
-George Kennan (December 1977)
It is difficult to determine how much influence models, simulations and games exert on defense programs and foreign policy. Clearly, however, their impact is substantial and growing, to the chagrin of Kennan and others who not only doubt the efficacy of quantitative analysis to inform decision-making and sharpen debate, but also fear that analytic rigor may become a substitute for sound judgment and common sense.

Because of its rigor and applicability to a variety of scientific pursuits, quantitative methodology enjoys a certain prestige in defense circles. Yet most defense studies that rely heavily on mathematical and statistical techniques are vulnerable on at least two counts:

"Data inputs have obscure, unknown, or unknowable empirical foundations, and the relevance of much data, even if valid, to the effectiveness of weapon systems is unknown" [1].

"The models, and the behavioral assumptions and propositions on which they are based, are not often reliable, and are usually not validated at all" [2].
Consequently, many analyses conceal spurious content behind protective layers of mathematics and statistics. Despite the high potential costs that may be incurred, in misplaced emphasis, unwarranted confidence, and unwise resource allocation, technical analyses are used increasingly for advocacy. In a 1977 speech at Mississippi State University, Secretary of Defense Harold Brown outlined the dangers of "expert advocacy":

"If, in the guise of analysis and exposition, [an expert] becomes an advocate for a particular decision, he sometimes may have the satisfaction of getting his own way, but only by substituting his own judgment for that of people who have the responsibility for decisions and who might weigh values differently if given all the facts, and whose judgment may be better."

More recently, in a speech at the Naval War College, Navy Secretary Graham Claytor complained that "One of the most frustrating things I have encountered in this job has been a tendency on the part of some staff people to use systems analysis as a cover for what is really subjective judgment" [3]. As examples of such misuse, consider the following allegations from a recent professional military publication:

"The Air Force rigged a model of Soviet air defenses to favor the B-1 bomber and suppressed several independent studies demonstrating that a new bomber did not need supersonic speed."

"The Navy's Sea Plan 2000, a comprehensive in-house study of future naval requirements, lacks analytic substantiation and is 'probably as blatant a political ploy and lobbying effort as any study in recent memory.'"

"The Navy has released only the conclusions of its studies and not their details, exerted tight censorship and suppressed dissent, and withheld potentially 'disruptive' studies from its own Navy Secretaries 'while they were reworked and edited to ensure that they reached the proper conclusions'" [4].

Given the attention with which defense analysts' pronouncements are received, the time has come for a critical look at our growing reliance on computer-based and highly abstract techniques to discover "facts" about war and the military balance. To this end, we shall examine historical evidence of analysis misuse and its consequences; briefly appraise the general quality of studies sponsored or conducted by the Department of Defense; and assess strategic analysis. This assessment will be illustrated by a critique of the well-known strategic war model advanced by Paul Nitze, an exercise undertaken because it exemplifies the general problem of analysis being used for both rhetorical and political purposes.

It is admittedly dangerous to rely on historical analogies, but related past events must at least be examined, since failure to learn from past mistakes risks incalculable costs. In a provocative essay [5], Paul Bracken exposes the "unintended consequences of strategic gaming" by the British in analyzing threats of German strategic air attack from 1922 onward; by the French in constructing defenses between the world wars; and by the Russians in preparing for a German assault on their western border.

In the British case, a small group of statistical specialists in the Air Staff prepared assessments of the likely German strategic threat to Great Britain.
Only their summary findings, with no documentation, were presented to the top decision-makers, nor were the findings ever subjected to detailed, external evaluation. Highly selective, "mythical" numbers were used in planning, leading to the construction of day bombers and the virtual exclusion of night bombers, fighters, and other necessary components of a total defense system. Extrapolation from those numbers in fifth- and sixth-order studies generated fear and mistrust in the public: Lloyds of London refused to issue any kind of war insurance; the Home Office determined that civilian losses from German bombing would be greater than the country's capacity to build coffins, so orders were issued for the construction of mass graves; and the Health Ministry judged it necessary to print over one million extra death certificates [6].
Bracken [5] cites the following mistakes and lessons to be learned from the experience:

• No one questioned the assumptions on which the studies and decisions were based; assumptions must be questioned constantly.

• No one reviewed the basic data; such a review would have shown that the numbers had been carefully selected to support the worst possible case.

• No one examined the structure of the models generated by the Air Staff analysts to determine just what kinds of outputs they were capable of generating; someone should have. All theory, data, and methods used to support politically charged and costly proposals for weapons must be subjected to thorough, independent review. The technique of selective omission can be used to prove nearly anything.

• One must question "political strategists" who use highly quantitative analyses produced by others but do not cut through the detail to understand precisely what the analyses include and omit, what their limitations are, and where the basic data came from.

The French case points up another lesson. After the strategic decision was made to build the Maginot Line, most analyses focused on the technical details of that fortification system. Calculations of range, thickness of concrete, firing angle and the like became a substitute for more comprehensive thought, and an anesthetic for decision-makers who refused to confront the real problems posed by a mobile, flexible enemy. Those analyses diverted attention from the real problems facing French military strategists. The lesson is that analyses can be used to keep certain facts and contingencies from scrutiny.

The Soviets learned the same lesson the hard way in their preparations for World War II. During the 1930s a series of remarkable strategic games was conducted that might have afforded insight into the coming conflict. However, any gaming that deviated even slightly from Stalin's strategic doctrine was quickly suppressed. In fact, a brilliant Soviet general was purged partly for his heterodox way of playing the games [7].
In time, otherSoviet military thinkers learned that
it
was personally hazardous to dis-pute the basic strategic assumptions,no matter how fatal they seemed,
so
dissent ended and the games, wereincreasingly played by foreordainedrules Incidentally, the Japanesefell prey to organizational self-delusion
in
their prewar gaming ofthe Pearl Harbor attack
[9]
and theBattle of Midway
[lo].
The skeptical reader may be
in-
clined to dismiss these hoary illus-trations by asserting that the lessonshave indeed been learned and thattoday’s analyses are
truly
scientific.Current practice, we argue, does notwarrant such confidence.Although we do not attempt a fullappraisal of the
qrrnlity
of defensestudies here, several key findings ofa recent survey and assessment arerelevant
[l,
pp.
7-11].
Military analyses are often used for advocacy. Most military models, simulations, and games are produced by in-house analytic staffs and are not well documented. If model builders do not question the environment set by those soliciting the work, they may be perpetuating error in their models, for practically any point of view can be supported by selecting appropriate "guesstimates" about the environment studied [11].

Modeling is one-sided. Competing groups of experts hardly communicate, and "countervailing" technical expertise is seldom called upon to strengthen a technical analysis by playing devil's advocate.

Quality control is virtually nonexistent. Most work is not subjected to thorough, independent review. If documentation is provided at all, it is of highly uneven quality or comprehensible only to the model builders. Standards of scientific validation are similarly lacking. In fact, the rubric "Validation is a happy customer" appears to be as typical a criterion as any [12].

Insufficient basic research has been done on topics that should routinely be treated in military models. Data validation, sensitivity analysis, and other technical tests of model validation are not done, nor is there much evidence of interest in doing them. The existing basic research on "softer" topics, such as panic behavior, threat and confrontation, and other aspects of human behavior and motivation, is inadequate as a base for constructing, testing and using these models. Instead, the emphasis is on mechanistic, engineering-like weapon studies that omit far more of importance than they treat.

The tenuousness of much of the data, the undeveloped state of validation, and the neglect of such important procedures as sensitivity analysis and scrutiny of the work for its relevance to realistic conditions lead to the conclusion that advocacy rather than scientific inquiry is a prime motive.
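What even a minimal sensitivity test would look like is worth making concrete. The sketch below is purely illustrative: the attrition model, its parameter names, and every value in it are invented for this example rather than drawn from any Defense Department study. It simply perturbs each input of a toy combat model by 20 percent and watches the conclusion move:

```python
# Illustrative only: one-at-a-time sensitivity testing of a toy attrition
# model. The model, parameter names, and values are all invented for this
# example; nothing below comes from an actual defense study.

def force_ratio(blue, red, blue_rate, red_rate, hours):
    """Crude hourly attrition: each side erodes the other at a constant
    per-unit rate. Returns the surviving blue/red force ratio."""
    for _ in range(hours):
        blue, red = (max(blue - red * red_rate, 0.0),
                     max(red - blue * blue_rate, 0.0))
    return blue / red if red else float("inf")

baseline = {"blue": 1000.0, "red": 1200.0,
            "blue_rate": 0.010, "red_rate": 0.008, "hours": 48}
print(f"baseline outcome: {force_ratio(**baseline):.2f}")

# Perturb each uncertain input by +/-20% and report the swing in the output.
for name in ("blue", "red", "blue_rate", "red_rate"):
    low, high = dict(baseline), dict(baseline)
    low[name] *= 0.8
    high[name] *= 1.2
    print(f"{name:10s} -20%: {force_ratio(**low):.2f}"
          f"   +20%: {force_ratio(**high):.2f}")
```

Even so crude an exercise shows which guesses dominate a conclusion; the survey's finding is that defense models rarely receive even this much scrutiny.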
Perhaps senior officials responsible for the billions of dollars worth of war-fighting machinery and many millions of lives cannot be expected to care too much about technical matters. The link between those who build models and those who eventually use their results is tenuous at best. Even the more technically sophisticated general or high civil servant has little time to learn the fine points of strategic analyses. The problem of communication between analytic staff and policy-maker is further complicated by desires to keep the boss happy and not to "open up a can of worms," desires which ensure that the few chunks of information presented to a senior official are carefully selected and predigested [13].
As the results of an analysis move into the stratosphere of command and decision, through successive layers of intermediaries, the message, whether from a quick and dirty model or a competent one, tends to become more concise and less equivocal. Only so much predigestion can occur before all nutritive content has been extracted.

Our calculations are remarkably precise in matters in which we have had no experience, such as nuclear war, and remarkably guarded and qualified in matters brimming with accumulated data, such as conventional war. The attitude of strategic calculators in the last decade has resembled that of the freshman algebra student who tailors his problems to the few tools at his disposal and presumes that problems are thereby solved. In both cases the wisdom is dubious.

The precise specification of strategic parity, for example, presents a difficult technical problem because modern strategic forces have quite different environmental, technological, institutional and human constraints. The analyst resorts quickly, almost automatically, to the use of highly aggregated and abstract indices; however, the expression "quantitative methodology" is fundamentally misleading because the requisite quantitative foundation is simply lacking or is highly uncertain, selective or judgmental [13, 14].
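To see how much such indices discard, consider one of them: equivalent megatonnage (EMT), a standard static measure of the period, conventionally computed as warhead count times yield in megatons raised to the two-thirds power. The sketch below uses invented force mixes, not actual 1979 arsenals:

```python
# Illustrative only: "equivalent megatonnage" (EMT), a typical aggregate
# strategic index. The force mixes are invented, not actual 1979 arsenals.

def emt(force):
    """Sum of warheads * yield^(2/3), with yield in megatons; the
    two-thirds exponent is the conventional area-destruction scaling."""
    return sum(n * y ** (2 / 3) for n, y in force)

side_a = [(1000, 0.17), (550, 1.0)]   # (warheads, yield in Mt); hypothetical
side_b = [(800, 0.80), (300, 5.0)]    # hypothetical

print(f"side A EMT: {emt(side_a):.1f}")
print(f"side B EMT: {emt(side_b):.1f}")
```

Two dissimilar forces collapse into two scalars; accuracy, reliability, basing, command and control, and every other constraint just named have vanished from the comparison.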
During the ABM debate a decade ago, both critics and proponents viewed the role of quantitative methods in the proper perspective: They did not argue about technical facts, "since in this complex area we are dealing at most with technical judgments, and more often with political opinion" [15]. The debate was intelligent.

A focal issue in the debate about SALT has been the vulnerability of America's force of land-based Minuteman missiles. That vulnerability, once projected, became a political fact; it may become technical reality as well. But it is impossible to determine the true picture, given the many conflicting assessments. The strong policy assertions made without qualification and on the basis of closely guarded data, and the claims of validity for projections five, ten, and more years in the future, leave little doubt that we are again dealing at most with technical judgments and more often with emotions and political opinion. There are as many sets of "facts" as there are advocates.
Early in 1977 the Joint Chiefs of Staff set the upward safe limit for Minuteman survivability, an analytically determined threshold, at 550 land-based and MIRVed Soviet missiles. In March of that year a constraint limiting each side to 550 MIRVed ICBMs was proposed as part of President Carter's comprehensive SALT plan. After the Soviets rejected this proposal, the United States agreed to a higher ceiling of 820 such missiles. Paul Nitze calculated that under the proposed ceiling the Soviet Union could destroy 90 percent of the Minuteman force using less than half of its own warheads, while the United States could not pose a comparable threat by 1985 [16, p. 13]. Appearing before the Senate Committee on Foreign Relations, Secretary of State Cyrus Vance offered to submit "a complete counter analysis" to Nitze's assertion [16]. The latest Pentagon position is that Minuteman could become vulnerable by the early to middle 1980s with or without a SALT II agreement [17, p. 106]. A study reported in the Washington Post of December 9, 1977, demonstrates that Minuteman could have become vulnerable even if the March 1977 proposal had been accepted. Finally, the Secretary of Defense doubts that the projected vulnerability of both U.S. and Soviet silo-based ICBMs can be reversed by negotiated accords [17, p. 106].
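Although Nitze's own inputs were closely guarded, the arithmetic behind any such silo-vulnerability claim is the standard single-shot kill probability model: if each incoming warhead independently destroys its target silo with probability p, then n warheads aimed at each silo leave a fraction (1 - p)^n of the force surviving. The sketch below uses illustrative values, not Nitze's:

```python
# Illustrative only: the standard silo-survival arithmetic behind
# Minuteman-vulnerability claims. The kill probability and warhead
# loading are invented; Nitze's actual inputs were closely guarded.

silos = 1000              # approximate size of the Minuteman force
missiles = 820            # proposed ceiling on MIRVed Soviet missiles
warheads_each = 10        # warheads per missile (hypothetical loading)
p_kill = 0.7              # single-shot kill probability (hypothetical)
shots_per_silo = 2        # two warheads targeted on each silo

surviving = (1 - p_kill) ** shots_per_silo   # fraction of silos left
expended = silos * shots_per_silo            # warheads used in the attack
available = missiles * warheads_each

print(f"Minuteman force destroyed: {1 - surviving:.0%}")
print(f"warheads expended: {expended} of {available} "
      f"({expended / available:.0%})")
```

Everything pivots on the kill probability, which folds together contested estimates of accuracy, yield, and reliability; that is precisely how the same ceiling of 820 missiles can support opposite conclusions.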
Most analysts agree that the technical trend runs against silo-based forces, but many differ on the probable timing and degree of vulnerability. Important policy choices, for instance the ratification of SALT II and decisions on the development and deployment of new land-, air-, and sea-based systems, turn on such details. Regrettably, advocacy seems to have prevented the development of a clear statement of the problems and their possible solutions.

For the more abstract, ambitious, and uncertain task of projecting the outcome of a nuclear war involving the full contingent of strategic forces on both sides, advocacy's pernicious effects become even more apparent. Not only do the analyses become more simplified, but the issues they consider appear to be fewer, understandable, and resolvable. Paradoxically, however, the conclusions reached by different interest groups, including attendant policy prescriptions, also become sharply polarized and contradictory.

One conclusion, summarized by Defense Secretary Brown in the Defense Department's 1979 annual report, relies on the results of an assessment of the strategic balance after a Soviet counterforce attack to which the United States retaliates with a counterforce strike. For the period analyzed, 1978 through 1987, and for all scenarios considered, the post-attack advantage lies with the United States [17, p. 104].

Nitze reaches the opposite conclusion [18]. The United States fell behind strategically in 1973 and has . . .

Garry D. Brewer is professor of organization and management and political science at Yale University. He is the former editor of Simulation & Games (1977-1979), and the author of Politicians, Bureaucrats, and the Consultant (1973) and co-author of The War Game: A Critique of Military Problem Solving (1979).