Journal of Strategic Studies
ISSN: 0140-2390 (Print) 1743-937X (Online)

To cite this article: James G. Roche & Barry D. Watts (1991) 'Choosing analytic measures', Journal of Strategic Studies, 14:2, 165-209, DOI: 10.1080/01402399108437447

Published online: 24 Jan 2008.

Choosing Analytic Measures
James G. Roche and Barry D. Watts

The conceptual problems in constructing an adequate or useful

measure of military power have not yet been faced. Defining
an adequate measure looks hard, and making estimates in real
situations looks even harder.
A. W. Marshall, 1966
For most higher-order problems adequate measures of merit have
yet to be devised. Even if some improvement may be hoped for,
the irreducibly subjective element in such measures will remain.
James R. Schlesinger, 1967
Generally speaking, the pioneers of analytic disciplines like operations
research and systems analysis were intensely concerned with the various
conceptual and pragmatic difficulties of choosing appropriate measures.
They realized instinctively that poor choices of measures could trap analysis
into reaching mistaken, misguided, or irrelevant conclusions, and that even
the best of measures, if pushed too far, could blind analysis to the broader
aspects of the problem at hand.
As time has gone by, however, the general problem of choosing analytic
measures has largely receded into the background. It is no longer, for
example, a major and early topic in most operations-research texts, as it
was in Philip M. Morse and George E. Kimball's 1946 classic Methods of
Operations Research. The present authors have come to believe that this
tendency toward the increasing neglect of the problem of measures has
been a most unfortunate one, and that the time to reverse this trend is
long, long overdue. The motivation behind the present essay, then, is to
take a fresh look at the old problem of choosing analytic measures.
To a great extent, the problem of measures in contemporary defense
analysis is the problem of an unbridled tendency to quantify everything.
The prevailing wisdom has come to be that of William Thomson (Lord
Kelvin): 'When you can measure what you are speaking about, and express
it in numbers, you know something about it; but when you cannot measure
it, when you cannot express it in numbers, your knowledge is of a meager
and unsatisfactory kind'.4 By contrast, it will be our basic contention
that Lord Kelvin's viewpoint is, at the least, shortsighted. The truth of
the matter, we would suggest, is much closer to Bert Fowler's recent
observation that the desire to quantify everything 'has too often become
a substitute for thinking', and promulgated elaborate quantitative analyses
'that seem to disregard history and even current experience'.5


In what follows, we will utilize military history and contemporary
experience to explore the issue of choosing adequate measures. Our
aims will be twofold: first, to delineate, based on history and experience,
the limits of quantification in everyday defense analysis; and, second, to
indicate the sorts of qualitative measures and analyses that could help us
to get beyond those limits in the future. The overarching implication of
the line of inquiry implicit in these aims will be to push the bounds of
acceptable analysis beyond quantification. In our judgment, qualitative
measures and judgments must, contrary to Lord Kelvin and his intellectual
heirs, be accorded their proper place alongside - and, in some cases, over -
quantitative ones. In this sense, the seemingly esoteric problem of choosing
measures is in truth a strikingly real, if not decisive, issue for the future
of defense analysis.

Introductory Examples: The Machine Gun and Strategic-Nuclear Forces

Two brief examples may help to make some of the preceding generalizations
about the problems of choosing measures a bit more concrete. Both focus
on the potential dangers of trying to evaluate technologically changed or
revolutionary weapons using the measures worked out for mature systems
whose combat utility is well understood. The main difference between the
two is one of currency. Whereas the case of the machine gun is historically
remote, the task of choosing adequate measures to evaluate the evolving
balance between US and Soviet strategic forces remains a current and
ongoing problem.
The case of the machine gun is fundamentally one of lost opportunities.
From 1861 to about 1903, the US Army as an institution failed to appreciate
the revolutionary potential of the machine gun as an infantry weapon and,
as a result, failed to embrace it. Why?
A fairly detailed account of this unfortunate history can be found in
Lieutenant Colonel David Armstrong's 1982 Bullets and Bureaucrats: The
Machine Gun and the United States Army, 1861-1916. As Armstrong's
title suggests, one of the reasons for the US Army's inability over four
decades to develop the hardware, doctrine, organizational structures,
tactics, and other support necessary to make the most of the machine
gun was bureaucratic opposition. Among other things, the Ordnance
Department's weapons-development system was content to wait passively
for civilian innovation in weaponry; the rules governing the boards of
officers that evaluated machine guns during this era severely constrained
what could be reported; and, until 1903, the US Army lacked agencies with
a major interest in doctrinal or organizational development.6
Beyond such bureaucratic difficulties, however, Armstrong's book iden-
tifies a more fundamental impediment to the machine gun's adoption: the
analytic or effectiveness measures that dominated official army thinking
during this period. Most of the machine guns developed or fielded during
the years 1861-1903 - particularly the Gatling gun - were too like
artillery pieces in their appearance and logistics requirements to be viewed
as infantry weapons; and, when judged as artillery pieces, the machine
guns of this period were inevitably (but not surprisingly) found wanting.7
For infantrymen and artillerymen alike, the implicit assumption that the
machine gun's effectiveness as an artillery piece was the appropriate
measure blinded them to its potential as an infantry weapon.8 Addition-
ally, the Ordnance Department emphasized factors such as cost, durability,
and the interchangeability of parts over operational capabilities.9 For both
operational soldiers and the Ordnance Department's weapons developers,
the choice of measures appears to have played a major role in blocking
the US Army's recognition of the machine gun, 'as a new and radically
different weapon', for the better part of half a century.10 In this instance
at least, the choice of measures had a palpable and prolonged impact on
the army's ability to field the weapon that would dominate the battlefields
of World War I.
A similar case can be seen in the extent to which changes in the
international security environment and weapons technology since the
1960s have undermined the prevailing criteria and assumptions by which
strategic-nuclear forces are judged. As John Battilega and Judy Grange
argued at a military operations research symposium in 1987, US deterrence
policies of the 1960s emphasized target destruction criteria as the principal
effectiveness measures for judging strategic forces, and, in time, American
forces were to a fair extent optimized to satisfy these criteria.11 Yet today,
as we look ahead to the short-term likelihood of a strategic arms reduction
treaty with the Soviets that would limit each side to 6,000 'accountable'
warheads,12 our principal concern about strategic forces seems to have
shifted from target destruction to stability - especially to stability in the
sense that neither we nor the Soviets would face strong pressures in a
crisis to launch a nuclear strike against the other's homeland.13 Moreover,
of the nine other analytic assumptions cited by Battilega and Grange as
underpinning most of the current models used for assessing strategic forces,
there are none which have not been seriously undermined by technological
advances since the early 1960s. For example, the premise that the number
of weapons is small compared to the number of targets has been obviated
by the development of ever smaller warheads and multiple independently
targetable re-entry vehicles for ballistic missiles; and the assumption that
most strategic weapons could be directly targeted by the nuclear forces
of the other side has been eroded by ballistic-missile submarines and the
appearance of mobile land-based systems such as the Soviets' SS-25.
Of course, unlike the case of the machine gun, the long-term con-
sequences of these growing disparities between strategic realities and the
analytic measures traditionally used to assess them have yet to be played
out. We simply do not know whether the consequences will ultimately
prove trivial, catastrophic, or somewhere in between.14 Nevertheless, there
is a deeper point that can be made which is common to the historical case
of the machine gun as well as to the growing inadequacies of accepted
measures for judging strategic-nuclear forces. It is that analytic measures
are potentially perishable. Like it or not, technological and other changes
can erode the appropriateness of the criteria by which we have become
accustomed to assessing a given category of weapons or forces.
Consequently, the choice of analytic measures is more than an esoteric,
'academic', or trivial matter. The measures we select can blind us to
new possibilities at the level of strategic choices and arms control as
readily as at the level of individual weapons like the machine gun. It
is for such pragmatic reasons that we will now pose two elementary
questions about what might be termed the general problem of analytic measures.
Why is choosing decent measures for a given analytic problem so difficult?
Further, are the difficulties of selecting appropriate measures likely to be
finally solved or go away?
We will endeavor to answer these questions by examining the difficulties
and complexities of analytic or effectiveness measures in two historical
instances. They are: (1) the strategic bombing of Germany from mid-1943
to the end of the war in Europe; and (2) Jimmy Doolittle's April 1942
attack on Tokyo.
The basic thrust of these cases will be to document the inherent
complexities and irreducible difficulties of choosing truly adequate analytic
measures. The general line of argument will be to show just how uncertain,
if not unpredictable, the linkages between causes and effects in war can be.
The wedge of uncertainty that these case studies ultimately drive between
ends and means in war is not, however, entirely negative in its implications.
The very same historical examples used to attack tight causal linkages
between measurable input and measurable output in military affairs also
suggest qualitative criteria - notably second-order consequences, virtual
attrition, and holding targets at risk - which should help us make wiser
choices of measures in the future.

Measures in Early Military Modeling, Operations Research, and Systems Analysis

Before tackling the selected cases, it seems worthwhile to review some of
the reflections that experienced practitioners of military modeling, operations
research, and systems analysis have offered on the problems of measures.
As will quickly be apparent, the profound difficulties of selecting adequate
measures have a long history.
The extraordinary success of the physical sciences over the last 300
years has created a certain expectation that the quantitative methods of
mathematics and the natural sciences can be fruitfully applied to areas
like military affairs. One of the earliest such attempts was Frederick
W. Lanchester's postulation in 1916 of mathematical 'laws' relating force
ratios to attrition. Lanchester basically postulated one relationship, his
so-called 'linear law', for engagements between opposing forces equipped
with 'ancient' weapons such as swords and pikes; he postulated a second,
his 'square law', for engagements between sides armed with 'modern, long-
range' weapons such as rifles.15
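In modern notation (ours, not Lanchester's own), the two laws are coupled differential equations, and the square law's implication that numerical strength counts quadratically rather than linearly can be checked with a few lines of numerical integration. The parameter values below are purely illustrative:

```python
# Sketch of Lanchester's 'square law' (modern notation, not his
# original symbols): under aimed long-range fire, each side's loss
# rate is proportional to the NUMBER of enemy shooters.

def square_law(a0, b0, alpha, beta, dt=0.001):
    """Euler integration of dA/dt = -beta*B, dB/dt = -alpha*A,
    stopping when one side is annihilated."""
    a, b = a0, b0
    while a > 0 and b > 0:
        a, b = a - beta * b * dt, b - alpha * a * dt
    return max(a, 0.0), max(b, 0.0)

# Equal per-man effectiveness, but one side twice as numerous.
a_left, b_left = square_law(200, 100, alpha=0.01, beta=0.01)

# The square law conserves alpha*A^2 - beta*B^2, so the larger side
# should win with roughly sqrt(200^2 - 100^2) ~ 173 survivors.
print(round(a_left), round(b_left))
```

Lanchester's 'linear law' for 'ancient' weapons replaces these rates with terms proportional to the number of individual engagements, so that only the linear difference in force sizes is conserved.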

Implicit in Lanchester's attrition models is a definite judgment about
which measures are most relevant to a 'scientific' analysis of engagement
dynamics. In effect, force ratios and attrition over time were presumed by
Lanchester to be the appropriate analytic measures of inputs and outcomes
during combat. This judgment was embedded in the further assumption
that, for purposes of scientific treatment, most other things, especially
hard-to-quantify factors like leadership, combatant skill, tactical doctrine,
etc., could be, on average, equal between the two sides. So for Lanchester,
inflicting greater attrition on the enemy was taken to be the crux of victory,
all other things being equal.
On one hand, a measure like attrition would seem to be exactly the sort of
no-nonsense, concrete, and quantifiable parameter one would concentrate
upon if the study of warfare is to be put on a rigorous, 'scientific' basis. On
the other hand, actual combat data does not appear to support the existence
of any solid, mathematical relationship between attrition and victory. As
Robert McQuie concluded in 1987 on the basis of data from some 80 battles
during the years 1941-1982:
No matter how casualties are measured, battles have been given up as
lost when casualties ranged from insignificant to overwhelming. . . .
The outcome of battle modeled by the Lanchesterian equations
postulates a development of combat in response to casualties incurred.
During the last 50 years, however, battles appear to have been
resolved largely on the basis of other considerations [such as having
been outmaneuvered].16
McQuie's assessment of modern data concerning the relation between
attrition and outcomes suggests a very important lesson regarding the
selection of analytic measures. In military affairs, the most obvious or
readily quantifiable measures may not necessarily be the right ones at all.
Consequently, the selection of measures is a critical step for any analysis.
This insight is not, of course, new. Many of the founding fathers of
operations research during World War II and, later, of systems analysis
have expressed similar views. Take P. M. S. Blackett. A Nobel Prize
winner, Blackett is considered one of the pioneers of the brand of
military operations research that developed in the American and British
armed forces during the Second World War. Looking back, Blackett
characterized the operations research of that era as 'a scientific method
for providing executives with a quantitative basis for decisions'.17 A classic
example of such analysis, in Blackett's view, was the 'proof that large
convoys were safer than small ones'.18 In the dark days following Dunkirk,
when Britain's survival hinged on her sea lines of communication with the
United States, the problem of minimizing convoy losses to German U-boats
became a critical one, and British operations research made an important
contribution to eventual Allied success in the Battle of the Atlantic.
Yet Blackett's own reflections on such early operations-research suc-
cesses lead directly back to the difficulties of selecting adequate measures.
Not all operations problems, Blackett was the first to admit, are 'capable
of being tackled scientifically'.19 Therefore, he argued, the operations
researcher's first task is to find those 'pregnant problems' that are
susceptible to quantitative analysis. In the case of convoy size, the
proof that larger convoys were safer arose from an investigation into
the protective value of convoy escorts. This investigation, however,
was undertaken simply because an operations researcher happened to
be present at a meeting of the Anti-U-boat Committee at 10 Downing
Street when the problem arose as to how best to divide limited shipbuilding
resources between merchant ships and escort vessels.20 The impact of such
serendipitous events as the presence of the right person at a particular
meeting on World War II operations research led Blackett to emphasize
that it is essential for senior operations research workers to be admitted
to the executive levels as observers and potential critics. He also felt it
necessary for senior operations researchers to have, whenever possible,
close personal relationships with executive decision makers. Otherwise, he
cautioned, the 'pregnant problems' might be overlooked.
For similar reasons, Blackett suggested that the most important qualifi-
cation for success as an operations researcher was not scientific training but
the 'ability to take a broad view of a problem' so that 'important factors'
are not overlooked in the analysis (a point to which we will return).21
Some knowledge of statistical methods would be required, he thought, and
specialist knowledge in the field of application, intelligence and enthusiasm,
would certainly be desirable. But the most vital qualification of all, in
Blackett's view, was the 'right personality': one that would allow the
operations researcher to win the confidence of decision makers, who in turn
would use the findings, and to formulate those results in a manner that
would 'appear convincing to the executive personnel'.22
These observations suggest that common sense and judgment are
obligatory on at least three levels of operations research: first, in choosing
the right problem; second, in choosing the right measures for the analysis;
and, third, in choosing the right measures for communicating the results
of the analysis to decision makers. As the first of these - picking a useful
problem - is itself a determination which cannot be usefully analysed using
any conceivable quantitative measures, all three choices can be reduced
to the generic problem of selecting appropriate analytic measures. The
second and third are explicitly so; and the first boils down to realizing
there are problems for which no choice of measures would be appropriate
or analytically fruitful.
Concerns over the problems of effectiveness measures are, if anything,
even more prominent in the work of Philip Morse and George Kimball,
who are regarded as the founders of operations research in the US Navy
during World War II. In their view, if the operations of war were to be
subjected to scientific, quantitative study, researchers had to 'ruthlessly
strip away details' in order to arrive at 'a few broad, very appropriate
"constants of the operation"' which could be used to estimate operational
efficiency by comparing 'theoretically optimum values' with those observed
during actual combat.23 The sweep rates of aircraft or submarines searching
for enemy vessels, ratios of air-to-air kills versus losses, exchange rates
between submarines and surface vessels, and measures of accuracy for
aerial bombing or naval gunfire are representative of the various analytic
measures described by Morse and Kimball in their 1946 text on the methods
of operations research.24
To be emphasized, however, is that measures such as sweep rates and
kill ratios, straightforward though they may appear, were considered by
Morse and Kimball to be, at best, coarse-grained criteria which manifested
significant analytic limitations. Even in the case of simple operational
situations for which theoretical values of the constants of the operation
could be computed, their rule of thumb was that unless the theoretical
values were at least 'a factor of three' greater than those actually observed,
it was 'extremely unlikely' that 'significant improvement' could be made
on the basis of operational analysis.25 As for more complex situations
in which theoretical values could not be readily computed, Morse and
Kimball admitted they included many important problems of choice.
Enemy submarines, for example, could be combated by convoying, by
direct attacks on the high seas, or by going after them in port; similarly,
aircraft could be used to attack enemy front-line troops or the factories in
which their weapons and munitions were produced. But when World War
II operations researchers tackled these broader problems of choice, they
usually found it 'hard to find a common unit of measure', and, as a result,
decisions between such 'strategic' alternatives were frequently influenced or
made on the basis of 'political or other non-quantitative' considerations.26
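Morse and Kimball's sweep rate makes their procedure concrete: compute an idealized 'constant of the operation' and compare it with the value observed in practice. The idealized form Q = 2Rv is a standard search-theory approximation (an aircraft clearing a strip twice its detection range wide); the operational numbers below are invented for illustration:

```python
# Illustrative sketch of comparing a theoretical 'constant of the
# operation' with its observed value, using search sweep rate.
# All numbers are hypothetical, not drawn from wartime data.

def theoretical_sweep_rate(detection_range_nm, speed_knots):
    """Idealized sweep rate: a searcher clears a strip 2R wide at
    speed v, i.e. Q = 2*R*v, in square nm per hour."""
    return 2.0 * detection_range_nm * speed_knots

def operational_sweep_rate(contacts, search_hours, target_density):
    """Observed sweep rate inferred from operations: contacts per
    flying hour, divided by targets per square nm in the area."""
    return contacts / (search_hours * target_density)

q_theory = theoretical_sweep_rate(detection_range_nm=10, speed_knots=150)
q_actual = operational_sweep_rate(contacts=20, search_hours=4000,
                                  target_density=1e-5)

# Morse and Kimball's rule of thumb: only when theory exceeds practice
# by a factor of three or more is 'significant improvement' likely
# achievable through operational analysis.
print(q_theory / q_actual >= 3.0)
```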
Nor were these systemic problems of how best to scale and measure
effectiveness solved when systems (or cost-effectiveness) analysis was
brought to the Pentagon in the early 1960s by Robert S. McNamara and
applied to major resource-allocation issues.27 As thoughtful practitioners
such as James R. Schlesinger were the first to admit, even the so-called
'hard' elements with which systems analysis started - calculations of
costs (or inputs) and effectiveness measures (or outputs) - were not
without their problems and limitations. First, implicit in the very term
'cost-effectiveness' was (and remains) a presumption that political, social,
and other non-economic factors could be assigned little, if not zero, weight
in cost-benefit analyses.28 This questionable tendency was reinforced by
a preference among systems analysts for formal, mathematical models
which, in many cases, led to the neglect of important political, social,
and other variables.29 Second, there was the bureaucratic fact of life
that analyses could be easily skewed, biased, or constrained by decisions
about terms of reference, choice of scenario, or strategic assumptions of
the decision makers whom the analysis was intended to serve. Finally,
the analytic measures devised by defense secretary McNamara's analysts
suffered the same inability to cope with higher-order strategic choices
noted by Morse and Kimball. In fact, there remains every reason to
believe that Schlesinger's 1967 observation that 'adequate measures of
merit have yet to be devised' for 'most higher-order problems' continues
to be valid right down to the present day.30 For example, the probability
of arrival of a weapon at its target in a given scenario can easily be given
a numerical value, but quantifying the weapon's overall strategic utility
across a range of divergent scenarios is a different matter. The toughest
problems of strategic choice, 'dominated as they are by uncertainties and
by differences in goals', have simply not yielded to systems analysis and
similar 'analytic' approaches.31
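The contrast can be made concrete. The single-scenario quantity is just a product of conditional probabilities (the factor names and values here are a generic textbook decomposition, not the authors'):

```python
# Probability that a weapon arrives at its target in one scenario:
# a simple product of conditional probabilities. The decomposition
# and the numbers below are illustrative assumptions only.

def prob_arrival(p_prelaunch_survival, p_reliability, p_penetration):
    return p_prelaunch_survival * p_reliability * p_penetration

print(round(prob_arrival(0.90, 0.85, 0.80), 3))   # 0.612
```

No comparable closed form exists for the weapon's utility across divergent scenarios, since the scenario weights themselves embody precisely the uncertainties and differences in goals that the text describes.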
The choice of measures, then, has been a first-order problem since
the earliest days of military modeling, operations research, and systems
analysis. Granted, those who have labored to apply such methods to
real-world issues have not been without their successes.32 The application
of quantitative analysis to operational and resource-allocation questions
has clarified some issues, and greatly sharpened our appreciation of the
trade-offs and dynamics in many others. Nonetheless, the difficulties of
choosing effectiveness criteria that are not too coarse-grained, too narrow,
or too remote from the complexities of operational and strategic realities
still persist.
As a matter of fact, not only do the fundamental difficulties of choosing
measures remain unresolved, but it is not even clear that our ability to cope
with them has improved with time. Due to what Andrew Marshall has called
the 'increasing routinization of analysis', US defense analysts today may not
generally deal as well with the problems of selecting measures as they did
in the 1950s.33 As Marshall points out, during the early days of applying
systems analysis to strategic warfare at RAND, everyone had to think long
and hard about effectiveness measures. After all, at that time war involving
nuclear weapons was uncharted territory, and the RAND analysts of that
era had no inventory of ready-made analytic measures from which to draw.
Instead, they had to develop measures as they progressed. By contrast,
analysts today have access to a large array of measures and models.
But if such tools are routinely pulled off the shelf and applied without
due consideration of their appropriateness, analysis runs the danger of
becoming more and more divorced from reality, which is basically what
Marshall senses has started to occur.
If nothing else, this prospective 'routinization of analysis' offers yet one
more motivation for revisiting the general problem of analytic measures.
Two 'case studies' designed to facilitate precisely such an examination have
already been identified. It is to the first of these that we now turn.

The Strategic Bombardment of Germany from June 1943 to April 1945

This section focuses on the 'combined bomber offensive' (CBO) period
of the Anglo-American strategic bombing effort against Germany during
World War II. Because a large volume of data on the bombing of Germany
was both collected and analyzed, the CBO is especially rich in lessons
concerning the profound difficulties of choosing adequate measures of
military effectiveness.
The CBO officially began on 10 June 1943 when the Anglo-American
Combined Chiefs of Staff established the POINTBLANK target system.
The initial objective of POINTBLANK was to ensure the defeat of the
German Air Force prior to the Normandy landings, first and foremost by
the precision bombardment of German aircraft assembly plants.34 As the
war progressed the CBO shifted its primary emphasis to other military
and industrial target systems, including transportation and synthetic oil.
The fundamental division of labor throughout the CBO period was for the
British to conduct aerial bombardment of German industrial cities by night
while the Americans carried out precision attacks against 'vital' military-
industrial target systems by day. The ultimate aim of the combined strategic
bombing campaign was the 'progressive destruction and dislocation of the
German military, industrial and economic systems and the undermining
of the morale of the German people to a point where their capacity for
armed resistance' would be 'fatally weakened'.36 However, the majority
of the bomb tonnage dropped on Germany - over 70 per cent in fact - fell
during the final ten months of strategic operations (July 1944 to mid-April 1945).
Disagreement over the effectiveness of the CBO began to surface even
before the war in Europe ended, and debate continues to this day. The
fundamental point of contention concerns the decisiveness of the CBO.
During the 1930s and 1940s, American bomber enthusiasts were not content
merely to claim that mastery of the air had become a necessary precondition
for victory on the ground or at sea. Rather, they boldly asserted that
air power alone could defeat a 'highly industrialized enemy nation' by
destroying 'vital targets' in the so-called 'industrial web' underlying the
enemy's war machine.38 The American doctrine can be summarized as follows:
The most efficient way to defeat an enemy is to destroy, by means
of bombardment from the air, his war-making capacity; the means to
this end is to identify by scientific analysis those particular elements
of his war potential the elimination of which will cripple either his
war machine or his will to continue the conflict; these elements
having been identified, they should be attacked by large masses of
bombardment aircraft flying in formation, at high altitude, in daylight,
and equipped with precision bombsights that will make possible the
positive identification and destruction of 'pinpoint' targets; finally,
such bombing missions having been carried out, the enemy, regardless
of his strength in armies and navies, will lack the means to support
continued military action.39
The daylight portion of the CBO from mid-1943 to April 1945 constituted
an explicit attempt to carry out and, simultaneously, validate this doctrine
against Germany.
Initially, there was considerable optimism among US bomber enthusiasts.
As one American bombardment wing commander wrote in July 1943,
'There is no question in my mind as to the eventual result. VIII Bomber
Command is destroying and will continue to destroy the economic resources
of Germany to such an extent that personally I believe that no invasion of
the Continent or Germany proper will ever have to take place'.40
Moreover, even in the months immediately preceding the Normandy
landings (Operation Overlord), many American airmen persisted in the
opinion that the Allies might eventually have to fall back on strategic air
power to defeat Germany. In fact, no less than General Carl Spaatz, then
head of the US strategic air forces in Europe, expressed the view in March
1944 that the landings could not succeed, and that once they had failed the
airmen would then be called upon to show how the war could be won 'by
air power'.
In the end, of course, Overlord did not fail. The western Allies
successfully landed in Normandy in June 1944, and the war in Europe
ended when their armies linked up with those of the Soviet Union along
the Elbe river.
Nonetheless, American strategic-bombing enthusiasts continued to insist
after the war that the CBO had been decisive in defeating Germany. In the
words of the official Army Air Forces history, the strategic air campaign,
had not been the perfect attack which air theorists had dreamed of, an
undistracted campaign against the enemy's vitals finally terminating
in his appeal for surrender. But it was decisive and, with the
onrush of ground forces toward a juncture with the Russians,
altogether victorious. The oil campaign was the brightest phase
of the triumph. German production of fuel and lubricants had
virtually ceased. . . . The German Air Force was gone. . . . No one
challenged the airmen's claim that victory over the Luftwaffe made
all other victories in Europe possible. . . . The appalling desolation
of Germany's industrial cities was all too apparent as the Allies moved
into the Reich. Even Spaatz, who had studied so painstakingly the
results of the air offensive he had led, was surprised by the magnitude
of the chaos. The Reich was strangled and paralyzed. Even without
the final ground invasion, it seemed, the Germans could not have
continued the war.42
By way of providing additional support for this upbeat assessment, Albert
Speer, who assumed increasing responsibilities for Nazi war production
from February 1942, emphatically stated during a May 1945 debriefing
that precision bombing 'could have won the war without a land invasion'.43
Although strategic bombing had not proven decisive in the full sense of
forcing Germany to surrender without invasion, Speer, who presumably
had been in a position to know, believed it could have done so.
Not everyone, of course, has agreed. Bernard Brodie, for example,
argued in the 1950s that the results of the strategic bombardment of
Germany 'came too late to have a clearly decisive effect'.44 The airmen's
claim 'that such a campaign could have been decisive even in the absence
of ground operations - with all the freeing of resources for the air battle that
such a situation would have implied for both sides - must', he insisted, 'be
regarded as neither proved nor provable'.45 And much the same judgment
can be found in R. J. Overy's more recent general history of the 1939-45
air war:
The difficult question to answer is not whether air power was
important, but how important it was. There can be no definite
conclusion about how decisive air power was. There was too much
inter-dependence between the services and between strategies to
produce a list of components that were either more or less deci-
sive. . . . The only conclusion that the evidence bears is the more
negative conclusion that victory for either side could not have been
gained without the exercise of air power.46
The sharpest condemnation of the strategic bombing of Germany,
however, has come not from postwar nuclear strategists like Brodie or
historians like Overy, but from the economist John Kenneth Galbraith.
Galbraith directed the division of the US Strategic Bombing Survey
(USSBS) which attempted to provide 'a single over-all account of the
effects of strategic bombing on the German economy'.47 His knowledge
of the subject was, arguably, extensive.48 And Galbraith's considered
judgment was that the strategic bombing of Germany had, on the whole,
been a failure, if not 'perhaps the greatest miscalculation of the war' insofar
as its direct impact on German armaments production was concerned.49
The heart of Galbraith's case against the strategic bombing of Germany
lies in his insistence that the campaign's effectiveness be judged exclusively
on the basis of quantitative production indices for the output of the main
types of finished armaments. Even before the fighting ended in Europe,
Galbraith's Overall Economic Effects Division of the USSBS obtained
some of the Germans' own statistics concerning their wartime production.50
What these data showed - and other evidence later confirmed - was that
until almost the end of 1944, German armaments production had generally
increased as the bombing had increased.
The factories producing tanks, self-propelled guns and assault guns
- Panzer in the German designation - were not a primary target.
But they drew on labor, coal, steel, ferroalloys, machine tools,
transportation and all the lesser resources and fabrics of industrial
life. A general disruption of the German economy could not be
meaningful if it did not affect the production of these items. In
1940, the first full year of the war, the average monthly production
of Panzer vehicles was 136; in 1941, it was 316; in 1942, 516. In 1943,
after the bombing began in earnest, average monthly production was
1005, and in 1944, it was 1583. Peak monthly production was not
reached until December 1944, and it was only slightly down in
early 1945. For aircraft . . . and other weaponry the figures were
On this basis Galbraith not only concluded that the effort to devastate
German war production with strategic bombing had failed, but that in some
cases like German aircraft production 'it could be argued that the effect of
the air attacks was to increase . . . output'.52 The basic fact of the CBO, in
Galbraith's view, was that
German war production had . . . expanded under the bombing. The
greatly heralded efforts, those on the ball-bearing and aircraft plants
for example, emerged as costly failures. Other operations, those
against oil and the railroads, did have military effect. But strategic
bombing had not won the war. At most, it had eased somewhat
the task of the ground troops who did. The aircraft, manpower and
bombs used in the campaign had cost the American economy far more
in output than they had cost Germany.53
Having sketched the opposing sides of the seemingly endless (and often
bitter) debate over the effectiveness of the CBO, we will now attempt to
illustrate the pivotal role played by the choice of analytic measures in
assessing the campaign. Galbraith, as knowledgeable as he undoubtedly was
on the bombing of Germany, was also, by his own admission, emotionally
traumatized by it. In his 1981 autobiography, for example, he noted that
while he quickly got over the appalling poverty he encountered when he
first went to Calcutta, the 'utterly sickening sight' of the devastation of the
German (and, later, the Japanese) cities was still with him.54 Yet as appalled
by the results of the bombing as Galbraith was, and evidently remains, he
seems to evidence 'not the slightest suspicion that his feelings about war
might have corrupted the conclusions he came to as a director of the survey
of the effects of strategic bombing on the wartime German economy'.55
In retrospect, the most obvious point at which Galbraith's visceral
reaction to strategic bombing clearly conflicted with his analysis was in the
narrow choice of measures. The USSBS study he directed asserted that 'it
was to the lowering of the level of Germany's finished munitions output that
the Allied strategic bombing offensive was in the main addressed'.56 In other
words, the fundamental analytic yardstick he and his colleagues applied
was that if the level of bombing was increasing but German armaments
production was not leveling off or falling, the strategic bombing had to
be deemed a failure. Both non-economic military effects of the bombing,
as well as indirect economic effects, were thereby largely excluded.
On one hand, Galbraith's emphasis on the analytic measure of munitions
output as a basis for judging the effects of strategic bombing on Germany's
war economy is by no means inherently implausible. It does attempt
to relate quantifiable inputs (bomb tonnage delivered over time) to
quantifiable outputs (German production of eight main types of finished
munitions).57 It is also the metric suggested by the enthusiastic claims
for daylight precision bombings that its ardent supporters in the Army
Air Corps made throughout World War II. What American airmen
optimistically promised was to shatter the supposedly taut, fragile web
of Germany's war economy. And, last but not least, the measure of the
bombing's impact on finished armaments production was undoubtedly
compatible with the charter of Galbraith's Overall Economic Effects
Division. Galbraith's focus on production indices versus the weight of the
bombing was analytically, doctrinally, and bureaucratically defensible.
On the other hand, those familiar with the day-to-day operational
history of the CBO will recognize the manifestly narrow focus inherent
in Galbraith's choice of indices for assessing the overall impact of the
bombing. His revulsion at the destruction of the German cities appears
to have blinded him to the possibility that the strategic air campaign could
have had operational or strategic meaning beyond its direct impact on
production figures. Or, expressed in the language of Carl von Clausewitz's
On War, Galbraith simply could not bring himself to frame the bombing
in its broader context of a violent clash of independent wills dominated by
chance, human frailties, uncertainty, and other 'frictions of war'.58 As a
result, even within the restricted realm of the CBO's immediate impact on
German war production, he largely ignored all but the most immediate and
obvious effects, thereby vastly oversimplifying what was in reality a more
complex set of direct and indirect, immediate and subsequent, physical and
psychological interactions between the two sides.59
The daylight bombing efforts during the CBO period against the German
aircraft and oil industries can be used to illustrate the more obvious sorts of
oversimplifications to which Galbraith's choice of measures fell prey. The
economic basis for concluding that the bombing of the German aircraft
industry was a costly failure is readily apparent in the summary figure
entitled 'The German Aircraft Industry under Allied Air Attack' from the
final report of the USSBS' Aircraft Division. The three months in which
by far the most bombs were delivered by the US Eighth and Fifteenth Air
Forces against this target system were February, April, and August 1944;
yet actual German aircraft production grew substantially over this period,
and the direct monthly losses due to the bombing are comparatively small
(under 30 per cent).60
Looking back, it seems clear that attacking this particular target system
did not produce the results expected by American air planners. Their
original hope that direct attacks on aircraft assembly plants, when combined
with the combat wastage the Germans were expected to suffer defending
their aircraft industry, would defeat the Luftwaffe by cutting off its supply of
aircraft, flatly failed. In the event, the attainment of daylight air superiority
over Germany in April 1944 was due to the depletion of the Luftwaffe's
seasoned pilots, and the role of US long-range escort fighters and German
shortages of aviation fuel in bringing this about was probably greater than
that due to the direct bomb damage inflicted on aircraft factories.61
Nevertheless, attacking the aircraft factories produced significant indirect
effects which were not reflected in monthly production figures on which
Galbraith and his colleagues concentrated. In the case of the German
aircraft industry, the general dispersal order was officially issued in
February 1944, and from April to August of that year about 51
plants, some of which were themselves the result of earlier dispersals,
were scattered to 249 locations.62 The indirect 'costs' of this dispersal
were many. The dispersal itself placed increased demands on German
transportation as the Allies commenced a systematic attack on this target
system. In addition, dispersal caused a 'tremendous dilution' of German
supervisory and technical talent, increased the size of the work force the
aircraft industry required (by an estimated 20 per cent), made efficient
engineering and program changes 'practically impossible', and eventually
created 'tooling bottlenecks'.63
Although Galbraith's group did not attempt to quantify these indirect
dispersal effects, the USSBS Aircraft Division did. The estimated indirect
losses of monthly aircraft production grew rapidly after September 1944
and, after January 1945, exceeded the direct losses. The Aircraft Division
Industry Report concluded, therefore, that even if strategic bombing of
the German aircraft industry had accomplished nothing but that industry's
dispersal, 'it would have paid its cost'.64
Admittedly, in fairness to Galbraith, these indirect production losses
from dispersal did not manifest themselves, even in the USSBS Aircraft
Division's calculations, until some five or six months after the Luftwaffe
fighter arm had lost daylight air superiority over central Germany.
However, quantifiable production losses were by no means the only
indirect effects of the daylight attacks on German aircraft production.
In operational and strategic terms, the most important indirect result
was that the Luftwaffe was forced by these attacks to defend the skies
over central Germany, thereby exposing its limited cadre of experienced
pilots to attrition at the hands of American long-range escort fighters.65
Again, the German Air Force did not lose the struggle for air superiority
over Germany and occupied France in the spring of 1944 because it ran
out of aircraft. In this direct sense the German aircraft industry proved
far less vulnerable a target system than American air planners had hoped.
Yet, as the Luftwaffe's virtual absence over the Normandy beach-heads on
6 June 1944 proved, the aggregate direct and indirect effects of bombing
Germany's aircraft industry were broader and more costly than Galbraith's
narrow focus on quantifiable output versus quantifiable input allowed.
A somewhat different criticism of Galbraith's aggregate industrial input-
output measures emerges from the case of German oil production. The
USSBS Oil Division's final report makes it clear that the direct causal
linkage between bomb tonnage dropped over a period and the observable
impact on a given target system was far more tenuous than net industrial
performance measures implicitly assumed. For example, in the case of the
146,000 bombs dropped by the US and British strategic air forces on three
large German oil-chemical plants, only 3.4 per cent are estimated to have
hit and exploded on production facilities or pipelines; the other 96.6 per
cent fell outside the target area or on decoy facilities, detonated on open
terrain inside the target area, or failed to explode.66 Therefore it cannot
be inferred from the dropping of large or increasing bomb tonnages during
World War II that the critical elements in the target system under attack
were being hit.
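The scale of this wastage is easy to make concrete. The following short calculation is purely illustrative, using only the Oil Division percentages quoted above:

```python
# Hit-rate arithmetic for the 146,000 bombs dropped on the three large
# German oil-chemical plants, per the USSBS Oil Division figures above.
total_bombs = 146_000
hit_fraction = 0.034  # 3.4 per cent hit and exploded on facilities or pipelines

effective_hits = total_bombs * hit_fraction
print(f"Bombs on productive targets: about {effective_hits:,.0f}")  # about 4,964
print(f"Share wasted or misdirected: {1 - hit_fraction:.1%}")       # 96.6%
```

In other words, fewer than 5,000 of 146,000 bombs did the work attributed to the whole tonnage, which is the point of the objection to aggregate input measures.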
In the case of oil, however, both decrypted intercepts of high-level
German signals traffic (ULTRA) and captured German production data
indicated that the May 1944 attacks on synthetic fuel production in
Germany, coming on the heels of three attacks on the Romanian oil fields
in April, produced immediate and dramatic drops in German production
of aviation fuel.67 Granted, the large stocks of reserves the Germans had
accumulated enabled them to delay the impact of the April-May 1944
oil attacks until August. But it is still hard, if not impossible, to dismiss
the strong likelihood that German oil production was a genuinely vital
military target during the first half of 1944. Whether oil was also an
economically vital target system in the full sense intended by American
precision bombing doctrine at that point in the war is a broader issue on
which debate continues to this day.68
Of course, the operational rub regarding oil at the time was that, due
to events such as General Dwight Eisenhower's March 1944 decision to
focus the Anglo-American strategic air forces on transportation prior to
the Normandy invasion, oil did not again receive the priority many felt
it deserved until the late fall of 1944. Nonetheless, the lesson regarding
aggregate input/output measures seems clear enough. 'Net bombs dropped'
versus 'munitions produced' give little insight into the detailed effects of
bombing German oil supplies in general and aviation fuel production in
particular. In this sense, the industrial measures Galbraith utilized to
criticize the CBO are simply too coarse and unappreciative of all but the
most direct economic and military effects of strategic bombing.
To this point, the measures by which we have tried to assess the CBO's
effectiveness have been the traditional ones: target damage or destruction,
bomb tonnage versus armament production, industrial dispersal, indirect
production losses, the creation of 'bottlenecks' in the supply of 'vital'
resources such as oil, the attainment of air superiority, etc. To illustrate
the potential importance of alternative measures, we turn next to the
potential of strategic air attack to divert enemy economic resources into less
productive areas and the associated 'opportunity costs' of such diversions.
This sort of 'second-order' effect not only entails more subtle measures
than any we have considered so far, but it also leads to a more favorable
bottom-line judgment of the CBO's effectiveness than Galbraith's.
The 'second-order' effect we want to consider first is the potential
diversion of German economic resources to defending against the Com-
bined Bomber Offensive. Intriguingly, Galbraith's report on The Effects
of Strategic Bombing on the German War Economy is by no means
oblivious to this particular effect. 'It is', his division's report noted,
'just as much a loss by bombing if a rifle, for example, is not produced
because resources have been diverted to the output of anti-aircraft guns
as it is if a rifle is not produced because the rifle plant is bombed out'.69
Yet the general impression left by the discussion of anti-aircraft artillery
and ammunition which follows this passage is that the commitment of
resources to these items was neither great nor disproportionate. 'In 1943
and 1944 the value of anti-aircraft artillery was from 25 to 30 per cent
of the value of all weapons produced, while the value of anti-aircraft
ammunition was between 15 and 20 per cent of the ammunition total'.70
Moreover, the entire cost of anti-aircraft artillery could not be attributed
to strategic bombing because some lighter weapons were probably intended
to defend against low-flying tactical aircraft rather than heavy bombers, and
even the heavier anti-aircraft weapons - notably the 88mm gun - were widely
used for other purposes such as anti-tank defense. Finally, there was 'much
less German heavy anti-aircraft fire per Allied sortie in 1944 than in the
early years of the war'.71
From these comments it appears that defending against strategic bombing
did not absorb any large or disproportionate share of German industrial
resources. While such a second-order effect was possible, it seemingly did
not occur.
This picture, however, is not accurate. The reason lies in the incomplete-
ness of the measures on which it is based: namely, on the categories of anti-
aircraft artillery and ammunition. The problem is that these two categories
of finished armaments ignore the lion's share of German investment in
air defenses during 1943 and 1944 by excluding growing investment in
defensive aircraft.
Perhaps the most pointed summary of the actual extent of German
resource allocation by mid-1944 to defending against strategic bombing
was made in 1959 by none other than Galbraith's deputy in the Overall
Economic Effects Division of the USSBS, Burton Klein. After the war,
Klein wrote a book dealing mainly with German pre-war and wartime
economies up to the period of massive British and American aerial attacks.
The focus of this effort, as the title Germany's Economic Preparations for
War suggests, was not on the effects of strategic bombing.72 In Klein's
opinion that topic had been 'adequately covered in the Bombing Survey
studies, especially The Effects of Strategic Bombing [on the German War
Economy]'.73 Nevertheless, at the end of his 1959 volume, he did offer the
following retrospective judgment concerning strategic bombing's broader
impact on Germany.
The preinvasion air raids . . . did affect the German war effort - and
in a manner which has been little commented on even since the war.
This was in causing the Germans to devote a very significant part
of their production effort and also a large number of highly trained
military personnel to air defense. From 1942 to the first half of 1944
expenditures on air defense armaments - defensive fighter planes
and their armament, antiaircraft weapons, and ammunition - nearly
tripled, and at the time of the [Normandy] invasion amounted to
about one third of Germany's entire munitions output. Indeed, in
mid-1944 production of air defense armaments was at a higher level
than was munitions output as a whole at the time Germany went to
war with Russia. It can be seen, therefore, that where the preinvasion
attacks really paid off was not nearly so much in the damage they did,
but rather in the effect they had on causing the Germans to put a very
significant part of their total war effort into air defense.74
Klein's assessment, then, is that the share of German armaments
production devoted to defending against the CBO by mid-1944 was large
and the opportunity costs of that investment great.
Because this assessment is so sharply at odds with both Galbraith's view
of the CBO's effectiveness and the impression conveyed by the previously
cited discussion of anti-aircraft weapons and ammunition from The Effects
of Strategic Bombing on the German War Economy, we emphasize that the
basic data on German war production used by the Bombing Survey supports
Klein. While the share of German armaments production consumed by
aircraft weapons and ammunition, anti-aircraft guns, and anti-aircraft-gun
ammunition in mid-1944 was only about 10 per cent of the total value of
finished munitions (in constant prices), the portion devoted to aircraft of all
types in June 1944 was 46.1 per cent (see Table 1), of which about half can
be attributed to defensive aircraft.75 So when the constant-price value of the
output of guns and ammunition for defensive aircraft, anti-aircraft artillery
(AAA), and AAA ammunition during the summer of 1944 is added to that
of the aircraft the Germans themselves categorized as defensive, the
'air defense' share comes to roughly one-third of the value of finished
armaments production (10 per cent + 46.1 per cent/2 = 33.1 per cent), just
as Burton Klein claimed. And while not all this investment can be attributed
solely to the CBO, the growing emphasis on fighters from 1942 to 1944 is
unmistakable. During 1942 only 35 per cent (5,460) of the 15,556 aircraft
the Germans produced were for defense; during 1944 this had risen to 65
per cent (25,824) of a total 39,807 aircraft.76
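Klein's one-third figure, and the fighter shares just cited, can be reproduced from the numbers in the text. The sketch below is only an arithmetic check; the attribution of 'about half' of aircraft output to defensive types is taken from the Survey data as discussed above:

```python
# Reproducing the 'air defense share' arithmetic from the figures above.
guns_and_ammo_share = 10.0   # per cent: aircraft guns/ammo, AAA, AAA ammunition
aircraft_share = 46.1        # per cent of June 1944 armaments value (Table 1)
defensive_aircraft_share = aircraft_share / 2  # 'about half' defensive

air_defense_share = guns_and_ammo_share + defensive_aircraft_share
print(f"Air defense share: roughly {air_defense_share:.0f} per cent")  # ~33

# Defensive share of total aircraft output, 1942 versus 1944.
for year, defensive, total in [(1942, 5_460, 15_556), (1944, 25_824, 39_807)]:
    print(f"{year}: {defensive / total:.0%} of {total:,} aircraft were defensive")
```

The result, roughly one-third of finished-armaments value, matches Klein's claim, and the 1942-to-1944 comparison shows the shift toward defensive fighters directly.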
A point that Andrew Marshall has made about Klein's post-war views
of the CBO's 'second-order' impact is that the measures on which they are
based encompass a much wider range of factors and considerations than did
most earlier (and many later) assessments, especially Galbraith's.78 This is
not to suggest that Klein ignored the dramatic rises in German armaments
production during most of 1944 which occurred despite the increasing
weight of Allied bombing. To the contrary, Klein's assessment highlights
the very same trends Galbraith used to condemn the bombing effort. By
the time Klein finished his book on Germany's economic preparations for
World War II, he had had a lot more time to reflect on the broader, less
direct implications of this sort of economic data. Consequently, he was
able to place these trends within the broader context of the military and
economic opportunity costs to the Germans of their increasing diversion of
war production to air defense against the British and American bombing
and, as a result, reached a more balanced assessment of the CBO's
overall efficacy.79 June 1944, after all, witnessed more than the Normandy
landings. On the Eastern Front, the Soviets began a series of multi-front
encirclement operations in Belorussia (under the codename 'Bagration')
which, by the end of August, had literally destroyed the Germans'
Army Group Centre and carried the Red Army some 500 kilometers
westward to the borders of East Prussia.80 Yet throughout this period the
Germans continued to devote a disproportionate share of their armaments
production to defending against strategic air attack - which included
marauding deep-escort fighters as well as heavy bombers - while receiving
less and less in return for the heavy commitment of resources. By
the late summer of 1944, as monthly fighter production climbed toward its
wartime peak, the reality was more and more that German factories were
turning out hundreds of aircraft a month which 'had neither fuel to fly nor
pilots'.81 Even though fighter output soared, the pay-off in operational
capability declined.82

TABLE 1

                        1943  1943  1943  1943  1944  1944  1944  1944
                         Mar   Jun   Sep   Dec   Mar   Jun   Sep   Dec
Total Aircraft           814   926   882   739  1041  1275  1221   890
  (millions of
Total Armaments         2009  2102  2176  2065  2511  2763  2800  2446
  (millions of
Aircraft/Armaments      40.5  44.1  40.5  35.8  41.5  46.1  43.6  36.4
  (per cent)

Moreover, increased allocation of resources to air defense was not the
only second-order 'cost' which the CBO imposed on the German war effort.
There were also significant psychological effects. True, the bombing did
not shatter German morale as some of the more enthusiastic American
and British proponents of strategic air power hoped. While German
workers subjected to bombing grew gloomy and depressed, most of them
continued to work. Yet the psychological impact of the Allied bombing
distorted German strategy in other ways. As Williamson Murray noted
in 1983, the German people grew not only gloomy and depressed, but
also became extraordinarily angry and began demanding retaliation for
the damage inflicted on local towns and homes. This demand, in turn,
appears to have been a driving factor in the Germans' decision to begin
production of the V-l and V-2 for use against England.83
The SD [Sicherheitsdienst, Security Service] reports, reflecting the
popular mood [in late-1943 and early-1944], explain the leadership's
demand for retaliation weapons (the V-l and V-2), its willingness
to waste the Luftwaffe's bomber fleet over the winter of 1944 even
though faced by the threat of an Allied invasion, and its refusal
to provide the necessary support needed to the fighter forces
until military defeat was obvious and inescapable. Moreover, the
distortion in military production as a result of the demands for V-l
and V-2 retaliation weapons was enormous. The strategic bombing
survey estimates that the industrial effort and resources expended for
these weapons in the last year and a half of the war alone equalled the
production of 24,000 fighter aircraft [which would be over 90 per cent
of Germany's output of defensive fighters throughout the year 1944].
Here the regime was reacting to popular pressures, and the resulting
decisions responded to political factors rather than to strategic and
military realities. Thus, just in terms of retaliatory weapons policy,
the distortion that the bombing achieved in the German war effort
was of real consequence to the war's outcome.84
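The bracketed comparison in the passage above can be checked against the aircraft figures cited earlier. This is an illustrative calculation only, using the USSBS estimate and production totals as quoted in the text:

```python
# USSBS estimate: V-1/V-2 effort in the last year and a half of the war
# equalled the production of 24,000 fighter aircraft; Germany's 1944 output
# of defensive aircraft was 25,824 (figures as quoted in the text).
v_weapon_fighter_equivalent = 24_000
defensive_aircraft_1944 = 25_824

ratio = v_weapon_fighter_equivalent / defensive_aircraft_1944
print(f"V-weapon effort: {ratio:.0%} of 1944 defensive-aircraft output")  # 93%
```

The ratio confirms the 'over 90 per cent' gloss: the retaliation programme consumed resources comparable to nearly a full year's production of defensive fighters.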
The resulting lesson about choices of analytic measures, then, would
appear to be the following. A balanced assessment of the CBO cannot
be based on one or two coarse-grained quantitative measures. A wide
range of military as well as economic factors must be taken into
account; indirect effects as well as direct, second-order as well as
first-order consequences, must be included. Additionally, because many
of these factors are quantitatively incommensurable, their integration into
a genuine net assessment inevitably demands some modicum of art. Phrased
in more methodological terms, to think seriously about the overall impact
of strategic bombing on Germany during World War II is, inescapably,
to think about the problems of choosing adequate analytic measures. The
history of the post-war debate over the CBO is eloquent testimony that such
choices are never easy. Sadly, there continues to be too much truth in Noble
Frankland's memorable comment that 'people have preferred to feel rather
than to know about strategic bombing'.85
Poor choices of measures, as we have sought to illustrate in the case
of the Combined Bomber Offensive, lead to poor analyses and dubious
conclusions. Our assessments of weapons and campaigns alike are highly
dependent on the breadth and depth of the measures we choose, and the
view of the CBO's efficacy that emerges from using a set of measures broad
enough to include indirect and second-order effects is, not surprisingly, very
different from Galbraith's. Did the strategic bombing of Germany obviate
the need for the Western Allies to land their armies in northern Europe?
Obviously not. Did the airmen ever solve the friction-riddled problem of
being able to determine in real time what the truly 'vital' German target
systems to attack were? Again, the answer must be 'no'.86 Electric power,
the industrial target system that, arguably, came the closest to providing
the sort of 'single-point-failure' vulnerability that American airmen sought,
was never seriously attacked.87 Nevertheless, to argue that the bombing
failed to impose disproportionately heavy 'costs' on the German wartime
economy in relation to resources the Allies devoted to the bombing is not,
in our view, credible - not at least when anything approaching an adequate
set of measures is used.
A final thought needs to be appended to these judgments. The selection
and justification of appropriate measures is far easier with the aid of
hindsight and fuller knowledge than it is for decision makers caught up
in the onrush of events at the time.
To return to the example of German electric power, the pivotal event
that led to this target system falling through the cracks seems to have been
the conclusion reached in early 1943 by the Committee of Operations
Analysts (COA) that German electric power should not be given the high
priority in the forthcoming combined bomber offensive from the United
Kingdom that it had been accorded in earlier strategic air plans.88 As
a result, when General H. H. Arnold took the COA's implicit target
prioritization and, without consulting the plans or intelligence divisions of
his own Air Staff, sent it to England on 23 March 1943 with the instruction
that General Ira Eaker develop a plan 'to do this job in the order of priority
listed', electric power was effectively 'closed out from further consideration'
as a primary target system.89
General Hansell, who headed the CBO planning group in England,
argued after the war that the COA had committed 'a major error' in making
the operational judgment that disruption of electric power was not feasible
for the available strategic air forces.90 The COA, he pointed out, was
largely made up of civilian industrialists, professors and engineers. While
these individuals were eminently qualified to estimate which industrial
targets were the most vital to German war production, in Hansell's
opinion they had no business trying to evaluate operational feasibility.
In this sense, they used a measure which they were not really qualified
to assess.91 Nonetheless, at the time the CBO plan was being developed
Hansell could not bring himself to challenge the COA's downgrading of
electric power, even though he had combat experience leading heavy
bombers over occupied Europe. Why did Hansell not object if he believed
that the target system in question was not too difficult to tackle? Part of the
reason was that the COA had also downgraded transportation, and Hansell
had already exceeded his authority to give this target system high enough
priority in the CBO to ensure that it would be attacked.92 But beyond this,
he was concerned that the COA might have knowledge of electric power
which he did not and, perhaps most important, that if electric power, as
well as transportation, reverted to near the top of the list, he might unhinge
the analytic basis on which the entire strategic air offensive rested.93
This example reinforces several lessons about measures. First, their abuse
can - and in the case of the CBO did - adversely affect the course of
operations. Electric power, though vulnerable, was never really attacked.
Second, the measures which real-world decision makers use to choose
among strategic options in actual situations can be altogether different
from those which hindsight may later recommend. The COA's lack
of enthusiasm for electric power stemmed from applying a measure
which most members of this group were not qualified to assess, and
hindsight suggests that their pessimism regarding this target system
was mistaken. Finally, actual strategic choices in actual situations can
involve higher-level considerations which go well beyond anything we
can credibly quantify. Despite all that is now known about the CBO's
effects on German armaments production, who would be willing to insist
that Hansell's decision to discard electric power in the interests of ensuring
that there would be a strategic air campaign was categorically wrong?

Doolittle's Raid and Holding Targets at Risk

The second-order consequences of military actions such as strategic
bombing are by no means limited to relatively indirect costs, like diverting
some portion of the adversary's wartime production of finished armaments
into less productive areas. In this part we will use the case of Doolittle's
April 1942 raid on the Japanese home islands to illustrate second-order
effects which affected the subsequent course of military operations more directly.
The basic historical facts of the B-25 raid on Japan led by (then)
Lieutenant Colonel James H. Doolittle are as follows. On the morning of
18 April 1942, sixteen B-25s were launched from the aircraft carrier Hornet
about 700 nautical miles from Japan.94 The mission of Doolittle's raiders
was to boost American morale in the dark days immediately following
Pearl Harbor by an attack on Japan's 'sacred home soil'.95 According
to the mission's plan, 13 of the B-25s, led by Doolittle, were to strike
military and industrial facilities in the Tokyo Bay region (12 had targets
in the Tokyo-Yokohama metropolitan area while the thirteenth was to
bomb the Yokosuka naval base).96 The other three B-25s were to attack
similar targets in Nagoya, Osaka, and Kobe.97 Eleven of the B-25s carried
a payload of three 500lb demolition bombs plus a single 500lb incendiary
cluster bomb; the remaining five - two of the Tokyo raiders (including
Doolittle's plane) and the three aircraft targeted against Nagoya, Osaka,
and Kobe - were each loaded with four incendiaries.
In actual execution, there were at least four deviations from the planned
targets. One of the B-25s targeted against the Tokyo area experienced a
fuel leak and ended up jettisoning its bombs south of Tokyo Bay; this
aircraft did not reach Honshu. Another of the Tokyo raiders strayed and, it is thought, attacked a power plant and railroad yard in the vicinity of Kawagoe, some 18 miles north-west of downtown Tokyo, before heading
for Vladivostok instead of China due to low fuel. The pilot of a third Tokyo
raider discovered his target hidden by barrage balloons and attacked other
facilities nearby. And the B-25 targeted against Osaka most likely bombed
Nagoya instead.
Overall, 15 of Doolittle's B-25s dropped their bombs on the Japanese
home island of Honshu. Some strafed ground targets as well. All 16 of the
bombers are thought to have left Japan under their own power. Tactically,
the raid achieved considerable surprise and only one B-25 was vigorously
opposed by Japanese fighters. However, all 15 of the planes that reached
China had to be abandoned by their crews and were destroyed; only the
B-25 that landed in the Soviet Union, where it was interned, survived
the mission intact.98 Of the airmen themselves, two crews are believed
to have perished in Japanese hands, but the other 14 survived and were eventually returned to Allied control.
The direct damage inflicted by this raid, which also risked two of the four
aircraft carriers the US Navy had in the Pacific, was minimal. A school
in Tokyo was 'inadvertently struck, and a total of 12 people killed, 50
houses and shops demolished, and the bow of a warship in dry dock
[the carrier Ryuho] damaged'.99 But there was no significant military
damage. Judged in terms of the direct damage inflicted on Japanese combat potential and war production, the Doolittle raid was, at most, a pinprick.
Indeed, the direct and predictable military effects of the Doolittle raid
were so modest that it is possible to question whether the operation should
have been mounted at all given the likely benefits, costs, and risks it
entailed. Regarding predictable benefits, the bombs of a mere 16 B-25s
could never have been expected to inflict any appreciable damage on the
Japanese military, or to eliminate any vital link in Japan's war industry.
Concerning costs, there was no expectation that the bombers would be
recovered for further use by the Americans. After bombing Japan the
planned mission profile was for the B-25s to head for China and attempt
to reach bases in the Chuchow area, presumed to be under the control of
Chiang Kai-shek, where the aircraft would be turned over to his forces.100
As for the risks, to give the B-25s a chance of reaching China, the original
planning called for them to be launched at a distance of about 395-485
nautical miles (450 to 550 statute miles) from Japan, which in turn meant
that two of the US Pacific Fleet's four carriers, Hornet and Enterprise, along
with their accompanying cruisers and destroyers, would have to sail into
what were, at the time, Japanese-controlled waters. (To indicate how far
into Japanese waters the carrier task force had to penetrate, Iwo Jima lies
about 790 nautical miles from Tokyo.) And because American naval power
in the Pacific during early 1942 essentially rested on the four carriers that
survived the Japanese attack on Pearl Harbor, the Doolittle raid risked far
more than a handful of B-25s and their crews.101 In light of this substantial risk, American naval officers have tended to doubt whether the raid was worth mounting. Edwin Layton, for example, who was the Pacific Fleet
intelligence officer from December 1940 to the end of the war, concluded
in 1985 that the 'effects of the raid were not momentous, nor commensurate
with the American risk of two of our four precious aircraft carriers in the Pacific'.102
Yet with the advantage of hindsight, we can see that the daring raid by
Doolittle's B-25s had second-order effects on the war in the Pacific that
went far beyond the modest amount of physical damage inflicted on 18
April 1942. To begin with, there were the obvious psychological goals of
the raid: to boost American morale and dent that of the Japanese. The
extent to which the raid achieved its psychological goals can be seen in
such triumphant post-strike headlines in the United States as 'JAPAN
BOMBED, DOOLITTLE DOOED IT' and the outrage expressed in
Japanese newspapers that innocent school children had been 'murdered' by
US fliers.103 On the American side, the feat 'lifted the morale of Americans
still shaken by the fall of Bataan' and gave 'fresh hope' to Allied forces
worldwide.104 On the Japanese side, there was outrage, shock, and loss of face.
The loss of face the raid inflicted on the Japanese military was, by far,
the raid's most important psychological effect because of its impact on
subsequent Japanese operations, deployments, and strategy. Some 53
battalions of the Japanese Army were dispatched on a punitive expedition through Chekiang province, where most of the American B-25s landed.105 Beyond this short-term diversion of land forces, four army fighter groups
were brought home to provide air defense of the Japanese cities. Moreover,
these fighter groups were retained in Japan 'during 1942 and 1943'
until they were finally redeployed to meet urgent requirements in the
Solomons even though no subsequent American bombing raids were
attempted on the Japanese home islands during this period.106 The
Doolittle raid resulted in the diversion of Japanese Army forces in
China and imposed an extended 'virtual attrition' of four fighter groups.
Even more critically, the bombing of Japan's 'sacred home soil' affected
Japanese strategy. Admiral Isoroku Yamamoto, his immediate Combined
Fleet staff, and the upper echelons of the Japanese naval staff in Tokyo
were so mortified by the raid that they committed themselves to a course
of military action in the Pacific that culminated in major defeats. Not
only was any lingering resistance to Yamamoto's proposed Midway
operation quelled by Doolittle's B-25s, but the Japanese high command
additionally resolved to forestall any further raids on Japan by Admiral
Shigeru Fukudome's drive to Fiji and New Caledonia aimed at cutting
off Australia from America.107 As a result the Japanese committed
themselves to attempting two major operations in widely separated
locations. The close timing of these divergent operations - early
May of 1942 for Fiji/New Caledonia and early June for Midway -
ultimately prevented concentration of Japan's full strength in either.
The most lasting impact of Doolittle's raid was, therefore, that it
provided the 'final straws' that committed the Japanese high command
to courses of action which ended in the strategic reverses they suffered
at the Battles of the Coral Sea and Midway Island (5-8 May and 3-5
June 1942, respectively). The former was a tactical victory for the Japanese,
sinking the carrier Lexington in exchange for their light carrier Shoho. But
this tactical victory in the Coral Sea was also a strategic reverse because
the Japanese were forced to cancel their plans to invade Port Moresby.108
At the same time, the Shokaku, which was also heavily damaged, and the
Zuikaku lost so many aircraft and crew in the Coral Sea battle that they could
not be made ready in time for the Midway operation, thus cutting the
Combined Fleet carrier striking force on 3 June by one-third.109 As for
the Battle of Midway, it constituted both a tactical and strategic defeat.
All four of the Japanese fleet carriers in Admiral Chuichi Nagumo's
strike group were lost at the cost of the American carrier Yorktown.
In addition, the Japanese had to call off the invasion of Midway.
The Doolittle raid itself did not, of course, ordain the outcome of either
battle. In the case of Midway, the stunning loss of four Japanese fleet
carriers, some 330 aircraft and their well-trained pilots was more directly
due to factors such as the American success in decrypting enough of the
relevant radio traffic to enable Admiral Chester W. Nimitz's carriers
to surprise and ambush Admiral Nagumo's strike force north-west of
Midway.110 Nonetheless, the Doolittle raid committed the Japanese to the
Midway operation, and the outcome of the resulting battle, besides giving

the US 'an invaluable breathing space', constituted 'the turning point that
spelt the ultimate doom of Japan'.111
Consequently, the broader strategic significance of the Doolittle raid
stems from the second-order effects it had on Japanese thinking and
strategy. In the early 1970s the Japanese Defense Agency's official history
of the war concluded that Doolittle's raid: (1) caused the Japanese great
morale problems; (2) caused Japanese military leaders to lose face because
they had said Tokyo could never be bombed; (3) caused diversions of
Japanese forces such as the four fighter groups recalled for homeland air
defense - a fact that most historians overlook; (4) caused the Japanese
Army to join the assembly for the Midway operation; and (5) aligned the
Imperial General Headquarters unreservedly behind the Combined Fleet's
Midway-Aleutian operations plan (which, at the Battle of Midway, resulted
in a further dilution of Japanese strength).112
What do these second-order consequences of Doolittle's raid imply
about the problems of choosing adequate analytic measures? Simply put,
they suggest that if we concentrate exclusively on the direct, measurable
damage inflicted on enemy forces by a given military action, we could
very well overlook its most important consequences. The principal impact
Doolittle's B-25s had was to influence the minds of the Japanese. Affecting
the thinking of the Japanese military, in turn, produced at least three
second-order military consequences. First, some Japanese ground forces
were diverted to peripheral operations (the punitive expedition in Chekiang
province); second, other Japanese forces, the four fighter groups, were
tied down in homeland defense because the Japanese perceived their
home islands to be at risk to American bombers, thus costing the
virtual attrition of these air groups for many months; and, finally,
Japanese military commanders and their staffs were provoked into
committing what proved to be operational and strategic mistakes (the
decisions which led to the Battles of the Coral Sea and Midway).
The common thread in these diverse effects would appear to be the
psychological consequences of the Americans' ability to demonstrate that
the Japanese home islands were at risk. Again, the 60 bombs delivered by
Doolittle's B-25s during their '30 seconds' over Tokyo and other Japanese
cities did little physical damage. Moreover, American bombers did not
again overfly Tokyo until November 1944.113 Yet the personal loss of face
felt by key Japanese military leaders that immediately resulted 'from the
public having witnessed enemy planes winging over the imperial palace'
produced 'fundamental changes' in their plans for, and subsequent conduct
of, operations in the Pacific.114
What bears emphasis is the discontinuity that occurred because,
suddenly, Japan's 'sacred soil' was demonstrably at risk to American
bombers. As Admiral Layton later wrote, in the minds of the Japanese
the Doolittle raid had 'a far greater impact than any of us at CINCPAC
could have calculated'.115 Manifestly absent in this 'far greater impact' on
Japanese strategic thinking seems to be any direct, linear, quantifiable
linkage to the bombs delivered, or even bomb damage inflicted. There simply does not appear to be any simple, precise mathematical relationship such as 'X bombs on Tokyo equals Y units of operational-strategic errors
by the Japanese'.
The reason is that we are not dealing with chains of exclusively
physical causes and effects. On one end of the chain we indeed have
something reasonably concrete and physical: 15 B-25s delivering 60 bombs
on Japanese cities. On the other end we have equally concrete effects,
including the diversion of Japanese battalions in China to a punitive
expedition, the recall of four fighter groups to Japan, and the final
commitment of the Japanese high command to divergent thrusts toward
Port Moresby and Midway Island. The mediating element, however, is
the psychological impact Doolittle's bombers had on Japanese thinking
by demonstrating that even Tokyo was at risk. It was inside the minds of Admiral Yamamoto and other Japanese military leaders that the concrete
tactical act of bombing Japan in April 1942 was transmuted into decisions
which led to lasting strategic consequences - not the least of which was
that after Midway the Japanese never again resumed the offensive in the Pacific.
Like Schlesinger in the 1960s, Morse and Kimball in the 1940s flagged
higher-level or strategic problems as those for which adequate measures
had yet to be devised. The irreducible non-linear factor evident in the
strategic consequences of the Doolittle raid not only tends to confirm
their insight, but indicates why the problems of adequate measures for
getting from tactical actions to strategic consequences may not, in general,
be amenable to precise quantitative resolution. It seems we are looking at
an area of transition akin to the onset of turbulence in fluid dynamics. After
the fact we can readily see, so to speak, that a transition from laminar to
turbulent flow occurred, just as we realize in looking back at the Doolittle
raid that it led to strategic opportunities far out of proportion to the direct,
physical damage it inflicted on the Japanese. Yet no matter how carefully
we sift through the detailed, measurable antecedents of the raid's impact
on Japanese strategic thinking, we seem unable to find any quantitative
measures or indices which point precisely and unambiguously to its broader strategic consequences.
By no means does it follow, however, that we are helpless before the
problem of anticipating - if no more than vaguely and imprecisely - such
higher-level strategic consequences. In the case of the Doolittle raid, it
is not difficult to imagine a back-of-the-envelope, qualitative 'analysis',
based on common sense and insight into the Japanese concern with face,
which would have recognized that bombing the Japanese home islands was,
during the bleak weeks and months following Pearl Harbor and Bataan,
a good, even vital, thing to do. While we are aware of no evidence that
such an analysis was explicitly made, the President and his senior military
advisors in Washington seemingly acted as if it had been. The problem of
anticipating higher-level consequences, therefore, does not seem to be one
of looking harder for the right quantitative measures, but of considering
different kinds of measures and analyses altogether, namely qualitative ones.

The deeper implications for defense analysis of the preceding paragraphs - especially the intentional references to nonlinear dynamics or 'chaos'116 - are, we believe, far reaching. It is not just that analysts have, on all too
many occasions in the past, picked poor measures, confined themselves to
too limited a set of quantitative measures, or else misused the measures
they chose whether adequate or not. The deeper implication is that the
very higher-order, strategic considerations we want most to quantify and
reflect in our analyses tend to be precisely the things least amenable
to exact quantification or ready inclusion. Hence the problems of
incorporating higher-order, hard-to-quantify measures into our thinking
stubbornly persist. Indeed, insofar as people today are more inclined to
pull old measures 'off the shelf' and apply them indiscriminately to new
problems, the situation may be worse now than it was in the 1950s.
It may be helpful to recast this line of thought in more concrete terms.
And since both of the historical cases we have examined involved bombing,
the long-range bomber offers a natural focus. Bombers can, of course,
be used for basically tactical purposes, and they were certainly so used
during the Combined Bomber Offensive. Land transportation targets,
for example, accounted for over 32 per cent of the total bomb tonnage
delivered by the British and American strategic air forces during the
years 1941-45.117 The vast majority of this 'tactical' tonnage was
delivered after March 1944 in direct support of Overlord and other
land-force operations as a result of General Eisenhower's decision to
focus the Anglo-American strategic air forces on transportation targets.
Yet it is inherent in the nature of a platform like a bomber that it can
also be used for broader, more strategic purposes. The CBO and the
Doolittle raid illustrate a number of such possibilities, including: attacking vital links in the enemy's war economy (oil, electric power, etc.); distorting the enemy's overall production priorities; limiting further production of particular weapons; imposing indirect costs such as those associated with the dispersal of the German aircraft industry; helping to achieve theater air superiority; boosting friendly morale; inducing virtual attrition of enemy forces; and provoking enemy commanders into operational or strategic
mistakes. These broader, more strategic uses of bombers seem to arise
from the basic fact that a bomber can generally hold at risk a larger and
potentially more important array of targets than can a rifle or a tank.
The main methodological implication of the CBO and Doolittle cases for
the problem of assessing the overall value of a weapon like the long-range
bomber, therefore, would appear to be that while aspects of a bomber's utility and effectiveness can be analyzed quantitatively and in detail, those pertaining to its broader efficacy tend to fall outside the realm of detailed, quantitative measures. Analyses based on such measures either cannot incorporate
things like holding targets at risk or second-order strategic consequences at
all, or else they do so poorly and inadequately. The reason lies in the essential
non-linearities of such effects. To deal with them in any comprehensive
manner, we must look to other, more qualitative types of analyses.
Last but not least, the suggestion that we need to look to other, more qualitative forms of analyses to deal with the higher-level or global aspects of a weapon such as the long-range bomber is by no means without parallel
in the quantitative or 'hard' sciences. The modern topological approach
to dynamics allows us, for example, to illustrate 'virtually all of the
dynamical features of a pendulum - not just near its rest state, but
globally, everywhere, at high or low energy - . . . in a single, geometrical
picture'.118 As a matter of fact, such qualitative analysis enables us 'to obtain information about dynamics that is totally inaccessible from the classical "bash-out-a-formula" viewpoint'.119 So the suggestion that detailed quantitative forms of defense analysis need to be supplemented by more qualitative measures and forms of analysis is neither unprecedented nor frivolous.120

Answering the Questions

Early in this paper we raised two elementary questions concerning analytic
measures. One asked why the problem of choosing adequate measures is so
difficult. The other asked whether this problem is ultimately solvable. To
the first question, the short answer is that no set of precise, quantitative
measures can be fully adequate in the global sense of capturing all the
higher-level aspects of a particular problem in defense analysis. The harder
we try to nail down quantitatively every detail on any given level, the more
certain we are to overlook 'strategic' considerations at higher levels. One
reason is that higher-level considerations such as holding targets at risk,
being non-linear, are inherently resistant to exact quantification. The other
is that there are always broader perspectives from which a given problem
can be viewed, and we cannot quantify them all. Hence the recurring
difficulties of choosing adequate quantitative measures, as well as the
constant imperative to think about our choices.
Is there a direct solution to this problem of choice - some mechanical
way of being able to select exactly the 'right' quantitative measures for
a particular analytic problem? If the preceding paragraph is correct, the
answer would appear to be 'no'.
Again, though, this outcome need not drive us to despair. Rather it
suggests that our attempts to quantify the aspects of the problem which
are legitimately quantifiable always stand in need of being supplemented.
Having established what can be reasonably quantified, there are always
other higher-level questions to be asked. What are the broader, more
strategic, or global aspects of the problem that elude quantification? And
what kind of broader, qualitative analysis might be required to deal with
them? The limits of quantitative measures and analyses do not, therefore,
lead to negativism but to different questions, and the attempt to answer
those questions in specific cases leads, in turn, to a broader conception of
what 'analysis' could, or should, be.
Perhaps the simplest way to highlight this broader conception of analysis
is to consider the full range of analytic measures discussed at one point
or another in this paper. Table 2 is intended to present a reasonably
comprehensive sampling of the various measures we have examined. As indicated by the use of italics, however, this listing can be readily divided into at least two distinct classes: those which are relatively quantifiable and linear in their effects, and those, shown in italics, which are not.

TABLE 2

Force ratio                      Surviving warheads
Red vs. blue attrition           Deterrence
Effectiveness as artillery       Production bottlenecks
Maintainability                  Vital target systems
Target destruction               Bomb tonnage vs. production indices
Target damage                    Pilot attrition
Merchant losses to subs          Pilot quality
Search/sweep rates               Autonomy as a separate service
Kill/loss ratios                 Indirect production losses
WEI/WUV scores                   Distorted resource allocations
ADE ratios                       Air superiority
Rate of advance                  Second-order consequences
FEBA/FLOT movement               Indirect effects on enemy morale
Cost                             Impact on enemy thinking
Cost effectiveness               Virtual attrition
Opportunity costs                Induced strategic mistakes
Throw weight                     Holding targets at risk
Equivalent megatonnage           Higher-level effects

Now without a doubt, it would be possible to quibble almost endlessly as to whether a given entry in Table 2 ought to be italicized or not. For
example, those who feel that traditional cost-effectiveness analyses have
done a far better job of quantifying costs than military effectiveness -
and hence that the denominators in cost-effectiveness ratios are in fact
qualitative estimates - could certainly make a case for italicizing cost
effectiveness. Conversely, it is conceivable that a measure like virtual
attrition could be largely quantified in terms of the forces tied down for
many specific cases - at least after the fact. So there are situations in which
such measures might not deserve to be italicized.
The main thrust of Table 2, though, is not about the status of any
specific entry. Rather the point lies in two global observations. First,
Table 2 contains a significant number of italicized measures. Second, these
indices tend to be the more important for considering strategic choices and
overall outcomes. In a nutshell, there are limits to linear quantification in
military 'analysis', and candid recognition of these limits inevitably confers
legitimacy on alternative (albeit qualitative) forms of analysis.
Those who have been captivated by the impulse to quantify everything
in sight are unlikely to be enthusiastic about embracing these conclusions.
To cite a fairly current instance of this powerful impulse, consider the
continuing public debate between John Mearsheimer and Joshua Epstein
over which of them has the better mathematical model for analyzing
conventional force balances. Mearsheimer's basic position in the spring
of 1988 was that the balance between NATO and Warsaw Pact (WP) conventional forces was not risky for the West because the Soviets
could not win the breakthrough battles on which their campaign strategy
depended. Based on what Mearsheimer portrayed as the 'widely accepted
rule of thumb' that an advantage of three-to-one or more (in armored
division equivalents) would be needed to break through,121 he confidently
predicted that a Pact blitzkrieg in central Europe would fail because 'severe force-to-space constraints' would limit the Soviet spearheads to, at best,
'temporary local superiorities in the vicinity of 1.6:1'.122
While Epstein essentially agreed with Mearsheimer's global judgment
that a Pact conventional assault against NATO in central Europe would
fail, he did so on the basis of an entirely different set of calculations.
After criticizing Mearsheimer's three-to-one rule as being 'akin to bean
counts' and unsupported by any data, Epstein went on to offer, instead,
his own Adaptive Dynamic Model as a more credible basis for examining
conventional force balances.123 Epstein used his model to run two different
scenarios: one in which the Pact was willing to suffer high attrition in order
to gain ground; and one in which the Pact attack was less ferocious. In the
ferocious case, the Adaptive Dynamic Model indicated that Pact forces
would be 'annihilated after 91 days' while penetrating only 7.1km into
NATO territory; in the less ferocious scenario, Epstein's model showed
the Pact losing after 136 days while gaining no ground whatsoever.124
The main dispute between Mearsheimer and Epstein in the spring of
1988, then, was over methodology - specifically, which of their respective
models offers a more credible or 'scientific' tool for conventional-force
analysis. Their disagreement persisted as recently as the spring 1989 issue
of International Security. In this round of the dispute, both of them basically
dug in and defended their previous positions. Mearsheimer argued that his '3:1 rule' appears 'quite reliable' when 'evaluated against the proper evidence'.125 Epstein, for his part, expanded his criticisms of the 3:1 rule and of Mearsheimer's defense of it, and offered a ringing defense of his own Adaptive Dynamic Model.126
Having characterized both of their positions in 1989 as reflecting a strong
preference for quantitative measures, we should note that Mearsheimer at
least has subsequently leaned more toward qualitative ones. Although he
has not abandoned the 3:1 rule, his prescient prediction in mid-January
1991 that the US-led coalition would clobber Iraq's forces and score a
stunning victory with far fewer than 1,000 American casualties was not
only vindicated by events, but was based primarily on qualitative measures
of military power.126a Of the two of them, it is Epstein who appears to remain, even after the 1991 Gulf War, stubbornly wedded to quantitative measures.126b As Epstein wrote at the end of his spring 1989 rebuttal to
Mearsheimer's defense of the 3:1 rule: 'The main issue before us here is not
the 3:1 rule versus the Adaptive Dynamic Model. The main issue is whether
the field of security studies is going to move in the direction of science'.127
However, the kind of science towards which Epstein hopes security
studies will move is, plainly, the linear science of Pierre Simon de Laplace
(1749-1827).128 Linear science, after all, rests on two propositions: (1)
that 'changes in system output are proportional to changes in input (proportionality); and (2) . . . that we can deal with the effects of a system
either as a whole, or we can break it into its component parts and then
add the effects of the parts together to represent the effect of the whole,
so that F(x+y) = F(x) + F(y)'.129 And even the most cursory inspection of
the equations underlying Epstein's Adaptive Dynamic Model reveals that
his 'calculus of conventional war' embraces both these assumptions, just as
did Laplace's physics.130
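Stated compactly, in our own notation rather than that of Epstein or Laplace, a model F is linear just in case both of the quoted propositions hold:

```latex
% Linearity = proportionality + superposition (our formalization, not the source's)
F(\alpha x) = \alpha\, F(x) \qquad \text{(proportionality)}
F(x + y) = F(x) + F(y) \qquad \text{(additivity of parts)}
```

The non-linear combat effects discussed throughout this paper are precisely those for which one or both of these equalities fail.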
Still, what is wrong with attempting to move security studies in the
direction of linear science akin to nineteenth century mechanics? There
are two reasons for thinking movement in this direction makes no sense.
First, we now know something that Laplace did not: namely that linear
science is not all the science there is. Again, the discovery of simple,
deterministic systems whose long-term behavior is non-linear in the sense of
not being completely predictable shows that, even in mathematical physics,
proportionality does not hold universally and that the whole is not always
equal to the arithmetic sum of its parts.131 Laplace, in short, was wrong
when he assumed the universe is relentlessly linear.132 Second, one area
of human endeavor in which the assumptions of linearity surely do not
hold is the realm of war. If there is one idea that we have tried to place
beyond reasonable doubt in this paper, it is that in war non-linear effects
occur and the whole is not always equal to the arithmetic sum of its parts.
Human involvement alone argues that combat interactions and processes
cannot be universally linear, that effects can be all out of proportion to
their causes. As a result, the measures and analyses by which we attempt
to deal with such complex interactions as strategic bombing cannot be
wholly linear, not even in principle. Contrary to Laplace, Lord Kelvin,
Lanchester, Epstein, and numerous others, war cannot be adequately
captured by explicit, linear mathematical formulas anymore than can
chaotic dynamic systems.
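The disproportion between cause and effect invoked here can be made concrete with a deliberately toy sketch (ours, not drawn from the article's sources): the logistic map, the standard textbook example of a simple deterministic system whose behavior is chaotic. Two trajectories whose starting points differ by one part in a billion soon diverge to the same order of magnitude as the quantities themselves:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x); chaotic at r = 4."""
    return r * x * (1.0 - x)

# Two trajectories whose initial conditions differ by one part in a billion.
a, b = 0.3, 0.3 + 1e-9
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # the gap grows to the same order as the values themselves
```

No measure applied to the inputs alone could predict where either trajectory ends up; that, in miniature, is the difficulty of linking tactical inputs to strategic outcomes.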
The brute fact of non-linearity dooms the plausibility and adequacy of wholly linear approaches to operations analysis, policy formulation, procurement choices, systems analysis, military modeling, wartime planning, assessments of operational effects, comparative force assessments, and the rest. These fields cannot be reduced to linear equations and predictive measures. Other measures and other forms of analyses are required to supplement the Procrustean bed of linear science into which we have been futilely trying to force the study of war since at least the time of Clausewitz. It is, in our judgment, that simple.

Toward the Future

We conclude by offering three global thoughts on the general problem of
choosing analytic measures. The first harks back to Andrew Marshall's
concern about the increasing 'routinization' of analysis. Marshall's sense,
once again, is that as the stockpile of quantitative measures and models
has grown, it has become easier to deal with whatever analytic problems
happen to occur simply by pulling old measures and models off the shelf and applying them with little or no thought as to their applicability or appropriateness. The point we make is that the situation is likely to get
worse in the decade ahead. Almost all the quantitative measures and models
we currently possess were developed within a bipolar world in which the
Soviet Union and its allies provided a fairly well-defined, reliable threat.
Yet the long-heralded multipolar world, which the political scientists have
been promising since the 1970s, does finally appear to be imminent, if not,
with some modifications, already upon us. Moreover, the breathtaking (and
decidedly non-linear and unpredictable) changes now underway in Mikhail
Gorbachev's Russia and Eastern Europe suggest that the archetypal threats
upon which we have so long relied will undergo potentially profound
transformations in the 1990s. Adding to these two trends the accelerating
pace of technological innovation - which promises a whole generation of advanced weapons as new and different as the machine gun in the 1860s - it
seems certain that the decade ahead will be a period in which it will be
more critical than ever before to seriously consider choices of measures
and related analytic tools.
Our second thought concerns the generality of the analytic problems with which we
have grappled in this paper. The difficulties of choosing adequate measures
are not restricted to defense analysis or national security studies. Consider,
for example, Samuel Huntington's recent critique of Paul Kennedy's
thesis that American power will unavoidably undergo 'relative erosion'
in the decades ahead because the country's global military commitments
increasingly exceed its economic strength.133
The core of Huntington's critique is exactly the same one we have made
regarding Galbraith's assessment of the Combined Bomber Offensive:
namely, that too narrow and coarse a set of measures was employed. In
brief outline, Huntington's argument begins by noting that the latest crop
of declinists, the fifth since the 1950s, 'see power coming out of a belching
smokestack' in much the same mechanical way that Mao Zedong saw it
growing out of the barrel of a gun.134 However, Huntington insists, 'the
ultimate test' or measure 'of a great power is its ability to renew its power',
and the US is particularly robust and multidimensional in this regard.135
In contrast to other countries, the United States ranks extraordinarily
high in almost all the major sources of national power: population
size and education, natural resources, economic development, social
cohesion, political stability, military strength, ideological appeal,
diplomatic alliances, technology. It is, consequently, able to sustain
reverses in any one arena while maintaining its overall influence
stemming from other sources. At present, no country can mount
a multidimensional challenge to the United States, and with one
conceivable exception [Japan] no country seems likely to be able to
do so in the relevant future.136
We need not take sides in this debate (although our own instincts, based
on our experience with measures, tend to lie with Huntington). The
bottom line is clear. The problems of choosing adequate measures go far
beyond military affairs, and we ignore them at our peril.

Finally, returning to the more narrow case of military analyses, we would
observe that it is not necessary to rely on the hindsight of full historical
knowledge to evaluate anything other than the simplest of military systems
or situations. Analysts may not be soothsayers, but they can certainly enrich
their analyses by consciously and systematically incorporating more global,
second-order (albeit qualitative) measures into their work. In fact, common
sense would dictate that things as complex as 'military effectiveness'
or 'deterrence' can only be adequately described in multidimensional
ways. Competent analysis should not, as has often been the case in
the past, be narrowly focused or myopically simplistic. Inevitably, the
defense questions most in need of analysis are not simple but complex,
and the best 'answers' to them may be 'vectors' whose components
are themselves complicated. Good analysts, nevertheless, ought to be
able to describe the resulting 'vector spaces' in ways which permit
those who must make decisions to grasp the inherent complexity of the
questions at issue. National security, we would suggest, can only stand
to benefit from the thoughtful attempts of good analysts to do so.

NOTES
1. This essay presents the views of the authors on the general problem of selecting analytic
measures. It does not necessarily reflect the views of the Northrop Corporation or any
agency of the US Government. While accepting full responsibility for the arguments and
conclusions expressed herein, we want to acknowledge Andrew W. Marshall's constant
encouragement, wise counsel, and numerous suggestions. We also want to thank Eliot
A. Cohen and Thomas A. Fabyanic for their invaluable comments on an earlier draft.
2. Andrew W. Marshall, Problems of Estimating Military Power (Santa Monica, CA:
RAND paper P-3417, Aug. 1966), p.9.
3. James R. Schlesinger, 'Systems Analysis and the Political Process,' RAND paper
P-3464, June 1967; Selected Papers on National Security 1964-1968 (Santa Monica,
CA: RAND paper P-5284, Sept. 1974), p.93.
4. John Bartlett, Familiar Quotations (Boston, MA: Little, Brown & Co., 15th ed., 1980),
5. C. A. 'Bert' Fowler, 'The Never-Never Land of Defense Analysis', keynote address,
Cruise Missile Workshop, MIT Lincoln Laboratories, 2 May 1989, p.2.
6. David A. Armstrong, Bullets and Bureaucrats: The Machine Gun and the United States
Army, 1861-1916 (Westport, CT: Greenwood Press, 1982), pp.210-13. There were also
occasions when the opposition of particular individuals severely obstructed adoption of
the machine gun. For example, during the American civil war, the War Department's
chief of ordnance, Brigadier General James W. Ripley, was so strong in his conviction
that imperfect prototype machine guns were not worth purchasing that, ignoring the
desires of both President Abraham Lincoln and the field commanders it was his duty
to support, he was able to prevent the Union repeating gun from being adopted for
general issue (Ibid., p.22).
7. Ibid., pp.15, 23, 210.
8. In 1862, for example, the Raphael repeater was rejected for service with the Union
army by the Frankford Arsenal on the grounds that the gun 'possessed all the
restrictions on movements and requirements for infantry support of conventional
artillery pieces without having the "great moral effect of Artillery,"' (Armstrong,
Bullets and Bureaucrats, p.23).
9. Ibid., p.211. Interestingly, the US Army's tendency to emphasize standardization for
maintainability at the expense of operational performance persisted at least as late
as World War II. Standardization made the US M4 Sherman tank relatively easy to
maintain overseas, but by the time it finally got to the field its armor and gun were not
able to match those of the newer German models.
10. Ibid., p.210. It was not until 1903, when combat experience and purchase of the radically
different Maxim gun finally 'convinced military men that whatever it was, the machine
gun was not an artillery piece', that progress toward its assimilation as an infantry
weapon began to occur (Ibid.).
11. John A. Battilega and Judith K. Grange, 'On the Need for a New Architecture for
Strategic Force Modeling', Science Applications International Corporation briefing
SAIC-87/6058&FSRC/B, 55th Military Operations Research Symposium,
Montgomery, AL, 19-21 May 1987, pp.2, 4. The most obvious evidence of the aggregate
influence of target destruction criteria on the US strategic force posture would appear to
be the magnitude of the long-term growth from the early 1960s to the present in indices
such as the number of warheads, throw-weight, and deliverable megatonnage.
12. Because the countable warheads would be keyed to deployed vehicles by rules such as
'each MX missile counts for ten warheads against the 6,000 limit', the warhead ceiling
does not truly count physical warheads. For example, the rule tabled by the Soviets at the
Reykjavik summit conference for non-cruise-missile-carrying bombers would only count
one warhead per vehicle against the 6,000 limit (Thomas S. Troyano, 'Strategic Bomber
Modernization and START: An Analytic Framework', Strategic Review, Summer 1989,
p.49). Hence a non-cruise-missile bomber like the B-2 might well provide 15 'uncounted'
warheads on a SIOP mission.
13. Glenn A. Kent and David E. Thaler, First-Strike Stability: A Methodology for
Evaluating Strategic Forces (Santa Monica, CA: RAND report R-3765-AF, Aug.
1989), p.4.
14. Battilega and Grange noted that, as of 1987, the deficiencies in US tools for strategic
analysis had been worsening for at least a decade, that the nuclear forces then being
discussed were even further from traditional assumptions, and that by the late-1990s
almost all strategic-force analysis would fall outside traditional models and measures
('On the Need for a New Architecture for Strategic Force Modeling', p.2). On these
grounds they recommended, therefore, that a community-wide effort be initiated to
develop new analytic tools sensitive to the increasingly neglected dimensions of the
evolving US-Soviet strategic-nuclear balance.
15. F. W. Lanchester, Aircraft in Warfare: The Dawn of the Fourth Arm (London:
Constable, 1916), pp.40-47. The shortcomings of Lanchester's original equations
as models of combat are legion. For example, at the 35th military operations research
symposium in 1975, J. G. Taylor listed no less than 15 major shortcomings (John
A. Battilega and Judith K. Grange (eds.), The Military Applications of Modeling
(Wright-Patterson AFB, OH: Air Force Institute of Technology Press, 1984), p.90).
Moreover, efforts to validate such laws with historical combat data have proven, at best,
inconclusive. 'Attempts to correlate the Lanchester-type models with historical combat
data have proven generally inconclusive, primarily, it seems, because the quality of the
data base is poor. One of the most recent is by Janice Fain, who analyzed data on 60
World War II engagements. . . Nevertheless, at this writing it is still difficult to say that
historical data prove or disprove the validity of Lanchester-type models'. (Ibid., p.92).
16. Robert McQuie, 'Battle Outcomes: Casualty Rates As a Measure of Defeat', Army,
Nov. 1987, p.33. McQuie found that in 64 per cent of the 52 battles in which the likely
reason for defeat could be identified, one side gave up through being outmaneuvered.
Casualties or equipment losses only accounted for defeat in ten per cent of the cases.
This would appear to explain why repeated attempts over the years to confirm one or
the other of Lanchester's 'laws' are generally believed to have been unsuccessful.
17. P. M. S. Blackett, 'The Scope of Operational Research', Studies in War: Nuclear
and Conventional (New York, NY: Hill & Wang, 1962), p.201. This essay originally
appeared in the No.1 (March 1950) issue of the Operational Research Quarterly.
18. Blackett, p.202.
19. Blackett, p.202.
20. Blackett, p.202.
21. Blackett, p.203.
22. Blackett, pp.203-4.
23. Philip M. Morse and George E. Kimball, Methods of Operations Research (Washington,
DC: Navy Department OEG Report No.54, 1946), pp.38-9.
24. Actually, Morse and Kimball use the term 'measure of effectiveness', from which
the acronym MOE derives. Given current usage, 'analytic measure' is probably
a better term for what Morse and Kimball mean than MOE (despite the fact
that this paper has already used 'analytic measure' and 'effectiveness measure'
interchangeably). Since the 1970s, MOE has come to be equated by at least one
segment of the American defense-analytic community with firepower scores for
conventional weapons. Examples of such MOE scores can be found in the Army's
WEI/WUV (Weapon Effectiveness Indices/Weighted Unit Values) system and The
Analytic Sciences Corporation's force-modernization (TASCFORM) methodology.
In themselves, though, raw 'firepower' scores are not true analytic measures of
operational effectiveness in the broader sense of sweep rates or kill ratios. WEI/WUV
and TASCFORM weapons scores focus on the firepower inputs to combat processes
rather than on measurable indices of the output or results of combat. Such scores
can be used to construct crude analytic measures (for example, by calculating the
ratio of firepower between opposing forces). However, due precisely to the lack
of direct causal linkages between input and output illustrated in McQuie's research
into the relation between casualties and victory, the value of firepower ratios for
predicting combat outcomes or effectiveness is not generally thought to be great.
25. Morse and Kimball, p.38.
26. Morse and Kimball, p.48.
27. 'Systems analysis', wrote Alain C. Enthoven and K. Wayne Smith in 1971, 'is a reasoned
approach to highly complicated problems of choice in a context characterized by much
uncertainty; it provides a way to deal with differing values and judgments; it looks
for alternative ways of doing a job; and it seeks, by estimating in quantitative terms
where possible, to identify the most [cost] effective alternative' (How Much Is Enough?
Shaping the Defense Program, 1961-1969 (New York, NY: Harper and Row, 1971),
p.62). While the broad resource-allocation problems upon which Pentagon systems
analysts focused during the 1960s were different in scale and scope from the tactical-
operational concerns of World War II operations research, the spirit of trying to apply
the 'quantified common sense' of the 'hard' sciences to military affairs was fundamental
to both forms of analysis. It should be noted, though, that the quantitative analytic tools
which systems analysts in the Office of the Secretary of Defense used during the 1960s
to justify killing programs like the B-70 bomber and the Skybolt air-launched ballistic
missile, to force the aircraft carrier John F. Kennedy to be built with conventional
rather than nuclear-powered propulsion, and to limit the Minuteman deployment to
1,000 missiles were largely derived from the field of economics.
28. James R. Schlesinger, 'On Relating Non-Technical Elements to Systems Studies',
Selected Papers on National Security 1964-1968 (Santa Monica, CA: RAND
Corporation paper P-5284, Sept. 1974), p.77.
29. In early 1961, then Defense Secretary McNamara installed Charles J. Hitch, 'the
father of the planning, programming and budgeting system', as Defense Department
comptroller, and Alain C. Enthoven as Hitch's deputy for systems analysis (David C.
Morrison, '"Whiz Kids" Rebound?' National Journal, 11 Nov. 1989, p.2741). Enthoven
then ran the Office of Systems Analysis (OSA) - which was renamed PA&E (Program
Analysis and Evaluation) in 1973 - from 1961 to 1969. The 'whiz kid' ethos of OSA, as
distilled by a former member of the organization, was that 'Other people had adjectives,
we had arithmetic'. (Ibid., p.2740).
30. Schlesinger, 'Systems Analysis and the Political Process', P-5284, p.93.
31. Ibid., p.114.
32. For example, Schlesinger's considered judgment, as an unabashed but qualified
defender of systems analysis, was that he could give it two-and-one-half cheers, but
not three ('Uses and Abuses of Analysis', P-5284, p.106).
33. Discussion with Andrew W. Marshall, 19 Nov. 1988. Marshall has been the Director of
Net Assessment in the Office of the Secretary of Defense since the fall of 1973; in this
capacity he has been both a producer and consumer of defense analyses.
34. Arthur B. Ferguson, 'POINTBLANK', in Wesley F. Craven and James L. Cate, The
Army Air Forces in World War II, Vol.2, Europe: TORCH to POINTBLANK, August
1942 to December 1943 (Washington, DC: Government Printing Office, 1976 imprint),
35. The POINTBLANK target system, nevertheless, continued to be the basis for precision
bombing operations by the US heavy bomber forces through the end of the war in
Europe (J. Kenneth Galbraith, Burton H. Klein, et al., The Effects of Strategic Bombing
on the German War Economy (Washington, DC: Government Printing Office, 31 Oct.
1945), p.3).
36. 'The Bomber Offensive from the United Kingdom', C.S.S. 166/1/D, 21 Jan. 1943, in
The Casablanca Conference, January 1943: Papers and Minutes of Meetings, Office of
the Combined Chiefs of Staff, file 119.151-1, Alfred F. Simpson Historical Research
Center, Maxwell AFB, Alabama, p.88. By the time the detailed plan implementing this
directive was presented in late April 1943, the mission statement had been explicitly
construed to mean that Germany would be 'so weakened as to permit initiation of
final combined operations on the Continent' (Major General Ira C. Eaker, Minutes
of Meeting: Presentation of the Combined Bomber Offensive Plan to the Joint Chiefs
of Staff, US National Archives, Record Group 218, CCS 334, 71st-86th Meetings, 29
April 1943, p.A31826).
37. Franklin D'Olier, Henry C. Alexander, et al., Over-all Report (European War)
(Washington, DC: GPO, 30 Sept. 1945), p.10. The British suspended strategic
bombing of German cities on 7 April 1945, and the American strategic air forces
in Europe ceased strategic operations on 16 April 1945 (Craven and Cate, The Army
Air Forces in World War II, Vol.3, Europe: ARGUMENT to V-E Day, January 1944
to May 1945, pp.753-4).
38. Major General Haywood S. Hansell, Jr., Strategic Air War Against Japan (Washington,
DC: GPO, 1980), p.3; Hansell, The Air Plan That Defeated Hitler (Atlanta, GA:
Higgins-McArthur/Longino & Porter, 1972), pp.33, 37, 46; Donald Wilson, 'Origin of
a Theory for Air Strategy', Aerospace Historian, March 1971, pp.19-20, 25.
39. David MacIsaac, Strategic Bombing in World War Two: The Story of the United
States Strategic Bombing Survey (New York, NY: Garland, 1976), p.8. MacIsaac's
encapsulation is a concise, literate summary of the theory of strategic air attack worked
out by Haywood S. Hansell, Jr., Donald Wilson, and other Army Air Corps officers at
the Air Corps Tactical School during the 1930s.
40. USAFHD 168.491 (Operational letters), Vol.1, 21 July 1943, cited in William R.
Emerson, 'Operation POINTBLANK: A Tale of Bombers and Fighters', The
Harmon Memorial Lectures in Military History, 1959-1987, ed. Harry R. Borowski
(Washington, DC: GPO, 1988), p.455.
41. W. W. Rostow, Pre-Invasion Bombing Strategy: General Eisenhower's Decision of 29 March 1944
(Austin, TX: University of Texas, 1981), p.45.
42. Fagg, The Army Air Forces in World War II, Vol.3, pp.754-5. This 1951 conclusion
from the official Army Air Forces history is strikingly similar to that reached in 1945 by
the United States Strategic Bombing Survey's overall report on the European theater.
The USSBS offered the following summary: 'Allied air power was decisive in the war
in western Europe. Hindsight inevitably suggests that it might have been employed
differently or better in some respects. Nevertheless it was decisive'. (Franklin D'Olier,
et al., Over-all Report (European War), 30 Sept. 1945, p.107).
43. USSBS Interview 11-A, Albert Speer, 22 May 1945, cited in Fagg, 'Mission
Accomplished', The Army Air Forces in World War II, Vol.3, p.786. Speer has subsequently
stuck with his original judgment that air power alone could have won the war. See,
in particular, Albert Speer, Inside the Third Reich, trans. Richard and Clara Winston
(New York, NY: Collier, 1981; Verlag Ullstein GmbH., 1969), p.285.
44. Bernard Brodie, Strategy in the Missile Age (Princeton, NJ: Princeton University Press,
1959), p.107. More generally, Brodie argued that the purely strategic successes of
US bombing efforts in World War II, 'however far-reaching in particular instances,
were never completely convincing to uncommitted observers' (Ibid.). Not surprisingly,
many American bomber commanders during World War II took vehement exception
to Brodie's assessment. For example, in 1976 Lieutenant General Ira Eaker, who had
been the wartime commander of the US Eighth Air Force until the end of 1943,
denounced the paragraph, cited above, from Strategy in the Missile Age as 'slanted,
prejudiced', and 'wholly unrelated to the facts' (David MacIsaac, 'Voices from the
Central Blue: The Air Power Theorists', Makers of Modern Strategy from Machiavelli
to the Nuclear Age, ed. Peter Paret (Princeton, NJ: Princeton University Press, 1986),
note 22, p.636).
45. Brodie, Strategy in the Missile Age, p.127.
46. R. J. Overy, The Air War 1939-1945 (New York, NY: Stein and Day, 1981), pp.
47. Galbraith, Klein, et al., The Effects of Strategic Bombing, p.ii.
48. In his 1981 autobiography, Galbraith went so far as to opine that by the end of 1945
it was possible that he 'knew more about the drained and shattered economies of
Germany and Japan than anyone alive' (John Kenneth Galbraith, A Life in Our Times:
Memoirs (Boston, MA: Houghton Mifflin, 1981), p.240).
49. Galbraith, A Life in Our Times, p.206. As used by Galbraith's Economic Effects
Division, the term 'armaments production' refers to Germany's wartime output of
the 'main types' of 'finished' military end items (Galbraith, Klein, et al., The Effects
of Strategic Bombing, pp.139, 144). This usage includes such major equipment items as
aircraft, tanks, powder and ammunition, artillery, and naval combatants; but it excludes
lesser items such as communications gear, fire-control mechanisms, medical stores, and
the entire range of quartermaster supplies (Ibid., p.192). By 1943, German production
of the main types of 'finished munitions output' represented about one-third of total
industrial production, and about 15 per cent of Germany's gross national product (Ibid.,
pp.23, 139).
50. Galbraith, A Life in Our Times, p.205.
51. Galbraith, A Life in Our Times, p.205. German fighter monthly production peaked at
3,375 aircraft in Sept. 1944 (Galbraith, Klein, et al., The Effects of Strategic Bombing,
Table 102, p.277).
52. Galbraith, A Life in Our Times, p.215. The suggestion that strategic bombing may
have perversely aided German war production crops up even in the USSBS volume
for which Galbraith was the principal author (see, in particular, Galbraith, Klein, et
al., The Effects of Strategic Bombing, pp.26, 38, 157).
53. Galbraith, A Life in Our Times, p.226. The parallel passage in the economic-effects
volume of the USSBS reads as follows: 'The most that can be said is that the bombing
destroyed a substantial part of the consumer goods cushion and thereby prevented the
further conversion of the civilian economy to war production in 1944. From December
1944 onwards, all sectors of the German economy were in rapid decline. This collapse
was due to the results of air raids working in combination with other causes'. (Galbraith,
Klein, et al., The Effects of Strategic Bombing, p.13).
54. Galbraith, A Life in Our Times, p.200.
55. Kenneth S. Lynn, 'Galbraith's Brain', National Review, 7 Aug. 1981, p.906.
56. Galbraith, Klein, et al., The Effects of Strategic Bombing, p.139.
57. The eight main types of 'finished munitions output' analysed by Galbraith's Overall
Economic Effects Division were: aircraft, armored fighting vehicles, motor vehicles and
half-tracks, naval construction, powder, weapons, and ammunition (Galbraith, Klein,
et al. The Effects of Strategic Bombing, p. 139). For military items excluded by these
categories, see Note 49 above.
58. Carl von Clausewitz, On War, ed. and trans. Michael Howard and Peter Paret
(Princeton, NJ: Princeton University Press, 1976), pp.75, 86, 89, 101, 104, 113-23,
146, 577, 579-80.
59. We are not the first to have raised this criticism of Galbraith's view of the Combined
Bomber Offensive. Both Walt Rostow and Guido Perera have criticized Galbraith
for mistakenly believing that the sole aim of the strategic air effort against Hitler's

Germany was 'to reduce the general level of German industrial production' (Guido
R. Perera, Leaves from My Book of Life, Vol.2, Washington and War Years (Boston,
MA: privately printed at the Stinehour Press, 1975), pp.ix-x).
60. Colonel Carl Norcross, et al., Aircraft Division Industry Report (Washington, DC:
GPO, 1st ed. 2 Nov. 1945), Chart 'The German Aircraft Industry Under Allied Air
61. Ferguson, The Army Air Forces in World War II, Vol.3, p.62; James H. Doolittle with
Beirne Lay, Jr., 'Daylight Precision Bombing' in IMPACT: The Army Air Forces'
Confidential Picture History of World War II (New York, NY: James Parton, 1980),
Vol.6, p.xv. It is our sense that the full contribution of US long-range escort fighters
to the Luftwaffe's loss of daytime air superiority over central Germany in the spring of
1944 remains one of the least appreciated aspects of the CBO.
62. Norcross, et al., Aircraft Division Industry Report, pp.24-5.
63. Ibid., p.26.
64. Ibid., p.7.
65. 'Beginning in March [1944], the Eighth Air Force discontinued efforts to evade enemy
fighters in its operations. To accomplish our mission, [the command's planners
reasoned,] we must not only bomb the aircraft factories, but also force enemy
fighters into the air. We now sought to provoke enemy fighter reaction'. (Major
General William E. Kepner, Eighth Air Force Tactical Development: August 1942-May
1945 (England: Eighth Air Force and Army Air Forces Evaluation Board, July 1945),
pp.76-7). The overriding strategic mission at this stage was, as General H. H. Arnold
emphasized in his 27 Dec. 1943 New Year's message to the commanding generals of
the Eighth and Fifteenth air forces, to destroy the enemy air force 'wherever you find
them, in the air, on the ground and in the factories' as a prerequisite for Overlord and
Anvil (Arthur Ferguson, 'Winter Bombing', The Army Air Forces in World War II,
Vol.3, p.8). In this regard, General Doolittle viewed his decision in Jan. 1944 to allow
Eighth Air Force's escort fighters to take the offensive and begin seeking out German
fighters wherever they could be found as his 'most important decision of World War II'
(Doolittle with Lay, IMPACT, Vol.6, p.xv).
66. Oil Division Final Report (Washington, DC: GPO, 1st ed. 25 August 1945), p. 4 and
fig. 7.
67. Oil Division Final Report, Figure 2; Rostow, Pre-Invasion Bombing Strategy, pp.53,
68, 78-9; Emerson, pp.447-8 and 468-9; ULTRA: History of US Strategic
Air Force Europe vs. German Air Forces, Special Research History 013, National
Archives, Record Group 457, pp.179-80. German aircraft fuel production dropped
from 180,000 tons in March 1944 to 54,000 tons in June, and was down to only 10,000
tons a month by September (Rostow, p.53). The German Air Force was the first to
feel the pinch. From September the Luftwaffe, with a monthly minimum requirement
of 160,000 tons of octane, was allotted only 30,000 tons for operations (Ibid., 79).
ULTRA decryptions in May 1944 following the US attacks showed that the Germans
reassigned 'oil defense a priority over even the defense of aircraft manufacture' (SRH
013, p.179).
68. Walt Rostow, who was part of the Enemy Objectives Unit (EOU) of the Economic
Warfare Division of the US embassy in London during World War II, was still
arguing in 1981 that General Dwight D. Eisenhower's 29 March 1944 decision to
focus the strategic air forces on transportation rather than oil was a major strategic
error (Rostow, Pre-Invasion Bombing Strategy, pp.75-84). Yet Alfred Mierzejewski
has recently taken the position that the nexus of transportation and coal, not oil, was
the only target system that 'was both sufficiently well defined to enable concentration
of effort and functionally broad enough to strike at the root of the German economy
while preventing it from shifting resources to counter losses' (Alfred C. Mierzejewski,
The Collapse of the German War Economy, 1944-1945: Allied Air Power and the
German National Railway (Chapel Hill, NC: University of North Carolina Press, 1988),
p.182). Major General Haywood Hansell, who was one of the architects of the CBO
and earlier American strategic air plans, has consistently maintained that the German
electric power system (some 45 generating plants and 12 switching stations) could have

been destroyed by mid-May 1944 without lessening the attacks on the Luftwaffe or
oil, and that doing so 'would have caused the prompt collapse of the Reich' (Hansell
The Air Plan That Defeated Hitler, pp.267-8). Guido Perera, who was a member of
both the Committee of Operations Analysts and the US Strategic Bombing Survey, has
argued that ball bearings could have had similar effects if only a bit more effort had been
focused on this target system in 1943 (Perera, Leaves from My Book of Life, Vol.2,
pp.139, 156-9). Perera also supports Rostow in arguing that had a heavy, coherent
bombing effort been focused on oil prior to Normandy, 'the war might have been
ended in later 1944 or early 1945' (Ibid., p.169). So the debate over strategic targeting
priorities against Germany during the CBO period persists. Indeed, disagreement over
target selection appears to be as deep today as it was during World War II.
69. Galbraith, Klein, et al., The Effects of Strategic Bombing, p.185.
70. Ibid., p.187.
71. Ibid., p.185.
72. Klein's basic thesis in 1959 appears to have been derived from the USSBS observation
that had 'Germany's leaders decided to make an all-out war effort in 1939 instead of
1942, they would have had time to arm in "depth"; that is, to lay the foundations of
a war economy by expanding their basic industries and building up equipment for the
mass production of munitions' (Galbraith, Klein, et al., The Effects of Strategic Bombing
on the German War Economy, p.7). Williamson Murray, however, has criticized this
contention on the grounds that it assumes 'the circumstances that determined German
rearmament in the 1930s and American rearmament in the 1940s' were essentially the
same when in fact they 'were vastly different' (Williamson Murray, The Change in the
European Balance of Power, 1938-1939 (Princeton, NJ: Princeton University Press,
1984), p.12). One clear weakness in Klein's case is his contention that the 'tremendous
increase in military output which occurred from late-1942 to mid-1944 was accomplished
with a relatively small increase in the resources available for the German war effort'
(Klein, Germany's Economic Preparations for War, p.213). As Murray has noted,
the resources available to the Germans in 1942, after they had gained control of 'the
resources of almost the entire European continent', were much greater than they were
in 1938-39, and 'German success in expanding production throughout the last three
years of the war. . .occurred because the Germans were able to exploit ruthlessly the
resources of the occupied and neutral countries within their sphere of control' (Murray,
The Change in the European Balance of Power, pp.13-14).
73. Burton H. Klein, Germany's Economic Preparations for War (Cambridge, MA:
Harvard University Press, 1959), p.206. Nevertheless, Klein does emphasize that the
overall economic effects report was written under such a pressing deadline that those
involved were not able to go through all the captured German economic documents
before the report had to be completed (Telephone conversation with Burton H. Klein,
2 Jan. 1990).
74. Klein, pp.232-3. Klein's assessment of the CBO also indicates that the claim that
the bombing had opened up a 'second front' against the Germans by early 1944 is
not without merit.
75. Galbraith, Klein, et al., The Effects of Strategic Bombing, pp.144-6, 149, 181-3,
276. The portion of total aircraft production due to defensive aircraft was estimated
as follows. The percentage of German finished armaments production consumed by
aircraft of all types in June 1944 was, as shown in Table 1, 46.1 per cent, and roughly
half of German aircraft production by weight was going to defensive fighters. So if we
assume that aircraft costs are proportional to airframe weight - which they basically
were in that era - then the portion of finished armaments production due to defensive
aircraft would be 46.1 per cent/2 = 23.05 per cent.
76. Galbraith, Klein, et al., The Effects of Strategic Bombing, Table 84 (II), p.149.
77. Ibid., Tables 80 and 81, p.145; Table 100, p.275. Tables 80 and 81 give values in constant
prices for total munitions and aircraft for just a few selected months. Table 100 contains
monthly and annual indices for the entire war based on Jan.-Feb. 1942 being 100 in each
category. These indices were used to calculate constant prices for months not listed in
Tables 80 and 81.

78. Regarding the circumstances in which the Overall Economic Effects Division's report was
written, Klein stresses that there simply was not sufficient time to consider more than
the most direct and narrow effects of strategic bombing (conversation with Klein, 2 Jan.
1990).
79. Klein not only used a wider set of measures than Galbraith; his associated conceptual
framework was also different. Implicitly at least, Klein seems to look at the war in Europe
as a two-sided competition to generate men, armament, ammunition, etc. for use in
military operations. From this perspective, it was natural to go beyond mere monthly
production rates and consider the broader question of how effectively the two sides
were able to translate industrial output into operational capabilities over a period. In
this regard, perhaps the overriding conclusion Klein drew from Germany's experience
was 'that a nation's economic war potential may be a very poor measure of her actual
military strength' (Klein, Germany's Economic Preparations for War, p.238). By the
time of his 1959 book, what was most surprising to Klein about Germany was not that
she eventually lost the war to a combination of powers whose economic strength vastly
exceeded hers, but 'how well she did despite the economic odds against her' (Ibid.).
80. B. V. Panov, V. N. Kiselev, I. I. Kartavtsev, et al., Istoriya voyennogo iskusstva
[History of Military Art] (Moscow: Voyenizdat, 1984), Ch. II, section 3; Colonel
David M. Glantz, The Great Patriotic War and the Maturation of Soviet Operational
Art: 1941-1945, Soviet Army Studies Office, Fort Leavenworth, April 1987 draft.
81. Williamson Murray, Strategy for Defeat: The Luftwaffe 1933-1945 (Maxwell AFB:
Air University Press, 1983), p.275.
82. The great missed opportunity in this story for the Germans, of course, appears to have
been their failure to begin producing the Me-262 in operationally significant numbers
earlier. Had the Germans fielded the Me-262 in the fall of 1943 - Burton Klein believes
they could have easily done so - the CBO might well have turned out very differently
(Klein, Germany's Economic Preparations for War, p.238). However, the first Me-262s
did not appear in combat until July 1944, and by then Germany no longer had the fuel
or the pilots to fully exploit its potential against the American bomber streams. Even
so, it is clear that from July 1944 to early 1945, US bomber commanders in Europe
were genuinely fearful that the 'daylight offensive could be seriously impeded by the
introduction . . . of the jet fighter in force' (Major General F. L. Anderson, letter to
Air Vice Marshal W. Coryton, 13 July 1944, Library of Congress, Spaatz papers, Box
50; also, Lt. Col. Donald R. Baucom, 'The Coming of the German Jets', Air Force
Magazine, Aug. 1987, p.90).
83. The initial operational employment of a German V-weapon against England did not
occur until the night of 12-13 June 1944, when the first V-1s were launched from
sites in France (Craven and Cate, The Army Air Forces in World War II, Vol.3, p.84).
The last V-1 fired from France fell in Kent on the afternoon of 1 Sept. 1944 (Ibid.).
84. Murray, Strategy for Defeat: The Luftwaffe 1933-1945, pp.300-01. Neither
Germany nor Japan fielded long-range bomber forces in World War II comparable to
those of the British or the Americans. The closest analogs fielded by either Axis country
were the Germans' V-1 and V-2. But compared to the B-29, or even the B-17, the
German V-weapons were relatively short range. They were also committed to battle
too late and in insufficient numbers to affect the outcome of the war in Europe.
Furthermore, because the V-weapons were only used against England, the Soviets
in the east were never required to defend their war industry, rear areas, or lines of
communications against German strategic air attack.
85. MacIsaac, 'Voices from the Central Blue', p.636. Noble Frankland, together with Sir
Charles Webster, authored The Strategic Air Offensive against Germany, 1939-1945,
the official history of the British Bomber Command during World War II.
86. For a sense of how great the disagreement over target selection during the CBO remains
to this day, see Note 68.
87. Galbraith, Klein, et al., The Effects of Strategic Bombing, pp.124-6; Hansell, The Air
Plan That Defeated Hitler, pp.259, 261-2, 267-8, 286-97. The primary reason
German electric power was never systematically attacked during the years 1943-45
was the opinion of many American economic analysts, both in Washington and London,
that the target system was too dispersed and too difficult to be vulnerable to strategic
bombing. Not surprisingly, Galbraith's overall economic report repeated some of these
same objections to electric power as a target system. Yet the overall economic effects
report, which Galbraith characterized in 1981 as a 'competent and literate document'
which was 'published without censorship of any kind', contains an overall assessment
of the potential vulnerability and decisiveness of this target system which is strikingly
at odds with Galbraith's pointed denigration of strategic bombing as a whole (A Life in
Our Times, pp.225,227). Notwithstanding the obvious difficulties of bombing elements
of the system such as transmission lines and towers, the report offered two pivotal
judgments about electric power: first, that the critical nodes in Germany's electric
power system - notably the 40-60 largest generating plants and the 9-12 vital
switching, control, and transformer stations - would not have been an unreasonable
target set for the bombers to have attacked; and, second, that electric power, unlike
most other elements of the German war economy during the early years of the war, ran
out of excess capacity by the fall of 1941 and was thereafter stretched taut (The Effects
of Strategic Bombing, p.126). So Galbraith's own volume of the USSBS concluded that,
on balance, attacking electric power 'might well' have been 'significant enough to have
had a decisive effect on the ability of the industrial war economy to continue to supply
the needs of war' (Ibid.).
88. Hansell, The Air Plan That Defeated Hitler, p.259; Craven and Cate, The Army Air
Forces in World War II, Vol.2, p.362; Perera, Leaves from My Book of Life, Vol.2,
p.96. German electric power was accorded first priority in the AWPD-1 air plan done
in Aug. 1941; and in the post-Pearl Harbor update conducted in Sept. 1942, AWPD-42,
electric power was given fourth priority behind the German Air Force, submarine yards,
and transportation system (Hansell, The Air Plan That Defeated Hitler, p.259). Based
on the arrangement of the COA's final report and certain oral statements made by the
Committee to General Arnold, electric power was dropped to tenth priority in a list of
14 target systems (Perera, Vol.2, p.96; Craven and Cate, Vol.2, pp.361-2).
89. Interview of Major General Haywood S. Hansell, Jr., conducted by Thomas A.
Fabyanic, Bolling AFB, 21 Jan. 1987, p.15. Guido Perera essentially ran the COA
and was among the authors of its European reports. He has pointed out that the
COA's report of March 1943 (on the advice of the Eighth Air Force, the Economic
Warfare Division of the American Embassy, and the British Ministry of Economic
Warfare) did not provide a 'formal list of target priorities' for the bombing of Germany
(Perera, Leaves from My Book of Life, Vol.2, pp.95, 96). Instead the report offered
criteria by which others might carry out such a prioritization. But the ever impulsive
General Arnold quickly produced a prioritized list of target systems from the COA's
work in which electric power was tenth out of 14 (Ibid.).
90. Hansell, The Air Plan That Defeated Hitler, p.259. Perera, however, has emphasized
that all the COA's assumptions concerning capabilities were provided by the Eighth
Air Force (Interview of Guido R. Perera, conducted by Thomas A. Fabyanic and David
MacIsaac, Boston, MA., 10 June 1987, tape 1, side 2).
91. Perera's understanding of the COA's charter was that the committee had been tasked
by General Arnold to look at targets whose disruption would 'have an immediate and
maximum effect upon [German] front line military strength', thereby making 'possible
the earliest invasion of Europe' (Perera, Leaves from My Book of Life, Vol.2, pp.77,
151). On the basis of these criteria electric power did not appear operationally feasible
to the COA. On the other hand, members of the COA (including Perera) did journey
to England in late-January 1943 and visit the Eighth Air Force, the Enemy Objectives
Unit (EOU) of the Office of Strategic Services, and the British Ministry of Economic
Warfare (Ibid., pp.84-92). Furthermore, the EOU is known to have concluded prior
to the COA's visit that electric power was not only a difficult target system to attack, but
even if attacked successfully 'would yield relatively small returns' which would be 'long
delayed' ('The German Electric Power System as a Bombing Objective: With Special
Reference to the Rhineland and Westphalia', 5 Jan. 1943, EOU, US National Archives,
Record Group 243, Box 18, Folder 3a79). Therefore, if the members of the COA were
guilty of rendering an operational judgment about the feasibility of attacking electric
power they were unqualified to make, they were not alone in having done so.
92. Interview of Hansell, 21 Jan. 1987, p.15.
93. Following the AWPD-42 effort in Sept. 1942, both the Navy and the Joint Intelligence
Committee had begun to argue vigorously that airmen were not qualified to select
industrial target systems for strategic air attack. It was this line of objection to the
AWPD-42 program that led General Muir S. Fairchild to provoke General Arnold
into establishing the COA in order to provide a 'scientific' basis for target selection
(Craven and Cate, The Army Air Forces in World War II, Vol 2., p.349; Perera, Leaves
from My Book of Life, Vol.2, pp.68-71). As Hansell later said, this overarching goal
for the COA left him with a dilemma regarding the priority of electric power in the
CBO: 'If I ordered electricity put back in top priority I would be opening up the whole
problem of selecting industrial targets. If I should do that I would be challenging the
competence of the Committee of Operations Analysts. This was the very agency that
we had pulled together to save us from having industrial targets and the whole idea of
strategic air warfare eliminated all together . . . So I went along with the elimination of
electric power'. (Interview of Hansell, 21 Jan. 1987, pp.15-16).
94. Craven and Cate, The Army Air Forces in World War II, Vol.1, Plans and Early
Operations: January 1939 to August 1942, p.441.
95. Rear Admiral Edwin T. Layton with Captain Pineau and John Costello, 'And I Was
There': Pearl Harbor and Midway - Breaking the Secrets (New York, NY: Quill,
1985), pp.380-1, and 385. The Doolittle mission was conceived in Washington
D.C. during Jan. 1942, evidently with some involvement by President Franklin D.
Roosevelt (Craven and Cate, Vol.1, p.438). 'President Roosevelt had expressed his
desire to see the Japanese home islands bombed; Britain's Air Chief Marshal Sir
Charles Portal had suggested a carrier raid on Japan to General Arnold at ARCADIA
[in Dec. 1941]. Arnold thought the suggestion impractical. But a few days later Admiral
King's operations officer, Captain Francis S. Low, suggested a plan for such a raid. It
would use army bombers, launched from carriers outside the range of Japanese fighters.
With King's approval, Low and Captain Donald B. Duncan, King's air officer, prepared
a detailed proposal. They submitted it to Arnold in mid-Jan. The air force chief, who
was already thinking of operating bombers from aircraft carriers in connection with a
projected invasion of North Africa, readily agreed. Arnold assigned Lieutenant Colonel
James H. Doolittle, a distinguished aviator and an aeronautical engineer, to head the
mission'. (Ronald H. Spector, Eagle Against the Sun (New York, NY: The Free Press,
1985), p.154).
96. Unless otherwise noted, the details of the Doolittle raid - officially Special Aviation
Project No.1 - have been based on correlating four documents in file 142.034, Albert
F. Simpson Historical Research Center, Maxwell AFB, Alabama. These documents
include a 95-page historical manuscript, originally classified SECRET, recounting the
operational details of Special Aviation Project No.1.
97. Nagoya is about 140 nautical miles west of Tokyo. Kobe and Osaka lie about 85 nautical
miles west of Nagoya.
98. Spector, p.155.
99. Layton, Pineau, and Costello, p.387.
100. Craven and Cate, The Army Air Forces in World War II, Vol.1, p.440.
101. In the event, Admiral William F. Halsey, who commanded the task force that launched
Doolittle's raiders, elected to launch the B-25s some ten hours earlier than planned at a
distance of 800 statute miles from Japan following detection of the task force's presence
by a Japanese picket ship on the morning of 18 April 1942 (Craven and Cate, The Army
Air Forces in World War II, Vol.1, p.441). Layton notes that the picket ship did manage
to alert Tokyo by radio before being sunk (Layton, Pineau, and Costello, p.385). Apart
from forcing the B-25s to fly further than planned, Halsey's decision changed their
arrival time over their targets from night to midday. This change not only increased their
vulnerability to Japanese fighters, but prevented darkness from masking the number of
bombers employed.
102. Layton, Pineau, and Costello, p.387. Admiral Layton was not only a naval intelligence
officer, but fluent in Japanese as well.
103. Layton, Pineau, and Costello, p.387.
104. John Toland, The Rising Sun: The Decline and Fall of the Japanese Empire,
1939-1945 (New York, NY: Random House, 1970), Vol.1, p.386.
105. B. H. Liddell Hart, History of the Second World War (New York, NY: Putnam's Sons,
1971), p.345.
106. Craven and Cate, The Army Air Forces in World War II, Vol.1, p.444.
107. Liddell Hart, p.345; Layton, Pineau, and Costello, p.385.
108. Layton, Pineau, and Costello, p.403.
109. Spector, pp.162-3. It is ironic in light of the reduction of Nagumo's carrier striking
force from six to four carriers at Midway that a post-war Naval War College study
could find 'no serious strategical reason' for the Doolittle raid other than to raise Allied
morale (Ibid., p.154).
110. Layton, Pineau, and Costello, pp.419-32. The decryption and analysis effort that
literally made possible the American success at Midway focused primarily on
information gleaned from a version of the JN-25 operational code system used
by the Japanese fleet (Ibid., p.174). By 27 May 1942, Layton's signals-decryption
unit on Oahu under Commander Joseph J. Rochefort had given Nimitz most (but
not all) of the essentials of the Japanese plan for attacking Midway, including timing.
As a result, when Admiral Nagumo began launching ground-attack configured aircraft
against Midway on the morning of 4 June 1942, he had no idea that three American
carriers were almost within striking range (Ibid., p.437).
111. Liddell Hart, pp.352-3.
112. Senshi Sosho [War History Series] (Tokyo: Defense Headquarters History Office) as
paraphrased in Layton, Pineau, and Costello, pp.387-8. The success of the US
decryption unit in Hawaii in the weeks preceding the Japanese effort against Midway
rendered Admiral Yamamoto's occupation of Attu and Kiska in the western Aleutians,
intended to divide American forces, worthless. The decryptions allowed Layton and
Nimitz to recognize the diversion for what it was, and all available American naval
forces were concentrated against the Japanese at Midway. Moreover, between the
battles of the Coral Sea and Midway, Nimitz sent Halsey with two carriers to a
position east of the Solomons where he was under secret orders to reveal his presence
to Japanese patrol planes attached to an invasion force headed for Nauru and Ocean
islands before himself setting off for Pearl Harbor and Midway (Katherine Herbig,
'American Strategic Deception in the Pacific, 1942-44', Strategic and Operational
Deception in the Second World War, ed. Michael Handel (London: Frank Cass, 1988),
p.262). This action, coupled with simulated message traffic from two ships which
remained off the Solomons following Halsey's departure for Pearl in radio silence,
left the Japanese Naval General Staff in Tokyo convinced as late as 30 May that the
Americans remained ignorant of their plan to attack Midway, and had at least two of
their three carriers deployed elsewhere (Ibid., p.263).
113. Hansell, Strategic Air War Against Japan, p.36.
114. Layton, Pineau, and Costello, p.387.
115. Ibid., p.388.
116. The basic empirical fact underlying the growing field of nonlinear dynamics is the
discovery of simple, deterministic systems whose few elements can, despite being
governed by wholly unambiguous rules, exhibit nonlinear or unpredictable behavior
(J. P. Crutchfield, J. D. Farmer, N. H. Packard, and R. S. Shaw, 'Chaos', Scientific
American, Dec. 1986, p.46). 'Although most modern physicists and gamblers would
concede that dynamical systems with large numbers of degrees of freedom, such as
the atmosphere or a roulette wheel, can exhibit random behavior for all practical
purposes, the real surprise is that deterministic systems with only one or two degrees
of freedom can be just as chaotic' (Roderick V. Jensen, 'Classical Chaos', American
Scientist, March-April 1987, p.168). Moreover, because nonlinear dynamic systems 'can
exhibit all the attributes of an idealized random process', gathering or processing more
information will not make the fundamental randomness exhibited by such systems go
away (Jensen, p.178; Crutchfield, Farmer, Packard, and Shaw, p.46).
117. D'Olier, Alexander, et al., Over-all Report (European War), p.8. By contrast, less
than two per cent of the total bomb tonnage was directed against German aircraft
production (Ibid.). In fact, less than 20 per cent of the bomb tonnage dropped by
US and British heavy bombers during the period 1941-45 was targeted against the
'prime industrial target systems, including aircraft production, ball bearings, petroleum
and rubber' (Perera, Leaves from My Book of Life, Vol.2, p.148).
118. Ian Stewart, Does God Play Dice? The Mathematics of Chaos (Cambridge, MA: Basil
Blackwell, 1989), pp.86-7.
119. Stewart, p.87. Andrew Marshall's inclination at this stage of the argument was to raise
biological examples of what Michael Polanyi has termed 'tacit knowledge'. By tacit
knowledge Polanyi means our ability to know or sense more than we can tell or specify
(Michael Polanyi, Knowing and Being, ed. Marjorie Grene (Chicago, IL: University
of Chicago Press, 1969), p.151). A classic example of such knowledge is the ability
to remember the face of a friend or relative (Ibid., p.211). Recognition of faces is
based on a pattern of particular features we are not generally able to specify fully or
in detail. Marshall's spontaneous example was the instinctive preference of men for
women whose eyes reflect, during conversation, their interest or approval by dilation.
What struck him most about this example was that men would respond unconsciously
to the dilation of a woman's eyes and could offer no intellectual explanation as to why
they preferred certain women over others.
120. In the 1960s, the American topologist Stephen Smale made major advances in the
qualitative theory of differential equations first pioneered by Henri Poincaré. The
basic insight of this qualitative theory is to think about dynamical systems 'in terms
of their geometry - the topology of the phase portrait - rather than the formulas used
to define them' (Stewart, p.107). For an explicit example of this kind of qualitative
analysis yielding information on dynamic systems which cannot be derived from their
detailed equations, see Stewart's proof of the theorem that every continuous map from
an interval to itself has a fixed point, meaning one that maps onto itself. This remarkable
theorem establishes the existence, given a suitable line segment (namely, a Poincaré
section), of a periodic motion independent of the detailed dynamics of the systems to
which it is applicable (Ibid., p.116). The tragedy of much contemporary defense analysis
is that analogous forms of qualitative analysis have not even been attempted due to the
prevailing obsession with quantification.
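The fixed-point result Stewart proves admits a short justification via the intermediate value theorem; the sketch below is the standard textbook argument, not a transcription of Stewart's own proof:

```latex
\begin{theorem}
If $f\colon [a,b]\to[a,b]$ is continuous, then there exists $c\in[a,b]$
with $f(c)=c$.
\end{theorem}
\begin{proof}[Sketch]
Let $g(x)=f(x)-x$. Because $f$ maps $[a,b]$ into itself,
$g(a)=f(a)-a\ge 0$ and $g(b)=f(b)-b\le 0$. By the intermediate value
theorem the continuous function $g$ vanishes at some $c\in[a,b]$,
whence $f(c)=c$.
\end{proof}
```

Note that the argument uses only continuity and the geometry of the interval, never the formula defining $f$ — precisely the qualitative style of analysis the note describes.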
121. Robert McQuie recently tried to shed some light on both Mearsheimer's claim that his
three-to-one rule is supported by actual combat data and Epstein's attack on this claim.
Based on force ratios derived from some 225 battles between 1937 and 1982, McQuie
concluded that Mearsheimer 'is on weak ground in arguing in favor of the 3:1 ratio', but
that Epstein 'is on equally weak ground in arguing against it' (Robert McQuie, 'The 3:1
Rule in Theory and in Fact', Phalanx, Dec. 1989, p.7).
122. John J. Mearsheimer, 'Numbers, Strategy, and the European Balance', International
Security, Spring 1988, pp.175, 177, 182. Regarding Mearsheimer's insistence that
'severe force-to-space ratio constraints' would prevent Soviet or WP forces from
achieving even temporary force-ratios greater than about 1.6-to-1 in their main attack
sectors, we would point out the following facts. First, in the final stage of the Great
Patriotic War (1944-45), Soviet forces consistently were able to achieve force-ratios in
main attack sectors of five-to-one or better in actual combat. Second, contemporary
Soviet operational art calls for comparable force-ratios advantages (see, for example,
Lieutenant General Yu Kardashevskiy, 'Plan the Fire Destruction of Targets by Fire
Creatively', Voyennyi Vestnik, No.7, July 1978, pp.64-7). Last, in Sept. 1986 Major
General Philip H. Mallory witnessed a WP exercise in East Germany (Druzhba '86)
in which a counter-attack concentrated two tank regiments, containing some 325-350
armored vehicles, against a three kilometer sector, roughly the defensive frontage
of a US rifle company (Scott D. Dean and Benjamin F. Schemmer, 'Warsaw Pact
Success Would Hinge on Blitzkrieg, US Army Observer Says', Armed Forces Journal
International, Nov. 1987, p.32). We would suggest, therefore, that Mearsheimer's view
that the Soviets could not achieve their longstanding force-ratio norms in main attack
sectors flies in the face of substantial evidence. Moreover, the US Army's OPFOR
(Opposing Force) at Fort Irwin has been regularly achieving Soviet force-ratio attack
norms at the tactical level since it was created in the 1970s (Ibid.). Eliot Cohen, of
course, has made similar points in more than one issue of International Security, but,
as of the spring of 1989, Mearsheimer continued to ignore them (Eliot A. Cohen,
International Security, Spring 1989, p.169).
123. Joshua M. Epstein, 'Dynamic Analysis and the Conventional Balance in Europe',
International Security, Spring 1988, pp.155, 157, 162-4.
124. Epstein, p.163.
125. John J. Mearsheimer, 'The 3:1 Rule and Its Critics', International Security, Spring 1989.
126. Joshua M. Epstein, 'The 3:1 Rule, the Adaptive Dynamic Model, and the Future of
Security Studies', International Security, Spring 1989, especially pp.121-3, 125-7.
126a. John J. Mearsheimer, 'A War the US Can Win - Decisively', Chicago Tribune, 15
January 1991, p.13.
126b. See Joel Achenbach, 'The Experts in Retreat: After-the-Fact Explanations for the
Gloomy Predictions', Washington Post, 28 February 1991, p.D12.
127. Epstein, 'The 3:1 Rule, the Adaptive Dynamic Model, and the Future of Security
Studies', p.127.
128. Laplace's world view was that if a vast intelligence could comprehend and analyze
all the forces by which nature is animated, that intelligence 'would embrace in the
same formula the movements of the greatest bodies of the universe and those of
the lightest atom; for it, nothing would be uncertain and the future, as the past,
would be present to its eyes' (Pierre Simon de Laplace, 'Concerning Probability',
The World of Mathematics: A Small Library of the Literature of Mathematics from
A'h-mose the Scribe to Albert Einstein (Redmond, WA: Tempus, 1988 reprint of 1956
original), Vol.2, p.1301).
129. Alan D. Beyerchen, 'Nonlinear Science and the Unfolding of a New Intellectual
Vision', Rethinking Patterns of Knowledge, ed. Richard Bjornson and Marilyn
Waldman, Papers in Comparative Studies (Columbus, OH: Ohio State University,
1989), No.6, p.30.
130. Joshua M. Epstein, The Calculus of Conventional War: Dynamic Analysis without
Lanchester Theory (Washington, DC: Brookings, 1985), pp.21-31.
131. Those unfamiliar with how a simple, deterministic system can, nevertheless, give rise to
unpredictability may wish to explore the logistic mapping for themselves. This example
of a nonlinear system is based on the equation x_{n+1} = kx_n(1 - x_n), where k is a
constant such that 0 < k < 4 and 0 < x < 1. The logistic mapping for a particular value
of the constant k is then generated by picking some starting value for the variable x
between 0 and 1, plugging it into the equation, and thereafter using each new value of x
as the input for its successor. The unpredictability has to do with the long-term behavior
of the sequence of numbers x_0, x_1, x_2, . . . With k = 2, for example, the successive values of x
converge to 0.5 in less than ten iterations using a Hewlett-Packard HP-41CV calculator
and initial values of x from 0.1 to 0.9. But with k = 3.58, the mapping generates a
random sequence of numbers which never repeats. Or at least the present authors were
unable to find any hint of convergence to a finite set of point attractors through the first
10,000 iterations or so. The ambitious reader can readily verify these claims with a hand
calculator (preferably programmable). For a more thorough discussion of first-order
difference equations, of which the logistic mapping is an example, see Robert M. May,
'Simple Mathematical Models with Very Complicated Dynamics', Nature, Vol.261, 10
June 1976, pp.459-67.
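The note's calculator experiment is easily repeated on any machine. The sketch below (an illustration, with parameter values k = 2 and k = 3.58 taken from the note; the function name is ours) shows both the rapid convergence to 0.5 and the sensitive dependence on initial conditions in the chaotic regime:

```python
def logistic_orbit(k: float, x0: float, n: int) -> list[float]:
    """Iterate the logistic mapping x_{n+1} = k * x_n * (1 - x_n),
    returning the orbit x_0, x_1, ..., x_n."""
    xs = [x0]
    for _ in range(n):
        xs.append(k * xs[-1] * (1.0 - xs[-1]))
    return xs

# With k = 2 the orbit settles onto the fixed point 0.5 within a few iterations.
print(logistic_orbit(2.0, 0.3, 10)[-1])

# With k = 3.58 two orbits starting a billionth apart soon diverge completely --
# the practical unpredictability the note describes.
a = logistic_orbit(3.58, 0.4, 100)
b = logistic_orbit(3.58, 0.4 + 1e-9, 100)
print(max(abs(x - y) for x, y in zip(a, b)))
```

Because the divergence of nearby orbits grows roughly exponentially, no finite precision in the starting value suffices to predict the k = 3.58 orbit far ahead.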
132. From the standpoint of Laplace's perceived perfection of Newtonian mechanics,
the most telling evidence against his presumption that linear predictability prevails
universally comes from contemporary astronomy. Contrary to Laplace's apparent
success in demonstrating the complete absence of chaos or long-term unpredictability
in the solar system, it is now thought, for example, that Hyperion, one of Saturn's
smaller moons, tumbles chaotically as it orbits that ringed planet (Anita M. Killian,
'Playing Dice with the Solar System', Sky and Telescope, Aug. 1989, p.138). It also
appears that the orbit of Pluto becomes unpredictable on a time scale of about 20
million years (Gerald Jay Sussman and Jack Wisdom, 'Numerical Evidence That the
Motion of Pluto is Chaotic', Science, 22 July 1988, p.437).
Motion of Pluto is Chaotic', Science, 22 July 1988, p.437).
133. Paul Kennedy, The Rise and Fall of the Great Powers: Economic and Military Conflict
from 1500 to 2000 (New York, NY: Random House, 1987), p.534.
134. Samuel P. Huntington, 'The US - Decline or Renewal?' Foreign Affairs, Winter 1988/89.
135. Huntington, p.90.
136. Huntington, p.91.