Prepared Under Contract With
The U.S. Atomic Energy Commission Contract No. AT(11-1)-135
Project Agreement No. 1
and the U.S. Air Force Project RAND
RESEARCH MEMORANDUM
APPLICATIONS OF MONTE CARLO
Herman Kahn
RM-1237-AEC
19 April 1954
Revised 27 April 1956
This is a working paper. Because it may be expanded, modified, or withdrawn at
any time, permission to quote or reproduce must be obtained from RAND. The
views, conclusions, and recommendations expressed herein do not necessarily
reflect the official views or policies of the United States Air Force.
DISTRIBUTION RESTRICTIONS
Copyright, 1956
RM-1237-AEC
4-27-56
FOREWORD
This document discusses the general principle of doing Monte
Carlo calculations with particular emphasis on reducing the amount
of work involved. It does not discuss, but for a few exceptions,
relationships between probabilistic problems and deterministic ones,
and how either can be chosen to model the other. More importantly,
it does not include any important specific applications. Both of
these other subjects are widely discussed in Monte Carlo literature
by many people. At a later date the author hopes to put out a
book on the subject which will supersede this report and include
applications.
The work that preceded this report has been supported by the
U.S. Air Force and several laboratories of the A.E.C. In addition,
I would like to express my appreciation to the Reactor Division of
the A.E.C. for their sympathetic and long-range support of basic
studies in the Monte Carlo method.
A short description of the Monte Carlo method can be given
as follows. The expected score of a player in any reasonable
game of chance, however complicated, can in principle be estimated
by averaging the results of a large number of plays of the game.
Such estimation can be rendered more efficient by various devices
which replace the original game with another known to have the
same expected score. The new game may lead to a more efficient
estimate by being less erratic, that is, having a score of lower
variance, or by being cheaper to play with the equipment on hand.
There are obviously many problems about probability that can be
viewed as problems of calculating the expected score of a game.
Still more, there are problems that do not concern probability but
are none the less equivalent for some purposes to the calculation
of an expected score. The Monte Carlo method refers simply to the
exploitation of these remarks.
The method has been extensively used by statisticians and
others under the name of Model Sampling. Many of the variance
reducing techniques discussed in this report have been developed by
statisticians for use in Survey Sampling.
John von Neumann and Stanley Ulam seem to be mainly responsible,
both as practitioners and propagandists, for the present
widespread use in physics and engineering. They also seem to have
been the first to have advocated the idea of systematically inverting
the usual situation and treating determinate mathematical problems
by first finding a probabilistic analogue and then solving this
analogue by some experimental sampling procedures. In this report,
though, most of the applications are to problems which have been
derived from probabilistic situations. The name of Monte Carlo is
used rather than Model Sampling partly because we wish to differentiate
the relatively sophisticated sampling techniques used in the
former from the straightforward approach that seems to be customary
in the usual applications of the latter, and partly because the
more picturesque name of Monte Carlo has just about replaced its
predecessor in physical applications.
In writing a report of this nature it is difficult to apportion
credits and acknowledgments in a reasonable manner. The author has
spent about half of his time between 1948 and 1952 on applications of
the method. Some of the applications with which he has been concerned
have been fairly large problems involving the collaboration of
several organizations and many individuals. Because major emphasis
has always been on physics or engineering, and not statistics, and
also because most of the problems are classified, it is difficult to
pinpoint many individual contributions. Therefore, except for
Part I (inspired by John von Neumann) and for specific statistical
suggestions, there will be almost no specific acknowledgments made.
Instead, a simple listing of the individuals who have contributed
to the problems upon which we learned how to do Monte Carlo will be
given.
The following either originated problems or collaborated on
their design: Hans Bethe, Jim Coon, Robert Day, Walter Goad,
Herbert Goldstein, Frederic de Hoffmann, Frank Hoyt, Richard Latter,
Louis Nelson, Lothar Nordheim, Milton Plesset, Fred Reines, Paul
Stein, Edward Teller, Robert Thomas and Carl Wahlske.
I am indebted to the following for helpful discussions:
George Brown, Herman Feshbach, Francis Friedman, Gerald Goertzel,
Mario Juncosa, John von Neumann, Melvin Peisakoff, Leonard J. Savage,
John W. Tukey and Theodore Welton.
Most of the actual work of programming, coding and computing
was done by Barbara Batchelder, Barbara Cohen, Ruth Ann Engvall,
Lois Foster, Esther Gersten, Irwin Greenwald, Jean Hall, Clyde Hauff,
Herbert Hilton, Robert Johnson, Winifred Jonas, David Langfield,
Don Madden, Wes Melahn, Cynthia Mercer, Leona Otfinoski, Josephine
Powers, Frieda Rosenberg, Cliff Shaw, and Charles Swift. Without
their high morale, professional skill, and enthusiasm, it would have
been impossible to have met many of our deadlines on the always
capricious and sometimes malignant computing equipment available
from 1948 to 1952.
Finally, an inadequate thanks to Theodore Harris and Andrew
Marshall, with whom the author has collaborated extensively and on
whom he has always been able to lean for a learned opinion on
statistics and probability. Some of the ideas in this report have
previously appeared in joint papers by them and the author.
I would also like to thank Leonard J. Savage for reading an
earlier version of this report and making prolific comments. This
version doesn't show the full effect of his comments as I am saving
many of them for a future book.

INTRODUCTION
The Monte Carlo Method is concerned with the application of
random sampling to problems of applied mathematics. While subtle
or difficult questions may arise in applications, most problems
can be treated without using much statistical theory. Nevertheless
statistical theory can be very helpful. This report presents an
elementary exposition of some of the ideas and techniques that have
proved useful in problems with which the author has been concerned.
In this case, the word elementary implies that the author has tried
to make the presentation intelligible to a mathematician, physicist,
or engineer with only a slight formal background in probability
theory. There will be a strong flavor of the "cookbook" about many
selections. The author can only suggest judicious skipping.
It will be assumed that the reader has an intuitive notion of
the idea of probability (even though philosophers may argue). That
is, that he knows what is meant by the statement "The probability
that a 'fair' coin lands heads up when tossed is 1/2," and that he
knows and has had some basic experience with the simplest rules of
the calculus of probabilities.¹ In any case most of the statistical
ideas that are used will be presented or reviewed in the first two
chapters.
¹ These rules are of the following types. The probability that
one or the other of two mutually exclusive events occurs is the sum
of the separate probabilities. The probability that two independent
events occur is the product of their separate probabilities, etc.

TECHNIQUES OF MONTE CARLO

TABLE OF CONTENTS

Chapters
    I.   Techniques with Random Variables
    II.  Evaluating Integrals
    III. Integral or Matrix Equations

Appendices
    I.   Generation of Pseudo-Random Numbers
    II.  Constrained Maximum of a Function
    III. The Variance Associated with Double Systematic Sampling
PART I
TECHNIQUES WITH RANDOM VARIABLES

TABLE OF CONTENTS

1. Random Variables
2. Transformation of Random Variables and their Realization
3. The Rejection Technique
4. Variations of the Basic Rejection Technique
5. Manipulations with Distributions
6. Examples
Table of Examples
Representations of Examples Considered
References
I. TECHNIQUES WITH RANDOM VARIABLES

1. Random Variables
In the following a random variable (generally denoted by a
capital letter) will mean a numerical quantity (or quantities)
associated with a game of chance in such a way that as the various
events or possible outcomes of the game occur, the random variable
takes on definite values. Thus one could associate a random variable
C with the coin tossing process by saying that when a head comes
up, C = 0, and when a tail comes up, C = 1. C then has a probability
of 1/2 of being zero and 1/2 of being 1. All other values have zero
probability.
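In modern terms the random variable C can be realized with a pseudo-random number generator. A minimal Python sketch (the generator and the sample count are illustrative additions, not part of the report):

```python
import random

def sample_C():
    """Realize the coin-tossing random variable C:
    C = 0 when a head comes up, C = 1 when a tail comes up."""
    return 0 if random.random() < 0.5 else 1

# Averaging many plays of the game estimates the expected
# score; here E[C] = 1/2.
samples = [sample_C() for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 0.5
```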
Associated with any random variable X is a cumulative distri-
bution function (c.d.f.) which will be called "F(x)". F(x) is defined
as the probability that the random variable X will assume values less
than or equal to x. If F(x) is the integral, at least in some regions,
of a function f(x), the random variable is said to have a probability
density there and f(x) is called the probability density function
(p.d.f.). If F(x) makes a finite jump at some point x_0, there is a
nonzero probability of x_0 occurring. Thus in the coin tossing problem
mentioned above

    F(c) = 0,      c < 0
    F(c) = 1/2,    0 ≤ c < 1
    F(c) = 1,      c ≥ 1.

... this is just

    (1/bM) ∫[a, a+b] f(x)dx.

Since ∫ f(x)dx = 1 the above expression is just 1/bM. The probability
of accepting some value the first time is called the efficiency of the
technique, because of its obvious economic implication for applications,
and is denoted by E. 1 - E is the probability that the first value
picked will be rejected. The probability that the process will fail
n-1 times and then succeed on the nth trial is (1-E)^(n-1) E. The expected
number of trials, n̄, is then

    n̄ = 1/E.    (5)
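As a numerical check on the expected number of trials, the following sketch (the acceptance probability E = 0.25 is an arbitrary illustration, not a value from the report) simulates repeated trials and compares the average count with 1/E:

```python
import random

def trials_until_success(E):
    """Count the trials of a process that succeeds with probability E
    on each independent attempt."""
    n = 1
    while random.random() >= E:
        n += 1
    return n

E = 0.25
counts = [trials_until_success(E) for _ in range(200_000)]
print(sum(counts) / len(counts))  # close to 1/E = 4
```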
The principle of the rejection technique can be illustrated by
the following diagram.

[Figure 1: the p.d.f. f(x), with unit area shaded beneath it,
enclosed in a rectangle of base b and height M over the
interval (a, a+b).]
In Figure 1 a rectangle of area bM encloses the p.d.f. f(x).
The shaded portion under f(x) has unit area. If a number of points
are selected in the rectangle at random from a uniform distribution,
but only those points saved that fall within the shaded portion,
then the probability that any of these saved values lies between
x and x + dx will be f(x)dx/bM. The fraction of points saved will
be given by (shaded area)/(total area) or 1/bM.
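The rectangle construction of Figure 1 translates directly into code. A minimal Python sketch, using the illustrative p.d.f. f(x) = 2x on (0, 1), so that a = 0, b = 1, and M = 2 (the choice of f and the Python realization are mine, not the report's):

```python
import random

def rejection_sample(f, a, b, M):
    """Pick points uniformly in the rectangle of base b and height M
    enclosing the p.d.f. f, and keep only those that fall under the
    curve; the fraction of points kept is 1/(b*M)."""
    while True:
        x = a + b * random.random()   # uniform over (a, a+b)
        y = M * random.random()       # uniform over (0, M)
        if y < f(x):
            return x

f = lambda x: 2.0 * x  # an illustrative p.d.f. on (0, 1), maximum M = 2
xs = [rejection_sample(f, 0.0, 1.0, 2.0) for _ in range(100_000)]
print(sum(xs) / len(xs))  # close to E[X] = 2/3
```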
The rejection technique may be generalized as follows. Let
n(x) and m(y) be p.d.f.'s and let T(x) be an arbitrary function. Then
1. Select an x out of the p.d.f. n(x)
2. Select independently a y out of the p.d.f. m(y) [c.d.f. M(y)]
3. If y ≤ T(x) accept x. Otherwise repeat steps 1 and 2.
It is often computationally convenient to write the inequality
y ≤ T(x) in the form s(y) ≤ t(x) where

    T(x) = s⁻¹[t(x)].
The a priori probability of getting an x in the region
(x, x + dx) is, of course, n(x)dx. The probability of accepting
x is [probability that y ≤ T(x)], i.e., M[T(x)]. Therefore, the
probability of selecting an x in the region dx and accepting it is

    M[T(x)] n(x)dx.

The probability of getting any x at all on the first trial is

    E = ∫ M[T(x)] n(x)dx.    (6)
By choosing m, n, and T appropriately it is usually possible
to design a numerically convenient and efficient process for
selecting an x from the p.d.f. f(x) = M[T(x)] n(x)/E.
If, in a special case, y is the same as R, M is then the
distribution of R. If also T(x) is bounded such that T(x) ≤ 1, we
can say

    M[T(x)] = T(x).
The technique now becomes:
1. Select an x out of the p.d.f. n(x)
2. Select an R
3. If R ≤ f(x)/(K n(x)), where K is larger than or equal to the
   maximum value of f(x)/n(x), accept x. Otherwise repeat steps
   1 and 2.
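A minimal sketch of these three steps in Python, with illustrative choices not taken from the report: the target f(x) = 3x² on (0, 1) and a uniform n(x), so that K = 3 bounds f(x)/n(x) and the efficiency is 1/K = 1/3:

```python
import random

def ratio_rejection(f, sample_n, n, K):
    """Select x from n(x) and accept it when R <= f(x) / (K * n(x)),
    where K bounds the maximum of f(x)/n(x); the efficiency is 1/K."""
    while True:
        x = sample_n()               # step 1: x from the p.d.f. n(x)
        R = random.random()          # step 2: a uniform random number R
        if R <= f(x) / (K * n(x)):   # step 3: accept, or repeat 1 and 2
            return x

f = lambda x: 3.0 * x * x            # illustrative target p.d.f. on (0, 1)
n = lambda x: 1.0                    # uniform "similar" p.d.f. on (0, 1)
xs = [ratio_rejection(f, random.random, n, 3.0) for _ in range(100_000)]
print(sum(xs) / len(xs))  # close to E[X] = 3/4
```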
The efficiency of the technique is now 1/K. Hence E can be equal
to, but not larger than, the minimum value of n(x)/f(x). If it happens
that only a lower bound for this minimum value is known, then the
efficiency will be less than it would have been.
Since the areas under the curves f(x) and n(x) are the same,
the requirement that the efficiency be high (i.e., close to 1) imposes
a serious restriction on n(x). One way to meet it is to choose n(x)
"similar" to f(x). It must also be simple to select from, or there
would be no point in using a rejection technique. The choice of
n(x) is a compromise between these two criteria.
4. Variations of the Basic Rejection Technique
In certain cases, realization of the variations mentioned below
may give rise to considerable savings in computing time.
1. Select x out of n(x), y out of m(y), and y_2 out of m_2(y),
and accept x if either y ≤ ...
... into the form

    f(x) = Σ_i A_i T_i(x) n_i(x).

The A_i here are, it is clear, the probability of getting i
multiplied by the maximum value of T_i(x). The A_i must be large enough
to ensure that

    T_i(x) ≤ 1.

As before the efficiency is 1/Σ A_i and
an efficient process is one in which the T_i(x) vary but little,
in a sense. When the T_i(x) are constants then Σ A_i = 1 and
the process is 100% efficient; it then just reduces to a convenient
way to sample from a p.d.f.
If the i's with relatively uniform T_i(x) have large A_i's while the
ones with large variations have small A_i's, the process will still be
efficient.
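One standard realization of this composition scheme (the step-by-step procedure is not spelled out in the surviving text, so it is an assumption here) picks the index i with probability A_i / Σ A_j, selects x from n_i(x), and accepts with probability T_i(x). The decomposition of f(x) = 0.5 + 1.5x² on (0, 1) below is an illustrative choice of mine, not an example from the report:

```python
import random

# Illustrative decomposition f = sum_i A_i * T_i(x) * n_i(x):
#   i=1: A_1 = 0.5, T_1(x) = 1,    n_1 = uniform on (0, 1)
#   i=2: A_2 = 1.5, T_2(x) = x**2, n_2 = uniform on (0, 1)
# so f(x) = 0.5 + 1.5*x**2; the efficiency is 1 / sum(A_i) = 1/2.
A = [0.5, 1.5]
T = [lambda x: 1.0, lambda x: x * x]
total = sum(A)

def composition_rejection():
    while True:
        r = random.random() * total
        i = 0 if r < A[0] else 1     # pick i with probability A_i / sum(A_j)
        x = random.random()          # select x from n_i (both uniform here)
        if random.random() <= T[i](x):
            return x                 # accept x with probability T_i(x)

xs = [composition_rejection() for _ in range(100_000)]
print(sum(xs) / len(xs))  # close to E[X] = 0.625
```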
Sometimes, as a special case of the above, it is desirable to take
the n_i(x) to be the same function, i.e., to break up f(x) into the form

    f(x) = Σ_i A_i T_i(x) n(x),    i.e.,  T(x) = Σ_i A_i T_i(x).

This is advantageous when it is difficult to find the maximum value
of T(x), but relatively easy to find the maximum value of the individual
terms. However, breaking up T(x) into separate terms always decreases
the efficiency of the technique.
A special case of this last situation occurs very frequently
when the p.d.f. f(x) is fitted by sections. For example if

    ∫[x_i, x_{i+1}] f(x)dx

is the probability that the event x