The basic building block of all of the methods that we will discuss is a routine like random() in the C/C++ programming language which, with each call, returns an integer in the range [0, M] where M is a very large positive
integer. The best of these routines generate a deterministic periodic sequence
of integers that, for most practical purposes, is indistinguishable from a ran-
dom independent sequence of integers chosen uniformly (each one as likely
as any other). Random number generators are interesting in their own right
but we will not discuss them here.
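Although we will not pursue generators here, a minimal sketch of such a deterministic periodic routine is the classic Park–Miller linear congruential generator (the constants below are the standard "MINSTD" choices; this is an illustration, not the implementation behind any particular random()):

```python
# Park-Miller "MINSTD" linear congruential generator: a deterministic,
# periodic sequence of integers that behaves roughly uniformly on [1, M-1].
M = 2**31 - 1   # modulus (a Mersenne prime)
A = 16807       # multiplier

def lcg(seed=1):
    """Yield a deterministic periodic sequence of integers in [1, M-1]."""
    x = seed
    while True:
        x = (A * x) % M
        yield x
```

Starting from seed 1, the sequence begins 16807, 282475249, ... and eventually cycles with period M - 1.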
Assuming that your random number generator (we’ll assume it’s random()) is
actually producing an independent sequence of uniformly chosen integers, one
can easily construct a very good approximation of a sequence of independent
U (0, 1) random variables by the transformation
\[
U = \frac{\texttt{random()}}{M}.
\]
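In code the transformation is direct. The sketch below stands in for random() with Python's randrange (an illustrative substitute, with $M = 2^{31}-1$ an assumed choice of bound):

```python
import random

M = 2**31 - 1  # assumed stand-in for the generator's upper bound

def uniform01():
    """Approximate a U(0,1) variable from an integer draw in [0, M]."""
    return random.randrange(0, M + 1) / M
```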
CHAPTER 2. A FEW EXACT SAMPLING TECHNIQUES
2.1 Inversion
Suppose that the random variable $X$ that we wish to sample has cumulative distribution function

\[
F(x) = P[X \le x],
\]

i.e. $\pi = F'$. If $U \sim U(0,1)$, then $X = F^{-1}(U)$ has distribution function $F$, since $P[F^{-1}(U) \le x] = P[U \le F(x)] = F(x)$.
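As a concrete instance of inversion (a sketch; the exponential target and its rate $\lambda$ are illustrative choices), for an Exponential($\lambda$) variable $F(x) = 1 - e^{-\lambda x}$, so $F^{-1}(u) = -\ln(1-u)/\lambda$:

```python
import math
import random

def sample_exponential(lam):
    """Inversion: X = F^{-1}(U) with F(x) = 1 - exp(-lam * x)."""
    u = random.random()               # U ~ U(0,1)
    return -math.log(1.0 - u) / lam   # F^{-1}(u); using 1-u avoids log(0)
```

Repeated calls produce independent Exponential($\lambda$) samples with mean $1/\lambda$.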
For example, the joint density of two independent standard Gaussian random variables is

\[
\pi(x) = \frac{1}{2\pi}\, e^{-\frac{x_0^2 + x_1^2}{2}}
\]

(the $\pi$ on the right-hand side of the last display is the area of the unit disk and not the density $\pi(x)$).
Exercise 17. Write a routine that generates two independent Gaussian random variables and produce a (2-dimensional) histogram to verify your code.
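One possible approach to the first step of this exercise is the Box–Muller change of variables (a sketch; the histogram step is left out and could be done with, say, numpy.histogram2d):

```python
import math
import random

def gaussian_pair():
    """Box-Muller: turn two U(0,1) draws into two independent N(0,1) draws."""
    u1 = random.random()
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(1.0 - u1))  # radius; 1-u1 avoids log(0)
    theta = 2.0 * math.pi * u2                # uniform angle
    return r * math.cos(theta), r * math.sin(theta)
```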
Exercise 18. Use a change of variables similar to the one you used for the Gaussian to generate a uniformly distributed sample on the unit disk given two independent samples from U(0,1), and produce a (2-dimensional) histogram to verify your code.
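A sketch of one such change of variables for the disk (again omitting the histogram step): the square root on the radius compensates for the $r\,dr$ area element in polar coordinates, so that points are uniform in area rather than clustered near the origin.

```python
import math
import random

def uniform_disk():
    """Map two U(0,1) draws to a point uniformly distributed on the unit disk."""
    u1 = random.random()
    u2 = random.random()
    r = math.sqrt(u1)          # sqrt makes the area element uniform
    theta = 2.0 * math.pi * u2  # uniform angle
    return r * math.cos(theta), r * math.sin(theta)
```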
2.3 Rejection
For the algorithm described in this subsection we again assume that our goal is to draw samples from the density $\pi$. Assume that we can draw samples from $\tilde{\pi}$ instead and that, for some constant $K \ge 1$,

\[
\pi \le K \tilde{\pi}. \tag{2.1}
\]
The scheme draws independent pairs $(Y^{(k)}, U^{(k)})$, $k = 1, 2, \dots$, with $Y^{(k)} \sim \tilde{\pi}$ and $U^{(k)} \sim U(0,1)$, and returns $Y^{(\tau)}$, where $\tau$ is the first index at which

\[
U^{(\tau)} \le \frac{\pi\bigl(Y^{(\tau)}\bigr)}{K \tilde{\pi}\bigl(Y^{(\tau)}\bigr)}.
\]
The first question one must ask is whether this scheme returns a sample in finite time, i.e. is $\tau < \infty$? We will return to this question in a moment. For now, we assume that $P[\tau < \infty] = 1$. Having made that assumption, we break up the event $\{Y^{(\tau)} \in dx\}$ according to the value of $\tau$.
Using the fact that the different samples are independent, we can factor each term to obtain
\[
P\bigl[Y^{(\tau)} \in dx\bigr]
= \sum_{k=1}^{\infty} P\Bigl[Y^{(k)} \in dx,\ U^{(k)} \le \frac{\pi(Y^{(k)})}{K\tilde{\pi}(Y^{(k)})}\Bigr]\,
P\Bigl[U^{(1)} > \frac{\pi(Y^{(1)})}{K\tilde{\pi}(Y^{(1)})}\Bigr]^{k-1}.
\]
Finally, appealing to the relation
\[
P\Bigl[Y^{(k)} \in dx,\ U^{(k)} \le \frac{\pi(Y^{(k)})}{K\tilde{\pi}(Y^{(k)})}\Bigr]
= P\Bigl[U^{(k)} \le \frac{\pi(Y^{(k)})}{K\tilde{\pi}(Y^{(k)})} \,\Bigm|\, Y^{(k)} \in dx\Bigr]\, P\bigl[Y^{(k)} \in dx\bigr]
= \frac{\pi(x)}{K\tilde{\pi}(x)}\,\tilde{\pi}(x)\,dx
= \frac{\pi(x)}{K}\,dx,
\]

and observing that each trial is accepted with probability $\int \frac{\pi(y)}{K\tilde{\pi}(y)}\,\tilde{\pi}(y)\,dy = 1/K$, we obtain

\[
P\bigl[Y^{(\tau)} \in dx\bigr] = \sum_{k=1}^{\infty} \Bigl(1 - \frac{1}{K}\Bigr)^{k-1} \frac{\pi(x)}{K}\,dx = \pi(x)\,dx,
\]

so the accepted sample is indeed distributed according to $\pi$.
Note that to use this algorithm you need to relate some distribution that you
can easily sample (say a Gaussian) to the distribution that you’d actually
like to sample, by a bound of the form (2.1). In all but some simple cases this
is not possible. Nonetheless, rejection sampling can be a useful component
of more complicated sampling schemes.
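As an illustration, here is a sketch with an invented target: $\pi(x) = 6x(1-x)$ on $[0,1]$ (the Beta(2,2) density), proposal $\tilde{\pi} = U(0,1)$, for which $K = 3/2$ gives the bound (2.1):

```python
import random

def target(x):
    """Beta(2,2) density on [0, 1]; its maximum is 1.5 at x = 0.5."""
    return 6.0 * x * (1.0 - x)

K = 1.5  # target <= K * proposal, since the uniform proposal density is 1

def rejection_sample():
    """Draw Y ~ proposal, accept when U <= target(Y) / (K * proposal(Y))."""
    while True:                    # tau is the index of the first acceptance
        y = random.random()        # Y ~ pi_tilde = U(0,1)
        u = random.random()        # U ~ U(0,1), independent of Y
        if u <= target(y) / K:     # proposal density is 1 on [0, 1]
            return y
```

Consistent with the calculation above, on average $K = 1.5$ proposals are needed per accepted sample.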
Again decomposing the event $\{\tau < \infty\}$ by the possible values taken by $\tau$ and using the independence of the variables (each trial is accepted with probability $1/K$), we find that

\[
P(\tau < \infty) = \sum_{k=1}^{\infty} P(\tau = k)
= \frac{1}{K} \sum_{k=1}^{\infty} \Bigl(1 - \frac{1}{K}\Bigr)^{k-1}
= 1,
\]
so the algorithm will at least exit properly with probability one (in fact we
already assumed this with our previous calculation).
2.4 Bibliography