
Chapter 7

Random-Variate Generation

Prof. Dr. Mesut Güneş


Contents
•  Inverse-transform Technique
•  Acceptance-Rejection Technique
•  Special Properties



Purpose & Overview
•  Develop understanding of generating samples from a
specified distribution as input to a simulation model.

•  Illustrate some widely-used techniques for generating random variates:
• Inverse-transform technique
• Acceptance-rejection technique
• Special properties



Preparation
•  It is assumed that a source of uniform [0,1] random
numbers exists.
• Linear Congruential Method (LCM)

•  Random numbers R, R1, R2, … with
• PDF
fR(x) = 1    for 0 ≤ x ≤ 1
fR(x) = 0    otherwise
• CDF
FR(x) = 0    for x < 0
FR(x) = x    for 0 ≤ x ≤ 1
FR(x) = 1    for x > 1



Inverse-transform Technique



Inverse-transform Technique
•  The concept:
• For the CDF: r = F(x)
• Generate r from the uniform (0,1) distribution, a.k.a. U(0,1)
• Find x = F⁻¹(r)

[Figure: two CDF plots illustrating how random numbers r1, r2 on the vertical axis are mapped through r = F(x) to variates x1, x2 on the horizontal axis]



Inverse-transform Technique
•  The inverse-transform technique can be used in principle
for any distribution.
•  Most useful when the CDF F(x) has an inverse F⁻¹(x) which is easy to compute.

•  Required steps
1.  Compute the CDF of the desired random variable X
2.  Set F(X) = R on the range of X
3.  Solve the equation F(X) = R for X in terms of R
4.  Generate uniform random numbers R1, R2, R3, ... and compute the desired random variate by Xi = F⁻¹(Ri)
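
A minimal sketch of step 4 (Python is assumed here; F_inv is a placeholder for whatever inverse CDF step 3 produced):

```python
import random

def inverse_transform(F_inv, n):
    # Step 4: generate uniforms R1, R2, ... and push each through F^-1
    return [F_inv(random.random()) for _ in range(n)]
```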



Inverse-transform Technique: Example
•  Exponential Distribution
• PDF: f(x) = λ · e^(−λx)
• CDF: F(x) = 1 − e^(−λx)

•  To generate X1, X2, X3, …: set F(X) = R and solve for X
1 − e^(−λX) = R
e^(−λX) = 1 − R
−λX = ln(1 − R)
X = −ln(1 − R) / λ = F⁻¹(R)

•  Simplification: since R and (1 − R) are both uniformly distributed on [0,1],
X = −ln(R) / λ
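
A sketch of the resulting generator (Python assumed; using 1 − R instead of R is equally valid and avoids log(0)):

```python
import math
import random

def exponential_variate(lam):
    # X = -ln(1 - R)/lambda, with R ~ U(0,1)
    return -math.log(1.0 - random.random()) / lam

samples = [exponential_variate(1.0) for _ in range(500)]
```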



Inverse-transform Technique: Example

[Figure: Inverse-transform technique for exp(λ = 1)]


Inverse-transform Technique: Example
•  Example:
•  Generate 200 or 500 variates Xi with distribution exp(λ = 1)
•  Generate 200 or 500 Rs from U(0,1); the histogram of the resulting Xs then approximates the exponential PDF:

[Figure: histograms of the generated variates (empirical relative frequencies) compared with the theoretical exp(λ = 1) PDF]



Inverse-transform Technique
•  Check: Does the random variable X1 have the desired distribution?
P(X1 ≤ x0) = P(R1 ≤ F(x0)) = F(x0)
The first equality holds because F is nondecreasing, so X1 ≤ x0 exactly when R1 ≤ F(x0); the second holds because R1 is uniform on [0,1].



Inverse-transform Technique:
Other Distributions
•  Examples of other distributions for which the inverse CDF approach works are:
• Uniform distribution
• Weibull distribution
• Triangular distribution



Inverse-transform Technique:
Uniform Distribution
•  Random variable X uniformly distributed over [a, b]

F(X) = R
(X − a)/(b − a) = R
X − a = R(b − a)
X = a + R(b − a)
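
A one-line sketch of this generator (Python assumed):

```python
import random

def uniform_variate(a, b):
    # X = a + R*(b - a), with R ~ U(0,1)
    return a + random.random() * (b - a)
```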



Inverse-transform Technique:
Weibull Distribution
•  The Weibull Distribution is described by
• PDF: f(x) = (β / α^β) · x^(β−1) · e^(−(x/α)^β)
• CDF: F(x) = 1 − e^(−(x/α)^β)

•  The variate: set F(X) = R and solve for X
1 − e^(−(X/α)^β) = R
e^(−(X/α)^β) = 1 − R
−(X/α)^β = ln(1 − R)
(X/α)^β = −ln(1 − R)
X^β = −α^β · ln(1 − R)
X = α · (−ln(1 − R))^(1/β)
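
A sketch of the resulting generator (Python assumed; parameter names mirror the derivation):

```python
import math
import random

def weibull_variate(alpha, beta):
    # X = alpha * (-ln(1 - R))^(1/beta); 1 - R is also U(0,1) and avoids log(0)
    r = random.random()
    return alpha * (-math.log(1.0 - r)) ** (1.0 / beta)
```
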
Inverse-transform Technique:
Triangular Distribution
•  The CDF of a Triangular Distribution with endpoints (0, 2) is given by
F(x) = 0                  for x ≤ 0
F(x) = x²/2               for 0 < x ≤ 1
F(x) = 1 − (2 − x)²/2     for 1 < x ≤ 2
F(x) = 1                  for x > 2
•  Setting F(X) = R on the range of X gives
R = X²/2                  for 0 ≤ X ≤ 1
R = 1 − (2 − X)²/2        for 1 ≤ X ≤ 2
•  X is generated by
X = √(2R)                 for 0 ≤ R ≤ 1/2
X = 2 − √(2(1 − R))       for 1/2 < R ≤ 1



Inverse-transform Technique:
Empirical Continuous Distributions
•  Used when no theoretical distribution is appropriate for the data
•  Two ways to use the collected empirical data:
• Resample the observed data points themselves
• Interpolate between observed data points to fill in the gaps



Inverse-transform Technique:
Empirical Continuous Distributions
•  For a small sample set (size n):
•  Arrange the data from smallest to largest:
x(1) ≤ x(2) ≤ … ≤ x(n)
•  Set x(0) = 0
•  Assign the probability 1/n to each interval x(i−1) ≤ x ≤ x(i), i = 1, 2, …, n
•  The slope of each line segment is defined as
ai = (x(i) − x(i−1)) / (i/n − (i−1)/n) = (x(i) − x(i−1)) / (1/n)
•  The inverse CDF is given by
X = F̂⁻¹(R) = x(i−1) + ai · (R − (i−1)/n)    when (i−1)/n < R ≤ i/n



Inverse-transform Technique:
Empirical Continuous Distributions

i   Interval           PDF   CDF   Slope ai
1   0.0  < x ≤ 0.8     0.2   0.2   4.00
2   0.8  < x ≤ 1.24    0.2   0.4   2.20
3   1.24 < x ≤ 1.45    0.2   0.6   1.05
4   1.45 < x ≤ 1.83    0.2   0.8   1.90
5   1.83 < x ≤ 2.76    0.2   1.0   4.65

Consider R1 = 0.71:
X1 = x(4−1) + a4 · (R1 − (4 − 1)/n)
   = 1.45 + 1.90 · (0.71 − 0.6)
   = 1.66
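
A sketch of this table lookup (Python assumed; the data values are the five sorted observations behind the table above):

```python
import random

def empirical_variate(data, r=None):
    """Piecewise-linear empirical inverse CDF for a small sorted sample, with x(0) = 0."""
    xs = [0.0] + sorted(data)
    n = len(data)
    if r is None:
        r = random.random()
    # Find the interval i with (i-1)/n < r <= i/n
    for i in range(1, n + 1):
        if r <= i / n:
            slope = (xs[i] - xs[i - 1]) * n        # = (x(i) - x(i-1)) / (1/n)
            return xs[i - 1] + slope * (r - (i - 1) / n)
    return xs[n]                                   # defensive: r outside (0, 1]

# Reproduces the worked example: R1 = 0.71 lies in the 4th interval -> X1 = 1.66
print(empirical_variate([0.8, 1.24, 1.45, 1.83, 2.76], r=0.71))
```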



Inverse-transform Technique:
Empirical Continuous Distributions
•  What happens for large samples of data?
• Several hundred or tens of thousands of observations
•  First, summarize the data into a frequency distribution with a smaller number of intervals
•  Afterwards, fit a continuous empirical CDF to the frequency distribution
•  Slight modifications
• Slope
ai = (x(i) − x(i−1)) / (ci − ci−1),    where ci is the cumulative probability of the first i intervals
• The inverse CDF is given by
X = F̂⁻¹(R) = x(i−1) + ai · (R − ci−1)    when ci−1 < R ≤ ci



Inverse-transform Technique:
Empirical Continuous Distributions
•  Example: Suppose the data collected for 100 broken-widget
repair times are:
Interval (Hours)    Frequency   Relative Frequency   Cumulative Frequency ci   Slope ai
0.25 ≤ x ≤ 0.5      31          0.31                 0.31                      0.81
0.5  ≤ x ≤ 1.0      10          0.10                 0.41                      5.00
1.0  ≤ x ≤ 1.5      25          0.25                 0.66                      2.00
1.5  ≤ x ≤ 2.0      34          0.34                 1.00                      1.47

Consider R1 = 0.83:

c3 = 0.66 < R1 < c4 = 1.00

X1 = x(4-1) + a4(R1 – c(4-1))


= 1.5 + 1.47(0.83-0.66)
= 1.75
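
A sketch of this grouped-data variant (Python assumed; the breakpoints and cumulative frequencies are those of the table above):

```python
import random

def empirical_variate_grouped(breakpoints, cum_probs, r=None):
    """Inverse of a continuous empirical CDF fitted to a frequency table.
    breakpoints[i] is the left end of interval i+1; cum_probs[i] is c_{i+1}."""
    if r is None:
        r = random.random()
    c_prev = 0.0
    for i, c in enumerate(cum_probs):
        if r <= c:
            slope = (breakpoints[i + 1] - breakpoints[i]) / (c - c_prev)
            return breakpoints[i] + slope * (r - c_prev)
        c_prev = c
    return breakpoints[-1]                         # defensive: r outside (0, 1]

# Worked example: R1 = 0.83 falls between c3 = 0.66 and c4 = 1.00 -> X1 = 1.75
print(empirical_variate_grouped([0.25, 0.5, 1.0, 1.5, 2.0],
                                [0.31, 0.41, 0.66, 1.00], r=0.83))
```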



Inverse-transform Technique:
Empirical Continuous Distributions
•  Problems with empirical distributions
• The data in the previous example are restricted to the range
0.25 ≤ X ≤ 2.0
• The underlying distribution might have a wider range
• Thus, try to find a theoretical distribution

•  Hints for building empirical distributions based on frequency tables
• It is recommended to use relatively short intervals
•  The number of bins then increases
• This will result in a more accurate estimate



Inverse-transform Technique:
Continuous Distributions
•  A number of continuous distributions do not have a closed-form expression for their CDF, e.g.
• Normal:
F(x) = ∫_{−∞}^{x} (1/(σ√(2π))) · exp(−½ · ((t − µ)/σ)²) dt
• Gamma
• Beta
•  The presented method does not work for these
distributions
•  Solution
• Approximate the CDF or numerically integrate the CDF
•  Problem
• Computationally slow



Inverse-transform Technique:
Discrete Distribution
•  All discrete distributions can be generated via the inverse-transform technique
•  Method: numerically via a table-lookup procedure, algebraically, or by a formula
•  Examples of application:
• Empirical
• Discrete uniform
• Geometric



Inverse-transform Technique:
Discrete Distribution
•  Example: Suppose the number of shipments, x, on the
loading dock of a company is either 0, 1, or 2
• Data - Probability distribution:

x P(x) F(x)
0 0.50 0.50
1 0.30 0.80
2 0.20 1.00

•  The inverse-transform technique as a table-lookup procedure:
• Find the interval i with F(xi−1) = ri−1 < R ≤ ri = F(xi)
• Set X = xi



Inverse-transform Technique:
Discrete Distribution

•  Method: Given R, the generation scheme becomes
x = 0    if R ≤ 0.5
x = 1    if 0.5 < R ≤ 0.8
x = 2    if 0.8 < R ≤ 1.0

•  Table for generating the discrete variate X:
i   Input ri   Output xi
1   0.5        0
2   0.8        1
3   1.0        2

•  Consider R1 = 0.73: find i with F(xi-1) < R ≤ F(xi)
F(x0) < 0.73 ≤ F(x1)
Hence, X1 = 1
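
A sketch of the table lookup (Python assumed; the value/CDF pairs are those of the shipment example):

```python
import random

def discrete_variate(values, cdf, r=None):
    """Table-lookup inverse transform: return the first value whose CDF entry >= R."""
    if r is None:
        r = random.random()
    for x, f in zip(values, cdf):
        if r <= f:
            return x
    return values[-1]

# Shipment example: R1 = 0.73 satisfies F(x0) = 0.5 < 0.73 <= 0.8 = F(x1), so X1 = 1
print(discrete_variate([0, 1, 2], [0.5, 0.8, 1.0], r=0.73))
```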



Acceptance-Rejection Technique



Acceptance-Rejection Technique
•  Particularly useful when the inverse CDF does not exist in closed form
•  Thinning
•  Illustration: To generate random variates, X ~ U(1/4,1)
Procedure:
Step 1. Generate R ~ U(0,1)
Step 2. If R ≥ ¼, accept X = R
Step 3. If R < ¼, reject R and return to Step 1

•  R does not have the desired distribution, but R conditioned on the event {R ≥ ¼} (denoted R′) does.
•  Efficiency: Depends heavily on the ability to minimize the number
of rejections.
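
A sketch of this toy rejection loop (Python assumed):

```python
import random

def uniform_quarter_to_one():
    # Keep drawing R ~ U(0,1) until R >= 1/4; the accepted R is U(1/4, 1)
    while True:
        r = random.random()
        if r >= 0.25:
            return r
```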



Acceptance-Rejection Technique:
Poisson Distribution
•  Probability mass function of a Poisson Distribution:
P(N = n) = e^(−α) · α^n / n!
•  N = n means exactly n arrivals during one time unit:
A1 + A2 + … + An ≤ 1 < A1 + A2 + … + An + An+1
•  Since the interarrival times Ai are exponentially distributed, we can set
Ai = −ln(Ri) / α
•  This is the well-known exponential generator derived earlier with the inverse-transform technique.



Acceptance-Rejection Technique:
Poisson Distribution
•  Substitute Ai = −ln(Ri)/α into the inequality:
∑_{i=1..n} −ln(Ri)/α ≤ 1 < ∑_{i=1..n+1} −ln(Ri)/α
•  Simplify by
• multiplying by −α, which reverses the inequality signs
• using that the sum of logs is the log of a product
∑_{i=1..n} ln(Ri) ≥ −α > ∑_{i=1..n+1} ln(Ri)
ln(∏_{i=1..n} Ri) ≥ −α > ln(∏_{i=1..n+1} Ri)
•  Simplify by applying e^(ln x) = x:
∏_{i=1..n} Ri ≥ e^(−α) > ∏_{i=1..n+1} Ri



Acceptance-Rejection Technique:
Poisson Distribution
•  Procedure of generating a Poisson random variate N is as
follows
1.  Set n=0, P=1
2.  Generate a random number Rn+1, and replace P by P x Rn+1
3.  If P < exp(-α), then accept N=n
•  Otherwise, reject the current n, increase n by one, and return
to step 2.
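
A sketch of this acceptance-rejection (product-of-uniforms) generator, assuming Python:

```python
import math
import random

def poisson_variate(alpha):
    # Multiply uniforms until the running product drops below exp(-alpha);
    # the value of n at acceptance is the Poisson variate N.
    n = 0
    p = 1.0
    threshold = math.exp(-alpha)
    while True:
        p *= random.random()          # step 2: P <- P * R_{n+1}
        if p < threshold:             # step 3: accept N = n
            return n
        n += 1                        # otherwise reject n and continue
```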



Acceptance-Rejection Technique:
Poisson Distribution
•  Example: Generate three Poisson variates with mean α=0.2
•  exp(-0.2) = 0.8187
•  Variate 1
•  Step 1: Set n = 0, P = 1
•  Step 2: R1 = 0.4357, P = 1 x 0.4357
•  Step 3: Since P = 0.4357 < exp(- 0.2), accept N = 0
•  Variate 2
•  Step 1: Set n = 0, P = 1
•  Step 2: R1 = 0.4146, P = 1 x 0.4146
•  Step 3: Since P = 0.4146 < exp(-0.2), accept N = 0
•  Variate 3
•  Step 1: Set n = 0, P = 1
•  Step 2: R1 = 0.8353, P = 1 x 0.8353
•  Step 3: Since P = 0.8353 > exp(-0.2), reject n = 0 and return to Step 2 with n = 1
•  Step 2: R2 = 0.9952, P = 0.8353 x 0.9952 = 0.8313
•  Step 3: Since P = 0.8313 > exp(-0.2), reject n = 1 and return to Step 2 with n = 2
•  Step 2: R3 = 0.8004, P = 0.8313 x 0.8004 = 0.6654
•  Step 3: Since P = 0.6654 < exp(-0.2), accept N = 2



Acceptance-Rejection Technique:
Poisson Distribution
•  It took five random numbers to generate three Poisson
variates
•  In the long run, the generation of Poisson variates requires some overhead!

N Rn+1 P Accept/Reject Result


0 0.4357 0.4357 P < exp(- α) Accept N=0
0 0.4146 0.4146 P < exp(- α) Accept N=0
0 0.8353 0.8353 P ≥ exp(- α) Reject
1 0.9952 0.8313 P ≥ exp(- α) Reject
2 0.8004 0.6654 P < exp(- α) Accept N=2



Special Properties



Special Properties
•  Based on features of a particular family of probability distributions
•  For example:
• Direct Transformation for normal and lognormal distributions
• Convolution



Direct Transformation
•  Approach for N(0,1)
• PDF:
f(x) = (1/√(2π)) · e^(−x²/2)
• CDF (no closed form available):
F(x) = ∫_{−∞}^{x} (1/√(2π)) · e^(−t²/2) dt



Direct Transformation
•  Approach for N(0,1)
•  Consider two standard normal random variables, Z1 and Z2, plotted as
a point in the plane:
•  In polar coordinates:
•  Z1 = B cos(α)
•  Z2 = B sin(α)

[Figure: the point (Z1, Z2) in the plane, at radius B from the origin and polar angle α]



Direct Transformation
•  Chi-square distribution
• Given k independent N(0,1) random variables X1, X2, …, Xk, the sum of their squares follows the Chi-square distribution with k degrees of freedom:
χ²_k = ∑_{i=1..k} Xi²
• PDF:
f(x, k) = 1 / (Γ(k/2) · 2^(k/2)) · x^(k/2 − 1) · e^(−x/2)



Direct Transformation
•  The following relationships are known:
• B² = Z1² + Z2² follows the χ² distribution with 2 degrees of freedom, which equals exp(λ = 1/2). Hence
B = √(−2 ln R)
• The radius B and angle α are mutually independent; with α uniform on (0, 2π), this gives
Z1 = √(−2 ln R1) · cos(2π R2)
Z2 = √(−2 ln R1) · sin(2π R2)



Direct Transformation

•  Approach for N(µ, σ²):
• Generate Zi ~ N(0,1)
Xi = µ + σ · Zi

•  Approach for Lognormal(µ, σ²):
• Generate Xi ~ N(µ, σ²)
Yi = e^(Xi)



Direct Transformation: Example
•  Let R1 = 0.1758 and R2=0.1489
•  Two standard normal random variates are generated as
follows:
Z1 = √(−2 ln(0.1758)) · cos(2π · 0.1489) = 1.11
Z2 = √(−2 ln(0.1758)) · sin(2π · 0.1489) = 1.50

•  To obtain normal variates Xi with mean µ = 10 and variance σ² = 4:
X1 = 10 + 2 · 1.11 = 12.22
X2 = 10 + 2 · 1.50 = 13.00
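
A sketch of the direct transformation in Python, reproducing the worked example up to rounding (a lognormal variate would then be e^(X1)):

```python
import math
import random

def normal_pair(mu=0.0, sigma=1.0, r1=None, r2=None):
    # Two U(0,1) numbers give two independent N(mu, sigma^2) variates
    if r1 is None:
        r1 = 1.0 - random.random()    # in (0, 1], avoids log(0)
    if r2 is None:
        r2 = random.random()
    b = math.sqrt(-2.0 * math.log(r1))
    z1 = b * math.cos(2.0 * math.pi * r2)
    z2 = b * math.sin(2.0 * math.pi * r2)
    return mu + sigma * z1, mu + sigma * z2

# Worked example: R1 = 0.1758, R2 = 0.1489 -> X1 ~ 12.22, X2 ~ 13.00
print(normal_pair(mu=10.0, sigma=2.0, r1=0.1758, r2=0.1489))
```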



Convolution
•  Convolution
• The sum of independent random variables

•  Can be applied to obtain


• Erlang variates
• Binomial variates



Convolution
•  Erlang Distribution
• An Erlang random variable X with parameters (k, θ) can be represented as the sum of k independent exponential random variables Xi, i = 1, …, k, each having mean 1/(kθ):

X = ∑_{i=1..k} Xi
  = ∑_{i=1..k} −(1/(kθ)) · ln(Ri)
  = −(1/(kθ)) · ln(∏_{i=1..k} Ri)
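
A sketch of the resulting convolution generator (Python assumed):

```python
import math
import random

def erlang_variate(k, theta):
    # Sum of k exponentials with mean 1/(k*theta), via a single log of a product
    product = 1.0
    for _ in range(k):
        product *= 1.0 - random.random()   # in (0, 1], avoids log(0)
    return -math.log(product) / (k * theta)
```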



Summary
•  Principles of random-variate generation via
• Inverse-transform technique
• Acceptance-rejection technique
• Special properties

•  Important for generating continuous and discrete distributions

