Modeling & Simulation
Lecture 11
Output Analysis
Instructor:
Eng. Ghada AlMashaqbeh
The Hashemite University
Computer Engineering Department
Outline
• Distinguish the two types of simulation: transient vs. steady state.
• Illustrate the inherent variability in a stochastic discrete-event simulation.
• Cover the statistical estimation of performance measures.
• Discuss the analysis of transient simulations.
• Discuss the analysis of steady-state simulations.
Output Analysis
• Analyzing the output of the simulation is a part that is often done incorrectly (by analysts and by commercial software).
• We consider several issues, including:
– One simulation run is not sufficient to analyze the behavior of the system. Why?
• Inputs are drawn from random variables, so the values in any single run may be biased.
– But how do we determine the number of simulation runs needed to get correct output?
– Failure to determine when a simulation run ends:
• Is there a clear event that determines when to end the simulation?
• Or do you want to study the system in the long run (i.e., steady state)?
– Does the system model under study have a steady-state behavior?
– How do we determine the warm-up period, during which the obtained output is not correct or accurate, for a steady-state simulation?
Introduction and Motivation I
Remember the following definition:
• Replication or run:
– If two runs of a simulation model have the same input parameters / initial conditions and differ only in their random seeds, they are called replications.
– If the seeds are chosen independently, they are called independent replications.
• Example:
– Simulation of a bank with five tellers, bounded opening times, Poisson arrivals, and IID exponential service times (utilization = 0.8).
– Ten independent replications; the mean customer waiting time in the queue is computed for each replication.
– See the result of each run in the next slide.
Introduction and Motivation II
Purpose
• Objective: estimate system performance via simulation.
• If θ is the system performance, the precision of the estimator θ̂ can be measured by:
– The standard error of θ̂.
– The width of a confidence interval (CI) for θ.
• Purpose of statistical analysis:
– To estimate the standard error or CI.
– To figure out the number of observations (or simulation runs) required to achieve the desired error/CI.
• Potential issues to overcome:
– Autocorrelation, e.g., inventory costs for subsequent weeks lack statistical independence.
– Initial conditions, e.g., the inventory on hand and the # of backorders at time 0 would most likely influence the performance of week 1.
Stochastic Nature of Output Data
• The output of the model consists of one or more random variables (r.v.'s), because the model is an input-output transformation and the input variables are r.v.'s.
• The output observations may not be IID within the same run, but they can be IID across different runs.
• Independence across runs is the key to the simple output-data analysis methods we will see later.
Example
• M/G/1 queueing example:
– Poisson arrival rate = 0.1 per minute; service time ~ N(μ = 9.5, σ = 1.75).
– System performance: long-run mean queue length, L_Q(t).
– Suppose we run a single simulation for a total of 5,000 minutes:
• Divide the time interval [0, 5000) into 5 equal subintervals of 1000 minutes.
• The average number of customers in queue from time (j−1)·1000 to j·1000 is Y_j, where j is the subinterval number.
Example … cont’d.
– Batched average queue length for 3 independent replications:

  Batching interval (minutes)   Batch, j   Rep 1, Y_1j   Rep 2, Y_2j   Rep 3, Y_3j
  [0, 1000)                     1          3.61          2.91          7.67
  [1000, 2000)                  2          3.21          9.00          19.53
  [2000, 3000)                  3          2.18          16.15         20.36
  [3000, 4000)                  4          6.92          24.53         8.11
  [4000, 5000)                  5          2.82          25.19         12.62
  [0, 5000)                                3.75          15.56         13.66

– There is inherent variability in stochastic simulation, both within a single replication and across different replications.
– The averages across replications, Ȳ_1., Ȳ_2., Ȳ_3., can be regarded as independent observations, but the averages within a replication, Y_11, …, Y_15, are not.
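The replication averages in the bottom row of the table can be reproduced directly from the batch means; a minimal sketch in Python, with the data copied from the table above:

```python
# Batch averages Y_ij from the table: key = replication i, values = batches j = 1..5.
batches = {
    1: [3.61, 3.21, 2.18, 6.92, 2.82],
    2: [2.91, 9.00, 16.15, 24.53, 25.19],
    3: [7.67, 19.53, 20.36, 8.11, 12.62],
}

def replication_mean(ys):
    """Within-replication average over the 5 batch means."""
    return sum(ys) / len(ys)

rep_means = {i: round(replication_mean(ys), 2) for i, ys in batches.items()}
print(rep_means)  # {1: 3.75, 2: 15.56, 3: 13.66} -- matches the [0, 5000) row
```

Only the three values in the last line are "independent observations" in the sense of the slide; the five batch means inside any one dictionary entry are autocorrelated.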
Transient and Steady State
Systems’ Behavior
• Transient system behavior:
– Behavior in the short run, while the output still depends on the initial conditions I of the system.
• So changing I will change the measured system behavior.
• Steady-state system behavior:
– Behavior in the long run, where the output does not depend on the initial conditions I of the system.
• So changing I will not affect the measured system behavior.
• Remember that every system starts in the transient state and then reaches the steady state as t → ∞ (i.e., after a long period of time).
• The steady state does not depend on the initial conditions I, but the time needed to converge from the transient to the steady state does depend on I.
Types of Simulation Based on Output Analysis I
• Terminating simulation (never reaches steady state; initial conditions are included).
• Non-terminating simulation, with three kinds of parameters of interest:
– Steady-state parameters
– Steady-state cycle parameters
– Other parameters
• A non-terminating simulation models any system in continuous operation (it could have a 'break'):
– We are interested in steady-state parameters.
– Initial conditions should be discarded.
– Sometimes there is no steady state because the system is cyclic; then we are interested in steady-state cycle parameters.
Terminating Simulation
• Terminating simulation:
– Runs for some duration of time T_E, where E is a specified event that stops the simulation.
– Starts at time 0 under well-specified initial conditions.
– Ends at the stopping time T_E.
– Bank example:
• Opens at 8:30 am (time 0) with no customers present and 8 of the 11 tellers working (initial conditions), and closes at 4:30 pm (time T_E = 480 minutes).
• The simulation analyst chooses to consider it a terminating system because the object of interest is one day's operation.
Non-Terminating Simulation
• Non-terminating simulation:
– Runs continuously, or at least over a very long period of time.
– Examples: assembly lines that shut down infrequently, telephone systems, hospital emergency rooms.
– Initial conditions are defined by the analyst.
– Runs for some analyst-specified period of time T_E.
– Studies the steady-state (long-run) properties of the system: properties that are not influenced by the initial conditions of the model.
• Whether a simulation is considered terminating or non-terminating depends on both:
– The objectives of the simulation study, and
– The nature of the system.
Output Analysis (or
Measures of Performance)
Output Analysis
• Since simulations use random variables as inputs, the output is also a random variable.
• Mainly we are interested in measuring the mean, and then the variance, of the collected output data. Why?
• Consider the estimation of a performance parameter, θ (or φ), of a simulated system:
– Discrete-time output data: [Y_1, Y_2, …, Y_n], with ordinary mean θ.
• Example: customer delays (D_i) in a single-server queueing system.
– Continuous-time data: {Y(t), 0 ≤ t ≤ T_E}, with time-weighted mean φ.
• Example: queue length Q(t), the number of customers waiting in the queue at time t.
Mean Estimation of the
Output Data
• Your basic objective is to judge whether the obtained output of the simulation is correct (or accurate) or not.
– i.e., you must validate your simulation model.
• This can be done by comparing your results with actual output obtained from either the real system or an analytical solution.
• You will depend on the mean to represent the large amount of output data that you have obtained.
• So, your first step is to compute (or estimate) the mean of the simulation output.
• Two methods are used to estimate the mean of the output observations:
– Point estimator
– Quantile or percentile estimator
Point Estimator I
• Point estimation of the mean for discrete-time data.
– The point estimator:

  θ̂ = (1/n) Σ_{i=1}^{n} Y_i

– It is unbiased if its expected value is θ, that is, if E(θ̂) = θ.
• It is biased if E(θ̂) ≠ θ.
– But θ is the actual mean value of the system; what if we have no information about the correct or actual mean value?
• How can we compare the estimated mean to the correct one?
– Solution: construct a confidence interval for θ, as we will see later.
Point Estimator II
• Point estimation of the mean for continuous-time data.
– The point estimator:

  φ̂ = (1/T_E) ∫_0^{T_E} Y(t) dt

• It is biased in general: E(φ̂) ≠ φ.
• An unbiased or low-bias estimator is desired.
• Again, we need a confidence interval to determine how accurate the simulation output mean is.
• By the expectation of the mean estimate, we mean the average of the mean across multiple runs.
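The time-weighted mean φ̂ can be computed exactly when the output is piecewise constant, as a queue length is. A minimal sketch; the event times and queue levels below are made-up illustration data:

```python
# Time-weighted mean of a piecewise-constant output Y(t) over [0, T_E].
# `changes` lists (time, new_value) pairs in increasing time order.
def time_weighted_mean(changes, t_end):
    total = 0.0
    for (t0, y), (t1, _) in zip(changes, changes[1:] + [(t_end, None)]):
        total += y * (t1 - t0)   # area under Y(t) on [t0, t1)
    return total / t_end         # phi_hat = (1/T_E) * integral of Y(t)

# Hypothetical Q(t): 0 on [0,2), 1 on [2,5), 3 on [5,8), 2 on [8,10)
changes = [(0, 0), (2, 1), (5, 3), (8, 2)]
print(time_weighted_mean(changes, 10))  # (0*2 + 1*3 + 3*3 + 2*2)/10 = 1.6
```

Note the weighting by interval length: a simple average of the levels 0, 1, 3, 2 would ignore how long the queue stayed at each level.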
Quantile Estimator I
• When the point estimator fails, you can use the quantile or percentile estimator.
– Remember the difference between the median and the mean to judge which is appropriate.
• Estimating a quantile is the inverse of the problem of estimating a proportion or probability:

  given p, find θ such that Pr(Y ≤ θ) = p

– Consider a histogram of the observed values Y:
• Find θ̂ such that 100p% of the histogram is to the left of (smaller than) θ̂.
Quantile Estimator II
• The best way is to sort the outputs and use the (R·p)-th smallest value.
– Example: with R = 10 replications and the p = 0.8 quantile, first sort, then estimate θ by the (10)(0.8) = 8th smallest value (round if necessary).

  Sorted data:
  5.6
  7.1
  8.8
  8.9
  9.5
  9.7
  10.1
  12.2  ← this is our point estimate
  12.5
  12.9
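The sort-and-pick rule above is easy to code. A minimal sketch; rounding R·p up to the next integer is one common convention for the slide's "round if necessary":

```python
import math

def quantile_estimate(outputs, p):
    """Point-estimate the p-quantile as the ceil(R*p)-th smallest value."""
    data = sorted(outputs)
    rank = math.ceil(len(data) * p)   # round up when R*p is not an integer
    return data[rank - 1]             # ranks are 1-based, lists are 0-based

# The 10 replication outputs from the slide, in arbitrary order.
data = [12.5, 9.5, 8.8, 5.6, 12.2, 10.1, 7.1, 12.9, 9.7, 8.9]
print(quantile_estimate(data, 0.8))   # 8th smallest of 10 values -> 12.2
```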
Confidence Interval: A Motivation
Suppose we want to estimate the average number of customers in the system, N, before it breaks down.
• We simulate the system n times until it breaks down.
• From each run i, we get an estimate of the number of customers, N_i (note that N_i is a random variable).
• We estimate the average number of customers in the system by the estimator

  X̄(n) = (1/n) Σ_{i=1}^{n} N_i

• We want to find an interval [a, b] such that the estimated average lies in this interval with a confidence of p% (say 90%), i.e., it is close to the correct value with probability 90%.
Confidence Interval I
• Simulation results without confidence intervals are (almost) worthless!
• The question: how do we construct confidence intervals for the unknown mean μ, based on our observations X̄(n) and S²(n)?
• For a large number of runs, the distribution of a given output will follow the normal distribution.
Confidence-Interval II
• To understand confidence intervals fully, it is important to distinguish between measures of error and measures of risk, e.g., a confidence interval versus a prediction interval.
• Suppose the model is the normal distribution with mean μ and variance σ² (both unknown).
– Let Y_i. be the average cycle time for parts produced on the i-th replication of the simulation (its mathematical expectation is μ).
– The average cycle time will vary from day to day, but over the long run the average of the averages will be close to μ.
– Sample variance across R replications:

  S² = (1/(R − 1)) Σ_{i=1}^{R} (Y_i. − Ȳ..)²
ˆ
Confidence-Interval III
• Confidence interval (CI):
– A measure of error.
– We assume that the Y_i. are normally distributed.
– We cannot know for certain how far Ȳ.. is from μ, but the CI attempts to bound that error.
– A CI, such as a 95% CI, tells us how much we can trust the interval to actually bound the error between Ȳ.. and μ.
– The more replications we make, the less error there is in Ȳ.. (converging to 0 as R goes to infinity).
Confidence-Interval Estimation I
• The confidence interval for the mean is

  X̄(n) ± t_{n−1,1−α/2} √(S²(n)/n),  where X̄(n) = (1/n) Σ_{i=1}^{n} N_i

• Obtain t_{n−1,1−α/2} from the table in Appendix T1.
• Here t_{n−1,1−α/2} is the upper 1 − α/2 critical point of the t distribution with n − 1 df (see Figure 4.10).
[Figure 4.10. Standard normal distribution (critical points ±z_{1−α/2}) and t distribution with n − 1 df (critical points ±t_{n−1,1−α/2}).]
Confidence-Interval Estimation II
• The coverage of the confidence interval is the probability that it actually contains the true mean.
• The idea behind the CI is that as n (the number of runs) increases, the distribution of X̄(n) gets very close to the normal distribution.
• So, the further the distribution of the X_i is from normal (i.e., the more skewed), the larger the n needed to make the CI assumption valid.
• Example: for some system we have found that (with n = 10):

  X̄(n) = 1.34,  S²(n) = 0.17

– The 90% confidence interval (based on the previous equation) is [1.1, 1.58].
– This means that we claim, with 90% confidence, that μ lies in the interval [1.1, 1.58].
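The worked example can be checked with a few lines of Python; the critical value t_{9,0.95} = 1.833 is taken from a standard t-table, as the slides assume:

```python
import math

def t_confidence_interval(xbar, s2, n, t_crit):
    """100(1-alpha)% CI: xbar +/- t_{n-1,1-alpha/2} * sqrt(s2/n)."""
    half = t_crit * math.sqrt(s2 / n)
    return (xbar - half, xbar + half)

# Slide example: n = 10 runs, X(n) = 1.34, S^2(n) = 0.17, 90% confidence.
lo, hi = t_confidence_interval(1.34, 0.17, 10, t_crit=1.833)
print(round(lo, 2), round(hi, 2))  # 1.1 1.58
```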
Confidence-Interval Quantiles I
• Confidence interval of quantiles: an approximate (1 − α)·100% confidence interval for θ can be obtained by finding two values θ_l and θ_u:
– θ_l cuts off 100·p_l% of the histogram (the R·p_l-th smallest value of the sorted data).
– θ_u cuts off 100·p_u% of the histogram (the R·p_u-th smallest value of the sorted data).
– where

  p_l = p − z_{1−α/2} √(p(1 − p)/(R − 1))
  p_u = p + z_{1−α/2} √(p(1 − p)/(R − 1))
Confidence-Interval Quantiles II
• Example: suppose R = 1000 replications, and we want to estimate the p = 0.8 quantile with a 95% confidence interval.
– First, sort the data from smallest to largest.
– Then estimate θ by the (1000)(0.8) = 800th smallest value; the point estimate is 212.03.
– Then find the confidence interval:

  p_l = 0.8 − 1.96 √(0.8(0.2)/(1000 − 1)) = 0.78
  p_u = 0.8 + 1.96 √(0.8(0.2)/(1000 − 1)) = 0.82

  The CI is given by the 780th and 820th smallest values.

– A portion of the 1000 sorted values:

  Output   Rank
  180.92   779
  188.96   780
  190.55   781
  208.58   799
  212.03   800
  216.99   801
  250.32   819
  256.79   820
  256.99   821

– The 95% CI of the quantile estimate is [188.96, 256.79].
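The two rank positions can be computed mechanically. A sketch that follows the slide's convention of rounding p_l and p_u to two decimals before converting to ranks (z = 1.96 is the 97.5% normal point; the function name is ours):

```python
import math

def quantile_ci_ranks(p, R, z=1.96):
    """Ranks of the sorted outputs bounding an approximate CI for the
    p-quantile, rounding p_l and p_u to two decimals as on the slide."""
    half = z * math.sqrt(p * (1 - p) / (R - 1))
    p_l = round(p - half, 2)
    p_u = round(p + half, 2)
    return round(R * p_l), round(R * p_u)

print(quantile_ci_ranks(0.8, 1000))  # (780, 820): CI = [780th, 820th] values
```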
Analysis of Terminating
Simulation
Output Analysis for Terminating Simulation
• A terminating simulation runs over a simulated time interval [0, T_E].
• A common goal is to estimate:

  θ = E[(1/n) Σ_{i=1}^{n} Y_i]   for discrete output
  φ = E[(1/T_E) ∫_0^{T_E} Y(t) dt],  Y(t), 0 ≤ t ≤ T_E,   for continuous output

• In general, independent replications are used, each run using a different random-number stream and independently chosen initial conditions.
C.I. with Specified Precision I
• The half-length H of a 100(1 − α)% confidence interval for a mean θ, based on the t distribution, is given by:

  H = t_{R−1,1−α/2} √(S²/R)

  where R is the # of replications and S² is the sample variance.
• Suppose an error criterion ε is specified; with probability 1 − α, a sufficiently large sample size should satisfy:

  P(|Ȳ.. − θ| < ε) ≥ 1 − α

• Three methods to specify this precision:
– Absolute error.
– Relative error.
– Sequential procedure.
C.I. with Specified Precision II
• Remember that the calculation of the CI is controlled by the number of runs and by the values of both the mean and the variance of those runs.
• So, R (or n) is the factor that determines the precision of the CI, and hence the precision of the overall simulation.
• The three precision methods listed above are used to determine the best value of R (or n) to get the required precision.
Absolute Precision I
• The absolute error is β = |X̄ − μ|.
• To obtain it, note that

  1 − α ≈ P(X̄ − half-length ≤ μ ≤ X̄ + half-length)
        = P(|X̄ − μ| ≤ half-length)
        ≤ P(|X̄ − μ| ≤ β)   whenever half-length ≤ β

• So choosing R large enough that the half-length is at most β guarantees |X̄ − μ| ≤ β with probability at least 1 − α.
Absolute Precision II
• To obtain an absolute error of β, the number of replications needed is approximately

  n*_a(β) = min{ i ≥ n : t_{i−1,1−α/2} √(S²(n)/i) ≤ β }

• How to use the above formulation:
– For some initial number of runs n, compute the variance S²(n).
– So, the only unknown is i.
– From Table T.1, try critical values t_{i−1,1−α/2} for increasing values of i until the required precision is met.
– Return i.
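The search over i is a one-line loop. A sketch that substitutes the normal critical value z for t_{i−1,1−α/2} (a slight under-estimate at small i, since z < t; the pilot-study numbers are hypothetical):

```python
import math
from statistics import NormalDist

def replications_for_absolute_error(n, s2, beta, alpha=0.10):
    """Smallest i >= n with z_{1-alpha/2} * sqrt(S^2(n)/i) <= beta.
    Uses the normal critical value z in place of t_{i-1,1-alpha/2}."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    i = n
    while z * math.sqrt(s2 / i) > beta:
        i += 1
    return i

# Hypothetical pilot study: n = 10 runs gave S^2(n) = 0.17; target beta = 0.2.
print(replications_for_absolute_error(10, 0.17, 0.2))  # 12
```

With an exact t-table the answer can be one or two replications higher, which is why the slide says "approximately".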
Relative Precision I
• We may also be interested in the relative error:

  γ = |X̄ − μ| / |μ|

• Now we have:

  1 − α ≈ P(|X̄ − μ| ≤ half-length)
        ≤ P(|X̄ − μ| ≤ γ|X̄|)            (choosing R so that half-length/|X̄| ≤ γ)
        = P(|X̄ − μ| ≤ γ|X̄ − μ + μ|)
        ≤ P(|X̄ − μ| ≤ γ|X̄ − μ| + γ|μ|)
        = P((1 − γ)|X̄ − μ| ≤ γ|μ|)
        = P(|X̄ − μ|/|μ| ≤ γ/(1 − γ))

• So the procedure actually delivers a relative error of at most γ/(1 − γ), slightly larger than γ.
Relative Precision II
• To obtain a relative error of γ, the number of replications needed is approximately

  n*_r(γ) = min{ i ≥ n : t_{i−1,1−α/2} √(S²(n)/i) / |X̄(n)| ≤ γ/(1 + γ) }

• How to use the above formulation?
– The same way as with the absolute-precision equation.
Sequential Procedure I
• The major disadvantage of both the absolute-error and relative-error methods is that:
– You compute n based on mean and variance values obtained from a small number of runs, which can be inaccurate.
– Thus, much of the time you either obtain too large an n, which includes more replications than needed for the specified precision,
– Or you get too small an n, resulting in inaccurate simulation results.
• The sequential procedure solves this problem:
– The computed n is incremented by one at each step until you reach the right value.
– Each time you update n, you also update the mean and variance based on the new value of n.
Sequential Procedure II
• Steps of the sequential algorithm:
1. Make c runs of the simulation and set n = c.
2. From the results of the n runs, compute X̄(n) and

   b = t_{n−1,1−α/2} √(S²(n)/n)

3. If b / X̄(n) ≤ γ, stop and return n; otherwise, increment n by 1, make one additional simulation run, and return to step 2.
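The three steps above can be sketched directly. This is a minimal illustration, not the textbook's exact procedure: it uses the normal critical value z in place of t_{n−1,1−α/2}, and the `run_once` stand-in below is a made-up noisy performance measure rather than a real simulation run:

```python
import math
import random
from statistics import NormalDist

def sequential_ci(run_once, gamma=0.05, alpha=0.10, c=10):
    """Add one replication at a time until the CI half-length b is within
    relative error gamma of the running mean."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    data = [run_once() for _ in range(c)]      # step 1: c initial runs
    while True:
        n = len(data)
        xbar = sum(data) / n                   # step 2: mean ...
        s2 = sum((x - xbar) ** 2 for x in data) / (n - 1)
        b = z * math.sqrt(s2 / n)              # ... and half-length b
        if b / abs(xbar) <= gamma:             # step 3: stopping test
            return n, xbar, b
        data.append(run_once())                # one more replication

# Stand-in for a real simulation run: a noisy performance measure.
random.seed(42)
n, xbar, b = sequential_ci(lambda: random.gauss(10.0, 2.0))
print(n, b / abs(xbar) <= 0.05)   # terminates with relative half-length <= gamma
```

Notice that the mean and variance are recomputed from all n runs on every pass, exactly as the slide requires.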
Recommended Use of Procedures
• For the highest accuracy, use the sequential procedure.
• For reasonable accuracy with low resource consumption, use the absolute-error or the relative-error method (relative is more accurate).
• So, you must find a trade-off between cost and accuracy.
Analysis of Steady-State Simulation
Output Analysis for Steady-State Simulation I
• Consider a single run of a simulation model to estimate a steady-state, or long-run, characteristic of the system.
– The single run produces observations Y_1, Y_2, ... (generally samples of an autocorrelated time series).
– Performance measures (independent of the initial conditions):

  θ = lim_{n→∞} (1/n) Σ_{i=1}^{n} Y_i   for a discrete measure
  φ = lim_{T_E→∞} (1/T_E) ∫_0^{T_E} Y(t) dt   for a continuous measure
Output Analysis for Steady-State Simulation II
• The number of replications is a design choice, with several considerations in mind:
– Any bias in the point estimator that is due to artificial or arbitrary initial conditions (the bias can be severe if the run length is too short).
– The desired precision of the point estimator.
– Budget constraints on computer resources.
Initialization Bias I
• Methods to reduce the point-estimator bias caused by using artificial and unrealistic initial conditions:
– Intelligent initialization.
– Dividing the simulation into an initialization phase and a data-collection phase.
• Intelligent initialization:
– Initialize the simulation in a state that is more representative of long-run conditions.
– If the system exists, collect data on it and use these data to specify more nearly typical initial conditions.
– If the system can be simplified enough to make it mathematically solvable, e.g., via queueing models, solve the simplified model to find the long-run expected or most likely conditions, and use those to initialize the simulation.
Initialization Bias II
• Divide each simulation run into two phases:
– An initialization phase, from time 0 to time T_0.
– A data-collection phase, from T_0 to the stopping time T_0 + T_E.
– The choice of T_0 is important:
• After T_0, the system should be more nearly representative of steady-state behavior.
– The system has reached steady state when the probability distribution of the system state is close to the steady-state probability distribution (the bias of the response variable is negligible).
Mean Calculation Methods for Steady-State Simulation
• Two main approaches:
– Replication/deletion approach:
• Based on n independent replications of m observations each, i.e., multiple runs.
• First determine the warm-up period l.
• Delete the results obtained during the warm-up period, leaving m − l observations per replication.
• Compute the mean, variance, and CI as in a terminating simulation, using the m − l observations of each replication.
– Batch means.
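The replication/deletion steps reduce to slicing off the warm-up prefix of each run before averaging. A minimal sketch; the three tiny "replications" below are made-up data whose first two observations carry the initialization bias:

```python
def replication_deletion_mean(replications, warmup):
    """Drop the first `warmup` observations of each replication, then
    average the per-replication means of what remains."""
    rep_means = [sum(rep[warmup:]) / len(rep[warmup:]) for rep in replications]
    return sum(rep_means) / len(rep_means), rep_means

# Hypothetical output: 3 replications of m = 6 observations, warm-up l = 2.
reps = [
    [9.0, 5.0, 2.0, 3.0, 2.0, 1.0],
    [8.0, 4.0, 1.0, 2.0, 3.0, 2.0],
    [7.0, 6.0, 3.0, 1.0, 2.0, 2.0],
]
grand, means = replication_deletion_mean(reps, warmup=2)
print(means, grand)  # [2.0, 2.0, 2.0] 2.0
```

Including the biased first two observations would have inflated every replication mean; deletion removes exactly that effect.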
Choosing the Warm-Up Period I
• There is no widely accepted, objective, and proven technique to guide how much data to delete to reduce initialization bias to a negligible level.
• Plots can at times be misleading, but they are still recommended:
– Ensemble averages reveal a smoother and more precise trend as the # of replications, R, increases.
– Ensemble averages can be smoothed further by plotting a moving average.
– The cumulative average becomes less variable as more data are averaged.
– The more correlation present, the longer it takes for Ȳ.j to approach steady state.
– Different performance measures may approach steady state at different rates.
Choosing the Warm-Up Period II
• In the replication/deletion method, the main issue is to choose the warm-up period.
• It is very difficult to determine from a single replication when the model has warmed up; the ensemble average over replications is much smoother, making it easier to tell where the output has converged.
Batch Means Method I
• It is based on a single, long replication:
– Problem: the data are dependent, so the usual variance estimator is biased.
– Solution: batch means.
• Batch means: divide the output data from one replication (after appropriate deletion) into a few large batches, and then treat the means of these batches as if they were independent.
• For a continuous-time process, {Y(t), T_0 ≤ t ≤ T_0 + T_E}:
– k batches of size m = T_E/k, with batch means

  Ȳ_j = (1/m) ∫_{(j−1)m}^{jm} Y(t + T_0) dt

• For a discrete-time process, {Y_i, i = d+1, d+2, …, n}:
– k batches of size m = (n − d)/k, with batch means

  Ȳ_j = (1/m) Σ_{i=(j−1)m+1}^{jm} Y_{i+d}
Batch Means Method II
• After deleting Y_1, …, Y_d, the remaining observations are grouped into k batches:

  Y_{d+1}, …, Y_{d+m} | Y_{d+m+1}, …, Y_{d+2m} | … | Y_{d+(k−1)m+1}, …, Y_{d+km}
  (batch 1 → Ȳ_1, batch 2 → Ȳ_2, …, batch k → Ȳ_k)

• Starting with either continuous-time or discrete-time data, the variance of the sample mean is estimated by:

  S²/k = (1/(k(k − 1))) Σ_{j=1}^{k} (Ȳ_j − Ȳ)²

• If the batch size is sufficiently large, successive batch means will be approximately independent, and the variance estimator will be approximately unbiased.
• There is no widely accepted and relatively simple method for choosing an acceptable batch size m (see the text for suggested approaches). Some simulation software does it automatically.
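The batching and the variance formula above fit in a few lines. A minimal sketch for the discrete-time case; the toy series is constructed so the two batch means are easy to verify by hand:

```python
def batch_means_variance(data, d, k):
    """Delete the first d observations, form k batches of size
    m = (n - d) // k, and estimate Var(sample mean) as
    S^2/k = sum((Ybar_j - Ybar)^2) / (k * (k - 1))."""
    kept = data[d:]
    m = len(kept) // k
    batch_means = [sum(kept[j * m:(j + 1) * m]) / m for j in range(k)]
    grand = sum(batch_means) / k
    s2_over_k = sum((y - grand) ** 2 for y in batch_means) / (k * (k - 1))
    return batch_means, s2_over_k

# Toy series: after deleting d = 2 points, two batches of 4 with means 1 and 3.
data = [9.0, 9.0, 1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 3.0]
means, v = batch_means_variance(data, d=2, k=2)
print(means, v)  # [1.0, 3.0] 1.0
```

The estimator treats only the k batch means as data points, which is exactly what lets a single dependent run stand in for k "independent" observations.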
Determine Simulation Run Length
• Length of each replication (m) beyond the deletion point (d): (m − d) > 10d.
• The number of replications (R) should be as many as time permits, up to about 25 replications.
• For a fixed total sample size (m), as fewer data are deleted (smaller d):
– The C.I. shifts: greater bias.
– The standard error of Ȳ(n, d) decreases: smaller variance.
• So there is a trade-off between reducing bias and increasing variance.
Additional Notes
• The lecture covers the following
sections from the textbook:
–Chapter 9
• Sections: 9.1 – 9.5
–Chapter 4
• Section 4.3, 4.4, 4.5