CHAPTER 6
DATA COLLECTION AND ANALYSIS
6.1 Introduction
In the previous chapter, we discussed the importance of having clearly defined
objectives and a well-organized plan for conducting a simulation study. In this
chapter, we look at the data-gathering phase of a simulation project that defines
the system being modeled. The result of the data-gathering effort is a conceptual
or mental model of how the system is configured and how it operates. This con-
ceptual model may take the form of a written description, a flow diagram, or even
a simple sketch on the back of an envelope. It becomes the basis for the simula-
tion model that will be created.
Data collection is the most challenging and time-consuming task in simula-
tion. For new systems, information is usually very sketchy and only roughly esti-
mated. For existing systems, there may be years of raw, unorganized data to sort
through. Information is seldom available in a form that is directly usable in build-
ing a simulation model. It nearly always needs to be filtered and massaged to get it
into the right format and to reflect the projected conditions under which the system
is to be analyzed. Many data-gathering efforts end up with lots of data but little use-
ful information. Data should be gathered purposefully to avoid wasting not only
the modeler’s time but also the time of individuals who are supplying the data.
This chapter presents guidelines and procedures for gathering data. Statistical
techniques for analyzing data and fitting probability distributions to data are also
discussed. The following questions are answered:
• What is the best procedure to follow when gathering data?
• What types of data should be gathered?
records of repair times, for example, often lump together the time spent
waiting for repair personnel to become available and the actual time
spent performing the repair. What you would like to do is separate the
waiting time from the actual repair time because the waiting time is
a function of the availability of the repair person, which may vary
depending on the system operation.
4. Look for common groupings. When dealing with lots of variety in a
simulation such as hundreds of part types or customer profiles, it helps to
look for common groupings or patterns. If, for example, you are
modeling a process that has 300 entity types, it may be difficult to get
information on the exact mix and all of the varied routings that can
occur. Having such detailed information is usually too cumbersome to
work with even if you did have it. The solution is to reduce the data
to common behaviors and patterns. One way to group common data is to
first identify general categories into which all data can be assigned. Then
the percentage of cases that fall within each category is calculated or
estimated. It is not uncommon for beginning modelers to attempt to use
actual logged input streams such as customer arrivals or material
shipments when building a model. After struggling to define hundreds of
individual arrivals or routings, it begins to dawn on them that they can
group this information into a few categories and assign probabilities that
any given instance will fall within a particular category. This allows
dozens and sometimes hundreds of unique instances to be described in a
few brief commands. The secret to identifying common groupings is to
“think probabilistically” (see the category-sampling sketch following this list).
5. Focus on essence rather than substance. A system definition for
modeling purposes should capture the cause-and-effect relationships
and ignore the meaningless (to simulation) details. This is called system
abstraction and seeks to define the essence of system behavior rather
than the substance. A system should be abstracted to the highest level
possible while still preserving the essence of the system operation.
Using this “black box” approach to system definition, we are not
concerned about the nature of the activity being performed, such as
milling, grinding, or inspection. We are interested only in the impact
that the activity has on the use of resources and the delay of entity
flow. A proficient modeler constantly is thinking abstractly about the
system operation and avoids getting caught up in the mechanics of the
process.
6. Separate input variables from response variables. First-time modelers
often confuse input variables that define the operation of the system with
response variables that report system performance. Input variables define
how the system works (activity times, routing sequences, and the like)
and should be the focus of the data gathering. Response variables describe
how the system responds to a given set of input variables (amount of
work in process, resource utilization, throughput times, and so on).
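To make the grouping idea in item 4 concrete, here is a minimal sketch in Python of how hundreds of individually logged routings collapse into a few categories sampled by probability. The category names and percentages are purely illustrative, not taken from the text.

import random

# Hypothetical routing categories with estimated percentages of cases.
categories = ["standard route", "rework loop", "express route"]
probabilities = [0.70, 0.20, 0.10]

rng = random.Random(42)

# Each arriving entity draws a category instead of carrying its own
# individually logged routing.
arrivals = rng.choices(categories, weights=probabilities, k=1000)

for name in categories:
    print(name, arrivals.count(name))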
operational information is easy to define. If, on the other hand, the process has
evolved into an informal operation with no set rules, it can be very difficult to de-
fine. For a system to be simulated, operating policies that are undefined and
ambiguous must be codified into defined procedures and rules. If decisions and
outcomes vary, it is important to at least define this variability statistically using
probability expressions or distributions.
go and defines what happens to the entity, not where it happens. An entity flow
diagram, on the other hand, is more of a routing chart that shows the physical
movement of entities through the system from location to location. An entity flow
diagram should depict any branching that may occur in the flow such as routings
to alternative work centers or rework loops. The purpose of the entity flow dia-
gram is to document the overall flow of entities in the system and to provide a
visual aid for communicating the entity flow to others. A flow diagram is easy to
understand and gets everyone thinking in the same way about the system. It can
easily be expanded as additional information is gathered to show activity times,
where resources are used, and so forth.
TABLE 6.1
Location           Activity Time   Activity Resource   Next Location      Move Trigger             Move Time   Move Resource
Check-in counter   N(1,.2) min.    Secretary           Waiting room       None                     0.2 min.    None
Waiting room       None            None                Exam room          When room is available   0.8 min.*   Nurse
Exam room          N(15,4) min.    Doctor              Checkout counter   None                     0.2 min.    None
Checkout counter   N(3,.5) min.    Secretary           Exit               None                     None        None
Notice that the description of operation really provides the details of the en-
tity flow diagram. This detail is needed for defining the simulation model. The
times associated with activities and movements can be just estimates at this stage.
The important thing to accomplish at this point is simply to describe how entities
are processed through the system.
The entity flow diagram, together with the description of operation, provides
a good data document that can be expanded as the project progresses. At this
point, it is a good idea to conduct a structured walk-through of the operation using
the entity flow diagram as the focal point. Individuals should be involved in this
review who are familiar with the operation to ensure that the description of oper-
ation is accurate and complete.
Based on this description of operation, a first cut at building the model can
begin. Using ProModel, a model for simulating Dr. Brown’s practice can be built
in a matter of just a few minutes. Little translation is needed as both the diagram
(Figure 6.2) and the data table (Table 6.1) can be entered pretty much in the same
way they are shown. The only additional modeling information required to build
a running model is the interarrival time of patients.
Getting a basic model up and running early in a simulation project helps hold
the interest of stakeholders. It also helps identify missing information and moti-
vates team members to try to fill in these information gaps. Additional questions
about the operation of the system are usually raised once a basic model is running.
Some of the questions that begin to be asked are “Have all of the routings been
accounted for?” and “Have any entities been overlooked?” In essence, modeling the
system actually helps define and validate system data.
At this point, any numerical values such as activity times, arrival rates, and
others should also be firmed up. Having a running model enables estimates and
other assumptions to be tested to see if it is necessary to spend additional time get-
ting more accurate information. For existing systems, obtaining more accurate
data is usually accomplished by conducting time studies of the activity or event
under investigation. A sample is gathered to represent all conditions under which
the activity or event occurs. Any biases that do not represent normal operating
conditions are eliminated. The sample size should be large enough to provide
an accurate picture yet not so large that it becomes costly without really adding
additional information.
For comparative studies in which two design alternatives are evaluated, the
fact that assumptions are made is less significant because we are evaluating
relative performance, not absolute performance. For example, if we are trying to
determine whether on-time deliveries can be improved by assigning tasks to a
resource by due date rather than by first-come, first-served, a simulation can pro-
vide useful information to make this decision without necessarily having com-
pletely accurate data. Because both models use the same assumptions, it may be
possible to compare relative performance. We may not know what the absolute
performance of the best option is, but we should be able to assess fairly accurately
how much better one option is than another.
Some assumptions will naturally have a greater influence on the validity of a
model than others. For example, in a system with large processing times com-
pared to move times, a move time that is off by 20 percent may make little or no
difference in system throughput. On the other hand, an activity time that is off by
20 percent could make a 20 percent difference in throughput. One way to assess
the influence of an assumption on the validity of a model is through sensitivity
analysis. Sensitivity analysis, in which a range of values is tested for potential im-
pact on model performance, can indicate just how accurate an assumption needs
to be. A decision can then be made to firm up the assumption or to leave it as is.
If, for example, the degree of variation in a particular activity time has little or no
impact on system performance, then a constant activity time may be used. At the
other extreme, it may be found that even the type of distribution has a noticeable
impact on model behavior and therefore needs to be selected carefully.
A simple approach to sensitivity analysis for a particular assumption is to run
three different scenarios showing (1) a “best” or most optimistic case, (2) a
“worst” or most pessimistic case, and (3) a “most likely” or best-guess case. These
runs will help determine the extent to which the assumption influences model
behavior. It will also help assess the risk of relying on the particular assumption.
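The following sketch illustrates the three-scenario approach. A simple single-server queue stands in for the real simulation model, and the three service-time values are hypothetical; only the uncertain assumption changes between runs.

import random

def simulate_time_in_system(mean_service, n_customers=10_000,
                            mean_interarrival=1.0, seed=1):
    """Single-server FIFO queue: return average time in system.

    A stand-in for the real simulation model; only the assumed
    service time changes between scenarios.
    """
    rng = random.Random(seed)
    arrival = 0.0
    server_free_at = 0.0
    total = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(1.0 / mean_interarrival)
        start = max(arrival, server_free_at)
        finish = start + rng.expovariate(1.0 / mean_service)
        server_free_at = finish
        total += finish - arrival
    return total / n_customers

# Best, most likely, and worst-case values for the uncertain service time.
for label, value in [("best", 0.6), ("most likely", 0.8), ("worst", 0.9)]:
    print(f"{label:>12}: avg time in system = {simulate_time_in_system(value):.2f}")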
FIGURE 6.3 Descriptive statistics for a sample data set of 100 observations.

TABLE 6.2 100 Observed Inspection Times
0.99 0.41 0.89 0.59 0.98 0.47 0.70 0.94 0.39 0.92
1.30 0.67 0.64 0.88 0.57 0.87 0.43 0.97 1.20 1.50
1.20 0.98 0.89 0.62 0.97 1.30 1.20 1.10 1.00 0.44
0.67 1.70 1.40 1.00 1.00 0.88 0.52 1.30 0.59 0.35
0.67 0.51 0.72 0.76 0.61 0.37 0.66 0.75 1.10 0.76
0.79 0.78 0.49 1.10 0.74 0.97 0.93 0.76 0.66 0.57
1.20 0.49 0.92 1.50 1.10 0.64 0.96 0.87 1.10 0.50
0.60 1.30 1.30 1.40 1.30 0.96 0.95 1.60 0.58 1.10
0.43 1.60 1.20 0.49 0.35 0.41 0.54 0.83 1.20 0.99
1.00 0.65 0.82 0.52 0.52 0.80 0.72 1.20 0.59 1.60
distribution), and stationarity (the distribution of the data doesn’t change with time)
should be determined. Using data analysis software such as Stat::Fit, data sets can
be automatically analyzed, tested for usefulness in a simulation, and matched to the
best-fitting underlying distribution. Stat::Fit is bundled with ProModel and can be
accessed from the Tools menu after opening ProModel. To illustrate how data are
analyzed and converted to a form for use in simulation, let’s take a look at a data set
containing 100 observations of an inspection operation time, shown in Table 6.2.
By entering these data or importing them from a file into Stat::Fit, a descrip-
tive analysis of the data set can be performed. The summary of this analysis is
displayed in Figure 6.3. These parameters describe the entire sample collection.
Because reference will be made later to some of these parameters, a brief defini-
tion of each is given below.
Mean—the average value of the data.
Median—the value of the middle observation when the data are sorted in
ascending order.
TABLE 6.3 100 Outdoor Temperature Readings from 8:00 A.M. to 8:00 P.M.
57 57 58 59 59 60 60 62 62 62
63 63 64 64 65 66 66 68 68 69
70 71 72 72 73 74 73 74 75 75
75 75 76 77 78 78 79 80 80 81
80 81 81 82 83 83 84 84 83 84
83 83 83 82 82 81 81 80 81 80
79 79 78 77 77 76 76 75 74 75
75 74 73 73 72 72 72 71 71 71
71 70 70 70 70 69 69 68 68 68
67 67 66 66 66 65 66 65 65 64
Scatter Plot
This is a plot of adjacent points in the sequence of observed values plotted against
each other. Thus each plotted point represents a pair of consecutive observations
(Xi, Xi+1) for i = 1, 2, . . . , n − 1. This procedure is repeated for all adjacent data
points so that 100 observations would result in 99 plotted points. If the Xi’s are
independent, the points will be scattered randomly. If, however, the data are
dependent on each other, a trend line will be apparent. If the Xi’s are positively
correlated, a positively sloped trend line will appear. Negatively correlated Xi’s
will produce a negatively sloped line. A scatter plot is a simple way to detect
strongly dependent behavior.
Figure 6.4 shows a plot of paired observations using the 100 inspection times
from Table 6.2. Notice that the points are scattered randomly with no defined pat-
tern, indicating that the observations are independent.
Figure 6.5 is a plot of the 100 temperature observations shown in Table 6.3.
Notice the strong positive correlation.
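A scatter plot of consecutive observations is easy to produce. The sketch below assumes the observations are held in a Python list in collection order; only the first few values from Table 6.2 are shown.

import matplotlib.pyplot as plt

# Observations in collection order; substitute the full data set.
times = [0.99, 0.41, 0.89, 0.59, 0.98, 0.47, 0.70, 0.94, 0.39, 0.92]

x = times[:-1]   # X_i
y = times[1:]    # X_(i+1)

plt.scatter(x, y)
plt.xlabel("X_i")
plt.ylabel("X_(i+1)")
plt.title("Scatter plot of consecutive observations")
plt.show()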
Autocorrelation Plot
If observations in a sample are independent, they are also uncorrelated. Correlated
data are dependent on each other and are said to be autocorrelated. A measure of
the autocorrelation, rho (ρ), can be calculated using the equation
ρ = [Σ_{i=1}^{n−j} (x_i − x̄)(x_{i+j} − x̄)] / [σ²(n − j)]
FIGURE 6.4
Scatter plot showing
uncorrelated data.
FIGURE 6.5
Scatter plot showing
correlated temperature
data.
where j is the lag or distance between data points; σ is the standard deviation of
the population, approximated by the standard deviation of the sample; and x̄ is
the sample mean. The calculation is carried out to one-fifth of the length of the
data set, where diminishing pairs start to make the calculation unreliable.
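For illustration, the lag-j autocorrelation defined above can be computed directly. This sketch uses generated stand-in data rather than an actual sample.

import random
import statistics

def autocorrelation(data, j):
    """Lag-j autocorrelation per the formula above; the sample variance
    approximates the population variance sigma squared."""
    n = len(data)
    x_bar = statistics.fmean(data)
    var = statistics.pvariance(data, mu=x_bar)
    num = sum((data[i] - x_bar) * (data[i + j] - x_bar) for i in range(n - j))
    return num / (var * (n - j))

rng = random.Random(0)
data = [rng.random() for _ in range(100)]  # stand-in for observed values

# Evaluate out to one-fifth of the series length, as described above.
for j in range(1, len(data) // 5 + 1):
    print(j, round(autocorrelation(data, j), 3))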
This calculation of autocorrelation assumes that the data are taken from a sta-
tionary process; that is, the data would appear to come from the same distribution
regardless of when the data were sampled (that is, the data are time invariant). In
the case of a time series, this implies that the time origin may be shifted without
FIGURE 6.6
Autocorrelation
graph showing
noncorrelation.
FIGURE 6.7
Autocorrelation graph
showing correlation.
affecting the statistical characteristics of the series. Thus the variance for the whole
sample can be used to represent the variance of any subset. If the process being
studied is not stationary, the calculation of autocorrelation is more complex.
The autocorrelation value varies between 1 and −1 (that is, between positive
and negative correlation). If the autocorrelation is near either extreme, the data are
autocorrelated. Figure 6.6 shows an autocorrelation plot for the 100 inspection
time observations from Table 6.2. Notice that the values are near zero, indicating
FIGURE 6.8
Runs test based on
points above and
below the median and
number of turning
points.
little or no correlation. The numbers in parentheses below the x axis are the max-
imum autocorrelation in both the positive and negative directions.
Figure 6.7 is an autocorrelation plot for the sampled temperatures in
Table 6.3. The graph shows a broad autocorrelation.
Runs Tests
The runs test looks for runs in the data that might indicate data correlation. A run
in a series of observations is the occurrence of an uninterrupted sequence of num-
bers showing the same trend. For instance, a consecutive set of increasing or
decreasing numbers is said to provide runs “up” or “down” respectively. Two
types of runs tests that can be made are the median test and the turning point test.
Both of these tests can be conducted automatically on data using Stat::Fit. The runs
test for the 100 sample inspection times in Table 6.2 is summarized in Figure 6.8.
The result of each test is either do not reject the hypothesis that the series is random
or reject that hypothesis with the level of significance given. The level of signifi-
cance is the probability that a rejected hypothesis is actually true—that is, that the
test rejects the randomness of the series when the series is actually random.
The median test measures the number of runs—that is, sequences of numbers
above and below the median. A run can be a single number above or below
the median if the numbers adjacent to it are in the opposite direction. If there are
too many or too few runs, the randomness of the series is rejected. This median
runs test uses a normal approximation for acceptance or rejection that requires
that the number of data points above or below the median be greater than 10.
The turning point test measures the number of times the series changes direc-
tion (see Johnson, Kotz, and Kemp 1992). Again, if there are too many turning
points or too few, the randomness of the series is rejected. This turning point runs
test uses a normal approximation for acceptance or rejection that requires more
than 12 data points.
While there are many other runs tests for randomness, some of the most sen-
sitive require larger data sets, in excess of 4,000 numbers (see Knuth 1981).
The number of runs in a series of observations indicates the randomness of
those observations. Too few runs indicate strong point-to-point correlation,
while too many runs may indicate cyclic behavior.
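While Stat::Fit performs these tests automatically, the sketch below shows what the two normal-approximation tests compute. The expected-value and variance formulas used are the standard normal approximations for these tests; the data are generated stand-ins.

import math
import random
import statistics

def median_runs_test(data):
    """Runs test above/below the median; returns the z statistic
    under the usual normal approximation."""
    med = statistics.median(data)
    signs = [x > med for x in data if x != med]  # drop values equal to median
    runs = 1 + sum(signs[i] != signs[i - 1] for i in range(1, len(signs)))
    n1 = sum(signs)
    n2 = len(signs) - n1
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return (runs - mu) / math.sqrt(var)

def turning_point_test(data):
    """Count direction changes; normal approximation with
    mean 2(n - 2)/3 and variance (16n - 29)/90."""
    n = len(data)
    turns = sum((data[i - 1] < data[i] > data[i + 1]) or
                (data[i - 1] > data[i] < data[i + 1])
                for i in range(1, n - 1))
    mu = 2 * (n - 2) / 3
    var = (16 * n - 29) / 90
    return (turns - mu) / math.sqrt(var)

rng = random.Random(42)
sample = [rng.random() for _ in range(100)]
print("median runs z   =", round(median_runs_test(sample), 2))
print("turning point z =", round(turning_point_test(sample), 2))
# |z| > 1.96 would reject randomness at the 0.05 level of significance.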
bimodal, as shown in Figure 6.9. The fact that there are two clusters of data indi-
cates that there are at least two distinct causes of downtimes, each producing
different distributions for repair times. Perhaps after examining the cause of the
downtimes, it is discovered that some were due to part jams that were quickly
fixed while others were due to mechanical failures that took longer to repair.
One type of nonhomogeneous data occurs when the distribution changes over
time. This is different from two or more distributions manifesting themselves over
the same time period such as that caused by mixed types of downtimes. An exam-
ple of a time-changing distribution might result from an operator who works 20 per-
cent faster during the second hour of a shift than during the first hour. Over long pe-
riods of time, a learning curve phenomenon occurs where workers perform at a
faster rate as they gain experience on the job. Such distributions are called nonsta-
tionary or time variant because of their time-changing nature. A common example
of a distribution that changes with time is the arrival rate of customers to a service
facility. Customer arrivals to a bank or store, for example, tend to occur at a rate that
fluctuates throughout the day. Nonstationary distributions can be detected by plot-
ting subgroups of data that occur within successive time intervals. For example,
sampled arrivals between 8 A.M. and 9 A.M. can be plotted separately from arrivals
between 9 A.M. and 10 A.M., and so on. If the distribution is of a different type or is
the same distribution but shifted up or down such that the mean changes value over
time, the distribution is nonstationary. This fact will need to be taken into account
when defining the model behavior. Figure 6.10 is a plot of customer arrival rates for
a department store occurring by half-hour interval between 10 A.M. and 6 P.M.
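A minimal way to check for a time-changing distribution, as just described, is to group observations by time interval and compare subgroup statistics. The observations in this sketch are made up purely for illustration.

import statistics

# Illustrative (made-up) observations: (hour_of_day, interarrival_time).
observations = [(8, 5.1), (8, 4.9), (9, 4.0), (9, 4.2), (10, 3.1), (10, 2.9)]

# Group by time interval and compare subgroup means; a mean that drifts
# across intervals suggests a nonstationary (time-variant) distribution.
groups = {}
for hour, value in observations:
    groups.setdefault(hour, []).append(value)

for hour in sorted(groups):
    print(f"{hour}:00  mean interarrival = {statistics.fmean(groups[hour]):.2f}")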
FIGURE 6.9 Bimodal distribution of downtimes indicating multiple causes (y axis: occurrence; x axis: repair time).
FIGURE 6.10 Change in rate of customer arrivals between 10 A.M. and 6 P.M. (y axis: rate of arrival; x axis: time of day, 10:00 A.M. to 6:00 P.M.).
Note that while the type of distribution (Poisson) is the same for each period, the
rate (and hence the mean interarrival time) changes every half hour.
The second case we look at is where two sets of data have been gathered and we
desire to know whether they come from the same population or are identically dis-
tributed. Situations where this type of testing is useful include the following:
• Interarrival times have been gathered for different days and you want to
know if the data collected for each day come from the same distribution.
• Activity times for two different operators were collected and you want to
know if the same distribution can be used for both operators.
• Time to failure has been gathered on four similar machines and you are
interested in knowing if they are all identically distributed.
One easy way to tell whether two sets of data have the same distribution is to
run Stat::Fit and see what distribution best fits each data set. If the same distribu-
tion fits both sets of data, you can assume that they come from the same popula-
tion. If in doubt, they can simply be modeled as separate distributions.
Several formal tests exist for determining whether two or more data sets can
be assumed to come from identical populations. Some of them apply to specific
families of distributions such as analysis of variance (ANOVA) tests for normally
distributed data. Other tests are distribution independent and can be applied to
compare data sets having any distribution, such as the Kolmogorov-Smirnov two-
sample test and the chi-square multisample test (see Hoover and Perry 1990). The
Kruskal-Wallis test is another nonparametric test because no assumption is made
about the distribution of the data (see Law and Kelton 2000).
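Both distribution-independent tests mentioned above are available in common statistics libraries. Here is a sketch using SciPy, with generated stand-in activity times for two operators.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
operator_a = rng.normal(10.0, 2.0, size=60)   # illustrative activity times
operator_b = rng.normal(10.4, 2.0, size=60)

# Kolmogorov-Smirnov two-sample test (distribution independent).
ks_stat, ks_p = stats.ks_2samp(operator_a, operator_b)

# Kruskal-Wallis test (nonparametric; works for two or more samples).
kw_stat, kw_p = stats.kruskal(operator_a, operator_b)

print(f"K-S: p = {ks_p:.3f}   Kruskal-Wallis: p = {kw_p:.3f}")
# A small p-value (say, below 0.05) suggests the data sets do not
# come from the same distribution.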
Arrivals per 5-Minute Interval   Frequency
 0                               15
 1                               11
 2                               19
 3                               16
 4                                8
 5                                8
 6                                7
 7                                4
 8                                5
 9                                3
10                                3
11                                1
FIGURE 6.11 Histogram showing arrival count per five-minute interval (x axis: arrival count, 0 through 11; y axis: frequency, 0 to 20).
FIGURE 6.12
Frequency distribution for 100 observed inspection times.
FIGURE 6.13
Histogram distribution
for 100 observed
inspection times.
generate random variates in the simulation rather than relying only on a sampling
of observations from the population. Defining the theoretical distribution that best
fits sample data is called distribution fitting. Before discussing how theoretical
distributions are fit to data, it is helpful to have a basic understanding of at least
the most common theoretical distributions.
There are about 12 statistical distributions that are commonly used in sim-
ulation (Banks and Gibson 1997). Theoretical distributions can be defined by a
simple set of parameters usually defining dispersion and density. A normal distri-
bution, for example, is defined by a mean value and a standard deviation value.
Theoretical distributions are either discrete or continuous, depending on whether
a finite set of values within the range or an infinite continuum of possible values
within a range can occur. Discrete distributions are seldom used in manufacturing
and service system simulations because they can usually be defined by simple
probability expressions. Below is a description of a few theoretical distributions
sometimes used in simulation. These particular ones are presented here purely be-
cause of their familiarity and ease of understanding. Beginners to simulation usu-
ally feel most comfortable using these distributions, although the precautions
given for their use should be noted. An extensive list of theoretical distributions
and their applications is given in Appendix A.
Binomial Distribution
The binomial distribution is a discrete distribution that describes the number of
times a particular outcome occurs in n trials, where the outcome occurs with
probability p on each trial. We call an occurrence of the outcome of interest a
success and its nonoccurrence a failure.
For a binomial distribution to apply, each trial must be a Bernoulli trial: it must
be independent and have only two possible outcomes (success or failure), and
the probability of a success must remain constant from trial to trial. The mean
of a binomial distribution is given by np, where n is the number of trials and p
is the probability of success on any given trial. The variance is given by
np(1 − p).
A common application of the binomial distribution in simulation is to test for
the number of defective items in a lot or the number of customers of a particular
type in a group of customers. Suppose, for example, it is known that the probability
that a part coming out of an operation is defective is .1 and we inspect the
parts in batch sizes of 10. The number of defectives for any given sample can be
determined by generating a binomial random variate. The probability mass func-
tion for the binomial distribution in this example is shown in Figure 6.14.
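A sketch of generating binomial random variates for this example follows. The batch size of 10 and defect probability of .1 come from the text; the number of batches is arbitrary.

import numpy as np

rng = np.random.default_rng(1)
n, p = 10, 0.1        # batch size and defect probability from the example

# Number of defectives in each of 1,000 inspected batches.
defectives = rng.binomial(n, p, size=1000)

print("sample mean:", defectives.mean(), " theory np =", n * p)
print("sample var :", round(defectives.var(), 3),
      " theory np(1-p) =", n * p * (1 - p))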
Uniform Distribution
A uniform or rectangular distribution is used to describe a process in which the
outcome is equally likely to fall between the values of a and b. In a uniform
distribution, the mean is (a + b)/2. The variance is expressed by (b − a)2 /12.
The probability density function for the uniform distribution is shown in
Figure 6.15.
FIGURE 6.14 The probability mass function for the binomial distribution in this example (x axis: number of defectives, 0 through 10).
FIGURE 6.15 The probability density function f(x) of a uniform distribution with minimum a and maximum b.
FIGURE 6.16 The probability density function f(x) of a triangular distribution with minimum a, mode m, and maximum b.
The uniform distribution is often used in the early stages of simulation pro-
jects because it is a convenient and well-understood source of random variation.
In the real world, it is extremely rare to find an activity time that is uniformly
distributed because nearly all activity times have a central tendency or mode.
Sometimes a uniform distribution is used to represent a worst-case test for varia-
tion when doing sensitivity analysis.
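The mean and variance formulas for the uniform distribution can be checked by sampling. A short sketch, with hypothetical endpoints a and b:

import numpy as np

a, b = 2.0, 6.0                       # hypothetical endpoints
rng = np.random.default_rng(2)
x = rng.uniform(a, b, size=100_000)

print("sample mean:", round(x.mean(), 3), " theory (a+b)/2 =", (a + b) / 2)
print("sample var :", round(x.var(), 3),
      " theory (b-a)^2/12 =", (b - a) ** 2 / 12)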
Triangular Distribution
A triangular distribution is a good approximation to use in the absence of data, es-
pecially if a minimum, maximum, and most likely value (mode) can be estimated.
These are the three parameters of the triangular distribution. If a, m, and b repre-
sent the minimum, mode, and maximum values respectively of a triangular distri-
bution, then the mean of a triangular distribution is (a + m + b)/3. The variance
is defined by (a 2 + m 2 + b2 − am − ab − mb)/18. The probability density func-
tion for the triangular distribution is shown in Figure 6.16.
The weakness of the triangular distribution is that values in real activity times
rarely taper linearly, which means that the triangular distribution will probably
create more variation than the true distribution. Also, extreme values that may be
rare are not captured by a triangular distribution. This means that the full range of
values of the true distribution of the population may not be represented by the tri-
angular distribution.
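The same kind of sampling check works for the triangular distribution. This sketch uses the parameter values shown later in Figure 6.22.

import numpy as np

a, m, b = 2.0, 5.0, 15.0              # minimum, mode, maximum (Figure 6.22)
rng = np.random.default_rng(3)
x = rng.triangular(a, m, b, size=100_000)

mean_theory = (a + m + b) / 3
var_theory = (a**2 + m**2 + b**2 - a*m - a*b - m*b) / 18
print("sample mean:", round(x.mean(), 3), " theory:", round(mean_theory, 3))
print("sample var :", round(x.var(), 3), " theory:", round(var_theory, 3))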
Normal Distribution
The normal distribution (sometimes called the Gaussian distribution) describes
phenomena that vary symmetrically above and below the mean (hence the
bell-shaped curve). While the normal distribution is often selected for defining ac-
tivity times, in practice manual activity times are rarely ever normally distributed.
They are nearly always skewed to the right (the ending tail of the distribution is
longer than the beginning tail). This is because humans can sometimes take
significantly longer than the mean time, but usually not much less than the mean
time.
FIGURE 6.17 The probability density function f(x) for a normal distribution.
Exponential Distribution
Sometimes referred to as the negative exponential, this distribution is used fre-
quently in simulations to represent event intervals. The exponential distribution is
defined by a single parameter, the mean (µ). This distribution is related to the
Poisson distribution in that if an occurrence happens at a rate that is Poisson dis-
tributed, the time between occurrences is exponentially distributed. In other
words, the mean of the exponential distribution is the inverse of the Poisson rate.
For example, if the rate at which customers arrive at a bank is Poisson distributed
with a rate of 12 per hour, the time between arrivals is exponentially distributed
with a mean of 5 minutes. The exponential distribution has a memoryless or
forgetfulness property that makes it well suited for modeling certain phenomena
that occur independently of one another. For example, if arrival times are expo-
nentially distributed with a mean of 5 minutes, then the expected time before the
next arrival is 5 minutes regardless of how much time has elapsed since the pre-
vious arrival. It is as though there is no memory of how much time has already
elapsed when predicting the next event—hence the term memoryless.
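A sketch of the bank example: arrivals at a Poisson rate of 12 per hour imply exponentially distributed interarrival times with a 5-minute mean, and the memoryless property can be observed directly in the sample.

import numpy as np

rng = np.random.default_rng(4)
rate_per_hour = 12                    # Poisson arrival rate from the example
mean_minutes = 60 / rate_per_hour     # 5-minute mean interarrival time

gaps = rng.exponential(mean_minutes, size=100_000)
print("mean interarrival:", round(gaps.mean(), 2), "minutes")

# Memoryless check: among gaps that have already exceeded 5 minutes,
# the remaining wait still averages about 5 minutes.
long_gaps = gaps[gaps > 5.0]
print("mean remaining wait:", round((long_gaps - 5.0).mean(), 2), "minutes")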
FIGURE 6.18 The probability density function f(x) for an exponential distribution.
FIGURE 6.19
Ranking distributions by goodness of fit for inspection time data set.
Choosing a distribution that appears to be a good fit to the sample data requires
a basic knowledge of the types of distributions available and their properties. It is
also helpful to have some intuition about the variable whose data are being fit.
Sometimes creating a histogram of the data can reveal important characteristics
about the distribution of the data. If, for example, a histogram of sampled assembly
times appears symmetric on either side of the mean, a normal distribution may be
inferred. Looking at the histogram in Figure 6.13, and knowing the basic shapes of
different theoretical distributions, we might hypothesize that the data come from a
beta, lognormal, or perhaps even a triangular distribution.
After a particular type of distribution has been selected, we must estimate the
defining parameter values of the distribution based on the sample data. In the case
of a normal distribution, the parameters to be estimated are the mean and standard
deviation, which can be estimated by simply calculating the average and standard
deviation of the sample data. Parameter estimates are generally calculated using
moment equations or the maximum likelihood equation (see Law and Kelton
2000).
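A sketch of parameter estimation using SciPy follows. The moment estimates for a normal distribution are just the sample mean and standard deviation; maximum likelihood fitting is shown for a lognormal. The sample here is generated purely for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.lognormal(mean=0.0, sigma=0.3, size=100)   # stand-in sample

# Normal: moment estimates are the sample mean and standard deviation.
mu_hat, sigma_hat = data.mean(), data.std(ddof=1)

# Lognormal: maximum likelihood estimates (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(data, floc=0)

print(f"normal fit:    mean = {mu_hat:.3f}, std dev = {sigma_hat:.3f}")
print(f"lognormal fit: shape = {shape:.3f}, scale = {scale:.3f}")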
Once a candidate distribution with its associated parameters has been
defined, a goodness-of-fit test can be performed to evaluate how closely the
distribution fits the data. A goodness-of-fit test measures the deviation of the
For a uniform distribution the probabilities for all intervals are equal, so the
remaining intervals also have a hypothesized probability of .20.
Step 3: Calculate the expected frequency for each cell (ei). The expected
frequency (e) for each cell (i ) is the expected number of observations that
would fall into each interval if the null hypothesis were true. It is calculated
by multiplying the total number of observations (n) by the probability ( p)
that an observation would fall within each cell. So for each cell, the
expected frequency (ei ) equals npi .
In our example, since the hypothesized probability ( p) of each cell is
the same, the expected frequency for every cell is ei = npi = 40 × .2 = 8.
Step 4: Adjust cells if necessary so that all expected frequencies are at
least 5. If the expected frequency of any cell is less than 5, the cells must
be adjusted. This “rule of five” is a conservative rule that provides
satisfactory validity of the chi-square test. When adjusting cells, the easiest
approach is to simply consolidate adjacent cells. After any consolidation,
the total number of cells should be at least 3; otherwise you no longer have
a meaningful differentiation of the data and, therefore, will need to gather
additional data. If you merge any cells as the result of this step, you will
need to adjust the observed frequency, hypothesized probability, and
expected frequency of those cells accordingly.
In our example, the expected frequency of each cell is 8, which meets
the minimum requirement of 5, so no adjustment is necessary.
Step 5: Calculate the chi-square statistic. The equation for calculating the
chi-square statistic is χ²_calc = Σ_{i=1}^{k} (o_i − e_i)²/e_i. If the fit is good,
the chi-square statistic will be small.
For our example,

χ²_calc = Σ_{i=1}^{5} (o_i − e_i)²/e_i = .125 + .125 + .50 + 0 + 0 = 0.75
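The chi-square statistic is simple to compute directly. The observed counts below are hypothetical (five equal-probability cells with n = 40 observations, so e_i = 8 in every cell, as in the example); they are not the actual counts from the text.

from scipy import stats

# Hypothetical observed counts for five equal-probability cells
# (n = 40 observations, so e_i = n * p_i = 8 in every cell).
observed = [9, 7, 10, 6, 8]
expected = [8, 8, 8, 8, 8]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print("chi-square statistic:", chi2)   # (1 + 1 + 4 + 4 + 0) / 8 = 1.25

# SciPy returns the statistic and a p-value in one call; expected
# frequencies default to equal cells when omitted.
stat, p_value = stats.chisquare(observed)
print(stat, p_value)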
FIGURE 6.20
Visual comparison
between beta
distribution and a
histogram of the 100
sample inspection time
values.
FIGURE 6.21
Normal distribution with mean = 1 and standard deviation = .25.
FIGURE 6.22 A triangular distribution with minimum = 2, mode = 5, and maximum = 15.
random variable X will have values generated for X where min ≤ X < max. Thus
values will never be equal to the maximum value (in this case 20). Because gen-
erated values are automatically truncated when used in a context requiring an in-
teger, only integer values that are evenly distributed from 1 to 19 will occur (this
is effectively a discrete uniform distribution).
Objective
The objective of the study is to determine station utilization and throughput of the
system.
Entities
19" monitor
21" monitor
25" monitor
Workstation Information

Station       Operation Time   Percentage Rework
Station 1     5                5%
Station 2     8                8%
Station 3     5                0%
Inspection    5                0%
Processing Sequence
Arrivals
A cartload of four monitor assemblies arrives every four hours normally distrib-
uted with a standard deviation of 0.2 hour. The probability of an arriving monitor
being of a particular size is
19" .6
21" .3
25" .1
Move Times
All movement is on an accumulation conveyor with the following times:
From          To            Time
Station 1     Station 2     12
Station 2     Inspection    15
Inspection    Station 3     12
Inspection    Station 1     20
Inspection    Station 2     14
Station 1     Inspection    18
Move Triggers
Entities move from one location to the next based on available capacity of the
input buffer at the next location.
Work Schedule
Stations are scheduled to operate eight hours a day.
Assumption List
• No downtimes (downtimes occur too infrequently).
• Dedicated operators at each workstation are always available during the
scheduled work time.
• Rework times are half of the normal operation times.
6.13 Summary
Data for building a model should be collected systematically with a view of how
the data are going to be used in the model. Data are of three types: structural, op-
erational, and numerical. Structural data consist of the physical objects that make
up the system. Operational data define how the elements behave. Numerical data
quantify attributes and behavioral parameters.
When gathering data, primary sources should be used first, such as historical
records or specifications. Developing a questionnaire is a good way to request in-
formation when conducting personal interviews. Data gathering should start with
structural data, then operational data, and finally numerical data. The first piece of
the puzzle to be put together is the routing sequence because everything else
hinges on the entity flow.
Numerical data for random variables should be analyzed to test for indepen-
dence and homogeneity. Also, a theoretical distribution should be fit to the data if
there is an acceptable fit. Some data are best represented using an empirical dis-
tribution. Theoretical distributions should be used wherever possible.
Data should be documented, reviewed, and approved by concerned individu-
als. This data document becomes the basis for building the simulation model and
provides a baseline for later modification or for future studies.
10.7, 5.4, 7.8, 12.2, 6.4, 9.5, 6.2, 11.9, 13.1, 5.9, 9.6, 8.1, 6.3, 10.3, 11.5,
12.7, 15.4, 7.1, 10.2, 7.4, 6.5, 11.2, 12.9, 10.1, 9.9, 8.6, 7.9, 10.3, 8.3, 11.1
18. Customers calling into a service center are categorized according to the
nature of their problems. Five types of problem categories (A
through E) have been identified. One hundred observations were made
Type A B C D E
Observations 10 14 12 9 5
19. While doing your homework one afternoon, you notice that you are
frequently interrupted by friends. You decide to record the times
between interruptions to see if they might be exponentially distributed.
Here are 30 observed times (in minutes) that you have recorded;
conduct a goodness-of-fit test to see if the data are exponentially
distributed. (Hint: Use the data average as an estimate of the mean. For
the range, assume a range between 0 and infinity. Divide the cells based
on equal probabilities ( pi ) for each cell rather than equal cell intervals.)
This exercise is intended to help you practice sorting through and organizing data on a ser-
vice facility. The facility is a fast-food restaurant called Harry’s Drive-Through and is
shown on the next page.
Harry’s Drive-Through caters to two types of customers, walk-in and drive-through.
During peak times, walk-in customers arrive exponentially every 4 minutes and place
their order, which is sent to the kitchen. Nonpeak arrivals occur every 8–10 minutes. Cus-
tomers wait in a pickup queue while their orders are filled in the kitchen. Two workers are
assigned to take and deliver orders. Once customers pick up their orders, 60 percent of
the customers stay and eat at a table, while the other 40 percent leave with their orders.
There is seating for 30 people, and the number of people in each customer party is one
40 percent of the time, two 30 percent of the time, three 18 percent of the time, four 10
percent of the time, and five 2 percent of the time. Eating time is normally distributed
with a mean of 15 minutes and a standard deviation of 2 minutes. If a walk-in customer
enters and sees that more than 15 customers are waiting to place their orders, the cus-
tomer will balk (that is, leave).
Harry’s is especially popular as a drive-through restaurant. Cars enter at a rate of 10 per
hour during peak times, place their orders, and then pull forward to pick up their orders. No
more than five cars can be in the pickup queue at a time. One person is dedicated to taking
orders. If over seven cars are at the order station, arriving cars will drive on.
The time to take orders is uniformly distributed between 0.5 minute and 1.2 minutes in-
cluding payment. Orders take an average of 8.2 minutes to fill with a standard deviation of
1.2 minutes (normal distribution). These times are the same for both walk-in and drive-
through customers.
The objective of the simulation is to analyze performance during peak periods to see
how long customers spend waiting in line, how long lines become, how often customers
balk (pass by), and what the utilization of the table area is.
Problem
Summarize these data in table form and list any assumptions that need to be made in
order to conduct a meaningful simulation study based on the data given and objectives
specified.
References
Banks, Jerry; John S. Carson II; Barry L. Nelson; and David M. Nicol. Discrete-Event System Simulation. Englewood Cliffs, NJ: Prentice Hall, 2001.
Banks, Jerry, and Randall R. Gibson. “Selecting Simulation Software.” IIE Solutions, May 1997, pp. 29–32.
———. Stat::Fit. South Kent, CT: Geer Mountain Software Corporation, 1996.
Breiman, Leo. Statistics: With a View toward Applications. New York: Houghton Mifflin, 1973.
Brunk, H. D. An Introduction to Mathematical Statistics. 2nd ed. New York: Blaisdell Publishing Co., 1965.
Carson, John S. “Convincing Users of Model’s Validity Is Challenging Aspect of Modeler’s Job.” Industrial Engineering, June 1986, p. 77.
Hoover, S. V., and R. F. Perry. Simulation: A Problem Solving Approach. Reading, MA: Addison-Wesley, 1990.
Johnson, Norman L.; Samuel Kotz; and Adrienne W. Kemp. Univariate Discrete Distributions. New York: John Wiley & Sons, 1992, p. 425.
Knuth, Donald E. Seminumerical Algorithms. Reading, MA: Addison-Wesley, 1981.
Law, Averill M., and W. David Kelton. Simulation Modeling & Analysis. New York: McGraw-Hill, 2000.
Stuart, Alan, and J. Keith Ord. Kendall’s Advanced Theory of Statistics. Vol. 2. Cambridge: Oxford University Press, 1991.