
Journal of Simulation (2009), 1-15

© 2009 Operational Research Society Ltd. All rights reserved. 1747-7778/09


www.palgrave-journals.com/jos/

Sensitivity analysis in discrete-event simulation using fractional factorial designs
JAB Montevechi*, RG de Almeida Filho, AP Paiva, RFS Costa and AL Medeiros
Instituto de Engenharia de Producao e Gestao, Universidade Federal de Itajuba, Itajuba, Brasil
This paper presents a sensitivity analysis of discrete-event simulation models based on a twofold approach formed by Design of Experiments (DOE) factorial designs and simulation routines. The aim of this sensitivity analysis is to reduce the number of factors used as optimization input via simulation. The advantage of reducing the input factors is that the optimum search can become very time-consuming as the number of factors increases. Two cases were used to illustrate the proposal: the first one, formed only by discrete variables, and the second, presenting both discrete and continuous variables. The paper also shows the use of the Johnson transformation for experiments with non-normal response variables. The specific case of sensitivity analysis with a Poisson-distributed response was studied. Generally, discrete probability distributions lead to violation of the constant variance assumption, which is an important principle in DOE. Finally, a comparison between optimization conducted without planning and optimization based on sensitivity analysis results was carried out. The main conclusion of this work is that it is possible to reduce the number of runs needed to find optimum values, while generating system knowledge capable of improving resource allocation.
Journal of Simulation advance online publication, 13 November 2009; doi:10.1057/jos.2009.23
Keywords: discrete-event simulation; design of experiments; optimization

1. Introduction
Manufacturing system simulation modelling dates back to at least the early 1960s (Law and McComas, 1998) and is one of the most popular and powerful tools employed to analyze complex manufacturing systems (O'Kane et al, 2000; Banks et al, 2005). According to O'Kane et al (2000), one way to forecast the behaviour of these systems is the use of discrete-event simulation, which consists of modelling a system where changes occur at discrete-time intervals. This is appropriate for manufacturing systems, as their behaviour changes in such a way.
Some manufacturing issues addressed by simulation include specifying the need for and quantity of equipment and personnel, performance evaluation and evaluation of operational procedures (Law and McComas, 1998). The objectives of simulation are classified as performance analysis, capacity/constraint analysis, configuration comparison, optimization, sensitivity analysis and visualization (Harrell et al, 2000).
Optimization via simulation deserves special attention. Harrell et al (2000) define optimization as the process of trying different combinations of values for the variables that can be controlled, in order to seek the combination of values that provides the most desirable output from the simulation model. However, as the number of variables increases, the optimization phase becomes more time-consuming.

*Correspondence: JAB Montevechi, Universidade Federal de Itajuba, IEPG, Av. BPS, 1303, Caixa Postal 50, Itajuba, MG, 37500-903, Brasil.
E-mail: montevechi@unifei.edu.br
To avoid this drawback, sensitivity analysis can be used to select the variables that are responsible for most of the variation in the experimental response, eliminating from the model those variables that are not statistically significant. In simulation, sensitivity analysis can be interpreted as a systematic investigation of simulation outputs according to the input parameters chosen from the model (Kleijnen, 1998).
In this paper, factorial designs are used to perform a sensitivity analysis of a simulation model. First, a fractional factorial design is used to determine the parameters that are not statistically significant. Once they are determined, a 2^k full factorial design can be built with the remaining ones. The objective is to show how factorial designs can speed up the simulation optimization to reach the best solution. Two comparative studies, between an optimization over all model factors and another optimization using only the statistically significant model factors, were carried out.
The remainder of this paper is organized as follows: the next section introduces discrete-event simulation and presents considerations on performing a simulation. After that, fractional factorial designs are introduced. Finally, considerations on the construction of two simulation models, the experiments and the results analysis are presented. The paper concludes with some considerations on the integration between factorial designs and optimization.


2. Discrete-event simulation
Simulation is the process of designing a model of a real
system and conducting experiments with such a model. This
is done with the purpose of understanding the behaviour
of the system and/or evaluating various strategies for the
operation of the system (Shannon, 1998). Some advantages
of simulation are:

- One can simulate systems that already exist as well as those that are capable of being brought into existence.
- Simulation allows one to identify bottlenecks in information, material and product flows and to test options for increasing flow rates.
- It allows one to gain insights into how a modelled system actually works and to understand which variables are most important to performance.
- A significant advantage of simulation is its ability to let one experiment with new and unfamiliar situations and to answer 'what if' questions.

Some simulation disadvantages are: model building requires special training; simulation results can be difficult to interpret; simulation modelling and analysis can be time-consuming and expensive; simulation may be misused to solve problems for which an analytical solution is possible or even preferable; and each run of a stochastic simulation model produces only estimates of the true characteristics of the model for a particular set of input parameters (Law and Kelton, 2000; Banks et al, 2005).

In this study, discrete-event simulation is used. It consists of modelling a system as it evolves over time by a representation in which the state variables change instantaneously at separate points in time (Law and Kelton, 2000). This methodology is ideal for manufacturing systems because they exhibit discrete production changes (O'Kane et al, 2000).

According to Strack (1984), Law and Kelton (2000) and O'Kane et al (2000), some characteristics of the problems to be analyzed that justify the use of simulation are:

- real-world systems with stochastic elements cannot be accurately described by a mathematical model that can be evaluated analytically;
- it is easier to obtain results from a simulation model than by using an analytical method;
- experimentation is impossible or very difficult in the real-world system; and
- the need for long-period time studies or alternatives that physical models do not provide.

3. Optimization via simulation

Optimization via simulation is the process of trying different combinations of values for variables that can be controlled, in order to seek the combination of values that provides the most desirable output from the simulation model (Harrell et al, 2000). According to Fu (2002), the integration between optimization and simulation is recent and has been occurring since the end of the last millennium; the relationship commonly encountered in commercial software is a subservient one, where the optimization routine is an add-on to the simulation engine. This optimization routine needs the simulation engine outputs to find the parameter set which leads to the best solution.

There are several techniques for optimization. Some of them are based on heuristics. According to Silva and Montevechi (2004), heuristic techniques accomplish good solutions and eventually find the optimum solution. However, it is not possible to state that the solutions found by these techniques are the best ones, because heuristics do not test all possible responses.

4. Fractional factorial design

According to Montgomery (2001), when an experiment involves the study of two or more factors, the most efficient strategy is the factorial design. In this strategy, the factors are altered together, not one at a time; this means that for each complete run or replicate all possible level combinations are investigated (Montgomery and Runger, 2003).

When the interest is to study the effects of two or more factors, factorial designs are more efficient than the one-factor-at-a-time approach (Montgomery and Runger, 2003). By a factorial design, it is meant that in each complete trial or replication of the experiment all possible combinations of the levels of the factors are investigated. For example, if there are a levels of factor A and b levels of factor B, each replicate contains all ab treatment combinations (Montgomery, 2001). Factorial designs are the only way to find factor interactions (Montgomery, 2001; Montgomery and Runger, 2003), avoiding incorrect conclusions when factor interactions are present. The major issue of the factorial design is the exponential growth of the level combinations as the number of factors increases (Kleijnen, 1998).

The fractional factorial design deals with this by selecting a subset of all possible points of a factorial design, and the simulation is run only for these points (Law and Kelton, 2000). Therefore, the fractional factorial design provides an alternative to obtain good estimates of main effects and low-order interactions, while requiring a fraction of the computational work of a full factorial design (Law and Kelton, 2000).

5. System modelling

To accomplish the objective of this paper, two manufacturing cells from different companies have been modelled. The first one is an application in a Brazilian weapon factory


whereas the second is an application in a multinational automotive plant.

5.1. Brazilian weapon factory

The objective of the factory is to increase the throughput by adding new equipment to the cell. This model is deterministic, meaning that no source of randomness is considered and all input data are kept constant (Law and Kelton, 2000; Banks et al, 2005).

The following information was gathered about the cell:

- nine locations;
- number of workers available (14 resources); and
- the processes and how long they last (37 activities), shifts and part routes.

Promodel 7.0 (http://www.promodel.com/products/promodel/), which is a discrete-event simulator for manufacturing and material handling, was used to build the models. Figure 1 shows a screen of the model's animation in Promodel 7.0.
The model verification and validation were performed in two different ways. Initially, a plant expert analyzed whether the behaviour of the model seemed reasonable (Kleijnen, 1995; Sargent, 1998). Then, real historical data were compared with simulated results.
In order to accomplish a comparison study, an investment scenario was created. In this scenario, a loan was taken in order to buy new equipment. The loan payment is made on a monthly basis and this payment is subtracted from the monthly profit, which is determined by the monthly production multiplied by the unitary profit. The problem is to select the optimum set of equipment for which the increase in profit compensates the additional purchase cost of the equipment. There are nine simulation model parameters which can be changed. They represent how much equipment is available to perform a specific task or operation. They can assume two possible values: 1 or 2. Table 1 presents these parameters. In this case, the Design of Experiments (DOE) factors are represented by discrete variables.
The optimum set of equipment is determined by three approaches. The first one performs several experiments to identify the main factors and to obtain a mathematical model that will be the objective function in an optimization with Solver (http://www.solver.com/). After identifying the statistically significant simulation factors by using a two-sample t hypothesis test (a usual procedure in any statistics package), the original fractional factorial design can be converted into a full factorial, eliminating from the design
Table 1 Variable factor assignment for the Brazilian weapon factory application

Variable  Factor                        Low level (−)  High level (+)
A         Quantity of machines type 1   1              2
B         Quantity of machines type 2   1              2
C         Quantity of machines type 3   1              2
D         Quantity of machines type 4   1              2
E         Quantity of machines type 5   1              2
F         Quantity of machines type 6   1              2
G         Quantity of machines type 7   1              2
H         Quantity of machines type 8   1              2
J         Quantity of machines type 9   1              2

Figure 1 Screen of the Brazilian weapon factory model's animation.


the terms that are not statistically significant. As these parameters are also important to the simulation arrangement, despite not being statistically significant, they can be kept constant at proper levels. Comparatively, a second approach can be established by using the main factors identified in the DOE experiments as input for the optimization via Simrunner. The optimization software used was Simrunner 3.0, which uses Genetic Algorithms and is packaged along with Promodel 7.0.

Finally, the third approach is performed using all nine factors in the optimization via Simrunner.

5.2. Multinational automotive plant

The objective of this plant is to reduce the production lead time by adding new operators and reducing setup times. Production lead time means the time that one batch takes from the moment raw materials are received to when the finished products exit. This is a random model, that is, it is governed by random variables.

The following information was gathered about the cell:

- 22 locations;
- number of workers (six resources); and
- the processes and their durations (32 activities), shifts and part routes.

Figure 2 shows a screen of the model's animation in Promodel 7.0.

Similarly to the first application, this model was verified and validated through process experts' analysis and comparison between real historical data and simulated results.
The problem is to minimize the production lead time. There are 10 simulation model parameters which can be changed. They represent the number of operators available to perform the cell activities and the mean setup times of four machines (total setup and partial setup). Table 2 presents these parameters. It can be observed that in this case the factors related to the simulation are represented by both continuous and discrete variables. For the sake of simplicity, the standard deviation of each machine setup time was kept constant.

The optimum set of parameters is determined by three approaches similar to those of the first application.

6. Experimentation

Initially, the experiments are planned to identify the most statistically significant model factors. After that, the experiments generate an objective function that is used in the optimization via Solver. Then, these most statistically significant factors are used as input data for optimization via Simrunner. Also, the model is optimized using all factors as input data in order to compare the results.

Figure 2 Screen of the multinational automotive plant model's animation.


Table 2 Variable factor assignment for the multinational automotive plant application

Variable  Factor                                         Low level (−)  High level (+)
A         Quantity of operators                          2              4
B         Mean time of the processing of the operators   2              1
C         Mean time of the total setup to machine_01     109            83
D         Mean time of the partial setup to machine_01   39             21
E         Mean time of the total setup to machine_02     154            105
F         Mean time of the partial setup to machine_02   80             55
G         Mean time of the total setup to machine_03     152            115
H         Mean time of the partial setup to machine_03   55             35
J         Mean time of the total setup to machine_04     74             46
K         Mean time of the partial setup to machine_04   49             28

6.1. Identifying important factors

According to Law and Kelton (2000), in simulation, experimental designs provide a way to decide which specific configurations to simulate before the runs are performed, so that the desired information can be obtained with the lowest number of simulation runs. For instance, considering the second application, where there are 10 factors, a full factorial experiment would require 2^10 = 1024 runs. Therefore, a screening experiment must be considered.

Screening or characterization experiments are experiments in which many factors are considered and the objective is to identify those factors (if any) that have large effects (Montgomery, 2001). Typically, a screening experiment involves using fractional factorial designs and is performed in the early stages of the project, when many factors are likely to have little or no effect on the response (Montgomery, 2001). According to this author, in this situation it is usually best to keep the number of factor levels low.

6.2. Brazilian weapon plant

The experimental design adopted here was a two-level nine-factor fractional factorial with resolution IV. Resolution IV means that no main effect is aliased with any other main effect or with any two-factor interaction, but two-factor interactions are aliased with each other (Montgomery, 2001). The available resolution IV designs are the 2^(9-3)_IV with 64 runs and the 2^(9-4)_IV with 32 runs.

The 2^(9-4)_IV design was chosen because it has fewer runs than the 2^(9-3)_IV design. If necessary, the chosen design collapses into a full factorial in five or fewer factors. As preliminary studies have shown that five factors (C, D, E, G and H) are critical to cell performance, this design works well. The generators for this design are F = BCDE, G = ACDE, H = ABDE and J = ABCE, and the defining relation is I = BCDEF = ACDEG = ABDEH = ABCEJ = ABFG = ACFH = ADFJ = BCGH = BDGJ = CDHJ = DEFGH = CEFGJ = BEFHJ = AEGHJ = ABCDFGHJ. Table 1 shows the factor assignment to the variables of the design.
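To make the construction concrete, the 32-run principal fraction can be generated from the base factors A to E and the generators above. This is a minimal sketch, not the authors' code:

```python
from itertools import product

# Base factors of the 2^(9-4) design and its generators (principal fraction)
base = ["A", "B", "C", "D", "E"]
generators = {"F": "BCDE", "G": "ACDE", "H": "ABDE", "J": "ABCE"}

runs = []
for levels in product((-1, 1), repeat=len(base)):
    run = dict(zip(base, levels))
    for name, word in generators.items():
        value = 1
        for factor in word:
            value *= run[factor]  # derived column = product of its parent columns
        run[name] = value
    runs.append(run)

print(len(runs))  # 32 runs covering nine factors
```

Each derived column is the elementwise product of its generator word, so the fraction inherits the defining relation stated above.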

Table 3 shows the design matrix for the principal fraction with the results obtained for each run. Analyzing the response variable (Profit) contained in this table, it can be observed that this response variable is not normal. The aforementioned response (Profit) was determined by multiplying the monthly production by a unitary profit. Mathematically, the resulting output variable can be interpreted as a number of units produced in a month, which is typically a Poisson distribution case. According to many researchers (Bisgaard and Fuller, 1994; Lewis et al, 2000; Montgomery, 2001), when counts are used as the experimental response, the assumption of constant variance made in all standard analyses is violated. A common method for dealing with this problem is to transform the data before the significance analysis, so that the assumption of constant variance becomes more plausible. To verify whether the experimental data really follow a Poisson distribution, as discussed earlier, a goodness-of-fit test based on the chi-square distribution can be applied (Haldar and Mahadevan, 2000). In this test, the null hypothesis (H0) indicates that the observed data follow a Poisson distribution. To obtain the chi-square statistic it is necessary to calculate the probability function of the Poisson distribution, which can be written as Equation (1):
P(X = n_i) = e^(-λ) λ^(n_i) / n_i!,   X = 0, 1, 2, …   (1)

To accomplish the aims of the chi-square test, the expected frequency E_i of the data can be calculated as:

E_i = N · e^(-λ) λ^(n_i) / n_i!   (2)

In Equation (2), N is the total number of observations in the sample (N = 32 in this case), λ is the mean of the Poisson distribution and n_i is the number of observations counted for each class.
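Equations (1) and (2) simply scale the Poisson probability function by N. As a minimal sketch (the value of λ below is hypothetical):

```python
import math

def poisson_pmf(n, lam):
    """Poisson probability function of Equation (1)."""
    return math.exp(-lam) * lam ** n / math.factorial(n)

N = 32
lam = 3.0  # hypothetical mean, for illustration only
expected = [N * poisson_pmf(n, lam) for n in range(5)]  # Equation (2) per class
print(round(expected[0], 3))  # 1.593
```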
For the specific case of the goodness-of-fit test, where there is only one column of data, the chi-square statistic becomes:

χ²_T = Σ_{i=1}^{r} (n_i − E_i)² / E_i  >  χ²_{m−1−k, 1−α}   (3)

In Equation (3), n_i is the frequency observed in the sample, m is the number of classes used in the distribution

Table 3 The 2^(9-4)_IV design matrix for principal fraction and results

Run  Profit (Y)  Transformed Y
1    280 036     −0.24076
2    299 418     0.56152
3    299 518     0.56448
4    280 236     −0.22885
5    314 818     0.94377
6    295 436     0.43683
7    315 036     0.94834
8    295 654     0.44403
9    299 418     0.56152
10   280 036     −0.24076
11   280 136     −0.23479
12   260 854     −2.36766
13   295 436     0.43683
14   276 054     −0.50647
15   295 654     0.44403
16   315 036     0.94834
17   309 109     0.81693
18   270 345     −1.02157
19   270 545     −0.99989
20   270 445     −1.01069
21   285 845     0.06364
22   285 745     0.05902
23   391 945     1.93948
24   391 945     1.93948
25   270 345     −1.02157
26   270 345     −1.02157
27   270 545     −0.99989
28   270 445     −1.01069
29   285 945     0.06824
30   285 845     0.06364
31   398 545     1.99412
32   359 781     1.62554

Table 4 Goodness-of-fit test for Poisson distribution in profit data

Classes           Observed  Poisson probability  Expected  Contribution to chi-square
≤ 299 418         23        0.733295             23.4654   0.00923
299 419-299 517   0         0.055936             1.7900    1.78996
≥ 299 518         9         0.210769             6.7446    0.75421
Total             32                                       2.55341

table (see Table 4) and k is the number of parameters of the Poisson distribution. Table 4 presents the results. Notice that χ²_T = 2.5534 < χ²_{1, 0.95} = 3.8415, with a p-value of 0.110. This p-value for the goodness-of-fit test is not low enough to reject the null hypothesis of a good fit. At an α-level of 0.05, the data show to be well modelled by the Poisson distribution. Then, to avoid problems with the non-normality and instability of variance provoked by the adoption of a Poisson variable as an experimental response, the Johnson transformation (Chou et al, 1998) was applied to the original data (shown in Figure 3). As an adequate response is obtained, the experimental analysis can be done using the classical methods, like ANOVA and ordinary least squares regression. Lewis et al (1999-2000), however, point out that the transformation changes the nature of the response, or in other words, it changes its units, which constitutes a further difficulty. To surpass this barrier, this work adopted a correlation analysis between the original and transformed responses. In this sense, as a positive correlation between the output variables was identified, the best levels determined for the transformed variable are the same as the ones for the original response.
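As a check, the goodness-of-fit computation can be reproduced from the Table 4 values in a few lines. This is a sketch that treats the class probabilities as given:

```python
import math

observed = [23, 0, 9]                    # class counts from Table 4
probs = [0.733295, 0.055936, 0.210769]   # Poisson class probabilities from Table 4
N = sum(observed)                        # 32 observations

expected = [N * p for p in probs]        # Equation (2)
chi2_T = sum((o - e) ** 2 / e for o, e in zip(observed, expected))  # Equation (3)

# one estimated parameter (lambda): df = m - 1 - k = 3 - 1 - 1 = 1
# survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2))
p_value = math.erfc(math.sqrt(chi2_T / 2))
print(round(chi2_T, 4), round(p_value, 3))  # 2.5534 0.11
```

The statistic and p-value match the values reported above, so the null hypothesis of a Poisson fit is not rejected at α = 0.05.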
The 2^(9-4)_IV fractional factorial design used in this research is unreplicated. Therefore, it is not possible to assess the

Figure 3 Johnson transformation for Profit. The probability plot of the original data fails the Anderson-Darling normality test (N = 32, AD = 2.809, p < 0.005), whereas the transformed data pass it (N = 32, AD = 0.485, p = 0.212). The selected transformation type is SU (Z = 0.66), with transformation function −2.01786 + 1.09517 · asinh((X − 263112) / 6951.30).
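For reference, the fitted SU transformation reported in Figure 3 can be applied directly; the parameter names below are illustrative, with values taken from the fitted function:

```python
import math

def johnson_su(x, gamma=-2.01786, eta=1.09517, epsilon=263112.0, lam=6951.30):
    """Johnson SU transformation from Figure 3: gamma + eta * asinh((x - epsilon) / lam)."""
    return gamma + eta * math.asinh((x - epsilon) / lam)

print(round(johnson_su(280036), 4))  # ≈ -0.2407, run 1 of Table 3 up to parameter rounding
```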

significance of the main and the interaction effects using the conventional bilateral t-test or ANOVA. The standard analysis procedure for an unreplicated two-level design is the normal plot of the estimated factor effects. However, these designs are so widely used in practice that many formal analysis procedures have been proposed to overcome the subjectivity of the normal probability plot (Montgomery, 2001). Ye and Hamada (2001), for instance, recommend the use of Lenth's method, a graphical approach based on a Pareto Chart for the error term. If the error term has one or more degrees of freedom, the line on the graph is drawn at t, where t is the (1 − α/2) quantile of the t-distribution with degrees of freedom equal to (number of effects)/3 (as shown in Figure 4). The vertical line in the Pareto Chart is the margin of error, defined as ME = t × PSE. Lenth's pseudo standard error (PSE) is based on the sparsity-of-effects principle, which assumes that the variation in the smallest effects is due to random error. To calculate the PSE, the following steps are necessary: (a) calculate the absolute values of the effects; (b) calculate S, which is 1.5 × the median of the absolute effects; (c) calculate the median of the absolute effects that are less than 2.5 × S; and (d) calculate the PSE, which is 1.5 × the median obtained in step (c).
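Steps (a) to (d) can be sketched as follows; the effect estimates shown are hypothetical:

```python
import statistics

def lenth_pse(effects):
    """Lenth's pseudo standard error, following steps (a)-(d) above."""
    abs_effects = [abs(e) for e in effects]             # (a) absolute effects
    s = 1.5 * statistics.median(abs_effects)            # (b) initial scale S
    trimmed = [e for e in abs_effects if e < 2.5 * s]   # (c) effects below 2.5 * S
    return 1.5 * statistics.median(trimmed)             # (d) PSE

# hypothetical effect estimates: one large effect among noise-sized ones
effects = [1.40, -0.05, 0.12, -0.08, 0.10, -0.15, 0.06, -0.11]
print(round(lenth_pse(effects), 4))  # 0.15
```

The trimming in step (c) removes the large, presumably real effects, so the PSE estimates the noise scale from the remaining ones.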
Examining the Pareto Chart in Figure 4, it can be noted that at a significance level of 10% only factor C (quantity of equipment 3) and the interactions BC and CE are significant. According to Montgomery (2001), if the experimenter can reasonably assume that certain high-order interactions are negligible, information on the main effects and low-order interactions may be obtained. Moreover, when there are several variables, the system or process is likely to be driven primarily by some of the main effects and low-order interactions. Therefore, taking into consideration that factors A, D, F, G, H and J are not significant, though they are necessary to the simulation process, in the next step of the experimentation process these factors must be kept at the low level (1), since these levels present higher profit values. Factors B and E were kept in the set of meaningful input variables because they appear in the interactions BC and CE, respectively. The main effects for the transformed response are shown in Figure 5. It is noticed that factors B (quantity of equipment 2) and C (quantity of equipment 3) have positive effects: they increase profit when set to the high level. The other factors have negative effects, decreasing profit when set to the high level. The analysis of the interactions in a fractional factorial is not recommended, as the confounding among the effects is considerable. However, in a strict sense, it is possible to use the aliases (confounding) among the main and interaction effects to explain why only factors B, C and E were chosen to compose the new design. Examining the confounding among the statistically significant terms C, BC and CE and the other factors and interactions of the 2^(9-4)_IV fractional factorial design, it is possible to notice that factor C is aliased with the three-factor interactions AFH, BGH and DHJ, which may be neglected according to the resolution of the chosen design and the sparsity-of-effects principle (Montgomery, 2001). The two-way interaction CE is aliased with the three-way interactions ABJ, ADG, BDF and FGJ, which may also be discarded. The interaction BC is aliased with the two-way interaction GH and the three-way


Figure 4 Pareto Chart for the 2^(9-4)_IV fractional design with significance level α = 10% (Lenth's PSE = 0.275013; margin-of-error reference line at 0.497).

Figure 5 Main effects plot for transformed Profit.

interactions AEJ and DEF. Neglecting these three-way interactions and considering that GH is formed by two weak effects (G and H), besides being aliased with three-way interactions, there is no reason to believe that any of these factors is statistically significant. The alias structure used in this analysis is available in many statistics packages.

As only factors B, C and E have presented some evidence of significance, the 2^(9-4)_IV design can be converted into a replicated 2^3 full factorial design. Figures 6 and 7 show the analysis for this new design, where it is possible to notice that factors B and C must be changed to the high level (2) to maximize the transformed response and, consequently, the

Figure 6 Pareto Chart for the full factorial design with significance level α = 10% (standardized-effect reference line at 1.711).

Figure 7 Factorial and interaction plots for the 2^3 full factorial design.

process Profit. Although the main effect E is not statistically significant, observing the interaction plot of Figure 7 it is clear that, choosing factor E at the high level combined with factors B and C at their respective high levels, the response increases considerably. However, in order to check whether this is the best solution, an optimization using these three factors is performed. Also, an optimization using all nine factors is performed to compare their performances.
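The coded-factor objective function used in the Solver optimization can be obtained by ordinary least squares on the 2^3 design; this is a sketch, and the response values below are purely hypothetical:

```python
import numpy as np
from itertools import product

# 2^3 full factorial in coded units for factors B, C, E
runs = np.array(list(product((-1.0, 1.0), repeat=3)))
B, C, E = runs.T

# design matrix: intercept, main effects and two-factor interactions
X = np.column_stack([np.ones(8), B, C, E, B * C, C * E, B * E])

# hypothetical transformed-profit response at the eight design points
y = 0.5 + 0.8 * B + 1.2 * C + 0.6 * B * C + 0.3 * C * E

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 3))  # intercept, B, C, E, BC, CE, BE coefficients
```

Because the columns of a two-level factorial design matrix are orthogonal, the least-squares fit recovers the generating coefficients exactly.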

6.3. Multinational automotive plant

The experimental design adopted here was a two-level 10-factor fractional factorial with resolution IV. The available resolution IV designs are the 2^(10-5)_IV with 32 runs and the 2^(10-4)_IV with 64 runs. The 2^(10-4)_IV design was chosen because it has a higher number of experiments (Montgomery, 2001). Table 2 shows the factor assignment to the variables of the design.


Figure 8 Pareto Chart for the 2^(10-4)_IV fractional design with significance level α = 5% (Lenth's PSE = 0.134063; margin-of-error reference line at 0.279). Factor names: A = num_oper, B = pro_time, C = TS_01, D = PS_01, E = TS_02, F = PS_02, G = TS_03, H = PS_03, J = TS_04, K = PS_04.

The design matrix for the main fraction with the results obtained was omitted because it is not the centre of interest of this paper. Analyzing the response variable (production lead time), it can be verified through a normality test that this response variable is normal.

Similar to the first application, the 2^(10-4)_IV fractional factorial design used here is unreplicated. Examining the Pareto Chart in Figure 8, it can be noted that at a significance level of 5% only factors A (number of operators) and E (mean time of the total setup for machine 2) and the interactions AC, FH and EG are significant. Therefore, taking into consideration that factors B, C, D, G, J and K are not significant, though they are necessary to the simulation process, in the next step of the experimentation process these factors must be kept at the low level. Factors F and H were kept in the set of statistically significant input variables because they appear in the interaction FH.
The aliases (confounding) among the main and interaction effects are used to explain why only factors A, E, F and H were chosen to compose the new design. Examining the confounding among the statistically significant terms A, E, AC, FH and EG and the other factors and interactions of the 2^(9-3) resolution IV fractional factorial design, it is possible to notice that factor C is aliased with the three-factor interaction BGH and with the four-factor interactions ABEK, ADFH, BDFG and EGHK, which may be neglected according to the resolution of the chosen design and the sparsity of effects principle (Montgomery, 2001). Similarly, factor E is aliased with a four-factor interaction. The two-way interaction AC is aliased with the three-way interactions BEK and DFH and with the four-factor interactions BCGH and EFGJ, which may also be discarded. The interaction EG is aliased with the three-way interactions CHK and DHJ and with the four-factor interactions ACFJ and ADFK, and these three-way and four-way interactions are likewise neglected. Similarly, the two-way interaction FH is aliased with a three-way and a four-factor interaction.
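To make the word multiplication behind these alias statements concrete, the following sketch (not from the paper) builds the defining contrast subgroup from generator words and lists the aliases of an effect. The generators shown are for a textbook 2^(6-2) design, since the paper's 2^(9-3) generators are not listed here:

```python
def word_mul(w1, w2):
    # Multiply two effect "words" mod 2: letters appearing twice cancel out.
    return frozenset(w1) ^ frozenset(w2)

def aliases(effect, generators):
    # Build the full defining contrast subgroup from the generator words,
    # then multiply the effect by every non-identity word in it.
    defining = {frozenset()}
    for g in generators:
        defining |= {word_mul(d, g) for d in defining}
    return sorted(''.join(sorted(word_mul(effect, d))) for d in defining if d)

# Illustrative generators for a 2^(6-2) design: I = ABCE = BCDF (hence ADEF).
print(aliases('A', ['ABCE', 'BCDF']))  # ['ABCDF', 'BCE', 'DEF']
```

Reading the output against the defining relation I = ABCE = BCDF = ADEF confirms that main effect A is confounded only with three-factor and higher interactions, exactly the kind of argument used above to keep A, E, F and H.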
The main effects for production lead time are shown in Figure 9. It can be noticed that factor A (number of operators) has a strong negative effect on production lead time: the production lead time decreases when this factor is increased (high level). Factor E (mean time of the total machine 2 setup) also reduces the production lead time when it is increased (high level).
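For readers less familiar with main-effects plots, the quantity plotted for each factor is simply the mean response at the high level minus the mean response at the low level. A minimal sketch with made-up responses (the paper's run data are not reproduced here):

```python
def main_effect(levels, y):
    # Main effect of a two-level factor: mean(y at +1) - mean(y at -1).
    hi = [r for lv, r in zip(levels, y) if lv == +1]
    lo = [r for lv, r in zip(levels, y) if lv == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Toy 2^2 runs; column A plays the role of the coded number-of-operators factor.
A = [-1, +1, -1, +1]
lead = [10.4, 8.8, 10.2, 9.0]
print(main_effect(A, lead))  # negative effect: raising A lowers lead time
```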
As only factors A, E, F and H have presented some evidence of significance, the 2^(9-3) resolution IV design can be converted into a replicated 2^4 full factorial design. Figures 10 and 11 show the analysis for this new design, where it is possible to notice that factors A, E, F and H must be changed to the high level to minimize the lead time.
This finding can be explained in practice as follows: the company needs to reduce production lead time to increase output and respond to its customers. Ten factors could be changed to reach this objective; however, only four of them are significant in reducing production lead time. Therefore, the company can reach its objective by focusing its efforts only on adding new operators and on reducing three setup times of two different machines (2 and 3).
A mathematical model can be obtained from the full factorial analysis, as shown in Equation (4), where the characters represent the coded values of the respective factors. A minimum production lead time can be calculated


Figure 9 Main effects plot for lead time (panels: num_oper, pro_time, TS_01 to TS_04 and PS_01 to PS_04).

Figure 10 Pareto chart for the full factorial design with significance level a = 5% (A = num_oper, B = TS_02, C = PS_02, D = PS_03).

by using linear programming via Solver from Excel, considering Equation (4) as the objective function and the values shown in Table 2 as the constraints on the statistically significant factors (A, E, F and H). This procedure points out the minimum production lead time as 8.59 h. The values of variables A, E, F and H are shown in Table 5.

Lead time = 9.8656 - 0.8325A - 0.1722E - 0.0087F - 0.1278H + 0.1478FH    (4)
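As an illustrative cross-check (not part of the paper's procedure), a linear model with a single two-factor interaction attains its extremes at vertices of the coded cube, so the coded-unit minimum of the fitted model can be found by brute-force enumeration. The coefficients below are the ones reconstructed from the fitted model:

```python
from itertools import product

def lead_time(A, E, F, H):
    # Fitted coded-unit model for production lead time.
    return 9.8656 - 0.8325*A - 0.1722*E - 0.0087*F - 0.1278*H + 0.1478*F*H

# Evaluate every corner of the coded cube [-1, +1]^4 and keep the smallest.
best = min(product([-1, 1], repeat=4), key=lambda p: lead_time(*p))
print(round(lead_time(*best), 2))  # 8.59, matching the Solver result
```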
In order to make a comparison between the current
approach and the genetic algorithms (GAs) approach, an



num_oper

Interaction Plot (data means) for Lead Time


154

105

80

55

55

35

TS_02

10.5
11

10

10.0

9
11

9.5

10

TS_02

9
11

10

PS_02

PS_03

Mean of Production Lead Time

num_oper

9.0
2

154

105
PS_03

PS_02
10.5

10.0

9.5

9.0
80

Figure 11

55

55

35

Factorial and interaction plots for 2 full factorial design.

Table 5 Best solution for optimization using four factors for the multinational automotive plant application

Parameter                               Value
Quantity of operators                   3
Total setup time of the machine_02      N(153; 22)
Partial setup time of the machine_02    N(73; 46)
Partial setup time of the machine_03    N(48; 4)

optimization routine was developed using the SimRunner package, considering the significant factors. An optimization using all ten factors was also performed to compare their performances.

Figure 12 Performance measures plot for optimization using three factors.

6.4. Optimization using the statistically significant factors

Brazilian weapon factory. In this optimization phase, the three factors B, C and E are selected as inputs. The values these parameters can assume are 1 or 2. The other factors are kept at their original values. The objective function is to maximize profit, as presented earlier in Section 4. After 8 runs, SimRunner stops the search. The best result found is 411 327, which corresponds to experiment 6, as presented in Figure 12, and the corresponding parameter values are presented in Table 6.

Table 6 Best solution for optimization using three factors for the Brazilian weapon factory application

Parameter     Value
Neq_Op080     2
Neq_Op082     2
Neq_Op120     2

Multinational automotive plant. In this case, factors A, E, F and H are selected as inputs. The values these

parameters can assume were shown in Table 2. The other factors are kept at their original values. The objective function is to minimize the production lead time. After 213 runs (16 min), SimRunner stops the search. The best result found is 8.28 h and the corresponding parameter values are presented in Table 5.


6.5. Optimization using all factors


Brazilian weapon factory. In this optimization phase, the nine parameters presented in Table 1 are selected as inputs. The values these parameters can assume are 1 or 2. The objective function is to maximize profit, as presented earlier.
After 98 runs, SimRunner stops the search. The best result found is 411 327, which corresponds to experiment 55, as presented in Figure 13, and the corresponding parameter values are presented in Table 7.
Multinational automotive plant. The 10 parameters presented in Table 2 are selected as inputs. The values these parameters can assume were also shown in Table 2. The objective function is to minimize the production lead time, as presented earlier.
After 2571 runs (3 h 20 min), SimRunner stops the search. The best result found is 7.42 h and the corresponding parameter values are presented in Table 8.
6.6. Result analysis
Brazilian weapon factory. Table 9 shows the results obtained by the three procedures. The three procedures lead to the same results, indicating that there is coherency among them. Taking into consideration the number of runs necessary to optimize the model, the advantage of first determining the main parameters and then proceeding with the optimization using only them, instead of optimizing over all factors, is clear. The former demanded 32 + 8 = 40 runs to obtain the best result against the 98 runs of the latter, a reduction of 59% in the number of runs. Considering only the runs used in the Factorial Design, 32 runs versus the 98 runs from the optimization using all factors, the reduction is about 67%.
For this application, since the optimization factor levels and the Factorial Design levels are equal, the optimization using three factors seems redundant. However, in other applications, where the optimization factors could have several levels or even be represented by continuous variables, the Factorial Designs would only identify the main factors without specifying their optimum values.

Table 8 Best solution for optimization using all factors for the multinational automotive plant application

Parameter                               Value
Quantity of operators                   4
Processing time of the operators        N(1; 0.30)
Total setup time of the machine_01      N(102; 42)
Partial setup time of the machine_01    N(21; 12)
Total setup time of the machine_02      N(126; 22)
Partial setup time of the machine_02    N(55; 46)
Total setup time of the machine_03      N(119; 41)
Partial setup time of the machine_03    N(35; 4)
Total setup time of the machine_04      N(54; 33)
Partial setup time of the machine_04    N(33; 15)

Figure 13 Performance measures plot for optimization using all factors.

Table 7 Best solution for optimization using all factors for the Brazilian weapon factory application

Parameter     Value
Neq_Op050     1
Neq_Op052     1
Neq_Op070     1
Neq_Op080     2
Neq_Op082     2
Neq_Op100     1
Neq_Op110     1
Neq_Op120     2
Neq_Op170     1

Table 9 Results from the three procedures for the Brazilian weapon factory application

Parameter          Design of      Optimization using    Optimization using
                   experiments    three factors         all factors
A                  1              1 (*)                 1
B                  1              1 (*)                 1
C                  1              1 (*)                 1
D                  2              2                     2
E                  2              2                     2
F                  1              1 (*)                 1
G                  1              1 (*)                 1
H                  2              2                     2
J                  1              1 (*)                 1
Result (Profit)    411 327        411 327               411 327
Number of runs     32             8                     98

Note: The parameters identified with (*) were not used as input for optimization. They were kept at the low level (1).


Table 10 Results from the three procedures for the multinational automotive plant application

Parameter                   Design of experiments    Optimization using    Optimization using
                            and Solver               four factors          all factors
A                           2                        3                     4
B                           2 (*)                    2 (*)                 1
C                           109 (*)                  109 (*)               102
D                           39 (*)                   39 (*)                21
E                           105                      153                   126
F                           80                       73                    55
G                           152 (*)                  152 (*)               119
H                           35                       48                    35
J                           74 (*)                   74 (*)                54
K                           49 (*)                   49 (*)                33
Lead time (hour)            8.59                     8.28                  7.42
Number of runs              64                       213                   2571
Computational time (hour)   3 + 0.2 = 3.2            3 + 0.3 = 3.3         3.2

Note: The parameters identified with (*) were not used as input for optimization. They were kept at the low level.

Multinational automotive plant. Table 10 summarizes the results obtained by the three procedures. The optimization with all factors generates the lowest production lead time and spent 3.2 h of optimization, whereas the optimization with only the statistically significant factors led to a production lead time 12% higher (52 min), spending 0.3 h in optimization plus 3 h in experimentation through factorial designs. Finally, the optimization through the mathematical model obtained from the factorial design and solved by Solver led to a production lead time 16% higher (70 min) than the first procedure (using all factors), taking 0.2 h in optimization plus 3 h in experimentation through factorial design. Taking into account the objective of the analysis, that is, to find the best production lead time, the best procedure is the optimization with all the factors, and the computational time spent is practically the same as in the other procedures.
Additionally, the human effort spent in the optimization with all factors is the lowest, compared to building 64 experiments (simulation models) for the factorial designs. However, in practice, to obtain the results indicated by the optimization with all factors, it would be necessary to modify eight setup times, one processing time and the number of operators. This means that the company would spend more money and time changing all these factors than changing only the four statistically significant factors indicated by the factorial designs (three setup times and the number of operators). Thus, the factorial designs can be used to point out the main parameters, and then the optimization can be used to specify their optimum values.

7. Conclusions
The objective of this work was to show how a sensitivity analysis using Factorial Designs can help simulation optimization to reach the best solution. Initially, to accomplish this objective, a fractional factorial design was used to identify the most statistically significant effects of two models. As the experimental response variable of the first application does not follow a normal distribution, it was necessary to apply a transformation to make it normal. As the response variable follows a Poisson distribution, the Johnson transformation was used. The analysis was performed using Lenth's method, because the fractional factorial design was unreplicated, which made it impossible to use the bilateral t-test or ANOVA to assess the significance of the main and interaction effects.
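For reference, Lenth's method estimates a pseudo standard error (PSE) directly from the effect contrasts of an unreplicated design. A minimal sketch follows, with illustrative contrasts (not the paper's) and the critical value left as a user-supplied parameter, since it depends on the number of contrasts:

```python
import statistics

def lenth_pse(contrasts):
    # Lenth (1989): initial scale s0, then a trimmed re-estimate using only
    # contrasts that look like pure noise (|c| < 2.5 * s0).
    abs_c = [abs(c) for c in contrasts]
    s0 = 1.5 * statistics.median(abs_c)
    return 1.5 * statistics.median([c for c in abs_c if c < 2.5 * s0])

# Toy contrasts: one clearly active effect among noise-like ones.
effects = [-4.1, 0.2, -0.3, 0.5, 0.1, -0.2, 0.4]
margin = 2.30 * lenth_pse(effects)  # 2.30 is an illustrative t-like critical value
print([c for c in effects if abs(c) > margin])  # only the active effect survives
```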
Afterwards, three optimization trials were carried out: two using the factors previously identified, and one using all factors of the model. For these applications, the advantage of first determining the main factors and then proceeding with the optimization using them (Solver or SimRunner), instead of optimizing over all factors, is clear. Beyond this discussion, it must be pointed out that the DOE approach improves the understanding of the manufacturing system, generating further knowledge about the importance and significance of each resource used, which in turn favors the improvement of the decision-making process. Despite the relative time spent in the construction of the models, this twofold approach (DOE/Simulation) elucidates not only how many resources are needed to optimize a system, improving its productivity, increasing its profits and reducing its costs, but also how each resource can be efficiently changed and employed.

Acknowledgements. The authors acknowledge CNPq, PadTec Optical Components and Systems, CAPES and FAPEMIG for the support to this research study.

References
Banks J, Carson JS, Nelson BL and Nicol DM (2005). Discrete-Event System Simulation, 4th edn. Prentice-Hall: New Jersey.
Bisgaard S and Fuller H (1994). Analysis of factorial experiments with defects or defectives as the response. Quality Eng 7(2): 429–443.
Chou Y, Polansky AM and Mason RL (1998). Transforming nonnormal data to normality in statistical process control. J Qual Technol 30: 133–141.
Fu MC (2002). Optimization for simulation: Theory vs. practice. INFORMS J Comput 14(3): 192–215.
Haldar A and Mahadevan S (2000). Probability, Reliability and Statistical Methods in Engineering Design, 1st edn. John Wiley & Sons: New York.
Harrell C, Ghosh BK and Bowden R (2000). Simulation Using ProModel, 3rd edn. McGraw-Hill: Boston.
Kleijnen JPC (1995). Theory and methodology: Verification and validation of simulation models. Eur J Opl Res 82: 145–162.
Kleijnen JPC (1998). Experimental design for sensitivity analysis, optimization, and validation of simulation models. In: Banks J (ed). Handbook of Simulation. John Wiley & Sons: New York, pp 173–223.
Law AM and Kelton WD (2000). Simulation Modeling and Analysis, 3rd edn. McGraw-Hill: New York.
Law AM and McComas MG (1998). Simulation of manufacturing systems. In: Medeiros DJ, Watson EF and Manivannan MS (eds). Proceedings of the 1998 Winter Simulation Conference. Institute of Electrical and Electronics Engineers: Piscataway, New Jersey, pp 49–52.
Lewis SL, Montgomery DC and Myers RH (1999–2000). The analysis of designed experiments with non-normal responses. Quality Eng 12(2): 225–243.
Montgomery DC (2001). Design and Analysis of Experiments, 5th edn. John Wiley & Sons: New York.
Montgomery DC and Runger GC (2003). Applied Statistics and Probability for Engineers, 2nd edn. John Wiley & Sons: New York.
O'Kane JF, Spenceley JR and Taylor R (2000). Simulation as an essential tool for advanced manufacturing technology problems. J Mater Process Tech 107: 412–424.
Sargent RG (1998). Verification and validation of simulation models. In: Proceedings of the 1998 Winter Simulation Conference.
Shannon RE (1998). Introduction to the art and science of simulation. In: Medeiros DJ, Watson EF and Manivannan MS (eds). Proceedings of the 1998 Winter Simulation Conference. Institute of Electrical and Electronics Engineers: Piscataway, New Jersey, pp 7–14.
Silva WA and Montevechi JAB (2004). Verificação do custeio de uma célula de manufatura usando simulação e otimização. In: Anais do XXXVI Simpósio Brasileiro de Pesquisa Operacional. SBPO: São João del Rei, Minas Gerais. CD-ROM.
Strack J (1984). GPSS: Modelagem e Simulação de Sistemas. LTC: Rio de Janeiro.
Ye KQ and Hamada M (2001). A step-down Lenth method for analyzing unreplicated factorial designs. J Qual Technol 33(2): 140–153.

Received 27 September 2007; accepted 29 July 2009 after two revisions
