
Optimum Seeking

The ultimate, perhaps, in analyzing a simulation model is to find a combination of the input
factors that optimizes (maximizes or minimizes, as appropriate) a key output performance
measure. For example, there may be an output of direct economic importance, such as profit or
cost, which we would like to maximize or minimize over all possible values of the input factors.

In general, the input factors in question could include discrete quantitative variables such as the
number of machines at a work station in a manufacturing system, continuous quantitative
variables such as the mean processing time for a machine, or qualitative variables such as the
choice of a queue discipline. Though it would be possible in a simulation study to seek optimal
values of both controllable and uncontrollable input variables, the primary focus in most
applications is on input variables that are controllable as part of a facility design or an
operational policy.

At first glance, this optimization goal might seem quite similar to the goals of selecting a best
system, as discussed in Sec. 10.4. There, however, we assumed that the alternative system
configurations were simply given. But now we are in a much less structured situation where we
have to decide what alternative system configurations to simulate as well as how to evaluate
and compare their results.

It is helpful to think of this problem in terms of classical mathematical optimization, e.g., linear
or nonlinear programming. We have an output performance measure from the simulation, say
R, whose value depends on the values of the input factors, say v_1, v_2, ..., v_k; these input factors are
the decision variables for the optimization problem. Since R is the output of a simulation, it
will generally be a random variable subject to sampling variability. The goal is to maximize or minimize the
objective function E[R(v_1, v_2, ..., v_k)] over all possible combinations of v_1, v_2, ..., v_k. There may
be constraints on the input-factor combinations, such as range constraints of the form

l_i ≤ v_i ≤ u_i

for constants l_i (lower bound) and u_i (upper bound), as well as more general constraints, perhaps
p linear constraints of the form

a_{j1} v_1 + a_{j2} v_2 + ⋯ + a_{jk} v_k ≤ c_j

for constants a_{ji} and c_j, for j = 1, 2, ..., p. For instance, if v_1, v_2, v_3, and v_4 are the numbers of
machines of types 1, 2, 3, and 4 that we need to decide to buy, a_{1i} is the cost of a machine of type
i, and c_1 is the amount budgeted for machine purchases, then in choosing the values of the v_i's
we would have to obey the machine-budget constraint

a_{11} v_1 + a_{12} v_2 + a_{13} v_3 + a_{14} v_4 ≤ c_1
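
As a purely numerical illustration, this constraint is easy to check for a candidate purchase plan. The unit costs and budget below are hypothetical figures, not values from the text:

```python
# Hypothetical unit costs a_11, ..., a_14 and budget c_1 (made-up figures).
unit_costs = [12_000, 8_500, 20_000, 5_000]
budget = 150_000

def within_budget(v):
    """Machine-budget constraint: a_11 v_1 + a_12 v_2 + a_13 v_3 + a_14 v_4 <= c_1."""
    return sum(a * x for a, x in zip(unit_costs, v)) <= budget

print(within_budget([3, 4, 2, 6]))   # True:  total cost 140,000 <= 150,000
print(within_budget([5, 4, 3, 6]))   # False: total cost 184,000 >  150,000
```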


In general, if the output R is, say, profit that we would seek to maximize, the problem can be
formally stated as

max_{v_1, v_2, ..., v_k} E[R(v_1, v_2, ..., v_k)]

subject to

l_1 ≤ v_1 ≤ u_1
l_2 ≤ v_2 ≤ u_2
⋮
l_k ≤ v_k ≤ u_k
a_{11} v_1 + a_{12} v_2 + ⋯ + a_{1k} v_k ≤ c_1
a_{21} v_1 + a_{22} v_2 + ⋯ + a_{2k} v_k ≤ c_2
⋮
a_{p1} v_1 + a_{p2} v_2 + ⋯ + a_{pk} v_k ≤ c_p
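
To make the structure concrete, the following Python sketch writes down a problem of exactly this form, continuing the hypothetical machine-purchase numbers used above. The stand-in function simulate_profit is a made-up noisy function rather than a real simulation model, and every other name and figure in the sketch is an illustrative assumption:

```python
import random

# Hypothetical problem data, continuing the machine-purchase illustration:
# bounds l_i <= v_i <= u_i and p linear constraints (here p = 1, the budget).
lower = [0, 0, 0, 0]                   # l_1, ..., l_k
upper = [10, 10, 10, 10]               # u_1, ..., u_k
A = [[12_000, 8_500, 20_000, 5_000]]   # rows (a_j1, ..., a_jk)
c = [150_000]                          # right-hand sides c_j

def feasible(v):
    """True if v satisfies both the range constraints and the linear constraints."""
    bounds_ok = all(l <= x <= u for l, x, u in zip(lower, v, upper))
    linear_ok = all(sum(a * x for a, x in zip(row, v)) <= cj
                    for row, cj in zip(A, c))
    return bounds_ok and linear_ok

def simulate_profit(v):
    """Stand-in for one replication of the simulation model: returns a single
    noisy observation of R(v).  In a real study this would be a full simulation
    run, and E[R(v)] would not be available in closed form."""
    return sum(1_000 * x - 50 * x * x for x in v) + random.gauss(0, 500)

# The optimization problem: maximize E[R(v)] over all v with feasible(v) == True.
print(feasible([3, 4, 2, 6]))    # True
print(feasible([12, 0, 0, 0]))   # False: v_1 exceeds its upper bound
```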

Solving such a problem in a real simulation context will usually be truly daunting. First, as in any
optimization problem, if the number of decision variables (input factors in the simulation) k is
large, we are looking for an optimal point in k-dimensional space; of course, decades of
mathematical-programming research have been devoted to solving such problems. Second, in
simulation we cannot evaluate the objective function by simply plugging a set of decision-variable
values into a simple closed-form formula; indeed, the entire simulation itself must be run to
produce an observation of the output R in the above notation.
Finally, in a stochastic simulation we cannot evaluate the objective function exactly due to
randomness in the output; one way to ameliorate this problem is to replicate the simulation,
say, n times at an input-factor combination of interest and use the average value of R across these
replications as an estimate of the objective function at that point, with larger n leading to a
better estimate (and, of course, to greater computational effort).
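
For instance, a minimal sketch of this replicate-and-average idea might look as follows; simulate_profit is again a made-up stand-in for a real simulation run, and the standard error reported alongside the estimate shrinks as n grows:

```python
import random
import statistics

def simulate_profit(v):
    """Stand-in for one simulation replication: one noisy observation of R(v)."""
    return sum(1_000 * x - 50 * x * x for x in v) + random.gauss(0, 500)

def estimate_objective(v, n):
    """Estimate E[R(v)] by averaging n independent replications at the
    input-factor combination v; also return the standard error of the average."""
    obs = [simulate_profit(v) for _ in range(n)]
    return statistics.fmean(obs), statistics.stdev(obs) / n ** 0.5

random.seed(1)
for n in (5, 50, 500):   # larger n: better estimate, more computational effort
    mean, se = estimate_objective([3, 4, 2, 6], n)
    print(f"n = {n:3d}: estimated E[R] = {mean:8.1f}  (std. error {se:6.1f})")
```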
