To cite this article: I. M. OVACIK & R. UZSOY (1994) Rolling horizon algorithms for a single-machine dynamic scheduling problem with sequence-dependent setup times, International Journal of Production Research, 32:6, 1243-1263, DOI: 10.1080/00207549408956998
INT. J. PROD. RES., 1994, VOL. 32, No.6, 1243-1263
1. Introduction
The effective control of material movement through manufacturing facilities is
becoming increasingly important in today's highly competitive global markets.
Companies are under pressure to shorten lead times and meet customer due-dates to
maintain high levels of customer satisfaction. Effective management of work-in-process
inventories (WIP) can also give companies significant cost advantages. Hence the
development of scheduling procedures to achieve these advantages is of considerable
economic significance. However, the proven intractability of job-shop scheduling
problems makes it difficult to develop efficient procedures that are applicable to
problems of realistic size. Most practical job-shop scheduling problems have been
addressed using myopic dispatching rules (Bhaskaran and Pinedo 1991). While these
rules are computationally efficient and easy to implement, they may result in poor long-
term performance. In manufacturing environments with heavy competition for
capacity at key resources, scheduling procedures that take a global view of the shop
should result in substantial improvements in performance.
The research we describe in this paper is part of a larger effort to develop a
decomposition methodology for scheduling complex dynamic job shops. These
facilities are characterized by the presence of different types of workcentres, some of
which have sequence-dependent setup times; reentrant product flows, where a job may
return to a machine several times; and due-date related performance measures. We
focus on the performance measure of maximum lateness (L_max), to capture management's
concern with providing consistent levels of customer service. A workcentre may
consist of a single machine, a number of parallel identical machines, or of a batch
processing machine like a heat treatment oven, where a number of jobs are processed
simultaneously as a batch. These problems represent a considerable generalization of
the classical job shop scheduling problem (Baker 1974), which assumes that there are
no sequence-dependent setup times, that each job visits each workcentre exactly once,
that each workcentre consists of a single machine and that the performance measure to
be minimized is makespan.
The obvious difficulty of these problems (Garey and Johnson 1979) has resulted in
their being largely ignored by researchers. However, decomposition methods that
exploit recent developments in information technology offer a promising avenue of
attack on these problems. In addition, decomposition methods allow us to exploit the
special structure present in many industrial contexts, rendering these problems more
amenable to efficient, near-optimal solution procedures than the generic problems on
which much past research has focused.
The decomposition method we propose proceeds in a manner similar to the Shifting
Bottleneck approach of Adams et al. (1988) by decomposing the job shop into a number
of workcentres. These are scheduled in order of criticality until all workcentres have
Downloaded by [University of Cambridge] at 04:50 10 October 2014
problem each job j requires q_j units of time to reach its destination after completing
processing on the machine. The objective is to minimize C_max, where C_max denotes the
time the last job reaches its destination. We shall denote this problem by 1/r_j, q_j/C_max.
This problem is also time-symmetric, in the sense that for any instance P of 1/r_j, q_j/C_max,
we can create another instance P' with release times r'_j = q_j and delivery times q'_j = r_j that
has the same optimal sequence (although in reverse) and C_max value as the original
problem. These results motivate various aspects of our approach in this paper.
The problem of minimizing L_max with sequence-dependent setup times has not been
extensively examined to date. Monma and Potts (1989) present a dynamic program-
ming algorithm and optimality properties for the case of batch setups, where setups
between jobs from the same batch are zero. Picard and Queyranne (1978) model a
related problem as a time-dependent travelling salesman problem and develop a
branch and bound algorithm. Uzsoy et al. (1991) provide a branch and bound
algorithm for 1/prec, s_ij/L_max. For problems with more than fifteen operations,
however, the computational burden of this algorithm increases rapidly. Uzsoy et al.
(1992) develop dynamic programming procedures for the 1/prec, s_ij/L_max problem where
the precedence constraints consist of a number of strings. Unal and Kiran (1992)
consider the problem of determining whether a schedule in which all due dates can be
met exists in a situation without precedence constraints but with batch setups. They
provide a polynomial-time heuristic and an exact algorithm which runs in polynomial
time given a fixed upper bound on the number of setups.
Several authors have suggested heuristics for related problems. Zdrzalka (1992)
considers the 1/r_j, pmtn/L_max problem where the jobs have sequence-independent setup
times. He proves that this problem is NP-hard and presents a heuristic with a tight
worst-case error bound. Uzsoy et al. (1992) analyse the performance of the myopic
Earliest Due Date (EDD) dispatching rule, which gives priority to the available job with the
earliest due date, for the 1/r_j, s_ij/L_max problem. Assuming that the setup times are
bounded by the processing times, i.e. that s_ij ≤ p_j for all j, they develop tight worst-case
error bounds for this heuristic. Sahni and Gonzalez (1976) show that unless P = NP
there can be no polynomial-time heuristic with a constant, data-independent worst-
case error bound for the TSP with arbitrary intercity distances. Since the TSP is a
special case of 1/r_j, s_ij/L_max, this indicates that efficient heuristics with data-independent
worst-case bounds are unlikely to exist for 1/r_j, s_ij/L_max. Ovacik and Uzsoy (1992)
combine the EDD heuristic with a local improvement procedure similar to that of
Uzsoy et al. (1991). They show that the addition of the local improvement procedure
results in substantial improvements over the schedules obtained by the dispatching rule
alone. In addition, they show that EDD performs best out of a number of other myopic
dispatching rules (Ovacik and Uzsoy 1992, Uzsoy et al. 1993).
The motivation for the rolling horizon approach followed in this paper is derived
from insights into the deficiencies of other techniques for related problems. While EDD
is optimal for the static problem, when it is applied to the problem with nonsimul-
taneous arrival times it may make poor decisions due to its myopic nature. An example
of this is when a long job with a large due date is scheduled just before a short job with a
very tight due date arrives. The ability to predict future job arrivals over a certain
forecast window in the future can alleviate this problem to some extent. However, when
sequence-dependent setup times are also involved, simply having some visibility of
future events does not suffice. The complex interactions between setup times and due
dates must be addressed explicitly in order to arrive at good decisions. This is clearly
achieved by a branch and bound procedure for the entire problem, taking into account
the entire set of jobs. However, the computational burden of such a procedure increases
a subset of the jobs available over the forecast window is selected. An optimal schedule
is found for the resulting 1/r_j, s_ij/L_max problem by complete enumeration, and the first
job in this schedule is processed next on the machine. The encouraging results obtained
for this approach motivate the work in this paper.
In this paper we present a family of rolling horizon algorithms for the 1/r_j, s_ij/L_max
problem, which has not been addressed in the literature to date. We develop a branch
and bound algorithm for the problem which we use to solve the subproblems in the
RHPs. We study the effects of different forecast windows on the performance of our
procedures, describing the tradeoff between computation time and solution quality.
Our computational results show that the RHP obtains improvements of up to 58%
over dispatching rules combined with local improvement methods. Solutions are
obtained for problems with 100 jobs in 3 min of CPU time.
In this section we describe the problem under study and the RHPs developed for its
solution. We are given n jobs, each job j with a known release time r_j, a processing time
p_j, and a due date d_j. We incur a setup time of s_ij when job j is processed immediately
after job i. We assume that the jobs are indexed in order of increasing release times, such
that j > i implies r_j ≥ r_i.
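To make the notation concrete, the problem data can be sketched as follows. This is our own illustrative sketch: the class and function names and the sampling distributions are not taken from the paper (the experimental generation scheme appears in § 5).

```python
import random
from dataclasses import dataclass

@dataclass
class Job:
    """One job of a 1/r_j, s_ij/L_max instance."""
    idx: int    # index j; jobs are numbered so that j > i implies r_j >= r_i
    r: float    # release time r_j
    p: float    # processing time p_j
    d: float    # due date d_j

def random_instance(n, seed=0):
    """Return n jobs sorted by release time and an n x n setup matrix s,
    where s[i][j] is the setup incurred when j directly follows i."""
    rng = random.Random(seed)
    releases = sorted(rng.uniform(0, 100 * n) for _ in range(n))
    jobs = [Job(j, r, rng.uniform(50, 150), 0.0) for j, r in enumerate(releases)]
    for job in jobs:
        job.d = job.r + rng.uniform(1, 5) * job.p  # illustrative due dates
    s = [[rng.uniform(10, 100) for _ in range(n)] for _ in range(n)]
    return jobs, s
```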
We define a decision point to be a point in time t when a decision as to which job(s)
to schedule next needs to be made. The forecast window is the time period within which
we can predict the arrival times of future jobs. Since arrival times of the jobs are given
by the decomposition method discussed in § I, the length of the forecast window is a
decision variable rather than a system parameter. The set of jobs considered while
making a scheduling decision at a given point in time consists of the set J(t) of jobs
already available for processing and the set F(t) of those that will become available
within the forecast window.
Although it is important to take jobs that will arrive over the forecast window into
account while making the current decision, it is not necessarily to our advantage to
consider all jobs in the set J(t) ∪ F(t). In the problem under study, the relative urgency of
a job is defined by its due date. If we consider jobs which are due far in the future, we
may make a poor decision due to considering jobs which could safely have been
processed later. Hence the selection of the set K(t) of candidate jobs considered at the
current decision point t becomes important. We define K(t) as the k jobs in J(t) ∪ F(t)
with the earliest due dates, where k = min {K, |J(t) ∪ F(t)|} and K is a decision parameter
defining the maximum size of the candidate set K(t). This ensures that the k most urgent
jobs in J(t) ∪ F(t) are considered in the current decision.
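As a sketch, the construction of K(t) might look like the following; the tuple layout (r, p, d) and the function name are our own assumptions, not the authors' implementation.

```python
import heapq

def candidate_set(unscheduled, t, window, K):
    """Build K(t): the k most urgent jobs among J(t) and F(t).

    `unscheduled` holds (r, p, d) tuples; J(t) are those with r <= t,
    F(t) those with t < r <= t + window. Urgency is earliest due date,
    and k = min(K, |J(t) union F(t)|).
    """
    pool = [job for job in unscheduled if job[0] <= t + window]  # J(t) ∪ F(t)
    k = min(K, len(pool))
    return heapq.nsmallest(k, pool, key=lambda job: job[2])      # earliest d first
```

For example, with jobs (0, 5, 30), (2, 4, 10), (50, 3, 12), a decision at t = 0 with window 10 and K = 2 considers only the first two jobs; the job arriving at time 50 is outside the forecast window despite its tight due date.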
This selection of candidate jobs follows naturally from insights into the time
symmetry of the 1/r_j, q_j/C_max problem, whose equivalence to the 1/r_j/L_max problem was
discussed in the previous section. It can be shown that similar relationships exist
between the problems with sequence-dependent setup times. Recall that for any
instance of the 1/r_j/L_max problem, a corresponding instance P of the 1/r_j, q_j/C_max
problem can be constructed, where the q_j depend on the due dates of the jobs. When
constructing the set of jobs to be considered, the first consideration is the arrival times
of those jobs. It is unlikely that a job arriving far into the future will affect the current
decision. Hence we consider only jobs arriving over the forecast window. The selection
of a set of these jobs based on due dates is motivated by considering the time-symmetric
1/r_j, q_j/C_max problem P' whose release times r'_j = q_j and delivery times q'_j = r_j. Consider a
set G of jobs that become available in P' over some forecast window for this problem.
Index the c jobs in G such that r'_1 ≤ r'_2 ≤ ... ≤ r'_c. Since for every job i in G, r'_i = q_i = K − d_i, we
have d_1 ≥ d_2 ≥ ... ≥ d_c. Hence choosing a set of jobs with consecutive arrival times
occurring over a given forecast window in P' corresponds to selecting a set of jobs with
consecutive due dates in the original 1/r_j/L_max problem. Thus, selecting a set of jobs
with consecutive due dates in the 1/r_j/L_max problem corresponds to choosing a certain
forecast window in the corresponding P'. Hence our process of selecting the jobs in K(t)
based on both arrival times and due dates reflects the use of forecast windows in both
the problem of interest and its time-symmetric equivalent.
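The time symmetry invoked above can be checked numerically: for any fixed sequence, swapping release and delivery times and reversing the sequence leaves C_max unchanged. The small instance below is our own example, not one from the paper.

```python
def cmax(seq, r, p, q):
    """C_max of 1/r_j, q_j/C_max for a fixed sequence: each job starts no
    earlier than its release time and needs q_j further units to be delivered."""
    t, best = 0, 0
    for j in seq:
        t = max(t, r[j]) + p[j]      # completion on the machine
        best = max(best, t + q[j])   # time job j reaches its destination
    return best

r, p, q = [0, 2, 5], [3, 4, 2], [6, 1, 0]
seq = [0, 1, 2]
# Reversed instance P' (release times q_j, delivery times r_j), scheduled
# in reverse order, yields the same C_max value.
assert cmax(seq, r, p, q) == cmax(seq[::-1], q, p, r) == 9
```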
Since some jobs in K(t) may not be available at time t, each subproblem is a
1/r_j, s_ij/L_max problem consisting of at most K jobs. We use a branch and bound
procedure to solve these subproblems to optimality. Although the computational
requirements of this procedure grow exponentially as the number of jobs to be
scheduled increases, the restricted size of the subproblems limits the computational
effort required to solve a given subproblem, ensuring that the computational burden of
the worst case complexity of which is O(K!). The effort involved in developing the set
K(t) at each decision point t is O(n log n), due to ordering the jobs in increasing order
of due dates. Hence in the worst case, when λ = 1, the complexity of the algorithm is
O(n(K! + n log n)). Since K is a parameter of the algorithm, not the problem, this leads to a
polynomial-time complexity for this procedure which, by the results of Sahni and
Gonzalez (1976), implies that unless P = NP a data-independent worst-case bound for
its performance does not exist. Hence its performance may be arbitrarily bad. For
relatively small values of K, the nK! term in the complexity will dominate, resulting in the
worst-case computational effort increasing linearly with n.
The key to-the branch and bound procedure is a tree of partial solutions each of
whose nodes at level h represents a partial solution with h jobs. Associated with each
node at level h is a lower bound LB which is the minimum L_max value that can be
obtained by any schedule whose first h jobs are scheduled as in the partial schedule
corresponding to the node.
We start by finding an initial upper bound UB to the optimal solution by
constructing a feasible solution to the problem using the EDD dispatching rule. A local
search based on adjacent pairwise interchanges is applied to this schedule to ensure
that the initial solution is at least at a local minimum. This schedule becomes our initial
incumbent solution and its L_max value the initial UB. The incumbent solution and the
UB are updated as better solutions are found throughout the course of the procedure.
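A minimal sketch of this upper-bound construction follows. It makes two simplifying assumptions of our own: a static EDD sort stands in for the dynamic dispatching version, and no setup is charged before the first job.

```python
def edd_schedule(jobs):
    """Initial feasible sequence in earliest-due-date order.
    `jobs` is a list of (r, p, d) tuples; returns a list of job indices."""
    return sorted(range(len(jobs)), key=lambda j: jobs[j][2])

def lmax(seq, jobs, s):
    """L_max of a sequence for 1/r_j, s_ij/L_max; s[i][j] is the setup time
    when j directly follows i (no setup before the first job, an assumption)."""
    t, worst, prev = 0.0, float("-inf"), None
    for j in seq:
        r, p, d = jobs[j]
        setup = 0.0 if prev is None else s[prev][j]
        t = max(t, r) + setup + p
        worst = max(worst, t - d)   # lateness of job j
        prev = j
    return worst

def local_improve(seq, jobs, s):
    """Adjacent pairwise interchanges until no swap lowers L_max
    (a local, not global, minimum)."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if lmax(cand, jobs, s) < lmax(seq, jobs, s):
                seq, improved = cand, True
    return seq
```

By construction, the schedule returned by `local_improve` is never worse than the EDD schedule it starts from, matching the role of the initial incumbent.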
We expand the tree by branching on a selected node S at level h. For each job i not in
the partial schedule of S, we add a new node Si to the tree. The first h jobs of the partial
schedule of node Si are those of node S, and the (h + 1)st job is job i.
Whenever we branch on a specific node, we generate all possible nodes that can be
generated from that node. The new nodes generated inherit all characteristics of their
parent nodes. Therefore it is sufficient to keep track of only the active nodes, those
nodes that have not been branched on yet. By keeping these nodes in the order that we
are going to select them, the problem of identifying which node to branch on reduces to
picking the first node in an ordered list.
There are two well-known methods for selecting the next node to branch on (Parker
and Rardin 1988). Depth-first search selects the last node that has been added to the
tree, i.e. the node at the deepest level of the tree. It has the advantage of requiring the
least number of nodes to be kept active at any time, although it may end up processing a
large number of nodes to reach the optimal solution. On the other hand, best-bound
search, which selects the active node with the lowest LB, minimizes the number of nodes
processed, but its memory requirements can be prohibitive since many nodes are active
at any time. We adopt a hybrid of these two methods by generating all possible nodes
that can be generated when branching on a specific node and selecting the node with
lowest LB among those at the deepest level of the tree.
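One way to realize this hybrid is a priority queue keyed by (-level, LB): popping always returns a deepest-level active node, breaking ties by lowest lower bound. The node data below are hypothetical, purely to show the ordering.

```python
import heapq

def hybrid_pop_order(nodes):
    """Given (level, LB, partial-schedule) triples, return the order in which
    the hybrid rule would select them: deepest level first, lowest LB among
    nodes at the same level."""
    active = []
    for level, lb, sched in nodes:
        heapq.heappush(active, (-level, lb, sched))  # deeper level sorts first
    return [heapq.heappop(active)[2] for _ in range(len(active))]

order = hybrid_pop_order([(1, 7, "a"), (2, 9, "ab"), (2, 5, "ac"), (1, 3, "b")])
# → ["ac", "ab", "b", "a"]
```

In the full procedure the children of the popped node would be pushed before the next pop, so the search descends depth-first while using the LB to pick among siblings.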
We fathom a partial schedule in two different ways. A node is fathomed by
completion of a solution when it represents a full schedule, since it cannot be expanded
any further. If its objective function value is less than the UB, we have a solution that is
better than any solution found so far, and we update the incumbent solution and the
UB. A node is fathomed by bound if its LB is greater than the current UB. This indicates
that expanding the tree from that node can only give us solutions inferior to what we
already have. If a new incumbent solution is found, all nodes with LBs larger than the
new UB are eliminated from the list of active nodes for the same reason.
The LBs we use are derived from the results of Potts (1980) and Carlier (1982) for the
1/r_j/L_max problem. They show that for any subset S of the set N of jobs to be scheduled,

min_{j∈S} {r_j} + Σ_{j∈S} p_j − max_{j∈S} {d_j}     (1)
is a lower bound on the optimal L_max of the problem and is tightened by taking the
maximum over all possible subsets S. This bound also applies to the 1/r_j, s_ij/L_max
problem since we can only do worse by inserting setup times into the schedule. Since for
each job j scheduled, we incur a setup time of at least s_min_j = min_{i∈N} {s_ij}, we can tighten
(1) by adding in the sum of the s_min_j's for all j in the subset S. Therefore,

min_{j∈S} {r_j} + Σ_{j∈S} (s_min_j + p_j) − max_{j∈S} {d_j}     (2)

becomes a LB for 1/r_j, s_ij/L_max for any subset S of N. The same bound applies when we
are trying to find a LB for a partial schedule during the course of the branch and bound
procedure. For a partial schedule S' with job h scheduled last and with makespan
C_max(S'), when N' is the set of all jobs remaining to be scheduled, the expression

min_{i∈P} {r'_i} + Σ_{i∈P} (s_min_i + p_i) − max_{i∈P} {d_i}     (3)

where P is any subset of the set of unscheduled jobs, forms a LB on the minimum L_max
value that can be obtained by completing the partial schedule S'. Note that the release
time of any job i in the set P must be updated to r'_i = max {r_i, C_max(S')} since a job cannot
start before the completion time of the last job scheduled in the partial schedule S'.
Since there are (2^n − 1) such subsets, where n is the number of jobs that remain to be
scheduled, it is not feasible to check all subsets of N'. Therefore we consider only those
P of size 1, 2, and n. When P = N', that is if the subset consists of all unscheduled jobs, we
can tighten the LB further by using a better lower bound on the amount of setup time
that will be incurred. The minimum amount of setup time that we will incur can be
found by solving a TSP problem where the intercity costs correspond to the sequence-
dependent setup times between jobs. However, since TSP is NP-hard, solving this
problem to optimality is computationally burdensome. Hence we opt for a lower
bound on the optimal value of the TSP obtained from the assignment problem which is
polynomially solvable (Balas and Toth 1985). The result is an expression of the form

min_{i∈N'} {r'_i} + S_MIN + Σ_{i∈N'} p_i − max_{i∈N'} {d_i}     (4)

where S_MIN is a lower bound on the minimum amount of setup that will be incurred,
which forms a LB on the L_max of the partial solution S'.
Another lower bound on the L_max value that can be achieved by completing a
partial schedule S' is the L_max of the partial schedule itself, which we will denote as
L_max(S'). Since any schedule that we generate by expanding S' will contain S', its L_max
cannot be less than the L_max of the partial schedule S'. Therefore, for a partial schedule
S', a lower bound to the minimum L_max achievable is found by taking the maximum of
L_max(S'), expressions of the form (3) for all subsets P with 1 or 2 jobs, and the expression
(4).
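A sketch of this composite bound follows. For the full-set term it uses the sum of the s_min_i values in place of the assignment-based S_MIN of (4), which is a simplification of ours (a weaker but valid bound); the function and argument names are also ours.

```python
from itertools import combinations

def partial_lb(cmax_s, lmax_s, unsched, smin):
    """Lower bound for completing a partial schedule S' with makespan
    `cmax_s` and lateness `lmax_s`. `unsched` maps job -> (r, p, d);
    `smin` maps job -> its minimum incoming setup time s_min. Evaluates
    expression (3) over all subsets P of size 1 and 2 plus the full set."""
    def bound(P):
        rmin = min(max(unsched[i][0], cmax_s) for i in P)   # updated releases r'_i
        work = sum(smin[i] + unsched[i][1] for i in P)      # setups + processing
        return rmin + work - max(unsched[i][2] for i in P)

    jobs = list(unsched)
    subsets = [(i,) for i in jobs] + list(combinations(jobs, 2)) + [tuple(jobs)]
    return max([lmax_s] + [bound(P) for P in subsets])
```

For instance, with makespan 10, current lateness 2, unscheduled jobs {1: (0, 5, 12), 2: (15, 3, 14)} and s_min values {1: 1, 2: 2}, the pair bound 10 + 11 − 14 = 7 dominates.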
5. Experimental design
To evaluate the performance of the RHPs, we use two different algorithms as
benchmarks. The first of these is the EDD dispatching rule. Whenever the machine falls
idle, this rule myopically selects the available job with the earliest due date. This rule
has consistently shown itself to outperform other, more complex dispatching rules for
the performance measure of L_max (Ovacik and Uzsoy 1992, 1993, Uzsoy et al. 1993). In
addition, Uzsoy et al. (1992) have shown that if the setup times are bounded by the
processing times, this rule has a tight worst-case error bound. The main weakness of
this rule is that it ignores the setup times. To remedy this deficiency, we have augmented
it with a local search procedure that performs adjacent pairwise exchanges to improve
the EDD schedule. We shall refer to this procedure as the EDD-LI procedure. EDD-LI
can never perform worse than EDD, and we would expect it to yield improved
schedules at the expense of moderate increases in computation time.
We have selected these benchmarks because it is extremely difficult to
obtain optimal solutions, or even a reliable lower bound on the optimal solution value,
for this problem. These two rules are, in our experience, representative of approaches
taken to this problem in practice. One of our major results is that these rules often
perform extremely poorly, indicating that the widespread reliance often placed on
dispatching-based procedures may be misplaced for problems with sequence-
dependent setup times.
We compare the dispatching rules discussed above to the RHP with different
combinations of decision parameter values. We represent the forecast window in two
different ways: job- and time-based. If we assume that the n jobs to be scheduled are
indexed by increasing release times and let S(t) be the set of jobs that have been
scheduled at time t, then using a job-based forecast window, we include the next j jobs
with release time greater than t in the forecast window. More formally, the forecast
window will contain the jobs s+1, s+2, ..., s+j, where job s is the last job that has
arrived, i.e. the highest indexed job i with r_i ≤ t, and j = min {μ, n − s}, where μ is a
decision parameter determining the maximum number of jobs we allow in the forecast
window at any time. While the job-based approach allows a fixed number of jobs in the
forecast window, the time-based approach allows the jobs that will become available
over a fixed period of time to be in the window, i.e. all jobs i such that r_i ≤ t + T, where T
is the decision parameter denoting the length of the time-based forecast window. For
our experiments we use values of 1, 2, 3, and 4 for μ and 200, 400, 600, and 800 for T. These
values for T correspond to the expected processing and setup time for 1, 2, 3 and 4 jobs,
respectively. We also examine the two extreme cases where we have no visibility (μ = T
= 0) and where we have visibility over the entire horizon (μ = n, T = r_n). These enable us
to examine the effects of having no forward visibility at all and perfect forward visibility
on the quality of the schedules generated.
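The two window definitions can be sketched as follows; the function names are ours, and `releases` is the sorted list of release times r_j.

```python
import bisect

def job_based_window(releases, t, mu):
    """Indices of the next mu jobs with release time greater than t,
    i.e. jobs s+1, ..., s+j with j = min(mu, n - s) (0-based here)."""
    s = bisect.bisect_right(releases, t)      # number of jobs with r_i <= t
    return list(range(s, min(s + mu, len(releases))))

def time_based_window(releases, t, T):
    """Indices of all jobs with t < r_i <= t + T."""
    return [i for i, r in enumerate(releases) if t < r <= t + T]
```

With releases [0, 50, 120, 300, 650] and t = 100, a job-based window with μ = 2 and a time-based window with T = 250 both yield the jobs released at 120 and 300, but widening T to 600 also admits the job at 650, while the job-based window stays fixed at two jobs.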
For the parameter K, we use the values of 5 and 10. This parameter is the major
factor determining the computational burden of the procedure by limiting the size of
the largest subproblem solved. The choices of 5 and 10 represent a low and a high value
for this parameter, allowing us to isolate its effect on the performance of the procedures
in the experiments.
For λ, we use values of 1, 2, and 3, corresponding to fixing the schedule of 1, 2, and 3
jobs at any decision point. As λ decreases, the number of subproblems solved, and
therefore the computational burden of the procedure, increases. By assigning a higher
value to λ, i.e. by fixing a larger number of jobs at any decision point, we commit
ourselves to a schedule for a longer period of time, which prevents us from reacting to
events such as the arrival of an urgent job that may occur during that time.
where k is an integer uniformly distributed over the interval [-1, 4]. This way we allow
each job a multiple of its processing time to complete before it is due. The multiplicative
factor 2 serves to include an estimate of setup time in the due-date setting procedure.
Since k can take on negative values, we may have jobs that are already tardy when they
become available. This is often the case in industrial situations where a job may be
delayed in preceding stages of the manufacturing process. When the problem is solved
as a subproblem in a decomposition procedure, a job may be tardy due to interactions
with other jobs and machines in the job shop problem the decomposition procedure is
attempting to solve.
We examine problems of sizes ranging from 10 jobs through 100 jobs in 10 job
increments. For each combination of range parameter R and problem size, we
randomly generate 20 problems. Each of the 1,000 problems generated is solved using
the EDD and EDD-LI procedures and the 72 different parameter combinations of the
RHP procedure. For each problem, the L_max is calculated and the CPU time to solve
the problem is measured. All algorithms are coded in C and run on a SUN SPARC
workstation. The design of the experiment is summarized in Tables 1 and 2.
6. Results
To evaluate the performance of the benchmarks and the RHPs, we use the ratio of
the average solution value found by each procedure to the average of the best solutions
found for a given problem class. A problem class is characterized by a release time range
R and a problem size (number of jobs) n. We denote this ratio by r(R, n). We define
AVE(R, *), AVE(*, n), and AVE(*, *) to be the average of r(R, n) over all values of n for fixed
R, the average of r(R, n) over all values of R for fixed n, and the average of r(R, n) over all
values of R and n, respectively. MAX(R, *), MAX(*, n), and MAX(*, *) are defined
similarly for the maximum values of r(R, n).
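These aggregates are straightforward to compute from the per-class ratios; a brief sketch (names ours), with the MAX variants obtained by swapping the averages for `max`:

```python
from collections import defaultdict

def aggregate(ratios):
    """Compute AVE(R, *), AVE(*, n) and AVE(*, *) from a dict mapping
    (R, n) -> r(R, n)."""
    by_R, by_n = defaultdict(list), defaultdict(list)
    for (R, n), v in ratios.items():
        by_R[R].append(v)
        by_n[n].append(v)
    ave_R = {R: sum(v) / len(v) for R, v in by_R.items()}   # AVE(R, *)
    ave_n = {n: sum(v) / len(v) for n, v in by_n.items()}   # AVE(*, n)
    ave_all = sum(ratios.values()) / len(ratios)            # AVE(*, *)
    return ave_R, ave_n, ave_all
```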
The first issue to be examined is the performance of EDD and EDD-LI relative to
the RHPs with time-based forecast windows. Table 3 shows the AVE(R, *), AVE(*, n),
and AVE(*, *) values for the different algorithms. The columns marked xx denote the
average results for all RHPs with the same K and λ values. The columns marked 0 and
∞ represent the results from the RHP with no knowledge and perfect knowledge of all
job arrival times, respectively.
These results show that EDD yields very poor solutions for this problem, being on
average 184% worse than the best solution found, even though a number of
computational studies (Uzsoy et al. 1993) have shown that EDD performs better than
several other dispatching rules. This illustrates the difficulties of evaluating the
performance of dispatching rules against each other. While a given dispatching rule
may perform well relative to other dispatching rules, its performance relative to the
optimum may be extremely poor.
The addition of the local improvement procedure to the EDD rule leads to
dramatic improvements in performance. This is due to the fact that the local
improvement procedure in effect has perfect visibility of all jobs in the problem, thus
remedying the poor decisions resulting from the myopic nature of EDD. However,
these improved solutions obtained by EDD-LI are still on average 57% worse than the
best solution obtained, indicating how unreliable procedures which guarantee only
local optimality can be. It is also interesting how much room for improvement remains
after the improvements from EDD.
Examining the performance of the RHPs, we see that the most significant factor
affecting solution quality is the parameter K, which defines the maximum size of the
subproblems. This effect can be seen clearly when we compare the performance of the
EDD rule, which corresponds to K = 1, λ = 1 and T = 0, with that of the RHPs with K
values of 5 and 10 and the same T and λ values, corresponding to columns 3 and 12 of
Table 3. As K goes from 1 to 5, there is a 151.6% improvement in solution quality.
Increasing K to 10 yields a further improvement of 12.2%. The initial improvement
indicates the benefit of solving the subproblems to optimality rather than using a
myopic heuristic. The small improvement from K = 5 to K = 10 suggests the advantages
of using an optimal procedure myopically, without forward visibility, are limited.
[Table 3: AVE(R, *), AVE(*, n), and AVE(*, *) values for EDD, EDD-LI, and the RHPs with K = 5 and K = 10 under T = 0, averaged, and T = ∞ forecast windows; the tabulated values are not legible in this copy.]
"35"_ _
',30
1·25
,"--- -----_ ,
<=S
_
1·20
AVEr:)
1-15
1-10
1·05
l,OO.J-----+-----+------1------l-------l
T=o
Downloaded by [University of Cambridge] at 04:50 10 October 2014
""
Figure 1. Effect of length of forecast window (T) on RHP performance.
There are clear interactions between T, the length of the forecast window, and K. As
shown in Fig. 1, when K = 5, increasing T has little effect on solution quality since the
future information obtained cannot be taken into account in the subproblems. When
λ = 1, increasing T from 0 to ∞ results in only a 3.1% improvement. However, when
K = 10, extending the forecast window results in a steady, significant improvement,
reaching 20.2% as T increases to ∞. This is due to the fact that when K is small, the
amount of future information taken into account in the current decision is limited. The
larger K value allows more future-oriented information to be considered, resulting in
superior solutions.
The effects of the forecast window become clear when we compare the RHPs with
time-based forecast windows to those with job-based forecast windows. Figure 2 plots
the AVE(*, *) values for the two families of RHPs. It can be seen that the time-based
procedures consistently outperform the job-based ones. When R is large, the time-
based procedure considers fewer jobs than the job-based procedure, but the jobs it
ignores will be those arriving far into the future. When R is small, the time-based
procedure may consider more jobs than the job-based procedure, allowing it to select
the set K(t) from a larger set of candidates, hence capturing a 'better' set K(t). The job-
based procedure, on the other hand, may ignore urgent jobs that arrive in the near
future, resulting in poor decisions. Since the time-based procedures are consistently
better than their job-based counterparts, we shall focus on the results of the time-based
procedures for the rest of this paper.
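The contrast between the two candidate-selection rules can be sketched directly. Both helper functions below are hypothetical names for illustration, assuming jobs are dictionaries with ready times 'r' and due dates 'd'; they are not the authors' code.

```python
def time_based_candidates(pending, jobs, t, T, K):
    # Consider every job that arrives within the forecast window [t, t + T],
    # then pick the K most urgent (earliest due date) among them.
    window = [j for j in pending if jobs[j]['r'] <= t + T]
    return sorted(window, key=lambda j: jobs[j]['d'])[:K]

def job_based_candidates(pending, jobs, t, mu, K):
    # Consider only the next mu jobs in order of arrival, regardless of when
    # they arrive; an urgent job just outside the first mu is never seen.
    window = sorted(pending, key=lambda j: jobs[j]['r'])[:mu]
    return sorted(window, key=lambda j: jobs[j]['d'])[:K]
```

An urgent job arriving third, say, is visible to the time-based rule but invisible to a job-based rule with μ = 2, which is exactly the failure mode described above.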
The number of jobs fixed at each decision point, λ, also affects solution quality. As λ
increases, solution quality degrades steadily, exhibiting a linear trend. This is illustrated
in Fig. 3 for the cases where T = 1 and 4 and K = 5 and 10. This is because a procedure
with a low value of K uses little future information, resulting in poor schedules for the
subproblems. While for small λ decisions are revised frequently, as λ increases we are
committed to these poor decisions for a longer period of time, resulting in poorer
overall performance.
Figure 2. AVE(*, *) values for RHPs with time-based and job-based forecast windows.

Figure 3. Effect of number of jobs fixed at each decision point (λ) on RHP performance.

Although there are some exceptions, the performance of all procedures degrades
somewhat as the number of jobs increases. However, the RHPs appear to perform
rather more consistently than EDD and EDD-LI, which exhibit a marked degradation
in performance with increasing problem size. This indicates another benefit of the
RHPs, that their performance relative to the other procedures improves as problem
size increases. Similar conclusions can be drawn for the effect of the range parameter R
on the performance of EDD and EDD-LI. Both these procedures show declining
performance as R decreases. This is due to the fact that with a small R, the number of
available jobs for the dispatching rule to choose from is high, and thus a myopic choice
ignoring setup times is more likely to be a poor one.
To evaluate the robustness of the algorithms we use the MAX(R, *), MAX(*, n), and
MAX(*, *) values shown in Table 4. All the RHPs outperform EDD and EDD-LI
significantly in the worst case. The worst of the RHPs outperforms EDD-LI by 38.6%
in the worst case, and the best by 104.2%. This indicates a major strength of the RHPs,
that even when they do not yield the best solution they are unlikely to deviate from it
drastically. Dispatching rules, on the other hand, may yield extremely poor solutions,
as the results for EDD show.

Table 4. MAX(R, *), MAX(*, n), and MAX(*, *) values.

                             K=5, λ=1           K=5, λ=2           K=5, λ=3           K=10, λ=1          K=10, λ=2          K=10, λ=3
               EDD   EDD-LI  T=0   T=xx  T=∞    T=0   T=xx  T=∞    T=0   T=xx  T=∞    T=0   T=xx  T=∞    T=0   T=xx  T=∞    T=0   T=xx  T=∞
MAX(0.6, *)    3.767 2.118   1.503 1.521 1.515  1.586 1.629 1.631  1.665 1.732 1.731  1.275 1.231 1.046  1.313 1.254 1.089  1.390 1.409 1.131
MAX(0.8, *)    5.068 2.065   1.493 1.497 1.491  1.589 1.631 1.624  1.662 1.677 1.687  1.425 1.306 1.076  1.589 1.300 1.074  1.662 1.451 1.129
MAX(1.0, *)    3.697 1.809   1.456 1.446 1.435  1.531 1.553 1.569  1.615 1.565 1.628  1.422 1.404 1.034  1.468 1.401 1.042  1.595 1.488 1.080
MAX(1.2, *)    2.734 1.627   1.442 1.399 1.375  1.484 1.437 1.507  1.624 1.474 1.546  1.442 1.363 1.029  1.460 1.405 1.047  1.616 1.438 1.067
MAX(1.4, *)    2.640 1.720   1.536 1.485 1.540  1.366 1.326 1.394  1.534 1.411 1.406  1.366 1.317 1.016  1.534 1.371 1.067  1.720 1.571 1.081
MAX(*, 10)     2.389 1.455   1.425 1.233 1.233  1.589 1.263 1.242  1.662 1.451 1.263  1.425 1.193 1.000  1.589 1.216 1.000  1.662 1.451 1.000
MAX(*, 20)     2.728 1.556   1.360 1.374 1.373  1.453 1.388 1.387  1.605 1.472 1.501  1.375 1.278 1.076  1.453 1.320 1.074  1.605 1.438 1.067
MAX(*, 30)     3.057 1.734   1.442 1.422 1.368  1.534 1.443 1.439  1.720 1.536 1.528  1.442 1.404 1.041  1.534 1.401 1.067  1.720 1.571 1.073
MAX(*, 40)     3.431 1.819   1.454 1.428 1.413  1.490 1.492 1.486  1.489 1.574 1.531  1.383 1.317 1.049  1.339 1.371 1.061  1.489 1.408 1.082
MAX(*, 50)     3.471 1.869   1.398 1.443 1.385  1.531 1.513 1.474  1.554 1.586 1.628  1.376 1.355 1.045  1.435 1.320 1.057  1.417 1.488 1.107
MAX(*, 60)     3.625 1.991   1.463 1.498 1.465  1.560 1.594 1.560  1.614 1.710 1.710  1.422 1.317 1.046  1.444 1.318 1.067  1.527 1.443 1.131
MAX(*, 70)     4.135 2.112   1.495 1.516 1.504  1.553 1.621 1.612  1.665 1.732 1.731  1.339 1.277 1.053  1.410 1.355 1.063  1.536 1.434 1.119
MAX(*, 80)     4.662 1.995   1.460 1.468 1.462  1.523 1.562 1.536  1.609 1.654 1.656  1.342 1.363 1.032  1.455 1.405 1.051  1.521 1.399 1.105
MAX(*, 90)     4.814 2.065   1.503 1.521 1.515  1.547 1.573 1.564  1.654 1.664 1.687  1.362 1.349 1.055  1.434 1.342 1.071  1.477 1.365 1.106
MAX(*, 100)    5.068 2.118   1.493 1.519 1.506  1.586 1.631 1.631  1.647 1.677 1.675  1.380 1.363 1.059  1.460 1.356 1.089  1.616 1.376 1.129
MAX(*, *)      5.068 2.118   1.503 1.521 1.515  1.589 1.631 1.631  1.720 1.732 1.731  1.442 1.404 1.076  1.589 1.405 1.089  1.720 1.571 1.131
Summarizing our results on solution quality, several conclusions emerge. The first
is that dispatching rules can yield extremely poor solutions in the presence of sequence-dependent
setup times. Even the inclusion of a local improvement procedure does not
remedy these defects. The RHPs with appropriate choices of parameters consistently
yield better solutions than EDD and EDD-LI, both on average and in the worst case.
The RHPs with time-based forecast windows consistently outperform their job-based
counterparts. The performance of both job-based and time-based procedures is
affected by the algorithm parameters in the same way. However, solution quality is not
the only attribute to be considered when selecting a procedure for a problem. The
computational effort required by the algorithm is also an important factor which must
often be traded off against solution quality. We shall first discuss the computational
burden of the different procedures studied, and then address the issue of the
quality/time tradeoff.
The computational effort required by the RHPs is heavily affected by the choice of
the parameters K, λ and T. The average CPU times for the RHPs are shown in Table 5,
and the maximum times in Table 6. The effect of K is particularly significant, which
follows from the discussion of the complexity of the RHPs in § 3. As K increases from 5
to 10 there is an order of magnitude increase in both average and maximum CPU time.
This is due to the exponential worst-case complexity of the branch and bound
algorithm used to solve the subproblems. The effects of λ and T are weaker, but still
significant. As T increases, the number of jobs considered in a given subproblem, and
thus computation time, increases. As λ increases, the number of subproblems solved
decreases, reducing computation time.

Figure 4. Effect of problem size on average and maximum CPU time (s) for K = 5, T = 800
and λ = 2.

Figure 5. Average solution quality (AVE(*, *)) versus average CPU time (s) for RHPs with
(K, λ) combinations 5,1; 5,2; 5,3; 10,1; 10,2; 10,3.

The effects of the range parameter R and the
number of jobs are more marked than for solution quality. As R increases, computation
time decreases rapidly since fewer jobs are available in the forecast window. Neither the
average nor the maximum computation time increases exponentially with the number of
jobs, as shown in Fig. 4 for a representative RHP with K = 5, T = 800, λ = 2. This is
consistent with our analysis of the complexity of the RHPs in § 3.
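The argument above — roughly n/λ subproblems, each with a cost that depends only on K and T, not on n — can be made concrete with a crude worst-case estimate. The K! factor below assumes an exhaustive subproblem solver and is only an illustrative stand-in for the branch and bound algorithm's worst case; the function name is hypothetical.

```python
from math import ceil, factorial

def rhp_work_estimate(n, K, lam):
    # Crude worst-case estimate: about ceil(n / lam) subproblems are solved,
    # and an exhaustive solver examines up to K! sequences per subproblem.
    # For fixed K and lam this grows linearly in n, matching Fig. 4.
    return ceil(n / lam) * factorial(K)
```

Doubling n doubles the estimate while leaving the per-subproblem cost unchanged, which is why total CPU time grows only linearly in the number of jobs.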
The tradeoff between solution time and quality is illustrated in Fig. 5. The vertical
axis represents AVE(*, *), and the horizontal axis is the average computation time
required by the procedure. Each point corresponds to an RHP with a specific set of
parameter values. There are a number of procedures which are dominated, in the sense
that another procedure exists which obtains a better solution faster. Once we discard
these points, we have a set of procedures that form the efficient frontier. We can see
diminishing returns on CPU time. Getting within 3.4% of the best solution on average
requires an average of approximately 11 s. Improving this to 2.2% requires approximately
25 s. The choice of procedure to use depends on the purpose for which the
solution will be used. If we are trying to make a real-time dispatching decision, then a
solution time of 11 s may be acceptable. On the other hand, if we seek a procedure to be
used repeatedly in a decomposition procedure which is itself being used in a real-time
environment, we may seek a faster, slightly less accurate procedure.
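Discarding dominated procedures amounts to a one-pass sweep over the (CPU time, AVE) points sorted by time. The sketch below is an illustrative Python version; the sample points in the test are made-up numbers echoing the magnitudes in the text, not values from Tables 5 and 6.

```python
def efficient_frontier(points):
    """Return the non-dominated (cpu_time, ave) pairs.

    A procedure is dominated when another procedure is both faster and at
    least as good (lower AVE); the survivors form the efficient frontier.
    """
    frontier = []
    best_ave = float('inf')
    for cpu, ave in sorted(points):       # sweep by increasing CPU time
        if ave < best_ave:                # strictly better quality than
            frontier.append((cpu, ave))   # every faster procedure
            best_ave = ave
    return frontier
```

The frontier is strictly decreasing in AVE as CPU time grows, which is exactly the diminishing-returns shape described for Fig. 5.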
There are a number of issues to explore to further improve the efficiency of the
RHPs. Empirically, problems where arrival times are distributed over a wide interval
are easier to solve. For problems which do not have this characteristic, we may be able
to exploit the time-symmetry of the related makespan problem with delivery times. If
the due dates are such that the time-symmetric problem has its arrival times widely
distributed, then we may obtain considerable computational savings by applying the
RHP to this problem. Another aspect is that very often the subproblems arising in
decomposition methods have precedence constraints between jobs, which could reduce
computation time if exploited appropriately.
In summary, rolling horizon procedures provide a promising avenue of attack on a
broad family of complex dynamic scheduling problems. When combined with an
intelligent exploitation of the structure of the problems at hand, they can yield high
quality solutions in very reasonable computation times. For this reason they form a
Acknowledgments
This research was partially supported by the National Science Foundation under
Grant No. DDM-9107591 and the Purdue Research Foundation.
Appendix
In order to be able to refer to the problems under study in a concise manner, we shall
use the notation of Lageweg et al. (1981), extended to include sequence-dependent setup
times. This notation consists of three fields α/β/γ. The first field represents the type of
shop (single machine (α = 1), parallel identical machines (α = P), etc.). The second field is
used to represent problem characteristics such as precedence constraints, dynamic job
arrivals, batch processing machines or special processing time structures. The last field
denotes the measure of performance to be optimized. Thus, for example, 1/r_j, s_ij/L_max
represents the problem of minimizing maximum lateness on a single machine where
each job j is available at time r_j and there are sequence-dependent setup times. Some
examples of the notation are as follows:

1//L_max: minimize L_max on a single machine with all jobs available simultaneously,
1/s_ij/L_max: 1//L_max with sequence-dependent setup times,
1/r_j/L_max: minimize L_max on a single machine with job j available at time r_j,
1/r_j/C_max: minimize C_max on a single machine with job j available at time r_j,
1/r_j, prec/L_max: 1/r_j/L_max with precedence constraints,
1/r_j, pmtn/L_max: 1/r_j/L_max where preemption of jobs is allowed,
1/r_j, prec, s_ij/L_max: 1/r_j, prec/L_max with sequence-dependent setup times.
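The three-field notation is mechanical enough to format with a trivial helper. The function below is a hypothetical illustration of how the fields combine, not part of the classification scheme itself.

```python
def classify(alpha, beta, gamma):
    """Format a scheduling problem in the three-field notation alpha/beta/gamma.

    alpha: machine environment, beta: list of job characteristics (possibly
    empty), gamma: performance measure to be optimized.
    """
    return f"{alpha}/{','.join(beta)}/{gamma}"
```

For instance, the problem studied in this paper combines a single machine, dynamic arrivals and sequence-dependent setups with the maximum lateness objective.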
References
ADAMS, J., BALAS, E., and ZAWACK, D., 1988, The shifting bottleneck procedure for job-shop
scheduling. Management Science, 34, 391-401.
BALAS, E., and TOTH, P., 1985, Branch and bound methods. In The Traveling Salesman Problem:
A Guided Tour of Combinatorial Optimization, E. L. Lawler, J. K. Lenstra, A. H. G.
Rinnooy Kan and D. B. Shmoys (eds) (New York: Wiley).
BAKER, K. R., 1974, Introduction to Sequencing and Scheduling (New York: Wiley).
BAKER, K. R., and SU, Z. S., 1974, Sequencing with due dates and early start times to minimize
maximum tardiness. Naval Research Logistics Quarterly, 21, 171-176.
BHASKARAN, K., and PINEDO, M., 1991, Dispatching. In Handbook of Industrial Engineering,
G. Salvendy (ed.) (New York: Wiley).
CARLIER, J., 1982, The one-machine scheduling problem. European Journal of Operational
Research, 11, 42-47.
CONSILIUM INC., 1988, Short Interval Scheduling System Users Manual. Internal Publication
(Mountain View, CA).
FOWLER, J. W., HOGG, G. L., and PHILLIPS, D. T., 1992, Control of multiproduct bulk service
diffusion/oxidation processes. IIE Transactions on Scheduling and Logistics, 24, 84-96.
GAREY, M. R., and JOHNSON, D. S., 1979, Computers and Intractability: A Guide to the Theory of
NP-Completeness (San Francisco: W. H. Freeman).
GLASSEY, C. R., and WENG, W. W., 1991, Dynamic batching heuristic for simultaneous
processing. IEEE Transactions on Semiconductor Manufacturing, 4, 77-82.
HALL, L., and SHMOYS, D., 1992, Jackson's rule for one-machine scheduling: making a good
heuristic better. Mathematics of Operations Research, 17, 22-35.
LAGEWEG, B. J., LAWLER, E. L., LENSTRA, J. K., and RINNOOY KAN, A. H. G., 1981, Computer
aided complexity classification of deterministic scheduling problems. Research Report
BW 138/81 (Amsterdam: Mathematisch Centrum).
LAGEWEG, B. J., LENSTRA, J. K., and RINNOOY KAN, A. H. G., 1976, Minimizing maximum
lateness on one machine: computational experience and some applications. Statistica
Neerlandica, 30, 25-41.
LAWLER, E. L., 1973, Optimal sequencing of a single machine subject to precedence constraints.
Management Science, 19, 544-546.
McMAHON, G., and FLORIAN, M., 1975, On scheduling with ready times and due dates to
minimize maximum lateness. Operations Research, 23, 475-482.
MONMA, C. L., and POTTS, C. N., 1989, On the complexity of scheduling with batch setup times.
Operations Research, 37, 798-804.
MORTON, T. E., Forward algorithms for forward-thinking managers. In Applications of
Management Science, R. L. Schulz (ed.) (Greenwich, CT: JAI Press), pp. 1-55.
OVACIK, I. M., and UZSOY, R., 1992, A shifting bottleneck algorithm for scheduling semiconductor
testing operations. Journal of Electronics Manufacturing, 2, 119-134.
OVACIK, I. M., and UZSOY, R., 1993, Exploiting shop floor status information to schedule
complex job shops. Journal of Manufacturing Systems, forthcoming.
PARKER, R. G., and RARDIN, R. L., 1988, Discrete Optimization (San Diego: Academic).
PICARD, J. C., and QUEYRANNE, M., 1978, The time-dependent travelling salesman problem and
its application to the tardiness problem in one-machine scheduling. Operations Research,
26, 86-110.
POTTS, C. N., 1980, Analysis of a heuristic for one machine sequencing with release dates and
delivery times. Operations Research, 28, 1436-1441.
SAHNI, S., and GONZALEZ, T., 1976, P-complete approximation problems. Journal of the
Association for Computing Machinery, 23, 555-565.
UNAL, A. T., and KIRAN, A. S., 1992, Batch sequencing. IIE Transactions on Scheduling and
Logistics, 24, 73-83.
UZSOY, R., 1993, Decomposition methods for scheduling complex dynamic job shops.
Proceedings of the NSF Grantees' Conference, Charlotte, NC, pp. 1253-1257.
UZSOY, R., CHURCH, L. K., OVACIK, I. M., and HINCHMAN, J., 1993, Performance evaluation of
dispatching rules for semiconductor testing operations. Journal of Electronics Manufacturing,
3, 95-105.
UZSOY, R., LEE, C. Y., and MARTIN-VEGA, L. A., 1992, Scheduling semiconductor test operations:
minimizing maximum lateness and number of tardy jobs on a single machine. Naval
Research Logistics, 39, 369-388.
UZSOY, R., MARTIN-VEGA, L. A., LEE, C. Y., and LEONARD, P. A., 1991, Production scheduling
algorithms for a semiconductor testing facility. IEEE Transactions on Semiconductor
Manufacturing, 4, 270-280.
ZDRZALKA, S., 1992, Preemptive scheduling with release dates, delivery times and sequence
independent setup times. Institute of Engineering Cybernetics, Technical University of
Wroclaw, Wroclaw, Poland.