
This article was downloaded by: [University of Cambridge]

On: 10 October 2014, At: 04:50


Publisher: Taylor & Francis

International Journal of Production Research


Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/tprs20

Rolling horizon algorithms for a single-machine dynamic scheduling problem with sequence-dependent setup times
I. M. OVACIK & R. UZSOY
School of Industrial Engineering, Purdue University, 1287 Grissom Hall, West Lafayette, IN 47907-1287, USA
Published online: 07 May 2007.

To cite this article: I. M. OVACIK & R. UZSOY (1994) Rolling horizon algorithms for a single-machine dynamic scheduling
problem with sequence-dependent setup times, International Journal of Production Research, 32:6, 1243-1263, DOI:
10.1080/00207549408956998

To link to this article: http://dx.doi.org/10.1080/00207549408956998

INT. J. PROD. RES., 1994, VOL. 32, NO. 6, 1243-1263

Rolling horizon algorithms for a single-machine dynamic scheduling problem with sequence-dependent setup times

I. M. OVACIK† and R. UZSOY‡

We present a family of rolling horizon heuristics to minimize maximum lateness on a single machine in the presence of sequence-dependent setup times. This problem occurs as a subproblem in a decomposition procedure for more complicated job shop scheduling problems. The procedures solve a sequence of subproblems to optimality with a branch and bound algorithm and implement only part of the solution obtained. The size and number of the subproblems are controlled by algorithm parameters. Extensive computational experiments show that these procedures outperform myopic dispatching rules by an order of magnitude, both on average and in the worst case, in very reasonable computation times.

1. Introduction
The effective control of material movement through manufacturing facilities is
becoming increasingly important in today's highly competitive global markets.
Companies are under pressure to shorten lead times and meet customer due-dates to
maintain high levels of customer satisfaction. Effective management of work-in-process
inventories (WIP) can also give companies significant cost advantages. Hence the
development of scheduling procedures to achieve these advantages is of considerable
economic significance. However, the proven intractability of job-shop scheduling
problems makes it difficult to develop efficient procedures that are applicable to
problems of realistic size. Most practical job-shop scheduling problems have been
addressed using myopic dispatching rules (Bhaskaran and Pinedo 1991). While these
rules are computationally efficient and easy to implement, they may result in poor long-
term performance. In manufacturing environments with heavy competition for
capacity at key resources, scheduling procedures that take a global view of the shop
should result in substantial improvements in performance.
The research we describe in this paper is part of a larger effort to develop a
decomposition methodology for scheduling complex dynamic job shops. These
facilities are characterized by the presence of different types of workcentres, some of
which have sequence-dependent setup times; reentrant product flows, where a job may
return to a machine several times; and due-date related performance measures. We
focus on the performance measure of maximum lateness (L_max), to capture management's concern with providing consistent levels of customer service. A workcentre may
consist of a single machine, a number of parallel identical machines, or of a batch
processing machine like a heat treatment oven, where a number of jobs are processed
simultaneously as a batch. These problems represent a considerable generalization of

Revision received April 1993.


† School of Industrial Engineering, 1287 Grissom Hall, Purdue University, West Lafayette, IN 47907-1287, USA.
‡ To whom correspondence should be addressed.

0020-7543/94 $10.00 © 1994 Taylor & Francis Ltd.



the classical job shop scheduling problem (Baker 1974), which assumes that there are
no sequence-dependent setup times, that each job visits each workcentre exactly once,
that each workcentre consists of a single machine and that the performance measure to
be minimized is makespan.
The obvious difficulty of these problems (Garey and Johnson 1979) has resulted in
their being largely ignored by researchers. However, decomposition methods that
exploit recent developments in information technology offer a promising avenue of
attack on these problems. In addition, decomposition methods allow us to exploit the
special structure present in many industrial contexts, rendering these problems more
amenable to efficient, near-optimal solution procedures than the generic problems on
which much past research has focused.
The decomposition method we propose proceeds in a manner similar to the Shifting
Bottleneck approach of Adams et al. (1988) by decomposing the job shop into a number
of workcentres. These are scheduled in order of criticality until all workcentres have
been scheduled and a feasible schedule achieved. A network representation of the


scheduling problem is used to model the interactions between the workcentres so as to
allow the solutions to the subproblems to be integrated into a solution to the job shop
problem (Uzsoy 1993).
An important aspect of the decomposition method we propose is that it takes a
global view of the shop while developing a schedule. In the past, a major obstacle to the
development and implementation of such methods has been the difficulty of obtaining
reliable information on the current state of the shop. However, many companies have
implemented sophisticated Shop Floor Information Systems (SFIS) which can track
WIP and machine status in real time. These systems provide real-time information on
where each job is currently located, whether it is in process or in queue, what operations
it will require in the future and when it is due to the customer. They also provide status
information on machine breakdowns and setups. This information makes it possible for
a scheduling system to take the state of the entire shop into account when developing a
schedule, rather than only a subset of the shop, as most dispatching rules do. A
description of a commercially available SFIS is given in Consilium (1988).
In developing an effective decomposition method, there are two fundamental sets of
problems. The first is that of deciding upon a decomposition that isolates the 'correct'
subproblems as critical and ensures that their solution is meaningful relative to the
original problem, that is, that the solution to the subproblems should result in a feasible
schedule consistent with management goals. This requires a mechanism to integrate the
solutions of the subproblems, i.e. to model the interactions between subproblems and
their effect on the solution to the overall problem. This can be thought of as a
mechanism to capture global information used in setting up the local subproblems
'correctly'. Related to this issue is that of prioritizing the subproblems, deciding in what
order to solve the subproblems so that the most constraining ones are solved first,
leading to a better overall solution.
Once a set of subproblems has been formulated, the second issue is that of finding
solution procedures for the subproblems. These procedures must be fast enough to be
used repeatedly without resulting in intolerable computational burden, and must
obtain high-quality solutions. Often the subproblems themselves are intractable, which
makes this a challenging task. The inherent modularity of the decomposition method
allows us to apply different solution techniques to different subproblems, making it
possible to select the most appropriate procedure for each subproblem and exploit any
special structure the subproblems may have.

The main interaction between different subproblems in a scheduling problem is due


to jobs and machines becoming available at different times depending on scheduling
decisions made during the solution of the different subproblems. As a result, the
subproblems will be dynamic, where jobs arrive at machines or machines become
available over time. Hence the scheduling algorithms for the subproblems must be
capable of handling the dynamic versions of these problems.
Extensive computational experiments with a prototype decomposition method
have shown that the decomposition method outperforms dispatching rules both on
average and in the worst case (Ovacik and Uzsoy 1992). In these experiments, the
decomposition procedure used a simple dispatching heuristic combined with a local
improvement procedure to schedule the work centres. Our results indicate that the
quality of the solution obtained for the subproblems has a significant effect on the
quality of the solution obtained for the overall problem.
In this paper we present a class of procedures to minimize L_max on a single machine
with sequence-dependent setup times. The objective is to use these procedures in a


decomposition method to schedule complex job shops containing machines of this
type. The industrial application motivating this study was that of scheduling test
systems in the final test phase of semiconductor manufacturing (Ovacik and Uzsoy
1992). In addition to the need for such procedures in a decomposition method, this
problem has not been studied extensively to date, making it of interest in its own right.
The heuristics we suggest operate on a rolling horizon basis. At any point in time
when a scheduling decision is to be made, we solve a subproblem consisting of the jobs
on hand and a subset of the jobs that will arrive in the near future. Arrival times are
calculated from the network representation in the decomposition method, and so are
known a priori. We develop a branch and bound algorithm to solve the subproblems
optimally. Although the computational burden of this procedure increases exponentially with the number of jobs, the restricted size of the subproblems in the rolling horizon
procedures allows us to use it effectively within this framework. The rolling horizon
procedures consistently yield better schedules than dispatching rules combined with
local improvement procedures, demonstrating that the latter methods may perform
extremely poorly.
In the following section we state the problem of interest and review previous related
work. Section 3 describes the rolling horizon algorithms, while § 4 describes the branch
and bound procedure used to solve the subproblems. We present the design of our
computational experiments and their results in §§ 5 and 6, respectively, and conclude
the paper with some directions for future research.
For the remainder of this paper, we shall use the notation of Lageweg et al. (1981) to
refer to the problems studied in a concise manner. Thus the problem of interest,
scheduling a single machine in the presence of sequence-dependent setup times and
non-simultaneous release times to minimize maximum lateness will be denoted as
1/r_j, s_ij/L_max. The notation is briefly described in the Appendix.

2. Problem description and previous related work


The problem of minimizing L_max on a single machine without setup times has been
extensively examined. Cases with simultaneous release times (1//L_max and 1/prec/L_max)
are easy to solve using the Earliest Due Date rule and Lawler's Algorithm respectively
(Baker 1974, Lawler 1973). However, the presence of non-simultaneous release times
renders the 1/r_j/L_max problem NP-hard in the strong sense (Garey and Johnson 1979).
Thus the problem addressed in this paper, 1/r_j, s_ij/L_max, is NP-hard in the strong sense
even without sequence-dependent setup times. Furthermore, the special case of
1/s_ij/L_max where all jobs have a common due date is equivalent to 1/s_ij/C_max, which is
equivalent to the Travelling Salesman Problem (TSP) (Baker 1974), which is NP-hard
in the strong sense. Thus it is unlikely that a polynomial-time procedure to obtain
optimal solutions exists. Research to date has focused on two main areas: developing
implicit enumeration algorithms to obtain optimal solutions, or using heuristics to
efficiently obtain near-optimal solutions. In this paper we follow the latter approach.
The dynamic problem without sequence-dependent setup times, 1/r_j/L_max, has been
studied extensively. Baker and Su (1974), McMahon and Florian (1975) and Carlier
(1982) present branch and bound algorithms, while Potts (1980), Carlier (1982) and
Hall and Shmoys (1992) analyse heuristics. It has been shown that this problem is
equivalent to the problem of minimizing makespan (C_max) on a single machine in the
presence of delivery times q_j = K − d_j, where K ≥ max_j {d_j} (Lageweg et al. 1976). In this
problem each job j requires q_j units of time to reach its destination after completing
processing on the machine. The objective is to minimize C_max, where C_max denotes the
time the last job reaches its destination. We shall denote this problem by 1/r_j, q_j/C_max.
This problem is also time-symmetric, in the sense that for any instance P of 1/r_j, q_j/C_max,
we can create another instance P′ with release times r′_j = q_j and delivery times q′_j = r_j that
has the same optimal sequence (although in reverse) and C_max value as the original
problem. These results motivate various aspects of our approach in this paper.
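As an illustration of the two transformations just described, the following Python sketch (ours, not code from the paper) maps a 1/r_j/L_max instance to its delivery-time form and then to its time-symmetric counterpart. The tuple representation (r, p, d or q) and the function names are illustrative assumptions.

```python
from typing import List, Tuple

def to_delivery_form(jobs: List[Tuple[int, int, int]]) -> List[Tuple[int, int, int]]:
    """Map a 1/r_j/L_max instance of (release, processing, due date) tuples
    to a 1/r_j, q_j/C_max instance (release, processing, delivery time)
    with q_j = K - d_j, where K = max_j d_j."""
    K = max(d for _, _, d in jobs)
    return [(r, p, K - d) for r, p, d in jobs]

def time_symmetric(jobs: List[Tuple[int, int, int]]) -> List[Tuple[int, int, int]]:
    """Construct the time-symmetric instance P' by swapping release and
    delivery times: r'_j = q_j, q'_j = r_j. Its optimal sequence is the
    reverse of the original's, with the same C_max value."""
    return [(q, p, r) for r, p, q in jobs]
```

For example, a job released at time 2 with due date 4 in an instance whose latest due date is 5 receives delivery time q = 5 − 4 = 1, and in P′ its release and delivery times are interchanged.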
The problem of minimizing L_max with sequence-dependent setup times has not been
extensively examined to date. Monma and Potts (1989) present a dynamic program-
ming algorithm and optimality properties for the case of batch setups, where setups
between jobs from the same batch are zero. Picard and Queyranne (1978) model a
related problem as a time-dependent travelling salesman problem and develop a
branch and bound algorithm. Uzsoy et al. (1991) provide a branch and bound
algorithm for 1/prec, s_ij/L_max. For problems with more than fifteen operations,
however, the computational burden of this algorithm increases rapidly. Uzsoy et al.
(1992) develop dynamic programming procedures for the 1/prec, s_ij/L_max problem where
the precedence constraints consist of a number of strings. Unal and Kiran (1992)
consider the problem of determining whether a schedule in which all due dates can be
met exists in a situation without precedence constraints but with batch setups. They
provide a polynomial-time heuristic and an exact algorithm which runs in polynomial
time given a fixed upper bound on the number of setups.
Several authors have suggested heuristics for related problems. Zdrzalka (1992)
considers the 1/r_j, pmtn/L_max problem where the jobs have sequence-independent setup
times. He proves that this problem is NP-hard and presents a heuristic with a tight
worst-case error bound. Uzsoy et al. (1992) analyse the performance of the myopic
Earliest Due Date (EDD) dispatching rule, which gives priority to the available job with
the earliest due date, for the 1/r_j, s_ij/L_max problem. Assuming that the setup times are
bounded by the processing times, i.e. that s_ij ≤ p_j for all j, they develop tight worst-case
error bounds for this heuristic. Sahni and Gonzalez (1976) show that unless P = NP
there can be no polynomial-time heuristic with a constant, data-independent worst-
case error bound for the TSP with arbitrary intercity distances. Since the TSP is a
special case of 1/r_j, s_ij/L_max, this indicates that efficient heuristics with data-independent
worst-case bounds are unlikely to exist for 1/r_j, s_ij/L_max. Ovacik and Uzsoy (1992)
combine the EDD heuristic with a local improvement procedure similar to that of
Uzsoy et al. (1991). They show that the addition of the local improvement procedure
results in substantial improvements over the schedules obtained by the dispatching rule

alone. In addition, they show that EDD performs best out of a number of other myopic
dispatching rules (Ovacik and Uzsoy 1992, Uzsoy et al. 1993).
The motivation for the rolling horizon approach followed in this paper is derived
from insights into the deficiencies of other techniques for related problems. While EDD
is optimal for the static problem, when it is applied to the problem with non-simultaneous arrival times it may make poor decisions due to its myopic nature. An example
of this is when a long job with a large due date is scheduled just before a short job with a
very tight due date arrives. The ability to predict future job arrivals over a certain
forecast window in the future can alleviate this problem to some extent. However, when
sequence-dependent setup times are also involved, simply having some visibility of
future events does not suffice. The complex interactions between setup times and due
dates must be addressed explicitly in order to arrive at good decisions. This is clearly
achieved by a branch and bound procedure for the entire problem, taking into account
the entire set of jobs. However, the computational burden of such a procedure increases
exponentially, rendering it impractical for problems of realistic size. In particular, the


use of such a technique in a decomposition procedure, where many single-machine
problems must be solved at each iteration, is impossible if the decomposition procedure
is to have reasonable computational performance.
Thus, given the computational impossibility of using an exact optimal procedure
and the poor solution quality of myopic dispatching rules, we are motivated to seek
intermediate methods which obtain higher-quality schedules than myopic dispatching
rules at the cost of additional computational effort. This leads us to the idea of rolling
horizon procedures (RHPs), where a dynamic scheduling problem is decomposed into
a series of smaller subproblems of the same type. The limited size of these subproblems
allows us to use exact methods for their solution, which would be impossible for the
overall problem. The solution to the overall problem is approximated by segments of
the solutions of these subproblems. Thus we obtain a procedure that combines a degree
of forward visibility at each decision point with an optimization procedure that
explicitly takes into account due dates and setup times, addressing both deficiencies of
dispatching rules described above. One extreme case of such a procedure, with no
forward visibility, is a myopic dispatching rule. Another extreme, when forward
visibility is perfect so that all jobs are considered in a single subproblem, yields an exact
solution procedure. This allows us to explicitly address the tradeoff between solution
quality and computation time through the choice of parameter values defining the size
and number of the subproblems.
In a RHP, at each decision point a subproblem is solved using forecasts of future
events that are predicted to occur over a certain time period in the future called a
forecast window. This yields decisions for a certain time period in the future. Only the
decisions related to the current decision point are implemented and decisions are
revised at the next decision point.
RHPs have been developed for a number of different problems (Morton 1981).
However, there have been few efforts to apply them to dynamic scheduling problems.
Glassey and Weng (1991) and Fowler et al. (1992) consider the problem of scheduling
batch processing machines in the presence of dynamic job arrivals. They assume the
availability of information on jobs that will arrive over a certain forecast window and
use this information to decide whether or not to start processing a batch at each
decision epoch. Ovacik and Uzsoy (1993) use information on jobs that will become
available over a given forecast window to make dispatching decisions in a job shop
with sequence-dependent setup times. Whenever a dispatching decision must be made,

a subset of the jobs available over the forecast window is selected. An optimal schedule
is found for the resulting 1/r_j, s_ij/L_max problem by complete enumeration, and the first
job in this schedule is processed next on the machine. The encouraging results obtained
for this approach motivate the work in this paper.
In this paper we present a family of rolling horizon algorithms for the 1/r_j, s_ij/L_max
problem, which has not been addressed in the literature to date. We develop a branch
and bound algorithm for the problem which we use to solve the subproblems in the
RHPs. We study the effects of different forecast windows on the performance of our
procedures, describing the tradeoff between computation time and solution quality.
Our computational results show that the RHP obtains improvements of up to 58%
over dispatching rules combined with local improvement methods. Solutions are
obtained for problems with 100 jobs in 3 min of CPU time.

3. Rolling horizon procedures


In this section we describe the problem under study and the RHPs developed for its
solution. We are given n jobs, each job j with a known release time r_j, a processing time
p_j, and a due date d_j. We incur a setup time of s_ij when job j is processed immediately
after job i. We assume that the jobs are indexed in order of increasing release times, so
that j > i implies r_j ≥ r_i.
We define a decision point to be a point in time t when a decision as to which job(s)
to schedule next needs to be made. The forecast window is the time period within which
we can predict the arrival times of future jobs. Since arrival times of the jobs are given
by the decomposition method discussed in § 1, the length of the forecast window is a
decision variable rather than a system parameter. The set of jobs considered while
making a scheduling decision at a given point in time consists of the set J(t) of jobs
already available for processing and the set F(t) of those that will become available
within the forecast window.
Although it is important to take jobs that will arrive over the forecast window into
account while making the current decision, it is not necessarily to our advantage to
consider all jobs in the set J(t) ∪ F(t). In the problem under study, the relative urgency of
a job is defined by its due date. If we consider jobs which are due far in the future, we
may make a poor decision due to considering jobs which could safely have been
processed later. Hence the selection of the set K(t) of candidate jobs considered at the
current decision point t becomes important. We define K(t) as the k jobs in J(t) ∪ F(t)
with the earliest due dates, where k = min{K, |J(t) ∪ F(t)|} and K is a decision parameter
defining the maximum size of the candidate set K(t). This ensures that the k most urgent
jobs in J(t) ∪ F(t) are considered in the current decision.
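The construction of K(t) just described can be sketched as follows; the dictionary job representation and the function name are illustrative assumptions, not the authors' code.

```python
def candidate_set(unscheduled, t, T, K):
    """K(t): among jobs already released (J(t)) or arriving within the
    forecast window [t, t+T] (F(t)), keep the (at most) K jobs with the
    earliest due dates. Jobs are dicts with release r and due date d."""
    available = [j for j in unscheduled if j["r"] <= t + T]  # J(t) ∪ F(t)
    available.sort(key=lambda j: j["d"])                     # most urgent first
    return available[:K]                                     # k = min(K, |J ∪ F|)
```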
This selection of candidate jobs follows naturally from insights into the time
symmetry of the 1/r_j, q_j/C_max problem, whose equivalence to the 1/r_j/L_max problem was
discussed in the previous section. It can be shown that similar relationships exist
between the problems with sequence-dependent setup times. Recall that for any
instance of the 1/r_j/L_max problem, a corresponding instance P of the 1/r_j, q_j/C_max
problem can be constructed, where the q_j depend on the due dates of the jobs. When
constructing the set of jobs to be considered, the first consideration is the arrival times
of those jobs. It is unlikely that a job arriving far into the future will affect the current
decision. Hence we consider only jobs arriving over the forecast window. The selection
of a set of these jobs based on due dates is motivated by considering the time-symmetric
1/r_j, q_j/C_max problem P′ whose release times are r′_j = q_j and delivery times q′_j = r_j. Consider a
set G of jobs that become available in P′ over some forecast window for this problem.

Index the c jobs in G such that r′_1 ≤ r′_2 ≤ … ≤ r′_c. Since for every job i in G, r′_i = q_i = K − d_i, we
have d_1 ≥ d_2 ≥ … ≥ d_c. Hence choosing a set of jobs with consecutive arrival times
occurring over a given forecast window in P′ corresponds to selecting a set of jobs with
consecutive due dates in the original 1/r_j/L_max problem. Thus, selecting a set of jobs
with consecutive due dates in the 1/r_j/L_max problem corresponds to choosing a certain
forecast window in the corresponding P′. Hence our process of selecting the jobs in K(t)
based on both arrival times and due dates reflects the use of forecast windows in both
the problem of interest and its time-symmetric equivalent.
Since some jobs in K(t) may not be available at time t, each subproblem is a
1/r_j, s_ij/L_max problem consisting of at most K jobs. We use a branch and bound
procedure to solve these subproblems to optimality. Although the computational
requirements of this procedure grow exponentially as the number of jobs to be
scheduled increases, the restricted size of the subproblems limits the computational
effort required to solve a given subproblem, ensuring that the computational burden of
the overall procedure does not show exponential growth.


Let us denote the current decision point by t and define S(t) to be the set of all jobs
scheduled until that time. We can now state the RHP as follows.

3.1. Algorithm RHP


Step 0. Let t = r_1 and S(t) = ∅.
Step 1. Determine the set K(t).
Step 2. Optimally schedule the jobs in K(t). Select the first l = min{λ, |K(t)|} jobs in the
optimal schedule for the subproblem and place them in the schedule, where λ is a
decision parameter denoting the maximum number of jobs that we schedule at
any decision point. Let these jobs form the set L. Set t to the completion time of
the last job scheduled in L, and S(t) = S(t) ∪ L. If all jobs have been scheduled,
stop. Else go to Step 1.
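A minimal sketch of Algorithm RHP under simplifying assumptions of our own: jobs are dictionaries with release time r, processing time p and due date d; the subproblem is solved by brute-force enumeration over permutations, standing in for the branch and bound of § 4 and feasible only because |K(t)| ≤ K is small; and setup is a caller-supplied function, with setup(None, j) giving the setup before the first job.

```python
from itertools import permutations

def rhp(jobs, T, K, lam, setup):
    """Rolling horizon procedure. T: forecast window length; K: maximum
    candidate-set size; lam: lambda, the number of jobs implemented per
    decision point."""
    unscheduled = sorted(jobs, key=lambda j: j["r"])
    t = unscheduled[0]["r"]                      # Step 0: t = r_1, S(t) empty
    schedule, prev = [], None
    while unscheduled:
        cand = sorted((j for j in unscheduled if j["r"] <= t + T),
                      key=lambda j: j["d"])[:K]  # Step 1: build K(t)
        if not cand:                             # nothing within the window:
            t = min(j["r"] for j in unscheduled) # advance to the next arrival
            continue
        best = min(permutations(cand),           # Step 2: optimal subproblem
                   key=lambda s: lmax(s, t, prev, setup))
        for j in best[:min(lam, len(cand))]:     # implement only lambda jobs
            t = max(t, j["r"]) + setup(prev, j) + j["p"]
            prev = j
            schedule.append(j)
            unscheduled.remove(j)
    return schedule

def lmax(seq, t, prev, setup):
    """Maximum lateness of sequence seq started at time t after job prev."""
    worst = float("-inf")
    for j in seq:
        t = max(t, j["r"]) + setup(prev, j) + j["p"]
        worst = max(worst, t - j["d"])
        prev = j
    return worst
```

Setting T = 0 and K = lam = 1 reduces this loop to the EDD dispatching rule, as noted below.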
The procedure has three decision parameters: the length of the forecast window,
which we shall denote by T; the maximum number of jobs considered for scheduling at
any decision point (K); and the maximum number of jobs we schedule at each decision
point (λ). The first two of these parameters have been discussed above. The parameter λ
determines the maximum number of jobs we schedule at each decision point. As λ
increases, the number of decision points at which subproblems need to be solved, and
therefore the computational burden of the procedure, decreases.
When we set T = 0 and K = λ = 1, we obtain the EDD dispatching rule. If T = r_n, the
release time of the last job, and K = λ = n, we obtain a single subproblem identical to the
original problem, and hence the RHP yields an optimal solution. By assigning different
values to the decision parameters we can define a range of solution procedures ranging
from myopic dispatching rules to exact solution methods with increasing solution
quality and computational burden. This ability to specify the degree of precision and
computational effort is useful when the procedure is to be used in a decomposition
procedure. Since the decomposition procedure schedules workcentres in order of
criticality, we can use a more time consuming but more precise procedure for more
critical workcentres, and faster but less accurate methods for less critical ones.
The computational burden of the RHP depends on the decision parameters K and λ.
The number of decision points at which subproblems have to be solved is determined by λ.
At each decision point we solve a branch and bound problem with at most K ≤ n jobs,

the worst case complexity of which is O(K!). The effort involved in developing the set
K(t) at each decision point t is O(n log n), due to ordering the jobs in increasing order
of due dates. Hence in the worst case, when λ = 1, the complexity of the algorithm is
O(n(K! + n log n)). Since K is a parameter of the algorithm, not the problem, this leads to a
polynomial-time complexity for this procedure which, by the results of Sahni and
Gonzalez (1976), implies that unless P = NP a data-independent worst-case bound for
its performance does not exist. Hence its performance may be arbitrarily bad. For
relatively small values of K, the nK! term in the complexity will dominate, resulting in the
worst-case computational effort increasing linearly with n.

4. Branch and bound algorithm


The RHPs described in the previous section require the solution of a 1/r_j, s_ij/L_max
problem at each decision point. In this section we present a branch and bound
algorithm to solve this problem optimally.
The key to the branch and bound procedure is a tree of partial solutions, each of
whose nodes at level h represents a partial solution with h jobs. Associated with each
node at level h is a lower bound LB which is the minimum L ma x value that can be
obtained by any schedule whose first h jobs are scheduled as in the partial schedule
corresponding to the node.
We start by finding an initial upper bound U B to the optimal solution by
constructing a feasible solution to the problem using the EDD dispatching rule. A local
search based on adjacent pairwise interchanges is applied to this schedule to ensure
that the initial solution is at least at a local minimum. This schedule becomes our initial
incumbent solution and its L ma x value the initial U B. The incumbent solution and the
UB are updated as better solutions are found throughout the course of the procedure.
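The initial upper bound described above can be sketched as follows; the EDD construction, the adjacent-interchange loop and all names are our own hedged reading of the text, with releases and setups handled via a caller-supplied setup function as elsewhere in this section.

```python
def initial_upper_bound(jobs, setup):
    """EDD schedule followed by adjacent pairwise interchanges, repeated
    until no adjacent swap reduces L_max; returns the schedule and its
    L_max value (the initial UB)."""
    seq = edd_schedule(jobs, setup)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            trial = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if lateness(trial, setup) < lateness(seq, setup):
                seq, improved = trial, True
    return seq, lateness(seq, setup)

def edd_schedule(jobs, setup):
    """At each point, start the released job with the earliest due date."""
    pending = sorted(jobs, key=lambda j: j["r"])
    seq, t, prev = [], 0, None
    while pending:
        t = max(t, min(j["r"] for j in pending))   # wait for next arrival
        ready = [j for j in pending if j["r"] <= t]
        nxt = min(ready, key=lambda j: j["d"])     # earliest due date first
        t = max(t, nxt["r"]) + setup(prev, nxt) + nxt["p"]
        seq.append(nxt)
        prev = nxt
        pending.remove(nxt)
    return seq

def lateness(seq, setup):
    """L_max of a complete sequence, honouring releases and setups."""
    t, prev, worst = 0, None, float("-inf")
    for j in seq:
        t = max(t, j["r"]) + setup(prev, j) + j["p"]
        worst = max(worst, t - j["d"])
        prev = j
    return worst
```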
We expand the tree by branching on a selected node S at level h. For each job i not in
the partial schedule of S, we add a new node Si to the tree. The first h jobs of the partial
schedule of node Si are those of node S, and the (h + 1)st job is job i.
Whenever we branch on a specific node, we generate all possible nodes that can be
generated from that node. The new nodes generated inherit all characteristics of their
parent nodes. Therefore it is sufficient to keep track of only the active nodes, those
nodes that have not been branched on yet. By keeping these nodes in the order that we
are going to select them, the problem of identifying which node to branch on reduces to
picking the first node in an ordered list.
There are two well-known methods for selecting the next node to branch on (Parker
and Rardin 1988). Depth-first search selects the last node that has been added to the
tree, i.e. the node at the deepest level of the tree. It has the advantage of requiring the
fewest nodes to be kept active at any time, although it may end up processing a
large number of nodes to reach the optimal solution. On the other hand, best-bound
search, which selects the active node with the lowest LB, minimizes the number of nodes
processed, but its memory requirements can be prohibitive since many nodes are active
at any time. We adopt a hybrid of these two methods by generating all possible nodes
that can be generated when branching on a specific node and selecting the node with
lowest LB among those at the deepest level of the tree.
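The hybrid selection rule above can be sketched as a stack-based search in which the children of the branched node are generated all at once and pushed in order of decreasing lower bound, so the next node popped is the lowest-LB node at the deepest level. This is an illustrative skeleton with hypothetical names (`lower_bound` and `evaluate` are supplied by the caller and stand in for the bounds of this section), not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Node:
    sequence: tuple          # partial schedule (job indices, in order)
    lb: float                # lower bound on L_max for any completion

def hybrid_search(jobs, lower_bound, evaluate):
    """jobs: set of job ids; lower_bound(seq) -> LB; evaluate(seq) -> objective."""
    best_seq, ub = None, float("inf")
    stack = [Node((), lower_bound(()))]
    while stack:
        node = stack.pop()
        if node.lb >= ub:                      # fathom by bound
            continue
        if len(node.sequence) == len(jobs):    # fathom by completion
            value = evaluate(node.sequence)
            if value < ub:
                best_seq, ub = node.sequence, value
            continue
        children = [Node(node.sequence + (i,), lower_bound(node.sequence + (i,)))
                    for i in jobs if i not in node.sequence]
        # Sort descending so the child with the lowest LB is popped first.
        children.sort(key=lambda c: c.lb, reverse=True)
        stack.extend(children)
    return best_seq, ub
```

With a valid lower bound the search is exact; with the trivial bound `lambda s: 0` it degenerates to exhaustive enumeration, which is convenient for testing on toy instances.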
We fathom a partial schedule in two different ways. A node is fathomed by
completion of a solution when it represents a full schedule, since it cannot be expanded
any further. If its objective function value is less than the UB, we have a solution that is
better than any solution found so far, and we update the incumbent solution and the
UB. A node is fathomed by bound if its LB is greater than the current UB. This indicates
that expanding the tree from that node can only give us solutions inferior to what we
already have. If a new incumbent solution is found, all nodes with LBs larger than the
new UB are eliminated from the list of active nodes for the same reason.
The LBs we use are derived from the results of Potts (1980) and Carlier (1982) for the
1/r_j/L_max problem. They show that for any subset S of the set N of jobs to be scheduled,

    min_{j ∈ S} {r_j} + Σ_{j ∈ S} p_j − max_{j ∈ S} {d_j}     (1)

is a lower bound on the optimal L_max of the problem, and that this bound is tightened by taking the
maximum over all possible subsets S. This bound also applies to the 1/r_j, s_ij/L_max
problem since we can only do worse by inserting setup times into the schedule. Since for
each job j scheduled, we incur a setup time of at least s_min_j = min_{i ∈ N} {s_ij}, we can tighten
(1) by adding in the sum of the s_min_j's for all j in the subset S. Therefore,

    min_{j ∈ S} {r_j} + Σ_{j ∈ S} (s_min_j + p_j) − max_{j ∈ S} {d_j}     (2)

becomes a LB for 1/r_j, s_ij/L_max for any subset S of N. The same bound applies when we
are trying to find a LB for a partial schedule during the course of the branch and bound
procedure. For a partial schedule S' with job h scheduled last and with makespan
C_max(S'), where N' is the set of all jobs remaining to be scheduled, the expression

    min_{i ∈ P} {r'_i} + Σ_{i ∈ P} (s_min_i + p_i) − max_{i ∈ P} {d_i}     (3)

where P is any subset of the set of unscheduled jobs, forms a LB on the minimum L_max
value that can be obtained by completing the partial schedule S'. Note that the release
time of any job i in the set P must be updated to r'_i = max {r_i, C_max(S')}, since a job cannot
start before the completion time of the last job scheduled in the partial schedule S'.
Since there are (2^n − 1) such subsets, where n is the number of jobs that remain to be
scheduled, it is not feasible to check all subsets of N'. Therefore we consider only those
P of size 1, 2 and n. When P = N', that is, if the subset consists of all unscheduled jobs, we
can tighten the LB further by using a better lower bound on the amount of setup time
that will be incurred. The minimum amount of setup time that we will incur can be
found by solving a TSP problem where the intercity costs correspond to the sequence-
dependent setup times between jobs. However, since the TSP is NP-hard, solving this
problem to optimality is computationally burdensome. Hence we opt for a lower
bound on the optimal value of the TSP obtained from the assignment problem, which is
polynomially solvable (Balas and Toth 1985). The result is an expression of the form

    min_{i ∈ N'} {r'_i} + S_MIN + Σ_{i ∈ N'} p_i − max_{i ∈ N'} {d_i}     (4)

where S_MIN is a lower bound on the minimum amount of setup that will be incurred,
which forms a LB on the L_max of the partial solution S'.
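The assignment relaxation behind S_MIN can be illustrated as follows: a sequence incurs one setup "into" each job, and relaxing the sequence to an assignment (each job receives a distinct predecessor from the same set, subtours allowed) gives a lower bound on the total setup. The toy sketch below solves the assignment by brute force for clarity; a practical implementation would use a polynomial method such as the Hungarian algorithm, and the function name is hypothetical:

```python
from itertools import permutations

def assignment_setup_bound(setup, jobs):
    """Lower bound on total setup incurred by any sequencing of `jobs`.
    setup[i][j]: setup time when job j immediately follows job i."""
    jobs = list(jobs)
    best = float("inf")
    # Assign each job a distinct predecessor drawn from the same set.
    for pred in permutations(jobs):
        if any(p == j for p, j in zip(pred, jobs)):
            continue                      # a job cannot follow itself
        cost = sum(setup[p][j] for p, j in zip(pred, jobs))
        best = min(best, cost)
    return best
```

Because every feasible sequence induces such a predecessor assignment, the minimum assignment cost can never exceed the setup cost of the best sequence, which is exactly what a lower bound requires.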
Another lower bound on the L_max value that can be achieved by completing a
partial schedule S' is the L_max of the partial schedule itself, which we denote
L_max(S'). Since any schedule that we generate by expanding S' will contain S', its L_max
cannot be less than the L_max of the partial schedule S'. Therefore, for a partial schedule
S', a lower bound on the minimum L_max achievable is found by taking the maximum of
L_max(S'), expressions of the form (3) for all subsets P with 1 or 2 jobs, and the expression
(4).
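The composite bound over L_max(S') and the size-1 and size-2 subsets can be sketched directly from expression (3). The parameter names below (`r`, `p`, `d`, `smin` mapping each unscheduled job to its release time, processing time, due date, and minimum incoming setup) are illustrative, and the full-set bound (4) is omitted for brevity:

```python
from itertools import combinations

def composite_lb(cmax, lmax_partial, unscheduled, r, p, d, smin):
    """Max of L_max(S') and expression (3) over all subsets P of size 1 and 2."""
    lb = lmax_partial
    for size in (1, 2):
        for P in combinations(unscheduled, size):
            # Expression (3): releases are pushed up to the current makespan.
            bound = (min(max(r[i], cmax) for i in P)
                     + sum(smin[i] + p[i] for i in P)
                     - max(d[i] for i in P))
            lb = max(lb, bound)
    return lb
```

Restricting P to small subsets keeps the bound computation polynomial while, per the discussion above, each individual subset still yields a valid lower bound.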
1252 I. M. Ovacik and R. Uzsoy

5. Experimental design
To evaluate the performance of the RHPs, we use two different algorithms as
benchmarks. The first of these is the EDD dispatching rule. Whenever the machine falls
idle, this rule myopically selects the available job with the earliest due date. This rule
has consistently shown itself to outperform other, more complex dispatching rules for
the performance measure of L max (Ovacik and Uzsoy 1992, 1993, Uzsoy et al. 1993). In
addition, Uzsoy et al. (1992) have shown that if the setup times are bounded by the
processing times, this rule has a tight worst-case error bound. The main weakness of
this rule is that it ignores the setup times. To remedy this deficiency, we have augmented
it with a local search procedure that performs adjacent pairwise exchanges to improve
the EDD schedule. We shall refer to this procedure as the EDD-LI procedure. EDD-LI
can never perform worse than EDD, and we would expect it to yield improved
schedules at the expense of moderate increases in computation time.
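The EDD-LI benchmark described above can be sketched as an EDD sort followed by repeated adjacent pairwise interchanges, keeping a swap whenever it strictly reduces L_max. For brevity the sketch represents jobs as (release, processing, due) tuples and ignores setup times in the evaluation, so it illustrates the search pattern rather than the authors' exact implementation:

```python
def lmax(seq):
    """Maximum lateness of a sequence of (release, processing, due) jobs."""
    t, worst = 0, float("-inf")
    for r, p, d in seq:
        t = max(t, r) + p
        worst = max(worst, t - d)
    return worst

def edd_li(jobs):
    seq = sorted(jobs, key=lambda j: j[2])          # EDD order
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if lmax(cand) < lmax(seq):              # strict improvement only
                seq, improved = cand, True
    return seq, lmax(seq)
```

Since swaps are accepted only when they strictly improve L_max, the loop terminates at a local minimum and can never return a worse schedule than plain EDD.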

We have selected these benchmarks because it is extremely difficult to
obtain optimal solutions, or even reliable lower bounds on the optimal solution value,
for this problem. These two rules are, in our experience, representative of approaches
taken to this problem in practice. One of our major results is that these rules often
perform extremely poorly, indicating that the widespread reliance often placed on
dispatching-based procedures may be misplaced for problems with sequence-
dependent setup times.
We compare the dispatching rules discussed above to the RHP with different
combinations of decision parameter values. We represent the forecast window in two
different ways: job-based and time-based. If we assume that the n jobs to be scheduled are
indexed by increasing release times and let S(t) be the set of jobs that have been
scheduled at time t, then using a job-based forecast window, we include the next j jobs
with release time greater than t in the forecast window. More formally, the forecast
window will contain the jobs s+1, s+2, ..., s+j, where job s is the last job that has
arrived, i.e. the highest-indexed job i with r_i ≤ t, and j = min {μ, n − s}, where μ is a
decision parameter determining the maximum number of jobs we allow in the forecast
window at any time. While the job-based approach allows a fixed number of jobs in the
forecast window, the time-based approach allows the jobs that will become available
over a fixed period of time to be in the window, i.e. all jobs i such that r_i ≤ t + T, where T
is the decision parameter denoting the length of the time-based forecast window. For
our experiments we use values of 1, 2, 3 and 4 for μ and 200, 400, 600 and 800 for T. These
values of T correspond to the expected processing and setup time for 1, 2, 3 and 4 jobs,
respectively. We also examine the two extreme cases where we have no visibility (μ = T
= 0) and where we have visibility over the entire horizon (μ = n, T = r_n). These enable us
to examine the effects of having no forward visibility at all and of perfect forward visibility
on the quality of the schedules generated.
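The two window definitions can be written compactly; the sketch below assumes jobs indexed by nondecreasing release time, with hypothetical function names:

```python
def job_based_window(r, t, mu):
    """Indices of the next mu jobs released after time t (job-based window)."""
    s = max((i for i, ri in enumerate(r) if ri <= t), default=-1)
    return list(range(s + 1, min(s + 1 + mu, len(r))))

def time_based_window(r, t, T):
    """Indices of jobs released in the interval (t, t + T] (time-based window)."""
    return [i for i, ri in enumerate(r) if t < ri <= t + T]
```

The job-based window always contains min {μ, n − s} jobs, however distant their releases, while the time-based window may contain any number of jobs but never looks beyond t + T.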
For the parameter K, we use the values of 5 and 10. This parameter is the major
factor determining the computational burden of the procedure by limiting the size of
the largest subproblem solved. The choices of 5 and 10 represent a low and a high value
for this parameter, allowing us to isolate its effect on the performance of the procedures
in the experiments.
For λ, we use values of 1, 2 and 3, corresponding to fixing the schedule of 1, 2 and 3
jobs at any decision point. As λ decreases, the number of subproblems solved, and
therefore the computational burden of the procedure, increases. By assigning a higher
value to λ, i.e. by fixing a larger number of jobs at any decision point, we commit
ourselves to a schedule for a longer period of time, which prevents us from reacting to
events, such as the arrival of an urgent job, that may occur during that time.
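The interaction of K, λ and T can be summarized as a rolling horizon loop: at each decision point, form a subproblem from at most K jobs visible within the forecast window, solve it, and fix only the first λ jobs of its solution. The sketch below is a simplified, hypothetical rendering in which the branch and bound solver of § 4 is replaced by a due-date sort, so it shows the control flow rather than the authors' procedure:

```python
def solve_subproblem(indices, r, p, d, t):
    # Stand-in for the branch and bound algorithm: sequence by due date.
    return sorted(indices, key=lambda i: d[i])

def rolling_horizon(r, p, d, K, lam, T):
    n, t, schedule = len(r), 0, []
    remaining = set(range(n))
    while remaining:
        # Jobs available now or arriving within the time-based window.
        window = [i for i in sorted(remaining) if r[i] <= t + T][:K]
        if not window:                      # idle until the next release
            t = min(r[i] for i in remaining)
            continue
        order = solve_subproblem(window, r, p, d, t)
        for i in order[:lam]:               # fix only the first lam jobs
            t = max(t, r[i]) + p[i]
            schedule.append(i)
            remaining.discard(i)
    return schedule
```

Small λ revises decisions often at the cost of solving more subproblems; large K enlarges each subproblem, which is where the exponential cost of the exact solver enters.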

We apply the different scheduling algorithms to randomly generated problems. The
processing and setup times are taken from a uniform distribution on the interval
[1, 200]. Each job is assigned a release time uniformly distributed over an interval
between time 0 and an upper bound which is the product of a range parameter R and
the expected makespan of the jobs. The range parameter R determines the time period
over which the jobs to be scheduled arrive. An R value of 0 corresponds to the static
problem where all the jobs are available at time 0, and larger values of R correspond to
less frequent arrivals over time. The expected makespan is the product of the number of
jobs and the expected setup and processing time of a job, which in this case is 200 minutes.
For the computational experiments we use R values of 0.6, 0.8, 1.0, 1.2 and 1.4,
corresponding to varying frequencies of job arrivals.
The due date d_i of a job i with release time r_i and processing time p_i is determined as

    d_i = r_i + 2k·p_i

where k is an integer uniformly distributed over the interval [−1, 4]. This way we allow
each job a multiple of its processing time to complete before it is due. The multiplicative
factor 2 serves to include an estimate of setup time in the due-date setting procedure.
Since k can take on negative values, we may have jobs that are already tardy when they
become available. This is often the case in industrial situations, where a job may be
delayed in preceding stages of the manufacturing process. When the problem is solved
as a subproblem in a decomposition procedure, a job may be tardy due to interactions
with other jobs and machines in the job shop problem the decomposition procedure is
attempting to solve.
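The instance generator described above can be sketched as follows; the function name and return layout are illustrative:

```python
import random

def generate_instance(n, R, seed=0):
    """Random instance: processing/setup times uniform on [1, 200],
    releases uniform on [0, R * n * 200], due dates d_i = r_i + 2*k*p_i
    with k an integer uniform on [-1, 4]."""
    rng = random.Random(seed)
    p = [rng.randint(1, 200) for _ in range(n)]
    s = [[rng.randint(1, 200) for _ in range(n)] for _ in range(n)]
    horizon = R * n * 200                   # R times the expected makespan
    r = [rng.uniform(0, horizon) for _ in range(n)]
    d = [r[i] + 2 * rng.randint(-1, 4) * p[i] for i in range(n)]
    return r, p, s, d
```

Note that k = −1 or k = 0 produces jobs that are due at or before their release, reproducing the already-tardy jobs discussed above.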

                                   Values used                    Total

Release time range (R)             0.6, 0.8, 1.0, 1.2, 1.4            5
Number of jobs                     10, 20, ..., 100                  10
Number of combinations                                               50
Problems/combination                                                 20
Total number of problems                                           1000

Table 1. Randomly generated single machine problems.

Parameter and description                     Values used                # of comb.   Total

μ   Forecast window, job-based                0, 1, 2, 3, 4, ∞               6
T   Forecast window, time-based               0, 200, 400, 600, 800, ∞       6          12
K   Max. size of subproblems solved           5, 10                          2           2
λ   Planning horizon                          1, 2, 3                        3           3
Total number of combinations                                                            72

Table 2. Parameter values used for RHP.



We examine problems of sizes ranging from 10 jobs through 100 jobs in 10-job
increments. For each combination of range parameter R and problem size, we
randomly generate 20 problems. Each of the 1000 problems generated is solved using
the EDD and EDD-LI procedures and the 72 different parameter combinations of the
RHP procedure. For each problem, the L_max is calculated and the CPU time to solve
the problem is measured. All algorithms are coded in C and run on a SUN SPARC
workstation. The design of the experiment is summarized in Tables 1 and 2.

6. Results
To evaluate the performance of the benchmarks and the RHPs, we use the ratio of
the average solution value found by each procedure to the average of the best solutions
found for a given problem class. A problem class is characterized by a release time range
R and a problem size (number of jobs) n. We denote this ratio by r(R, n). We define
AVE(R, *), AVE(*, n) and AVE(*, *) to be the average of r(R, n) over all values of n for fixed
R, the average of r(R, n) over all values of R for fixed n, and the average of r(R, n) over all
values of R and n, respectively. MAX(R, *), MAX(*, n) and MAX(*, *) are defined
similarly for the maximum values of r(R, n).
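The per-class ratio can be sketched as follows; the `results[procedure][cls]` layout (a list of L_max values, one per instance) is a hypothetical convention, and the sketch assumes strictly positive best values so the ratio is well defined:

```python
def r_ratio(results, cls):
    """r(R, n): each procedure's average value over the average best value.
    results[proc][cls] is a list of objective values, one per instance."""
    procs = list(results)
    n_inst = len(results[procs[0]][cls])
    # Best value found by any procedure on each instance of the class.
    best = [min(results[p][cls][k] for p in procs) for k in range(n_inst)]
    return {p: sum(results[p][cls]) / sum(best) for p in procs}
```

A ratio of 1.0 means the procedure matched the best solutions on average; the AVE and MAX statistics then aggregate these ratios across classes.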
The first issue to be examined is the performance of EDD and EDD-LI relative to
the RHPs with time-based forecast windows. Table 3 shows the AVE(R, *), AVE(*, n)
and AVE(*, *) values for the different algorithms. The columns marked xx denote the
average results for all RHPs with the same K and λ values. The columns marked 0 and
∞ represent the results from the RHPs with no knowledge and perfect knowledge of all
job arrival times, respectively.
These results show that EDD yields very poor solutions for this problem, being on
average 184% worse than the best solution found, even though a number of
computational studies (Uzsoy et al. 1993) have shown that EDD performs better than
several other dispatching rules. This illustrates the difficulties of evaluating the
performance of dispatching rules against each other: while a given dispatching rule
may perform well relative to other dispatching rules, its performance relative to the
optimum may be extremely poor.
The addition of the local improvement procedure to the EDD rule leads to
dramatic improvements in performance. This is because the local
improvement procedure in effect has perfect visibility of all jobs in the problem, thus
remedying the poor decisions resulting from the myopic nature of EDD. However,
these improved solutions obtained by EDD-LI are still on average 57% worse than the
best solution obtained, indicating how unreliable procedures which guarantee only
local optimality can be. It is also notable how much room for improvement remains
after the improvements over EDD.
Examining the performance of the RHPs, we see that the most significant factor
affecting solution quality is the parameter K, which defines the maximum size of the
subproblems. This effect can be seen clearly when we compare the performance of the
EDD rule, which corresponds to K = 1, λ = 1 and T = 0, with that of the RHPs with K
values of 5 and 10 and the same T and λ values, corresponding to columns 3 and 12 of
Table 3. As K goes from 1 to 5, there is a 151.6% improvement in solution quality.
Increasing K to 10 yields a further improvement of 12.2%. The initial improvement
indicates the benefit of solving the subproblems to optimality rather than using a
myopic heuristic. The small improvement from K = 5 to K = 10 suggests the advantages
of using an optimal procedure myopically, without forward visibility, are limited.
Table 3. AVE(R, *), AVE(*, n) and AVE(*, *) values.
Figure 1. Effect of length of forecast window (T) on RHP performance.

There are clear interactions between T, the length of the forecast window, and K. As
shown in Fig. 1, when K = 5, increasing T has little effect on solution quality since the
future information obtained cannot be taken into account in the subproblems. When
λ = 1, increasing T from 0 to ∞ results in only a 3.1% improvement. However, when
K = 10, extending the forecast window results in a steady, significant improvement,
reaching 20.2% as T increases to ∞. This is because when K is small, the
amount of future information taken into account in the current decision is limited. The
larger K value allows more future-oriented information to be considered, resulting in
superior solutions.
The effects of the forecast window become clear when we compare the RHPs with
time-based forecast windows to those with job-based forecast windows. Figure 2 plots
the AVE(*, *) values for the two families of RHPs. It can be seen that the time-based
procedures consistently outperform the job-based ones. When R is large, the time-
based procedure considers fewer jobs than the job-based procedure, but the jobs it
ignores will be those arriving far into the future. When R is small, the time-based
procedure may consider more jobs than the job-based procedure, allowing it to select
the set K(t) from a larger set of candidates, hence capturing a 'better' set K(t). The job-
based procedure, on the other hand, may ignore urgent jobs that arrive in the near
future, resulting in poor decisions. Since the time-based procedures are consistently
better than their job-based counterparts, we shall focus on the results of the time-based
procedures for the rest of this paper.
The number of jobs fixed at each decision point, λ, also affects solution quality. As λ
increases, solution quality degrades steadily, exhibiting a linear trend. This is illustrated
in Fig. 3 for combinations of K = 5 and 10 with T = 200 and 800. This is because a procedure
with a low value of K uses little future information, resulting in poor schedules for the
subproblems. While for small λ decisions are revised frequently, as λ increases we are
committed to these poor decisions for a longer period of time, resulting in poorer
performance overall.
Figure 2. Performance of job- and time-based forecast windows on RHP performance.

Figure 3. Effect of number of jobs fixed at each decision point (λ) on RHP performance.

Although there are some exceptions, the performance of all procedures degrades
somewhat as the number of jobs increases. However, the RHPs appear to perform
rather more consistently than EDD and EDD-LI, which exhibit a marked degradation
in performance with increasing problem size. This indicates another benefit of the
RHPs: their performance relative to the other procedures improves as problem
size increases. Similar conclusions can be drawn for the effect of the range parameter R
on the performance of EDD and EDD-LI. Both these procedures show declining
performance as R decreases. This is because with a small R, the number of
available jobs for the dispatching rule to choose from is high, and thus a myopic choice
ignoring setup times is more likely to be a poor one.
To evaluate the robustness of the algorithms we use the MAX(R, *), MAX(*, n) and
MAX(*, *) values shown in Table 4. All the RHPs outperform EDD and EDD-LI
significantly in the worst case. The worst of the RHPs outperforms EDD-LI by 38.6%
in the worst case, and the best by 104.2%. This indicates a major strength of the RHPs:
even when they do not yield the best solution, they are unlikely to deviate from it
drastically. Dispatching rules, on the other hand, may yield extremely poor solutions,
as the results for EDD show.

Table 4. MAX(R, *), MAX(*, n) and MAX(*, *) values.
Summarizing our results on solution quality, several conclusions emerge. The first
is that dispatching rules can yield extremely poor solutions in the presence of sequence-
dependent setup times. Even the inclusion of a local improvement procedure does not
remedy these defects. The RHPs with appropriate choices of parameters consistently
yield better solutions than EDD and EDD-LI, both on average and in the worst case.
The RHPs with time-based forecast windows consistently outperform their job-based
counterparts. The performance of both job-based and time-based procedures is
affected by the algorithm parameters in the same way. However, solution quality is not
the only attribute to be considered when selecting a procedure for a problem. The
computational effort required by the algorithm is also an important factor, which must
often be traded off against solution quality. We shall first discuss the computational
burden of the different procedures studied, and then address the issue of the quality/time
tradeoff.
The computational effort required by the RHPs is heavily affected by the choice of
the parameters K, λ and T. The average CPU times for the RHPs are shown in Table 5,
and the maximum times in Table 6. The effect of K is particularly significant, which
follows from the discussion of the complexity of the RHPs in § 3. As K increases from 5
to 10 there is an order of magnitude increase in both average and maximum CPU time.
This is due to the exponential worst-case complexity of the branch and bound
algorithm used to solve the subproblems. The effects of λ and T are weaker, but still
significant. As T increases, the number of jobs considered in a given subproblem, and
thus computation time, increases. As λ increases, the number of subproblems solved
decreases, reducing computation time. The effects of the range parameter R and the
number of jobs are more marked than for solution quality. As R increases, computation
time decreases rapidly since fewer jobs are available in the forecast window. Neither the
average nor the maximum computation time increases exponentially with the number of
jobs, as shown in Fig. 4 for a representative RHP with K = 5, T = 800 and λ = 2. This is
consistent with our analysis of the complexity of the RHPs in § 3.

K    λ     T = 0     200      400      600      800       ∞

5    1      0.73     0.78     0.85     0.91     0.95     1.06
     2      0.65     0.63     0.66     0.68     0.70     0.75
     3      0.63     0.63     0.61     0.62     0.64     0.67
10   1     10.82    12.29    11.67    12.93    17.99    25.17
     2      5.25     7.25     6.63     7.31     7.87    10.98
     3      4.17     4.66     4.94     5.44     7.11     9.40

Table 5. Average CPU times (s/problem).

K    λ     T = 0     200      400      600      800       ∞

5    1      3.02     2.97     2.99     2.80     2.86     2.91
     2      2.45     2.78     2.68     2.69     2.68     2.68
     3      2.92     2.96     2.85     2.88     2.52     2.43
10   1    170.84   151.21   122.02   120.34   184.31   171.39
     2     73.68   103.30    84.41    81.75    83.05    85.69
     3     50.76    56.67    51.48    56.58    79.74    80.69

Table 6. Maximum CPU times (s/problem).

Figure 4. Effect of problem size on average and maximum CPU time (s) for K = 5, T = 800 and λ = 2.

Figure 5. Tradeoff between CPU time and RHP performance.
The tradeoff between solution time and quality is illustrated in Fig. 5. The vertical
axis represents AVE(*, *), and the horizontal axis is the average computation time
required by the procedure. Each point corresponds to an RHP with a specific set of
parameter values. There are a number of procedures which are dominated, in the sense
that another procedure exists which obtains a better solution faster. Once we discard
these points, we have a set of procedures that form the efficient frontier. We can see
diminishing returns on CPU time. Getting within 3.4% of the best solution on average
requires an average of approximately 11 s. Improving this to 2.2% requires approxi-
mately 25 s. The choice of procedure to use depends on the purpose for which the
solution will be used. If we are trying to make a real-time dispatching decision, then a
solution time of 11 s may be acceptable. On the other hand, if we seek a procedure to be
used repeatedly in a decomposition procedure which is itself being used in a real-time
environment, we may seek a faster, slightly less accurate procedure.

7. Conclusions and future directions


In this paper, we present a family of procedures for single-machine problems with
nonsimultaneous arrival times and sequence-dependent setup times, where the
performance measure to be minimized is L_max. This work is significant because it
addresses a problem which has not been extensively studied in the literature to date.
However, our main motivation stems from the fact that these problems arise as
subproblems of a decomposition procedure we have developed to schedule complex
job shops. The decomposition procedure works by dividing an intractable job shop
problem into smaller, more tractable subproblems, developing solutions for the
subproblems, and assembling these into a schedule for the job shop. The effective
implementation of this procedure in real-world environments requires fast procedures
to obtain high-quality solutions to the subproblems.
The rolling horizon procedures we present address the dynamic scheduling
problem by solving a series of smaller subproblems to optimality. The size and number
of the subproblems is determined by algorithm parameters, such as the forecast
window length and the maximum size of the subproblems. These parameters allow us
to describe the tradeoff between solution time and quality explicitly and select the most
appropriate parameter settings for the application at hand. With appropriate
parameter settings, these procedures outperform the best available myopic dispatching
rule by an order of magnitude, and yield solutions that are on average 60% better than a
dispatching rule combined with local search. The maximum solution time for problems
of up to 100 jobs is of the order of 3 min. Thus these procedures represent a substantial
improvement over heuristics commonly used in practice for these problems. Another
important insight from this paper is that dispatching rules, even when combined with
local improvement procedures, are capable of producing very poor solutions,
indicating that the widespread reliance on these methods in practice may be misplaced
in certain circumstances.
We have also developed a branch and bound algorithm to solve this problem to
optimality. While the computational burden of this procedure becomes prohibitive as
problem size increases, the limited size of the subproblems allows us to use it effectively
in the RHPs.
The most important direction for future research is to implement these procedures
in the decomposition procedure which motivated their development. Considerable
computational experimentation will be required to determine what parameter settings
are appropriate for their use in this environment. In earlier experiments (Ovacik and
Uzsoy 1992) we noted that adding a local improvement procedure to a myopic heuristic
used to solve the subproblems significantly improved the performance of the
decomposition procedure. We conjecture that the higher quality solutions obtained
using the RHPs will improve its performance even further.

There are a number of issues to explore to further improve the efficiency of the RHPs. Empirically, problems where arrival times are distributed over a wide interval are easier to solve. For problems which do not have this characteristic, we may be able to exploit the time symmetry of the related makespan problem with delivery times. If the due dates are such that the time-symmetric problem has its arrival times widely distributed, then we may obtain considerable computational savings by applying the RHP to this problem. Another aspect is that very often the subproblems arising in the decomposition methods have precedence constraints between jobs, which could reduce computation time if exploited appropriately.
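The time symmetry mentioned above is the standard inversion between minimizing maximum lateness with release times and minimizing makespan with delivery times: writing each due date as a delivery time qj = D - dj for a suitable constant D, reversing the direction of time swaps the roles of release and delivery times. A minimal sketch, ignoring setup times for clarity; the dictionary field names are illustrative assumptions:

```python
def reverse_instance(jobs):
    # Build the time-symmetric (reversed) instance of a head-body-tail
    # problem: each job has a release time (head) r, processing time p,
    # and delivery time (tail) q; reversing time swaps heads and tails.
    return [{"r": j["q"], "p": j["p"], "q": j["r"]} for j in jobs]

def makespan(jobs, sequence):
    # Delivery-time makespan of a sequence: the maximum over all jobs of
    # (completion time + delivery time).
    t, cmax = 0, 0
    for i in sequence:
        j = jobs[i]
        t = max(t, j["r"]) + j["p"]
        cmax = max(cmax, t + j["q"])
    return cmax
```

Reading a sequence backwards in the reversed instance gives a schedule with the same objective value for the original instance, so when the reversed instance has widely spread release times the RHP can be applied to it instead.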
In summary, rolling horizon procedures provide a promising avenue of attack on a broad family of complex dynamic scheduling problems. When combined with an intelligent exploitation of the structure of the problems at hand, they can yield high quality solutions in very reasonable computation times. For this reason they form a natural building block for decomposition methods for more complex scheduling problems, and have considerable theoretical and practical interest in their own right.
Research is in progress on exploiting these characteristics in such a decomposition
method.

Acknowledgments
This research was partially supported by the National Science Foundation under
Grant No. DDM-9107591 and the Purdue Research Foundation.

Appendix
In order to be able to refer to the problems under study in a concise manner, we shall use the notation of Lageweg et al. (1981), extended to include sequence-dependent setup times. This notation consists of three fields α/β/γ. The first field represents the type of shop (single machine (α = 1), parallel identical machines (α = P), etc.). The second field is used to represent problem characteristics such as precedence constraints, dynamic job arrivals, batch processing machines or special processing time structures. The last field denotes the measure of performance to be optimized. Thus, for example, 1/rj, sij/Lmax represents the problem of minimizing maximum lateness on a single machine where each job j is available at time rj and there are sequence-dependent setup times. Some examples of the notation are as follows:

1//Lmax: minimize Lmax on a single machine with all jobs available simultaneously,
1/sij/Lmax: 1//Lmax with sequence-dependent setup times,
1/rj/Lmax: minimize Lmax on a single machine with job j available at time rj,
1/rj/Cmax: minimize Cmax on a single machine with job j available at time rj,
1/rj, prec/Lmax: 1/rj/Lmax with precedence constraints,
1/rj, pmtn/Lmax: 1/rj/Lmax where preemption of jobs is allowed,
1/rj, prec, sij/Lmax: 1/rj, prec/Lmax with sequence-dependent setup times.
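As a concrete illustration of the objective in 1/rj, sij/Lmax, the maximum lateness of a given job sequence can be evaluated as follows. This is a sketch; the convention that row 0 of the setup matrix holds setups from the initial machine state is an assumption.

```python
def max_lateness(sequence, r, p, d, s):
    # Maximum lateness of `sequence` on a single machine with release
    # times r, processing times p, due dates d and sequence-dependent
    # setup times s; s[0][j] is the setup from the initial state to job j.
    t, prev, lmax = 0, 0, float("-inf")
    for j in sequence:
        t = max(t, r[j]) + s[prev][j] + p[j]   # completion time C_j
        lmax = max(lmax, t - d[j])             # lateness L_j = C_j - d_j
        prev = j
    return lmax
```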

References
ADAMS, J., BALAS, E., and ZAWACK, D., 1988, The shifting bottleneck procedure for job-shop scheduling. Management Science, 34, 391-401.
BALAS, E., and TOTH, P., 1985, Branch and bound methods. In The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, E. L. Lawler, J. K. Lenstra, A. H. G. Rinnooy Kan and D. B. Shmoys (eds) (New York: Wiley).
BAKER, K. R., 1974, Introduction to Sequencing and Scheduling (New York: Wiley).
BAKER, K. R., and SU, Z. S., 1974, Sequencing with due dates and early start times to minimize maximum tardiness. Naval Research Logistics Quarterly, 21, 171-176.
BHASKARAN, K., and PINEDO, M., 1991, Dispatching. In Handbook of Industrial Engineering, G. Salvendy (ed.) (New York: Wiley).
CARLIER, J., 1982, The one-machine scheduling problem. European Journal of Operational Research, 11, 42-47.
CONSILIUM INC., 1988, Short Interval Scheduling System Users Manual. Internal Publication (Mountain View, CA).
FOWLER, J. W., HOGG, G. L., and PHILLIPS, D. T., 1992, Control of multiproduct bulk service diffusion/oxidation processes. IIE Transactions on Scheduling and Logistics, 24, 84-96.
GAREY, M. R., and JOHNSON, D. S., 1979, Computers and Intractability: A Guide to the Theory of NP-Completeness (San Francisco: W. H. Freeman).
GLASSEY, C. R., and WENG, W. W., 1991, Dynamic batching heuristic for simultaneous processing. IEEE Transactions on Semiconductor Manufacturing, 4, 77-82.
HALL, L., and SHMOYS, D., 1992, Jackson's rule for one-machine scheduling: making a good heuristic better. Mathematics of Operations Research, 17, 22-35.
LAGEWEG, B. J., LAWLER, E. L., LENSTRA, J. K., and RINNOOY KAN, A. H. G., 1981, Computer aided complexity classification of deterministic scheduling problems. Research Report BW 138/81 (Amsterdam: Mathematisch Centrum).
LAGEWEG, B. J., LENSTRA, J. K., and RINNOOY KAN, A. H. G., 1976, Minimizing maximum lateness on one machine: computational experience and some applications. Statistica Neerlandica, 30, 25-41.
LAWLER, E. L., 1973, Optimal sequencing of a single machine subject to precedence constraints. Management Science, 19, 544-546.
McMAHON, G., and FLORIAN, M., 1975, On scheduling with ready times and due dates to minimize maximum lateness. Operations Research, 23, 475-482.
MONMA, C. L., and POTTS, C. N., 1989, On the complexity of scheduling with batch setup times. Operations Research, 37, 798-804.
MORTON, T. E., Forward algorithms for forward-thinking managers. In Applications of Management Science, R. L. Schulz (ed.) (Greenwich, CT: JAI Press), pp. 1-55.
OVACIK, I. M., and UZSOY, R., 1992, A shifting bottleneck algorithm for scheduling semiconductor testing operations. Journal of Electronics Manufacturing, 2, 119-134.
OVACIK, I. M., and UZSOY, R., 1993, Exploiting shop floor status information to schedule complex job shops. Journal of Manufacturing Systems, forthcoming.
PARKER, R. G., and RARDIN, R. L., 1988, Discrete Optimization (San Diego: Academic).
PICARD, J. C., and QUEYRANNE, M., 1978, The time-dependent travelling salesman problem and its application to the tardiness problem in one-machine scheduling. Operations Research, 26, 86-110.
POTTS, C. N., 1980, Analysis of a heuristic for one machine sequencing with release dates and delivery times. Operations Research, 28, 1436-1441.
SAHNI, S., and GONZALEZ, T., 1976, P-complete approximation problems. Journal of the Association for Computing Machinery, 23, 555-565.
UNAL, A. T., and KIRAN, A. S., 1992, Batch sequencing. IIE Transactions on Scheduling and Logistics, 24, 73-83.
UZSOY, R., 1993, Decomposition methods for scheduling complex dynamic job shops. Proceedings of the NSF Grantees' Conference, Charlotte, NC, pp. 1253-1257.
UZSOY, R., CHURCH, L. K., OVACIK, I. M., and HINCHMAN, J., 1993, Performance evaluation of dispatching rules for semiconductor testing operations. Journal of Electronics Manufacturing, 3, 95-105.
UZSOY, R., LEE, C. Y., and MARTIN-VEGA, L. A., 1992, Scheduling semiconductor test operations: minimizing maximum lateness and number of tardy jobs on a single machine. Naval Research Logistics, 39, 369-388.
UZSOY, R., MARTIN-VEGA, L. A., LEE, C. Y., and LEONARD, P. A., 1991, Production scheduling algorithms for a semiconductor testing facility. IEEE Transactions on Semiconductor Manufacturing, 4, 270-280.
ZDRZALKA, S., 1992, Preemptive scheduling with release dates, delivery times and sequence independent setup times. Institute of Engineering Cybernetics, Technical University of Wroclaw, Wroclaw, Poland.
