
Int J Adv Manuf Technol (2011) 56:699–710

DOI 10.1007/s00170-011-3206-9

ORIGINAL ARTICLE

Makespan minimization of a flowshop sequence-dependent group scheduling problem

Nasser Salmasi & Rasaratnam Logendran & Mohammad Reza Skandari

Received: 19 May 2009 / Accepted: 25 January 2011 / Published online: 26 February 2011
© Springer-Verlag London Limited 2011

Abstract The flowshop sequence-dependent group scheduling problem with minimization of makespan as the objective (Fm|fmls, Splk, prmu|Cmax) is considered in this paper. It is assumed that several groups with different numbers of jobs are assigned to a flow shop cell that has m machines. The goal is to find the best sequence of processing the jobs in each group and the groups themselves with minimization of makespan as the objective. A mathematical model for the research problem is developed in this paper. As the research problem is shown to be NP-hard, a hybrid ant colony optimization (HACO) algorithm is developed to solve the problem. A lower bounding technique based on relaxing a few constraints of the mathematical model developed for the original problem is proposed to evaluate the quality of the HACO algorithm. Three different problem structures, with two, three, and six machines, are used in the generation of the test problems to test the performance of the algorithm and the lower bounding technique developed. The results obtained from the HACO algorithm and those that have appeared in the published literature are also compared. The comparative results show that the HACO algorithm has a superior performance compared to the best available algorithm, based on a memetic algorithm, with an average percentage deviation of around 1.0% from the lower bound.

Keywords Sequence-dependent group scheduling · Integer programming · Meta-heuristics · Lower bound · Flow shop scheduling

N. Salmasi : M. R. Skandari
Department of Industrial Engineering, Sharif University of Technology, Tehran, Iran
N. Salmasi
e-mail: nsalmasi@sharif.edu
M. R. Skandari
e-mail: r.skandari@ie.sharif.edu
R. Logendran (*)
School of Mechanical, Industrial, and Manufacturing Engineering, Oregon State University, Corvallis, OR 97331, USA
e-mail: Logen.Logendran@oregonstate.edu

1 Introduction
In this paper, it is assumed that several groups with different numbers of jobs are assigned to be processed in a flow shop cell. All jobs that belong to a group require similar setups on the machines. Thus, a major setup is required before processing each group on every machine. It is assumed that the setup time of a group on each machine depends on the immediately preceding group that was processed on that machine. This problem is classified as a flow shop sequence-dependent group scheduling (FSDGS) problem. The
importance of sequence dependent setup time scheduling
problems has been discussed in several studies by Allahverdi et al. [1], Panwalker et al. [2], Wortman [3], and
Schaller et al. [4]. There are many real-world applications
of sequence-dependent scheduling problems. For instance,
painting automobiles with different colors in small batch
sizes is an example of sequence-dependent setup scheduling problems.
Allahverdi et al. [1], Cheng et al. [5], Zhu and Wilhelm
[6], and Allahverdi et al. [7] reported a comprehensive
literature review of scheduling problems by considering
separate setup time. Hejazi and Saghafian [8] performed a
comprehensive literature review on flow shop scheduling
problems. Jordan [9] discussed the extension of a genetic algorithm (GA) to solve the FSDGS problem with two
machines in order to minimize the weighted sum of
earliness and tardiness penalties. Schaller et al. [4] presented branch-and-bound approaches as well as several heuristic algorithms to solve the FSDGS problem with multiple machines with minimization of makespan as the criterion. They suggested applying the result of their best heuristic algorithm as an initial solution for other heuristic algorithms such as tabu search (TS). They also showed that the FSDGS problem with minimization of makespan as the criterion is NP-hard. Franca et al. [10] developed an
algorithm based on GA and a memetic algorithm (MA)
with local search to solve the FSDGS problem with
minimization of makespan as the criterion. They used the
test problems of Schaller et al. [4] to evaluate their heuristic
algorithm. Logendran et al. [11] developed a heuristic
algorithm based on TS to solve the two-machine FSDGS
problems by considering minimization of makespan. They
also developed a lower bounding method to evaluate the
quality of their heuristic algorithm. Hendizadeh et al. [12]
developed a heuristic algorithm based on TS by applying
the concept of elitism and the acceptance of worse moves
from simulated annealing to improve the intensification
and diversification moves to solve the same problem.
Their study also included a comparison of their algorithmic results with those obtained by Franca et al. [10]. The
results showed that the MA algorithm developed by
Franca et al. [10] has a superior performance compared
to the TS algorithm developed by Hendizadeh et al. [12].
Salmasi and Logendran [13] developed several heuristic
algorithms based on TS for solving the FSDGS problem
by considering minimization of makespan. They tested the
suggestion of Schaller et al. [4] about applying the result
of their heuristic algorithm as an initial solution for a
meta-heuristic algorithm. The results of their experiment
showed that there is no significant difference between using the result of the Schaller et al. [4] algorithm as an initial solution and using a random sequence for the groups as well as the jobs that belong to each group as an initial solution.
In this research, among the available meta-heuristics, a
heuristic algorithm developed based upon ant colony
optimization (ACO) is used to solve the problem, and the
results obtained from it are compared with the previously
attempted heuristics. An efficient lower bounding method has
also been developed to evaluate the performance of the
heuristic algorithm by means of enhancing and generalizing
the lower bounding method, previously proposed by
Logendran et al. [11] specifically for two-machine problems.
The goal is to find the best sequence of processing the
jobs in each group as well as the groups themselves with
minimization of makespan (Cmax) as the criterion. Based on Pinedo [14], the problem we investigate can be notated as
Fm|fmls, Splk, prmu|Cmax with the following descriptions:
• Fm denotes a flow shop environment that comprises a series of m (distinct) machines.
• fmls denotes that the jobs are assigned to different groups. Each group (p = 1, 2, ..., g) includes bp jobs (the number of jobs in groups can be different).
• Splk denotes that the problem belongs to sequence-dependent setup time scheduling. The setup time of a group (group l) on each machine (machine k) depends on the immediately preceding group (group p) that was processed on that machine. The setup time of each group on each machine can be different.
• prmu denotes the permutation scheduling assumption that all jobs and groups are processed in the same sequence on all machines. This is the only method used to facilitate production in some industries. For instance, if a conveyor is used to move jobs between machines, then all jobs should be processed in the same sequence on all machines.

Note that each job of a group is assumed to have no separate setup time. If it does, the required setup time can be included in its processing time. We make the well-known group technology assumption that requires the jobs in a group to be scheduled contiguously. The setup process of a machine for a group can be started before a job that belongs to the group is available, a feature commonly known as anticipatory setups in group scheduling.
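To make these assumptions concrete (permutation scheduling, contiguously scheduled groups, and anticipatory setups), the makespan of one candidate solution can be computed as sketched below. This is an illustrative Python sketch, not the authors' implementation; the nested-dict data layout and the use of None for the initial reference state are assumptions of the sketch.

def evaluate_makespan(group_seq, job_seqs, run_time, setup, m):
    # group_seq: ordered list of group ids; job_seqs[g]: ordered job ids of group g
    # run_time[g][j][k]: run time of job j of group g on machine k (k = 0..m-1)
    # setup[prev][g][k]: setup time of group g on machine k when group prev precedes it
    #                    (prev = None denotes the initial reference state)
    machine_free = [0.0] * m      # time at which each machine is released by the previous group
    job_done = [0.0] * m          # completion time of the most recently scheduled job on each machine
    prev = None
    for g in group_seq:
        # anticipatory setup: it may start as soon as the machine is free,
        # even if the group's first job has not yet arrived at that machine
        setup_done = [machine_free[k] + setup[prev][g][k] for k in range(m)]
        for j in job_seqs[g]:
            done = 0.0            # completion of the current job on the previous machine
            for k in range(m):
                start = max(setup_done[k], job_done[k], done)
                done = start + run_time[g][j][k]
                job_done[k] = done
            # no further setup is needed for the remaining jobs of the group
            setup_done = job_done[:]
        machine_free = job_done[:]
        prev = g
    return job_done[m - 1]        # completion time of the last job on the last machine

Any group-sequencing or job-sequencing heuristic for this problem ultimately relies on an evaluation of this kind for every candidate sequence.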

2 Mathematical model
We introduce the concept of slots in the mixed integer linear programming modeling construct. A slot is a position which is to be occupied by one of the groups in order to find the sequence of groups. Thus, each group should be assigned to only one slot and each slot should be dedicated to receiving only one group. In real-world problems, groups can have different numbers of jobs. Because each group can be assigned to any slot, to facilitate the development of the mathematical model, it is assumed that every group has the same number of jobs, comprised of real and dummy jobs. This number is equal to bmax, which is the maximum number of real jobs in a group. If a group has fewer real jobs than bmax, the difference, i.e., bmax minus the number of real jobs, is assumed to be occupied by dummy jobs. The list of indices, parameters, decision variables, and the mathematical model are presented below:
Indices and parameters:

g       Number of groups
m       Number of machines
b_p     Number of jobs in group p, p = 1, 2, ..., g
b_max   Maximum number of jobs in a group
M       A large number
t_pjk   Run time of job j of group p on machine k if the job is real; 0 if it is a dummy job (p = 1, ..., g; j = 1, ..., b_max; k = 1, ..., m)
t'_pjk  Run time of job j of group p on machine k if the job is real; M if it is a dummy job
S_plk   Setup time of group l on machine k if group p is the immediately preceding group (p = 0, 1, ..., g, where p = 0 denotes the reference start state; l = 1, ..., g; k = 1, ..., m)
T_pk    Total processing time of the jobs of group p on machine k, T_pk = Σ_{j=1}^{b_p} t_pjk

Decision variables:

X_ijk   Completion time of job j of the group assigned to slot i on machine k (i = 1, ..., g; j = 1, ..., b_max; k = 1, ..., m)
W_ip    1 if group p is assigned to slot i, 0 otherwise (i, p = 0, 1, ..., g; slot 0 is occupied by the reference group, i.e., W_00 = 1)
Y_ijq   1 if job q is processed after job j in slot i, 0 otherwise (i = 1, ..., g; j, q = 1, ..., b_max; j ≠ q)
C_ik    Completion time of slot i on machine k (i = 1, ..., g; k = 1, ..., m; C_0k is taken to be 0)
O_ik    Setup time of the group assigned to slot i on machine k (i = 1, ..., g; k = 1, ..., m)
A_ipl   1 if group p is assigned to slot i and group l is assigned to slot i + 1, 0 otherwise (i = 0, 1, ..., g − 1; p = 0, 1, ..., g; l = 1, ..., g; p ≠ l)

Model:

Minimize Z = C_gm                                                                     (1)

Subject to:

Σ_{p=1}^{g} W_ip = 1                    i = 1, ..., g                                 (2a)

Σ_{i=1}^{g} W_ip = 1                    p = 1, ..., g                                 (2b)

Σ_{p=0}^{g} Σ_{l=1, l≠p}^{g} A_ipl = 1  i = 0, 1, ..., g − 1                          (3)

A_ipl ≤ W_ip                            i = 0, ..., g − 1; p = 0, ..., g; l = 1, ..., g; p ≠ l     (4a)

A_ipl ≤ W_{i+1,l}                       i = 0, ..., g − 1; p = 0, ..., g; l = 1, ..., g; p ≠ l     (4b)

O_ik = Σ_{p=0}^{g} Σ_{l=1, l≠p}^{g} A_{i−1,p,l} S_plk     i = 1, ..., g; k = 1, ..., m            (5)

C_i1 = C_{i−1,1} + O_i1 + Σ_{p=1}^{g} W_ip T_p1           i = 1, ..., g                           (6)

X_ijk ≥ C_{i−1,k} + O_ik + Σ_{p=1}^{g} W_ip t_pjk         i = 1, ..., g; j = 1, ..., b_max; k = 1, ..., m          (7)

X_iqk ≥ X_ijk + Σ_{p=1}^{g} W_ip t'_pqk − M (1 − Y_ijq)   i = 1, ..., g; j, q = 1, ..., b_max, j < q; k = 1, ..., m  (8)

X_ijk ≥ X_iqk + Σ_{p=1}^{g} W_ip t'_pjk − M Y_ijq         i = 1, ..., g; j, q = 1, ..., b_max, j < q; k = 1, ..., m  (9)

X_ijk ≥ X_{i,j,k−1} + Σ_{p=1}^{g} W_ip t_pjk              i = 1, ..., g; j = 1, ..., b_max; k = 2, ..., m          (10)

C_ik ≥ X_ijk                                              i = 1, ..., g; j = 1, ..., b_max; k = 1, ..., m          (11)

X_ijk, C_ik, O_ik ≥ 0;   W_ip, A_ipl, Y_ijq ∈ {0, 1}

Again, there are g slots and each group should be assigned to one of them. Constraints (2a) and (2b) ensure
that each slot should contain just one group and every
group should be assigned to only one slot. Constraint (3) is
incorporated in the model to ensure that the setup time of a
group on a machine is dependent on that group and the
group processed immediately preceding the group. Constraints (4a) and (4b) ensure that if group p is assigned to
slot i and group l is assigned to slot i+1, then Aipl must be
equal to one. Likewise, if group p is not assigned to slot i or
group l is not assigned to slot i+1, then Aipl must be equal
to zero. Constraint (5) evaluates the required setup time of
groups on machines. The required setup time for a group on
a machine is evaluated based on the group assigned to a slot
and the group assigned to the preceding slot. The
completion time of the group assigned to a slot on the first
machine is evaluated in constraint (6). The completion time
of a group assigned to a slot is equal to the summation of
the completion time of the group assigned to the preceding
slot, the required setup time for the group in this slot, and
the sum of the processing time of all jobs in that group.
Constraint (7) is incorporated in the model to find the
completion time of jobs on machines. The completion time
of a job that belongs to a group is greater than or equal to
the summation of the completion time of the group
processed in the previous slot, the setup time for the group,
and the processing time of the job. Constraints (8) and (9) are either/or constraints. They are incorporated in
the model to find the sequence of processing jobs that
belong to a group. If job j in a group is processed after job q
of the same group, then the difference between the
completion time of job j and job q on all machines should
be greater than or equal to the processing time of job j.
Constraint (10) is incorporated in the model to ensure that a machine can start processing a job only if its processing
has already been completed on the previous machine. It
means that the completion time of a job on a machine
should be greater than or equal to the summation of the
completion time of that job on the preceding machine plus
the processing time of the job on the current machine.
Constraint (11) is incorporated in the model to ensure that
the completion time of a group on a machine is equal to the
completion time of the last job of the group which is
processed by the machine.
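To make the role of the assignment variables concrete, the following small check (an illustrative Python sketch with hypothetical data, not part of the paper) shows how, for a fixed slot assignment, the values of A dictated by constraints (3), (4a), and (4b) let constraint (5) recover the setup time of each slot from the sequence-dependent setup table:

g, m = 3, 2
# S[p][l][k]: setup of group l on machine k when group p precedes it (p = 0 is the reference state);
# the numerical values are hypothetical.
S = {p: {l: {k: 10 * p + l + k for k in range(1, m + 1)}
         for l in range(1, g + 1)} for p in range(0, g + 1)}

sequence = [2, 3, 1]                       # group 2 in slot 1, group 3 in slot 2, group 1 in slot 3
W = {(i, p): 0 for i in range(0, g + 1) for p in range(0, g + 1)}
W[0, 0] = 1                                # the reference "group" occupies slot 0
for i, p in enumerate(sequence, start=1):
    W[i, p] = 1                            # constraints (2a)/(2b): one group per slot

# Constraints (3), (4a), (4b) force A[i, p, l] = 1 exactly when p sits in slot i and l in slot i + 1.
A = {(i, p, l): W[i, p] * W[i + 1, l]
     for i in range(0, g) for p in range(0, g + 1) for l in range(1, g + 1) if p != l}

# Constraint (5): O_ik is the setup of the group in slot i given its predecessor in slot i - 1.
O = {(i, k): sum(A[i - 1, p, l] * S[p][l][k]
                 for p in range(0, g + 1) for l in range(1, g + 1) if p != l)
     for i in range(1, g + 1) for k in range(1, m + 1)}

prev = [0] + sequence
assert all(O[i, k] == S[prev[i - 1]][sequence[i - 1]][k]
           for i in range(1, g + 1) for k in range(1, m + 1))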
As the research problem is known to be NP-hard [4],
heuristic algorithms are needed to solve industry-size
problems within a reasonable time. Among the available heuristics, the ACO algorithm is chosen to be compared with the previously proposed meta-heuristics because of the widespread favorable publicity the ACO algorithm has been receiving lately for its capability of solving difficult combinatorial problems. The meta-heuristic algorithm
developed is described in Section 3.


3 Hybrid ant colony optimization algorithm


The ACO algorithm has been applied to flow shop scheduling problems by Gajpal and Rajendran [15]. They applied ACO to solve the problem of scheduling in permutation flowshops with the objective of minimizing the variance of completion times of jobs. Gajpal et al. [16] applied an ACO algorithm to sequence-dependent flow shop job scheduling problems.
In this section, a hybrid meta-heuristic algorithm based
on ACO and the NEH algorithm [17] is proposed, which is
different from the one applied by Gajpal and Rajendran
[15] and Gajpal et al. [16]. A solution to the problem comprises two types of information: the sequence of groups,
and sequence of jobs in each group. For this type of
problem, with makespan as the objective function, the
sequence of groups is more important than the sequence of
jobs in each group, because the setup times are dependent
on the sequence of groups, and the makespan of a solution
is not directly affected by the position of an individual job
in a group. This is not valid for other objective functions
such as due date-related objectives or total flow time,
wherein the sequence of jobs is as important as the
sequence of groups.
In the hybrid ant colony optimization (HACO) algorithm, a meta-heuristic that is a variant of ACO, named the ant colony system, is used for finding the sequence of groups (Section 3.1); for the sequence of jobs, two computationally fast heuristic algorithms are implemented as an integral module of the proposed meta-heuristic (Section 3.2).
3.1 Sequence of groups
ACO is a population-based construction meta-heuristic,
inspired by the foraging behavior of several ant species.
Argentine ants deposit pheromone on the ground, and use
pheromone as an indirect medium of interchanging information to find the shortest path from colony to food. To find the
food, they initially wander, and upon finding food, return to
their colony while laying down pheromone trails. But if other
ants find such a path, they are likely not to keep traveling by
random, but instead they tend to follow the trail, returning
colony and reinforcing the path if they eventually find food.
Over time, however, the pheromone trail evaporates, and its
attractiveness is reduced as a result. The more time it takes for
an ant to travel from colony to food and return, the more the
available time for pheromones to evaporate. Consequently, a
shorter path, in comparison, gets used more and so the
pheromone renews. Thus the pheromone density remains
high. The more pheromone on a path, the higher is the
likelihood of that path being followed by other ants. So
eventually all the ants follow a single path, i.e., the shortest
path from colony to food. The idea of the ant colony algorithm is to mimic this behavior by deploying artificial ants (agents)
that walk around the graph of the combinatorial problem that
is to be solved.
The first work on ACO was Ant System [18] proposed for
solving the traveling salesman problem. In Ant System,
artificial ants are used to solve the problem by constructing
solutions via adding solution components (cities), one by
one, guided by the pheromone intensity and heuristic
information (which is based on intercity distances). Since
the Ant Colony System (ACS), which was proposed by Dorigo and Gambardella [19], has shown better performance than other available versions of ACO on different types of problems, it is used as a part of HACO in this paper. The common mechanics behind every ACO algorithm are:

1. A colony of (a number of) ants that construct solutions independently
2. A solution construction phase that works based on pheromone values and heuristic information
3. Local search that is applied around the best solution(s) found to improve the solution(s)
4. A pheromone update phase that updates the pheromone values based on the quality of the solutions found by the ants, so as to guide the algorithm toward promising solutions
The following are the steps of ACO as pseudo code:

Set parameters, initialize pheromone trails
while termination condition not met do
    Construct Ant Solutions
    Apply Local Search (optional)
    Update Pheromones
end while
3.1.1 Solution construction
A group-sequence solution is a permutation of groups, denoted by π1 = (G1, G2, ..., Gg), in which the absolute position of the groups is important. A solution to the problem can be constructed in two different ways:

1. Adding groups one by one to the sequence, which initially includes only the reference group. For instance, sequence π1 is constructed as follows: G1 immediately follows the reference group, G2 follows G1, and so on.
2. Defining g slots for the g groups, and assigning the groups to the slots. For instance, sequence π1 is constructed as follows: G1 is assigned to the first slot, G2 is assigned to the second slot, and so on.

It is worth mentioning that while the two ways seem different at first glance, there is a one-to-one correspondence between them in interpreting a solution.


3.1.2 Pheromone definition


Two different definitions of pheromone can be developed according to the aforementioned ways of solution construction:

1. τ_pq: desirability of processing group q immediately after group p, p = 0, 1, ..., g; q = 1, 2, ..., g
2. τ'_pq: desirability of processing group q in the pth position of the sequence, p = 1, 2, ..., g; q = 1, 2, ..., g

The first definition can be justified because of the importance of the absolute positions of groups and can be found in Gajpal and Rajendran [15]. The second definition can be justified because of the sequence dependence of setup times. Since the setup times are sequence dependent and hold a large share of the solution's makespan, the second one has an advantage. Suppose that processing group p immediately after group q led to a favorable solution; it is then promising to change the absolute positions of the groups while maintaining this relative order, in order to achieve a better solution. Although we noted the one-to-one correspondence of the construction ways in the previous section, it is vital to acknowledge the difference between utilizing the values of the two pheromone definitions during the construction phase. A high intensity of τ_pq suggests processing group q immediately after group p, while a high intensity of τ'_pq suggests processing group q in the pth position of the sequence. It is easily possible for τ_pq and τ'_pq to be used in different steps of the construction phase.
3.1.3 Heuristic information
To guide the construction algorithm toward promising
areas, in addition to the pheromone mechanism, heuristic
information is deployed in ACO. In the ACO literature, heuristic information indicates an intuitive guess that measures how adding a solution component to the partial solution will affect the final solution's cost (a solution's cost is its objective function value, which is to be minimized in most ant colony optimization algorithms). To coordinate the direction of pheromone and heuristic information, usually the inverse of the guessed cost (or another related function) is used instead. In this problem, the heuristic information for each group that is a candidate to be added to the partial sequence (the partial solution constructed so far) can be defined as the inverse of the differential makespan (Δmakespan_pl) produced by adding the candidate group (group l) to the partial sequence in the pth step of solution construction, as in Eq. 12. In order to calculate the differential makespan, we need to know the sequence of jobs in the groups, so the group-sequencing algorithm needs the job-sequencing algorithm to perform, and they go hand in hand. Therefore, the job-sequencing module is implemented here as a part of the ACO algorithm.

η_pl = 1 / Δmakespan_pl                                                       (12)

3.1.4 Probabilistic model of solution construction


ACO constructs a solution by adding solution components to a null sequence, one after the other, guided by a probabilistic mechanism. In HACO, we use the following pseudorandom proportional rule, derived from ACS, to select the next group q to be added to the partial sequence S_Part from the list of not yet sequenced groups N(S_Part). The rule is a balanced mixture of two different approaches: greed toward the best candidate, and bias toward more favorable candidates, both in terms of pheromone and heuristic values. For the pth group of the sequence, with probability δ0 the best group (in terms of pheromone intensity and heuristic information) is selected, and with probability (1 − δ0) the random model identifies the next group. In other words, with probability δ0 the greed toward the best candidate determines the next group (the first line in Eq. 13), and with probability 1 − δ0 the bias toward more favorable candidates determines the next group (the second line in Eq. 13).

q = { arg max_{l ∈ N(S_Part)} { τ_{p0,l}^{α1} · τ'_{pl}^{α2} · η_{pl}^{β} }    if δ ≤ δ0
    { Q                                                                        otherwise        (13)

In the above formula, p0 is the last sequenced group, p stands for the construction step, δ is a random variable uniformly distributed in [0, 1], δ0 (0 ≤ δ0 ≤ 1) is a parameter, and Q is a random variable selected according to the probability distribution given by Eq. 14. This model assigns to every group a probability of being selected that is proportional to the pheromone and heuristic information values.

P_pq = { τ_{p0,q}^{α1} · τ'_{pq}^{α2} · η_{pq}^{β} / Σ_{l ∈ N(S_Part)} τ_{p0,l}^{α1} · τ'_{pl}^{α2} · η_{pl}^{β}    if q ∈ N(S_Part)
       { 0                                                                                                          otherwise      (14)

In this model, α1, α2, and β indicate the relative importance of sequencing group q immediately after group p0 (i.e., the importance of the first definition of pheromone), the importance of sequencing group q in the pth position of the sequence (the importance of the second definition of pheromone), and the importance of the heuristic information, respectively.
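A minimal sketch of this selection rule (Eqs. 13 and 14) is given below. It is illustrative only and not the authors' code; the two pheromone matrices (tau, tau2), the heuristic values (eta), and the parameter names are assumed to be supplied as shown.

import random

def select_next_group(p0, p, candidates, tau, tau2, eta, a1, a2, beta, delta0):
    # score of each candidate group l: tau[p0][l]^a1 * tau2[p][l]^a2 * eta[p][l]^beta
    score = {l: (tau[p0][l] ** a1) * (tau2[p][l] ** a2) * (eta[p][l] ** beta)
             for l in candidates}
    if random.random() <= delta0:
        # greedy branch of Eq. 13: pick the candidate with the highest score
        return max(score, key=score.get)
    # biased branch: sample a candidate with probability proportional to its score (Eq. 14)
    total = sum(score.values())
    r, acc = random.uniform(0, total), 0.0
    for l, s in score.items():
        acc += s
        if r <= acc:
            return l
    return l  # numerical safety: return the last candidate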

3.1.5 Local search


Before updating the pheromone values and after constructing the solution, a local search can be applied to the best
solution. The integral part of every local search algorithm is
the definition of neighborhood. In HACO, three different
neighborhoods are used to improve the quality of solutions
and the computational efficiency of each was tested:
1. Swapping all possible pairs of groups (jobs)
2. Removing a group (a job) and inserting it at all possible positions
3. Permuting all possible triplets of groups (jobs)
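The first two neighborhood definitions can be sketched as simple generators over a permutation (of groups or of jobs); this is an illustrative rendering, not the authors' code.

def swap_neighbors(seq):
    # Neighborhood 1: swap every possible pair of positions
    for a in range(len(seq) - 1):
        for b in range(a + 1, len(seq)):
            nb = list(seq)
            nb[a], nb[b] = nb[b], nb[a]
            yield nb

def insertion_neighbors(seq):
    # Neighborhood 2: remove one element and reinsert it at every other position
    for a in range(len(seq)):
        rest = seq[:a] + seq[a + 1:]
        for b in range(len(seq)):
            if b != a:
                yield rest[:b] + [seq[a]] + rest[b:]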
After selecting the neighborhood definition, it is time to
apply the local search. An innovative recursive local search algorithm is used in HACO that works as follows. First, it tries to improve the given solution by improving each of its group-wise neighboring solutions that has a better objective function value than the given solution to its neighboring local optimum (by calling the local search algorithm recursively for these solutions), and then replaces the given solution with the best of these group-wise neighboring local optima. Second, it uses the same approach, but this time by improving the job-wise neighboring solutions of the updated solution to their neighboring local optima. Ultimately, the best solution among the job-wise neighboring local optima is returned as the output of the local search algorithm. The pseudo code of the recursive algorithm used for local search in HACO is presented in Appendix 1.
3.1.6 Pheromone update
In ACS, two strategies are used to manipulate the pheromone
values, in order to guide the algorithm: local and offline
pheromone update. The former diversifies the search by
decreasing the pheromone of sequenced groups by each ant
during the construction phase (diversification strategy) and
the latter intensifies the search toward promising areas at the
end of each iteration by the best ant (intensification strategy).
Model (15) is used to perform the local pheromone update:

τ_pq ← (1 − ξ) · τ_pq + ξ · τ0,    ξ ∈ (0, 1)                                   (15)

In (15), ξ is the local pheromone decay coefficient and τ0 is the initial value of the pheromone. Model (16) is used to perform the offline pheromone update:

τ_pq ← { (1 − ρ) · τ_pq + ρ · (1 / C_BestAnt)    if the best ant uses edge (p, q)
       { τ_pq                                     otherwise                      (16)

In (16), ρ is the offline pheromone decay coefficient and C_BestAnt is the cost of the best solution. It is notable that we should update the values of the two noted pheromone definitions independently because of their different ways of interpreting a solution.
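As a brief illustration, the two update rules of Eqs. 15 and 16 amount to the following sketch (with assumed data structures; tau stands for whichever of the two pheromone matrices is being updated):

def local_update(tau, p, q, xi, tau0):
    # Eq. 15: applied by each ant to the (p, q) entry it has just used
    tau[p][q] = (1 - xi) * tau[p][q] + xi * tau0

def offline_update(tau, best_edges, best_cost, rho):
    # Eq. 16: applied once per iteration along the best ant's solution;
    # best_edges holds the (p, q) index pairs used by the best ant,
    # interpreted according to the pheromone definition being updated
    for p, q in best_edges:
        tau[p][q] = (1 - rho) * tau[p][q] + rho * (1.0 / best_cost)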
3.1.7 Implementation details and parameter choices
Experiments show that the burden of calculating the heuristic information during each step of construction is not cost effective, and the saved time can be used to explore the solution domain more efficiently. So, we adopted β as zero. Also, the best results are achieved when adopting α1 and α2 as 2 and 1, respectively, which proves the superiority of the importance of the relative position of groups, i.e., the second definition of the pheromone over the first one. The best values of the parameters ξ, ρ, and τ0 are 0.9, 0.9, and 0.1, respectively.
The vaster the neighborhood is, the more effective the local search will be, but more time will be spent on the local search; so a tradeoff between the expected power of the local search and the computational burden should be made. Results show that only the first two definitions of neighborhood mentioned in Section 3.1.5 are cost effective.
3.2 Sequence of jobs in a group
To find the sequence of jobs inside the groups, two heuristic algorithms are used: a heuristic based on the NEH algorithm [17] and random sequencing. A fixed percentage of the time, the adapted NEH algorithm is used, and the rest of the time the random algorithm is used. Both of these heuristics are simple to implement and fast to execute, which is why they are selected for the job-sequencing module. Because NEH is not iterative, and therefore no new job sequence is constructed unless a new sequence of groups is introduced, we implemented the random algorithm in order to introduce new solutions. The role of the random algorithm is more substantial when the sequence of groups is near optimal. Suppose that the solution is group-wise optimal (i.e., the optimal sequence of groups is found by the group-sequencing part of HACO); if the sequence of jobs suggested by the NEH algorithm is not optimal, we need another algorithm (here, the random algorithm) to try to find the optimum sequence of jobs. The setup times are dependent on the sequence of groups and they hold a large share of the solution's makespan. Consequently, when the solution is not group-wise optimal, it is not so beneficial to have the optimal sequence of jobs for that non-optimal group sequence. The results show that the best sequence of jobs is achieved when the percentage of NEH use is adopted as 70%.
The sequencing of jobs is dependent on the sequencing of groups at each iteration. For the second group (and similarly for the subsequent groups in the sequence), the first machines are released from processing the last job of the preceding group sooner. During this period, setup operations for the imminent group can be performed on the freed machines. Since the original NEH algorithm assumes that all machines are free to process the jobs at time zero, we manipulated the setup times before applying the algorithm. First, for each machine, a value is evaluated as the completion time of the last job (of the preceding group) on that machine minus its completion time on the first machine. This value is then added to the setup time of that machine. Doing so artificially makes the assumption of the NEH algorithm hold true. After updating the setup times, the steps of the original NEH algorithm can be applied. It should be noted that the updated setup times are used only in this step of the algorithm and are not updated globally.
The steps of the original NEH algorithm are:
1. Order the jobs in the decreasing order of sum of
processing times on the machines.
2. Take the first two jobs and schedule them in order
to minimize the partial makespan as if there were
only these two jobs.
3. For the remaining jobs, try to insert the jobs, one by
one, at all possible positions, and select the position
which minimizes the partial makespan, until all
jobs are sequenced.
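The listed steps can be sketched compactly as follows for an ordinary permutation flow shop. This is an illustrative rendering of the original NEH algorithm, not the authors' adapted, setup-adjusted version; p is assumed to be a dict mapping each job id to its list of run times on the m machines.

def flowshop_makespan(order, p, m):
    done = [0.0] * m                      # completion time of the previous job on each machine
    for j in order:
        c = 0.0
        for k in range(m):
            c = max(done[k], c) + p[j][k]
            done[k] = c
    return done[m - 1]

def neh(p, m):
    jobs = sorted(p, key=lambda j: -sum(p[j]))       # step 1: decreasing sum of processing times
    seq = [jobs[0]]
    for j in jobs[1:]:                               # steps 2-3: insert each job at its best position
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: flowshop_makespan(s, p, m))
    return seq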
3.3 Terminating criteria
Different termination criteria such as number of iterations
without improvement, total number of iterations, and CPU
time were tested. For results to be comparable with those of
the best available algorithm, i.e., the one by Franca et al.
[10], a limitation of 30 s of CPU time was adopted as the
terminating criterion.

4 Lower bound
A lower bounding (LB) technique was previously presented by
Logendran et al. [11] for the two-machine FSDGS problem.
In this paper, a variant of this lower bounding technique is
considered by incorporating a new constraint in order to
develop a more enhanced lower bound. More precisely, the
lower bounding technique developed below is intended to
deal with a generalized group scheduling problem with m
machines, while the one proposed by Logendran et al. [11]
was limited to two-machine problems only. The lower
bounding technique presented below is based on relaxing
the problem from FSDGS to a kind of flow shop sequence-dependent job scheduling (FSDJS) problem. Every group is considered as an independent job. The processing time of each of these independent jobs (groups) on a machine is equal to the summation of the processing times of the jobs of the group on that machine. The optimal solution of this relaxed problem is a lower bound for the original problem because the possible idle times between processing the jobs that belong to a group on all machines are ignored. Solving the mathematical model of this relaxed problem is much easier than solving the original problem, since the variables and constraints related to the completion time of jobs in a group are relaxed. However, the relaxed problem is still NP-hard (FSDJS problems are NP-hard). Since too many groups are typically not assigned to a cell in practice, the mathematical model can be used to calculate a lower bound for the research problem. The lower bounding model is developed as follows:
Parameters:

G_pk    The minimum processing time of jobs in group p on machine k (p = 1, 2, ..., g; k = 1, 2, ..., m)

The parameters S_plk and T_pk are the same as the ones defined in the original model.

Decision variables:

The decision variables W_ip, C_ik, O_ik, and A_ipl are the ones defined in the original model.

The lower bounding model:

Minimize Z = C_gm                                                                     (17)

Subject to:

Constraints (2a), (2b), (3), (4a), (4b), (5), and (6) of the original model

C_ik ≥ C_{i−1,k} + O_ik + Σ_{p=1}^{g} W_ip T_pk                            i = 1, ..., g; k = 2, ..., m    (18)

C_ik ≥ C_{i−1,k−1} + O_{i,k−1} + Σ_{p=1}^{g} W_ip G_{p,k−1} + Σ_{p=1}^{g} W_ip T_pk    i = 1, ..., g; k = 2, ..., m    (19)

C_ik ≥ C_{i,k−1} + Σ_{p=1}^{g} W_ip G_pk                                   i = 1, ..., g; k = 2, ..., m    (20)

C_ik, O_ik ≥ 0;   W_ip, A_ipl ∈ {0, 1}
The objective function of the lower bounding model as well as the constraints that identify the sequence of groups, i.e., constraints (2a), (2b), (3), (4a), (4b), and (5), and constraint (6), describing the completion time of groups on the first machine, are the same as in the original mathematical model. Three more constraints are incorporated in the above model as follows.
Constraint (18) is incorporated in the model to ensure that the start (processing) time of a group on each machine, other than the first machine, depends on both the availability of the machine and the jobs in the group. To check the availability of the machine, it is clear that the completion time of each group on each machine except the first machine should be greater than or equal to the completion time of the previously processed group on that machine plus the required setup time and processing time of the group. This constraint guarantees that the completion time of a group assigned to a slot (say i) on a machine (say k) is greater than or equal to the completion time of the group processed in the previous slot (i−1) on machine k plus the setup time and the required processing time of all jobs that belong to the group assigned to slot i. Constraint (19) is incorporated in the model to evaluate the earliest possible time that a group can be processed on a machine by considering the availability of the group. A group is available to be processed on a machine if at least one of its jobs has completed its processing on the immediately preceding machine. It is clear that the earliest time to start processing a group on a machine is the time at which the job with the minimum processing time of the group has been processed on the preceding machine. It is also clear that the completion time of a group assigned to a slot on a machine (say k), except the first machine, should be greater than or equal to the completion time of the preceding group on the previous machine (k−1), plus the required setup time for the group on the previous machine (k−1), the minimum processing time of jobs in the group on the previous machine (k−1), and the total processing time of all jobs of the group on that machine (k). Constraint (20) is incorporated in the model to ensure that, in a real schedule of processing groups, the completion time of a group on a machine is greater than or equal to the completion time of the group on the preceding machine plus the minimum processing time of jobs in the group on that machine.
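To make the structure of this relaxed model concrete, the sketch below encodes constraints (2a)-(6) and (18)-(20) with the open-source PuLP modeler on a hypothetical three-group, two-machine instance with random data. The paper itself solves the model with ILOG CPLEX 9.0 (Section 6), so this is only an illustrative reformulation; the data names and the boundary conditions for slot 0 are assumptions of the sketch.

import pulp
import random

random.seed(1)
g, m = 3, 2                                     # hypothetical toy instance: 3 groups, 2 machines
groups, machines, slots = range(1, g + 1), range(1, m + 1), range(1, g + 1)
T = {(p, k): random.randint(10, 30) for p in groups for k in machines}   # total run time of group p on machine k
G = {(p, k): random.randint(1, 5) for p in groups for k in machines}     # minimum run time among the jobs of group p on machine k
S = {(p, l, k): random.randint(1, 20)                                    # setup of group l on machine k when group p precedes it
     for p in range(0, g + 1) for l in groups if p != l for k in machines}

prob = pulp.LpProblem("FSDGS_lower_bound", pulp.LpMinimize)
W = pulp.LpVariable.dicts("W", [(i, p) for i in range(0, g + 1) for p in range(0, g + 1)], cat="Binary")
A = pulp.LpVariable.dicts("A", [(i, p, l) for i in range(0, g) for p in range(0, g + 1) for l in groups if p != l], cat="Binary")
O = pulp.LpVariable.dicts("O", [(i, k) for i in slots for k in machines], lowBound=0)
C = pulp.LpVariable.dicts("C", [(i, k) for i in range(0, g + 1) for k in machines], lowBound=0)

prob += C[g, m]                                  # objective (17): completion of the last slot on the last machine

# boundary conditions assumed for this sketch: slot 0 holds the reference "group" 0 and finishes at time 0
prob += W[0, 0] == 1
for k in machines:
    prob += C[0, k] == 0
for p in groups:
    prob += W[0, p] == 0
for i in slots:
    prob += W[i, 0] == 0

for i in slots:
    prob += pulp.lpSum(W[i, p] for p in groups) == 1                     # (2a)
for p in groups:
    prob += pulp.lpSum(W[i, p] for i in slots) == 1                      # (2b)
for i in range(0, g):
    prob += pulp.lpSum(A[i, p, l] for p in range(0, g + 1) for l in groups if p != l) == 1   # (3)
    for p in range(0, g + 1):
        for l in groups:
            if p != l:
                prob += A[i, p, l] <= W[i, p]                            # (4a)
                prob += A[i, p, l] <= W[i + 1, l]                        # (4b)
for i in slots:
    for k in machines:
        prob += O[i, k] == pulp.lpSum(A[i - 1, p, l] * S[p, l, k]
                                      for p in range(0, g + 1) for l in groups if p != l)    # (5)
for i in slots:
    prob += C[i, 1] == C[i - 1, 1] + O[i, 1] + pulp.lpSum(W[i, p] * T[p, 1] for p in groups)  # (6)
    for k in range(2, m + 1):
        prob += C[i, k] >= C[i - 1, k] + O[i, k] + pulp.lpSum(W[i, p] * T[p, k] for p in groups)       # (18)
        prob += C[i, k] >= (C[i - 1, k - 1] + O[i, k - 1]
                            + pulp.lpSum(W[i, p] * (G[p, k - 1] + T[p, k]) for p in groups))           # (19)
        prob += C[i, k] >= C[i, k - 1] + pulp.lpSum(W[i, p] * G[p, k] for p in groups)                 # (20)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("lower bound on the makespan:", pulp.value(C[g, m]))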

5 Test problem specifications

Test problems are required to evaluate the performance of the developed heuristic algorithm as well as the LB
technique. Schaller et al. [4] developed test problems which have been used by Franca et al. [10] and Hendizadeh et al. [12] to evaluate their heuristic algorithms. The specifications of these test problems are as follows:

• Processing times are random integers from a discrete uniform (DU) distribution, DU[1, 10].
• Setup times are generated randomly in three different classes in the ranges DU[1, 20], DU[1, 50], and DU[1, 100], respectively.
• The number of groups is between three and ten, and the number of jobs in a group is between one and ten.

Table 1 The setup time of each machine on two-machine problems

Machine   Level 1      Level 2     Level 3
M1        DU[1, 50]    DU[1, 50]   DU[17, 67]
M2        DU[17, 67]   DU[1, 50]   DU[1, 50]

In this research, we have developed different test problems that we believe are more similar to real-world problems. As mentioned before, since the goal is to decompose the production line, there are rarely more than six machines in a cell. Also, there exist real problems in which more than ten groups are assigned to a cell. For these reasons, different sets of test problems are created in this research as follows. In the interest of time, problems with two, three, and six machines in a cell are considered.

• The processing time of jobs on machines is a random integer from DU[1, 20].
• The main idea of applying group scheduling techniques
in production is to decompose the production line into
small yet independent cells. Thus, in industry too many
groups are not expected to be assigned for processing in
the same cell. Schaller et al. [4] performed their
experiments by considering at most ten groups in a
cell. Using this as a guideline and yet accommodating
the possibility of investigating larger number of groups
in industry settings, the maximum number of groups in
a cell is set equal to 16 in this research. The test
problems are categorized into small-, medium-, and large-size problems based on this factor. The number of
groups is a random integer from DU[1, 5], DU[6, 10],
and DU[11, 16] for small-, medium-, and large-size
problems, respectively.
• Problems with two to ten jobs in a group are
considered in this research. This number is the same
as that used by Schaller et al. [4]. The maximum
number of jobs that belongs to a group in a problem is
considered as a factor. For instance, if in a group
scheduling problem with three groups, groups have
three, six, and nine jobs, respectively, then the problem
is classified as a nine-job problem. This factor has also
three levels. Level 1 includes problems with at most
two to four jobs in a group. Problems with at most five
to seven jobs in a group are classified as level 2, and
finally if one of the groups of a problem includes eight
to ten jobs, then the problem belongs to the third level

based on its number of jobs factor. In other words, the number of jobs in a group is a random integer from
DU[2, 4], DU[5, 7], and DU[8, 10] for small-,
medium-, and large-size problems, respectively, in this
research.
• The experiments performed indicate that the quality
of solutions identified by the meta-heuristic algorithm
strongly depends on the ratio of setup times of
groups on consecutive machines. In other words, if
the ratio of the required setup times of groups on
consecutive machines is changed, then different
heuristic algorithms will have different performances.
Three levels are defined for this factor in this
research to develop test problems. In a sequential
machine pair, if the setup time of the first machine is
significantly less than the setup time of the second
machine, the problem belongs to the first level. If
both machines have almost the same setup times, the
problem belongs to the second level. Finally, if the
setup time of the first machine is significantly greater
than the second machine, the problem is classified as
the third level of this factor. This factor should be
applied to all sequential machine pairs. For instance,
in a three-machine problem, this ratio for M1/M2
and M2/M3 should be compared. Thus, this can be
considered as two separate factors in this problem. The
setup time of groups on each machine for the two-machine problem is shown in Table 1. As discussed, in the first level based on the setup time ratio factor, the setup time of groups on M1 should be less than that on M2. If these setup times are generated from DU[1, 50] and DU[17, 67] for M1 and M2, respectively, then the average ratio of setup times is equal to 0.607 ([(1+50)/2] / [(17+67)/2] = 0.607), which satisfies the condition. The setup
times for other levels are generated similar to this rule
to satisfy the required ratio of each level as well. The
setup times of groups on each machine for problems
with three and six machines are shown in Tables 2 and 3. The setup times shown in Table 2 for the three-machine problem can be applied for both setup time ratio factors. The distribution to generate random setup times
for each machine at each level is chosen based on the
required ratio between setup times. If this technique is
applied for problems with more than three machines, the

Table 2 The setup time of each machine on three-machine problems

Machine   Level 1      Level 2     Level 3
M1        DU[1, 50]    DU[1, 50]   DU[45, 95]
M2        DU[17, 67]   DU[1, 50]   DU[17, 67]
M3        DU[45, 95]   DU[1, 50]   DU[1, 50]

Table 3 The setup time of each machine on six-machine problems

Machine   Level 1        Level 2     Level 3
M1        DU[1, 50]      DU[1, 50]   DU[300, 350]
M2        DU[17, 67]     DU[1, 50]   DU[170, 220]
M3        DU[45, 95]     DU[1, 50]   DU[92, 142]
M4        DU[92, 142]    DU[1, 50]   DU[45, 95]
M5        DU[170, 220]   DU[1, 50]   DU[17, 67]
M6        DU[300, 350]   DU[1, 50]   DU[1, 50]

number of test problems which should be investigated will increase significantly. For instance, for a six-machine problem, because the number of whole-plot factors will increase to seven (the group factor, the job factor, and five factors for the ratios of consecutive machine pairs), if in each cell only two replicates are applied, then it is required to solve 3^7 × 2 = 4,374 problems. This is the correct way to perform the
experiment, but in the interest of time, it was not
practical for this research. Thus, the experiment for
problems with more than three machines is performed by
just applying one factor for the ratio of setup times for all
machine pairs. In this case, only a factor is defined for
the ratio of setup times of machine pairs with three
levels. Level 1 indicates the problems in which the
required setup times for each machine are increased
sequentially. The second level investigates the problems
in which the setup times of all machines are almost
equal. And finally, level three investigates the problems
in which the setup times of machines are decreased from
the first machine to the last machine.
For each problem level generated based on the number
of groups, maximum number of jobs in a group, and the
ratio of setups, two replicates are generated. Thus, the
numbers of test problems for two-, three-, and six-machine problems are as follows:
• Fifty-four test problems for two-machine problems (three levels of groups × three levels of the maximum number of jobs in a group × three levels of setup ratio × two replicates).
• One hundred sixty-two test problems for three-machine problems (three levels of groups × three levels of the maximum number of jobs in a group × three levels of setup ratio for M1/M2 × three levels of setup ratio for M2/M3 × two replicates).
• Fifty-four test problems for six-machine problems (three levels of groups × three levels of the maximum number of jobs in a group × three levels of setup ratio × two replicates).

6 The results
The test problems are solved by the HACO algorithm, coded in the C programming language. The LB technique is also applied to identify a lower bound for the test problems. ILOG CPLEX [20] (version 9.0) is used to solve the lower bounding model. The HACO algorithm and the LB technique are run on a PowerEdge 2650 with a 2.4 GHz Xeon processor and 4 GB of RAM. The results for two-, three-, and six-machine problems are shown in Table 4. This table presents the percentage error obtained for each size of the test problem instances solved with the HACO algorithm and the MA by Franca et al. [10]. The codes developed by Franca et al. [10] are used to solve these test problems. In order to have a fair comparison, the stopping criteria for the HACO and MA algorithms are set to 30 s of CPU time for each test problem. The error percentage is calculated as (the heuristic algorithm's solution value − the lower bound) / the lower bound.
The results show that the average times to solve for a LB
of the test problems are 10.2, 65.4, and 4,720 s for two-,
three-, and six-machine problems, respectively.
Table 4 The results of currently available algorithms

Number of machines   Average time to solve for a LB (s)   Percentage error of the HACO algorithm (%)   Percentage error of the MA by Franca et al. [10] (%)
Two                  10.2                                 0.1                                          0.2
Three                65.4                                 0.7                                          0.7
Six                  4,720                                1.3                                          1.4

In order to compare the performance of the MA and the HACO algorithm in detail, a paired t test is performed. The results are shown in Appendix 2. Based on the results, there is a significant difference between the performance of HACO and MA for two-machine (p value = 0.0595), three-machine (p value = 0.0239), and six-machine (p value = 0.011) problems, respectively. Since the average objective function value of the HACO algorithm is smaller than that of the MA, we can conclude that HACO has a better performance compared to MA for these test problems.
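For reference, a paired comparison of this kind can be reproduced along the following lines, assuming the per-instance percentage errors of the two algorithms are available as equal-length lists; this is a sketch using SciPy, not the authors' analysis script (Appendix 2 shows their original output).

from scipy import stats

def compare_algorithms(ma_errors, haco_errors):
    # both arguments: equal-length sequences of percentage errors, one value per test instance
    t_stat, p_value = stats.ttest_rel(ma_errors, haco_errors)
    mean_diff = sum(ma_errors) / len(ma_errors) - sum(haco_errors) / len(haco_errors)
    return t_stat, p_value, mean_diff   # a positive mean_diff favors HACO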

7 Summary and conclusions


In this research, a hybrid meta-heuristic algorithm based on ACO, namely HACO, is developed for FSDGS
problems. Test problems with two, three, and six
machines are solved by this algorithm. An LB technique
is developed and applied to evaluate the quality of the
HACO algorithm.
The results of the experiment show that the average percentage error of the HACO algorithm for the test problems with two, three, and six machines is 0.1%, 0.7%, and 1.3%, respectively. In a statistical sense, the HACO algorithm proves to have superior performance with regard to quality, recording an average percentage error for each of the three problem types (two, three, and six machines) that is equal to or less than that of the MA.
In summary, the main contributions of this paper are:
• Developed a mathematical model for the research problem.
• Applied an ant colony optimization algorithm for the first time to a sequence-dependent group scheduling problem with minimization of makespan as the criterion, and showed its superior performance compared to the best available meta-heuristic algorithm in the literature, i.e., the MA.
• Generalized the LB proposed by Logendran et al. [11] to multi-stage FSDGS problems and substantially enhanced it by incorporating a new constraint in order to get tighter LBs.

The proposed ACO algorithm can be applied to efficiently solve problems larger in size than the ones used in the test problem section of this research by tuning the parameters of the ACO algorithm. CPLEX might not be able to solve the LB model of the larger size problems optimally, since it is an NP-hard problem. Thus, the quality of the LB might decrease as the size of the test problems increases.

Acknowledgments This research is funded in part by the National Science Foundation (USA) Grant No. DMI-0010118. Their support is gratefully acknowledged. The authors are also thankful to Alexander S. Mendes, University of Newcastle, Callaghan, New South Wales, Australia, for providing the code to solve test problems with MA and GA.

Appendix 1
The pseudo code of the recursive algorithm used for local
search in HACO

Function LocalSearch(inSolution)
{
    // Searching the group-wise neighborhood
    BestSolSofar ← inSolution;
    Do {
        CurrentNeighbourSol ← next group-wise neighboring solution;
        If CurrentNeighbourSol is better than inSolution {
            LocalSearch(CurrentNeighbourSol); // recursively calls the local search function
            If CurrentNeighbourSol is better than BestSolSofar
                BestSolSofar ← CurrentNeighbourSol;
        }
    } Until (all group-wise neighboring solutions are investigated);
    inSolution ← BestSolSofar; // updating the input solution
    // Searching the job-wise neighborhood
    Do {
        CurrentNeighbourSol ← next job-wise neighboring solution;
        If CurrentNeighbourSol is better than inSolution {
            LocalSearch(CurrentNeighbourSol); // recursively calls the local search function
            If CurrentNeighbourSol is better than BestSolSofar
                BestSolSofar ← CurrentNeighbourSol;
        }
    } Until (all job-wise neighboring solutions are investigated);
    inSolution ← BestSolSofar; // updating the input solution
}


Appendix 2
The results of the comparison of the memetic and the hybrid ant colony optimization algorithms based on a paired t test
Two Machine Problems:
data: x: V1 in SDF65 , and y: V2 in SDF65
t = 1.9254, df = 53, p-value = 0.0595
alternative hypothesis: true mean of differences is not equal to 0
95 percent confidence interval: -0.02703954 1.32333583
sample estimates: mean of x - y
0.6481481
Three machine problems:
data: x: V4 in SDF65 , and y: V5 in SDF65
t = 2.2807, df = 161, p-value = 0.0239
alternative hypothesis: true mean of differences is not equal to 0
95 percent confidence interval: 0.06457317 0.89838980
sample estimates: mean of x - y
0.4814815
Six machine problems:
data: x: V7 in SDF65 , and y: V8 in SDF65
t = 2.6359, df = 53, p-value = 0.011
alternative hypothesis: true mean of differences is not equal to 0
95 percent confidence interval: 0.2921935 2.1522509
sample estimates: mean of x - y
1.222222

References

1. Allahverdi A, Gupta JND, Aldowaisian T (1999) A review of scheduling research involving setup considerations. Omega Int J Manag Sci 27:219–239
2. Panwalker SS, Dudek RA, Smith ML (1973) Sequencing research and the industrial scheduling problem. Symposium on the theory of scheduling and its applications. Springer, New York, pp 29–38
3. Wortman DB (1992) Managing capacity: getting the most from your firm's assets. Ind Eng 24:47–49
4. Schaller JE, Gupta JND, Vakharia AJ (2000) Scheduling a flowline manufacturing cell with sequence dependent family setup times. Eur J Oper Res 125:324–339
5. Cheng TCE, Gupta JND, Wang G (2000) A review of flowshop scheduling research with setup times. Prod Oper Manag 9(3):262–282
6. Zhu X, Wilhelm WE (2006) Scheduling and lot sizing with sequence-dependent setups: a literature review. IIE Trans 38:987–1007
7. Allahverdi A, Ng CT, Cheng TCE, Kovalyov MY (2008) A survey of scheduling problems with setup times or costs. Eur J Oper Res 187:985–1032
8. Hejazi SR, Saghafian S (2005) Flowshop-scheduling problems with makespan criterion: a review. Int J Prod Res 43(14):2895–2929
9. Jordan C (1996) Batching and scheduling: models and methods for several problem classes. Springer, Berlin
10. Franca PM, Gupta JND, Mendes AS, Moscato P, Veltink KJ (2005) Evolutionary algorithms for scheduling a flowshop manufacturing cell with sequence dependent family setups. Comput Ind Eng 48(3):491–506
11. Logendran R, Salmasi N, Sriskandarajah C (2006) Two-machine group scheduling problems in discrete parts manufacturing with sequence-dependent setups. J Comput Oper Res 33:158–180
12. Hendizadeh H, Faramarzi H, Mansouri SA, Gupta JND, Elmekkawy TY (2008) Meta-heuristics for scheduling a flowshop manufacturing cell with sequence dependent family setup times. Int J Prod Econ 111:593–605
13. Salmasi N, Logendran R (2008) A heuristic approach for multi-stage sequence-dependent group scheduling problems. J Ind Eng Int 4(7):48–58
14. Pinedo M (2008) Scheduling: theory, algorithms, and systems, 3rd edn. Prentice Hall
15. Gajpal Y, Rajendran C (2006) An ant-colony optimization algorithm for minimizing the completion-time variance of jobs in flowshops. Int J Prod Econ 101:259–272
16. Gajpal Y, Rajendran C, Ziegler H (2006) An ant colony algorithm for scheduling in flowshops with sequence-dependent setup times of jobs. Int J Adv Manuf Technol 30:416–424
17. Nawaz M, Enscore E, Ham I (1983) A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega Int J Manag Sci 11(1):91–95
18. Dorigo M (1992) Optimization, learning, and natural algorithms. PhD thesis, Politecnico di Milano
19. Dorigo M, Gambardella LM (1997) Ant colonies for the traveling salesman problem. BioSystems 43(2):73–81
20. ILOG CPLEX (2003) Release 9.0, Paris, France
