
FAST, EFFECTIVE ALGORITHMS FOR SIMPLE ASSEMBLY LINE BALANCING PROBLEMS


STEVEN T. HACKMAN
Georgia Institute of Technology, Atlanta, Georgia

MICHAEL J. MAGAZINE
University of Waterloo, Waterloo, Ontario, Canada

T. S. WEE
Canadian Pacific Railroad, Montreal, Quebec, Canada

(Received January 1982; revision received September 1987; accepted October 1988)

A simple, fast and effective heuristic for the Simple Assembly Line Balancing Type I problem (minimizing the number of workstations) is proposed. A fast and effective branch-and-bound algorithm, which incorporates this heuristic for use in bounding, is developed. The algorithm introduces heuristic fathoming as a technique for reducing the size of the branch-and-bound tree. Methods to solve the Simple Assembly Line Balancing Type II problem (maximizing the production rate) are also described. Upper bounds on all heuristics for both problems are provided.

An assembly line consists of a sequence of workstations. Each workstation performs a set of tasks; each task is an element of work that is performed nonpreemptively. The execution time, t_k, of each task k, k = 1, 2, ..., n, is known and does not depend on which station performs it, nor on which tasks precede or follow it. Due to technological restrictions, certain tasks must be completed before others are started; these precedence constraints are known. Each workstation is allocated a predetermined amount of time C, termed the cycle time, to finish the tasks assigned to it. Items are finished when all tasks are completed; hence, the system production rate equals 1/C. The Simple Assembly Line Balancing Type I (SALB-I) problem is to determine the minimal number of workstations necessary to maintain the production rate while observing the precedence constraints. The Simple Assembly Line Balancing Type II (SALB-II) problem is to assign tasks to a fixed number of workstations to maximize the production rate while observing the precedence constraints. The SALB-I problem is NP-hard because the partition problem, which is known to be NP-complete (Karp 1972), can be reduced to it. It is unlikely, therefore, that a polynomially bounded optimal algorithm exists. As a consequence, researchers have concentrated primarily on developing heuristics. (See, for example, Arcus 1966, Helgeson and Birnie 1961,

Hoffman 1963, Kilbridge and Wester 1961, Mastor 1970, Moodie and Young 1965, and Tonge 1965.) In this paper, we propose a simple heuristic to solve SALB-I. Extensive computational results show that our heuristic is fast and effective. We also provide a general upper bound on the performance of all known heuristics for SALB-I. Algorithms also have been proposed to find the optimal solution to SALB-I. Branch-and-bound (b&b) methods have been suggested by Assche and Herroelen (1978), Jaeschke (1964), Johnson (1983), Mertens (1967), and Nevins (1972). In this paper, we describe a b&b algorithm that is novel in two respects: first, it incorporates our fast heuristic for use in bounding; second, it introduces heuristic fathoming as a technique to reduce the size of the b&b tree. Extensive computational results show that our b&b algorithm compares favorably to existing b&b algorithms. (See Baybars 1986 for a survey of optimal approaches to solve SALB-I.) Finally, we describe methods to solve SALB-II using our algorithms to solve SALB-I. Helgeson and Birnie were the first to propose solving SALB-II by iteratively solving SALB-I. (Other iterative methods have been proposed by Mastor 1970, Gehrlein and Patterson 1975, and Dar-El (Mansoor) 1973.) They suggested an "arbitrary" and "reasonable" large cycle time as an initial upper bound.

Subject classifications: Inventory/production, approximations/heuristics: analysis of algorithms, heuristics and branch-and-bound. Production/scheduling, flexible manufacturing/line balancing: simple assembly lines.

Operations Research, Vol. 37, No. 6, November-December 1989.


We provide an improved upper bound for SALB-II. Once again, computational results show that our methods proposed for SALB-II perform very well. (This is due, in large part, to the success of our algorithm to solve SALB-I.)

1. HEURISTICS FOR SALB-I

Since SALB-I is NP-hard, many of the optimal algorithms proposed in the literature require an excessive amount of computation. The branch-and-bound algorithm proposed in Section 3 has excellent computational results due, in large part, to its repeated use of the following general heuristic.

Immediate Update First-Fit (IUFF) Heuristic

Step 1. Assign a numerical score n(x) to each task x.
Step 2. Update the set of available tasks (tasks whose immediate predecessors have all been assigned).
Step 3. Assign the available task with the highest numerical score to the first station in which the capacity and precedence constraints will not be violated. Go to Step 2.

The IUFF heuristic represents a class of heuristics because it depends on the numerical score function n used. Table I describes 3 numerical score functions that have been proposed in the literature, along with 5 numerical score functions proposed here. Running time and storage space complexities are provided. We shall use the notation IUFFn, n = 1, 2, ..., 8 to signify which of the 8 functions is used in the IUFF heuristic.
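As a concrete illustration of the three steps above, the following is a minimal sketch of IUFF in Python (our own code, not the authors'); the task times t, cycle time C, immediate-predecessor lists preds, and the numerical score function score are assumed to be given.

```python
def iuff(t, C, preds, score):
    """Immediate Update First-Fit: repeatedly take the available task with the
    highest numerical score and place it in the first station that can hold it."""
    n = len(t)
    unassigned = set(range(n))
    loads = []            # loads[j] = current work content of station j
    station_of = {}       # task -> index of the station it was assigned to
    while unassigned:
        # Step 2: available tasks are those whose immediate predecessors are assigned.
        available = [x for x in unassigned
                     if all(p in station_of for p in preds[x])]
        # Step 3: pick the available task with the highest score.
        x = max(available, key=score)
        # Precedence: x may not be placed before the stations of its predecessors.
        first = max((station_of[p] for p in preds[x]), default=0)
        # First fit: earliest station (from 'first' on) whose capacity is not violated.
        j = first
        while j < len(loads) and loads[j] + t[x] > C:
            j += 1
        if j == len(loads):
            loads.append(0)
        loads[j] += t[x]
        station_of[x] = j
        unassigned.remove(x)
    return station_of
```

Calling iuff(t, C, preds, score) with score set to any of the eight functions of Table I yields the corresponding IUFFn variant.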


The IUFF heuristic modifies the Generalized First-Fit (GFF) heuristic described in Wee and Magazine (1982). In the GFF heuristic all of the available tasks are assigned successively into the first station in which the capacity constraint is not violated, and then updating (Step 2) occurs. (We shall use the notation GFFn, n = 1, 2, ..., 8 to signify which of the 8 functions is used in the GFF heuristic.) The GFF heuristic partially orders the tasks by their levels in the ALB precedence graph; the numerical score function orders the tasks within each level. (Tasks with no predecessors are assigned to level 1; a task is assigned to level i if all of its predecessors have been assigned to levels 1, 2, ..., i - 1, and it cannot be assigned to a lower level.) Suppose that no precedence relation exists between two tasks i and j, but task i is at a higher level than task j. Regardless of the function n(x), the GFF heuristic will never assign task i to an earlier station than task j. The IUFF heuristic is designed to relax this restriction. Both IUFF and GFF differ from the more common heuristic proposed for SALB-I (see Baybars 1985).

Rank-and-Assign (RA) Heuristic

Step 1. Assign a numerical score to each task.
Step 2. Rank tasks from the highest to lowest numerical score.
Step 3. Assign tasks successively into the first station in which both the precedence and capacity constraints are met.

(We shall use the notation RAn, n = 1, 2, ..., 8 to signify which of the 8 functions is used in the RA heuristic.)

Table I
Numerical Score Functions n(x)

1. Positional weight (Helgeson and Birnie). The sum of the task times for x and all tasks that must follow it. Time O(n^c)*, space O(n^2).
2. Reverse positional weight. The sum of the task times for x and all tasks which precede it. Time O(n^c)*, space O(n^2).
3. Number of followers (Tonge 1965). The number of tasks that follow task x. Time O(n^c)*, space O(n^2).
4. Number of immediate followers (Mastor, Tonge). The number of tasks that immediately follow task x. Time O(n + e), space O(n + e).
5. Number of predecessors. The number of tasks that precede task x. Time O(n^c)*, space O(n^2).
6. Work element time. The task time of task x. Time O(n), space O(n + e).
7. Backward recursive positional weight. The sum of the task times for x and all tasks in paths having x as its root. Time O(n + e), space O(n + e).
8. Backward recursive edges. The number of edges in all paths having x as its root. Time O(n + e), space O(n + e).

* n_i(x), i = 1, 2, 3, 5 have the same complexity as computing the transitive closure of an n-node directed graph (Aho, Hopcroft and Ullman 1974). The quest for lower values of c continues; Knuth (1981) shows c ≤ 2.5161.
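For instance, score functions 1 and 3 of Table I can be computed from the immediate-successor lists; a small sketch under our own naming (succs[x] holds the immediate followers of task x):

```python
def followers(x, succs, memo=None):
    """All tasks that must follow task x (its transitive successors)."""
    if memo is None:
        memo = {}
    if x not in memo:
        result = set()
        for y in succs[x]:
            result.add(y)
            result |= followers(y, succs, memo)
        memo[x] = result
    return memo[x]

def positional_weight(x, t, succs):
    """Score 1: task time of x plus the times of all tasks that must follow it."""
    return t[x] + sum(t[y] for y in followers(x, succs))

def number_of_followers(x, succs):
    """Score 3: the number of tasks that follow task x."""
    return len(followers(x, succs))
```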


The RA heuristic does not screen tasks according to their availability; hence, Step 3 checks the precedence and capacity constraints unnecessarily often. (This problem motivated the GFF heuristic.) In the next section, we examine the performance of all 24 heuristics to ascertain which heuristic(s) is best for use in our proposed branch-and-bound algorithm.
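For reference, the level assignment that GFF uses (described above) to partially order the tasks can be computed as follows; this is a minimal sketch in our own notation, with preds[x] the immediate predecessors of task x.

```python
def levels(preds):
    """Assign each task the smallest level consistent with its predecessors:
    tasks with no predecessors get level 1; otherwise 1 + max level of preds."""
    level = {}
    def lev(x):
        if x not in level:
            level[x] = 1 + max((lev(p) for p in preds[x]), default=0)
        return level[x]
    for x in preds:
        lev(x)
    return level
```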

2. PERFORMANCE OF HEURISTICS

The IUFF heuristic provided in this paper as well as those proposed by past researchers (including GFF and RA) possess the following properties.

G1. Every pair of consecutive workstations has a total work content that exceeds C, where the work content of a station is defined to be the sum of the task times for those tasks assigned to it.

G2. Each workstation (except possibly the last) has a work content that exceeds C - t*, where t* = max_k t_k.

Assume that we are given an instance of the SALB-I problem. Let S* denote the optimal number of stations. The following proposition provides general upper bounds on the performance of the aforementioned SALB-I heuristics.

Proposition 1. Let S denote the number of stations determined by an algorithm (heuristic or optimal) which possesses both properties G1 and G2. Then

a. S ≤ 2S* - 1.
b. S ≤ (C/(C - t*)) S* + 1.

Proof. Let S(j) denote the set of tasks assigned to station j by the heuristic. Since the work content of each station cannot exceed C,

S*·C ≥ Σ_{k=1}^{n} t_k = Σ_{j=1}^{S} Σ_{i∈S(j)} t_i.   (1)

Properties G1 and G2 imply that

Σ_{j=1}^{S} Σ_{i∈S(j)} t_i > (S/2)·C if S is even, and > ((S - 1)/2)·C if S is odd,   (2)

and

Σ_{j=1}^{S} Σ_{i∈S(j)} t_i > (S - 1)(C - t*).   (3)

Using the fact that if a > b and a (= 2S*) and b (= S or S - 1) are even integers, then a ≥ b + 2, (1) and (2) prove part a; (1) and (3) prove part b.

Remarks. 1. Proposition 1a shows that 2 is an asymptotic worst-case bound on all heuristics that possess property G1. Queyranne (1985) has shown that this bound cannot be less than 1.5, unless P = NP.
2. Proposition 1b shows that when t*/C is small, all heuristics that possess property G2 provide near-optimal solutions. (See Baybars and Frieze 1986 for results on expected performance.)
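As a small numerical illustration of Proposition 1 (our own example, not from the paper), the two bounds are easy to evaluate:

```python
def prop1_bounds(S_opt, C, t_max):
    """Upper bounds on the number of stations produced by any algorithm
    satisfying G1 and G2 (Proposition 1a and 1b)."""
    bound_a = 2 * S_opt - 1
    bound_b = (C / (C - t_max)) * S_opt + 1
    return bound_a, bound_b

# Example: with C = 10, t* = 2 and an optimum of 5 stations, part (b) already
# guarantees at most 7 stations, while part (a) guarantees at most 9.
print(prop1_bounds(5, 10, 2))   # (9, 7.25)
```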

To ascertain which of the 24 heuristics performs well, comprehensive empirical testing was undertaken. (Talbot, Gehrlein and Patterson 1986 and Johnson 1981 have also performed comprehensive empirical testing.) Twenty-eight problems from the literature were solved along with 25 problems that were randomly generated. The results shown in Table II indicate that several of the heuristics, for instance IUFF6, IUFF7 and RA7, give acceptable solutions for SALB-I, and are suggested for use: 1) in large problems that have to be run repeatedly, or 2) in conjunction with a b&b algorithm, as developed in the next section. All of the heuristics perform much better than the worst-case bound indicated by Proposition 1. In all of the literature-based problems, the number of stations using the heuristics never exceeded the optimal number of stations by more than 2. It was also observed that as t*/C increased, the performance of the heuristics deteriorated. For instance, for the worst of the 24 heuristics (IUFF5), the percentage of workstations that exceeded optimal (averaged over all randomly generated problems) went from 5% when t*/C = 0.3 to 13.33% when t*/C = 1.0.

3. THE BRANCH-AND-BOUND ALGORITHM

3.1. The Branch-and-Bound Network

The branch-and-bound network is a rooted tree. Each node x is a subset of N = {1, 2, ..., n}. The root node is the null set and is called node 0. Let P_x denote the unique path in G with x as its tail, and let T(P_x) denote the union of all tasks identified by the nodes along path P_x. The successors of a node x are the subsets y of N which satisfy the following properties.

P.1. Σ_{i∈y} t_i ≤ C.
P.2. No task j ∈ T(P_x) ∪ y has a predecessor i ∉ T(P_x) ∪ y.
P.3. No subset z satisfying P.1 and P.2 contains y as a proper subset.

The interpretation of the branch-and-bound network is as follows. Nodes represent possible assignments of tasks to workstations.


Table II
Summary of Computational Results for Heuristics

                       Numerical score function
Heuristic         1      2      3      4      5      6      7      8
IUFF   a.        72     48     54     52     32     68     74     60
       b.      2.84   5.71   4.49   4.81   7.58   3.52   2.97   4.77
       c.*    0.549  0.548  0.557  0.142  0.538  0.171  0.184  0.188
GFF    a.        56     44     44     44      +     56     57     45
       b.      4.10   5.71   6.08   5.43      +   4.13   4.16   6.35
       c.*    0.507  0.530  0.494  0.154      +   0.149  0.166  0.165
RA     a.        72      +     56      +      +     64     74     57
       b.      2.84      +   4.78      +      +   4.56   2.97   3.97
       c.*    0.514      +   0.447      +      +   0.141  0.180  0.179

a. Percentage of problems for which the heuristic found an optimal solution.
b. Average percentage of excess stations given by the heuristic.
c. Average computation time over all problems.
* In seconds on an IBM-370, Model 158, including all input and output.
+ These algorithms are equivalent to those with updating.

Each path P_x determines the order of the workstations that already have been assigned. Property P.1 ensures that the cycle time constraint is not violated, property P.2 ensures that no precedence constraints are violated, and property P.3 excludes all obviously nonoptimal assignments to a workstation.

3.2. The Branching Rule

Define the depth of node x, d(x), as the number of nodes in path P_x that excludes the root. For each node x, d(x) equals the number of stations that already have been created for the tasks T(P_x). Let I_x = d(x)·C - Σ_{i∈T(P_x)} t_i denote the accumulated idle time of node x. The (unfathomed) leaf having the smallest accumulated idle time is the new branching node.

3.3. Stopping and Fathoming Rules

Conditioned on the assignment of tasks to workstations as described by a path P_x, we define below functions U_h(x) and L_h(x) which provide upper and lower bounds, respectively, on the number of stations heuristic h will obtain (the specific lower bounds we use in (5) are actually independent of h). These functions will be used to define the stopping and fathoming rules necessary to implement an efficient branch-and-bound algorithm. Let h(T̄(P_x)) denote the number of workstations obtained on the remaining set of tasks T̄(P_x) (the tasks not in T(P_x)) using heuristic h. The functions U_h and L_h are defined as

U_h(x) = d(x) + h(T̄(P_x))   (4)

L_h(x) = d(x) + [(Σ_{i∈T̄(P_x)} t_i)/C]^+   (5)

where [x]^+ denotes the smallest integer greater than or equal to x.

We now describe the obvious stopping rule based on U_h. The constant

n_min = [(Σ_{k=1}^{n} t_k)/C]^+   (6)

represents the number of stations required if there were no precedence constraints; as such, it provides a lower bound on the optimal solution. If there exists a node x for which U_h(x) = n_min, then the algorithm will terminate and provide an optimal solution to the SALB-I problem.

Our fathoming rules are based on dominance tests. Dominance tests are applied to the current active nodes in an effort to determine if they can be fathomed.

Test 1. Fathom node x if a node y exists such that U_h(y) = L_h(y) ≤ L_h(x).

Test 2. Fathom node x if a node y exists such that T(P_x) ⊆ T(P_y) and d(x) ≥ d(y).

Remarks. 1. Since L_h(x) = [(Σ_{i∈N} t_i + I_x)/C]^+, in Dominance Test 1 it is sufficient to check the condition I_x ≥ I_y.
2. If node x is fathomed by Dominance Test 2, then i) d(y) = d(x) and ii) x and y have different predecessors. (The falsehood of either i or ii contradicts property P.3.) Knowledge of this fact saves unnecessary time checking Dominance Test 2.
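A minimal sketch of the quantities behind the stopping and fathoming rules (our own code; t is the full list of task times, remaining the task times not yet assigned along P_x, W their grand total, and I_x, I_y accumulated idle times):

```python
import math

def n_min(t, C):
    """(6): number of stations needed if there were no precedence constraints."""
    return math.ceil(sum(t) / C)

def lower_bound(d_x, remaining, C):
    """(5): stations already opened plus a capacity bound on the remaining work."""
    return d_x + math.ceil(sum(remaining) / C)

def lower_bound_via_idle_time(W, I_x, C):
    """Remark 1: the same bound rewritten as ceil((W + I_x)/C); for fixed total
    work W it depends on the node only through its accumulated idle time I_x."""
    return math.ceil((W + I_x) / C)

def fathomed_by_test_1(I_x, I_y):
    """Dominance Test 1 via Remark 1: if U_h(y) = L_h(y), node x can be fathomed
    whenever its accumulated idle time is at least that of y."""
    return I_x >= I_y
```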


3.4. Heuristic Fathoming Rules

To obtain descendant nodes in the b&b tree, the methods suggested by Gutjahr and Nemhauser (1964) and Schrage and Baker (1978) are modified, as follows. First, a list P of all possible feasible assignments to the next workstation is obtained. Next, nonmaximal feasible assignments are deleted from P. For even modest sized problems, the number of all possible feasible assignments to a workstation will be very large. To keep the size of P to manageable proportions, the following heuristic fathoming rules are included in our b&b algorithm.

Rule 1. If there are more than M feasible assignments, arbitrarily select a of them; the maximal assignments are then obtained from these a feasible assignments, and trimmed to eliminate redundancies. (This could be improved by eliminating those feasible assignments with the most accumulated idle time. This is not reported in the computational results.)

Rule 2. If a branching node has more than m descendant nodes, select the m descendant nodes with the least accumulated idle times. Fathom the other descendant nodes.

Rule 3. In any level of depth, if there are more than m unfathomed nodes (including father nodes), select the m unfathomed nodes with the least accumulated idle times. Fathom all other unfathomed leaves at the same level of depth.

Rule 4. If there are m father nodes at depth i, fathom all of the leaves of depth k, k < i - 1.

Finally, to provide solutions within a reasonable desired execution time, a bound T on the execution time is imposed and given as input to the b&b algorithm. If the accumulated execution time exceeds T, the proposed b&b algorithm returns the least upper bound (among nodes obtained so far) as the number of workstations required.
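For example, Rule 3 amounts to keeping, at each depth of the tree, only the m unfathomed nodes with the least accumulated idle time; a rough sketch (our own code, with each node carrying an idle_time attribute):

```python
def apply_rule3(nodes_at_depth, m):
    """Heuristic fathoming Rule 3: keep the m nodes with the least accumulated
    idle time at a given depth of the b&b tree; fathom the rest."""
    kept = sorted(nodes_at_depth, key=lambda node: node.idle_time)[:m]
    fathomed = [node for node in nodes_at_depth if node not in kept]
    return kept, fathomed
```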

4. PERFORMANCE OF THE BRANCH-AND-BOUND ALGORITHM

The proposed b&b algorithm, including the heuristic fathoming rules, was tested on the 53 problems mentioned earlier. The algorithm was tested using IUFF6; we shall refer to IUFF6 as IUFFD to more easily identify which numerical score function was used. (The "D" stands for "decreasing".) Fifty of the 53 problems tested were guaranteed optimal by the stopping rule (28 problems were guaranteed optimal at node 0). The results demonstrate the important role played by the dominance tests, particularly for problems having large b&b trees: for example, for problem 1 in Table III, 51 out of the 157 nodes generated were fathomed by Dominance Test 1, and an additional 32 nodes were fathomed by Dominance Test 2. The results are all based on heuristic fathoming rule parameters m = 6, a = 12, and M = 125. Since m determines the maximum number of nodes allowed at any level of the b&b tree, we tested how the value of m affects the efficiency of the algorithm. With m = 6, the optimal solution was found in 52 out of the 53 problems. When m was lowered to 2, the algorithm failed to find the optimal solution in only 4 cases. The heuristic fathoming rules (with m = 6) were required in 16 of the problems, and in all but 3 of these the heuristic solution was guaranteed optimal. Heuristic fathoming has a dramatic effect in pruning the size of the b&b tree (see Table III). The effect of heuristic fathoming rules 1 and 2 is not shown because they eliminate nodes without ever placing them in the b&b tree. The choice of IUFFD appears to be a good one.

Table III
Problems That Generated at Least 100 Nodes in the b&b Tree
(Dom 1, Dom 2, Rule 3 and Rule 4 give the numbers of nodes fathomed by Dominance Tests 1 and 2 and Heuristic Fathoming Rules 3 and 4.)

Problem  Nodes generated  Dom 1  Dom 2  Rule 3  Rule 4  Optimal solution  b&b solution  Nodes in ALB graph  Exec. time*
1        157              51     32     41      0       8                 8             34                  0.909
2        244              0      31     149     2       21                21            86                  3.373
3        175              82     18     37      3       21                21            86                  1.633
4        206              0      27     129     2       20                20            62                  2.456
5        149              0      30     63      7       20                20            62                  1.756
6        307              10     75     130     17      20                21            48                  1.933
7        187              0      15     122     2       20                20            56                  2.093

* In seconds on an IBM-370, Model 158, including all input and output operations.

In 34 of the examples, the optimal solution was found (although it was not proved optimal at that point) using this heuristic. To determine the effectiveness of the b&b algorithm when the heuristic does not perform well, we constructed a family of 5 examples (see Figure 1) with the worst-case bound asymptotically equal to 5/3 (the worst example found).

In each of the 5 cases, the optimal solution was found very quickly, generating no more than 6 nodes and guaranteeing the optimal solution at level 1 of the b&b tree. The execution time was only 0.15 seconds, even with 80 work elements.

We compared our proposed b&b algorithm with several b&b algorithms appearing in the literature. We conclude from analyzing their results and ours that: a) their b&b trees are of height at least S* (the optimal number of stations), while the average height of our trees was 0.15 S*; b) the number of nodes generated by their algorithms was significantly greater than ours; and c) they cannot guarantee finding even a near-optimal solution when a modest bound on execution time was imposed.

Schrage and Baker have suggested an efficient implementation of the dynamic programming formulation described in Held, Karp and Shareshian (1963). They provide a labeling technique that attempts to find a unique label for each feasible subset of the problem. The drawback of dynamic programming is that all feasible subsets have to be generated, which leads to inefficiencies for even relatively small problems. For instance, with an imposed limit of 15 seconds execution time, dynamic programming failed to find a solution in 36 out of the 53 problems. Our b&b algorithm solved all of the problems in the allotted time, and in only one instance took more time than the dynamic programming approach. We anticipate that Gutjahr and Nemhauser's shortest path technique will suffer the same problems, as each feasible subset is also represented by a node in the network.

5. SALB-II

In this section, we describe how the methods used to solve SALB-I can be adapted to solve SALB-II. Let g(K), K = 1, 2, ..., denote the optimal value of SALB-II when the number of stations fixed is K, and let f(C), C ≥ t*, denote the optimal value of SALB-I for a given cycle time C. (For ease of notation, we have suppressed the dependence of f and g on the particular instance of SALB-II.) Two methods have been suggested for determining g(K).

Lower Bound Method. Start with a known lower bound CL, and determine f(CL). If f(CL) does not exceed K, then the optimal solution is CL. Otherwise, CL is infeasible: update the new lower bound to CL + 1, and continue the process.

Figure 1. Problems with asymptotic performance bound equal to 5/3. (For each K ≥ 1 and C > 0, the construction gives an SALB-I instance A with S*(A) = K and IUFFD(A) = (5/3)K - (2r + 5)/3, where r ≡ K - 1 (mod 6).)

Upper Bound Method. Start with a known upper bound CU, and determine f(CU). Let Cm denote the maximum work content of the stations obtained that correspond to the cycle time CU, and update the new upper bound to Cm - 1. If f(CU) now exceeds K, then the optimal solution is Cm. Otherwise, update Cm to the maximum work content of the stations that correspond to the cycle time of the current upper bound CU, update the upper bound to Cm - 1, and continue the process.

In either method described above the number of iterations could be as large as CU - CL. Since f(C) is a nonincreasing function, a binary search on the interval [CL, CU] reduces the number of iterations to O(log(CU - CL)).

Any iterative method such as those just described requires as input known upper and lower bounds (CU and CL, respectively) on the solution to SALB-II. The following proposition provides these a priori bounds. In what follows, W = Σ_i t_i denotes the total work content.

Proposition 2. CL(K) ≤ g(K) ≤ CU(K), where

CL(K) = max{t*, [W/K]^+}   (7a)

CU(K) = W if K = 1;  max{t*, 2W/K} if K is even;  max{t*, 2W/(K + 1)} if K is odd, K ≥ 3.   (7b)

Proof. The lower bound is obvious. The upper bound trivially holds when g(K) = t* or when K = 1. Assume, therefore, that g(K) > t* and that K ≥ 2. Equation 2 shows that

f(C) < 2W/C if f(C) is even;  f(C) < 2W/C + 1 if f(C) is odd, f(C) ≥ 3.   (8)

Let ε = min_i t_i. By definition of f and g, and the fact that g(K) > t*, f(g(K) - ε) ≥ K + 1. If f(g(K) - ε) = K + 1, then by substituting g(K) for C and K + 1 for f(g(K) - ε) in (8), and letting ε go to 0, the result follows. Suppose that f(g(K) - ε) > K + 1. Since f(g(K) - ε) is an integer, f(g(K) - ε) ≥ K + 2. Substituting g(K) - ε for C and K + 2 for f(g(K) - ε) in (8), and letting ε go to 0, we see that g(K) ≤ 2W/(K + 1), regardless of the parity of f(g(K) - ε). Hence, (7b) follows.

Remark. If all task times are integers, then we can tighten the bounds, as follows. Using the obvious fact that if a and b are integers and a > b, then a ≥ b + 1, Equation 2 becomes instead

Σ_{j=1}^{S} Σ_{i∈S(j)} t_i ≥ (S/2)(C + 1) if S is even;  ≥ ((S - 1)/2)(C + 1) + 1 if S is odd, S ≥ 3,

or, equivalently,

f(C) ≤ 2W/(C + 1) if f(C) is even;  f(C) ≤ 2(W - 1)/(C + 1) + 1 if f(C) is odd, f(C) ≥ 3.   (9)

Assuming once again that g(K) > t*, f(g(K) - 1) ≥ K + 1. Using a similar argument to the one above, we may substitute g(K) - 1 for C in (9), and obtain

g(K) ≤ [2(W - 1)/K]^- if K is even;  g(K) ≤ [2W/(K + 1)]^- if K is odd, K ≥ 3,   (10)

where [x]^- denotes the largest integer no greater than x.

Each of the methods described above may not find g(K) in a reasonable execution time. Let f_h(C) denote the solution, using heuristic h, to SALB-I with cycle time constraint C; h is assumed to satisfy both properties G1 and G2. The function g_h(K) = min{C: f_h(C) ≤ K} is a natural estimate of g(K). To obtain g_h(K) we modify the two methods just described by calculating f_h(C) instead of f(C). (Both CU(K) and CL(K) still provide upper and lower bounds, respectively, on g_h(K).) However, except for the Lower Bound method, none of the methods guarantee finding the least C for which f_h(C) ≤ K. Each of these methods implicitly assumes that the function f_h is nonincreasing. However, it is possible that C1 < C2 and f_h(C1) ≤ K, but f_h(C2) > K. For example, consider the SALB-II problem shown in Figure 2 with K = 3. The a priori bounds are CL(3) = 60 and CU(3) = 119. (This problem is a modification of the example described in Coffman, Garey and Johnson 1978.) When the Upper Bound method is used with h = IUFFD, at the 18th iteration C = 64, and the solution obtained is shown in Figure 3a. The new upper bound is 61. At the 19th iteration, the solution obtained is shown in Figure 3b. Note that f_h(61) = 4, but f_h(60) = 3, as shown in Figure 3c.

Figure 2. Example of an SALB-II problem. (Numbers in the nodes indicate task times.)

Figure 3. (a) 18th iteration; (b) 19th iteration; and (c) optimal solution.
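The binary search over [CL, CU] described above can be sketched as follows (our own outline, assuming integer cycle times; f may be the exact SALB-I value or a heuristic f_h, though only the exact f is guaranteed nonincreasing):

```python
def salb2_binary_search(f, K, C_lo, C_hi):
    """Find the smallest cycle time C in [C_lo, C_hi] with f(C) <= K,
    assuming f is a nonincreasing function of C."""
    best = None
    while C_lo <= C_hi:
        C = (C_lo + C_hi) // 2
        if f(C) <= K:            # C is feasible: try a smaller cycle time
            best = C
            C_hi = C - 1
        else:                    # C is infeasible: increase the cycle time
            C_lo = C + 1
    return best
```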


Each of the methods described in this section represents a class of algorithms for the SALB-II problem that depends on which heuristic or enumeration procedure is used at each SALB-I iteration. We tested each method on the same data set described in Section 3. When the Binary Search method was used in conjunction with IUFFD, the optimal solution was found in 34% of the cases. On the average, the solution was 3.4% over the optimal cycle time, and was found in less than 1 second in all but 1 case. When the Binary Search method was used in conjunction with our b&b algorithm using IUFFD, the optimal solution was found each time with running time no greater than 60 seconds.

ACKNOWLEDGMENT

This paper is based, in part, on research supported by the Natural Sciences and Engineering Research Council of Canada grant A4124. The comments of the referees and Gilbert Laporte were most helpful in preparing this revision.


REFERENCES

AHO, A. V., J. E. HOPCROFT AND J. D. ULLMAN. 1974. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, Mass.
ARCUS, A. L. 1966. COMSOAL: A Computer Method of Sequencing Operations for Assembly Lines. In Readings in Production and Operations Management. John Wiley and Sons, New York.
ASSCHE, F. V., AND W. S. HERROELEN. 1978. An Optimal Procedure for the Single-Model Deterministic Assembly Line Balancing Problem. Eur. J. Opns. Res. 3, 142-149.
BAYBARS, I. 1985. A Survey of Inexact Algorithms for the Simple Assembly Line Balancing Problem. Graduate School of Industrial Administration Working Paper, Carnegie-Mellon University, Pittsburgh.
BAYBARS, I. 1986. A Survey of Exact Algorithms for the Simple Assembly Line Balancing Problem. Mgmt. Sci. 32, 909-932.
BAYBARS, I., AND A. FRIEZE. 1986. Expected Behavior of Line Balancing Heuristics. IMA J. Math. in Management 1, 1-11.
COFFMAN, E. G., M. R. GAREY AND D. S. JOHNSON. 1978. An Application of Bin-Packing to Multiprocessor Scheduling. SIAM J. on Computing, 1-17.
DAR-EL, E. M. (MANSOOR). 1973. MALB-A Heuristic Technique for Balancing Large Single-Model Assembly Lines. AIIE Trans. 5, 343-356.
GEHRLEIN, W. V., AND J. H. PATTERSON. 1975. Sequencing for Assembly Lines With Integer Task Times. Mgmt. Sci. 9, 1060-1070.
GUTJAHR, A. L., AND G. L. NEMHAUSER. 1964. An Algorithm for the Line Balancing Problem. Mgmt. Sci. 11, 308-315.
HELD, M., R. M. KARP AND R. SHARESHIAN. 1963. Assembly Line Balancing-Dynamic Programming With Precedence Constraints. Opns. Res. 11, 442-459.
HELGESON, W. P., AND D. D. BIRNIE. 1961. Assembly Line Balancing Using the Ranked Positional Weight Technique. J. Ind. Eng. 12, 394-398.
HOFFMAN, T. R. 1963. Assembly Line Balancing With a Precedence Matrix. Mgmt. Sci. 9, 551-562.
JAESCHKE, G. 1964. Eine allgemeine Methode zur Lösung kombinatorischer Probleme. Ablauf und Planungsforschung 5, 133-153.
JOHNSON, R. V. 1981. Assembly Line Balancing Algorithms: Computation Comparisons. Intl. J. Prod. Res. 19, 277-287.
JOHNSON, R. V. 1983. A Branch and Bound Algorithm for Assembly Line Balancing Problems With Formulation Irregularities. Mgmt. Sci. 29, 1309-1324.
KARP, R. M. 1972. Reducibility Among Combinatorial Problems. In Complexity of Computer Computations, R. E. Miller and J. W. Thatcher (eds.). Plenum, New York, pp. 85-104.
KILBRIDGE, M. D., AND L. WESTER. 1961. A Heuristic Method of Assembly Line Balancing. J. Ind. Eng. 12, 292-298.
KNUTH, D. E. 1981. The Art of Computer Programming, Vol. 2, p. 482. Addison-Wesley, Reading, Mass.
MASTOR, A. A. 1970. An Experimental Investigation and Comparative Evaluation of Production Line Balancing Techniques. Mgmt. Sci. 16, 728-745.
MERTENS, P. 1967. Assembly Line Balancing by Partial Enumeration. Ablauf und Planungsforschung 8, 429-433.
MOODIE, C. L., AND H. H. YOUNG. 1965. A Heuristic Method of Assembly Line Balancing for Assumptions of Constant or Variable Work Element Times. J. Ind. Eng. 16, 23-29.
NEVINS, A. J. 1972. Assembly Line Balancing Using Best Bud Search. Mgmt. Sci. 18, 529-539.
QUEYRANNE, M. 1985. Bounds for Assembly Line Balancing Heuristics. Opns. Res. 33, 1353-1359.
SCHRAGE, L., AND K. R. BAKER. 1978. Dynamic Programming Solution of Sequencing Problems With Precedence Constraints. Opns. Res. 26, 444-449.
TALBOT, F. B., W. V. GEHRLEIN AND J. H. PATTERSON. 1986. Comparative Evaluation of Heuristic Line Balancing Techniques. Mgmt. Sci. 32, 430-454.
TONGE, F. M. 1965. Assembly Line Balancing Using Probabilistic Combinations of Heuristics. Mgmt. Sci. 11, 727-735.
WEE, T. S., AND M. J. MAGAZINE. 1982. Assembly Line Balancing as Generalized Bin Packing. Opns. Res. Lett. 1, 56-58.
