
Production Scheduling for Apparel Manufacturing Systems*

Loo Hay Lee, F. H. Abernathy and Y.C. Ho

(to appear in Production Planning & Control)

Abstract

In this paper, we model an apparel manufacturing system characterized by the co-existence of two production lines: a traditional, long lead time production line and a flexible, short lead time production line. Our goal is to find strategies that decide (1) the fraction of the total production capacity to be allocated to each individual line, and (2) the production schedules, so as to maximize the overall profits. In this problem, searching for the best solution is prohibitively expensive in view of the tremendous computing budget involved. Using Ordinal Optimization ideas, we obtained very encouraging results: not only have we achieved a high proportion of "good enough" designs, but the profits obtained are also within a tight margin of a pre-calculated upper bound. Moreover, the computation time is reduced by a factor of about 2000.

Key Words: production scheduling, ordinal optimization, goal softening

*The work reported in this paper is supported in part by NSF grants EEC-94-02384, EID-92-12122, ARO
contracts DAAl-03-92-G-0115, DAAH-04-95-0148, AFOSR contract F49620-95-1, and Alfred P. Sloan
Foundation grant
1. Introduction

In the past twenty years, technological advancements, international competition and market dynamics have had a major impact on the North American apparel manufacturing industry. Conventional analysis predicts that the apparel industry will rapidly collapse and migrate to nations with low labor costs. Although the apparel industry still exists in the United States, intense competition encourages management to develop new production and supply methodologies in order to remain competitive (Abernathy 1995). One key issue involves the allocation of scarce production resources

over competing demands, which is a typical problem in dealing with many complex man-made systems

(technically known as Discrete Event Dynamic Systems (DEDS)) (Cassandras 1993) for which

manufacturing systems are typical examples.

In this paper, we will describe an apparel manufacturing scheduling problem, which is depicted in figure 1.

[Figure 1 diagram: production lines (quick line and regular line) ship to a central distribution center, which supplies the retailer; production schedules, production information, inventory information and sales information flow to production management, which issues the production schedules. Solid arrows denote material flow, dashed arrows information flow.]
Figure 1. Material and information flow chart of apparel manufacturing systems.

There are two different types of production lines: quick lines and regular lines. In a regular production line,

work flows from worker to worker in bundles, with work buffers between each workstation. The work-in-process (WIP) in each buffer is so large that it takes 20 to 25 days for a garment to pass through all operations, even though only 10 to 20 minutes of direct labor content is actually required to assemble the garment. These lines therefore have a long lead time, which is defined as the time from the receipt of the production authorization to the time the products are shipped to the central distribution center. In a quick line, a small

group of workers are cross-trained to perform several sewing operations. The group of workers performs

all of the sewing assembly operations on the apparel item. Operators move from one workstation to another

thereby minimizing the WIP in the production line. The cycle time in the quick line is less than the cycle

time in the regular line; however, since an operator is generally less productive, on average, at several operations than at a single operation, the costs of the quick line are higher. The quick line has been

adopted by a number of firms seeking to increase speed and flexibility of their manufacturing systems.

When apparel items are assembled under either production system, they are generally shipped to a central

distribution center where orders from retailers are filled.

Retailer's weekly demands are generally specific for each of their stores and specific for each item of

apparel. Retail items are almost always specified by Stock Keeping Unit (SKU) which is a particular style,

fabric, and size of an apparel item. A typical jeans manufacturer may make 10,000 to 30,000 distinct SKUs

of jeans in a year. In a given season of the year, the number of SKUs manufactured may still be as high as

10,000. Many apparel manufacturers offer rapid replenishment to retailers in a collection which may be as

large as several hundred SKUs for a given apparel type.

In order to model the demand retailers place on apparel manufacturers, we allow seasonal variation (e.g., peak sales around both Father's Day and Christmas in the shirt market). Random variations in the actual demand account for weekly or daily fluctuations.

The production management team is responsible for making decisions on how to manage the future

production in different production lines in order to optimally supply a retail demand. In practice, when they

make a decision, certain criteria must be considered. First, finished goods inventory is expensive to maintain and should be no higher than necessary to meet demand. Second, satisfying customer demands

is an important strategic requirement. Failing to do so can result not only in lost profits due to reduced

sales, but also may put a manufacturer in danger of losing future market share.

The goal of the production management team is to determine (1) the fraction of the total production capacity, α, to be allocated to each production line; and (2) the scheduling strategy, π, that decides the production schedules so as to maximize the overall manufacturing profits.

A similar problem has been addressed by Tang (1994), but he only manages to solve a problem of 9 different products without seasonal effects. In this paper, we target a more practical problem, i.e., more than 10,000 different products with seasonal trends.

The paper is organized as follows. A formal problem description is presented in section 2. In section 3, we introduce a new optimization concept and show how it helps in solving this problem. Then, in section 4, some experiments and case studies are presented. Finally, we conclude in section 5.

2 Problem Formulation

The goal of this problem is to find α, the ratio of the quick line capacity to the total capacity, and π, the scheduling strategy, so as to maximize the manufacturing profits, which are defined as total revenue less material costs; cut, make and trim costs (CMT costs); inventory and WIP holding costs; and shipping costs.

The scheduling strategy π is the mapping from the information set to weekly production schedules; in other words, it generates the weekly production schedules after collecting all the information (past production schedules, inventory and demand information). In the following sections, we describe the model in detail.

2.1 Demand Models

The demand is weekly and there is no back-ordering. In this paper, we assume that the demand of SKU i at time t, d_i(t), is a truncated Gaussian random variable with mean μ_i(t) and standard deviation σ_i(t), i.e.,

x_i(t) = N(μ_i(t), σ_i(t))

d_i(t) = [x_i(t)]^+ (1)

If we neglect the truncation effect, the average demand of SKU i at time t is roughly equal to μ_i(t).

The coefficient of variation of SKU i, Cv_i(t), is defined as the standard deviation divided by the mean, i.e.,

Cv_i(t) = σ_i(t) / μ_i(t) (2)

In this paper, we assume that the coefficient of variation, Cv_i(t), is constant, and we will write Cv_i from now on.

The following are the definitions of several demand models.

Seasonal Sine Demand

A sine function can be used to model seasonal effects in the average demand, i.e., μ_i(t) = A_i + B_i sin(2πt/T), where T is the period of the seasonal effect, and A_i, B_i are the amplitudes of the function, with A_i > B_i. For shirt manufacturers, the period T is half a year; there are peak sale seasons at Father's Day and Christmas. Using a sine function to model the average demand implies that the changes in the average demand are very smooth.

Seasonal-Impulse Demand

In practice, the seasonal demand can also be modeled by a two-level demand function, which we call impulse demand. This describes the following scenario: peak demands are often introduced by promotions, special holidays, or both. Hence, a sudden jump from low sales to high sales is often

observed at the beginning of a peak sales period. The peak sales are often planned to be roughly

equally spaced along a year and last for a short time (several weeks) compared to the regular selling

period.
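The demand models above can be sampled directly from definition (1). The sketch below is illustrative, not the paper's code (the function name and parameter names are ours); it implements the seasonal-sine case with truncation at zero and a constant coefficient of variation:

```python
import numpy as np

def seasonal_sine_demand(A, B, Cv, T_season, horizon, rng=None):
    """Sample weekly demand d(t) = [x(t)]^+ with x(t) ~ N(mu(t), sigma(t)),
    where mu(t) = A + B*sin(2*pi*t/T_season) and sigma(t) = Cv * mu(t)."""
    rng = np.random.default_rng(rng)
    t = np.arange(1, horizon + 1)
    mu = A + B * np.sin(2 * np.pi * t / T_season)  # seasonal mean, requires A > B
    sigma = Cv * mu                                # constant coefficient of variation
    x = rng.normal(mu, sigma)
    return np.maximum(x, 0.0)                      # truncate: no negative demand

# e.g. a 26-week season (half a year) over a 100-week horizon
d = seasonal_sine_demand(A=1000, B=600, Cv=0.3, T_season=26, horizon=100)
```

The impulse model would replace the `mu` line with a two-level step function that jumps at the planned peak weeks.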

2.2 Production Facilities

There are two different kinds of production lines, quick lines and regular lines; the lead times of which are

denoted respectively by L1 and L2. Both L1 and L2 are assumed to be known and constant, and by definition,

L2 > L1.

The total production capacity is generally limited by the availability of resources such as equipment and

available labor. In this problem, we assume the total capacity CP is equal to the yearly average demand

over all SKUs. The regular working week is usually 5 days, and we allow one day of overtime; therefore,

Maximum Capacity = CP_max = 1.2 CP (3)

As for the minimum capacity, it should clearly be greater than zero, but in most situations the capacity cannot vary greatly from week to week. A reasonable assumption is that we work at least 4 days a week. Therefore,

Minimum capacity limit = CP_min = 0.8 CP (4)

The production schedules of each week should be chosen within these limits

Let ui1(t) be the amount of SKU i to be scheduled on the quick line at time t

and ui2(t) be the amount of SKU i to be scheduled on the regular line at time t

M
u j min u ij (t ) u j max for j = 1, 2 (5)
i =1

where u1max = CPmax , u1min = CPmin,

u2max = (1-) CPmax , and u2min = (1-) CPmin

2.3 Inventory Dynamics

The model assumes that retail demand is replenished from the manufacturer's central distribution center. To

reduce the complexity of the problem, we do not attempt to explicitly investigate the inventory

replenishment policies at the retail stores. Alternatively, we specify the lead time of a production line to

include the actual production lead time and the time from the factory to the distribution center, and use the

term "inventory" to mean the inventories of the distribution center. We assume an immediate weekly

replenishment from the distribution center to each store. Let

Ii(t) : the total inventory of SKU i at time t;

Wi(t) : the total work-in-process (WIP) inventory of SKU i at time t;

Using the above notation, the inventory dynamics can be described as follows:

I_i(t+1) = [I_i(t) − d_i(t)]^+ + Σ_{j=1}^{2} u_ij(t − L_j + 1),  i = 1, ..., M (6)

Here we assume that delivery of finished apparel goods from the production line will arrive at the end of a

week while demand will happen in the middle of the week.

As for the WIP, it is defined as

W_i(t) = Σ_{j=1}^{2} Σ_{k=t−L_j+1}^{t} u_ij(k) (7)
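Recursion (6) is straightforward to express in code. The sketch below is ours, not the paper's; in particular the array layout (`u_hist[j][k]` holding line j's schedule vector for week k, 0-indexed) is an assumption made for illustration:

```python
import numpy as np

def step_inventory(I, d, u_hist, lead_times, t):
    """One week of recursion (6):
    I_i(t+1) = [I_i(t) - d_i(t)]^+ + sum_j u_ij(t - L_j + 1).
    I, d: per-SKU vectors at week t; u_hist[j][k]: line j's schedule at week k
    (0-indexed sketch); lead_times: (L_1, L_2)."""
    arrivals = sum(u_hist[j][t - L + 1]            # goods finishing this week
                   for j, L in enumerate(lead_times)
                   if t - L + 1 >= 0)              # skip weeks before the horizon start
    return np.maximum(I - d, 0.0) + arrivals       # unmet demand is lost (no back-orders)
```

The WIP in (7) is then just the sum of each line's schedules over its last L_j weeks.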

2.4 Cost Matrix

CI : Inventory & WIP holding costs

Cm : Material costs

CLj : Production costs (including the shipping cost) for line j

PS : Sale price of the product

Here, we assume the inventory holding cost to be equal to the WIP holding cost, and that the sale prices of different SKUs of the same product type are the same.

2.5 Problem Formulation

Given the allocation of total production capacity between the quick line and the regular line in terms of α, a scheduling policy π is a sequence of decision functions which, at each time instant, determines how many units of each SKU should be produced by each production line. Formally speaking, it is a function which maps from the information space to a control space, i.e., u(t) = π(z(t)), where z(t) is the information that contains the current inventory level, WIP level and demand distribution, and u(t) is the vector of the production schedules u_ij(t). For a given α and π, J_total(α,π) denotes the total manufacturing profit gained from time t = 1 to time t = Γ, where Γ is the planning horizon, and is calculated as follows. The total profit is equal to total revenues minus the material costs, production costs and inventory holding costs.


J_total(α,π) = Σ_{i=1}^{M} Σ_{t=1}^{Γ} P_S min(I_i(t), d_i(t)) − Σ_{i=1}^{M} Σ_{t=1}^{Γ} Σ_{j=1}^{2} C_m u_ij(t) − Σ_{i=1}^{M} Σ_{t=1}^{Γ} Σ_{j=1}^{2} C_Lj u_ij(t) − Σ_{i=1}^{M} Σ_{t=1}^{Γ} C_I I_i(t) − Σ_{i=1}^{M} Σ_{t=1}^{Γ} C_I W_i(t) (8)

The average weekly manufacturing profit, J(α,π), is given by J_total(α,π) divided by Γ, i.e.,

J(α,π) = (1/Γ) J_total(α,π) (9)
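Objective (8)-(9) can be evaluated directly from simulated trajectories. The following sketch is illustrative (the function and its array conventions are ours); the default cost values are the ones later used in the experiments of section 4:

```python
import numpy as np

def weekly_profit(I, d, u, W, P_S=20.0, C_m=10.0, C_L=(4.4, 4.0), C_I=0.08):
    """Average weekly profit (9) over a horizon of T weeks.
    I, d: (T, M) inventory and demand; u: (T, M, 2) schedules per line;
    W: (T, M) work-in-process."""
    T = I.shape[0]
    revenue = P_S * np.minimum(I, d).sum()                    # sales are capped by inventory
    material = C_m * u.sum()                                  # material cost per unit scheduled
    production = sum(C_L[j] * u[:, :, j].sum() for j in range(2))
    holding = C_I * (I.sum() + W.sum())                       # inventory and WIP at equal rates
    return (revenue - material - production - holding) / T
```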

Since the demand is random, our problem is to find α and π so as to maximize the expected manufacturing profit, i.e.,

max_{π∈Π, α∈[0,1]} E[J(α,π)] (P1)

subject to constraints (1), (5), (6) and (7),

where Π is the collection of all possible scheduling policies π.

2.6 Challenges of the Problems

There exist four nearly insurmountable challenges in this problem.

First, in the apparel manufacturing system, different sizes, colors, or fashions of shirts are considered as

different stock-keeping units (SKUs). There may be over ten thousand different SKUs in the system.

The demand of each SKU varies weekly and exhibits seasonal trends.

Second, since the exact demand is not known in advance, in order to estimate precisely the expected

profit of each strategy, one needs to perform numerous time-consuming and expensive Monte-Carlo

simulations.

Third, the number of applicable strategies is equal to the number of possible production schedules raised to the power of the size of the information space. It is clear that this can be very large even for a

moderately-sized problem.

Fourth, since the neighborhood structure in the strategy space is not known and the performance function cannot be explicitly represented in terms of the strategy, calculus-based and gradient descent algorithms cannot be applied.

Because of these difficulties, obtaining the optimal solution to this problem requires brute-force simulation or large state-space dynamic programming. Therefore, in practice it is impossible to find the optimal scheduling strategy for the system.

3. Our Approach: Ordinal Optimization

As mentioned in the previous section, searching for the best solution is prohibitively expensive in view of the tremendous computing budget involved. However, if we do not insist on finding the optimal solution, i.e., we soften our goal by accepting any good enough solution with high probability, the problem becomes approachable. Notice that we have changed our criterion of optimization. Earlier we had insisted on finding the design which is the best with certainty; this is analogous to hitting a speeding bullet with another bullet. Now, because of goal softening, i.e., seeking good enough solutions with high probability, we are shooting a truck with a shotgun.

Let us define the following:

G = Good enough set (our goal, e.g. top 5 % of the design space)

S = Selected Set (the set that we pick based on simulation results, e.g. top s designs which have

the best simulation results )

|S ∩ G| = alignment

k = minimum alignment level desired

Our problem then becomes: how should we select the set S so that it will include at least k good designs with high probability? This approach is known as Ordinal Optimization, first introduced by Ho et al. (1992).

Ordinal Optimization Approach: instead of estimating the best performance for sure, we settle for a good enough alternative with high probability.

The main contribution of Ordinal Optimization is to reduce the computational burden of the problem by

orders of magnitude (e.g. Ho 1995, Patsis 1997). The key ideas of Ordinal Optimization can be explained

by the following 2 tenets:

(1) "Order" converges exponentially fast, while "value" converges at rate 1/N^{1/2}, where N is the length of the simulation (Dai 1996, Xie 1997).

(2) The probability of getting something "good enough" increases exponentially with the size of the "good enough" set (Lau 1997, Lee 1999).

More importantly, the advantages of (1) and (2) multiply rather than add.

By introducing the concept of ordinal optimization, the difficulties of the original problem can be

overcome by the following ideas.

Although there may be many different SKUs, say 10,000, in choosing the designs we can aggregate these SKUs to an affordable number, say 100 or even 10. Aggregation of SKUs may introduce inaccuracy in estimating the performance values, but by the goal softening argument we can still have high confidence that the alignment between the good enough set and the selected set is high.

Simulation is time consuming, but we can afford to run shorter simulations when the goal is softened.

If the good enough set is defined as the top 5% of the design space, then although the design space is large and structureless, sampling 1,000 designs from it will almost surely capture some good enough designs, because the probability of a sample of 1,000 containing no top-5% design is (0.95)^1000 ≈ 0.
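This blind-sampling argument is a one-line computation (the function name below is ours, for illustration):

```python
def prob_at_least_one_good(n, top=0.05):
    """P(a blind sample of n designs contains at least one top-`top` design),
    assuming independent uniform sampling from the design space."""
    return 1.0 - (1.0 - top) ** n
```

For n = 1,000 and the top 5%, the miss probability 0.95**1000 is on the order of 10^-23, i.e., the sample contains a good enough design essentially with certainty.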

By using the ideas above, the following algorithm was devised to find good enough solutions or designs for this problem. A design is a possible solution to the problem; here it is a pair (α,π).

Algorithm 1:

STEP 1 Pick N designs (i.e., different strategies and different capacity allocations).

STEP 2 Aggregate different SKUs and run short simulations with only a few replications to

obtain rough estimates of the performance values.

STEP 3 Pick the top s observed designs.

STEP 4 Run long simulations with sufficient replications to estimate the true performance

values of these top s designs.

STEP 5 Compare the results to a pre-calculated performance upper bound. If the designer

finds the results not satisfactory, go to STEP 1, otherwise terminate.
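The selection loop of Algorithm 1 can be sketched as follows. This is our illustrative skeleton, not the paper's code: the interfaces (`sample_design`, `cheap_eval`, `exact_eval`) are assumed hooks for STEP 1, the short aggregated simulation of STEP 2, and the long detailed simulation of STEP 4, respectively.

```python
import numpy as np

def ordinal_optimization(sample_design, cheap_eval, exact_eval, N=1000, s=10):
    """Sample N designs, rank them with a cheap noisy model, then spend the
    expensive simulations only on the observed top-s (STEPs 1-4)."""
    designs = [sample_design() for _ in range(N)]        # STEP 1: random designs
    rough = np.array([cheap_eval(d) for d in designs])   # STEP 2: short, aggregated runs
    top_s = np.argsort(rough)[::-1][:s]                  # STEP 3: observed best s
    exact = {i: exact_eval(designs[i]) for i in top_s}   # STEP 4: long runs on s designs only
    best = max(exact, key=exact.get)                     # best of the selected set
    return designs[best], exact[best]                    # STEP 5 compares this to the bound
```

With noisy but order-preserving-on-average cheap evaluations, the returned design is top-tier with high probability even though only s of the N designs ever receive a detailed simulation.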

The method for generating a design is described in Appendix A, while the upper bound calculation is given in Appendix B.

4. Experiments

4.1 100 SKUs Experiment

In this experiment, we will use algorithm 1 to find a good scheduling strategy for the problem. The

experiment scenario is described below:

Experiment Scenario:

There are 100 different SKUs. The demand type is seasonal-sine demand. The ratio of the average

demand in the peak season to the low season ranges from 3 to 7. The Cv of the SKUs ranges from 0.1

to 1.0, and the SKUs with high Cv have lower demand than the SKUs with lower Cv. The period of a

season is half year.

The lead time of quick line is 1 week, and lead time of regular line is 4 weeks.

CI = $0.08, Cm = $10, CL1 = $4.4, CL2 = $4 and PS =$20.

The "good enough" set G is defined as the top 5 % of the solution space, and N = 1,000.

In order to obtain the true performance value of a design, it is necessary to run a detailed simulation. In this experiment, we assume that a detailed simulation uses all 100 SKUs, with a simulation length of 500 weeks and 40 replications.

The observed performance value of a design was estimated by running an aggregated 10-SKU simulation of length 100 weeks with a single replication. Notice that the time needed to estimate the observed performance value is roughly 1/2000 of the time needed to estimate the true performance value of the design. We have reduced the computation time from one week to several minutes.

The results of the simulations are shown in table 4.1.

Keys:

s = number of designs selected by using the observed performance value.

k = number of overlaps of the selected s designs with the true top-50 designs, i.e., the alignment level |G ∩ S|.

( These top-50 designs are obtained by running all 1000 designs for detailed simulation. Notice that this is

a tremendous computational burden and precisely what our approach is trying to circumvent. However to

lend credibility to our approach, this is the only way to prove its validity. Once established, we need not

repeat this validation process in practical applications.)

k̄ = predicted expected alignment level, i.e., E[k], or E[|G ∩ S|].

J = the best performance value (profit) in the selected s designs.

s     k     k̄       J
1     1     0.40    356,834
5     4     1.80    358,999
10    7     3.96    358,999
20    11    7.86    358,999
50    26    18.36   359,504
100   38    32.51   359,504

Table 4.1 The alignment level and profit that we obtained for the 100 SKUs case

From the results in table 4.1, we have the following observations.

In order to get the true performance value of all the designs, simulations were run for one week, 24

hours a day, on a Sun SPARC 20 machine, but to get the observed performance values, we only

needed a run of several minutes. We have reduced the computation time by a factor of 2000.

The selected set S contains a high proportion of good enough designs. When we increase the size of

selected set S, the number of alignments between the good enough set G and the selected set S

increases.

The performance value (manufacturing profit) of the best design in the selected set is only 3% away from the pre-calculated upper bound of $369,551. This means that the approach not only finds good designs, but the best of them is also close to the optimum.

The alignment level, k, is a good indicator of the goodness of the selected set. In practice, it can be used to decide the size of the selected set. For example, if a user wants, on average, about 5 good designs in the selected set, then he should set s equal to 10. However, in real world operations, it is impossible to calculate this parameter, because knowing k requires knowledge of the true performance values of all the designs. In order to quantify the selection, the predicted expected alignment level, k̄, is introduced:

k̄ = Σ_{k=0}^{min(g,s)} k P(|G ∩ S| = k) (10)

where g is the size of G and s is the size of S.

When the distributions of the noise and the performance are known, we can estimate this quantity by the method proposed in Lau (1997). The third column of table 4.1 shows the value of k̄, and it does provide a good approximation to k. This suggests that the quality of our solutions can be quantified without running detailed simulations for all 1,000 designs.

In this example, although we only consider 100 SKUs, the approach can easily be extended to 10,000 SKUs. What we need to do is aggregate these SKUs to an affordable number, say 10 SKUs. Then, using the same algorithm, we can pick a selected set which contains some good enough designs.

Experiment 2

Experiment Scenario:

The experiment scenario of this experiment is similar to experiment 1 except that seasonal-impulse

demand model was used. The period of a season is half year and the peak sales last for 3 weeks. The

demand was adjusted so that the upper bound of the weekly profit found in this experiment was equal

to that of experiment 1.

The results of the simulations are shown in table 4.2.

s     k     k̄       J
1     0     0.40    $340,446
5     1     1.80    $350,016
10    3     5.75    $350,016
20    7     8.71    $353,865
50    23    14.54   $356,741
100   35    21.12   $356,741

Table 4.2 The alignment level and best profit obtained for the 100 SKUs case (periodic-impulse demand)

The results are similar to experiment 1, except that the best profit of the selected set S is lower. This is because the average demand function has a sudden change in volume during the peak season, and therefore we have to start accumulating inventory long before the peak season begins in order to have sufficient inventory to satisfy the needs of the peak season. Consequently, a higher average inventory is needed and hence lower profits result.

4.2 100 SKUs Satisfaction Rate Experiments

For some companies, satisfying customer demands is an important strategic requirement. Failing to do so can result not only in lost profits due to lost sales, but may also put the company in danger of losing future market share. This motivates a concept called the satisfaction rate, which is simply the fraction of time that demand is satisfied by the inventory level. The satisfaction rate is defined as

Satisfaction rate = (1/Γ) Σ_{t=1}^{Γ} δ(I(t) − d(t)) (11)

where

δ(x) = 1 if x ≥ 0, and 0 if x < 0 (12)

Therefore, in order to maintain a high satisfaction rate, keeping a high inventory level is unavoidable, which induces a cost. However, the relation between enforcing the satisfaction rate and the cost incurred is not obvious. In this section, using algorithm 1, we can quickly find this relation, which serves as a good indicator for production managers in setting their satisfaction rate level.

After adding the satisfaction rate constraint, the problem becomes a constrained optimization problem. In order to convert it back to an unconstrained problem, a penalty cost function is introduced. Here we assume that the average satisfaction rate, SR, over all SKUs has to be above a certain level β, i.e.,

SR = (1/(MΓ)) Σ_{i=1}^{M} Σ_{t=1}^{Γ} E{δ[I_i(t) − d_i(t)]} ≥ β (13)

Therefore problem (P1) becomes

max_{π∈Π, α∈[0,1]} E[J(α,π)] − Penalty(SR; β) (P2)

subject to all constraints in (P1).

The penalty function is defined as follows:

Penalty(x; β) = c(x − β)^2 if x < β; 0 otherwise (14)

where c is the coefficient of the penalty function. The good enough set is defined as the designs that belong to the top n% of the design space under (P2). Since the problem is reduced to an unconstrained problem, we can easily apply algorithm 1 to pick the selected set.
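Definitions (11), (12) and (14) translate directly into code. This is an illustrative sketch (function names are ours, and the paper does not report the value of the penalty coefficient c, so the default below is an assumption):

```python
import numpy as np

def satisfaction_rate(I, d):
    """Fraction of weeks in which inventory covers demand, per (11)-(12)."""
    return np.mean(I >= d)

def penalty(sr, beta, c=1e6):
    """Quadratic penalty (14) for violating the satisfaction-rate floor beta.
    The coefficient c is an assumed value, not one from the paper."""
    return c * (sr - beta) ** 2 if sr < beta else 0.0
```

The penalized objective of (P2) is then the simulated weekly profit minus `penalty(satisfaction_rate(I, d), beta)`.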

Experiment 3

Experiment Scenario:

The experiment scenario is same as experiment 1, except that the sale price, PS =$16, which is much

lower. For lower profit margin, we would keep a lower inventory level, and therefore the design that

gives the optimum profit level will have a low satisfaction rate. For the interest of this problem, we

will see how the costs incurred when we enforce the high satisfaction rate constraint.

The results of the simulations are shown in table 4.3.

s = number of designs selected by using the observed performance value

J = the true profit of the best design in the selected s designs.

s    J (no constraint)   J (β = 0.97)   J (β = 0.98)   J (β = 0.99)
1    $96,030             $92,686        $93,819        $88,283
5    $96,413             $95,210        $93,819        $92,147
10   $96,413             $95,210        $94,022        $92,147
20   $96,413             $95,210        $94,022        $92,147
50   $96,413             $95,210        $94,022        $92,147

Table 4.3 The results of the simulation when we have satisfaction rate constraints

From the results, we observe that if we enforce a satisfaction rate higher than 0.97, there is a profit loss of about $800, and when the constraint increases to 0.99, the cost incurred is roughly $4,000. This table, which was obtained within an hour, is useful for a manager to know the cost associated with the satisfaction rate constraint.

5. Conclusions

By using the concepts of ordinal optimization, the algorithm generates a solution for this complex problem very quickly; in general, it saves orders of magnitude of computation time. It can therefore be applied to many real problems where simulation-based optimization is needed.

The solutions found are not only in the top 5% of the design space, but also within 3% of the pre-calculated upper bound.

This algorithm is also very flexible, and can be easily modified to accommodate a wide range of

operating conditions, e.g., adding another production line.

Appendix A Design Generation

While generating a design for this problem, α can be generated by a uniform random number generator, but the generation of π is not obvious. A poor representation of the strategy will give us poor performance values. We propose a method for generating π via the following arguments and figures.

[Figure 2 sketch, four panels plotting units of apparel against time: (a) deterministic demand, I(t) tracking d(t) = E[d(t)]; (b) random demand d(t), where tracking E[d(t)] leaves I(t) too low; (c) the deterministic target inventory level θ(t) introduced above d(t); (d) inventory I(t) tracking θ(t).]

Figure 2 A graphical illustration of how to select the scheduling strategy

If there is no uncertainty in the demand process d(t), i.e., d(t) = E[d(t)], and we can take d(t) to be a

deterministic process, then we can arrange the production schedules to track d(t) as best we can (see figure

2(a)). This can be solved, in principle, by using well-known control theory tools such as dynamic

programming, or other ad hoc heuristic methods, if the size is too large. However, as shown in figure 2(b)

if the demand process d(t) is a random process, then it is clear that tracking E[d(t)] alone will not be

satisfactory (in figure 2(b), we can see that the inventory is too low to guarantee sales). Thus, we introduce

another process to play the role of a deterministic process from which we can plan our scheduling strategy. This new process is called the target inventory level, denoted by θ(t), which is used to replace what we have to, but cannot, track, i.e., d(t). This is shown in figure 2(c). Notice that θ(t) is not a random process. Now we can solve a control problem to determine u(t) to follow θ(t) as best we can; u(t) will be the production schedule. Therefore, we generate π by first finding the target inventory level of each SKU, and then finding production schedules that can track this target inventory level. This is shown in figure 2(d). The remaining problem is to find a method to generate a scheduling strategy that tracks the target level.

Target Tracking Strategy

The intuitive idea of the target tracking strategy is to arrange the production schedules so that by the

time the SKUs exit the production lines, the expected inventory level (which is equal to current

inventory level - expected demand + finished production) will be equal to the target inventory level.

When the production capacity is not enough, the capacity is allocated fairly among all the SKUs.

To determine the production schedule at time t, we use the algorithm described below.

Algorithm 2

STEP 1: At time t, allocate the quick line capacity to different SKUs.

The amount of production is scheduled so that by the time the product is shipped to the

warehouse, the expected inventory level will equal the target inventory level at that time.

While calculating the expected inventory level, we use current inventory information,

prescheduled production (i.e. the production that was scheduled before time t), and expected

future demand. When the capacity is not enough, allocate the resources fairly to all SKUs.

"Fairly" means that, for each SKU, after resource allocation the ratio of the expected inventory level to the target inventory level is the same.

STEP 2: Allocate regular line production.

The same as STEP 1. The amount of the production is scheduled so that by the time the

product leaves the regular line, the expected inventory level will equal the target inventory

level at that time. The only difference from STEP 1 is while calculating the expected

inventory level, we not only use current inventory information, prescheduled production and

expected future demand, we also use future quick line production that will finish its

processing before the regular line production at time t. When the capacity is not enough,

again we will allocate the capacity fairly among the SKUs.
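The "fair" allocation used in both steps of Algorithm 2 can be sketched numerically. This is our reading of the rule (a single common ratio r of expected to target inventory across all SKUs); the function name, interfaces and bisection tolerance are illustrative assumptions, not the paper's:

```python
import numpy as np

def allocate_fairly(current, target, capacity):
    """Allocate a line's weekly capacity across SKUs. When capacity is short,
    find a common ratio r such that sum_i max(r*target_i - current_i, 0) = capacity,
    and schedule u_i = max(r*target_i - current_i, 0)."""
    want = np.maximum(target - current, 0.0)     # shortfall to each SKU's target
    if want.sum() <= capacity:
        return want                              # enough capacity: reach all targets
    lo, hi = 0.0, 1.0                            # capacity is short, so r < 1
    for _ in range(60):                          # bisection on the common ratio r
        r = 0.5 * (lo + hi)
        used = np.maximum(r * target - current, 0.0).sum()
        lo, hi = (r, hi) if used < capacity else (lo, r)
    return np.maximum(r * target - current, 0.0)
```

For the regular line (STEP 2), `current` would additionally include the quick line production already scheduled to finish before week t + L_2.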

Appendix B Upper Bound Determination

The solution of the problem is unchanged if we modify the objective function from total profit to weekly profit, which equals total profit divided by Γ. When Γ is very large, the problem can be simplified or approximated by the following observations.

1. To keep the system stable, the average weekly production will be roughly equal to the average weekly sales.

2. The ratio of the average weekly production of the quick line to that of the regular line should be close to the ratio of their capacities when the utilization of the production lines is high.

3. By Little's law, the work-in-process (WIP) of a production line should be equal to its average weekly production multiplied by its lead time.

From these three observations, Problem (P1) becomes

$$\max_{\alpha \in [0,1],\; I} \; E[J(\alpha, I)] \tag{P3}$$

where $\alpha$ is the fraction of the total capacity allocated to the quick line, $I$ is the set of target inventory levels, and

$$J(\alpha, I) = \frac{1}{T} \sum_{i=1}^{M} \sum_{t=1}^{T} (P_S - C_m)\,\min(I_i(t), d_i(t)) - \frac{1}{T} \sum_{i=1}^{M} \sum_{t=1}^{T} \big[\alpha\,(C_{L1} + C_I L_1) + (1-\alpha)(C_{L2} + C_I L_2)\big]\,\min(I_i(t), d_i(t)) - \frac{1}{T} \sum_{i=1}^{M} \sum_{t=1}^{T} C_I\, I_i(t)$$

subject to all the constraints listed in (P1).

If we neglect the constraints, we can solve problem (P3) directly. The objective function of (P3) depends only on the inventory levels and on the capacity ratio of the quick line, so we can find the inventory levels and the capacity ratio that maximize the weekly profit. This maximum weekly profit is an upper bound on the optimal value of problem (P1).

To maximize the weekly profit, choose


$$\alpha = \begin{cases} 1, & (C_{L1} + C_I L_1) < (C_{L2} + C_I L_2) \\ 0, & (C_{L1} + C_I L_1) > (C_{L2} + C_I L_2) \\ \text{any value in } [0,1], & (C_{L1} + C_I L_1) = (C_{L2} + C_I L_2) \end{cases}$$

Without loss of generality, assume $(C_{L1} + C_I L_1) > (C_{L2} + C_I L_2)$, so that $\alpha = 0$.

Therefore,

$$J(I, 0) = \frac{1}{T} \sum_{i=1}^{M} \sum_{t=1}^{T} \big(P_S - (C_m + C_{L2} + C_I L_2)\big)\,\min(I_i(t), d_i(t)) - \frac{1}{T} \sum_{i=1}^{M} \sum_{t=1}^{T} C_I\, I_i(t) \tag{15}$$

Then,

$$\frac{\partial E[J(I,0)]}{\partial I_i(t)} = \frac{1}{T}\left[\big(P_S - (C_m + C_{L2} + C_I L_2)\big)\,\frac{\partial E[\min(I_i(t), d_i(t))]}{\partial I_i(t)} - C_I\right] \tag{16}$$

and $\partial E[J(I,0)]/\partial I_i(t) = 0$ when

$$\frac{\partial E[\min(I_i(t), d_i(t))]}{\partial I_i(t)} = \frac{C_I}{P_S - (C_m + C_{L2} + C_I L_2)} \tag{17}$$

By solving equation (17), we obtain the optimal inventory levels and hence the maximum weekly profit, which is the upper bound of Problem (P1).
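Since $\partial E[\min(I, d)]/\partial I = P(d > I)$, equation (17) is a newsvendor-type critical-fractile condition: the optimal target inventory is the quantile of demand at which the stockout probability equals the holding cost divided by the unit profit margin. A minimal numerical sketch follows; the function name, the normal-demand assumption, and all cost numbers are illustrative choices, not values from the paper.

```python
from statistics import NormalDist

def optimal_inventory(mu, sigma, ps, cm, cl2, ci, l2):
    """Solve equation (17) for one SKU, assuming demand ~ N(mu, sigma).

    Because d/dI E[min(I, d)] = P(d > I), equation (17) reads
        P(d > I) = C_I / (P_S - (C_m + C_L2 + C_I * L2)),
    so I is the (1 - ratio)-quantile of the demand distribution.
    """
    ratio = ci / (ps - (cm + cl2 + ci * l2))
    assert 0 < ratio < 1, "unit profit margin must exceed the holding cost"
    return NormalDist(mu, sigma).inv_cdf(1 - ratio)

# Illustrative numbers (assumed, not from the paper): selling price 10,
# material cost 3, regular-line labor cost 2, holding cost 0.5 per unit
# per week, regular-line lead time 4 weeks, demand N(100, 20).
I_star = optimal_inventory(mu=100, sigma=20, ps=10, cm=3, cl2=2, ci=0.5, l2=4)
```

A higher holding cost raises the critical ratio and pulls the optimal inventory level down toward (and below) the mean demand, as the fractile formula predicts.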

References

Abernathy, F. H. et al. 1995, Harvard Center For Textile & Apparel Research Annual Progress Report.

Cassandras, C. G., 1993, Discrete Event Systems: Modeling and Performance Analysis (Irwin and Aksen Associates).

Dai, L. Y., 1996, Convergence Properties of Ordinal Comparison in the Simulation of Discrete Event

Dynamic Systems. Journal of Optimization Theory and Applications, 91, (2), 363-388.

Diehl, G. W. W., 1995, Optimal Production Control for Multiple Products in Multiple Production Lines

with Uncertain Demand, Time Varying Capacity, Time Varying Demand, and Uncertain Delivery.

Technical Report, Harvard University.

Diehl, G. W. W., 1996, Overview of IBA Data Analysis and Scheduling Work to Date, Technical Report,

Harvard University.

Hammond, J. H., 1992, Coordination as the basis for Quick Response: A case for virtual integration in

Supply Networks. Harvard Business School Working Paper #92-007.

Hammond, J. H., 1992, Quick Response in Retail Channels. Harvard Business School Working Paper #92-

068.

Ho, Y. C., 1994, Heuristics, Rules of Thumb, and the 80/20 Proposition. IEEE Trans. on Automatic

Control , 39, (5), 1025-1027.

Ho, Y.C., and Larson, M. E., 1995, Ordinal Optimization Approach to Rare Event Probability Problems.

Journal of Discrete Event Dynamic Systems, 5, 281-301.

Ho, Y.C., Sreenivas, R., Vakili, P., 1992, Ordinal Optimization of Discrete Event Dynamic Systems,

Journal of Discrete Event Dynamic Systems, 2, 61-88.

Lau, T.W.E., and Ho, Y.C., 1997, Universal Alignment Probability and Subset Selection in Ordinal

Optimization. Journal of Optimization Theory and Applications, 93, (3), 455-489.

Lee, L. H., Lau, T.W. E., and Ho, Y.C., 1999, Explanation of Goal Softening in Ordinal Optimization. IEEE Transactions on Automatic Control, 44, (1), 94-99.

Patsis, N.T., Chen, C.-H., and Larson M. E., 1997, SIMD Parallel Discrete Event Dynamic Systems

Simulation. IEEE Transactions on Control Systems Technology, 5, 30-41.

Silver, E. A. and Peterson, Rein, 1985, Decision Systems for Inventory Management and Production

Planning (John Wiley & Sons).

Tang, Z. B., Hammond, J. H. and Abernathy, F. H. 1994, Design and Scheduling of Apparel manufacturing

systems with both slow and quick production lines. In Proceedings of the 33rd IEEE Conference on Decision

and Control (Lake Buena Vista).

Wagner, H. M. and Whitin, T. M., 1958, Dynamic version of the economic lot size model. Management Science, 5, (1), 89-96.

Xie, X.L., 1997, Dynamics and Convergence Rate of Ordinal Comparison of Stochastic Discrete Event

Systems. IEEE Transactions on Automatic Control, 42, (4), 586-590.

Yang, M.S., L.H. Lee, and Y.C. Ho, 1997, On Stochastic Optimization and Its Applications to

Manufacturing, Lectures in Applied Mathematics, 33, 317-331.
