Editor
Prof. Dr. Gerald Reiner
Institut de l’Entreprise (IENE)
Faculté des Sciences Économiques
Université de Neuchâtel
Rue A.-L. Breguet 1
2000 Neuchâtel
Switzerland
gerald.reiner@unine.ch
Foreword
Fig. 1 The first public use of the term Rapid Modeling at a continuing education class taught by
the author at the Society of Manufacturing Engineers in 1987
From this point on, in our publications and classes we focused on the advantages
of Rapid Modeling for lead time reduction (for example, see Suri, 1989). These ad-
vantages were further emphasized with the development of Quick Response Man-
ufacturing (QRM) - a refinement of TBC with a specific focus on manufacturing
enterprises (Suri, 1998). For instance, at the Center for Quick Response Manufacturing, during our work with around 200 manufacturing companies over the past
15 years (see www.qrmcenter.org) we have found that Rapid Modeling is an invalu-
able tool to help companies reduce lead time and implement QRM.
But enough about the past - let us look to the future. It is very encouraging to
see an entire conference organized around the theme of Rapid Modeling, and to see
that researchers from around the world will be presenting papers at this conference.
Further, it is even more encouraging to see Rapid Modeling being extended beyond
manufacturing systems - for example, to supply chain modeling, to container ter-
minals and logistics management, to service processes, and even to venture capital
firms and courts of law. All these events speak well for the future of Rapid Mod-
eling. Finally, as one who promoted the Rapid Modeling concept as a tool to help
manufacturing companies become more competitive, it is truly heartening to see
that leading researchers in Europe have decided to use Rapid Modeling as a core
concept in their EU project on “Keeping Jobs in Europe” (see Project Keeping Jobs
In Europe, 2009).
Once again, I congratulate the conference organizers and the program committee
on the rich set of papers that have been put together here. I wish all the participants
a fruitful conference, and I would also like to wish all these researchers success in
the application of their Rapid Modeling concepts to many different fields.
Preface
Despite the developments in the field of lead time reduction over the past 25 years,
long lead times continue to have a negative impact on companies’ business results, e.g., customer dissatisfaction, loss of market share, and missed opportunities to match supply and demand. Increased global competition requires companies to
seek out new ways of responding to volatile demand and increased customer re-
quirements for customization with continuously shorter lead times. Manufacturing
companies, as well as service firms, in the developed economies are in the doldrums
because low responsiveness makes them vulnerable to low-cost competitors. Companies that are equipped for speed, with innovative processes, will outperform their slower competitors in many industries, but the knowledge concerning lead time reduction, which has been developed globally, has yet to be combined into a unified
theory.
The purpose of this proceedings volume of selected papers presented at the 1st Rapid Modelling Conference “Increasing Competitiveness - Tools and Mindset” is to give a state-of-the-art overview of current work in the field of rapid modelling
in combination with lead time reduction. Furthermore, new developments will be
discussed. In general, Rapid Modelling is based on queuing theory, but other mathematical modelling techniques, as well as simulation models that facilitate the transfer of knowledge from theory to application, are of interest in this context as well. The
interested reader, e.g.,
• researchers in the fields of
– operations management
– production management
– supply chain management
– operations research or
– industrial engineering as well as
Acknowledgement
We would like to thank all those who contributed to the conference and this proceed-
ings volume. First, we wish to thank all authors and presenters for their contribution.
Furthermore, we appreciate the valuable help from the members of the international
scientific board, the referees and our sponsors (see the Appendix for the appropriate
lists).
In particular, our gratitude goes to our support team at the Enterprise Institute of the University of Neuchâtel: Gina Fiore Walder, who organized all the major and minor aspects of this conference project, and Ulf Richter, who handled the promotion process as well as the scientific refereeing process. Gina Fiore Walder, Yvan Nieto, and Gil Gomes dos Santos, supported by Arda Alp and Boualem Rabta, handled the majority of the text reviews as well as the formatting work with LaTeX. Ronald Kurz created the logo
of our conference and he took over the development of the conference homepage
http://www.unine.ch/rmc09.
Furthermore, we would like to give special thanks to Professor Rajan Suri, the
founding director of the Center for Quick Response Manufacturing, University of
Wisconsin-Madison, USA, who supported the development of our conference with
valuable ideas, suggestions and hints. He also authored the foreword of this
book based on his leading expertise in the field of Rapid Modelling as well as Quick
Response Manufacturing.
Finally, it should be mentioned that the conference as well as this book are supported by the EU Seventh Framework Programme - People Programme - Industry-Academia Partnerships and Pathways Project (No.
217891) “How revolutionary queuing based modelling software helps keeping jobs
in Europe. The creation of a lead time reduction software that increases industry
competitiveness and supports academic research.”
List of Contributors
Smail Adjabi
Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
e-mail: adjabi@hotmail.com
Djamil Aı̈ssani
Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
e-mail: lamos_bejaia@hotmail.com
Arda Alp
Enterprise Institute, University of Neuchâtel, Rue A.L. Breguet 1, CH-2000
Neuchâtel, Switzerland
e-mail: arda.alp@unine.ch
Peter Ball
Department of Manufacturing, Cranfield University, Cranfield, Bedford, MK43
0AL, U.K.
e-mail: p.d.ball@cranfield.ac.uk
Gerhard Bauer
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna,
Austria
e-mail: gerhard.bauer@wu.ac.at
T. Benkhellat
Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
Vedran Capkun
HEC School of Management, 1, rue de la Liberation, 78351 Jouy-en-Josas cedex,
France
e-mail: capkun@hec.fr
Akram M. Chaudhry
College of Business Administration, University of Bahrain, P.O.Box #32038,
Sakhir, Kingdom of Bahrain, Middle East
Olli-Pekka Hilmola
Lappeenranta Univ. of Tech., Kouvola Unit, Prikaatintie 9, 45100 Kouvola, Finland
Ulrich Hoffrage
University of Lausanne, Faculty of Business and Economics, Internef 614,
CH-1015 Lausanne, Switzerland
e-mail: ulrich.hoffrage@unil.ch
Janne Huiskonen
Department of Industrial Management, Lappeenranta University of Technology,
P.O. Box 20, FIN-53851 Lappeenranta, Finland
Werner Jammernegg
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna,
Austria
e-mail: werner.jammernegg@wu.ac.at
István Jenei
Department of Logistics and Supply Chain Management, Corvinus University of
Budapest, Fovam ter 8, H-1093 Budapest, Hungary
e-mail: istvan.jenei@uni-corvinus.hu
Matteo Kalchschmidt
Department of Economics and Technology Management, Università di Bergamo,
Viale Marconi 5, 24044 Dalmine, Italy
e-mail: matteo.kalchschmidt@unibg.it
Noémi Kalló
Department of Management and Corporate Economics, Budapest University of
Technology and Economics, Müegyetem rkp. 9. T. ép. IV. em., 1111 Budapest,
Hungary
e-mail: kallo@mvt.bme.hu
Henri Karppinen
Department of Industrial Management, Lappeenranta University of Technology,
P.O. Box 20, FIN-53851 Lappeenranta, Finland
e-mail: henri.karppinen@lut.fi
Tamás Koltai
Department of Management and Corporate Economics, Budapest University of
Technology and Economics, Müegyetem rkp. 9. T. ép. IV. em., 1111 Budapest,
Hungary
e-mail: koltai@mvt.bme.hu
Ananth Krishnamurthy
University of Wisconsin-Madison, Department of Industrial and Systems Engineer-
ing, 1513 University Avenue, Madison, WI 53706, USA
e-mail: ananth@engr.wisc.edu
Dávid Losonci
Department of Logistics and Supply Chain Management, Corvinus University of
Budapest, Fovam ter 8, H-1093 Budapest, Hungary
e-mail: david.losonci@uni-corvinus.hu
Doug Love
Aston Business School, Aston University, Birmingham, B4 7ET, U.K.
e-mail: d.m.love@aston.ac.uk
Zsolt Matyusz
Department of Logistics and Supply Chain Management, Corvinus University of
Budapest, Fovam ter 8, H-1093 Budapest, Hungary
e-mail: zsolt.matyusz@uni-corvinus.hu
N. Medjkoune
Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
Yvan Nieto
Institut de l’entreprise, Université de Neuchâtel, Rue A.-L. Breguet 1, CH-2000
Neuchâtel, Switzerland
e-mail: yvan.nieto@unine.ch
Petra Pekkanen
Department of Industrial Management, Lappeenranta University of Technology,
P.O. Box 20, FIN-53851 Lappeenranta, Finland
e-mail: petra.pekkanen@lut.fi
Jeffrey S. Petty
Lancer Callon Ltd., Suite 298, 56 Gloucester Road, UK-SW7 4UB London, United
Kingdom
e-mail: jpetty@bluewin.ch
Timo Pirttilä
Department of Industrial Management, Lappeenranta University of Technology,
P.O. Box 20, FIN-53851 Lappeenranta, Finland
Boualem Rabta
Enterprise Institute, University of Neuchatel, Rue A.-L. Breguet 1, CH-2000
Neuchatel, Switzerland
e-mail: boualem.rabta@unine.ch
Gerald Reiner
Institut de l’entreprise, Université de Neuchâtel, Rue A.-L. Breguet 1, CH-2000
Neuchâtel, Switzerland
e-mail: gerald.reiner@unine.ch
Heidrun Rosič
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna,
Austria
e-mail: heidrun.rosic@wu.ac.at
Reinhold Schodl
Capgemini Consulting, Lassallestr. 9b, 1020 Wien, Austria
e-mail: reinhold.schodl@capgemini.com
Robert Schönberger
Chair of Clusters & Value Chain, Darmstadt University of Technology, Hochschulstrasse 1, 64289 Darmstadt, Germany
e-mail: schoenberger@tud-cluster.de
Prakash J. Singh
Department of Management & Marketing, University of Melbourne, Parkville 3010, Australia
e-mail: pjsingh@unimelb.edu.au
Nico J. Vandaele
Research Center for Operations Management, Department of Decision Sciences
and Information Management, K.U. Leuven, 3000 Leuven, Belgium
e-mail: Nico.Vandaele@econ.kuleuven.be
Inneke Van Nieuwenhuyse
Research Center for Operations Management, Department of Decision Sciences
and Information Management, K.U. Leuven, 3000 Leuven, Belgium
e-mail: Inneke.VanNieuwenhuyse@econ.kuleuven.be
Lawrence A. Weiss
McDonough School of Business, Georgetown University, Old North G01A,
Washington, DC 20057-1147, USA
e-mail: law62@georgetown.edu
Part I
Theory Pieces and Review
Chapter 1
Managerial Decision Making and Lead Times:
The Impact of Cognitive Illusions
Suzanne de Treville
University of Lausanne, Faculty of Business and Economics, Internef 315, CH-1015 Lausanne
Telephone: 021 692 33 41
e-mail: suzanne.detreville@unil.ch
Ulrich Hoffrage
University of Lausanne, Faculty of Business and Economics, Internef 614, CH-1015 Lausanne
e-mail: ulrich.hoffrage@unil.ch
Jeffrey S. Petty
Lancer Callon Ltd., Suite 298, 56 Gloucester Road, UK-SW7 4UB London,
e-mail: jpetty@bluewin.ch
1.1 Introduction
People play an integral role in any operation, from process development to execu-
tion, assessment, and improvement. Because people are involved, most decisions
exhibit some bias, as individuals use heuristics to simplify the decision-making pro-
cess. Although such biases are not usually considered in managing and evaluating
operations, they have a major impact on the decisions that are made, as well as how
learning from decisions occurs.
Cognitive illusions “lead to a perception, judgment, or memory that reliably de-
viates from reality” (Pohl 2004: 2). This deviation is referred to as a cognitive bias.
Such illusions or biases happen randomly, tend to be robust and hard to avoid, and
are difficult – sometimes impossible – to eliminate. There have been occasional ref-
erences to the scarcity of literature on cognitive illusions or biases in the OM field
(e.g., Mantel et al, in press; Schweitzer and Cachon, 2000). These papers refer to
one or two cognitive biases, but do not present a large sample of biases that have
been studied in the cognitive psychology literature.
larly robust, occurring even when the decision-maker is completely aware of the
phenomenon (Buehler et al, 2002).
This phenomenon plays a fundamental role in operations management, in areas
ranging from delivery to product and process development, to project management.
Things always take longer than anyone expected, there is always a scramble to get
things pulled together right before the final deadline, and no amount of planning or
organization seems to eliminate this bias. Can insights from cognitive psychology
inform operations management theory concerning how to improve lead time per-
formance? Or, could operations management theory bring new insights to theory
concerning the planning fallacy?
Breaking projects into small pieces has been observed to keep projects more on
schedule through creating the tension required to keep people focused on due dates
(van Oorschot et al, 2002). While this might be feasible with the new product devel-
opment projects studied by these authors, it would not work for repetitive operations
(manufacturing or service). Furthermore, van Oorschot et al. noted that estimates for
smaller project packages are more accurate, but the overall project time remains ex-
cessively long.
Responding to lead times that are longer than expected by increasing our estimation of lead times leads to the “planning loop” (Suri, 1998): longer estimates reduce
the quality of forecasts, increasing mismatches between production and demand,
placing additional demands on the system to respond to actual customer needs, resulting in higher utilization and longer lead times. This is consistent with the psychological observation mentioned earlier that the more time available, the worse the overconfidence. Historically, lead time estimation has been treated as a rational, linear computation. Suri (e.g., 1998) and Hopp and Spearman (1996) used queuing
theory to illustrate the complexity of process dynamics, explaining part of the di-
vergence between the expected simplicity and actual complexity of calculating lead
times. These complex system dynamics may amplify the cognitive bias implied by
the planning fallacy, thus partially explaining why in operations management we so
consistently fail to get our lead times right.
Furthermore, exploration of the interaction between the cognitive and compu-
tational aspects of lead time estimation may lead to new insights concerning this
cognitive illusion. Most managers do not understand the impact of bottleneck uti-
lization, lot size, layout, and system variability on lead time (Suri, 1994). As lead
times not only increase but explode with utilization, it is not surprising that lead
times exceed expectation in the majority of operations, especially given the com-
mon emphasis on maintaining high utilization. Therefore, an understanding of the
mathematical principles that drive lead times might serve as a model for the cogni-
tive processes involved in the planning fallacy.
Illusion of control occurs when an individual overestimates his or her personal in-
fluence on an outcome (Thompson, 2004). The illusion increases in magnitude with
perceived skill, emphasis on success or failure, and need for a given outcome, as
well as in contexts where skill coexists with luck, as people use random positive
outcomes to increase their skill attributions (Langer and Roth, 1975).
Consider an experienced worker who is choosing whether to follow process doc-
uments in completing a task. The illusion of control implies that the worker may
believe that process outcomes will be better if he or she draws from experience
and intuition rather than relying on the standard operating procedures (SOPs). In-
terestingly enough, this worker may well believe that other workers should follow
the SOPs (for a discussion of worker insistence that co-workers follow SOPs, see
Barker, 1993; Graham, 1995). Times when the worker has carried out a process
change that has coincided with an improved yield (success, whether or not due to
that change vs. normal process variability) will tend to increase this illusion of con-
trol.
Polaroid’s efforts to introduce Statistical Process Control were hindered by work-
ers’ illusions of control. Workers believed that process outcomes would be better if
they were allowed to choose their own machine settings and make adjustments as
they deemed necessary, rather than shutting down the machine and waiting for main-
tenance if process data indicated that the machine was going out of statistical con-
trol. This was in spite of data demonstrating substantially increased yields when ma-
chines were maintained to maximize consistency (Wheelwright et al, 1992). More
generally, workers prey to the illusion of control are unlikely to embrace process
standardization and documentation.
Entrepreneurs may well demonstrate an illusion of control when it comes to de-
veloping the operations for their new venture. E Ink, for example, was a new venture
originating from the MIT Media Lab that had invented an electronic ink, opening the
door to “radio paper” and an eventual electronic newspaper that would have the look
and feel of a traditional newspaper, but that would be updateable from newspaper
headquarters. The attitude of the founders was that developing the new product was
difficult, but that operations would be relatively easy - a classic illusion of control.
Basic operations management problems (such as yield-to-yield variation) kept the
company in survival mode for the better part of a decade (Yoffie and Mack, 2005).
Had the founders made operations a priority from the start, they may well have been
profitable many years earlier.
The good news is that the illusion of control can be virtually eliminated by the
intrusion of reality, which creates circumstances requiring individuals to systemat-
ically estimate the actual control that they have in a process (Thompson, 2004). In
other words, before standardizing and documenting processes, or before designing
new processes, it is worth carrying out an assessment exercise so that people have a
clear understanding of their true abilities and control level.
Anchoring and adjustment, that is, predicting or estimating relative to some anchor
(Mussweiler et al, 2004), is a heuristic that is often evoked to explain cognitive bi-
ases that can be observed in various aspects of operations management. Anchors
may be used individually when making decisions, or collectively across an organi-
zation as a benchmark for success or failure, often without regard for their relevance
or impact on a given situation.
In the operations management field we often use anchoring to make an operations strategy more powerful: consider “Zero Defects” or “Zero Inventories,” “Just-in-Time” or “lean production,” and “Six Sigma,” all of which have in common the use of a keyword anchor that powerfully sets direction. The positive aspect of these anchors is that a direction or standard for the company has been established; it may, however, be set with such force that later efforts to moderate it are unfruitful. Hopp
and Spearman (1996), for example, described the confusion that resulted from use
of the Zero Defects or Zero Inventories slogans, as companies responded by ex-
cessively slashing inventories or setting completely unrealistic quality objectives
(as has also occurred as companies that should be striving for percent or parts-per-
thousand defects vainly strive for the ppm or even parts per billion implied by being
six standard deviations from the process mean). The term lean is so powerful that
companies may become overenthusiastic about removing slack resources (required
for creativity, flexibility, or improvement efforts, e.g., Lawson, 2001) from processes
(De Treville and Antonakis, 2006; Rinehart et al, 1997).
Anchoring has been observed to move companies away from rational inventory
policies (Schweitzer and Cachon, 2000), and to shift companies from constantly
striving for improvement to just working toward meeting a set standard (e.g., individuals stop seeking to save the environment and simply work to meet environmental standards; Tenbrunsel et al, 2000).
An interesting example of anchoring in the field of operations management
comes from the shift in attitude toward the level of defects in a process over the
past couple of decades. Twenty years ago, a classroom discussion of defect levels
might include student claims that “if the optimum is 10% defects and we are aiming for 2%, we are going to make less money than we should.” Fast forward to today’s classroom, where a similar comment might be “if the optimum is 300 parts per million (ppm) defects and we are aiming for 50 ppm.” In other words, referring to percent
vs. ppm anchors decision-makers as they set process improvement goals.
Anchoring influences process experimentation. Consider a conveyor belt that car-
ries product through a furnace that is limiting the overall capacity of the process. In
thinking through how to increase throughput for the furnace operation, process de-
velopment engineers may limit their experiments if they anchor their analysis to the
existing process, rather than taking a new look at how the process is run. In the
case of the conveyor, for example, it might be possible to almost double the out-
put by stacking pieces on the belt, which would require both slowing the belt and
increasing the temperature.
“Whenever people search for, interpret, or remember information in such a way that
the corroboration of a hypothesis becomes likely, independent of its truth, they show
a confirmation bias.” (Oswald and Grosjean 2004: 93). Confirmation bias represents
a “type of cognitive bias toward confirmation of the hypothesis under study. To
compensate for this observed human tendency, the scientific method is constructed
so that we must try to disprove our hypotheses” (Wikipedia, 2006). This type of
bias becomes more likely when the “hypotheses tested are already established or
are motivationally supported” (Oswald and Grosjean 2004: 93). Watkins and Bazer-
man (2003) described several disasters that would have been easily preventable had
individuals not fallen prey to confirmation and related biases.
“As managers estimate the likelihood of an event’s occurrence, they may over-
estimate the representativeness of a piece of information and draw inaccurate con-
clusions” (Bazerman, 2005). This also implies that information that is easily avail-
able may well have a greater impact on the decision made than it should: Whether
decision-makers notice or ignore a piece of information often depends on how that
information is presented (Mantel et al, in press).
One of the early demonstrations of the confirmation bias came from an exper-
iment in which subjects were shown the sequence 2, 4, 6, and asked to find the
rule that generated the sequence. Subjects were to propose their own triples, learn
from the experimenter whether the sequence conformed to the rule, and specify the rule as soon as they thought they had discovered it. The actual rule was “any
ascending sequence.” Many subjects, however, assumed a rule of the form of n+2,
and generated sequences of this form to confirm their guess. Such subjects were
quite surprised to learn that their specified rule was wrong, in spite of the fact that
they had only received positive and confirming feedback. Arriving at the correct rule
required that subjects select examples that would disconfirm their beliefs, but this
did not come naturally (Wason, 1960). Compare this phenomenon to an employee
who has an idea about how to improve a process. As demonstrated by Wason, such
an employee is more likely to suggest experiments to demonstrate that the idea
works than to seek problems that may arise. Furthermore, implementation of in-
sufficiently tested ideas is a primary source of production variability (Edelson and
Bennett, 1998).
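Wason’s result can be made concrete with a small sketch (a toy illustration of ours, not part of the original study): every triple generated to confirm an “add 2” guess also satisfies the true “any ascending sequence” rule, so such tests can never separate the two hypotheses.

```python
# Toy sketch of Wason's 2-4-6 task: triples chosen to confirm an "add 2"
# hypothesis also satisfy the true rule, so they carry no diagnostic value.
def true_rule(t):            # the experimenter's actual rule
    return t[0] < t[1] < t[2]

def add_two_hypothesis(t):   # the rule most subjects assume
    return t[1] == t[0] + 2 and t[2] == t[1] + 2

confirming = [(1, 3, 5), (10, 12, 14), (100, 102, 104)]
disconfirming = [(1, 2, 3), (2, 4, 8)]  # ascending, but not "add 2"

# Every confirming triple passes both rules: no information is gained.
assert all(true_rule(t) and add_two_hypothesis(t) for t in confirming)
# Only triples that violate the guess can reveal the broader rule.
assert all(true_rule(t) and not add_two_hypothesis(t) for t in disconfirming)
print("confirming tests cannot separate the hypotheses")
```

The employee in the analogy behaves like a subject who only proposes triples from the `confirming` list: the experiments succeed, yet the underlying process hypothesis remains untested.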
The choice by Campbell Soup engineers to create a microwaveable soup process
that resembled a canned soup line (argued in the preceding section to demonstrate
“In recollecting some target event from the past, people will often confuse events
that happened before or after the target event with the event itself,” with some
illusions involving remembrance of events that never actually occurred (Bartlett
1932/95; Roediger III and Gallo 2004: 309).
In managing operations, memory plays an important role. When was the last
time we did a furnace profile or maintained that machine? How has that supplier
been performing over the past year? Does it seem like the process is stable? What
has been going on with operators and repetitive strain injuries? The list goes on and
on. The constant updating of memories plays an important role in adaptive learning,
and is almost impossible to prevent or control (Roediger III and Gallo, 2004).
That memory is constantly reconstructed based on our theories, beliefs, and sub-
sequent experiences demonstrates the importance of excellent record-keeping and
patience with those who remember differently. Associative memory illusions are
related to illusions of change or stability (Wilson and Ross, 2004), referring to in-
accurate comparisons of past and present states. Individuals, for example, often er-
roneously believe that improvement has occurred simply because of involvement in
an improvement activity. Consider MacDuffie’s description of Ford’s improvement
activities:
“[Reporting forms] appear to be used more to report on the activity level of the sub-
system group, to show that the required processes are being fulfilled, rather than to
diagnose, systematically, the “root cause” and possible solutions to a problem. When
a problem recurs, seldom is it reanalyzed, and rarely are earlier actions reassessed.
With past activities already documented and reported, the key is to generate new
documentation, to provide proof of continued activity. Thus, “continuous improve-
ment” becomes less a process of incremental problem resolution than a process of
energetic implementation of intuitively selected solutions” (MacDuffie, 1997, 185).
Once cognitive biases have been identified, what debiasing techniques exist to re-
duce their impact? In this section we briefly examine some tools that may contribute
to debiasing.
We suggest that a technique called the Premortem exercise (Klein 2003: 98-101)
may be more successful in overcoming or reducing the planning fallacy. This
method starts with the assumption that a project or plan has failed. Not just a bit,
but in a big way: It has turned out to be a catastrophe or disaster. Participants in the
exercise take this failure as a given and provide reasons why it happened. This pro-
cedure relieves the participants from the (usually self-imposed) constraint that they
must not say anything unpleasant, depressing, or potentially hurtful to their col-
leagues. The aim is to compile a long list of hidden assumptions that turned out to
be wrong, or of weaknesses and key vulnerabilities in a plan. Once this list has been
established, managers are enabled to take such “unforeseeable” events into account
when planning, incorporating buffers and contingencies. Although in our experi-
ence the premortem technique has been quite successful in debiasing the planning
fallacy, we are not aware of studies that have systematically explored its use.
Getting participants to use their past experience to calibrate their time judgments has proven successful in empirical studies. Buehler et al (1994) required
participants to first indicate the date and time they would finish a computer assign-
ment if they finished it as far before its deadline as they typically completed as-
signments. In a second step, participants were asked to recall a plausible scenario
from their past experience that would result in their completing the computer assign-
ment at the typical time. Based on these estimations, they were to make predictions
about completion times. This “recall-relevance” manipulation successfully reduced
the optimistic bias constituting the planning fallacy.
1.4 Conclusions
This paper considered many of the biases and cognitive illusions that are relevant to the field of operations management and that will continue to affect operations as long as people are involved in the decision-making process. Cognitive
psychologists have developed theories and conducted empirical research that can
serve as a theoretical foundation for operations-based research.
As demonstrated in the case examples cited, these biases occur in both start-up and established ventures, and across all levels of the company. The planning fal-
lacy and anchoring effect appear to dominate operations-related activities, but de-
veloping an understanding of each of the cognitive illusions presented in this paper
in the operations management context may improve the quality of decisions in our
field, as well as facilitate learning.
References
Roediger III H, Gallo D (2004) Associative memory illusions. In: Pohl R (ed) Cog-
nitive Illusions, Psychology Press, East Sussex, pp 309–326
Sanna L, Schwarz N (2004) Integrating Temporal Biases. Psychological Science
15(7):474–481
Schoemaker P (1993) Multiple scenario development: Its conceptual and behavioral
foundation. Strategic Management Journal 14(3):193–213
Schweitzer M, Cachon G (2000) Decision bias in the newsvendor problem with
a known demand distribution: Experimental evidence. Management Science
46(3):404–420
Suri R (1994) Common misconceptions and blunders in implementing quick re-
sponse manufacturing. Proceedings of the SME AUTOFACT ’94 Conference,
Detroit, Michigan, November
Suri R (1998) Quick response manufacturing: A companywide approach to reducing
lead times. Productivity Press
Tenbrunsel A, Wade-Benzoni K, Messick D, Bazerman M (2000) Understanding
the influence of environmental standards on judgments and choices. Academy of
Management Journal 43(5):854–866
Thompson S (2004) Illusions of control. In: Pohl R (ed) Cognitive Illusions, Psy-
chology Press, Hove, East Sussex, pp 113–126
Wason P (1960) On the failure to eliminate hypotheses in a conceptual task. The
Quarterly Journal of Experimental Psychology 12(3):129–140
Watkins M, Bazerman M (2003) Predictable surprises: The disasters you should
have seen coming. Harvard Business Review 81(3):72–85
Wheelwright SC, Gill G (1990) Campbell Soup Company. In: Harvard Business
School case 9-690-051, Cambridge, MA, p 23
Wheelwright SC, Bowen HK, Elliott B (1992) Process control at Polaroid. In: Har-
vard Business School case 9-693-047, Cambridge, MA, p 17
Wikipedia (2006) Confirmation bias. URL http://en.wikipedia.org/wiki/Confirmation_bias
Wilson A, Ross M (2004) Illusions of change or stability. In: Pohl R (ed) Cognitive Illusions, Psychology Press, Hove, East Sussex, pp 379–396
Yoffie DB, Mack BJ (2005) E Ink in 2005. In: Harvard Business School case 9-705-
506, Cambridge, MA, p 24
Chapter 2
Queueing Networks Modeling Software for
Manufacturing
Abstract This paper reviews the evolution of queueing network software and its use in manufacturing. In particular, we discuss two different groups of software tools. First, there are queueing network software packages that require a good level of familiarity with the theory. On the other hand, there are packages designed for manufacturing where the model development process is automated. Issues related to practical considerations will be addressed and recommendations will be given.
2.1 Introduction
Boualem Rabta
Entreprise Institute, University of Neuchâtel, Rue A.L. Breguet 1, CH-2000 Neuchâtel, Switzer-
land.
e-mail: boualem.rabta@unine.ch
Arda Alp
Entreprise Institute, University of Neuchâtel, Rue A.L. Breguet 1, CH-2000 Neuchâtel, Switzer-
land.
e-mail: arda.alp@unine.ch
Gerald Reiner
Entreprise Institute, University of Neuchâtel, Rue A.L. Breguet 1, CH-2000 Neuchâtel, Switzer-
land.
e-mail: gerald.reiner@unine.ch
Queueing networks are useful to model and measure the performance of manufacturing systems and also of complex service processes. Queueing-theory-based software packages for manufacturing processes (e.g., MPX) automate the model development process and help users (e.g., managers, academics) obtain analytical insights relatively easily (Vokurka et al, 1996).
Queueing software can be used by industrial analysts, managers and educators. It is also a good tool to help students understand factory physics along with modeling and analysis techniques (see, e.g., de Treville and Van Ackere, 2006). Despite certain challenges of queueing-theory-based modeling (e.g., the need for a strong mathematical background and the difficulty of maintaining an adequate understanding of the theory), training in queueing-theory-based modeling is likely to yield better competitiveness in lead time reduction (de Treville and Van Ackere, 2006). Business executives do not always make the best possible decisions; managers can fail to understand the implications of mathematical laws and take actions that increase lead times (see, de Treville and Van Ackere, 2006; Suri, 1998).
Complex real-life service and manufacturing systems have a number of specific features compared to simplistic cases, posing important methodological challenges. Basic queueing theory provides key insights to practitioners, but not a complete and deep understanding of the system. The complexity of queueing-theory-based methods has also led companies to use other tools (e.g., simulation) instead.
Finally, queueing theory has become popular in academic research, especially for operations modeling, because the complexity and size of real-life problems can be reduced to models that are relatively simple yet rich enough. Compared to a similar simulation model, such models are less detailed and lack the transient behavior of the system, but they are simple and sufficient enough to support a decision (de Treville and Van Ackere, 2006). Relatively simple and quick solutions are often preferred for an initial system analysis or for quick decisions.
The rest of this paper is organized as follows: In Section 2.2, we give a brief review of the evolution of queueing network theory, focusing on decomposition methods. In Section 2.3, we list selected queueing software packages, all of which are freely available for download on the Internet. Some manufacturing software packages based on queueing theory are presented in Section 2.4. Finally, we provide a conclusion and give recommendations in Section 2.5.
Queueing networks have been extensively studied in the literature since Jackson's seminal paper (Jackson, 1957). The first significant results were those of Jackson (Jackson, 1957, 1963), who showed that under special assumptions (exponential interarrival and service times, Markovian routing, first-come-first-served discipline, ...) a queueing network may be analyzed by considering its stations each in isolation (product form). Gordon and Newell showed
also holds for closed queuing networks (i.e., networks where the number of jobs
is fixed) with exponential interarrival and service durations (Gordon and Newell,
1967). Those results have been extended in (Baskett et al, 1975) and (Kelly, 1975)
to other special cases (open, closed and mixed networks of queues with multiple
job classes and different service disciplines). Since this kind of results was possible
only under restrictive assumptions, other researchers tried to extend product form
solutions to more general networks (decomposition methods). Several authors (
Kuehn (1979),Whitt (1983), Pujolle and Wu (1986), Gelenbe and Pujolle (1987) and
Chylla (1986) among others) proposed decomposition procedures for open G/G/1
(G/G/m) queueing networks. Closed networks of queues have also been analyzed
by decomposition (see, e.g., Marie, 1979). This approach has been modified in
different ways since though (e.g., multiple job classes (Bitran and Tirupati, 1988;
Whitt, 1994). In (Kim, 2004) and (Kim et al, 2005) it is shown that the classical
Whitt’s decomposition method performs poorly in some situations (high variabil-
ity and heavy traffic) and the innovations method is proposed as improvement, by
replacing relations among squared coefficients of variability with approximate re-
gression relationships among in the underlying point processes. This relationships
allow to add information about correlations.
It seems that the application of this method gives satisfactory results in various
cases. However, there are still some situations where the existing tools fail. Other
approaches which have been proposed include diffusion approximations (Reiser and
Kobayashi, 1974) and Brownian approximations (Dai and Harrison, 1993; Harrison
and Nguyen, 1990; Dai, 2002).
Queueing theory is a well-known method for evaluating the performance of man-
ufacturing systems under the influence of randomness (see, e.g., Buzacott and Shan-
thikumar, 1993; Suri, 1998). The randomness mainly comes from natural variability
of interarrival times and service durations. Queueing networks modeling has its ori-
gins in manufacturing applications: Jackson’s papers (Jackson, 1957, 1963) targeted
the analysis of job shops, a class of discrete manufacturing systems. Suri et al (1993)
gave a detailed survey of analytical models for manufacturing including queueing
network models. Govil and Fu (1999) presented a survey on the use of queueing
theory in manufacturing. Shanthikumar et al (2007) surveyed applications of queu-
ing networks theory for semiconductor manufacturing systems and discussed open
problems.
The developed theory motivated the creation of many software packages for the analysis of queueing networks. These packages presuppose a good level of familiarity with queueing theory. Some early packages were based on original algorithms. The Queueing Network Analyzer (QNA) was proposed by Whitt as an implementation of his two-node decomposition method (Whitt, 1983). QNET
The important question is whether these software tools are practical and capable
enough to satisfy the complex industry needs. Moreover, among the majority of
functionalities that they offer, which one is suitable under which circumstances?
In a practical context, the user of this kind of software is assumed to have an acceptable level of knowledge of queueing theory. The modeling has to be done separately, and the results are generally given in a raw form. Those drawbacks obviously do not permit wide use within a company, given that managers are generally not queueing specialists.
In addition to the previous software tools, more specific software packages were designed for manufacturing based on queueing network theory. In such packages the modeling aid is automatic and embedded in the software, giving the user the ability to model the manufacturing system without worrying about the theoretical side. They are particularly suitable for industrial users with little or no queueing knowledge.
Snowdon and Ammons (1988) surveyed eight queueing network packages existing at that time. Some queueing network software packages are in the public domain while others are sold commercially by a software vendor. CAN-Q is a recursive algorithm for solving a product-form stochastic model of production systems (Co and Wysk, 1986), based on the results of Jackson and of Gordon and Newell. A version of QNA supporting some features of manufacturing systems was also proposed by Segal and Whitt (1989), but there is no indication that this package was sold as a commercial product or distributed for wide use. Other early packages include Q-LOTS (Karmarkar et al, 1985), MANUPLAN (Suri et al, 1986) and Operations Planner (Jackman and Johnson, 1993).
MANUPLAN includes an embedded dynamic model based on queueing network theory and provides common performance results such as WIP, tool utilization and production rate. The tool also provides trade-off analysis among inventory levels, flow times, reliability of the tools, etc. (Suri et al, 1986).
MPX is perhaps the most popular software package in its category. It is the successor of MANUPLAN. Users greatly appreciate the speed of calculations and the ease of modeling, despite several possible improvements to its behavior and interface. MPX's exact algorithm is not published. Apparently, it uses the classical decomposition algorithm (Whitt, 1983) coupled with the operator/workstation algorithm (Suri et al, 1993), with some changes to support additional features. It also provides a procedure to compute optimal lot sizes and transfer batch sizes.
Still, the existing software model is quite generic and does not capture a high level of complexity. For instance, MPX does not support some manufacturing features, such as finite buffer capacity, service disciplines other than first-come-first-served and dynamic lot sizing, nor some popular production systems (e.g., Kanban).
On the other hand, several industries prefer to use systems design software such as SAP APO, IBM's A-Team, etc. (Pinedo, 2002), which generate their solutions based on heuristics, relaxations or approximations different from those of queueing software. However, those approaches usually have limitations. Their performance changes with the settings and, in general, the user needs to run several experiments to determine the most suitable algorithm. In addition, computation speed becomes one of the most important practical considerations. Instead of those all-in-one, multifunctional software designs, queueing software can provide quick and easy solutions that cover the system's dynamics and related effects, though not higher levels of system detail (Suri et al, 1995).
When using queueing network software in a practical setting, the resulting models are less accurate and detailed than simulation and give no insight into transient behavior, but they often suffice as decision support tools and can yield results that are useful in real-world applications (de Treville and Van Ackere, 2006). They provide a rapid and easy way to understand a system's dynamics and predict its performance, in contrast to complex simulation models, which require a vast amount of modeling effort, advanced knowledge and computer time. In today's world it is important to be able to evaluate alternatives rapidly, as manufacturing systems change continuously. These software packages are also an important tool for training and for teaching the impact of decisions on lead time and cost reduction.
Queueing network software still has limited usage in complex practical manufacturing applications. It is not clear to practitioners how queueing software can cover complex industry-related constraints together with tradeoffs between several performance objectives. Other issues, such as data requirements, may also be a cause: software that passes the test of accuracy and detail can fail miserably in the field because it requires data beyond what is easily available (Suri et al, 1995). These are basically limitations related to practical implementation.
Close contact between researchers and industrial users has been critical to the growth in use of the software. Emphasis on such contact, along with better linkages to operational systems, will ensure continued growth of manufacturing applications of queueing software (Suri et al, 1995). The use of the software in education may also help to enlarge its use in companies. When students realize the usefulness of this tool, it is natural that they will use it after they join industry or become managers.
While recognizing the importance of these tools and the opportunities they offer, the existing software packages are still limited in their modeling capabilities. To enlarge the usability of their packages, it is important for software creators to offer support for different real manufacturing systems. When handling modeling problems, a software design should be based on realistic assumptions (e.g., buffer capacities, priority rules, integration of forecasting and inventory policies). The combination of queueing network analysis with statistical and optimization tools can provide better solutions and attract more practical applications.
The presentation of the computational output is also an important factor. Customizable reports and graphical charts help to better understand the results. The software should also provide some insight into the interpretation of the results and warn the user about the limits of its performance (for example, MPX shows a warning when utilization is very high, saying that the results may not be accurate). Performance measures given by queueing packages are based only on steady-state measurements, given as average values of measures such as WIP and flow time. However, variance (or variability) information about the output performance measures may also be desired. Also, the provided average values are only approximate, and it may be useful to provide trustworthy bounds for them.
The success of a software package depends on many factors other than the accuracy of its computational method. Users look for a powerful tool with evidence of efficiency, but also a user-friendly, easy-to-learn and well-supported product (documentation and tutorials, demo version, consultancy/training courses). Integration with other packages such as spreadsheets, statistical packages, DBMSs, legacy applications and ERP systems is also a highly desired feature. Finally, the ability of the software to import/export data from/to other packages saves users time and effort.
References
Baskett F, Chandy K, Muntz R, Palacios F (1975) Open, closed and mixed networks
of queues with different classes of customers. Journal of the ACM 22(2):248–260
Bitran G, Tirupati D (1988) Multiproduct queueing networks with deterministic routing: Decomposition approach and the notion of interference. Management Science 34(1):75–100
Bosilj-Vuksic V, Ceric V, Hlupic V (2007) Criteria for the evaluation of business
process simulation tools. Interdisciplinary Journal of Information, Knowledge
and Management 2:73–88
Chapter 3
A Review of Decomposition Methods for Open Queueing Networks
Boualem Rabta
Abstract Open queueing networks are useful for modeling and performance evaluation of complex systems such as computer systems, communication networks, production lines and manufacturing systems. Exact analytical results are available in only a few situations with restricted assumptions. In the general case, feasible solutions can be obtained only through approximations. This paper reviews performance evaluation methods for open queueing networks with a focus on decomposition methods.
3.1 Introduction
Open queueing networks (OQN) are useful for modeling and performance evaluation of complex systems such as computer systems, communication networks, production lines and manufacturing systems. A queueing network consists of several connected service stations. It is called open if customers can enter the network from outside and also leave it. A single-station (or single-node) queueing system consists of a queueing buffer of finite or infinite size and one or more identical servers. We will focus on unrestricted networks where each station has an infinite waiting capacity. Customers arrive from an external source to any station and wait for an available server. After being served, they move to the next station or leave the system.
Performance evaluation of open queueing networks has been addressed through:
• Exact methods: analytical results are available in only a few situations with simple assumptions and particular topologies (Jackson networks). Many classes of networks have no known closed-form solutions.
Fig. 3.1 An example of an open queueing network (a) and a single station (b)
The parameters of each subnetwork depend on the state of other subnetworks and
thus acknowledge the correlation with other subnetworks. The main difficulty lies
in obtaining good approximations for these parameters.
While the theory of single-station queues finds its origins in Erlang’s work on
telecommunications at the beginning of the 20th century, the analysis of networks
of queues began in the 1950s. Initial results appeared in Jackson (1954) who con-
sidered a system of two stations in tandem. Jackson (1957, 1963) analyzed a class
of open queueing networks with Poisson external arrivals, exponential service times
and Markovian routing of customers, and showed that the equilibrium probabil-
ity distribution of customers could be obtained through node-by-node decomposi-
tion. Kelly (1975, 1976) extended Jackson’s work by including customers of several
classes and different service disciplines. Similar results were presented by Barbour
(1976). Baskett et al (1975) presented the most comprehensive results at the time
for the classical models.
First surveys of queueing network theory include Lemoine (1977) and Koenigs-
berg (1982). Lemoine discussed an overview of equilibrium results of general Jack-
son networks and the methodology which has been employed to obtain those results.
Disney and Konig (1985) presented an extensive survey covering the seminal works
of Jackson and the extensions of Kelly, including a bibliography of more than 300
references. Suri et al (1993) examined performance evaluation models for different
manufacturing systems including production lines (tandem queues), assembly lines
(arborescent queues), job-shops (OQN),...
Buzacott and Shanthikumar (1992, 1993), Bitran and Dasu (1992) and Bitran and
Morabito (1996) analyzed both performance evaluation models and optimization
models for queueing networks. Bitran and Dasu (1992) discussed strategic, tactical
and operational problems of manufacturing systems based on the OQN methodol-
ogy, with a special attention to design and planning models for job-shops. Govil
and Fu (1999) presented a survey on the use of queueing theory in manufacturing.
Shanthikumar et al (2007) surveyed applications of queuing networks theory for
semiconductor manufacturing systems and discussed open problems. Also, some
software packages for the analysis of manufacturing systems are based on queue-
ing networks theory. For instance, Manuplan and MPX (Suri et al, 1995) implement
decomposition methods.
When interarrival and service times are exponential, we refer to the network as a Jackson network. Here, the network is composed of several interconnected M/M/m stations with first-come-first-served (FCFS) service discipline and infinite queue capacity (with n + 1 stations in the system, where station 0 represents the external world to the network). Each station j is then described by three parameters:
• the number of servers in the station, m_j;
• the external arrival rate of customers to station j, λ_0j;
• the expected service rate, μ_j.
A customer who finishes service at station i moves to station j with probability r_ij, where 0 ≤ r_ij ≤ 1, ∀ i, j = 0, .., n and ∑_{j=0}^n r_ij = 1, ∀ i = 0, .., n. Thus, r_0j is the probability that a customer enters directly from outside to station j, and r_j0 is the probability that a customer leaves the network just after completing service at station j.
Denote by λ_j the overall arrival rate to station j and by λ the overall arrival rate to the whole network. By a result of Burke (1956) and Reich (1957), we know that the output of an M/M/m queue in equilibrium is Poisson with the same rate as the input process. Thus,

λ_j = λ_0j + ∑_{i=1}^n r_ij λ_i ,   ∀ j = 1, .., n,   (3.1)
The equilibrium distribution of the network then has the product form π(x_1, .., x_n) = ∏_{j=1}^n π_j(x_j), where π_j is the steady-state distribution of the classical M/M/m_j queueing system:

π_j(x_j) = π_j(0) (m_j ρ_j)^{x_j} / x_j!   if x_j ≤ m_j,
π_j(x_j) = π_j(0) m_j^{m_j} ρ_j^{x_j} / m_j!   if x_j > m_j,

with ρ_j = λ_j / (m_j μ_j).
This result says that the network acts as if each station could be viewed as an in-
dependent M/M/m queue. In fact, it can be shown (Disney, 1981) that, in general,
the actual internal flow in these kinds of networks is not Poisson (as long as there
is any kind of feedback). Nevertheless, the previous relation still holds (see, Gross
and Harris, 1998).
The expected waiting time in queue at station j is then given by:

E(W_j) = ρ_j (m_j ρ_j)^{m_j} π_j(0) / (λ_j (1 − ρ_j)² m_j!) .
The expected number of visits of an arbitrary customer to station j is

E(V_j) = λ_j / λ_0 ,   (3.2)

where λ_0 = ∑_{i=1}^n λ_0i. Finally, the expected lead time E(T) (or cycle time) for an arbitrary customer, that is, the total time spent by a customer in the network from its arrival moment to its final departure, is given by:

E(T) = ∑_{j=1}^n E(V_j) ( E(W_j) + 1/μ_j ) .
Note that the model in Jackson (1963) allows for arrival and service rates to
depend on the state of the system.
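As an illustration, the Jackson-network computations above (the traffic equations (3.1), the M/M/m_j waiting times and the lead time E(T)) can be sketched in a few lines of code. This is a minimal sketch written for this review, not part of any package discussed here; the function name `jackson_performance` and its NumPy-based interface are choices made for the example.

```python
import math
import numpy as np

def jackson_performance(lam0, mu, m, R):
    """Analyze an open Jackson network: solve the traffic equations (3.1),
    evaluate each station as an independent M/M/m_j queue, and combine the
    results into the expected lead time E(T).

    lam0 : external arrival rates lambda_0j (one per station)
    mu   : service rates mu_j (per server)
    m    : numbers of servers m_j
    R    : routing matrix, R[i, j] = r_ij
    """
    n = len(lam0)
    # Traffic equations: lambda_j = lambda_0j + sum_i r_ij lambda_i
    lam = np.linalg.solve(np.eye(n) - R.T, lam0)
    rho = lam / (np.asarray(m) * np.asarray(mu))
    assert np.all(rho < 1), "network must be stable (rho_j < 1)"

    EW = np.zeros(n)
    for j in range(n):
        mj, r = m[j], rho[j]
        # pi_j(0): probability that station j is empty
        s = sum((mj * r) ** k / math.factorial(k) for k in range(mj))
        pi0 = 1.0 / (s + (mj * r) ** mj / (math.factorial(mj) * (1 - r)))
        # Expected waiting time in queue at station j
        EW[j] = r * (mj * r) ** mj * pi0 / (lam[j] * (1 - r) ** 2 * math.factorial(mj))

    EV = lam / lam0.sum()                                 # expected visits, eq. (3.2)
    ET = float(np.sum(EV * (EW + 1.0 / np.asarray(mu))))  # lead time E(T)
    return lam, EW, ET
```

For a two-station tandem line with λ_01 = 1, μ_j = 2 and single servers, the sketch reproduces the standard M/M/1 values (E(W_j) = 0.5 at each station and E(T) = 2).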
Whitt (1999) proposed a time-dependent and state-dependent generalization of a
Jackson queueing network to model a telephone call center. For each station j, exter-
nal arrivals λ j (t, x), service rates μ j (t, x) and routing probabilities r ji (t, x), i = 1, .., n
depend upon the time t and the state x = (x1 , x2 , .., xn ) of the system. The Markovian
structure makes it possible to obtain a time-dependent description of performance as
the solution of a system of ordinary differential equations, but the network structure
induces a very large number of equations, tending to make the analysis intractable.
The author presented a framework for decomposition approximations by assuming
the transition intensities of the underlying Markov chain to be of a product form.
Baskett et al (1975) treated multiclass Jackson networks and obtained product form
solutions for three service disciplines : processor sharing, ample service and last–
come–first–served with preemptive resume servicing. Customers are allowed to
switch classes after completing service at a station. The external input may be state
dependent and service distributions can be of the phase type. They also considered
multiple server first–come–first–served stations where customers of different classes
have the same rate of exponentially distributed service times. See the discussion in
Kleinrock (1976, Sec. 4.12). Reiser and Kobayashi (1974) generalized the result of
Baskett et al. by assuming that customer routing transitions are characterized by a
Markov chain decomposable into multiple subchains.
Kelly (1975, 1976, 1979) also extended Jackson’s results to multiple class queue-
ing networks. The type of a customer is allowed to influence his choice of path
through the network and, under certain conditions, his service time distribution at
each queue. Kelly's model allows for different service disciplines. Even so, the equilibrium probability has a product form (see also Disney and Konig, 1985).
Let I be the number of customer classes. Customers of type i arrive to the network
as a Poisson process with rate λ (i) and follow the route
r_1^(i), r_2^(i), ..., r_{f_i}^(i),

where r_j^(i) is the j-th station visited by this type and r_{f_i}^(i) is the last station visited
before leaving the system. At station j, customers have an exponentially distributed
service requirement where requirements at stations visited by a customer of a par-
ticular class, are independent and those at all stations for all customers are mutually
independent and independent of the arrival processes.
If queue j contains k_j customers, then the expected service requirement for the customer in position l is 1/μ_j^(l). Also, x_jl = (v_jl, s_jl) (l = 1, ..., k_j) indicates that the l-th customer in the queue is of type v_jl and has reached stage s_jl along its route. X_j = (x_j1, x_j2, ..., x_jk_j) denotes the state of station j. The state of the network is represented by X = (X_1, X_2, ..., X_n). It is then proved (Kelly, 1975; Disney and Konig, 1985) that the equilibrium distribution is given by:
π(X) = ∏_{j=1}^n π_j(X_j) ,

where

π_j(X_j) = B_j ∏_{l=1}^{k_j} α_j(v_jl, s_jl) / μ_j^(l) ,

B_j^{−1} = ∑_{a=0}^∞ b_j^a / ∏_{l=1}^a μ_j^(l) ,

b_j = ∑_{i=1}^I ∑_{s=1}^{f_i} α_j(i, s) ,

α_j(i, s) = λ^(i) if r_s^(i) = j, and 0 otherwise.
Let N_j (j = 1, .., n) be the stationary queue lengths in equilibrium. Their stationary probabilities are:

P(N_j = k_j) = B_j b_j^{k_j} / ∏_{l=1}^{k_j} μ_j^(l) .
The equilibrium departure process of class i is a Poisson process with rate λ (i) and
the departure processes of the different classes are mutually independent (Kelly,
1976).
Although these results are interesting, practical implementations are difficult due
to the size of the state space (Bitran and Morabito, 1996).
The previous model (Kelly, 1976) supposes deterministic routing. The general
routing is considered in Kelly (1975). Based on the fact that nonnegative probability
distributions can be well approximated by finite mixtures of gamma distributions,
he further conjectured that many of his results can be extended to include general
service time distributions. This conjecture was proved by Barbour (1976).
Gross and Harris (1998, Sec. 4.2.1) described a multiclass network where customers are served by m_j exponential servers at station j, with the same service rate for all classes and first-come-first-served discipline. In this case, the waiting time is the same for all customer classes. It is suggested to first solve the traffic equations separately for each customer class and then add the resulting arrival rates. Denote by λ_0j^(l) the external arrival rate of customers of class l from outside to station j, and let r_ij^(l) be the probability for a customer of class l to move to station j after completing service at station i. Solving the traffic equation (3.1) for each class l yields λ_j^(l), j = 1, .., n, i.e., the overall arrival rate of customers of class l to station j. We then obtain λ_j = ∑_{l=1}^I λ_j^(l). Using M/M/m_j results, we obtain the average number L_j of customers at station j (the average waiting time can be obtained by Little's formula). The average number of customers of class l at station j is then given by:

L_j^(l) = ( λ_j^(l) / ∑_{i=1}^I λ_j^(i) ) L_j .
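The per-class procedure just described (solve the traffic equations class by class, aggregate the rates, then split the station totals L_j proportionally) can be illustrated as follows. This is a sketch for this example only: the name `multiclass_split` is hypothetical, and the station totals L_j are assumed to have been computed beforehand from M/M/m_j results.

```python
import numpy as np

def multiclass_split(lam0_by_class, R_by_class, L):
    """Split station queue lengths by customer class, following the
    procedure of Gross and Harris (1998, Sec. 4.2.1): solve the traffic
    equations (3.1) separately per class, aggregate the arrival rates,
    then split the station totals L_j proportionally.

    lam0_by_class : external arrival-rate vector for each class l
    R_by_class    : routing matrix for each class l
    L             : average number of customers L_j at each station
    """
    n = len(L)
    # Per-class traffic equations: lambda^(l) = lam0^(l) + R^(l)T lambda^(l)
    lam_by_class = [np.linalg.solve(np.eye(n) - R.T, lam0)
                    for lam0, R in zip(lam0_by_class, R_by_class)]
    lam_total = np.sum(lam_by_class, axis=0)
    # L_j^(l) = (lambda_j^(l) / sum_i lambda_j^(i)) L_j
    return [lam_l / lam_total * L for lam_l in lam_by_class]
```

For a single station with class arrival rates 1 and 3 and L_1 = 2, the split is 0.5 and 1.5 customers, in proportion to the rates.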
When the interarrival or service times (or both) are not exponential, we talk about
a generalized Jackson network. Decomposition methods try to extend the indepen-
dence between stations and Jackson’s product form solution to general open net-
works. The individual stations are analyzed as independent GI/G/m queues after
approximating arrival processes by renewal processes. This approach involves :
• Combination of the input of each station : arrivals from outside and from other
stations are merged to produce an arrival flow to the station.
• Analysis of each station as independent GI/G/m : compute performance mea-
sures and departures.
• Splitting up departures from each station: decomposition of the overall departure flow into departure flows to other stations and to the outside.
In general, distributions are specified by their first two moments (the mean and the squared coefficient of variation). This approach was first proposed by Reiser and Kobayashi (1974) and improved by Sevcik et al (1977), Kuehn (1979), Shanthikumar and Buzacott (1981), Albin (1982) and Whitt (1983a), among others.
Suppose we have n internal stations in the network with one server at each station. For a station j, external interarrival times a_0j and service times s_j are independent and identically distributed (i.i.d.) with general distributions. Define the following notation:
λ_0j : expected external arrival rate.
ca_0j : scv (squared coefficient of variation), or variability, of the external interarrival time (ca_0j = V(a_0j)/E(a_0j)²).
μ_j : expected service rate (μ_j = 1/E(s_j)).
cs_j : scv, or variability, of the service time (cs_j = V(s_j)/E(s_j)²).
Merging arrivals:
The asymptotic method (Sevcik et al, 1977) and the stationary-interval method (Kuehn, 1979) may be used to determine ca_j, i.e., the variability of the merged interarrival time (ca_j = V(a_j)/E(a_j)², λ_j = 1/E(a_j)). Moreover, the asymptotic method is asymptotically correct as ρ_j → 1 (heavy traffic) and the stationary-interval method is asymptotically correct when the arrival process tends to a Poisson process (Bitran and Morabito, 1996).
Let ca_ij be the variability of the interarrival time at station j of the stream coming from station i. Based on the asymptotic method, ca_j is a convex combination of the ca_ij, given by:
ca_j = (λ_0j/λ_j) ca_0j + ∑_{i=1}^n (λ_ij/λ_j) ca_ij ,   (3.3)

where

w_j = 1 / ( 1 + 4(1 − ρ_j)² (v_j − 1) ) ,   (3.4)

v_j = 1 / ∑_{i=0}^n (λ_ij/λ_j)² .
Computing departures:

cd_j = ca_j + 2ρ_j² cs_j − 2ρ_j (1 − ρ_j) E(W_j)/E(s_j) .

Using the Kraemer and Langenbach-Belz (1976) approximation for the expected waiting time E(W_j) at G/G/1 nodes,

cd_j = ρ_j² cs_j + (1 − ρ_j²) ca_j .
Splitting departures:
Under the assumption of Markovian routing, the departure stream from station j is split. The squared coefficient of variation cd_ji of the departure stream from station j to station i is given by

cd_ji = r_ji cd_j + 1 − r_ji .
The expected waiting time E(W_j) at station j may be estimated by the KLB formula (Kraemer and Langenbach-Belz, 1976):

E(W_j) = ρ_j (ca_j + cs_j) g(ρ_j, ca_j, cs_j) / ( 2 μ_j (1 − ρ_j) ) ,

where

g(ρ_j, ca_j, cs_j) = exp( −2(1 − ρ_j)(1 − ca_j)² / (3ρ_j (ca_j + cs_j)) )   if ca_j < 1,
g(ρ_j, ca_j, cs_j) = 1   if ca_j ≥ 1.
For other approximations of E(W j ) see, e.g., Shanthikumar and Buzacott (1981) and
Buzacott and Shanthikumar (1993).
The expected lead time E(T ) for a customer (including waiting times and service
times) is given by :
E(T) = ∑_{j=1}^n E(V_j) ( E(W_j) + 1/μ_j ) ,
where

α_j = 1 + w_j ( p_0j ca_0j − 1 + ∑_{i=1}^n p_ij (1 − r_ij + r_ij ρ_i² y_i) ) ,

β_ij = w_j p_ij r_ij (1 − ρ_i²) ,

with w_j defined by (3.4) and

p_ij = λ_ij/λ_j = r_ij λ_i/λ_j ,

y_i = 1 + ( max{cs_i, 0.2} − 1 ) / √m_i .
The expressions for α_j and β_ij follow from considerations of the merging and splitting of customer streams and of the impact of service time variability on the squared coefficients of variation of the traffic streams departing from a station, as opposed to those of the incoming streams.
The expected waiting time at station j is given by:

E(W_j) = [(ca_j + cs_j)/2] W_j,
where W_j is the expected waiting time in an M/M/m_j queue. Many other approximation formulas for the mean waiting time in a GI/G/m system are given in Bolch et al (2006, Sec. 6.3.6).
The method described in this section allows for customer creation and combination by using a multiplication factor γ_j at each station j (Whitt, 1983a).
For those stations where r_{jj} > 0 it is advantageous to consider the successive visits of a customer as one longer visit, that is, a customer gets its total service time continuously. The stations' parameters are changed as follows (Kuehn, 1979):

μ*_j = μ_j(1 − r_{jj}),

cs*_j = r_{jj} + (1 − r_{jj}) cs_j,

r*_{ij} = r_{ij}/(1 − r_{jj}),   i ≠ j.
A proof of an exact analogy between stations with and without feedback, with respect to the distribution of queue lengths and mean sojourn times, was given by Takacs (1963) in the case of G/M/1 stations. The extension to general arrival processes is an approximation. It has been shown by simulation that this reconfiguration step of the network yields good accuracy, whereas the analysis without this step results in considerable inaccuracies (Kuehn, 1979).
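A minimal sketch of this reconfiguration step, assuming the aggregation cs*_j = r_{jj} + (1 − r_{jj}) cs_j reconstructed above for the service-time variability; the numerical values and station names are illustrative:

```python
# Fold the immediate feedback probability r_jj of a station into one
# longer aggregate visit (Kuehn, 1979): slower service rate, adjusted
# service SCV, and renormalized routing probabilities for i != j.

def eliminate_feedback(mu, cs, r_self, routing_row):
    """Return adjusted (mu*, cs*, routing*) for a station with r_jj > 0."""
    mu_star = mu * (1.0 - r_self)            # aggregate service rate
    cs_star = r_self + (1.0 - r_self) * cs   # aggregate service-time SCV
    routing_star = {i: r / (1.0 - r_self)    # renormalize r_ij, i != j
                    for i, r in routing_row.items()}
    return mu_star, cs_star, routing_star

# illustrative station: rate 2.0, service SCV 1.5, feedback prob. 0.25
print(eliminate_feedback(mu=2.0, cs=1.5, r_self=0.25,
                         routing_row={"station_2": 0.45, "exit": 0.30}))
```

After the transformation the outgoing routing probabilities again sum to one, so the reconfigured station drops into the decomposition unchanged.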
Further details may be found in Whitt (1983a,b) and Suri et al (1993).
Manufacturing systems:
To meet needs in the manufacturing environment, this method has been modified to
represent machine breakdowns, batch service, changing lot sizes and product testing
with associated repair and partial yields (Segal and Whitt, 1989).
Suresh and Whitt (1990) showed that for tandem queues, for example, Whitt's original procedure performs well for all except the last station when it is a bottleneck. That is, the expected waiting time at the bottleneck station is underestimated.
The heavy-traffic bottleneck phenomenon can be described as a relatively large
number in queue, observed when external arrivals are highly variable and a bottleneck station is visited after jobs go through stations with moderate traffic (Kim,
2005). Whitt (1995) suggested an enhancement to the parametric-decomposition
method for generalized Jackson networks. Instead of using a variability parameter
for each arrival process, he proposed the use of a variability function for each arrival
process; i.e., the variability parameter should be regarded as a function of the traffic
intensity of a queue to which the arrival process might go.
Dai et al (1994) proposed a hybrid method for analyzing generalized Jackson networks that employs both decomposition approximation and heavy traffic theory: the sequential bottleneck method, in which an open queueing network is decomposed into a set of groups of queues, i.e., not necessarily individual queues.
Whitt (1983a) proposed a procedure to aggregate all classes into a single one and utilize the single-class model described above. In this way the original multiple-class model is reduced to a single aggregate open network. After the analysis of the aggregate class model, the performance measures for each class are estimated individually. In many cases this aggregation step works quite well, but in some cases it does not (Whitt, 1994).
Bitran and Tirupati (1988) considered an open queueing network with multiple customer classes, deterministic routing and generally distributed arrival and service times. They pointed out that the splitting operation in Whitt's original procedure may not perform well due to the existence of interference among classes. Their approximation is based on the two-class case, obtained by aggregating all classes except the one of interest into a single class whose aggregate arrivals (class 2) are assumed to follow a Poisson process. Their procedure provides dramatic improvements in accuracy in some cases (Whitt, 1994).
As an extension to the approximations by Bitran and Tirupati (1988), Whitt (1994) developed methods for approximately characterizing the departure process of each customer class from a multi-class single-server queue ∑(GI_i/GI_i)/1 with a non-Poisson renewal arrival process and a non-exponential service-time distribution for each class, unlimited waiting space and the FCFS service discipline. The results are used for improving parametric-decomposition approximations for analyzing non-Markov open queueing networks with multiple classes. The effect of class-dependent service times is also considered there. Whitt used different approaches: an extension of Bitran and Tirupati's formula (based on batch Poisson and batch deterministic processes) and a heuristic hybrid approximation based on the results for the limiting case where a server is continuously busy.
Caldentey (2001) presented an approximation method to compute the squared
coefficient of variation of the departure stream from a multiclass queueing system
generalizing the results of Bitran and Tirupati (1988) and Whitt (1994).
Kim (2005) considered a multiclass queueing network with deterministic routing and highly variable arrivals. He pointed out that the previous procedures of Bitran and Tirupati (1988) and Whitt (1994) may not be accurate under high-variability assumptions.
Harrison and Lemoine (1981) considered networks of queues with an infinite num-
ber of servers at each station. They pointed out that independent motions of cus-
tomers in the system, which are characteristic of infinite-server networks, lead in
a simple way to time-dependent distributions of state, and thence to steady-state
distributions. Moreover, these steady-state distributions often exhibit an invariance
with regard to distributions of service in the network.
Massey and Whitt (1993) considered a network of infinite-server queues with nonstationary Poisson input. As a motivating application, they cited wireless (or mobile cellular) telecommunications systems. Their model appears as a highly idealized model, which initially ignores resource constraints. The different queues represent cells. Call originations are modeled as a nonhomogeneous Poisson process, with the nonhomogeneity capturing the important time-of-day effect.
In real life, many applications feature simultaneous job transitions. For example, in manufacturing, parts are often processed and transported in batches. Batch queueing networks have been considered by Kelly (1979) and subsequently by Whittle (1986) and Pollett (1987). Miyazawa and Taylor (1997) proposed a class of batch-arrival, batch-service continuous-time open queueing networks with batch movements. A requested number of customers is served simultaneously at a node and transferred to another node as, possibly, a batch of a different size, if there are sufficient customers there; the node is emptied otherwise. Their model assumes a Markovian setting for the arrival process, service times and routing, where batch sizes are generally distributed. The authors introduced an extra batch-arrival process while nodes are empty and showed that the stationary distribution of the queue length has a geometric product form over the nodes if and only if certain conditions are satisfied for the extra arrivals, and under a stability condition.
The correspondence between batch-movement queueing networks and single-movement queueing networks has also been discussed in Coleman et al (1997) for a class of networks having product-form solutions.
References
Akram M. Chaudhry
Abstract This paper addresses the issue of modeling, analysis and forecasting of time series drifted by autoregressive noise, and of finding an optimal solution by extending a conventional linear growth model with an autoregressive component. This additional component is designed to take care of the high frequencies of the autoregressive noise drift without influencing the low frequencies of the linear trend or compromising the parsimonious nature of the model. The parameters of this model are then optimally estimated through self-updating recursive equations using Bayesian priors. For the identification of the autoregressive order of the noise and the estimation of its coefficients, the ATS procedure of Akram (2001) is employed. Further, for unknown variance of the observations, an on-line variance learning and estimation procedure is discussed. To demonstrate practical aspects of the model some examples are given, and for the generation of short-, medium- and long-term forecasts in one go an appropriate forecast function is given.
4.1 Introduction
In many economic, financial and physical phenomena, time series drifted by autoregressive noise are observed. For the analysis of such series, numerous simple to complex models have been proposed by researchers. Most of these models are meant for either short-term forecasts only, or for medium-term or long-term forecasts only. Very few of these models generate all three types of forecasts in one go. To obtain all these types of forecasts, usually, three different models are employed using different model settings. These forecasts are then joined or combined to visualize them in one sequence over the short- to long-term time horizon. To do so, some sort of alignment is made by the forecasters.
46 Akram M. Chaudhry
For the analysis and forecasting of a time series {y_t}_{t=1,2,…,T} bearing white noise {δ_t}_{t=1,2,…,T}, the conventional linear growth model at time t is locally defined as:

Y_t = f θ_t + δ_t
θ_t = G θ_{t−1} + w_t

where:
f = (1 × n) design vector.
θ_t = (n × 1) vector of unknown stochastic parameters.
G = (n × n) matrix, called the state or transition matrix, with n nonzero eigenvalues {λ_i}_{i=1,…,n}.
δ_t is the observation noise, assumed to be normally distributed with mean zero and some known constant variance.
w_t = (n × 1) vector of parameter noise, assumed to be normally distributed with mean zero and a known constant variance-covariance matrix W = diag(W_1, …, W_n), the components of which are as defined by Harrison and Akram (1983).
4.1.1.1 Example 1
G = {g_ij}_{i,j=1,2} is a 2 × 2 transition matrix having nonzero eigenvalues {λ_i}_{i=1,2}, such that g_11 = 1, g_12 = 1, g_21 = 0, g_22 = 2. This matrix assists in the transition of the low frequency of the trend housed in the parameter vector from the state at time t − 1 to t.
W = diag(w_1, w_2), where for a smoothing coefficient 0 < β < min(λ_i²)_{i=1,2} the expressions of w_1 and w_2 are:

w_1 = V(1 − β)(λ_1 + λ_2)(λ_1 λ_2 − β) / (λ_2 β)
4.1.2 Comments
The observations drifted by AR(p)-type noise, i.e., Φ_p(B) E_t = δ_t, may locally be modeled as:

y_t = f θ_t + E_t
θ_t = G θ_{t−1} + w_t
E_t = [Φ_p(B)]^{−1} δ_t

where:
Φ_p(B) = ∏_{i=1}^{p} (1 − φ_i B) is invertible, that is, 0 < |φ_i| < 1 for all i.
This model may be rewritten in canonical form as:

Y_t = f* θ_t*
θ_t* = J θ_{t−1}* + w_t*,   w_t* ∼ N(0, W*)

where, for an AR(p) process:
W_1* = {w_ij}_{i,j=1,…,p} such that w_ij = V for i = j = p and zero otherwise,
W_2* = diag{w_1, w_2}/V, where w_1 and w_2 are as defined earlier,
and βφ, such that 0 < βφ < 1, is a damping coefficient for highly volatile noise frequencies.
The order and the values of {φ_i}_{i=1,…,p} are determined by using the noise identification and testing procedure of Akram (2001).
G, the state transition matrix for the low frequencies of the underlying processes, is as defined earlier.
4 Parsimonious Modeling and Forecasting of Time Series 49
R_t = J C_{t−1} J′ + W*
A_t = R_t f*′ [V + f* R_t f*′]^{−1}
C_t = [I − A_t f*] R_t
e_t = y_t − f* J m_{t−1}
m_t = J m_{t−1} + A_t [y_t − f* J m_{t−1}]
where, at time t, R_t is a system matrix, I is an identity matrix, A_t is an updating or gain vector, e_t is the one-step-ahead forecast error, and W*, the variance-covariance matrix of the parameter noise, is as defined earlier. The dimensions of all these components are assumed to be compatible with their associated vectors and matrices of the recursive updating equations.
4.3.1 Example 2
For a time series drifted by an AR(2) noise process, a linear growth model in canonical form at time t is operated by defining:

f* = (1 0 0 0), a (1 × 4) vector,
W_1* = {w_ij}_{i,j=1,2} such that w_22 = V and zero otherwise,
W_2* = diag{w_1, w_2}/V, where w_1 and w_2 are as defined earlier.
For the above recurrence equations the observation noise variance V is assumed to be known. If it is unknown, it may be estimated at time t using an on-line variance learning procedure. This variance learning system starts generating fairly accurate variance estimates after a couple of observations.
For generating short-, medium- and long-term forecasts in one go, the forecast function is:

F_t^{(k)} = f* J^k m_t,   for integers k ≥ 1.

This function yields optimum short-term forecasts and fairly accurate medium- to long-term forecasts at the same time.
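The recursive updating equations and the forecast function above can be sketched as follows. The matrices J, f*, W*, the observation variance V and the toy series are illustrative placeholders (a plain local linear trend rather than the paper's AR-augmented canonical form):

```python
import numpy as np

# One recursive Bayesian update (R_t, A_t, C_t, e_t, m_t) for a
# state-space model y_t = f* theta_t + noise, theta_t = J theta_{t-1} + noise,
# assuming known observation variance V.

def dlm_update(m, C, y, J, fstar, Wstar, V):
    R = J @ C @ J.T + Wstar                 # prior covariance R_t
    f = fstar.reshape(1, -1)
    Q = float(f @ R @ f.T) + V              # one-step forecast variance
    A = (R @ f.T) / Q                       # gain vector A_t
    e = y - float(f @ J @ m)                # one-step-ahead forecast error e_t
    m_new = J @ m + A * e                   # posterior mean m_t
    C_new = (np.eye(len(m)) - A @ f) @ R    # posterior covariance C_t
    return m_new, C_new, e

def forecast(m, J, fstar, k):
    """k-step-ahead forecast F_t(k) = f* J^k m_t."""
    return float(fstar.reshape(1, -1) @ np.linalg.matrix_power(J, k) @ m)

# toy example: local linear trend (n = 2), placeholder values throughout
J = np.array([[1.0, 1.0], [0.0, 1.0]])
fstar = np.array([1.0, 0.0])
Wstar = np.diag([0.1, 0.01])
V = 1.0
m, C = np.zeros((2, 1)), np.eye(2)
for y in [1.0, 2.1, 2.9, 4.2]:
    m, C, e = dlm_update(m, C, y, J, fstar, Wstar, V)
print(forecast(m, J, fstar, k=3))   # a longer-term forecast from the last state
```

The same loop produces short-, medium- and long-term forecasts at once by varying k, which is the point of the single forecast function.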
4.6 Comments
The above model is presented for time series drifted by an AR(p) noise process. In practice, time series drifted by more than an AR(2) process are rarely observed. In many cases, therefore, a linear growth model with a drifted AR(2) component is all that is required. For more discussion see Akram (1994), Bohlin (1978) and Harrison and Akram (1983).
To determine the exact order of the AR noise, many techniques are available. For great ease, however, the AIC of Akaike (1973) and the ATS of Akram (2001) may be employed. Of these two techniques, ATS may be effectively used by practitioners to estimate the unknown values of the autoregressive coefficients {φ_i}_{i=1,…,p}, as demonstrated by Akram and Irfan (2007).
The above model is parameterized in a canonical form. For application purposes it may, if desired, be transformed to a diagonal form by using the inverse transformation of Akram (1988).
This model, if used in accordance with Akram (1994), is expected to take care of the high frequencies of the autoregressive noise while keeping the low frequencies of the underlying process of the time series intact, as a result yielding fairly accurate forecasts.
References
5.1 Introduction
The increase of the traffic at the container park of the Bejaia harbor and the widening of its physical surface are not directly proportional. This is why improving the productivity of the park and the good functioning of the unloading and loading system requires the specialization of the equipment and the availability of a storage area which can receive the unloaded quantity, with a configuration able to adapt and respond to the traffic growth. Accordingly, a first study aiming to model the unloading process was realized in 2003 (Sait et al, 2007). At that time, the container park of the EPB (Harbor Company of Bejaia) was of 3000 ETU (Equivalent Twenty Units): 2100 ETU for the full park and 900 ETU for the empty park. The study showed that for an arrival rate of 0.55 ships/day and a batch size of 72 ETU, the mean number of containers in the full park was 1241 ETU. By varying the rate of the arrivals (or the batch size), the full park would be saturated at a rate of 1.0368 ships/day (or at a batch size of 200 ETU). This study was one of the factors that raised the awareness of the EPB of the need to create a terminal dedicated to the treatment of containers, whence the birth of the BMT (Bejaia Mediterranean Terminal) Company. The company began its commercial activities in July 2005. In order to ensure a good functioning of the container terminal, some performance evaluation studies have been established. A first study was realized in 2007 (see Ayache et al, 2007). Its objective was the global modeling of the unloading/loading process, and it showed that if the number of ships (having a mean size of 170 ETU), which was of 0.83 ships/day, increases to 1.4 ships/day, the full park will undergo a saturation of 94%.
Djamil Aı̈ssani
Laboratory LAMOS, University of Béjaia,
e-mail: lamos bejaia.hotmail.com
Smail Adjabi
Laboratory LAMOS, University of Béjaia, e-mail: adjabi@hotmail.com
54 D. Aı̈ssani, S. Adjabi, M. Cherfaoui, T. Benkhellat and N. Medjkoune
In this section, we present the container park of the BMT Company and identify the motion of the containers.
Currently, the terminal is provided with four quays of 500 m and a container park which has a storage capacity of 10300 ETU. The park is divided into four zones: full park, empty park, park with refrigerating containers, and a zone of discharge/potting (see Fig. 5.1.a).
The park with full containers has a capacity of 8300 ETU and the one with empty containers has a capacity of 900 ETU. In addition, the BMT container terminal offers specialized installations for refrigerating containers and dangerous products with a capacity of 600 ETU, as well as a zone of destuffing/packing with a capacity of 500 ETU (see Fig. 5.1.a).
The principal motions of the containers at the Bejaia harbor are schematized in Fig. 5.1.b (Ayache et al, 2007).
Fig. 5.1 (a):Plan of the terminal. (b): Plan of the model of treatment of the containers
4. Handling step: The quay gantry crane raises the container to put it on board the ship.
5. Service step: The operational service of the EPB escorts the ship to the roads to leave the Bejaia harbor.
1. Deliveries: The delivery concerns the full containers or discharged goods. The means used to perform this operation are: RTGs, trucks, stackers and forklifts if necessary.
2. Restitution of the containers: At the restitution of the containers, two zones are intended for the storage of the empty containers, one for the empty 20-foot containers and the other for the empty 40-foot containers.
The evolution of the number of containers handled, in ETU, is presented in the graph (Fig. 5.3.a). It is noted that in the year 2007 the BMT company treated 100000 ETU. Its objective for the year 2008 was to treat 120000 ETU.
In March 2008, a forecast calculation was carried out. The considered series is the number of containers treated (loaded/unloaded) in ETU. The data used are
5 Forecast of the Traffic and Performance Evaluation of the BMT Container Terminal 57
Fig. 5.2 (a): Diagram of the model of the unloading process. (b): Diagram of the model of the
storage process.
collected monthly and cover a period of about two years (from January 2006 to March 2008). The method used for the calculation of the forecasts is the exponential smoothing method (Blondel, 2002).
The graph (Fig. 5.3.b) represents the original series of the number of containers in ETU, as well as the forecasts (from April to December 2008). It is thus noted that the objective that the BMT company had fixed at the beginning of the year was likely to be achieved.
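The exponential smoothing method used for these forecasts can be sketched as follows; the smoothing constant α and the toy monthly series are illustrative assumptions, not the BMT data:

```python
# Simple exponential smoothing: each smoothed value is a weighted
# average of the newest observation and the previous smoothed value,
# and the last smoothed value serves as the one-step-ahead forecast.

def exponential_smoothing(series, alpha):
    """Return the sequence of smoothed values for the series."""
    smoothed = [series[0]]                 # initialize with the first observation
    for y in series[1:]:
        smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
    return smoothed

monthly_etu = [8200, 8600, 9100, 8800, 9400, 9900]   # toy data, in ETU
print(exponential_smoothing(monthly_etu, alpha=0.3)[-1])  # basis for next month's forecast
```

In practice the smoothing constant is tuned (and seasonal variants such as Holt-Winters used) before forecasts over a nine-month horizon are produced.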
In the same spirit, we carried out the same work for the year 2009. The objectives of the BMT company correspond to the treatment of 130254 ETU over the year. The calculations of the forecasts are presented in Table 5.1.
First of all, we carry out a statistical analysis to identify the model of the network of queues which corresponds to our system.
The results of the preliminary statistical analysis (estimation and goodness-of-fit tests) on the data collected for the identification of the parameters of the processes are summarized in Table 5.2.
According to this preliminary analysis, one concludes that the performance evaluation of the terminal of Bejaia is really a complex problem. Indeed, the system is modeled by a network of general queues, because it consists of queues of type G/G/1, M^[X]/G/1, with blocking, etc. Therefore, we cannot use analytical methods (as for the Jackson networks or BCMP) to obtain the characteristics of the system.
The models are:
1. Unloading process
[Diagram: batch arrivals M^[X] feeding a network of ·/G/1 and ·/G/m stations, then departure]

Fig. 5.4 (a): Modeling of the unloading process
2. Loading process
[Diagram: arrivals of customers of type 1 (M/·/·) and of type 2 (D^[X]/·/·) each feed a ·/G/1 queue; the streams merge into a ·/G^[X]/m station before departure]
3. Delivery process
4. Restitution process
In the case of the loading and restitution models, the servers will be in service only if there is at least one customer of type 1 in the first queue, which will fix the size of the group to be treated. Otherwise, they will remain idle even if there are customers of type 2 in the second queue.
• It is noticed that a mean of 0.6104633 ships per day accost at the Bejaia harbor in order to be loaded with containers by the BMT company, and 0.7761129 ships per day for the unloading.
• The ships to be loaded request 214 ETU on average, and the BMT unloads 218 ETU on average from each ship.
• The mean number of delivered containers each day is n_3 = 120.9000 ETU.
• The mean number of restored containers each day is n_4 = 125.8974 ETU.
Because of the complexity of the global model, it is not possible to calculate some
essential characteristics analytically. This is why we will call upon the simulation
approach.
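As a rough illustration of the kind of simulator involved, the sketch below simulates a batch-arrival single-server queue (an M^[X]/M/1 special case of the M^[X]/G/1 models above): ships bring batches of containers that are served one by one, and the time-average number in the system is estimated. The rates and the exponential batch and service distributions are illustrative assumptions, not the BMT parameters.

```python
import random

# Event-driven simulation of a batch-arrival single-server queue:
# batches (ships' container loads) arrive in Poisson fashion and the
# containers are served one at a time.

def simulate_mx_g_1(arrival_rate, mean_batch, service_mean, horizon, seed=1):
    rng = random.Random(seed)
    t, queue_len = 0.0, 0
    next_arrival = rng.expovariate(arrival_rate)
    next_departure = float("inf")
    area = 0.0                               # time-integral of queue length
    while t < horizon:
        t_next = min(next_arrival, next_departure, horizon)
        area += queue_len * (t_next - t)
        t = t_next
        if t == horizon:
            break
        if next_arrival <= next_departure:   # a batch of containers arrives
            batch = max(1, int(rng.expovariate(1.0 / mean_batch)))
            if queue_len == 0:               # server was idle: start service
                next_departure = t + rng.expovariate(1.0 / service_mean)
            queue_len += batch
            next_arrival = t + rng.expovariate(arrival_rate)
        else:                                # one container finishes service
            queue_len -= 1
            next_departure = (t + rng.expovariate(1.0 / service_mean)
                              if queue_len > 0 else float("inf"))
    return area / horizon                    # time-average number in system

print(simulate_mx_g_1(arrival_rate=0.78, mean_batch=218,
                      service_mean=0.004, horizon=10000.0))
```

The actual simulators described in the chapter were built in Matlab and cover the four processes with their specific service and routing rules; this sketch only shows the common mechanics of one batch-arrival queue.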
5.5.3 Simulation
We designed a simulator for each model under the Matlab environment. After the
validation tests of each simulator, their executions provided the results summarized
in table 5.4.
Interpretation: The results of the simulation show that the total number of containers loaded during one year will be 51598.20 ETU, that the mean numbers of ships in the roads and at the quay are respectively 0.0742 and 1.39 ships, and that the total number of ships loaded during one year will be 240.52.
Concerning the unloading process, the total number of containers unloaded during one year will be 64628.52 ETU, the mean numbers of ships in the roads and at the quay are respectively 0.0533 and 1.9308, the total number of ships unloaded during one year will be 296.17, and the total number of containers handled for the year 2008 will be 116226.72 ETU.
Concerning the storage parks, the mean number of containers in the full park will be 3372.9 ETU and the mean number of containers in the empty park will be 211.1208 ETU.
In order to study the behavior of the system under a variation of the arrival rate of the ships to be loaded and unloaded, other executions have been carried out. We increased the number of ships at the loading and at the unloading by 30%. The number of ships passes from 0.6104633 to 0.7936 per day for the loading and from 0.7761129 to 1.0089 per day for the unloading. The obtained results are summarized in Table 5.5.
Table 5.5 Performances of the processes in the case of an increase of 30% of the number of ships arriving at the Bejaia harbor, obtained by simulation

Process     Performance characteristic                    Value
Loading     Mean number of loaded containers/month        5458.30
            Mean number of loaded ships/month             25.4433
            Mean number of ships in the roads             0.09230
            Mean number of ships at the quay              1.40000
Unloading   Mean number of unloaded containers/month      6958.04
            Mean number of unloaded ships/month           31.8858
            Mean number of ships in the roads             0.06690
            Mean number of ships at the quay              1.90000
Storage     Mean number of full containers in the park    4874.20
            Mean number of empty containers in the park   154.9814
Interpretation: With an increase of 30% in the rate of ships arriving at the Bejaia harbor, we note that the mean numbers of ships in the roads and at the quay increase only a little. This means that the equipment available within the BMT company is sufficient to face this situation. In other words, an increase of 30% does not generate a congestion of ships in the roads or at the quay. On the other hand, the mean number of handled containers will undergo a remarkable increase, equivalent to about 30000 ETU. This increase will not saturate the full stock or the empty stock. Indeed, they will pass respectively from 3372.9 ETU to 4874.2 ETU and from 211.1208 to 154.9814 ETU, that is, from 41% to 59% of capacity for the full park and from 24% to 18% for the empty park.
5.7 Conclusion
The objective of this work is to analyze the functioning of the container park of the BMT company in order to evaluate its performances, and then to foresee the behavior of the system in the case of an increase of the arrival flow of container ships.
For this, we divided the system into four independent sub-systems: the "loading", "unloading", "full stock" and "empty stock" processes. Each sub-system is modeled by an open network of queues, and a simulation model of the functioning of each sub-system could be established. The goal of each simulator is to reproduce the functioning of the container park. The study shows that the container park will have the possibility of handling 116226.72 ETU, namely 51598.20 ETU at loading and 64628.52 ETU at unloading, with a mean number of 3372.9 ETU in the park, for entry rates of 0.6104 ships per day for the loading process and 0.7761 ships per day for the unloading process. After that, a variation of the arrival rate of the ships was proposed with the aim of estimating its influence on the performances of the system.
With an increase of 30% in the number of ships arriving at the Bejaia harbor, we note a small increase in the mean number of ships in the roads and at the quay. On the other hand, there will be a clear increase in the total number of treated containers, which will pass from 116226.72 ETU to 148996.08 ETU, including 65499.6 ETU at loading and 83496.48 ETU at unloading. We also note an increase in the mean number of containers in the full park, which will pass from 3372.9 to 4874.2 ETU. Regarding the number of ships, it will pass from 240.52 to 305.3 ships at loading and from 296.17 to 382.63 ships at unloading.
It would be interesting to complete this work by discussing the following items:
• An analytical resolution of the problem.
• Determination of an optimal management of the machines of the BMT company.
• Variation of other parameters.
References
6.1 Introduction
Johannes Fichtinger
Institute for Production Management, WU Vienna – Nordbergstraße 15, A-1090 Wien
e-mail: johannes.fichtinger@wu-wien.ac.at
Yvan Nieto
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: yvan.nieto@unine.ch
Gerald Reiner
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: gerald.reiner@unine.ch
66 Johannes Fichtinger, Yvan Nieto and Gerald Reiner
service level is crucial for customer satisfaction. Companies have to carefully adapt their delivery time to customer requirements and be prepared to cope with unplanned variation in demand as well as in supply to prevent stock-outs. In the context of a make-to-stock manufacturing strategy, a common solution to hedge unpredictable demand and supply variability is to constitute safety stocks. This approach is widely used in practice and often relies on a classical calculation that integrates the means and standard deviations of both demand and supply lead time.
A critical point to mention here is that the safety stock calculation assumes a stationary demand process, such that the two random variables, demand and lead time, are each assumed to be independent and identically distributed. Unfortunately, considering empirical data, demand process decomposition does not necessarily show these properties and, as a consequence, this calculation leads to volatile results. While for a stationary demand process the amount of historical data, i.e. the number of periods used for estimation of the process variability, does not affect the computation, this no longer holds when using empirical data. Often ignored, these points may turn out to be critical, as they may impact the supply chain dynamics and lead to inappropriate inventory levels as well as service levels.
The aim of this work is to present a dynamic two-stage supply chain model of
a supplier and a retailer with focus on the retailer. In particular, for the retailer,
we consider a periodic review inventory replenishment model, where the demand
distribution is not known. Hence, the retailer uses demand forecasting techniques
to estimate the demand distribution. For the supplier’s manufacturing process we
assume a pure make-to-order production strategy subject to limited capacity, where
orders are processed based on a strict first-in, first-out priority rule. Considering
that the supply chain evaluation has to be product- and customer-specific we use
an empirical reference dataset of a retail chain company to discuss our research
question. We show how unstable forecast errors impact supply chain performance
through its implication on order-up-to level calculation.
Specifically, we build a process simulation model and measure the effect of the
number of periods used in demand estimation on the performance of the supply
chain. Hence, the independent variable is the number of past periods the retailer
considers for calculating the mean and variance of demand. The performance mea-
sures, the dependent variables, are average on-hand inventory, the bullwhip effect as
the amplification between demand variance and order variance and the fillrate as a
service level criterion. Moreover, we consider the effect of manufacturing capacity (the upper limit of the throughput rate) on these measures. To condense the multi-criteria-based performance measurement, we use the efficiency frontier approach to provide a single performance measure.
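The efficiency frontier idea can be sketched as follows: among parameter settings evaluated on (average on-hand inventory, fillrate), keep those not dominated by another setting with both lower inventory and higher fillrate. The sample points are illustrative assumptions, not results from this paper.

```python
# Pareto-efficient frontier over (inventory, fillrate) pairs: a setting
# is dominated if some other setting has inventory no higher and
# fillrate no lower (and differs in at least one value).

def efficient_frontier(points):
    """Return the sorted Pareto-efficient (inventory, fillrate) pairs."""
    frontier = []
    for inv, fr in points:
        dominated = any(inv2 <= inv and fr2 >= fr and (inv2, fr2) != (inv, fr)
                        for inv2, fr2 in points)
        if not dominated:
            frontier.append((inv, fr))
    return sorted(frontier)

settings = [(120, 0.91), (150, 0.97), (135, 0.95), (160, 0.96), (125, 0.90)]
print(efficient_frontier(settings))  # → [(120, 0.91), (135, 0.95), (150, 0.97)]
```

Full data envelopment analysis additionally assigns each setting an efficiency score relative to this frontier; the dominance test above is only the first step of that construction.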
Since our aim is to consider many aspects of a supply chain, the relevant literature is vast. Even though we use a simple inventory policy, we refer the interested reader to Silver and Peterson (1985), Zipkin (2000) and Porteus (2002) for comprehensive reviews of inventory models, and especially to Axsäter (2006) for multi-echelon models. The classical optimization approaches in inventory management focus on the minimization of the total inventory system cost (Liu and Esogbue, 1999). A fundamental problem in this context is the "right" estimation of costs. This problem
6 A Dynamic Forecasting and Inventory Management Evaluation Approach 67
is mentioned also by Metters and Vargas (1999): classically, different performance measures are converted into one monetary performance measure. Therefore, these authors suggested applying data envelopment analysis to be able to take different performance measures into consideration. In general, it has to be mentioned that multi-criteria optimization as well as multi-objective decision-making problems have been solved in many areas. Surprisingly, until now only a couple of papers have been published in the field of inventory management (see also Maity and Maiti, 2005).
One of the performance measures that we consider, the bullwhip effect (Lee et al, 1997a,b; Sterman, 1989), has gained significant interest from many researchers. A pointed definition of the bullwhip effect is provided by de Kok et al (2005): "The bullwhip is the metaphor for the phenomenon that variability increases as one moves up a supply chain". Different approaches to identify the causes of the bullwhip effect have been made so far. Lee et al (1997b, 2004) describe four fundamental causes: demand signal processing, price variations, rationing games and order batching. While the latter three are not considered in this work, the demand amplification due to the combined effects of demand signal processing and non-zero lead times is a main focus of this work.
In a work on the interface of the forecasting and replenishment systems with focus on the bullwhip effect, Chen et al (2000b) use a two-stage supply chain model and consider the dependencies between forecasting, lead times and information in the supply chain. In their model, the retailer does not know the distribution of demand and uses a simple moving-average estimator for the mean and variance of demand. Similar two-stage supply chain models have also been used, e.g. by Boute et al (2007), to successfully study the dynamic impact of inventory policies.
The literature on the efficiency frontier approach for performance/efficiency measurement has grown vast since the seminal work of Charnes et al (1978). An excellent recent review can be found in Cook and Seiford (2009). Dyson et al (2001) discuss the problems of factor measurement related to percentage values, such as, e.g., the fillrate in our approach.
The remainder of this paper is organized as follows. Section 6.2 introduces the
basic supply chain model for a single supplier and a single retailer using demand
forecasting. In Section 6.3 we present simulation results based on numerical data
and empirical examples. Section 6.5 contains further extensions to the current model
and concluding remarks.
Consider a simple supply chain consisting of a single retailer and a single manu-
facturer. The retailer does not know the true distribution of customer demand, so
he uses a demand forecasting model to estimate the mean and variance of demand. In
each period, t, the retailer checks his inventory position and accordingly places an
order, qt, with the supplier. After the order is placed, the retailer faces random customer
demand, Dt, where any unfulfilled demand is lost. There is a random lead time, L,
68 Johannes Fichtinger, Yvan Nieto and Gerald Reiner
such that an order placed at the beginning of period t arrives at the beginning of
period t + lt , where lt denotes the random realization of the order lead time placed
in t. We assume that the retailer uses a simple order-up-to policy based on demand
forecasting methods using regression analysis.
We use aggregated weekly empirical sales data covering about 220 periods (approx. 4
years) from 01/2001 to 04/2005 to estimate demand Dt for specific products. The
data contain not only sales information (units sold) but also the gross price pt, the stock
available, the number of market outlets Ot, which needs to be considered for an expand-
ing company, and a feature indicator Ft (binary information to account for the effect
of advertisement, e. g. by means of newspaper supplements such as flyers and leaflets).
To clean the data from these effects and additionally from trend and seasonality, we
use a least squares regression model as proposed by Natter et al (2007):
Dt = β0 + β1 pt + β2 t + β3 sin(2tπ/52) + β4 cos(2tπ/52) + β5 Ot + β6 Ft + et   (6.1)
Note that the sales data do not necessarily correspond to the underlying real de-
mand process, since demand during stockouts is not recorded. However, an analysis
of stockout situations in the real data shows that they occur in less than 2% of the
selling periods. Therefore, we take the existing sales data as censored information
on demand.
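The cleaning regression (6.1) can be fitted by ordinary least squares. The sketch below simulates illustrative data (all coefficients and series are invented for the example, not taken from the paper) and recovers the parameters with NumPy:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
T = 220                                        # ~4 years of weekly observations
t = np.arange(T, dtype=float)
price = 10 + rng.normal(0, 0.5, T)             # gross price p_t (invented)
outlets = 50 + 0.1 * t + rng.normal(0, 2, T)   # expanding outlet base O_t
feature = (rng.random(T) < 0.1).astype(float)  # advertisement indicator F_t
season = 2 * t * math.pi / 52                  # yearly cycle in weekly data

# Simulate demand from the additive model (6.1); coefficients are invented
demand = (500 - 20 * price + 0.5 * t + 30 * np.sin(season)
          + 15 * np.cos(season) + 2 * outlets + 80 * feature
          + rng.normal(0, 10, T))

# Design matrix: intercept, price, trend, seasonality, outlets, feature
X = np.column_stack([np.ones(T), price, t, np.sin(season), np.cos(season),
                     outlets, feature])
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)  # OLS estimates of beta_0..beta_6
```

Given the fitted coefficients, the residuals plus the intercept form the cleaned (price-adjusted, deseasonalized) series.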
We tested the assumptions related to classical linear regression models for
the cleaning model (see e. g. Greene, 2008, for a comprehensive discussion). There
is no exact linear relationship between the independent variables in the regression (full
rank assumption), and the independent variables are found to be exogenous. In
contrast, the assumptions of homoscedasticity and nonautocorrelation were not
fulfilled for many products. An earlier study on the same data by Arikan et al (2007)
shows that for many products a nonlinear relationship between price and demand
such as
Dt = a · pt^(−b) · et   for a > 0, b > 1   (6.2)
could be found that better explains the pricing effect. As a consequence, for such prod-
ucts estimating an additive demand model as in (6.1) leads to a variance of the
error term, ε, that decreases in price. Hence, Var(ε|pt) is no longer independent of price.
Such effects, which inevitably occur in practical demand forecasting and replenishment
problems, destroy the common stationarity assumption on the demand error
term and are therefore the focus of the subsequent analysis.
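A multiplicative model such as (6.2) is typically estimated by taking logarithms, which turns it into a linear regression. A small illustration on synthetic data (parameters and the log-normal noise assumption are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
price = rng.uniform(5.0, 15.0, n)
a_true, b_true = 5000.0, 1.5                 # invented parameters

# Multiplicative model (6.2): D = a * p**(-b) * e, with log-normal noise e
demand = a_true * price ** (-b_true) * np.exp(rng.normal(0, 0.1, n))

# log D = log a - b log p + log e is linear, so OLS applies after the transform
X = np.column_stack([np.ones(n), np.log(price)])
coef, *_ = np.linalg.lstsq(X, np.log(demand), rcond=None)
a_hat, b_hat = np.exp(coef[0]), -coef[1]     # back-transform the estimates
```

Fitting the additive model (6.1) to data generated this way is exactly the misspecification discussed above: the error variance then depends on price.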
Fig. 6.1 Supply chain model: supplier (production capacity), retailer (base stock policy, target fillrate, forecast accuracy, observation periods) and customer (demand characteristics)
As shown in Fig. 6.1, similar to the model of Chen et al (2000a), the retailer
follows a classical infinite horizon base stock policy with weekly replenishments,
where the order-up-to point St is estimated based on the expected demand for the
actual period, μt, and an estimate of the standard deviation of the (1 + L) periods
demand forecast error, σ̂t^(1+L), as

St = (1 + λt) μt + zt σ̂t^(1+L),   (6.3)
where the safety factor, zt, is chosen to meet a given target fillrate, FR, service
measure. In particular, since any unsatisfied customer demand is lost, zt is found
such that it satisfies
G(zt) = (R μt / σt) · (1 − FR)/FR,   (6.4)
where G(·) denotes the standard normal loss function.
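Since G is continuous and strictly decreasing, the safety factor zt in (6.4) can be obtained numerically. A sketch using bisection (our own routine, not the authors', with illustrative values R = 1, μ = 250, σ = 50):

```python
import math

def phi(z):  # standard normal density
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):  # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def G(z):    # standard normal loss function, E[(Z - z)^+]
    return phi(z) - z * (1.0 - Phi(z))

def safety_factor(target, lo=-4.0, hi=6.0, tol=1e-10):
    """Invert G by bisection; G is continuous and strictly decreasing."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if G(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Right-hand side of (6.4) for the illustrative values above
FR = 0.95
target = (1.0 * 250.0 / 50.0) * (1.0 - FR) / FR
z = safety_factor(target)
```

The resulting z then enters the order-up-to calculation together with the forecast error estimate.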
The supply lead time the retailer faces is stochastic, where the corresponding
random variable, L, has mean λt and standard deviation υt. It is well known that in
the case of fixed order costs an (s, S) policy is optimal; however, we do not consider
fixed order costs, as we are interested in the effect of forecasting and the order-up-to
level on the performance measures.
Note that the order-up-to point in (6.3) is calculated based on the standard devi-
ation of the (1 + λt) period forecast error, σt^(1+L), and its estimator, σ̂t^(1+L), rather than
the standard deviation of the demand over (1 + λt) periods. As Chen et al (2000a)
point out very clearly, using σ̂t^(1+L) captures the demand error uncertainty plus the
uncertainty due to the fact that dt+1 must be estimated by μt+1. Finally, defining
an integer nt = max{n : n ≤ λt, n ∈ Z} helps to express the actual demand error
observation, et^(1+L), as
et^(1+L) = dt − μt + Σ_{i=1}^{nt} (dt+i − μt+i) + (λt − nt)(dt+nt+1 − μt+nt+1).   (6.5)
Based on the random variable of the demand error, ε^(1+L), in (1 + L) periods, the
estimator in period t of the standard deviation of the past demand errors can be
calculated as

σ̂t^(1+L) = sqrt( Var(ε^(1+L)) + υt² (μt λt)² ).   (6.6)
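The demand error observation (6.5) can be computed directly from realized demands and forecasts. A small illustration (a hypothetical helper of ours, with nt = ⌊λt⌋ as defined above):

```python
import math

def lead_time_demand_error(d, mu, t, lam):
    """Realized (1+L)-period demand error per (6.5): the full periods within
    the lead time plus a fractional final period when lam is non-integer."""
    n = math.floor(lam)                  # n_t = max{n : n <= lam, n integer}
    err = d[t] - mu[t]
    for i in range(1, n + 1):
        err += d[t + i] - mu[t + i]
    err += (lam - n) * (d[t + n + 1] - mu[t + n + 1])
    return err

# Example: flat forecast of 100, lead time 1.4 periods observed at t = 0
err = lead_time_demand_error([100, 120, 90, 110, 105], [100] * 5, 0, 1.4)
```

The sample standard deviation of a window of such observations is the empirical ingredient of the estimator (6.6); the length of that window is exactly the "number of observation periods" studied below.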
For the supplier’s manufacturing process we assume a pure make-to-order pro-
duction strategy, where orders are processed on a strict first-in, first-out basis.
While the period length is one week for the retailer, the supplier is assumed to de-
liver at the end of the day on which the order is completely produced. We consider
production at the supplier to take place on at most five days a week; hence, the supply
lead time, L, can take values, l, such that 5l ∈ Z, i. e. l ∈ {0.2, 0.4, 0.6, . . .}. The supplier has
a fixed capacity C available solely for the retailer under consideration. For this
reason the retailer faces lead time variation, but due to the missing information sharing
with the supplier the retailer does not consider the supply as capacitated, and uses
uncapacitated stochastic lead time models for replenishment.
The lead time observation li for an order placed in i can be defined as
li = pi + wi , (6.7)
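The capacitated FIFO make-to-order supplier described above can be mimicked by a simple backlog recursion. The sketch below is an assumption-laden simplification of ours (one order per week, placed at the start of the week) showing how lead times in multiples of 0.2 weeks arise:

```python
def simulate_lead_times(orders, daily_capacity):
    """Supplier processes orders FIFO with a fixed daily capacity; production
    runs five days per week, so weekly output is capped at 5 * daily_capacity
    and lead times come out in multiples of 0.2 weeks."""
    lead_times = []
    backlog = 0.0                              # work queued ahead of each order
    for q in orders:
        backlog += q
        days = -(-backlog // daily_capacity)   # ceil: working day of completion
        lead_times.append(days / 5.0)          # convert working days to weeks
        week_output = min(backlog, 5 * daily_capacity)
        backlog -= week_output                 # work finished during the week
    return lead_times

# Capacity 1.25x an average demand of 250 per week, i.e. 62.5 per day
lt = simulate_lead_times([250.0, 250.0, 400.0, 100.0], 62.5)
```

A demand peak (the 400-unit order) lengthens the lead time of that order and, via the backlog, shortens the apparent slack for the next one, which is the mechanism behind the lead time variation discussed above.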
To measure the efficiency of the forecasting system and the inventory
policy, we consider a set of n simulation runs j ∈ {1, . . . , n}. The performance of each
run is calculated as the efficiency of the supply chain, i. e. the ratio of the weighted
fillrate in per cent, FRj, to the average on-hand inventory, Īj,

u FRj / (v Īj),   (6.13)

(u FRj − u0) / (v Īj).   (6.14)
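The ratio (6.13) can be evaluated across simulation runs and normalized so that the best run defines the frontier. A minimal sketch (our simplification: the DEA weights u and v are taken as common constants, which cancel in the normalization, rather than optimized per unit as in the full DEA model):

```python
def efficiency_scores(fillrates, inventories):
    """Ratio efficiency per (6.13): weighted fillrate over average on-hand
    inventory, rescaled so the best-performing run receives a score of 1."""
    ratios = [fr / inv for fr, inv in zip(fillrates, inventories)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three illustrative runs: fillrate in per cent, average on-hand inventory
scores = efficiency_scores([95.0, 96.5, 94.0], [160.0, 180.0, 150.0])
```

Run 3 attains the frontier here: its slightly lower fillrate is more than compensated by its much lower inventory.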
Based on the approach presented above, we provide examples and validation
for the analysis. First, we consider artificially generated datasets in order to
validate the model against theory. Artificially generated samples were drawn from a
normal distribution of the form N(250, 50) and used to evaluate the performance of
the supply chain under 120 scenarios. Each scenario is related to a specific capacity
at the supplier as well as a specific number of observation periods. Specifically, we
use 5 distinct capacities, i. e. 1.1, 1.25, 1.5, 1.8 and 2.5 times the average demand
of the dataset, and 24 different numbers of observation periods, i. e. every second
period from 4 to 50. Figure 6.3 presents the results obtained for the 3
performance measures of interest, i. e. the average on-hand inventory at the retailer,
the bullwhip effect and the fillrate. It can be noticed that the results are generally in
line with theoretical expectations.
Fig. 6.3 OHI, Bullwhip and Fillrate of a stationary demand product, plotted as a function of the
number of periods used for estimating the standard deviation of the demand error term. When using
around 15 periods or more, the effect of the number of periods vanishes
Next, we used the performance results as input for the efficiency analysis; the re-
sults are presented in Fig. 6.4. It can be observed that, if any, only marginal improve-
ments are possible by increasing the number of observation periods. Nevertheless,
it is worth mentioning that the optimum is specific to each capacity setting and that,
for now, no comparison is possible between the different efficiency analyses. Based
on the latter, we consider that our model matches theoretical expectations and is therefore
verified and validated for further analysis.
Fig. 6.4 Fillrate efficiency for stationary demand products. Using 20 periods or more leads to a
negligible efficiency gap of less than 1%
In particular, we use empirical data exhibiting non-stationarities to illustrate our
position. We consider two datasets, one presenting a seasonal pattern (Fig. 6.5a)
and a second including a single strong peak in demand (Fig. 6.5b). Both datasets
therefore present non-stationarities, which could have a different impact on the perfor-
mance of the supply chain depending on the number of observation periods considered.
The simulation was performed using the same scenario structure presented ear-
lier (see Section 3.1) and the results are shown in Fig. 6.6. It can be observed that for
the seasonal data, the results tend to reach stability once a sufficient number of periods
is available (Fig. 6.6a). However, two points are worth mentioning. First, the number of ob-
servation periods required is higher in this setting, which can be explained by the fact
that more structured and more volatile demand requires more information, i. e. more
observations, for valid estimates. Second, the fillrate remains more volatile
even for a high number of observation periods, which we assume to be
linked to the intrinsic dynamics of the model: in this context, order sizes are more
variable, which can lead to a stronger bias in the lead time distribution and impact the fillrate.
For the second dataset, interesting results in terms of performance can be
observed (Fig. 6.6b). In this case, the hierarchy related to capacity is much less clear
with respect to the fillrate. The number of observation periods strongly impacts the service
performance and makes an optimal setting difficult to identify.
Fig. 6.6 OHI, Bullwhip and Fillrate of non-stationary demand products A (up) and B (down)
The results of the efficiency analysis presented in Fig. 6.7 confirm the previ-
ous observations, i. e. the impact of the number of observation periods is limited in
the case of the seasonal dataset (Fig. 6.7a). However, the dataset including a strong
peak in demand leads to erratic results with respect to the number of observation periods. In
this case, the highest number of periods no longer leads to the optimum, and
the choice of the observation range can have a strong impact on performance,
independently of the selected strategy.
Fig. 6.7 Fillrate efficiency for non-stationary demand products A (up) and B (down)
6.5 Conclusion
We considered a dynamic two-stage supply chain model with a focus on the retailer to
identify the possible impact of the number of observation periods used to calculate
the order-up-to level, using an efficiency frontier approach. Based on this, we showed
for the stationary demand case that, as long as the number of periods is sufficiently
large (here around 18 periods), it has no noticeable effect on the performance of the
supply chain. However, for non-stationary demand, caused e. g. by a mis-
specification of the price dependency of demand in the demand forecasting model,
the number of observation periods can lead to divergent results and considerably
affect efficiency.
Based on our results, we demonstrate that the impact of non-stationarities when
using classical safety stock calculations is highly influenced by the number of obser-
vation periods considered. In addition, as it is not possible to know ex-ante which
References
Abstract The ability to fulfil customer orders is crucial for companies which have
to operate in agile supply chains. They have to be prepared to respond to changing
demand without jeopardizing the service level, i. e. delivery performance is the market
winner (Christopher and Towill, 2000; Lee, 2002). In this context, lead time re-
duction (of the average as well as the variability) is of key interest, since it allows
responsiveness to be increased without enlarging inventories. Given these possible levers (e. g.
Chandra and Kumar, 2000), the question arises of the dynamic assessment of poten-
tial process improvements for a specific supply chain and, moreover, of a combination
of potential process improvements related to an overall strategy (responsive, agile,
etc.). Using process simulation, we demonstrate how the coordinated application of
strategic supply chain methods improves performance measures of both intra- (lead
time) and interorganizational (service level) targets.
7.1 Introduction
The intention of this study is to analyse and assess the effects of shortening lead
time, i. e., average as well as variability, on the performance of the entire supply
chain (delivery service, delivery time, cost, etc.). There are a great number of differ-
ent strategic/tactical supply chain approaches (Chandra and Kumar, 2000; Mentzer
Dominik Gläßer
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: dominik.glasser@unine.ch
Yvan Nieto
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: yvan.nieto@unine.ch
Gerald Reiner
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: gerald.reiner@unine.ch
et al, 2001) that make it possible to improve supply chain processes by means
of, e. g., demand forecasting (see Winklhofer et al (1996) for a review), capacity man-
agement, organizational relations, better communication, reduction of supply chain
echelons and adapted inventory management. Also, the possibility of moving the
customer order decoupling point has been recognized (Olhager, 2003), opening the
door to postponement strategies, etc. Given these possible levers, the question
arises of the dynamic assessment of potential process improvements for a specific
supply chain and, moreover, of a combination of potential process improvements re-
lated to an overall strategy. Supply chain evaluation is of primary importance in
order to support decision making. We will demonstrate that these theoretical con-
cepts as well as the related restrictions have to be modified under consideration of
“real” processes. Therefore, the question arises whether these concepts are robust enough
to also improve “real” processes. The investigations are carried out on the basis
of quantitative models using empirical data (Bertrand and Fransoo, 2009). Basi-
cally, through the quantitative examination of empirical data a model is developed
which reproduces the causal correlations between the control variables and the perfor-
mance variables. Furthermore, Bertrand and Fransoo (2002) pointed out that this
methodology offers a great opportunity to further advance theory. According
to Davis et al (2007), the choice of simulation technology is an important decision
when it comes to achieving the research objective. Thus, simulation models will
be developed using, e. g., discrete event simulation (Sanchez et al, 1996), since the pos-
sibility of understanding the supply chain as a whole and analyzing and assessing
different strategic/tactical action alternatives offers a considerable benefit. This is
why we have opted to use ARENA for developing the simulation models (Kelton
et al, 2003). First, Section 2 discusses the effects of optimised lead time on
supply chain performance. Furthermore, the importance of supply chain evaluation
is emphasised. Then, in Section 3, we set out our research approach with the help of
a polymer processing supply chain. Finally, Section 4 provides concluding remarks
plus a look at further research possibilities.
parameters stand for the demand mean (λτ) and demand variance (υ²) as well as the
safety factor z, which represents a trade-off between service level and stock keeping
costs, and Is for safety stock. Therefore, there is a lot of interest in reducing the vari-
ance as well as the average delivery time. On the one hand, this results in reduced safety
stock, which is reflected in lower stock keeping costs. On the other hand, it in no way
worsens the service level, e. g. the number of stock outs. Thus the operational ob-
jective of a supply chain, i. e. increased customer satisfaction and lower costs at the
same time, becomes more realistic. This can even turn out to be a strategic compet-
itive advantage. A decisive element is the customer order decoupling point (CODP)
(Mason-Jones et al, 2000). It is the point where the forecast-driven standard produc-
tion, mostly serial production of standard components (PUSH), and the demand-
driven production, i. e. commissioned production in response to customer orders or
other requirement indicators (PULL), meet. Physically, the decoupling point in the
supply chain is the last inventory point holding components that do not yet relate to any
order (Mason-Jones and Towill, 1999). The further downstream in the supply chain the
decoupling point is, the less the quantities taken from the inventories agree with real
demand at the point of sale (POS). Owing to the fact that most supply chain partners
do not see real customer demand, they tend to be forecast-driven and not demand-
driven (Christopher and Towill, 2000), which also reinforces the so-called “bullwhip
effect” (increasing fluctuations in order quantities and inventory upstream in the
supply chain whilst end customer demand remains constant (Lee et al, 2004)). To
increase competitive advantage, Olhager (2003) observes that companies can ei-
ther keep the CODP at its current position and reduce the delivery lead time, or maintain
the delivery lead time and move the CODP upstream in order to reduce or clear
stocks. Strategically positioning the CODP particularly depends on the production
to delivery lead time (P/D) ratio and on the relative demand volatility (RDV, the standard
deviation of demand relative to average demand). For example, a
make to order (MTO) strategy can only be achieved if the P/D ratio is less than 1
(Olhager, 2003). This is because when the production lead time is greater than the deliv-
ery lead time of a customer order, customer service of course suffers (Jammernegg
and Reiner, 2007). On the other hand, it is not advisable to apply a make to stock
(MTS) strategy (delivery lead time is zero) if the RDV is very high, because this results in
huge inventories if customer service is to be maintained, and hence
in high inventory costs. If, in this case, the P/D ratio is greater than 1, then some
components would have to be produced for stock, which leads to an assembly to
order (ATO) or an MTS strategy. The importance of lead time is also emphasised
by Cachon and Fisher (2000), who assert that reducing lead time or batch
size can affect supply chain performance more than information sharing. Likewise,
cutting lead time is an important point in the material flow simplification rules of
Mason-Jones et al (2000).
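The P/D-ratio and RDV considerations above can be summarized as a rough screening rule. The sketch below is our illustration of that logic only; the RDV threshold is invented, not taken from Olhager (2003):

```python
def feasible_strategies(p_over_d, rdv, rdv_threshold=1.0):
    """Rough strategy screen following the P/D and RDV discussion:
    MTO requires P/D < 1; high relative demand volatility argues against
    MTS; with P/D >= 1 and MTS ruled out, components go to stock (ATO).
    The rdv_threshold value is purely illustrative."""
    strategies = []
    if p_over_d < 1.0:
        strategies.append("MTO")       # production fits within delivery lead time
    if rdv < rdv_threshold:
        strategies.append("MTS")       # stable demand keeps stocks manageable
    if p_over_d >= 1.0 and "MTS" not in strategies:
        strategies.append("ATO")       # produce components to stock, assemble to order
    return strategies
```

For instance, a product with P/D = 1.5 and high volatility would be screened towards ATO, matching the reasoning in the text.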
Evaluation of real supply chain processes is always challenging, since a valid esti-
mation can only be obtained through a detailed, specific process analysis. Improve-
ments to a specific supply chain process can never be 100% applied (copied) to an-
other setting. Nevertheless, they can be used as best practice indicating improvement
potential to another company or supply chain. This analysis must be product-specific
as well as company-specific, and the performance measures have to be selected care-
fully and in accordance with the specificity of the system under study (Reiner and
Trcka, 2004). An important step in defining suitable performance measures is de-
termining the market qualifiers and market winners, which determine the alignment
and therefore different metrics for the leanness and agility of supply chain performance
(Mason-Jones et al, 2000; Naylor et al, 1999). When drawing up the analysis and
assessment model, a product-specific supply chain design method should be selected
in order to achieve results that are close to reality. This method accounts for the fact
that a supply chain always has to be designed in a product-specific and customer-
specific way (Fisher, 1997) and that the alignment of the supply chain with regard
to its leanness, agility or a combination of both (Lee, 2002; Christopher and Towill,
2000) plays a decisive role. If a supply chain already exists in reality, then the neces-
sary data for the specified performance measures can be obtained by, e. g., analysing
existing IT systems as well as interviewing the supply chain partners. However, if
alternative supply chain strategies have to be analysed in terms of their performance,
then no such data are available. In this case, missing values can be calculated, estimated
or obtained by simulation. But calculation is often impossible and a general esti-
mation is too imprecise (Jammernegg and Reiner, 2007). Dynamic stochastic com-
puter simulations can provide not only average values for performance measures
but also information about their probabilistic distribution (Kelton et al, 2003)
because of the use of random variables (Jammernegg and Reiner, 2007). Random
variables, which simulate risks, are essential to reliable evaluations because, accord-
ing to Hopp and Spearman (1996), risks negatively affect supply chain performance.
To enable precise evaluation, the model must include all important process-related
matters.
To illustrate the “real” improvement potential of theoretical lead time reduction ap-
proaches, we analysed empirical data from a supply chain in the polymer and
furniture industries. The supply chain is characterized by three levels, i. e. a supplier, a
manufacturer and a sales office, and ends with a market-leading OEM as the unique cus-
tomer. In this case, delivery performance is the market winner, and on-time delivery
is therefore crucial to maintain customer loyalty. Due to the tremendous variety of
products offered by the manufacturer (more than 50,000), the analysis had to be lim-
ited to key articles. The selection of the product was performed using an ABC-XYZ
analysis.
7 Performance Evaluation of Process Strategies Focussing on Lead Time Reduction 83
The manufacturer, located in Western Europe, delivers goods to a sales office lo-
cated in Eastern Europe. In turn, the sales office supplies the four OEM production
plants (C1, . . . , C4) belonging to the customer and also located in the eastern part of
Europe. The entire procedure is set out in Fig. 7.1, with the sales office as well as
the production sites arranged as in reality. In more detail, the sales office uses its
inventory to fulfil customer orders. As soon as the inventory level at the sales
office falls to a reorder point, a stock replenishment order is placed with
the manufacturer. The manufacturer must then supply the goods and send them to
the sales office as fast as possible; no delivery time is specified. It is to be borne
in mind that the manufacturer’s finished goods inventory merely serves as a buffer
store for transport purposes (batching) and is thus not able to deal with any signif-
icant demand fluctuations, because the manufacturing strategy is make to order for
the manufacturer.
Fig. 7.1 The initial process
The sales distribution process is to be regarded as based on the classic push prin-
ciple (make to stock). In a dynamic environment with uncertainty about
demand and fluctuations in demand, this make to stock strategy may lead to great
problems. Fig. 7.2 shows the stock movements at the sales office over a year. The
diagram shows that there is an increase in stock outs during the first half of the
year, and this has a negative effect on customer satisfaction. The problems associ-
ated with this setting are manifold. (1) Owing to the irregular pattern of customer
order placing, it is difficult for the sales office to produce a forecast for the future.
(2) Furthermore, information available at the sales office is not sent promptly to the
manufacturer. (3) There is a lack of transparency: the manufacturer is not aware of
actual customer demand. Therefore, he is not able to discern whether or not there is
a genuine customer order behind the stock replenishment order placed by the sales
office. This frequently leads to unfavourable prioritisation of the production orders,
which in turn sometimes results in long and varying delivery periods. (4) There
is no classical replenishment policy used by the sales office, so that decisions con-
cerning reorder points and order quantities are mostly made on a one-off basis by the
sales office staff.
Fig. 7.2 Stock movement at the sales office over one year (quantity over days)
For our simulation model, we use discrete event simulation (ARENA). We apply
the model to assess the performance of different supply chain settings as well as to
evaluate design alternatives. For each scenario tested, replications were carried out
in order to assess the variability and the robustness provided by each strategy (Reiner
and Trcka, 2004). One simulation run covers a period of 365 days. Quantitative mod-
els based on empirical data are largely dependent on the data they integrate as well
as on the process design descriptions. These are necessary for making sure that the way
the model works comes as close as possible to actual observations and processes. In
order to obtain a coherent database free of organisational barriers, the data triangu-
lation approach was chosen (Croom, 2009). In particular, we looked at the existing
IT systems at the plant, the sales office and the organisational administration depart-
ments. Based on direct data access we ensured that data could be acquired directly
from the source using database queries. The model design was adapted in line
with the product-specific supply chain method based on analyses and observations
of reality, e. g. participant observations and questioning of the responsible supply chain
managers. Product specification, according to Mason-Jones et al (2000), yielded
that the market winner is the level of readiness to deliver, whereas quality, costs and
lead time are the market qualifiers. This indicates an agile supply chain environ-
ment. For model validation, the initial scenario was simulated and the resulting data
were compared with the real data. The comparison showed that the results of the
model reflect reality. Finally, the completed model design including the simula-
tion results was again confirmed through the responsible supply chain managers and
participant observations.
Based on interviews, we found that a 4-week rolling forecast could be provided
by the customer, which constitutes the core alternative of our first scenario.
The rolling forecast represents the actual order entry with optional manual adjustments
by the customer. In addition, and in order to support the impact of the forecast,
an (s, S) inventory policy is applied at the sales office, with a safety stock cal-
culated as in eq. 1 with a target cycle service level of 95%. The order quantity
also takes the manufacturer’s batch size into account. All applied distributions for
stochastic input variables (e. g. delivery time between manufacturer and sales of-
fice incl. production time, transport cost) were worked out on the basis of real data,
taking account of chi-square and Kolmogorov-Smirnov goodness-of-fit hypothesis
tests. In addition, all distributions were validated by graphical evaluation. As it
has not yet been possible to estimate the precision of the customer’s forecast, it was
assumed in the simulation that the actual order can deviate by 20% from the forecast
per period. The results of scenario 1 are presented in Table 7.1.
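The safety stock mentioned for scenario 1 can be illustrated with the classical formula for stochastic demand and stochastic lead time under a cycle service level. Note that the paper's eq. 1 is not reproduced in this excerpt, so the sketch below uses the standard textbook version, which is not necessarily identical to the authors' formula, with invented example values:

```python
from statistics import NormalDist
import math

def safety_stock(mu_d, sigma_d, mu_lt, sigma_lt, csl=0.95):
    """Classical safety stock for stochastic demand and lead time:
    SS = z * sqrt(E[LT] * Var(D) + E[D]^2 * Var(LT)), z = Phi^{-1}(CSL)."""
    z = NormalDist().inv_cdf(csl)          # safety factor for the target CSL
    return z * math.sqrt(mu_lt * sigma_d ** 2 + mu_d ** 2 * sigma_lt ** 2)

# Illustrative values: demand 100/week (sd 20), lead time 2 weeks (sd 0.5)
ss = safety_stock(100.0, 20.0, 2.0, 0.5)
```

The reorder point s of the (s, S) policy is then the expected lead time demand plus this safety stock.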
Scenario 2 focuses on shortening the supply chain by closing the inventory at the
sales office and delivering directly to the customer from the manufacturer. Now, the man-
ufacturer’s order policy envisages always having sufficient articles in stock for the
next two weeks (based on average demand per week). In order to enable
this strategy, a forecast is necessary, and the 4-week rolling forecast from scenario
1 was retained. By doing so, the manufacturer becomes aware of the actual cus-
tomer requirements, leading to an upstream move of the CODP. It is worth mentioning
that the sales office remains responsible for customer relations, contract extension and
contract monitoring. In addition, this strategy results in new transport costs. The new
transport prices were estimated from interviews with the carrier and were factored
into the simulation. Fig. 7.3 shows the entire process.
The performance measures are set out in Table 7.2 and relate to an entire year.
Based on the described improvements in scenario 1, we are able to reduce the num-
ber of stock outs, which is a direct indicator of customer satisfaction. Lead times
and costs cannot be reduced. Owing to the delivery time (mean and variance)
between the sales office and the manufacturer, it is necessary to keep large stocks, which
in turn has a negative effect on stock keeping costs and the profit margin. This case
would also mean building work to extend the sales office inventory to handle the
high stock level. As it is not possible to find an improved solution along
all of our performance dimensions (lead time, customer satisfaction and cost),
we decided to consider the entire supply chain in scenario 2. By shortening the supply
chain and thereby cutting the flow of products, and by repositioning the CODP, it was
possible to achieve a marked reduction in total lead time in scenario 2. Compared to
the initial scenario, these activities also had a positive effect on customer satisfac-
tion, because it is possible to react much faster to customer requirements. Stock
keeping costs are also reduced, as the sales office stores are no longer required and produc-
tion is carried out on the basis of forecasts provided by the customer. For completeness, it
has to be mentioned that this strategy would only be possible by extending the man-
ufacturer’s inventory capacity. Nevertheless, we assume that this would be a realistic
investment, as the costs of building an extension to the manufacturer’s stores would
easily be compensated by the savings on transport costs within one year.
7.4 Conclusion
In this paper we analysed and assessed two different possibilities for supply chain improvement. We examined their effects on lead time and were able to show financial and strategic enhancements. Our approach was illustrated by a polymer supply chain with a major OEM as end customer. For each of the alternatives, performance was measured using lead time, finished-goods inventory, costs, number of stock-outs and transport costs, where the number of stock-outs constitutes a decisive index of customer satisfaction. The threshold number of stock-outs should be less than 10 days per year. We were able to confirm the positive impact of lead time reduction on supply chain performance, i.e. the simultaneous reduction of inventory and increase in customer satisfaction. We managed to identify this specific dynamic behaviour by quantifying the benefits earned through each alternative. Furthermore, we confirmed the importance of considering the supply chain as a whole when assessing improvement alternatives. Our results demonstrate that the benefits of certain alternatives can only be realised if improvements are aligned across the supply chain partners, e.g. inventory management is based on the customer forecast and linked to production planning. We believe these results to be of interest to both academics and practitioners, as they contribute to a better understanding of the dynamics of the supply chain and of the importance of evaluating improvements across the entire supply chain. One of our next research activities will be to implement the most suitable alternative, in order to be able to draw further conclusions about the model (see also Mitroff et al, 1974) and to ascertain an appropriate forecast algorithm based on historical data to support the customer forecast.
88 Dominik Gläßer, Yvan Nieto and Gerald Reiner
Acknowledgements Partial funding for this research has been provided by the project “Matching
supply and demand – an integrated dynamic analysis of supply chain flexibility enablers” supported
by the Swiss National Science Foundation.
Abstract Traditionally, supply chain management decisions are based on economic performance, which is expressed by financial and non-financial measures, i.e. costs and customer service. From this perspective, several logistics trends, i.e. outsourcing, offshoring and centralization, have emerged in the last decades. Recently, studies have shown that the focus on the cost aspect is no longer sufficient. Due to internal and external drivers (e.g. customer pressure, regulations, etc.), environmental criteria are becoming more and more important for the decision-making of individual enterprises. Furthermore, the risk related to the increased transportation distances resulting from these strategies is often not taken into account or is underestimated. These shifts in priorities force companies to search for new logistics strategies that are at the same time cost-efficient, environmentally friendly and reliable. Based on this integrated perspective, new logistics trends, like on- and nearshoring, a flexible supply base or flexible transportation, have emerged recently and will gain more importance in the near future. Relying on a flexible supply base, a company can benefit from low costs in an offshore facility and simultaneously be able to respond quickly to demand fluctuations and react to delivery delays and disruptions by serving the market also from an onshore site. A single-period dual sourcing model is presented to show the effects of emission costs on the offshore, onshore and total order quantity.
Heidrun Rosič
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna, Austria,
e-mail: heidrun.rosic@wu.ac.at
Gerhard Bauer
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna, Austria,
e-mail: gerhard.bauer@wu.ac.at
Werner Jammernegg
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna, Austria,
e-mail: werner.jammernegg@wu.ac.at
92 Heidrun Rosič, Gerhard Bauer and Werner Jammernegg
8.1 Introduction
Traditionally, supply chain management decisions are based on economic performance, which is expressed by financial and non-financial measures, i.e. costs and customer service. From this perspective, different logistics trends, i.e. outsourcing, offshoring and centralization, have emerged in the last decades.
Even though these trends may seem rather "old", they still prevail in today's businesses. Recently, a study conducted in Austria showed that 41% of the interviewed companies still intend to offshore some of their production activities within the following two years. Furthermore, 35.4% of them plan to move their production sites to Asia; China in particular is a prominent destination for offshoring. The low costs of production factors (personnel, material, etc.) are the key drivers for these decisions (Breinbauer et al, 2008).
A Europe-wide study on offshoring carried out by Fraunhofer ISI showed similar results: between 25% and 50% of the surveyed enterprises moved parts of their production abroad in the years 2002 and 2003 (Dachs et al, 2006).
Further examples can be found. For instance, the Austria-based Knill Group, which is active in the field of infrastructure, supplying systems and applications for energy and data transmission, built new production facilities in India and China within the past 36 months in order to take advantage of lower wages in Asia (Breinbauer et al, 2008). NXP, a leading semiconductor company, is headquartered in Europe and employs more than 33,500 people. The company pursued a strong offshoring strategy, and now more than 60% of its production activities are located in Asia and 5% in America; only 33% have remained in Europe. Also, AT&S, a large Austrian manufacturer of printed circuit boards, continues its offshoring strategy. In January 1999, AT&S started operating in India by acquiring the largest Indian printed circuit board manufacturer, and it will now build a second facility nearby. The investments for this project will amount to 37 million Euros, and production activities shall start in the third quarter of 2009. In addition, AT&S operates facilities in China and Korea.
In section 2, prevalent logistics trends are presented from a cost perspective, showing the trade-offs that exist between the different cost components. The trends presented, i.e. outsourcing, offshoring and centralization, usually lead to lower production (procurement) costs in the case of offshoring and outsourcing, or to lower inventory costs in the case of physical centralization. In general, however, they result in an increase in transportation distances, making supply chains longer and/or more complex. In the evaluation of these strategies, the side effects of increased transportation distances are often not adequately taken into account. Therefore, in section 3, "soft" factors like lead time, delivery reliability, flexibility, etc., and the environmental impact are included in addition to the economic criteria. Based on this integrated perspective consisting of costs, risks and environment, new logistics trends are highlighted. One of these new logistics trends, namely the flexible supply base with the specific variant dual sourcing, is then analyzed in more detail. In section 4 a transport-focused framework for dual sourcing (off- and onshore supply source)
8 A Framework for Economic and Environmental Sustainability and Resilience of SC 93
and in section 5 a single-period model for dual sourcing including emission costs
are presented.
ern Europe to Eastern Europe is often called nearshoring, as the distance and the cultural differences are smaller (Ferreira and Prokopets, 2009). Physical centralization means that the number of production, procurement and/or distribution sites is reduced to a single one, i.e. "consolidating operations in a single location" (Van Mieghem, 2008). The main goals of centralization are to pool risk, reduce inventory and exploit economies of scale (Chopra and Meindl, 2006).
These trends mainly lead to a reduction of total landed cost, due to lower production (procurement) cost in the case of offshoring and outsourcing, or lower inventory cost due to risk pooling in the case of physical centralization. As a negative side-effect, however, supply chains become longer and/or more complex (Tang, 2006). Due to the increased length of supply chains, more transportation activities are necessary, leading to an increase in the respective costs. In this paper we pay particular attention to the effect of transportation activity within a supply chain.
The presented logistics trends have proven optimal for industrial companies under economic considerations. Recently, studies have shown that focusing on the cost aspect of a certain strategy is no longer sufficient. Environmental criteria are becoming more and more important for the decision-making of individual enterprises. Walker et al (2008) distinguish between internal drivers (organizational factors, efficiency improvements) and external drivers (regulation, customers, competition and society) which may induce the consideration of environmental aspects in supply chain decision-making. Carbon dioxide (CO2) emissions in particular heavily accelerate the greenhouse effect; 60% of this effect is caused by CO2. This is a reason why governmental institutions (UN, EU, etc.) often focus their regulations on CO2 reduction (Kyoto protocol, EU emission trading scheme, etc.).
Furthermore, the risk related to these strategies is often not taken into account or is underestimated. Various types of risk exist, especially in the case of offshoring. Currency risk and political risk depend on the economic and political stability within a country. Intellectual property risk and competitive risk should also not be ignored (Van Mieghem, 2008). Ferreira and Prokopets (2009) conclude from the "2008 Archstone/SCRM Survey of Manufacturers" (an in-depth survey of 39 senior executives from US and European-based manufacturers) that executives are also starting to recognize aspects of offshoring such as "quality problems, longer supply chains, lack of visibility, piracy and intellectual capital theft". Due to these additional aspects, the cost savings of offshoring, which represent between 25% and 40% on average, start to diminish.
In addition, an offshoring strategy negatively affects the flexibility and respon-
siveness of a supply chain as shipments have to be made in large lots (e.g. container-
size) and the delivery time is very long (e.g. up to several months). Besides, the cus-
tomization of products to individual customer needs is more difficult. Furthermore,
the cost components are about to change; 40% of the manufacturing enterprises have
market and reducing transportation distances. This new network design resulted in
a small increase of total costs compared to the optimal solution, but the total costs
are still more than 10% smaller than in the initial situation and the CO2-emissions
could be reduced by a quarter (Simchi-Levi, 2008).
Concerning supply chain risks, it has to be pointed out that offshoring, outsourcing and centralization typically move production away from the market, which reduces the responsiveness and flexibility of a supply chain. This has to be considered together with the possible cost reductions of a certain strategy (Allon and Van Mieghem, 2009). Further, Tang (2006) points out that supply chains have to become robust, meaning that a supply chain is able to fulfill customer requirements even though a disruption of the supply chain has occurred. This disruption can be of different kinds: either a short one, due to congestion or accidents, or a long one, resulting from a natural disaster or a terrorist attack destroying a node or arc in the supply chain.
By using a flexible supply base, a company can benefit from low costs in an offshore facility and simultaneously be able to respond quickly to demand fluctuations and react to delivery delays and disruptions by serving the market also from an onshore site. In this way, the amount of long-distance transport can be reduced, thereby mitigating transportation risks. For instance, Hewlett Packard uses an offshore facility to produce the base volume and also employs an onshore facility to react quickly to disruptions and demand fluctuations (Tang, 2006).
Furthermore, flexible transportation helps to improve the performance of a supply chain through a change of transport mode, multi-modal transportation or the use of multiple routes. The use of a single mode is mainly due to cost considerations and the aim of reducing complexity in supply chains, but it increases the vulnerability of the supply chain. By using multi-modal transportation, the supply chain gains flexibility and can therefore handle disruptions more easily. Especially in the case of congestion, an alternative route can increase time- as well as cost-effectiveness. For instance, LKW Walter decided to change the mode on the link from north-eastern Spain to southern Italy. Road transportation was replaced by a multi-modal solution (sea/truck). Thereby, 1,211 km per shipment (1,523 km on the road vs. 312 km short sea/trucking), in total over 1.2 million km per year, could be saved (ECR, 2008). Nike operates a distribution center in Belgium that serves the European market. 96% of the freight to this location is transported by inland waterways. Thereby, 10,000 truck loads could be saved. On the distribution side, too, Nike relies heavily on waterways; only the direct delivery to customers is carried out by truck (Seebauer, 2008).
Improvements in transportation efficiency can be achieved through better vehicle utilization, the reduction of empty trips and less frequent shipments with larger lot sizes. This leads to a reduction in the number of transports. Thus costs, CO2-emissions and fossil fuel consumption can be reduced significantly. S.C. Johnson & Son Inc., a household and personal-care products maker, for instance, was able to cut fuel use by 630,000 liters by improving truckload utilization (Simchi-Levi et al, 2008). By maximizing full truck loads and supplying the market from the closest location, PepsiCo, on average, saved 1.5 million km and 1,200 t
CO2-emissions (ECR, 2008). The British drugstore chain Boots, for instance, was able to avoid empty runs by using route planning. Thereby, 2.2 million kilometers on the road could be eliminated, resulting in a reduction of 1,750 t of CO2-emissions. In combination with the use of larger containers, increased container utilization and a reduced amount of air transportation, Boots achieved a reduction of 3,000 t of CO2 (-29%) between 2004 and 2007. These improvements were only possible due to the tight collaboration between Boots and its logistics service provider Maersk Logistics (Seebauer, 2008). According to Simchi-Levi et al (2008), logistics service providers will be employed more often in order to increase efficiency. They are able to consolidate the shipments of a large number of customers and can therewith reduce the number of empty trips. Again, Boots was able to save approximately 120,000 km as well as 92 t of CO2-emissions per year by sharing transportation with another company in the UK. Further examples in this context can be found in the ECR Sustainable Transport Project (ECR, 2008). Table 8.1 gives an overview of the presented new logistics trends.
In the following sections we use the flexible supply base - one of the presented
new logistics trends - to develop a transport-focused framework and a stylized model
for dual sourcing.
In the previous section we showed by example that a flexible supply base can help to improve the performance of a supply chain from an integrated perspective including economic, risk and environmental criteria. In the following we focus on a certain type of this strategy, i.e. dual sourcing relying on a cheap but inflexible and slow offshore supply source and on an expensive but flexible and fast onshore supply source. The onshore supply source can help to improve the performance of a supply chain with respect to risks in two cases: to bridge delivery delays and/or disruptions, or to fulfill demand exceeding the offshore order quantity.
Table 8.2 gives an overview of the external conditions that have an impact on a
company’s policy and the decisions to be taken.
Environmental regulations, like the emission trading scheme of the EU, impose restrictions on companies and therefore influence the policies they choose. The emission trading scheme of the EU (EU ETS) was implemented in order to reach the goals stated in the Kyoto protocol. It is a cap-and-trade system of allowances for emitting CO2 and other greenhouse gases, whereby each allowance certifies the right to emit one ton of CO2. Only certain industries are included in this regulation so far, namely heavy energy-consuming industries like refineries, power generation from fossil resources, metal production and processing, pulp and paper, etc. Today, 11,000 sites that produce around 50% of the EU's total CO2-emissions are covered by the EU ETS. A certain number of emission allowances are allocated to the companies free of charge. Companies that produce fewer emissions than the number of allowances owned can sell them, whereas those producing more have
Table 8.1 Overview of the presented new logistics trends

Flexible supply base. Measure: using multiple supply sources (offshore and onshore). Benefit: reduced number of long-distance transports and mitigation of transportation risks. Example: Hewlett Packard uses an offshore facility to produce the base volume and also employs an onshore facility to react quickly to disruptions and demand fluctuations.

Transportation efficiency. Measure: vehicle routing and loading, consolidated shipments. Benefit: reduced number of empty trips, improved vehicle utilization. Example: by maximizing full truck loads, PepsiCo, on average, saved 1.5 million km and 1,200 t of CO2-emissions. A manufacturer of household and personal-care products cut fuel use by 630,000 litres by combining multiple customer orders.
ing scheme, e.g. to include civil aviation by 2013 (EC, 2008). It is therefore to be expected that the whole transport sector will be confronted with more severe regulations, or with inclusion into the EU ETS, in the near future.
External conditions are also determined by the transportation network, including the respective risks. According to Rodrigues et al (2008), transportation risks relate to the carrier who executes the transport and to external factors. The carrier is a source of risk with respect to its fleet capacity, network planning, scheduling and routing, and information systems, as well as its financial conditions and reliability. As external risk factors, transport macroeconomics (oil price, availability of drivers, etc.), infrastructure conditions (congestion, construction, etc.) and future government policies have to be mentioned. Further, severe shocks, like terrorist attacks, natural disasters or industrial action, may have a strong impact on the transportation network. While the probability of such an event is very low, the impact can be detrimental. Based on this, Rodrigues et al (2008) state that transportation risks increase with the increasing degree of outsourcing and the higher geographical spread of supply chains.
The paper by Allon and Van Mieghem (2009) on global dual sourcing shows that it is almost impossible to derive the optimal sourcing policy for a responsive near-shore source and a low-cost offshore source even if the criterion is just cost minimization. When an environmental criterion is included as well, it thus seems reasonable to develop a simple model for dual sourcing with onshore reactive capacity in order to analyze the consequences for the offshore order quantity.
In the seminal newsvendor model, one possibility to reduce the mismatch costs of understocking or overstocking is to allow for a second order opportunity. In the simplest version it is assumed that at the beginning of the selling season the demand for the product is known exactly, or that the second production facility can immediately produce any requested quantity (see, e.g., Warburton and Stratton, 2005, or Cachon and Terwiesch, 2009, chapter 12).
In the considered single-period dual sourcing model, a product can be sourced either from an offshore production facility or from an onshore production plant, whereby the onshore supply source has unlimited capacity and can deliver immediately. The two suppliers can be internal or external to the company. Because of the long procurement lead time, the offshore order quantity of the product is based on the random demand X characterized by the distribution function F. The company, e.g. a retailer, sells the product at the unit selling price p. The purchase price per unit from the offshore supplier is denoted by c_off, that from the onshore supplier by c_on. Leftover inventory at the end of the regular selling season can be sold at a unit salvage value z. It is assumed that p > c_on > c_off > z holds. The profit P then depends on the offshore order quantity q and on the realized demand x:
100 Heidrun Rosič, Gerhard Bauer and Werner Jammernegg
P(q, x) = px − c_off·q + z(q − x)        if x ≤ q
P(q, x) = px − c_off·q − c_on·(x − q)    if x > q
The optimal offshore order quantity q* is derived by maximizing the expected profit E(P(q, X)). Using the framework of the classical newsvendor model, the optimality condition is given by (see, e.g., Cachon and Terwiesch, 2009, section 12.4):

F(q*) = (c_on − c_off) / (c_on − z)
The unit purchase price from the offshore supplier is composed of the product price per unit c and the emission cost factor ϕ; the unit purchase price from the onshore supplier is obtained by adding a domestic premium (d · c) to the offshore product price per unit. This premium is mainly caused by the higher labor costs that have to be paid in the onshore production facility (Warburton and Stratton, 2005). The two cost parameters are defined as:

c_off = (1 + ϕ)·c,
c_on = (1 + d)·c.
The offshore supply source is only used if it is cheaper overall than the onshore supply source, which is the case as long as ϕ < d. As soon as ϕ ≥ d, the product quantity is procured exclusively from the onshore source, on order. The factor ϕ represents the emission costs per product unit, whereby it is assumed that costs for emission allowances arise only for long-distance transportation from the offshore location. The emission costs per unit sourced from the offshore supplier depend on the selected transportation route and transportation mode. For the different modes, average emission factors per kilometer exist. Multiplying these emission factors by the distance the vehicle has to travel yields the CO2-emissions of one trip. The emission costs are then derived from the buying price of an emission allowance traded under the EU ETS. It is reasonable to assume that the emission cost factor ϕ is independent of the order quantity q if the transport is carried out by a logistics service provider. The company, e.g. a retailer, then has to reserve a fixed transport capacity, which determines the factor ϕ. If part of that reserved capacity is not used by the company, the logistics service provider can sell it to other customers and therefore usually achieves high vehicle utilization.
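As a sketch, this per-unit emission cost derivation can be written out in a few lines. All concrete numbers below (emission factor, distance, allowance price, reserved capacity) are invented for illustration and are not taken from the paper:

```python
# Sketch: deriving the emission cost factor phi from an average mode emission
# factor, the trip distance, and the EU ETS allowance price. The per-trip cost
# is spread over the transport capacity reserved with the logistics service
# provider, which makes phi independent of the order quantity q.
def emission_cost_factor(ef_kg_per_km, distance_km, allowance_eur_per_t,
                         units_per_trip, unit_product_price):
    co2_tonnes = ef_kg_per_km * distance_km / 1000.0  # CO2 emitted per trip
    trip_cost = co2_tonnes * allowance_eur_per_t      # allowance cost per trip
    unit_cost = trip_cost / units_per_trip            # emission cost per unit
    return unit_cost / unit_product_price             # phi: c_off = (1 + phi)*c

# Invented example: 0.5 kg CO2/km, 10,000 km route, 30 EUR per tonne of CO2,
# 500 units per reserved trip, product price c = 10.
phi = emission_cost_factor(0.5, 10000, 30.0, 500, 10.0)
print(round(phi, 4))  # 0.03
```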
A numerical example with the following cost and price parameters is presented in order to show the impact of emission costs on the quantity decisions: selling price p = 20, product price per unit c = 10, salvage value z = 5 and domestic premium d = 0.2. The emission cost factor ϕ is varied in order to show the impact of increasing environmental costs on the optimal decision. Demand is assumed to be normally distributed with a mean μ of 1,000 units, whereby two different standard deviations (σ1 = 150, σ2 = 300) are used in order to show the impact of variability. Assuming normally distributed demand is justified if the coefficient of variation (σ/μ) is small enough (Warburton and Stratton, 2005).
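The optimality condition and this numerical example can be reproduced with a short script. This is a sketch based on the formulas given in the text, not the authors' code; it uses `statistics.NormalDist` from the Python standard library for the normal inverse CDF:

```python
from statistics import NormalDist

# Numerical example from the text: p = 20, c = 10, z = 5, d = 0.2,
# demand ~ N(mu = 1000, sigma = 150).
c, z, d = 10.0, 5.0, 0.2
demand = NormalDist(mu=1000.0, sigma=150.0)

def offshore_quantity(phi):
    """Optimal offshore order q* solving F(q*) = (c_on - c_off) / (c_on - z)."""
    c_off = (1 + phi) * c
    c_on = (1 + d) * c
    if phi >= d:              # offshore no longer cheaper: source onshore only
        return 0.0
    return demand.inv_cdf((c_on - c_off) / (c_on - z))

# q* shrinks as the emission cost factor phi erodes the offshore cost
# advantage and drops sharply as phi approaches d.
for phi in (0.0, 0.05, 0.10, 0.15, 0.19):
    print(f"phi = {phi:.2f}  ->  q* = {offshore_quantity(phi):6.1f}")
```

For ϕ = 0 the critical ratio is (12 − 10)/(12 − 5) ≈ 0.286, so q* lies below the mean demand of 1,000; as ϕ approaches d the ratio tends to zero and q* collapses.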
The offshore order quantity depends on the relative cost advantage that can be achieved through offshore sourcing. The lower the offshore cost, the more the retailer will procure from the offshore source. The onshore supply source is only employed in order to fulfill the demand that exceeds the offshore order quantity, i.e. the expected lost sales. Therefore, with the onshore supply source a service level of 100% can be guaranteed. But it should not be forgotten that this comes at a high domestic premium. Nevertheless, the dual sourcing strategy often outperforms a pure offshoring strategy with respect to expected profit (see, e.g., Cachon and Terwiesch, 2009).
With increasing emission costs (ϕ · c), the company sources less from offshore as the cost advantage is reduced. The offshore quantity decreases nearly linearly with increasing ϕ until a certain point, after which it decreases sharply. The total order quantity (off- plus onshore quantity) also decreases with ϕ. This is due to the following fact: the fewer units are procured through the offshore supply source, the lower is the expected leftover inventory I. The whole expected lost sales quantity q_on is then fulfilled from the onshore supply source, and this decision is taken under complete certainty. Overall, the total order quantity converges to the mean demand, because

q* + q_on = E(X) + I

Higher demand uncertainty, i.e. a higher coefficient of variation of demand, implies that the onshore supply source is used more.
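This convergence can be checked numerically. The sketch below is my own illustration, not the paper's code: it computes the expected lost sales q_on = E[(X − q*)+] and the expected leftover inventory I = E[(q* − X)+] via the standard normal loss function and verifies the identity q* + q_on = E(X) + I:

```python
from statistics import NormalDist

STD = NormalDist()  # standard normal, used for the loss-function formulas

def quantities(phi, d=0.2, c=10.0, z=5.0, mu=1000.0, sigma=150.0):
    """Return (q*, expected lost sales q_on, expected leftover inventory I)."""
    c_on = (1 + d) * c
    q = NormalDist(mu, sigma).inv_cdf((c_on - (1 + phi) * c) / (c_on - z))
    k = (q - mu) / sigma
    q_on = sigma * (STD.pdf(k) - k * (1 - STD.cdf(k)))  # E[(X - q)^+]
    leftover = sigma * (STD.pdf(k) + k * STD.cdf(k))    # E[(q - X)^+]
    return q, q_on, leftover

q, q_on, leftover = quantities(0.1)
assert abs((q + q_on) - (1000.0 + leftover)) < 1e-6  # q* + q_on = E(X) + I
# A higher coefficient of variation shifts more volume to the onshore source:
assert quantities(0.1, sigma=300.0)[1] > q_on
```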
The numerical results for the two demand distributions with the above price and cost parameters are shown graphically in Fig. 8.1 and Fig. 8.2. The emission cost factor is varied in the range 0 ≤ ϕ < d.
Fig. 8.1 Off-, onshore and total order quantity depending on the emission cost factor ϕ for normally distributed demand with μ = 1,000, σ1 = 150 and d = 0.2
Fig. 8.2 Off-, onshore and total order quantity depending on the emission cost factor ϕ for normally distributed demand with μ = 1,000, σ2 = 300 and d = 0.2
The presented model is based on limiting assumptions with respect to the existing environmental regulations concerning emission allowances. Under the existing EU ETS, companies receive allowances free of charge. Therefore, in contrast to the model presented, emission costs do not arise for each unit ordered, but only if a certain threshold is exceeded. For a more general model with a positive emission limit and the opportunity to buy additional emission allowances or to sell unused ones, we refer to Rosič and Jammernegg (2009).
8.6 Summary
Prevalent logistics trends, i.e. outsourcing, offshoring and centralization, are presented from a cost perspective. These strategies are chosen with the objective of reducing total landed costs (e.g. reduction of labor costs through offshoring or of inventory costs through centralization). As a direct consequence, however, transportation distances increase, and supply chains become longer and/or more complex. This has negative impacts on the risks a supply chain has to face (e.g. congestion on transportation links) and on the environment (e.g. CO2-emissions). An integrated perspective is presented, and new logistics trends which perform better with respect to transportation risks and the environment are illustrated by several case studies. Further, we use one of the presented trends, the flexible supply base, to develop a transport-focused framework for dual sourcing. Dual sourcing means that a company relies on a cheap but slow offshore supply source and on an expensive but fast and unlimited onshore supply source. The external conditions which influence the policies of an individual
References
Allon G, Van Mieghem J (2009) Global Dual Sourcing: Tailored Base Surge Allocation to Near and Offshore Production. Working paper, Kellogg School of Management, Northwestern University
Breinbauer A, Haslehner F, Wala T (2008) Internationale Produktionsverlagerung Österreichischer Industrieunternehmer: Ergebnisse einer empirischen Untersuchung. Tech. rep., FH des bfi Wien, URL http://www.fh-vie.ac.at/files/2008 Studie Produktionsverlagerungen.pdf
Cachon G, Terwiesch C (2009) Matching supply with demand: An introduction to operations management, 2nd edn. McGraw-Hill, Boston
Chopra S, Meindl P (2006) Supply chain management, 3rd edn. Pearson Prentice Hall, New Jersey
Dachs B, Ebersberger B, Kinkel S, Waser B (2006) Offshoring of production: A European perspective. URL http://www.systemsresearch.ac.at/%20getdownload.php?id=154
EC (2008) EU action against climate change: The EU Emissions Trading System. European Commission. URL http://ec.europa.eu/environment/climat/pdf/brochures/ets en.pdf
ECR (2008) ECR Sustainable Transport Project: Case Studies. URL http://www.ecrnet.org/05-projects/transport/Combined%20Case%20studies v1%208 220508 pro.pdf
Ferreira J, Prokopets L (2009) Does offshoring still make sense? Supply Chain Management Review 13(1):20–27
Rodrigues V, Stantchev D, Potter A, Naim M, Whiteing A (2008) Establishing a transport operation focused uncertainty model for the supply chain. International Journal of Physical Distribution & Logistics Management 38(5):388–411
Philip Hedenstierna
Logistics Research Group, University of Skövde, 541 28 Skövde, Sweden
Per Hilletofth, Corresponding author
Logistics Research Group, University of Skövde, 541 28 Skövde, Sweden, Tel.: +46 (0)500 44 85 88; Fax: +46 (0)500 44 87 99,
e-mail: per.hilletofth@his.se
Olli-Pekka Hilmola
Lappeenranta Univ. of Tech., Kouvola Unit, Prikaatintie 9, 45100 Kouvola, Finland
106 Philip Hedenstierna, Per Hilletofth and Olli-Pekka Hilmola
9.1 Introduction
The popularity of simple methods, like the reorder point method, is shown in Jonsson and Mattsson (2006) and Ghobbar and Friend (2004). When a method's assumptions are unmet, its performance may be difficult to predict. The scenario of using theoretically improper methods is not unlikely, as businesses may want to utilize inventory control methods that are simple to manage, such as the reorder point system, even when the planning environment would require a dynamic lot-sizing method such as the Wagner-Whitin, Part-period or Silver-Meal algorithm (Axsäter, 2006). In the same fashion, simple forecasting methods may be applied to complex demand patterns to simplify the implementation and management of the forecasts.
The remainder of this paper is structured as follows: First, Section 9.2 integrates existing theory to describe a framework for designing inventory control models. Section 9.3 introduces empirical data from a company, whose planning environment is interpreted in Section 9.4 to develop a simulation model based on the framework. Section 9.5 describes the results of the simulations. Thereafter, Section 9.6 discusses the implications of the results, while Section 9.7 describes the conclusions that can be drawn from the study.
The design of the framework is based on observing how inventory control meth-
ods operate, what input they require and what output they provide. An underlying
assumption of inventory control systems is that, for any given time t, there is an inventory level IL, which is reduced by demand D and increased by replenishment R. Another assumption is that time is divided into buckets, as described by Pidd (1988); for continuous systems the buckets are infinitesimal. For each bucket, the lowest inventory level, which is sufficient to evaluate the effects of inventory control, is governed by Formula 1. The relationship between these factors has been deduced
from the rules that material requirements planning is built on (Vollmann, 2005).
Formula 1 dictates how transactions of any system placed in the framework will
operate. It considers replenishment to occur between time buckets, meaning that it
is sufficient to monitor the lowest inventory level to manage inventory transactions.
Information such as service levels, inventory position and the highest stock level
may be calculated from the lowest inventory level. The formula governs the inven-
tory transactions of any inventory control system, and must be represented in any
inventory control application. All other parts of an application may vary, either de-
pending on the planning environment in which an inventory control system is used,
or on the design of the system. Fig. 9.2 shows the framework, which starts with the
planning environment and ends with a measurement of the system’s performance.
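Formula 1 itself is not reproduced in this excerpt. As a hedged sketch deduced from the surrounding description (replenishment arrives between buckets, so the lowest level within bucket t is the previous level plus that bucket's replenishment minus its demand), the transaction logic might look like this; all names are illustrative assumptions:

```python
def simulate_buckets(demand, replenishment, initial_level=0.0):
    """Bucketed inventory transactions (sketch of the Formula 1 logic).

    Assumes replenishment arrives between buckets, so the lowest
    inventory level within bucket t is IL_t = IL_{t-1} + R_t - D_t.
    Returns the list of lowest inventory levels per bucket.
    """
    level = initial_level
    lowest = []
    for d, r in zip(demand, replenishment):
        level = level + r - d  # replenish first, then serve demand
        lowest.append(level)
    return lowest

# Example: start at 10 units, receive 5 in bucket 2, demand 4 per bucket.
print(simulate_buckets([4, 4, 4], [0, 5, 0], initial_level=10))  # → [6, 7, 3]
```

From these per-bucket lows, derived measures such as service levels or the inventory position can then be calculated, as the text notes.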
The planning environment comprises the characteristics of all aspects that may
affect the timing/sizing decisions (Mattsson, 2004). For each time unit, the environ-
ment, which determines the distribution of demand, generates momentary demand
that is passed on to a forecasting method, to an inventory control method and to
actual transactions. The type of demand, which is dictated by the planning environment, determines whether a backlog can be implemented and what function may represent it (Waters, 2003). Forecasting is affected by past demand information
and the planning environment (Axsäter, 1991). The former is used to do time series
9 An Integrative Approach To Inventory Control 109
analysis, which is common practice in inventory control, while the latter may con-
cern other input, such as information that may improve forecasting or data needed
for causal forecasting. The environment may also tell of changes in the demand pat-
tern, which may necessitate adjusting forecasting parameters or changing the fore-
casting method. It is necessary to consider the aggregation level of data, as longer
term forecasts will have a low coefficient of variation, at the cost of losing forecast
responsiveness (Vollmann, 2005). Forecast data is necessary for inventory control
methods (forecasted mean values), and for safety stock sizing (forecast variability)
(Waters, 2003). Safety stock sizing is a method of buffering against deviations from the expected mean value of the forecast (Waters, 2003). The assumption of safety
stock sizing is that all forecasts are correct estimations of the future mean value of
demand; any deviations from the forecast are attributed to demand variability. This effectively means that an ill-performing forecast simply detects higher demand variability than a good forecast. The sizing is also affected by the planning environment: the uncertain time determines the need for safety stock, lead times may also have variability, and the environment determines to what extent customers accept shortages, or low service levels (Axsäter, 1991).
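As a hedged illustration (not taken from the paper), safety stock sizing from forecast variability is commonly computed as a safety factor times the standard deviation of forecast errors scaled over the uncertain time; all parameter values below are assumptions:

```python
import math

def safety_stock(sigma_period, lead_time_periods, z):
    """Safety stock as z * sigma over the uncertain (lead) time.

    Assumes independent, identically distributed forecast errors per
    period, so the standard deviation over L periods scales with sqrt(L).
    z is the safety factor for the target service level (e.g. z ≈ 2.33
    for a 99% cycle service level under normally distributed demand).
    """
    return z * sigma_period * math.sqrt(lead_time_periods)

ss = safety_stock(sigma_period=20.0, lead_time_periods=4, z=2.33)
print(round(ss, 1))  # → 93.2
```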
Inventory control methods rely on forecasts, on safety stock sizing and on the
planning environment. The safety stock is used as a cushion to maintain service,
while forecasts and data from the planning environment, which are ordering costs,
holding costs and lead times, are used to determine when and/or how much to order
(Vollmann, 2005). The actual balancing of supply, which comes from the replen-
ishment of the inventory, and demand, which is sales or lost sales, takes place as
inventory transactions. Measuring these transactions gives an understanding of how
well an inventory control system performs for the given planning environment (Wa-
ters, 2003).
Data was collected from a local timber yard, currently not using an inventory control
policy. Existing functionality for the reorder point method and for the periodic order
quantity method allowed for these methods to be deployed at low cost. The issue
was whether the methods could cope with the fluctuations in demand, as trend and
seasonal components were assumed to exist. Based on an analysis of sales data,
the demand for timber was found to be seasonal, but with no trend component.
This information was used to generate a demand function, based on the normal
distribution. The purpose of the demand function was to allow the simulation model
to run several times (200 independent simulations of three consecutive years were
run, with random seeds for each simulated day). Real demand, as well as a sample
of simulated demand, is shown in Fig. 9.3.
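A minimal sketch of such a normally distributed, seasonal (no-trend) demand generator might look as follows; the base level, seasonal amplitude and coefficient of variation are hypothetical, not the company's fitted values:

```python
import math
import random

def seasonal_demand(day, base=100.0, amplitude=0.3, cv=0.25, rng=random):
    """One day's demand from a normal distribution whose mean follows a
    yearly seasonal cycle with no trend component.

    base, amplitude and cv are hypothetical parameters, not values
    taken from the paper's data.
    """
    mean = base * (1.0 + amplitude * math.sin(2 * math.pi * day / 365.0))
    draw = rng.gauss(mean, cv * mean)
    return max(0.0, draw)  # demand cannot be negative

rng = random.Random(42)
sample = [seasonal_demand(d, rng=rng) for d in range(3 * 365)]  # three years
print(len(sample))  # → 1095
```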
Demand characteristics are shown in Table 9.1 and other parameters pertaining
to the planning environment are shown in Table 9.2. As transport costs were consid-
ered to be semi-fixed rather than variable, the reordering cost is valid for the reorder
quantity used. Increasing the order quantity was not a cost-effective option. Stock
out costs were not considered, as the consequences of stock outs are hard to mea-
sure; not only are sales lost, there is also the possibility of competitors winning the
sale, and of losing customers, as they cannot find what they need.
Lead times were considered as fixed, as no information on delivery timeliness was available. The expected fill rate (fraction of demand serviceable from stock) for the reorder point method was 99%, while that for the periodic order quantity method would be 98% and that for the lot-for-lot method would be 96% (calculated using the loss function, based on the standard deviation, as described by Axsäter (2006)).
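A hedged sketch of such a calculation, assuming demand over the inventory cycle is normal and using the standard normal loss function G(k) with the fill rate approximated as 1 − σ·G(k)/Q (the parameter values are illustrative, not the case data):

```python
import math

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def loss_function(k):
    """Standard normal loss function G(k) = phi(k) - k * (1 - Phi(k))."""
    return normal_pdf(k) - k * (1.0 - normal_cdf(k))

def fill_rate(safety_factor, sigma_lt, order_qty):
    """Expected fill rate ≈ 1 - sigma_LT * G(k) / Q (cf. Axsäter 2006).

    safety_factor: k, safety stock in units of sigma over the cycle
    sigma_lt:      std. dev. of demand over the replenishment cycle
    order_qty:     average order quantity Q
    """
    return 1.0 - sigma_lt * loss_function(safety_factor) / order_qty

print(round(fill_rate(safety_factor=1.0, sigma_lt=50.0, order_qty=400.0), 4))  # → 0.9896
```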
All methods were verified against theory by testing whether the method implementations gave the values that theory dictates. For the reorder point method, the reorder point was raised by days of forecasted demand, to prevent undershoot, as described by Mattsson (2007). Several forecast methods were considered, and the actual choice of forecast for this case was based on the mean absolute deviation.
Bias was calculated to see whether a forecasting method followed the mean of de-
mand. The seasonally adjusted moving average (Waters, 2003) was chosen as the
preferred method, as it proved to be nearly as accurate as Holt-Winters (Axsäter, 1991), while not requiring such careful calibration.
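A minimal sketch of a seasonally adjusted moving average forecast (cf. Waters 2003); the structure, window length and seasonal indices are illustrative assumptions, not the paper's implementation:

```python
def seasonal_moving_average(history, seasonal_index, window=12):
    """Seasonally adjusted moving average forecast.

    Deseasonalize the last `window` observations, average them, and
    re-seasonalize with the index of the period being forecast.
    `history` is a list of (period_index, demand) pairs; seasonal_index
    maps a period (month 0-11) to its multiplicative index.
    """
    recent = history[-window:]
    deseasonalized = [d / seasonal_index[p % 12] for p, d in recent]
    base = sum(deseasonalized) / len(deseasonalized)
    next_period = history[-1][0] + 1
    return base * seasonal_index[next_period % 12]

# Toy example: flat base of 100 with an alternating seasonal pattern
idx = {m: (1.2 if m % 2 == 0 else 0.8) for m in range(12)}
hist = [(m, 100 * idx[m % 12]) for m in range(24)]
print(round(seasonal_moving_average(hist, idx), 1))  # → 120.0
```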
Forecasts were monthly, and predicted the demand for the following month. The
forecast value was multiplied by 1.5 to reflect an economic inventory cycle time of
45 days. This simplification was done to see how the system would react to system-
atic design errors.
The measured cost/service relationship does not intersect with the theoretical function. For
the methods with longer order cycles, the measured cost/service relationship shows
a much flatter curve than expected, indicating that these functions will require more
safety stock than expected to improve fill rates.
9.6 Discussion
Completed simulations indicate that the fill rate of the periodic order quantity
method suffers when the seasonal demand pattern is introduced, while the reorder
point method can maintain the same fill rate as if no seasonality were present. This
is a result of the nature of the two methods, where variability affecting the reorder
point method will affect the time of ordering, while the periodic order quantity
method, with fixed ordering times, cannot regulate order timing to prevent stock
outs. Instead, it must let the inventory level take the full effect of any variability.
Conversely, the effect of variability on the reorder point method is that the resulting
order interval may not be economic.
When comparing the methods used in the simulation using the framework, we find that the reorder point method is superior concerning both holding costs and fill rate. Which system is preferable depends on whether suppliers can deliver varying quantities (up to three times the average, both for periodic ordering and for lot-for-lot) or at varying intervals.
The difference between measured and theoretical fill rate demonstrated by the
periodic order quantity shows how inventory control methods not designed for a
certain planning environment can be affected. The use of a monthly forecast not
representing the next inventory cycle may also have contributed to the low fill rate.
The simulation based on the framework helped give insight into how the inventory
control process would react to the planning environment. It showed that a large
safety stock would be required if the periodic order quantity were to be used, as the
periodic order quantity method undershot performance predictions much more than
the reorder point method. If applied over multiple products, the framework can indicate whether consolidation using the sensitive periodic order quantity system is less costly than the reorder point system. Given that the periodic order quantity system has a 100% uncertain time (Axsäter, 1991), it may be used as a benchmark in simulations, as variability and problems caused by poor process design are always reflected in the fill rate.
In the future we plan to enlarge our simulation experiments by incorporating different kinds of demand (continuous and discontinuous) as well as new forecasting and ordering methods. Recent research has shown that autoregressive forecasting methods outperform others in situations where demand fluctuates widely and follows a "life-cycle" pattern (Datta et al, 2009). Similarly, research on ordering methods argues that no single ordering method should be used (the question is not which method is best, but which one best suits the environment); usually a combination of different purchasing methods should be incorporated in ERP systems over the entire life-cycle of a product (Hilmola et al, 2009). However, if volumes are low, then even economic order quantity/reorder point systems and periodic order policies should be abandoned; a lot-for-lot policy might produce the best results in these situations (Hilmola et al, 2009). Thus, much depends on the operations strategy (order- or stock-based system) and on the amount of time customers are willing to wait for a delivery to reach their facilities (Hilletofth, 2008).
9.7 Conclusions
Treating inventory control and forecasting as separate activities, without acknowledging how forecasting and its application affect inventory control, may lead to incorrect assessments of a system's performance in a certain planning environment.
Approaching inventory control as a process, starting with a planning environment and ending with a measurement of the system's performance, shows that all activities are related, and that the end result may be affected by the activities or by the way
they are connected. This paper uses a simulation model to show how the use of fore-
casts and complexity in demand patterns affects the performance of the reorder point
system and the periodic order quantity system. Simulations show that performance
generally is worse than expected, and that periodic ordering consistently shows a
greater susceptibility both to variability and to design errors, due to its inability to
buffer against these by changing the reordering interval. This weakness also appears
in lot-for-lot systems, as they are based on periodic ordering.
10.1 Introduction
Companies that are successful in cost- and quality-based competition look for other factors that can help them gain further competitive advantage. Therefore, time-based competition is spreading among leading companies. Time has turned into
Noémi Kalló
Department of Management and Corporate Economics, Budapest University of Technology and
Economics-Hungary, 1111 Budapest, Müegyetem rkp. 9. T. ép. IV. em.
e-mail: kallo@mvt.bme.hu
Tamás Koltai
Department of Management and Corporate Economics, Budapest University of Technology and
Economics-Hungary, 1111 Budapest, Müegyetem rkp. 9. T. ép. IV. em.
e-mail: koltai@mvt.bme.hu
Express line systems, like most queuing problems, can be modeled both analyti-
cally and empirically. Analytical models are based on the results of queuing theory.
Generally some existing analytical models are used to approximate the operation of
the queuing system. These models are quite simple to use; however, in the case of complex queuing systems, they give only a rough estimation of the real operation. For
analyzing these problems, new analytical models must be developed or simulation
models can be used (Hillier and Lieberman, 1995). Simulation modeling requires
10 Rapid Modeling of Express Line Systems for Improving Waiting Processes 121
more time and resources; however, quite special characteristics of queuing processes can be modeled in this way. For our analyses, both an analytical and a simulation model were created.
Queuing systems with express lines have several special characteristics which make
their analytical modeling difficult. The most important special feature is that express lines are generally used in supermarkets, where many service facilities are located and each has its own separate waiting line. Analyzing this kind of queuing system with the models of queuing theory presents difficulties because no existing analytical model properly describes such a system. In this case, two analytical
models can be used as approximations: one consisting of many service facilities with
a common waiting line and another containing many independent queuing systems
each having one service facility with its own separate queue.
If analytical formulae have to be used for the whole queuing system (containing k checkouts and k waiting lines), the following two approaches can be used:
One-common-line approach. The queuing system is modeled as if all checkouts had one common queue. For this, a G/G/k model can be applied or, according to the system characteristics, a special case of it (for example M/G/k or M/M/k). If there are E express and R regular checkouts, then a model with k=E and another with k=R are required.
Modeling the checkout system as a queuing system with one common line for
all checkouts is an optimistic approach. It underestimates the average waiting time
by assuming optimally efficient queue selection of customers which minimizes their
waiting times. That is, it supposes that customers always choose the queue in which
their waiting time will be shortest, and if their waiting line moves too slowly, they
jockey among the queues. In some cases, however, customers cannot behave in the
most efficient way. If there are idle checkouts but jockeying to these lines is diffi-
cult, or jockeying does not provide considerable time savings, then customers do
not change line. Consequently, the one-common-line approach provides a best-case
estimate of the operation of the queuing system.
Independent-queuing-system approach. In this case, k independent G/G/1 models
are applied or, according to the system characteristics, other special models (for
example M/G/1 or M/M/1). If there are E express and R regular checkouts, then
E+R models are required.
Modeling the checkouts of a supermarket as independent queuing systems gives a pessimistic estimate of waiting, since it overestimates the average waiting time.
Waiting lines are, however, generally not independent from each other, which can
help to reduce the average waiting time. First, most of the arriving customers try to
join the shortest queue. Second, some customers jockey from slowly moving lines to
fast moving ones. If, for example, some checkouts become idle, customers waiting
in line try to jockey to the idle checkouts. That is, queues are not independent from each other, and the independent-queuing-system approach provides a worst-case estimate of the operation of the queuing system.
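As an illustrative comparison (not from the chapter), the two approximations can be evaluated for Markovian service with the standard Erlang-C formula: the one-common-line model bounds waiting from below, the independent-lines model from above. The rates below are hypothetical:

```python
import math

def mmk_wq(lam, mu, k):
    """Mean waiting time in queue for an M/M/k system (Erlang C)."""
    rho = lam / (k * mu)
    if rho >= 1.0:
        raise ValueError("unstable system")
    a = lam / mu  # offered load
    tail = a**k / (math.factorial(k) * (1 - rho))
    summation = sum(a**n / math.factorial(n) for n in range(k))
    p_wait = tail / (summation + tail)  # probability of waiting
    return p_wait / (k * mu - lam)

lam, mu, k = 80.0, 30.0, 4            # hypothetical rates (per hour)
optimistic = mmk_wq(lam, mu, k)       # one common line for all k checkouts
pessimistic = mmk_wq(lam / k, mu, 1)  # k independent M/M/1 lines
print(optimistic < pessimistic)       # → True: common line waits less
```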
Introducing express lines into a queuing system requires the consideration of several operational issues. One of the main questions in developing an express line system is determining the limit value which optimizes the operation of the system.
To make this decision, the effect of all possible limit values must be examined. As
the limit value determines which customers use the express and which the regular
checkouts, for this analysis, characteristics of customer groups generated with all
possible limit values must be determined. These main characteristics are the arrival
rates, the average service times and variances of service times. Before introducing
express checkouts, this information is unavailable, that is, it must be determined by
using the data of the existing system.
For building the analytical and the simulation models, only information and data were used that can be determined without the actual introduction of the express lines. The data used can be obtained by observing and measuring the
operation of the existing queuing system without express lines. Therefore, decisions
about the implementation of express lines can be made in advance and the possible
effects on customer waiting can be forecasted.
For determining the service characteristics of different customers, the relationship between the number of items bought and the service times must be analyzed. By
using this relationship, the average service time and the variance of service times
can be determined for customers buying a certain amount. With the help of the
distribution function of the number of items bought, the average arrival and service
rates, and the variances of services times can be calculated for all possible customer
groups as well (for details see Koltai et al, 2008).
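A hedged sketch of that group computation: given a distribution of items bought and a linear service-time model (here using the regression constants reported later in the chapter), split customers at a limit L and compute each group's arrival rate and mean service time. The function structure and the toy distribution are illustrative, not the procedure of Koltai et al (2008):

```python
def group_characteristics(lam, item_probs, limit, fixed=0.5463, per_item=0.1622):
    """Split customers at `limit` items; compute each group's arrival
    rate and mean service time (minutes).

    item_probs[n-1] is P(customer buys n items). The linear service
    model uses the regression constants reported in the chapter
    (0.5463 min fixed part, 0.1622 min per item).
    """
    groups = {}
    for name, member in (("express", lambda n: n <= limit),
                         ("regular", lambda n: n > limit)):
        probs = [(n, p) for n, p in enumerate(item_probs, start=1) if member(n)]
        mass = sum(p for _, p in probs)  # fraction of customers in group
        mean_items = sum(n * p for n, p in probs) / mass
        groups[name] = {"arrival_rate": lam * mass,
                        "mean_service": fixed + per_item * mean_items}
    return groups

# Toy item-count distribution over 1..4 items (hypothetical; the chapter
# fits a truncated geometric distribution instead)
g = group_characteristics(lam=95.0, item_probs=[0.4, 0.3, 0.2, 0.1], limit=2)
print(round(g["express"]["arrival_rate"], 1))  # → 66.5
```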
The model works in the following way. Based on the main characteristics of the
existing queuing systems (in italics), the special characteristics of the express line
systems with different limit values are determined. With these parameters, using
the formulae of the M/G/1 and M/G/k queuing models, the average waiting times can also be calculated (typed boldface). Knowing all possible waiting times, the smallest one must be selected (framed). The minimal average waiting time, eventually, determines the optimal limit value. Analyses with different parameter values showed that the waiting time as a function of the limit parameter has a distinct minimum. That is, an optimal limit value can be determined for every express line system (Fig. 10.2).
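A sketch of that limit-value search, using the Pollaczek-Khinchine M/G/1 waiting-time formula for the independent-lines view; the data structure and all numerical parameters are illustrative assumptions, not the chapter's case data:

```python
def mg1_wq(lam, es, es2):
    """Pollaczek-Khinchine mean waiting time for an M/G/1 queue.

    lam: arrival rate, es: mean service time, es2: second moment of
    the service time. Requires utilization lam * es < 1.
    """
    rho = lam * es
    if rho >= 1.0:
        return float("inf")  # unstable: unusable limit value
    return lam * es2 / (2.0 * (1.0 - rho))

def best_limit(limits, characteristics):
    """Pick the limit value minimizing the overall average waiting time.

    characteristics maps a limit L to ((lamE, esE, es2E), (lamR, esR, es2R)),
    the per-checkout parameters of the express and regular groups.
    """
    def avg_wait(L):
        (lE, eE, e2E), (lR, eR, e2R) = characteristics[L]
        total = lE + lR
        return (lE * mg1_wq(lE, eE, e2E) + lR * mg1_wq(lR, eR, e2R)) / total
    return min(limits, key=avg_wait)

# Hypothetical per-checkout parameters for two candidate limit values
chars = {
    1: ((40.0, 0.012, 0.0003), (55.0, 0.017, 0.0006)),
    2: ((60.0, 0.013, 0.00035), (35.0, 0.019, 0.0008)),
}
print(best_limit([1, 2], chars))  # → 2
```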
Express line systems have several special characteristics, and only a few of them can be taken into consideration with analytical models. For example, the managerial regulation which controls the use of checkouts can be built into analytical models. If more than one checkout is accessible to customers, however, their choice among them cannot be considered. The analytical models appropriate for describing express line systems either assume a uniform customer distribution among accessible waiting lines (independent-system approach) or do not deal with line selection at all (one-common-queue approach).
A simulation model, considering several customer behavioral issues, was built for
studying the operation of express line systems. The block diagram of the simulation model, created with Arena, the simulation software of Rockwell Automation, can be seen in Fig. 10.3.
With the first, create block, customers are generated according to a stochastic
distribution. The assign block, based on a formerly defined distribution function, de-
termines the number of items bought by each customer. With this quantity, knowing
their stochastic relationship, the service time of each customer is also calculated.
The branch block creates two customer groups: one of them can use the express checkouts, while the other is directed to the regular lines. Customers entitled to use express checkouts buy no more items than the limit value. Customers in each group have to decide which line to choose. Rules forming the basis of this decision can be given in the pickq blocks. Next, the customer joins the selected queue and waits until the server is free and can be seized. At this point, the waiting
process in queue ends. The waiting time is recorded by a tally block. The following
branch block is needed for data collection and statistical analyses. The customer’s
route continues along the solid lines while waiting time data of the same customer
group are combined in tally blocks (along the dashed lines). As the service needs
a specific amount of time, the customer is delayed. When service ends, the server
is released and made free for the next customer. At this point the sojourn time ends
and it is recorded by a tally as well. After combining the different waiting time data,
the customer can leave the system at the dispose block.
For the analyses, the real data of a do-it-yourself superstore is used. In this store,
generally five checkouts operate. Using the data provided by the checkout information system, the arrival rates for the different days and for the different parts of the day were estimated. For all periods, the Poisson arrival process is acceptable according to Kolmogorov-Smirnov tests. Based on Rényi's limiting distribution theorem
and its generalizations, the arrival processes of the two customer groups can also be
approximated with Poisson processes (Rényi, 1956; Szántai, 1971a,b).
The density function of the number of items bought by customers is also pro-
vided by the checkout information system. For describing it, a truncated geometric
distribution with a mean of 3.089 is found acceptable by a chi-square test.
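The truncated geometric model can be sketched as follows; the success probability p and the truncation point are illustrative guesses, since the chapter reports only the fitted mean of 3.089:

```python
def truncated_geometric_probs(p, max_items):
    """Probabilities of buying 1..max_items items: a geometric
    distribution truncated at max_items and renormalized."""
    raw = [(1 - p) ** (n - 1) * p for n in range(1, max_items + 1)]
    total = sum(raw)
    return [x / total for x in raw]

# p and max_items are hypothetical; with p = 0.3 the mean comes out
# near the chapter's fitted value.
probs = truncated_geometric_probs(p=0.3, max_items=30)
mean = sum(n * q for n, q in enumerate(probs, start=1))
print(round(mean, 2))
```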
As the service time of customers cannot be obtained from any information system, it was measured manually. The relationship between the number of items bought and the service time was analyzed with regression analysis. A 0.777 correlation coefficient supported the assumption of linearity. According to the results of linear regression, service time has two parts: on average, the part independent of the number of items bought lasts 0.5463 minutes, while reading a bar code takes 0.1622 minutes. With linear regression, the standard deviation of these parameters and the
service times of customers buying different amounts were determined as well (for
details see Koltai et al, 2008).
Results presented in this article are valid for a midday traffic intensity with an
arrival rate most characteristic for the store (λ = 95 customers/hour). According to the geometric distribution, customers generally buy only a few items. Therefore, two of the five working checkouts were considered to be express servers (S=5, E=2).
In the store in question, express lines have not been used yet. Therefore, the real
queuing system could not be used to validate the simulation model. Consequently,
the analytical models were used for checking the validity of results. The fundamen-
tal simplifications applied in analytical models were introduced to the simulation
model. In the M/G/k simulation model, there is a common line for customers enti-
tled to use express checkouts and another one for customers buying many items. In
the M/G/1 simulation model, there are independent arrival processes for all of the
checkouts and their own waiting lines. The analytical and simulation results gained
by the same type of models are quite close to each other; accordingly, they can be
considered valid (Table 10.1).
lines buy only few items, that is, they receive service of lower value. Accordingly,
their satisfaction will be lower even if they must wait the same time as customers
in the regular lines. The relationship between waiting and satisfaction can be de-
scribed with a suitable utility function. For the calculations, as a simplification of
the expected utility model, a mean-variance (or a two-moment) decision model can
be used (Levy and Markowitz, 1979; Meyer, 1987). In this way, the transformation
of waiting time into customer satisfaction, after determining the parameter values
characteristic for customers, can be performed based on measures which can easily
be determined analytically or empirically.
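The two-moment transformation can be sketched as follows; the quadratic form and the risk weight are illustrative assumptions, and only the mean/standard-deviation pairs come from Table 10.2:

```python
def perceived_waiting(mean_wait, std_wait, risk_weight=0.1):
    """Two-moment (mean-variance) evaluation of waiting (cf. Levy and
    Markowitz 1979; Meyer 1987): higher variability makes the same
    average wait feel worse. risk_weight is a hypothetical
    customer-specific parameter, not a value from the chapter.
    """
    return mean_wait + risk_weight * std_wait ** 2

# (mean, std) of waiting time per limit value L, from Table 10.2
candidates = {1: (0.3403, 0.7374), 2: (0.2570, 0.5535),
              3: (0.3125, 0.5757), 4: (0.4614, 0.7444)}
best = min(candidates, key=lambda L: perceived_waiting(*candidates[L]))
print(best)  # → 2
```

Consistent with the table below, the minimum falls at L=2 under this illustrative weighting as well.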
Table 10.2 The operation of the system with different objective functions and limit values

Objective                              L=1     L=2     L=3     L=4
Average waiting time                   0.3403  0.2570  0.3125  0.4614
Standard deviation of waiting times    0.7374  0.5535  0.5757  0.7444
Average perceived waiting time         0.3288  0.2550  0.3100  0.4491
In Table 10.2, the optimal objective function values are typed boldface. It can
be seen that the same optimal limit value (L=2) is obtained independently of which
objective function is used. This result has two consequences.
First, managers trying to optimize the operation of their queuing systems can use any of the possible objective functions, aside from satisfaction maximization, and will get the same result (optimal limit value). Moreover, in this way they will optimize (or at least improve) all of the measures mentioned.
Second, as the average waiting time can be optimized easily and quickly with analytical models, there is no need to use a more time-consuming and harder-to-manage simulation model.
It must be mentioned that there are situations when the different objective functions determine different optimal limit values. Our analyses showed, however, that these limit values are adjacent and, in these cases, the waiting measures are nearly equal independently of the applied limit value. Therefore, even if the different objective functions give different optimal solutions for the limit value, the different limits result in only slight differences in the waiting measures.
10.5 Conclusions
The application of express lines is a widely used management tool for waiting pro-
cess improvement. One of the main parameters of express line systems is the limit
value which controls checkout type selection. Its value must be selected carefully
because introducing express lines with an improper limit value can increase cus-
tomer waiting significantly. Therefore, determining the optimal limit value, which
minimizes average waiting time, is one of the most important tasks of managers
operating express lines.
For determining the optimal limit value, special tools are required. Our analyses show, however, that simple analytical models are accurate enough for practical applications. Although they give only a rough approximation of the operation and are appropriate for analyzing only simple waiting measures and management objectives, they can be used to determine the optimal limit value. Using analytical models, the time, money and knowledge needed for developing and running simulation models can be saved. That is, analytical models provide an effective rapid modeling tool for service managers.
It must also be mentioned that, besides the limit value, there is another parameter which managers can use to influence waiting time without cost consequences: the ratio of express and regular checkouts (when the total number of checkouts is constant). If optimal limit values are used, waiting time cannot be significantly decreased with this parameter; therefore, it is recommended to use it to maintain a constant limit value when the total number of checkouts is changed for some reason.
The waiting-time-decreasing effect of express lines is limited. Nevertheless, express lines are popular among customers. Therefore, to reveal all the consequences of applying express lines, their effects on the distribution of waiting among the different customer groups and, accordingly, on satisfaction related to waiting times must be analyzed as well. These are topics of our further research.
11.1 Introduction
Recent advances in information technology have led to the belief that sharing advance demand information (ADI) with manufacturers will allow customers to receive
better service from their manufacturing suppliers. Manufacturers also expect that
this ADI can be effectively integrated into their production inventory control sys-
tems (PICS) to reduce lead times and inventories. This paper investigates the effect
of integrating ADI into Kanban Control Systems (KCS). Using analytical models,
we quantify the improvements obtained in system performance when the KCS is
integrated with ADI.
Ananth Krishnamurthy
University of Wisconsin-Madison, Department of Industrial and Systems Engineering, 1513 Uni-
versity Avenue, Madison, WI 53706, USA,
e-mail: ananth@engr.wisc.edu
Deng Ge
University of Wisconsin-Madison, Department of Industrial and Systems Engineering, 1513 Uni-
versity Avenue, Madison, WI 53706, USA,
e-mail: dge@wisc.edu
The effect of ADI on PICS has been the focus of several studies. Survey articles
such as Uzsoy and Martin-Vega (1990) provide an overview of the prior research
on kanban controlled systems. A number of researchers, like Philipoom et al (1987)
and Di Mascolo and Frein (1996) have studied various aspects of the design of a
classical kanban controlled system. Other researchers have proposed and analyzed
the performance of variations of the KCS. For instance, Dallery and Liberopou-
los (2000) introduce the Extended Kanban Control System (EKCS). They show
the EKCS is a combination of the classical KCS and the Base Stock (BS) system.
They also show that the EKCS provides the flexibility to decouple design decisions
related to production capacity and base stock levels. Buzacott and Shanthikumar (1993) introduce the Production Authorization Control (PAC) system that incorporates advance order information from customers. Karaesmen et al (2002) analyze a
discrete-time make-to-stock queue and investigate the structure of the optimal policy
and associated base stock levels. Liberopoulos and Koukoumialos (2005) analyze a
system operating under KCS with ADI and conduct simulation experiments to in-
vestigate tradeoffs between base stock levels, number of kanbans and manufacturing
lead times. The analytical model discussed here is a first step towards models that could provide an understanding of how system performance can be improved further by integrating ADI with the kanban controlled system. The model presented in this paper is for a single-stage system. We compare its performance with that obtained under the classical KCS and under the BS system with ADI. Based on the Markov chain analysis, we show that the integration of the KCS with ADI results in superior system performance, as the integration combines the best features of KCS and base stock systems with ADI.
The remainder of the paper is organized as follows. Section 11.2 describes the
operation of a system operating under the KCS with ADI, followed by Section 11.3
that describes the detailed Markov chain analysis for the system. Section 11.4 com-
pares the performance of the different systems, and Section 11.5 summarizes the
insights.
This section describes the queuing network model of the KCS with ADI using the
general framework provided in Liberopoulos and Tsikis (2003). The operational
characteristics of the system are described in terms of movement of activated or-
ders, products, and free kanbans in the network. The model is composed of a single-
stage manufacturing station (MFG), fork/join synchronization stations (FJ1 , FJ2 )
and order delay station (OD). Figure 11.1 shows a schematic of the system. We
assume that customer orders arrive at the system according to a Poisson process
with rate λ . However, each customer places their order LTD time units in advance
of the due date. We call LTD the demand lead time and let τd = E[LTD ] (the case of
no ADI corresponds to the case where LTD = 0). Note that the demand lead time
11 Kanban Control with ADI 133
is customer specified and it is different from the planning lead time (LTS ) that the
manufacturing system uses for planning order releases for production. Note that if
sufficient ADI is available, the system might be able to meet customer demand with
less finished goods inventory than that required in a system operating under the KCS
without ADI. For instance, if E[LTD ] > LTS it is possible that the system operates in
a make-to-order mode with minimal inventory. This paper focuses on the more in-
teresting case wherein E[LTD ] < LTS . Consequently, orders received from customers
are immediately activated. However, they may not be released into the manufactur-
ing system immediately, as they might wait in buffer BD1 for a free kanban to be
available in queue FK. When a free kanban is available in FK, an activated order
in BD1 and a free kanban are matched together and released into the manufacturing
stage MFG which consists of a single exponential server with mean service time
μs−1 . After completing service, the product queues in the finished goods buffer FG.
At buffer BD2 , LTD time units after an order is placed, the customer arrives demand-
ing a product. If a unit is available in finished goods, the demand is immediately
satisfied. The kanban attached to the order is released and routed back to FK where
it is available to release another activated order into production.
Fig. 11.1 Schematic of the queueing network model of the KCS with ADI, showing the K kanbans
in queue FK, the manufacturing stage MFG, the finished goods buffer FG (target level Z), the order
buffers BD1 and BD2, the synchronization stations FJ1 and FJ2, and the order delay station OD
(mean delay τd) linking external orders to demands and satisfied demands
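The control logic described in this section can be sketched in a few lines of Python. This is an illustrative sketch only; the state fields and function names are ours, not the authors':

```python
from collections import deque

# Illustrative sketch of the KCS-with-ADI control logic: activated orders
# wait in BD1 for a free kanban; demands arriving at BD2 are met from
# finished goods FG or backordered, releasing the attached kanban.

state = {"FK": 5, "BD1": deque(), "MFG": 0, "FG": 0, "backorders": 0}

def release(state):
    # Match an activated order in BD1 with a free kanban and start production.
    while state["FK"] > 0 and state["BD1"]:
        state["FK"] -= 1
        state["BD1"].popleft()
        state["MFG"] += 1          # order + kanban enter the manufacturing stage

def order_arrival(state):
    state["BD1"].append("order")   # orders are activated immediately (E[LTD] < LTS)
    release(state)

def service_completion(state):
    state["MFG"] -= 1
    state["FG"] += 1               # finished product (with kanban) queues in FG

def demand_arrival(state):
    if state["FG"] > 0:            # demand satisfied from finished goods
        state["FG"] -= 1
        state["FK"] += 1           # kanban routed back to FK
        release(state)
    else:
        state["backorders"] += 1
```

The while-loop in release mirrors the matching of activated orders with free kanbans before entry into MFG.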
We assume that (i) the number of kanbans, K in the system is fixed; (ii) demands
that are not satisfied immediately get back-ordered; (iii) the system maintains a tar-
get base stock level, Z of finished products in FG. The factors affecting system
performance are the demand and planning information, target base stock levels (Z),
the number of kanbans (K), and the characteristics of the demand and manufactur-
ing processes. The service times at the manufacturing station and the inter-arrival
times of demands and orders are assumed to be independent. Since orders arrive at
rate λ and the service rate of the manufacturing station is μs, we assume that the
system utilization is ρ = λ/μs ≤ 1.
134 Ananth Krishnamurthy and Deng Ge
The main performance measures of interest are the (i) average work in process,
E[C], (ii) average finished goods inventory, E[I], (iii) the probability of backorder,
PB , (iv) the average number of backorders, E[B], and (v) the overall average total
cost E[TC].
In this section, we analyze the Markov chain for the KCS with ADI. To develop the
Markov chain analysis, we assume that the demand lead time LTD has an exponential
distribution. Let X1(t) = F(t) − P(t) and X2(t) = I(t) − B(t) for t ≥ 0, where F(t) is the
number of free kanbans in FK, P(t) the number of activated orders waiting in BD1, I(t) the
finished goods inventory, and B(t) the number of backorders at time t. The system performance
measures defined in Section 11.2 can then be uniquely determined from the states
(X1(t), X2(t)) as follows:
F(t) = [X1(t)]+, P(t) = [−X1(t)]+, I(t) = [X2(t)]+, B(t) = [−X2(t)]+, t ≥ 0 (11.3)
The Markov chain for the system is developed as shown in Fig. 11.3. The
state space can be partitioned into six areas based on the number of finished
goods/backorders. Let Ni be the number of states in area i, where i ∈ (1, 2, 3, 4, 5, 6).
Then we have the number of states in each area as follows (with T = K − Z): N1 = K0 + 1,
N2 = (1/2)(2K0 + K − Z + 2)(T − 1), N3 = K0 + K − Z + 1, N4 = (K0 + K − Z + 1)(Z − 1),
N5 = K0 + K − Z + 1, N6 = (1/2)(K0 + K − Z + 1)(K0 + K − Z). This implies that the total
number of states is N = N1 + N2 + · · · + N6.
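As a quick consistency check, the area counts can be evaluated numerically; the following sketch (the function name is ours) assumes Z ≥ 1 so that all six areas are nonempty:

```python
# Sketch: total number of states N as the sum of the six area counts,
# following the formulas in the text (T = K - Z). Assumes Z >= 1.

def state_space_size(K0: int, K: int, Z: int) -> int:
    T = K - Z
    N1 = K0 + 1
    N2 = (2 * K0 + K - Z + 2) * (T - 1) // 2   # always an integer
    N3 = K0 + K - Z + 1
    N4 = (K0 + K - Z + 1) * (Z - 1)
    N5 = K0 + K - Z + 1
    N6 = (K0 + K - Z + 1) * (K0 + K - Z) // 2  # consecutive integers, even product
    return N1 + N2 + N3 + N4 + N5 + N6
```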
Let π(x1, x2) denote the limiting probability, i.e., limt→∞ P[X1(t) = x1, X2(t) = x2] = π(x1, x2),
where −K0 ≤ x1 ≤ K − Z and −(K0 + K − Z) ≤ x2 ≤ K. Letting T = K − Z, we can write the
Chapman-Kolmogorov balance equations for each of the six areas of the Markov chain.
As an example, for Area 6, where −(T + K0) ≤ x2 ≤ −1 and −K0 ≤ x1 ≤ T − 1, separate
balance equations are written for the boundary cases x2 = −(T + K0) with x1 = x2 + T,
x1 = x2 + T, and x1 = −K0.

Fig. 11.2 Markov chain transition diagram for the KCS with ADI, where α = 1/τd (states (x1, x2);
transitions occur at the order arrival rate λ, the service rate μ, and order-activation rates that are
integer multiples of α)
These balance equations can be solved to obtain the key performance measures.
However, the expressions for the performance measures of KCS with ADI are not
closed form. Let Pb, E[I], E[B], and E[C] denote the probability of being backordered
and the expectations of I(t), B(t), and C(t), respectively. Then, with ρ = λ/μs and
τd = E[LTD], we have:

Pb = ∑(x1,x2): x2<0 π(x1, x2) (11.11)

E[I] = ∑(x1,x2): x2>0 x2 π(x1, x2) (11.12)

E[B] = ∑(x1,x2): x2<0 (−x2) π(x1, x2) (11.13)

E[C] = ∑(x1,x2): K−[x1]+−[x2]+>0 (K − [x1]+ − [x2]+) π(x1, x2) (11.14)
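Given a limiting distribution π, these measures can be computed directly. The sketch below represents π as a dictionary keyed by states (x1, x2); the distribution used in it is illustrative only, not the solution of the balance equations:

```python
# Sketch: performance measures from a limiting distribution pi, stored as a
# dict mapping states (x1, x2) to probabilities.

def performance_measures(pi: dict, K: int):
    Pb = sum(p for (x1, x2), p in pi.items() if x2 < 0)       # P(backorder)
    EI = sum(max(x2, 0) * p for (x1, x2), p in pi.items())    # E[I]
    EB = sum(max(-x2, 0) * p for (x1, x2), p in pi.items())   # E[B]
    EC = sum((K - max(x1, 0) - max(x2, 0)) * p                # E[C], work in process
             for (x1, x2), p in pi.items()
             if K - max(x1, 0) - max(x2, 0) > 0)
    return Pb, EI, EB, EC
```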
Since a system operating under KCS with ADI combines features of both the classi-
cal KCS and the BS system with ADI, we compare the performance of all three policies
assuming that the manufacturing system has the same configuration and that the parameters
characterizing the ADI and demand arrival processes are the same. Note that ana-
lytical expressions have already been established for the performance measures of
KCS and BS with ADI systems by Dallery and Liberopoulos (2000) and Karaesmen
et al (2002) respectively. Table 11.1 shows expressions of performance measures for
these two systems.
To compare system performance under all three control policies, we introduce
the expected total cost defined in Equation 11.15, where hw , h f and b are cost rates
for average work in process, finished goods and backorders, respectively.
E[TC] = hw E[C] + hf E[I] + b E[B] (11.15)

Table 11.1 Performance measures for the classical KCS and the BS system with ADI

          KCS                      BS with ADI
E[B]   ρ^(K+1)/(1 − ρ)         e^(−μτd(1−ρ)) ρ^(Z+1)/(1 − ρ)
E[C]   ρ(1 − ρ^K)/(1 − ρ)      ρ/(1 − ρ)
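The closed forms in Table 11.1 are straightforward to evaluate; a minimal sketch, assuming ρ = λ/μs < 1 (the function names are ours):

```python
import math

# Sketch of the closed-form measures in Table 11.1 for the classical KCS
# and the BS system with ADI; assumes rho = lam / mu_s < 1.

def kcs_measures(rho: float, K: int):
    EB = rho ** (K + 1) / (1 - rho)           # average backorders
    EC = rho * (1 - rho ** K) / (1 - rho)     # average work in process
    return EB, EC

def bs_adi_measures(rho: float, Z: int, mu_s: float, tau_d: float):
    EB = math.exp(-mu_s * tau_d * (1 - rho)) * rho ** (Z + 1) / (1 - rho)
    EC = rho / (1 - rho)
    return EB, EC
```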
This section presents the design of experiments used for comparing the performance
of the KCS with ADI with the classical KCS and BS system with ADI. In these ex-
periments, the service time of the manufacturing station is assumed to have an expo-
nential distribution with mean μs−1 = 1. The experiments are conducted by varying
K ∈ {5, 10, 20, 30}, Z ∈ {0, K/2, K} and λ ∈ {0.5, 0.6, 0.7, 0.8, 0.9}.
We assume that the average demand lead time τd = E[LTD] is set as τd = 0.9τs, where
τs, the average flow time (the average time from order activation at BD1 till the delivery
of a finished product to FG), is estimated by τs = 1/(μs − λ). Here we set K0 large
enough so that the underlying Markov chain is finite, and yet no more than 0.1% of
the orders that arrive are rejected from the system.
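The parameter settings above can be generated as follows (a small sketch of the experimental design, with μs = 1):

```python
# Sketch of the experimental design: mu_s = 1, and the average demand lead
# time tau_d is set to 90% of the estimated average flow time 1/(mu_s - lam).

mu_s = 1.0
settings = []
for lam in (0.5, 0.6, 0.7, 0.8, 0.9):
    tau_s = 1.0 / (mu_s - lam)   # estimated average flow time
    tau_d = 0.9 * tau_s          # average demand lead time
    for K in (5, 10, 20, 30):
        for Z in (0, K // 2, K):
            settings.append((lam, K, Z, tau_d))
```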
In this section, we discuss the effect of base stock level Z on the performance
measures for the three different policies. The experiments were carried out for λ ∈
{0.5, 0.6, 0.7, 0.8, 0.9} and K ∈ {5, 10, 20, 30}. For each given (λ, K), Z ranges from
0 to K. We compare E[B], E[I] and E[TC] for KCS with ADI, BS with ADI and the
classical kanban system (KCS).
Figure 11.3 plots the trade-offs obtained. In particular, Figures 11.3 i-a and i-b
show that the average finished goods inventory of the system operating under KCS with ADI
is less than that of a system operating under BS with ADI or the classical kanban
system, i.e., E[I] ≤ min(E[Ik], E[Ibsa]). This implies that KCS with ADI provides
better control over inventory than the base stock system with ADI or the classical
KCS. Figures 11.3 ii-a and ii-b show that as Z increases, the average number of
backorders decreases for both the KCS with ADI and BS with ADI, but is constant
for the KCS. This is because both KCS with ADI and BS with ADI use a target stock
level Z to reduce the backorders. The KCS does not set a base stock level, and hence
the number of backorders in the system is constant for a given number of kanbans,
K. We also notice that the average number of backorders of the system operating under
KCS with ADI lies between those of BS with ADI and classical KCS. Figures 11.3
iii-a and iii-b show the tradeoffs with respect to total cost. We notice that for a
system operating under KCS with ADI, the E[TC] function is neither convex nor
concave over Z. However, for the BS system with ADI, the expected total cost is
convex over Z. As expected, for the KCS the cost is constant for a given K and λ.
For low values of λ (or system load), the KCS with ADI behaves similarly to the BS
with ADI, but for high values of λ (or system load), the KCS with ADI achieves
lower cost than BS with ADI for all values of Z.
In this section, we study the effect of the number of kanbans on the performance
measures for the KCS with ADI, BS with ADI and the classical KCS. The target base
stock level Z is set as Z∗, the optimal base stock level for the BS with ADI system,
where Z∗ = [ln(hf/(hf + b)) + μτd(1 − ρ)]/ln ρ (Buzacott and Shanthikumar, 1993),
and K is varied from Z∗ to 30.
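The optimal base stock Z∗ quoted above can be evaluated directly; a minimal sketch (rounding to a nonnegative integer stock level is our choice, not specified in the text):

```python
import math

# Sketch: optimal base stock for the BS-with-ADI system,
# Z* = [ln(h_f/(h_f + b)) + mu*tau_d*(1 - rho)] / ln(rho),
# as quoted in the text (Buzacott and Shanthikumar, 1993).

def optimal_base_stock(h_f: float, b: float, rho: float,
                       mu: float, tau_d: float) -> int:
    z = (math.log(h_f / (h_f + b)) + mu * tau_d * (1 - rho)) / math.log(rho)
    return max(0, round(z))
```

With τd = 0 this reduces to the classical base stock rule; a positive demand lead time lowers Z∗, reflecting the inventory saved by advance demand information.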
Figure 11.4 plots the performance tradeoffs. In Figures 11.4 i-a and i-b, we see
that for a system operating under the KCS, E[I] increases almost linearly as K in-
creases, but for the KCS with ADI, E[I] increases initially with the increase in K,
but is then bounded by the target stock level Z ∗ . This is due to the structure of the
KCS with ADI: excess kanbans queue up as free kanbans waiting for
activated orders. This prevents the release of additional kanbans into production, limiting
the build-up of excess finished goods inventory. Figures 11.4 ii-a and ii-b show
that initially E[B] decreases with an increase in K, but then approaches a constant as
E[I] approaches the target base stock level. The reason is similar to that given above:
when E[I] approaches the target stock level, an increase in K does not reduce backorders,
as the additional kanbans queue up as free kanbans instead of being used to
further reduce backorders. In Figures 11.4 iii-a and iii-b, we see that for a system
operating under KCS, the expected total system cost, E[TC] is convex, but for a
system operating under KCS with ADI, the expected total cost is neither convex nor
concave. The optimal number of kanbans for the KCS with ADI appears to be close
to the optimal kanban setting for the classical KCS. For either low or high λ (system
load), the KCS with ADI always performs better than the classical KCS.
This section demonstrates the impact of the control pair (K, Z) on the overall performance.
We vary K from 1 to 30 and Z from 0 to K. For each λ ∈ {0.7, 0.8, 0.9},
we consider all 495 combinations of (K, Z) and study their impact on the total cost.
Figure 11.5 shows the case of λ = 0.9. As we have seen in Fig. 11.3 iii-b and Fig.
11.4 iii-b, E[TC] does not demonstrate convexity or concavity over the control pair,
and E[TC] has local minima. Future
work is aimed at developing detailed closed form approximations for key perfor-
mance measures and optimizing overall system performance over the controllable
parameters.
References
Doug Love
Aston Business School, Aston University, Birmingham, B4 7ET, U.K.,
e-mail: d.m.love@aston.ac.uk

Peter Ball
Department of Manufacturing, Cranfield University, Cranfield, Bedford, MK43 0AL, U.K.,
e-mail: p.d.ball@cranfield.ac.uk

Many of the key performance aspects of a manufacturing system are related to the
effect of stochastic events on its operation, and although mathematical modelling can
help with some of these, it is simulation that provides the most flexible and powerful
means of estimating their impact. Reliable estimates of lead times, work in progress
levels, delivery performance, resource utilization etc. all depend on proper represen-
tation of such sources of uncertainty. Determination of the robustness of the design
requires study of external and internal sources of uncertainty, for example changes
in volume and product mix are external to the system whilst breakdowns or scrap
are internal factors. Smith (2003) reviews the literature on the use of simulation in
manufacturing and lists many examples of its use in the design of manufacturing
systems. However the review finds few papers that are concerned with role of simu-
lation in a comprehensive manufacturing system design (MSD) process such as that
proposed by Parnaby (1979). Kamrani et al (1998) presented a simplistic three stage
methodology for cell design in which simulation was the third phase. Other exam-
ples of simulation being discussed in the context of the manufacturing system design
process include Paquet and Lin (2003) who introduce ergonomic considerations and
AlDurgham and Barghash (2008) who propose a framework for manufacturing sim-
ulation that covers some aspects of the design problem but is presented from a more
general perspective.
Conventionally simulation has been linked with the ’dynamic design’ stage of the
manufacturing system design process which follows the concept and detail design
phases in which steady-state conditions are assumed (Love, 1996). During these
earlier stages, average losses or utilization factors are assumed to cover internal
uncertainties, and average conditions are assumed to apply to demand and product mix. Only at
the dynamic design stage are these factors studied (and represented) in more depth
so that reliable estimates of many of the manufacturing system’s key performance
metrics will only be revealed at this late stage. Ideally the evaluation of dynamic
performance of the manufacturing system should be included in every stage in the
design process but this means that the simulation model would need to change as
the engineers develop their view of the manufacturing system design. Lewis (1995)
proposed a manufacturing system design methodology that incorporated just such
a synchronized approach, but it was never fully implemented. He suggested that the
simulation model should be used throughout the system design and through all
the iterations of its development.
The feasibility of such an approach clearly depends on the ability of the modeller to
complete the simulation re-build loop inside the time available for each stage in the
system design process. If that cannot be done then inevitably the simulation will be
left until the system design has stabilized to the point where major changes in the
simulation model would not be needed - that is why the simulation is often built
toward the end of the design project once the detail design phase is complete. Of
course it means that any serious deficiencies in the design that emerge from the dy-
namic analysis may require expensive revision of the system architecture that could
have been accommodated more easily at an earlier stage. Manufacturing system
redesign is normally initiated when there is a compelling business need and that
need is usually time-sensitive so that there is considerable pressure to complete the
project as soon as possible. This pressure means that the design team is unlikely to
favour extending the project time scale even if the extra time spent on a simulation
study would result in a higher quality and more robust design.
Clearly if the time required to perform the simulation analysis could be signifi-
cantly reduced then it would alter the trade-off between design quality and project
duration in favour of the use of simulation.
During the early stages in MSD the architecture of the system may change substan-
tially, for example the cell and related part families may be redefined completely, so
simulation support through this phase implies an ability to completely rebuild the
simulation model quickly. As the architecture is developed a series of models will
be required to test out very different alternatives. Differences will not merely relate
to number and distribution of resources but may require more fundamental revisions
to reflect changes to cell families, material flow paths, work and skill patterns and
machine tool capabilities. This means that the time to build a complete model from
scratch is a key determinant of whether simulation can be used to support this early
phase in the project. Building a model from scratch always takes a significant period
of time, especially if the model is complex Cochran et al (1995) suggest that over
45% of simulation projects take more than three months and nearly 30% require
over three man-months of effort. We have not been able to identify a more recent
study that assessed the impact of the technical enhancements seen since that time or
was focused specifically on manufacturing design projects.
Speeding up model building has long been a desirable objective for simulation
system developers, for example see Love and Bridge (1988). It is clear that whilst
improvements have been made, the position is still seen as one in which scope exists
for further improvement. For example, Pegden's review (see Pegden, 2005) of future
developments in simulation states that: “If we want to close the application gap we
need to make significant improvements in the model building process to support
the fast-paced decision making environment of the future”.
In response to this pressure, software systems have improved considerably, no-
tably in relation to the use of graphics and reusable elements; for examples of this
trend see the Simul8, Witness and Arena systems amongst others. These develop-
ments focus on speeding up the translation of the model logic into executable code
whilst other enhancements provide support for multiple runs, statistical analysis and
the production of common reports, graphs etc. that help by speeding up the exper-
imental process. However, the domain independence or breadth of these systems
means that the user is still required to provide much of the detailed logic of the model.
This is likely to be a significant task: Robinson (2004) suggests the conceptual
modelling stage takes around one third of the total simulation project time. Thus,
although these improvements could be expected to speed up model development
to some extent, they are limited by primarily addressing the coding and experimentation
parts of the simulation model development cycle, leaving the conceptual
modelling phase relatively untouched.
Whilst the need to repeat the conceptual modelling stage is clearly a serious
inhibitor to the use of such systems in the architecture design of the manufacturing
system, it is a less significant issue for refinement and detail design. The refinement
stages of the manufacturing system design process will generate a need for modi-
fications to an existing model even if the underlying conceptual simulation model
remains largely unchanged. The ease and speed with which these can be done will
have been aided by the improvements mentioned above but may still require longer
than the engineer would wish. The length of the minor modify-experiment cycle
may depend on the ease with which the engineer can interact with the model to im-
plement the required changes and perform the necessary experiments and that, in
turn, may depend on the nature of the simulation software.
It could be argued that some simulation systems already allow models to be built
using only data without any programming and some simulation software companies
may argue that their interfaces are intuitive and can be learnt very quickly. But this
would not be a view shared by a typical manufacturing engineer unfamiliar with
simulation interfaces or the subtle tricks needed to get the systems to represent the
required logic without recourse to programming. Ball and Love (1994) point out that
interfaces may make simulation packages easier to use, but this does not necessarily
mean easy to use; that is, ‘easy to use’ describes the simplicity with which the user can
create the model from data from the problem domain.
Data-driven simulators are usually defined as systems that allow a user to create
a running model without the need to do any programming, for example see Pidd’s
definition (Pidd, 1992). Configuration options are used to define or modify the op-
erational logic of the model, usually through menu choices and the setting of entity
properties. Although it is true that this approach does use ‘data’ to define the model,
it may still require the user to make decisions that are normally associated with
conceptual modelling, for example to define model inputs, outputs, and data requirements
and to decide what components are to be included and the level of detail with which
they will be represented. The more freedom the system offers to the user the wider
its potential range of applications will be, but the more specialist knowledge will be
needed to use it.
O’Keefe and Haddock (1991) present a useful diagram that demonstrates the
continuum from pure programming languages through the type of system described
above to highly focused problem-specific simulators that merely require data popu-
lation. The approach used in the cases described here is close to the problem specific
end of the range; the ‘model’ is pre-built and the options offered are limited to those
that are directly related to the manufacturing system design problem itself. They
would be recognised by an engineer as part of the normal specification of the design
and are expressed in domain specific language. The conceptual modelling decisions
have already been made and are hard-wired into the system. The model is populated
by the data that is loaded into it; in these cases the data describe the products, pro-
duction processes and resources that make up the real system. Normally this data
will be extracted from the company databases or ERP systems and formatted before
uploading, although Randell and Bolmsjö (2001) built a demonstration factory
simulation that showed it was: “feasible to run a simulation using the production
planning data as the only information source”. The data used for this project are
very similar to those required by SwiftSim (Love, 2009) - see below. Detailed con-
figuration options may still be required but they are defined and presented in a form
and language familiar to the engineer using a problem specific interface. This means
there is no need for users to learn specialist simulation concepts and terminology. Of
course other aspects of the simulation art are still required, especially those related
to the experimentation.
The avoidance of the conceptual modelling stage altogether, and the fact that the
coding stage is also eliminated, mean that the user can move from data gathering to a
running model very quickly, since data upload and parameter and option setting are
all that are required to create a running model. The Robinson (2004) suggestion that
the project time is roughly split evenly between conceptual modelling, coding and
experimentation (he excludes implementation from this) means that use of this type
of data-driven simulator could save up to two thirds of the simulation project time.
This paper reviews two case studies that illustrate this approach.
The aerospace case is an example of the use of a data-driven simulator in the design
of a cellular manufacturing system. This company manufactures complex parts for
the aerospace industry with application in both the military and commercial markets
and their customers include all the major manufacturers of aircraft. Moulding and
related processes are used in their manufacture so that this application was slightly
unusual in that the processes were very different from those seen in a conventional
machine shop. The variety of parts in the cell’s product family was also substantial
- around 3200 part numbers were considered ‘live’ and each part passed through
around 10 operations. The number of work centres in the cell was more modest at
around 70 although many contained multiple stations. At some stations individual
parts were loaded and processed by the operator whilst at others parts were loaded
in bulk and the processing took place unattended. The need to changeover might be
triggered by a change in part number from one batch to the next or by a change in
some other property or attribute of the part. Operators were multi-skilled and those
skills differed from person to person and shift working was the norm. Special tooling
was used extensively and in some cases travelled with the work through several
operations and could be considered an important resource limitation. In some cases
parts were assembled together at certain operations so that the process route data
had to include bill of materials information. In some cases the constituent parts were
made in the same cell whilst in other instances they were produced elsewhere. MRP-
generated works orders were to be used by the company to drive the production
programme for the cell.
The design team recognised the potential benefit of using simulation but were
concerned that it would take too long to develop a usable model given the tight
timeframe that they had been given for the project. The complexity of their pro-
cesses and the size of the part family were also seen as likely to extend the develop-
ment time needed for the model. On the other hand the ability to test the robustness
of the design was recognised as especially important for high-variety cells where
shifts in product mix can cause unforeseen problems in sustaining delivery perfor-
mance and utilisation levels. Since the redesign involved reorganisation of existing
facilities rather than the introduction of new processes it followed that much of the
data held in the company’s ERP system could be used to populate the simulation
model. A revised version of an existing data-driven batch manufacturing simula-
tor had recently become available at Aston University so it was decided to use that
package for the project. The original system (ATOMS; see Bridge, 1991) had em-
ployed a manual user-interface in which the engineer typed in all the relevant data
and, whilst some basic data could be uploaded from files, extensive manual editing
was always required before a viable model could be generated. Although the core
of the system was little changed the revised facilities meant much larger models
could be run and a more comprehensive range of upload options were implemented
through a spreadsheet interface. These developments meant that ERP data could be
used without simplification to generate the model.
SwiftSim (Love, 2009) relies entirely on the base manufacturing data and a range
of configuration options (that are also defined by the uploaded data) to generate a
running model of a manufacturing system. The data required is extensive but is no
more comprehensive than would be needed to specify the manufacturing system
design. The system does not offer any programming options at all - if the required
functionality is not present then it cannot be added. To ensure that its range of func-
tionality was as comprehensive as possible, the original design was based on a study
of cell design practice across a UK-based multi-national company. Engineers from
the company’s design task forces located in plants across the country were inter-
viewed to identify the features that the system needed to offer. The system was also
refined by application in a number of in-house redesign projects.
Domain data are used to create the model directly, i.e. the data are formatted,
uploaded (or manually entered into the system), run options selected and the model
then executes immediately. The user defines materials (i.e. part numbers), process
routes, bills of materials, work stations, work centres, operators, work patterns, skill
groups, control systems (MRP, Kanban), sequencing rules (FIFO, batching etc),
stock policies, suppliers and lead times, demand sources (generated or input), etc.
The model is created directly from this data. Company terminology is used through-
out so, for example, actual part numbers are used and operators are given their real
names. The system can generate a range of standard reports that vary in the level of
detail offered from simple tables of resource utilisation to event log files that record
everything that happened in a run. The original ATOMS system provided a limited
graphical, schematic, representation of the simulated system that could be used for
debugging and diagnostic investigation. For this type of system the graphical dis-
play of the system status is rarely used when performing experimental runs but it
remains very useful for diagnostics so that aspect will be a core focus of the new
graphical extension currently being considered for SwiftSim.
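To illustrate the idea of a purely data-driven model, the domain tables and a validation step might look like the sketch below. This is hypothetical; SwiftSim's actual data format and interface are not reproduced here, and all names and fields are our own:

```python
# Hypothetical sketch of data-driven model creation: the model is assembled
# entirely from domain tables (parts, work centres, operators), with no user
# programming. Table contents and field names are illustrative only.

materials = {"P-1001": {"route": ["OP10", "OP20"], "batch_size": 50}}
work_centres = {"OP10": {"stations": 2, "skill_group": "moulders"},
                "OP20": {"stations": 1, "skill_group": "finishers"}}
operators = {"A. Smith": {"skills": ["moulders"], "shift": "days"}}

def build_model(materials, work_centres, operators):
    """Validate the tables and assemble a runnable model description."""
    for part, rec in materials.items():
        for op in rec["route"]:
            if op not in work_centres:
                raise ValueError(f"{part}: unknown work centre {op}")
    return {"materials": materials, "work_centres": work_centres,
            "operators": operators}
```

The validation step stands in for the data checking that precedes any run; the point is that uploading corrected tables, not editing code, is how the model changes.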
The concern to ensure the model was built as quickly as possible and the fact that
the company had no experience of the modelling system influenced their decision
to employ an external consultant (one of the authors) who had knowledge of both
the manufacturing design process and the simulator. This meant that there was a
learning curve faced by the consultant in becoming familiar with the company’s
products, processes etc. This approach ensured that the first model was produced
quickly but the extra communications involved did slow the iteration cycle down
during later stages in the project.
The raw process and sales demand data were extracted from the company’s ERP
system into spreadsheets where they could be readily reformatted for upload. The
data for work stations, operators, materials, process routes (including bills of mate-
rial) were all handled that way. Generating demand data proved to be a little more
complex as an MRP calculation was performed in the spreadsheet to convert product
demand to that for the cell family parts. This had the advantage of avoiding any dis-
tortions that might have been present in a works order history extracted from ERP.
The disadvantage was that the spreadsheet calculation was slow, taking 6-8 hours
on average.
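The explosion logic the spreadsheet implemented can be illustrated as follows; the bill of materials shown is hypothetical:

```python
# Sketch of an MRP-style explosion: product demand is converted into gross
# requirements for cell family parts via the bill of materials (BOM).
# The BOM data here are hypothetical.

bom = {"PRODUCT-A": {"PART-1": 2, "PART-2": 1},
       "PART-2": {"PART-3": 4}}

def explode(item: str, qty: float, demand: dict) -> dict:
    """Recursively accumulate gross part requirements for one item."""
    for child, per_unit in bom.get(item, {}).items():
        demand[child] = demand.get(child, 0) + qty * per_unit
        explode(child, qty * per_unit, demand)
    return demand

requirements = explode("PRODUCT-A", 10, {})
```

A dedicated tool performs this traversal in seconds; doing it cell by cell in a spreadsheet explains the 6-8 hour run times mentioned above.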
The absence of a programming capability did not prove to be a constraint as
the simulator handled all the complications of the manufacturing processes with-
out the need for any special ‘tricks’ or deviations. The time the system needs to
create a model from a spreadsheet is very short (less than a minute), and run times
are also reasonable, taking around an hour to run a year’s simulated operation of
the cell. However, the time required for initial familiarisation and analysis, data
extraction and reprocessing, and data validation and correction meant that the first
proper model took around 100 man hours to produce, including the time needed to
program the MRP explosion into the spreadsheet. This time also included
the consultant’s learning curve of around 20 hours that would have been avoided by a
SwiftSim-trained engineer. Subsequent revisions to reflect design changes or differ-
ent performance requirements could be accommodated much more quickly taking
around 8 man hours to revise the data set, upload and perform a test run. These times
are taken from contemporaneous log of the projects task times that was used to track
progress and resources used. Once the base model had been created the engineers
were able to obtain feedback on design changes quite rapidly, although this cycle time would have been reduced, and some of the initial creation problems might have been avoided, if the engineers had used the simulator themselves from the beginning
of the project.
The engineers were able to use the standard reports from the system and generally these provided the information needed, although the ability to show an animated
graphic of the cell running was seen as very desirable, especially for communicating
with both senior management and the shop floor.
150 Doug Love and Peter Ball
The second case study is drawn from a high volume, engineered product environ-
ment. The company regularly introduces new products which trigger the develop-
ment of new production lines. A production line is developed iteratively over a
number of months and simulation is used as standard practice within those itera-
tions. There are many individuals involved in the production line design process
and although many regularly use simulation, only a few are considered simulation
experts. The focus of the design activity is the production line, with some links to
the support, supply chain, etc. activities. The initial users of the simulation output
are the wider design team to trigger redesign work or to confirm performance. The
final simulation output is used as part of the senior management sign off process.
The role of simulation in this case is to support the activities of the manufactur-
ing engineers in removing risk from the design process and, importantly, trigger-
ing design changes that would typically result in a 10% performance improvement.
Numerous simulation models are created during the design of a production line re-
sulting from changes to numbers of machines, machine cycle times, process quality,
expected output rates, etc. The models include details of buffers, selection rules,
conditional routings, scrap rates and operator behaviour. Given their size (100 en-
tities in a model is not unusual) and the scope of the potential changes, the models
are rapidly rebuilt from scratch each time rather than modifying a base model. This
rapid rebuilding of models is considered more robust than model modification, and the scope of the changes required means that such modifications could take longer than is available to the overall design team.
The rapid building of simulation models is achieved through a tailored spread-
sheet interface to a commercially available simulation package. The users work with
the interface to specify the model through either manual entry or copying and pasting data from other design spreadsheets. The data entered represent the entities
to be modelled as well as the control parameters. Populating the interface the first time for a new production line design typically takes several days; however, once achieved, subsequent design changes can be accommodated easily within a day, often in hours. The early modelling work takes many days because the first models run are deterministic, with stochastic enhancements progressively added and experiments
performed. Once set up, the interface is able to build the model in the simulation
package, run the model a number of times and retrieve the results. The interface
contains only sufficient functionality to build models for that particular company.
Therefore the user works within the interface using the terminology of a manufacturing engineer rather than generalised simulation terminology, and is restricted to entering data typical of that company's requirements. The overall time from start to
finish of modelling a given line is of the order of weeks. The model build and run time for a given scenario is therefore relatively short; overall modelling effort is dictated by the design iterations creating new scenarios.
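The idea of a table-driven specification in the engineer's own vocabulary can be sketched as follows. The column layout, station names and the simple static capacity check are illustrative assumptions, not the company's actual interface.

```python
# Sketch of a table-driven model specification: each row describes a
# station in the engineer's vocabulary; a builder derives parameters
# from it (here only a static capacity check / bottleneck finder).
# Column names and data are hypothetical.
rows = [
    # station,    machines, cycle_time_s
    ("load",      1,        30.0),
    ("machining", 3,        80.0),
    ("test",      2,        45.0),
]

def station_rates(rows):
    """Return the units per hour each station can sustain."""
    return {name: machines * 3600.0 / ct for name, machines, ct in rows}

rates = station_rates(rows)
bottleneck = min(rates, key=rates.get)  # slowest station limits the line
print(rates, bottleneck)
```

A full interface would map such rows onto simulation entities (machines, buffers, routings) in the commercial package, but the principle is the same: the engineer edits rows, the builder generates the model.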
Manufacturing engineers use the simulation interface to build and run simulation
models, sometimes with the guidance of the simulation experts. The company spe-
cific functionality of the simulation interface means that data specified in the interface that completely defines the model creation and execution is readily understood
by all, whether or not they were part of the initial modelling work. This contrasts with
the typical view that simulation models built by others take time to fully understand.
The size of the models means that manual creation of both the model logic and the graphical display would take a significant amount of time; potentially, the pace of design change is such that modifications would be triggered by the manufacturing engineers before the model was completed. Experimentation times are typically of the order of hours; sometimes scenarios are batched together and run overnight to use
otherwise idle computers. The modelling approach therefore uses the power of a
commercial simulator to model complex and varied systems and combines this with
the simplicity of an interface dedicated to the particular company's work. This separation of the model creation from the power of the simulation software enables staff
to quickly create models without having to develop a dedicated simulator or use
significant staff time.
In summary, the approach combines the power of a commercial simulation package with the speed and ease of use of a dedicated spreadsheet-based interface in the language of the manufacturing engineer, and allows rapid creation of models for experimentation by simulation experts and non-experts alike. The speed of the modelling is within the pace of the wider design team activity and genuinely informs the design process, triggering design iterations and confirming performance before sign-off.
The paper has argued that the traditional relationship between manufacturing system
design and simulation needs to evolve to truly draw on the benefits of simulation as
an integral part of the dynamic design stage. The discussion went further to make
the case that the dynamic design should be iterative starting at the concept stage
and that to enable this to take place the interface to simulation systems should be
in the language of the manufacturing engineers who are making the critical design
decisions. The paper has presented two different cases where simulation models
have been built to contribute directly to the manufacturing system design process. The
following discussion reviews how far these cases go towards enhancing the design
outcome.
Both cases address the integration of simulation into the manufacturing system
design process. The cases demonstrate the influence of simulation on the design
outcome as well as confirming performance. In both cases there was a simulation
expert supporting the design activities and, notably, in case two the manufacturing engineers at the core of the design team are also users of the simulation system.
The cases show the use of simulation to improve design performance; however, its influence on the concept design differs according to the point at which it is deployed in the design stage. Whilst the first case demonstrated the influence on the design concepts, in the second case simulation was utilised after the production line concepts emerged, and therefore its role was to improve the performance of a given design
option. Design iterations can vary in magnitude from parameter changes (such as
cycle times and material control rules) to more fundamental structural changes (such
as number of machines and routings). In case two most iterations were parameter changes; however, there were occasional, more fundamental changes that resulted, as
would be expected, in longer model creation times.
Developments in simulators have the potential to improve simulation model build times and in turn influence simulation's role in manufacturing systems design. The development of a domain specific simulator requires both simulation and application domain knowledge. Two different approaches were illustrated: the case two company used a standard commercial simulation system as the basis of its simulator, whereas SwiftSim was developed through an academic research project with industrial collaborators. Interestingly, the developments most valuable to these cases were core functionality improvements rather than those relating to animation.
The role of the manufacturing engineer in the use of simulation varies in these
cases. Case one was led by the simulation expert whereas case two was supported by the simulation expert. It should be noted that simulation experts were used when subtle ‘tricks’ were required that are not a standard part of the interface functional-
ity. A simulation specialist may be used to interface between the engineer and the
model but this approach also has drawbacks. The specialist translates the engineer’s
requirements into a model suitable for the purpose but the risk is that features are
lost in translation and delays result. It may be that the popularity, mentioned ear-
lier, of simple spreadsheet models with manufacturing engineers reflects a desire to
directly control all aspects of the analysis.
The level of data translation required from the manufacturing engineering design
world into the simulation analysis world and back influences the level of robustness
of the analysis process as well as the time taken to complete it. Both cases feature a
minimum level of translation of data from the manufacturing engineer's world to the
simulation world, hence the manufacturing engineer could readily understand the
model construction and outputs. This, in turn, minimises any nervousness over verification.
Both cases present implications for model build time and indicate that models are built rapidly when compared with figures typically quoted in the literature. The rapid model building has had two impacts: firstly, the simulation output was influencing the design outcome rather than just confirming performance and, secondly, the level of detail possible is very high, providing greater confidence in the design outcome.
12.5 Conclusions
This paper has presented a discussion on the relationship between the manufactur-
ing system design process and the simulation modelling process. It was argued that
to improve the design outcome, the model building time needs to be reduced
significantly to enable the results of simulation to truly influence the selection and
refinement of design concepts. The detail of the two industrial cases demonstrates
the challenges for simulation use as well as the benefits obtained. From this, key is-
sues of integration of simulation, the influence on concept design, the functionality
of commercial simulators, the role of the manufacturing engineer and data transla-
tion were identified and discussed. Overall the paper has demonstrated how domain
specific, data driven simulators can enhance the manufacturing system design pro-
cess.
Chapter 13
The Best of Both Worlds - Integrated
Application of Analytic Methods and Simulation
in Supply Chain Management
Reinhold Schodl
This work attempts to discover how complex order fulfillment processes of a sup-
ply chain can be analyzed effectively and efficiently. In this context, complexity is determined by the number of process elements and the degree of interaction between them, as well as by the extent to which variability influences process performance.
We show how the combination of analytic methods and simulation can be utilized
to analyze complex supply chain processes and present a procedure that integrates
queuing theory with discrete event simulation. In a case study, the approach is ap-
plied to a real-life supply chain to show the practical applicability.
Analytic models and simulation models are opposing ways to represent supply chain
processes for purposes of analysis. “If the relationships that compose the model are
simple enough, it may be possible to use mathematical methods (such as algebra,
calculus, or probability theory) to obtain exact information on questions of interest;
this is called an analytic solution” (Law and Kelton, 2000). Conversely, simulation
models are quantitative models, which do not consist of an integrated system of pre-
cisely solvable equations. “Computer simulation refers to methods for studying a
wide variety of models of real world systems by numerical evaluation using soft-
ware designed to imitate the system’s operations or characteristics, often over time”
(Kelton et al, 2002).
The use of analytic models and simulation models in supply chain management
harbors distinct merits and demerits (see Table 13.1). By combining analytic meth-
ods and computer simulation, one can potentially derive greater value than by ap-
plying one of these methods alone. This idea has been advocated since the early
Reinhold Schodl
Capgemini Consulting, Lassallestr. 9b, 1020 Wien, Austria,
e-mail: reinhold.schodl@capgemini.com
days of computer simulation. Nolan and Sovereign integrate analytic methods with
simulation in an early work in the area of logistics (Nolan and Sovereign, 1972).
Later research utilizes the joint application of the methods in the field of supply
chain management (for examples, see Ko et al, 2006; Gnoni et al, 2003; Merkuryev
et al, 2003; Lee and Kim, 2002).
Depending on the degree of integration, one can distinguish two different forms,
i.e., hybrid modeling and hybrid models. “Hybrid modeling consists of building in-
dependent analytic and simulation models of the total system, developing their solu-
tion procedures, and using their solution procedures together for problem solving”
(Sargent, 1994). An example is the evaluation of alternatives based on economic
viability and operative feasibility by applying an analytic and a simulation model
respectively. A further application is the verification of an analytic model via an
independent simulation model (Jammernegg and Reiner, 2001). “A hybrid simula-
tion/analytic model is a mathematical model which combines identifiable simulation
and analytic models” (Shanthikumar and Sargent, 1983). Hybrid models are char-
acterized by a higher degree of integration, as analytic methods and simulation are
incorporated into a single model.
Hybrid models can be classified according to the type of dynamic and hierarchi-
cal integration (see Table 13.2). Following the classification with regard to dynamic
integration, this work presents a Type I model. Concerning the hierarchical inte-
gration, the presented model is a special case of Type IV, as the simulation model
requires the analytic model’s output, but both models are hierarchically equivalent
and represent the whole system.
The analysis and improvement of complex supply chain processes is a unique challenge. Given that there is no universally accepted definition of the complexity of supply chain processes, we define the following constituent factors for complexity: the number of process elements (e.g., activities, buffers, information, resources) and the degree of interaction between them, random variability (e.g., machine failures), and predictable variability (e.g., multiple product variants). The following two approaches show that hybrid models are particularly suitable for the analysis of complex supply chain processes.
• The entire system is assessed by using analytic methods (e.g., queuing theory).
Subsequently, the results of the assessment are used to construct a model of a
sub-system, which then helps to conduct a more detailed analysis by means of
simulation. This type of approach is used, for instance, to analyze a complex
supply chain in the semi-conductor industry (Jain et al, 1999).
• An analytic model is employed to assess a relatively large number of alternatives
with relatively minimal effort. Promising alternatives are analyzed via simula-
tion in more detail. For instance, such an approach is employed to solve complex
transportation problems (Granger et al, 2001).
We now present a procedure to analyze complex supply chains with a balance be-
tween validity and effort. The procedure differs from the approaches discussed above
in the following ways. First, narrowing of the system’s scope by an analysis on an
aggregated level is avoided, to incorporate the dynamic behavior of the overall sys-
tem. Second, no preselecting of alternatives by an analysis on an aggregated level
occurs, which prevents an unwanted rejection of promising process designs. The
procedure consists of the following steps:
1. In the first step, the real system’s supply chain processes are modeled as an an-
alytic model and analyzed according to queuing theory. The queuing model de-
livers values of performance indicators (e.g., waiting times) which are inputs for
the complexity reduction in Step 2, as well as for the simulation model in Step 3.
2. This step aims to reduce complexity by identifying non-critical process steps
that can be modeled in a simplified manner in Step 3. If variability is not reduced, it has to be buffered by inventory, capacity, or time in order to maintain
process performance. Inventory levels, capacity utilization, and waiting times
represent the degree of buffering, and therefore act as indicators of how critical a
process step is. These indicators can be obtained from the queuing model. Further
indicators can be derived from the real system. An example is a process step’s
relative position in the queuing network as, generally, variability at the beginning
of the process has greater impact than at the end.
3. In this last step, the supply chain processes are modeled as a discrete event simulation model. Process steps that are defined in Step 2 as non-critical are modeled in
a simplified manner. Simplification can be achieved by modeling process steps
without capacity restrictions. Waiting times caused by capacity limitations are
then modeled in the simulation model as constants according to the values de-
rived from the queuing model. Finally, the simulation model is applied to analyze
alternative process designs.
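Step 2 of the procedure can be sketched as a simple rule that flags process steps as non-critical when the buffering indicators obtained from the queueing model fall below thresholds. The step data and threshold values below are illustrative assumptions, not the paper's actual criteria.

```python
# Sketch of Step 2 (complexity reduction): flag process steps as
# non-critical when queueing-model indicators suggest little buffering,
# so they can later be replaced by constant delays in the simulation.
# The steps, indicator values, and thresholds are hypothetical.
steps = {
    # step: (utilization, waiting_time_h)  -- from the queueing model
    "cutting":  (0.92, 6.0),
    "deburr":   (0.40, 0.3),
    "assembly": (0.85, 3.5),
    "pack":     (0.35, 0.2),
}

def classify(steps, util_max=0.7, wait_max=1.0):
    """Non-critical steps are both lightly loaded AND lightly buffered."""
    return {name: ("non-critical" if u < util_max and w < wait_max
                   else "critical")
            for name, (u, w) in steps.items()}

print(classify(steps))
```

In the actual procedure further indicators (position in the network, assembling functionality) would refine this classification; the sketch only shows the mechanism.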
The supply chain processes are modeled as a network of queues to be analyzed ac-
cording to queuing theory. The software MPX (Network Dynamics, Inc.) is applied,
which “... is based on an open network model with multiple classes of customers.
It is solved using a node decomposition approach. [Each] ... node is analyzed as a
GI/G/m queue, with an estimate for the mean waiting time based on the first two
moments of the arrival and service distributions. Next, the MPX solution takes into
account the interconnection of the nodes ... as well as the impact of failures on the
service time and departure distributions” (MPX, 2003). The analytic model’s inputs
include:
• Demand data (primary demand in defined period, variability of customer order
inter-arrival time),
• Bill of material data,
• Routing data (sequence of production steps, average setup time, variability of
setup time, average process time, variability of process time, work center assign-
ment),
• Resource data (parallel machines in work centers, scheduled availability, mean
time to failure, mean time to repair), and
• Production lot sizes.
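A GI/G/m mean-waiting-time estimate from the first two moments of the arrival and service distributions can be sketched with the well-known Sakasegawa-style approximation (as popularized by Hopp and Spearman); MPX's internal node-decomposition method may differ in detail.

```python
# Sketch of a GI/G/m mean-waiting-time approximation from the first
# two moments of the interarrival and service time distributions.
# This is the Sakasegawa form, shown as an illustration; it is not
# claimed to be MPX's exact algorithm.
import math

def wq_gi_g_m(ca2, cs2, util, m, mean_service):
    """Approximate mean queueing delay at a GI/G/m station.
    ca2, cs2: squared coefficients of variation of interarrival and
    service times; util: utilization (0 < util < 1); m: parallel
    machines; mean_service: mean service time."""
    v = (ca2 + cs2) / 2.0                                  # variability
    u = util ** (math.sqrt(2 * (m + 1)) - 1) / (m * (1 - util))
    return v * u * mean_service                            # V * U * T

# Sanity check: for M/M/1 (ca2 = cs2 = 1, m = 1) this reduces to the
# exact Wq = rho/(1-rho) * mean_service.
print(wq_gi_g_m(1.0, 1.0, 0.8, 1, 1.0))  # 4.0
```

The formula makes the roles of variability (V), utilization (U) and time (T) explicit, which is exactly the decomposition the queueing step of the procedure exploits.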
The model is validated by comparing the mean production lead time with that of the
real system, which differs by less than 5%. It is then applied to find values for each
work center’s capacity utilization and average value-adding times for setting-up and
processing, as well as average waiting times due to capacity restrictions. This output
is required for the reduction of complexity in Step 2 and for the simulation model
in Step 3.
If certain resources, such as work centers, are not modeled in the simulation model,
the effort for building and running the model can be reduced. This is acceptable only
if simplified modeling is limited to resources aligned to non-critical process steps.
Process steps are classified as critical and non-critical based on multiple criteria, as
follows:
• The capability of a process step to deal with variability is an important factor
in evaluating how critical a process step is. Generally, if variability cannot be
reduced, it has to be buffered by capacity, time, and inventory. Fundamental in-
dicators for the degree of buffering of variability are capacity utilization and lead
time efficiency. Both measures are provided by the described queuing model.
• Another factor is the relative contribution of a process step to the overall perfor-
mance of the supply chain, which can be measured by a process step’s proportion
of value-adding time and proportion of cost of goods sold.
• Moreover, a process step’s relationship with other process steps is taken into
account. The relative position of a process step within the network is a relevant
indicator, as generally variability at the beginning of a process has greater impact
than at its end. A further indicator is a process step's assembling functionality.
The processes of the supply chain under study are modeled as a simulation model
to be analyzed in detail with multiple performance measures. After building the
model, verification and validation is carried out and an experimental design is de-
veloped to finally run simulation experiments. Critical process steps are modeled in detail, i.e., the resources that carry out the process steps are represented in the model, including details about scheduled availability and random breakdowns. Resources aligned to non-critical process steps are not modeled. Because of this simpli-
fication, waiting times caused by capacity restrictions cannot be determined by the
simulation model. Thus, for non-critical process steps, the waiting times calculated
by the analytic model are utilized and represented as constants in the simulation
model. This approach guarantees a balance between representing reality in as much detail as necessary while keeping the effort to build and run the model as low as
possible.
The case study’s discrete event simulation model is implemented with the soft-
ware ARENA (Rockwell Automation), which is based on the simulation language
SIMAN. The simulation model accounts for various risks of the supply chain, espe-
cially variable demand, forecast errors, stochastic setup times and machine break-
downs. The model’s input is comprised of:
• Demand data (order time, order quantity, desired delivery date),
• Forecast data (forecasted order time, forecasted order quantity, forecasted desired
delivery date),
• Bill of material data,
• Routing data (sequence of production steps, assignment of work centers, setup
time, processing time),
• Resource data (parallel machines in work center, scheduled availability, mean
time to failure, mean time to repair, constant waiting time for simplified modeled
work centers),
• Production data and rules (production lot size, rule for dispatching production
orders, rule for prioritization of production orders), and
• Cost data (material cost, time-dependent machine cost, quantity-dependent machine cost).
The length of the warm-up period of the non-terminating simulation is decided
by visual analysis of the dynamic development of inventory. The number of replica-
tions is determined by statistical analysis of the order fulfillment lead time.
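Determining the number of replications by statistical analysis typically means replicating until the confidence-interval half-width of the mean lead time is small enough. The sketch below shows this with a normal critical value and a synthetic stand-in for the simulation run; the case study's exact criterion is not specified in the text.

```python
# Sketch of choosing the number of replications: replicate until the
# 95% CI half-width of the mean lead time falls below a relative
# precision target. The stand-in "simulation" and the 5% target are
# illustrative assumptions.
import math
import random

def replications_needed(run_once, rel_precision=0.05, z=1.96, max_n=1000):
    xs = [run_once() for _ in range(10)]        # pilot replications
    while len(xs) < max_n:
        n = len(xs)
        m = sum(xs) / n
        var = sum((x - m) ** 2 for x in xs) / (n - 1)
        half = z * math.sqrt(var / n)           # CI half-width
        if half <= rel_precision * m:
            return n, m
        xs.append(run_once())                   # one more replication
    return len(xs), sum(xs) / len(xs)

random.seed(1)
# stand-in "simulation run": lead time noisy around 40 hours
n, mean = replications_needed(lambda: random.gauss(40.0, 5.0))
print(n, round(mean, 1))
```

With real simulation output one would also use a t critical value for small samples; the loop structure is unchanged.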
The output of the simulation model comprises performance measures whose def-
initions are in line with the well-established Supply Chain Operations Reference
Model (Supply Chain Council, 2009). The defined scenarios are compared with respect to these performance measures.
The focus of this paper does not lie in the presentation of the scenarios' specific results, but in demonstrating the practical applicability of the presented approach for dealing with complex supply chains. Therefore, the validation of the simulation model under different degrees of complexity reduction is of particular interest.
The degree of complexity reduction is expressed as the proportion of work centers
modeled in a simplified manner. Table 13.3 shows how complexity reduction af-
fects the model’s error of order-fulfillment lead time. Complexity reduction of 9%
results in a generally acceptable error of the order fulfillment lead time of 1%; for a
complexity reduction of 29%, the error is still under 3%.
For further validation, statistical analysis of the order fulfillment lead times of
the customer orders was carried out. A Smith-Satterthwaite test is utilized, as the
system and model data are both normal and variances are dissimilar (Chung, 2004).
For a level of significance of 0.05 and a degree of complexity reduction of zero
and 9%, there is no statistically significant difference between the actual system and
the simulation. For a level of significance of 0.01, this is also true for a complexity
reduction of 29%.
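The Smith-Satterthwaite (Welch) test used for validation compares two samples with unequal variances. A minimal sketch, with synthetic stand-in data rather than the case-study measurements:

```python
# Sketch of the validation test: Welch's t statistic and the
# Satterthwaite degrees of freedom for real-system vs. simulated
# lead times with dissimilar variances. Data are synthetic stand-ins.
import math
import random
from statistics import mean, variance

def welch_t(a, b):
    """Return the Welch t statistic and Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)
    t = (mean(a) - mean(b)) / math.sqrt(va / na + vb / nb)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

random.seed(2)
system = [random.gauss(40.0, 4.0) for _ in range(30)]  # "real" lead times
model = [random.gauss(40.5, 6.0) for _ in range(30)]   # simulated ones
t, df = welch_t(system, model)
print(round(t, 2), round(df, 1))
```

The statistic is then compared against the t distribution with df degrees of freedom at the chosen significance level (0.05 or 0.01 in the study).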
13.4 Conclusion
Analytic models and simulation models are characterized by specific strengths and
weaknesses. In this paper, we demonstrated a procedure that combines an analytic
queuing model with a discrete event simulation model to utilize the specific benefits
of both methodological approaches. A balance between validity of the results and
effort for the analysis of the supply chain processes was accomplished.
14.1 Introduction
In industrial practice, the concept of Lean operations management is the hype of the
new millennium. It consists of a set of tools that assist in the identification and steady
elimination of waste (muda), the improvement of quality, and production time and
cost reduction. The concept of Lean operations is built upon decades of insights and
experience from Just-In-Time (JIT) applications. Since the first articles and books
Nico J. Vandaele
Research Center for Operations Management, Department of Decision Sciences and Information
Management, K.U.Leuven, 3000 Leuven, Belgium,
e-mail: Nico.Vandaele@econ.kuleuven.be
Inneke Van Nieuwenhuyse
Research Center for Operations Management, Department of Decision Sciences and Information
Management, K.U.Leuven, 3000 Leuven, Belgium,
e-mail: Inneke.VanNieuwenhuyse@econ.kuleuven.be
In order to develop a quantitative approach to the Lean concept, we will rely on some
basic stochastic models for flow systems. Flow systems are systems where a set of
resources is intended to perform operations on flows (see Vandaele and Lambrecht,
2003). Some illustrative examples are listed in Table 14.1.
These examples show the rich variety of flow systems. All these systems share
some common physical characteristics: on their routing through the system, flows
visit resources in order to be processed, and hence consume (part of the) capacity of
the resources. This competition for capacity causes congestion: flows may need to
queue up in front of the resources. This congestion in turn inflates the lead time of a
flow entity through the system.
These basic mechanics of a flow system imply that every decision related to
the flow has consequences for the resource consumption over time. For instance,
once lead time off-setting for manufacturing components in an assembly setting is
performed, resources (i.e., capacity) need to be committed in order to be able to
perform the required processes for the components. Vice versa, all resource-related
decisions have an impact on the flow: scheduled maintenance for instance will tem-
porarily impede flow, while sequencing decisions cause certain flows to proceed
while other flows need to wait. Consequently, flow systems contain three funda-
mental decision dimensions: flows, resources and time. If a flow system is to be
managed in an effective and efficient way, the management decisions must consider
the flow, resource and time aspects simultaneously, symbolized by the intersection visualized in Fig. 14.2.
In what follows, we assume the flow system to be stochastic: i.e. both the flows
and the resources are subject to variability. In real-life systems, causes of system
variability are omnipresent, like quality problems, resource failures, stochastic rout-
ings, randomness, etc. (see for instance Hopp and Spearman, 2008). It is known
that the presence of variability influences system performance in a negative way
(Vandaele and Lambrecht, 2002). Important system performance measures are re-
source utilization, flow time, inventory, throughput and various forms of service
level. Some of these (e.g. flow time and inventory) are flow oriented, while others
(such as utilization and throughput) are resource oriented.
In order to maintain an acceptable performance in a stochastic environment, a
flow system has to operate with buffers (Vandaele and De Boeck, 2003). In line with the three basic system dimensions mentioned above, three types of buffers may be used: inventory buffers (e.g. safety stocks, work-in-process, ...), capacity buffers (spare capacity, temporary labor, ...) and time buffers (safety time, synchronization buffers, ...). Any particular combination of the three buffers eventually leads to a particular overall system performance.
In this section we consider a system consisting of a single server, processing a single product type. The system's queueing behavior can be modeled as an M/M/1 system (see e.g. Anupindi et al, 2006), with an arrival rate λ and a processing rate μ. The desired customer service level is defined by S (0<S<1), and W_S refers to the S-percentile of the flow time of products through the system. In an MTO system, W_S would be the lead time quote necessary in order to guarantee a delivery service level of S. The performance measures of interest are listed in Table 14.2.
Given these performance measures we can quantify the concept of buffer substi-
tution, the dynamic definition of Lean and the issue of Six Sigma. This will be shown in Figs. 14.2 and 14.3 respectively.
The concept of system buffering is illustrated in Fig. 14.2, where W and W_S (with S equal to 95%) are shown as a function of the utilization ρ (the arrival rate varies from 0.05 to 0.95 while the service rate equals 1). First, the strongly non-linear behavior of the lead time as a function of ρ can be observed. As a consequence, the lead time quote, which includes safety time, gets increasingly larger with larger utilizations and with higher service levels. We can therefore conclude that the amount of safety time grows with increasing utilization. It can also be clearly seen that a smaller amount of safety time can be achieved with a higher level of safety capacity, and vice versa.
In general, the desired lead time is determined by market conditions. If the com-
pany policy is such that a service level S has to be provided, this desired lead time
quote needs to coincide with W_S. Given W_S and W for the company, the amount of safety time can be derived. From the relationships in Table 14.2, the corresponding
utilization equals
1
1 − ln 1−S
ρ= (14.1)
μ × Ws
and the corresponding safety capacity equals

C_safe = 1 − ρ = ln(1/(1−S)) / (μ × Ws).    (14.2)
In this way, decomposing Ws into the mean flow time W plus the safety time W_safe, we can derive

C_safe = ln(1/(1−S)) / (μ × (W + W_safe))    (14.3)
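Equations (14.1) and (14.2) are straightforward to evaluate. A minimal sketch; the numbers (μ = 1, S = 0.95, a quoted lead time Ws = 6) are illustrative assumptions, not figures from the chapter:

```python
import math

def utilization_for_quote(mu, S, Ws):
    # Eq. (14.1): the utilization at which the quoted lead time Ws is
    # exactly the S-percentile of the M/M/1 flow time
    return 1.0 - math.log(1.0 / (1.0 - S)) / (mu * Ws)

def safety_capacity(mu, S, Ws):
    # Eq. (14.2): C_safe = 1 - rho
    return math.log(1.0 / (1.0 - S)) / (mu * Ws)

mu, S, Ws = 1.0, 0.95, 6.0
rho = utilization_for_quote(mu, S, Ws)     # about 0.50
c_safe = safety_capacity(mu, S, Ws)        # about 0.50
# consistency check: at this rho, the S-percentile of flow time equals Ws
assert abs(math.log(1 / (1 - S)) / (mu * (1 - rho)) - Ws) < 1e-9
```

A tighter quote (smaller Ws) or a higher service level S forces ρ down, i.e. more safety capacity must be held, which is exactly the buffer substitution the text describes.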
time Ws can be realized while operating at higher utilizations: either the same performance with more sales volume (inspired by an expansion-based strategy) or, in the case of the same sales volume, with fewer resources (a rationalization strategy). An alternative way to profit from the improvement projects is to operate at the same utilization, which simply leads to sharper lead time quotes; this embraces an aggressive, competitive strategy. Of course, all combinations of higher utilization and improved performance are alternative paths towards the shifted improvement curves. This is visualized in Fig. 14.3, where WS1, WS2 and WS3 represent low, high and medium variability, respectively. Typically, as Six Sigma projects attack system variability within a continuous improvement framework, the three curves stand for successive, systematically implemented improvements.
setting the “base case”. The figure shows the strongly non-linear, concave behavior of TH in terms of N. Safety capacity C_safe hence decreases as WIP increases. Given the characteristics of the system (i.e., the processing rate μ of each of the servers), higher TH can be obtained at the price of higher WIP and lower safety capacity.
Fig. 14.4 TH in terms of N for a 5-station line with BNR = μ = 1 unit per minute (base case)
The expression for TH in Table 14.3 can be used to quantify the trade-off between safety capacity and WIP for systems targeting a market-determined throughput rate TH. As TH = N/(m+N−1) × μ, a decrease in N without jeopardizing TH can only be obtained by increasing the bottleneck rate μ (and, hence, the capacity of the line). Consequently, the same throughput level TH can only be obtained with lower WIP at the price of extra safety capacity. This is visualized in Fig. 14.5. Assuming a market-determined TH equal to 40 units per hour, the base-case system would require N=8 units, with C_safe=20 units/hr. The WIP level can be cut in half (N=4 units) without impacting the throughput rate (TH=40 units per hour) when the capacity of the system is increased to BNR = μ = 80 units per hour = 4/3 units per minute, implying a safety capacity C_safe=40 units/hr.
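Using TH = N/(m+N−1) × μ from Table 14.3, the two scenarios above can be reproduced directly (m = 5 stations; base case μ = 60 units/hr, increased capacity μ = 80 units/hr):

```python
def throughput(N, m, mu):
    # TH = N / (m + N - 1) * mu  (expression for TH quoted in the text)
    return N * mu / (m + N - 1)

def safety_cap(N, m, mu):
    # C_safe = capacity not consumed by throughput
    return mu - throughput(N, m, mu)

m = 5                                                    # 5-station line
print(throughput(8, m, 60), safety_cap(8, m, 60))        # base case: 40.0 20.0
print(throughput(4, m, 80), safety_cap(4, m, 80))        # increased capacity: 40.0 40.0
```

Halving WIP (N=8 to N=4) holds TH at 40 units/hr only because the bottleneck rate rises from 60 to 80 units/hr, doubling the safety capacity from 20 to 40 units/hr.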
14 Rapid Modeling In A Lean Context 171
Fig. 14.5 TH and SC in terms of N, for the base case and the increased capacity case
Given the system's characteristics and the market-determined TH, the lean level of WIP and C_safe can be defined as that combination of WIP and C_safe that yields TH. A system employing higher WIP levels can be characterized as obese: indeed, as the market determines TH, the inherent capability of the system to achieve throughput higher than TH will not result in additional sales. Conversely, a system employing lower WIP levels is anorectic, as it is incapable of achieving the desired TH.
The impact of Six Sigma projects can be illustrated in an analogous way. It is known from the literature that reducing system variability shifts the TH curve to the upper left (e.g., Hopp and Spearman, 2008). This is shown in Fig. 14.6 (TH curve for the reduced variability case). Consequently, Six Sigma allows one either to obtain a target TH with lower WIP (and, hence, lower working capital) or to increase TH for the same level of WIP (if this is desired in view of satisfying additional sales). Obviously, all combinations of lower WIP and improved TH represent alternative paths towards the shifted improvement curves.
14.3 Conclusion
References
Abstract In this paper we describe a model that investigates the impact of lean
management on business competitiveness. We hypothesize that business competi-
tiveness depends on organizational competences (including both the static level of
operational capability and the dynamic capabilities of improving and adapting to
changing internal and external conditions) and business performance. The lean literature provides an unbalanced picture of the elements of business competitiveness: while several studies discuss the impact of lean on static operational measures, there are far fewer studies about the relationship between lean and (1) organizational changes and responsiveness, and between lean and (2) business performance. In the empirical part of our paper we focus on the latter issues using both case studies and questionnaires. With our case-based research (using two original cases and
relying on several ECCH cases) we can clearly highlight how lean affects, through
employees, organizational responsiveness and how it leads towards higher business
competitiveness. Our analysis is unique in the sense that we could relate the case-
based analysis to the perspective of employees, since in our original cases several
employees (83 and 97) filled in a questionnaire that showed the impact of lean tools
Krisztina Demeter
Department of Logistics and Supply Chain Management, Corvinus University of Budapest, Fovam
ter 8, H-1093 Budapest, Hungary,
e-mail: krisztina.demeter@uni-corvinus.hu
Dávid Losonci
Department of Logistics and Supply Chain Management, Corvinus University of Budapest,
Fovam ter 8, H-1093 Budapest, Hungary, e-mail: david.losonci@uni-corvinus.hu
Zsolt Matyusz
Department of Logistics and Supply Chain Management, Corvinus University of Budapest, Fovam
ter 8, H-1093 Budapest, Hungary,
e-mail: zsolt.matyusz@uni-corvinus.hu
István Jenei
Department of Logistics and Supply Chain Management, Corvinus University of Budapest, Fovam
ter 8, H-1093 Budapest, Hungary,
e-mail: istvan.jenei@uni-corvinus.hu
177
178 Krisztina Demeter, Dávid Losonci, Zsolt Matyusz and István Jenei
and methods on them, as well as their opinion about the improvements both at op-
erational and business levels.
15.1 Introduction
Nowadays, lean management is having its second heyday (Schonberger, 2007; Hol-
weg, 2007). Several companies, many of them outside the automotive industry, im-
plement lean management hoping to achieve competitive advantage. Their hope is
fueled by the success of Toyota and other car manufacturers and their suppliers
(Liker and Wu, 2000). Large international surveys confirm that pull, customer-driven production systems, the basics of lean management, are undeniably among the sources of competitive advantage today (Laugen et al, 2005). Unfortunately, these studies did not investigate whether improving competitive resources and better operational performance really affect financial performance. Huson and Nanda (1995) were unable to present a clear link between JIT and profitability, while Voss (1995, 2005) suggests that competitiveness would improve - but that term is not defined in any
way. One can also rarely find empirical studies that focus on the organizational
change requirements of lean transformation beyond the usual lean tools and princi-
ples, though it has been widely accepted for decades now that human resources play
an outstanding role in lean transformation, and thus lean requires substantial changes in employees' and managers' perspectives and everyday work (Sugimori et al, 1977;
Hines et al, 2004).
In this paper we discuss the changes triggered by lean (introduction of new tools,
methods and principles) and their results through case studies. We emphasize the
role of human resources in this process, and hence we present not only the top management's opinion about lean management, but also the workers' impressions.
We begin with a definition of business competitiveness, which sets up the the-
oretical frame for the research. Then the existing literature about the relationship
between lean management and business competitiveness is summarized. After hav-
ing described the research methodology, the empirical results of our study follow.
The final part discusses the results and the limitations of the research.
We can hardly find comprehensive, well-developed definitions of business competitiveness in the operations management literature. Therefore we cross the border of operations management and start the discussion of business competitiveness based
on the definition of Chikán (2006):
15 Impact of Lean Management 179
Fig. 15.1 Components of business competitiveness. Based on Chikán (2006) and Gelei (2007)
Lean management was a hot topic already in the early 80s in operations manage-
ment (Schonberger, 2007) and flourished in the US (Holweg, 2007) under the name
of just-in-time (JIT). In the ’90s lean management became the dominant strategy for
organizing production systems (Karlsson and Ahlström, 1996). Moreover, Hines et al (2004) considered it the most influential paradigm of operations management. Despite this, the evidence on the effects of lean management on business competitiveness was anecdotal or case-based, lacking any deeper insight into real managerial issues. From another point of view (Voss and Blackmon 1994, cited by Davies and Kochhar 2002), operating practice in the field of operations management is crucial for operating performance, and operating performance is crucial for operational competitiveness. Following this logic, Voss et al (1997) state, without empirical support, that
outstanding operating performance leads to outstanding business performance and
competitiveness. Adapting this logic to lean management, leaning production improves production performance and thus production competitiveness, all of which contribute to business performance and competitiveness. Thus lean management, as one of the best practices of world class manufacturing, ultimately leads to improved business competitiveness (Voss, 1995, 2005). Overall, the relationship between lean and competitiveness is intuitively strong, but real practical evidence on this issue is missing.
Schmenner (1988) concluded that “Out of many potential means of improving productivity, only the JIT-related ones were statistically shown to be consistently effective”. Shah and Ward (2007) found a positive relationship between lean management
and outstanding operating performance, and added that the relationship is well accepted among researchers and practitioners (see their referred sources, e.g., Krafcik
(1988); MacDuffie (1995); MacDuffie et al (1996); Shah and Ward (2003); Womack and Jones (1996)). According to the literature, lean practices heavily impact inventory turnover, quality, lead time, labour productivity, space utilization, flexibility (volume and mix) and costs (Crawford et al, 1988; Huson and Nanda, 1995; Flynn et al, 1995; MacDuffie et al, 1996; Karlsson and Ahlström, 1996; Sakakibara et al, 1997; Boyer, 1998; McKone et al, 2001; Cua et al, 2001). Thus lean practices inevitably impact operating performance dimensions positively; moreover, the concurrent application of various practices seems to have a synergistic effect, as they strengthen each other (Crawford et al, 1988; Cua et al, 2001; Flynn et al, 1995; Sakakibara et al, 1997; Boyer, 1998; McKone et al, 2001; Shah and Ward, 2007). To summarize, there is ample empirical support that lean management contributes to business competitiveness through the capability to operate.
According to Fig. 15.1, the capability to change consists of four areas: (1) mar-
ket relations, (2) personal skills, (3) decision making and communication and the
level of (4) innovativeness. The relationship of lean management with these areas is in most cases intuitive, but less evident empirically. Nonetheless, all of them are important in lean transformations. Just think about the importance of (1) market relationship quality in (i) balancing production load forward and backward in the supply chain, (ii) identifying the customer value (the first principle of lean thinking, Womack and Jones (1996)), or (iii) organizing JIT supplies (a basic lean element). The (2) human factor is crucial in lean transformations: “Needless to say, sophisticated technologies and innovative manufacturing practices alone can do very little to enhance operational performance unless the requisite human resource management (HRM) practices are in place” (Ahmad and Schroeder, 2003, p. 19). This statement is supported by the fact that human resources (under the name of cross-functional work force) are among the most frequent practices within lean management (Shah and Ward, 2003, 2007). (3) Decision making and communication systems play a central role in today's organizations, where information flow and knowledge have an enormous effect on value-creating processes. Recent HR practices such as empowerment and decentralization, which occur in lean as well, shape the structure of these systems. Areas (2) and (3) overlap heavily and cannot be handled in isolation from each other. Hence we integrate them in the empirical part of this study. The effect of lean
management on (4) the innovativeness of the company (e.g. R&D expenditures) still shows contradictory results. Several researchers state that excessive elimination of waste “cripples” innovative ideas and decreases the extent of development (Lewis, 2000). A counter-example is Toyota, which brought the Prius to market far earlier than its competitors, years before the era of hybrid-driven cars (Liker, 2008). The human factor serves as a basis for two elements of the capability to change, and both seem to determine the success of the lean transformation. Since areas (2) and (3) are relevant from the very beginning, they should be developed in parallel with lean tools and principles during the lean transition. Despite this, these elements are rarely the focus of empirical works, especially from the shop floor workers' point of view.
Emphasizing people is a must in lean management: it follows from the logic of its operations. Since process dependence increases with the elimination of buffers, production problems surface immediately. Thus the demand for a motivated and adaptable work force is obvious (Sugimori et al, 1977; MacDuffie, 1995). Shah and Ward (2007)
reached the same conclusion: employees working in cross-functional, self-managed teams are faster and more efficient in solving identified problems. MacDuffie (1995) and Shah and Ward (2007) showed that companies relying on HR practices as an integral part of their lean production system achieve better results (this is also supported by Wood (1999)). Interestingly, the HR literature does not discuss the issue of
lean (Wood, 1999), with the exception of Birdi et al (2008). If we consider the effect of lean on employees, there are two opposing views (Delbridge et al, 2000). Supporters, mostly OM researchers, see the positive effects of lean management on employees (Legge, 2005). Others (Berggren, 1993; Landsbergis et al, 1999; Lowe, 1993; Skorstad, 1994; Wood, 1999) emphasize the “dark side” of lean (e.g. work intensity, reduced autonomy, overtime, increased horizontal load, etc.). Our paper
brings some new thoughts into this discussion by analyzing lean achievements (using business competitiveness as a framework) and HR-related changes through the eyes
of employees. Based on a comprehensive literature review (Sugimori et al, 1977;
Crawford et al, 1988; Flynn et al, 1995; MacDuffie, 1995; Sakakibara et al, 1997;
Boyer, 1998; McLachlin, 1997; Cua et al, 2001; Hines et al, 2004; Shah and Ward,
2007) we summarize the most important HRM practices in relation to lean manage-
ment:
• Education and training, cross-functional work force;
• Decentralization and empowerment;
• Team work;
• Information flow and feedback.
The elements above overlap with the most important HRM practices of the dom-
inant HRM model, which considers people as valuable assets (see Legge 2005 and
Pfeffer 1998).
Surprisingly, in spite of the popularity of lean management in recent decades, operations management research has still not empirically established the relationship between lean management and business performance. Impacts on operating performance are obvious, but few studies tie operating performance to financial performance. The work of Huson and Nanda (1995) is exceptional in this respect, even if their conclusion is that the real impact of just-in-time on profitability is ambiguous. Lewis (2000), in his case-based research, argues that “Becoming lean does not automatically result in improved financial performance”, since “[t]he benefits of lean production can very easily flow to powerful players” (Lewis, 2000, p. 975). Nonetheless, the investigation of the effect of lean management on financial performance is very important.
Besides establishing the direction of the relationship, another goal is to uncover those elements that influence the quality of the relationship. Based on the literature review (which reflects the view of top management), Table 15.1 summarizes the questions investigated in this paper. First we analyze the three elements of competitiveness with the help of case studies that reflect top management opinion ((1)-(3)). After the operational results of our companies (I), we discuss the ability of responsiveness that supports internal and external adaptation (II), where we give special attention to HR-related changes because of their important role in lean implementation and sustainability. Finally, we investigate the relationship between lean and financial performance (III). Throughout the questionnaire analysis we assume that good results may motivate employees. The real motivating factor, though, is the feeling of being part of the changes (V), not just the experience of the good results ((IV) and (VI)).
15.3 Methodology
lean and competitiveness. The case studies were based on interviews (with middle and top management), company visits, top management questionnaires and company documents. The primary company selection criteria were an open attitude and a solid determination towards lean. In Table 15.2 we summarize the lean management tools applied by our two case companies, which indicate that the selected companies fulfill the selection criteria.

Surveys are a common tool for measuring management attitudes, but it is very uncommon to conduct employee surveys on a large sample in the field of lean; hence our approach is somewhat unique. The employee questionnaire consisted of 51 questions and was inspired by a previous survey (Tracy, 2004). The questionnaire was intended to capture the companies' expectations, the goals and implementation of the lean transformation, its effects and results, and the changes in working conditions, tools, applied technology and intra-firm communication.
foams and seat trims) for Suzuki, a large OEM in Hungary. Moving towards lean management was essential in order to survive after some years of unprofitable operations and downsizing. Altogether 83 employees filled in the questionnaire at Rába (62% of all employees in the workplaces affected by the lean transformation).
OKIN Hungary Ltd. has been owned by a German investment group since 2007. It has around 300 employees at a site in north-east Hungary, next to the city of Hajdúdorog. It assembles furniture motion mechanics in a large variety. Product design, orders, deadlines and customer relationship management are handled at the German headquarters. The lean transformation was forced by the owners, because Central European wage advantages eroded strongly after new capacity was recently created in Far Eastern countries. The employees of four assembly lines filled in the questionnaire at the company. Practically every employee who was present at the time gave his/her opinion (93 people).
We can conclude from the Rába and Okin cases that the improving measures were driven by both the reorganization of manufacturing and the changes made in the supporting infrastructure. The companies made the most changes in the manufacturing processes, in their control, and in the (2) area of human resources (see Table 15.2). It is worth mentioning that among the other dimensions of responsiveness, the area of (3) decision making and communication systems (which is closely connected to people) changed strongly, while the areas of (1) market relations and (4) innovativeness did not change, or only to a small extent. The lack of innovativeness could be explained by the fact that it is not required by the companies' customers. The sole
Business Performance
During the years following lean implementation, output and sales revenue increased at both companies (1), but it would be a mistake to consider lean the only factor behind this phenomenon. Lean should be regarded both as a consequence of growth and as a cause of it. At Rába, lean implementation as a way of efficiency-seeking was forced by the capacity reserved by the customer (anticipated growth) and the unprofitable business, while the improving performance created further utilizable capacity. At Okin the main causes were also capacity issues and more profitable prices. In both cases the companies were able to improve efficiency indicators ((3) and (6)) in a way that required no great investments. This is further strengthened by the increased inventory turnover (though Okin has a very special inventory policy). The companies' business performance (ROS (4) and operating profit (3)) does not improve automatically with the implementation of lean. Business performance is affected by several other factors, e.g. industry, market position (Rába: second-tier supplier; Okin: mother company), competition intensity (Rába: increasing presence of competitors and OEMs in the Central European region; Okin: Far Eastern manufacturing sites), power (Rába: OEM as customer; Okin: mother company transfer prices), product characteristics (complexity, substitution, product range), product development capability, etc.
In Fig. 15.2 we summarize the findings of the case studies. Similarly to the results in Table 15.1, we conclude that lean had an obvious positive effect on the capability to operate. The companies show great improvement in their operational performance following the lean transformation. Measures of the capability to change are also better, reflecting better market relationships, a more skilled workforce, more advanced decision making systems and more intensive communication. The capabilities to operate and to change together suggest an improvement in business competitiveness. But did the market accept this improving performance? Business indicators improved at both companies, to a greater extent at Rába and to a lesser extent at Okin. According to the case studies, the relationship between lean and business performance is not as strong as that between lean and the capabilities to operate and change, because here we have to deal with several other influencing contextual factors.
15.3.2.2 Survey
Our case studies, in accordance with the literature, show the “beneficial” influence of lean on organizational dimensions. This statement is based on managerial interviews, managerial surveys and company data. Concerning the employees, there is very little (empirical) research about them, and those studies, as we said earlier, focus mostly on working conditions under lean. We do not know whether employees perceive the “proven” success during their daily work. As knowledge of success can be inspiring, their opinions and experience may facilitate the management of lean. The opinion of the employees, who operate the whole system, can hence give significant insights for a better understanding of the system. During the analysis of the employee surveys we kept the three “legs” of competitiveness, but with a slight modification: within the capability to change we investigated only the critical HR practices identified by the previous lean literature (see Table 15.1).
Questions in Table 15.5 are about the dimensions of the capability to operate. The employees could mark those statements with which they agreed.

Of the performance measures given, improvement in productivity was the one most frequently chosen at both companies (83% and 50%), though the frequencies differ significantly. The distribution of employee answers indicates that lead time/cycle time, the scrap ratio and quality became significantly better at both companies. As for the remaining measures (inventories, costs, process stability), Rába performed better. There is also another difference between the companies in the response rate, which was higher at Rába. This can be partly explained by the fact that the changes made at Rába were deeper and better communicated. It is nonetheless strange that despite the radical changes only 40-50% of the employees perceived some improvement. The operational results of lean appear at shop floor level, though to a much lesser extent than among the top managers or than could be expected after a radical improvement. The difference between employee perceptions and reality can originate from many sources, e.g. internal communication, the focal points of the employee reward system, or employee ignorance. It is worth considering how the achieved results can help in the acceptance and sustainability of lean.
In this part we concentrate on the presence and effects of the critical lean HR practices identified previously in part 2.2.2.1. We investigate only the (2) personal skills and (3) decision making and communication parts of the capability to change. The results in Table 15.6 suggest that there are significant differences between some of the HR practices, and that the whole transformation was deeper at Rába.

Training. According to the literature, training is one of the key points, though the employees of the companies evaluated it only as “I slightly agree”. Employees perceived some improvement compared with the previous routines, but only to a small extent. The most plausible explanation is that the employees only got the necessary knowledge about lean basics, but beyond this there was no further education (see below for more details).

Cross-functional workforce. Workers' perceptions reflect a higher horizontal workload and more supplement activities to do. The latter originates in the quality approach of lean management: the worker is responsible for his/her work environment, since it affects product/service quality. Supplement activities, namely 5S and smaller maintenance tasks (introduced by both firms), do
Table 15.6 Capability to change: shop floor workers on critical HR practices in lean²

HR practices (n: Rába, Okin) | Mean (SD) Rába | Mean (SD) Okin | F (significance)

Training
Learning is essential at my company. (78, 91) | 3.88 (1.562) | 3.21 (1.410) | 8.729 (0.004)
Employees were or will be given some form of training on how to use the technology/tools that is required to implement lean. (81, 86) | 2.88 (1.308) | 3.13 (1.532) | 1.293 (0.257)

Cross-functional workforce
Since lean I have to know more kind of operations. (82, 89) | 2.30 (1.274) | 2.63 (1.265) | 2.788 (0.097)
Since lean I have to do more supplement activities. (82, 91) | 2.55 (1.441) | 2.78 (1.237) | 1.291 (0.257)

Empowerment and decentralization
For decisions concerning my work my opinion is also taken into account. (83, 90) | 2.30 (1.274) | 2.63 (1.265) | 2.788 (0.097)
Since lean I have to do more supplement activities. (82, 91) | 3.22 (1.490) | 2.93 (1.356) | 1.717 (0.192)
I have the opportunity to improve processes. (80, 88) | 3.00 (1.322) | 3.02 (1.339) | 0.012 (0.912)
My boss allows me to be creative. (80, 89) | 3.31 (1.365) | 3.04 (1.269) | 1.743 (0.189)
Lean innovation mistakes are tolerated. (80, 90) | 3.39 (1.355) | 3.22 (1.356) | 0.630 (0.428)

Team work
Within my organization, management and employees work together to solve problems. (83, 89) | 1.65 (0.706) | 2.00 (0.789) | 0.069 (0.792)
My coworkers supported/support me in lean implementation. (80, 89) | 2.73 (1.043) | 3.11 (1.352) | 4.277 (0.040)
Teams were or will be developed to implement lean. (80, 87) | 1.83 (0.792) | 2.91 (1.30) | 41.405 (0.000)

Communication
I understand why lean is/was implemented. (76, 90) | 2.12 (0.923) | 2.64 (1.164) | 10.131 (0.002)
I got the necessary knowledge about the essence and background of lean transformation. (81, 90) | 2.69 (1.281) | 3.28 (1.529) | 7.301 (0.008)
Before lean implementation my manager clarified my tasks. (81, 88) | 2.54 (1.275) | 3.26 (1.410) | 11.986 (0.001)
During lean implementation my manager clarified my tasks. (80, 90) | 2.53 (1.253) | 2.91 (1.295) | 3.882 (0.050)
I was told the reasons to implement lean. (81, 88) | 2.10 (1.020) | 2.97 (1.504) | 18.908 (0.000)
My managers told me when and how lean would be implemented. (81, 90) | 2.09 (0.883) | 2.91 (1.403) | 20.616 (0.000)
I was informed about the results of lean. (77, 79) | 2.10 (0.867) | 3.35 (1.396) | 44.880 (0.000)

² All questions were asked on a 1 to 6 scale, where 1 means total agreement, 2 stands for agreement, 3 is for slight agreement, 4 is for slight disagreement, 5 stands for disagreement and 6 means total disagreement.
not require professional training, only more basic lean knowledge. So, seeing the slightly “positive” rating of learning in Table 15.6, we conclude that the companies' efforts in job enrichment (more manufacturing operations) build much more on the effective exploitation of existing worker knowledge and properly organized processes. In other words, (professional) training mostly supports technological changes. In a lean transformation one should consider and not waste professional worker knowledge, but use it as a valuable resource (as our case companies did).
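As a side note, the F statistics in Table 15.6 come from two-group comparisons, where a one-way ANOVA F equals the square of the pooled t statistic; they can therefore be approximately reproduced from the reported means, standard deviations and sample sizes. A sketch for the “Learning is essential” row (the small gap to the reported F = 8.729 is due to rounding in the published summary figures):

```python
def anova_f_two_groups(n1, mean1, sd1, n2, mean2, sd2):
    # One-way ANOVA with two groups is equivalent to a pooled t-test: F = t^2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    t = (mean1 - mean2) / (pooled_var * (1.0 / n1 + 1.0 / n2)) ** 0.5
    return t * t

# "Learning is essential at my company.": Raba n=78, Okin n=91 (Table 15.6)
f = anova_f_two_groups(78, 3.88, 1.562, 91, 3.21, 1.410)
# f comes out close to the reported 8.729
```

Running the same function over the other rows gives a quick plausibility check of the reported significance levels.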
In addition, basic lean training is an unavoidable premise, as is the deepening of training at all hierarchical levels in order to sustain lean later on. One practical consequence is that managers in a lean environment should pay more attention to the learning phase of newly employed people.

Empowerment and decentralization. The firms rely on employees' active participation in improvement activities. The shop floor workers can be either “advisors” or “makers”, i.e. they have the opportunity to build their own ideas into the reorganized processes. Emphasizing the trial-and-error approach of improvement activity (as perceived in our cases) provides an innovative environment for lean efforts. Although the basis for worker participation is good, as workers have the opportunity to take part in projects in a tolerant atmosphere, the opinions in Table 15.7 suggest that there is room for further improvement. Only about 40-50% of shop floor employees feel involved. According to operators' perceptions, middle managers have the most important role in disseminating and applying lean. This organizational level connects top managers' lean commitment with daily professional practice.
Team work. In lean, the unit of work organization is the team, especially in the case of Rába. In our companies the foremen/managers hold the leading position in team work (problem solving): they manage problem-solving activities, coordinate and frame "leaning", and dominate the radical changes of the implementation phase. These findings support the previous paragraph: active participation of top and middle management is a substantial success factor in leaning the shop floor.
Communication. Comparing the two firms, this element shows the most remarkable difference, with Rába having an overall advantage in all dimensions. The gap can (at least partially) be explained by the depth of the changes, and at the same time one should be aware that this "amplitude" can and should affect the communication strategy. Offensive (proactive) communication deserves extra attention at
15 Impact of Lean Management 193
each stage of the implementation (before, during, and after): informing workers about the background and reasons, about tasks and responsibilities and, evidently, about results. The top-down flow of data and information is tightly connected to an active management role, an additional signal of managers' central position. The averages on result feedback (bottom line in Table 15.6) confirm our explanation of the differences in the capability to operate measurements (Table 15.5): employees at Rába are better informed. The outstanding value of the automotive supplier on this question might be misleading, since operators' perceptions, as we argued earlier, lag significantly behind the real figures.
To summarize, HR practices related to lean management are of high priority in successful lean companies. According to shop floor perceptions, their actual deployment depends on the depth of the transition and on managers' lean commitment. Our case companies suggest that, because lean exploits operators' knowledge more effectively, workers are responsible for more "core" and supplementary activities. Shop floor employees can actively participate in process improvements; nevertheless, problem-solving teamwork is mainly dominated by managers. Communication plays a considerable role from the first steps, and employees are kept informed along the lean journey.
Business Performance
Although the differences are more pronounced, the pattern of answers resembles the operative figures (Table 15.8). Almost unanimously, Rába workers tie lean to profitability, and this relationship seems to be much stronger than the one with manufacturing performance measurement. In the case of Okin, workers did not perceive improved profitability. One powerful rationale is the depth of the changes; another is that the recent history of Rába was marked by losses and lay-offs, while Okin's operations covered their costs and grew constantly. The case studies indicate that workers are aware of the effect of lean on company performance (both operational and business), but they underestimate this effect. It is worth considering whether better feedback could improve employee commitment and satisfaction.
15.4 Conclusion
The findings presented in this paper illustrate that becoming lean can contribute to
business competitiveness.
(1) Our case studies confirm the previously "proven" positive relationship between lean management and operative performance measures. The data showed that lean implementation can enormously improve operative performance in the early stages of the transition. (Behind the measurable results there can also be some hardly measurable effects: stability, order, and cleanliness.) This positive relationship is recognized company-wide: not only (top) managers know about it, but the majority of shop floor workers as well, although in the latter case the perception of the extent of the improvements lags far behind the real outcomes. More active and effective communication of operative successes (especially those employees can influence directly) could support the acceptance of lean and enhance workers' lean commitment.
(2) Human resources appear in the lean literature as a central element of the transformation process; in spite of this, they are rarely the focus of empirical work. In accordance with the academic community, we found that besides production-related tools, our case companies apply HR-related practices most frequently. Beyond (i) the growing importance of the middle management level, this new HR approach appears (ii) in training, (iii) in more effective exploitation of existing expertise (covering problem solving, improvement activities, and job enrichment), (iv) in team work and (v) in intensified communication at the shop floor level. This may be a challenging issue for an OM professional: lean is not only about operations, because people require at least as much attention as production and service processes.
(3) Although lean is a "fashionable" management topic and academic "fad", surprisingly there is almost no research on its possible financial impact. According to the financial data, the case companies started lean in a period of "motivating crisis", marked by operating losses and anticipated growing demand. Our case data point to a positive relationship between lean and business measures. The cases also highlight that several factors (e.g. competition in the industry, product characteristics, and customer power) can affect business performance, and any of these might be stronger than the potential financial outcome of improved operational performance. Workers' perception is mainly shaped by communication and by the earlier performance of the company.
Limitations
In research like this, the question of validity and reliability is crucial. We used two means to enhance them: (i) the work is based on both managers' and shop floor workers' points of view; (ii) we combined qualitative (interviews, company documents, visits) and quantitative (surveys, company documents) sources during the data gathering and explanation phases. In spite of the researchers' endeavor, the paper, with special regard to its findings, should be handled carefully, since the companies differ in size, operate in different industries and business environments, and follow their own lean "path". The number of case companies, together with the methodology used to analyze them, clearly constitutes a limitation, so the research cannot conclude with general statements. However, we believe that the chosen research framework served our research objective: examining the business competitiveness of lean companies.
198 Krisztina Demeter, Dávid Losonci, Zsolt Matyusz and István Jenei
Abstract The management of public sector service operations has gained much
attention in the scientific literature during the last fifteen years. As in the industrial
world, also in the service world, different types of processes exist, requiring different
kind of tools and improvement actions. A group of challenging service processes are
the so called ‘fluid service processes’ that are considered as uncontrollable, people-
dominated, diagnosis-focused, and the traditional process improvement tools do not
seem to work in them. The focus of the study is inter-organisational cooperation in
fluid service process delivery. The specific focus of the paper is in understanding and
reducing the service process lead-time. The study is based on three interconnected
action research projects conducted in a Finnish municipality. The results of the study
show that the traditional process development approach is not enough when trying
to solve process-related problems in the inter-organisational context, such as the
lead-time of a service process.
16.1 Introduction
Service process development has become an important topic both in private and
public sector services. In the current financial turbulence, making service operations even more efficient while at the same time emphasizing customer orientation is a challenging task for every organisation. It is, however, an inspiring starting point for research in the service sector, which has certain traditions but is still
Henri Karppinen
Department of Industrial Management, Lappeenranta University of Technology, P.O. Box 20, FIN-53851 Lappeenranta, Finland, tel: +358-5-621 2649, fax: +358-5-621 2699,
e-mail: henri.karppinen@lut.fi
Janne Huiskonen
Department of Industrial Management, Lappeenranta University of Technology, P.O. Box 20, FIN-53851 Lappeenranta, Finland
in a developing stage. The interest of this study lies in a specific area of services: the inter-organisational service context, where many organisations from the private and public sectors, with diversified objectives and motivations to participate in cooperation, work together in order to produce a service for the customer. In the literature, the topics of inter-organisational cooperation and relationships, as well as the more operational view of service process management, are widely discussed and analysed, but unfortunately separately. Service process management is also a developing area of research and theory. Our intention is to emphasize that research on the service sector should be about seeing service as it is, and only after that trying to form the needed approaches and tools, not trying to fit services into existing models based on the industrial world. The research work described in this study was performed in a Finnish municipality in three different service sector processes.
We define the research gap and the target of the study on the basis of the setting explained in the introduction. The research gap is very practically oriented; in order to reach our objective we have selected action research as the methodology, and our aim is to maintain practical relevance while trying to influence theory creation. The starting point of our study was a prolonged lead-time problem in three service processes. A long lead-time causes problems in terms of costs, but also when measuring the quality of the service delivered to the customer. The customer has a significant role in all three processes, and all of them are very labour intensive. In the case processes, the biggest challenge is having many different organisations involved in the service delivery. The participation is also very intensive: not the usual buyer-supplier relationship, but more like a joint activity or joint venture.
We started by analysing the existing literature on two separate and usually not interconnected themes: inter-organisational cooperation/relationships without a common organisational structure, and managing complex, customer- and labour-intensive multi-stage service processes. Oliver (1990) presents the joint-program cooperation form, a specific programme of two agencies working together when planning and implementing common activities, but without a common organisational structure. The prerequisite, according to Oliver, is that the objectives of the two individual agencies can only be achieved by cooperation. Joint programs are formalized arrangements which tend to institutionalize and stabilize the inter-organisational exchange of resources. Oliver mentions that this form of cooperation is usual when dealing with social services and the cooperation related to them. A linkage to
managing joint operations is not mentioned, indicating a common problem in this
literature. Much of the literature focuses on static exchange relationships with an
economical rather than an operational perspective (e.g. Ouchi, 1979; Dekker, 2004;
Cäker, 2008). A single attempt to raise the important question related to the oper-
ational view of a cooperative service process has been made by Provan and Sebastian (1998). They mention the idea of an informal or formal integration structure
that aims to coordinate the services their clients need. This integration form varies from a single information transaction to full-scale sharing of resources and programs.
Lundgren (1992) discusses the coordination of activities in the industrial network
context. He states that coordination in networks usually means organising functions
and flows, activities and relationships within a network to increase the effective-
ness of the activities. Coordination of activities will cause changes in the resource
structure, but in the network context it also offers possibilities to form new kinds
of combinations of different resources and activities. Although Lundgren discusses industrial networks, similar characteristics also apply to the service context: coordination focuses on cooperation, the process of interaction between the members of the network. Laing and Lian (2005) have formed four different categories
of factors involving the coordination of interactions in the service context: trust,
closeness, process factors, and organisational policy factors.
Our target is to find key elements of the processes in our specific case systems.
We base our processual view on Wemmerlöv (1989), who uses two process categories: rigid service processes and fluid service processes. Our interest is in fluid service processes, which are described as follows: they usually require relatively high technical skills; a great amount of information is needed in order to specify the exact nature of the service; the service worker goes through unprogrammed search processes and makes several judgement decisions, meaning that the process is not well defined; the volume of people handled per unit of time is low; workflow uncertainty is high; the process normally involves only one customer at a time; and the response time to a customer-initiated service request is often fairly long. Wemmerlöv adds that fluid processes are often people-dominated and
they often exist in highly professional organisations.
The challenges in managing these kinds of processes are high. Because of the wide scope and unpredictable service requests, the forecasting of service flows should be tightly connected to the resource use per time unit (load), and most development efforts should be focused on the expertise of individual persons and the information on which they base their process control decisions. Though according to Wemmerlöv (1989) standardization of the fluid process is difficult or worthless,
Bowen and Youngdahl (1998) present an opposite idea in which they combine lean
thinking and the product-line approach. Their idea is that mass customization is
possible both in the industrial world and the service world, and it is all about “hav-
ing flexible processes and structure when producing variable or even individually
customized products or services” (op. cit., p. 222). One of the elements they
include in their idea is the networked organisation that does not exclude responsive-
ness, flexibility or focus on individual customers.
The third part of the literature analysis concerns defining the lead-time in the
context of multi-staged fluid service process. According to Wemmerlöv (1989, p.32)
“Accurate time standards are difficult to derive, and, due the variance in tasks and
processing times, often not worth the effort developing.” From the customer perspective, the lead-time of a service is important because it affects the ‘service experience’ and, further, the ‘service quality’. From the service provider’s perspective, the
lead-time is often connected to costs; the longer the customer is in the process, the
higher the costs are. We see the service process lead-time as a relevant and impor-
tant measurement, and it should not be undervalued. In this study we consider the
true lead-time as the time the customer is within the process, and both passive and
active time should be included in the lead-time.
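As a minimal illustration of this definition (all timestamps and stage names below are invented), the true lead-time can be computed from a customer's event log by spanning first entry to last exit and splitting active from passive time:

```python
from datetime import datetime, timedelta

# Hypothetical event log for one customer: (start, end, active) per process stage.
# Active stages are those where the customer is actually being served; the rest
# of the span, including waiting between stages, counts as passive time.
events = [
    (datetime(2009, 3, 2, 9, 0),  datetime(2009, 3, 2, 9, 40),  True),   # diagnosis
    (datetime(2009, 3, 2, 9, 40), datetime(2009, 3, 9, 14, 0),  False),  # waiting
    (datetime(2009, 3, 9, 14, 0), datetime(2009, 3, 9, 15, 0),  True),   # treatment planning
]

def true_lead_time(events):
    """Return (total, active, passive); total runs from first entry to last exit."""
    start = min(s for s, _, _ in events)
    end = max(e for _, e, _ in events)
    total = end - start
    active = sum((e - s for s, e, is_active in events if is_active), timedelta())
    return total, active, total - active

total, active, passive = true_lead_time(events)
```

In this toy log the customer spans 7 days 6 hours, of which only 1 hour 40 minutes is active service; the remainder is the passive time that a "visible activities only" measurement would miss.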
The research gap is based on the observation that we do not have enough focused theory connecting the areas of inter-organisational cooperation and the management of multi-staged service processes with the needs existing in real service systems. Our research target, and our effort to fill the gap, is "to analyze lead-time related problems in the fluid service process and to find the factors that have an influence on lead-time improvement efforts".
The problem analysis was conducted in three different public sector service processes using action research methodology. The selection of methodology was guided by two main needs: a need to increase the researchers' understanding from the research point of view, in order to benefit the theory building (primary), and a need to benefit the client system with feasible and process-oriented solutions (secondary).
The client system included two healthcare processes and one residential application
process in a Finnish municipality. The original objectives in the process develop-
ment projects included improving the productivity and service quality, improving
the customer flows and lead-time as well as achieving better customer satisfaction,
both with internal and external customers.
The first process (process A) involved children under the age of 7. The patients in
this process typically suffered from developmental disorders and problems that were
caused directly or indirectly by their parents. The second process (process B) involved young persons aged 13 to 20; these patients had developmental disorders and parent-related problems, but also mental health, alcohol, and drug abuse problems. The third process (process C) was a service process for people applying for an
apartment or a house from the municipality, and unlike in the first two processes,
the process itself was much less visible to the customer.
The action research projects included all six main steps of the action research cy-
cle: data gathering, data feedback, data analysis, action planning, implementation,
and evaluation (Coughlan and Coghlan, 2002). In process A the action research steps were conducted in nine workshops ( day), in process B in six workshops, and in process C in three workshops. The research team included three researchers: a facilitator,
an observer in the workshops, and a researcher not participating in the workshops
but taking part in the data analysis and evaluation. This kind of setting was needed because of the validity and subjectivity challenges related to action research as a
methodology (Coughlan and Coghlan, 2002; Zuber-Skerrit and Fletcher, 2007). The
integration of methodological and problem-solving steps is presented in Fig. 16.1.
16 Reducing Through Inter-Organisational Process Coordination 203
Fig. 16.1 Integration of methodological steps and process related problem solving
We base our view on observations made in the workshops, focusing on process mapping, problem analysis and solution definition. Already in process A we learned that process-level coordination, as presented in the process management literature, is inadequate because it tends to create a situation where problem analysis produces overly aggregated solutions. Flexibility is needed not only in the service processes but also in the problem analysis. The process mapping and analysis chart (modelling part) of case process B is presented in Fig. 16.2.
The analysis indicates that in these kinds of processes, the most difficult problem
when trying to improve the lead-time of a service process is a too static view of
the process. The thinking of managers is too much focused on setting high service
standards, measurements, quality systems, etc., while at the same time the employees on the operating level are too focused on individual tasks. The service process level does not exist at all, or at least not the kind of process perspective found in the service process literature. Flexibility and responsiveness do not work because
the service process definitions and operational policies are based on static and unre-
alistic definitions. The service process lead-time is a sum of different service paths,
often tailored for individual customers, with the result that the original lead-time
targets are never met. The original targets and lead-time measurement however pre-
suppose that the process has a well defined workflow and the customer will get
‘standardized service’ with low variance.
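The gap between such static targets and the realized mix of tailored paths can be shown with a toy calculation (all shares, durations and the target below are invented): the expected lead-time is the share-weighted sum over the service paths customers actually follow, not the duration of the standard flow the target was set for.

```python
# Hypothetical mix of service paths for one fluid service process.
paths = [
    # (share of customers, lead-time in days)
    (0.5, 10),   # the standard path the original target assumed
    (0.3, 25),   # tailored path with an extra diagnosis loop
    (0.2, 60),   # tailored path with inter-organisational hand-offs
]

target_days = 12  # original lead-time target, set for the standard flow only

# Share-weighted expected lead-time over the paths actually followed:
# 0.5*10 + 0.3*25 + 0.2*60 = 24.5 days, roughly double the static target.
expected_days = sum(share * days for share, days in paths)
```

Even though half the customers meet the target, the tailored paths pull the average far above it, which is why lead-time targets based on a single well-defined workflow are never met.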
Fig. 16.2 Process map and analysis chart in the case process B
On the basis of the problem analysis, we can also state that the process-level view, which should include managing the process flows and controlling the interfaces between the process events, did not change when the load in the process changed. This meant in
processes A and B that alternatives were originally not defined at all for different
process flows. If the process flow stopped, then the customers/patients were moved
to an unplanned and often unintended place to wait for the planned process flow to
continue again. In processes A and B, the lead-time measured (if measured at all) was not a result of preset and locked service paths, but rather of the unplanned (true) service paths. At the same time, since the process alternatives were not defined, the process included events where, according to the process descriptions, different actors produced the same activity, but no flow control existed. As process-level control did not exist, the solutions made at the event level were not optimised, and the lead-time became longer and longer. Despite the fact that there should have been a cooperative process, the focus was purely on single events.
In process C, the application process was a centralised solution in which three major apartment/residence owners bought a centralised service from the municipality's service provider. A problem in the centralised process could, in the worst case, stop the service for all three companies, even if the 'problematic customer' was a customer of only one company. In general, the problems were process-flow-control focused (flow control was based on diagnoses made in individual events, e.g. by doctors or experts; process-load-based control decisions did not exist). Simplified, a fluid service process is all about recognizing the state of the process proactively and making control decisions that are process-load and flow based. This does not exclude
16.4 Discussion
As an answer to the research gap defined, we found out that in all case processes the
dependency/relationship setting between actors was created first, and only after that
the single service processes were planned. This creates a locked situation, where the
cooperation is based on preset dependencies, not on the actors’ interests/objectives,
or the service delivered to the customer. In our opinion, the inter-organisational setting creates a need to develop the service process not only on the process level but also on the inter-organisational level and the single event level. Unlike the literature
related to the subject, we consider that a fluid service process is controllable in the
inter-organisational setting. The idea is that if flexibility and responsiveness are to
be maintained on the process level, the actions can and should be controlled and the
operations standardized on the inter-organisational level.
Only when the process-related policies, organisational roles, rules, the required
service paths and alternatives for the different service loads are set on the inter-
organisational level, can the service process planning begin. The process-level planning should be based on policies created at a higher level of cooperation, and the
service process should be planned to be active, or even proactive, not static like the
service literature describes. Agreeing on common policies does not mean that static process boxes and arrows have to be formed for a single option only. The
idea of three-level planning is that the service process is intelligent and responsive,
a viable system. This is achieved by doing the process planning in cooperation with
the organisations, and setting needs for the resources and the process alternatives
needed in the service delivery together.
At the single event level, which is the smallest entity in our model, the actions
are based on process/event-related professionalism. Therefore the flexibility at this level lies in making the diagnoses needed and in having the right selection of service process alternatives for different customer cases and process loads. These
procedures, the decision-making rules, must be pre-planned, and the person who
diagnoses only selects the right path for the customer case based on the state of the
service process (intelligent service management).
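A sketch of such a pre-planned, load-aware selection rule (the diagnoses, path names and load ceilings are all hypothetical) could look as follows: the diagnosing person does not design a path, but only selects among pre-planned alternatives based on the state of the process.

```python
# Pre-planned service-path alternatives per diagnosis, ordered by preference.
# Each alternative carries a load ceiling: the maximum utilization (0..1) of its
# downstream resources at which it may still be chosen.
SERVICE_PATHS = {
    "developmental": [("specialist_track", 0.8), ("group_intervention", 1.0)],
    "routine":       [("standard_track", 0.9), ("self_service_track", 1.0)],
}

def select_path(diagnosis, load_fraction):
    """Pick the first pre-planned alternative whose load ceiling is not exceeded.

    load_fraction: current utilization of the preferred downstream resources.
    """
    for path, ceiling in SERVICE_PATHS[diagnosis]:
        if load_fraction <= ceiling:
            return path
    # Overloaded system: fall back to the last (least constrained) alternative.
    return SERVICE_PATHS[diagnosis][-1][0]
```

The decision-making rules live in the pre-planned table, so the event-level professional only supplies the diagnosis and the current process load; the flow-control decision itself is standardized at the inter-organisational level.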
In summary, the inter-organisational level is for service process policy, rules, and role setting; the process-level planning must be based on finding the right process paths and the alternatives for the different process loads; and the event level must be based on the required professional skills and on diagnosing the customer cases, as well as on strict rules for controlling the process flows. The study has some scientific limitations; one of them is that the idea of 'levels of coordination' was formed through the observations made during the action research projects, not beforehand.
The decisions made in case process C were, however, a result of new thinking, em-
phasizing “the levels of coordination”. The second limitation is that we do not have
enough quantitative data to validate our observations and client system view related
to service process lead-times (based mostly on qualitative data).
The implications for future research and also for practitioners are similar: we
need to test these ideas further, to find out whether the model of ‘intelligent service’ is beneficial and applicable in other service processes as well. One important issue
related to future research is the challenge of modelling intelligent service, and main-
taining at the same time the relevant information and intelligibility. We intend to fo-
cus also in the future on the ‘service-intelligence’-models in the diagnosis-focused
service processes.
Abstract This study on the management of business process flows of venture cap-
ital (VC) firms explores the relationship between the utilization rate of the human
resources within the VC firm and deal (project) rejection rate under consideration of
contextual factors. We employ an exploratory research design (a historical case anal-
ysis) as well as quantitative model oriented research based on empirical data in order
to understand what is really going on in terms of VC firm processes with regard to
their system dynamics. We utilize a longitudinal data set comprising 11 years of
archival data covering 3,340 investment decisions collected from a European-based
VC firm. The results indicate that, over time, there are considerable dynamics in the
VC decision making process. Specifically, the investment decisions of venture cap-
italists are influenced by firm-specific factors related to the human capital resources
of the firm, namely management capacity. Implications of these results for research
and practice, in venture capital as well as other service industries, are discussed.
17.1 Introduction
Venture capital (VC) firms, which are typically staffed by a small team, receive deals
according to a stochastic arrival rate over the life of a fund. During this time the team
is responsible for the deal flow process that involves the evaluation of deals, structur-
ing investments, managing portfolio companies and liquidating the fund’s portfolio
(Tyebjee and Bruno, 1984; Fried and Hisrich, 1988), as well as managing the firm
itself.
Jeffrey S. Petty
Lancer Callon Ltd., Suite 298, 56 Gloucester Road, UK-SW7 4UB London
e-mail: jpetty@lancercallon.com
Gerald Reiner
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: gerald.reiner@unine.ch
These firm processes are based upon assumptions within a deterministic environment (no demand variability, etc., is taken into consideration) and are typically
driven by financial performance measures rather than operational ones. Hence, the
question arises as to how the “quality” of the process output is affected during the
life of a venture fund (e.g., is there a higher risk of rejection, even for a potentially suitable project, at different times, based upon capacity problems within the VC firm?). The literature focused on VC decision making (Macmillan et al, 1985;
Dixon, 1991; Zacharakis and Shepherd, 2005) does not take this firm-specific as-
pect into consideration and focuses typically on the “quality” of the potential deal
based upon: (i) the company’s management team, (ii) the market, (iii) the product
or service, and (iv) the venture’s financial potential. Thus, the existing literature
fails to address the potential impact of the firm-specific processes and resources
(Barney, 1986, 1991; Hitt and Tyler, 1991; Mahoney and Pandian, 1992) on the
strategic decisions (Eisenhardt and Zbaracki, 1992), and ultimately the strategy and
performance, of a firm. As such, until now there have been no direct time related
requirements considered for designing, planning or managing the VC firm’s pro-
cesses. The management and delivery of a VC firm’s product and related processes,
as well as other professional service firms, can typically be characterized as “make-
to-order” (Naylor et al, 1999). Therefore, in general, the question arises about the
quantity of available firm resources (e. g. people, systems, capital) that are required
to successfully pursue the firm’s strategy within the specified time and with the
specified level of service (e. g., Jammernegg and Reiner, 2007). This capacity man-
agement approach is suitable for classical relationships between a service provider
and its customers/clients wherein the scope of services and time requirements can
be specified in a contract. However, what happens in terms of resource allocation and process effectiveness if no clear classification of order requirements, especially with respect to time-related aspects, is possible? Therefore, in our study we deal primarily with the following research questions:
(1) Over time, what is the impact of firm structure (lean staffing) on deal evaluation
and decision making?
(2) What are the firm-specific processes that influence deal evaluation and decision
making under consideration of dynamic aspects?
(3) Is there a relationship between utilization rate changes over time and the related investment decisions of VC firms?
longitudinal setting, the research design adopted in our study focuses on validity and
accuracy rather than generalizability, and provides the basis for the development of
new theory which then can be further advanced following a multiple-case replica-
tion logic and through large-scale survey research (Eisenhardt, 1989; Strauss and
Corbin, 1998; Yin, 2003). As this study seeks to explore the factors affecting the
VC’s processes over time, the use of archival data analysis is preferred over an in-
terview or survey approach because it allows for the collection and analysis of the
different measures over several time periods. This approach also provides access to
information that helps to gain a more realistic view of the actual environment as well as the actions taken by the subjects at the time. This helps to enhance the validity of the data, as it eliminates recall bias on the part of the subjects as well as other limitations often associated with self-reported techniques (Hall and Hofer, 1993; Shepherd and Zacharakis, 1999). We also conduct quantitative model-oriented research, especially under consideration of empirical data, based upon the
results of the qualitative research activities. Bertrand and Fransoo (2002) pointed out
that the methodology of quantitative model-driven empirical research offers a great
opportunity to further advance theory (Davis et al, 2007). In general, quantitative
model-based empirical research provides the ability to generate models of causal
relationships between control variables and performance variables. These models
are then analyzed or tested using different scenarios involving varying levels of constraints on the subject variables. The primary concern of this research approach is to ensure a fit between the actual observations and actions and the resulting model. Utilizing a combination of different research approaches thus enables us to address the aforementioned research questions.
The data used in the model was collected from the archival records of a European-
based VC firm and included the investment/rejection decisions on more than 3,600
deals that had been received by the firm over an 11-year time period. The data set
was created by reading all 7,284 passages of text in the firm’s deal flow data base
as well as related emails and memos in the archived deal files. The database entries
for deals that had made it beyond the initial screening phase into the evaluation and
due diligence phases typically contained a synopsis of the VC’s findings and views. A random sample of 350 deals was selected in order to compare the notes in the files to the comments entered in the action log; there was no evidence of gross omissions or any material rewording of comments, so the database was deemed a reliable data source. The time a deal spent in the selection process ranged from one day to more than a year and, after eliminating those deals that lacked sufficient information to be included in the study (e. g., the date of submission or the VC decision was missing), the resulting sample included 3,340 deals. The firm was staffed by a
small team of VCs and the average acceptance rate of deals submitted to the firm
over the entire period was 1%, which is consistent with the description of VC firms
and industry averages reported in many other studies. We will use the term “firm”
to describe the VC firm, whereas the terms “company”, “deal”, and “proposal” all
apply to the entrepreneurial ventures evaluated by the VC.
This model will provide time-specific utilization information for further statistical analysis.
We will describe the main mathematical equations, input data as well as variables
(flows and stocks) and performance measures of our model. The above-described initial exploratory research provided the input data for our analysis, i. e., proposals ($P_t$), resources ($R_t$), rejections I ($R^{I}_t$), rejections II ($R^{II}_t$), rejections III ($R^{III}_t$), activity times for screening, evaluation, structuring and portfolio ($A^{I}_t$, $A^{II}_t$, $A^{III}_t$, $A^{IV}_t$), termination ($T_t$), number of newly hired employees ($H_t$) and number of employees departing from the firm ($Q_t$). The process operations of the venture capital firm are specified as follows. The number of proposals waiting for screening ($SC_t$) is increased by $P_t$ and reduced by the maximum processing rate ($O^{I}_t$). $O^{I}_t$ is determined by the activity time I ($A^{I}_t$) as well as the available number of resources within the resource pool ($R_t$):
$$O^{II}_t = \min\left(\frac{R_t}{A^{II}_t},\; EV_{t-\Delta t}\right) \qquad (17.5)$$
$$U_t = 1 - \frac{R_t}{N_t} \qquad (17.13)$$
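The stock-and-flow recursions above can be illustrated with a short discrete-time simulation. This is a simplified, single-stage sketch with invented numbers, not the authors' calibrated model; the function `simulate` and its parameters are hypothetical names used only for illustration.

```python
# Single-stage sketch of the screening stock-and-flow logic:
# SC(t+1) = SC(t) + P(t) - O_I(t), with O_I(t) = min(R(t)/A_I, SC(t)),
# and utilization U(t) = 1 - R(t)/N as in Eq. (17.13).
# All numbers are invented for illustration.

def simulate(P, N, A_I, periods):
    """P: proposals arriving per period, N: total resources,
    A_I: screening activity time per proposal (resource-periods)."""
    SC = 0.0           # stock of proposals waiting for screening
    R = float(N)       # resources currently free in the pool
    history = []
    for t in range(periods):
        SC += P[t]                      # inflow of new proposals
        out = min(R / A_I, SC)          # processing capped by capacity
        SC -= out                       # outflow to the next stage
        busy = out * A_I                # resource-periods consumed
        R = max(N - busy, 0.0)          # resources left for next period
        U = 1 - R / N                   # utilization, Eq. (17.13)
        history.append((round(SC, 2), round(U, 2)))
    return history

print(simulate(P=[5, 8, 12, 3], N=4, A_I=0.5, periods=4))
```

Chaining further stages (evaluation, structuring, portfolio) with their own activity times reproduces the structure of the full model; the min() operator is what transmits capacity shortages downstream as growing stocks and waiting times.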
Furthermore, we defined time-related performance measures to be able to validate our model. In particular, we calculate the flow time for each VC process operation based on waiting time as well as activity time, i. e., average screening flow time ($SCT$), average evaluation flow time ($EVT$), average structuring flow time ($STT$) as well as the average flow time within the portfolio ($POT$).
To be able to calculate the flow times ($TL$) we use a queuing model with generally distributed interarrival and service times (Hopp and Spearman, 1996).

17 VC Firm Business Process Flow Management and Investment Decisions

Based on the application of queuing models we were able to calculate the average time (in months) spent in evaluation before investment ($BIT$).
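For a single stage, this flow-time calculation can be sketched with Kingman's approximation for a G/G/1 queue as given in Hopp and Spearman (1996). The function name and the parameter values below are illustrative assumptions, not figures from the study.

```python
# Kingman (VUT) approximation for a G/G/1 queue (Hopp and Spearman, 1996):
# waiting time CTq ~= ((ca2 + cs2)/2) * (u/(1-u)) * te; flow time = CTq + te.
# Parameter values below are invented for illustration.

def flow_time(ca2, cs2, u, t_e):
    """ca2, cs2: squared CVs of interarrival and service times;
    u: utilization in [0, 1); t_e: mean effective activity time."""
    if not 0.0 <= u < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    ct_q = ((ca2 + cs2) / 2.0) * (u / (1.0 - u)) * t_e   # expected wait
    return ct_q + t_e                                     # total flow time

# Raising utilization from moderate to high inflates flow time nonlinearly:
print(flow_time(ca2=1.0, cs2=1.0, u=0.5, t_e=0.5))
print(flow_time(ca2=1.0, cs2=1.0, u=0.9, t_e=0.5))
```

The u/(1-u) term is why flow times explode in the latter months of a fund, when portfolio management pushes utilization toward one.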
17.3 Results
Throughout the period under study there were a surprising number of instances
(n = 67) when the team of VCs within the firm simply did not have the manage-
ment capacity to adequately evaluate a potential deal, even when the deal was ac-
knowledged to be potentially viable (e. g. “Interesting but time constraints due to
other due diligence.”). Similarly, during these periods of increased activity, deals
that were viewed as potentially time consuming were rejected, despite any potential
interest (e. g. “it has high potential but very much handson work and stirring will
be required”). These instances demonstrate that the team’s capacity utilization, in combination with the characteristics of the potential “client”, plays a significant role in the decision-making process. Thus, a firm’s strategy with respect to staffing and structure may need to be adjusted in order to adapt to changing demand.
Also, once all of the 3,340 deal decisions had been coded, the occurrence of
deal specific decision reasons in relation to the VC management team’s utilization
were compared with the chi-square test. All analyses were performed with SPSS
(version 15.0). Multiple scenarios using different VC team utilization rates were tested and the chi-square statistic for all tests was highly significant (p < .001), indicating that the stated reason(s) for the decisions are associated with the VC team’s available safety capacity. Additionally, further quantitative analysis
[Fig. 17.2 Utilization [%] and number of proposals over time (months)]
revealed, contrary to what may be expected, that the arrival rate of new deals did not
necessarily predict deal rejection rates or team utilization rates (see Figure 17.2).
However, the relationship between the team’s utilization rate and deal rejection is
much more pronounced, especially in the latter months of the fund when VCs are
operating at a high utilization rate as a result of on-going portfolio management
activities.
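The association test reported above (run in SPSS in the study) can be sketched with a hand-rolled chi-square statistic for a contingency table of utilization level against stated rejection reason. The observed counts below are invented for illustration and are not the study's data.

```python
# Chi-square test of independence on a 2x2 contingency table
# (utilization level x whether a capacity-related reason was cited).
# Stdlib only; the observed counts are invented for illustration.

def chi_square(table):
    """Return the chi-square statistic for a table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n   # expected count
            stat += (obs - exp) ** 2 / exp
    return stat

# Rows: low vs. high utilization periods; columns: capacity reason cited / not.
observed = [[12, 188],
            [55, 145]]
stat = chi_square(observed)
# With 1 degree of freedom, the .001 critical value is about 10.83,
# so a statistic of this magnitude would be highly significant.
print(round(stat, 2))
```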
In this study we explore the relationship between utilization rate and decision mak-
ing, specifically the rejection rate, under consideration of contextual factors. Based
upon both qualitative and quantitative analysis we find that there are considerable
dynamics in the VC decision making process, especially over time. In this context,
we have shown that operations management and in particular capacity management
provides further valuable insight. Furthermore, our findings extend prior conceptualizations of the VC decision-making process by showing that the importance of decision-making criteria can significantly change over the lifecycle of a fund. Developing and
managing a portfolio places constraints on the resources (i. e., human resources) of
the VC firm so that during periods of extensive due diligence, deal closings and
managing existing portfolio companies there is less time to spend screening and
evaluating new deals. Additionally, this study raises the question of whether or not
the existing research focused on individual VC decision making is in fact representative of the actual decision making within the firm. VC decision making is but one
aspect of an organizational process and therefore researchers should not assume that
a study of individual VC decision making is the same as VC firm-level decision making. Much of the research that has been conducted on VC decision making appears to be suited to the initial screening phase of the process, but without considering the current needs or requirements of the firm it is not practical to assume that we are developing models that accurately depict firm-level processes and actions.
17.4.1 Implications
Looked at from the perspective of the VC firm, our findings suggest that VCs should evaluate their existing management capacity and develop strategies to accommodate times when they experience increased deal flow, above-average due diligence activity, a higher number of deal closings, or hands-on management of the portfolio firms. While there has been limited research on how VC firms are organized
and managed (Gorman and Sahlman, 1989), there is no evidence that VCs acknowl-
edge or attempt to address any potential under-capacity in terms of management
time, especially in the latter years of a fund. This missing focus on capacity management, in combination with the performance evaluation of VC firms, is addressed in our research study by integrating state-of-the-art service operations management knowledge. The apparent constraint on VC management time over the life of a fund
adds to the existing literature focused on the allocation of a VC’s time. Although
the issue of VC time allocation has typically been concerned with the management
of the fund’s portfolio (Gorman and Sahlman, 1989; Jääskeläinen et al, 2006), our
findings provide additional evidence that both pre and post-investment activities
(Gifford, 1997; Shepherd et al, 2005) may influence the decisions of the VCs within
a firm. Furthermore, considering that VC management capacity has a considerable
impact on the deal selection process, entrepreneurs need to be aware that there will
be instances when the entrepreneur is basically at the right place at the wrong time:
their business proposal is compelling, yet the VC firm does not have the capacity
to evaluate it and therefore, out of necessity, makes the decision to reject it. As is the case in most service settings, entrepreneurs may be limited in what they can do to influence the expert’s opinions of their proposals; yet they may find it well worth the effort to learn about the current capacity of the firm in order to avoid those times when the firm is overloaded in terms of management time, thus improving the chance that the decision maker will be able to devote their full attention to the proposal. Finally, given that other service firms (e. g. legal, advertising, financial, and consulting) are also characterized by “make-to-order” processes, lean staffing, and demand variability, there may be possibilities for generalizing the research results across the service sector.
A key limitation of this study is that the researchers were not present in the VC
firm during the actual deal evaluation process so there is no way to guarantee that
all of the relevant information was recorded in the firm’s database. Additionally, the
fact that the data was obtained from a single VC firm limits the ability to generalize
the findings across the industry as a whole. Although not addressed in this study,
there are many other factors, such as the size and location of the VC firm (Gupta
and Sapienza, 1992) and biases on the part of individual VCs (Franke et al, 2008;
Matusik et al, 2008; Shepherd et al, 2003), which may also have an effect on an indi-
vidual VC’s investment decisions. Despite these limitations this study shows that the
criteria used by VCs are not consistent over time and provides evidence that there
is much to be discovered about the VC decision making process that cannot be ac-
complished using existing approaches. Based upon the results of this study, selected
firm-specific factors possessing a variable nature (Cyert and March, 1963) appear to
have a greater impact on managerial action and decision making than previously re-
ported. As such, more longitudinal research is required within the context of service
firms that will enable researchers to capture the complexity of the task environment
as well as the resulting decisions. Although each firm pursues a different strategy suited to its specific goals, all venture capitalists operate under similar constraints when it comes to staffing, portfolio demands, and investment restrictions, so similar results are expected in studies conducted in other firms. Further limitations related
to our empirical quantitative model are that we did not take into consideration the
use of resources for sourcing deals and termination (or exit) of the portfolio compa-
nies. However, in reality, the majority of deals received by a VC firm are unsolicited
so the sourcing activity is more passive in nature (Fried and Hisrich, 1988), which
would imply that the time requirements on the part of the VC are quite minimal.
Further research work should deal also with the identification and evaluation of po-
tential VC firm process improvements (capacity management, etc.) and finally the
implementation of the process improvements to be able to finish the entire research
cycle (Mitroff et al, 1974) in combination with longitudinal research.
References
Mitroff II, Betz F, Pondy LR, Sagasti F (1974) On managing science in the systems age: Two schemas for the study of science as a whole systems phenomenon. Interfaces 4(3):46–58
Naylor J, Naim M, Berry D (1999) Leagility: Integrating the lean and agile manufacturing paradigms in the total supply chain. International Journal of Production Economics 62(1-2):107–118
Roberts C (1997) Text analysis for the social sciences: Methods for drawing statistical inferences from texts and transcripts. Lawrence Erlbaum Associates
Shepherd DA, Zacharakis A (1999) Conjoint analysis: A new methodological approach for researching the decision policies of venture capitalists. Venture Capital 1(3):197–217
Shepherd DA, Zacharakis A, Baron RA (2003) VCs’ decision processes: Evidence suggesting more experience may not always be better. Journal of Business Venturing 18(3):381–401
Shepherd DA, Armstrong MJ, Lévesque M (2005) Allocation of attention within venture capital firms. European Journal of Operational Research 163(2):545–564
Strauss A, Corbin J (1998) Basics of qualitative research: Techniques and procedures for developing grounded theory. Sage Publications Inc
Tyebjee TT, Bruno AV (1984) A model of venture capitalist investment activity. Management Science 30(9):1051–1066
Yin R (2003) Case Study Research: Design and Methods, 3rd edn. Sage, Thousand Oaks, CA
Zacharakis A, Shepherd DA (2005) A non-additive decision-aid for venture capitalists’ investment decisions. European Journal of Operational Research 162(3):673–689
Chapter 18
What Causes Prolonged Lead-Times in Courts
of Law?
Abstract The paper highlights the challenges in process performance issues in large
public sector professional organizations. Factors causing process inefficiencies and
prolonged lead-times in two Finnish Courts of Law are introduced and analyzed.
18.1 Introduction
The Finnish Constitution states that everyone has the right to have his/her legal case heard properly and without undue delay before a legally competent court of law. This right is also enshrined in the European Convention on Human Rights. Finnish courts have struggled with prolonged lead-times, and Finland has regularly been the subject of complaints to the European Court of Human Rights concerning unreasonable duration in the handling of judicial cases. Complaints about delays in courts are not
solely Finnish phenomena nor are they something new. The court systems in many
countries have been criticized for years for being inflexible, for taking too long, and
for demanding more and more resources (Martins et al, 2007; McWilliams, 1992;
Smolej, 2006).
This research started with a call for help from the Finnish Ministry of Justice
wanting to study the court system processes in order to find ways to reduce the
Petra Pekkanen
Department of Industrial Management, Lappeenranta University of Technology, P.O. Box 20, FIN-
53851 Lappeenranta, Finland,
e-mail: petra.pekkanen@lut.fi
Henri Karppinen
Department of Industrial Management, Lappeenranta University of Technology, P.O. Box 20, FIN-
53851 Lappeenranta, Finland
Timo Pirttilä
Department of Industrial Management, Lappeenranta University of Technology, P.O. Box 20, FIN-
53851 Lappeenranta, Finland
time that cases stay in the process without endangering the quality of decisions
or increasing the resources. The backbone of court system operations is, like in
all professional organizations, autonomous work of highly motivated and educated
individuals (Brock et al, 1999; Lowendahl, 2005; Mintzberg, 1983). In the court
system, the judges also need to be completely independent and “beyond control” to
ensure objective ruling. Still, at the same time the court system is a process with a set
of sequential tasks and activities linked together, concerning different participants.
It is a process that demands continuous and coordinated flow of a very large number
of individual and infinitely different types of cases. Court systems are thus organizations balancing between the needs of independent professional work and effective mass-production processes. These features are often considered, at least to some degree, opposing: there is a fear that strengthening the process viewpoint and process performance will create unfavorable circumstances for professional work and thus weaken the quality of the decisions made. The problems with process performance indicate that process effectiveness issues have not been given the attention they need in different areas of the operations of justice organizations.
The aim of the research project was to help court system organizations find new ways of working that better take these divergent requirements into account. The first part of the task was to find out what the exact problem was and what
caused it. This paper concentrates on defining the lead-time problem and identifying
and analyzing the reasons and sources for inefficiencies and prolonged lead-times.
The main research question is:
• What are the main factors in the court systems’ current way of working that have caused and influenced the problems in process performance and prolonged lead-times?
The analysis is based on experiences gained from large process improvement
projects in two Finnish Courts of Law. The case organizations are introduced first.
After that, the improvement projects and the data collection methods are presented.
In section 4, an analysis of the factors behind the prolonged lead-times is introduced.
Finally, concluding remarks are made.
The Finnish court system is tripartite for civil and criminal cases. The first level is
the District Courts. The decisions of District Courts can normally be appealed in a
Court of Appeal. The decisions of the Courts of Appeal, then, can be restrictively
appealed in the Supreme Court. In addition, there are special courts, for example
Insurance Court and Administrative Courts.
The first case court in this study is the largest Court of Appeal in Finland. It handles about 4,000 cases annually. The cases are prepared and presented for decision by legally trained referendaries, who are called Senior Assistant Justices. After preparation, one of the judge members, who are called Senior Justices, goes through and verifies the prepared case. A responsible judge and referendary are appointed for every case. The cases are then decided in a court session by a composition of three Senior Justices. The case court has 170 employees and operates in seven departments. Each department operates independently and is headed by one of the Senior Justices. The case handling operations are presented in Fig. 18.1.
The needed preparation time varies according to the complexity of the case. The cases are divided into five size groups: S, M, L, XL, and XXL. There are two types of cases: criminal cases and civil cases. The civil cases are usually more complex and require more preparation time. The cases are also prioritized and categorized into three classes according to the assessed urgency of the case. The first priority level concerns “emergency” cases, which need to be handled immediately, for example child guardianship issues or restraining orders. Other cases are divided into priority level 2 or priority level 3 according to several criteria based on the nature of the felony or dispute.
There are two main ways to handle individual cases: a written procedure or a
main hearing. In a main hearing, the parties involved are present and witnesses are
heard. The average lead-time in 2006 was 12 months, but the dispersion of lead-times was huge, from weeks to several years. Since 2003, the case court has annually solved more cases than have arrived. Nevertheless, the proportion of very old cases was alarmingly high when the improvement project started: 34 % of the pending cases were
older than 12 months. The age of the pending cases at the start of the improvement
project is shown in Fig.18.2.
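The headline indicator behind Fig. 18.2, the share of pending cases older than a threshold, is a simple aging computation; the sample ages below are invented, not the courts' data.

```python
# Share of pending cases older than a threshold (in months) -- the aging
# indicator reported for Fig. 18.2. Sample ages are invented.

def share_older_than(ages_in_months, threshold=12):
    """Fraction of pending cases whose age exceeds the threshold."""
    old = sum(1 for age in ages_in_months if age > threshold)
    return old / len(ages_in_months)

pending = [2, 5, 7, 9, 11, 13, 14, 18, 25, 40]   # ages of pending cases
print(share_older_than(pending))
```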
The Insurance Court is a special court for social security issues. It handles 10 000
cases annually. There are 120 employees and the court operates in three depart-
ments. There is only one case handling procedure, a written procedure. At the mo-
ment there is no formal division of cases, either by size or urgency. The average
lead-time was 14 months in 2007, varying from a couple of months to several years.
In recent years, the Insurance Court has solved many more cases annually than have arrived, and the number of pending cases is diminishing fast. The problem is that while the number of pending cases has almost halved in one year, the large dispersion of lead-times has not changed, nor has the number of very old cases dropped. The number of cases pending and their age in the years 2007-2008 are presented in Fig. 18.3.
Fig. 18.3 Age of pending cases in the Insurance Court (30 September 2007 and 30 September
2008)
The research and data collection were carried out using the action research approach, which is a generic term covering many forms of action-oriented research (Coughlan and Coghlan, 2002; Gummesson, 2000).
The process improvement project in Helsinki Court of Appeal started in May
2006 and in the Insurance Court in June 2008. The improvement team in both courts consisted of members from all organizational levels, altogether 15 persons per team. The main stages of the improvement projects were data gathering, data analysis, action planning, implementation, and evaluation. The work was done in several workshop meetings of the improvement teams. The project in the Helsinki Court of Appeal is now in the evaluation stage and the project in the Insurance Court in the action planning stage.
The research group has actively participated in the process improvement projects
as external experts and change facilitators from the beginning. The main source for
data has been active observation and monitoring of the improvement work in group
workshop meetings. Complementary data collection methods have included for ex-
ample collecting operational statistics and generating numerical analyses from the
database of clients and interviews of 60 members of personnel (30 in both case
courts) concerning the problems behind process performance and process improve-
ment potentials.
The most apparent problem in the lead-times of both case courts was the fact that
complex and large cases get stuck in the process for some reason. This has created
both large dispersion of lead-times and several complaints concerning unreasonable
duration. Because both case courts solve more cases annually than arrive, the piling
up of large cases is not strictly related to a lack of resources. The next task was
to analyze the process and the way of actions in order to find out what causes and
furthers this phenomenon. The analysis revealed four main categories of factors,
introduced in Fig.18.4.
The performance indicators used in the courts are selected by the Ministry of Justice, which also sets targets and monitors their accomplishment. Public sector organizations are said to still face more problems associated with performance measurement than private sector organizations. One very typical trap is the use of oversimplified output measures or the concentration on managing a single success factor at a time (Rantanen et al, 2007).
The most important goal and performance indicator used in the courts is the annual output volume. It is heavily emphasized and carefully monitored. This does not encourage the preparation of complex, often badly overdue cases, and makes it feasible to increase total output by ignoring them. The overemphasis on the annual output indicator has also led to competition between departments and restrained cooperation between them. A lot of energy is spent optimizing the number of solved cases, and the last part of the year is always devoted to solving small cases to meet the output goal. In the Insurance Court, the output of referendaries is monitored even more closely: they need to prepare eleven cases every week, which has also led to a quite inflexible system.
The only monitored goal and indicator for lead-time is the average lead-time of
solved cases, which is monitored annually. This also makes it more feasible to solve
the smaller cases. All the indicators used describe past performance and output, and
there is no indicator showing what is left behind, what the current situation is, or
what would be a goal for a maximum lead-time. It is quite obvious that not a lot
of attention is paid to the choice of the most appropriate performance indicator,
or to the dangerous and negative effects of wrong goal and performance indicator
choices. The use of simplified output goals and indicators is very likely due to a lack of time and know-how in the Ministry of Justice and to difficulties in defining more comprehensive outcome goals and measures when the final product is quite abstract and variable.
A typical feature of professional organizations is that managers are chosen for their substance skills rather than their managerial capabilities, which means that the best professional becomes the manager - not the best manager. Independent professionals are, in addition, not the easiest to manage, and the management of professional organizations depends much on negotiation, consensus and the good work ethic of professionals (Lowendahl, 2005; Mintzberg, 1983; Rantanen et al, 2007).
The experience, knowledge and interest in process performance issues vary a lot between individual managers in the case courts. Some managers follow lead-times and process performance very carefully; some do not at all, and do not even feel that it is important. The promotion system in the case courts is based only on achievement in judicial issues, and the training of referendaries concentrates only on these issues. The courts are places for the referendaries to gain experience and training and to qualify as Senior Justices. This is why the referendaries change departments, or even courts, every couple of years. This tradition has its benefits, but it also leads to a lack of clear responsibilities and a sense of duty over the complex cases in the current department. The high turnover of referendaries makes the whole system very vulnerable. The process is very referendary-led, and they have huge responsibility for the start-up and smooth running of the handling process. The judges’ responsibilities are unclear, and their responsible role is quite trivial in practice.
The fact that judges need to be completely independent poses a lot of challenges for the management. Practically nothing can be done in situations where the work ethic of an individual judge fails: judges cannot be fired, nor can their salary be reduced. Even though the need to be totally independent was originally meant to cover only the content issues of a ruling, the convention has spread also to working methods.
While the management must respect this status, they must also be able to intervene
if the backlog of cases increases without a good reason. Some of the managers do not feel superior to the other judges, and do not see it as their place to intervene in colleagues’ work. On the other hand, there are individual managers who follow the situation and intervene almost too much, which is also experienced as quite oppressive. However, the general opinion is that more managerial feedback (positive and negative) is needed and that the managers should follow the lead-times and backlogs of individual employees more carefully and take action if necessary, but in a constructive and respectful manner.
The management system relies on a very clearly articulated hierarchy, where sta-
tus and ranking are emphasized, but clear responsibilities and chains of command
are missing. The follow-up and management duties are almost solely the responsibility of the Head of Department. Lower-level superiors (Superior Referendaries) are appointed, but they do not have any formal manager status. Their role is mainly to distribute the cases evenly to the referendaries and to monitor the case load of the whole department. By expanding the management duties, there would be more time and resources to concentrate also on managing process performance-related issues.
At the start of the improvement project there was a general concern that it is not possible to increase productivity without increasing resources. A large number of cases are solved, but without any orderliness, leading to the aging of complex cases. Restraining the number of very old cases is not a question of solving more cases but a question of working according to some kind of plan and order. There is practically no production planning or scheduling in the case courts. The measures and follow-up indicators used have led individual workers to plan their work mostly by picking out the cases that best help meet the expectations. A very common opinion at the start of the projects was also that one cannot plan professional work, or that brainwork should not be chained to schedules. It is true that the lack of deadlines for cases makes planning seem quite useless in the eyes of an individual professional.
The absence of planning practices makes it very hard to find uninterrupted time for preparing complex cases. When the time for longer preparation is not planned and scheduled, new, higher-priority cases always emerge and the preparation of complex cases stops. Setup times grow enormously when getting acquainted with the case material has to be done several times over. The weekly case quota for referendaries makes it even harder to find the preparation time needed for complex cases. A further complication for planning is that the individual buffers of cases are so large and unmanageable that their holders no longer know or recall the status or age of an individual case.
18 What Causes Prolonged Lead-Times in Courts of Law? 229
The piles of cases just keep growing on the desk, with no plan for preparing them.
There is a lot of compulsory waiting time in the case-handling process that is completely wasted without long-term planning. Arranging and convening the court session does not begin until the preparation of the case is completed, and arranging a session is a difficult coordination task of getting all parties present. This leads to situations where the court session may take place months or even years after the preparation has been completed. In practice, the referendary and the judge then need to get acquainted with the case material all over again closer to the court session.
Every organization and profession creates its own set of values, beliefs and attitudes. In the legal profession, which has its origins in antiquity, these values and attitudes have a very long and traditional history. Legal professionals take much pride in their occupation and enjoy great respect from society as a whole. This status relies heavily on the complex knowledge base of judges and lawyers and on the long traditions of methods and routines (Becher, 1999; Schein, 2004). In this sense there is very little room for appreciating productivity and effectiveness. This thinking is reflected in all aspects of court operations: what is valued is measured, managed, taught and done. The quality of rulings, for example, is followed very carefully, and much time is spent on the spelling and phrasing of final acts and on checking every detail of the final decisions. So far, lead-time has not ranked high among these values and has even been seen as an enemy of the traditionally valued aspects of quality.
While legal professionals may not be the most dynamic or change-oriented, they are very ethical and hard-working, and they do an enormous amount of work to fulfill what is expected of them. The prolonged lead-times are a cause of stress, but the professionals have felt quite powerless in the face of this ever-growing problem. When they see that something really works and helps, despite preconceptions, acceptance of change initiatives is good, especially among younger professionals. Changes in values and attitudes will not happen overnight, and the improvement steps need to be justified and taken gradually.
18.5 Conclusions
When one starts to describe courts of law as organizations, the first term that comes to mind is professional organization. But unlike, for example, small private law firms, justice courts can also be described as production plants operating a multi-stage process. Add the features arising from public administration, and it can be concluded that courts of justice are organizations with diversified requirements for their operations. If the organization and its practices are designed with too much emphasis on any single feature, the others will suffer.
This paper has concentrated on describing the lead-time problem and its causes in courts of law. It can be concluded that the process performance problems are not a matter of producing too few decisions, nor a problem of average lead-time. The problem is that some cases become so prolonged that the courts receive complaints and backlogs emerge. Since the number of cases produced is very good in both case courts presented in this study, the source of the problem is not a lack of resources, nor any other single factor. It is a complex mixture of causes and effects connected to unsuitable design parameters and ways of working, compounded by management practices and a value system with a long history and traditions. Several public-sector professional organizations face a similar situation as the pressure for better efficiency and productivity constantly increases. They need to examine their operational practices thoroughly, pinpoint the sources of inefficiency, and find ways to transform. In justice courts the first step is to recognize and accept that they also have process effectiveness-related demands that must be met alongside other quality criteria. It is largely an issue of broadening the image of the organization and its tasks, and translating that into operation.
Big changes have been made to the measurement and follow-up systems, the management practices, and the planning and scheduling practices in the case courts. In reforming the planning practices, for example, it became evident that scheduling supports professional work rather than impeding it, and that the time that can be spent on an individual case is longer when uninterrupted preparation time has been scheduled. All the improvement initiatives planned and taken in the case courts have shown that good results can be achieved and that the quality of justice decisions and process effectiveness are not mutually exclusive, but even support each other. While long traditions of working methods and values take a long time to change, it is possible to better balance the requirements of professional work and process efficiency.
References
Abstract Concerning the topic of “How Regional Value Chains Speed Up Global Supply Chains”, the research question of this paper is the acceleration potential of logistics clusters in the entire supply chain. Acceleration potential can appear, for example, as lead-time reductions or as agile and quick responses. Logistics clusters can make it easier for companies to increase their innovation ability and productivity, and thus enable higher reactivity. This increase is caused by a time advantage, which in turn results from a better product-market position; a pooling of resources, core competencies and knowledge; and declining transaction costs within the cluster. Therefore Porter's diamond model, which he used to transfer his model of national advantages to the field of regional clusters, will be extended by an additional factor: time.
19.1 Introduction
“The world seems to move faster” - this statement is not new, but it has never felt truer than today. For companies it implies that traditional business models encounter limitations and, in addition, a strong conflict emerges. Always being one innovation ahead of the competition, securing immediate reactivity and 100 percent stock availability are opposed to an “it-may-cost-nothing” mentality and the findings of lean management.
It is clear that a single company, as a “lone fighter”, finds it harder to meet these challenges of time and money. But cooperation in networks can offer
Ralf Elbert
University of Technology Berlin, Chair of Logistics Services and Transportation
e-mail: elbert@logistik.tu-berlin.de
Robert Schönberger
University of Technology Darmstadt, Chair of Clusters & Value Chain
e-mail: schoenberger@tud-cluster.de
234 Ralf Elbert and Robert Schönberger
1 Often also described with the terms industrial districts (see Markusen, 1996; Marshall, 1920), industry clusters (see Cortright, 2005, p. 8), innovative milieus (see Franz, 1999, pp. 112-114), hot spots (see Pouder and St. John, 1996, p. 1194), sticky places, regional innovation networks and hub-and-spoke districts (see Markusen, 1996, pp. 296-297).
19 Logistics Clusters - How Regional Value Chains Speed Up Global Supply Chains 235
In its broadest sense, a network can be thought of as social, economic and/or political relations between individuals and organizations (see Schubert, 1994, p. 9). Value creation networks are characterized by cooperation between economically and legally independent companies in order to realize competitive advantages (see Jarillo, 1988, p. 32; Liebhart, 2002, p. 113). A regional value creation network consists of a large number of companies from the same region, tied to each other by cooperative and competitive horizontal and vertical links (see Ritsch, 2005, p. 28). This is what Porter defined as a cluster: a geographic concentration of companies, specialized suppliers and service providers, as well as companies in related sectors and institutions, all connected in a special field of interaction. The characteristics of this connection could be a common branch of industry or related process technologies.
According to Porter, “the enduring competitive advantages in a global economy lie increasingly in local things - knowledge, relationships, and motivation that distant rivals cannot match” (Porter, 1998, p. 78). It was also Porter who first analyzed clusters from a competitive-strategy framework and described them as a superior economic structure. Clusters affect competition in three broad ways: they improve the productivity of cluster companies, increase their innovation capability, and stimulate the creation of new demand. Porter explains these effects with environmental influences that converge in six attributes that have the greatest influence on a company's ability to innovate and upgrade. These attributes, which he terms the diamond, “shape the information firms have available to perceive opportunities, the pool of inputs, skills and knowledge they can draw on, the goals that condition investment, and the pressures in firms to act” (Porter, 1990, p. 111).
His diamond model, represented in the illustration below, covers the following four determinants. The factor conditions describe the position of a region concerning the availability of production factors; this covers all factors relevant to an industry, such as the work force, raw materials and services. The demand conditions describe the nature of regional demand for the products or services of an industry. The related and supporting industries mark the presence of internationally competitive related industries or suppliers. Finally, firm strategy, structure and rivalry determine the conditions of a cluster: how companies are organized and led, how they cooperate, and what the regional competition looks like. Two further determinants, chance and government, were added later; they can shape regional competition sustainably and at the same time give important impulses for cluster development.
The development of clusters is either founded on particularly favourable conditions in one of the four original determinants of the diamond model, or it can be triggered by business activity that cannot be traced to special local conditions. Porter explains the further development of clusters as a reciprocal process running between the determinants of the diamond model. He understands the diamond as a self-reinforcing model, in which a positive development in one determinant leads to an improvement of the other competitive conditions. He assumes that a certain number of participants in the cluster - a critical mass - must be reached.
Section 19.2 showed that clusters can generate competitive advantages in terms of the Porter diamond. In order to trace the emergence of these competitive advantages, it is necessary to analyze the sources of value creation in clusters, where basically three categories can be identified: “product-market position”, “resources, core competencies and knowledge”, and “transaction costs” (see Elbert et al, 2009, p. 63).
According to the market-based view, value chains in clusters allow, on the one hand, the bundling of existing regional product-market positions and, on the other hand, the development of new ones (see Möller, 2006, p. 40). By pooling already developed markets, companies can profit from the market positions of their cooperation partners and individually develop new markets of their own. For companies without network participation, such market entry is much more difficult and more time- and cost-intensive. In particular, the bundling of financial and organisational resources within networks enables an accelerated and, for the single firm, more economical development of new markets (see Wrona and Schell, 2003, p. 320). In addition, regional cooperation with already established competitors can reduce existing rivalry, whereby economic rents can increase temporarily (see Zahn and Foschiani, 2002, p. 271).
According to the resource-based view (see Penrose, 1959; Prahalad and Hamel, 1990; Barney, 1991; Amit and Schoemaker, 1993) and its enhancements, the creation of value in networks is induced through the bundling and generation of resources, core competencies and knowledge. The aggregation and composition of complementary resources enables the development of individual strengths as well as the compensation of existing weaknesses. At the same time, network-specific, intangible and difficult-to-imitate knowledge can be generated as a new core competency, which leads to a competence-based ability to innovate among the involved companies (see Duschek, 2002, p. 172). The companies' capacity for innovation represents a central competitive advantage, which positively affects value creation within the network. Regarding the knowledge that can be developed, bundling in a regional network also enables research and development within specialized areas and thereby reduces the risk for each individual company. Beyond that, network-specific value creation results in particular from the transfer and enhancement of knowledge, as well as from learning from and with the partners in the cluster (see Zahn and Foschiani, 2002, p. 271). The bundling of existing and the generation of new knowledge is based on trust between the partners, as well as on the existing relationship competence as another central core competency (see Wrona and Schell, 2003, p. 320).
According to transaction cost theory, different kinds of transaction costs can be affected by cooperation in regional networks (see Woratschek and Roth, 2003, p. 156). Cost advantages arise in networks in particular as a result of scale effects, which cannot be realized individually by a single company (see Zahn and Foschiani, 2002, pp. 270-271). From the transaction cost perspective, the cost per transaction can be reduced by investments in relational capital, since repetitive transactions between a small group of regional network participants reduce initiation and arrangement costs; in addition, scale and scope effects can be obtained with a rising contract volume in the region, and average completion costs can be reduced. An extensive exchange of information reduces information asymmetries and, connected with that, the control costs in the region (see Dyer, 1997, pp. 543-544). However, these advantages do not result automatically. On the one hand, inter-organisational cooperation can require stronger coordination and organization of activities. On the other hand, a lack of trust and reputation among the companies can lead to opportunistic behaviour in the regional network, thus requiring greater safeguards (see Williamson, 1991, p. 291). In both cases, higher transaction costs would result from cooperation in a regional network.
As is obvious from the discussion above, value creation within clusters is based on two mechanisms: first, clusters can improve firms' productivity; second, they can increase firms' innovation capability. Both allow companies to produce superior output at similar or lower cost, thereby improving their competitive position. Whereas agglomerations lead to shared commonalities across companies, the companies still act independently in the market place. It is only through cooperation that companies engage in joint regional value creation systems, in which an independent transformation process takes place. The companies' input, consisting of the joint configuration of value activities as well as their combined resources and capabilities, is transformed by the reinforcing effects of the Porter diamond and leads to upgraded products and innovations.
The following figure illustrates how clusters - starting from a simple agglomeration of companies - can be a source of innovation and productivity through cooperation, in combination with an activating cluster management. The Porter diamond reinforces the underlying sources of competitive advantage, leading to superior productivity and enhanced innovation capabilities of the related companies. Time is added as a determining factor and additional advantage.
Thus, it becomes evident how a cluster generates time advantages, and hence acceleration potential, for the involved companies. What remains is the transfer to logistics and the empirical confirmation that it is possible to gain time advantages through clusters in the global supply chain.
In a very short time, more than 40 logistics clusters have been established in Germany. A closer look shows that all sizes, ages and forms of organization can be found among them. On the one hand there are big, established clusters supported by ministries or regional governments; on the other hand there are small, young logistics clusters which, it seems, are still searching for their role in the global supply chain. Most importantly, logistics clusters do not look like a phenomenon that will disappear within the next few years. Since most logistics clusters have a long-range business model, it can be assumed that their positions will even strengthen. The logistics clusters that could be identified in Germany and were analyzed are listed in the Appendix.
interests also for the development and use of existing industrial real estate areas, and thus promote the settlement of new companies as well as the growth of existing companies and the entire region. At the same time, they improve the development of the labor force through common education and vocational training, or through common investment in infrastructure, enabling its development and better utilization. The activities mentioned affect, on the one hand, the factor conditions, i.e. the existing human, capital and natural resources. On the other hand, the settlement of new companies affects the demand conditions and the competition between the logistics cluster participants. Beyond that, some logistics clusters concentrate on networking between the logistics industry and related, supporting industries. A very good example of this is the logistics cluster metropolis Ruhr, which wants to lead the Ruhr region, through the interdisciplinary connection of logistics and IT, from the “traffic-technical center” to the “information-logistics center of Europe”: a regional bundling of available product-market positions as well as a development of new ones.
• Resources, Core Competencies and Knowledge → Innovation & Productivity
The knowledge transfer between cluster participants - as a basis for strengthening and developing common strengths - is a frequently stated goal of the analyzed logistics clusters. In this context, interdisciplinary cooperation between companies, universities and research institutions is pursued, as “Bavaria innovatively: Cluster logistics”, for example, particularly emphasizes as an activity. In this way it should become easier, especially for small and medium-sized companies, to get in touch with research institutions. Opportunities for knowledge transfer are offered by workshops, lectures, working groups and further meetings. Common core competencies are developed when the cluster selects a certain topic within logistics to create a basis for competence-based innovation ability; the intralogistics network in Baden-Württemberg could be named as an example.
• Transaction Costs → Productivity
Most of the current activities of the logistics clusters - e.g. newsletter mailings, the implementation of an internet portal, and the organization of various events, as for example the logistics cluster North Rhine-Westphalia and the network logistics Leipzig are doing - have the goal of creating trust between the logistics cluster participants. These activities are to be seen as investments in relational capital and thus form the basis for cooperation in the logistics clusters. Beyond that, the information transfer also enables the networking of economy, science, research and politics. Proximity is created by the exchange between the companies, which leads, for example, to a reduction of initiation and arrangement costs. The arising needs for coordination and organization make a dedicated cluster office necessary; nearly two thirds of the observed logistics clusters already have their own cluster office, an example being the logistics network Thuringia.
The discussion shows that several paths are taken within the logistics clusters to create a basis for cooperation and to develop the regional conditions needed to accelerate logistics services. At present, most of the issues related to acceleration potential that are named on the clusters' websites concern improvements in infrastructure to speed up traffic and activity in the cluster's area. But joint research and development activities in particular show the intention to create new logistics services. And bringing the local logistics actors together in working groups or discussion forums is a chance to exchange knowledge and, even more, establishes the basis for cooperation. Because people are close together and use the advantage of short distances, a time advantage translates into faster innovations and higher productivity.
By describing the sources of value creation and combining them with the self-reinforcing effects of the Porter diamond, it was possible to show how clusters can create a time advantage on the way to increased innovation and higher productivity. Short distances, fast contacts and response times, as well as knowledge spill-overs in a cluster, lead to these time advantages and allow redundant resources to be reduced.
The web pages of the 40 identified logistics clusters in Germany give an impression of what kind of acceleration potential lies in regional networks. Since in most regions the logistics clusters are relatively new, the results of the cluster work are not yet measurable. For future research it is therefore necessary to keep an eye on cluster development and to follow up on what the clusters are doing in their regions concerning time and speed.
If logistics clusters succeed in establishing their position in the supply chain as new mega hubs, the question also arises whether such regional networks can activate additional value creation and whether, in the future, production will follow logistics.
19.6 Appendix
References
Amit R, Schoemaker PJH (1993) Strategic Assets and Organizational Rent. Strate-
gic Management Journal 14(1):33–46
Barney J (1991) Firm Resources and Sustained Competitive Advantage. Journal of
Management 17(1):99
Cortright J (2005) Making Sense of Clusters: Regional Competition and Economic Development. Last checked 30.07.2008, URL http://www.brook.edu/metro/pubs/20060313 Clusters.pdf
Duschek S (2002) Innovation in Netzwerken: Renten-Relationen-Regeln. Wies-
baden
Dyer JH (1997) Effective Interfirm Collaboration: How Firms Minimize Trans-
action Costs and Maximize Transaction Value. Strategic Management Journal
18(7):535–556
Elbert R, Schönberger R, Müller F (2008) Regionale Gestaltungsfelder für robuste
und sichere globale Logistiksysteme. Strategien zur Vermeidung, Reduzierung
und Beherrschung von Risiken durch Logistik-Cluster. In: Pfohl HC, Wimmer T
(eds) Wissenschaft und Praxis im Dialog. Robuste und sichere Logistiksysteme,
Hamburg, pp 294–322.
Abstract This paper examines the relationship between a managerial focus on reducing inventory and improvements in value added. We analyze financial information on large non-service US-based firms over the 25-year period from 1980 to 2004. Our results show a very strong correlation between the increase in value added and the decrease in days of inventory across all manufacturing industries. The results strongly support the operations management literature, which claims that a managerial focus on efficiency, in particular on increasing the speed of operations, will result in significant value creation for firms. The results also imply that the concept of competition based on operational speed has not been transferred across all firms and that the potential for improvement still exists in most industries.
20.1 Introduction
Operational speed, defined as the lead time from order handling, through production
and delivery, to the customer, has long been recognized as one of the common char-
acteristics of successful companies in competitive business environments. All oper-
ational management methods to improve operations are supposed to make processes
Vedran Capkun
HEC School of Management, 1, rue de la Liberation, 78351 Jouy-en-Josas cedex, France, tel. +33-
1-39-67-96-11 fax. 70-86
e-mail: capkun@hec.fr
Ari-Pekka Hameri, corresponding author
Ecole des HEC, University of Lausanne, Internef, Lausanne 1015 Switzerland, tel +41 21 692 3460
fax 3495
e-mail: Ari-Pekka.Hameri@unil.ch
Lawrence A. Weiss
McDonough School of Business, Georgetown University, Old North G01A, Washington, DC
20057-1147, USA, tel. +1-202-687-3802 fax. 4031
e-mail: law62@georgetown.edu
250 Vedran Capkun, Ari-Pekka Hameri and Lawrence A. Weiss
faster, more controllable, and more accurate. These include business process reengineering, total quality management, vendor-managed inventories, supply chain integration, just-in-time, lean thinking, and activity-based management. Among the best-known firms focusing on operational speed are the computer assembler Dell and the apparel company Zara. These companies avoid the perilous impact of supply chain dynamics by operating at speeds where the capital bound up in their operations is a fraction of the overall volume of their business. This provides them with the agility to react to sudden demand variations and outperform their competitors. These companies are also less dependent on forecasts and preplanned operations, giving them an additional advantage in cost efficiency over their slower competitors. Schmenner (1988), Stalk and Hout (1990), and Womack and Jones (1996) all find a strong positive relationship between financial results and firms that set operational speed as their key strategic approach.
According to the operations management literature, each operation that is part of a business process should add to the value of the end product. The more efficiently a company creates value which customers are willing to pay for, the greater the firm's ability to deliver value to its stakeholders. One of the first steps in increasing the value creation capability of a process is to remove those operations that do not add value (in the sense of providing something a customer is willing to pay for). Essentially, a process should create value without bottlenecks, and the process variability, inherent or external, should be minimized (see Schmenner and Swink, 1998). Whether the term used is swiftness, operational speed, or reduced days of supply, the aim is to improve operational speed by reducing the lead times of the company's value creation processes.
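The "days of supply" (days of inventory) measure mentioned above is conventionally computed from financial statements as inventory relative to the cost of goods sold, scaled to a year. The sketch below illustrates this standard metric with made-up numbers; it is not the paper's exact specification.

```python
# Standard days-of-inventory metric: how many days of sales, at cost,
# the current inventory would cover. Figures below are illustrative only.

def days_of_inventory(avg_inventory, cogs, days_per_year=365):
    """Average number of days an item spends in inventory."""
    return avg_inventory * days_per_year / cogs

# A firm holding $50M of inventory against $600M annual cost of goods sold:
print(round(days_of_inventory(50e6, 600e6), 1))  # 30.4 days
```

Lowering this number while holding output constant is exactly what "increasing operational speed" means in accounting terms.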
The majority of success stories in operations management stem from the automotive, machinery and job-shop (i.e. assembly) industries. It is important to review how just-in-time and other operational methodologies have affected these and other industries since their introduction in the early 1980s. The paper begins by presenting the relevant literature on lead time reduction and competition based on speed. This is followed by a description of the research hypothesis, the sample, and the applied methodologies. Then we document the relationship between value creation and operational speed using financial information from large U.S. companies. Next, we illustrate how development in operational speed has taken place in general and across different industries. Finally, we present a summary of our key findings with their managerial implications.
Manufacturing processes have come a long way since Henry Ford's moving assembly line and mass production. Ford's focus was on output and cycle time; however, he also provided examples of how repeating tasks improves lead time along the traditional learning curve - as in the case of disassembling excess war ships after the First World War (Ford, 1922). Scale- and cost-centric manufacturing remained the managerial focus until the 1970s.
20 Measuring the Effects of Improvements in Operations Management 251
Then the quality movement turned the focus to continuous improvement and errorless operations, and by the end of the 1970s quality management had become entrenched in manufacturing operations. The arrival of Just-in-Time (JIT), with its emphasis on waste, inventory reduction, and operational flexibility, created a managerial focus on operational speed in the early 1980s.
Schonberger (1982), in one of the first books on JIT, describes the benefits stemming from reduced set-up times and smaller lot sizes. He also warns of the perils of monster machines and excessive investments in technological marvels which require near-100% utilization levels to justify the financial investment. Hall (1983) and Monden (1983) document the increase in efficiency obtained by Japanese production facilities. All these books use examples from car assemblers and their suppliers, with a few references to other machine assembly industries. Thus, the initial focus of the early JIT literature was on a relatively repetitive type of production involving the assembly of complex products. This trend continued with other supportive studies reporting on job shops with huge product variety and numerous different operations and product routings.
Goldratt and Cox (1984) and Suri (1994, 1998) refined the managerial focus to a relentless reduction of bottlenecks and lead time. These approaches, the theory of constraints and quick response manufacturing, grounded their message of flow and lead time reduction in a plethora of cases from job shops and machine assembly companies. The same message was delivered by other scholars and practitioners under different labels, such as time based competition (Stalk, 1988; Stalk and Hout, 1990) and lean manufacturing (Womack et al, 1990; Womack and Jones, 1996).
Little (1961) shows that, for a given throughput, lead time is directly proportional to average inventory. This means speed will increase with inventory reductions1. Schmenner (2001) provides
an overall perspective on the history of manufacturing and operations management
with the simple phrase “swift, even flow”. He argues that the operations management emphasis should be an expeditious, well controlled flow of material through various value adding operations, without bottlenecks and excess variability. Accord-
ing to Schmenner (2001), throughout history, companies that focused on flow with
an emphasis on speed and variability reduction have outperformed companies em-
phasizing other goals. This is consistent with the mathematical principles of operations management, based on queuing theory, which demonstrate the relationships between lot sizes, bottlenecks, lead times, and process variability (see Hopp and
Spearman, 1996).
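The queuing-theoretic link between inventory and lead time can be made explicit. The following restatement of Little's result (a sketch, with symbols chosen here for illustration) also connects it to the days-of-supply measure defined later in this paper:

```latex
% Little's Law: average inventory L, throughput rate \lambda,
% average lead time W
L = \lambda W \qquad\Longrightarrow\qquad W = \frac{L}{\lambda}.
% Holding throughput fixed, lead time falls in direct proportion to
% average inventory. With annual throughput proxied by COGS, lead
% time expressed in days is
\mathrm{DoS} = \frac{\text{inventory}}{\text{COGS}/365}
             = \frac{\text{inventory}}{\text{COGS}} \times 365 .
```

The second identity is exactly the days-of-supply formula used in the empirical part of the paper: reducing inventory at constant throughput shortens lead time proportionally.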
Despite the sound mathematical grounds and numerous documented cases, speed
based competition is still principally rooted in high volume and repetitive industries.
The flagship of this trend remains the automotive industry, followed by job shops
assembling machinery. Studies like Holmström (1995) and Hameri and Lehtonen
(2001) indicate that most industries are dormant when it comes to operational speed.
Some industries, like pulp and paper, have improved their productivity by investing
in automation and through vertical integration including strong merger and acqui-
1 Unless, of course, there is a supply chain glitch, which would then cause a decline in speed; see Hendricks and Singhal (2003).
252 Vedran Capkun, Ari-Pekka Hameri and Lawrence A. Weiss
sition strategies (Hameri and Weiss, 2006). The Internet has also pushed opera-
tional speed into a new focus over the past few years. Firms now use the internet
to improve information transparency across organizational boundaries. Piszczalski
(2000) and Bruun and Mefford (2004) provide examples of how new communica-
tion technologies speed up operations and reduce mistakes. They show how com-
pletely new business models based on automatic order handling and verification
have emerged. There are some anecdotal studies on JIT and speed based competi-
tion in services (Barlow, 2002) and other manufacturing industries. Unfortunately,
these are few and sporadic. The vast majority of the examples of speed based com-
petition are based on industries related to automobiles, assembly of machinery, and
computer equipment.
Over half a century ago, Forrester's (1961) non-linear simulations of information and delivery delays in supply chains helped academics and managers understand how information distortion and order batching lead to ever longer lead times and
inventory build up. JIT emerged in the early 1980s and brought supplier relations to
the forefront. Activity Based Costing systems arrived a few years later, in the mid-1980s. These systems allowed managers to quantify the cost of resource demands
and operational processes. This provided managers with an improved understanding
of the underlying economics of their operations and allowed them to make more
informed decisions on their product and process choices (see Kaplan and Cooper,
1998). In the early 1990’s, the focus on the entire supply chain (from elementary
suppliers to the end customer) gained extensive momentum. Today, supply chain
management is a cornerstone of modern operations management, and supply chain
structures and control principles are the subject of extensive academic research (e.g.
Houlihan, 1987; Fisher and Raman, 1996; Fisher, 1997; Frohlich and Westbrook,
2002).
As noted above, speed and lead time related research has focused principally on
the automotive, machinery, and computer assembly operations. By contrast, sup-
ply chain research extends to all industries. It examines supply chain structures with distribution centers and various forwarding parties included in the chain. The underlying theme in supply chain management is information transparency, reliable
lead times, and the clever positioning of various value adding operations in the long
logistical chains. Companies with efficient in-house operations are also more likely
to display competence in managing their supply chains.
To summarize, operational speed, defined as the lead time from order handling,
through production and delivery, to the customer, has been recognized as one of
the common characteristics of successful companies in competitive business en-
vironments. Most of the research in the field of operational speed documents the
numerous cases where lead time reduction has resulted in major advantages for
the company. The vast majority of this research concerns industries where JIT and
speed based competition was first introduced - namely automotive, job shops, and
electronic equipment. Surprisingly, there are no studies which review different in-
dustries and their development over longer periods of time. Our paper aims to fill
this gap by focusing on the longitudinal development in the reduction of operational
speed (defined as days of supply) and the related increase in value creation.
This paper examines the relationship between value creation and three key elements
ascribed to a managerial focus on operational efficiency: days of supply, new in-
vestments in plant and equipment, and expenditures on research and development.
The key findings of the operations management literature indicate that changes in value added should be at least partially explained by these three key variables, which defines the model to be tested in this study (Fig. 20.1). Following the literature review
we set our research questions as:
1. What is the impact of a managerial focus on operations on value creation?
2. How does this link vary across industries? and
3. Is there a stronger correlation among industries that were early adopters of speed
based operations strategies?
Fig. 20.1 The underlying model of the study: How capital expenditures, days of supply and R&D
costs (our three independent variables), correlate with increases in value added (our dependent
variable)
Our sample of manufacturing firms has mean total assets of $3.9 billion (median $646 million) compared to $4.6 billion (median $1.0 billion) for the sample of non-manufacturing firms. To compute the change in our variables (as defined below) we require at least two consecutive periods and exclude data without a consecutive period. We also remove negative value added data from the sample. Our final samples consist of 915 firms and 10,882 observations (an average of 12 periods per firm) for regressions 1 and 2, and 927 firms with 11,244 observations (on average 12 periods per firm) for regression 3, over the 25-year period 1980-2004.
Table 20.1 SIC industry classification codes for firms used in our sample. Our sample consists of
firms with SIC primary codes from 01-59
01-09 Agriculture, Forestry, And Fishing
10-14 Mining
15-17 Construction
20-39 Manufacturing
40-49 Transportation, Communications, Electric, Gas, And Sanitary Services
50-51 Wholesale Trade
52-59 Retail Trade
60-67 Finance, Insurance, And Real Estate
70-89 Services
91-99 Public Administration
We collect data on our dependent variable and the three independent variables
from the annual report of each firm over the 25-year period 1980-2004. Our variables
are defined as follows:
(1) VAD = (Sales-COGS) / Number of Employees
VAD is the value added defined as gross profit per employee. Sales are annual sales
taken from the firm’s annual income statement. COGS are annual costs of goods
sold taken from the firm’s annual income statement. Number of employees is the
year-end total number of employees as reported in the annual report.
(2) DoS = (End of the year inventory / COGS) × 365
DoS is the days of supply. End of the year inventory is taken from the firm’s balance
sheet. COGS are the annual costs of goods sold taken from the firm’s annual income
statement. A reduction in DoS proxies for improvements in operations and should
lead to an increase in VAD.
(3) CAPEX = net capital expenditures on new investments in long term assets
reported on the cash flow statement.
CAPEX is the capital expenditure on long term assets. An increase in CAPEX indi-
cates new investments for operations which should lead to an increase in VAD.
(4) R&D = research and development expenses reported on the income state-
ment.
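The four variable definitions above translate directly into code. The following minimal sketch uses function names and an example firm-year of our own invention, purely for illustration:

```python
# Sketch of the variable construction described above; function names
# and the sample figures are illustrative, not the authors' actual code.

def value_added_per_employee(sales, cogs, employees):
    """(1) VAD = (Sales - COGS) / number of employees."""
    return (sales - cogs) / employees

def days_of_supply(year_end_inventory, cogs):
    """(2) DoS = (year-end inventory / COGS) x 365."""
    return year_end_inventory / cogs * 365.0

# Hypothetical firm-year: $500M sales, $365M COGS, 2,000 employees,
# $40M inventory at year end.
vad = value_added_per_employee(500e6, 365e6, 2000)
dos = days_of_supply(40e6, 365e6)
print(vad, round(dos, 6))   # 67500.0 40.0
```

CAPEX and R&D (definitions 3 and 4) are taken directly from the cash flow statement and income statement, so they require no computation beyond extraction.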
We begin by examining the link of days of supply, capital expenditures, and R&D costs with value creation across all firms, and then we separate our sample
between manufacturing and non-manufacturing firms. Table 20.2 shows the results
2 Specifying the regression differently does not change our conclusions. Using CAPEX/Total Assets and RD/Sales instead of changes in CAPEX and RD does not change our conclusions, nor does using absolute values of VAD and DoS.
3 Our conclusions do not change if we use random effects regression or if we lag value added. Using other definitions of days of supply and value added also does not change our conclusions, nor does using absolute values of the variables.
for the whole sample and the two sub-samples for the three regression models. The
first model (regression 1) is the original model with value added per employee as the
dependent variable. To test the sensitivity of our results to the choice of dependent
variable and macroeconomic conditions, in the second model (regression 2), we ad-
just value added per employee for inflation resulting in value added per employee in
1980 US dollars. This adjustment does not change our conclusions. We run the third
model (regression 3) with value added divided by total assets (instead of divided
by employees) as the dependent variable. Our conclusions remain the same. For the
whole sample there is a strong and statistically significant (1% level) link between
our three independent variables and value added, regardless of the choice of regres-
sion model. A closer look shows that this holds only for manufacturing firms. For
non-manufacturing firms, the only statistically significant coefficient is days of sup-
ply, and in most cases only at the 10% significance level. This was expected based
both on prior research and common sense. Value creation in non-manufacturing op-
erations is by default less related to our chosen independent variables. Many of the
non-manufacturing firms have little or no inventory, and others hold a constant inventory level, making their value added insensitive to changes in inventory
management. The detailed results for non-manufacturing industries (industry anal-
ysis) are mixed and do not indicate a relationship between the dependent variable
and any of the three independent variables4 .
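The firm fixed-effects regressions of percentage changes in value added on the three independent variables can be sketched with the standard within transformation. The code below is our own illustration on synthetic data (coefficients and sample sizes chosen only to mimic the setup), not the authors' actual data or estimates:

```python
# Fixed-effects (within) regression sketch: demean each firm's series,
# then run pooled OLS on the demeaned data. Synthetic data throughout.
import numpy as np

def within_ols(y, X, firm_ids):
    """Entity fixed effects via the within transformation."""
    y = y.astype(float).copy()
    X = X.astype(float).copy()
    for f in np.unique(firm_ids):
        m = firm_ids == f
        y[m] -= y[m].mean()          # demean dependent variable per firm
        X[m] -= X[m].mean(axis=0)    # demean regressors per firm
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
firms = np.repeat(np.arange(50), 12)        # 50 firms x 12 periods
X = rng.normal(size=(600, 3))               # %-change DoS, CAPEX, R&D
effects = rng.normal(size=50)[firms]        # unobserved firm effects
y = effects - 0.2 * X[:, 0] + 0.06 * X[:, 1] + 0.04 * X[:, 2] \
    + 0.05 * rng.normal(size=600)           # %-change in value added
beta = within_ols(y, X, firms)              # ~ [-0.2, 0.06, 0.04]
```

The within transformation absorbs each firm's fixed effect, so the recovered slopes reflect only within-firm variation over time, which is the identification strategy the paper describes.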
N is the number of firms in the sample and the sub-samples. Value Added is
defined as (Sales - COGS)/Number of Employees. Days of Supply is defined as
(End of the year inventory/COGS)×365. Capital expenditure is defined as the net
capital expenditures on new investments in long term assets. R&D cost is defined as
the research and development expenses. Values in parentheses are standard errors of
the coefficients, while ***, ** and * represent significance levels of 1, 5, and 10%
respectively.
As noted above, for manufacturing firms the coefficient on days of supply is negative and significant: lower days of supply are associated with higher value added. The coefficients associated with both capital
expenditures and R&D expenditures are positive and significant for manufacturing
firms. These results are consistent with the argument that improving operational
speed, increasing capital investments and/or increasing funding of R&D all lead to
an increase in value added. The results are consistent with our model depicted in
Fig. 20.1 for manufacturing companies.
To analyze manufacturing industries in more depth, we further divide our sam-
ple of manufacturing firms into sub-samples based on the two first digits of their
SIC code. We eliminate those industries in our sample with less than 20 firms. The
results are presented in Table 20.3. While the coefficient associated with days of supply remains negative and significant, the capital expenditure and R&D expenditure
coefficients differ across industries. The coefficient of capital expenditure is signif-
icant and positive for primary metal, machinery and computer equipment, electron-
ics, transportation equipment, and instruments. By contrast, an increase in capital
expenditure does not seem to have any relationship to the value added increase in
4 We analyzed all non-manufacturing industries but we do not present the results in this paper. The
results are not significant for days of supply in any of the analyzed non-manufacturing industries.
Table 20.2 The table presents fixed effects regression coefficients of the percentage increase in
Value Added during the 25 year period 1980 to 2004 on the percentage change in days of supply,
capital expenditures and research and development costs
All firms                                         Days of Supply  R&D costs  Capital Expenditures
Regression 1: (915 firms)
Value added per employee                          -0.185***       0.042***   0.061***
                                                  (0.01)          (0.009)    (0.006)
Regression 2: (915 firms)
Value added per employee adjusted for inflation   -0.184***       0.042***   0.061***
                                                  (0.01)          (0.009)    (0.006)
Regression 3: (927 firms)
Value added per total assets                      -0.216***       0.051***   0.0214***
                                                  (0.01)          (0.01)     (0.006)
food, chemical products or fabricated metal industries. The coefficient of R&D ex-
penses also differs across industries. It remains positive and significant for paper
products, machinery and computer equipment, electronics, and instruments. How-
ever, it is not significant in food, primary metal, fabricated metal, or transportation
equipment.
All manufacturing industries with more than 20 firms were analyzed. Value
Added is defined as (Sales - COGS)/Number of Employees. Days of Supply is de-
fined as (End of the year inventory/COGS)×365. Capital expenditure is defined as
the net capital expenditures on new investments in long term assets. R&D cost is
defined as the research and development expenses. Values in parentheses are stan-
dard errors of the coefficients, while ***, ** and * represent significance levels of
1, 5, and 10% respectively.
Capital expenditures appear to play a major role in capital intensive industries and in some high-tech industries producing complex products. A similar pattern is found for the impact of increasing R&D investments. The underlying message seems to be that
Table 20.3 The table presents fixed effects regression coefficients of the percentage increase in
Value Added during the 25 year period 1980 to 2004 on the percentage change in days of supply,
capital expenditures and research and development costs
# of Days Capital R&D
SIC Industry name firms of Supply Expenditure costs
20 Food 21 -0.313*** -0.012 -0.062
(0.119) (0.060) (0.095)
28 Chemical and allied products 141 -0.110*** 0.015 0.0712**
(0.025) (0.016) (0.0344)
33 Primary metal 23 -0.256** 0.139*** -0.107**
(0.108) (0.036) (0.054)
34 Fabricated metal 28 -0.374*** 0.019 -0.007
(0.051) (0.027) (0.041)
35 Machinery and allied products 124 -0.184*** 0.054*** 0.077***
(0.022) (0.013) (0.027)
36 Electronics 194 -0.223*** 0.116*** 0.064**
(0.024) (0.013) (0.027)
37 Transportation equipment 58 -0.425*** 0.042** 0.037*
(0.034) (0.020) (0.022)
38 Instruments 136 -0.099*** 0.047*** 0.048***
(0.024) (0.012) (0.016)
competition in different industries has been increasing over the past two and a half decades, and survival has required some level of improved operational efficiency.
We next track trends in the relationship between value added and days of supply by
industry.
As noted above, some managers of job-shop and assembly industries began shift-
ing to JIT and other related methods to improve speed and efficiency in the early
1980s. To provide evidence on this shift in managerial focus, we compute indus-
try medians of value added and days of supply for different industries. Figure 20.2
shows 4 different industries with their very different development profiles over the
past 25 years.
The transportation equipment industry (Fig. 20.2a, SIC 37) has reduced days of supply from around 100 days to 40 days, while at the same time value added has almost doubled. The improvement in speed occurred primarily during the 1980s, the period when JIT and a focus on small lot sizes were introduced to the industry, and it is clearly reflected in the graph. This period of speed improvement
also witnessed a strong improvement in value creation. Lieberman and Demeester (1999) document, in their in-depth study of Japanese automakers, that each 10% reduction in inventory leads to approximately a 1% gain in labor productivity, with a lag of about one year. Although their result comes from a limited, specific sample, our sample of US companies in the transportation equipment industry also indicates that a focus on speed results in improved value creation. A more detailed look at
this industry reveals that those firms with large relative reductions in the days of
supply have higher increases in value creation, regardless of the absolute starting
level. Companies which are speedier but do not improve their speed are stagnant
Fig. 20.2 Inflation adjusted value added per employee and days of supply for four industries. Value
Added, adjusted to 1980 US dollars, is defined as (Sales - COGS)/Number of Employees. Days of
Supply is defined as (End of the year inventory/COGS)×365
in their value generation. This indicates the important element is a focus on speed improvement, as opposed to having the lowest absolute days of supply. Effectively, continuous improvement in speed appears to be as vital for value creation as quality management.
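As a back-of-the-envelope illustration of the Lieberman and Demeester elasticity cited above (the compounding rule and the figures below are our own simplification, not from their paper):

```python
# Rough use of the "~10% inventory cut -> ~1% labor productivity gain"
# rule of thumb reported by Lieberman and Demeester (1999).
# The compounding assumption here is ours, purely for illustration.
import math

def productivity_gain(inventory_reduction):
    """Approximate cumulative productivity gain implied by compounding
    successive 10%-for-1% steps until the given fractional reduction."""
    # Number of successive 10% cuts needed: 0.9**k = 1 - reduction
    k = math.log(1.0 - inventory_reduction) / math.log(0.9)
    return 1.01 ** k - 1.0

# One 10% cut reproduces the 1% rule:
print(round(productivity_gain(0.10), 4))   # 0.01
# A 60% cut (e.g. 100 -> 40 days of supply, as in transportation
# equipment) implies roughly a 9% cumulative gain:
print(round(productivity_gain(0.60), 3))   # 0.09
```

Even under this crude compounding, the implied productivity gain is an order of magnitude smaller than the near-doubling of value added observed in the industry, consistent with the paper's point that speed improvement proxies for broader operational improvement.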
The electronic equipment and component industry (Fig. 20.2b, SIC 36) shows a continuous increase in value added over our sample period. The median value added tripled, while days of supply fell by 20%. This can
perhaps be explained by a more in-depth examination of the sample. The SIC code
36 holds several different industries. The conventional electronics industries, like
household appliances, operate in well established commodity type industries. These
firms have maintained a fairly constant speed level throughout the 25 year period.
The highly competitive telecommunication industries have had a major drive to improve their speed. Here, the top performing companies halved their days of supply and increased their value creation tenfold.
Next, we examine the 25th and 75th percentiles for the industrial machinery and
computer equipment industries (SIC code 35, see Fig. 20.3). Here we expect a strong
correlation between improvements in operational speed and value creation. The top companies in this industry have more than halved their days of supply, from 160 to 75, over the past 25 years, while their value added has more than tripled. Over the same period the worst performing companies reduced their days of supply by 20%, but their value added remained virtually unchanged, with only a 10% increase in 25 years. We note that the worst companies actually started at a significantly better absolute speed level than the best performing companies, which means that, in relative terms, the speed improvement among the top value creating companies is very drastic. This gives strong support to the hypothesis that companies with a focus on the speed of their operations have a greater potential to increase value added.
The most efficient firm in the machinery and computer equipment industry (SIC code 35) in 2004 was Dell, with days of supply of only 4.15, followed by Apple Computer (6.19 days). Dell outperformed its competition in terms of speed in most years since it entered our sample in 1987, resulting in an above average value added per employee of $73,000 in 2004 (in 1980 dollars). Figure 20.4 shows the speed and performance of Dell over the period 1987-2004. Days of supply decreased from 80 to 4, while the firm's value added per employee doubled from $36,000 to $73,000 (in constant 1980 dollars).
Fig. 20.3 Companies in SIC 35 belonging to top and bottom 25th percentiles when measured in
value added and days of supply. Value Added is defined as (Sales - COGS)/Number of Employees.
Days of Supply is defined as (End of the year inventory/COGS)× 365
Our analysis shows a strong relationship between days of supply and value added across all manufacturing industries. The relationship between value added and the
capital expenditure and R&D expenses is also strong with differences across indus-
tries based on the standard industrial classification codes. The longitudinal analysis
shows that speed based competition was especially pronounced in the machine pro-
duction, transportation, and computer equipment industries. Other industries also
displayed improvements in value added but without a similar relationship to in-
ventory. The results strongly support the operations management literature, which claims that a managerial focus on efficiency, in particular on increasing the speed of operations, will result in significant value creation for firms. The results also imply that the concept of competition based on operational speed has not been transferred across all firms and that the potential for improvement still exists in most industries.
Our results are based on correlations, not on experimental research. This means our results do not prove causality between our variables. However, the many documented cases in the literature of positive turnarounds in manufacturing companies achieved through inventory reduction and faster operations serve as quasi-experimental evidence: by manipulating the variables in question, these cases strongly suggest a causal relationship between the variables. Our statistical analysis, combined with the visualization of the time series, clearly supports our hypotheses.
There are several implications of the analyses which provide additional insight to
the studied variables and their relationships. The following list summarizes the main
Fig. 20.4 Inflation adjusted value added per employee and days of supply for Dell Computer. Value
Added, adjusted to 1980 US dollars, is defined as (Sales - COGS)/Number of Employees. Days of
Supply is defined as (End of the year inventory/COGS) × 365. Dell’s value added per employee
(in 1980 dollars) and days of supply in the 1980-2004 period
findings of our study and how manufacturing companies should perceive them in an
operational sense:
• Slow companies will eventually fail. The analyzed data demonstrate that companies which are unable to improve their operational speed will gradually lose their capability to produce value. Even the worst performing companies show progress in operational speed; without this improvement they probably would not have been able to remain competitive. Improving operational speed appears to be vital for the survival of a firm, independent of the firm's current level of performance. Naturally, companies with a monopolistic position are excluded from our discussion, as they are removed from the competitive environment. The strong correlation between days of supply and value creation across all manufacturing industries should invite managers to emphasize operational speed, and the reduction of days of supply, in their strategies. A managerial focus on operational speed should be perceived to be as important as ROI, market share, and profit.
• Each industry sector has companies that exploit speed based operational superiority over others. Our data demonstrate that, despite differences across industries, in each industry there are companies which enjoy larger value creation due to their speedier operations. Although some industries compete based on factors other than speed, there appear to be additional opportunities for firms to improve relative to their industry by focusing on expediting operations.
• Slower companies, in general, have a higher potential to improve their value added, while already faster companies can further increase their lead with additional improvements in operational speed. Our analysis indicates that all top percentile companies improved their speed in absolute terms; however, in some industries the lower percentile companies improved at a faster rate than the top companies. This demonstrates that both dimensions of speed improvement matter: the absolute level of speed, and its rate of change.
As noted above, the savings in inventory costs related to reduced inventory levels have very little impact on COGS and the gross margin per employee. This means that companies which improve their speed are doing things better in general. They tap outsourcing, technology, or whatever new operational principles emerge faster than other companies, and they incorporate the benefits more efficiently. Clearly, doing things faster is not simply having people work harder and trucks move faster. Rather, it means a wide range of operations are being done in a different and more intelligent manner.
References
Hameri A, Weiss L (2006) Value creation and days of supply in major pulp and paper companies. Paper and Timber (forthcoming)
Hendricks K, Singhal V (2003) The effect of supply chain glitches on shareholder
wealth. Journal of Operations Management 21(5):501–522
Holmström J (1995) Speed and efficiency: A statistical enquiry of manufacturing industries. International Journal of Production Economics 39(3):185–191
Hopp W, Spearman M (1996) Factory Physics. Irwin, Chicago
Houlihan J (1987) International supply chain management. International Journal of
Physical Distribution and Materials Management 17(2):51–66
Kaplan R, Cooper R (1998) Cost and Effect: Using integrated cost systems to drive profitability and performance. Harvard Business School Press, Boston, Massachusetts
Lieberman M, Demeester L (1999) Inventory reduction and productivity growth:
linkages in the Japanese automotive industry. Management Science 45(4):466–
485
Little J (1961) A proof for the queuing formula: L = λW. Operations Research 9:383–387
Monden Y (1983) Toyota production system: An integrated approach to just-in-
time. Industrial Engineering and Management Press, Institute of Industrial Engi-
neers, Norcross, GA
Piszczalski E (2000) Lean vs. Information Systems. Automotive Manufacturing &
Production 112(8):26–28
Schmenner R (1988) The merit of making things fast. Sloan Management Review
30(1):11–17
Schmenner R (2001) Looking ahead by looking back: Swift, even flow in the history
of manufacturing. Production and Operations Management 10(1):87–96
Schmenner R, Swink M (1998) On theory in operations management. Journal of
Operations Management 17(1):97–113
Schonberger R (1982) Japanese manufacturing techniques: Nine hidden lessons in
simplicity. Free Press, New York
Stalk G (1988) Time-The next source of competitive advantage. Harvard Business
Review 66(4):41–51
Stalk G, Hout T (1990) Competing against time: How time-based competition is
reshaping global markets. The Free Press, New York
Suri R (1994) Common misconceptions and blunders in implementing quick re-
sponse manufacturing. In: Proceedings of the SME Autofact 94 Conference, So-
ciety of Manufacturing Engineers, p 23
Suri R (1998) Quick Response Manufacturing. Productivity Press: Portland, OR
Womack J, Jones D (1996) Lean thinking: Banish waste and create wealth in your
corporation. Simon & Schuster, New York
Womack J, Jones D, Roos D (1990) The machine that changed the world. Rawson
Associates, New York
Chapter 21
Managing Demand Through the Enablers of
Flexibility: The Impact of Forecasting and
Process Flow Management
Abstract In recent years increased attention has been paid to integrated demand and supply chain management. The present research study discusses the main concept in this context, i.e., flexibility. In particular, we have learned from existing research work that it is difficult to link the construct “flexibility” with performance (efficiency as well as effectiveness). Therefore, we analyzed important enablers of flexibility. Based on our conceptual flexibility framework, we discuss the impact of layout, process flow management, and forecasting performance (error). Our results provide some interesting insights. In particular, only process flow management is linked with both effectiveness and efficiency performance. Layout and forecasting performance, on the other hand, are only linked with efficiency. These results demonstrate that, e.g., forecasting is not directly linked with external results like customer satisfaction. Based on these results it is possible to motivate further research activities that should investigate these complex relationships in more detail.
21.1 Introduction
In recent years, literature has devoted more and more attention to the problem of
supply and demand management in uncertain contexts. On the one hand, literature
Matteo Kalchschmidt
Department of Economics and Technology Management, Università di Bergamo – Viale Marconi 5,
24044 Dalmine
e-mail: matteo.kalchschmidt@unibg.it
Yvan Nieto
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: yvan.nieto@unine.ch
Gerald Reiner
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: gerald.reiner@unine.ch
in the field of demand management has discussed this topic from several points of
view; relevant attention has been paid to building better and more accurate forecasting techniques and approaches, in order to reduce the uncertainty companies perceive (Hanssens et al, 2003). Other authors have applied models to learn how to reduce demand uncertainty through the adoption of marketing actions, e.g., everyday low price strategies (Lee and Tang, 1997). In this context, the improvement of information sharing based on partnerships with customers is also of interest (Cachon and Fisher, 2000). On the other hand, in the supply management literature several contributions have been provided on the adoption of flexibility as a means to cope with
uncertainty (Lee, 2002). Limited contributions, however, can be found regarding the interaction among these levers (i.e., forecasting, information sharing and process flow management) for managing demand in uncertain contexts, or regarding their joint effect on company performance. Ultimately, one of the core objectives of OM research should be to match capacity and inventory management with customer demand management in order to maximize business results. In terms of our specific research study: what is flexibility, and what is its impact with regard to dynamic interactions with forecast accuracy/characteristics, process flow, etc.? Building on such results, a subsequent research question could be to identify the “right” flexibility level. The aim of this work is thus to study the relationship between enablers (i.e., practices under consideration of contingency factors) of flexibility and company performance.
Considerable ambiguity surrounds the notion of flexibility in the existing literature,
and related terms have even been used interchangeably, even though they are distinct concepts.
Reichhart and Holweg’s framework makes the distinction between external and internal flexibility: exter-
nal flexibility is linked to achieving a competitive advantage (‘what the customer
sees’), as opposed to internal flexibility, the internal means by which ex-
ternal flexibility can be achieved (‘what can we do’). Following Slack (1987) and
Upton (1994) a system’s flexibility is based on internal resources that can be used
to achieve different types of internal flexibility, which in turn can support the sys-
tem’s ability to demonstrate external flexibility to its environment. This distinction
separates the capabilities of operations resources from the market requirements, the
dual influences that need to be reconciled by operations strategy (Slack, 2002). This
discrepancy between internal and external flexibility may explain contradicting re-
sults concerning the relationship between uncertainty and flexibility. E. g., Swami-
dass and Newell (1987) found that flexibility improves performance in uncertain
environments; in contrast, Pagell and Krause (1999) found no relationship between
measures of environmental uncertainty and operational flexibility in a survey among
North-American manufacturers.
[Fig. 21.1: flexibility framework relating internal results (e. g., inventory management, …) and external results]
Furthermore, these approaches will also have an impact on efficiency, i. e. costs. The
total success (effectiveness as well as efficiency) of flexibility can only be evalu-
ated by considering both aspects. The overall framework is presented in Fig.
21.1. Flexibility and performance are known to be influenced by contingency fac-
tors which also affect their relation. Numerous examples of influences from contin-
gent factors have been provided, including e. g. perceived environmental uncertainty
(Swamidass and Newell, 1987) or company’s business strategy (Gupta and Somers,
1996) and, reviewing literature on manufacturing flexibility, Vokurka and O’Leary-
Kelly (2000) identify four general contingent factors. Specifically, the authors men-
tioned strategy, environmental factors, organizational attributes and technology as
being exogenous variables impacting on flexibility and performance, highlighting
once more the complexity of the topic. Finally, when flexibility is viewed as the
result of a set of business practices, contingency becomes central, since
a given practice may not be feasible in all settings (e. g. Ke-
tokivi, 2006). In the context of demand management, forecasting is also known to
be a strong lever against uncertainty and thus can contribute to better perfor-
mances. Literature traditionally considers accuracy as the relevant performance to
be evaluated in a forecasting process (Mentzer and Bienstock, 1998; Chase, 1999).
When forecast accuracy increases, cost and delivery performances consequently im-
prove, as they are typically correlated with forecast error. Inventory levels, and thus
related costs, can be reduced; manufacturing systems are better managed, as equip-
ment utilization improves and companies can effectively plan in advance actions to
be undertaken (Vollmann et al, 1992; Ritzman and King, 1993; Fisher and Raman,
1996). In turn manufacturing and product costs decrease. Delivery performances
(e. g., order fulfillment and delivery speed/punctuality) also improve as, when fore-
cast accuracy is higher, it is more probable that products are available when the cus-
tomer orders (Enns, 2002; Kalchschmidt et al, 2003). Forecast inaccuracy causes
major rescheduling and cost difficulties for manufacturing (Ebert and Lee, 1995)
and it may impact on logistic performances such as delivery timeliness and quality
(Kalchschmidt and Zotteri, 2007). For these reasons, it is no surprise that several
surveys show accuracy as the most important criterion in selecting a forecasting
approach (Dalrymple, 1987; Mahmoud et al, 1988). Since forecasts are inevitably
inaccurate to some degree, some authors have even recommended eliminating them entirely (Goddard, 1989). Other pos-
sibilities to hedge uncertainty are inventory management as well as capacity man-
agement. Traditionally, inventory management is challenging because uncertain de-
mand and uncertain supply and/or production flow times make it necessary to hold
inventory at certain positions to provide adequate service to the customers. As a
consequence, increasing process inventories will increase customer service and rev-
enue, but this comes at a higher cost. Therefore, management has to resolve this trade-
off by identifying possibilities to decrease inventories while simultaneously improving
customer service. A well known management lever in this respect is risk pooling by
different types of centralization or standardization, e. g. central warehouses, product
commonalities, postponement strategies (see e. g. Tallon, 1993). In this way, it is
usually possible to reduce inventory costs to a large extent. However, this reduction
of inventory costs often comes with an increase in other costs, such as transportation
costs or production costs. If activities are postponed downstream in the process
by shifting the customer order decoupling point upstream in the process, the order
flow time is affected. E. g., if no additional resources are allocated to the postponed
activities, the order flow time and thus the delivery time for a customer will be in-
creased. Therefore, additional resources (labour and/or equipment) have to be taken
into account for the evaluation of such process changes and the additional produc-
tion costs have to be traded off with the reduction in inventory costs (Jammernegg
and Reiner, 2007).
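The risk-pooling argument above can be made concrete with the standard safety-stock formula. The following sketch uses invented figures (service level, demand variability, number of warehouses), not data from this study: with independent, identically distributed demands, one central warehouse needs only 1/√n of the safety stock held by n local warehouses.

```python
from math import sqrt

def safety_stock(z, sigma_demand, lead_time):
    """Safety stock at one stocking point: z * sigma_d * sqrt(lead time)."""
    return z * sigma_demand * sqrt(lead_time)

z = 1.65       # ~95% cycle service level (illustrative)
sigma = 100.0  # std. dev. of weekly demand per location (illustrative)
lt = 4         # replenishment lead time in weeks (illustrative)
n = 9          # number of local warehouses

# Decentralized: each of the n warehouses buffers its own demand.
decentralized = n * safety_stock(z, sigma, lt)

# Centralized: independent demands pool, so sigma_pooled = sqrt(n) * sigma.
centralized = safety_stock(z, sqrt(n) * sigma, lt)

print(decentralized, centralized)  # centralized needs only 1/sqrt(n) as much
```

With nine warehouses the pooled safety stock is a third of the decentralized total; as the text notes, the saving must still be traded off against higher transportation or production costs.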
The aim of this work is to study the relationship between enablers of flexibility
and performance. In particular, the paper addresses the following research
question: what is the impact of flexibility enablers (forecasting, layout, process flow
management, etc.) on companies’ operational performances? In order to analyze
this research question, we considered two different performances: efficiency and ef-
fectiveness. Analytical literature suggests that enablers may have an impact on
both performances but, as mentioned, limited empirical evidence can be found
on these relationships. Thus the theoretical model we are considering is represented
in Fig. 21.2. Empirical analysis is based on data collected in the 4th edition of the
GMRG survey. The Global Manufacturing Research Group (GMRG) collects infor-
mation regarding manufacturing practices in several countries all over the world.
Currently, full data sets have been provided by 598 companies in 13 different coun-
tries (Austria, Australia, China, Germany, Hungary, Italy, Korea, Mexico, Poland,
Sweden, Switzerland, Taiwan, USA), all belonging to manufacturing and assembly
industry.
[Fig. 21.2: Layout, Process flow management and Forecasting → Efficiency and effectiveness]
Fig. 21.2 The theoretical model
Table 21.1 synthesizes the distribution of the sample in terms of size and Table
21.2, the distribution among the different countries. The sample comprises mostly
medium and large companies, although some small companies are also present in the
dataset.
In line with previous literature, in order to measure the extent of investment in layout for flexibility,
we considered the extent to which companies have invested in: (1) cellular manu-
facturing and (2) factory automation. The two items are correlated with each other
(Pearson Correlation index is 0.44, significant at the 0.01 level). To measure the ex-
tent of investment in responsiveness, we considered the extent to which companies
have invested in: (1) Just-In-Time, (2) Manufacturing throughput time reduction,
(3) Setup time reduction and (4) Total Quality Management. The items are corre-
lated with each other (all Pearson Correlation indexes are above 0.40 and significant
at 0.001 level). Thus the constructs layout and process flow management are defined
by averaging the specific items. We assessed convergent validity and unidimension-
ality of the defined constructs with a confirmatory factor analysis model. Literature
recommends using the normed fit index (NFI) and the comparative fit index (CFI) together
in assessing model fit. NFI is 0.98 and CFI is 0.99, which lets us consider the model
as acceptable (Hu and Bentler, 1999). In addition root mean square error of approx-
imation (RMSEA) is 0.05, which suggests that the model fit is acceptable. Factor
loadings are all significant and exceed the suggested lower bound of 0.40 (Gefen et al,
2000). Cronbach’s Alpha was also measured in order to verify reliability of the con-
structs; constructs were considered reliable if Alpha’s value is above the minimum
requirement of 0.60 (Nunnally and Bernstein, 1994). To evaluate forecasting accu-
racy, companies were asked to provide the average error for a single product
over a two-month period. Thus we evaluate short-term forecast performances. In the
end, efficiency and effectiveness performances were considered. As for efficiency,
three items were examined, as we asked respondents to provide an evaluation of
the following performances compared with their competitors on a 7 point Likert
scale (1 is for “far worse than” and 7 for “far better than”): (1) direct manufacturing
costs, (2) total product costs, (3) raw material costs. As to the effectiveness per-
formances, a similar question was asked for the following: (1) product quality, (2)
delivery speed and (3) delivery as promised. It can be noted that, as it is difficult
to compare performances between companies operating within different contexts,
this research focuses on perceptual and relative measures of cost and delivery per-
formances. Thus the constructs efficiency and effectiveness are defined by averaging
the specific items. NFI is 0.99 and CFI is 1.00, which lets us consider the model as
acceptable. In addition RMSEA is 0.00, which suggests that the model fits well. Fac-
tor loadings are all significant and Cronbach’s Alpha is significantly above the
minimum requirement of 0.60. When dealing with survey data, common method
bias (CMB) can affect statistical results. As suggested by Podsakoff et al (2003),
we checked for this problem by means of confirmatory factor analyses (CFA) on
competing models that increase in complexity. If method
variance is a significant problem, a simple model (e. g., single factor model) should
fit the data as well as a more complex model (in this case a five factor model). The
hypothesized model, containing five factors yielded a better fit of the data than the
simple model (one factor model: CFI 0.56 and RMSEA 0.17; five factor model: CFI
0.98 and RMSEA 0.04). Furthermore, the improved fit of the six factor model over
the simple model was statistically significant: the change in χ 2 is 1030.40 and the
change in df is 9 (p <.001). Thus, CMB did not appear to be of concern in our analy-
sis. In order to study the research questions we adopted linear regression between the
different variables considered. In particular we ran two separate regression analyses,
considering efficiency and effectiveness as dependent variables. We also control
the regression results by adding the size of the company as a control variable; to
evaluate the contribution of the independent variables we consider the coefficient of
determination R2 increment (specifically whether it is significant or not) and check
for multicollinearity problems through the analysis of the variance inflation factor
(VIF). Table 21.3 summarizes the results of the linear regression of efficiency
performances on forecasting performances and flexibility practices.
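The regression procedure described above (a control-only model, an extended model judged by its R² increment, and a VIF check for multicollinearity) can be sketched with plain NumPy. The data below are synthetic placeholders with invented effect sizes, not the GMRG responses:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 598  # number of respondents, matching the survey; values are synthetic

# Synthetic stand-ins for the survey constructs (the real ones are 1-7 scales).
size = rng.normal(4, 1, n)      # control variable: company size
layout = rng.normal(4, 1, n)    # enabler: layout investments
flow = rng.normal(4, 1, n)      # enabler: process flow management
fc_error = rng.normal(4, 1, n)  # forecast error
efficiency = (0.2 * size + 0.3 * layout + 0.3 * flow
              - 0.25 * fc_error + rng.normal(0, 1, n))

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def vif(X):
    """Variance inflation factor: regress each column on the others."""
    return [1.0 / (1.0 - r_squared(np.delete(X, j, axis=1), X[:, j]))
            for j in range(X.shape[1])]

X_control = np.column_stack([size])                       # Model 1
X_full = np.column_stack([size, layout, flow, fc_error])  # Model 2

r2_increment = r_squared(X_full, efficiency) - r_squared(X_control, efficiency)
print(r2_increment)  # contribution of the enablers beyond the control
print(vif(X_full))   # values near 1 indicate no multicollinearity
```

A positive, significant R² increment in the second model is what the text interprets as the enablers' contribution; VIF values below 2, as reported, indicate that multicollinearity is not a concern.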
As we can see from the first regression analysis, the size of the company is posi-
tively related to its capability of being efficient. This is no surprise, since size is typ-
ically related to the ability to gain economies of scale. However, in the second
model, the variables we are considering are all significantly related to efficiency and
the model fit is significantly better. In particular both flexibility-related practices are
positively related to efficiency, thus the more companies invest in layout and respon-
siveness the more they are able to improve their performances. Coherently, forecast
error is negatively related to efficiency. Multicollinearity doesn’t seem to be a main
concern since VIF is lower than 2 for all variables. Table 21.4 provides the same
analysis for effectiveness performances.
As we can see, size is no longer significant. Quite interest-
ingly, only process flow management is related to effectiveness performances. This
provides evidence that layout investments for improving flexibility and forecasting
accuracy mainly have impact on efficiency.
21.5 Discussion
A first interesting result is that a relationship between flexibility enablers and per-
formance exists. This is important empirical evidence because we are able to show
that flexibility enablers (layout as well as process flow management) can be very
effective in gaining better performances both internally (efficiency) and externally
(effectiveness). Quite interestingly, however, this relationship is very strong with
efficiency performances, while only process flow management has an impact on ef-
fectiveness. This means that companies investing on layout should expect to achieve
better internal performances, but at the same time, they may have to pay attention to
the impact on the customer. A second interesting result relates to forecasting. In fact
we can see that forecasting accuracy impacts efficiency performances, consistent
with previous contributions (see the literature discussion above). However,
this impact disappears when effectiveness is considered, suggesting that forecasting
accuracy does not directly translate into customer satisfaction. This result is consis-
tent with previous works (see Danese and Kalchschmidt, 2008) that show that this
missing link doesn’t mean that forecasting is useless, but rather that this relation-
ship is far more complex than expected. We argue thus that the relationship between
forecasting and process flow management deserves more attention. These results do
not provide evidence of a clear preference between the two practices. Therefore, it
would be interesting to understand whether any synergic relationship exists between
the two, or whether process flow management simply compensates for forecast error, thus
limiting the negative impact that forecast error may have on performances. In the
end, these results emphasize that the impact of flexibility on performances
is very complex: specifically, different levers can have very different impacts on
performances. Thus, careful attention should be paid to what companies actually do to
increase flexibility. To conclude, we would also like to address some
limitations of this work. First of all, as we mentioned, we compared flexibility en-
ablers (i. e. practices) with forecasting performances. This approach was due to the
difficulty of defining specific measures of flexibility. The comparison is not com-
pletely fair, since we are not putting together homogeneous variables. Future studies
should take this issue into consideration, for example by looking at forecasting
practices and not simply at the outcome of this process. A second issue relates to
contingencies: we did not specifically consider any contingent factor that may influ-
ence the different variables and relationships described here. Future works should
devote attention to those factors that may change how variables are defined. We ar-
gue that the general results will not be drastically affected by these variables, also
because several degrees of freedom are left to companies in terms of which practices
they can adopt.
Acknowledgements Partial funding for this research has been provided by the PRIN 2007 fund
“La gestione del rischio operativo nella supply chain dei beni di largo consumo” as well as by the
project “Matching supply and demand - an integrated dynamic analysis of supply chain flexibility
enablers” supported by the Swiss National Science Foundation.
References
Cachon GP, Fisher ML (2000) Supply chain inventory management and the value
of shared information. Management Science 46(8):1032–1048
Chase C (1999) Sales forecasting at the dawn of the new millennium? Journal of
Business Forecasting Methods and Systems 18:2–2
Dalrymple D (1987) Sales forecasting practices: Results from a United States sur-
vey. International Journal of Forecasting 3(3):379–91
Das A (2001) Towards theory building in manufacturing flexibility. International
Journal of Production Research 39(18):4153–4177
Enns S (2002) MRP performance effects due to forecast bias and demand uncer-
tainty. European Journal of Operational Research 138(1):87–102
Fisher M, Raman A (1996) Reducing the cost of demand uncertainty through accu-
rate response to early sales. Operations Research 44(1):87–99
Gefen D, Straub D, Boudreau M (2000) Structural equation modeling and regres-
sion: Guidelines for research practice. Structural Equation Modeling 4(7)
Goddard W (1989) Let’s scrap forecasting. Modern Materials Handling 39:39
Gupta Y, Somers T (1996) Business strategy, manufacturing flexibility, and orga-
nizational performance relationships: a path analysis approach. Production and
Operations Management 5(3):204–233
Hallgren M, Olhager J (2009) Flexibility configurations: Empirical analysis of vol-
ume and product mix flexibility. Omega 37(4):746–756
Hanssens D, Parsons L, Schultz R (2003) Market response models: Econometric
and time series analysis. Kluwer Academic Publishers
Ho C, Tai Y, Tai Y, Chi Y (2005) A structural approach to measuring uncertainty in
supply chains. International Journal of Electronic Commerce 9(3):91–114
Hu L, Bentler P (1999) Cutoff criteria for fit indexes in covariance structure anal-
ysis: Conventional criteria versus new alternatives. Structural Equation Modeling
6(1):1–55
Jack E, Raturi A (2002) Sources of volume flexibility and their impact on perfor-
mance. Journal of Operations Management 20(5):519–548
Jammernegg W, Reiner G (2007) Performance improvement of supply chain pro-
cesses by coordinated inventory and capacity management. International Journal
of Production Economics 108(1-2):183–190
Kalchschmidt M, Zotteri G (2007) Forecasting practices: empirical evidence and a
framework for research. International Journal of Production Economics 108:84–
99
Kalchschmidt M, Zotteri G, Verganti R (2003) Inventory management in a multi-
echelon spare parts supply chain. International Journal of Production Economics
81:397–413
Kara S, Kayis B (2004) Manufacturing flexibility and variability: an overview. Jour-
nal of Manufacturing Technology Management 15:466–478
Ketokivi M (2006) Elaborating the contingency theory of organizations: The case
of manufacturing flexibility strategies. Production and Operations Management
15(2):215–228
Koste L, Malhotra M (1999) A theoretical framework for analyzing the dimensions
of manufacturing flexibility. Journal of Operations Management 18(1):75–93
Lee H (2002) Aligning supply chain strategies with product uncertainties. California
Management Review 44(3):105–119
Lee H, Tang C (1997) Modelling the costs and benefits of delayed product differen-
tiation. Management Science 43(1):40–53
Mahmoud E, Rice G, Malhotra N (1988) Emerging issues in sales forecasting
and decision support systems. Journal of the Academy of Marketing Science
16(3):47–61
Mentzer J, Bienstock C (1998) Sales forecasting management. Sage Beverley Hills,
CA
Nunnally J, Bernstein I (1994) Psychometric theory. New York, NY
Pagell M, Krause D (1999) A multiple-method study of environmental uncertainty
and manufacturing flexibility. Journal of Operations Management 17(3):307–325
Podsakoff P, MacKenzie S, Lee J, Podsakoff N (2003) Common method biases in
behavioral research: A critical review of the literature and recommended reme-
dies. Journal of Applied Psychology 88(5):879–903
Reichhart A, Holweg M (2007) Creating the customer-responsive supply chain:
a reconciliation of concepts. International Journal of Operations & Production
Management 27(11):1144–1172
Ritzman L, King B (1993) The relative significance of forecast errors in multistage
manufacturing. Journal of Operations Management 11(1):51–65
Slack N (1987) The flexibility of manufacturing systems. International Journal of
Operations & Production Management 7(4):35–45
Slack N (2002) Operations Strategy. Prentice Hall
Stevenson M, Spring M (2007) Flexibility from a supply chain perspective: defini-
tion and review. International Journal of Operations & Production Management
27(7):685–713
22 Threats of Sourcing Locally Without a Strategic Approach

Ruggero Golini and Matteo Kalchschmidt

Abstract This paper analyses the impact of local sourcing on lead time perfor-
mances. In particular attention is devoted to the effect of choosing to source locally
without having made a proper strategic analysis of the purchasing process. Analyses
are based on data provided by the IMSS research project regarding more than 500
companies in different countries around the world. Results show that local sourcing
can lead to poor performances if it is adopted without a clear sourcing strategy.
22.1 Introduction
During the last twenty years companies have witnessed a considerable expansion
of supply chains into international locations (Taylor, 1997; Dornier et al, 1998).
This growth in globalization has motivated both practitioner and academic interest
in global supply chain management (Prasad and Babbar, 2000).
Looking only at the upstream part of the supply chain, global sourcing (i.e. the
management of supplier relationships from a global perspective) has been considered
and analyzed (e.g., Kotabe and Omura, 1989; Murray et al, 1995). One major issue
regarding global sourcing is why companies extend their relationships internation-
ally and to what extent this practice contributes to increasing their competitive advan-
tage (e.g., Alguire et al, 1994; Womack and Jones, 1996; Trent and Monczka, 2003).
Bozarth et al (1998) identify different motivators for global sourcing: offset require-
ments, currency restrictions, local content and countertrade, lower prices, quality,
technology access, access to new markets, shorter product development and life cy-
cles, competitive advantage. In some cases, internal factors (e.g., company image)
can be the principal motivators (Alguire et al, 1994).
However, sourcing globalization is still not widespread (Trent and Monczka, 2003;
Cagliano et al, 2008). This is because global sourcing can imply longer supply lead
times, which can lead to higher inventory levels and other hidden costs (Handfield,
1994). Moreover, in a global sourcing context it becomes more difficult to have
an integrated and efficient supply chain (Das and Handfield, 1997). Nevertheless,
sourcing locally can lead to a loss of competitiveness if other companies are able
to efficiently exploit globalization opportunities. In fact, thanks to experience and
investment in the supply chain, companies can achieve better performances, also on
lead times and inventories, even with a globalized supply base (Bozarth et al, 1998;
Golini and Kalchschmidt, 2008). On the other side, local sourcing allows companies
to invest in JIT with suppliers, thus improving procurement performances.
In fact many companies prefer to source locally, and in some cases they invest in
Just-in-Time practices (which require physical proximity) to keep inventories under
control (Das and Handfield, 1997; Prasad and Babbar, 2000).
Actually, several studies have failed to detect any significant impact of global
supply chains (Kotabe and Omura, 1989; Steinle and Schiele, 2008), and specifically of
global sourcing, on general business success. Only weak evidence has
been found: it seems that global sourcing can improve product and process inno-
vation, but it seems to have no impact on strategic performances (Kotabe, 1990;
Murray et al, 1995). Companies, however, have to carefully select their globaliza-
tion strategy, using, for example, hybrid global/local approaches according to the
type of goods purchased (Steinle and Schiele, 2008). In the rather developed liter-
ature on global or local sourcing, however, there are limited empirical researches
regarding the impact of this practice on lead times. This paper aims at contributing
to this issue, by providing evidence on the relationship between global and local
purchasing, supply chain management and lead time performances.
The remainder of the paper is structured as follows. In the next section literature
regarding the relationship between global sourcing, supply chain management and
lead time performances is reviewed. Next, research objectives and
methodology are detailed and empirical analyses are described. Discussion of
empirical results follows and, in the end, we draw some conclusions and suggest
potential future developments.
Lead times are a major concern for suppliers, as these impact directly on customers’
performances: lower lead times induce lower inventories and allow greater flexi-
bility. Also from a supply chain perspective, lead time reduction positively contributes
in reducing the bullwhip effect (Chen et al, 2000), making the entire supply chain
more efficient. On the other side, competitive pressures (lower costs, higher qual-
ity, innovativeness) drive many companies in scouting suppliers abroad (Alguire
et al, 1994; Ettlie and Sethuraman, 2002; Frear et al, 1992; Smith and Reece, 1999;
Trent and Monczka, 2003; Birou and Fawcett, 1993; Womack and Jones, 1996).
This practice may intuitively cause higher procurement lead times mainly because
of geographical distances. This is confirmed by Handfield (1994): among the top
five cost problems experienced in using international sources are long lead
times and inventory costs. The same study also shows that international
sourcing systematically causes less on-time deliveries and longer lead
times.
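The link between lead time and the bullwhip effect cited above can be quantified with the lower bound derived by Chen et al (2000) for an order-up-to policy with a p-period moving-average demand forecast, Var(q)/Var(d) ≥ 1 + 2L/p + 2L²/p². The lead-time figures below are illustrative, not taken from the paper:

```python
def bullwhip_lower_bound(lead_time, p):
    """Chen et al. (2000): lower bound on Var(orders)/Var(demand) for an
    order-up-to policy with a p-period moving-average demand forecast."""
    return 1 + 2 * lead_time / p + 2 * lead_time ** 2 / p ** 2

p = 10                      # demand observations used in the forecast
local_lt, global_lt = 2, 8  # illustrative procurement lead times (weeks)

print(bullwhip_lower_bound(local_lt, p))   # 1.48 with the short lead time
print(bullwhip_lower_bound(global_lt, p))  # 3.88 with the long lead time
```

Quadrupling the lead time in this sketch more than doubles the minimum variance amplification, which is why lead time reduction makes the entire chain more efficient.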
However, the problem is more complex, as at least three aspects of the system
dynamics have to be considered. First of all, companies may compensate a higher pro-
curement lead time with higher inventories, thus leaving the lead time for the
customer unaffected. The second aspect is the position of the decoupling point: companies can
hold higher material inventories if they produce make-to-stock or make-to-order;
for engineer-to-order companies, instead, the procurement lead time impacts
the total lead time directly. Moreover, the quality level of the supply may affect the
manufacturing lead time, as scrap and rework can make it longer. Again, companies
can react to this with higher work-in-progress and finished goods inventories if their
production model allows it (i.e. make-to-stock).
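The first compensation mechanism can be illustrated with the textbook safety-stock formula for uncertain demand and uncertain lead time; the local and global figures below are invented for illustration, not data from the study:

```python
from math import sqrt

def safety_stock(z, mean_demand, sd_demand, mean_lt, sd_lt):
    """Safety stock with uncertain demand and uncertain lead time:
    z * sqrt(LT * sigma_d^2 + d^2 * sigma_LT^2)."""
    return z * sqrt(mean_lt * sd_demand ** 2 + mean_demand ** 2 * sd_lt ** 2)

z = 1.65              # ~95% cycle service level (illustrative)
d, sd_d = 50.0, 15.0  # daily demand: mean and std. dev. (illustrative)

local_ss = safety_stock(z, d, sd_d, mean_lt=5, sd_lt=1)    # nearby supplier
global_ss = safety_stock(z, d, sd_d, mean_lt=40, sd_lt=8)  # overseas supplier

print(local_ss, global_ss)  # the global option needs several times the buffer
```

Because both the mean and the variability of the lead time grow with distance, the overseas option in this sketch requires several times the safety stock of the local one: exactly the inventory cost that global sourcing must offset elsewhere.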
Finally supply chain practices have to be considered. In fact literature suggests
reducing procurement lead time by means of investments in supply chain integration
with suppliers (e.g., Droge et al, 2004; Fliedner, 2003). Many contributions (e.g.,
Frohlich and Westbrook, 2001) show how a higher level of integration provides
better operational performances (also in terms of lead time), thus suggesting that
firms should invest in this direction.
In this area, researchers have identified two complementary ways in which sup-
ply chain integration can be applied (Cagliano et al, 2005; Vereecke and Muylle,
2006): information sharing and system coupling. The first regards exchanging in-
formation on market demand, inventories, production plans, delivery dates, etc. Lee
et al (1997) provide several examples of information sharing as an effective instru-
ment to counter the bullwhip effect. Within the information sharing domain, significant
importance has been given to the use of electronic tools for information exchange
and integration. The adoption of electronic communication channels between firms
has been a relevant issue for several years (e.g., Malone et al, 1987), and the liter-
ature contains several contributions on the role of ICT in SCM from different per-
spectives (for a review, see Gunasekaran and Ngai, 2005; Power and Singh, 2007).
More recently, literature has focused on Internet-based electronic communication
(i.e. eBusiness). According to this literature, purchasing performances can be im-
proved through the adoption of internet tools (McIvor et al, 2000), and the flow of
information along the supply chain can be easily transferred, which helps companies
to be more responsive (e.g., Naylor et al, 1999; Aviv, 2001; Devaraj et al, 2007).
The second area of integration, system coupling, consists of coordinating
physical activities through mechanisms such as VMI, CPFR or JIT-Kanban to obtain a
smooth material flow and a seamless supply chain (see for example Childerhouse
et al, 2002; Disney and Towill, 2003). From this point of view, an integrated supply
chain offers the opportunity for firms to compete on the basis of speed and flexibility,
while at the same time holding minimum levels of inventory in the chain (Power,
2005). In particular some studies have highlighted that JIT sourcing requires specific
conditions (e.g., frequent and fast deliveries, small lots, etc.) that can be difficult to
meet in an international environment. So, even if some efficiency can be achieved
through JIT in a global sourcing context, the results are not comparable to
what can be gained at the domestic level (Das and Handfield, 1997).
Finally, some authors have highlighted the importance of designing a proper
sourcing process in all its subphases and applying proper tools (Zeng, 2003; Pe-
tersen et al, 2005; Quintens et al, 2005; Gelderman and Semeijn, 2006). All this
evidence suggests that companies with global sourcing processes should invest in supply
chain management and integration, e.g. to keep lead times under control (Bozarth
et al, 1998).
In conclusion, literature highlights how companies can improve operational per-
formances (and lead time ones specifically) through supply chain integration, i.e.
information sharing and system coupling. Nevertheless, integration, and especially
system coupling, can be difficult to implement in a global sourcing context be-
cause of suppliers’ distance. This can make it more difficult for companies to control
the side effects of sourcing globalization (mainly longer lead times), which may
neutralize strategies aimed at lower costs.
The objective of this paper is to explore the relationship between global sourcing,
supply chain investments and lead time performances.
Literature suggests that global sourcing can have an impact on lead time perfor-
mances; however, this relationship is not completely straightforward. On the one hand,
global sourcing means that companies purchase from suppliers that
are farther from the plant than in a local sourcing sit-
uation, which should increase the procurement lead time, mainly due to transporta-
tion over longer distances. On the other hand, companies sourcing locally have to choose
from a limited set of potential suppliers, while companies adopting global sourcing
are able, at least potentially, to choose the best suppliers and thus
to gain better performances. For this reason, the first research question this work
wants to address is:
RQ1 What is the impact of local sourcing on lead time performance?
Some companies source locally simply because they do not consider other
alternatives, either because they are too small or because they rely on consolidated
relationships with local suppliers. The impact of local sourcing on performance
may also not be completely straightforward because some companies choose to
source locally through a clear and rational analysis, while others choose to purchase
locally merely because that is enough to achieve their business goals. For this reason
we also consider how the choice of local sourcing is made, i.e. whether or not it is
the result of a structured analysis. We argue that companies that source locally
based on a strategic analysis of their context achieve better performance than those
that simply choose to purchase locally without taking strategic issues into account.
Thus we formulate the following research question:
RQ2 What is the impact of a strategic approach to local sourcing on lead time performance?
The literature suggests that improvements in procurement lead time can be achieved
by leveraging supply chain investments such as JIT, information sharing with
suppliers, coupling between customer and supplier production systems, etc. However,
limited evidence can be found regarding the extent to which these investments
are related to local sourcing approaches. In particular, some investments (e.g., JIT)
are positively influenced by local sourcing, but previous works (e.g., Golini and
Kalchschmidt, 2008) have shown that other investments (e.g., purchasing process
improvement programs) are negatively related to local sourcing.
Thus our third research question is:
RQ3 What is the impact of local sourcing on supply chain investments that influence
lead time performance?
In order to investigate the above research questions, data were collected
within the fourth edition of the International Manufacturing Strategy Survey (IMSS),
a research project carried out in 2005 by a global network. This project, originally
launched by London Business School and Chalmers University of Technology, studies
manufacturing and supply chain strategies within the assembly industry (ISIC
28-35 classification) through a detailed questionnaire that is administered
simultaneously in many countries by local research groups; responses are gathered in a
unique global database (Lindberg and Trygg, 1991).
The sample consisted of 660 companies from 21 countries, with an average
response rate of 34%. The usable sample included 620 companies, which provided
enough information for the purpose of this study. Among these companies we limited
our analyses to those that do not rely on engineer-to-order manufacturing systems,
since such companies face different challenges from assemble/make-to-order or
make-to-stock ones (e.g. inventories play a different role). The distribution of the
sample in terms of country, size and ISIC code is shown in Table 22.1 and Table 22.2.
In order to measure the extent of localization of sourcing activities, we collected
information on the percentage of purchases made inside the country where the plant
is based. To evaluate the extent to which companies decide to purchase locally for
strategic reasons, we collected information on the extent to which companies
consider the proximity of suppliers a key element in selecting a supplier. This item
is measured on a 1-5 Likert scale, where 1 equals "not at all" and 5 equals
"to a great extent".
282 Ruggero Golini and Matteo Kalchschmidt
Table 22.1 Sample distribution in terms of country (a) and size (b) - Small: fewer than 250
employees, Medium: 251-500 employees, Large: more than 500 employees
22.4 Results
1 The reliability of these latent variables was checked by controlling Cronbach's alpha values
(over 0.6 for all variables) and the items' factor loadings (over 0.5 for all variables)
The cluster analysis was performed on two variables: the degree of local sourcing
(measured as the percentage of purchases within the country where the plant is
located - scale from 0% to 100%) and the extent to which physical proximity is
important in supplier selection. The analysis of the dendrogram indicates that the
best solution is three or four clusters. For interpretability's sake we selected three
clusters, because the two variables are positively correlated (sig. 0.000); by looking
at how the points are jointly distributed in a scatter diagram it is possible to
identify a priori these three high-density areas:
• Low local sourcing (i.e. global sourcing) and relatively low importance attached
to physical proximity (named Globals in the following).
• High local sourcing and high importance attached to physical proximity (named Patriots).
• High local sourcing and low importance attached to physical proximity (named Idlers).
Companies with global sourcing and high importance attached to physical proximity
are, as one could expect, very few, so we grouped them with the Globals.
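As a rough illustration, the three groups can be expressed as a rule on the two clustering variables. The cut-offs below (50% local purchases, mid-scale proximity importance) are assumptions made for this sketch only; the paper derives the clusters empirically with a two-step procedure, not fixed thresholds:

```python
# Illustrative rule only: the thresholds are assumed; the chapter's clusters
# come from a two-step cluster analysis, not fixed cut-offs.
def classify(local_sourcing_pct, proximity_importance):
    """local_sourcing_pct in 0-100; proximity_importance on a 1-5 scale."""
    if local_sourcing_pct < 50:
        return "Globals"          # mostly foreign purchases
    if proximity_importance >= 3:
        return "Patriots"         # local sourcing as a deliberate choice
    return "Idlers"               # local sourcing without strategic intent

# The cluster averages from Table 22.3 fall into the expected groups:
print(classify(26.1, 2.5))   # Globals
print(classify(81.7, 3.6))   # Patriots
print(classify(78.9, 1.8))   # Idlers
```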
Given this data structure, when we perform a two-step cluster analysis (log-likelihood
distance on standardized variables) specifying three clusters, we obtain the results
reported in Table 22.3.
Table 22.3 Cluster analysis results. For each cluster the average values and the significant
dissimilarities with respect to the other clusters are reported (** Mann-Whitney U sig. < 0.000)
N. Local Sourcing Physical proximity
1 Patriots 233 81.725 (3)** 3.624 (2,3)**
2 Idlers 105 78.933 (3)** 1.762 (1,3)**
3 Globals 189 26.060 (1,2)** 2.476 (1,2)**
Average 56.558 2.746
In order to check the reliability of the defined clusters, we tested differences
among clusters on several contingency variables related to company size, business
objectives, manufacturing globalization, industrial sector and position of the
decoupling point (see Table 22.4 for details). There are no differences related to
business objectives or industrial sector. Global sourcers are larger companies and
have a more globalized manufacturing network than local ones. This is quite intuitive,
even if it is not always confirmed by the literature (e.g., Cavusgil et al, 1993;
Quintens et al, 2005). Finally, it is interesting to notice that companies sourcing
locally have higher direct salary/wage costs and lower direct
materials/parts/components costs. This suggests that Globals, since direct materials
and components account for a higher share of their total costs, are more driven to
scout for the best and most convenient suppliers around the world. Given the
relevance of these purchases, longer lead times or higher supply chain investments
may be acceptable. We can summarize the results by stating that no relevant
differences arise from this analysis, and thus we can argue that the clusters are not
biased. After the contingency analysis, we looked for differences among clusters on
lead time and other related performances and, separately, on SC practices. Since the
analyzed variables are not normally distributed (based on the Kolmogorov-Smirnov
test), we adopted non-parametric tests.
22.4.2 Performances
As far as performance is concerned, we took into account lead time performance
(procurement, manufacturing and delivery lead times), manufacturing conformance,
throughput time efficiency, scrap and rework costs, raw materials/components
inventory, WIP and finished products inventory. In particular, we made two-by-two
comparisons among the clusters using the Mann-Whitney U test. Results are reported
below for each comparison.
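A minimal sketch of the Mann-Whitney U statistic underlying these pairwise comparisons; the lead time samples are invented, and in practice a statistics package (e.g. scipy.stats.mannwhitneyu) would also supply the significance level:

```python
# Rank-based computation of the Mann-Whitney U statistic (ties get average
# ranks). The sample lead times (in days) are invented for illustration.
def mann_whitney_u(a, b):
    values = sorted(a + b)
    rank, i = {}, 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        rank[values[i]] = (i + 1 + j) / 2   # average rank of the tied block
        i = j
    r_a = sum(rank[v] for v in a)           # rank sum of the first sample
    return r_a - len(a) * (len(a) + 1) / 2

procurement_a = [5, 7, 8, 12, 15]   # e.g. one cluster's lead times
procurement_b = [9, 11, 14, 16, 20]
print(mann_whitney_u(procurement_a, procurement_b))   # 5.0
```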
Globals vs Idlers. Looking at Table 22.5, Globals have superior performance
compared to Idlers on procurement lead time, manufacturing lead time, delivery
speed and manufacturing conformance. This suggests that even if Globals have more
distant suppliers, they are still able to achieve competitive lead times without
carrying higher inventories. The shorter manufacturing lead time can be partly
explained by the higher manufacturing conformance, even if there is no difference in
scrap and rework costs. Quite interestingly, there is no difference in throughput
time efficiency.
Globals vs Patriots. Patriots do not show particular differences from Globals apart
from superior throughput time efficiency (Table 22.6). This shows that a strategic
approach to local sourcing can provide the opportunity to remain competitive on lead
time performance (not only procurement, but also manufacturing and delivery lead
times). On the other side, global sourcing does not always imply worse lead times.
Patriots vs Idlers. Finally, we compared Patriots with Idlers (Table 22.7). Results
show almost no differences between the clusters apart from the superior throughput
time efficiency of the Patriots. Patriots therefore probably have reasons other
than lead times for selecting suppliers locally.
Table 22.5 Average rank of the analyzed clusters on the different performances
Globals Idlers Sig.
Procurement lead time 130 113 0.043
Manufacturing lead time 132 107 0.004
Delivery speed 135 115 0.027
Manufacturing conformance 136 118 0.045
Throughput Time Efficiency 134 138 0.720
Scrap and rework costs 149 163 0.217
Material/ components inventory 160 158 0.814
WIP inventory 153 172 0.077
Finished products inventory 159 158 0.940
Table 22.6 Average rank of the analyzed clusters on the different performances
Globals Patriots Sig.
Procurement lead time 163 153 0.252
Manufacturing lead time 165 148 0.061
Delivery speed 167 159 0.394
Manufacturing conformance 170 161 0.311
Throughput Time Efficiency 151 194 0.000
Scrap and rework costs 183 203 0.077
Material/ components inventory 204 186 0.121
WIP inventory 192 203 0.350
Finished products inventory 203 187 0.146
Table 22.7 Average rank of the analyzed clusters on the different performances
Idlers Patriots Sig.
Procurement lead time 104 112 0.290
Manufacturing lead time 103 113 0.194
Delivery speed 106 118 0.142
Manufacturing conformance 110 119 0.251
Throughput Time Efficiency 99 123 0.008
Scrap and rework costs 131 133 0.793
Material/ components inventory 143 131 0.262
WIP inventory 139 131 0.411
Finished products inventory 138 129 0.344
We also took into consideration the supply chain management practices put in place
by companies, namely: just-in-time adoption, information sharing, system coupling,
eBusiness and purchasing process improvements, as defined in the Methodology
chapter. Following the same approach as in the previous section, we compared the
clusters two-by-two on the different practices.
22 Threats of Sourcing Locally Without a Strategic Approach 287
Globals vs Idlers. Globals use information sharing, system coupling and eBusiness
more than Idlers, while there are no significant differences in just-in-time and
supply chain investments (Table 22.8).
Globals vs Patriots. Globals tend to adopt JIT less than Patriots, while
there are no significant differences on the other dimensions (Table 22.9).
Patriots vs Idlers. Patriots use all supply chain practices to a greater extent
than Idlers. The only exception is supply chain investments, where no difference
arises (Table 22.10).
Table 22.8 Average rank of the analyzed clusters on the different supply chain practices
Globals Idlers Sig.
Just in time 156 155 0.980
Information Sharing 174 144 0.007
System Coupling 178 139 0.001
eBusiness 170 134 0.001
Supply chain investments 164 144 0.069
Table 22.9 Average rank of the analyzed clusters on the different supply chain practices
Globals Patriots Sig.
Just in time 180 208 0.013
Information Sharing 207 203 0.739
System Coupling 211 200 0.343
eBusiness 204 201 0.816
Supply chain investments 206 187 0.084
Table 22.10 Average rank of the analyzed clusters on the different supply chain practices
Idlers Patriots Sig.
Just in time 124 144 0.043
Information Sharing 130 153 0.028
System Coupling 127 153 0.010
eBusiness 121 152 0.003
Supply chain investments 135 139 0.648
22.5 Discussion
22.6 Conclusion
This paper contributes to the understanding of the impact of global and local
sourcing on both companies' performance and behavior. The paper provides evidence
of the importance of carefully choosing a sourcing strategy and of considering how it
can be supported by leveraging supply chain management. Finally, we would like
to highlight the limitations of this work. First of all, the statistical analyses were
based on a specific sample; future replications of this study should therefore be
considered in order to verify the reliability of the results provided.
Second, due to space limitations, we did not consider why companies choose to
adopt different sourcing approaches. We argue that this element would help in
better understanding differences and similarities among the considered clusters.
Third, we clustered companies based on the extent of local or global sourcing.
However, some companies adopt hybrid approaches between these two alternatives
according to the type of good purchased. This aspect has not been considered here,
and future studies should consider how companies manage hybrid configurations.
Acknowledgements Financial support for this research was provided by the PRIN 2007 fund “La
gestione del rischio operativo nella supply chain dei beni di largo consumo”.
References
Womack J, Jones D (1996) Lean Thinking: Banish Waste and Create Wealth in Your
Corporation. Simon & Schuster, New York, NY
Zeng A (2003) Global sourcing: Process and design for efficient management. Sup-
ply Chain Management: An International Journal 8(4):367–379
Chapter 23
Improving Lead Times Through Collaboration
With Supply Chain Partners: Evidence From
Australian Manufacturing Firms
Prakash J. Singh
Abstract Whilst it is well recognized that lead time reductions can be of strategic
benefit to firms, most existing methods for generating these outcomes are seen as
being too complex and difficult to implement. In this paper, the possibility of using
supply chain collaboration for the purpose of reducing lead times was examined.
Data from a study involving 416 Australian manufacturing plants showed that there
were strong albeit indirect links between collaborative practices that firms develop
with key customers and suppliers, and lead time performance. From this, it is sug-
gested that firms consider, amongst other strategies, developing strong collabora-
tive relationships with their trading partners if they wish to reduce lead times.
23.1 Introduction
Benefits of reducing lead times associated with new product development and man-
ufacturing are well documented with publications such as Quick Response Manufac-
turing (Suri, 1998), Competing Against Time (Stalk and Hout, 1990) and Clockspeed
(Fine, 1998) bringing sharp attention to this issue. However, a significant challenge
continues to persist. This challenge is in how firms can achieve significant reduc-
tions in lead time. Although the literature provides some prescriptions on how lead
times can be reduced, many firms continue to struggle to reduce lead times. The
existing ideas appear to be too steeped in mathematical modeling, leading managers
to believe that lead time reduction methods are too difficult and costly to implement
(Suri, 1998; De Treville et al, 2004; Tersine and Hummingbird, 1995; Suri, 1999).
There is a need to develop methods for lead time reduction that are simpler and
practically achievable. One such idea emanates from the supply chain management
(SCM) body of knowledge. More specifically, the strong emphasis that SCM the-
Prakash J. Singh
Department of Management & Marketing, University of Melbourne-Parkville, 3010, Australia
e-mail: pjsingh@unimelb.edu.au
Lead time can generally be defined as the time period between the initiation of a
task and its completion. More specific definitions depend on context. For example,
in the new product development area, lead time is conceived in terms of the time it
takes to identify a market need, design and test the product, and develop the pro-
cesses for manufacturing (Tennant and Roberts, 2001; Yazdani and Holmes, 1999).
Manufacturing lead time, on the other hand, is the “elapsed time between releasing
an order and receiving it” (Hsu and Lee, 2008, p.1). A number of different terms
are used to describe manufacturing lead time. Examples include “manufacturing
throughput time” (Johnson, 2003) and “order-to-delivery time” (Zhang et al, 2007).
From a SCM perspective, supply chain lead time is the “time spent by the supply
chain to process the raw materials to obtain the final products and deliver them to
the customer” (Bertolini et al, 2007, p.199).
For the purpose of this study, only new product development and manufactur-
ing lead times will be considered. There are two reasons for this. Firstly, these lead
times are more directly under the control of firms; supply chain lead times can often
be too difficult for individual firms to control and influence. Secondly, the interest is
in more closely examining the concepts that come under “time-based competition”
(Stalk and Hout, 1990; Stalk, 1988). These include “fast-to-market” and “fast-to-
product”. Firms that compete with fast-to-market strategy emphasize reductions in
new product development lead times. On the other hand, fast-to-product firms em-
phasize speed in responding to customer demands for existing products. This in-
volves reducing the time it takes to manufacture products (throughput time) as well
as the ability to reduce the time between taking a customer’s order and actually
delivering the product (delivery speed).
Studies have shown that lead times can involve tremendous amounts of waste,
with about 85 percent of time spent waiting between value-adding steps (Holweg
and Pil, 2004). There are therefore many benefits to reducing lead times. Ceteris
paribus, firms that are able to reduce new product development lead times can gain
a market edge over others that are not able to do the same (Tennant and Roberts,
2001). Also, these firms move along the learning curve faster than their competition.
Both these factors raise barriers to competitors. Methods for reducing this form of
lead time include the application of concurrent engineering practices (Yazdani and
Holmes, 1999; Wilding and Yazdani, 1997), careful project management policies
(Tennant and Roberts, 2001) and formal stage-gate processes (Cooper, 1995).
Similarly, reductions in manufacturing lead times can generate numerous benefits,
including lower inventory levels, improved quality, reduced forecasting errors and
increased flexibility (Johnson, 2003; Ouyang et al, 2007). These in turn improve
customer service and satisfaction levels, which contribute to the competitive
advantage accruing to firms (Tersine and Hummingbird, 1995). Practical methods for
generating manufacturing lead time reductions include process reengineering
(Bertolini et al, 2007), reducing product variety (Zhang et al, 2007), implementing
lean and JIT manufacturing practices (De Treville et al, 2004; Ouyang et al, 2007),
using cellular manufacturing arrangements (Suri, 1998), and using ICT tools
(Bertolini et al, 2007).
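The 85 percent waiting figure maps directly onto the throughput-time-efficiency measure used later in the chapter (value-adding time as a share of total lead time). A quick check with assumed numbers:

```python
# Throughput time efficiency = value-adding time / total throughput time.
# The hours below are assumed; Holweg and Pil's ~85% waiting share implies
# an efficiency of roughly 15%.
value_adding_hours = 6.0
waiting_hours = 34.0          # time queued between value-adding steps
efficiency = value_adding_hours / (value_adding_hours + waiting_hours)
print(f"{efficiency:.0%}")    # 15%
```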
A number of researchers have noted that many of the methods suggested in the
literature have not been taken up and implemented in firms for the purpose of
reducing lead times (Suri, 1998; De Treville et al, 2004; Tersine and Hummingbird,
1995; Suri, 1999). Reasons include the complexity of some of these ideas and the
perceived difficulties associated with them. Hence, there is a need to develop
simpler and easier-to-implement ideas for reducing new product development and
manufacturing lead times. Ideas from the SCM area have some potential.
Collaboration involves firms working together and sharing resources and benefits,
with the expectation of generating joint improvements in customer service, and
achieving competitive advantage (Simatupang and Sridharan, 2002; Foster, 2005).
SCM proponents claim that firms that collaborate with their customers and suppliers
are able to generate many benefits, including reduced new product development and
manufacturing lead times (Christopher, 2005; Lee, 2004). The empirical evidence
for these claims is in the form of case studies and survey type multi-organizational
cross-sectional studies. For example, case studies of high profile firms such as Nokia
(Heikkilä, 2002) and Toyota (Stalk, 1988) demonstrate the ability of these firms to
achieve lead time reductions through, inter alia, pursuing close collaborative ar-
rangements with their trading partners. In a similar vein, survey based studies show
that collaboration has a positive impact on the operational (including lead time re-
ductions) and financial performance of firms (Vickery et al, 2003; Wisner, 2003).
Despite collaboration's inherent attractiveness, many firms have found that it is not
easy to sustainably develop these types of relationships. There appear to
be two main reasons for this. Firstly, collaboration is a relatively 'hard' concept to
implement because it requires much more effort from firms than other forms of
inter-organizational relationships such as cooperation and coordination. Secondly,
firms have predominantly focused on the technical aspects of collaboration, in the
belief that these systems would provide the infrastructure for strong relationships
(Narasimhan and Kim, 2001; Patterson et al, 2003). As such, large investments
have been made in establishing ICT systems (Bendoly and Kaefer, 2004).
Relatively less investment has gone into establishing appropriate social systems
(Burgess and Singh, 2006; Nahapiet and Ghoshal, 1998; Tsai and Ghoshal, 1998). But, as
Spekman et al (1998) and Golicic et al (2003) contend, collaboration has as much to
do with the technical system as with the social system. The relative neglect of
social issues has led to firms not achieving the purported benefits of collaboration.
For firms to be able to establish and benefit from collaboration, the above-mentioned
issues need to be resolved. We attempt to (at least partially) address this in this paper.
We focus on one key element of collaboration: the nature and extent of integration
that firms need to enter into with their trading partners in order to realize significant
benefits in the form of lead time reductions. Frohlich and Westbrook (2002)
separate this into supply and demand integration. Supply integration includes JIT
(frequent, small-lot) delivery, a small supply base, suppliers selected on the basis of
quality and delivery performance, long-term contracts with suppliers and the
elimination of paperwork. Demand integration, on the other hand, includes increased
access to demand information throughout the supply chain to permit rapid and
efficient delivery, coordinated planning and improved logistics communication.
23.2.3 Hypotheses
Based on the above discussion and the purpose of this study, we hypothesize that
firms that are able to develop high levels of collaboration (through integration) with
supply chain partners will be able to generate significant reductions in both new
product development and manufacturing lead times. Presented formally, it is
hypothesized that:
Data for the empirical testing of the above hypotheses were obtained through a postal
survey targeting firms in the manufacturing industry in Australia. The JASANZ
Register (Standards, 2004) was used for selecting the sample of firms. The unit of
analysis was the plant level. A target list of 1,053 unique plants was selected from
the database. The respondents were senior managers (general, operations, quality,
production, etc.). The survey was carried out in two stages. The final usable
response rate was 41 percent (n = 416). The study participants were predominantly
small plants, with almost half having fewer than 100 employees and under $A10 million
in annual revenue. Also, the plants were mainly from the machinery and equipment
manufacturing (26 percent) and metal products manufacturing (17 percent) industry
sub-categories.
The items used in this study were drawn from a measurement instrument developed
for a large study of quality and operations management practices (Singh,
2003). Since this instrument was original in many respects, a full set of tests for
reliability and validity was performed to ensure that the various types of errors were
within acceptable levels. These included a pretest with eight practitioners and
academics and a pilot test with 21 firms. A total of 146 items were present, each
measured on a five-point Likert scale. For this paper, a subset of the items relevant
to the key constructs of internal organizational processes, relationships with
suppliers, relationships with customers, and lead time performance was used. These
constructs, their associated items, and the scales used to measure the items are
shown in Table 23.1.
Content Validity. The lists of items assigned to the constructs were based on litera-
ture (summarized in the Literature Review section earlier). This provided evidence
that the items associated with the four constructs had sufficient grounding in rele-
vant literature and therefore had content validity.
Correlation Coefficients and Descriptive Statistics. The inter-item Pearson correla-
tion coefficients were low to moderate in magnitude, suggesting that multicollinear-
ity related problems were not present, with all coefficients being less than the thresh-
old value of 0.9 (Hair, 2006). Further, the mean and standard deviation values of
all the items suggested that the item measures did not suffer from excessive non-
normality.
Reliability. The Cronbach's alpha reliability coefficients for the internal
organizational processes, customer relationship, supplier involvement and firm
performance constructs were 0.830, 0.654, 0.634 and 0.774 respectively. These
coefficients exceeded the minimum threshold of 0.6 for acceptable reliability
(Nunnally and Bernstein, 1978) for all the constructs. Therefore, the selected items
is a structural equation model (SEM) where the constructs are all co-varied with
each other. The SEM analysis was conducted using the AMOS 5.0 (Arbuckle and
Wothke, 2004) software package. The maximum likelihood (ML) estimation technique
was used to fit the model to the data because it is a reasonably scale- and
distribution-free procedure (Hair, 2006). A number of commonly reported indices
for assessing the goodness-of-fit of SEM models were obtained for the CFA model.
These were as follows: χ2(246) = 876 with p-value < 0.001; normed
χ2 = 3.567; goodness-of-fit index (GFI) = 0.844; adjusted goodness-of-fit index
(AGFI) = 0.810; Tucker-Lewis index (TLI) = 0.764; comparative fit index (CFI) =
0.790; root mean square residual (RMR) = 0.045; and root mean square error of
approximation (RMSEA) = 0.079. The χ2 fit measure tends to reject acceptable
models when sample sizes are greater than 200, and so it was disregarded.
For all other fit indices, applying the cutoff criteria proposed by researchers (Hair,
2006; Marsh et al, 2004; Sharma et al, 2005; Schermelleh-Engel et al, 2003), the
'acceptable' descriptor reasonably accurately captures the level of fit obtained
here. The parameters associated with the CFA showed that the convergent validity
of the constructs was generally supported; all the estimated factor loadings of
items on constructs were significant (at p-values < 0.001), the signs were all positive
and only one was below 0.4, the minimum being +0.346. Further, the
squared multiple correlation coefficient values showed that the variances of the items
explained by their constructs were reasonably high (the average being 34 percent). As
for discriminant validity, correlations between the constructs were mostly moderate,
suggesting that items assigned to one construct did not load highly on others.
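The reported RMSEA can be checked against the standard formula, using the χ2 value, degrees of freedom and sample size (n = 416) reported in the text:

```python
# RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))), a standard SEM fit
# formula, computed from the values reported in the chapter.
import math

chi2, df, n = 876.0, 246, 416
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
print(round(rmsea, 3))   # 0.079, matching the reported index
```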
As with the CFA model, the SEM analysis procedure was used to assess the hypothesized
relationships in the theoretical model presented in Fig. 23.1. The fit indices for
the hypothesized model were as follows: χ2(df = 246) = 876 with p-value = 0.000;
χ2/df = 3.567; GFI = 0.844; AGFI = 0.810; TLI = 0.764; CFI = 0.790; RMR =
0.045; and RMSEA = 0.079. These indices suggest that the theoretical model has
an adequate level of empirical support.1
1 To further confirm this result, a χ2 difference test is traditionally used, whereby the χ2 value
for the hypothesized model is compared with that of the CFA model (Anderson and Gerbing, 1988).
However, in our case the number of parameters in the hypothesized model is the same as in the
CFA, resulting in all fit indices being identical for the two models. Hence, this χ2 difference
test could not be meaningfully performed.
Figure 23.2 shows all the structural model parameters (regression, relevant squared
multiple correlation, and correlation coefficients) within the theoretical model,
in standardized form. The results have several noteworthy aspects. In
terms of the magnitude and sign of the relationships, as Fig. 23.2 shows, only two
out of six relationships (H2a and H2c) are statistically insignificant,
having p-values greater than 0.05. The other relationships are all statistically
significant (p-values less than 0.05) and positive in sign. Also, the
inter-correlation between the two exogenous constructs is positive, statistically
significant and moderate in magnitude. Finally, the squared multiple correlation
values for the two endogenous constructs, internal organizational processes and lead
time performance, were 0.726 and 0.334 respectively. The exogenous constructs
therefore accounted for large proportions of the variance in these constructs. We
further analyzed the regression and correlation data presented in Fig. 23.2 by
examining the standardized effect sizes between the constructs. The effect size is
the increase/decrease in the endogenous construct (in standard deviation units) when
there is a one standard deviation increase in the exogenous construct. The
standardized direct effects, indirect effects (calculated using the path analysis
tracing rules described by Kline (2005)) and total effects of all the exogenous
constructs on the endogenous constructs of the model are shown in Table 23.2.
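The decomposition follows Kline's tracing rules: the indirect effect is the product of the standardized coefficients along each compound path, and the total effect adds the direct path. The coefficients below are invented placeholders for illustration, not the estimates shown in Fig. 23.2:

```python
# Effect decomposition under path-analysis tracing rules. The path
# coefficients are invented placeholders, not the model's estimates.
direct = 0.10              # customers -> lead time performance (direct path)
indirect = 0.85 * 0.55     # customers -> internal processes -> lead time
total = direct + indirect  # standardized total effect
print(round(direct, 4), round(indirect, 4), round(total, 4))
```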
Fig. 23.2 Theoretical model, showing maximum likelihood estimates of standardized regression
coefficients (on straight lines with single arrowheads), squared multiple correlation coefficients
(on constructs) and correlation coefficients (on curved lines with double arrowheads).
* 0.05 < p-value ≤ 0.1; ** 0.01 < p-value ≤ 0.05; *** p-value ≤ 0.01
23.5 Discussion
The SEM model-data fit results suggested that, in an overall sense, there was
empirical support for the SCM-based collaboration - lead time performance model
described in Fig. 23.1. However, in terms of the specific hypothesized relationships,
the same analysis showed that two direct hypothesized relationships (H2a: relationship
with customers → lead time performance, and H2c: relationship with suppliers →
lead time performance) were not supported. All other hypothesized relationships
had empirical support. One could therefore conclude that SCM-based collaboration
strategies, such as developing greater integration with supply chain partners, are
not very effective. This, however, would be an erroneous conclusion. The effects
analysis results in Table 23.2 show that the indirect effects are strong and more than
compensate for the weak direct effects. As Table 23.2 shows, the total effect of
relationships with customers on lead time performance was 0.939, indicating that a one
standard deviation improvement in the relationships with customers construct is
associated with a 0.939 standard deviation change in lead time improvement. This is
a very strong effect. Similarly, the total effect of the relationships with suppliers
construct on lead time performance was 0.445, also a strong effect. These strong
total effects indicate that collaboration practices combine in a synergistic manner
to affect lead time performance. As a result, it would be myopic to assess and
evaluate each individual collaboration construct for its effect in isolation.
The literature on SCM collaboration has identified a number of benefits that firms
can generate. To date, however, it has not been clear whether lead time performance
could also improve through supply chain collaboration. This study has shown that
there is a strong, albeit largely indirect, link between supply chain collaboration
practices and lead time performance. As such, firms can pursue such strategies with
the understanding that a wide range of benefits, including lead time performance
improvements, can be achieved.
The manner in which the key supply chain collaboration constructs have been
defined, measured and empirically validated offers managers guidance on how
these constructs can be operationalised in practice. From the items in
Table 23.1, it is clear that many of the actual practices relating to supply chain col-
laboration are quite straightforward and simple in nature, and therefore realistic to
achieve. Indeed, one could consider most of these items to be commonsensical,
generally good practices found in most well-managed firms. As such, a key
contribution of the paper is the articulation of a simple set of practices that enable
and facilitate supply chain collaboration. Firms can be confident that if they put
the items listed in Table 23.1 into practice then, by the empirical links established
in this study, there is a good chance that lead time performance improvements will
follow. This would counter the belief held by many managers that existing methods
for lead time reduction are too complex and difficult to implement.
23.6 Conclusion
The aim of this study was to establish whether ideas from the SCM field,
specifically those relating to collaboration between firms and their trading
partners, can be used to reduce lead times for new product development and
manufacturing. Empirical data from a sample of 416 Australian manufacturing
plants showed that this may indeed be possible: the relationships that firms develop
with key customers and suppliers, acting both directly and indirectly through
internal organizational processes, strongly affected lead time performance. Given
that lead time reduction has been recognized as playing a vital strategic role in
enabling firms to compete successfully and sustainably, managers should consider
supply chain management collaboration as one more tool available to them,
amongst other existing methods, for generating lead time reductions.
References