
Rapid Modelling for Increasing Competitiveness

Gerald Reiner
Editor

Rapid Modelling for Increasing Competitiveness
Tools and Mindset

Editor
Prof. Dr. Gerald Reiner
Institut de l’Entreprise (IENE)
Faculté des Sciences Économiques
Université de Neuchâtel
Rue A.-L. Breguet 1
2000 Neuchâtel
Switzerland
gerald.reiner@unine.ch

ISBN 978-1-84882-747-9 e-ISBN 978-1-84882-748-6


DOI 10.1007/978-1-84882-748-6
Springer Dordrecht Heidelberg London New York

British Library Cataloguing in Publication Data


A catalogue record for this book is available from the British Library

Library of Congress Control Number: 2009929380

© Springer-Verlag London Limited 2009


Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licenses issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.
The use of registered names, trademarks, etc., in this publication does not imply, even in the absence of a
specific statement, that such names are exempt from the relevant laws and regulations and therefore free
for general use.
The publisher makes no representation, express or implied, with regard to the accuracy of the information
contained in this book and cannot accept any legal responsibility or liability for any errors or omissions
that may be made.

Cover design: eStudioCalamar, Figueres/Berlin

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Foreword

A Perspective on Two Decades of Rapid Modeling


It is an honor for me to be asked to write a foreword to the Proceedings of the 1st
Rapid Modeling Conference. In 1987, when I coined the term “Rapid Modeling”
to denote queuing modeling of manufacturing systems, I never imagined that two
decades later there would be an international conference devoted to this topic! I
am delighted to see that there will be around 40 presentations at the conference by
leading researchers from around the world, and about half of these presentations are
represented by written papers published in this book. I congratulate the conference
organizers and program committee on the success of their efforts to hold the first
ever conference on Rapid Modeling.
Attendees at this conference might find it interesting to learn about the history of
the term Rapid Modeling in the context it is used here. During the fall of 1986 I was
invited to a meeting at the Headquarters of the Society of Manufacturing Engineers
(SME) in Dearborn, Michigan. By that time I had successfully demonstrated sev-
eral industry applications of queuing network models at leading manufacturers in
the USA. Although in principle the use of queuing networks to model manufactur-
ing systems was well known in the OR/MS community and many papers had been
published, the actual use of such models by manufacturing professionals was almost
nonexistent. Instead, discrete-event simulation had been popularized through ag-
gressive marketing by software companies, and if manufacturing managers wanted
an analysis of their systems it was usually done by simulation.
A few researchers, including myself, were trying to change this situation and
include queuing models in the suite of tools used by manufacturing analysts. In
the 1970s Professor Jim Solberg of Purdue had demonstrated that a Flexible Man-
ufacturing System (FMS) could be modeled as a closed queuing network; he im-
plemented a simple calculation on a hand calculator which he took with him as he
visited manufacturers and impressed them with his quick predictions of through-
put and bottlenecks. He called his model CAN-Q, for computer-aided network of
queues. Motivated by his success, but finding his single-class model to be too sim-
plistic, Dr. Rick Hildebrant (of the Draper Lab at MIT) and I developed an extension

of multiclass mean-value analysis (MVA) and implemented it in a software package called MVAQ. We demonstrated the practical value of queuing models by using
MVAQ to quickly analyze and improve the throughput of an FMS at Hughes Air-
craft Company during a production crisis (Suri and Hildebrant, 1984) - yes, the
very same company formed by Howard Hughes that was seen in the recent movie
called “The Aviator”! We then proceeded to use MVAQ at several FMS installations
around the USA.
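
For readers who have not encountered mean-value analysis, the style of calculation behind CAN-Q and MVAQ can be sketched in a few lines of code. The following single-class MVA recursion is only a minimal illustrative sketch; the station demands and pallet count are invented for the example and are not taken from any of the applications mentioned here.

    def mva(demands, n_jobs):
        """Exact single-class mean-value analysis for a closed
        product-form network of single-server FCFS stations.
        demands[k] = visit ratio x mean service time at station k."""
        q = [0.0] * len(demands)            # mean queue lengths with 0 jobs
        for n in range(1, n_jobs + 1):
            # residence time at k: own service plus work already queued
            r = [d * (1.0 + qk) for d, qk in zip(demands, q)]
            x = n / sum(r)                  # throughput, by Little's law
            q = [x * rk for rk in r]        # updated mean queue lengths
        return x, r, q

    # Hypothetical FMS: three stations, five circulating pallets.
    throughput, residence, queues = mva([2.0, 1.0, 0.5], 5)
    print(f"throughput = {throughput:.3f} parts per unit time")
    print("likely bottleneck = station", max(range(3), key=queues.__getitem__))

A recursion of this size runs instantly, which is what made quick throughput and bottleneck predictions of the kind described above possible during a factory visit.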
However, I was not satisfied with the closed network model because FMSs were
still rare and most manufacturing facilities operated more like open networks. At the
same time Jackson networks were too limiting in their assumptions. So I homed-
in on the node decomposition approach developed and refined by Buzacott, Shan-
thikumar, Whitt and a few others, and in 1985 I wrote a software package called
ManuPlan (in Fortran!). By the time of the 1986 meeting at SME, I had already
demonstrated applications of ManuPlan not just by me, but by manufacturing an-
alysts working at IBM, Alcoa, Digital Equipment Corporation, Pratt & Whitney
Aircraft, and other companies - in other words, industry users were working with
queuing models! Following this success, the software package was made more pro-
fessional by a software company, Network Dynamics, Inc. (NDI). Eventually Dr.
Gregory Diehl of NDI implemented it on a PC (the package was called ManuPlan
II) and then greatly improved its interface to the latest version, MPX. A detailed
perspective on these and many other developments in the area of queuing models of
manufacturing systems can be found in the article, “From CAN-Q to MPX” (Suri
et al, 1995).
Anyway, let’s get back to the fall of 1986. The successes at IBM and other compa-
nies had been published in several articles and had caught the eye of the continuing
education staff at SME. They wanted to know if I could teach a class to manufac-
turing professionals that would show them how to use queuing models to analyze
manufacturing systems. The key question they had was, what would be the benefits
of using this approach compared to simulation, and how could we convince peo-
ple in industry to attend this class? (They didn’t think the term “queuing models”
would do much to attract industry people!) At the SME meeting I explained that
simulation models took weeks or months to build, debug and verify - in those days
there was no interactive model-building software, and simulation models were built
by writing detailed programming code. Further, even after the model was ready, it
often took several hours for a single run. In other words, evaluation of a single set of
decisions could take hours, and if you wanted to evaluate a number of alternatives it
could take days or even weeks. I explained to the SME personnel that queuing mod-
els required only a few inputs, were easy to build, and just needed a few seconds to
evaluate each scenario. As I went through these explanations, it suddenly dawned
on me: what queuing models offered was a technique for rapid modeling of man-
ufacturing systems - and the term Rapid Modeling was born! The SME team was
convinced, and as a result I taught the first-ever continuing education class (known
as a “Master Class” in Europe) on Rapid Modeling at SME Headquarters during
April 28-30, 1987 (see Suri, 1987; also see Fig. 1). Soon after that, I began using
the acronym RMT for Rapid Modeling Technique and documented its advantages in
a 1988 article in Manufacturing Engineering (Suri, 1988). I also continued to teach Rapid Modeling classes at SME and at various manufacturing companies.
In spite of these efforts, however, Rapid Modeling continued to play only a minor
role in the modeling of manufacturing systems. This changed with a major advance
in business strategy - and more significantly for this conference, when one of the
members of this conference’s program committee invited me to collaborate with her
on some projects. The year was 1988 and George Stalk had just published his arti-
cle on Time-Based Competition (TBC). Professor Suzanne de Treville (then at the
Helsinki University of Technology) invited me to Finland to work with Finnish com-
panies on helping them reduce lead times and implement TBC, using the MPX soft-
ware as an analysis tool. During these assignments we arrived at a critical insight:
Rapid Modeling was the ideal tool to help companies reorganize their operations to
reduce lead time. Static capacity models, or decision tools such as linear program-
ming, did not show the tradeoffs between utilization and lead time; on the other
hand simulation models were too complex and took too long to build, and managers
could not wait that long to make time-sensitive strategic decisions. Rapid Modeling
tools clearly showed senior managers the key tradeoffs involved and helped them to
quickly justify and implement decisions to reduce their lead times (see De Treville,
1992 for examples of how Rapid Modeling benefited the projects in Finland).

Fig. 1 The first public use of the term Rapid Modeling at a continuing education class taught by
the author at the Society of Manufacturing Engineers in 1987

From this point on, in our publications and classes we focused on the advantages
of Rapid Modeling for lead time reduction (for example, see Suri, 1989). These ad-
vantages were further emphasized with the development of Quick Response Man-
ufacturing (QRM) - a refinement of TBC with a specific focus on manufacturing
enterprises (Suri, 1998). For instance, at the Center for Quick Response Manufacturing, in our work with around 200 manufacturing companies over the past 15 years (see www.qrmcenter.org), we have found that Rapid Modeling is an invaluable tool to help companies reduce lead time and implement QRM.
But enough about the past - let us look to the future. It is very encouraging to
see an entire conference organized around the theme of Rapid Modeling, and to see
that researchers from around the world will be presenting papers at this conference.
Further, it is even more encouraging to see Rapid Modeling being extended beyond
manufacturing systems - for example, to supply chain modeling, to container ter-
minals and logistics management, to service processes, and even to venture capital
firms and courts of law. All these events speak well for the future of Rapid Mod-
eling. Finally, as one who promoted the Rapid Modeling concept as a tool to help
manufacturing companies become more competitive, it is truly heartening to see
that leading researchers in Europe have decided to use Rapid Modeling as a core
concept in their EU project on “Keeping Jobs in Europe” (see Project Keeping Jobs
In Europe, 2009).
Once again, I congratulate the conference organizers and the program committee
on the rich set of papers that have been put together here. I wish all the participants
a fruitful conference, and I would also like to wish all these researchers success in
the application of their Rapid Modeling concepts to many different fields.

Neuchâtel, May 2009 Rajan Suri

References

De Treville S (1992) Time is money. OR/MS Today 19(5):30–4


Project Keeping Jobs In Europe (2009) Keeping jobs in EU: Rapid modeling for the
manufacturing and service industry. URL http://www2.unine.ch/iene-kje
Suri R (1987) Rapid modeling techniques: Evaluating manufacturing system deci-
sions. In: A Hands-on Course, SME World Headquarters, Dearborn, MI
Suri R (1988) RMT puts manufacturing at the helm. Manufacturing Engineering
February:41–44
Suri R (1989) Lead time reduction through rapid modeling. Manufacturing Systems
7:66–68
Suri R (1998) Quick response manufacturing: a companywide approach to reducing
lead times. Productivity Press
Suri R, Hildebrant R (1984) Modeling flexible manufacturing systems using mean-
value analysis. Journal of Manufacturing Systems Vol. 3(1):27–38
Suri R, Diehl G, de Treville S, Tomsicek M (1995) From CAN-Q to MPX: Evolution
of queuing software for manufacturing. Interfaces 25(5):128–150
Preface

Rapid Modelling - Increasing Competitiveness - Tools and Mindset

Despite the developments in the field of lead time reduction over the past 25 years,
long lead times continue to have a negative impact on companies’ business re-
sults, e.g., customer dissatisfaction, loss of market share, and missed opportunities
to match supply and demand. Increased global competition requires companies to
seek out new ways of responding to volatile demand and increased customer re-
quirements for customization with continuously shorter lead times. Manufacturing
companies, as well as service firms, in the developed economies are in the doldrums
because low responsiveness makes them vulnerable to low-cost competitors. Com-
panies that are equipped for speed, with innovative processes, will outperform their
slower competitors in many industries, but the knowledge concerning lead time re-
duction, which has been developed globally, has yet to be combined into a unified
theory.
The purpose of this proceedings volume of selected papers presented at the 1st
rapid modelling conference “Increasing Competitiveness - Tools and Mindset” is to
give a state-of-the-art overview of current work in the field of rapid modelling in combination with lead time reduction. Furthermore, new developments are discussed. In general, Rapid Modelling is based on queuing theory, but other mathematical modelling techniques, as well as simulation models that facilitate the transfer of knowledge from theory to application, are of interest in this context as well. The
interested reader, e.g.,
• researchers in the fields of
– operations management
– production management
– supply chain management
– operations research or
– industrial engineering as well as

ix
x Preface

• practitioners with any connection to either
– manufacturing or
– service operations
will gain a good overview of what is going on in this field. The objec-
tive of this conference is to provide an international, multidisciplinary platform for
researchers and practitioners to create and exchange knowledge on increasing com-
petitiveness through speed. Lead time reduction (through techniques ranging from
quick response manufacturing to lean production) is achieved through a mutually re-
inforcing combination of changed mindset and analytical tools. We accepted papers
that contribute to these themes in the form of:
• Theory Pieces and Reviews
• Modelling and Simulation
• Case Study and Action Research
• Survey and Longitudinal Research
Based on these research methods, the proceedings volume has been divided into
four parts and brings together papers which present different research method-
ologies. These papers are allocated based on their primary research methodology.
All papers passed through a double-blind referee process to ensure their quality.
Therefore, this book should serve as a valid source for research activities in this
field.
The RMC09 (1st Rapid Modelling Conference “Increasing Competitiveness - Tools and Mindset”) takes place at the University of Neuchâtel, located in the heart of the city of Neuchâtel, Switzerland, and is based on a collaboration with the project partners within our IAPP Project (No. 217891); see also http://www.unine.ch/iene-kje. We are happy to have brought together authors from Algeria, Australia, Aus-
tria, Bahrain, Belgium, England, Finland, France, Germany, Hungary, Italy, Sweden,
Switzerland and the United States of America.

Acknowledgement

We would like to thank all those who contributed to the conference and this proceed-
ings volume. First, we wish to thank all authors and presenters for their contribution.
Furthermore, we appreciate the valuable help from the members of the international
scientific board, the referees and our sponsors (see the Appendix for the appropriate
lists).
In particular, our gratitude goes to our support team at the Enterprise Institute of the University of Neuchâtel: Gina Fiore Walder, who organized all the major and minor aspects of this conference project, and Ulf Richter, who handled the promotion process as well as the scientific referee process. Gina Fiore Walder, Yvan Nieto and Gil Gomes dos Santos, supported by Arda Alp and Boualem Rabta, handled the majority of the text reviews as well as the formatting work with LaTeX. Ronald Kurz created the logo of our conference and took over the development of the conference homepage http://www.unine.ch/rmc09.
Furthermore, we would like to give special thanks to Professor Rajan Suri, the
founding director of the Center for Quick Response Manufacturing, University of
Wisconsin-Madison, USA, who supported the development of our conference with
valuable ideas, suggestions and hints. In addition, he authored the foreword of this
book based on his leading expertise in the field of Rapid Modelling as well as Quick
Response Manufacturing.
Finally, it has to be mentioned that the conference as well as the book are
supported by the EU Seventh Framework Programme - The People Programme - Industry-Academia Partnerships and Pathways Project (No. 217891), “How revolutionary queuing based modelling software helps keeping jobs
in Europe. The creation of a lead time reduction software that increases industry
competitiveness and supports academic research.”

Neuchâtel, May 2009 Gerald Reiner


Contents

Part I Theory Pieces and Review

1 Managerial Decision Making and Lead Times: The Impact of Cognitive Illusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Suzanne de Treville, Ulrich Hoffrage and Jeffrey S. Petty
2 Queueing Networks Modeling Software for Manufacturing . . . . . . . . 15
Boualem Rabta, Arda Alp and Gerald Reiner
3 A Review of Decomposition Methods for Open Queueing Networks . 25
Boualem Rabta

Part II Modelling and Simulation

4 Parsimonious Modeling and Forecasting of Time Series drifted by Autoregressive Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Akram M. Chaudhry
5 Forecast of the Traffic and Performance Evaluation of the BMT
Container Terminal (Bejaia’s Harbor) . . . . . . . . . . . . . . . . . . . . . . . . . . 53
D. Aïssani, S. Adjabi, M. Cherfaoui, T. Benkhellat and N. Medjkoune
6 A Dynamic Forecasting and Inventory Management Evaluation
Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Johannes Fichtinger, Yvan Nieto and Gerald Reiner
7 Performance Evaluation of Process Strategies Focussing on Lead
Time Reduction Illustrated with an Existing Polymer Supply Chain 79
Dominik Gläßer, Yvan Nieto and Gerald Reiner
8 A Framework for Economic and Environmental Sustainability and
Resilience of Supply Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Heidrun Rosič, Gerhard Bauer and Werner Jammernegg

9 An Integrative Approach To Inventory Control . . . . . . . . . . . . . . . . 105
Philip Hedenstierna, Per Hilletofth and Olli-Pekka Hilmola
10 Rapid Modeling of Express Line Systems for Improving Waiting
Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Noémi Kalló and Tamás Koltai
11 Integrating Kanban Control with Advance Demand Information:
Insights from an Analytical Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Ananth Krishnamurthy and Deng Ge

12 Rapid Modelling in Manufacturing System Design Using Domain Specific Simulators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Doug Love and Peter Ball
13 The Best of Both Worlds - Integrated Application of Analytic
Methods and Simulation in Supply Chain Management . . . . . . . . . . . 155
Reinhold Schodl
14 Rapid Modeling In A Lean Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Nico J. Vandaele and Inneke Van Nieuwenhuyse

Part III Case Study and Action Research

15 The Impact of Lean Management on Business Level Performance and Competitiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Krisztina Demeter, Dávid Losonci, Zsolt Matyusz and István Jenei
16 Reducing Service Process Lead-Time Through Inter-Organisational
Process Coordination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Henri Karppinen and Janne Huiskonen
17 Is There a Relationship Between VC Firm Business Process Flow
Management and Investment Decisions? . . . . . . . . . . . . . . . . . . . . . . . . 209
Jeffrey S. Petty and Gerald Reiner
18 What Causes Prolonged Lead-Times in Courts of Law? . . . . . . . . . . . 221
Petra Pekkanen, Henri Karppinen and Timo Pirttilä
19 Logistics Clusters - How Regional Value Chains Speed Up Global
Supply Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Ralf Elbert and Robert Schönberger

Part IV Survey and Longitudinal Research

20 Measuring the Effects of Improvements in Operations Management . 249
Vedran Capkun, Ari-Pekka Hameri and Lawrence A. Weiss
21 Managing Demand Through the Enablers of Flexibility: The Impact of Forecasting and Process Flow Management . . . . . . . . . . . 265
Matteo Kalchschmidt, Yvan Nieto and Gerald Reiner
22 Threats of Sourcing Locally Without a Strategic Approach:
Impacts on Lead Time Performances . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Ruggero Golini and Matteo Kalchschmidt
23 Improving Lead Times Through Collaboration With Supply Chain
Partners: Evidence From Australian Manufacturing Firms . . . . . . . . 293
Prakash J. Singh

A International Scientific Board . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307

B Sponsors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
List of Contributors

Smail Adjabi
Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
e-mail: adjabi@hotmail.com
Djamil Aïssani
Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
e-mail: lamos_bejaia@hotmail.com
Arda Alp
Enterprise Institute, University of Neuchâtel, Rue A.L. Breguet 1, CH-2000
Neuchâtel, Switzerland
e-mail: arda.alp@unine.ch
Peter Ball
Department of Manufacturing, Cranfield University, Cranfield, Bedford, MK43
0AL, U.K.
e-mail: p.d.ball@cranfield.ac.uk
Gerhard Bauer
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna,
Austria
e-mail: gerhard.bauer@wu.ac.at
T. Benkhellat
Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
Vedran Capkun
HEC School of Management, 1, rue de la Liberation, 78351 Jouy-en-Josas cedex,
France
e-mail: capkun@hec.fr
Akram M. Chaudhry
College of Business Administration, University of Bahrain, P.O.Box #32038,
Sakhir, Kingdom of Bahrain, Middle East


e-mail: drakramm@yahoo.com, drakramm@buss.uob.bh


M. Cherfaoui
Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
Suzanne de Treville
University of Lausanne, Faculty of Business and Economics, Internef 315,
CH-1015 Lausanne, Switzerland
e-mail: suzanne.detreville@unil.ch
Krisztina Demeter
Department of Logistics and Supply Chain Management, Corvinus University of
Budapest, Fovam ter 8, H-1093 Budapest, Hungary
e-mail: Krisztina.demeter@uni-corvinus.hu
Ralf Elbert
University of Technology Berlin, Chair of Logistics Services and Transportation,
sec. H. 94 main building, Room H 9181, Straße des 17. Juni 135, 10623 Berlin,
Germany
e-mail: elbert@logistik.tu-berlin.de
Johannes Fichtinger
Institute for Production Management, WU Vienna, Nordbergstraße 15, A-1090
Wien, Austria
e-mail: johannes.fichtinger@wu-wien.ac.at
Deng Ge
University of Wisconsin-Madison, Department of Industrial and Systems Engineer-
ing, 1513 University Avenue, Madison, WI 53706, USA
e-mail: dge@wisc.edu
Dominik Gläßer
Institut de l’entreprise, Université de Neuchâtel, Rue A.-L. Breguet 1, CH-2000
Neuchâtel, Switzerland
e-mail: dominik.glasser@unine.ch
Ruggero Golini
Department of Economics and Technology Management, Università degli Studi di
Bergamo, Viale Marconi 5, 24044 Dalmine (BG), Italy
e-mail: ruggero.golini@unibg.it
Ari-Pekka Hameri
Ecole des HEC, University of Lausanne, Internef, Lausanne 1015, Switzerland
e-mail: Ari-Pekka.Hameri@unil.ch
Philip Hedenstierna
Logistics Research Group, University of Skövde, 541 28 Skövde, Sweden
Per Hilletofth
Logistic Research Group, University of Skövde, 541 28 Skövde, Sweden
e-mail: per.hilletofth@his.se

Olli-Pekka Hilmola
Lappeenranta Univ. of Tech., Kouvola Unit, Prikaatintie 9, 45100 Kouvola, Finland
Ulrich Hoffrage
University of Lausanne, Faculty of Business and Economics, Internef 614,
CH-1015 Lausanne, Switzerland
e-mail: ulrich.hoffrage@unil.ch
Janne Huiskonen
Department of Industrial Management, Lappeenranta University of Technology,
P.O. Box 20, FIN-53851 Lappeenranta, Finland
Werner Jammernegg
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna,
Austria
e-mail: werner.jammernegg@wu.ac.at
István Jenei
Department of Logistics and Supply Chain Management, Corvinus University of
Budapest, Fovam ter 8, H-1093 Budapest, Hungary
e-mail: istvan.jenei@uni-corvinus.hu
Matteo Kalchschmidt
Department of Economics and Technology Management, Università di Bergamo,
Viale Marconi 5, 24044 Dalmine, Italy
e-mail: matteo.kalchschmidt@unibg.it
Noémi Kalló
Department of Management and Corporate Economics, Budapest University of
Technology and Economics, Műegyetem rkp. 9. T. ép. IV. em., 1111 Budapest,
Hungary
e-mail: kallo@mvt.bme.hu
Henri Karppinen
Department of Industrial Management, Lappeenranta University of Technology,
P.O. Box 20, FIN-53851 Lappeenranta, Finland
e-mail: henri.karppinen@lut.fi
Tamás Koltai
Department of Management and Corporate Economics, Budapest University of
Technology and Economics, Műegyetem rkp. 9. T. ép. IV. em., 1111 Budapest,
Hungary
e-mail: koltai@mvt.bme.hu
Ananth Krishnamurthy
University of Wisconsin-Madison, Department of Industrial and Systems Engineer-
ing, 1513 University Avenue, Madison, WI 53706, USA
e-mail: ananth@engr.wisc.edu

Dávid Losonci
Department of Logistics and Supply Chain Management, Corvinus University of
Budapest, Fovam ter 8, H-1093 Budapest, Hungary
e-mail: david.losonci@uni-corvinus.hu
Doug Love
Aston Business School, Aston University, Birmingham, B4 7ET, U.K.
e-mail: d.m.love@aston.ac.uk
Zsolt Matyusz
Department of Logistics and Supply Chain Management, Corvinus University of
Budapest, Fovam ter 8, H-1093 Budapest, Hungary
e-mail: zsolt.matyusz@uni-corvinus.hu
N. Medjkoune
Laboratory LAMOS, University of Béjaia, Targa Ouzemour, 6000 Béjaia, Algeria
Yvan Nieto
Institut de l’entreprise, Université de Neuchâtel, Rue A.-L. Breguet 1, CH-2000
Neuchâtel, Switzerland
e-mail: yvan.nieto@unine.ch
Petra Pekkanen
Department of Industrial Management, Lappeenranta University of Technology,
P.O. Box 20, FIN-53851 Lappeenranta, Finland
e-mail: petra.pekkanen@lut.fi
Jeffrey S. Petty
Lancer Callon Ltd., Suite 298, 56 Gloucester Road, UK-SW7 4UB London, United
Kingdom
e-mail: jpetty@bluewin.ch
Timo Pirttilä
Department of Industrial Management, Lappeenranta University of Technology,
P.O. Box 20, FIN-53851 Lappeenranta, Finland
Boualem Rabta
Enterprise Institute, University of Neuchatel, Rue A.-L. Breguet 1, CH-2000
Neuchatel, Switzerland
e-mail: boualem.rabta@unine.ch
Gerald Reiner
Institut de l’entreprise, Université de Neuchâtel, Rue A.-L. Breguet 1, CH-2000
Neuchâtel, Switzerland
e-mail: gerald.reiner@unine.ch
Heidrun Rosič
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna,
Austria
e-mail: heidrun.rosic@wu.ac.at

Reinhold Schodl
Capgemini Consulting, Lassallestr. 9b, 1020 Wien, Austria
e-mail: reinhold.schodl@capgemini.com
Robert Schönberger
Darmstadt University of Technology, Chair of Clusters & Value Chain, Hochschulstrasse 1, 64289 Darmstadt, Germany
e-mail: schoenberger@tud-cluster.de
Prakash J. Singh
Department of Management & Marketing, University of Melbourne, Parkville 3010, Australia
e-mail: pjsingh@unimelb.edu.au
Nico J. Vandaele
Research Center for Operations Management, Department of Decision Sciences
and Information Management, K.U.Leuven, 3000 Leuven, Belgium
e-mail: Nico.Vandaele@econ.kuleuven.be
Inneke Van Nieuwenhuyse
Research Center for Operations Management, Department of Decision Sciences
and Information Management, K.U.Leuven, 3000 Leuven, Belgium
e-mail: Inneke.VanNieuwenhuyse@econ.kuleuven.be
Lawrence A. Weiss
McDonough School of Business, Georgetown University, Old North G01A,
Washington, DC 20057-1147, USA
e-mail: law62@georgetown.edu
Part I
Theory Pieces and Review
Chapter 1
Managerial Decision Making and Lead Times:
The Impact of Cognitive Illusions

Suzanne de Treville, Ulrich Hoffrage and Jeffrey S. Petty

Abstract In this paper, we consider the impact of cognitive illusions on decision making in the operations management field, in areas ranging from product and pro-
cess development to project management. Psychologists have studied the effects
of overconfidence, the planning fallacy, illusions of control, anchoring, confirma-
tion bias, hindsight bias, and associative memory illusions on individual judgment,
thinking, and memory in many experiments, but little research has focused on op-
erations management implications of these biases and illusions. Drawing on these
psychological findings we discuss several of these cognitive illusions and their im-
pact on operations managers, plant workers, technicians and engineers alike in a
variety of operational settings. As in other contexts, these cognitive illusions are
quite robust in operations management, but fortunately the impact of selected il-
lusions can be substantially reduced through debiasing techniques. The examples
discussed in this paper highlight the need for more operations-management-based
research on the impact of cognitive illusions on decision making.

Suzanne de Treville
University of Lausanne, Faculty of Business and Economics, Internef 315, CH-1015 Lausanne
Telephone: 021 692 33 41,
e-mail: suzanne.detreville@unil.ch
Ulrich Hoffrage
University of Lausanne, Faculty of Business and Economics, Internef 614, CH-1015 Lausanne
e-mail: ulrich.hoffrage@unil.ch
Jeffrey S. Petty
Lancer Callon Ltd., Suite 298, 56 Gloucester Road, UK-SW7 4UB London,
e-mail: jpetty@bluewin.ch


1.1 Introduction

People play an integral role in any operation, from process development to execu-
tion, assessment, and improvement. Because people are involved, most decisions
exhibit some bias, as individuals use heuristics to simplify the decision-making pro-
cess. Although such biases are not usually considered in managing and evaluating
operations, they have a major impact on the decisions that are made, as well as how
learning from decisions occurs.
Cognitive illusions “lead to a perception, judgment, or memory that reliably de-
viates from reality” (Pohl 2004: 2). This deviation is referred to as a cognitive bias.
Such illusions or biases occur involuntarily, tend to be robust and hard to avoid, and
are difficult – sometimes impossible – to eliminate. There have been occasional ref-
erences to the scarcity of literature on cognitive illusions or biases in the OM field
(e.g., Mantel et al, in press; Schweitzer and Cachon, 2000). These papers refer to
one or two cognitive biases, but do not present a large sample of biases that have
been studied in the cognitive psychology literature.

1.2 Cognitive Illusions

We begin with a bias that is fundamental to operations management: the planning


fallacy (an example of overconfidence). We then consider the illusion of control, an-
choring and adjustment, hindsight bias, confirmation bias, and associative memory
illusion. We close with a brief discussion of debiasing techniques.

1.2.1 Overconfidence: The Planning Fallacy

Overconfidence occurs when “our confidence in our judgments, inferences, or predictions is too high when compared to the corresponding accuracy” (Hoffrage 2004: 235). One specific type of overconfidence is the planning fallacy, according to which “predicted completion times for specific future tasks tend to be more optimistic than can be justified by the actual completion times or by the predictors’ general beliefs about the amount of time such tasks usually take” (Buehler et al 1994, 2002: 250).
The planning fallacy results in substantial underestimation of timing or cost for a
given project or task due to cognitive malfunctions (Buehler et al, 2002; Kahne-
man and Tversky, 1979; Lovallo and Kahneman, 2003). It is particularly likely for
projects that are perceived as a linear set of well-understood tasks, which, however,
is often not the case, as we describe shortly.
The planning fallacy also has a temporal aspect: The more time we have to plan,
the more overconfident we are and the more likely we are to underestimate the de-
mands of the project (Sanna and Schwarz, 2004). Furthermore, the phenomenon
increases with incentives (Buehler et al, 2002). This cognitive illusion is particu-
larly robust, occurring even when the decision-maker is completely aware of the
phenomenon (Buehler et al, 2002).
This phenomenon plays a fundamental role in operations management, in areas
ranging from delivery to product and process development, to project management.
Things always take longer than anyone expected, there is always a scramble to get
things pulled together right before the final deadline, and no amount of planning or
organization seems to eliminate this bias. Can insights from cognitive psychology
inform operations management theory concerning how to improve lead time per-
formance? Or, could operations management theory bring new insights to theory
concerning the planning fallacy?
Breaking projects into small pieces has been observed to keep projects more on
schedule through creating the tension required to keep people focused on due dates
(van Oorschot et al, 2002). While this might be feasible with the new product devel-
opment projects studied by these authors, it would not work for repetitive operations
(manufacturing or service). Furthermore, van Oorschot et al. noted that estimates for
smaller project packages are more accurate, but the overall project time remains ex-
cessively long.
Responding to lead times that are longer than expected by increasing our estima-
tion of lead times leads to the “planning loop” (Suri, 1998): longer estimates reduce
the quality of forecasts, increasing mismatches between production and demand,
placing additional demands on the system to respond to actual customer needs, re-
sulting in higher utilization and longer lead times. This is consistent with the psy-
chological realization mentioned earlier that the more time available, the worse the
overconfidence. Historically, lead time estimation has been treated as a rational, lin-
ear computation. Suri (e.g., 1998) and Hopp and Spearman (1996) used queuing
theory to illustrate the complexity of process dynamics, explaining part of the di-
vergence between the expected simplicity and actual complexity of calculating lead
times. These complex system dynamics may amplify the cognitive bias implied by
the planning fallacy, thus partially explaining why in operations management we so
consistently fail to get our lead times right.
Furthermore, exploration of the interaction between the cognitive and compu-
tational aspects of lead time estimation may lead to new insights concerning this
cognitive illusion. Most managers do not understand the impact of bottleneck uti-
lization, lot size, layout, and system variability on lead time (Suri, 1994). As lead
times not only increase but explode with utilization, it is not surprising that lead
times exceed expectation in the majority of operations, especially given the com-
mon emphasis on maintaining high utilization. Therefore, an understanding of the
mathematical principles that drive lead times might serve as a model for the cogni-
tive processes involved in the planning fallacy.
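
The nonlinearity is easy to exhibit even in the simplest single-server model. For an M/M/1 queue (a deliberate simplification relative to the queuing networks treated by Suri and by Hopp and Spearman), the expected time in system is

    W = \frac{1}{\mu - \lambda} = \frac{1}{\mu} \cdot \frac{1}{1 - \rho}, \qquad \rho = \frac{\lambda}{\mu},

so lead time grows with 1/(1 - \rho): at 80% utilization a job spends on average 5 service times in the system, at 95% it spends 20, and at 99% it spends 100. A planner who extrapolates linearly from a lightly loaded system will underestimate lead time precisely where the planning fallacy bites hardest.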

1.2.2 Illusion of Control

Illusion of control occurs when an individual overestimates his or her personal in-
fluence on an outcome (Thompson, 2004). The illusion increases in magnitude with
perceived skill, emphasis on success or failure, and need for a given outcome, as
well as in contexts where skill coexists with luck, as people use random positive
outcomes to increase their skill attributions (Langer and Roth, 1975).
Consider an experienced worker who is choosing whether to follow process doc-
uments in completing a task. The illusion of control implies that the worker may
believe that process outcomes will be better if he or she draws from experience
and intuition rather than relying on the standard operating procedures (SOPs). In-
terestingly enough, this worker may well believe that other workers should follow
the SOPs (for a discussion of worker insistence that co-workers follow SOPs, see
Barker, 1993; Graham, 1995). Times when the worker has carried out a process
change that has coincided with an improved yield (success, whether or not due to
that change vs. normal process variability) will tend to increase this illusion of con-
trol.
Polaroid’s efforts to introduce Statistical Process Control were hindered by work-
ers’ illusions of control. Workers believed that process outcomes would be better if
they were allowed to choose their own machine settings and make adjustments as
they deemed necessary, rather than shutting down the machine and waiting for main-
tenance if process data indicated that the machine was going out of statistical con-
trol. This was in spite of data demonstrating substantially increased yields when ma-
chines were maintained to maximize consistency (Wheelwright et al, 1992). More
generally, workers prey to the illusion of control are unlikely to embrace process
standardization and documentation.
Entrepreneurs may well demonstrate an illusion of control when it comes to de-
veloping the operations for their new venture. E Ink, for example, was a new venture
originating from the MIT Media Lab that had invented an electronic ink, opening the
door to “radio paper” and an eventual electronic newspaper that would have the look
and feel of a traditional newspaper, but that would be updateable from newspaper
headquarters. The attitude of the founders was that developing the new product was
difficult, but that operations would be relatively easy - a classic illusion of control.
Basic operations management problems (such as yield-to-yield variation) kept the
company in survival mode for the better part of a decade (Yoffie and Mack, 2005).
Had the founders made operations a priority from the start, they may well have been
profitable many years earlier.
The good news is that the illusion of control can be virtually eliminated by the
intrusion of reality, which creates circumstances requiring individuals to systemat-
ically estimate the actual control that they have in a process (Thompson, 2004). In
other words, before standardizing and documenting processes, or before designing
new processes, it is worth carrying out an assessment exercise so that people have a
clear understanding of their true abilities and control level.

1.2.3 Anchoring and Hindsight Bias

Anchoring and adjustment, that is, predicting or estimating relative to some anchor
(Mussweiler et al, 2004), is a heuristic that is often evoked to explain cognitive bi-
ases that can be observed in various aspects of operations management. Anchors
may be used individually when making decisions, or collectively across an organi-
zation as a benchmark for success or failure, often without regard for their relevance
or impact on a given situation.
In the operations management field we often use anchoring to make an opera-
tions strategy more powerful: consider “Zero Defects” or “Zero Inventories,” “Just-
in-Time” or “lean production,” and “Six Sigma,” all of which have in common use
of a keyword anchor that powerfully sets direction. The positive aspect of these an-
chors is that a direction or standard for the company has been established; it may,
however, be set with such force that later efforts to moderate are unfruitful. Hopp
and Spearman (1996), for example, described the confusion that resulted from use
of the Zero Defects or Zero Inventories slogans, as companies responded by ex-
cessively slashing inventories or setting completely unrealistic quality objectives
(as has also occurred as companies that should be striving for percent or parts-per-
thousand defects vainly strive for the ppm or even parts per billion implied by being
six standard deviations from the process mean). The term lean is so powerful that
companies may become overenthusiastic about removing slack resources (required
for creativity, flexibility, or improvement efforts, e.g., Lawson, 2001) from processes
(De Treville and Antonakis, 2006; Rinehart et al, 1997).
Anchoring has been observed to move companies away from rational inventory
policies (Schweitzer and Cachon, 2000), and to shift companies from constantly
striving for improvement to just working toward meeting a set standard (e.g., individuals stop seeking to save the environment and simply work to meet environmental standards; Tenbrunsel et al, 2000).
An interesting example of anchoring in the field of operations management
comes from the shift in attitude toward the level of defects in a process over the
past couple of decades. Twenty years ago, a classroom discussion of defect levels might include student claims that “if the optimum is 10% defects and we are aiming for 2%, we are going to make less money than we should.” Fast forward to today’s classroom, where a similar comment might be “if the optimum is 300 parts per million (ppm) defects and we are aiming for 50 ppm.” In other words, referring to percent vs. ppm anchors decision-makers as they set process improvement goals.
Anchoring influences process experimentation. Consider a conveyor belt that car-
ries product through a furnace that is limiting the overall capacity of the process. In
thinking through how to increase throughput for the furnace operation, process de-
velopment engineers may limit their experiments if they anchor their analysis to the
existing process, rather than taking a new look at how the process is run. In the
case of the conveyor, for example, it might be possible to almost double the out-
put by stacking pieces on the belt, which would require both slowing the belt and
increasing the temperature.
On a larger scale, Campbell Soup anchored their process development activities
to canned soup production in designing a line for microwaveable soups. This an-
choring was not even questioned until a new project manager was brought in who
had frozen food rather than canned soup experience, and was therefore free to break
free from the anchor (unfortunately too late to save the project, Wheelwright and
Gill, 1990).

1.2.4 Confirmation Bias

“Whenever people search for, interpret, or remember information in such a way that
the corroboration of a hypothesis becomes likely, independent of its truth, they show
a confirmation bias” (Oswald and Grosjean 2004: 93). Confirmation bias represents
a “type of cognitive bias toward confirmation of the hypothesis under study. To
compensate for this observed human tendency, the scientific method is constructed
so that we must try to disprove our hypotheses” (Wikipedia, 2006). This type of
bias becomes more likely when the “hypotheses tested are already established or
are motivationally supported” (Oswald and Grosjean 2004: 93). Watkins and Bazer-
man (2003) described several disasters that would have been easily preventable had
individuals not fallen prey to confirmation and related biases.
“As managers estimate the likelihood of an event’s occurrence, they may over-
estimate the representativeness of a piece of information and draw inaccurate con-
clusions” (Bazerman, 2005). This also implies that information that is easily avail-
able may well have a greater impact on the decision made than it should: Whether
decision-makers notice or ignore a piece of information often depends on how that
information is presented (Mantel et al, in press).
One of the early demonstrations of the confirmation bias came from an exper-
iment in which subjects were shown the sequence 2, 4, 6, and asked to find the
rule that generated the sequence. Subjects were to propose their own triples, learn
from the experimenter whether the sequence conformed to the rule, and specified
the rule as soon as they thought they had discovered it. The actual rule was “any
ascending sequence.” Many subjects, however, assumed a rule of the form of n+2,
and generated sequences of this form to confirm their guess. Such subjects were
quite surprised to learn that their specified rule was wrong, in spite of the fact that
they had only received positive and confirming feedback. Arriving at the correct rule
required that subjects select examples that would disconfirm their beliefs, but this
did not come naturally (Wason, 1960). Compare this phenomenon to an employee
who has an idea about how to improve a process. As demonstrated by Wason, such
an employee is more likely to suggest experiments to demonstrate that the idea
works than to seek problems that may arise. Furthermore, implementation of in-
sufficiently tested ideas is a primary source of production variability (Edelson and
Bennett, 1998).
The choice by Campbell Soup engineers to create a microwaveable soup process
that resembled a canned soup line (argued in the preceding section to demonstrate
anchoring) also provides an example of confirmation bias. Just as respondents in Wason’s (1960) experiment did not seek examples that would disconfirm the intuitive
rule suggested by the initial sequence, so the Campbell Soup team made no apparent
efforts to test their assumption that all soup lines should look like canned soup lines
until the new project manager brought them a new model of reality (Wheelwright
and Gill, 1990).
There are many examples of confirmation biases in the operations management
field. We mentioned previously how illusion of control led to unmanageable new
processes in the Campbell Soup case (Wheelwright and Gill, 1990). The ability of
these managers to dispel these illusions of control by an injection of reality was
hindered by subsequent confirmation bias: Although it was clear that nothing was
working as it should, management could not read the signals, nor did they perceive
the costs to be excessive, and continued to invest in development of the new processes.
This example illustrates how cognitive illusions can coexist and reinforce each other.
The confirmation bias emerges in decisions about outsourcing (for a discussion
of confirmation-related biases in outsourcing, see Mantel et al, in press). Managers
considering outsourcing have often already made up their minds about whether a
supplier is capable of meeting their needs, so that they do not really consider the
possibility that the supplier might fail.
Confirmation bias can hinder communication and theory development, as oppos-
ing camps only consider information that supports their pet hypothesis, as can be
seen in the case of lean production. A given factory is likely to have lean proponents
and opponents, both of whom can produce substantial quantities of data support-
ing their viewpoint. The ability to really make use of conflicting data to falsify the
opposing theory, however, appears to be in scarce supply (e.g., De Treville and
Antonakis, 2006).

1.2.5 Associative Memory Illusion

“In recollecting some target event from the past, people will often confuse events
that happened before or after the target event with the event itself,” with some
illusions involving remembrance of events that never actually occurred (Bartlett
1932/95; Roediger III and Gallo 2004: 309).
In managing operations, memory plays an important role. When was the last
time we did a furnace profile or maintained that machine? How has that supplier
been performing over the past year? Does it seem like the process is stable? What
has been going on with operators and repetitive strain injuries? The list goes on and
on. The constant updating of memories plays an important role in adaptive learning,
and is almost impossible to prevent or control (Roediger III and Gallo, 2004).
That memory is constantly reconstructed based on our theories, beliefs, and sub-
sequent experiences demonstrates the importance of excellent record-keeping and
patience with those who remember differently. Associative memory illusions are
related to illusions of change or stability (Wilson and Ross, 2004), referring to in-
accurate comparisons of past and present states. Individuals, for example, often er-
roneously believe that improvement has occurred simply because of involvement in
an improvement activity. Consider MacDuffie’s description of Ford’s improvement
activities:

“[Reporting forms] appear to be used more to report on the activity level of the sub-
system group, to show that the required processes are being fulfilled, rather than to
diagnose, systematically, the “root cause” and possible solutions to a problem. When
a problem recurs, seldom is it reanalyzed, and rarely are earlier actions reassessed.
With past activities already documented and reported, the key is to generate new
documentation, to provide proof of continued activity. Thus, “continuous improve-
ment” becomes less a process of incremental problem resolution than a process of
energetic implementation of intuitively selected solutions” (MacDuffie, 1997, 185).

In other words, the assumption of these managers is that activity = improvement, and this is seldom tested.

1.3 Debiasing Techniques

Once cognitive biases have been identified, what debiasing techniques exist to re-
duce their impact? In this section we briefly examine some tools that may contribute
to debiasing.

1.3.1 Inside or Outside Views

In considering ways to reduce the planning fallacy, it is useful to differentiate between an inside view (focusing on singular information) and an outside view (focusing on
distributional information). One reduces the dominance of the singular (i.e., subjec-
tive probabilities for single cases) over the distributional approach (i.e., estimation
of frequency distribution parameters) by eliciting predictions for aggregate sets of
events rather than single cases. Asking for aggregate frequencies rather than single-
case estimates has proven to reduce or eliminate a number of cognitive illusions
(Gigerenzer et al, in press). Unfortunately however, Buehler et al. (2002: 269) re-
ported that using this method to debias the planning fallacy (asking questions in the
form “In how many cases like this will you be able to keep the deadline?” rather
than “What is your subjective probability of being able to keep the deadline?”) was
not successful, speculating that it takes a sophisticated view to conceive of a given
task as a sample of a more general reference class. It seems that participants adopt
an inside view to make estimates about a given project, only then inferring a fre-
quency response from this individual case. Beginning with an outside view to arrive
at an inside view appears to be unnatural for average individuals.

1.3.2 Consideration of Alternative Scenarios

Evidence is inconclusive concerning the impact of asking individuals to consider alternative scenarios. On one hand, encouraging people to consider other, more pes-
simistic scenarios generally reduces overconfidence, both in knowledge and in pre-
diction (Koriat et al, 1980). Not surprisingly, scenario planning has become popular
as a forecasting tool in many business and organizational contexts (for a review, see
Schoemaker, 1993). On the other hand, Buehler et al (2002) reported that in their
studies the planning fallacy has resisted this technique, obviously because individu-
als’ best-guess predictions were overly influenced by the most optimistic scenarios,
thereby downplaying the more pessimistic (and, unfortunately, often more realistic)
scenarios.

1.3.3 Premortem Exercise

We suggest that a technique called the Premortem exercise (Klein 2003: 98-101)
may be more successful in overcoming or reducing the planning fallacy. This
method starts with the assumption that a project or plan has failed. Not just a bit,
but in a big way: It has turned out to be a catastrophe or disaster. Participants in the
exercise take this failure as a given and provide reasons why it happened. This pro-
cedure relieves the participants from the (usually self-imposed) constraint that they
must not say anything unpleasant, depressing, or potentially hurtful to their col-
leagues. The aim is to compile a long list of hidden assumptions that turned out to
be wrong, or of weaknesses and key vulnerabilities in a plan. Once this list has been
established, managers are enabled to take such “unforeseeable” events into account
when planning, incorporating buffers and contingencies. Although in our experi-
ence the premortem technique has been quite successful in debiasing the planning
fallacy, we are not aware of studies that have systematically explored its use.

1.3.4 Recall-Relevance Manipulation

Getting participants to use their past experience to calibrate their time judgments
has proven successful in empirical tests. Buehler et al (1994) required
participants to first indicate the date and time they would finish a computer assign-
ment if they finished it as far before its deadline as they typically completed as-
signments. In a second step, participants were asked to recall a plausible scenario
from their past experience that would result in their completing the computer assign-
ment at the typical time. Based on these estimations, they were to make predictions
about completion times. This “recall-relevance” manipulation successfully reduced
the optimistic bias constituting the planning fallacy.

1.3.5 Incorporation of Lead Time Reduction Theory

An understanding of the mathematical relationships that drive lead times may be helpful in reducing the planning fallacy. Recall that the planning fallacy is
particularly likely in situations where individuals perceive a project as a set of linear,
straightforward tasks. Although operations often appear to be just such a set of lin-
ear and well-understood tasks, the underlying system dynamics are neither linear nor
straightforward. We propose that a good understanding of the relationship of utilization, lot size, variability, and layout to lead time might encourage incorpo-
ration of a buffer at bottleneck operations, more attention to lot size and variability
reduction, and implementation of layouts that support flow. Perhaps these operations
management concepts will eventually inform the literature on the planning fallacy.
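
To make the proposed relationship concrete, the dependence of queue time on utilization, variability, and lot size can be sketched with Kingman's G/G/1 approximation, a standard queuing result of the kind discussed by Suri (1998) and Hopp and Spearman (1996). The workstation parameters below are invented for illustration and are not drawn from any study cited in this paper.

    def kingman_queue_time(util, ca2, cs2, t_eff):
        """Kingman's G/G/1 approximation of mean time in queue:
        Wq ~ ((ca2 + cs2) / 2) * (util / (1 - util)) * t_eff,
        with ca2, cs2 the squared coefficients of variation of
        interarrival and service times, t_eff the effective process time."""
        return 0.5 * (ca2 + cs2) * util / (1.0 - util) * t_eff

    # Hypothetical workstation: 1 hour effective process time per lot,
    # moderate variability (ca2 = cs2 = 1, the M/M/1 case).
    for util in (0.70, 0.85, 0.95):
        wq = kingman_queue_time(util, 1.0, 1.0, 1.0)
        print(f"utilization {util:.0%}: mean queue time = {wq:.1f} h")
    # Prints roughly 2.3 h, 5.7 h, and 19.0 h: queue time explodes as
    # utilization rises. Halving (ca2 + cs2) halves queue time, and larger
    # lots lengthen t_eff, stretching queue time roughly proportionally.

Managers who internalize this curve can see why a buffer at the bottleneck, smaller lots, and variability reduction shorten lead times far more than linear intuition suggests.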

1.4 Conclusions

This paper considered many of the biases and cognitive illusions which are relevant
to the field of operations management and will continue to have an effect on op-
erations as long as people are involved in the decision-making process. Cognitive
psychologists have developed theories and conducted empirical research that can
serve as a theoretical foundation for operations-based research.
As demonstrated in the case examples cited, these biases occur in both start-up
and established ventures, and across all levels of the companies. The planning fal-
lacy and anchoring effect appear to dominate operations-related activities, but de-
veloping an understanding of each of the cognitive illusions presented in this paper
in the operations management context may improve the quality of decisions in our
field, as well as facilitate learning.

References

Barker J (1993) Tightening the iron cage: Concertive control in self-managing teams. Administrative Science Quarterly 38:408–437
Bartlett F (1932/95) Remembering: A study in experimental and social psychology.
Cambridge University Press
Bazerman M (2005) Judgment in managerial decision making, 6th edn. John Wiley
and Sons, New York
Buehler R, Griffin D, Ross M (1994) Exploring the “planning fallacy”: Why people underestimate their task completion times. Journal of Personality and Social Psychology 67:366–381
Buehler R, Griffin D, Ross M (2002) Inside the planning fallacy: The causes and consequences of optimistic time predictions. In: Gilovich T, Griffin D, Kahneman D (eds) Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge University Press, New York, pp 250–270
De Treville S, Antonakis J (2006) Could lean production job design be intrinsically
motivating? Contextual, configurational, and levels-of-analysis issues. Journal of
Operations Management 24(2):99–123
Edelson N, Bennett C (1998) Process discipline: How to maximize profitability and
quality through manufacturing consistency. Quality Resources, New York
Gigerenzer G, Hertwig R, Hoffrage U, Sedlmeier P (in press) Cognitive illu-
sions. In: Smith P (ed) Handbook of experimental economics results, North Hol-
land/Elsevier Press
Graham L (1995) On the line at Subaru-Isuzu: The Japanese model and the Ameri-
can worker. ILR Press, Ithaca, NY
Hoffrage U (2004) Overconfidence. In: Pohl R (ed) Cognitive Illusions, Psychology
Press, Hove, East Sussex, pp 235–254
Hopp WJ, Spearman ML (1996) Factory Physics: Foundations of Manufacturing
Management. Irvin Inc., Chicago
Kahneman D, Tversky A (1979) Intuitive Prediction: Biases and Corrective Proce-
dures. TIMS Studies in Management Science 12:313–327
Klein G (2003) The Power of Intuition. Currency/Doubleday
Koriat A, Lichtenstein S, Fischhoff B (1980) Reasons for confidence. Journal of
Experimental Psychology: Human Learning and Memory 6(2):107–118
Langer E, Roth J (1975) Heads I win, tails it's chance: The illusion of control as a
function of the sequence of outcomes in a purely chance task. Journal of Person-
ality and Social Psychology 32(6):951–955
Lawson M (2001) In praise of slack: Time is of the essence. Academy of Manage-
ment Executive 15(3):125–135
Lovallo D, Kahneman D (2003) Delusions of success. How optimism undermines
executives’ decisions. Harvard Business Review 81(7):56
MacDuffie J (1997) The road to “root cause”: Shop-floor problem-solving at three
auto assembly plants. Management Science 43(4):479–502
Mantel S, Tatikonda M, Liao Y (in press) A behavioral study of supply manager
decision-making: Factors influencing make versus buy evaluation. Journal of Op-
erations Management 24(6):822–838
Mussweiler T, Englich B, Strack F (2004) Anchoring Effect. In: Pohl R (ed) Cogni-
tive illusions, Psychology Press (UK), Hove, East Sussex, pp 183–200
van Oorschot K, Bertrand J, Rutte C (2002) A longitudinal empirical study
of the causes of lateness of new product development projects. URL
http://www2.ipe.liu.se/rwg/igls/igls2002/Paper111.pdf
Oswald M, Grosjean S (2004) Confirmation bias. In: Pohl R (ed) Cognitive Illu-
sions, Psychology Press (UK), Hove, East Sussex, pp 79–96
Pohl R (2004) Introduction: Cognitive illusions. In: Pohl R (ed) Cognitive Illusions,
Psychology Press (UK), Hove, East Sussex, pp 1–20
Rinehart J, Huxley C, Robertson D (1997) Just another car factory?: Lean produc-
tion and its discontents. Cornell University Press, Ithaca, NY

Roediger III H, Gallo D (2004) Associative memory illusions. In: Pohl R (ed) Cog-
nitive Illusions, Psychology Press, East Sussex, pp 309–326
Sanna L, Schwarz N (2004) Integrating Temporal Biases. Psychological Science
15(7):474–481
Schoemaker P (1993) Multiple scenario development: Its conceptual and behavioral
foundation. Strategic Management Journal 14(3):193–213
Schweitzer M, Cachon G (2000) Decision bias in the newsvendor problem with
a known demand distribution: Experimental evidence. Management Science
46(3):404–420
Suri R (1994) Common misconceptions and blunders in implementing quick re-
sponse manufacturing. Proceedings of the SME AUTOFACT ’94 Conference,
Detroit, Michigan, November
Suri R (1998) Quick response manufacturing: A companywide approach to reducing
lead times. Productivity Press
Tenbrunsel A, Wade-Benzoni K, Messick D, Bazerman M (2000) Understanding
the influence of environmental standards on judgments and choices. Academy of
Management Journal 43(5):854–866
Thompson S (2004) Illusions of control. In: Pohl R (ed) Cognitive Illusions, Psy-
chology Press, Hove, East Sussex, pp 113–126
Wason P (1960) On the failure to eliminate hypotheses in a conceptual task. The
Quarterly Journal of Experimental Psychology 12(3):129–140
Watkins M, Bazerman M (2003) Predictable surprises: The disasters you should
have seen coming. Harvard Business Review 81(3):72–85
Wheelwright SC, Gill G (1990) Campbell Soup Company. In: Harvard Business
School case 9-690-051, Cambridge, MA, p 23
Wheelwright SC, Bowen HK, Elliott B (1992) Process control at Polaroid. In: Har-
vard Business School case 9-693-047, Cambridge, MA, p 17
Wikipedia (2006) Confirmation bias. URL http://en.wikipedia.org/wiki/Confirmation_bias
Wilson A, Ross M (2004) Illusions of change or stability. In: Pohl R (ed) Cognitive Illusions, Psychology Press (UK), Hove, East Sussex, pp 379–396
Yoffie DB, Mack BJ (2005) E Ink in 2005. In: Harvard Business School case 9-705-
506, Cambridge, MA, p 24
Chapter 2
Queueing Networks Modeling Software for
Manufacturing

Boualem Rabta, Arda Alp and Gerald Reiner

Abstract This paper reviews the evolution of queueing network software and its
use in manufacturing. In particular, we discuss two different groups of software
tools. First, there are queueing network software packages that require a good level
of familiarity with the theory. On the other hand, there are packages designed for
manufacturing where the model development process is automated. Issues related
to practical considerations will be addressed and recommendations will be given.

2.1 Introduction

In a period of continuous change in the global business environment, organizations,
large and small, are finding it increasingly difficult to deal with, and adjust to, the
demands for such changes (Bosilj-Vuksic et al, 2007). In order to improve the performance
of a complex manufacturing system, the dynamic dependencies among key measures
(e.g., utilization, variability, lead time, throughput, WIP, operating expenses, quality)
need to be well understood. Rapid modeling techniques such as queueing theory can
be applied to improve this understanding. For instance, queueing

Boualem Rabta
Entreprise Institute, University of Neuchâtel, Rue A.L. Breguet 1, CH-2000 Neuchâtel, Switzer-
land.
e-mail: boualem.rabta@unine.ch
Arda Alp
Entreprise Institute, University of Neuchâtel, Rue A.L. Breguet 1, CH-2000 Neuchâtel, Switzer-
land.
e-mail: arda.alp@unine.ch
Gerald Reiner
Entreprise Institute, University of Neuchâtel, Rue A.L. Breguet 1, CH-2000 Neuchâtel, Switzer-
land.
e-mail: gerald.reiner@unine.ch


networks are useful to model and measure the performance of manufacturing systems
as well as complex service processes. Queuing-theory-based software packages for
manufacturing processes (e.g., MPX) automate the model development process and
help users (e.g., managers, academics) obtain analytical insights relatively easily
(Vokurka et al, 1996).
Queuing software can be used by industrial analysts, managers, and educators.
It is also a good tool to help students understand factory physics along with model-
ing and analysis techniques (see, e.g., de Treville and Van Ackere, 2006). Despite
certain challenges of queuing-theory-based modeling (e.g., it requires a strong
mathematical background and a sustained level of understanding of the theory),
training in queuing-theory-based modeling is likely to yield better competitiveness
in lead time reduction (de Treville and Van Ackere, 2006). Business executives do
not always make the best possible decisions. In particular, managers can fail to
understand the implications of mathematical laws and take actions that increase
lead times (see de Treville and Van Ackere, 2006; Suri, 1998).
Complex real-life service and manufacturing systems have a number of specific
features compared to 'simplistic cases', posing important methodological challenges.
Basic queuing theory provides key insights to practitioners but not a complete and
deep understanding of the system. The complexity of queuing-theory-based methods
has also caused companies to use other tools (e.g., simulation) rather than queuing
theory.
Nevertheless, queuing theory has become popular in academic research, especially
for operations modeling, because the complexity and size of real-life problems can
be reduced to models that are relatively simple yet rich enough. Compared to a
similar simulation model, such models are less detailed and do not capture the
transient behavior of the system, but they are simple and sufficient enough to support
a decision (de Treville and Van Ackere, 2006). Relatively simple and quick solutions
are preferable for an initial system analysis or for quick decisions.
The rest of this paper is organized as follows: In Section 2.2, we give a brief review
of the evolution of queueing network theory, focusing on decomposition methods.
In Section 2.3, we list selected queueing software packages. All of them are freely
available for download on the Internet. Some manufacturing software packages
based on queueing theory are presented in Section 2.4. Finally, we provide a
conclusion and give recommendations in Section 2.5.

2.2 Queueing Networks Theory

Queueing networks have been extensively studied in the literature since Jackson's
seminal paper (Jackson, 1957). The first significant results were those of Jackson
(Jackson, 1957, 1963), who showed that under special assumptions (exponential
interarrival and service times, Markovian routing, first-come-first-served discipline, ...)
a queueing network may be analyzed by considering its stations each in isolation
(product form). Gordon and Newell showed that the product form solution
also holds for closed queueing networks (i.e., networks where the number of jobs
is fixed) with exponential interarrival and service durations (Gordon and Newell,
1967). Those results have been extended in (Baskett et al, 1975) and (Kelly, 1975)
to other special cases (open, closed and mixed networks of queues with multiple
job classes and different service disciplines). Since this kind of result was possible
only under restrictive assumptions, other researchers tried to extend product form
solutions to more general networks (decomposition methods). Several authors
(Kuehn (1979), Whitt (1983), Pujolle and Wu (1986), Gelenbe and Pujolle (1987) and
Chylla (1986), among others) proposed decomposition procedures for open G/G/1
(G/G/m) queueing networks. Closed networks of queues have also been analyzed
by decomposition (see, e.g., Marie, 1979). This approach has since been extended in
different directions, e.g., to multiple job classes (Bitran and Tirupati, 1988;
Whitt, 1994). In (Kim, 2004) and (Kim et al, 2005) it is shown that the classical
Whitt decomposition method performs poorly in some situations (high variability
and heavy traffic), and the innovations method is proposed as an improvement:
relations among squared coefficients of variation are replaced by approximate
regression relationships among the underlying point processes. These relationships
make it possible to add information about correlations.
It seems that the application of this method gives satisfactory results in various
cases. However, there are still some situations where the existing tools fail. Other
approaches which have been proposed include diffusion approximations (Reiser and
Kobayashi, 1974) and Brownian approximations (Dai and Harrison, 1993; Harrison
and Nguyen, 1990; Dai, 2002).
Queueing theory is a well-known method for evaluating the performance of man-
ufacturing systems under the influence of randomness (see, e.g., Buzacott and Shan-
thikumar, 1993; Suri, 1998). The randomness mainly comes from natural variability
of interarrival times and service durations. Queueing networks modeling has its ori-
gins in manufacturing applications: Jackson’s papers (Jackson, 1957, 1963) targeted
the analysis of job shops, a class of discrete manufacturing systems. Suri et al (1993)
gave a detailed survey of analytical models for manufacturing including queueing
network models. Govil and Fu (1999) presented a survey on the use of queueing
theory in manufacturing. Shanthikumar et al (2007) surveyed applications of queu-
ing networks theory for semiconductor manufacturing systems and discussed open
problems.

2.3 Queueing Networks Software

The developed theory motivated the development of many software packages for the
analysis of queueing networks. These packages assume a good level of familiarity
with queueing theory. There are some early packages that were based on original
algorithms. The Queueing Network Analyzer (QNA) was proposed by Whitt
as an implementation of his two-moment decomposition method (Whitt, 1983). QNET
is another software package for performance analysis of queueing networks. It is
the implementation of the analysis algorithm based on the Brownian approximation of
queueing networks (Dai and Harrison, 1993; Harrison and Nguyen, 1990), motivated
by heavy traffic theory. This package runs in text mode and its source
code is available for free download. However, it seems that this software has not
been updated since the mid-1990s, and it is easy to guess that its use has remained very
limited. See also Govil and Fu (1999) for descriptions of other queueing network
packages.
PEPSY-QNS/WinPEPSY: This package was developed at the University of Erlangen-Nurnberg
in the early 1990s. It has a comfortable and easy-to-use graphical environment
and includes more than 50 different analysis algorithms. The Windows
version (WinPEPSY) has particular features: a user-friendly graphical interface, a
graphical network editor, charts for results...
QNAT: The Queueing Network Analysis Tool (QNAT) is a Windows graphical
package for analysing a wide variety of queueing networks. QNAT uses Mathematica
as its computing platform and can handle general configurations of open and
closed networks of both finite and infinite capacity queues. Incorporation of fork-join
nodes, multiclass customers, mixed customer classes and blocking mechanisms
of different types are some of the other features available in this software tool.
RAQS: Rapid Analysis of Queueing Systems (RAQS) is a Windows graphical
queueing software package (Kamath et al, 1995) based on Whitt's QNA method and its
manufacturing-oriented version in Segal and Whitt (1989). It also implements
decomposition algorithms for closed queueing networks and for tandem finite-buffer
queueing networks. It is freely available for download. RAQS's user interface
provides little guidance for inexperienced users; the input and output interfaces
are more suitable for experienced users with considerable knowledge of the basics
of queueing theory.
QTS: Queueing Theory Software is an Excel spreadsheet for solving a
wide range of queueing models and other probability models (Markov chains, birth
and death processes, ...). The software is based on the textbook of Gross et al (2008).
One advantage of this software is that the user has the model and several
performance indicators (e.g., server utilization, mean number of jobs in the system
and in the queue, mean waiting time in the system and in the queue...) all in a single
sheet.
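
As an illustration of the kind of all-in-one computation such a spreadsheet automates, the following sketch (ours, not part of QTS) evaluates the standard M/M/1 measures from Gross et al (2008); the rates are invented:

def mm1_metrics(lam, mu):
    """Steady-state measures of an M/M/1 queue (requires lam < mu)."""
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu                        # server utilization
    return {"rho": rho,
            "L":  rho / (1 - rho),        # mean number in system
            "Lq": rho**2 / (1 - rho),     # mean number in queue
            "W":  1 / (mu - lam),         # mean time in system
            "Wq": rho / (mu - lam)}       # mean waiting time in queue

print(mm1_metrics(lam=8.0, mu=10.0))
# rho = 0.8, L = 4.0, Lq = 3.2, W = 0.5, Wq = 0.4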
JMT: The Java Modeling Tools (JMT) is a free open source suite implementing several
algorithms for the exact, asymptotic and simulative analysis of queueing network
models. Models can be described either through wizard dialogs or with a graphical
interface. The workload analysis tool is based on clustering techniques. The JMT
suite is user-friendly and includes a visual design tool. Also, visual sliding buttons for
simulation parameters (e.g., average arrival rate, average service time, buffer size and
simulation time) make what-if analysis easy for the user.
Notice that those packages implement known (published) algorithms and are all
freely available for download (some are open source, e.g., QTS, JMT). The differences
lie in the number of implemented algorithms (the number of network
types that can be analyzed), the user interface and the presentation of the results.

Table 2.1 Download links for some free QN software


WinPEPSY http://www7.informatik.uni-erlangen.de/~prbazan/pepsy/download.shtml
RAQS http://www.okstate.edu/cocim/raqs/raqs.htm
QTS http://www.geocities.com/qtsplus/ (Also: http://qtsplus4calc.sourceforge.net/)
JMT http://jmt.sourceforge.net/

The important question is whether these software tools are practical and capable
enough to satisfy complex industry needs. Moreover, among the many
functionalities that they offer, which one is suitable under which circumstances?
When working in a practical context, the user of this kind of software is assumed
to have an acceptable level of knowledge of queueing theory. The modeling has to
be done separately and the results are generally given in raw form. It is obvious
that those drawbacks do not permit wide use within a company, given that managers
are in general not queueing specialists.

2.4 Queueing Networks Software for Manufacturing

In addition to the previous software tools, more specific software packages have been
designed for manufacturing based on queueing network theory. In these, the modeling
aid is automatic and embedded in the software, giving the user the ability to
model the manufacturing system without worrying about the theoretical side. They
are particularly suitable for use by industrial practitioners with little or no queueing
knowledge.
Snowdon and Ammons (1988) surveyed eight queueing network packages existing
at that time. Some of the queueing network software packages are public domain
while others are sold commercially by a software vendor. CAN-Q is a recursive
algorithm for solving a product-form stochastic model of production systems (Co
and Wysk, 1986) based on the results of Jackson and of Gordon and Newell. A version
of QNA supporting some features of manufacturing systems has also been proposed
by Segal and Whitt (1989), but there are no indications that this package has been sold
as a commercial product or distributed for large-scale use. Other early packages include
Q-LOTS (Karmarkar et al, 1985), MANUPLAN (Suri et al, 1986) and Operations
Planner (Jackman and Johnson, 1993).
MANUPLAN includes an embedded dynamic model that is based on queueing
network theory and provides common performance results such as WIP, tool utilization
and production rate. The tool also provides trade-off analysis among inventory
levels, flow times, reliability of the tools, etc. (Suri et al, 1986).
MPX is perhaps the most popular software package in its category. It is the successor
of MANUPLAN. Users greatly appreciate the speed of calculations and the
ease of modeling, despite several remaining improvement possibilities in its behavior
and interface. MPX's exact algorithm is not published. Apparently, it
uses the classical decomposition algorithm (Whitt, 1983) coupled with the operator/workstation
algorithm (Suri et al, 1993), with some changes to support additional
features. It also provides a procedure to compute optimal lot sizes and transfer batch
sizes.
Still, the existing software model is quite generic and does not integrate a high
level of complexity. For instance, MPX does not provide support for some manufacturing
features such as finite buffer capacities, service disciplines other than first-come-first-served
and dynamic lot sizing, nor for some popular production systems
(e.g., Kanban).
On the other hand, several industries prefer to use systems design software such
as SAP-APO, IBM's A-Team, etc. (Pinedo, 2002), which generate their solutions
based on heuristics, relaxations or approximations different from those of queueing
software. However, those approaches usually have limitations. Their performance
changes with the settings, and in general the user needs to complete several
experiments to determine the most suitable algorithm. Additionally, computation speed
becomes one of the most important practical considerations. Instead of those all-in-one,
multifunctional software designs, queueing software can provide quick and
easy solutions that capture the system dynamics and related effects, though not higher
levels of system detail (Suri et al, 1995).

2.5 Further Remarks

When using queueing network software in a practical setting, the resulting models
are less accurate and detailed than simulation and give no insights into transient
behavior, but they often suffice as decision support tools and can yield results that are
useful in real-world applications (de Treville and Van Ackere, 2006). They provide a
rapid and easy way to understand a system's dynamics and predict its performance,
in contrast to complex simulation models, which require vast amounts of
modeling effort, advanced knowledge and computer time. It is important in today's world
to be able to rapidly evaluate different alternatives, as manufacturing systems are in
continuous change. These software packages are also an important tool for training
and for teaching the impact of decisions on lead time and cost reduction.
Queueing network software still has limited usage in complex practical manufacturing
applications. It is not clear to practitioners how queueing software can
cover complex industry-related constraints along with the tradeoffs among several
performance objectives. Other issues such as data requirements may also be
a cause: software that passes the test of accuracy and detail can fail miserably in
the field because it requires data beyond what is easily available (Suri et al, 1995).
These are basically limitations related to practical implementation.
Close contact between researchers and industrial users has been critical to the
growth in use of the software. Emphasis on such contact, along with better linkages
to operational systems, will ensure continued growth of manufacturing applications
of queuing software (Suri et al, 1995). The use of the software in education may
also help to enlarge its use in companies. When students realize the usefulness of
this tool, it is natural that they will use it after they join the industry or become
managers.
While the importance of those tools and the opportunities they offer is recognized,
the existing software packages are still limited in their modeling capabilities.
To enlarge the usability of their packages, it is important for software creators to
offer support for a variety of real manufacturing systems. A specialized software
design should be based on realistic assumptions (e.g., buffer capacities, priority rules,
integration of forecasting and inventory policies). The combination of queueing
network analysis with statistical and optimization tools... can provide better solutions
and attract more practical applications.
The presentation of the computational output is also an important factor. Customizable
reports and graphical charts help the user better understand the results. The software
should also provide some insights into the interpretation of the results and warn the
user about the limits of its performance (for example, MPX shows a warning when
the utilization is very high, saying that the results may not be accurate). Performance
measures given by queueing packages are based only on steady-state values, reported
as the averages of measures such as WIP and flow time. However, it may be desirable
to have variance (or variability) information about the output performance measures.
Also, the provided average values are only approximate, and it may be useful to
provide reliable bounds for them.
The success of a software package depends on many factors other than the accuracy
of its computational method. Users look for a powerful tool with evidence of
efficiency, but also a user-friendly, easy-to-learn and well-supported product (documentation
and tutorial, demo version, consultancy/training courses). The integration
with other packages such as spreadsheets, statistical packages, DBMS, legacy
applications, ERP... is also a highly desired feature. Finally, the ability of the software
to import/export data from/to other packages allows users to save time and effort.

Acknowledgements This work is supported by the SEVENTH FRAMEWORK PROGRAMME - THE PEOPLE PROGRAMME - Marie Curie Industry-Academia Partnerships and Pathways Project (No. 217891) “Keeping jobs in Europe”.

References

Baskett F, Chandy K, Muntz R, Palacios F (1975) Open, closed and mixed networks
of queues with different classes of customers. Journal of the ACM 22(2):248–260
Bitran G, Tirupati D (1988) Multiproduct queueing networks with deterministic
routing: Decomposition approach and the notion of interference. Management
Science 34(1):75–100
Bosilj-Vuksic V, Ceric V, Hlupic V (2007) Criteria for the evaluation of business
process simulation tools. Interdisciplinary Journal of Information, Knowledge
and Management 2:73–88

Buzacott JA, Shanthikumar JG (1993) Stochastic Models of Manufacturing Systems. Prentice-Hall, Englewood Cliffs, NJ
Chylla P (1986) Zur Modellierung und approximativen Leistungsanalyse von Vielteilnehmer-Rechensystemen. Dissertation, Faculty for Mathematics and Computer Science, Technical University of Munich
Co HC, Wysk RA (1986) The robustness of CAN-Q in modelling automated manufacturing systems. International Journal of Production Research 24(6):1485–1503
Dai J, Harrison J (1993) The QNET method for two-moment analysis of closed manufacturing systems. Annals of Applied Probability 3(4):968–1012
Dai W (2002) A Brownian model for multiclass queueing networks with finite buffers. Journal of Computational and Applied Mathematics 144(1–2):145–160
Gelenbe E, Pujolle G (1987) Introduction to queueing networks. John Wiley, Chich-
ester
Gordon W, Newell G (1967) Closed queueing systems with exponential servers.
Operations Research 15(2):254–65
Govil M, Fu M (1999) Queueing theory in manufacturing: A survey. Journal of Manufacturing Systems 18(3):214–240
Gross D, Shortle JF, Thompson JM, Harris CM (2008) Fundamentals of Queueing
Theory, 4th edn. John Wiley & Sons, Inc.
Harrison JM, Nguyen V (1990) The QNET method for two-moment analysis of open queueing networks. Queueing Systems 6(1):1–32
Jackman J, Johnson E (1993) The role of queueing network models in performance
evaluation of manufacturing systems. Journal of the Operational Research Society
44(8):797–807
Jackson J (1963) Jobshop-like queueing systems. Management Science 10(1):131–
142
Jackson JR (1957) Networks of waiting lines. Operations Research 5(4):518–521
Kamath M, Sivaramakrishnan S, Shirhatti G (1995) RAQS: A software package to support instruction and research in queueing systems. Proceedings of the 4th Industrial Engineering Research Conference, IIE, Norcross, GA, pp 944–953
Karmarkar US, Kekre L, Freeman S (1985) Lotsizing and leadtime performance in a manufacturing cell. Interfaces 15(2):1–9
Kelly FP (1975) Networks of queues with customers of different types. Journal of
Applied Probability 12(3):542–554
Kim S (2004) The heavy-traffic bottleneck phenomenon under splitting and super-
position. European Journal of Operational Research 157(3):736–745
Kim S, Muralidharan R, O’Cinneide C (2005) Taking account of correlation be-
tween streams in queueing network approximations. Queueing Systems 49(3–
4):261–281
Kuehn PJ (1979) Approximate analysis of general networks by decomposition.
IEEE Transactions on Communications 27(1):113–126
Marie R (1979) An approximate analytic method for general queueing networks.
IEEE Transactions on Software Engineer 5(5):530–538
Pinedo M (2002) Scheduling: Theory, Algorithms, and Systems, 2nd edn. Prentice-
Hall Inc.

Pujolle G, Wu A (1986) A solution for multiserver and multiclass open queueing networks. Information Systems and Operations Research 24(3):221–230
Reiser M, Kobayashi H (1974) Accuracy of diffusion approximations for some
queueing networks. IBM Journal of Research and Development 18(2):110–124
Segal M, Whitt W (1989) A queueing network analyzer for manufacturing. Pro-
ceedings of the 12th International Teletraffic Congress, Torino, Italy, June 1988
pp 1146–1152
Shanthikumar J, Ding S, Zhang M (2007) Queueing theory for semiconductor manufacturing systems: A survey and open problems. IEEE Transactions on Automation Science and Engineering 4(4):513–522
Snowdon JL, Ammons JC (1988) A survey of queueing network packages for the
analysis of manufacturing systems. Manufacturing Review 1(1):14–25
Suri R (1998) Quick Response Manufacturing. Productivity Press, Portland, OR
Suri R, Diehl GW, Dean R (1986) Quick and easy manufacturing systems analysis using MANUPLAN. Proceedings of the Spring IIE Conference, Dallas, TX, pp 195–205
Suri R, Sanders J, Kamath M (1993) Performance Evaluation of Production Net-
works, vol 4: Logistics of Production and Inventory, Elsevier, pp 199–286
Suri R, Diehl GWW, de Treville S, Tomsicek MJ (1995) From CAN-Q to MPX: Evolution of queuing software for manufacturing. Interfaces 25(5):128–150
de Treville S, Van Ackere A (2006) Equipping students to reduce lead times: The
role of queuing-theory-based modeling. Interfaces 36(2):165–173
Vokurka RJ, Choobineh R, Vadi L (1996) A prototype expert system for the evalu-
ation and selection of potential suppliers. International Journal of Operations &
Production Management 16(12):106–127
Whitt W (1983) The queueing network analyzer. Bell System Technical Journal
62(9):2779–2815
Whitt W (1994) Towards better multi-class parametric-decomposition approxima-
tions for open queueing networks. Annals of Operations Research 48(3):221–248
Chapter 3
A Review of Decomposition Methods for Open
Queueing Networks

Boualem Rabta

Abstract Open queueing networks are useful for modeling and performance eval-
uation of complex systems such as computer systems, communication networks,
production lines and manufacturing systems. Exact analytical results are available
only in few situations with restricted assumptions. In the general case, feasible solu-
tions can be obtained only through approximations. This paper reviews performance
evaluation methods for open queueing systems with focus on decomposition meth-
ods.

3.1 Introduction

Open queueing networks (OQN) are useful for modeling and performance evaluation
of complex systems such as computer systems, communication networks,
production lines and manufacturing systems. A queueing network consists of several
connected service stations. It is called open if customers can enter the system from
outside and also leave it. A single-station (or single-node) queueing system consists of a
queueing buffer of finite or infinite size and one or more identical servers. We will
focus on unrestricted networks where each station has an infinite waiting capacity.
Customers arrive from an external source to any station and wait for an available
server. After being served, they move to the next station or leave the system.
Performance evaluation of open queueing networks has been addressed through:
• Exact methods: analytical results are available only in a few situations with simple
assumptions and particular topologies (Jackson networks). Many classes of
networks have no known closed-form solutions.

Boualem Rabta
Entreprise Institute, University of Neuchatel, Rue A.-L. Breguet 1, CH-2000 Neuchatel (Switzer-
land)
e-mail: boualem.rabta@unine.ch


Fig. 3.1 An example of open queueing network (a) and a single station (b)

• Approximation methods, including: diffusion approximations, mean value analysis,
operational analysis, exponentialization approximations and decomposition
methods.
• Simulation and related techniques: This is perhaps the most popular approach to
evaluating the performance of queueing networks. Although more realistic and detailed,
it can be cumbersome to optimize, and its accuracy is strongly dependent
on the quality of the calibration data.
First relevant analytical results for OQN were presented by Jackson (1957) who
considered a special category (called thereafter Jackson networks) and showed that
the joint distribution of the number of customers in the network is the product of
the marginal distributions of the number of customers in each station (i.e., a product
form solution). This kind of results allows us to analyze the network by considering
each station individually.
Product form results have been extended to a few situations (e.g. Kelly, 1975) but
for general networks, product form solutions are not possible. Therefore, approxi-
mations are the only feasible solution. On the other hand, some networks have state
spaces that are so large that certain analysis techniques, while theoretically possible,
are impractical (Baldwin et al, 2003).
The most frequently used approximation methods for analyzing open queueing
networks have been decomposition methods. According to this approach, the dimension
of the network is reduced by breaking it down into sub-networks and analyzing
each sub-network in isolation. The decomposition approach assumes that the
sub-networks can be treated as stochastically independent and that the input
to each sub-network is a renewal process. The analysis then involves three basic
steps:
1. decomposition of the network into sub-networks (in most cases, individual sta-
tions),
2. analysis of each sub-network and the interaction between the sub-networks,
3. recomposition of the results to compute the network performance.

The parameters of each subnetwork depend on the state of the other subnetworks, and
thus capture the correlation between subnetworks. The main difficulty lies
in obtaining good approximations for these parameters.
While the theory of single-station queues finds its origins in Erlang’s work on
telecommunications at the beginning of the 20th century, the analysis of networks
of queues began in the 1950s. Initial results appeared in Jackson (1954) who con-
sidered a system of two stations in tandem. Jackson (1957, 1963) analyzed a class
of open queueing networks with Poisson external arrivals, exponential service times
and Markovian routing of customers, and showed that the equilibrium probabil-
ity distribution of customers could be obtained through node-by-node decomposi-
tion. Kelly (1975, 1976) extended Jackson’s work by including customers of several
classes and different service disciplines. Similar results were presented by Barbour
(1976). Baskett et al (1975) presented the most comprehensive results at the time
for the classical models.
First surveys of queueing network theory include Lemoine (1977) and Koenigsberg
(1982). Lemoine presented an overview of equilibrium results for general Jackson
networks and the methodology employed to obtain those results.
Disney and Konig (1985) presented an extensive survey covering the seminal works
of Jackson and the extensions of Kelly, including a bibliography of more than 300
references. Suri et al (1993) examined performance evaluation models for different
manufacturing systems including production lines (tandem queues), assembly lines
(arborescent queues), job-shops (OQN),...
Buzacott and Shanthikumar (1992, 1993), Bitran and Dasu (1992) and Bitran and
Morabito (1996) analyzed both performance evaluation models and optimization
models for queueing networks. Bitran and Dasu (1992) discussed strategic, tactical
and operational problems of manufacturing systems based on the OQN methodol-
ogy, with a special attention to design and planning models for job-shops. Govil
and Fu (1999) presented a survey on the use of queueing theory in manufacturing.
Shanthikumar et al (2007) surveyed applications of queuing networks theory for
semiconductor manufacturing systems and discussed open problems. Also, some
software packages for the analysis of manufacturing systems are based on queue-
ing networks theory. For instance, Manuplan and MPX (Suri et al, 1995) implement
decomposition methods.

3.2 Jackson Networks

3.2.1 Single Class Jackson Networks

When interarrival and service times are exponential, we refer to the network as a
Jackson network. Here, the network is composed of several interconnected M/M/m
stations with first-come-first-served (FCFS) service discipline and infinite
queue capacity (with $n + 1$ the number of stations in the system, where station 0
represents the world external to the network). Each station j is then described by 3
parameters:
• the number of servers in the station, $m_j$;
• the external arrival rate of customers to station j, $\lambda_{0j}$;
• the expected service rate, $\mu_j$.
A customer who finishes service at station i moves to station j with probability
$r_{ij}$, where $0 \le r_{ij} \le 1$ for all $i, j = 0, \ldots, n$ and $\sum_{j=0}^{n} r_{ij} = 1$ for all $i = 0, \ldots, n$. Thus, $r_{0j}$ is
the probability that a customer enters directly from outside to station j and $r_{j0}$ is
the probability that a customer leaves the network after completing service at
station j.
Denote by $\lambda_j$ the overall arrival rate to station j and by $\lambda$ the overall arrival rate
to the whole network. By a result of Burke (1956) and Reich (1957) we know that
the output of an M/M/m queue in equilibrium is Poisson with the same rate as the
input process. Thus,

$$\lambda_j = \lambda_{0j} + \sum_{i=1}^{n} r_{ij}\,\lambda_i, \qquad \forall j = 1, \ldots, n, \qquad (3.1)$$

is a system of linear equations known as the traffic rate equations.
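
In matrix form, (3.1) reads $\lambda = \lambda_0 + R^T \lambda$, so the overall rates follow from a single linear solve. The following minimal sketch (our illustration; the three-station routing data are invented) shows this step in Python:

import numpy as np

# R[i, j] = r_ij among the n internal stations; external flows enter via lam0.
R = np.array([[0.0, 0.5, 0.3],
              [0.0, 0.0, 0.8],
              [0.1, 0.0, 0.0]])
lam0 = np.array([2.0, 1.0, 0.0])    # external arrival rates lambda_0j

# Traffic rate equations (3.1): lambda = lambda0 + R^T lambda
lam = np.linalg.solve(np.eye(3) - R.T, lam0)
print(lam)                          # overall arrival rates lambda_j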


The state of the system is defined as a vector $x = (x_1, x_2, \ldots, x_n)$ where $x_j$ is the
number of customers in station j (customers in queue and in service). Under the
assumption that the system reaches a stationary regime, denote by $\pi_j(x_j)$ the probability
of station j being in state $x_j$ and by $\pi(x_1, x_2, \ldots, x_n)$ the probability of the
system being in state $x = (x_1, x_2, \ldots, x_n)$. Jackson (1957) showed that:

$$\pi(x_1, x_2, \ldots, x_n) = \prod_{j=1}^{n} \pi_j(x_j),$$

where $\pi_j$ is the steady state distribution of the classical M/M/$m_j$ queueing system:

$$\pi_j(x_j) = \begin{cases} \pi_j(0)\,\dfrac{(m_j\rho_j)^{x_j}}{x_j!} & \text{if } x_j \le m_j, \\[2mm] \pi_j(0)\,\dfrac{m_j^{m_j}\rho_j^{x_j}}{m_j!} & \text{if } x_j > m_j, \end{cases}$$

where $\rho_j$ is the expected utilization of station j, given as:

$$\rho_j = \frac{\lambda_j}{\mu_j m_j}, \qquad 0 \le \rho_j < 1.$$

This result says that the network acts as if each station could be viewed as an in-
dependent M/M/m queue. In fact, it can be shown (Disney, 1981) that, in general,
the actual internal flow in these kinds of networks is not Poisson (as long as there
is any kind of feedback). Nevertheless, the previous relation still holds (see, Gross
and Harris, 1998).
The expected waiting time in queue at station j is then given by:

$$E(W_j) = \frac{\rho_j\,(m_j\rho_j)^{m_j}}{\lambda_j\,(1-\rho_j)^2\, m_j!}\;\pi_j(0).$$

The expected number of visits to station j is:

$$E(V_j) = \frac{\lambda_j}{\lambda_0}, \qquad (3.2)$$

where $\lambda_0 = \sum_{i=1}^{n} \lambda_{0i}$. Finally, the expected lead time $E(T)$ (or cycle time) for an
arbitrary customer, that is, the total time spent by a customer in the network from its
arrival moment to its final departure, is given by:

$$E(T) = \sum_{j=1}^{n} E(V_j)\left( E(W_j) + \frac{1}{\mu_j} \right).$$
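
The following sketch (our own transcription of the formulas above, with an invented two-station instance) evaluates a Jackson network node by node: $\pi_j(0)$ from the standard M/M/m expressions, then $E(W_j)$ and finally $E(T)$:

from math import factorial

def mmm_pi0(rho, m):
    """pi_j(0) for an M/M/m station with per-server utilization rho < 1."""
    s = sum((m * rho)**k / factorial(k) for k in range(m))
    s += (m * rho)**m / (factorial(m) * (1 - rho))
    return 1.0 / s

def expected_wait(lam_j, mu_j, m_j):
    """E(W_j), the expected waiting time in queue at station j (see above)."""
    rho = lam_j / (mu_j * m_j)
    pi0 = mmm_pi0(rho, m_j)
    return rho * (m_j * rho)**m_j / (lam_j * (1 - rho)**2 * factorial(m_j)) * pi0

def lead_time(lam, mu, m, lam0):
    """E(T) = sum_j E(V_j)(E(W_j) + 1/mu_j), with E(V_j) = lam_j / lam0."""
    return sum((lam[j] / lam0) * (expected_wait(lam[j], mu[j], m[j]) + 1.0 / mu[j])
               for j in range(len(lam)))

# Invented instance: overall rates lam_j from (3.1), two servers per station.
print(lead_time(lam=[4.0, 3.0], mu=[2.5, 2.0], m=[2, 2], lam0=3.0))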

Note that the model in Jackson (1963) allows for arrival and service rates to
depend on the state of the system.
Whitt (1999) proposed a time-dependent and state-dependent generalization of a
Jackson queueing network to model a telephone call center. For each station j, exter-
nal arrivals λ j (t, x), service rates μ j (t, x) and routing probabilities r ji (t, x), i = 1, .., n
depend upon the time t and the state x = (x1 , x2 , .., xn ) of the system. The Markovian
structure makes it possible to obtain a time-dependent description of performance as
the solution of a system of ordinary differential equations, but the network structure
induces a very large number of equations, tending to make the analysis intractable.
The author presented a framework for decomposition approximations by assuming
the transition intensities of the underlying Markov chain to be of a product form.

3.2.2 Multiple Class Jackson Networks

Baskett et al (1975) treated multiclass Jackson networks and obtained product form
solutions for three service disciplines : processor sharing, ample service and last–
come–first–served with preemptive resume servicing. Customers are allowed to
switch classes after completing service at a station. The external input may be state
dependent and service distributions can be of the phase type. They also considered
multiple server first–come–first–served stations where customers of different classes
have the same rate of exponentially distributed service times. See the discussion in
Kleinrock (1976, Sec. 4.12). Reiser and Kobayashi (1974) generalized the result of
Baskett et al. by assuming that customer routing transitions are characterized by a
Markov chain decomposable into multiple subchains.
Kelly (1975, 1976, 1979) also extended Jackson’s results to multiple class queue-
ing networks. The type of a customer is allowed to influence his choice of path
through the network and, under certain conditions, his service time distribution at
each queue. Kelly's model allows for different service disciplines. Even so, the
equilibrium probability has a product form (see also Disney and Konig, 1985).

Let I be the number of customer classes. Customers of type i arrive to the network
as a Poisson process with rate $\lambda^{(i)}$ and follow the route

$$r_1^{(i)}, r_2^{(i)}, \ldots, r_{f_i}^{(i)},$$

where $r_j^{(i)}$ is the j-th station visited by this type and $r_{f_i}^{(i)}$ is the last station visited
before leaving the system. At station j, customers have an exponentially distributed
service requirement; requirements at the stations visited by a customer of a particular
class are independent, and those at all stations for all customers are mutually
independent and independent of the arrival processes.
If queue j contains $k_j$ customers then the expected service requirement for the
customer in position l is $1/\mu_j^{(l)}$. Also, $x_{jl} = (v_{jl}, s_{jl})$ $(l = 1, \ldots, k_j)$ indicates that the
l-th customer in the queue is of type $v_{jl}$ and has reached stage $s_{jl}$ along its
route. $X_j = (x_{j1}, x_{j2}, \ldots, x_{jk_j})$ denotes the state of station j. The state of the network
is represented by $X = (X_1, X_2, \ldots, X_n)$. It is then proved (Kelly, 1975; Disney and
Konig, 1985) that the equilibrium distribution is given by:

$$\pi(X) = \prod_{j=1}^{n} \pi_j(X_j),$$

where

$$\pi_j(X_j) = B_j \prod_{l=1}^{k_j} \frac{\alpha_j(v_{jl}, s_{jl})}{\mu_j^{(l)}}, \qquad B_j = \left( \sum_{a=0}^{\infty} \frac{b_j^{\,a}}{\prod_{l=1}^{a} \mu_j^{(l)}} \right)^{-1},$$

$$b_j = \sum_{i=1}^{I} \sum_{s=1}^{f_i} \alpha_j(i, s), \qquad \alpha_j(i, s) = \begin{cases} \lambda^{(i)} & \text{if } r_s^{(i)} = j, \\ 0 & \text{otherwise.} \end{cases}$$
Let $N_j$ $(j = 1, \ldots, n)$ be the stationary queue lengths in equilibrium. Their stationary
probabilities are:

$$P\left(N_j = k_j\right) = \frac{B_j\, b_j^{k_j}}{\prod_{l=1}^{k_j} \mu_j^{(l)}}.$$

The equilibrium departure process of class i is a Poisson process with rate $\lambda^{(i)}$, and
the departure processes of the different classes are mutually independent (Kelly,
1976).
Although these results are interesting, practical implementations are difficult due
to the size of the state space (Bitran and Morabito, 1996).

The previous model (Kelly, 1976) supposes deterministic routing. General routing
is considered in Kelly (1975). Based on the fact that nonnegative probability
distributions can be well approximated by finite mixtures of gamma distributions,
he further conjectured that many of his results can be extended to include general
service time distributions. This conjecture was proved by Barbour (1976).
Gross and Harris (1998, Sec. 4.2.1) presented a multiclass network where customers
are served by $m_j$ exponential servers at station j, with the same service rate
for all classes and first-come-first-served discipline. In this case, the waiting time is
the same for all customer classes. It is suggested to first solve the traffic equations
separately for each customer class and then add the resulting arrival rates. Denote by
$\lambda_{0j}^{(l)}$ the external arrival rate of customers of class l from outside to station j and let
$r_{ij}^{(l)}$ be the probability for a customer of class l to move to station j after completing
service at station i. Solving the traffic rate equations (3.1) for each class l yields $\lambda_j^{(l)}$, $j = 1, \ldots, n$;
i.e., the overall arrival rate of customers of class l to station j. We
then obtain $\lambda_j = \sum_{l=1}^{I} \lambda_j^{(l)}$. Using M/M/$m_j$ results, we obtain the average number
$L_j$ of customers in station j (the average waiting time can be obtained by Little's
formula). The average number of customers of class l in station j is then given by:

$$L_j^{(l)} = \frac{\lambda_j^{(l)}}{\sum_{i=1}^{I} \lambda_j^{(i)}}\, L_j.$$
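
A brief sketch of this class-by-class recipe (our own illustration; $L_j$ is assumed to have been obtained beforehand from standard M/M/$m_j$ results):

import numpy as np

def per_class_rates(routing_by_class, lam0_by_class):
    """Solve the traffic rate equations (3.1) once per class."""
    return [np.linalg.solve(np.eye(len(l0)) - R.T, l0)
            for R, l0 in zip(routing_by_class, lam0_by_class)]

def apportion_L(L_j, rates_by_class, j):
    """Split L_j among classes in proportion to their arrival rates at station j."""
    total = sum(rates[j] for rates in rates_by_class)
    return [rates[j] / total * L_j for rates in rates_by_class]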

3.3 Generalized Jackson Networks

3.3.1 Single Class Generalized Jackson Networks

When the interarrival or service times (or both) are not exponential, we talk about
a generalized Jackson network. Decomposition methods try to extend the indepen-
dence between stations and Jackson’s product form solution to general open net-
works. The individual stations are analyzed as independent GI/G/m queues after
approximating arrival processes by renewal processes. This approach involves:
• Combining the input of each station: arrivals from outside and from other
stations are merged to produce an arrival flow to the station.
• Analyzing each station as an independent GI/G/m queue: computing performance
measures and departures.
• Splitting up the departures from each station: decomposing the overall departure
flow into departure flows to the other stations and to the outside.
In general, distributions are specified by their first two moments (the mean and the
squared coefficient of variation). This approach was first proposed by Reiser and
Kobayashi (1974) and improved by Sevcik et al (1977), Kuehn (1979), Shanthiku-
mar and Buzacott (1981), Albin (1982), Whitt (1983a) among others.

Fig. 3.2 Basic steps of decomposition

3.3.1.1 GI/G/1 Open Queueing Network

Suppose we have n internal stations in the network with one server at each station.
For station j, external interarrival times $a_{0j}$ and service times $s_j$ are independent
and identically distributed (i.i.d.) with general distributions. Define the following
notation:
• $\lambda_{0j}$ : expected external arrival rate;
• $ca_{0j}$ : scv (squared coefficient of variation), or variability, of the external interarrival time, $ca_{0j} = V(a_{0j})/E(a_{0j})^2$;
• $\mu_j$ : expected service rate, $\mu_j = 1/E(s_j)$;
• $cs_j$ : scv, or variability, of the service time, $cs_j = V(s_j)/E(s_j)^2$.

After completing service at station i, a customer moves to station j with probability
$r_{ij}$ or leaves the network with probability $r_{i0}$. Suppose that there is no immediate
feedback ($r_{ii} = 0$, $i = 1, \ldots, n$).
Similarly to Jackson networks, we obtain the exact (combined) expected arrival rates
from the traffic rate equations (3.1). Necessary and sufficient conditions for stability
of this network are well known: at each station, the total arrival rate
must be less than the service rate. See Borovkov (1987) or, for a modern treatment
and further references, Dai (1995).

Merging arrivals:

The asymptotic method (Sevcik et al, 1977) and the stationary-interval method
(Kuehn, 1979) may be used to determine $ca_j$, i.e., the variability of the merged
interarrival time ($ca_j = V(a_j)/E(a_j)^2$, $\lambda_j = 1/E(a_j)$). Moreover, the asymptotic
method is asymptotically correct as $\rho_j \to 1$ (heavy traffic intensity) and the
stationary-interval method is asymptotically correct when the arrival process tends
to a Poisson process (Bitran and Morabito, 1996).
Let $ca_{ij}$ be the interarrival time variability at station j of the stream from station i.
Based on the asymptotic method, $ca_j$ is a convex combination of the $ca_{ij}$ given by:

$$ca_j = \frac{\lambda_{0j}}{\lambda_j}\, ca_{0j} + \sum_{i=1}^{n} \frac{\lambda_{ij}}{\lambda_j}\, ca_{ij}. \qquad (3.3)$$

Albin (1982, 1984) suggested an approximation to $ca_j$ based on a convex combination
of the previous value and the one obtained by the stationary-interval
method. Whitt (1983b) substituted the stationary-interval method by a Poisson process
and obtained:

$$ca_j = w_j \sum_{i=0}^{n} \frac{\lambda_{ij}}{\lambda_j}\, ca_{ij} + 1 - w_j, \qquad (3.4)$$

where

$$w_j = \frac{1}{1 + 4(1-\rho_j)^2 (v_j - 1)}, \qquad v_j = \frac{1}{\sum_{i=0}^{n} \left( \lambda_{ij}/\lambda_j \right)^2}.$$

Computing departures:

The squared coefficient of variation $cd_j$ of the inter-departure stream from station j
is computed by Marshall's formula:

$$cd_j = ca_j + 2\rho_j^2 cs_j - 2\rho_j(1-\rho_j)\,\frac{E(W_j)}{E(s_j)}.$$

Using the Kraemer and Langenbach-Belz (1976) approximation for the expected
waiting time $E(W_j)$ at G/G/1 nodes, this becomes:

$$cd_j = \rho_j^2 cs_j + (1-\rho_j^2)\, ca_j.$$

Splitting departures:

Under the assumption of Markovian routing, the departure stream from station j is
split. The squared coefficient of variation $cd_{ji}$ of the departure stream from station
j to station i is given by:

$$cd_{ji} = r_{ji}\, cd_j + 1 - r_{ji}.$$

Analysis of single nodes:

The expected waiting time $E(W_j)$ at station j may be estimated by the KLB formula
(Kraemer and Langenbach-Belz, 1976):

$$E(W_j) = \frac{\rho_j\,(ca_j + cs_j)\, g(\rho_j, ca_j, cs_j)}{2\mu_j (1-\rho_j)},$$

where

$$g(\rho_j, ca_j, cs_j) = \begin{cases} \exp\left(-\dfrac{2(1-\rho_j)(1-ca_j)^2}{3\rho_j (ca_j + cs_j)}\right) & \text{if } ca_j < 1, \\[2mm] 1 & \text{if } ca_j \ge 1. \end{cases}$$
For other approximations of $E(W_j)$ see, e.g., Shanthikumar and Buzacott (1981) and
Buzacott and Shanthikumar (1993).
The expected lead time $E(T)$ for a customer (including waiting times and service
times) is given by:

$$E(T) = \sum_{j=1}^{n} E(V_j)\left( E(W_j) + \frac{1}{\mu_j} \right),$$

where $E(V_j)$ is the expected number of visits to station j given by (3.2).
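
Pulling the steps of this subsection together, the following compact sketch (our own simplified illustration, not the QNA implementation; the tandem instance is invented) solves the traffic rate equations, iterates the merging (3.3), departure and splitting formulas to a fixed point, and applies the KLB waiting-time approximation:

import numpy as np

def decompose_gg1(R, lam0, ca0, mu, cs, iters=50):
    """Two-moment decomposition of an open GI/G/1 network (a sketch)."""
    n = len(mu)
    lam = np.linalg.solve(np.eye(n) - R.T, lam0)  # traffic rate equations (3.1)
    rho = lam / mu
    assert np.all(rho < 1), "every station must be stable"
    ca = np.ones(n)                               # initial variability guess
    for _ in range(iters):
        cd = rho**2 * cs + (1 - rho**2) * ca      # departure scv (KLB link)
        ca_new = np.empty(n)
        for j in range(n):                        # split streams, merge by (3.3)
            acc = lam0[j] * ca0[j]
            for i in range(n):
                if R[i, j] > 0:
                    cdij = R[i, j] * cd[i] + 1 - R[i, j]
                    acc += lam[i] * R[i, j] * cdij
            ca_new[j] = acc / lam[j]
        ca = ca_new
    g = np.where(ca < 1,
                 np.exp(-2 * (1 - rho) * (1 - ca)**2 / (3 * rho * (ca + cs))),
                 1.0)
    EW = rho * (ca + cs) * g / (2 * mu * (1 - rho))  # KLB waiting times
    return lam, rho, ca, EW

# Invented tandem example: station 1 sends 70% of its output to station 2.
lam, rho, ca, EW = decompose_gg1(
    R=np.array([[0.0, 0.7], [0.0, 0.0]]), lam0=np.array([1.0, 0.0]),
    ca0=np.array([1.5, 1.0]), mu=np.array([1.6, 1.0]), cs=np.array([0.8, 2.0]))
print(EW)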

3.3.1.2 GI/G/m Open Queueing Network

Suppose that in station j there are $m_j$ ($m_j \ge 1$) identical servers. The following
system of equations (Whitt, 1983a) allows us to determine $ca_j$ for each station:

$$ca_j = \alpha_j + \sum_{i=1}^{n} ca_i\, \beta_{ij}, \qquad j = 1, \ldots, n,$$

where

$$\alpha_j = 1 + w_j \left( p_{0j}\, ca_{0j} - 1 + \sum_{i=1}^{n} p_{ij} \left( 1 - r_{ij} + r_{ij}\,\rho_i^2\, y_i \right) \right),$$

$$\beta_{ij} = w_j\, p_{ij}\, r_{ij}\, (1 - \rho_i^2),$$

with $w_j$ defined by (3.4) and

$$p_{ij} = \frac{\lambda_{ij}}{\lambda_j} = r_{ij}\,\frac{\lambda_i}{\lambda_j}, \qquad y_i = 1 + \frac{\max\{cs_i, 0.2\} - 1}{\sqrt{m_i}}.$$

The expressions for $\alpha_j$ and $\beta_{ij}$ follow from considerations of the merging and
splitting of customer streams and of the impact of service time variability on the
squared coefficient of variation of the traffic streams departing from a station, as
opposed to that of the incoming stream.
The expected waiting time at station j is given by:

$$E(W_j) = \frac{ca_j + cs_j}{2}\, W_j,$$

where $W_j$ is the expected waiting time of the corresponding M/M/$m_j$ queue. Many
other approximation formulas for the mean waiting time in a GI/G/m system are
given in Bolch et al (2006, Sec. 6.3.6).

Creating and Combining Customers:

The method described in this section allows for customer creation and combination
by using a multiplication factor $\gamma_j$ at each station j (Whitt, 1983a).

Eliminating immediate feedback:

For those stations where $r_{jj} > 0$ it is advantageous to consider the successive visits
of a customer as one longer visit, that is, a customer gets its total service time
continuously. The stations' parameters are changed as follows (Kuehn, 1979):

$$\mu_j^* = \mu_j (1 - r_{jj}),$$
$$cs_j^* = r_{jj} + (1 - r_{jj})\, cs_j,$$
$$r_{ij}^* = r_{ij}/(1 - r_{jj}), \quad i \ne j.$$
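
In code, the reconfiguration is a simple per-station parameter transformation; the sketch below (ours) uses the formulas above, including the reconstructed $cs_j^*$ expression:

def eliminate_feedback(mu_j, cs_j, r_row, j):
    """Fold immediate feedback r_jj at station j into one longer visit.

    r_row is row j of the routing matrix; returns adjusted parameters."""
    p = r_row[j]                                  # feedback probability r_jj
    mu_star = mu_j * (1 - p)                      # slower aggregate service
    cs_star = p + (1 - p) * cs_j                  # scv of aggregated service time
    r_star = [0.0 if i == j else r_row[i] / (1 - p) for i in range(len(r_row))]
    return mu_star, cs_star, r_star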
An exact analogy between stations with and without feedback, with respect
to the distribution of queue lengths and mean sojourn times, was proved by Takacs
(1963) in the case of G/M/1 stations. The extension to general arrival processes is
an approximation. It has been shown by simulation that this reconfiguration step of
the network yields good accuracy, whereas the analysis without this step results in
considerable inaccuracies (Kuehn, 1979).
Further details may be found in Whitt (1983a,b) and Suri et al (1993).

Manufacturing systems:

To meet needs in the manufacturing environment, this method has been modified to
represent machine breakdowns, batch service, changing lot sizes and product testing
with associated repair and partial yields (Segal and Whitt, 1989).

3.3.1.3 The Bottleneck Phenomenon

Suresh and Whitt (1990) showed that for tandem queues, for example, the original
Whitt’s procedure performs well for all except the last station, which is a bottleneck.
That is, the expected waiting time at the bottleneck station is underapproximated.
The heavy-traffic bottleneck phenomenon can be described as a relatively large
number in queue, observed when external arrivals are highly variable and a bot-

tleneck station is visited after jobs go through stations with moderate traffic (Kim,
2005). Whitt (1995) suggested an enhancement to the parametric-decomposition
method for generalized Jackson networks. Instead of using a variability parameter
for each arrival process, he proposed the use of a variability function for each arrival
process; i.e., the variability parameter should be regarded as a function of the traffic
intensity of a queue to which the arrival process might go.
Dai et al (1994) proposed a hybrid method for analyzing generalized Jackson networks
that employs both decomposition approximation and heavy traffic theory: the
sequential bottleneck method, in which an open queueing network is decomposed
into a set of groups of queues, i.e., not necessarily individual queues.

3.3.1.4 PH/PH/1(/K) Open Queueing Network

Haverkort (1995, 1998) modified Whitt's method by using PH/PH/1(/K) queues
instead of GI/G/1 queues, so that the individual queues can be solved exactly us-
ing matrix-geometric techniques. In another step, he also allowed for the inclusion
of finite capacity queues. Sadre et al (1999) extended this work by removing a
few approximate steps in the decomposition procedure. In particular, they used ex-
act results for the departure process of PH/PH/1/K queues, as first developed by
Bocharov and Naumov (1986).

3.3.1.5 Open Queueing Network with Correlated Input

As mentioned before, in most existing decomposition algorithms for open networks,
the output of a queue is usually approximated as a renewal process, which becomes
the arrival process to the next queue. Since the correlations of network traffic may
have a considerable impact on performance measures, they must be captured to
some extent by the employed traffic descriptors. Heindl (2001) considered a general
tandem network where the internal traffic processes are described as semi-Markov
processes (SMPs) and Markov modulated Poisson processes (MMPPs).
Heindl and Telek (2002) presented a decomposition methodology based on
Markovian arrival processes (MAPs), whose correlation structure is determined
from the busy-period behavior of the upstream queues. The resulting compact MAPs
in connection with sophisticated moment matching techniques allow an efficient de-
composition of large queueing networks. Compared with a previous approach, the
output approximation of MAP/PH/1(/K) queues - the crucial step in MAP-based de-
composition - is refined in such a way that also higher moments of the number of
customers in a busy period can be taken into account. Heindl et al (2006) constructed
a Markovian arrival process of second order (MAP(2)) and showed numerically how
their results can be used to efficiently decompose queueing networks.
Kim et al (2005) proposed an improvement to Whitt's method (called the innovations
method) by replacing relations among squared coefficients of variation
with approximate regression relationships among the underlying point processes.
These relationships make it possible to add information on correlations between
different streams. Kim (2004) combined the variability function and the innovations
method in the context of the heavy-traffic bottleneck phenomenon.
Balcıoğlu et al (2008) proposed a three-parameter renewal approximation to analyze the splitting and superposition of autocorrelated processes, based on the work of Jagerman et al (2004). Two parameters capture information on the first and second order statistics of the original process, and the third parameter captures the intricate behaviour that a superposition can exhibit.

3.3.2 Multiple Class Generalized Jackson Networks

Whitt (1983a) proposed a procedure to aggregate all classes into a single one and utilize the single class model described above. In this way the original multiple class model is reduced to a single aggregate open network. After the analysis of the aggregate class model, the performance measures for each class are estimated individually. In many cases this aggregation step works quite well, but in some cases it does not (Whitt, 1994).
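A minimal sketch of such an aggregation step (in Python, with illustrative names) is given below: classes are collapsed into a single one by weighting with arrival-rate shares, the simple stationary-interval heuristic; the weights actually used by Whitt (1983a) differ in their details.

def aggregate_classes(classes):
    # classes: list of (rate, arrival SCV, mean service, service SCV)
    lam = sum(c[0] for c in classes)
    shares = [c[0] / lam for c in classes]
    ca2 = sum(p * c[1] for p, c in zip(shares, classes))   # aggregate arrival SCV
    es = sum(p * c[2] for p, c in zip(shares, classes))    # aggregate mean service
    # second moment of the service-time mixture yields the aggregate service SCV
    es2 = sum(p * (c[3] + 1) * c[2] ** 2 for p, c in zip(shares, classes))
    return lam, ca2, es, es2 / es ** 2 - 1

print(aggregate_classes([(0.3, 1.2, 1.0, 0.5), (0.5, 0.8, 0.6, 1.0)]))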
Bitran and Tirupati (1988) considered an open queueing network with multiple customer classes, deterministic routing and generally distributed arrival and service times. They pointed out that the splitting operation in Whitt's original procedure may not perform well due to the existence of interference among classes. Their approximation is based on the two-class case, aggregating all classes except the one of interest into a single class, whose aggregate arrivals (class 2) are assumed to follow a Poisson process. Their procedure provides dramatic improvements in accuracy in some cases (Whitt, 1994).
As an extension to the approximations by Bitran and Tirupati (1988), Whitt (1994) developed methods for approximately characterizing the departure process of each customer class from a multi-class single-server queue ∑(GIi/GIi)/1 with a non-Poisson renewal arrival process and a non-exponential service-time distribution for each class, unlimited waiting space and the FCFS service discipline. The results are used for improving parametric-decomposition approximations for analyzing non-Markov open queueing networks with multiple classes. The effect of class-dependent service times is also considered there. Whitt used two different approaches: an extension of Bitran and Tirupati's formula (based on batch Poisson and batch deterministic processes) and a heuristic hybrid approximation based on the results for the limiting case where a server is continuously busy.
Caldentey (2001) presented an approximation method to compute the squared
coefficient of variation of the departure stream from a multiclass queueing system
generalizing the results of Bitran and Tirupati (1988) and Whitt (1994).
Kim (2005) considered a multiclass queueing network with deterministic routing and highly variable arrivals. He pointed out that the previous procedures of Bitran and Tirupati (1988) and Whitt (1994) may not be accurate under the high variability assumption. He proposed refinements to those results based on Whitt's variability functions.

3.4 Other Classes of Networks

3.4.1 Infinite Server Networks

Harrison and Lemoine (1981) considered networks of queues with an infinite num-
ber of servers at each station. They pointed out that independent motions of cus-
tomers in the system, which are characteristic of infinite-server networks, lead in
a simple way to time-dependent distributions of state, and thence to steady-state
distributions. Moreover, these steady-state distributions often exhibit an invariance
with regard to distributions of service in the network.
Massey and Whitt (1993) considered a network of infinite-server queues with
nonstationary Poisson input. As a motivating application, they cited wireless (or
mobile cellular) telecommunications systems. Their model is a highly idealized one, which initially ignores resource constraints. The different queues rep-
resent cells. Call originations are modeled as a nonhomogeneous Poisson process,
with the nonhomogeneity capturing the important time-of-day effect.

3.4.2 Batch Movement Networks

In real life, many applications feature simultaneous job transitions. For example, in
manufacturing, parts are often processed and transported in batches. Batch queuing
networks have been considered by Kelly (1979) and subsequently Whittle (1986)
and Pollett (1987). Miyazawa and Taylor (1997) proposed a class of batch arrival, batch service continuous-time open queueing networks with batch movements. A requested number of customers is served simultaneously at a node and transferred to another node as, possibly, a batch of different size, if there are sufficient customers there; otherwise the node is emptied. Their model assumes a Markovian setting for the arrival process, service times and routing, where batch sizes are generally distributed. The authors introduced an extra batch arrival process while nodes are empty and showed that the stationary distribution of the queue length has a geometric product form over the nodes if and only if certain conditions are satisfied for the extra arrivals, and under a stability condition.
The correspondence between batch-movement queueing networks and single-movement queueing networks has also been discussed in Coleman et al (1997) for a class of networks having product-form solutions.

Meng and Heragu (2004) proposed an extension to the classical decomposition algorithm of Whitt (1983a) to handle transfer batch size changes between stations. They consider only deterministic routing, where transfer batch sizes are also deterministic.

Acknowledgements This work is supported by the SEVENTH FRAMEWORK PROGRAMME - THE PEOPLE PROGRAMME - Marie Curie Industry-Academia Partnerships and Pathways Project (No. 217891) "Keeping jobs in Europe".

References

Albin SL (1982) On Poisson approximations for superposition arrival processes in queues. Management Science 28(2):126–137
Albin SL (1984) Approximating a point process by a renewal process, II: Superposition arrival processes to queues. Operations Research 32(5):1133–1162
Baldwin R, Davis IV N, Midkiff S, Kobza J (2003) Queueing network analysis:
concepts, terminology, and methods. Journal of Systems and Software 66(2):99–
117
Barbour A (1976) Networks of queues and the method of stages. Advances in Ap-
plied Probability 8(3):584–591
Baskett F, Chandy K, Muntz R, Palacios F (1975) Open, closed and mixed networks
of queues with different classes of customers. Journal of the ACM 22(2):248–260,
DOI http://doi.acm.org/10.1145/321879.321887
Bitran G, Morabito R (1996) Open queueing networks: Optimization and performance evaluation models for discrete manufacturing systems. Production and Operations Management 5(2):163–193
Bitran G, Tirupati D (1988) Multiproduct queueing networks with deterministic
routing: Decomposition approach and the notion of interference. Management
Science 34(1):75–100
Bitran GR, Dasu S (1992) A review of open queueing network models of manufac-
turing systems. Queueing Systems 12(1-2):95–134
Balcıoğlu B, Jagerman D, Altiok T (2008) Merging and splitting autocorrelated arrival processes and impact on queueing performance. Performance Evaluation 65(9):653–669
Bocharov P, Naumov V (1986) Matrix-geometric stationary distribution for the PH/PH/1/r queue. Elektronische Informationsverarbeitung und Kybernetik 22(4):179–186
Bolch G, Greiner S, de Meer H, Trivedi K (2006) Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, 2nd edn. Wiley, New York
Borovkov AA (1987) Limit theorems for queueing networks. I. Theory Probab Appl 31(3):413–427
Burke P (1956) The output of a queueing system. Operations Research 4(6):699–704
Buzacott JA, Shanthikumar JG (1992) Design of manufacturing systems using queueing models. Queueing Systems 12(1–2):135–213
Buzacott JA, Shanthikumar JG (1993) Stochastic models of manufacturing systems.
Prentice-Hall, Englewood Cliffs, NJ
Caldentey R (2001) Approximations for multi-class departure processes. Queueing
Systems 38(2):205–212
Coleman J, Henderson W, Pearce C, Taylor P (1997) A correspondence between
product-form batch-movement queueing networks and single-movement net-
works. Journal of Applied Probability 34(1):160–175
Dai J (1995) On positive recurrence of multiclass queueing networks: A unified
approach via fluid limit models. Annals of Applied Probability 5(1):49–77
Dai J, Nguyen V, Reiman M (1994) Sequential bottleneck decomposition: an approximation method for generalized Jackson networks. Operations Research 42(1):119–136
Disney RL (1981) Queueing networks. Proceedings of AMS Symposia in Applied
Mathematics 25:53–83
Disney RL, Konig D (1985) Queueing networks: A survey of their random pro-
cesses. SIAM Review 27(3):335–403
Govil M, Fu M (1999) Queueing theory in manufacturing: A survey. Journal of Manufacturing Systems 18(3):214–240
Gross D, Harris CM (1998) Fundamentals of Queueing Theory, 3rd edn. Wiley, New York
Harrison J, Lemoine A (1981) A note on networks of infinite-server queues. Journal
of Applied Probability 18(2):561–567
Haverkort B (1995) Approximate analysis of networks of PH/PH/1/K queues: Theory & tool support. In: MMB '95: Proceedings of the 8th International Conference on Modelling Techniques and Tools for Computer Performance Evaluation, Springer-Verlag, London, pp 239–253
Haverkort B (1998) Approximate analysis of networks of PH/PH/1/K queues with customer losses: Test results. Annals of Operations Research 79(0):271–291
Heindl A (2001) Decomposition of general tandem queueing networks with MMPP input. Performance Evaluation 44(1–4):5–23
Heindl A, Telek M (2002) Output models of MAP/PH/1(/K) queues for an effi-
cient network decomposition. Performance Evaluation 49(1–4):321–339
Heindl A, Mitchell K, van de Liefvoort A (2006) Correlation bounds for second-order MAPs with application to queueing network decomposition. Performance Evaluation 63(6):553–577
Jackson JR (1957) Networks of waiting lines. Operations Research 5(4):518–521
Jackson JR (1963) Job shop-like queueing systems. Management Science
10(1):131–142
Jackson RRP (1954) Queueing systems with phase type service. OR 5(4):109–120
Jagerman D, Balcıoğlu B, Altiok T, Melamed B (2004) Mean waiting time approximations in the G/G/1 queue. Queueing Systems 46(3):481–506
Kelly FP (1975) Networks of queues with customers of different types. Journal of
Applied Probability 12(3):542–554
Kelly FP (1976) Networks of queues. Advances in Applied Probability 8(2):416–432
Kelly FP (1979) Reversibility and Stochastic Networks. Wiley, New York
Kim S (2004) The heavy-traffic bottleneck phenomenon under splitting and super-
position. European Journal of Operational Research 157(3):736–745
Kim S (2005) Approximation of multiclass queueing networks with highly variable
arrivals under deterministic routing. Naval Research Logistics 52(5):399–408
Kim S, Muralidharan R, O’Cinneide C (2005) Taking account of correlation be-
tween streams in queueing network approximations. Queueing Systems 49(3–
4):261–281
Kleinrock L (1976) Queueing systems, Vol. II : Computer applications. Wiley, New
York
Koenigsberg E (1982) Twenty five years of cyclic queues and closed queue net-
works: A review. The Journal of the Operational Research Society 33(7):605–619
Kraemer W, Langenbach-Belz M (1976) Approximate formulae for the delay in the queueing system GI/G/1. In: Proceedings of the 8th International Teletraffic Congress, Melbourne, paper 235, pp 1–8
Kuehn PJ (1979) Approximate analysis of general queuing networks by decompo-
sition. IEEE Transactions on Communications 27(1):113–126
Lemoine AJ (1977) Networks of queues - a survey of equilibrium analysis. Man-
agement Science 24(4):464–481
Massey WA, Whitt W (1993) Networks of infinite-server queues with nonstationary Poisson input. Queueing Systems 13(1–3):183–250
Meng G, Heragu SS (2004) Batch size modeling in a multi-item, discrete manufac-
turing system via an open queuing network. IIE Transactions 36(8):743–753
Miyazawa M, Taylor P (1997) A geometric product-form distribution for a queue-
ing network with non-standard batch arrivals and batch transfers. Advances in
Applied Probability 29(2):523–544
Pollett PK (1987) Preserving partial balance in continuous-time Markov chains. Advances in Applied Probability 19(2):431–453
Reich E (1957) Waiting times when queues are in tandem. Annals of Mathematical
Statistics 28(3):768–773
Reiser M, Kobayashi H (1974) Accuracy of the diffusion approximation for some
queuing systems. IBM Journal of Research and Development 18(2):110–124
Sadre R, Haverkort B, Ost A (1999) An efficient and accurate decomposition method for open finite- and infinite-buffer queueing networks. In: Stewart W, Plateau B (eds) Proceedings of the 3rd International Workshop on the Numerical Solution of Markov Chains, p 120
Segal M, Whitt W (1989) A queueing network analyser for manufacturing. Teletraf-
fic Science for New Cost-Effective Systems, Networks and Services, Proceedings
of ITC 12 (ed M Bonatti), North-Holland, Amsterdam pp 1146–1152
Sevcik KC, Levy AI, Tripathi SK, Zahorjan JL (1977) Improving approximations
of aggregated queueing network systems. Computer Performance (eds K Chandy
and M Reiser), North-Holland pp 1–22
Shanthikumar JG, Ding S, Zhang M (2007) Queueing theory for semiconductor manufacturing systems: A survey and open problems. IEEE Transactions on Automation Science and Engineering 4(4):513–522
Shanthikumar JG, Buzacott JA (1981) Open queueing network models of dynamic
job shops. International Journal of Production Research 19(3):255–266, DOI
10.1080/00207548108956652
Suresh S, Whitt W (1990) The heavy-traffic bottleneck phenomenon in open queue-
ing networks. Operations Research Letters 9(6):355–362
Suri R, Sanders JL, Kamath M (1993) Performance evaluation of production net-
works, vol 4. Elsevier/North-Holland, Amsterdam
Suri R, Diehl GWW, de Treville S, Tomsicek MJ (1995) From CAN-Q to MPX: Evolution of queuing software for manufacturing. Interfaces 25(5):128–150
Takacs L (1963) A single server queue with feedback. The Bell System Technical Journal 42:505–519
Whitt W (1983a) The queueing network analyzer. The Bell System Technical Journal 62(9):2779–2815
Whitt W (1983b) Performance of the queueing network analyzer. The Bell System Technical Journal 62(9):2817–2843
Whitt W (1994) Towards better multi-class parametric-decomposition approxima-
tions for open queueing networks. Annals of Operations Research 48(3):221–248
Whitt W (1995) Variability functions for parametric-decomposition approximations
of queueing networks. Management Science 41(10):1704–1715
Whitt W (1999) Decomposition approximations for time-dependent Markovian queueing networks. Operations Research Letters 24(3):97–103, DOI http://dx.doi.org/10.1016/S0167-6377(99)00007-3
Whittle P (1986) Systems in Stochastic Equilibrium. Wiley, London
Part II
Modelling and Simulation
Chapter 4
Parsimonious Modeling and Forecasting of Time Series Drifted by Autoregressive Noise

Akram M. Chaudhry

Abstract This paper addresses the issue of modeling, analyzing and forecasting time series drifted by autoregressive noise, and of finding an optimal solution by extending a conventional linear growth model with an autoregressive component. This additional component is designed to take care of the high frequencies of the autoregressive noise drift without influencing the low frequencies of the linear trend or compromising the parsimonious nature of the model. The parameters of this model are optimally estimated through self-updating recursive equations using Bayesian priors. For the identification of the autoregressive order of the noise and the estimation of its coefficients, the ATS procedure of Akram (2001) is employed. Further, for the case of unknown observation variance, an on-line variance learning and estimation procedure is discussed. To demonstrate the practical aspects of the model some examples are given, and for the generation of short, medium and long term forecasts in one go an appropriate forecast function is provided.

4.1 Introduction

In many economic, financial and physical phenomena, time series drifted by autoregressive noise are observed. For the analysis of such series, numerous simple to complex models have been proposed by researchers. Most of these models are meant for either short term forecasts, or medium term or long term forecasts only. Very few of these models generate all three types of forecasts in one go. To obtain all these types of forecasts, usually three different models are employed using different model settings. These forecasts are then joined or combined to visualize them in one sequence over the short to long term time horizon. To do so, some sort of alignment is made by the

Akram M. Chaudhry, Associate Professor


College of Business Administration, University of Bahrain, P.O. Box 32038, Sakhir, Kingdom of Bahrain, Middle East. Contact: (+973) 39171071 (Mobile), 17438586 (Office), 17642281 (Res.)
e-mail: drakramm@hotmail.com, drakramm@yahoo.com, drakramm@buss.uob.bh


forecasters by under- and/or overestimating the actual forecasts at the joints. By doing so, some accuracy of the forecasts is sacrificed, resulting in heuristic rather than optimum forecasts. Further, for the identification of the order of auto-regression of the noise terms, parametric techniques such as those of Akaike (1973) and Bohlin (1978) are frequently used. These techniques are known to be cumbersome and sometimes ambiguous.
To overcome these problems, a parsimonious linear growth model having an additional drift component is presented. This additional component, which takes care of the auto-regression in the noise component, can be easily modeled and re-parameterized if the need arises. Before discussing the extended model, let us go through the conventional linear growth model of Harrison and Akram (1983), meant for taking care of the low frequencies of the trend, bearing white noise.

4.1.1 Conventional Linear Growth Model

For the analysis and forecasting of a time series {yt}t=1,2,...,T bearing white noise {δt}t=1,2,...,T, the conventional linear growth model at time t is locally defined as:

Yt = f θt + δt
θt = G θt−1 + wt

Where:
f = (1 × n) vector of known functions or constants.
θt = (n × 1) vector of unknown stochastic parameters.
G = (n × n) matrix, called the state or transition matrix, with n nonzero eigenvalues {λi}i=1,...,n.
δt is the observation noise, assumed to be normally distributed with mean zero and some known constant variance.
wt = (n × 1) vector of parameter noise, assumed to be normally distributed with mean zero and a constant known variance-covariance matrix W = diag(W1, . . . , Wn), the components of which are as defined by Harrison and Akram (1983).

4.1.1.1 Example 1

In the case of a second order (n = 2) model, the above components of the model, in canonical form, at time t are:

f = (1 0)

θt = (θ1, θ2)′, where the parameter θ1 is the level of the underlying process of the time series and θ2 is the growth parameter.
4 Parsimonious Modeling and Forecasting of Time Series 47

G = {gij}i,j=1,2 is a 2 × 2 transition matrix having nonzero eigenvalues {λi}i=1,2, such that g11 = 1, g12 = 1, g21 = 0, g22 = 2. This matrix assists in the transition of the low frequency of the trend, housed in the parameter vector, from the state at time t − 1 to t.
W = diag(w1, w2), where for a smoothing coefficient 0 < β < min(λ1², λ2²) the expressions of w1 and w2 are:

w1 = V(1 − β)(λ1 + λ2)(λ1λ2 − β) / (λ2 β)

w2 = V(1 − β)(λ1λ2 − β)(λ1 − λ2β)(λ2² − β) / (λ2 β²)
The parameters θ1 and θ2 of this model are optimally estimated using the recursive equations of Harrison and Akram (1983).
This second order model is the most commonly used member of the family of linear dynamic system models, as in many real-life cases it sufficiently represents the low frequencies of the underlying processes of many time series in a parsimonious manner. In this paper, therefore, this type of model shall be used for time series drifted by autoregressive (AR) noise of order p.

4.1.2 Comments

In practice, a more specific version having eigenvalues λ1 = 1 and λ2 = 2 is preferred for linearly growing phenomena, whereas for exponential growth λ1 = 1 and λ2 < 1 are used. The exact value of λ2, which depends upon the formation and representation of growth by exponential functions such as the logistic and Gompertz curves, may be estimated by the λ2 estimation procedure of Akram (1992).

4.2 Extended Linear Growth Model for AR(p) Drifts

The observations drifted by AR(p) type noise, i.e., Φp(B)Et = δt, may locally be modeled as:

yt = f θt + Et
θt = G θt−1 + wt
Et = [Φp(B)]−1 δt
Where:

B is the backward shift operator, such that BEt = Et−1.

Φp(B) = ∏i=1..p (1 − φi B) is invertible; that is, 0 < |φi| < 1 for all i.

{φi}i=1,...,p are the autoregressive coefficients.

δt ∼ N(0, V) and wt ∼ N(0, W) are as defined earlier.

In a compact form, this conventional representation of the drifted time series may be parsimoniously parameterized as:

Yt = f∗ θt∗
θt∗ = J θt−1∗ + wt∗ ;   wt∗ ∼ N(0, W∗)
Where for an AR(p) process:

f∗ = (1, 0, . . . , 0) is a (1 × (n + p)) vector of known functions or constants.

θt∗ = (θ1∗, . . . , θ(n+p)∗)′ is a ((n + p) × 1) vector of unknown parameters.

W∗ = diag{W1∗, W2∗} is of rank (n + p), where W1∗ = {wij}i,j=1,...,p is such that wij = V for i = j = p and zero otherwise, and W2∗ = diag{w1, w2}/V, where w1 and w2 are as defined earlier.

J = diag{Φp, G} is a ((n + p) × (n + p)) state transition matrix of full rank, where Φp is a (p × p) matrix of autoregressive coefficients {φi}i=1,...,p, defined as {φij}i,j=1,...,p such that φij = φi βφ^0.5 for i = j = 1, . . . , p and zero otherwise. Here βφ, with 0 < βφ < 1, is a damping coefficient for highly volatile noise frequencies.

The order and the values of {φi}i=1,...,p are determined by using the noise identification and testing procedure of Akram (2001).

G, the state transition matrix for the low frequencies of the underlying processes, is as defined earlier.

4.3 Estimation of the Parameters of the Extended Model

For the data Dt = (yt, Dt−1), assuming the prior of the parameter θ at time t − 1,

(θt−1 | Dt−1) ∼ N[mt−1; Ct−1],

the posterior of θ at time t,

(θt | Dt) ∼ N[mt; Ct],

is determined by providing initial information on f, G, W, m0 and C0, as stated by Harrison and Akram (1983) and Akram (1992), through the following recursive equations:

Rt = J Ct−1 J′ + W∗
At = Rt f∗′ [V + f∗ Rt f∗′]−1
Ct = [I − At f∗] Rt
et = yt − f∗ J mt−1
mt = J mt−1 + At [yt − f∗ J mt−1]

where at time t, Rt is the system matrix, I is an identity matrix, At is the updating or gain vector, and et is the one step ahead forecast error; W∗, the variance-covariance matrix of the parameter noise, is as defined earlier. The dimensions of all these components are assumed to be compatible with their associated vectors and matrices in the recursive updating equations.
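A minimal numerical sketch of one pass of these recursions, assuming NumPy, a column-vector state m and the Kalman-filter-style transposes written above; the function and variable names are illustrative only.

import numpy as np

def dlm_update(m_prev, C_prev, y_t, J, f_star, W_star, V):
    # One pass of the recursive updating equations of Sect. 4.3.
    # m_prev: ((n+p) x 1) posterior mean at t-1, C_prev: its covariance.
    f = f_star.reshape(-1, 1)                  # f* as a column vector
    R = J @ C_prev @ J.T + W_star              # system (prior) covariance R_t
    A = R @ f / float(V + f.T @ R @ f)         # gain (updating) vector A_t
    e = y_t - float(f.T @ (J @ m_prev))        # one step ahead forecast error e_t
    m = J @ m_prev + A * e                     # posterior mean m_t
    C = (np.eye(len(m)) - A @ f.T) @ R         # posterior covariance C_t
    return m, C, e

For the AR(2) example below, J = diag{Φ2, G} is 4 × 4, f∗ = (1, 0, 0, 0) and m is a (4 × 1) column vector.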

4.3.1 Example 2

For a time series drifted by an AR(2) noise process, a linear growth model in canonical form is operated at time t by defining:

f∗ = (1 0 0 0), a (1 × 4) vector.

θt∗ = (θ1∗, . . . , θ4∗)′, a (4 × 1) vector of unknown parameters.

W∗ = diag{W1∗, W2∗} is a (4 × 4) matrix, where W1∗ = {wij}i,j=1,2 is such that w22 = V and zero otherwise, and W2∗ = diag{w1, w2}/V, where w1 and w2 are as defined earlier.

J = diag{Φ2, G} is a (4 × 4) state transition matrix of full rank, where Φ2 = {φij}i,j=1,2 is such that φij = φi βφ^0.5 for i = j = 1, 2 and zero otherwise.

4.4 On Line Variance Learning

For the above recurrence equations, the observation noise variance V is assumed to be known. If it is unknown, then at time t it may be estimated on line using the following variance estimation equations:

Xt = βv Xt−1 + (1 − f∗ At) dt²

Nt = βv Nt−1 + 1, where 0 < βv < 1 is a variance damper.

Vt = Xt / Nt

Ŷt = Vt + f∗ Rt f∗′

dt = min(et², ξ Ŷt), where ξ is a preset constant, e.g. a value of 4 for a 95% confidence level and 6 for a 99% confidence level.

This variance learning system starts generating fairly accurate variance estimates after a couple of observations.
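A sketch of one pass of this learning loop, assuming the quantities f∗At and f∗Rt f∗′ are taken from the recursion of Sect. 4.3; reading dt² in the Xt recursion as the clipped squared error, and using the previous-period estimate Vt−1 inside Ŷt, are both assumptions of this sketch.

def variance_step(X_prev, N_prev, e_t, fA_t, fRf_t, beta_v=0.95, xi=4.0):
    # One step of the on-line variance learning of Sect. 4.4.
    # xi = 4 targets a 95% confidence level, xi = 6 a 99% level.
    Y_hat = X_prev / N_prev + fRf_t        # one-step forecast variance
    d_t = min(e_t ** 2, xi * Y_hat)        # clip outlying squared errors
    X_t = beta_v * X_prev + (1.0 - fA_t) * d_t
    N_t = beta_v * N_prev + 1.0
    return X_t, N_t, X_t / N_t             # V_t = X_t / N_t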

4.5 Forecast Function

For generating short, medium and long term forecasts in one go, the forecast function is:

Ft(k) = f∗ J^k mt, for integers k ≥ 1.

This function yields optimum short term forecasts and fairly accurate medium to long term forecasts at the same time.
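As a sketch, assuming NumPy and the filtered state mt and matrices J, f∗ from the recursion of Sect. 4.3:

import numpy as np

def forecast(f_star, J, m_t, k):
    # k-step-ahead forecast F_t(k) = f* J^k m_t, for integer k >= 1
    return float(f_star @ np.linalg.matrix_power(J, k) @ m_t)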

4.6 Comments

The above model is presented for time series drifted by an AR(p) noise process. In practice, time series drifted by noise of order higher than AR(2) are rarely observed. In many cases, therefore, a linear growth model with a drift component of AR(2) is all that is required. For more discussion see Akram (1994), Bohlin (1978) and Harrison and Akram (1983).
To determine the exact order of the AR noise many techniques are available. For greater ease, however, the AIC of Akaike (1973) and the ATS of Akram (2001) may be employed. Of these two techniques, the ATS may be used effectively by practitioners to estimate the unknown values of the autoregressive coefficients {φi}i=1,...,p, as demonstrated by Akram and Irfan (2007).
The above model is parameterized in a canonical form. For application purposes it may, if desired, be transformed to a diagonal form by using the inverse transformation of Akram (1988).
This model, if used in accordance with Akram (1994), is expected to take care of the high frequencies of the autoregressive noise while keeping the low frequencies of the underlying process of the time series intact, yielding fairly accurate forecasts as a result.

References

Akaike H (1973) Information theory and an extension of the maximum likelihood principle. In: Petrov BN, Csáki F (eds) Proceedings of the 2nd International Symposium on Information Theory, Akadémiai Kiadó, Budapest, Hungary, pp 267–281
Akram M (1988) Recursive transformation matrices for linear dynamic system models. Computational Statistics & Data Analysis 6:119–127
Akram M (1992) Construction of state space models for time series exhibiting ex-
ponential growth. In: Computational Statistics, vol 1, Physica Verlag, Heidelberg,
Germany, pp 303–308
Akram M (1994) Computational aspects of state space models for time se-
ries forecasting. Proceedings of 11th Symposium on Computational Statistics
(COMPSTAT-1994), Vienna, Austria, pp 116–117
Akram M (2001) A test statistic for identification of noise processes. Pakistan Jour-
nal of Statistics 17(2):103–115
Akram M, Irfan A (2007) Identification of optimum statistical models for time series analysis and forecasting using the Akaike information criterion and the Akram test statistic: A comparative study. In: Proceedings of the World Congress on Engineering, London, vol 2, pp 956–960
Bohlin T (1978) Maximum-power validation of models without higher-order fitting.
Automatica 14:137–146
Harrison P, Akram M (1983) Generalized exponentially weighted regression and
parsimonious dynamic linear modelling. Time Series Analysis: Theory and Prac-
tice 3:102–139
Chapter 5
Forecast of the Traffic and Performance
Evaluation of the BMT Container Terminal
(Bejaia’s Harbor)

D. Aı̈ssani, S. Adjabi, M. Cherfaoui, T. Benkhellat and N. Medjkoune

5.1 Introduction

The increase of traffic at the container park of the Bejaia harbor and the widening of its physical surface are not directly proportional. This is why improving the productivity of the park and the good functioning of the unloading and loading system requires the specialization of the equipment and the availability of a storage area which can receive the unloaded quantity, with a configuration able to adapt and respond to the traffic growth. Accordingly, a first study, which aimed to model the unloading process, was carried out in 2003 (Sait et al, 2007). At that time, the container park of the EPB (Harbor Company of Bejaia) was of 3000 ETU (Equivalent Twenty Units): 2100 ETU for the full park and 900 ETU for the empty park. The study showed that for an arrival rate of 0.55 ships/day and a batch size of 72 ETU, the mean number of containers in the full park was 1241 ETU. By varying the rate of the arrivals (or the batch size), the full park becomes saturated at a rate of 1.0368 ships/day (or at a batch size of 200 ETU). This study was one of the factors that raised the awareness of the EPB of the need to create a terminal dedicated to the treatment of containers, which led to the birth of the BMT (Bejaia Mediterranean Terminal) Company. The company began its commercial activities in July 2005. In order to ensure the good functioning of the container terminal, several performance evaluation studies have been established. A first study was carried out in 2007 (see Ayache et al, 2007). Its objective was the global modeling of the unloading/loading process, and it showed that if the arrival rate of ships (having a mean size of 170 ETU), which was 0.83 ships/day, increases to 1.4 ships/day, the full park will undergo a saturation of 94%.

Djamil Aı̈ssani
Laboratory LAMOS, University of Béjaia,
e-mail: lamos_bejaia@hotmail.com
Smail Adjabi
Laboratory LAMOS, University of Béjaia, e-mail: adjabi@hotmail.com


In this work, we propose another modeling approach, which consists in decomposing the system into four independent sub-systems: the loading, the unloading, the full stock and the empty stock processes.

5.2 Park with Containers and Motion of the Containers

In this section, we present the container park of the BMT Company and identify the motions of the containers.

5.2.1 The BMT Park with Containers

Currently, the terminal is provided with four quays of 500 m and a container park with a storage capacity of 10300 ETU. The park is divided into four zones: the full park, the empty park, the park with refrigerating containers and a zone of destuffing/packing (see Fig. 5.1.a).
The park with full containers has a capacity of 8300 ETU and the one with empty containers has a capacity of 900 ETU. In addition, the BMT container terminal offers specialized installations for refrigerating containers and dangerous products with a capacity of 600 ETU, as well as a zone of destuffing/packing with a capacity of 500 ETU (see Fig. 5.1.a).

5.2.2 Motions of the Containers

The principal motions of the containers at the Bejaia harbor are schematized in Fig. 5.1.b (Ayache et al, 2007).

5.2.2.1 The Unloading Process

The unloading process is made up mainly of five steps.


1. The step of anchorage: With the exception of car-ferries and container ships, any ship arriving at the Bejaia harbor is put on standby at the anchorage (roads) for a duration which varies from one ship to another, because of the occupation of the quay stations or the unavailability of pilot or tug boats.
2. The step of service: The accosting of the ships is ensured by the operational sections of the Harbor Company of Bejaia, such as the piloting and towing sections.

Fig. 5.1 (a): Plan of the terminal. (b): Plan of the model of treatment of the containers

3. Vessel handling: It consists in the unloading of the containers, which is carried out with the two quay gantries, whose carriages are able to raise the containers from the container ships and put them on tractors.
4. The transfer: Once the container is unloaded onto a tractor, it is transported towards the storage zone.
5. Storage: The transferred containers are arranged, piled up and stored in the container park.

5.2.2.2 The Loading Process

The process of loading is the opposite of the process of unloading.

1. Step of anchorage: Same procedure as in the case of unloading.
2. Step of storage: The RTG (Rubber Tyre Gantry) puts the containers on the tractors.
3. Step of transfer: The trucks transport the containers to the side of the container ship.

4. Handling step: The quay gantry raises the container to put it on board the ship.
5. Step of service: The operational service of the EPB escorts the ship to the roads as it leaves the Bejaia harbor.

5.2.2.3 The Delivery and Restitution Processes

1. Deliveries: The delivery concerns the full containers or discharged goods. The means used to perform this operation are: RTGs, trucks, stackers and forklifts if necessary.
2. Restitution of the containers: For the restitution of the containers, two zones are intended for the storage of the empty containers, one for the empty containers of 20 units and the other for the empty containers of 40 units.

5.3 Mathematical Models

We present in this section the mathematical models corresponding to each sub-system. We regard the containers as the customers.
We impose the following assumptions:
• Only one quay gantry is assigned to a given ship.
• The service duration of a truck is the sum of three variables: the duration of the trip from the quay to the full stock, the duration of the unloading of the truck by the RTG, and the duration of the return of the truck to the quay station.
• There is no difference between the containers; they are measured in ETU.
The model of the unloading process is represented in Fig. 5.2.a and the model of the delivery process is given in Fig. 5.2.b.
Remarks
• The arrival of a ship represents the arrival of a group of containers of random size.
• The quay is composed of two stations of size one (the queue size is limited to a random size in terms of containers).
• The treatment by the gantry is made container by container.

5.4 Calculation of the Forecasts

The evolution of the number of containers handled in ETU is presented in Fig. 5.3.a. It can be noted that in the year 2007 the BMT company treated 100000 ETU; its objective for the year 2008 was to treat 120000 ETU.
Fig. 5.2 (a): Diagram of the model of the unloading process. (b): Diagram of the model of the storage process.

In March 2008, a forecast calculation was carried out. The series considered is the number of containers treated (loaded/unloaded) in ETU. The data are collected monthly and cover the period from January 2006 to March 2008. The method used for the calculation of the forecasts is the exponential smoothing method (Blondel, 2002).
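The chapter names only "the exponential smoothing method"; as an illustration, the Python sketch below assumes double (Holt) exponential smoothing, whose level-plus-trend extrapolation matches the almost linear forecasts of Table 5.1. The smoothing constants are illustrative, not those used in the study.

def holt_forecast(series, alpha=0.3, beta=0.1, horizon=9):
    # Double (Holt) exponential smoothing: maintain a level and a trend,
    # then extrapolate 'horizon' periods ahead.
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + k * trend for k in range(1, horizon + 1)]

# the monthly ETU series from January 2006 to March 2008 would be passed in;
# the nine returned values then correspond to April-December 2008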

Fig. 5.3 (a): Evolution of the number of containers treated in ETU/year. (b): Original series and forecasts of the number of containers to be treated in ETU in the year 2008.

Fig. 5.3.b represents the original series of the number of containers in ETU, as well as the forecasts (from April to December 2008). It can thus be noted that the objective that the BMT company had fixed at the beginning of the year was likely to be achieved.

In the same manner, we carried out the same work for the year 2009. The objective of the BMT company corresponds to the treatment of 130254 ETU over the year. The calculated forecasts are presented in Table 5.1.

Table 5.1 Forecast of the year 2009 in ETU


Month Objective of BMT Forecast of the model
1 10600 10019
2 10730 10468
3 10440 10591
4 10014 10714
5 10900 10838
6 10750 10962
7 10900 11085
8 10650 11209
9 10900 11332
10 10570 11456
11 11650 11579
12 12150 11703
Total 130254 131956

5.5 Performance Evaluation of the BMT Terminal

First of all, we carry out a statistical analysis to identify the queueing network model which corresponds to our system.

5.5.1 Statistical Analysis and Identification of Models

The results of the preliminary statistical analysis (estimation and adjustment tests) on the data collected for the identification of the parameters of the processes are summarized in Table 5.2.
According to this preliminary analysis, one concludes that the performance evaluation of the terminal of Bejaia is really a complex problem. Indeed, the system is modeled by a network of general queues, since it consists of queues of type G/G/1, M[X]/G/1, queues with blocking, etc. Therefore, we cannot use analytical methods (as for Jackson or BCMP networks) to obtain the characteristics of the system.
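As an illustration of the kind of estimation and adjustment test behind Table 5.2, the following sketch (assuming SciPy; the names are illustrative) fits an exponential law to inter-arrival data and checks the adjustment with a Kolmogorov-Smirnov test; the other laws of Table 5.2 can be fitted analogously.

from scipy import stats

def fit_interarrivals(sample):
    # Fit an exponential law (rate = 1/scale) and test the adjustment
    loc, scale = stats.expon.fit(sample, floc=0)
    ks = stats.kstest(sample, "expon", args=(loc, scale))
    return 1.0 / scale, ks.pvalue   # estimated rate and KS p-value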
The models are:
1. Unloading process
5 Forecast of the Traffic and Performance Evaluation of the BMT Container Terminal 59

Table 5.2 Results of the statistical analysis on the collected data


Process Variable Law Parameters of the law
Inter-arrivals of the ships to be loaded Exponential λ = 0.000424 (per min)
Size of groups to be loaded Geometric p = 0.0046
Service duration of the gantries of quay Normal μ = 3.0044 and
Loading σ 2 = 1.5314062
Service duration of the trucks Normal μ = 6.5506 and
σ 2 = 6.10435884
Inter-arrivals of the ships to be unloaded Exponential λ = 0.0005389 (per min)
Size of groups to be unloaded Geometric p = 0.0047
Service duration of the gantries of quay Normal μ = 3.0044 and
Unloading σ 2 = 1.5314062
Service duration of the trucks Normal μ = 3.0044 and
σ 2 = 1.5314062
Size of groups of delivered containers/day Uniform Mean=121
Storage Size of groups of restored containers/day Uniform Mean=126

Fig. 5.4 (a): Modeling of the unloading process [network diagram: batch ship arrivals M[X]/·/· feed the quay-gantry queues ·/G/1, which feed the truck queue ·/G/m, before departure]

2. Loading process

Fig. 5.4 (b): Modeling of the loading process [network diagram: type-1 customer arrivals M/·/· and type-2 customer arrivals D[X]/·/· each feed a ·/G/1 queue; both merge into a batch-service stage ·/G[X]/m before departure]

3. Delivery process

Fig. 5.4 (c): Modeling of the delivery process [network diagram: arrivals to an M[X]/G[X]/1 queue followed by a ·/G[X]/1 queue, then departure]

4. Restitution process

Fig. 5.4 (d): Modeling of the restitution process [network diagram: type-1 customer arrivals M/·/· and type-2 customer arrivals D[X]/·/· merge into a ·/G[X]/1 queue before departure]

In the case of the loading and restitution models, the servers are in service only if there is at least one customer of type 1 in the first queue, which fixes the size of the group to be treated. Otherwise, they remain idle even if there are customers of type 2 in the second queue.

5.5.2 Analytical Results

The performance evaluation of systems aims to obtain the numerical values of some of their characteristics. These performances are calculated from the sample which enabled us to fit the arrival and service laws. The principal performance measures are summarized in Table 5.3.

Table 5.3 Performances of the processes


Process Performance characteristics Value
Mean number of ships to be loaded/day 0.6104633
Loading Inter-arrival mean (day) 1.6381
Mean size of groups to be loaded 214.5278
Mean number of ships to be unloaded/day 0.7761129
Unloading Inter-arrival mean (day) 1.2884722
Mean size of groups to unload 218.2174
Delivery Mean number of delivered containers/day 120.9000
Restitution Mean number of restored containers/day 125.8974
Gantry Mean duration of service (minutes) 3.0044
Truck Mean duration of service (minutes) 6.5506

Interpretation: According to the obtained results (Table 5.3):
• A mean of 0.6104633 ships per day accost at the Bejaia harbor in order to be loaded with containers by the BMT company, and 0.7761129 ships per day for the unloading.
• The ships to be loaded request 214 ETU on average, and BMT unloads on average 218 ETU from each ship.
• The mean number of delivered containers each day is n3 = 120.9000 ETU.
• The mean number of restored containers each day is n4 = 125.8974 ETU.
Because of the complexity of the global model, it is not possible to calculate some essential characteristics analytically. This is why we call upon the simulation approach.

5.5.3 Simulation

We designed a simulator for each model under the Matlab environment. After the validation tests of each simulator, their executions provided the results summarized in Table 5.4.

Table 5.4 Performances of the processes obtained by simulation


Processes Performance characteristics Value
Mean number of loaded containers/month 4299.85
Mean number of loaded ships/month 20.0433
Loading Mean number of ships in roads 0.0742
Mean number of ships in the quay 1.3925
Mean number of unloaded containers/month 5385.71
Mean number of unloaded ships/month 24.6808
Unloading Mean number of ships in roads 0.0533
Mean number of ships in the quay 1.9308
Storage Mean number of full containers in the park 3372.9
Mean number of empty containers in the park 211.1208

Interpretation: The results of the simulation show that the total number of containers loaded during one year will be 51598.20 ETU, that the mean numbers of ships in the roads and at the quay are respectively 0.0742 and 1.39 ships, and that the total number of loaded ships during one year will be 240.52 ships.
Concerning the unloading process, the total number of containers unloaded during one year will be 64628.52 ETU, the mean numbers of ships in the roads and at the quay are respectively 0.0533 and 1.9308, the total number of ships unloaded during one year will be 296.17, and the total number of containers handled in the year 2008 will be 116226.72 ETU.
Concerning the storage parks, the mean number of containers in the full park will be 3372.9 ETU and the mean number of containers in the empty park will be 211.1208 ETU.
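The simulators themselves were built under Matlab and cover all four sub-systems; as a hedged illustration, the Python sketch below reproduces only the ship-level unloading queue under simplifying assumptions (a single gantry, no truck or storage stage), using the laws fitted in Table 5.2 and the Lindley recursion for the waiting time in the roads. The names and the truncation of the normal law at zero are choices of this sketch.

import math
import random

MIN_PER_DAY = 24 * 60.0

def simulate_unloading(n_ships=20000, lam=0.7761, p=0.0047,
                       mu=3.0044, sd=1.2375, seed=1):
    # Poisson ship arrivals (rate per day), geometric batch sizes
    # (containers per ship, mean 1/p), one gantry whose per-container
    # time is normal (minutes, truncated at zero). Waiting in the roads
    # follows the Lindley recursion w' = max(0, w + s - a).
    rng = random.Random(seed)
    w = total_wait = total_batch = 0.0
    for _ in range(n_ships):
        batch = 1 + int(math.log(rng.random()) / math.log(1.0 - p))
        s = sum(max(0.0, rng.gauss(mu, sd)) for _ in range(batch)) / MIN_PER_DAY
        a = rng.expovariate(lam)       # inter-arrival to the next ship (days)
        total_wait += w
        total_batch += batch
        w = max(0.0, w + s - a)
    return total_wait / n_ships, total_batch / n_ships

print(simulate_unloading())  # (mean wait in roads in days, mean batch in ETU)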

5.6 Variation of the Arrival Rate

In order to study the behavior of the system under a variation of the arrival rate of the ships to be loaded and unloaded, further runs have been carried out. We increased the number of ships at loading and at unloading by 30%: the number of ships passes from 0.6104633 to 0.7936 per day for the loading and from 0.7761129 to 1.0089 per day for the unloading. The obtained results are summarized in Table 5.5.

Table 5.5 Performances of the processes in the case of increase of 30% of the number of ships
arriving at the Bejaia harbor’s obtained by simulation
Process Performance characteristics Value
Mean number of loaded containers/month 5458.30
Mean number of loaded ships/month 25.4433
Loading Mean number of ships in roads 0.09230
Mean number of ships in the quay 1.40000
Mean number of containers unloaded /month 6958.04
Mean number of ships unloaded/month 31.8858
Unloading Mean number of ships in roads 0.06690
Mean number of ships in the quay 1.90000
Mean number of full containers in the park 4874.20
Storage Mean number of empty containers in the park 154.9814

Interpretation: With an increase of 30% in the rate of ships arriving at the Bejaia harbor, we note that the mean numbers of ships in the roads and at the quay increase only a little. This means that the equipment available within the BMT company is sufficient to face this situation; in other words, an increase of 30% does not generate a congestion of ships in the roads or at the quay. On the other hand, the mean number of handled containers will undergo a remarkable increase, equivalent to about 30000 ETU. This increase does not saturate the full stock or the empty stock: they pass respectively from 3372.9 ETU to 4874.2 ETU and from 211.1208 to 154.9814 ETU, i.e., from 41% to 59% occupancy for the full park and from 24% to 18% for the empty park.

5.7 Conclusion

The objective of this work was to analyze the functioning of the container park of the BMT company in order to evaluate its performance, and then to foresee the behavior of the system in the case of an increase in the arrival flow of container ships. For this, we divided the system into four independent sub-systems: the "loading", the "unloading", the "full stock" and the "empty stock" processes. Each sub-system is modeled by an open network of queues, and a simulation model of the functioning of each sub-system could be established. The goal of each simulator is to reproduce the functioning of the container park. The study shows that the container park will be able to handle 116226.72 ETU, i.e., 51598.20 ETU at loading and 64628.52 ETU at unloading, with a mean number of 3372.9 ETU in the park, for entry rates of 0.6104 ships per day for the loading process and 0.7761 ships per day for the unloading process. After that, a variation of the arrival rate of the ships was proposed with the aim of estimating its influence on the performance of the system.
With an increase of 30% in the number of ships arriving at the Bejaia harbor, we note a small increase in the mean numbers of ships in the roads and at the quay. On the other hand, there is a clear increase in the total number of treated containers, which passes from 116226.72 ETU to 148996.08 ETU, including 65499.6 ETU at loading and 83496.48 ETU at unloading. We also note an increase in the mean number of containers in the full park, which passes from 3372.9 to 4874.2 ETU. Regarding the number of ships, it passes from 240.52 to 305.3 ships at loading and from 296.17 to 382.63 ships at unloading.
It would be interesting to complete this work by addressing the following items:
• An analytical resolution of the problem.
• The determination of an optimal management of the machines of the BMT company.
• The variation of other parameters.

References

Ayache N, Hidja R, Aïssani D, Adjabi S (2007) Evaluation des performances du parc à conteneurs de l'entreprise BMT. Rapport de recherche No. 3/2007, Département Recherche Opérationnelle, Université de Béjaia
Blondel F (2002) Gestion de la production. Dunod Edition, Paris
David M, Michaud JC (1983) La prévision: approche empirique d'une méthode statistique. Masson Edition, Paris
De Werra D, Liebling TM, Heche JF (2003) Recherche Opérationnelle pour ingénieurs, Tome 2. Presses Polytechniques et Universitaires Romandes
Gross D, Harris CM (1998) Fundamentals of Queuing Theory. Wiley Series in Prob-
ability and Statistics
Pujolle G, Fdida S (1989) Modèles de Systèmes et de Réseaux, Tome 2. Editions Eyrolles
Ruegg A (1989) Processus Stochastiques. Presses Polytechniques et Universitaires
Romandes
Sait R, Zerrougui N, Adjabi S, Aïssani D (2007) Evaluation des performances du parc à conteneurs de l'entreprise portuaire de Béjaia. In: Proceedings of the International Conference SADA'07 (Applied Statistics for Development in Africa), Cotonou (Benin)
Chapter 6
A Dynamic Forecasting and Inventory
Management Evaluation Approach

Johannes Fichtinger, Yvan Nieto and Gerald Reiner

Abstract A common strategy for companies to hedge against unpredictable demand and supply variability is to constitute safety stocks as well as safety capacity. However, the classical safety stock calculations often used in practice assume demand and lead time each to be identically and independently distributed, which is generally not true for empirical data. One cause of this problem can be the misspecification of the demand forecasting model, e.g. if a standard, additive linear regression model is used to describe heteroscedastic demand. While for a stationary demand process the amount of historical data, i.e. the number of periods used for the estimation of the process variability, does not affect the computation, this no longer holds when using empirical data. In this study, we use a two-stage supply chain model to show that in a non-stationary setting the number of observation periods strongly influences the supply chain performance in terms of on-hand inventory, fillrate and bullwhip effect. Also, we use the efficiency frontier approach to provide a single performance measure and to further analyse our results.

6.1 Introduction

Increasing competition leads companies in many industries to pay more attention to customer satisfaction. Being able to fulfill customer orders with the "right"

Johannes Fichtinger
Institute for Production Management, WU Vienna – Nordbergstraße 15, A-1090 Wien
e-mail: johannes.fichtinger@wu-wien.ac.at
Yvan Nieto
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: yvan.nieto@unine.ch
Gerald Reiner
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: gerald.reiner@unine.ch


service level is crucial for customer satisfaction. Companies have to carefully adapt their delivery times to customer requirements and be prepared to cope with unplanned variation in demand as well as in supply to prevent stockouts. In the context of a make-to-stock manufacturing strategy, a common solution to hedge against unpredictable demand and supply variability is to constitute safety stocks. This approach is widely used in practice and often relies on a classical calculation that integrates both demand and supply lead time means and standard deviations.
A critical point to mention here is that the safety stock calculation assumes a stationary demand process, such that the two random variables, demand and lead time, are each assumed to be identically and independently distributed. Unfortunately, for empirical data the demand process decomposition does not necessarily show these properties and, as a consequence, this calculation leads to volatile results. While for a stationary demand process the amount of historical data, i.e. the number of periods used for the estimation of the process variability, does not affect the computation, this no longer holds when using empirical data. Often ignored, these points may prove to be critical, as they may impact the supply chain dynamics and lead to inappropriate inventory levels as well as service levels.
The aim of this work is to present a dynamic two-stage supply chain model of
a supplier and a retailer with focus on the retailer. In particular, for the retailer,
we consider a periodic review inventory replenishment model, where the demand
distribution is not known. Hence, the retailer uses demand forecasting techniques
to estimate the demand distribution. For the supplier’s manufacturing process we
assume a pure make-to-order production strategy subject to limited capacity, where
orders are processed based on a strict first-in, first-out priority rule. Considering
that the supply chain evaluation has to be product- and customer-specific we use
an empirical reference dataset of a retail chain company to discuss our research
question. We show how unstable forecast errors impact supply chain performance
through its implication on order-up-to level calculation.
Specifically, we build a process simulation model and measure the effect of the
number of periods used in demand estimation on the performance of the supply
chain. Hence, the independent variable is the number of past periods the retailer
considers for calculating the mean and variance of demand. The performance mea-
sures, the dependent variables, are average on-hand inventory, the bullwhip effect as
the amplification between demand variance and order variance and the fillrate as a
service level criterion. Moreover, we consider the effect of manufacturing capacity
(upper limit of the throughput rate) on these measures. To reduce the multi-criteria
based performance measurement, we use the efficiency frontier approach to provide
a single performance measure.
Since our aim is to consider many aspects of a supply chain, the relevant literature is vast. Even though we use a simple inventory policy, we refer the interested reader to Silver and Peterson (1985), Zipkin (2000) and Porteus (2002) for comprehensive reviews of inventory models, and especially to Axsäter (2006) for multi-echelon models. The classical optimization approaches in inventory management focus on the minimization of the total inventory system cost (Liu and Esogbue, 1999). A fundamental problem in this context is the "right" estimation of costs. This problem is also mentioned by Metters and Vargas (1999): classically, different performance measures are converted into one monetary performance measure. Therefore, these authors suggested applying data envelopment analysis to be able to take different performance measures into consideration. In general it has to be mentioned that multi-criteria optimization as well as multi-objective decision making problems have been solved in many areas. Surprisingly, until now only a couple of papers have been published in the field of inventory management (see also Maity and Maiti, 2005).
One of the performance measures that we consider, the bullwhip effect (Lee et al, 1997a,b; Sterman, 1989), has gained significant interest among researchers. A pointed definition of the bullwhip effect is provided by de Kok et al (2005): "The bullwhip is the metaphor for the phenomenon that variability increases as one moves up a supply chain". Different approaches to identify the causes of the bullwhip effect have been made so far. Lee et al (1997b, 2004) describe four fundamental causes: demand signal processing, price variations, rationing games and order batching. While the latter three are not considered in this work, the demand amplification due to the combined effects of demand signal processing and non-zero lead times is a main focus of this work.
In a work on the interface of the forecasting and replenishment system with focus
on the bullwhip effect, Chen et al (2000b) use a two stage supply chain model and
consider the dependencies between forecasting, lead times and information in the
supply chain. In their model, the retailer does not know the distribution of demand
and uses a simple moving average estimator for mean and variance of demand.
Similar two-stage supply chain models have also been used, e.g. by Boute et al (2007), to successfully study the dynamic impact of inventory policies.
The literature on the efficiency frontier approach for performance/efficiency measurement is vast after the seminal work of Charnes et al (1978). An excellent recent review can be found in Cook and Seiford (2009). Dyson et al (2001) discuss the problems of factor measurement related to percentage values, such as the fillrate in our approach.
The remainder of this paper is organized as follows. Section 6.2 introduces the basic supply chain model for a single supplier and a single retailer using demand forecasting. In Section 6.3 we present simulation results based on numerical data and empirical examples. Section 6.5 contains further extensions to the current model and concluding remarks.

6.2 A Simple Supply Chain Model with Demand Forecasting

Consider a simple supply chain consisting of a single retailer and a single manu-
facturer. The retailer does not know the true distribution of customer demand, so
he uses a demand forecasting model to estimate mean and variance of demand. In
each period, t, the retailer checks his inventory position and accordingly places an
order, qt, with the supplier. After the order is placed, the retailer faces random customer demand, Dt, where any unfulfilled demand is lost. There is a random lead time, L,
such that an order placed at the beginning of period t arrives at the beginning of
period t + lt , where lt denotes the random realization of the order lead time placed
in t. We assume that the retailer uses a simple order-up-to policy based on demand
forecasting methods using regression analysis.
We use aggregated weekly empirical sales data of about 220 periods (approx. 4 years, from 01/2001 to 04/2005) to estimate demand Dt for specific products. The data contain not only sales information (units sold) but also the gross price pt, the stock available, the number of market outlets Ot (which needs to be considered in an expanding company) and a features indicator Ft (binary information to account for the effect of advertisement, e.g. by means of newspaper supplements such as flyers and leaflets). To clean the data from these effects, and additionally from trend and seasonality, we use a least squares regression model as proposed by Natter et al (2007):

   
Dt = β0 + β1 pt + β2 t + β3 sin(2tπ/52) + β4 cos(2tπ/52) + β5 Ot + β6 Ft + et    (6.1)
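A minimal sketch of this cleaning regression, assuming NumPy; the function name and the argument layout are illustrative.

import numpy as np

def clean_demand(sales, price, outlets, feature):
    # Least-squares fit of the cleaning model (6.1); returns the estimated
    # coefficients and the residual (cleaned) series e_t.
    t = np.arange(1, len(sales) + 1, dtype=float)
    X = np.column_stack([np.ones_like(t), price, t,
                         np.sin(2 * t * np.pi / 52), np.cos(2 * t * np.pi / 52),
                         outlets, feature])
    beta, *_ = np.linalg.lstsq(X, np.asarray(sales, dtype=float), rcond=None)
    return beta, sales - X @ beta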

Note that the sales data do not necessarily correspond to the underlying real demand process, since demand during stockouts is not recorded. However, an analysis of stockout situations on the real data shows that they occur in less than 2% of the selling periods. Therefore, we take the existing sales data as censored information for demand.
We tested the assumptions related to classical linear regression models for the cleaning model (see e.g. Greene, 2008, for a comprehensive discussion). There is no exact linear relationship among the independent variables in the regression (full rank assumption), and the independent variables are found to be exogenous. On the contrary, the assumption of homoscedasticity and non-autocorrelation was not fulfilled for many products. An earlier study on the same data by Arikan et al (2007) shows that for many products a nonlinear relationship between price and demand such as

D_t = a · p_t^(−b) · e_t   for a > 0, b > 1    (6.2)

better explains the pricing effect. As a consequence, for such products, estimating an additive demand model such as (6.1) leads to a variance of the error term, ε, that decreases in price. Hence, Var(ε|p_t) is no longer independent of price. These effects, which inevitably occur in practical demand forecasting and replenishment problems, destroy the common stationarity assumption on the demand error term and, hence, are the focus of the subsequent analysis.

[Figure: two-stage supply chain with the supplier (production capacity), the retailer (base stock policy, target fillrate, forecast accuracy, observation periods) and the customer (demand characteristics); orders flow upstream, deliveries and sales flow downstream.]

Fig. 6.1 Two stage supply chain model



As shown in Fig. 6.1, similar to the model of Chen et al (2000a), the retailer follows a classical infinite horizon base stock policy with weekly replenishments, where the order-up-to point S_t is estimated based on the expected demand for the actual period, μ_t, and an estimate of the standard deviation of the (1 + L) periods demand forecast error, σ̂_t^{1+L}, as

S_t = (1 + λ_t) μ_t + z_t σ̂_t^{1+L} ,    (6.3)

where the safety factor, z_t, is chosen to meet a certain target fillrate (FR) service measure. In particular, since any unsatisfied customer demand is lost, z_t is found such that it satisfies
G(z_t) = (R μ_t / σ_t) · (1 − FR) / FR ,    (6.4)
where G(·) denotes the standard normal loss function.
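For illustration, z_t can be obtained by numerically inverting the standard normal loss function; this is a sketch under our own naming, with the review period R defaulting to one period:

import numpy as np
from scipy import stats, optimize

def normal_loss(z):
    """Standard normal loss function G(z) = phi(z) - z*(1 - Phi(z))."""
    return stats.norm.pdf(z) - z * (1.0 - stats.norm.cdf(z))

def safety_factor(mu, sigma, fillrate, R=1.0):
    """Solve G(z) = R*mu/sigma * (1 - FR)/FR for z, cf. Eq. (6.4)."""
    target = R * mu / sigma * (1.0 - fillrate) / fillrate
    # G is strictly decreasing, so a bracketed root search is sufficient;
    # the bracket covers all practically relevant target fillrates.
    return optimize.brentq(lambda z: normal_loss(z) - target, -4.0, 8.0)

# e.g. safety_factor(mu=250.0, sigma=50.0, fillrate=0.98)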
The supply lead time the retailer faces is stochastic; the corresponding random variable, L, has mean λ_t and standard deviation υ_t. It is well known that in the case of fixed order costs an (s, S) policy is optimal; however, we do not consider fixed order costs, as we are interested in the effect of forecasting and the order-up-to level on the performance measures.
Note that the order-up-to point in (6.3) is calculated based on the standard deviation of the (1 + λ_t) period forecast error, σ_t^{1+L}, and its estimator σ̂_t^{1+L}, rather than the standard deviation of the demand over (1 + λ_t) periods. As Chen et al (2000a) point out very clearly, using σ̂_t^{1+L} captures the demand error uncertainty plus the uncertainty due to the fact that d_{t+1} must be estimated by μ_{t+1}. Finally, defining an integer n_t = max{n : n ≤ λ_t, n ∈ Z} helps to express the actual demand error observation, e_t^{1+L}, as

e_t^{1+L} = d_t − μ_t + Σ_{i=1}^{n_t} (d_{t+i} − μ_{t+i}) + (λ_t − n_t)(d_{t+n_t+1} − μ_{t+n_t+1}).    (6.5)

Based on the random variable of the demand error, ε^{1+L}, over (1 + L) periods, the estimator in period t of the standard deviation of the past demand errors can be calculated as

σ̂_t^{1+L} = √( Var(ε^{1+L}) + υ_t² (μ_t λ_t)² ) .    (6.6)
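As a direct transcription of (6.6) (a sketch; past_errors is assumed to hold the e_t^{1+L} observations from the chosen number of observation periods, the lever studied below):

import numpy as np

def sigma_hat(past_errors, mu_t, lam_t, upsilon_t):
    """Estimator (6.6): sample variance of the observed (1+L)-period demand
    errors plus the lead-time-variability term."""
    return np.sqrt(np.var(past_errors, ddof=1)
                   + upsilon_t**2 * (mu_t * lam_t)**2)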
For the supplier’s manufacturing process we assume a pure make-to-order production strategy, where orders are processed on a strict first-in, first-out basis. While the period length is one week for the retailer, the supplier is assumed to deliver at the end of the day on which the order is completely produced. We consider production at the supplier to take place on at most five days a week; hence, the supply lead time, L, can take values, l, such that 5l ∈ Z, i.e. l ∈ {0.2, 0.4, 0.6, . . .}. The supplier has a fixed capacity C available solely for the retailer under consideration. For this reason the retailer faces lead time variation, but due to the missing information sharing with the supplier, the retailer does not consider the supply as capacitated and uses uncapacitated stochastic lead time models for replenishment.
The lead time observation l_i for an order placed in period i can be defined as

l_i = p_i + w_i ,    (6.7)

where p_i = min{p : p C ≥ q_i, 5p ∈ Z} denotes the time necessary to complete the order q_i and w_i is the time during which the order was backlogged.
Observe that the presented replenishment models assume the supply lead times, L, to be independent and identically distributed. However, due to the capacitated supplier, the lead times are neither identically distributed, since higher order quantities lead to stochastically longer lead times, nor independent, due to the strict first-in, first-out scheduling rule used by the supplier.
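The capacitated first-in, first-out supplier can be sketched as follows (our illustration; time is measured in weeks, one working day equals 0.2 weeks, and orders is the retailer's weekly order series):

import math

def supplier_lead_times(orders, capacity):
    """Lead time per Eq. (6.7): l_i = p_i + w_i, with p_i the smallest
    multiple of 0.2 weeks such that p_i * C >= q_i, and w_i the FIFO
    backlog wait while earlier orders finish."""
    free_at = 0.0                  # time at which the production line is free
    lead_times = []
    for t, q in enumerate(orders):                # order i is placed at week t
        p = math.ceil(5.0 * q / capacity) / 5.0   # production time p_i
        w = max(free_at - t, 0.0)                 # waiting time w_i
        free_at = t + w + p
        lead_times.append(p + w)
    return lead_times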
The performance of the supply chain is evaluated against different measures: the average on-hand inventory at the retailer, Ī, the bullwhip effect as the amplification of the customer demand variance to the order variance, BW, and the average fillrate observed at the retailer, FR. All of these measures need to consider the available capacity at the supplier, C. While the on-hand inventory can be seen as a proxy for inventory related costs, such as e.g. holding costs, the service level refers to the retailer’s “quality” of customer service, and the available capacity indicates the flexibility of the supplier within the supply chain. It is well known from inventory theory that tradeoffs between these measures exist, e.g. the higher the average on-hand inventory, the (potentially) higher the service level and vice versa.
Let x_t be the quantity of goods received from the supplier at time t; then the retailer’s on-hand inventory, I_t, is

I_t = max(I_{t−1} + x_t − d_t, 0) for t ≥ 0 ,    (6.8)

and the corresponding lost sales v_t are

v_t = − min(I_{t−1} + x_t − d_t, 0) for t ≥ 0 .    (6.9)

The observed average fillrate at the retailer can be calculated as


FR = 1 − ( Σ_{t=1}^{T} v_t ) / ( Σ_{t=1}^{T} D_t ) ,    (6.10)

and, finally, the bullwhip effect is measured as


BW = c_out / c_in ,    (6.11)
where c_in and c_out denote the coefficients of variation of the demand and of the retailer’s order quantity, respectively (Fransoo and Wouters, 2000). Both coefficients can easily be calculated as

c_in = SD(D) / D̄  and  c_out = SD(q) / q̄ .    (6.12)
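Putting (6.8) to (6.12) together, the performance measures of one simulation run can be replayed from the simulated series; a minimal sketch with illustrative names:

import numpy as np

def evaluate_run(receipts, demand, orders, init_inventory=0.0):
    """Compute average on-hand inventory, fillrate (6.10) and bullwhip
    ratio (6.11) from one run's receipt, demand and order series."""
    inv, lost, on_hand = init_inventory, 0.0, []
    for x_t, d_t in zip(receipts, demand):
        balance = inv + x_t - d_t
        inv = max(balance, 0.0)            # on-hand inventory, Eq. (6.8)
        lost += max(-balance, 0.0)         # lost sales, Eq. (6.9)
        on_hand.append(inv)
    fillrate = 1.0 - lost / np.sum(demand)            # Eq. (6.10)
    c_in = np.std(demand) / np.mean(demand)           # Eq. (6.12)
    c_out = np.std(orders) / np.mean(orders)
    return np.mean(on_hand), fillrate, c_out / c_in   # BW, Eq. (6.11)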
Efficiency measurement in organizations has been the focus of numerous re-
search activities, especially after the appearance of the seminal work of Charnes
et al (1978) on the efficiency of decision making units.

To be able to measure the efficiency of the forecasting system and the inventory policy, we consider a set of n simulation runs, j = 1, . . . , n. The performance of each run is calculated as the efficiency of the supply chain by the ratio of the weighted fillrate in per cent, FR_j, to the average on-hand inventory, Ī_j,

u FR_j / (v Ī_j) .    (6.13)

This performance/input ratio is the basis for calculating the standard engineering ratio of productivity and, hence, the basis of the subsequent analysis. For our analysis, we use the extension of Banker et al (1984), i.e. a variable returns to scale (VRS) model, such that the efficiency is defined as

(u FR_j − u_0) / (v Ī_j) .    (6.14)

Since the multipliers u and v are unknown, Charnes et al (1978) and, in their extension, Banker et al (1984) proposed to solve the following linear programming problem. In particular, for the observation j = 0 under consideration, solve

e*_0 = max  u FR_0 − u_0
s.t.   v Ī_0 = 1
       u FR_j − u_0 − v Ī_j ≤ 0,   j = 1, . . . , n    (6.15)
       u ≥ δ, v ≥ δ, u_0 unrestricted.

Fig. 6.2 shows a geometrical representation of the efficiency frontier problem. Solving this input-oriented model (6.15) for each observation corresponds to projecting every observation to the left onto the frontier. For example, in the case of observation Obs 1, the projection is represented by point B. The efficiency of Obs 1 is calculated as the ratio of the distances B0/A0.
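The LP (6.15) is a standard BCC multiplier model and can be solved with any linear programming routine. A sketch using scipy, with the objective written as u FR_0 − u_0 to match the efficiency definition (6.14); the function name and the small positive bound δ are ours:

import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(FR, OHI, j0, delta=1e-6):
    """Input-oriented VRS efficiency of run j0, cf. (6.15).
    Decision variables are [u, v, u0]; u0 is unrestricted in sign."""
    n = len(FR)
    c = np.array([-FR[j0], 0.0, 1.0])          # maximise u*FR_0 - u0
    A_eq = np.array([[0.0, OHI[j0], 0.0]])     # v * I_0 = 1
    A_ub = np.column_stack([FR, -np.asarray(OHI), -np.ones(n)])  # u*FR_j - v*I_j - u0 <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(delta, None), (delta, None), (None, None)])
    return -res.fun                            # efficiency score e*_0

# scores = [bcc_efficiency(FR, OHI, j) for j in range(len(FR))]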

6.3 Model Verification and Validation

Based on the approach presented above, we provide examples and validation for the analysis. First, we consider artificially generated datasets in order to
validate the model against theory. Artificially generated samples were based on a
normal distribution of the form N (250, 50) and used to evaluate the performance of
the supply chain under 120 scenarios. Each scenario is related to a specific capacity
at the supplier as well as a specific number of observation periods. Specifically, we
use 5 distinct capacities, i.e. 1.1, 1.25, 1.5, 1.8 and 2.5 times the average demand
of the dataset, and 24 different numbers of observation periods, i.e. every second value from 4 to 50.

[Figure: fillrate (service level) plotted against average on-hand inventory (OHI), showing the efficiency frontier, observation Obs 1 and its leftward projection onto the frontier at point B, with A used to form the efficiency ratio B0/A0.]

Fig. 6.2 Calculation of the efficiency of a single observation based on the efficiency frontier

Figure 6.3 presents the results obtained for the three performance measures of interest, i.e. the average on-hand inventory at the retailer, the bullwhip effect and the fillrate. It can be noticed that the results are generally in accordance with theory. Specifically, higher capacity leads to higher performance in terms of inventory and fillrate, benefiting from shorter lead times. The bullwhip effect is negligible here; moreover, as expected from theory for this stationary setting, no noticeable impact of the number of observation periods on the performance measures is to be reported. Nevertheless, it appears that a minimum of around 18 observation periods is necessary to permit stable estimations. A last comment concerns the discrepancy between the observed fillrate and the target of 98%, which we assume to be related to the non-normality of the lead time. This explanation is convincing, knowing that tests involving normally distributed lead times lead to fillrates in the range of 97.2% to 98.3% (for all capacities and with a sufficient number of observation periods).
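The experimental design of this validation can be reproduced schematically as follows (a sketch; the simulation itself, abbreviated here to a placeholder comment, is the model of section 6.2):

import itertools
import numpy as np

rng = np.random.default_rng(1)

mean_demand = 250.0
demand = rng.normal(loc=mean_demand, scale=50.0, size=220)  # N(250, 50) test data

capacities = [f * mean_demand for f in (1.1, 1.25, 1.5, 1.8, 2.5)]
obs_periods = range(4, 51, 2)          # every second value from 4 to 50

for C, n_obs in itertools.product(capacities, obs_periods):  # 5 x 24 = 120 scenarios
    pass  # run the section 6.2 model for (C, n_obs); record OHI, bullwhip, fillrate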

[Figure: OHI, Bullwhip and Fillrate plotted against the number of observation periods, one curve per capacity level C = 1.1, 1.25, 1.5, 1.8, 2.5.]

Fig. 6.3 OHI, Bullwhip and Fillrate of a stationary demand product, plotted as a function of the
number of periods used for estimating the standard deviation of the demand error term. When using
around 15 periods or more, the effect of the number of periods vanishes

Next, we used the performance results as input for the efficiency analysis; the results are presented in Fig. 6.4. It can be observed that, if any, only marginal improvements are possible by increasing the number of observation periods. Nevertheless, it is worth mentioning that the optimum is specific to each capacity setting and that, for now, no comparison is possible between the different efficiency analyses. Based on the latter, we consider that our model matches theoretical expectations and is therefore verified and validated for further analysis.

[Figure: fillrate efficiency plotted against the number of observation periods, one panel per capacity level C = 1.1, 1.25, 1.5, 1.8, 2.5.]

Fig. 6.4 Fillrate efficiency for stationary demand products. Using 20 periods or more leads to a negligible efficiency gap of less than 1%

6.4 Empirical Data Analysis

In particular, we will use empirical data presenting non-stationarities to illustrate our position. We consider two datasets, one presenting a seasonal pattern (Fig. 6.5, Product A) and a second including a single strong peak in demand (Fig. 6.5, Product B). Both datasets therefore present non-stationarities, which could have a different impact on the performance of the supply chain depending on the number of observation periods considered.

The simulation was performed using the same scenario structure presented earlier (see section 6.3) and the results are presented in Fig. 6.6. It can be observed that for the seasonal data, results tend to reach stability once a sufficient number of periods is available (Fig. 6.6, top). However, it is worth mentioning, first, that the number of observation periods required is higher in this setting, which can be argued in the sense that, the data being more structured and more volatile, valid estimations require more information, i.e. more observations. Second, the fillrate remains more volatile, even for high numbers of observation periods, and the reason for this is assumed to be linked to the intrinsic dynamics of the model: order sizes are more variable, which can lead to a stronger bias in the lead time distribution and impact the fillrate. In the case of the second dataset, interesting results in terms of performance can be observed (Fig. 6.6, bottom). In this case, the hierarchy related to capacity is much less clear with respect to the fillrate. The number of observation periods strongly impacts the service performance and makes an optimal setting difficult to identify.

[Figure: weekly demand over roughly 220 weeks for Product A (left) and Product B (right).]

Fig. 6.5 Demand plots

[Figure: OHI, Bullwhip and Fillrate plotted against the number of observation periods, one curve per capacity level C = 1.1, 1.25, 1.5, 1.8, 2.5, for Product A (top row) and Product B (bottom row).]

Fig. 6.6 OHI, Bullwhip and Fillrate of non-stationary demand products A (top) and B (bottom)

The results from the efficiency analysis presented in Fig. 6.7 confirm the previous observations, i.e. the impact of the number of observation periods is limited in the case of the seasonal dataset (Fig. 6.7, top). However, the dataset including a strong peak in demand leads to erratic results with respect to the number of observation periods. In this last case, the highest number of periods no longer leads to the optimum, and the choice of the observation range can have a strong impact on performance, independently of the selected strategy.

[Figure: fillrate efficiency plotted against the number of observation periods, one panel per capacity level C = 1.1, 1.25, 1.5, 1.8, 2.5, for Product A (top row) and Product B (bottom row).]

Fig. 6.7 Fillrate efficiency for non-stationary demand products A (top) and B (bottom)

6.5 Conclusion

We considered a dynamic two-stage supply chain model with a focus on the retailer to identify the possible impact of the number of observation periods used to calculate the order-up-to level, using an efficiency frontier approach. Based on this, we showed
for the stationary demand case that as long as the number of periods is sufficiently
large (here around 18 periods), it has no noticeable effect on the performance of the
supply chain. However, considering non-stationary demand caused e. g. by a mis-
specification of the price dependency of demand in the demand forecasting model,
the number of observation periods can lead to divergent results and considerably
affect efficiency.
Based on our results, we demonstrate that the impact of non-stationarities when using classical safety stock calculations is highly influenced by the number of observation periods considered. In addition, as it is not possible to know ex ante which number of periods is optimal (nor do we know whether such an evaluation is possible), the danger of non-stationarity should be highlighted.
Here we discuss a few potential extensions to the current model. First, as mentioned in section 6.2, the retailer has no information about the cause of the lead time variability because of the lack of information sharing in the supply chain. However, the benefits of information sharing for both supply chain partners are well known, as described in Chen (2003). In particular, in the current model the retailer does not know about and, hence, does not consider the production capacity at the supplier. On the contrary, sharing the available capacity between the supplier and the retailer allows the retailer to apply replenishment models integrating capacitated suppliers. We refer the interested reader to Federgruen and Zipkin (1986a,b), who showed that under certain reasonable assumptions it is optimal to follow a base-stock policy when possible and to order the available capacity when the prescribed quantity would exceed it. By turning the problem into a cost minimization problem, cost improvements due to explicit consideration of the capacitated supplier can be evaluated. Hence, the value of information sharing in this framework becomes measurable.
A second extension is to consider the supplier’s capacity as stochastic, as in Ciarallo et al (1994), who show that in their model an order-up-to policy is still optimal under uncertain capacity. With stochastic supplier capacity, a supplier with equal mean but increasing variance in the lead time is considered less reliable. By comparing the performance measures at the retailer, the value of a more reliable supplier can be made explicit.
Thirdly, to fully support decision making, the efficiency frontier approach could be extended. Assuming that extra capacity has a cost, flexibility at the supplier should be integrated into the analysis to contribute to the evaluation. Also, each input could be linked to limits and weights related to the specifics of the supply chain under study in order to provide a more realistic evaluation.
Finally, consideration of supply chain contracts (see e.g. Cachon, 2003, for a comprehensive discussion) might help gain additional insights into the value of information sharing, the value of capacity and reliability of the supplier, and the impact of the forecasting and replenishment model used by the retailer.

References

Arikan E, Fichtinger J, Jammernegg W (2007) Evaluation and extension of single period combined inventory and pricing models. In: Proceedings of the 14th International Annual EurOMA Conference, Ankara, Turkey
Axsäter S (2006) Inventory Control. Springer Verlag
Banker RD, Charnes A, Cooper WW (1984) Some models for estimating techni-
cal and scale inefficiencies in data envelopment analysis. Management Science
30(9):1078 – 1092
Boute RN, Disney SM, Lambrecht MR, Van Houdt B (2007) An integrated produc-
tion and inventory model to dampen upstream demand variability in the supply
chain. European Journal of Operational Research 178(1):121–142
Cachon GP (2003) Supply chain coordination with contracts. In: de Kok A, Graves
S (eds) Supply Chain Management: Design, Coordination and Operation, Hand-
books in Operations Research and Management Science, vol 11, pp 227–339
Charnes A, Cooper W, Rhodes E (1978) Measuring the efficiency of decision mak-
ing units. European Journal of Operational Research 2(6):429–444
Chen F (2003) Information sharing and supply chain coordination. In: de Kok T,
Graves S (eds) Supply Chain Management: Design, Coordination, and Operation,
Handbooks in Operations Research and Management Science, vol 11, Elsevier
Chen F, Ryan J, Simchi-Levi D (2000a) The impact of exponential smoothing fore-
casts on the bullwhip effect. Naval Research Logistics 47(4):271–286
Chen YF, Drezner Z, Ryan JK, Simchi-Levi D (2000b) Quantifying the bullwhip
effect in a simple supply chain: The impact of forecasting, lead times, and infor-
mation. Management Science 46(3):436–443
Ciarallo F, Akella R, Morton T (1994) A periodic review, production planning model
with uncertain capacity and uncertain demand-optimality of extended myopic
policies. Management Science 40(3):320–332
Cook WD, Seiford LM (2009) Data envelopment analysis (DEA) - thirty years on.
European Journal of Operational Research 192(1):1–17
de Kok T, Janssen F, van Doremalen J, van Wachem E, Clerkx M, Peeters W (2005)
Philips electronics synchronizes its supply chain to end the bullwhip effect. Inter-
faces 35(1):37–48
Dyson RG, Allen R, Camanho AS, Podinovski VV, Sarrico CS, Shale EA
(2001) Pitfalls and protocols in DEA. European Journal of Operational Research
132(2):245–259
Federgruen A, Zipkin P (1986a) An inventory model with limited production ca-
pacity and uncertain demands I. the average-cost criterion. Mathematics of Oper-
ations Research 11(2):193–207
Federgruen A, Zipkin P (1986b) An inventory model with limited production ca-
pacity and uncertain demands II. the discounted-cost criterion. Mathematics of
Operations Research 11(2):208–215
Fransoo JC, Wouters MJ (2000) Measuring the bullwhip effect in the supply chain.
Supply Chain Management: An International Journal 5(2):78–89
Greene WH (2008) Econometric Analysis, 6th edn. Pearson Prentice Hall, Upper
Saddle River, New Jersey, US
Lee HL, Padmanabhan V, Whang S (1997a) The bullwhip effect in supply chains.
Sloan management review 38(3):93–102
Lee HL, Padmanabhan V, Whang S (1997b) Information distortion in a supply
chain: The bullwhip effect. Management Science 43(4):543–558
Lee HL, Padmanabhan V, Whang S (2004) Comments on information distortion in
a supply chain: The bullwhip effect. Management Science 50(12):1887–1893
Liu B, Esogbue A (1999) Decision criteria and optimal inventory processes. Kluwer
Academic Publishers
Maity K, Maiti M (2005) Numerical approach of multi-objective optimal control
problem in imprecise environment. Fuzzy Optimization and Decision Making
4(4):313–330
Metters R, Vargas V (1999) A comparison of production scheduling policies on
costs, service level, and schedule changes. Production and Operations Manage-
ment 8(1):76–91
Natter M, Reutterer T, Mild A, Taudes A (2007) Practice prize report – an
assortment-wide decision-support system for dynamic pricing and promotion
planning in DIY retailing. Marketing Science 26(4):576–583
Porteus EL (2002) Foundations of Stochastic Inventory Theory. Stanford University
Press
Silver E, Peterson R (1985) Decision systems for inventory management and pro-
duction planning. Wiley, New York et al.
Sterman JD (1989) Modeling managerial behavior: Misperceptions of feedback in
a dynamic decision making experiment. Management Science 35(3):321–339
Zipkin PH (2000) Foundations of Inventory Management. McGraw-Hill, Boston
Chapter 7
Performance Evaluation of Process Strategies
Focussing on Lead Time Reduction Illustrated
with an Existing Polymer Supply Chain

Dominik Gläßer, Yvan Nieto and Gerald Reiner

Abstract The ability to fulfil customer orders is crucial for companies which have
to operate in agile supply chains. They have to be prepared to respond to changing
demand without jeopardizing service level, i. e. delivery performance is the market
winner (Christopher and Towill, 2000; Lee, 2002). In this context, lead time re-
duction (average as well as variability) is of key interest since it allows increasing
responsiveness without enlarging inventories. Given these possible levers (e.g. Chandra and Kumar, 2000), the question arises of the dynamic assessment of potential process improvements for a specific supply chain and, moreover, of a combination of potential process improvements related to an overall strategy (responsive, agile,
etc.). Using process simulation, we demonstrate how the coordinated application of
strategic supply chain methods improves performance measures of both intra- (lead
time) and interorganizational (service level) targets.

7.1 Introduction

The intention of this study is to analyse and assess the effects of shortening lead
time, i. e., average as well as variability, on the performance of the entire supply
chain (delivery service, delivery time, cost, etc.). There are a great number of differ-
ent strategic/tactical supply chain approaches (Chandra and Kumar, 2000; Mentzer

Dominik Gläßer
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: dominik.glasser@unine.ch
Yvan Nieto
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: yvan.nieto@unine.ch
Gerald Reiner
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: gerald.reiner@unine.ch


et al, 2001) that make it possible to improve the supply chain processes by means
of, e. g., demand forecast (see Winklhofer et al (1996) for a review), capacity man-
agement, organizational relations, better communication, reduction of supply chain
echelons and adapted inventory management. Also, the possibility of moving the customer order decoupling point has been recognized (Olhager, 2003), opening the door to postponement strategies, etc. Given these possible levers, the question arises of the dynamic assessment of potential process improvements for a specific supply chain and, moreover, of a combination of potential process improvements related to an overall strategy. Supply chain evaluation is of primary importance in order to support decision making. We will demonstrate that these theoretical concepts as well as the related restrictions have to be modified under consideration of “real” processes. Therefore, the question arises whether these concepts are robust enough to also improve “real” processes. The investigations are carried out on the basis of quantitative models using empirical data (Bertrand and Fransoo, 2009). Basically, through the quantitative examination of empirical data, a model is developed which reproduces the causal relations between the control variables and the performance variables. Furthermore, Bertrand and Fransoo (2002) pointed out that this methodology offers a great opportunity to further advance theory. According to Davis et al (2007), the choice of simulation technology is an important decision when it comes to achieving the research objective. Thus, simulation models are developed using, e.g., discrete event simulation (Sanchez et al, 1996), since the possibility of understanding the supply chain as a whole and of analyzing and assessing different strategic/tactical action alternatives offers a considerable benefit. This is why we have opted to use ARENA for developing the simulation models (Kelton et al, 2003). First, Section 7.2 discusses the effects of optimised lead time with regard to supply chain performance; furthermore, the importance of supply chain evaluation is emphasised. Then, in Section 7.3, we set out our research approach with the help of a polymer processing supply chain. Finally, Section 7.4 provides concluding remarks plus a look at further research possibilities.

7.2 Theoretical Background

The value of cutting lead time is understood by experienced managers but is seldom awarded sufficient importance. Little’s Law (Hopp and Spearman, 1996) states that cutting delivery time also cuts work in process. Moreover, strategic advantages, such as an improved service level or cost advantages, can also be achieved. Consider the classic formula for calculating the safety stock used as part of a reorder point replenishment policy,

I_s = z √( μ_τ υ² + λ_τ σ_τ² )    (7.1)
It is evident that delivery time directly affects the safety stock (Silver et al, 1998). Here, μ_τ stands for the delivery time mean and σ_τ² for the delivery time variance. The other parameters stand for the demand mean (λ_τ) and the demand variance (υ²) as well as the safety factor z, which represents a trade-off between service level and stock keeping costs; I_s denotes the safety stock. Therefore, there is a lot of interest in reducing the variance as well as the average of the delivery time. On the one hand, this results in reduced safety stock, which is reflected in lower stock keeping costs. On the other hand, it in no way worsens the service level, e.g. the number of stock outs. Thus the operational objective of a supply chain, i.e. increased customer satisfaction and lower costs at the same time, becomes more realistic. This can even turn out to be a strategic competitive advantage. A decisive element is the customer order decoupling point (CODP)
(Mason-Jones et al, 2000). It is the point where the forecast-driven standard produc-
tion, mostly serial production of standard components (PUSH), and the demand-
driven production, i. e. commissioned production in response to customer orders or
other requirement indicators (PULL), meet. Physically, the decoupling point in the supply chain is the ultimate inventory holding components that do not yet relate to any order (Mason-Jones and Towill, 1999). The further downstream in the supply chain the
decoupling point is, the less the quantities taken from the inventories agree with real
demand at the point of sale (POS). Owing to the fact that most supply chain partners
do not see real customer demand, they tend to be forecast-driven and not demand-
driven (Christopher and Towill, 2000), which also reinforces the so-called “bullwhip effect” (increasing fluctuations in order quantities and inventory upstream in the supply chain whilst end customer demand remains constant (Lee et al, 2004)). To
increase competitive advantage, Olhager (2003) determines that companies can ei-
ther keep the CODP at its current position and reduce delivery lead time or maintain
the delivery lead time and move the CODP upstream in order to reduce or clear
stocks. Strategically positioning the CODP particularly depends on the production
to delivery lead time (P/D) ratio and on relative demand volatility (RDV) (standard
deviation of demand relative to the average demand). In this way, for example, a
make to order (MTO) strategy can only be achieved if the P/D ratio is less than 1
(Olhager, 2003). This is because when production lead time is greater than the deliv-
ery lead time of a customer order, customer service of course suffers (Jammernegg
and Reiner, 2007). On the other hand, it is not advisable to apply a make to stock
(MTS) strategy (lead time is zero) if the RDV is very high because this results in
huge inventories if customer service is to be maintained, and this of course results
in high inventory costs. If, in this case, the P/D ratio is greater than 1, then some
components would have to be produced for stock, which leads to an assembly to
order (ATO) or an MTS strategy. The importance of lead time is also emphasised
by Cachon and Fisher (2000), in that they assert that reducing lead time or batch
size can affect supply chain performance more than information sharing. Likewise, cutting lead time is an important element of the 12 rules of Mason-Jones et al (2000) for simplifying material flow.
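As a small numerical aid, (7.1) can be transcribed directly; the safety factor z here is derived from a target cycle service level, as applied in scenario 1 below, and all names are illustrative:

import math
from scipy import stats

def safety_stock(mu_tau, sigma2_tau, lam_tau, nu2, cycle_service_level=0.95):
    """Safety stock per Eq. (7.1): I_s = z*sqrt(mu_tau*nu^2 + lam_tau*sigma_tau^2),
    where mu_tau/sigma2_tau describe the delivery time and lam_tau/nu2 the demand."""
    z = stats.norm.ppf(cycle_service_level)   # safety factor for the target CSL
    return z * math.sqrt(mu_tau * nu2 + lam_tau * sigma2_tau)

# e.g. safety_stock(mu_tau=5.0, sigma2_tau=4.0, lam_tau=100.0, nu2=900.0)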

7.2.1 Supply Chain Evaluation

Evaluation of real supply chain processes is always challenging since a valid esti-
mation can only be obtained through a detailed, specific process analysis. Improve-
ments of a specific supply chain process can never be 100% applied (copied) to an-
other setting. Nevertheless, they can be used as best practice indicating improvement
potentials to another company / supply chain. This analysis must be product-specific
as well as company-specific and the performance measures have to be selected care-
fully and in accordance with the specificity of the system under study (Reiner and
Trcka, 2004). An important step in defining suitable performance measures is de-
termining market qualifiers and market winners, which determine the alignment
and therefore different metrics for leanness and agility of supply chain performance
(Mason-Jones et al, 2000; Naylor et al, 1999). When drawing up the analysis and
assessment model a product-specific supply chain design method should be selected
in order to achieve results that are close to reality. This method provides for the fact
that a supply chain always has to be designed in a product-specific and customer-
specific way (Fisher, 1997) and that the alignment of the supply chain with regard
to its leanness, agility or a combination of both (Lee, 2002; Christopher and Towill, 2000) plays a decisive role. If a supply chain already exists in reality, then the necessary data for the specified performance measures can be obtained by, e.g., analysing existing IT systems as well as interviewing the supply chain partners. However, if alternative supply chain strategies have to be analysed in terms of their performance, then such data is not available. In this case, missing values can be calculated, estimated or obtained by simulation. But calculation is often impossible and a general estimation is too imprecise (Jammernegg and Reiner, 2007). Dynamic stochastic computer simulations can provide not only average values for performance measures
but also give information about their probabilistic distribution (Kelton et al, 2003)
because of the use of random variables (Jammernegg and Reiner, 2007). Random
variables, which simulate risks, are essential to reliable evaluations because, accord-
ing to Hopp and Spearman (1996), risks negatively affect supply chain performance.
To enable precise evaluation, the model must include all important process-related
matters.

7.3 Illustration of the Supply Chain

To illustrate the ”real” improvement potential of theoretical lead time reduction ap-
proaches, we analysed empirical data from a supply chain in the polymer as well as
furniture industry. The supply chain is characterized by three levels, i. e. a supplier, a
manufacturer and a sales office, and ends with a market-leading OEM as unique cus-
tomer. In this case, delivery performance is the market winner and on-time delivery
is therefore crucial to maintain customer loyalty. Due to the tremendous variety of
products offered by the manufacturer (more than 50,000), the analysis had to be limited to key articles. The selection of the product was performed using an ABC-XYZ analysis. This classification is an extension of the ABC analysis (Vollmann et al, 2004), because it not only takes the value into consideration but also the variability of the demand (Schönsleben, 2004). We selected the best-selling product with a coefficient of variation (standard deviation/mean) greater than 2/3. This product thus represents the AY category (Reiner and Trcka, 2004).
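The variability measure behind the XYZ part of this classification is simply the coefficient of variation; a trivial sketch:

import numpy as np

def coefficient_of_variation(demand):
    """CV = standard deviation / mean; the product studied here has CV > 2/3."""
    demand = np.asarray(demand, dtype=float)
    return demand.std(ddof=1) / demand.mean()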

7.3.1 Process Design

The manufacturer, located in Western Europe, delivers goods to a sales office located in Eastern Europe. In turn, the sales office supplies the four OEM production plants (C1, . . . , C4) belonging to the customer and also located in the eastern part of Europe. The entire procedure is set out in Fig. 7.1, with the sales office as well as the production sites arranged as in reality. In more detail, the sales office uses its inventory to fulfil the customer orders. As soon as the inventory level at the sales office decreases to the reorder point, a stock replenishment order is placed with the manufacturer. The manufacturer must then supply the goods and send them to the sales office as fast as possible. No delivery time is specified. It is to be borne in mind that the manufacturer’s finished goods inventory merely serves as a buffer store for transport purposes (batching) and is thus not able to deal with any significant demand fluctuations, because the manufacturing strategy is make to order.
Fig. 7.1 The initial process

The sales distribution process is to be regarded as based on the classic push principle (make to stock). In a dynamic environment with uncertainty about demand and fluctuations in demand, this make to stock strategy may lead to great problems. Fig. 7.2 shows the stock movements at the sales office over a year. The diagram shows an increase of stock outs during the first half of the year, which has a negative effect on customer satisfaction. The problems associated with this setting are manifold. (1) Owing to the irregular pattern of customer order placing, it is difficult for the sales office to produce a forecast for the future. (2) Furthermore, available information at the sales office is not sent promptly to the manufacturer. (3) There is a lack of transparency: the manufacturer is not aware of actual customer demand. Therefore, he is not able to discern whether or not there is a genuine customer order behind the stock replenishment order placed by the sales office. This frequently leads to unfavourable prioritisation of the production orders, which, in turn, sometimes results in long and varying delivery periods. (4) There is no classical replenishment policy used by the sales office, so that decisions concerning reorder points and order quantities are mostly made on a one-off basis by the sales office staff.

[Figure: stock quantity at the sales office plotted over the days of one year.]
Fig. 7.2 Stock movements at the sales office

As already mentioned, when choosing performance measures, a special focus is on the total lead time as well as on costs and customer satisfaction (number of stock outs). The period between receipt of the order at the manufacturer’s sales office and the point when the goods are dispatched to the customer from the sales office inventory is described as the total lead time. Reducing lead time has financial implications as well as strategic effects (Gunasekaran et al, 2001). Table 7.1 sets out the performance measures of the initial situation. Long lead times and large numbers of stock outs are particularly apparent. Stock keeping costs are correspondingly low owing to the many stock outs. In the following, the representation of the initial situation based on actual historical data is called the initial scenario.

7.3.2 A Simulation Study

For our simulation model, we use discrete event simulation (ARENA). We apply the model to assess the performance of different supply chain settings as well as to evaluate design alternatives. For each scenario tested, replications were carried out in order to assess the variability and the robustness provided by each strategy (Reiner and Trcka, 2004). One simulation run covers a period of 365 days. Quantitative models based on empirical data are largely dependent on the data they integrate as well as on the process design descriptions. These are necessary for making sure that the way the model works comes as close as possible to actual observations and processes. In order to obtain a coherent data base free of organisational barriers, the data triangulation approach was chosen (Croom, 2009). In particular, we looked at the existing IT systems at the plant, the sales office and the organisational administration departments. Based on direct data access, we ensured that data could be acquired directly from the source using database queries. The model design was adapted in line with the product-specific supply chain method based on analyses and observations of reality, e.g. participant observations and questioning of the responsible supply chain managers. Product specification, according to Mason-Jones et al (2000), yielded that the market winner is the level of readiness to deliver, whereas quality, costs and lead time are the market qualifiers. This indicates an agile supply chain environment. For model validation, the initial scenario was simulated and the resulting data were compared with the real data. The comparison showed that the results of the model reflect reality. Finally, the completed model design, including the simulation results, was again confirmed through the responsible supply chain managers and participant observations.

7.3.3 Scenario 1 - Forecast Based Inventory Management

Based on interviews, we figured out that a 4-week rolling forecast from the customer could be provided, which constitutes the core alternative of our first scenario. The rolling forecast represents the actual order entry with optional manual adjustments by the customer. In addition, and in order to support the impact of the forecast, an (s,S) inventory policy is applied at the sales office, with a safety stock calculated as in Eq. (7.1) with a target cycle service level of 95%. The order quantity also takes the manufacturer’s batch size into account. All applied distributions for stochastic input variables (e.g. delivery time between manufacturer and sales office incl. production time, transport cost) were worked out on the basis of real data, taking account of chi-square and Kolmogorov-Smirnov goodness-of-fit hypothesis tests. In addition, all distributions were validated in a graphical evaluation. As it has not yet been possible to estimate the precision of the customer’s forecast, it was assumed in the simulation that the actual order can deviate by 20% from the forecast per period. The results of scenario 1 are presented in Table 7.1.
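A compressed sketch of the scenario 1 logic; this is our simplification (replenishment arrives immediately instead of after the stochastic manufacturer lead time), and every name is illustrative:

import numpy as np

rng = np.random.default_rng(42)

def run_scenario1(forecast, s, S, batch, inv0=0.0):
    """(s,S) policy at the sales office driven by a rolling customer forecast;
    actual demand deviates by up to +/-20% from the forecast (section 7.3.3)."""
    inv, stockout_days, orders = inv0, 0, []
    for f in forecast:                        # one forecast value per period
        demand = f * rng.uniform(0.8, 1.2)    # 20% forecast deviation
        if inv < s:                           # reorder point reached
            q = batch * np.ceil((S - inv) / batch)   # order in full batches
            orders.append(q)
            inv += q                          # simplification: no lead time
        if inv < demand:
            stockout_days += 1                # count a stock-out period
        inv = max(inv - demand, 0.0)
    return inv, stockout_days, orders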

7.3.4 Scenario 2 - Improvements Along the Supply Chain

Scenario 2 focuses on shortening the supply chain by closing the inventory at the sales office and delivering directly to the customer from the manufacturer. Now, the manufacturer’s order policy envisages always having sufficient articles in stock for the next two weeks (based on average demand per week). In order to enable this strategy, a forecast is necessary, and the 4-week rolling forecast from scenario 1 was retained. By doing so, the manufacturer becomes aware of the actual customer requirement, leading to an upstream move of the CODP. It is worth mentioning that the sales office remains responsible for customer relations, contract extension and contract monitoring. In addition, this strategy results in new transport costs. New transport prices were estimated from interviews with the carrier and were factored into the simulation. Fig. 7.3 shows the entire process.

Fig. 7.3 Improved process of scenario 2

7.3.5 Simulation Results

The performance measures are set out in Table 7.1 and relate to an entire year. Based on the described improvements in scenario 1, we are able to reduce the number of stock outs, which is a direct indicator of customer satisfaction. Lead times and costs cannot be reduced. Owing to the delivery time (mean and variance) between sales office and manufacturer, it is necessary to keep large stocks, which in turn has a negative effect on stock keeping costs and the profit margin. This case would also require building work to extend the sales office inventory to handle the high stock level. As it is not possible to find an improved solution according to all of our performance dimensions (lead time, customer satisfaction and cost), we decided to consider the entire supply chain in scenario 2. By shortening the supply chain and moving the CODP, it was possible to achieve a marked reduction in total lead time in scenario 2. Compared to the initial scenario, these activities also had a positive effect on customer satisfaction, because it is possible to react much faster to customer requirements. Stock keeping costs are also reduced, as sales office stores are no longer required and production is carried out based on forecasts provided by the customer. For completeness, it has to be mentioned that this strategy would only be possible by extending the manufacturer’s inventory capacity. Nevertheless, we assume that this would be a realistic investment, as the costs of building an extension to the manufacturer’s stores would easily be compensated by the savings on transport costs within one year.

7.4 Conclusion

In this paper we analysed and assessed two different possibilities for supply chain improvement. We examined their effects on lead time, and it was possible to show financial and strategic enhancements. Our approach was illustrated by a polymer supply chain with a major OEM as end customer. For each of the alternatives, the performance was measured using lead time, finished articles inventory as well as costs, number of stock outs and transport costs, where the number of stock outs constitutes a decisive index for customer satisfaction. The threshold number of stock outs should be less than 10 days per year. We were able to confirm the positive impact of lead time reduction on supply chain performance, i.e. the simultaneous reduction of inventory and increase of customer satisfaction. We managed to identify this specific dynamic behaviour by quantifying the benefits earned through each alternative. Furthermore, we confirmed the importance of considering the supply chain as a whole when assessing improvement alternatives. Our results demonstrate that the benefits of certain alternatives can only be realised if improvements are aligned along the supply chain partners, e.g. inventory management is based on the customer forecast and linked to the production planning. We believe these results to be interesting for both academics and practitioners, as they contribute to a better understanding of the dynamics of the supply chain and the importance of an entire supply chain-specific evaluation of improvements. One of our next research activities will be to implement the most suitable alternative, in order to be able to draw further conclusions about the model (see also Mitroff et al, 1974) and to ascertain an appropriate forecast algorithm based on historical data to support the customer forecast.

Table 7.1 Results of initial scenario and simulation runs

                                              Initial scenario   Scenario 1         Scenario 2
1  Total lead time (order entry at            ∅ 43.14            ∅ 54.31            ∅ 14.62
   manufacturer up to delivery of goods                          min 7.67           min 2.68
   at the sales office) in days                                  max 112.67         max 38.67
                                                                 σ 16.60            σ 4.02
2  Period of storage at the sales office      ∅ 7.8              ∅ 20.28            ∅ 12.93
   (scenario 1) respectively the                                 min 0.0            min 0.0
   manufacturer (scenario 2) in days                             max 48.80          max 38.67
                                                                 σ 7.47             σ 7.14
3  Delivery time between sales office         ∅ 35.44            ∅ 35.11            Omitted
   and manufacturer in days                   min 1              min 1
                                              max 79             max 87.44
                                              σ 14.93            σ 14.35
4  Production lead time in days               No detailed        No detailed        ∅ 1.73
                                              consideration;     consideration;     min 0.82
                                              included in row 3  included in row 3  max 11.1
                                                                                    σ 1.4
5  Stock outs in days                         ∅ 57               ∅ 0.76             ∅ 1.75
                                                                 min 0.0            min 0.0
                                                                 max 6              max 6.04
                                                                 σ 1.5              σ 2.55
6  Transportation costs manufacturer →        ∅ 150000           ∅ 158964           Omitted
   sales office in Euros                                         min 150800
                                                                 max 163800
7  Transportation costs sales office →        ∅ 33000            ∅ 31895            Omitted
   customers 1-4 in Euros                                        min 29300
                                                                 max 34430
8  Transportation costs manufacturer →        Omitted            Omitted            ∅ 119600
   customers 1-4 in Euros                                                           min 113100
                                                                                    max 126100
9  Inventory costs sales office in Euros      ∅ 7091             ∅ 18753            Omitted
                                                                 min 9964
                                                                 max 21779
10 Inventory costs manufacturer in Euros      ∅ 1774             ∅ 1818             ∅ 8904
                                                                 min 1778           min 7686
                                                                 max 1883           max 9196

Acknowledgements Partial funding for this research has been provided by the project “Matching
supply and demand – an integrated dynamic analysis of supply chain flexibility enablers” supported
by the Swiss National Science Foundation.

References

Bertrand J, Fransoo J (2002) Operations management research methodologies using
quantitative modeling. International Journal of Operations and Production Man-
agement 22(2):241–264
Bertrand J, Fransoo J (2009) Researching Operations Management, 1st edn, Rout-
ledge, New York, chap Modelling and Simulation
Cachon GP, Fisher ML (2000) Supply Chain Inventory Management and the Value
of Shared Information. Management Science 46(8):1032–1048
Chandra C, Kumar S (2000) Supply chain management in theory and practice: A
passing fad or a fundamental change? Industrial Management and Data Systems
100(3):100–13
Christopher M, Towill D (2000) Supply chain migration from lean and functional
to agile and customised. Supply Chain Management: An International Journal
5(4):206–13
Croom S (2009) Researching Operations Management. 1st edn, Routledge, New
York, chap Introduction to Research Methodology in Operations Management
Davis J, Eisenhardt K, Bingham C (2007) Developing theory through simulation
methods. The Academy of Management Review (AMR) 32(2):480–499
Fisher M (1997) What is the right supply chain for your product? A simple frame-
work can help you figure out the answer. Harvard Bus Rev 75(2):105–116
Gunasekaran A, Patel C, Tirtiroglu E (2001) Performance measures and metrics in
a supply chain environment. International Journal of Operations and Production
Management 21(1/2):71–87
Hopp WJ, Spearman ML (1996) Factory Physics: Foundations of Manufacturing
Management. Irvin Inc., Chicago
Jammernegg W, Reiner G (2007) Performance improvement of supply chain pro-
cesses by coordinated inventory and capacity management. International Journal
of Production Economics 108(1-2):183–190
Kelton W, Sadowski R, Sturrock D (2003) Simulation with ARENA, 3rd edn.
McGraw-Hill Science/Engineering/Math
Lee H (2002) Aligning supply chain strategies with product uncertainties. California
Management Review 44(3):105–119
Lee HL, Padmanabhan V, Whang S (2004) Comments on Information Distortion in
a Supply Chain: The Bullwhip Effect. Management Science 50(12):1887–1893
Mason-Jones R, Towill D (1999) Using the information decoupling point to im-
prove supply chain performance. International Journal of Logistics Management
10(2):13–26
Mason-Jones R, Naylor B, Towill D (2000) Lean, agile or leagile? Matching your
supply chain to the marketplace. International Journal of Production Research
38(17):4061–4070
Mentzer J, DeWitt W, Keebler J, Min S, Nix N, Smith C, Zacharia Z (2001) Defining
supply chain management. Journal of Business logistics 22(2):1–26

Mitroff I, Betz F, Pondy L, Sagasti F (1974) On managing science in the systems
age: Two schemas for the study of science as a whole systems phenomenon. In-
terfaces 4(3):46–58
Naylor J, Naim M, Berry D (1999) Leagility: Integrating the lean and agile manu-
facturing paradigms in the total supply chain. International Journal of Production
Economics 62(1-2):107–118
Olhager J (2003) Strategic positioning of the order penetration point. International
Journal of Production Economics 85(3):319–329
Reiner G, Trcka M (2004) Customized supply chain design: Problems and alterna-
tives for a production company in the food industry. A simulation based analysis.
International Journal of Production Economics 89(2):217–229
Sanchez S, Sanchez P, Ramberg J, Moeeni F (1996) Effective engineering design
through simulation. International transactions in Operational research 3(2):169–
185
Schönsleben P (2004) Integral logistics management: Planning & control of com-
prehensive supply chains. CRC Press
Silver E, Pyke D, Peterson R (1998) Inventory management and production planning
and scheduling, 3rd edn. Wiley New York
Vollmann T, Berry W, Whybark D, Jacobs F (2004) Manufacturing planning and
control systems for supply chain management, 5th edn. McGraw-Hill
Winklhofer H, Diamantopoulos A, Witt S (1996) Forecasting practice: A review of
the empirical literature and an agenda for future research. International Journal of
Forecasting 12(2):193–221
Chapter 8
A Framework for Economic and Environmental
Sustainability and Resilience of Supply Chains

Heidrun Rosič, Gerhard Bauer and Werner Jammernegg

Abstract Traditionally supply chain management decisions are based on the eco-
nomic performance which is expressed by financial and non-financial measures,
i.e. costs and customer service. From this perspective, in the last decades, several
logistics trends, i.e. outsourcing, offshoring and centralization, emerged. Recently,
studies have shown that the focus on the cost aspect is no longer sufficient. Due to
internal and external drivers (e.g. customer pressure, regulations, etc.) environmen-
tal criteria become more and more important for the decision-making of individual
enterprises. Furthermore, the risk which is related to the increased transportation
distances resulting from these strategies is often not taken into account or under-
estimated. These shifts in priorities of companies force them to search for new lo-
gistics strategies that are at the same time cost-efficient, environmentally friendly
and reliable. Based on this integrated perspective new logistics trends, like on- and
nearshoring, flexible supply base or flexible transportation, have come up recently
and will gain more importance in the near future. Relying on a flexible supply base
a company can benefit from low costs in an offshore facility and simultaneously
be able to respond quickly to demand fluctuations and react to delivery delays and
disruptions by serving the market also from an onshore site. A single-period dual
sourcing model is presented to show the effects of emission costs on the offshore,
onshore and total order quantity.

Heidrun Rosič
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna, Austria,
e-mail: heidrun.rosic@wu.ac.at
Gerhard Bauer
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna, Austria,
e-mail: gerhard.bauer@wu.ac.at
Werner Jammernegg
Vienna University of Economics and Business, Nordbergstraße 15, 1090 Vienna, Austria,
e-mail: werner.jammernegg@wu.ac.at


8.1 Introduction

Traditionally supply chain management decisions are based on the economic per-
formance which is expressed by financial and non-financial measures, i.e. costs
and customer service. From this perspective, in the last decades, different logistics
trends, i.e. outsourcing, offshoring and centralization, have emerged.
Even though these trends seem to be rather “old” they are still prevailing in to-
day’s businesses. Recently, a study conducted in Austria has shown that 41% of the
interviewed companies still intend to offshore some of their production activities in
the following two years. Furthermore, 35.4% of them plan to move their production
sites to Asia; especially China is a prominent destination for offshoring. The low
cost of the production factors (personal, material, etc.) are the key drivers for their
decisions (Breinbauer et al, 2008).
A European-wide study carried out by Fraunhofer ISI concerning offshoring
showed similar results. Between 25% and 50% of the surveyed enterprises moved
parts of their production abroad in the years 2002 and 2003 (Dachs et al, 2006).
Further examples can be found. For instance, the Austria-based Knill Group,
which is active in the field of infrastructure, supplying systems and applications
for energy and data transmission, built new production facilities in India and China
within the past 36 months in order to take advantage of lower wages in Asia (Brein-
bauer et al, 2008). NXP, a leading semiconductor company, is headquartered in Europe and employs more than 33,500 people. The company pursued a strong
offshoring strategy and now more than 60% of its production activities are located
in Asia, 5% in America; only 33% have remained in Europe. Also, AT&S, a large
Austrian manufacturer of printed circuit boards, continues its offshoring strategy.
In January 1999, AT&S started operating in India by acquiring the largest Indian
printed circuit board manufacturer and now it will build a second facility located
nearby. The investments for this project will amount to 37 million Euros and pro-
duction activities shall start in the third quarter of 2009. Besides, AT&S operates
facilities in China and Korea.
In section 8.2 prevalent logistics trends are presented from a cost perspective, thereby showing the trade-offs that exist between the different cost components.
The trends presented, i.e. outsourcing, offshoring and centralization, usually lead to
lower production (procurement) costs in the case of offshoring and outsourcing or
lower inventory costs in the case of physical centralization. But, in general, they re-
sult in an increase of transportation distances, therefore making supply chains longer
and/or more complex. Often in the evaluation of these strategies side effects of in-
creased transportation distances are not taken into account adequately. Therefore, in
section 3 in addition to the economic criteria, “soft” factors, like lead time, deliv-
ery reliability, flexibility, etc. and the environmental impact are included. Based on
this integrated perspective consisting of costs, risks and environment new logistics
trends are highlighted. One of these new logistics trends is then analyzed in more
detail, namely flexible supply base with the specific variant dual sourcing. In section
4 a transport-focused framework for dual sourcing (off- and onshore supply source)
8 A Framework for Economic and Environmental Sustainability and Resilience of SC 93

and in section 5 a single-period model for dual sourcing including emission costs
are presented.

8.2 Prevalent Logistics Trends: Cost Perspective

The subject of supply chain management is the organization of the transformation of raw materials into final products by a well-defined network of manufacturing,
warehousing and transportation processes. Most of the necessary activities are de-
termined by the design of the supply chain. In network design, for instance, it is
decided where manufacturing activities take place, which location performs a cer-
tain activity, where procurement and/or distribution centers are located and how
the transportation is handled between the different stages. Traditionally, these de-
cisions are based on the economic performance which is expressed by financial
and non-financial measures, i.e. costs and customer service. Often these measures
are conflicting, like optimizing the total landed cost of all involved transformation
processes and satisfying customer requirements. A first trade-off is between logis-
tics cost and customer service: high levels of product availability can be achieved
with high inventory and thus high inventory cost; short delivery times are possi-
ble with additional resources for manufacturing and/or transportation related with
an increase of the respective cost. Moreover, a second trade-off must consider the
costs of resources (manufacturing, transportation, storage facilities). In order to stay
competitive in the market, an enterprise chooses the strategy which is most efficient,
generates lowest total landed cost (facilities, inventory and transportation) and sat-
isfies customer requirements.
Different trends, i.e. outsourcing and offshoring of production activities and
physical centralization, have emerged due to a focus on cost reduction. Outsourc-
ing of production activities means to subcontract a process to a third-party in or-
der to concentrate investments and time on the core competencies of a company.
Outsourced processes may be done more efficiently and cheaper by a third party
which gains economies of scale. Further, the fixed cost of production can be re-
duced (Chopra and Meindl, 2006). Offshoring is defined as locating activities abroad, with the assumed geographical distance between the original and the new location varying by author, e.g., “outside a country’s boundaries”, “outside the first world”, “outside of the continent”. In this paper, offshoring does not cover every transfer of manufacturing facilities outside a country’s boundaries; the term applies only to relocations to a far-distant country. With respect to the term “far-distant country” it has to be kept in mind that the actual geographical distance is relevant, not the legal boundaries of a country. The main driver for offshoring is to lower operational costs through lower wages of the workforce abroad or lower raw material costs. Further reasons are gaining market access, following a key customer or productivity increases. Typical
offshore regions are situated in Asia because there a company can take advantage of
significantly lower labor costs. The dislocation of production activities from Western Europe to Eastern Europe is often called nearshoring, as the distance and the
cultural differences are less (Ferreira and Prokopets, 2009). Physical centralization
means that the number of production, procurement and/or distribution sites is re-
duced to a single one, this means “consolidating operations in a single location”
(Van Mieghem, 2008). The main goal of centralization is to pool risk, reduce inven-
tory and exploit economies of scale (Chopra and Meindl, 2006).
These trends mainly lead to a reduction of total landed cost due to lower produc-
tion (procurement) cost in the case of offshoring and outsourcing or lower inventory
cost due to risk pooling in the case of physical centralization. But as a negative side-
effect supply chains are longer and/or more complex (Tang, 2006). Due to the in-
creased length of supply chains more transportation activities are necessary leading
to an increase of the respective costs. In this paper we will especially pay attention
to the effect of transportation activity within a supply chain.

8.3 New Logistics Trends: Integrated Perspective

The presented logistics trends have proven to be optimal for industrial compa-
nies under economic considerations. Recently studies have shown that the focus
on the cost aspect of a certain strategy is no longer sufficient. Environmental crite-
ria become more and more important for the decision-making of individual enter-
prises. Walker et al (2008) distinguish between internal drivers (organizational factors, efficiency
improvements) and external drivers (regulation, customers, competition and soci-
ety) which may induce the consideration of environmental aspects in supply chain
decision-making. Especially carbon dioxide (CO2) emissions heavily accelerate the
greenhouse effect; 60% of this effect is caused by CO2. This is a reason why gov-
ernmental institutions (UN, EU, etc.) often focus their regulations on CO2-reduction
(Kyoto protocol, EU emission trading scheme, etc.).
Furthermore, the risk which is related to these strategies is often not taken into
account or underestimated. There are various types of risks that exist especially in
the case of offshoring. Currency risk and political risk depend on the economic and
political stability within a country. Intellectual property risk and competitive risk
should also not be ignored (Van Mieghem, 2008). Ferreira and Prokopets (2009)
conclude from the “2008 Archstone/SCRM Survey of Manufacturers” (in-depth
survey of 39 senior executives from US and European-based manufacturers) that
executives also start to recognize aspects of offshoring, such as “quality problems,
longer supply chains, lack of visibility, piracy and intellectual capital theft”. Due to these additional aspects, the cost savings of offshoring, which average between 25% and 40%, start to diminish.
In addition, an offshoring strategy negatively affects the flexibility and respon-
siveness of a supply chain as shipments have to be made in large lots (e.g. container-
size) and the delivery time is very long (e.g. up to several months). Besides, the cus-
tomization of products to individual customer needs is more difficult. Furthermore,
the cost components are about to change; 40% of the manufacturing enterprises have
experienced an increase of 25% or more in direct costs of offshoring (materials,
components, logistics and transportation) over the last three years. Nearly 90% of
them expect costs to rise by more than 10% in the next 12 months. This is due to in-
creasing labor costs in important offshore countries, like China (2005-2008: wages
+ 44%), an increase in transportation charges for sea freight (2005-2008: freight
charges + 135%) and an unfavorable development of foreign currencies (Ferreira
and Prokopets, 2009). Furthermore, Simchi-Levi et al (2008) point out that even
though the oil price has decreased recently it is likely that it will increase again
above $100 a barrel in the year 2009.
Offshoring, outsourcing and centralization result in supply chains which are cost-
efficient, but as a negative side-effect they are longer and/or more complex (Tang,
2006). Due to the increased length of supply chains more transportation activities
are necessary; even though some of the transport can be shifted to more environ-
mentally friendly modes, such as sea transport, in total these trends have a negative
impact on the environment. Similar conclusions can be drawn with respect to the
risk dimension. The more extended a supply chain is, the more risk it has to bear
and the more difficult it is to guarantee a certain delivery reliability. It can be con-
cluded that in the future the existing trends have to be reconsidered; environmental
criteria and the risk dimension will become more important. Further, the cost struc-
ture is expected to change. These shifts in priorities of companies as well as the
shifts in the cost components force companies to search for new logistics strategies
that are at the same time cost-efficient, environmentally friendly and reliable.
Supply chain risks as well as environmental aspects should be considered, be-
sides economic criteria, in the performance evaluation of a supply chain. Based on
this integrated perspective new logistics trends have come up recently and will gain
more importance in the near future.
Through network redesign, i.e. by moving production activities back or closer to
the market through near-, onshoring or decentralization, the transportation distances
can be reduced. The study of Ferreira and Prokopets (2009) shows that 30% of the
companies surveyed have already reversed their offshoring decision; 59% are will-
ing to change their strategy with respect to offshoring. This means that either off-
shored activities are relocated or that managers will show an increased awareness
in future offshoring decisions. For instance, a company from the apparel industry
which produces casual wear, sportswear and underwear in two manufacturing sites
in the US considered redesigning its network in order to reduce its CO2-emissions.
For inbound transportation rail and trucks were used whereas on the outbound side
the company completely relied on road transportation. Moving some production activity to a low-cost site in Mexico and introducing new distribution sites were evaluated considering cost and CO2-emissions. Optimizing only with respect to cost led to moving production activity to Mexico and installing two additional distribution centers; the total cost reduction (costs for warehouses and production sites, transportation and inventory) amounted to 16%, nearly US$ 8 million in absolute figures, and CO2-emissions could be lowered by 5%. Then,
a reduction of CO2-emissions by 25% was introduced as a constraint. Now, nearly
no production activity was dislocated to Mexico, thereby producing closer to the
market and reducing transportation distances. This new network design resulted in
a small increase of total costs compared to the optimal solution, but the total costs
are still more than 10% smaller than in the initial situation and the CO2-emissions
could be reduced by a quarter (Simchi-Levi, 2008).
Concerning supply chain risks, it has to be pointed out that offshoring, outsourc-
ing and centralization typically move production away from the market which re-
duces the responsiveness and flexibility of a supply chain. This has to be consid-
ered together with the possible cost reductions of a certain strategy (Allon and
Van Mieghem, 2009). Further, Tang (2006) points out that supply chains have to
become robust which means that a supply chain is able to fulfill customer require-
ments even though a disruption of the supply chain has occurred. This disruption
can be of different kind, either a short one due to congestion or accidents or a long
one which can be the result of a natural disaster or a terrorist attack destroying one
node or arc in the supply chain.
By using a flexible supply base a company can benefit from low costs in an offshore facility and simultaneously respond quickly to demand fluctuations and react to delivery delays and disruptions by serving the market also from an onshore site. In this way, the amount of long-distance transport can be reduced, there-
fore mitigating transportation risks. For instance, Hewlett Packard uses an offshore
facility to produce the base volume and employs also an onshore facility to quickly
react to disruptions and demand fluctuations (Tang, 2006).
Furthermore, flexible transportation helps to improve the performance of a sup-
ply chain by a change of transport mode, multi-modal transportation or the use of
multiple routes. The use of a single mode is mainly due to cost consideration and
the aim to reduce complexity in supply chains but this increases the vulnerability
of the supply chains. By using multi-modal transportation the supply chain is able
to obtain more flexibility and therefore can handle disruptions easier. Especially in
the case of congestion an alternative route could increase the time- as well as cost-
effectiveness. For instance, LKW Walter decided to change the mode on the link from north-eastern Spain to southern Italy. Road transportation was replaced by a multi-
modal solution (sea/truck). Thereby, 1,211 km per shipment (1,523 km on the road
vs. 312 km short sea/trucking), in total over 1.2 million km per year, could be saved
(ECR, 2008). Nike operates a distribution center in Belgium that serves the Euro-
pean market. 96% of the freight to the location is transported by inland waterways.
Thereby, 10,000 truck loads could be saved. Also on the distribution side Nike relies heavily on waterways; only the direct delivery to customers is carried out by truck (Seebauer, 2008).
Improvements in transportation efficiency can be achieved through better ve-
hicle utilization, the reduction of empty trips as well as less frequent shipments
with larger lot sizes. This leads to a reduction of the number of transports. Thus
costs, CO2-emissions and fossil fuel consumption can be reduced significantly.
S.C. Johnson & Son Inc., a household and personal-care products maker, for in-
stance, was able to cut fuel use by 630,000 liters by improving truckload utilization
(Simchi-Levi et al, 2008). By maximizing full truck load and supplying the market
from the closest location, PepsiCo, on average, saved 1.5 million km and 1,200 t
CO2-emissions (ECR, 2008). The British drugstore chain Boots, for instance, could
avoid empty runs by using route planning. Thereby, 2.2 million kilometers on the
road could be eliminated which resulted in a reduction of 1,750 t CO2-emissions. In
combination with the use of larger containers, increased utilization of the contain-
ers and reduced amount of air transportation Boots achieved a reduction of 3,000 t
CO2 (-29%) between 2004 and 2007. These improvements were only possible due
to the tight collaboration between Boots and its logistics service provider Maersk
Logistics (Seebauer, 2008). According to Simchi-Levi et al (2008) logistics service
providers will be employed more often in order to increase efficiency. They are
able to consolidate the shipments from a large number of customers and therewith
can reduce the number of empty trips. Again Boots was able to save approximately
120,000 km as well as 92 t of CO2-emissions per year by sharing transportation
with another company in the UK. Further examples in this context can be found in
the ECR Sustainable Transport Project (ECR, 2008). Table 8.1 gives an overview of
the presented new logistics trends.
In the following sections we use the flexible supply base, one of the presented new logistics trends, to develop a transport-focused framework and a stylized model for dual sourcing.

8.4 Transport-Focused Framework for Dual Sourcing

In the previous section it was shown by example that a flexible supply base can help to improve the performance of a supply chain from an integrated perspective including economic, risk and environmental criteria. In the following we will focus on a certain type of this strategy, i.e. dual sourcing relying on a cheap but inflexible and slow offshore supply source and on an expensive but flexible and fast onshore supply source. The onshore supply source can help to improve the performance of a supply chain with respect to risks in two cases: to bridge delivery delays and/or disruptions, or to fulfill demand exceeding the offshore order quantity.
Table 8.2 gives an overview of the external conditions that have an impact on a
company’s policy and the decisions to be taken.
Environmental regulations, like the emission trading scheme of the EU, impose
restrictions on companies and therefore influence the policies they choose. The
emission trading scheme of the EU (EU ETS) was implemented in order to reach
the goals stated in the Kyoto protocol. It is a cap-and-trade system of allowances for
emitting CO2 and other greenhouse gases whereby each allowance certifies the right
to emit one ton of CO2. Only certain industries are included in this regulation up to now. These are heavy energy-consuming industries, like refineries, power
generation with fossil resources, metal production and processing, pulp and paper,
etc. Today, 11,000 sites that produce around 50% of the EU’s total CO2-emissions
are covered by the EU ETS. A certain number of emission allowances are allocated
to the companies free of charge. Those companies that produce fewer emissions than
the number of allowances owned can sell them, whereas those producing more have
Table 8.1 Overview of new logistics trends: Integrated perspective

Network redesign. Characteristics: nearshoring, onshoring, decentralization. Relevance for the integrated perspective: reduced transportation distances and number of transports. Case study: using regional distribution centers, a company from the metal manufacturing industry was able to reduce the average distance to customers by 46%; in the apparel industry, the decision to produce at an onshore facility reduced CO2-emissions by 25%.

Flexible supply base. Characteristics: using multiple supply sources (offshore and onshore). Relevance: reduced number of long-distance transports and mitigation of transportation risks. Case study: Hewlett Packard uses an offshore facility to produce the base volume and also employs an onshore facility to quickly react to disruptions and demand fluctuations.

Flexible transportation. Characteristics: change of transport mode, multi-modal transportation, multiple routes. Relevance: reduced CO2-emissions and dependence on fossil fuels, reaction to the occurrence of risk events. Case study: LKW Walter saved 1,211 km per shipment by changing the mode (1,523 km on the road vs. 312 km short sea/trucking), in total over 1.2 million km per year.

Transportation efficiency. Characteristics: vehicle routing and loading, consolidated shipments. Relevance: reduced number of empty trips, improved vehicle utilization. Case study: by maximizing full truck load, PepsiCo, on average, saved 1.5 million km and 1,200 t CO2-emissions; a manufacturer of household and personal-care products cut fuel use by 630,000 litres by combining multiple customer orders.

Table 8.2 Transport-focused framework for dual sourcing

External conditions: environmental regulations (emission trading scheme); transportation network including transportation risks
Policies: dual sourcing (off- and onshore supply source)
Decisions: offshore order quantity; emission allowances

to buy additional allowances, get credits by engaging in emission-saving projects or
have to pay a penalty. The aim is to reduce the number of allowances constantly, so
as to decrease the total CO2-emissions within the EU (-21% until 2020). In 2006, half of the greenhouse gas emissions in the EU were caused by industry; the second largest “polluter” was transportation, accounting for nearly 20%. The EU is already planning
to increase the number of companies and sectors which have to comply with the trad-
ing scheme, e.g. by including civil aviation by 2013 (EC, 2008). So it has to be expected
that the whole transport sector will be confronted with more severe regulations or
the inclusion into the EU ETS in the near future.
External conditions are also determined by the transportation network including
respective risks. According to Rodrigues et al (2008) transportation risks are related
to the carrier who executes the transport and to external factors. The carrier is a source of risk with respect to its fleet capacity, network planning, scheduling and routing, and information systems, as well as its financial condition and reliability. External risk factors include transport macroeconomics (oil price, availability of drivers, etc.), infrastructure conditions (congestion, construction, etc.) and future government policies. Further, severe shocks, like terrorist attacks, natural disasters or industrial action, might have a strong impact on the transportation network. Whereas the probability of such events is very low, their impact can be detrimental. Based on this, they state that with the increasing degree of outsourcing
and the higher geographical spread of supply chains the transportation risks increase
(Rodrigues et al, 2008).
The paper by Allon and Van Mieghem (2009) about global dual sourcing shows
that it is almost impossible to derive the optimal sourcing policy for a responsive
near-shore source and a low-cost offshore source even if the criterion is just cost
minimization. When an environmental criterion is included as well, it thus seems reasonable to develop a simple model for dual sourcing with onshore reactive capacity in order to analyze the consequences for the offshore order quantity.

8.5 Single-Period Dual Sourcing Model Including Emission Costs

In the seminal newsvendor model a possibility to reduce the mismatch cost of under-
stocking or overstocking is to allow for a second order opportunity. In the simplest
version it is assumed that at the beginning of the selling season the demand of a
product is known exactly or that the second production facility can immediately
produce any requested quantity (see, e.g., Warburton and Stratton, 2005 or Cachon
and Terwiesch, 2009, chapter 12).
In the considered single-period dual sourcing model a product can be sourced both from an offshore production facility and from an onshore production plant, whereby the onshore supply source has unlimited capacity and can deliver immediately. The two suppliers can be internal or external to the company. Because of the long procurement lead time the offshore order quantity of the product is based on the random demand X characterized by the distribution function F. The company, e.g. a retailer, sells the product at the unit selling price p. The purchase price per unit from the offshore supplier is denoted by $c_{off}$, that from the onshore supplier by $c_{on}$. Leftover inventory at the end of the regular selling season can be sold at a unit salvage value z. It is assumed that $p > c_{on} > c_{off} > z$ holds. Then the profit P depends on the offshore order quantity q and on the realized demand x:
100 Heidrun Rosič, Gerhard Bauer and Werner Jammernegg

$$P(q, x) = \begin{cases} p\,x - c_{off}\,q + z\,(q - x)^{+} & x \le q \\ p\,x - c_{off}\,q - c_{on}\,(x - q)^{+} & x > q \end{cases}$$
The optimal offshore order quantity q∗ is derived by maximizing the expected profit E(P(q, X)). Intuitively, a marginal offshore unit costs $c_{off}$; with probability 1 − F(q) it replaces an onshore unit worth $c_{on}$, and otherwise it is salvaged at z. Setting the expected marginal profit to zero within the framework of the classical newsvendor model yields the optimality condition (see, e.g. Cachon and Terwiesch, 2009, section 12.4):

$$F(q^{*}) = \frac{c_{on} - c_{off}}{c_{on} - z}$$
The unit purchase price from the offshore supplier is composed of the product price per unit c and the emission cost factor ϕ; the unit purchase price from the onshore supplier is obtained by adding a domestic premium (d · c) to the offshore product price per unit. This premium is mainly caused by the higher labor costs that have to be paid in the onshore production facility (Warburton and Stratton, 2005). The two cost parameters are defined as:

$$c_{off} = (1 + \varphi)\,c, \qquad c_{on} = (1 + d)\,c.$$
The offshore supply source is only used if it is overall cheaper than the onshore supply source, which is the case as long as ϕ < d. As soon as ϕ ≥ d the product quantity is exclusively procured from the onshore source on order. The factor ϕ
represents the emission costs per product unit, whereby it is assumed that costs for
emission allowances only arise for long-distance transportation from the offshore lo-
cation. The emission costs per unit sourced from the offshore supplier depend on the
selected transportation route and transportation mode. For the different modes av-
erage emission factors per kilometer exist. Multiplying these emission factors with
the distance the vehicle has to travel, the CO2-emissions for one trip can be calcu-
lated. The emission costs, then, are derived from the buying price of an emission
allowance, traded under the EU ETS. It is reasonable to assume that the emission
cost factor ϕ is independent of the order quantity q if the transport is carried out by
a logistics service provider. The company, e.g. retailer, then has to reserve a fixed
transport capacity which determines the factor ϕ . If part of that reserved capacity is
not used by the company, the logistics service provider can sell it to other customers
and therefore usually achieve high vehicle utilization.
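To make this derivation concrete, the following Python sketch computes ϕ along the lines just described. Every input figure (emission factor, route length, product weight, allowance price) is an assumption made for illustration only; the chapter does not give concrete values.

# Illustrative derivation of the emission cost factor phi; all input
# figures below are assumptions for illustration, not values from the text.
emission_factor_g_per_tkm = 8.0   # assumed: g CO2 per tonne-km for sea freight
distance_km = 18000.0             # assumed: length of the offshore route
tonnes_per_unit = 0.002           # assumed: product weight of 2 kg per unit
allowance_price_eur_per_t = 25.0  # assumed: price of one EU ETS allowance (1 t CO2)
c = 10.0                          # product price per unit, as in the example below

co2_t_per_unit = emission_factor_g_per_tkm * distance_km * tonnes_per_unit / 1e6
emission_cost_per_unit = co2_t_per_unit * allowance_price_eur_per_t
phi = emission_cost_per_unit / c  # so that c_off = (1 + phi) * c
print(phi)

With these assumed figures the factor is small; the point of the sketch is only the mechanics of the calculation, not the magnitude of the result.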
A numerical example with the following cost and price parameters is presented
in order to show the impact of emission costs on the quantity decisions: selling price
p = 20, product price per unit c = 10, salvage value z = 5 and domestic premium d =
0.2. The emission cost factor ϕ is varied in order to show the impact of increasing
environmental costs on the optimal decision. Demand is assumed to be normally
distributed with a mean μ of 1,000 units whereby two different standard deviations
(σ1 = 150, σ2 = 300) are used in order to show the impact of variability. Taking a
normally distributed demand is justified if the coefficient of variation (σ /μ ) is small
enough (Warburton and Stratton, 2005).
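The optimality condition translates directly into a few lines of code. The following sketch, a minimal illustration rather than the authors' implementation, computes the optimal offshore order quantity for these parameters using SciPy:

from scipy.stats import norm

p, c, z, d = 20.0, 10.0, 5.0, 0.2  # selling price, product price, salvage value, domestic premium
mu, sigma = 1000.0, 150.0          # parameters of the normally distributed demand

def offshore_order_quantity(phi):
    """Optimal q* from the critical fractile F(q*) = (c_on - c_off) / (c_on - z)."""
    c_off = (1 + phi) * c          # offshore price including the emission cost factor
    c_on = (1 + d) * c             # onshore price including the domestic premium
    if phi >= d:                   # offshore no longer cheaper: order onshore only
        return 0.0
    critical_ratio = (c_on - c_off) / (c_on - z)
    return norm.ppf(critical_ratio, loc=mu, scale=sigma)

for phi in (0.00, 0.05, 0.10, 0.15):
    print(f"phi = {phi:.2f}: q* = {offshore_order_quantity(phi):.1f}")

For ϕ = 0 the critical ratio is 2/7, giving q* ≈ 915 units; the offshore quantity lies below the mean demand because demand exceeding q* is served from the onshore source.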
The offshore order quantity depends on the relative cost advantage that can be
achieved through offshore sourcing. The lower the offshore cost is, the more the retailer will procure from the offshore source. The onshore supply source is only
employed in order to fulfill the demand that exceeds the offshore order quantity,
i.e. expected lost sales. Therefore, with the onshore supply source a service level of
100% can be guaranteed. But it should not be forgotten that this comes at a high do-
mestic premium. Nevertheless, the dual sourcing strategy often outperforms a pure
offshoring strategy with respect to expected profit (see, e.g. Cachon and Terwiesch,
2009).
With increasing emission costs (ϕ · c) the company sources less from offshore
as the cost advantage is reduced. The offshore quantity decreases nearly linearly
with increasing ϕ until a certain point after which it decreases sharply. The total
order quantity (off- and onshore quantity) also decreases depending on ϕ . This is
due to the following fact: the fewer units are procured through the offshore supply source, the lower is the expected leftover inventory (I). The whole expected lost sales quantity ($q_{on}$) is then fulfilled from the onshore supply source, and this decision is taken under complete certainty. Overall, the total order quantity converges to the mean demand because

$$q^{*} + q_{on} = E(X) + I$$
Higher demand uncertainty, i.e. a higher coefficient of variation of demand, im-
plies that the onshore supply source is used more.
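This convergence can be checked numerically with the standard normal loss function, which yields the expected lost sales and thus the expected leftover inventory for a given offshore quantity. A sketch under the same parameter assumptions as above:

from scipy.stats import norm

mu, sigma = 1000.0, 150.0

def loss(k):
    """Standard normal loss function L(k) = E[(Z - k)^+]."""
    return norm.pdf(k) - k * (1.0 - norm.cdf(k))

for q in (915.1, 839.9, 780.2):    # optimal offshore quantities for phi = 0, 0.10, 0.15
    k = (q - mu) / sigma
    q_on = sigma * loss(k)         # expected lost sales, fulfilled onshore
    leftover = q - mu + q_on       # expected leftover inventory I
    print(q + q_on, mu + leftover) # both sides of q* + q_on = E(X) + I

As q decreases, the expected leftover inventory shrinks and the total order quantity approaches the mean demand of 1,000 units.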
The numerical results for the two different demand distributions with the above
price and cost parameters are graphically shown in Fig. 8.1 and Fig. 8.2. The emission cost factor is varied in the range 0 ≤ ϕ < d.

Fig. 8.1 Off-, onshore and total order quantity depending on the emission cost factor ϕ for normally distributed demand with μ = 1,000, σ1 = 150 and d = 0.2

Fig. 8.2 Off-, onshore and total order quantity depending on the emission cost factor ϕ for normally distributed demand with μ = 1,000, σ2 = 300 and d = 0.2

The presented model is based on limiting assumptions with respect to the exist-
ing environmental regulations concerning emission allowances. Under the existing
EU ETS companies receive allowances free of charge. Therefore, in contrast to the
model presented, emission costs do not arise for each unit ordered, but only if a cer-
tain threshold is exceeded. For a more general model with a positive emission limit
and the opportunity to buy additional emission allowances or to sell unused ones we
refer to Rosič and Jammernegg (2009).

8.6 Summary

Prevalent logistics trends, i.e. outsourcing, offshoring, and centralization are pre-
sented from a cost perspective. These strategies are chosen with the objective to
reduce total landed costs (e.g. reduction of labor costs through offshoring or in-
ventory costs through centralization). But as a direct consequence transportation dis-
tances increase; supply chains become longer and/or more complex. This has nega-
tive impacts on the risk a supply chain has to face (e.g. congestion on transportation
links) and on the environment (e.g. CO2-emissions). An integrated perspective is
presented and new logistics trends which perform better with respect to transporta-
tion risks and the environment are illustrated by several case studies. Further, we use
one of the presented trends - flexible supply base - to develop a transport-focused
framework for dual sourcing. Dual sourcing means that a company relies on a cheap
but slow offshore supply source and on an expensive but fast and unlimited onshore
supply source. The external conditions which influence the policies of an individual
company are environmental regulations focusing on the emission trading scheme
for CO2-allowances of the EU and the transportation network including the respec-
tive risks. Then, a single-period dual sourcing model is presented. The objective is
to maximize expected profit and it has to be decided how much to order from the
offshore source; the onshore source is used in order to fulfill the demand exceed-
ing the offshore order quantity. The costs for emission allowances are included in
the offshore purchase price, since the quantity procured from the offshore supply source requires more transportation activity. It is shown that with increasing
emission costs the offshore order quantity decreases, whereas the onshore order
quantity increases and the total order quantity converges to the expected demand.
The presented model is based on limiting assumptions with respect to the existing
environmental regulations for emission allowances. For a more general model with
a positive emission limit and the opportunity to buy additional emission allowances
or to sell unused ones we refer to Rosič and Jammernegg (2009).

References

Allon G, Van Mieghem J (2009) Global Dual Sourcing: Tailored Base Surge Alloca-
tion to Near and Offshore Production. Tech. rep., Working Paper, Kellogg School
of Management, Northwestern University
Breinbauer A, Haslehner F, Wala T (2008) Internationale Produktionsverlagerung Österreichischer Industrieunternehmer: Ergebnisse einer empirischen Untersuchung. Tech. rep., FH des bfi Wien, URL http://www.fh-vie.ac.at/files/2008 Studie Produktionsverlagerungen.pdf
Cachon G, Terwiesch C (2009) Matching supply with demand: An introduction to
operations management, 2nd edn. McGraw-Hill, Boston
Chopra S, Meindl P (2006) Supply chain management, 3rd edn. Pearson Prentice
Hall, New Jersey
Dachs B, Ebersberger B, Kinkel S, Waser B (2006) Offshoring of production:
A European perspective. URL http://www.systemsresearch.ac.at/%20getdown-
load.php?id=154
EC (2008) EU action against climate change: The EU Emissions Trading System.
European Commission. URL http://ec.europa.eu/environment/climat/pdf/brochu-
res/ets en.pdf
ECR (2008) ECR Sustainable Transport Project Case Studies. URL
http://www.ecrnet.org/05-projects/transport/Combined%20Case%20stu-
dies v1%208 220508 pro.pdf
Ferreira J, Prokopets L (2009) Does offshoring still make sense? Supply Chain Man-
agement Review 13(1):20–27
Rodrigues V, Stantchev D, Potter A, Naim M, Whiteing A (2008) Establishing a
transport operation focused uncertainty model for the supply chain. International
Journal of Physical Distribution & Logistics Management 38(5):388–411
104 Heidrun Rosič, Gerhard Bauer and Werner Jammernegg

Rosič H, Jammernegg W (2009) The environmental sustainability of quick response
concepts. Working paper, Department of Information Systems and Operations,
Vienna University of Economics and Business
Seebauer P (2008) Supply Chain unter der Öko-Lupe. Logistik heute 2008 (10):54–
55
Simchi-Levi D (2008) Green and supply chain strategies in a volatile world.
Fachkonferenz: Grüne Supply Chains, Frankfurt/Main, Germany
Simchi-Levi D, Nelson D, Mulani N, Wright J (2008) Crude calculations. URL
http://online.wsj.com/article/SB122160061166044841.html
Tang C (2006) Robust strategies for mitigating supply chain disruptions. Interna-
tional Journal of Logistics: Research and Applications 9(1):33–45
Van Mieghem J (2008) Operations Strategy: Principles and Practice. Dynamic Ideas,
Charlestown
Walker H, Di Sisto L, McBain D (2008) Drivers and barriers to environmental sup-
ply chain management practices: Lessons from the public and private sectors.
Journal of Purchasing and Supply Management 14(1):69–85
Warburton R, Stratton R (2005) The optimal quantity of quick response manufac-
turing for an onshore and offshore sourcing model. International Journal of Lo-
gistics, Research and Applications 8(2):125–141
Chapter 9
An Integrative Approach To Inventory Control

Philip Hedenstierna, Per Hilletofth and Olli-Pekka Hilmola

Abstract Inventory control systems consist of three types of methods: forecasting,
safety stock sizing and order timing and sizing. These are all part of the interpre-
tation of a planning environment to generate replenishment orders, and may con-
sequently affect the performance of a system. It is therefore essential to integrate
these aspects into a complete inventory control process, to be able to evaluate dif-
ferent methods for certain environments as well as for predicting the overall perfor-
mance of a system. In this research a framework of an integrated inventory control
process has been developed, covering all relations from planning environment to
performance measures. Based on this framework a simulation model has been con-
structed; the objective is to show how integrated inventory control systems perform
in comparison to theoretical predictions as well as to show the benefits of using an
integrated inventory control process when evaluating the appropriateness of inven-
tory control solutions. Results indicate that only simple applications (for instance
without forecasts or seasonality) correspond to theoretical cost and service level
calculations, while more complex models (forecasts and changing demand patterns)
show the need for tight synchronization between forecasts and reordering methods.
As the framework describes all relations that affect performance, it simplifies the
construction of simulation models and makes them accurate. Another benefit of the
framework is that it may be used to transfer simulation models to real-world appli-
cations, or vice versa, without loss of functionality.

Philip Hedenstierna
Logistics Research Group, University of Skövde, 541 28 Skövde, Sweden
Per Hilletofth, Corresponding author
Logistic Research Group, University of Skövde, 541 28 Skövde, Sweden, Tel.: +46 (0)500 44 85
88; Fax: +46 (0)500 44 87 99,
e-mail: per.hilletofth@his.se
Olli-Pekka Hilmola
Lappeenranta Univ. of Tech., Kouvola Unit, Prikaatintie 9, 45100 Kouvola, Finland


9.1 Introduction

The purpose of inventory control is to ensure service to processes or customers in a
cost-efficient manner, which means that the cost of materials acquisition is balanced
with the cost of holding inventory (Axsäter, 1991). This is done by interpreting data
describing the planning environment, i.e. the parameters that may affect the deci-
sion, to generate replenishment order times and quantities (Mattsson, 2004). The
performance of an inventory control system may then be measured by the service
and the total cost caused, when applied in a certain environment. Inventory control
methods may be classified by whether they determine ordering timing, quantity, or
both (Mattsson and Jonsson, 2003). For systems that determine only one aspect,
such as the reorder point system or the periodic ordering system, the undetermined
aspect must be calculated beforehand, typically using the economic order quantity
or the corresponding economic inventory cycle time (Waters, 2003). The parameters
that inventory control systems typically use include demand forecasts, projected lead times, holding rates and ordering costs. Of these, the forecast
is of special concern, as it is not a part of the planning environment, but a product
thereof. This makes forecasting, given our definition of inventory control, an inte-
gral part of the inventory control system. To maintain service when there is forecast
and lead time variability, safety stock is used, which is based on variability and on
the uncertain times relating to the used inventory control model (Axsäter, 2006).
As the safety stock incurs a holding cost, it may be argued that it should be part of
the timing/sizing decision; however, it is usually excluded as optimization in most
cases only gives insignificant cost savings (Axsäter, 2006). However, in larger distribution systems the reduction of safety stocks is a primary driver of the centralization of warehouses to one location (e.g. demand pooling and the square root law; see Zinn et al 1989 and Das and Tyagi 1999); this can concern feeding warehouses of factories (e.g. Rantala and Hilmola, 2005) as well as retail-related distribution operations (e.g. Leknes and Carr, 2004).
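For independent, identically distributed demands this consolidation effect is easy to quantify; a textbook illustration with made-up numbers (not taken from the cited studies) shows that pooling n sites scales the total safety stock by 1/√n:

from math import sqrt

k, sigma_site, n = 2.05, 200.0, 4       # safety factor, per-site demand std dev, sites
decentralized = n * k * sigma_site      # total safety stock with one stock per site
centralized = k * sigma_site * sqrt(n)  # pooled: std dev of the sum of n iid demands
print(decentralized, centralized)       # 1640 vs. 820, i.e. halved for n = 4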
We have now discussed three interdependent areas that are usually treated in-
dividually. They are all part of the interpretation of the planning environment to
generate replenishment orders, and may consequently affect the performance of the
system. The current approach to inventory control does generally not consider it as
a single system, but as separate methods (e.g. Axsäter 2006, Waters 2003, Matts-
son and Jonsson 2003 and Vollmann 2005). An exception is Higgins (1976), who
describes inventory control as a process (see Fig.9.1), but does not detail how in-
formation flows through the model, nor does he isolate the functions of forecasting,
safety stock sizing and inventory control.
Looking at Higgins’ model, it is easy to realize that corruption of data occurring
in an operation or between operations will cause the incorrect data to affect subse-
quent operations (Ashby, 1957). When theoretical models are applied to scenarios
that follow the assumptions of the models, this is not an issue; but when a model is
applied to a scenario it is not designed for, data corruption ensues. Applied to in-
ventory control, this may mean that a simple forecast or a simple inventory control
method is applied in an environment that does not reflect the method’s assumptions

(The popularity of using simple methods, like the reorder point method, is shown
in Jonsson and Mattsson 2006 and Ghobbar and Friend 2004). When a method’s
assumptions are unmet, its performance may be difficult to predict. The scenario
of using theoretically improper methods is not unlikely, as businesses may want to
utilize inventory control methods that are simple to manage, such as the reorder
point system, even when the planning environment would require a dynamic lot-
sizing method such as the Wagner-Whitin, Part-period or the Silver-Meal algorithm
(Axsäter, 2006). In the same fashion, simple forecasting methods may be applied
to complex demand patterns to simplify the implementation and management of the
forecasts.

Fig. 9.1 Higgins’ inventory control process model (Higgins, 1976)

To understand how a method will respond to an environment it was not designed
for, it is necessary to understand the entire process, from planning environment to
measurement of results. As it may be difficult to predict how a system based on a
required type of input will react to unsuitable data, a model of the system may help
to give insight into the system’s performance. In his law of requisite variety, Ashby
(1957) states that a regulatory system must have at least as much variety as its input
to fully control the outcome; applied to the inventory control process, this means
that all aspects of a system must be modeled to get an accurate result. Inventory
control systems consist of three types of methods: forecasting, safety stock sizing as
well as order timing and sizing (Axsäter, 2006). Though there are many individual
methods, only one method of each type may be used in an inventory control system
for a single stock-keeping unit.
In this research a framework of an integrated inventory control process has been
developed, covering all relations from planning environment to performance mea-
sures. The design of the framework was based on a literature review of inventory
control theory and on the authors’ experience of the area. Based on this framework
a simulation model has been constructed considering demand aggregation, fore-
casting, and safety stock calculations, as well as reordering methods. The research
objective of this study is to provide an increased understanding of the following
research questions: (i) ‘How may the process of inventory control be described in
a framework that allows for any combination of methods to be used?’, and (ii) ‘Is
there any benefit of integrated inventory control when deciding appropriate inven-
tory control solutions?’.

The remainder of this paper is structured as follows: First, Section 9.2 integrates existing theory to describe a framework for designing inventory control models. Section 9.3 introduces empirical data from a company, whose planning environment was interpreted in Section 9.4 to develop a simulation model based on the framework. Section 9.5 describes the results of the simulations. Thereafter, Section 9.6 discusses the implications of the results, while Section 9.7 describes the conclusions that can be drawn from the study.

9.2 Framework for Integrated Inventory Control

The design of the framework is based on observing how inventory control meth-
ods operate, what input they require and what output they provide. An underlying
assumption for inventory control systems is that for any given time t there is an inventory level LL, which is reduced by demand D and increased by replenishment R. Another assumption is that time is divided into buckets as described by Pidd (1988), where for continuous systems the buckets are infinitesimal, and that for each bucket the lowest inventory level, which is sufficient to evaluate the effects of inventory control, is governed by Formula 9.1. The relationship between these factors has been deduced
from the rules that material requirements planning is built on (Vollmann, 2005).

$$LL_t = LL_{t-1} + R_{t-1} - D_t \qquad (9.1)$$

where $LL_t$ = lowest inventory level at time $t$, $R_{t-1}$ = replenishment quantity occurring before $t$, and $D_t$ = demand during $t$.

Formula 9.1 dictates how transactions of any system placed in the framework will
operate. It considers replenishment to occur between time buckets, meaning that it
is sufficient to monitor the lowest inventory level to manage inventory transactions.
Information such as service levels, inventory position and the highest stock level
may be calculated from the lowest inventory level. The formula governs the inven-
tory transactions of any inventory control system, and must be represented in any
inventory control application. All other parts of an application may vary, either de-
pending on the planning environment in which an inventory control system is used,
or on the design of the system. Fig. 9.2 shows the framework, which starts with the
planning environment and ends with a measurement of the system’s performance.
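A minimal sketch of how Formula 9.1 can drive the transaction layer of such a model (hypothetical names; lost sales rather than backlogging are assumed, one of the choices the planning environment dictates):

def simulate_buckets(demand, orders, lead_time, start_level):
    """Roll Formula 9.1 forward over discrete time buckets and measure fill rate.

    demand[t] is the demand during bucket t; orders[t] is the quantity ordered
    at t, which arrives between buckets t + lead_time - 1 and t + lead_time.
    """
    n = len(demand)
    replenishment = [0.0] * (n + lead_time)
    for t, quantity in enumerate(orders):
        replenishment[t + lead_time] += quantity  # becomes R_{t-1} on arrival

    on_hand, lowest_levels, served = start_level, [], 0.0
    for t in range(n):
        on_hand += replenishment[t]        # replenishment occurs between buckets
        lowest = on_hand - demand[t]       # LL_t = LL_{t-1} + R_{t-1} - D_t
        lowest_levels.append(lowest)
        served += min(on_hand, demand[t])  # units serviceable from stock
        on_hand = max(lowest, 0.0)         # lost sales: no negative stock carried over
    return lowest_levels, served / sum(demand)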
The planning environment comprises the characteristics of all aspects that may
affect the timing/sizing decisions (Mattsson, 2004). For each time unit, the environ-
ment, which determines the distribution of demand, generates momentary demand
that is passed on to a forecasting method, to an inventory control method and to
actual transactions. The type of demand, which is dictated by the planning envi-
ronment, tells whether a backlog can be implemented or not, and what function that
may represent it (Waters, 2003). Forecasting is affected by past demand information
and the planning environment (Axsäter, 1991). The former is used to do time series
9 An Integrative Approach To Inventory Control 109

analysis, which is common practice in inventory control, while the latter may con-
cern other input, such as information that may improve forecasting or data needed
for causal forecasting. The environment may also tell of changes in the demand pat-
tern, which may necessitate adjusting forecasting parameters or changing the fore-
casting method. It is necessary to consider the aggregation level of data, as longer
term forecasts will have a low coefficient of variation, at the cost of losing forecast
responsiveness (Vollmann, 2005). Forecast data is necessary for inventory control
methods (forecasted mean values), and for safety stock sizing (forecast variability)
(Waters, 2003). Safety stock sizing is a method buffering against deviations from
the expected mean value of the forecast (Waters, 2003). The assumption of safety
stock sizing is that all forecasts are correct estimations of the future mean value of
demand; any deviations from the forecast are attributed as demand variability. This
effectively means that an ill-performing forecast simply detects higher demand vari-
ability than a good forecast. The sizing is also affected by the planning environment,
as the uncertain time determines the need for safety stock, as lead times also may
have variability, and as the environment will determine to what extent customers
accept shortages, or low service (Axsäter, 1991).
Inventory control methods rely on forecasts, on safety stock sizing and on the
planning environment. The safety stock is used as a cushion to maintain service,
while forecasts and data from the planning environment, which are ordering costs,
holding costs and lead times, are used to determine when and/or how much to order
(Vollmann, 2005). The actual balancing of supply, which comes from the replen-
ishment of the inventory, and demand, which is sales or lost sales, takes place as
inventory transactions. Measuring these transactions gives an understanding of how
well an inventory control system performs for the given planning environment (Wa-
ters, 2003).

9.3 Empirical Data

Data was collected from a local timber yard, currently not using an inventory control
policy. Existing functionality for the reorder point method and for the periodic order
quantity method allowed for these methods to be deployed at low cost. The issue
was whether the methods could cope with the fluctuations in demand, as trend and
seasonal components were assumed to exist. Based on an analysis of sales data,
the demand for timber was found to be seasonal, but with no trend component.
This information was used to generate a demand function, based on the normal
distribution. The purpose of the demand function was to allow the simulation model
to run several times (200 independent simulations of three consecutive years were
run, with random seeds for each simulated day). Real demand, as well as a sample
of simulated demand, is shown in Fig. 9.3.
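A sketch of such a demand generator (the ±20% sinusoidal seasonal profile below is an assumption for illustration; the study fitted the seasonal shape from actual sales data):

import random
from math import sin, pi

mu_day, sigma_day = 156.0, 37.0    # daily demand parameters, cf. Table 9.1

def daily_demand(day, seed=None):
    """Draw one day's demand from a normal distribution with a seasonal mean."""
    rng = random.Random(seed)
    seasonal_mean = mu_day * (1.0 + 0.2 * sin(2.0 * pi * day / 365.0))  # assumed shape
    return max(0.0, rng.gauss(seasonal_mean, sigma_day))

three_years = [daily_demand(d, seed=d) for d in range(3 * 365)]  # one replication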
Demand characteristics are shown in Table 9.1 and other parameters pertaining
to the planning environment are shown in Table 9.2. As transport costs were consid-
ered to be semi-fixed rather than variable, the reordering cost is valid for the reorder

Fig. 9.2 Framework describing relations within inventory control systems

quantity used. Increasing the order quantity was not a cost-effective option. Stock
out costs were not considered, as the consequences of stock outs are hard to mea-
sure; not only are sales lost, there is also the possibility of competitors winning the
sale, and of losing customers, as they cannot find what they need.
Lead times were considered as fixed, as no information on delivery timeliness
was available. The expected fill rate (fraction of demand serviceable from stock)
for the reorder point method was 99%, while that for the periodic order quantity
method would be 98% and for the lot-for-lot method would be 96% (calculated

Fig. 9.3 Simulated and actual monthly demand

Table 9.1 Demand characteristics

Demand   Year    Month   Day
μ        56316   4693    156
σ        710     205     37

Table 9.2 Environmental parameters

Parameter               Value
Lead time (days)        7
Reordering cost (SEK)   3200
Holding rate (%)        20
Unit cost (SEK)         4.49
Order quantity (ROP)    6500
Order interval (POQ)    45
Order interval (L4L)    7

using the loss function, based on the standard deviation, as described by Axsäter, 2006).
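A sketch of that calculation, following the standard loss-function approach; the reorder point below is an assumed value for illustration, since the text reports only the order quantity in Table 9.2:

from math import sqrt
from scipy.stats import norm

mu_day, sigma_day = 156.0, 37.0        # daily demand, Table 9.1
lead_time = 7                          # days, Table 9.2
mu_L = mu_day * lead_time              # mean demand over the lead time
sigma_L = sigma_day * sqrt(lead_time)  # std dev over the lead time (independent days)

def G(k):
    """Standard normal loss function."""
    return norm.pdf(k) - k * (1.0 - norm.cdf(k))

def fill_rate(reorder_point, order_quantity):
    """Expected fill rate of a reorder point policy with normal lead-time demand."""
    k = (reorder_point - mu_L) / sigma_L
    return 1.0 - sigma_L * G(k) / order_quantity

print(fill_rate(reorder_point=1150.0, order_quantity=6500.0))  # reorder point assumed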

9.4 Simulation Model

To test how the framework could be applied to a real-world scenario, a simulation
model was constructed to evaluate inventory control solutions considered by a lo-
cal timber yard. To support the inventory control systems, some simply managed
forecasting systems were chosen. The complete selection of methods, put into the
context of the framework is shown in Fig. 9.4.

Fig. 9.4 Methods placed in the framework

All methods were verified against theory by testing whether the method implemen-
tations gave the values that theory dictates. For the reorder point method, the re-
order point was raised by days of forecasted demand, to prevent undershoot, as
described by Mattsson (2007). Several forecast methods were considered, and the
actual choice of forecast for this case was based on the mean absolute deviation.

Bias was calculated to see whether a forecasting method followed the mean of de-
mand. The seasonally adjusted moving average (Waters, 2003) was chosen as the
preferred method, as it proved to be nearly as accurate as Holt-Winters (Axsäter,
1991), while not requiring as careful calibration.
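One plausible form of that method, reconstructed from Waters (2003) rather than taken from the authors' implementation (the seasonal indices and window length are assumptions): deseasonalize the recent history, average it, and reseasonalize for the month being forecast.

def seasonal_moving_average(history, indices, window=3):
    """Forecast next month's demand with a seasonally adjusted moving average.

    history - past monthly demands, oldest first (month 0 = January)
    indices - 12 seasonal indices with mean 1.0; indices[m] applies to month m
    window  - number of recent months to average (assumed value)
    """
    n = len(history)
    deseasonalized = [history[t] / indices[t % 12] for t in range(n - window, n)]
    base = sum(deseasonalized) / window   # level estimate with the season removed
    return base * indices[n % 12]         # reseasonalize the forecast month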

Table 9.3 Summary of forecast errors

Type       EXP    MA     HOLT   H-W   S-MA
MAD        1281   1189   1267   179   263
Bias (%)   0.9    0.3    0.5    0.1   -0.5

Forecasts were monthly, and predicted the demand for the following month. The
forecast value was multiplied by 1.5 to reflect an economic inventory cycle time of
45 days. This simplification was done to see how the system would react to system-
atic design errors.

9.5 Simulation Results

To investigate how seasonality affects the performance of simple planning methods,
variations of the presented demand pattern were simulated. In the first, demand was
constant (Case 1); in the second, variance was added (Case 2), while in the third,
seasonality (±20%) was introduced (Case 3). For the different cases a moving aver-
age forecast was used, adjusted for seasonality. Ignoring safety stock altogether, the
measures used for the simulations were fill rate and inventory cycle service level.
It should be noted that lot-for-lot has a different inventory cycle from the other re-
ordering mechanisms, meaning that its service levels cannot be directly compared
to them. Fig. 9.5 shows the results of the simulation runs.
What can be seen is that inventory cycle service is higher for the periodic order
quantity method than for the reorder point method. However, the fill rate shows that
the reorder point method is better at meeting demand. Together, the measures tell
that the periodic order quantity method has fewer, but greater stock-outs than the
reorder point method. Seasonality affected the fill rate of the periodic order quantity method most, while the reorder point method was hardly affected.
The final situation added the seasonal pattern of the original data, and considered
another inventory control solution, namely lot-for-lot. The result is shown in Fig. 9.6, which plots the fill rate and the holding cost incurred against the theoretical fill rate if no seasonality or forecast error were present. The periodic order quantity system shows a considerably worse fill rate than the reorder point system. The lot-for-lot system has a much lower holding cost than the other two methods due to more frequent orders; however, safety stock accounts for a larger fraction of the total holding cost than for the other methods, leading to higher safety stock costs per stored
unit. Observing the graphs, the lot-for-lot method is the only method where the mea-

Fig. 9.5 Cycle service levels and fill rates

sured cost/service relationship does not intersect with the theoretical function. For
the methods with longer order cycles, the measured cost/service relationship shows
a much flatter curve than expected, indicating that these functions will require more
safety stock than expected to improve fill rates.

9.6 Discussion

The completed simulations indicate that the fill rate of the periodic order quantity
method suffers when the seasonal demand pattern is introduced, while the reorder
point method can maintain the same fill rate as if no seasonality were present. This
is a result of the nature of the two methods, where variability affecting the reorder
point method will affect the time of ordering, while the periodic order quantity
method, with fixed ordering times, cannot regulate order timing to prevent stock
outs. Instead, it must let the inventory level take the full effect of any variability.
Conversely, the effect of variability on the reorder point method is that the resulting
order interval may not be economic.
When comparing the methods used in the simulation using the framework, we
find that the reorder point method is superior both concerning holding costs, and fill
rate. What system is preferable depends on whether suppliers can manage to deliver
varying quantities (up to three times the average, both for periodic ordering, and for
lot-for-lot) or at varying intervals.

Fig. 9.6 Total inventory costs and fill rates



The difference between measured and theoretical fill rate demonstrated by the
periodic order quantity shows how inventory control methods not designed for a
certain planning environment can be affected. The use of a monthly forecast not
representing the next inventory cycle may also have contributed to the low fill rate.
The simulation based on the framework helped give insight into how the inventory
control process would react to the planning environment. It showed that a large
safety stock would be required if the periodic order quantity were to be used, as the
periodic order quantity method undershot performance predictions much more than
the reorder point method. If applied over multiple products, the framework can tell
if consolidation using the sensitive periodic order quantity system is less costly than
the reorder point system. Given that the periodic order quantity system has a 100%
uncertain time (Axsäter, 1991), it may be used as a benchmark in simulations, as
variability and problems caused by poor design of the process always are reflected
in the fill rate.
In the future we plan to enlarge our simulation experiments by incorporating different kinds of demand (continuous and discontinuous) as well as new meth-
ods used in forecasting and ordering. Recent research has shown that autoregressive
forecasting methods outperform others in situations where demand is fluctuating
widely and follows a “life-cycle” pattern (Datta et al, 2009). Similarly, purchasing
order method research argues that not a single ordering method should be used (so
basically it is not a question, which method is the best one, but which one best
suits the environment), but usually a combination of different purchasing methods
should be incorporated in ERP systems during the entire life-cycle of a product
(Hilmola et al, 2009). However, if volumes are low, then even economic order quan-
tities/reorder point systems, and periodic order policies should be abandoned; a lot
for lot policy might produce best results in these situations (Hilmola et al, 2009).
Thus, much depends from the operations strategy (order or stock based system),
and from the amount of time, which customers are willing to wait for a delivery to
reach their facilities (Hilletofth, 2008).

9.7 Conclusions

Treating inventory control and forecasting as separate activities, while not acknowledging how forecasting and its application affect inventory control, may lead to incorrect assessments of a system's performance in a certain planning environment. Approaching inventory control as a process, starting with a planning environment and ending with a measurement of the system's performance, shows that all activities are related and that the end result may be affected by the activities or by the way they are connected. This paper uses a simulation model to show how the use of forecasts and complexity in demand patterns affect the performance of the reorder point system and the periodic order quantity system. Simulations show that performance is generally worse than expected, and that periodic ordering consistently shows a greater susceptibility both to variability and to design errors, due to its inability to buffer against these by changing the reordering interval. This weakness also appears in lot-for-lot systems, as they are based on periodic ordering.

References

Ashby WR (1957) An Introduction to Cybernetics. Chapman & Hall, London, UK
Axsäter S (1991) Lagerstyrning. Studentlitteratur, Lund, Sweden
Axsäter S (2006) Inventory Control. Springer Verlag, New York, USA
Das C, Tyagi R (1999) Effect of correlated demands on safety stock centralization:
Patterns of correlation versus degree of centralization. Journal of Business Logis-
tics 20:205–214
Datta S, Granger CWJ, Graham DP, Sagar N, Doody P, Slone R, Hilmola O (2009) Forecasting and risk analysis in supply chain management: GARCH proof of concept. ESDCEE, School of Engineering, URL http://dspace.mit.edu/bitstream/handle/1721.1/43943/GARCH%20Proof%20of%20Conept%20%20Datta Granger Graham Sagar Doody Slone Hilmola %202008 December.pdf?sequence=1
Ghobbar A, Friend C (2004) The material requirements planning system for aircraft
maintenance and inventory control: A note. Journal of Air Transport Management
10(3):217–221
Higgins J (1976) Information systems for planning and control: Concepts and cases.
Edward Arnold, London, UK
Hilletofth P (2008) Differentiated Supply Chain Strategy Response to a Frag-
mented and Complex Market. PhD thesis, Chalmers University of Technology,
Department of Technology Management and Economics, Division of Logistics
and Transportation, Göteborg, Sweden
Hilmola OP, Ma H, Datta S (2009) A portfolio approach for purchasing systems:
Impact of switching point. Massachusetts Institute of Technology, no. ESD-WP-
2008-07 in Working Paper Series, URL http://esd.mit.edu/WPS/2008/esd-wp-
2008-07.pdf
Jonsson P, Mattsson S (2006) A longitudinal study of material planning applications
in manufacturing companies. International Journal of Operations & Production
Management 26(9):971–995
Leknes H, Carr C (2004) Globalisation, international configurations and strategic
implications: The case of retailing. Long Range Planning 37(1):29–49
Mattsson S (2004) Logistikens termer och begrepp. PLAN, Stockholm, Sweden
Mattsson S (2007) Inventory control in environments with short lead times. Interna-
tional Journal of Physical Distribution & Logistics Management 37(2):115–130
Mattsson S, Jonsson P (2003) Produktionslogistik. Studentlitteratur
Pidd M (1988) Computer simulation in management science. John Wiley & Sons,
Hoboken, USA
Rantala L, Hilmola O (2005) From manual to automated purchasing. Case: Middle-sized telecom electronics manufacturing unit. Industrial Management & Data Systems 105(8):1053–1069
Vollmann T (2005) Manufacturing planning and control systems for supply chain
management. McGraw-Hill
Waters D (2003) Inventory Control and Management. Wiley, Hoboken, USA
Zinn W, Levy M, Bowersox D (1989) Measuring the effect of inventory centraliza-
tion/decentralization on aggregate safety stock: The ’square root law’ revisited.
Journal of Business Logistics 10(1):1–13
Chapter 10
Rapid Modeling of Express Line Systems for
Improving Waiting Processes

Noémi Kalló and Tamás Koltai

Abstract In time-based competition, one of the main management objectives in services is to decrease customers' waiting. Accordingly, the search for designs of queuing systems which reduce waiting has become a major concern of managers. A frequently used solution is the application of express lines. The operation of express line systems can be optimized based on different objective functions. The minimization of average waiting time and the reduction of the variance of waiting times are the classical objectives for operations managers. According to perception management, however, the perceived waiting times and the satisfaction generated by waiting should be considered as well. To analyze the effects of different management objectives on the operation of express line systems, a numerical and a simulation model were developed. The study of a superstore shows that the rapid numerical model and the time-consuming simulation model provide the same result when the parameter values ensuring optimal operation must be determined. Consequently, in these problems, simulation can be substituted efficiently by rapid modeling.

10.1 Introduction

Companies that are successful in cost- and quality-based competition look for other factors that can help them gain further competitive advantage. Therefore, time-based competition is spreading among leading companies. Time has turned into a strategic resource and, as a consequence, its importance has become equivalent to the significance of money, productivity, and innovation (Stalk, 1988). That is, competitiveness nowadays requires balancing quality, cost, and time.

Noémi Kalló
Department of Management and Corporate Economics, Budapest University of Technology and Economics, Hungary, 1111 Budapest, Műegyetem rkp. 9. T. ép. IV. em., e-mail: kallo@mvt.bme.hu

Tamás Koltai
Department of Management and Corporate Economics, Budapest University of Technology and Economics, Hungary, 1111 Budapest, Műegyetem rkp. 9. T. ép. IV. em., e-mail: koltai@mvt.bme.hu
In a time-based competition environment, one of the main service management objectives is reducing waiting times. The simplest way to decrease customer waiting is to use additional servers (Hillier and Lieberman, 1995). This kind of waiting time reduction is, however, quite expensive. Consequently, the search for the best configuration of waiting lines and service facilities has become a major concern of service managers (Hill et al, 2002).
A queuing system configuration frequently used to reduce the waiting times of some customers is the application of express checkouts. When express checkouts are applied, two customer groups are created: the first group consists of customers buying few items, while all other customers belong to the second group. Customers buying more items than a certain number have to use the regular checkouts. Customers buying a number of items less than or equal to this number can join the express lines. The number of items that controls line-type selection is called the limit value.
Our analyses revealed that one of the parameters which influence the waiting pro-
cess when express lines are applied is the limit value. With different limit values, dif-
ferent waiting characteristics can be achieved. A suitable limit value can minimize
a particular waiting characteristic. However, instead of the specific measures, the
waiting process as a whole should be optimized. An important management objec-
tive connected to express line systems is to determine a limit value which optimizes
the performance of the queuing system. This article presents tools to determine this
value by reviewing some operational issues related to express line systems.
The article is structured as follows. First, the tools developed for analyzing ex-
press line systems are presented. An analytical model (based on the results of queu-
ing theory) and a simulation model (working according to the real process of a
queuing system) are used for the analyses. For checking their validity, the real data
of a do-it-yourself superstore was used. Next, different management objectives re-
lated to the service-profit chain, stretching from waiting minimization to satisfaction
maximization, are discussed. Later, the results based on the case of the superstore
are presented to compare the effects of the different management objectives on the
performance of the system. Finally, the main conclusions are summarized.

10.2 Tools for Analyzing Express Line Systems

Express line systems, like most queuing problems, can be modeled both analytically and empirically. Analytical models are based on the results of queuing theory. Generally, some existing analytical models are used to approximate the operation of the queuing system. These models are quite simple to use; however, in the case of complex queuing systems, they give only a rough estimation of the real operation. For analyzing such problems, new analytical models must be developed or simulation models can be used (Hillier and Lieberman, 1995). Simulation modeling requires more time and resources; however, quite special characteristics of queuing processes can be modeled in this way. For our analyses, an analytical and a simulation model were created as well.

10.2.1 Analytical Approach

Queuing systems with express lines have several special characteristics which make their analytical modeling difficult. The most important peculiarity is that express lines are generally used in supermarkets, where many service facilities are located and each has its own separate waiting line. Analyzing this kind of queuing system with the models of queuing theory presents difficulties because there is no existing analytical model which properly describes such a system. In this case, two analytical models can be used as approximations: one consisting of many service facilities with a common waiting line, and another containing many independent queuing systems, each having one service facility with its own separate queue.

If analytical formulae have to be used for the whole queuing system, containing k checkouts and k waiting lines, the following two approaches can be used:
One-common-line approach. Here the queuing system is modeled as if all checkouts had one common queue. A G/G/k model can be applied or, according to the system characteristics, a special type of it (for example M/G/k or M/M/k). If there are E express and R regular checkouts, then a model with k=E and another with k=R are required.

Modeling the checkout system as a queuing system with one common line for all checkouts is an optimistic approach. It underestimates the average waiting time by assuming optimally efficient queue selection by customers, which minimizes their waiting times. That is, it supposes that customers always choose the queue in which their waiting time will be shortest and that, if their waiting line moves too slowly, they jockey among the queues. In some cases, however, customers cannot behave in the most efficient way. If there are idle checkouts but jockeying to these lines is difficult, or jockeying does not provide considerable time savings, then customers do not change lines. Consequently, the one-common-line approach provides a best-case estimate of the operation of the queuing system.
Independent-queuing-system approach. In this case, k independent G/G/1 models
are applied or, according to the system characteristics, other special models (for
example M/G/1 or M/M/1). If there are E express and R regular checkouts, then
E+R models are required.
Modeling the checkouts of a supermarket as independent queuing systems gives a pessimistic view of waiting, since it overestimates the average waiting time. Waiting lines are, however, generally not independent of each other, which can help to reduce the average waiting time. First, most arriving customers try to join the shortest queue. Second, some customers jockey from slowly moving lines to fast moving ones. If, for example, some checkouts become idle, customers waiting in line try to jockey to the idle checkouts. That is, queues are not independent of each other. In conclusion, the independent-queuing-system approach provides a worst-case estimate of the operation.
Which of the presented approaches gives the more accurate approximation of actual operation depends on the system characteristics. If customers estimate the workloads of servers before selecting a waiting line, the one-common-line approach gives a better approximation (Rothkopf and Rech, 1987). For various reasons, however, customers do not always select the queue with the minimal workload. For example, when workloads cannot be observed, the best decision cannot be made. In this case, the independent-queuing-system approach is more suitable.
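The two bounds are cheap to compute. Below is a minimal numerical sketch under Markovian (M/M/·) assumptions, with the one-common-line case evaluated by the Erlang C formula and the independent case by k separate M/M/1 queues each receiving λ/k. The arrival rate is taken from the case described below; the mean service time of roughly 1.05 minutes follows from the regression results reported in Sect. 10.2.3; everything else is an assumption of the sketch.

import math

def erlang_c(lam, mu, k):
    # probability of delay in an M/M/k queue (Erlang C formula)
    a = lam / mu                      # offered load
    rho = a / k
    s = sum(a ** n / math.factorial(n) for n in range(k))
    tail = a ** k / (math.factorial(k) * (1 - rho))
    return tail / (s + tail)

def wq_common_line(lam, mu, k):
    # best case: one common queue feeding k servers
    return erlang_c(lam, mu, k) / (k * mu - lam)

def wq_independent(lam, mu, k):
    # worst case: k independent M/M/1 queues, each receiving lam / k
    li = lam / k
    return li / (mu * (mu - li))

lam = 95 / 60.0                       # customers per minute
mu = 1 / 1.05                         # service rate (mean service time ~1.05 min)
k = 5
print("one common line:   %.3f min" % wq_common_line(lam, mu, k))
print("independent lines: %.3f min" % wq_independent(lam, mu, k))

The actual system operates between these two estimates; how close it is to either bound depends on how well customers observe workloads and how freely they jockey.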
To create a numerical model for analyzing express line systems, the approaches discussed above were used as best-case and worst-case estimations of waiting time. In Fig. 10.1, a part of the numerical model can be seen, with the data of the do-it-yourself superstore introduced in Sect. 10.2.3.

Fig. 10.1 Numerical model

Introducing express lines into a queuing system requires the consideration of several operational issues. One of the main questions in developing an express line system is determining the limit value which optimizes the operation of the system. To make this decision, the effect of all possible limit values must be examined. As the limit value determines which customers use the express and which the regular checkouts, the characteristics of the customer groups generated by every possible limit value must be determined for this analysis. These main characteristics are the arrival rates, the average service times, and the variances of service times. Before express checkouts are introduced, this information is unavailable; that is, it must be determined using the data of the existing system.
For building the analytical and simulation models, only information that can be determined without actually introducing the express lines was used. The data can be obtained by observing and measuring the operation of the existing queuing system without express lines. Therefore, decisions about the implementation of express lines can be made in advance, and the possible effects on customer waiting can be forecasted.
For determining the service characteristics of different customers, the relationship between the number of items bought and the service times must be analyzed. Using this relationship, the average service time and the variance of service times can be determined for customers buying a certain number of items. With the help of the distribution function of the number of items bought, the average arrival and service rates and the variances of service times can then be calculated for all possible customer groups as well (for details see Koltai et al, 2008).
The model works in the following way. Based on the main characteristics of the existing queuing system (in italics), the special characteristics of the express line systems with different limit values are determined. With these parameters, using the formulae of the M/G/1 and M/G/k queuing models, the average waiting times can also be calculated (typed boldface). Knowing all possible waiting times, the smallest one must be selected (framed). The minimal average waiting time, eventually, determines the optimal limit value. Analyses with different parameter values showed that the waiting time as a function of the limit parameter has a distinct minimum. That is, an optimal limit value can be determined for every express line system (Fig. 10.2).

Fig. 10.2 Waiting time as a function of the limit parameter
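A condensed sketch of this scan is given below, using the independent-queuing-system (worst-case) view with the Pollaczek-Khinchine M/G/1 formula. The service-time intercept, per-item time and mean basket size are those reported for the case in Sect. 10.2.3; the truncation point of the geometric distribution, the split into two express and three regular checkouts, and the neglect of the regression scatter around the fitted line are simplifying assumptions of the sketch.

a, b = 0.5463, 0.1622          # service time = a + b * items (minutes)
mean_items, nmax = 3.089, 50   # geometric mean; truncation point assumed
lam = 95 / 60.0                # arrivals per minute
E, R = 2, 3                    # express and regular checkouts (S = 5)

p = 1.0 / mean_items           # geometric parameter (mean = 1/p)
raw = [p * (1 - p) ** (n - 1) for n in range(1, nmax + 1)]
probs = [x / sum(raw) for x in raw]

def group(ns):
    # arrival rate and first two service-time moments of a customer group
    w = sum(probs[n - 1] for n in ns)
    es = sum(probs[n - 1] * (a + b * n) for n in ns) / w
    es2 = sum(probs[n - 1] * (a + b * n) ** 2 for n in ns) / w
    return lam * w, es, es2

def wq(lam_g, servers, es, es2):
    # Pollaczek-Khinchine M/G/1 wait; arrivals split over independent servers
    li = lam_g / servers
    rho = li * es
    return li * es2 / (2 * (1 - rho)) if rho < 1 else float("inf")

def avg_wait(limit):
    lam_e, es_e, es2_e = group(range(1, limit + 1))
    lam_r, es_r, es2_r = group(range(limit + 1, nmax + 1))
    return (lam_e * wq(lam_e, E, es_e, es2_e) +
            lam_r * wq(lam_r, R, es_r, es2_r)) / lam

best = min(range(1, nmax), key=avg_wait)
print("optimal limit value:", best, "-> %.4f min" % avg_wait(best))

Scanning all limit values in this way is exactly the search that the framed minimum in the numerical model represents.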



10.2.2 Simulation Model

Express line systems have several special characteristics, and only a few of them can be taken into consideration with analytical models. For example, the managerial regulation which controls the use of checkouts can be built into analytical models. However, if more than one checkout is accessible to customers, their choice among them cannot be considered. The analytical models appropriate for describing express line systems either assume a uniform customer distribution among accessible waiting lines (independent-queuing-system approach) or do not deal with line selection at all (one-common-line approach).

A simulation model, considering several customer behavioral issues, was built for studying the operation of express line systems. The block diagram of the simulation model, created with Arena, the simulation software of Rockwell Automation, can be seen in Fig. 10.3.

Fig. 10.3 Simulation model

In the first, create block, customers are generated according to a stochastic arrival process. The assign block, based on a previously defined distribution function, determines the number of items bought by each customer. From this quantity, using their stochastic relationship, the service time of each customer is also calculated. The branch block creates two customer groups: one of them can use the express checkouts, while the other is directed to the regular lines. Customers entitled to use express checkouts buy no more items than the limit value. Customers in each group have to decide which line to choose. The rules forming the basis of this decision can be given in the pickq blocks. Next, the customer joins the selected queue and waits until the server is free and can be seized. At this point, the waiting process in queue ends. The waiting time is recorded by a tally block. The following branch block is needed for data collection and statistical analyses. The customer's route continues along the solid lines, while waiting time data of the same customer group are combined in tally blocks (along the dashed lines). As the service needs a specific amount of time, the customer is delayed. When service ends, the server is released and made free for the next customer. At this point the sojourn time ends, and it is recorded by a tally as well. After combining the different waiting time data, the customer can leave the system at the dispose block.
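A process-interaction script mirrors this block sequence closely. Below is a minimal sketch using the open-source SimPy library (an illustration, not the Arena model itself; the item distribution, service-time coefficients and limit value are simplified placeholders):

import random
import simpy

LIMIT = 2                    # limit value controlling line-type selection

def customer(env, checkouts, waits):
    items = random.randint(1, 10)          # stand-in for the fitted distribution
    service = 0.55 + 0.16 * items          # linear service-time model (minutes)
    group = "express" if items <= LIMIT else "regular"
    line = min(checkouts[group], key=lambda r: len(r.queue))  # pickq: shortest queue
    arrival = env.now
    with line.request() as req:
        yield req                          # queue, then seize the server
        waits[group].append(env.now - arrival)   # tally: waiting time in queue
        yield env.timeout(service)         # delay; server released on exit

def source(env, checkouts, waits):
    while True:
        yield env.timeout(random.expovariate(95 / 60.0))  # create: Poisson arrivals
        env.process(customer(env, checkouts, waits))

env = simpy.Environment()
checkouts = {"express": [simpy.Resource(env) for _ in range(2)],
             "regular": [simpy.Resource(env) for _ in range(3)]}
waits = {"express": [], "regular": []}
env.process(source(env, checkouts, waits))
env.run(until=8 * 60)                      # one simulated 8-hour day
for g, w in waits.items():
    if w:
        print(g, "mean wait: %.2f min" % (sum(w) / len(w)))

Jockeying between lines is deliberately omitted here to keep the sketch short.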

10.2.3 The Case Study of a Do-It-Yourself Superstore

For the analyses, the real data of a do-it-yourself superstore are used. In this store, five checkouts generally operate. Using the data provided by the checkout information system, the arrival rates for the different days and for the different parts of the day were estimated. For all periods, the Poisson arrival process is acceptable according to Kolmogorov-Smirnov tests. Based on Rényi's limiting distribution theorem and its generalizations, the arrival processes of the two customer groups can also be approximated with Poisson processes (Rényi, 1956; Szántai, 1971a,b).
The density function of the number of items bought by customers is also provided by the checkout information system. For describing it, a truncated geometric distribution with a mean of 3.089 was found acceptable by a chi-square test.
As the service times of customers cannot be obtained from any information system, they were measured manually. The relationship between the number of items bought and the service time was analyzed with regression analysis. A correlation coefficient of 0.777 supported the assumption of linearity. According to the results of the linear regression, service time has two parts: on average, the part independent of the number of items bought lasts 0.5463 minutes, and reading a bar code needs 0.1622 minutes per item. With linear regression, the standard deviations of these parameters and the service times of customers buying different amounts were determined as well (for details see Koltai et al, 2008).
Results presented in this article are valid for a midday traffic intensity with the arrival rate most characteristic for the store (λ = 95 customers/hour). According to the geometric distribution, customers generally buy only a few items. Therefore, two of the five working checkouts were considered to be express servers (S=5, E=2).
In the store in question, express lines have not been used yet. Therefore, the real queuing system could not be used to validate the simulation model. Consequently, the analytical models were used for checking the validity of the results. The fundamental simplifications applied in the analytical models were introduced into the simulation model. In the M/G/k simulation model, there is a common line for customers entitled to use express checkouts and another one for customers buying many items. In the M/G/1 simulation model, there are independent arrival processes for all of the checkouts, each with its own waiting line. The analytical and simulation results gained from the same type of model are quite close to each other; accordingly, they can be considered valid (Table 10.1).

Table 10.1 Analytical and simulation results (average waiting times)

                     Limit value
Model                L=1     L=2     L=3     L=4
Analytical M/G/k     0.0492  0.0417  0.0811  0.1563
Analytical M/G/1     0.3727  0.2755  0.3289  0.4803
Simulation M/G/k     0.0481  0.0431  0.0838  0.158
Simulation M/G/1     0.3155  0.2414  0.3244  0.4882

10.3 Objective Functions for Operating Express Line Systems

In time-based competition, the most important management objective in services is to decrease customer waiting. In line with the classical objective of operations management, the average waiting time is generally minimized by service managers. With the help of queuing theory, measures related to average waiting can easily be determined and then minimized. Our analyses showed, however, that as an effect of applying express checkouts, the average wait fluctuates considerably depending on the limit value (Fig. 10.2). With an erroneously determined limit value, not only the average waiting time but even the waits in express lines can be higher than in the original system.
The application of express checkouts, like other forms of server specialization, helps to make services more standard. Express checkouts do not necessarily reduce the average waiting time, but they decrease the fluctuation in the length of services and, consequently, reduce the standard deviation of waiting times as well. It can happen that customers in an express line have to wait the same amount of time as they would in a regular line, but they do not have to worry that a customer buying a huge amount will lengthen their waiting. Customers, being risk-averse, prefer (within limits) longer waiting with smaller fluctuations. The variance of waiting times can be determined from the statistical data of the simulation or, in simple cases, with the help of queuing theory.
The importance of reducing the variance of waiting times draws attention to the significance of human characteristics in service systems. According to the intention of perception management, instead of minimizing the average of objective waiting times, the average of subjective waiting times should be reduced. The perceived waiting time is known only by the customer who experienced it. As people do not perceive time linearly, there can be significant differences between the two values. Human time perception, according to the psychophysical literature, can be approximated by power functions (Stevens, 1957). To calculate perceived waiting times, after the values of some parameters have been determined, only the actual waiting times must be known; the perceived values can then easily be obtained by applying a suitable transformation.
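A sketch of such a transformation follows (the scale and exponent are illustrative placeholders to be fitted to customer data; they are not values from the study):

def perceived_wait(t, scale=1.0, exponent=0.9):
    # Stevens-type power-law transformation of an actual waiting time t
    return scale * t ** exponent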
The evaluation of waiting depends on several factors. The same perceived waiting time can generate satisfaction to different degrees in different people. One of the main factors which determine customer satisfaction related to waiting is the value of the service for which people are waiting (Maister, 1985). Customers in express lines buy only a few items; that is, they receive a service of lower value. Accordingly, their satisfaction will be lower even if they must wait the same time as customers in the regular lines. The relationship between waiting and satisfaction can be described with a suitable utility function. For the calculations, as a simplification of the expected utility model, a mean-variance (or two-moment) decision model can be used (Levy and Markowitz, 1979; Meyer, 1987). In this way, the transformation of waiting time into customer satisfaction, after the parameter values characteristic of the customers have been determined, can be performed based on measures which can easily be determined analytically or empirically.
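A corresponding sketch of a two-moment evaluation (the risk-aversion coefficient is an illustrative placeholder; in the spirit of the study it would be chosen separately for each customer group to reflect the value of the service):

def waiting_utility(waits, risk_aversion=0.5):
    # mean-variance (two-moment) evaluation of a waiting-time sample;
    # higher values correspond to more satisfied customers
    m = sum(waits) / len(waits)
    v = sum((w - m) ** 2 for w in waits) / len(waits)
    return -(m + risk_aversion * v)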

10.4 Optimization of the Waiting Process

Actual waiting, perceived waiting, and satisfaction related to waiting constitute a simplified service-profit chain. Improving any of them will result in a higher service level. The ultimate management objective should be the maximization of customer satisfaction. Determining customers' satisfaction related to waiting, however, requires thorough knowledge about the customers and, consequently, time-consuming and expensive analyses. The cost of such an analysis may exceed the benefits which can be obtained by a system operated according to a more sophisticated objective function.
Based on our analyses, it can be concluded, however, that there are no significant differences in the operation of an express line system whether it is operated according to any of the first three objective functions. The numerical data for a case which is most characteristic for the superstore analyzed are given in Table 10.2.

Table 10.2 The operation of the system with different objective functions and limit values

                                      Limit value
Objective                             L=1     L=2     L=3     L=4
Average waiting time                  0.3403  0.257   0.3125  0.4614
Standard deviation of waiting times   0.7374  0.5535  0.5757  0.7444
Average perceived waiting times       0.3288  0.255   0.31    0.4491

In Table 10.2, the optimal objective function values are typed in boldface. It can be seen that the same optimal limit value (L=2) is obtained independently of which objective function is used. This result has two consequences.

First, managers trying to optimize the operation of their queuing systems can use, aside from satisfaction maximization, any of the possible objective functions and will get the same result (optimal limit value). Moreover, in this way they will optimize (or at least improve) all of the measures mentioned.

Second, as the average waiting time can be optimized easily and quickly with analytical models, there is no need to use a more time-consuming and harder-to-manage simulation model.
It must be mentioned that there are situations in which the different objective functions determine different optimal limit values. Our analyses showed, however, that these limit values are adjacent, and that in these cases the waiting measures are nearly equal independently of which limit value is applied. Therefore, even if the different objective functions give different optimal solutions for the limit value, the different limits result in only slight differences in the waiting measures.

10.5 Conclusions

The application of express lines is a widely used management tool for waiting process improvement. One of the main parameters of express line systems is the limit value which controls checkout-type selection. Its value must be selected carefully because introducing express lines with an improper limit value can increase customer waiting significantly. Therefore, determining the optimal limit value, which minimizes average waiting time, is one of the most important tasks of managers operating express lines.

For determining the optimal limit value, special tools are required. Our analyses show, however, that simple analytical models are accurate enough for practical applications. Although they give only a rough approximation of the operation and are appropriate for analyzing only simple waiting measures and management objectives, they can be used to determine the optimal limit value. With analytical models, the time, money and knowledge needed for developing and running simulation models can be saved. That is, analytical models provide an effective rapid modelling tool for service managers.
It must also be mentioned that, besides the limit value, there is another parameter which managers can use to influence waiting time without cost consequences: the ratio of express to regular checkouts (when the total number of checkouts is constant). If optimal limit values are used, waiting time cannot be decreased significantly with this parameter; it is therefore recommended to use it to keep the limit value constant when the total number of checkouts is changed for some reason.
The waiting-time-reducing effect of express lines is limited. Nevertheless, express lines are popular among customers. Therefore, to reveal all consequences of applying express lines, their effects on the distribution of waiting among the different customer groups and, accordingly, on satisfaction related to waiting times must be analyzed as well. These are topics of our future research.

References

Hill A, Collier D, Froehle C, Goodale J, Metters R, Verma R (2002) Research opportunities in service process design. Journal of Operations Management 20(2):189–202
Hillier F, Lieberman G (1995) Introduction to operations research. McGraw-Hill
Koltai T, Kalló N, Lakatos L (2008) Optimization of express line performance:
numerical examination and management considerations. Optimization and En-
gineering pp 1–20
Levy H, Markowitz H (1979) Approximating expected utility by a function of mean
and variance. The American Economic Review 69(3):308–317
Maister D (1985) The psychology of waiting lines. In: Cziepel J, Solomon M, Sur-
prenant C (eds) The Service Encounter, Lexington Books
Meyer J (1987) Two-moment decision models and expected utility maximization.
The American Economic Review 77(3):421–430
Rényi A (1956) A Poisson folyamat egy jellemzése (A possible characterization of the Poisson process). MTA Mat Kut Int Közl 1:519–527
Rothkopf M, Rech P (1987) Perspectives on queues: combining queues is not always
beneficial. Operations Research 35(6):906–909
Stalk G (1988) Time–the next source of competitive advantage. Harvard Business
Review 66(July-August):41–51
Stevens S (1957) On the psychophysical law. Psychological Review 64(3):153–181
Szántai T (1971a) On limiting distributions for the sums of random number of ran-
dom variables concerning the rarefaction of recurrent process. Studia Scientiarum
Mathematicarum Hungarica 6:443–452
Szántai T (1971b) On an invariance problem related to different rarefactions of re-
current processes. Studia Scientiarum Mathematicarum Hungarica 6:453–456
Chapter 11
Integrating Kanban Control with Advance
Demand Information: Insights from an
Analytical Model

Ananth Krishnamurthy and Deng Ge

Abstract This paper investigates the benefits of integrating advance demand information (ADI) with the Kanban Control System (KCS). ADI shared by customers is integrated into production release policies, thereby enabling simultaneous improvements in service levels and reductions in inventory and costs. Under Markovian assumptions, an exact analysis of the production system is carried out. Through numerical studies, the system performance is compared to that obtained from the classical Kanban control system and the base stock system with ADI.

11.1 Introduction

Recent advances in information technology have led to the belief that sharing advance demand information (ADI) with manufacturers will allow customers to receive better service from their manufacturing suppliers. Manufacturers also expect that this ADI can be effectively integrated into their production inventory control systems (PICS) to reduce lead times and inventories. This paper investigates the effect of integrating ADI into Kanban Control Systems (KCS). Using analytical models, we quantify the improvements obtained in system performance when the KCS is integrated with ADI.

Ananth Krishnamurthy
University of Wisconsin-Madison, Department of Industrial and Systems Engineering, 1513 Uni-
versity Avenue, Madison, WI 53706, USA,
e-mail: ananth@engr.wisc.edu
Deng Ge
University of Wisconsin-Madison, Department of Industrial and Systems Engineering, 1513 Uni-
versity Avenue, Madison, WI 53706, USA,
e-mail: dge@wisc.edu


The effect of ADI on PICS has been the focus of several studies. Survey articles such as Uzsoy and Martin-Vega (1990) provide an overview of the prior research on kanban controlled systems. A number of researchers, such as Philipoom et al (1987) and Di Mascolo and Frein (1996), have studied various aspects of the design of a classical kanban controlled system. Other researchers have proposed and analyzed the performance of variations of the KCS. For instance, Dallery and Liberopoulos (2000) introduce the Extended Kanban Control System (EKCS). They show that the EKCS is a combination of the classical KCS and the Base Stock (BS) system. They also show that the EKCS provides the flexibility to decouple design decisions related to production capacity and base stock levels. Buzacott and Shanthikumar (1993) introduce the Production Authorization Control (PAC) system that incorporates advance order information from customers. Karaesmen et al (2002) analyze a discrete-time make-to-stock queue and investigate the structure of the optimal policy and associated base stock levels. Liberopoulos and Koukoumialos (2005) analyze a system operating under the KCS with ADI and conduct simulation experiments to investigate tradeoffs between base stock levels, number of kanbans, and manufacturing lead times. The analytical model discussed here is a first step towards models that could provide an understanding of how system performance can be improved further by integrating ADI with the kanban controlled system. The model presented in this paper is for a single-stage system. We compare system performance with that obtained under the classical KCS and the BS system with ADI. Based on the Markov chain analysis, we show that the integration of the KCS with ADI results in superior system performance, as the integration combines the best features of the KCS and base stock systems with ADI.

The remainder of the paper is organized as follows. Section 11.2 describes the
operation of a system operating under the KCS with ADI, followed by Section 11.3
that describes the detailed Markov chain analysis for the system. Section 11.4 com-
pares the performance of the different systems, and Section 11.5 summarizes the
insights.

11.2 Kanban Control System with ADI

This section describes the queuing network model of the KCS with ADI using the general framework provided in Liberopoulos and Tsikis (2003). The operational characteristics of the system are described in terms of the movement of activated orders, products, and free kanbans in the network. The model is composed of a single-stage manufacturing station (MFG), fork/join synchronization stations (FJ1, FJ2) and an order delay station (OD). Figure 11.1 shows a schematic of the system. We assume that customer orders arrive at the system according to a Poisson process with rate λ. However, each customer places their order LTD time units in advance of the due date. We call LTD the demand lead time and let τd = E[LTD] (the case of no ADI corresponds to LTD = 0). Note that the demand lead time is customer specified and differs from the planning lead time (LTS) that the manufacturing system uses for planning order releases for production. Note also that if sufficient ADI is available, the system might be able to meet customer demand with less finished goods inventory than that required in a system operating under the KCS without ADI. For instance, if E[LTD] > LTS, it is possible that the system operates in a make-to-order mode with minimal inventory. This paper focuses on the more interesting case wherein E[LTD] < LTS. Consequently, orders received from customers are immediately activated. However, they may not be released into the manufacturing system immediately, as they might wait in buffer BD1 for a free kanban to be available in queue FK. When a free kanban is available in FK, an activated order in BD1 and a free kanban are matched together and released into the manufacturing stage MFG, which consists of a single exponential server with mean service time 1/μs. After completing service, the product queues in the finished goods buffer FG. At buffer BD2, LTD time units after an order is placed, the customer arrives demanding a product. If a unit is available in finished goods, the demand is immediately satisfied. The kanban attached to the order is released and routed back to FK, where it is available to release another activated order into production.

Fig. 11.1 Queueing network model of the KCS with ADI

We assume that (i) the number of kanbans, K, in the system is fixed; (ii) demands that are not satisfied immediately are back-ordered; and (iii) the system maintains a target base stock level, Z, of finished products in FG. The factors affecting system performance are the demand and planning information, the target base stock level (Z), the number of kanbans (K), and the characteristics of the demand and manufacturing processes. The service times at the manufacturing station and the inter-arrival times of demands and orders are assumed to be independent. Since orders arrive at rate λ and the service rate of the manufacturing station is μs, we assume that the system utilization is ρ = λ/μs ≤ 1.

To analyze the dynamics of the system, we define the following quantities at time t: C(t) = the number of kanbans/parts at the manufacturing stage, F(t) = the number of free kanbans available at FK, P(t) = the number of pending orders waiting for free kanbans at BD1, I(t) = the number of finished items in FG, W(t) = the number of waiting orders in OD, and B(t) = the number of backorders in BD2. The dynamics of the KCS with ADI imply that the following flow conservation equations hold at any time t:

F(t) + C(t) + I(t) = K    (11.1)

P(t) + C(t) + I(t) = Z + W(t) + B(t)    (11.2)

The main performance measures of interest are (i) the average work in process, E[C]; (ii) the average finished goods inventory, E[I]; (iii) the probability of backorder, PB; (iv) the average number of backorders, E[B]; and (v) the overall average total cost, E[TC].

11.3 Markov Chain Analysis

In this section, we analyze the Markov chain for the KCS with ADI. To develop the Markov chain analysis, we assume that the demand lead time LTD has an exponential distribution. Let X1(t) = F(t) − P(t) and X2(t) = I(t) − B(t), t ≥ 0; then the system performance measures defined in Section 11.2 can be uniquely determined by the states (X1(t), X2(t)) as follows:

F(t) = X1+(t), P(t) = (−X1)+(t), I(t) = X2+(t), B(t) = (−X2)+(t), t ≥ 0    (11.3)

C(t) = K − X1+(t) − X2+(t), W(t) = K − Z − X1(t) − (−X2)+(t), t ≥ 0    (11.4)

where Xi+(t) = max{Xi(t), 0}, i = 1, 2. Note that the size of the Markov chain is infinite when no limit is imposed on the number of pending orders. To obtain a finite Markov chain, we assume that the number of pending orders at BD1 is at most K0, where K0 < ∞. The states then satisfy the following bounds:

−K0 ≤ X1(t) ≤ K − Z, t ≥ 0    (11.5)

−(K − Z + K0) ≤ X2(t) ≤ K, t ≥ 0    (11.6)

The Markov chain for the system is developed as shown in Fig. 11.2. The state space can be partitioned into six areas based on the number of finished goods/backorders. Let Ni be the number of states in area i, where i ∈ {1, 2, 3, 4, 5, 6}, and let T = K − Z. Then the number of states in each area is N1 = K0 + 1, N2 = (1/2)(2K0 + K − Z + 2)(T − 1), N3 = K0 + K − Z + 1, N4 = (K0 + K − Z + 1)(Z − 1), N5 = K0 + K − Z + 1, and N6 = (1/2)(K0 + K − Z + 1)(K0 + K − Z). This implies that the total number of states is N = Σ_{i=1}^{6} Ni.
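These counts can be verified by brute-force enumeration: the feasible states are exactly the pairs (x1, x2) for which the work in process C and the number of waiting orders W implied by the flow conservation equations are non-negative. A small sketch (the parameter values below are arbitrary):

def feasible_states(K, Z, K0):
    # enumerate states (x1, x2) with C >= 0 and W >= 0
    T = K - Z
    states = []
    for x1 in range(-K0, T + 1):
        for x2 in range(-(T + K0), K + 1):
            C = K - max(x1, 0) - max(x2, 0)   # WIP, from eqs. (11.1) and (11.4)
            W = T - x1 - max(-x2, 0)          # waiting orders, from eq. (11.4)
            if C >= 0 and W >= 0:
                states.append((x1, x2))
    return states

K, Z, K0 = 10, 4, 6
T = K - Z
N = [K0 + 1,
     (2 * K0 + T + 2) * (T - 1) // 2,
     K0 + T + 1,
     (K0 + T + 1) * (Z - 1),
     K0 + T + 1,
     (K0 + T + 1) * (K0 + T) // 2]
print(len(feasible_states(K, Z, K0)), sum(N))   # both yield 200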

Let π(x1, x2) be the limiting probability, i.e., P{limt→∞ [X1(t), X2(t)] = (x1, x2)} = π(x1, x2), where −K0 ≤ x1 ≤ K − Z and −(K0 + K − Z) ≤ x2 ≤ K. Letting T = K − Z, we can write the Chapman-Kolmogorov equations for each of the six areas of the Markov chain. As an example, the equations for Area 6, where −(T + K0) ≤ x2 ≤ −1 and −K0 ≤ x1 ≤ T − 1, are given below.

For x2 = −(T + K0), x1 = x2 + T:

μπ(x1, x2) = (T − x1 + x2 + 1)τd^−1 π(x1, x2 + 1)    (11.7)

Fig. 11.2 Markov chain transition diagram for the EKCS with ADI, where α = τd^−1

For x1 = x2 + T:

(λ + μ)π(x1, x2) = μπ(x1 − 1, x2 − 1) + (T − x1 + x2 + 1)τd^−1 π(x1, x2 + 1)    (11.8)

For x1 = −K0:

{(T − x1 + x2)τd^−1 + μ}π(x1, x2) = λπ(x1 − 1, x2) + (T − x1 + x2 + 1)τd^−1 π(x1, x2 + 1)    (11.9)

For −K0 < x1 < x2 + T:

{λ + (T − x1 + x2)τd^−1 + μ}π(x1, x2) = λπ(x1 − 1, x2) + μπ(x1, x2 − 1) + (T − x1 + x2 + 1)τd^−1 π(x1, x2 + 1)    (11.10)

These balance equations can be solved to obtain the key performance measures. However, the expressions for the performance measures of the KCS with ADI are not in closed form. Let Pb, E[I], E[B], E[C] be the probability of being backordered and the expectations of I(t), B(t), C(t), respectively. Then, with ρ = λ/μs and τd = E[LTD], we have:

Pb = Σ_{(x1,x2): x2<0} π(x1, x2)    (11.11)

E[I] = Σ_{(x1,x2): x2>0} x2 π(x1, x2)    (11.12)

E[B] = Σ_{(x1,x2): x2<0} |x2| π(x1, x2)    (11.13)

E[C] = Σ_{(x1,x2): K−x1+−x2+>0} (K − x1+ − x2+) π(x1, x2)    (11.14)
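Once the stationary distribution has been obtained, for instance by solving the balance equations numerically over the finite state space, the measures follow by direct summation. A minimal sketch, with π stored as a dictionary keyed by states (x1, x2); summing E[C] over all states is equivalent to eq. (11.14), since states with K − x1+ − x2+ = 0 contribute nothing:

def performance(pi, K):
    # eqs. (11.11)-(11.14) evaluated from the limiting probabilities
    Pb = sum(p for (x1, x2), p in pi.items() if x2 < 0)
    EI = sum(x2 * p for (x1, x2), p in pi.items() if x2 > 0)
    EB = sum(-x2 * p for (x1, x2), p in pi.items() if x2 < 0)
    EC = sum((K - max(x1, 0) - max(x2, 0)) * p for (x1, x2), p in pi.items())
    return Pb, EI, EB, EC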

11.4 System Comparison

Since a system operating under the KCS with ADI combines features of both the classical KCS and the BS system with ADI, we compare the performance of all three policies, assuming that the manufacturing system has the same configuration and that the parameters characterizing the ADI and demand arrival processes are the same. Note that analytical expressions have already been established for the performance measures of the KCS and the BS with ADI systems by Dallery and Liberopoulos (2000) and Karaesmen et al (2002), respectively. Table 11.1 shows the expressions of the performance measures for these two systems.
To compare system performance under all three control policies, we introduce the expected total cost defined in Equation 11.15, where hw, hf and b are the cost rates for average work in process, finished goods, and backorders, respectively.

E[TC] = hw E[C] + hf E[I] + b E[B]    (11.15)

Table 11.1 Analytical expressions for performance measures

Measure  BS with deterministic ADI                          Classical KCS
Pb       ρ^(Z+1) e^(−μτd(1−ρ))                              ρ^(K+1)
E[I]     Z + λτd − [ρ/(1−ρ)](1 − ρ^Z e^(−μτd(1−ρ)))         K − [ρ/(1−ρ)](1 − ρ^K)
E[B]     [ρ^(Z+1)/(1−ρ)] e^(−μτd(1−ρ))                      ρ^(K+1)/(1−ρ)
E[C]     ρ/(1−ρ)                                            ρ(1 − ρ^K)/(1−ρ)
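The two closed-form columns are straightforward to transcribe; together with eq. (11.15) they allow the KCS and BS-with-ADI cost curves to be traced directly. In the sketch below, the table entries are as reconstructed above and the cost rates in the example call are illustrative placeholders:

import math

def kcs_measures(rho, K):
    # classical KCS column of Table 11.1
    return {"Pb": rho ** (K + 1),
            "EI": K - rho * (1 - rho ** K) / (1 - rho),
            "EB": rho ** (K + 1) / (1 - rho),
            "EC": rho * (1 - rho ** K) / (1 - rho)}

def bs_adi_measures(rho, Z, mu, tau_d, lam):
    # BS with deterministic ADI column of Table 11.1
    g = math.exp(-mu * tau_d * (1 - rho))
    return {"Pb": rho ** (Z + 1) * g,
            "EI": Z + lam * tau_d - rho * (1 - rho ** Z * g) / (1 - rho),
            "EB": rho ** (Z + 1) * g / (1 - rho),
            "EC": rho / (1 - rho)}

def total_cost(m, hw=1.0, hf=2.0, b=10.0):
    # eq. (11.15) with illustrative cost rates
    return hw * m["EC"] + hf * m["EI"] + b * m["EB"]

lam, mu = 0.8, 1.0
rho = lam / mu
print(total_cost(kcs_measures(rho, K=10)))
print(total_cost(bs_adi_measures(rho, Z=5, mu=mu, tau_d=4.5, lam=lam)))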

11.4.1 Design of Experiments

This section presents the design of experiments used for comparing the performance of the KCS with ADI to the classical KCS and the BS system with ADI. In these experiments, the service time of the manufacturing station is assumed to have an exponential distribution with mean 1/μs = 1. The experiments are conducted by varying K = (5, 10, 20, 30), Z = (0, K/2, K) and λ = (0.5, 0.6, 0.7, 0.8, 0.9), respectively. We assume that the average demand lead time τd = E[LTD] is set as τd = 0.9τs, where τs, the average flow time (the average time from order activation at BD1 till the delivery of a finished product to FG), is estimated by τs ≈ 1/(μs − λ). Here we set K0 large enough that the underlying Markov chain is finite and yet no more than 0.1% of the arriving orders are rejected from the system.

11.4.2 Effect of Base Stock Levels on the Performance Measures

In this section, we discuss the effect of the base stock level Z on the performance measures for the three different policies. The experiments were carried out for λ ∈ {0.5, 0.6, 0.7, 0.8, 0.9} and K ∈ {5, 10, 20, 30}. For each given (λ, K), Z ranges from 0 to K. We compare E[B], E[I] and E[TC] for the KCS with ADI, the BS with ADI, and the classical kanban system (KCS).

Figure 11.3 plots the trade-offs obtained. In particular, Figures 11.3 i-a and i-b show that the average finished goods inventory of the system operating under the KCS with ADI is less than that of the system operating under the BS with ADI or the classical Kanban system, i.e., E[I] ≤ min(E[Ik], E[Ibsa]). This implies that the KCS with ADI provides better control over inventory than the base stock system with ADI or the classical KCS. Figures 11.3 ii-a and ii-b show that as Z increases, the average number of backorders decreases for both the KCS with ADI and the BS with ADI, but is constant for the KCS. This is because both the KCS with ADI and the BS with ADI use a target stock level Z to reduce backorders. The KCS does not set a base stock level, and hence the number of backorders in the system is constant for a given number of kanbans, K. We also notice that the average number of backorders of the system operating under the KCS with ADI lies between those of the BS with ADI and the classical KCS. Figures 11.3 iii-a and iii-b show the tradeoffs with respect to total cost. We notice that for a system operating under the KCS with ADI, the E[TC] function is neither convex nor concave in Z. However, for the BS system with ADI, the expected total cost is convex in Z. As expected, for the KCS the cost is constant for a given K and λ. For low values of λ (or system load), the KCS with ADI behaves similarly to the BS with ADI, but for high values of λ, the KCS with ADI achieves lower cost than the BS with ADI for all values of Z.

Fig. 11.3 Effect of Z on Performance Measures

11.4.3 Effect of Number of Kanbans on the Performance Measures

In this section, we study the effect of the number of kanbans on the performance measures for the KCS with ADI, the BS with ADI, and the classical KCS. The target base stock level Z is set to Z∗, the optimal base stock level for the BS with ADI system, where Z∗ = [ln(hf/(hf + b)) + μτd(1 − ρ)] / ln ρ (Buzacott and Shanthikumar, 1993), and K is varied from Z∗ to 30.
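A one-line transcription of this expression (rounding the real-valued result up to an integer base stock is a design choice of the sketch, not part of the cited formula):

import math

def z_star(rho, mu, tau_d, hf, b):
    # optimal base stock for the BS-with-ADI system, as given above
    z = (math.log(hf / (hf + b)) + mu * tau_d * (1 - rho)) / math.log(rho)
    return max(0, math.ceil(z))

print(z_star(rho=0.9, mu=1.0, tau_d=8.1, hf=2.0, b=10.0))   # -> 10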

Figure 11.4 plots the performance tradeoffs. In Figures 11.4 i-a and i-b, we see that for a system operating under the KCS, E[I] increases almost linearly as K increases, but for the KCS with ADI, E[I] increases initially with K and is then bounded by the target stock level Z∗. This is due to the structure of the KCS with ADI: the excess kanbans queue up as free kanbans waiting for activated orders. This prevents the release of additional kanbans into production, limiting the build-up of excess finished goods inventory. Figures 11.4 ii-a and ii-b show that initially E[B] decreases as K increases, but then approaches a constant as E[I] approaches the target base stock level. The reason is similar to that given above: when E[I] approaches the target stock level, an increase in K does not reduce backorders, as the additional kanbans queue up as free kanbans instead of being used to further reduce backorders. In Figures 11.4 iii-a and iii-b, we see that for a system operating under the KCS, the expected total system cost E[TC] is convex, but for a system operating under the KCS with ADI, the expected total cost is neither convex nor concave. The optimal number of kanbans for the KCS with ADI appears to be close to the optimal kanban setting for the classical KCS. For either low or high λ (system load), the KCS with ADI always performs better than the classical KCS.

Fig. 11.4 Effect of K on Performance Measures



11.4.4 Effect of Pair (K, Z) on Total Cost

This section demonstrates the impact of the control pair (K, Z) on the overall performance. We vary K from 1 to 30 and Z from 0 to K. For each λ ∈ {0.7, 0.8, 0.9}, we consider all 495 combinations of (K, Z) and study their impact on the total cost. Figure 11.5 shows the case of λ = 0.9. As seen in Fig. 11.3 iii-b and Fig. 11.4 iii-b, E[TC] does not demonstrate convexity or concavity over the control pair, and E[TC] has local minima.

Fig. 11.5 Effect of (K, Z) on Total Cost

11.5 Conclusions and Ongoing Work

This paper provides an analysis of a single-stage, single-class production-inventory kanban control system with ADI. We develop an analytical model for a system operating under the KCS with ADI and compare its performance to systems operating under the BS system with ADI and the classical KCS. Our results show that the KCS with ADI helps to reduce inventory levels and backorders beyond what is possible in the KCS or the BS with ADI for the same system parameters. However, the cost function for the KCS with ADI is neither convex nor concave, so determining optimal system parameters for a system operating under the KCS with ADI is a challenge. Our ongoing work is aimed at developing detailed closed-form approximations for key performance measures and at optimizing overall system performance over the controllable parameters.

References

Buzacott J, Shanthikumar J (1993) Stochastic models of manufacturing systems. Prentice Hall, New Jersey
Dallery Y, Liberopoulos G (2000) Extended kanban control system: Combining kanban and base stock. IIE Transactions 32(4):369–386
Di Mascolo M, Frein Y (1996) An analytical method for performance evaluation of kanban controlled production systems. Operations Research 44(1):50–64
Frein Y, di Mascolo M, Dallery Y (1995) On the design of generalized kanban control systems. International Journal of Operations & Production Management 15(9):158–184
Karaesmen F, Buzacott JA, Dallery Y (2002) Integrating advance order information in make-to-stock production systems. IIE Transactions 34(8):649–662
Liberopoulos G, Koukoumialos S (2005) Tradeoffs between base stock levels, numbers of kanbans, and planned supply lead times in production/inventory systems with advance demand information. International Journal of Production Economics 96(2):213–232
Liberopoulos G, Tsikis I (2003) Unified modelling framework of multistage production-inventory control policies with lot sizing and advance demand information. In: Shanthikumar J, Yao D, Zijm W (eds) Stochastic Modeling and Optimization of Manufacturing Systems and Supply Chains, Kluwer Academic Publishers, pp 271–297
Philipoom PR, Rees LP, Taylor III BW, Huang PY (1987) An investigation of the factors influencing the number of Kanbans required in the implementation of the JIT technique with Kanbans. International Journal of Production Research 25(3):457–472
Uzsoy R, Martin-Vega L (1990) Modelling Kanban-Based Demand-Pull Systems: A Survey and Critique. Manufacturing Review 3(3):155–160
Chapter 12
Rapid Modelling in Manufacturing System
Design Using Domain Specific Simulators

Doug Love and Peter Ball

Abstract Simulation is an important tool for evaluating manufacturing system designs in the face of uncertainties like demand variation, supply variation, breakdowns and absenteeism. The simulation model building and experimentation stages can be long when compared to the time available for the overall manufacturing system design, so the potential benefits of simulation may be limited. Thus the simulation process may be iterative for a single model/design but rarely iterates across multiple models for new design options. In order to maximise the value of simulation and to improve the design outcome, the model building time needs to be minimised to keep pace with the manufacturing system design process. This paper argues that problem-specific interfaces are needed for simulators to allow rapid and intuitive model creation. The paper reviews two case studies that illustrate an approach using domain-specific simulators combined with specialist software that manipulates the design data into the form required by the modelling system. The preprocessor-based simulators were developed to avoid the user having to specify any of the simulation logic, which speeds up model building considerably. The paper contributes to the rapid modelling field by showing how domain-specific, data-driven simulators can enhance the manufacturing system design process.

12.1 Simulation in Manufacturing System Design

Doug Love
Aston Business School, Aston University, Birmingham, B4 7ET, U.K., e-mail: d.m.love@aston.ac.uk

Peter Ball
Department of Manufacturing, Cranfield University, Cranfield, Bedford, MK43 0AL, U.K., e-mail: p.d.ball@cranfield.ac.uk

Many of the key performance aspects of a manufacturing system are related to the effect of stochastic events on its operation, and although mathematical modelling can help with some of these, it is simulation that provides the most flexible and powerful means of estimating their impact. Reliable estimates of lead times, work in progress levels, delivery performance, resource utilization, etc. all depend on proper representation of such sources of uncertainty. Determination of the robustness of the design
requires study of external and internal sources of uncertainty; for example, changes in volume and product mix are external to the system whilst breakdowns or scrap are internal factors. Smith (2003) reviews the literature on the use of simulation in manufacturing and lists many examples of its use in the design of manufacturing systems. However, the review finds few papers that are concerned with the role of simulation in a comprehensive manufacturing system design (MSD) process such as that proposed by Parnaby (1979). Kamrani et al (1998) presented a simplistic three-stage methodology for cell design in which simulation was the third phase. Other examples of simulation being discussed in the context of the manufacturing system design process include Paquet and Lin (2003), who introduce ergonomic considerations, and AlDurgham and Barghash (2008), who propose a framework for manufacturing simulation that covers some aspects of the design problem but is presented from a more general perspective.
Conventionally, simulation has been linked with the 'dynamic design' stage of the manufacturing system design process, which follows the concept and detail design phases in which steady-state conditions are assumed (Love, 1996). During these earlier stages, average losses or utilization factors are assumed to cover internal uncertainties, and average conditions are said to apply to demand and product mix. Only at the dynamic design stage are these factors studied (and represented) in more depth, so reliable estimates of many of the manufacturing system's key performance metrics will only be revealed at this late stage. Ideally, the evaluation of the dynamic performance of the manufacturing system should be included in every stage of the design process, but this means that the simulation model would need to change as the engineers develop their view of the manufacturing system design. Lewis (1995) proposed a manufacturing system design methodology that incorporated just such a synchronized approach, but it was never fully implemented. He suggested that the simulation model should be used throughout the system design and through all the iterations of its development.
The feasibility of such an approach clearly depends on the ability of the modeller to
complete the simulation re-build loop inside the time available for each stage in the
system design process. If that cannot be done then inevitably the simulation will be
left until the system design has stabilized to the point where major changes in the
simulation model would not be needed - that is why the simulation is often built
toward the end of the design project once the detail design phase is complete. Of
course it means that any serious deficiencies in the design that emerge from the dy-
namic analysis may require expensive revision of the system architecture that could
have been accommodated more easily at an earlier stage. Manufacturing system
redesign is normally initiated when there is a compelling business need, and that
need is usually time-sensitive, so there is considerable pressure to complete the
project as soon as possible. This pressure means that the design team is unlikely to
favour extending the project time scale even if the extra time spent on a simulation
study would result in a higher quality and more robust design.
Clearly if the time required to perform the simulation analysis could be signifi-
cantly reduced then it would alter the trade-off between design quality and project
duration in favour of the use of simulation.

12.2 Rapid Modelling in Manufacturing Systems Design

During the early stages in MSD the architecture of the system may change substan-
tially, for example the cell and related part families may be redefined completely, so
simulation support through this phase implies an ability to completely rebuild the
simulation model quickly. As the architecture is developed a series of models will
be required to test out very different alternatives. Differences will not merely relate
to number and distribution of resources but may require more fundamental revisions
to reflect changes to cell families, material flow paths, work and skill patterns and
machine tool capabilities. This means that the time to build a complete model from
scratch is a key determinant of whether simulation can be used to support this early
phase in the project. Building a model from scratch always takes a significant period
of time, especially if the model is complex. Cochran et al (1995) suggest that over
45% of simulation projects take more than three months and nearly 30% require
over three man-months of effort. We have not been able to identify a more recent
study that assessed the impact of the technical enhancements seen since that time or
was focused specifically on manufacturing design projects.
Speeding up model building has long been a desirable objective for simulation
system developers, for example see Love and Bridge (1988). It is clear that whilst
improvements have been made, the position is still seen as one in which scope exists
for further improvement. For example Pegden's review (see Pegden, 2005) of future
developments in simulation states that: "If we want to close the application gap we
need to make significant improvements in the model building process to support
the fast-paced decision making environment of the future".
In response to this pressure, software systems have improved considerably, no-
tably in relation to the use of graphics and reusable elements; for examples of this
trend see the Simul8, Witness and Arena systems amongst others. These develop-
ments focus on speeding up the translation of the model logic into executable code
whilst other enhancements provide support for multiple runs, statistical analysis and
the production of common reports, graphs etc. that help by speeding up the exper-
imental process. However the domain independence or breadth of these systems
means that the user is still required to provide much of the detail logic of the model.
This is likely to be a significant task; Robinson (2004) suggests the conceptual
modelling stage takes around one third of the total simulation project time. Thus,
although these improvements could be expected to speed up model development
to some extent, they are limited by primarily addressing the coding and experi-
mentation parts of the simulation model development cycle, leaving the conceptual
modelling phase relatively untouched.
Whilst the need to repeat the conceptual modelling stage is clearly a serious in-
hibitor on the use of such systems in the architecture design of the manufacturing
system it is a less significant issue for refinement and detail design. The refinement
stages of the manufacturing system design process will generate a need for modi-
fications to an existing model even if the underlying conceptual simulation model
remains largely unchanged. The ease and speed with which these can be done will
have been aided by the improvements mentioned above but may still require longer
than the engineer would wish. The length of the minor modify-experiment cycle
may depend on the ease with which the engineer can interact with the model to im-
plement the required changes and perform the necessary experiments and that, in
turn, may depend on the nature of the simulation software.
It could be argued that some simulation systems already allow models to be built
using only data without any programming, and some simulation software companies
may argue that their interfaces are intuitive and can be learnt very quickly. But this
would not be a view shared by a typical manufacturing engineer unfamiliar with
simulation interfaces or the subtle tricks needed to get the systems to represent the
required logic without recourse to programming. Ball and Love (1994) point out that
interfaces may make simulation packages easier to use, but this does not necessarily
mean they are easy to use: 'easy to use' describes the simplicity with which the user
can create the model from data from the problem domain.
Data-driven simulators are usually defined as systems that allow a user to create
a running model without the need to do any programming, for example see Pidd's
definition (Pidd, 1992). Configuration options are used to define or modify the op-
erational logic of the model, usually through menu choices and the setting of entity
properties. Although it is true that this approach does use 'data' to define the model,
it may still require the user to make decisions that are normally associated with
conceptual modelling, for example to define model inputs, outputs, and data require-
ments and to decide what components are to be included and the level of detail with
which they will be represented. The more freedom the system offers to the user the
wider its potential range of applications will be, but the more specialist knowledge
will be needed to use it.
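To make the distinction concrete, here is a deliberately toy sketch in Python, unrelated
to any commercial package: the entire 'model' is a data record whose fields, including
a menu-style dispatch rule, configure a generic engine, so the user supplies data rather
than simulation logic. The station name, times and arrival stream are invented for
illustration.

```python
import random
from dataclasses import dataclass

# Hypothetical illustration: the whole "model" is this data record; the
# engine below never needs user-written logic, only these fields.
@dataclass
class StationSpec:
    name: str
    mean_process_time: float      # hours; invented value below
    dispatch_rule: str = "FIFO"   # a menu-style choice, not code

def run_station(spec: StationSpec, arrivals, seed=1):
    """Run a toy single-station model straight from the data record and
    return the mean time (hours) jobs spend at the station."""
    rng = random.Random(seed)
    jobs = sorted(arrivals) if spec.dispatch_rule == "FIFO" else list(arrivals)
    free_at = total = 0.0
    for t in jobs:
        start = max(t, free_at)  # wait if the station is still busy
        free_at = start + rng.expovariate(1.0 / spec.mean_process_time)
        total += free_at - t
    return total / len(jobs)

print(run_station(StationSpec("moulding", 0.5), [0.6 * i for i in range(200)]))
```

Changing the model here means editing the record rather than writing code; the
trade-off described above is that the engine's hard-wired logic fixes the range of
systems it can represent.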
O’Keefe and Haddock (1991) present a useful diagram that demonstrates the
continuum from pure programming languages through the type of system described
above to highly focused problem-specific simulators that merely require data popu-
lation. The approach used in the cases described here is close to the problem specific
end of the range; the ‘model’ is pre-built and the options offered are limited to those
that are directly related to the manufacturing system design problem itself. They
would be recognised by an engineer as part of the normal specification of the design
and are expressed in domain specific language. The conceptual modelling decisions
have already been made and are hard-wired into the system. The model is populated
by the data that is loaded into it; in these cases the data describe the products, pro-
duction processes and resources that make up the real system. Normally this data
will be extracted from the company databases or ERP systems and formatted be-
fore uploading although Randell and Bolmsjö (2001) built a demonstration factory
simulation that showed it was: “feasible to run a simulation using the production
planning data as the only information source”. The data used for this project are
very similar to those required by SwiftSim (Love, 2009) - see below. Detailed con-
figuration options may still be required but they are defined and presented in a form
and language familiar to the engineer using a problem specific interface. This means
there is no need for users to learn specialist simulation concepts and terminology. Of
course other aspects of the simulation art are still required, especially those related
to the experimentation.
The avoidance of the conceptual modelling stage altogether, and the fact that the
coding stage is also eliminated, means that the user can move from data gathering to
a running model very quickly, since data upload and parameter and option setting are
all that are required to create a running model. The Robinson (2004) suggestion that
the project time is roughly split evenly between conceptual modelling, coding and
experimentation (he excludes implementation from this) means that use of this type
of data-driven simulator could save up to two thirds of the simulation project time.
This paper reviews two case studies that illustrate this approach.

12.3 Case Applications

12.3.1 Case Study 1: Cell Design in an Aerospace Company

The aerospace case is an example of the use of a data-driven simulator in the design
of a cellular manufacturing system. This company manufactures complex parts for
the aerospace industry with application in both the military and commercial markets
and their customers include all the major manufacturers of aircraft. Moulding and
related processes are used in their manufacture so that this application was slightly
unusual in that the processes were very different from those seen in a conventional
machine shop. The variety of parts in the cell’s product family was also substantial
- around 3200 part numbers were considered ‘live’ and each part passed through
around 10 operations. The number of work centres in the cell was more modest at
around 70 although many contained multiple stations. At some stations individual
parts were loaded and processed by the operator whilst at others parts were loaded
in bulk and the processing took place unattended. The need to change over might be
triggered by a change in part number from one batch to the next or by a change in
some other property or attribute of the part. Operators were multi-skilled and those
skills differed from person to person and shift working was the norm. Special tooling
was used extensively and in some cases travelled with the work through several
operations and could be considered an important resource limitation. In some cases
parts were assembled together at certain operations so that the process route data
had to include bill of materials information. In some cases the constituent parts were
made in the same cell whilst in other instances they were produced elsewhere. MRP
generated works orders were to be used by the company to drive the production
programme for the cell.
The design team recognised the potential benefit of using simulation but were
concerned that it would take too long to develop a usable model given the tight
timeframe that they had been given for the project. The complexity of their pro-
cesses and the size of the part family were also seen as likely to extend the develop-
ment time needed for the model. On the other hand the ability to test the robustness
of the design was recognised as especially important for high-variety cells where
shifts in product mix can cause unforeseen problems in sustaining delivery perfor-
mance and utilisation levels. Since the redesign involved reorganisation of existing
facilities rather than the introduction of new processes it followed that much of the
data held in the company’s ERP system could be used to populate the simulation
model. A revised version of an existing data-driven batch manufacturing simula-
tor had recently become available at Aston University so it was decided to use that
package for the project. The original system (ATOMS, see Bridge, 1991) had em-
ployed a manual user-interface in which the engineer typed in all the relevant data
and, whilst some basic data could be uploaded from files, extensive manual editing
was always required before a viable model could be generated. Although the core
of the system was little changed the revised facilities meant much larger models
could be run and a more comprehensive range of upload options were implemented
through a spreadsheet interface. These developments meant that ERP data could be
used without simplification to generate the model.
SwiftSim (Love, 2009) relies entirely on the base manufacturing data and a range
of configuration options (that are also defined by the uploaded data) to generate a
running model of a manufacturing system. The data required is extensive but is no
more comprehensive than would be needed to specify the manufacturing system
design. The system does not offer any programming options at all - if the required
functionality is not present then it cannot be added. To ensure that its range of func-
tionality was as comprehensive as possible the original design was based on a study
of cell design practice across a UK-based multi-national company. Engineers from
the company’s design task forces located in plants across the country were inter-
viewed to identify the features that the system needed to offer. The system was also
refined by application in a number of in-house redesign projects.
Domain data are used to create the model directly, i.e. the data are formatted,
uploaded (or manually entered into the system), run options selected and the model
then executes immediately. The user defines materials (i.e. part numbers), process
routes, bills of materials, work stations, work centres, operators, work patterns, skill
groups, control systems (MRP, Kanban), sequencing rules (FIFO, batching etc),
stock policies, suppliers and lead times, demand sources (generated or input), etc.
The model is created directly from this data. Company terminology is used through-
out so, for example, actual part numbers are used and operators are given their real
names. The system can generate a range of standard reports that vary in the level of
detail offered from simple tables of resource utilisation to event log files that record
everything that happened in a run. The original ATOMS system provided a limited
graphical, schematic, representation of the simulated system that could be used for
debugging and diagnostic investigation. For this type of system the graphical dis-
play of the system status is rarely used when performing experimental runs but it
remains very useful for diagnostics so that aspect will be a core focus of the new
graphical extension currently being considered for SwiftSim.
The concern to ensure the model was built as quickly as possible and the fact that
the company had no experience of the modelling system influenced their decision
to employ an external consultant (one of the authors) who had knowledge of both
the manufacturing design process and the simulator. This meant that there was a
learning curve faced by the consultant in becoming familiar with the company's
products, processes etc. This approach ensured that the first model was produced
quickly but the extra communications involved did slow the iteration cycle down
during later stages in the project.
The raw process and sales demand data were extracted from the company’s ERP
system into spreadsheets where they could be readily reformatted for upload. The
data for work stations, operators, materials, process routes (including bills of mate-
rial) were all handled that way. Generating demand data proved to be a little more
complex as an MRP calculation was performed in the spreadsheet to convert product
demand to that for the cell family parts. This had the advantage of avoiding any dis-
tortions that might have been present in a works order history extracted from ERP.
The disadvantage was that the spreadsheet calculation was slow, taking 6-8 hours
on average.
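To illustrate the kind of calculation the spreadsheet performed, the following is a
minimal sketch of a bill-of-materials explosion; the part numbers, product structure
and quantities are invented, and the company's actual spreadsheet logic (lead-time
offsetting, lot sizing, netting against stock) is not reproduced.

```python
from collections import defaultdict

# Invented BOM: parent part -> list of (child part, quantity per parent).
BOM = {
    "PROD-A": [("P-100", 2), ("P-200", 1)],
    "P-100": [("P-300", 4)],
}

def explode(demand):
    """Convert product demand into gross requirements for all parts."""
    reqs = defaultdict(float)
    stack = list(demand.items())
    while stack:
        part, qty = stack.pop()
        reqs[part] += qty
        for child, per_parent in BOM.get(part, []):
            stack.append((child, qty * per_parent))
    return dict(reqs)

# 50 units of the end product imply 100 of P-100 and hence 400 of P-300.
print(explode({"PROD-A": 50}))
```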
The absence of a programming capability did not prove to be a constraint as
the simulator handled all the complications of the manufacturing processes with-
out the need for any special ‘tricks’ or deviations. The time the system needs to
create a model from a spreadsheet is very short (less than a minute) and run times
are also reasonable taking around an hour to run a year’s simulated operation of
the cell. However the time required for initial familiarisation and analysis, data
extraction and reprocessing and data validation and correction meant that the first
proper model took around 100 man hours to produce including the time needed to
include program the MRP explosion into the spreadsheet. This time also included
the consultants learning curve of around 20 hours that would have been avoided by a
SwiftSim-trained engineer. Subsequent revisions to reflect design changes or differ-
ent performance requirements could be accommodated much more quickly taking
around 8 man hours to revise the data set, upload and perform a test run. These times
are taken from contemporaneous log of the projects task times that was used to track
progress and resources used. Once the base model had been created the engineers
were able to obtain feedback on design changes quite rapidly, although this cycle
time would have been reduced and some of the initial creation problems may have
been avoided if the engineers had used the simulator themselves from the beginning
of the project.
The engineers were able to use the standard reports from the system and gener-
ally they provided the information needed although the ability to show an animated
graphic of the cell running was seen as very desirable, especially for communicating
with both senior management and the shop floor.

12.3.2 Case Study 2: High Volume Production Line Design

The second case study is drawn from a high volume, engineered product environ-
ment. The company regularly introduces new products which trigger the develop-
ment of new production lines. A production line is developed iteratively over a
number of months and simulation is used as standard practice within those itera-
tions. There are many individuals involved in the production line design process
and although many regularly use simulation only a few are considered simulation
experts. The focus of the design activity is the production line, with some links to
the support, supply chain, etc. activities. The initial users of the simulation output
are the wider design team to trigger redesign work or to confirm performance. The
final simulation output is used as part of the senior management sign off process.
The role of simulation in this case is to support the activities of the manufactur-
ing engineers in removing risk from the design process and, importantly, trigger-
ing design changes that would typically result in a 10% performance improvement.
Numerous simulation models are created during the design of a production line re-
sulting from changes to numbers of machines, machine cycle times, process quality,
expected output rates, etc. The models include details of buffers, selection rules,
conditional routings, scrap rates and operator behaviour. Given their size (100 en-
tities in a model is not unusual) and the scope of the potential changes, the models
are rapidly rebuilt from scratch each time rather than modifying a base model. This
rapid rebuilding of models is considered more robust than model modification and
the scope of the changes required mean that such modifications could take longer
than is available to the overall design team.
The rapid building of simulation models is achieved through a tailored spread-
sheet interface to a commercially available simulation package. The users work with
the interface to specify the model through either manual entry or copy and past-
ing data from other design spreadsheets. The data entered represents the entities
to be modelled as well as the control parameters. Populating the interface the first
time for a new production line design typically takes several days; however, once
achieved, subsequent design changes can be accommodated easily within a day, often
in hours. The early modelling work takes many days as the first models run are
deterministic, with stochastic enhancements progressively added and experiments
performed. Once set up, the interface is able to build the model in the simulation
package, run the model a number of times and retrieve the results. The interface
contains only sufficient functionality to build models for that particular company.
Therefore the user works within the user interface using terminology of a manufac-
turing engineer rather than generalised simulation terminology and is restricted to
entering data typical of that company's requirements. The overall time from start to
finish of modelling a given line is of the order of weeks. Relatively therefore the
model build and run time is short for a given scenario. Overall modelling effort is
actually dictated by the design iterations creating new scenarios.
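A hypothetical stand-in for that workflow is sketched below; it is not the company's
interface, nor the commercial simulator it drives. Tabular rows specify the stations
and their parameters, a toy engine 'builds' and runs the model for several replications,
and the results are collected. The station names, cycle times and scrap rates are
invented.

```python
import csv, io, random, statistics

# Rows as they might be pasted from a design spreadsheet (invented values).
SHEET = """station,cycle_time_h,scrap_rate
assembly,0.50,0.02
test,0.35,0.05
pack,0.10,0.00
"""

def run_once(rows, jobs=1000, seed=0):
    """Toy 'engine': propagate scrap losses station by station and return
    the fraction of started jobs that complete the whole line."""
    rng = random.Random(seed)
    good = jobs
    for r in rows:
        good = sum(rng.random() >= float(r["scrap_rate"]) for _ in range(good))
    return good / jobs

rows = list(csv.DictReader(io.StringIO(SHEET)))
yields = [run_once(rows, seed=s) for s in range(10)]  # ten replications
touch_time = sum(float(r["cycle_time_h"]) for r in rows)
print(f"touch time {touch_time:.2f} h, "
      f"yield {statistics.mean(yields):.3f} +/- {statistics.stdev(yields):.3f}")
```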
Manufacturing engineers use the simulation interface to build and run simulation
models, sometimes with the guidance of the simulation experts. The company spe-
cific functionality of the simulation interface means that the data specified in the
interface, which completely defines the model creation and execution, is readily understood
by all, whether or not they were part of initial modelling work. This contrasts with
the typical view that simulation models built by others take time to fully understand.
The size of the models means that manual creation of both the model logic and
the graphical display would take a significant amount of time; potentially so long
that the manufacturing engineers would request modifications before the model was
completed. Experimentation times are typically of the order of hours; sometimes
scenarios are batched together and run overnight to use
otherwise idle computers. The modelling approach therefore uses the power of a
commercial simulator to model complex and varied systems and combines this with
the simplicity of an interface dedicated to the particular company's work. This sepa-
ration of the model creation from the power of the simulation software enables staff
to quickly create models without having to develop a dedicated simulator or use
significant staff time.
In summary, the approach combines the power of a commercial simulation pack-
age with the speed and ease of use of a dedicated spreadsheet based interface in
the language of the manufacturing engineer and allows rapid creation of models for
experimentation by simulation experts and non-experts alike. The speed of the mod-
elling is within the pace of the wider design team activity and genuinely informs the
design process, triggering design iterations and confirmation of performance before
sign off.

12.4 Discussion of the Cases

The paper has argued that the traditional relationship between manufacturing system
design and simulation needs to evolve to truly draw on the benefits of simulation as
an integral part of the dynamic design stage. The discussion went further to make
the case that the dynamic design should be iterative starting at the concept stage
and that to enable this to take place the interface to simulation systems should be
in the language of the manufacturing engineers who are making the critical design
decisions. The paper has presented two different cases where simulation models
have been built to contribute directly to the manufacturing system design process. The
following discussion reviews how far these cases go towards enhancing the design
outcome.
Both cases address the integration of simulation into the manufacturing system
design process. The cases demonstrate the influence of simulation on the design
outcome as well as confirming performance. In both cases there was a simulation
expert supporting the design activities and notably in case two those manufacturing
engineers who are in the core of the design team are also users of the simulation
system.
The cases show the use of simulation to improve design performance; however,
its influence on the concept design differs according to the point at which it is
deployed in the design process. Whilst the first case demonstrated the influence on
the design concepts, in the second case simulation was utilised after the production
line concepts
emerged and therefore its role was to improve the performance of a given design
option. Design iterations can vary in magnitude from parameter changes (such as
cycle times and material control rules) to more fundamental structural changes (such
as number of machines and routings). In case two most iterations were parameter
changes however there were occasional more fundamental changes that resulted, as
would be expected, in longer model creation times.
Developments in simulators have the potential to improve simulation model build
times and in turn influence simulation's role in manufacturing systems design. The devel-
opment of a domain specific simulator requires both simulation and application
domain knowledge. Two different approaches were illustrated: the case two com-
pany used a standard commercial simulation system as the basis of their simulator,
whereas SwiftSim was developed through an academic research project with indus-
trial collaborators. Interestingly the developments most valuable to these cases were
core functionality improvements rather than those relating to animation.
The role of the manufacturing engineer in the use of simulation varies in these
cases. Case one was led by the simulation expert whereas case two was supported
by the simulation expert. It has to be noted that simulation experts were used when
subtle 'tricks' were required that are not a standard part of the interface functional-
ity. A simulation specialist may be used to interface between the engineer and the
model but this approach also has drawbacks. The specialist translates the engineer’s
requirements into a model suitable for the purpose but the risk is that features are
lost in translation and delays result. It may be that the popularity, mentioned ear-
lier, of simple spreadsheet models with manufacturing engineers reflects a desire to
directly control all aspects of the analysis.
The level of data translation required from the manufacturing engineering design
world into the simulation analysis world and back influences the level of robustness
of the analysis process as well as the time taken to complete it. Both cases feature a
minimum level of translation of data from the manufacturing engineer's world to the
simulation world, hence the manufacturing engineer could readily understand the
model construction and outputs. Consequently, this minimises any nervousness over
verification.
Both cases present implications for model build time and indicate that they are
built rapidly when compared to typically quoted figures from the literature. The
rapid model building has had two impacts: firstly, the simulation output was influ-
encing the design outcome rather than just confirming performance and, secondly,
the level of detail possible is very high, providing greater confidence in the design
outcome.

12.5 Conclusions

This paper has presented a discussion on the relationship between the manufactur-
ing system design process and the simulation modelling process. It was argued that
to improve the design outcome, the model building time needs to be reduced
significantly to enable the results of simulation to truly influence the selection and
refinement of design concepts. The detail of the two industrial cases demonstrates
the challenges for simulation use as well as the benefits obtained. From this, key is-
sues of integration of simulation, the influence on concept design, the functionality
of commercial simulators, the role of the manufacturing engineer and data transla-
tion were identified and discussed. Overall the paper has demonstrated how domain
specific, data driven simulators can enhance the manufacturing system design pro-
cess.

References

AlDurgham MM, Barghash MA (2008) A generalised framework for simulation-
based decision support for manufacturing. Production Planning & Control
19(5):518–534
Ball PD, Love DM (1994) Expanding the capabilities of manufacturing simulators
through the application of object-oriented principles. Journal of Manufacturing
Systems (6):412–442
Bridge K (1991) The application of computerised modelling techniques in manu-
facturing system design. PhD thesis, Aston University
Cochran JK, Mackulak GT, Savory PA (1995) Simulation Project Characteristics in
Industrial Settings. Interfaces 25(4):104–113
Kamrani A, Hubbard K, Parsaei H, Leep H (1998) Simulation-based methodology
for machine cell design. Computers & Industrial Engineering 34(1):173–188,
special issue on Cellular Manufacturing Systems: Design, Analysis and Implementation
Lewis P (1995) A systemic approach to the design of cellular manufacturing sys-
tems. PhD thesis, Aston University
Love D (1996) The design of manufacturing systems. In: International Encyclopae-
dia of Business and Management V4, Thompson Business Press, pp 3154–3174
Love D (2009) SwiftSim overview. URL http://oimabs.aston.ac.uk/swiftsim
Love DM, Bridge K (1988) Specification of a computer simulator to support the
manufacturing system design process. In: Proceedings International Conference
Computer-Aided Production Engineering, SME, Michigan
O’Keefe RM, Haddock J (1991) Data-driven Generic Simulators for Flexible Man-
ufacturing Systems. International Journal of Production Research 29(9):1795–
1810
Paquet V, Lin L (2003) An integrated methodology for manufacturing systems de-
sign using manual and computer simulation. Human Factors and Ergonomics in
Manufacturing 13(1):19–40
Parnaby J (1979) Concept of a Manufacturing system. International Journal of Pro-
duction Research 17(2):123–134
Pegden C (2005) Future directions in simulation modeling. In: Proceedings of the
37th Winter Simulation Conference, pp 1–35
Pidd M (1992) Guidelines for the design of data-driven generic simulators for spe-
cific domains. Simulation 59(4):237–243
Randell L, Bolmsjö G (2001) Database driven factory simulation: A proof-of-
concept demonstrator. In: Peters B, Smith J, Medeiros D, Rohrer M (eds) Pro-
ceedings of the 33rd conference on Winter simulation, December 9-12, pp 977–
983
Robinson S (2004) Simulation: The practice of model development and use. John
Wiley & Sons, Chichester
Smith J (2003) Survey on the use of simulation for manufacturing system design
and operation. Journal of Manufacturing Systems 22(2):157–171
Chapter 13
The Best of Both Worlds - Integrated
Application of Analytic Methods and Simulation
in Supply Chain Management

Reinhold Schodl

Reinhold Schodl, Capgemini Consulting, Lassallestr. 9b, 1020 Wien, Austria,
e-mail: reinhold.schodl@capgemini.com

Abstract This work attempts to discover how complex order fulfillment processes of a sup-
ply chain can be analyzed effectively and efficiently. In this context, complexity is
determined by the number of process elements and the degree of interaction be-
tween them, as well as by the extent to which variability influences process performance.
We show how the combination of analytic methods and simulation can be utilized
to analyze complex supply chain processes and present a procedure that integrates
queuing theory with discrete event simulation. In a case study, the approach is ap-
plied to a real-life supply chain to show the practical applicability.

13.1 Combination of Analytic Methods and Simulation

Analytic models and simulation models are opposing ways to represent supply chain
processes for purposes of analysis. “If the relationships that compose the model are
simple enough, it may be possible to use mathematical methods (such as algebra,
calculus, or probability theory) to obtain exact information on questions of interest;
this is called an analytic solution” (Law and Kelton, 2000). Conversely, simulation
models are quantitative models, which do not consist of an integrated system of pre-
cisely solvable equations. “Computer simulation refers to methods for studying a
wide variety of models of real world systems by numerical evaluation using soft-
ware designed to imitate the system’s operations or characteristics, often over time”
(Kelton et al, 2002).
The use of analytic models and simulation models in supply chain management
harbors distinct merits and demerits (see Table 13.1). By combining analytic meth-
ods and computer simulation, one can potentially derive greater value than by ap-
plying one of these methods alone. This idea has been advocated since the early
days of computer simulation. Nolan and Sovereign integrate analytic methods with
simulation in an early work in the area of logistics (Nolan and Sovereign, 1972).
Later research utilizes the joint application of the methods in the field of supply
chain management (for examples, see Ko et al, 2006; Gnoni et al, 2003; Merkuryev
et al, 2003; Lee and Kim, 2002).

Table 13.1 Analytic models compared to simulation models

Strengths of Analytic Models:
• No limitation to descriptive models, as analytic models can be prescriptive
  (i.e., optimization models) as well
• High significance of results (conversely, conclusions derived from stochastic
  simulation carry risk, because analysis is based on a limited sample size of
  output values generated by repeated simulation runs)
• Lower time and effort to adapt an existing analytic model compared to building
  a new simulation model (as simulation models are generally built case-specific)

Strengths of Simulation Models:
• Ability to represent systems with comprehensive stochastic cause-effect
  relationships
• Possibility to determine not only mean values, but also distributions of output
  variables by approximation
• High acceptance among practitioners, as often better understandable and more
  transparent than analytic models (especially when animations are added to
  simulation models)

Depending on the degree of integration, one can distinguish two different forms,
i.e., hybrid modeling and hybrid models. “Hybrid modeling consists of building in-
dependent analytic and simulation models of the total system, developing their solu-
tion procedures, and using their solution procedures together for problem solving”
(Sargent, 1994). An example is the evaluation of alternatives based on economic
viability and operative feasibility by applying an analytic and a simulation model
respectively. A further application is the verification of an analytic model via an
independent simulation model (Jammernegg and Reiner, 2001). “A hybrid simula-
tion/analytic model is a mathematical model which combines identifiable simulation
and analytic models” (Shanthikumar and Sargent, 1983). Hybrid models are char-
acterized by a higher degree of integration, as analytic methods and simulation are
incorporated into a single model.
Hybrid models can be classified according to the type of dynamic and hierarchi-
cal integration (see Table 13.2). Following the classification with regard to dynamic
integration, this work presents a Type I model. Concerning the hierarchical inte-
gration, the presented model is a special case of Type IV, as the simulation model
requires the analytic model’s output, but both models are hierarchically equivalent
and represent the whole system.
Table 13.2 Classification of hybrid models

Dynamic Integration:
• Hybrid Model Type I: "A model whose behavior over time is obtained by
  alternating between using independent simulation and analytic models. The
  simulation (analytic) part of the model is carried out without intermediate
  use of the analytic (simulation) part" (Shanthikumar and Sargent, 1983)
• Hybrid Model Type II: "A model in which a simulation model and an analytic
  model operate in parallel over time with interactions through their solution
  procedure" (Shanthikumar and Sargent, 1983)

Hierarchical Integration:
• Hybrid Model Type III: "A model in which a simulation model is used in a
  subordinate way for an analytic model of the total system" (Shanthikumar and
  Sargent, 1983)
• Hybrid Model Type IV: "A model in which a simulation model is used as an
  overall model of the total system, and it requires values from the solution
  procedure of an analytic model representing a portion of the system for some
  or all of its input parameters" (Shanthikumar and Sargent, 1983)

13.2 Hybrid Models for Complex Supply Chains

The analysis and improvement of complex supply chain processes is a unique chal-
lenge. Given that there is no universally accepted definition of the complexity
of supply chain processes, we define the following constituent factors of complex-
ity: the number of process elements (e.g., activities, buffers, information, resources)
and the degree of interaction between them, random variability (e.g., machine failures),
and predictable variability (e.g., multiple product variants). The following two ap-
proaches show that hybrid models are particularly suitable for the analysis of com-
plex supply chain processes.
• The entire system is assessed by using analytic methods (e.g., queuing theory).
Subsequently, the results of the assessment are used to construct a model of a
sub-system, which then helps to conduct a more detailed analysis by means of
simulation. This type of approach is used, for instance, to analyze a complex
supply chain in the semi-conductor industry (Jain et al, 1999).
• An analytic model is employed to assess a relatively large number of alternatives
with relatively minimal effort. Promising alternatives are analyzed via simula-
tion in more detail. For instance, such an approach is employed to solve complex
transportation problems (Granger et al, 2001).
We now present a procedure to analyze complex supply chains with a balance be-
tween validity and effort. The procedure is different from the discussed approaches
in the following ways. First, narrowing of the system’s scope by an analysis on an
aggregated level is avoided, to incorporate the dynamic behavior of the overall sys-
tem. Second, no preselecting of alternatives by an analysis on an aggregated level
occurs, which prevents an unwanted rejection of promising process designs. The
procedure consists of the following steps:
1. In the first step, the real system’s supply chain processes are modeled as an an-
alytic model and analyzed according to queuing theory. The queuing model de-
livers values of performance indicators (e.g., waiting times) which are inputs for
the complexity reduction in Step 2, as well as for the simulation model in Step 3.
2. This step aims to reduce complexity by identifying non-critical process steps
that can be modeled in a simplified manner in Step 3. If variability is not being
reduced, it has to be buffered by inventory, capacity, or time in order to maintain
process performance. Inventory levels, capacity utilization, and waiting times
represent the degree of buffering, and therefore act as indicators of how critical a
process step is. These indicators can be obtained from the queuing model (see the
sketch after this list). Further indicators can be derived from the real system. An
example is a process step's relative position in the queuing network as, generally,
variability at the beginning of the process has greater impact than at the end.
3. In this last step, the supply chain processes are modeled as a discrete event simula-
tion model. Process steps that are defined in Step 2 as non-critical are modeled in
a simplified manner. Simplification can be achieved by modeling process steps
without capacity restrictions. Waiting times caused by capacity limitations are
then modeled in the simulation model as constants according to the values de-
rived from the queuing model. Finally, the simulation model is applied to analyze
alternative process designs.
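To make Steps 1 and 2 concrete, the following sketch shows the kind of two-moment
GI/G/m waiting-time approximation (in the Kingman/Sakasegawa/Allen-Cunneen
family) on which such queuing models rest, followed by a naive criticality classifi-
cation. The station data and classification thresholds are invented for illustration,
and the sketch does not reproduce the computations of the MPX software used in
Step 1 of the case study.

```python
import math

def wq_ggm(ca2, cs2, u, m, te):
    """Two-moment approximation of the mean queueing delay at a GI/G/m
    station: Wq ~ ((ca^2 + cs^2)/2) * u^(sqrt(2(m+1)) - 1) / (m(1 - u)) * te,
    with squared coefficients of variation ca2/cs2 of the inter-arrival and
    service times, utilization u, m parallel machines, mean process time te."""
    return (ca2 + cs2) / 2.0 * u ** (math.sqrt(2.0 * (m + 1)) - 1.0) \
        / (m * (1.0 - u)) * te

# Step 2 sketch: classify work centres from the queueing output.
# Station data and thresholds below are invented for illustration.
stations = {  # name: (ca2, cs2, utilization, machines, process time in hours)
    "drilling": (1.0, 1.5, 0.92, 2, 0.4),
    "painting": (1.0, 0.8, 0.55, 1, 0.2),
}
for name, (ca2, cs2, u, m, te) in stations.items():
    wq = wq_ggm(ca2, cs2, u, m, te)
    critical = u > 0.85 or wq > 5.0 * te  # heavily buffered -> critical
    print(f"{name}: Wq = {wq:.2f} h -> {'critical' if critical else 'non-critical'}")
```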

13.3 Application in Supply Chain Management

To demonstrate the practical applicability, we applied the described procedure to a
supply chain in the electronics industry, with focus on a first-tier supplier producing
printed circuit boards. In a period of six months, 407 different products consisting
of 817 different components are produced by 325 machines. The value-adding pro-
cesses are heavily influenced by a highly variable demand as a result of short-term
market trends, as well as by frequent disruptions of production due to the application
of complex technologies. The overall aim of the case study is to improve service lev-
els while taking cost constraints into account. In particular, two design alternatives
are considered and compared with the initial situation: first, capacity enlargement
for the main bottleneck work centers and second, implementation of the concept
Make-to-Forecast (Raturi et al, 1990) based on improved short-term forecasts by
using data available from Vendor Managed Inventories. The core of the case study
consists of three steps, as described below.

13.3.1 Step 1: Analysis Based on Analytic Methods

The supply chain processes are modeled as a network of queues to be analyzed ac-
cording to queuing theory. The software MPX (Network Dynamics, Inc.) is applied,
which “... is based on an open network model with multiple classes of customers.
It is solved using a node decomposition approach. [Each] ... node is analyzed as a
GI/G/m queue, with an estimate for the mean waiting time based on the first two
moments of the arrival and service distributions. Next, the MPX solution takes into
account the interconnection of the nodes ... as well as the impact of failures on the
service time and departure distributions” (MPX, 2003). The analytic model’s inputs
include:
• Demand data (primary demand in defined period, variability of customer order
inter-arrival time),
• Bill of material data,
• Routing data (sequence of production steps, average setup time, variability of
setup time, average process time, variability of process time, work center assign-
ment),
• Resource data (parallel machines in work centers, scheduled availability, mean
time to failure, mean time to repair), and
• Production lot sizes.
The model is validated by comparing the mean production lead time with that of the
real system, which differs by less than 5%. It is then applied to find values for each
work center’s capacity utilization and average value-adding times for setting-up and
processing, as well as average waiting times due to capacity restrictions. This output
is required for the reduction of complexity in Step 2 and for the simulation model
in Step 3.

13.3.2 Step 2: Reduction of Complexity

If certain resources, such as work centers, are not modeled in the simulation model,
the effort for building and running the model can be reduced. This is acceptable only
if simplified modeling is limited to resources aligned to non-critical process steps.
Process steps are classified as critical and non-critical based on multiple criteria, as
follows:
• The capability of a process step to deal with variability is an important factor
in evaluating how critical a process step is. Generally, if variability cannot be
reduced, it has to be buffered by capacity, time, and inventory. Fundamental in-
dicators for the degree of buffering of variability are capacity utilization and lead
time efficiency. Both measures are provided by the described queuing model.
• Another factor is the relative contribution of a process step to the overall perfor-
mance of the supply chain, which can be measured by a process step’s proportion
of value-adding time and proportion of cost of goods sold.
• Moreover, a process step’s relationship with other process steps is taken into
account. The relative position of a process step within the network is a relevant
indicator, as generally variability at the beginning of a process has greater impact
than at its end. A further indicator is a process step’s assembling functionality,
as asynchronous arrival from previous process steps is an important cause of
delays.

13.3.3 Step 3: Simulation-Based Analysis

The processes of the supply chain under study are modeled as a simulation model
to be analyzed in detail with multiple performance measures. After building the
model, verification and validation is carried out and an experimental design is de-
veloped to finally run simulation experiments. Critical process steps are modeled
in detail, i.e., resources that carry out the process steps are represented in the model,
including details about scheduled availability and random breakdowns. Resources
aligned to non-critical process steps are not modeled. Because of this simpli-
fication, waiting times caused by capacity restrictions cannot be determined by the
simulation model. Thus, for non-critical process steps, the waiting times calculated
by the analytic model are utilized and represented as constants in the simulation
model. This approach guarantees a balance between representing reality as detailed
as necessary while also keeping the effort to build and run the model as low as
possible.
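As an illustration of this modeling split, a minimal sketch using the generic SimPy
library follows (SimPy is not the ARENA software used in the case, and the station
names, times and the constant waiting value are invented): the critical step competes
for a finite-capacity resource, while the non-critical step collapses into a constant
delay taken from the analytic model.

```python
import simpy  # generic discrete event library, assumed installed

# Constant waiting time for a non-critical step, taken from the queueing
# model (value invented for illustration).
ANALYTIC_WQ_LACQUER = 1.8  # hours

def order(env, name, bottleneck):
    # Critical step: modelled in detail with a capacity-constrained resource.
    with bottleneck.request() as req:
        yield req
        yield env.timeout(0.4)  # processing time at the critical step, hours
    # Non-critical step: fixed delay = analytic waiting time + process time.
    yield env.timeout(ANALYTIC_WQ_LACQUER + 0.2)
    print(f"{name} finished at t = {env.now:.1f} h")

env = simpy.Environment()
bottleneck = simpy.Resource(env, capacity=2)  # two parallel machines
for i in range(5):
    env.process(order(env, f"order-{i}", bottleneck))
env.run()
```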
The case study’s discrete event simulation model is implemented with the soft-
ware ARENA (Rockwell Automation), which is based on the simulation language
SIMAN. The simulation model accounts for various risks of the supply chain, espe-
cially variable demand, forecast errors, stochastic setup times and machine break-
downs. The model's input comprises:
• Demand data (order time, order quantity, desired delivery date),
• Forecast data (forecasted order time, forecasted order quantity, forecasted desired
delivery date),
• Bill of material data,
• Routing data (sequence of production steps, assignment of work centers, setup
time, processing time),
• Resource data (parallel machines in work center, scheduled availability, mean
time to failure, mean time to repair, constant waiting time for simplified modeled
work centers),
• Production data and rules (production lot size, rule for dispatching production
orders, rule for prioritization of production orders), and
• Cost data (material cost, time-dependent machine cost, quantity-dependent ma-
chine cost).
The length of the warm-up period of the non-terminating simulation is decided
by visual analysis of the dynamic development of inventory. The number of replica-
tions is determined by statistical analysis of the order fulfillment lead time.
The output of the simulation model comprises performance measures whose def-
initions are in line with the well-established Supply Chain Operations Reference
Model (Supply Chain Council, 2009). The defined scenarios are compared with
multiple performance measures, i.e., delivery performance, order fulfillment lead
time, capacity utilization, cost of goods sold, and inventory days of supply.

Table 13.3 Effect of complexity reduction

Degree of Complexity Reduction   Error of Order Fulfillment Lead Time
 0%                              0.60%
 9%                              1.00%
29%                              2.90%

The focus of this paper does not lie in the presentation of the scenarios' spe-
cific results, but in a demonstration of the practical applicability of the presented
approach to deal with complex supply chains. Therefore, the validation of the simu-
lation model under different degrees of complexity reduction is of particular interest.
The degree of complexity reduction is expressed as the proportion of work centers
modeled in a simplified manner. Table 13.3 shows how complexity reduction af-
fects the model’s error of order-fulfillment lead time. Complexity reduction of 9%
results in a generally acceptable error of the order fulfillment lead time of 1%; for a
complexity reduction of 29%, the error is still under 3%.
For further validation, statistical analysis of the order fulfillment lead times of
the customer orders was carried out. A Smith-Satterthwaite test is utilized, as the
system and model data are both normal and variances are dissimilar (Chung, 2004).
For a level of significance of 0.05 and a degree of complexity reduction of zero
and 9%, there is no statistically significant difference between the actual system and
the simulation. For a level of significance of 0.01, this is also true for a complexity
reduction of 29%.
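The Smith-Satterthwaite test is the unequal-variance (Welch) form of the two-sample
t-test, so the validation step can be sketched as follows; the lead-time samples are
invented and scipy is assumed to be available.

```python
# Welch (Smith-Satterthwaite) test on invented lead-time samples, in hours.
from scipy import stats

real_system = [41.2, 39.8, 44.1, 40.5, 42.9, 38.7]
simulation  = [40.1, 43.5, 39.2, 41.8, 44.0, 40.9]

t_stat, p_value = stats.ttest_ind(real_system, simulation, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:  # significance level used in the case study
    print("No significant difference: the simplified model is not rejected.")
```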

13.4 Conclusion

Analytic models and simulation models are characterized by specific strengths and
weaknesses. In this paper, we demonstrated a procedure that combines an analytic
queuing model with a discrete event simulation model to utilize the specific benefits
of both methodological approaches. A balance between validity of the results and
effort for the analysis of the supply chain processes was accomplished.

References

Chung C (2004) Simulation modeling handbook: A practical approach. CRC Press,
Boca Raton
Gnoni M, Iavagnilio R, Mossa G, Mummolo G, Di Leva A (2003) Production plan-
ning of a multi-site manufacturing system by hybrid modelling: A case study
from the automotive industry. International Journal of Production Economics
85(2):251–262
Granger J, Krishnamurthy A, Robinson S (2001) Stochastic modeling of airlift op-
erations. In: Proceedings Winter Simulation Conference 2001, IEEE Computer
Society Washington, DC, USA, vol 1, pp 432–440
Jain S, Lim C, Gan B, Low Y (1999) Criticality of Detailed Modeling in Semi-
conductor Supply Chain Simulation. In: Proceedings of the Winter Simulation
Conference 1999, ACM New York, NY, USA, vol 1, pp 888–896
Jammernegg W, Reiner G (2001) Ableitung und Bewertung von Handlungsalterna-
tiven in einem Unternehmen der Elektroindustrie. In: Jammernegg W, Kischka
PH (eds) Kundenorientierte Prozessverbesserungen, Konzepte und Fallstudien,
Springer, Berlin, pp 237–247
Kelton W, Sadowski R, Sturrock D (2002) Simulation with ARENA, 2nd edn.
McGraw-Hill Science/Engineering/Math, Boston
Ko H, Ko C, Kim T (2006) A hybrid optimization/simulation approach for a distri-
bution network design of 3PLS. Computers & Industrial Engineering 50(4):440–
449
Law A, Kelton W (2000) Simulation modeling and analysis, 3rd edn. McGraw Hill,
New York
Lee Y, Kim S (2002) Production–distribution planning in supply chain considering
capacity constraints. Computers & Industrial Engineering 43(1):169–190
Merkuryev Y, Petuhova J, Grabis J (2003) Analysis of dynamic properties of an
inventory system with service-sensitive demand using simulation. In: Proceedings
of the 15th European Simulation Symposium - Simulation in Industry, Delft, The
Netherlands, pp 509–514
MPX (2003) MPX WIN 4.3 - For use with Windows, User Manual. Network Dy-
namics Inc, Framingham
Nolan R, Sovereign M (1972) A recursive optimization and simulation approach
to analysis with an application to transportation systems. Management Science
18(12):676–690
Raturi A, Meredith J, McCutcheon D, Camm J (1990) Coping with the build-to-
forecast environment. Journal of Operations Management 9(2):230–249
Sargent R (1994) A historical view of hybrid simulation/analytic models. In: Pro-
ceedings of the Winter Simulation Conference, pp 383–386
Shanthikumar J, Sargent R (1983) A unifying view of hybrid simulation/analytic
models and modeling. Operations Research 31(6):1030–1052
Supply Chain Council (2009) SCOR Model. URL http://www.supply-
chain.org/cs/root/s/scor_model/scor_model
Chapter 14
Rapid Modeling In A Lean Context

Nico J. Vandaele and Inneke Van Nieuwenhuyse

Nico J. Vandaele, Research Center for Operations Management, Department of Decision
Sciences and Information Management, K.U. Leuven, 3000 Leuven, Belgium,
e-mail: Nico.Vandaele@econ.kuleuven.be
Inneke Van Nieuwenhuyse, Research Center for Operations Management, Department of
Decision Sciences and Information Management, K.U. Leuven, 3000 Leuven, Belgium,
e-mail: Inneke.VanNieuwenhuyse@econ.kuleuven.be

Abstract Lean management is widespread but theoretical models that scientifically
substantiate the lean practice are scarce. We show how queuing models of manufac-
turing systems and supply chains underpin the practice of Lean. Two quantitative
performance models which relate the system parameters with the system perfor-
mance in terms of lead time and throughput will be discussed, including an exoge-
nous definition of Lean. We show that the ideal levels (i.e., the Lean levels) of the
system buffers (safety capacity, work-in-process and safety time) are determined by
the targeted system performance. Moreover, the lean concept is dynamic in nature:
when either system characteristics or target performance change, the lean buffer
levels change accordingly. The latter stresses the need for a comprehensive, analyti-
cal and consistent approach. This quantitative approach will be illustrated with lead
time and throughput models.

14.1 Introduction

In industrial practice, the concept of Lean operations management is the hype of the
new millennium. It consists of a set of tools that assist in the identification and steady
elimination of waste (muda), the improvement of quality, and production time and
cost reduction. The concept of Lean operations is built upon decades of insights and
experience from Just-In-Time (JIT) applications. Since the first articles and books
appeared, a tremendous number of publications became available for the practitioner
(e.g. Womack et al (1990), Womack and Jones (1997), Liker (2004)). Many big
companies adopted company-wide leadership programs to get their company on the
lean track.
In academia, the Lean concept has received rather limited attention, based on the
reproach that Lean did not offer much more on top of the traditional JIT body of
knowledge, and hence should be qualified primarily as a philosophy without rigor-
ous scientific foundations. This is only partly true. It is indeed true that the range
of lean management techniques is often rather descriptive, failing to offer analytical
modeling and/or optimization tools that could guide managerial decision making.
Consider for instance the heavily promoted tool Value Stream Mapping as presented
in the best seller 'Learning To See' (Rother and Shook, 1999). Value Stream Map-
ping forces the user to map the logical structure of the processes and adds valuable
quantitative data both on system parameters and on performance measures. However,
Value Stream Mapping is incapable of giving insight into the relationships between
the data and performance measures, as well as the links between various elements
of the mapped processes.
In this paper, we present two quantitative models to underpin the definition of
lean in terms of the lead time and the throughput of a flow system. As will be shown,
the quantification of Lean boils down to an adequate match between the system’s
parameters and target system performance. A mismatch between the two can cause
a system to be either obese or anorectic.

14.2 The Quantification Of Lean

In order to develop a quantitative approach to the Lean concept, we will rely on some
basic stochastic models for flow systems. Flow systems are systems where a set of
resources is intended to perform operations on flows (see Vandaele and Lambrecht,
2003). Some illustrative examples are listed in Table 14.1.

Table 14.1 Some flow system examples

Flow System            Typical Resources                      Typical Flows
Production line        Machines, workers                      Products
Production plant       Machines, internal transport           Products
Hospital               Hospital beds, physicians, nurses      Patients
Airport                Counters, desks                        Passengers
Traffic                Roads, traffic lights                  Cars, trucks
Laboratory             Equipment, laboratory assistants       Samples
Computer network       Servers, data lines                    Data, messages
Mobile phone network   Antennas, transmitters, buffers        Calls
Insurance company      Inspectors, account managers           Files

These examples show the rich variety of flow systems. All these systems share
some common physical characteristics: on their routing through the system, flows
visit resources in order to be processed, and hence consume (part of the) capacity of
the resources. This competition for capacity causes congestion: flows may need to
queue up in front of the resources. This congestion in turn inflates the lead time of a
flow entity through the system.
These basic mechanics of a flow system imply that every decision related to the flow has consequences for the resource consumption over time. For instance, once lead time off-setting for manufacturing components in an assembly setting is performed, resources (i.e., capacity) need to be committed in order to be able to perform the required processes for the components. Vice versa, all resource-related decisions have an impact on the flow: scheduled maintenance, for instance, will temporarily impede flow, while sequencing decisions cause certain flows to proceed while other flows need to wait. Consequently, flow systems contain three fundamental decision dimensions: flows, resources and time. If a flow system is to be managed in an effective and efficient way, the management decisions must consider the flow, resource and time aspects simultaneously, symbolized by the intersection visualized in Fig. 14.1.

Fig. 14.1 The basic dimensions and buffers of a flow system

In what follows, we assume the flow system to be stochastic, i.e. both the flows and the resources are subject to variability. In real-life systems, causes of system variability are omnipresent: quality problems, resource failures, stochastic routings, randomness, etc. (see for instance Hopp and Spearman, 2008). It is known that the presence of variability influences system performance in a negative way (Vandaele and Lambrecht, 2002). Important system performance measures are resource utilization, flow time, inventory, throughput and various forms of service level. Some of these (e.g. flow time and inventory) are flow oriented, while others (such as utilization and throughput) are resource oriented.
In order to maintain an acceptable performance in a stochastic environment, a flow system has to operate with buffers (Vandaele and De Boeck, 2003). In line with the three basic system dimensions mentioned above, three types of buffers may be used: inventory buffers (e.g. safety stocks, work-in-process, ...), capacity buffers (spare capacity, temporary labor, ...) and time buffers (safety time, synchronization buffers, ...). Any particular combination of the three buffers eventually leads to a
specific performance level. These buffers can be interchanged in order to reach a
desired performance level. For instance, short lead times combined with the absence of work-in-process (mandatory in, e.g., Just-In-Time systems) can only be guaranteed with lots of excess capacity. The amount of excess capacity (the capacity buffer) can be reduced if variability and/or randomness are eliminated (see Hopp and Spearman, 2008). Tight capacity typically leads to high utilization, at the cost of long lead times and high work-in-process.
Consequently, there may be several combinations of the fundamental buffers leading to the same performance. These can be considered technically equivalent buffer combinations. A monetary evaluation in terms of costs and revenues can then lead to the best economic choice between the technically equivalent options. The nature of the system largely determines which type of buffer is feasible (theoretically, or in practice). For instance, service systems do not have the possibility to employ an inventory buffer; legal advice or consulting services are typical examples. Likewise, large time buffers in emergency systems (fire brigades, medical rescue teams) are unacceptable. As a consequence, these systems are known to operate with huge amounts of safety capacity.
In this view, a system is defined as "Lean" when the buffers present in the system are restricted to the minimum level necessary to support the target performance (Hopp and Spearman, 2008). This is referred to as the Lean level. Consequently, all buffering in excess of that necessary minimum can be considered obese, in the sense that there is too much buffering for the desired performance. This state of obesity can manifest itself as too much inventory, too much safety time or overcapacity. An obese system could reach its target performance with either smaller buffers or a better allocation of buffers. If these excess buffers are systematically reduced while keeping up the desired system performance, the system gets leaner. However, excessive reduction of buffers will cause system performance to erode: the target performance will eventually become unachievable. In these situations, we characterize the system as anorectic.
Note that the above definition of Lean is not static, as it depends both on the system characteristics and on the targeted performance. Consequently, the more demanding the system's performance targets, the more buffering will (minimally) be necessary: hence, the Lean level will change. The specific Lean level may also differ across systems. For example, the right amount of safety capacity for a car assembly plant differs from that of a truck manufacturer, even if both want to be considered lean for similar performance objectives (e.g. a customer order delivery time of three months). Further, if the performance objectives vary through time (for instance with increased competition), the Lean level will change to reflect the new conditions. Note also that the system's characteristics vary over time (e.g. the inherent variability of the system may decrease as a consequence of system improvements, product mix changes, etc.). This also impacts the Lean level. Following these arguments, the definition of Lean is dynamic in nature.
We will now illustrate these concepts with two basic stochastic models, an M/M/1
queuing model and a model of a production line with limited work-in-process al-
lowance.

14.2.1 Lead Time: Safety Time Versus Safety Capacity

In this section we consider a system consisting of a single server, processing a single product type. The system's queueing behavior can be modeled as an M/M/1 system (see e.g. Anupindi et al, 2006), with an arrival rate λ and a processing rate μ. The desired customer service level is defined by S (0 < S < 1), and Ws refers to the S percentile of the flow time of products through the system. In a make-to-order (MTO) system, Ws would be the lead time quote necessary in order to guarantee a delivery service level of S. The performance measures of interest are listed in Table 14.2.

Table 14.2 System performance

Performance measure                        Definition  Expression
Utilization                                ρ           λ/μ
Expected time in the queue (waiting time)  Wq          (1/μ)(ρ/(1−ρ))
Expected number of units in the queue      Nq          ρ²/(1−ρ)
Expected time in the system (lead time)    W           (1/μ)/(1−ρ)
Expected number of units in the system     N           ρ/(1−ρ)
Lead time quote (S percentile)             Ws          (1/μ)/(1−ρ) × ln[1/(1−S)]
Safety time                                Wsafe       Ws − W = W × (ln[1/(1−S)] − 1)
Safety capacity                            Csafe       1 − ρ
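
As these relationships are all closed-form, they can be evaluated with a few lines of code. The following minimal Python sketch computes the Table 14.2 measures; the parameter values (lam, mu, S) are illustrative assumptions, not taken from the chapter.

import math

lam, mu, S = 0.9, 1.0, 0.95           # illustrative arrival rate, service rate, service level
rho = lam / mu                        # utilization
W = (1.0 / mu) / (1.0 - rho)          # expected time in the system (lead time)
Wq = rho * W                          # expected waiting time in the queue
Nq = rho ** 2 / (1.0 - rho)           # expected number of units in the queue
N = rho / (1.0 - rho)                 # expected number of units in the system
Ws = W * math.log(1.0 / (1.0 - S))    # lead time quote (S percentile)
Wsafe = Ws - W                        # safety time
Csafe = 1.0 - rho                     # safety capacity
print(rho, W, round(Ws, 2), round(Wsafe, 2), Csafe)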

Given these performance measures, we can quantify the concept of buffer substitution, the dynamic definition of lean and the impact of six sigma. These will be shown in Figs. 14.2 and 14.3, respectively.
The concept of system buffering is illustrated in Fig. 14.2, where W and Ws (with S equal to 0.95) are shown as a function of the utilization ρ (the arrival rate varies from 0.05 to 0.95 while the service rate equals 1). First, the strongly non-linear behavior of the lead time as a function of ρ can be observed. As a consequence, the lead time quote, which includes safety time, grows increasingly with higher utilizations and with higher service levels. Therefore we can conclude that the amount of safety time grows with increasing utilization. It can also be clearly seen that a smaller amount of safety time can be reached with a higher level of safety capacity, and vice versa.
In general, the desired lead time is determined by market conditions. If the company policy is such that a service level S has to be provided, this desired lead time quote needs to coincide with Ws. Given Ws and W for the company, the amount of safety time can be derived. From the relationships in Table 14.2, the corresponding utilization equals

ρ = 1 − ln[1/(1−S)] / (μ Ws)    (14.1)
and the corresponding safety capacity equals

Csafe = 1 − ρ = ln[1/(1−S)] / (μ Ws)    (14.2)

Fig. 14.2 Safety time versus safety capacity
In this way we can derive

Csafe = ln[1/(1−S)] / (μ (W + Wsafe))    (14.3)

where the trade-off can be clearly observed.
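
To make the trade-off concrete: substituting W = (1/μ)/(1 − ρ) and Csafe = 1 − ρ from Table 14.2 into Eq. (14.3) gives Csafe × Wsafe = (ln[1/(1−S)] − 1)/μ, so safety capacity and safety time are inversely proportional for a fixed service level. The Python sketch below evaluates this relationship for a few candidate safety times; the parameter values are illustrative only, and feasibility requires Csafe < 1.

import math

mu, S = 1.0, 0.95
k = (math.log(1.0 / (1.0 - S)) - 1.0) / mu   # trade-off constant, Csafe * Wsafe = k
for Wsafe in (2.5, 4.0, 8.0):                # candidate safety times
    Csafe = k / Wsafe                        # required safety capacity
    print(Wsafe, round(Csafe, 3), round(1.0 - Csafe, 3))   # safety time, capacity, utilization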


At this point we can state the quantitative definition of Lean: it is the status where the desired performance is reached with the minimum amount of buffering. For a target service level S and the customer-determined lead time quote Ws, the resulting utilization ρ is given by Eq. (14.1). If a system operates at a lower utilization level (or, equivalently, with more safety capacity), this results in a lower lead time quote than required. As such, we have overcapacity with respect to the performance needed: this is the obese state of the system. On the other hand, utilizations exceeding ρ lead either to lead time quotes that do not meet the market requirements, or to service levels that do not meet the company's targets. This is the anorectic state of the system.
Finally, an extension of the model (G/G/1, see Hopp and Spearman, 2008, for a discussion) can be used to quantify the impact of six sigma projects. From the equation of Ws, we can draw the relationship for various improved cases relative to the base case (M/M/1), materialized in different levels of variability of the arrival and/or service processes. Reduced variability typically shifts the quoted lead time curve to the lower-right corner. For the same customer-preferred quoted lead time, the quote Ws can then be realized while operating at a higher utilization: the same performance with more sales volume (an expansion-based strategy), or the same sales volume with fewer resources (a rationalization strategy). An alternative way to profit from the project improvements is to operate at the same utilization, which simply leads to sharper lead time quotes; this embraces an aggressive, competitive strategy. Of course, all combinations of higher utilization and improved performance are alternative paths towards the shifted improvement curves. This is visualized in Fig. 14.3, where Ws1, Ws2 and Ws3 represent low, high and medium variability, respectively. Typically, as six sigma projects attack system variability under a continuous improvement framework, the three curves stand for successive, systematically implemented improvements.

Fig. 14.3 Quantification of Six Sigma, increasing levels of variability
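
As a rough numerical illustration of this shift (a sketch under stated assumptions, not the authors' exact computation), one can combine Kingman's well-known G/G/1 approximation for the mean waiting time with the same exponential-tail percentile rule used above for the M/M/1 case; the squared coefficients of variation ca2 and cs2 of the arrival and service processes then act as the six sigma levers.

import math

def quoted_lead_time(rho, mu, ca2, cs2, S):
    # Kingman approximation for the mean queueing time, plus the service time
    Wq = ((ca2 + cs2) / 2.0) * (rho / (1.0 - rho)) / mu
    W = Wq + 1.0 / mu
    # crude exponential-tail approximation of the S percentile of the lead time
    return W * math.log(1.0 / (1.0 - S))

mu, S = 1.0, 0.95
for c2 in (1.0, 0.5, 0.25):    # ca2 = cs2 = 1 corresponds to the M/M/1 base case
    print(c2, [round(quoted_lead_time(r, mu, c2, c2, S), 1) for r in (0.5, 0.8, 0.9)])

Lower variability produces uniformly shorter quotes at every utilization, which is exactly the downward shift of the Ws curve described above.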

14.2.2 Throughput: Work-In-Process Versus Safety Capacity

The production line consists of m identical single-machine stations in series. Each server has exponentially distributed processing times, with processing rate μ. Hence, the system is balanced and the bottleneck rate (BNR) equals μ. The line behaves as a CONWIP system: the work-in-process is held constant and is equal to N units. The performance measures of interest are listed in Table 14.3 (Hopp and Spearman, 2008).

Table 14.3 System performance

Performance measure                       Definition  Expression
Expected time in the system (lead time)   W           m/μ + (N − 1)/μ
Expected throughput                       TH          N/(m + N − 1) × μ
Safety capacity                           Csafe       BNR − TH
Utilization of each server                u           N/(m + N − 1)

These relationships may be used to illustrate the trade-off between safety capacity and work-in-process. Figure 14.4 illustrates the relationship between N and TH, for a line consisting of 5 stations with BNR = μ = 1 unit per minute. We call this setting the "base case". The figure shows the strong non-linear, concave behavior of TH in terms of N. Safety capacity Csafe hence decreases as WIP increases. Given the characteristics of the system (i.e., the processing rate μ of each of the servers), higher TH can be obtained at the price of higher WIP and lower safety capacity.
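
The base case is easy to reproduce numerically. The following minimal sketch evaluates the Table 14.3 relationships for the 5-station line with μ = 1 unit per minute; the WIP levels below are arbitrary illustration points.

m, mu = 5, 1.0                 # stations and bottleneck rate (units per minute)
for N in (1, 2, 4, 8, 16):     # CONWIP levels (illustrative)
    W = (m + N - 1) / mu       # lead time
    TH = N / (m + N - 1) * mu  # throughput
    Csafe = mu - TH            # safety capacity (BNR - TH)
    u = N / (m + N - 1)        # utilization of each server
    print(N, W, round(TH, 3), round(Csafe, 3), round(u, 3))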

Fig. 14.4 TH in terms of N for a 5-station line with BNR = μ = 1 unit per minute (base case)

The expression for TH in Table 14.3 can be used to quantify the trade-off between safety capacity and WIP for systems targeting a market-determined throughput rate TH. As TH = N/(m + N − 1) × μ, a decrease in N without jeopardizing TH can only be obtained by increasing the bottleneck rate μ (and, hence, by increasing the capacity of the line). Consequently, the same throughput level TH can only be obtained with lower WIP at the price of extra safety capacity. This is visualized in Fig. 14.5. Assuming a market-determined TH equal to 40 units per hour, the base-case system would require N = 8 units, with Csafe = 20 units/hr. The WIP level can be cut by half (N = 4 units) without impacting the throughput rate (TH = 40 units per hour) when the capacity of the system is increased to BNR = μ = 80 units per hour = 4/3 units per minute, implying a safety capacity Csafe = 40 units/hr.
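
These figures can be checked directly from TH = N/(m + N − 1) × μ; a minimal verification in Python:

m = 5
for mu, N in ((60.0, 8), (80.0, 4)):   # bottleneck rate in units/hr, WIP level
    TH = N / (m + N - 1) * mu          # throughput in units/hr
    print(mu, N, TH, mu - TH)          # TH = 40.0 in both cases; Csafe = 20.0, then 40.0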

Fig. 14.5 TH and safety capacity in terms of N, for the base case and the increased capacity case

Given the system's characteristics and the market-determined TH, the lean levels of WIP and Csafe can be defined as the combination of WIP and Csafe that yields TH. A system employing higher WIP levels can be characterized as obese: indeed, as the market determines TH, the inherent capability of the system to achieve a throughput higher than TH will not result in additional sales. Conversely, a system employing lower WIP levels is anorectic, as it is incapable of achieving the desired TH.
The impact of six sigma projects can be illustrated in an analogous way. It is known from the literature that the reduction of system variability shifts the TH curve to the upper left corner (e.g., Hopp and Spearman, 2008). This is shown in Fig. 14.6 (the TH curve for the reduced variability case). Consequently, six sigma either allows a target TH to be obtained with lower WIP (and, hence, lower working capital) or allows TH to be increased for the same level of WIP (if this is desired in view of satisfying additional sales). Obviously, all combinations of lower WIP and improved TH represent alternative paths towards the shifted improvement curves.

Fig. 14.6 Quantification of Six Sigma

14.3 Conclusion

In this paper we offered a quantitative approach to underpin Lean operations management. We showed that, based on stochastic models, lean is a dynamic concept which depends on the desired system performance (which tends to be imposed by the market) and on the system's characteristics. The models can be used to illustrate
the substitution of different buffer types (safety time, safety inventory and safety
capacity) needed in order to reach the target performance. In addition, the system
improvements from six sigma projects can be analyzed accordingly. Future research
may focus on how these concepts can be extended towards larger and more realistic
networks.

References

Anupindi R, Deshmukh S, Chopra S, Van Mieghem J, Zemel E (2006) Managing Business Process Flows, 2nd edn. Prentice-Hall
Hopp W, Spearman M (2008) Factory physics. McGraw-Hill
Liker J (2004) The Toyota way: 14 management principles from the world’s greatest
manufacturer. McGraw-Hill
Rother M, Shook J (1999) Learning to see. Lean Enterprise Institute
Vandaele N, De Boeck L (2003) Advanced Resource Planning. Robotics and Com-
puter Integrated Manufacturing 19(1-2):211–218
Vandaele N, Lambrecht M (2002) Planning and Scheduling in an Assemble-to-
Order Environment: Spicer-Off-Highway Products Division. In: Song J, Yao
D (eds) Supply Chain Structures: Coordination, Information and Optimization,
Kluwer Academic Publishers, pp 207–255
Vandaele N, Lambrecht M (2003) Reflections on stochastic manufacturing models for planning decisions. In: Shanthikumar J, Yao D, Zijm WHM (eds) Stochas-
tic Modeling and Optimization of Manufacturing Systems and Supply Chains,
Kluwer, pp 53–86
Womack J, Jones D (1997) Lean Thinking: Banish Waste and Create Wealth in Your
Corporation. Touchstone, London
Womack J, Jones D, Roos D (1990) The Machine that changed the World. Macmil-
lan Publishing Company
Part III
Case Study and Action Research
Chapter 15
The Impact of Lean Management on Business
Level Performance and Competitiveness

Krisztina Demeter, Dávid Losonci, Zsolt Matyusz and István Jenei

Abstract In this paper we describe a model that investigates the impact of lean management on business competitiveness. We hypothesize that business competitiveness depends on organizational competences (including both the static level of operational capability and the dynamic capabilities of improving and adapting to changing internal and external conditions) and business performance. The lean literature provides an unbalanced picture of the elements of business competitiveness: while several studies discuss the impact of lean on static operational measures, there are far fewer studies about the relationship between lean and 1) organizational changes and responsiveness, and between lean and 2) business performance. In the empirical part of our paper we focus on the latter issues using both case studies and questionnaires. With our case-based research (using two original cases and relying on several ECCH cases) we can clearly highlight how lean affects organizational responsiveness through employees, and how it leads towards higher business competitiveness. Our analysis is unique in the sense that we could relate the case-based analysis to the perspective of employees, since in our original cases several employees (83 and 93) filled in a questionnaire that showed the impact of lean tools and methods on them, as well as their opinion about the improvements both at operational and business levels.

Krisztina Demeter
Department of Logistics and Supply Chain Management, Corvinus University of Budapest, Fovam ter 8, H-1093 Budapest, Hungary,
e-mail: krisztina.demeter@uni-corvinus.hu
Dávid Losonci
Department of Logistics and Supply Chain Management, Corvinus University of Budapest, Fovam ter 8, H-1093 Budapest, Hungary,
e-mail: david.losonci@uni-corvinus.hu
Zsolt Matyusz
Department of Logistics and Supply Chain Management, Corvinus University of Budapest, Fovam ter 8, H-1093 Budapest, Hungary,
e-mail: zsolt.matyusz@uni-corvinus.hu
István Jenei
Department of Logistics and Supply Chain Management, Corvinus University of Budapest, Fovam ter 8, H-1093 Budapest, Hungary,
e-mail: istvan.jenei@uni-corvinus.hu

15.1 Introduction

Nowadays, lean management is having its second heyday (Schonberger, 2007; Holweg, 2007). Several companies, many of them outside the automotive industry, implement lean management hoping to achieve competitive advantage. Their hope is fueled by the success of Toyota and other car manufacturers and their suppliers (Liker and Wu, 2000). Large international surveys support the view that pull, customer-driven production systems, the basics of lean management, are among the sources of competitive advantage today (Laugen et al, 2005). Unfortunately, these studies did not investigate whether improving competitive resources and better operational performance really affect financial performance. Huson and Nanda (1995) were unable to present a clear link between JIT and profitability, while Voss (1995, 2005) suggests that competitiveness would improve, though that term is not defined in any way. One can also rarely find empirical studies that focus on the organizational change requirements of lean transformation beyond the usual lean tools and principles, even though it has been widely accepted for decades that human resources play an outstanding role in lean transformation, and thus lean requires substantial changes in employees' and managers' perspectives and everyday work (Sugimori et al, 1977; Hines et al, 2004).
In this paper we discuss the changes triggered by lean (the introduction of new tools, methods and principles) and their results through case studies. We emphasize the role of human resources in this process, and hence present not only top management's opinion about lean management, but also the workers' impressions.
We begin with a definition of business competitiveness, which sets up the the-
oretical frame for the research. Then the existing literature about the relationship
between lean management and business competitiveness is summarized. After hav-
ing described the research methodology, the empirical results of our study follow.
The final part discusses the results and the limitations of the research.

15.2 The Research Framework

15.2.1 The Building Blocks of Business Competitiveness

We can hardly find comprehensive and well-developed definitions of business competitiveness in the operations management literature. Therefore we cross the border of operations management and start the discussion of business competitiveness based on the definition of Chikán (2006):
“Business competitiveness is a competence of the company that allows the company to provide products and services for customers within the standards of social respon-
sibilities, that are (i) preferred to the products and services of other competitors and
(ii) provide profit for the company. The prerequisite of this business competitiveness
is that the company is capable of properly evaluating and adapting to the internal
and external changes, while achieving sustainable competitive advantage.” (Chikán
2006, p.46).
Thus, in order to achieve business competitiveness, the company has to provide a service package which is considered by the customer to be better than the competitors' (competitive advantage), while being valuable for the company too (i.e. providing profit). Sustainable competitive advantage refers to the fact that the adequate service package should be provided not only in the present but, by adapting to the external and internal environment, also in the future.
The top, broken-line part of Fig. 15.1 represents this model, where each leg of business competitiveness provides a measurable "picture" of organizational capabilities. Business competitiveness is affected by the capability to operate, the capability to change and their market recognition (business performance). The business performance behind sustainable business competitiveness originates from the capability to operate, while adaptation to a dynamic environment is guaranteed by the capability to change. This approach, however, does not provide a link between the building blocks of capabilities and their results. Thus we propose a two-level model, where both the results and the building blocks behind them can be discovered. The level "under" the results relies on Fig. 15.1 and refers to various methods and levels of combining organizational resources according to defined objectives (Swink and Hegarty, 1998; Hayes and Pisano, 1994; Gelei, 2007). In a lean environment a new tool or principle can be treated as a new combination of organizational resources, e.g. the introduction of a cell as a production unit. Undisputedly, organizational capabilities finally materialize in some kind of performance measure. The realized performance (cost/price, quality, flexibility, reliability and services; see Flynn and Flynn 2004; Li 2000) or the potential capability for this performance (Chikán, 2006) forms the three measurable legs of business competitiveness (top, broken-line part of Fig. 15.1). If we take the previous example again, a cell means shorter lead time and more flexible production processes on the operational performance side. Business competitiveness can be investigated in its true complexity if we rely on both the realized performance and the resource combination perspectives as complements. The organization, represented by the three measurable legs, can only adapt to the dynamic environment and retain and improve operational competitiveness if it improves continuously and combines organizational resources in new ways (e.g. based on lean management principles). The real sources of competitiveness stem from improved individual capabilities, as well as from redesigned and continuously changing workplace practices and operating routines.

Fig. 15.1 Components of business competitiveness. Based on Chikán (2006) and Gelei (2007)

15.2.2 Lean Management and Business Competitiveness

Lean management was already a hot topic in operations management in the early 1980s (Schonberger, 2007) and flourished in the US (Holweg, 2007) under the name of just-in-time (JIT). In the 1990s lean management became the dominant strategy for organizing production systems (Karlsson and Ahlström, 1996). Moreover, Hines et al (2004) considered it the most influential paradigm of operations management. Despite this, the reported effects of lean management on business competitiveness were anecdotal or case-based, lacking any deeper insight into real managerial issues. From another point of view (Voss and Blackmon 1994, cited by Davies and Kochhar 2002), operating practice in the field of operations management is crucial for operating performance, and operating performance is crucial for operational competitiveness. Following this logic, Voss et al (1997) state, without empirical support, that outstanding operating performance leads to outstanding business performance and competitiveness. Adapting this logic to lean management, leaning production improves production performance, and thus production competitiveness, all of which contribute to business performance and competitiveness. Thus lean management, as one of the best practices of world class manufacturing, finally leads to improved business competitiveness (Voss, 1995; 2005). Overall, the intuitive relationship between lean and competitiveness is strong, but real practical evidence on this issue is missing.

15.2.2.1 The Impact of Lean Management on Business Competitiveness: The Capability to Operate

Schmenner (1988) concluded that "Out of many potential means of improving productivity, only the JIT-related ones were statistically shown to be consistently effective". Shah and Ward (2007) found a positive relationship between lean management and outstanding operating performance, and added that the relationship is well accepted among researchers and practitioners (see their referenced sources, e.g., Krafcik (1988); MacDuffie (1995); MacDuffie et al (1996); Shah and Ward (2003); Womack and Jones (1996)). According to the literature, lean practices heavily impact inventory turnover, quality, lead time, labour productivity, space utilization, flexibility (volume and mix) and costs (Crawford et al, 1988; Huson and Nanda, 1995; Flynn et al, 1995; MacDuffie et al, 1996; Karlsson and Ahlström, 1996; Sakakibara et al, 1997; Boyer, 1998; McKone et al, 2001; Cua et al, 2001). Thus lean practices inevitably impact operating performance dimensions positively; moreover, concurrent applications of various practices seem to have a synergistic effect, strengthening each other (Crawford et al, 1988; Cua et al, 2001; Flynn et al, 1995; Sakakibara et al, 1997; Boyer, 1998; McKone et al, 2001; Shah and Ward, 2007). To summarize this part, there is ample empirical support that lean management contributes to business competitiveness through the capability to operate.

15.2.2.2 The Impact of Lean Management on Business Competitiveness: The Capability to Change

According to Fig. 15.1, the capability to change consists of four areas: (1) market relations, (2) personal skills, (3) decision making and communication and (4) the level of innovativeness. The relationship of lean management with these areas is in most cases plausible, but less evident empirically. Nonetheless, all of them are important in lean transformations. Just think about the importance of (1) market relationship quality in (i) balancing production load forward and backward in the supply chain, (ii) identifying the customer value (the first principle of lean thinking, Womack and Jones (1996)), or (iii) organizing JIT supplies (a basic lean element). The (2) human factor is crucial in lean transformations: "Needless to say, sophisticated technologies and innovative manufacturing practices alone can do very little to enhance operational performance unless the requisite human resource management (HRM) practices are in place" (Ahmad and Schroeder, 2003, p.19). This statement is supported by the fact that human resources (under the name of cross-functional work force) are among the most frequent practices within lean management (Shah and Ward, 2003, 2007). (3) Decision making and communication systems play a central role in today's organizations, where information flow and knowledge have an enormous effect on value creating processes. The latest HR practices, such as empowerment and decentralization, which occur in lean as well, shape the structure of these systems. Areas (2) and (3) overlap heavily and cannot be handled in isolation from each other. Hence we integrate them in the empirical part of this study. The effect of lean
management on (4) the innovativeness of the company (e.g. R&D expenditures) still shows contradictory results. Several researchers state that excessive elimination of waste "cripples" innovative ideas and the extent of developments decreases (Lewis, 2000). A counter-example is Toyota, which brought the Prius to the market far earlier than its competitors, years before the era of hybrid-driven cars (Liker, 2008). The human factor serves as a basis for two elements of the capability to change, and both seem to determine the success of the lean transformation. Since areas (2) and (3) are relevant from the very beginning, they should be developed in parallel with lean tools and principles during the lean transition. In spite of this, these elements are rarely in the focus of empirical works, especially from the shop floor workers' point of view.

The Human Factor in Lean Management

Emphasizing people is a must in lean management: it follows from the logic of its operations. Since process dependence increases with the elimination of buffers, production problems surface immediately. Thus the demand for a motivated and adaptable work force is obvious (Sugimori et al, 1977; MacDuffie, 1995). Shah and Ward (2007) reached the same conclusion: employees working in cross-functional, self-managed teams are faster and more efficient in solving identified problems. MacDuffie (1995) and Shah and Ward (2007) proved that companies relying on HR practices as an integral part of their lean production system can achieve better results (this is also supported by Wood (1999)). Interestingly, the HR literature does not discuss the issue of lean (Wood, 1999), with the exception of Birdi et al (2008). If we consider the effect of lean on employees, there are two opposing views (Delbridge et al, 2000). Supporters, mostly researchers of OM, see the positive effects of lean management on employees (Legge, 2005). Others (Berggren, 1993; Landsbergis et al, 1999; Lowe, 1993; Skorstad, 1994; Wood, 1999) emphasize the "dark side" of lean (e.g. work intensity, reduced autonomy, overtime, increased horizontal load, etc.). Our paper brings some new thoughts into this discussion by analyzing lean achievements (using business competitiveness as a framework) and HR related changes through the eyes of employees. Based on a comprehensive literature review (Sugimori et al, 1977; Crawford et al, 1988; Flynn et al, 1995; MacDuffie, 1995; Sakakibara et al, 1997; Boyer, 1998; McLachlin, 1997; Cua et al, 2001; Hines et al, 2004; Shah and Ward, 2007) we summarize the most important HRM practices in relation to lean management:
• Education and training, cross-functional work force;
• Decentralization and empowerment;
• Team work;
• Information flow and feedback.
The elements above overlap with the most important HRM practices of the dom-
inant HRM model, which considers people as valuable assets (see Legge 2005 and
Pfeffer 1998).

15.2.2.3 The Impact of Lean Management on Business Competitiveness: Business Performance

Surprisingly, in spite of the popularity of lean management in the last decades, operations management research has still not empirically supported the relationship between lean management and business performance. Impacts on operating performance are obvious, but few studies tie operating performance to financial performance. The work of Huson and Nanda (1995) is exceptional in this respect, even if their conclusion is that the real impact of just-in-time on profitability is ambiguous. Lewis (2000), in his case-based research, argues that "Becoming lean does not automatically result in improved financial performance", since "[t]he benefits of lean production can very easily flow to powerful players" (Lewis, 2000, p. 975). Nonetheless, the investigation of the effect of lean management on financial performance is very important. Besides proving the direction of the relationship, another goal is to uncover those elements that influence the quality of the relationship. Based on the literature review (which reflects the view of top management), Table 15.1 summarizes the questions investigated in this paper. First we analyze the three elements of competitiveness with the help of case studies that reflect top management opinion ((I)-(III)). After the operational results of our companies (I) we discuss the ability of responsiveness that supports internal and external adaptation (II), where we give special attention to HR related changes because of their important role in lean implementation and sustainability. Finally we investigate the relationship between lean and financial performance (III). Throughout the questionnaire analysis we assume that good results may motivate employees. The real motivating factor, though, is the feeling of being part of the changes (V), not just the experience of the good results ((IV) and (VI)).

15.3 Methodology

15.3.1 Case Studies and Survey

Strengthening the role of qualitative research (case study research included) is a long-time need in the field of OM (Meredith et al, 1989). Over the years many excellent OM researchers have written some kind of "teaching note" for case study research (e.g. McCutcheon and Meredith, 1993; Meredith, 1998; Stuart et al, 2002; Voss et al, 2002), and nowadays it is one of the standard empirical research methods (Gupta et al, 2006). Even so, only a handful of lean case studies have been published in the past few years. McLachlin (1997) used a methodology based on Yin (1989) to evaluate management initiatives necessary to implement JIT. Lewis (2000) assessed the relationship between lean production and competitive advantage by using case studies.

Table 15.1 Questions investigated in this paper

Case studies: Did our case companies' lean efforts result in the expected improvements?
Lean literature (I. Capability to operate): Well documented; the positive effects of lean management on operative measures are confirmed.
Surveys (IV): Do employees regard lean as successful as managers do?

Case studies: What kind of changes were caused by the applied lean tools or principles in the elements of the capability to change? What kind of infrastructural changes supported lean?
Lean literature (II. Capability to change): All elements are essential in lean; in lean transformation, personal skills and decision making and communication seem to be crucial.
Surveys (V): What do employees perceive from the lean-oriented HR practices implemented in the case companies?

Case studies: What is the effect of lean on financial performance? Which factors influence this relationship?
Lean literature (III. Business performance): Anecdotal evidence, few empirical studies.
Surveys (VI): Do employees see a relationship between lean and profitability?

In our paper we use two case studies (namely the cases of Rába Automotive Components Manufacturing Ltd. and OKIN) to analyze the links between lean and competitiveness. The case studies were based on interviews (with middle and top management), company visits, top management questionnaires and company documents. The primary company selection criteria were an open attitude and a solid determination towards lean. In Table 15.2 we summarize the lean management tools applied by our two case companies, which indicates that the selected companies fulfill the selection criteria.
The survey method is a common tool for measuring management attitudes, while it is very uncommon to conduct employee surveys on a large sample in the field of lean; hence our approach is somewhat unique. The employee questionnaire consisted of 51 questions, and it was inspired by a previous survey (Tracy, 2004). The questionnaire was intended to capture the companies' expectations, the goals and implementation of the lean transformation, and the transformation's effects, results and changes regarding working conditions, tools, applied technology and intra-firm communication.

15.3.1.1 Rába Automotive Components Manufacturing Ltd.

Rába Automotive Components Manufacturing Ltd. is one of the three divisions of Rába Automotive Holding Plc. The case was written about the plant near the city of Mór (called Rába in this paper), with a turnover of around 30 million euros and 589 employees.

Table 15.2 Lean practices in case companies


Workplace practices, operating routines Rába Okin
Buyer, supplier relations
Supplier base within the plant
Kanban based raw material supply
Direct contact with material suppliers, incoming goods quality check
Intensified communication with customers
Just-in-time order fulfillment
Product development and technology
New equipment
Low cost automation, transformation of on hand equipment
Organization and control of manufacturing processes
Rearrangement of production lines and storage (value stream mapping)
Unified material-, info.-flow and organiz. (warehouse, production, dispatch)
Work environment (5S)
Pull material flow, Just-in-time, kanban
One piece flow
Production cells
Clear definition of tasks and responsibilities
Redesign of process control system and practices
Redesign of inventory management
Implementation of intra-process control (received materials, WIP)
Kaizen workshop / incremental improvement
Visual management (e. g. whiteboard)
Reduced changeover time
Human resource related changes
Lean coordinator (system engineer)
Team work
Training for the management (lean knowledge)
Training for middle managers and workers
New (partially performance based compensation system) motivation system
Internal (open) forum (for better communication)
Refurbishment of work environment
“Empowerment”, quality responsibility of workers

It is the main plant producing seat accessory parts (seat frames, car foams and seat trims) for Suzuki, a large OEM in Hungary. Moving towards lean management was essential in order to survive after some years of unprofitable operations and downsizing. Altogether 83 employees filled in the questionnaire at Rába (62% of all employees in the workplaces affected by the lean transformation).

15.3.1.2 OKIN Hungary Kft.

OKIN Hungary Ltd. has been owned by a German investment group since 2007. It has around 300 employees at a site in north-east Hungary, next to the city of Hajdúdorog. It assembles furniture motion mechanics in a large variety. Product design, orders, deadlines and customer relationship management are handled at the German headquarters. The lean transformation was pushed by the owners, because Central European wage advantages eroded strongly after new capacities were created in Far Eastern countries. The employees of four assembly lines filled in the questionnaire at the company; practically every employee who was present at the time gave his/her opinion (93 people).

15.3.2 Discussion: The Impact of Lean Management on Business Competitiveness

15.3.2.1 Case Studies

The Capability to Operate

Following the lean transformation, Rába experienced improvements in the dimensions of costs, productivity, delivery lead time, delivery dependability, inventory turnover, inventory level, space utilization, labor productivity and volume flexibility. In some dimensions the improvement was huge. Connecting services were not affected by lean. Okin achieved great improvements in productivity (shorter setup times), manufacturing lead time, quality and inventory turnover, thanks to the lean transformation and the better organization of processes. Studying the results of lean companies, the improvement of the elements of operational performance (costs, productivity, quality, manufacturing and delivery lead times, volume and time flexibility) seems evident; this is in line with expectations. These changes suggest success, and they reflect the proper application of the tools used for reorganizing the manufacturing process.

The Capability to Change

We can conclude from the Rába and Okin cases that the improving measures were caused both by the reorganization of manufacturing and by the changes made in the supporting infrastructure. The companies made the most changes in the manufacturing processes, in their control and in the (2) area of human resources (see Table 15.2). It is worth mentioning that, among the other dimensions of responsiveness, the area of (3) decision making and communication systems (which is closely connected to people) changed strongly, while the area of (1) market relations and (4) innovativeness did not change, or only to a small extent. The lack of innovativeness could be explained by the fact that it is not required by the companies' customers. The sole
obligation of the companies is to meet the customer specifications. The relationship with the customer at Rába was strong before the lean transformation too (just-in-sequence supply), and in the case of Okin it was a logical step forward to form
tighter cooperation with the customers. Lean was crucial in creating more intensive communication and relationships on the input side too. Workers at both companies learned lean basics to the extent necessary for their job. Workers were not given further education; on-the-job training and/or their previous knowledge was enough for the new tasks. There was also a lean manager appointed who coordinated the whole process. At Okin the lean transformation remained basically centralized and was built around the lean manager. Manufacturing decisions, which influenced the areas of (2) human resources and (3) decision making and communication, were delegated to the lowest level. With the participation of the workers a new flow (from bottom to top) appeared in the communication system (kaizen workshops, work process design), beginning a transition towards modern management systems. Our research highlights the fact that lean does not intend to improve competitiveness through innovativeness (technological advance, R&D); this may explain the mild interest in this topic among researchers. Lean affects responsiveness through (2) personal skills and the closely connected (3) decision making and communication elements. Our results suggest that human resource management (motivation, education, communication, decision making) appears as one of the most critical factors in lean. This topic should get much bigger interest in itself, not just as an integrated part of the manufacturing system.

Business Performance

During the years following lean implementation, output and sales revenue increased at both companies (1), but it would be a mistake to consider lean as the only factor behind this phenomenon. Lean should be regarded as a consequence of growth as well as a cause of it. At Rába, lean implementation as a way of efficiency-seeking was forced by the capacity reserved by the customer (anticipated growth) and the unprofitable business, while the improving performance created further utilizable capacity. At Okin the main causes were also capacity issues and more profitable prices. In both cases the companies were able to improve efficiency indicators ((3) and (6)) in a way that required no great investments. This is further strengthened by the increased inventory turnover (though Okin has a very special inventory policy). The companies' business performance (ROS (4) and operating profit (3)) does not improve automatically with the implementation of lean. Business performance is affected by several other factors, e.g. industry, market position (Rába: second-tier supplier; Okin: mother company), competition intensity (Rába: increasing presence of competitors and OEMs in the Central European region; Okin: Far Eastern manufacturing sites), power (Rába: OEM as customer; Okin: mother company transfer prices), product characteristics (complexity, substitution, product range), product development capability, etc.

Table 15.3 Rába's business performance (2004-2007)¹. Source: Company data

                                                   2004         2005         2006         2007
(1) Sales (thousand Ft)                       9 630 042   10 032 572   12 544 893   16 768 960
(2) Total expenses (thousand Ft)            -10 390 086  -10 258 467  -12 135 328  -16 073 747
(3) Operating profit (thousand Ft) {(1)-(2)}   -760 044     -225 895      409 565      695 213
(4) Return on sales {(3)/(1)}                         -            -        3.26%        4.15%
(5) Infr., plant and equip. (thousand Ft)     2 924 334    2 401 493    2 401 493    2 603 025
(6) Sales on tangible assets (Ft) {(1)/(5)}       3.293        4.178        4.178        6.442
(7) Inventories (thousand Ft)                 1 143 899    1 028 104    1 028 104    1 098 665
(8) Inventory turnover {(2)/(7)}                  9.083        9.978        9.978       14.630

¹ The column in bold refers to the lean implementation year.

Table 15.4 Okin's business performance (2004-2007). Source: Company data

                                                  2004       2005        2006        2007
(1) Sales (thousand Ft)                        857 931    929 775   1 443 278   1 557 893
(2) Total expenses (thousand Ft)              -870 166   -995 340  -1 505 677  -1 530 321
(3) Operating profit (thousand Ft) {(1)-(2)}   -12 235    -65 565     -62 399      27 572
(4) Return on sales {(3)/(1)}                        -          -           -       1.77%
(5) Infr., plant and equip. (thousand Ft)      370 743    370 176     379 185     424 123
(6) Sales on tangible assets (Ft) {(1)/(5)}      2.314      2.512       3.806       3.673
(8) Inventory turnover {(2)/(7)}               171.057     86.906     122.662     144.703
(7) Inventories (thousand Ft)                    5 087     11 453      12 275      10 576
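
The derived rows of Tables 15.3 and 15.4 follow directly from the raw figures. As a minimal sketch, the following recomputes rows (3), (4), (6) and (8) for 2007 (thousand Ft; expenses entered as magnitudes, which the tables print with a minus sign), confirming among others the corrected Okin inventory turnover of roughly 144.7:

for name, sales, expenses, assets, inventories in (
        ("Raba", 16_768_960, 16_073_747, 2_603_025, 1_098_665),
        ("Okin", 1_557_893, 1_530_321, 424_123, 10_576)):
    profit = sales - expenses                  # (3) operating profit
    print(name, profit,
          round(100.0 * profit / sales, 2),    # (4) return on sales, %
          round(sales / assets, 3),            # (6) sales on tangible assets
          round(expenses / inventories, 3))    # (8) inventory turnover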

In Fig. 15.2 we summarize the findings of the case studies. Similarly to the expectations in Table 15.1, we conclude that lean had an obvious positive effect on the capability to operate. The companies show great improvement in their operational performance following the lean transformation. Measures of the capability to change are also better, reflecting better market relationships, a more skilled workforce, more advanced decision making systems and more intensive communication. The capabilities to operate and change together suggest the improvement of business competitiveness. But did the market accept this improving performance? Business indicators became better at both companies, to a greater extent at Rába and to a lesser extent at Okin. According to the case studies, the relationship between lean and business performance is not as strong as that between lean and the capabilities to operate and change, because here we have to deal with several other influencing contextual factors.

15.3.2.2 Survey

Fig. 15.2 Lean management and business competitiveness

Our case studies, in accordance with the literature, show the "beneficial" influence of lean on organizational dimensions. This statement is based on managerial interviews, managerial surveys and company data. Concerning the employees, there is very little (empirical) research, and, as we said earlier, it focuses mostly
on the working conditions under lean. We do not know whether employees perceive the "proven" success during their daily work. As knowing about success can be inspiring, their opinion and experience may facilitate the management of lean. The opinion of the employees, who operate the whole system, can hence give significant insights for a better understanding of the system. During the analysis of the employee surveys we kept the three "legs" of competitiveness, but with a slight modification: inside the capability to change we investigated only the critical HR practices identified by the previous lean literature (see Table 15.1).

The Capability to Operate

The questions in Table 15.5 are about the dimensions of the capability to operate. The employees could mark those statements with which they agreed.
Of the listed performance measures, improvement in productivity was the most frequently chosen one at both companies (83% and 50%), though the frequencies differ significantly. The distribution of employee answers indicates that the lead time/cycle time, the scrap ratio and quality became significantly better at both companies. As for the remaining measures (inventories, costs, process stability), Rába performed better. There is also another difference between the companies in the response rate, which was higher at Rába. This can be partly explained by the fact that the changes made at Rába were deeper and better communicated.

Table 15.5 Capability to operate - shop floor workers' perceptions

                                            Respondent rate (%)
Performance (multiple answers possible!)    Rába      Okin     Pearson Chi-Square
                                            N = 83    N = 93   (statistical significance)
After lean transformation, ... decreased in our company.
Lead time/cycle time                        49        34       3.772 (0.037)
Inventories                                 26        13       4.498 (0.027)
Scrap                                       48        44       0.237 (0.371)
Costs                                       44        13       19.639 (0.000)
All answers                                 167       104
After lean transformation, ... improved in our company.
Productivity                                83        50       18.502 (0.000)
Process                                     45        15       17.326 (0.000)
Quality                                     38        48       1.633 (0.131)
All answers                                 166       113

It is nonetheless strange that despite the radical changes only 40-50% of the employees perceived some improvement. The operational results of lean do appear at shop floor level, though to a much smaller extent than among the top managers, or than could be expected after a radical improvement. The difference between employee perceptions and reality can originate from many sources, e.g. internal communication, the focal points of the employee reward system or employee ignorance. It is worth thinking over how the achieved results can help in the acceptance and sustainability of lean.
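
The Pearson chi-square values in Table 15.5 compare the two companies' answer frequencies in 2x2 contingency tables. The sketch below reproduces the productivity comparison; because only rounded percentages are printed, the cell counts are reconstructed approximately, so the resulting statistic will not exactly match the published 18.502.

from scipy.stats import chi2_contingency

raba_yes, raba_n = round(0.83 * 83), 83    # counts rebuilt from rounded percentages
okin_yes, okin_n = round(0.50 * 93), 93
table = [[raba_yes, raba_n - raba_yes],
         [okin_yes, okin_n - okin_yes]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 3), round(p, 4))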

The Capability to Change

In this part we concentrate on the presence and effects of the critical lean HR practices identified earlier (Sect. 15.2.2.2). We investigate only the (2) personal skills and (3) decision making and communication parts of the capability to change. The results in Table 15.6 suggest that there are significant differences between the companies for some of the HR practices, and that the whole transformation was deeper at Rába.
Training. According to the literature, training is one of the key points, though the employees of both companies evaluated it only as "I slightly agree". Employees perceived some improvement compared with the previous routines, but only to a small extent. The most plausible explanation is that the employees only got the necessary knowledge about lean basics; beyond this there was no further education (see below for more details). Cross-functional workforce. Workers' perceptions reflect a higher horizontal workload and more supplementary activities to do. The latter originates in the quality approach of lean management: the worker is responsible for his/her work environment, since it affects product/service quality.

Table 15.6 Capability to change - shop floor workers about critical HR practices in lean²

HR practices (Rába, Okin)                             Average Rába  Average Okin  F (statistical significance)
Training
Learning is essential at my company. (78, 91)         3.88 (1.562)  3.21 (1.410)  8.729 (0.004)
Employees were or will be given some form of
training on how to use the technology/tools
that is required to implement lean. (81, 86)          2.88 (1.308)  3.13 (1.532)  1.293 (0.257)
Cross-functional workforce
Since lean I have to know more kinds of
operations. (82, 89)                                  2.30 (1.274)  2.63 (1.265)  2.788 (0.097)
Since lean I have to do more supplementary
activities. (82, 91)                                  2.55 (1.441)  2.78 (1.237)  1.291 (0.257)
Empowerment and decentralization
For decisions concerning my work my opinion
is also taken into account. (83, 90)                  3.22 (1.490)  2.93 (1.356)  1.717 (0.192)
I have the opportunity to improve
processes. (80, 88)                                   3.00 (1.322)  3.02 (1.339)  0.012 (0.912)
My boss allows me to be creative. (80, 89)            3.31 (1.365)  3.04 (1.269)  1.743 (0.189)
Lean innovation mistakes are tolerated. (80, 90)      3.39 (1.355)  3.22 (1.356)  0.630 (0.428)
Team work
Within my organization, management and employees
work together to solve problems. (83, 89)             1.65 (0.706)  2.00 (0.789)  0.069 (0.792)
My coworkers supported/support me in lean
implementation. (80, 89)                              2.73 (1.043)  3.11 (1.352)  4.277 (0.040)
Teams were or will be developed to implement
lean. (80, 87)                                        1.83 (0.792)  2.91 (1.30)   41.405 (0.000)
Communication
I understand why lean is/was implemented. (76, 90)    2.12 (0.923)  2.64 (1.164)  10.131 (0.002)
I got the necessary knowledge about the essence and
background of lean transformation. (81, 90)           2.69 (1.281)  3.28 (1.529)  7.301 (0.008)
Before lean implementation my manager clarified my
tasks. (81, 88)                                       2.54 (1.275)  3.26 (1.410)  11.986 (0.001)
During lean implementation my manager clarified my
tasks. (80, 90)                                       2.53 (1.253)  2.91 (1.295)  3.882 (0.050)
I was told the reasons to implement lean. (81, 88)    2.10 (1.020)  2.97 (1.504)  18.908 (0.000)
My managers told me when and how lean would be
implemented. (81, 90)                                 2.09 (0.883)  2.91 (1.403)  20.616 (0.000)
I was informed about the results of lean. (77, 79)    2.10 (0.867)  3.35 (1.396)  44.880 (0.000)

² All questions were asked on a 1 to 6 scale, where 1 means total agreement, 2 stands for agreement, 3 is for slight agreement, 4 is for slight disagreement, 5 stands for disagreement and 6 means total disagreement.
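
The F statistics in Table 15.6 are consistent with a one-way ANOVA across the two companies (with two groups this is equivalent to a two-sample t test). Since the raw responses are not available, the sketch below only illustrates the mechanics with hypothetical placeholder scores on the 1-6 agreement scale.

from scipy.stats import f_oneway

raba_scores = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]   # hypothetical item responses, not study data
okin_scores = [3, 4, 2, 5, 3, 4, 3, 5, 4, 3]
f_stat, p_value = f_oneway(raba_scores, okin_scores)
print(round(f_stat, 3), round(p_value, 3))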

Supplementary activities, namely 5S and smaller maintenance tasks (introduced by both firms), do not require professional training, only more basic lean knowledge. So, seeing the slightly "positive" rating of learning in Table 15.6, we state that the companies' efforts in job enrichment (more manufacturing operations) build much more on the effective exploitation of existing worker knowledge and properly organized processes. In other words, (professional) training mostly supports technological changes. In lean transformation one should not waste professional worker knowledge but use it as a valuable resource (as our case companies did). In addition, basic lean training is an unavoidable prerequisite, as is the deepening of training at all hierarchical levels in order to sustain lean later on. One practical consequence of this is that managers in a lean environment should pay more attention to the learning phase of newly employed people. Empowerment and decentralization. The firms rely on employees' active participation in improvement activities. The shop floor workers can be either "advisors" or "makers", i.e. they have the opportunity to build their own ideas into the reorganized processes. Emphasizing a trial-and-error approach to improvement activity (as perceived in our cases) provides an innovative environment for lean efforts. Although the basis for worker participation is good, as they have the opportunity to take part in projects in a tolerant atmosphere, the opinions in Table 15.7 suggest that there is room for further improvement. Only about 40-50% of shop floor employees feel involved. According to the operators' perception, middle managers have the most important role in disseminating and applying lean. This organizational level connects top managers' lean commitment with daily professional practice.

Table 15.7 Involvement in lean implementation: shop floor workers’ perceptions

                                                   Respondent rate (%)
Involvement (more answers were possible)           Rába       Okin       Pearson chi-square
                                                   (N = 81)   (N = 83)   (statistical significance)
The following person or people were or will be
involved in implementing lean:
  Owner                                            2.5        14.5       7.546 (0.005)
  Top management                                   2.5        14.5       0.078 (0.453)
  Middle management                                75         41         0.078 (0.453)
  Employees                                        52         40         2.416 (0.081)
  Suppliers                                        1.2        2.4        0.315 (0.509)
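The Pearson chi-square values in Table 15.7 test whether the share of respondents naming a
given group differs between the two companies. A small sketch for the “Owner” row, rebuilding
the 2×2 count table from the reported percentages and sample sizes (it lands close to the 7.546
shown above):

```python
from scipy.stats import chi2_contingency

n_raba, n_okin = 81, 83
yes_raba = round(0.025 * n_raba)   # respondents naming the owner at Raba
yes_okin = round(0.145 * n_okin)   # respondents naming the owner at Okin

table = [[yes_raba, n_raba - yes_raba],
         [yes_okin, n_okin - yes_okin]]

chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}")   # ~7.5, p < 0.01
```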

Team work. In lean, the unit of work organization is the team, especially in the case
of Rába. In our companies the foremen/managers have the leading position in team
work (problem solving). This means that they manage problem-solving activities,
coordinate and frame “leaning”, and dominate the radical changes of the implementation
phase. These findings support the previous paragraph: the active participation of top
and middle management is a substantial success factor in leaning the shop floor.

Communication. Comparing the two firms, this element shows the most remarkable
difference, with Rába having an overall advantage in all dimensions. The gap can,
at least partially, be explained by the depth of the changes; at the same time, one
should be aware that this “amplitude” can and should affect the communication
strategy. Active communication deserves extra attention at each stage of the
implementation (before, during, and after): informing workers about the background
and reasons, about tasks and responsibilities and, evidently, about results. The
top-down flow of data and information can be tightly connected to an active
management role, which is an additional signal of managers’ central role. The
averages on result feedback (bottom line in Table 15.6) display and confirm our
explanation of the differences in the capability to operate measurements (Table 15.5):
employees at Rába are better informed. The commanding value of the automotive
supplier on this question might be misleading, since operators’ perceptions, as we
argued earlier, lag significantly behind the real figures.

To summarize, HR practices related to lean management are of high priority in
successful lean companies. According to shop floor perceptions, their actual
deployment depends on the depth of the transition and on the lean commitment of
managers. Our case companies suggest that workers, thanks to the fact that lean
exploits operators’ knowledge more effectively, are responsible for more “core” and
supplementary activities. Shop floor employees can actively participate in process
improvements. Nevertheless, problem-solving teamwork is mainly dominated by
managers. Communication has a considerable role from the first steps: employees
are informed all along the lean journey.

Business Performance

Although the distinction is more pronounced, the pattern of answers resembles the
operative figures (Table 15.8). Almost unanimously, Rába workers tie lean to
profitability, and this relationship seems to be much stronger than the one with
manufacturing performance measurement. In the case of Okin, workers did not
perceive improved profitability. One powerful rationale is the depth of the changes;
another is that the recent history of Rába is marked by losses and lay-offs, while
Okin’s operations covered their costs and grew constantly. The case studies indicate
that workers are aware of the effect of lean on company performance (both
operational and business), but they underestimate this effect. It is worth considering
whether better feedback could improve employee commitment and satisfaction.

Table 15.8 Business performance: shop floor workers’ perceptions

                                                   Respondent rate (%)
Statement (more answers were possible)             Rába       Okin       Pearson chi-square
                                                   (N = 83)   (N = 93)   (statistical significance)
Lean affected the company’s profitability
positively.                                        62         9          51.539 (0.000)

15.4 Conclusion

The findings presented in this paper illustrate that becoming lean can contribute to
business competitiveness.

(1) Our case studies confirm the previously “proven” positive relationship between
lean management and operative performance measures. The data pointed out that
lean implementation can enormously improve operative performance in the early
stages of the transition. (Behind the measurable results there can also be some
hardly measurable effects: stability, order and cleanliness.) This positive relationship
is obvious company-wide: not only (top) managers know about it, but the majority
of shop floor workers as well, although in the latter case the perception of the extent
of the improvements lags far behind the real outcomes. More active and effective
communication of the operative successes (especially those that employees can
influence directly) could support the acceptance of lean and enhance workers’ lean
commitment.

(2) Human resources appear in the lean literature as a central element of the
transformation process; in spite of this, they are rarely the focus of empirical work.
In accordance with the academic literature, we found that besides the production-
related tools our case companies apply HR-related practices most frequently. Beyond
(i) the growing importance of the middle management level, this new HR approach
appears (ii) in training, (iii) in the more effective exploitation of existing expertise
(covering problem solving, improvement activities, and job enrichment), (iv) in
team work and (v) in intensified communication at the shop floor level. This may
be a challenging issue for an OM professional: lean is not only about operations,
because people require at least as much attention as production and service processes.

(3) Although lean is a “fashionable” management topic and an academic “fad”, there
is surprisingly almost no research about its possible financial impact. According to
the financial data, the case companies started lean in a period of “motivating crisis”,
hallmarked by operating losses and anticipated growing demand. Our case data point
to a positive relationship between lean and business measures. The cases also
highlight that several factors (e. g. competition in the industry, product characteristics,
and customer power) can affect business performance, and any of these might be
stronger than the potential financial outcome of improved operational performance.
Workers’ perception is mainly shaped by communication and by the earlier
performance of the company.

Limitations

In research of this kind, the questions of validity and reliability are crucial. We used
two means to enhance them: (i) the work is based on both managers’ and shop floor
workers’ points of view; (ii) we combined qualitative (interviews, company
documents, visits) and quantitative (surveys, company documents) sources during
the data gathering and explanation phases. In spite of the researchers’ endeavours,
the paper, and especially its findings, should be handled carefully, since the
companies differ in size, operate in different industries and business environments,
and follow their own lean “paths”. The small number of case companies, together
with the methodology used to analyze them, is a clear limitation, so the research
cannot conclude with general statements. However, we believe that the chosen
research framework served our research objective: the examination of business
competitiveness in lean companies.

Chapter 16
Reducing Service Process Lead-Time Through
Inter-Organisational Process Coordination

Henri Karppinen and Janne Huiskonen

Henri Karppinen
Department of Industrial Management, Lappeenranta University of Technology, P.O. Box 20,
FIN-53851 Lappeenranta, Finland, tel: +358-5-621 2649, fax: +358-5-621 2699,
e-mail: henri.karppinen@lut.fi

Janne Huiskonen
Department of Industrial Management, Lappeenranta University of Technology, P.O. Box 20,
FIN-53851 Lappeenranta, Finland

Abstract The management of public sector service operations has gained much
attention in the scientific literature during the last fifteen years. As in the industrial
world, different types of processes also exist in the service world, requiring different
kinds of tools and improvement actions. A group of challenging service processes
are the so-called ‘fluid service processes’, which are considered uncontrollable,
people-dominated and diagnosis-focused, and in which traditional process
improvement tools do not seem to work. The focus of the study is inter-organisational
cooperation in fluid service process delivery. The specific focus of the paper is on
understanding and reducing the service process lead-time. The study is based on
three interconnected action research projects conducted in a Finnish municipality.
The results of the study show that the traditional process development approach is
not enough when trying to solve process-related problems in the inter-organisational
context, such as the lead-time of a service process.

16.1 Introduction

Service process development has become an important topic in both private and
public sector services. In the current financial turbulence, making service operations
ever more efficient while at the same time emphasizing customer orientation is a
challenging task for every organisation. It is, however, an inspiring starting point
for research in the service sector, which has certain traditions but is still in a
developing stage. The interest of this study is a specific area of services: the
inter-organisational service context, where many organisations from the private and
public sector, with diversified objectives and motivations to participate in cooperation,
work together in order to produce a service for the customer. In the literature, the
topics of inter-organisational cooperation and relationships, as well as the more
operational view of service process management, are widely discussed and analysed,
but unfortunately separately. Service process management is also a developing area
of research and theory. Our intention is to emphasize that research on the service
sector should be about seeing service as it is, and only after that trying to form the
needed approaches and tools, not trying to fit services into existing models based
on the industrial world. The research work described in this study was performed
in a Finnish municipality in three different service sector processes.

16.2 Research Objectives and Literature Review

We define the research gap and the target of the study on the basis of the setting
explained in the introduction. The research gap is very much oriented to the practical
world, and in order to reach our objective we have selected action research as the
methodology; our aim is to maintain practical relevance while trying to influence
theory creation. The starting point of our study was a prolonged lead-time problem
in three service processes. A long lead-time causes problems in terms of costs, but
also when measuring the service quality and the service delivered to the customer.
The customer has a significant role in all three processes, and all of them are very
labour intensive. In the case processes, the biggest challenge is having many different
organisations involved in the service delivery. The participation is also very intensive:
not the usual buyer-supplier relationship, but more like a joint activity or joint
venture.
We started by analysing the existing literature on two separate and usually not
interconnected themes: inter-organisational cooperation and relationships without
a common organisational structure, and managing complex, customer- and
labour-intensive multi-stage service processes. Oliver (1990) presents the joint
program form of cooperation, a specific programme of two agencies working
together in planning and implementing common activities but without a common
organisational structure. The prerequisite, according to Oliver, is that the objectives
of the two individual agencies can only be achieved by cooperation. Joint programs
are formalized arrangements which tend to institutionalize and stabilize the
inter-organisational exchange of resources. Oliver mentions that this form of
cooperation is usual when dealing with social services and cooperation related to
them. A linkage to managing joint operations is not mentioned, indicating a common
problem in this literature. Much of the literature focuses on static exchange
relationships with an economic rather than an operational perspective (e.g. Ouchi,
1979; Dekker, 2004; Cäker, 2008). A single attempt to raise the important question
related to the operational view of a cooperative service process has been made by
Provan and Sebastian (1998). They mention the idea of an informal or formal
integration structure that aims to coordinate the services their clients need. This
integration form varies from single information transactions to a full-scale sharing
of resources and programs.
Lundgren (1992) discusses the coordination of activities in the industrial network
context. He states that coordination in networks usually means organising functions
and flows, activities and relationships within a network to increase the effective-
ness of the activities. Coordination of activities will cause changes in the resource
structure, but in the network context it also offers possibilities to form new kinds
of combinations of different resources and activities. Although Lundgren discusses
industrial networks, similar characteristics apply also to the service context. The
coordination focuses on cooperation, the process of interaction between the mem-
bers in the network. Laing and Lian (2005) have formed four different categories
of factors involving the coordination of interactions in the service context: trust,
closeness, process factors, and organisational policy factors.
Our target is to find the key elements of the processes in our specific case systems.
We base our processual view on Wemmerlöv (1989), who uses two different process
categories: rigid service processes and fluid service processes. Our interest is in fluid
service processes, which are described as follows: they usually require relatively
high technical skills; a great amount of information is needed in order to specify
the exact nature of the service needed; the service worker goes through an
unprogrammed search process and makes several judgement decisions, meaning
that the process is not well defined; the volume of people handled per unit of time
is low; the workflow uncertainty is high; the process normally involves only one
customer at a time; and the response time to a customer-initiated service request is
often fairly long. Wemmerlöv adds that fluid processes are often people-dominated
and often exist in highly professional organisations.
The challenges in managing these kinds of processes are considerable: because of
the wide scope and unpredictable service requests, the forecasting of service flows
should be tightly connected to the resource use per time unit (load), and most
development efforts should be focused on the expertise of individual persons and
the information on which they base their process control decisions. Though
according to Wemmerlöv (1989) standardization of a fluid process is difficult or
worthless, Bowen and Youngdahl (1998) present an opposite idea in which they
combine lean thinking and the product-line approach. Their idea is that mass
customization is possible both in the industrial world and in the service world, and
that it is all about “having flexible processes and structure when producing variable
or even individually customized products or services” (op. cit., p. 222). One of the
elements they include in their idea is the networked organisation, which does not
exclude responsiveness, flexibility or focus on individual customers.
The third part of the literature analysis concerns defining the lead-time in the context
of a multi-stage fluid service process. According to Wemmerlöv (1989, p. 32),
“Accurate time standards are difficult to derive, and, due the variance in tasks and
processing times, often not worth the effort developing.” From the customer
perspective, the lead-time of a service is important because it affects the ‘service
experience’ and, further, the ‘service quality’. From the service provider’s perspective,
the lead-time is often connected to costs: the longer the customer is in the process,
the higher the costs are. We see the service process lead-time as a relevant and
important measurement, and it should not be undervalued. In this study we consider
the true lead-time to be the time the customer is within the process, so both passive
and active time should be included in the lead-time.

The research gap is based on the observation that we do not have enough focused
theory connecting the areas of inter-organisational cooperation and the management
of a multi-stage service process with the needs existing in real service systems. Our
research target, and our effort at filling this gap, is “to analyze lead-time related
problems in the fluid service process and to find the factors that have an influence
on lead-time improvement efforts”.
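Under this definition, the true lead-time is simply the elapsed time between process
entry and exit, so waiting (passive) time counts just as much as service (active) time.
A tiny sketch of that accounting, with hypothetical dates:

```python
from datetime import datetime

# Hypothetical milestones for one customer case
entered = datetime(2009, 1, 5)     # service request received
completed = datetime(2009, 4, 14)  # service delivered
active_days = 12                   # days of actual service activity (hypothetical)

true_lead_time = (completed - entered).days   # passive + active time
passive_days = true_lead_time - active_days   # time spent waiting in the process

print(f"true lead-time: {true_lead_time} days ({passive_days} of them waiting)")
```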

16.3 Lead-Time Related Problem Analysis

The problem analysis was conducted in three different public sector service processes
using action research methodology. The selection of the methodology was guided
by two main needs: a need to increase the researchers’ understanding from the
research point of view, in order to benefit theory building (primary), and a need to
benefit the client system with feasible and process-oriented solutions (secondary).
The client system included two healthcare processes and one residential application
process in a Finnish municipality. The original objectives of the process development
projects included improving productivity and service quality, improving customer
flows and lead-times, and achieving better customer satisfaction, both with internal
and external customers.
The first process (process A) involved children under the age of 7. The patients in
this process typically suffered from developmental disorders and from problems
caused directly or indirectly by their parents. The second process (process B)
involved young persons aged from 13 to 20. These patients had developmental
disorders and parent-related problems, but also mental health, alcohol and drug
abuse problems. The third process (process C) was a service process for people
applying for an apartment or a house from the municipality; unlike in the first two
processes, the process itself was much less visible to the customer.
The action research projects included all six main steps of the action research cycle:
data gathering, data feedback, data analysis, action planning, implementation, and
evaluation (Coughlan and Coghlan, 2002). In process A the action research steps
were conducted in 9 workshops ( day), in process B in six, and in process C in
three workshops. The research team included three researchers: a facilitator, an
observer in the workshops, and a researcher not participating in the workshops but
taking part in the data analysis and evaluation. This kind of setting was needed
because of the validity and subjectivity challenges related to action research as a
methodology (Coughlan and Coghlan, 2002; Zuber-Skerritt and Fletcher, 2007).
The integration of the methodological and problem-solving steps is presented in
Fig. 16.1.

Fig. 16.1 Integration of methodological steps and process related problem solving

We base our view on observations made in the workshops, focusing on the process
mapping, problem analysis and solution definition. Already in process A we learned
that process-level coordination, as presented in the process management literature,
is inadequate because it tends to create a situation where problem analysis leads to
overly aggregated solutions. Flexibility is needed not only in the service processes
but also in the problem analysis. The process mapping and analysis chart (modelling
part) for case process B is presented in Fig. 16.2.

The analysis indicates that in these kinds of processes, the most difficult problem
when trying to improve the lead-time of a service process is a too static view of the
process. The thinking of managers is focused too much on setting high service
standards, measurements, quality systems etc., and at the same time the employees
on the operating level are focused too much on individual tasks. The service process
level does not exist at all, or at least not the kind of process perspective found in
the service process literature. Flexibility and responsiveness do not work because
the service process definitions and operational policies are based on static and
unrealistic definitions. The service process lead-time is a sum of different service
paths, often tailored for individual customers, with the result that the original
lead-time targets are never met. The original targets and lead-time measurements,
however, presuppose that the process has a well-defined workflow and that the
customer will get ‘standardized service’ with low variance.
On the basis of the problem analysis, we can also state that the process-level view,
which should include managing the process flows and controlling the interfaces
between the process events, did not change when the load in the process changed.

Fig. 16.2 Process map and analysis chart for case process B

In processes A and B this meant that alternatives for different process flows were
originally not defined at all. If the process flow stopped, the customers/patients were
moved to an unplanned and often unintended place to wait for the planned process
flow to continue again. In processes A and B, the lead-time measured (if measured
at all) was not a result of preset and locked service paths, but rather of the unplanned
(true) service paths. At the same time, as the process alternatives were not defined,
the process included events where, according to the process descriptions, different
actors produced the same activity, but no flow control existed. As process-level
control did not exist, the solutions made at the event level were not optimised, and
the lead-time became longer and longer. Despite the fact that there should have been
a cooperative process, the focus was purely on single events.
In process C, the application process was a centralised solution in which three major
apartment/residence owners bought a centralised service from the municipal service
provider. In the worst case, a problem in the centralised process stopped the service
for all three companies, even if the ‘problematic customer’ was a customer of only
one company. In general, the problems were focused on process flow control (flow
control was based on diagnoses made in individual events, e.g. by doctors and
experts; process-load-based control decisions did not exist). Simplified, a fluid
service process is all about recognizing the state of the process proactively and
making control decisions based on process load and flow. In the professional service
context, this does not exclude the quality of a diagnosis made in an individual event.

In processes A and B, the solutions aiming at improved lead-time involved forming
a new inter-organisational and multi-professional “care group”, especially for the
early stages of the service process. Some structural changes and policy solutions
were also made in the first phases of the process, e.g. reducing process entries. In
process C, the radical solution was to replace the original process with three separate
application processes, one for each major company, operated by the companies
rather than the municipality. The most important problems and the solutions made
are presented in Fig. 16.3.

Fig. 16.3 Lead-time improvement related solutions in processes A, B and C

16.4 Discussion

As an answer to the research gap defined, we found that in all the case processes
the dependency/relationship setting between the actors was created first, and only
after that were the single service processes planned. This creates a locked situation,
where the cooperation is based on preset dependencies, not on the actors’ interests
and objectives or on the service delivered to the customer. In our opinion the
inter-organisational setting creates a need to develop the service process not only
on the process level but also on the inter-organisational level and the single-event
level. Unlike the literature related to the subject, we consider that a fluid service
process is controllable in the inter-organisational setting. The idea is that if flexibility
and responsiveness are to be maintained on the process level, the actions can and
should be controlled, and the operations standardized, on the inter-organisational
level.
Only when the process-related policies, organisational roles, rules, and the required
service paths and alternatives for the different service loads are set on the
inter-organisational level can the service process planning begin. The process-level
planning should be based on policies created at a higher level of cooperation, and
the service process should be planned to be active, or even proactive, not static as
the service literature describes it. Agreeing on common policies does not mean that
static process boxes and arrows have to be formed for a single option only. The idea
of three-level planning is that the service process is intelligent and responsive, a
viable system. This is achieved by doing the process planning in cooperation with
the organisations, and by jointly setting the needs for the resources and the process
alternatives required in the service delivery.
At the single-event level, which is the smallest entity in our model, the actions are
based on process- and event-related professionalism. Therefore the flexibility at this
level lies in making the diagnoses needed and in having the right selection of service
process alternatives for different customer cases and process loads. These procedures,
the decision-making rules, must be pre-planned, and the person who diagnoses only
selects the right path for the customer case based on the state of the service process
(intelligent service management).
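To make the idea concrete, the sketch below shows what such a pre-planned,
load-based selection rule could look like; the path names, severity scale and threshold
are invented for illustration and are not taken from the case processes:

```python
# Illustrative decision rule for 'intelligent service management': the diagnosing
# professional only chooses among pre-planned service paths, and the choice
# depends on the current state (load) of the process. All names and thresholds
# here are hypothetical.

STANDARD_PATH_MAX_LOAD = 0.75   # hypothetical capacity threshold

def select_path(diagnosis_severity: int, process_load: float) -> str:
    """Select a pre-planned service path for one customer case.

    diagnosis_severity: 1 (routine) to 3 (complex), set by the professional
    process_load: current utilization of the service system, 0.0-1.0
    """
    if diagnosis_severity >= 3:
        return "care group"          # complex cases go to the joint care group
    if process_load <= STANDARD_PATH_MAX_LOAD:
        return "standard path"       # capacity available for the normal flow
    return "reduced-entry path"      # pre-planned alternative under high load

print(select_path(diagnosis_severity=1, process_load=0.85))  # -> reduced-entry path
```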
As a summary, the inter-organisational level is for service process policy, rule and
role setting; the process-level planning must be based on finding the right process
paths and alternatives for the different process loads; and the event level must be
based on the required professional skills and on diagnosing the customer cases, as
well as on strict rules for controlling the process flows. The study has some scientific
limitations, one of them being that the idea of ‘levels of coordination’ was formed
through the observations made during the action research projects, not beforehand.
The decisions made in case process C were, however, a result of new thinking
emphasizing “the levels of coordination”. The second limitation is that we do not
have enough quantitative data to validate our observations and the client system
view related to service process lead-times (they are based mostly on qualitative
data).
The implications for future research and for practitioners are similar: we need to
test these ideas further, to find out whether the model of ‘intelligent service’ is
beneficial and applicable in other service processes as well. One important issue
for future research is the challenge of modelling intelligent service while at the same
time maintaining the relevant information and intelligibility. We also intend to focus
in the future on ‘service intelligence’ models in diagnosis-focused service processes.

References

Bowen D, Youngdahl W (1998) “Lean” service: In defense of a production-line


approach. International Journal of Service Industry Management 9(3):207–225
Cäker M (2008) Intertwined coordination mechanisms in interorganizational rela-
tionships with dominated suppliers. Management Accounting Research 19:231–
251
Coughlan P, Coghlan D (2002) Action research for operations management. Interna-
tional Journal of Operations and Production Management 22(2):220–240
Dekker H (2004) Control of inter-organizational relationships: Evidence on appro-
priation concerns and coordination requirements. Accounting, Organizations and
Society 29(1):27–49
Laing A, Lian P (2005) Inter-organisational relationships in professional services:
Towards a typology of inter-organisational relationships. Journal of Services Mar-
keting 19(2):114–128
Lundgren A (1992) Coordination and mobilisation processes in industrial networks.
In: Axelsson B, Easton G (eds) Industrial Networks: A New View of Reality,
Routledge, London
Oliver C (1990) Determinants of interorganizational relationships: Integration and
future directions. Academy of management review 15(2):241–265
Ouchi W (1979) A conceptual framework for the design of organizational control
mechanisms. Management science 25(9):833–848
Provan K, Sebastian J (1998) Networks within networks: Service link overlap, orga-
nizational cliques, and network effectiveness. Academy of Management Journal
41(4):453–463
Wemmerlöv U (1989) A Taxonomy for Service Processes and its Implications for
System Design. International Journal of Service Industry Management 1(3):20–
40
Zuber-Skerritt O, Fletcher M (2007) The quality of an action research thesis in the
social sciences. Quality Assurance in Education 15(4):413–436
Chapter 17
Is There a Relationship Between VC Firm
Business Process Flow Management and
Investment Decisions?

Jeffrey S. Petty and Gerald Reiner

Jeffrey S. Petty
Lancer Callon Ltd., Suite 298, 56 Gloucester Road, UK-SW7 4UB London
e-mail: jpetty@lancercallon.com

Gerald Reiner
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: gerald.reiner@unine.ch

Abstract This study on the management of the business process flows of venture
capital (VC) firms explores the relationship between the utilization rate of the human
resources within the VC firm and the deal (project) rejection rate under consideration
of contextual factors. We employ an exploratory research design (a historical case
analysis) as well as quantitative model-oriented research based on empirical data
in order to understand what is really going on in terms of VC firm processes with
regard to their system dynamics. We utilize a longitudinal data set comprising 11
years of archival data covering 3,340 investment decisions collected from a
European-based VC firm. The results indicate that, over time, there are considerable
dynamics in the VC decision making process. Specifically, the investment decisions
of venture capitalists are influenced by firm-specific factors related to the human
capital resources of the firm, namely management capacity. Implications of these
results for research and practice, in venture capital as well as in other service
industries, are discussed.

17.1 Introduction

Venture capital (VC) firms, which are typically staffed by a small team, receive
deals according to a stochastic arrival rate over the life of a fund. During this time
the team is responsible for the deal flow process, which involves the evaluation of
deals, structuring investments, managing portfolio companies and liquidating the
fund’s portfolio (Tyebjee and Bruno, 1984; Fried and Hisrich, 1988), as well as for
managing the firm itself. These firm processes are based upon assumptions within
a deterministic environment (no demand variability, etc., is taken into consideration) and are typically
driven by financial performance measures rather than operational ones. Hence, the
question arises as to how the “quality” of the process output is affected during the
life of a venture fund (e. g. Is there a higher risk of rejection, even for a poten-
tially suitable project, at different times based upon capacity problems within the
VC firm?). The literature focused on VC decision making (Macmillan et al, 1985;
Dixon, 1991; Zacharakis and Shepherd, 2005) does not take this firm-specific as-
pect into consideration and focuses typically on the “quality” of the potential deal
based upon: (i) the company’s management team, (ii) the market, (iii) the product
or service, and (iv) the venture’s financial potential. Thus, the existing literature
fails to address the potential impact of the firm-specific processes and resources
(Barney, 1986, 1991; Hitt and Tyler, 1991; Mahoney and Pandian, 1992) on the
strategic decisions (Eisenhardt and Zbaracki, 1992), and ultimately the strategy and
performance, of a firm. As such, until now no direct time-related requirements have
been considered for designing, planning or managing the VC firm’s processes. The
management and delivery of a VC firm’s product and related processes, as with
other professional service firms, can typically be characterized as “make-to-order”
(Naylor et al, 1999). Therefore, in general, the question arises as to the quantity of
available firm resources (e. g. people, systems, capital) required to successfully
pursue the firm’s strategy within the specified time and with the specified level of
service (e. g., Jammernegg and Reiner, 2007). This capacity management approach
is suitable for classical relationships between a service provider and its
customers/clients, wherein the scope of services and the time requirements can be
specified in a contract. However, what happens in terms of resource allocation and
process effectiveness if no clear classification of order requirements, especially with
respect to time-related aspects, is possible? In our study we therefore deal primarily
with the following research questions:
(1) Over time, what is the impact of firm structure (lean staffing) on deal evaluation
and decision making?
(2) What are the firm-specific processes that influence deal evaluation and decision
making under consideration of dynamic aspects?
(3) Is there a relationship between the utilization rate changes over time and the
related investment decisions of VC-firms?

17.2 Research Method

To address the research questions outlined above, we chose an exploratory research
design (a historical case analysis), which is recommended for investigating
phenomena that are subtle and/or poorly understood. This type of research design
permits a thorough understanding of the phenomena in question, which is of great
importance
for developing new knowledge on complex and dynamic phenomena such as the im-
pact of organizational structure and staffing on VC decision-making over time in a
VC firm (Fried and Hisrich, 1988). By collecting and analyzing data in a single-case,
longitudinal setting, the research design adopted in our study focuses on validity and
accuracy rather than generalizability, and provides the basis for the development of
new theory which then can be further advanced following a multiple-case replica-
tion logic and through large-scale survey research (Eisenhardt, 1989; Strauss and
Corbin, 1998; Yin, 2003). As this study seeks to explore the factors affecting the
VC’s processes over time, the use of archival data analysis is preferred over an in-
terview or survey approach because it allows for the collection and analysis of the
different measures over several time periods. This approach also provides access to
information that helps to gain a more realistic view of the actual environment as
well as the actions that were made by the subjects at the time. Thus, this helps to
enhance the validity of the data as it eliminates recall bias on the part of the subject
as well as other limitations often associated with self-reported techniques (Hall and
Hofer, 1993; Shepherd and Zacharakis, 1999). We also conduct quantitative model-
oriented research, especially under consideration of empirical data, based upon the
results of the qualitative research activities. Bertrand and Fransoo (2002) pointed out
that the methodology of quantitative model-driven empirical research offers a great
opportunity to further advance theory (Davis et al, 2007). In general, quantitative
model-based empirical research provides the ability to generate models of causal
relationships between control variables and performance variables. These models
are then analyzed or tested using different scenarios involving varying levels of con-
straints on the subject variables. The primary concern of this research approach is to
ensure that there is a fit between the actual observations and actions and the result-
ing model, which is based upon reality. Utilizing a combination of different research
approaches thus enables us to address the aforementioned research questions.

17.2.1 Data Collection and Sample

The data used in the model was collected from the archival records of a
European-based VC firm and included the investment/rejection decisions on more
than 3,600 deals that had been received by the firm over an 11-year time period.
The data set was created by reading all 7,284 passages of text in the firm’s deal
flow database as well as related emails and memos in the archived deal files. The
database entries for deals that had made it beyond the initial screening phase into
the evaluation and due diligence phases typically contained a synopsis of the VC’s
findings and views. A random sample of 350 deals was selected in order to compare
the notes in the files to the comments entered in the action log; there was no evidence
of gross omissions or any material rewording of comments, so the database was
deemed a reliable data source. The time a deal spent in the selection process ranged
from one day to more than a year and, after eliminating those deals that lacked
sufficient information to be included in the study (e. g. the date of submission or
the VC decision was missing), the resulting sample comprised 3,340 deals. The
firm was staffed by a small team of VCs, and the average acceptance rate of deals
submitted to the firm over the entire period was 1%, which is consistent with the
descriptions of VC firms and the industry averages reported in many other studies.
We will use the term “firm” to describe the VC firm, whereas the terms “company”,
“deal”, and “proposal” all apply to the entrepreneurial ventures evaluated by the VC.

17.2.2 Data Analysis

The initial qualitative analysis involved an interpretative approach (Glaser and
Strauss, 1967; Roberts, 1997; Strauss and Corbin, 1998) to the documents containing
the VC’s views and decisions related to the deals the firm had reviewed over the
11-year time period. Those comments representing the firm’s view of, or reason
for, the ultimate investment/rejection decision, both explicit and implied, were
collected and coded along with descriptive information pertaining to each deal,
which included the date the deal was received by the firm and the date of the final
investment decision. Additional context-specific factors, including the number
and tenure of the VCs in the firm, the role of the VC firm in each of their portfolio
investments (e. g. lead investor, co-lead, investor) and any extraordinary events (e. g.
raising money for a new fund) over the 11-year time period of the study were also
included in the analysis. In order to discuss our research questions in greater detail
we also developed a quantitative model based upon the results of the first research
activities. In particular, we modeled the VC firm’s process operations in detail (see
Figure 17.1) based upon the descriptions of the VC decision making process in the
literature (Tyebjee and Bruno, 1984; Fried and Hisrich, 1994; Shepherd et al, 2005).
Briefly stated, the principal steps depicted in Tyebjee and Bruno’s (1984)
model of the VC process are: (a) origination, which includes all those activities
related to sourcing proposals, (b) screening, the VC’s initial appraisal of the doc-
umentation that typically results in the majority of proposals being rejected on the
basis of obvious flaws and/or firm specific criteria, (c) evaluation, which involves a
more in-depth assessment of the company’s management team, financials, product
and market, (d) structuring, wherein the VC and the company negotiate the specific
deal terms and (e) post-investment activities, which encompasses the VC’s on-going
monitoring and control activities as well as operational support of their portfolio
companies and, ultimately, liquidation of portfolio deals. Shepherd et al (2005) cre-
ated a VC process model based on simple Jackson open queuing networks where the
expressions of venture flows and status are averages in the long run. This research
work provides important theoretical input for developing our quantitative empirical
model. However, their research differs from this study in two important ways. First,
they do not have actual data from a VC firm so their study remains more theoretical
in nature (e. g., application of Jackson open queuing network). Second, their focus
is more on an individual’s allocation of time and the influence of opportunism rather
than an organizational-level analysis. Based on this previous research work we
develop a VC process model adapted from the main operational activities, which
takes empirical data into consideration and focuses on the relationship between
operations management within the VC firm and the acceptance as well as rejection
of projects. This model will provide time-specific utilization information for further
statistical analysis.

Fig. 17.1 Process Model of VC firms

We now describe the main mathematical equations, the input data, the variables
(flows and stocks) and the performance measures of our model. The initial
exploratory research described above provided the input data for our analysis, i. e.,
proposals (P_t), resources (R_t), rejections I (RI_t), rejections II (RII_t), rejections
III (RIII_t), activity times for screening, evaluation, structuring and the portfolio
(AI_t, AII_t, AIII_t, AIV_t), terminations (T_t), the number of newly hired employees
(H_t) and the number of employees departing from the firm (Q_t). The process
operations of the venture capital firm are specified as follows. The number of
proposals waiting for Screening (SC_t) is increased by P_t and reduced by the
maximum processing rate (OI_t). OI_t is determined by the activity time I (AI_t)
as well as the available number of resources within the resource pool (R_t):

    SC_t = SC_{t-\Delta t} + (P_t - OI_t)\,\Delta t                      (17.1)

    OI_t = \min\left( \frac{R_t}{AI_t},\; SC_{t-\Delta t} \right)        (17.2)

    I_t = OI_t - RI_t, \qquad \text{where } OI_t \ge RI_t                (17.3)


The number of proposals waiting for Evaluation (EV_t) is increased by inflow I
(I_t) and reduced by the maximum processing rate (OII_t). OII_t is determined by
the activity time II (AII_t) as well as the available number of resources within the
resource pool (R_t):

    EV_t = EV_{t-\Delta t} + (I_t - OII_t)\,\Delta t                     (17.4)

    OII_t = \min\left( \frac{R_t}{AII_t},\; EV_{t-\Delta t} \right)      (17.5)

    II_t = OII_t - RII_t, \qquad \text{where } OII_t \ge RII_t           (17.6)


The number of proposals waiting for Structuring (ST_t) is increased by inflow II
(II_t) and reduced by the maximum processing rate (OIII_t). OIII_t is determined
by the activity time III (AIII_t) as well as the available number of resources within
the resource pool (R_t):

    ST_t = ST_{t-\Delta t} + (II_t - OIII_t)\,\Delta t                   (17.7)

    OIII_t = \min\left( \frac{R_t}{AIII_t},\; ST_{t-\Delta t} \right)    (17.8)

    III_t = OIII_t - RIII_t, \qquad \text{where } OIII_t \ge RIII_t      (17.9)


The number of projects within the Portfolio (PO_t) is increased by inflow III (III_t)
and reduced by the outflow/termination (T_t) of projects:

    PO_t = PO_{t-\Delta t} + (III_t - T_t)\,\Delta t                     (17.10)


Finally, we define the most important performance measure in the context of our
research framework, i. e., utilization (U_t). To calculate U_t it is necessary to take
into consideration the theoretically available number of resources within the resource
pool (N_t) as well as the actually available number of resources within the resource
pool (R_t), based on the allocation of employees to VC activities (i. e., AI_t, AII_t,
AIII_t, AIV_t) as well as the hiring (H_t) and departure (Q_t) of employees:

    N_t = N_{t-\Delta t} + (H_t - Q_t)\,\Delta t                         (17.11)

    R_t = R_{t-\Delta t} - (OI_t\,AI + OII_t\,AII + OIII_t\,AIII + III_t
          - OI_{t-\Delta t}\,AI - OII_{t-\Delta t}\,AII - OIII_{t-\Delta t}\,AIII
          - T_t - H_t + Q_t)\,\Delta t                                   (17.12)

    U_t = 1 - \frac{R_t}{N_t}                                            (17.13)
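The stock-and-flow recursion above lends itself to a simple discrete-time simulation.
The sketch below is a minimal Python rendering of Eqs. (17.1)-(17.13) under invented
inputs (constant proposal arrivals, fixed rejection shares per stage, no hiring or
departures), so the numbers are illustrative only and not the case firm’s data; Eq.
(17.12) is simplified to “available resources = team size minus current workload”,
with each portfolio company assumed to consume AIV resource units per month.

```python
# Minimal discrete-time simulation of the VC process model, Eqs. (17.1)-(17.13).
# All inputs are hypothetical illustrative values, not the case firm's data.

T, DT = 136, 1.0                           # monthly periods and time step
P = [25.0] * T                             # proposal arrivals per month
AI, AII, AIII, AIV = 0.05, 0.5, 1.0, 0.1   # activity times (person-months per deal)
N = 5.0                                    # team size N_t, constant here (H_t = Q_t = 0)

SC = EV = ST = PO = 0.0   # stocks: screening, evaluation, structuring, portfolio
R = N                     # actually available resources
utilization = []

for t in range(T):
    # Maximum processing rates, capped by the waiting stocks (Eqs. 17.2, 17.5, 17.8)
    OI, OII, OIII = min(R / AI, SC), min(R / AII, EV), min(R / AIII, ST)

    # Hypothetical rejection flows: fixed shares of each stage's output
    RI, RII, RIII = 0.8 * OI, 0.7 * OII, 0.5 * OIII

    # Inter-stage inflows (Eqs. 17.3, 17.6, 17.9) and portfolio terminations
    I1, I2, I3 = OI - RI, OII - RII, OIII - RIII
    Tt = 0.02 * PO

    # Stock updates (Eqs. 17.1, 17.4, 17.7, 17.10)
    SC += (P[t] - OI) * DT
    EV += (I1 - OII) * DT
    ST += (I2 - OIII) * DT
    PO += (I3 - Tt) * DT

    # Simplified resource accounting and utilization (Eqs. 17.12, 17.13)
    R = max(0.0, N - (OI * AI + OII * AII + OIII * AIII + AIV * PO))
    utilization.append(1.0 - R / N)

print(f"final utilization: {utilization[-1]:.2f}, portfolio size: {PO:.1f}")
```

Even in this toy version, utilization climbs toward 1 as the portfolio accumulates,
which is exactly the mechanism the results section attributes to the latter months of
a fund.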
Furthermore, we defined time-related performance measures in order to validate
our model. In particular, we calculate the flow time for each VC process operation
based on waiting time as well as activity time, i. e., the average screening flow time
(SCT), the average evaluation flow time (EVT), the average structuring flow time
(STT) and the average flow time within the portfolio (POT).

To calculate the flow times we use a queuing model with generally distributed
inter-arrival and service times (Hopp and Spearman, 1996). Based on the application
of queuing models we were able to calculate the average time (in months) spent on
evaluation before investment (BIT):

    BIT = SCT + EVT + STT                                                (17.14)

The average time a project deal spends in the process before being rejected (PRT)
is defined as

    PRT = \frac{SCT \sum RI_t + EVT \sum RII_t + STT \sum RIII_t}
               {\sum (RI_t + RII_t + RIII_t)}                            (17.15)
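The stage flow times entering Eqs. (17.14) and (17.15) come from a queuing model
with generally distributed inter-arrival and service times. A standard closed-form
approximation for that setting is Kingman’s VUT equation (see Hopp and Spearman,
1996), which the sketch below uses to assemble BIT per Eq. (17.14); all stage
parameters are invented, chosen only so the result lands near the magnitude reported
in Table 17.1 below.

```python
# Flow time of one stage via Kingman's G/G/1 approximation (the VUT equation
# in Hopp and Spearman, 1996): waiting time = variability x utilization x time.
# All parameter values are hypothetical.

def stage_flow_time(ca2: float, cs2: float, u: float, te: float) -> float:
    """Waiting time plus activity time for a single stage.

    ca2: squared coefficient of variation of inter-arrival times
    cs2: squared coefficient of variation of service times
    u:   stage utilization (must be < 1)
    te:  effective activity time (months)
    """
    wait = ((ca2 + cs2) / 2.0) * (u / (1.0 - u)) * te
    return wait + te

# Hypothetical stage parameters: (ca2, cs2, utilization, activity time)
SCT = stage_flow_time(1.0, 1.0, 0.60, 0.05)   # screening
EVT = stage_flow_time(1.0, 1.5, 0.80, 0.50)   # evaluation
STT = stage_flow_time(1.0, 1.5, 0.75, 1.00)   # structuring

BIT = SCT + EVT + STT                          # Eq. (17.14)
print(f"BIT ~ {BIT:.1f} months")               # ~7.9 with these invented inputs
```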
Validation of the process analysis model is based on empirical data generated
during the initial analysis (see Table 17.1) as well as simulated data based on 136
periods/months (T ). The model validation shows that our model is able to analyze
the performance of VC process operations.

Table 17.1 Model validation

Performance measure   Empirical data   Model results
BIT                   7.9 months       7.5 months
PRT                   2.1 months       2.0 months

17.3 Results

Throughout the period under study there were a surprising number of instances
(n = 67) when the team of VCs within the firm simply did not have the management
capacity to adequately evaluate a potential deal, even when the deal was
acknowledged to be potentially viable (e. g. “Interesting but time constraints due to
other due diligence.”). Similarly, during these periods of increased activity, deals
that were viewed as potentially time consuming were rejected, despite any potential
interest (e. g. “it has high potential but very much handson work and stirring will
be required”). These instances demonstrate that the team’s capacity utilization, in
combination with the characteristics of the potential “client”, plays a significant
role in the decision-making process. Thus, a firm’s strategy with respect to staffing
and structure may need to be adjusted in order to adapt to changing demand.

Once all of the 3,340 deal decisions had been coded, the occurrence of deal-specific
decision reasons in relation to the VC management team’s utilization was compared
with the chi-square test. All analyses were performed with SPSS (version 15.0).
Multiple scenarios using different VC team utilization rates were tested, and the
chi-square statistic for all tests was highly significant (p < 0.001), thus indicating
that the stated reason(s) for the decisions are associated with the VC team’s available
safety capacity.
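As a sketch of the association test just described (not the original SPSS analysis),
the coded decision reasons can be cross-tabulated against utilization bands and
tested with Pearson’s chi-square; the counts below are invented for illustration:

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = coded decision reason,
# columns = VC team utilization band at the time of the decision.
#            low       medium    high
observed = [[120,      90,       40],    # deal-quality reasons
            [ 15,      35,       95]]    # capacity/time-constraint reasons

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
```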
[Figure: utilization (%, left axis) together with the arrival rate and rejection rate of
proposals (number of proposals, right axis), plotted against time in months.]

Fig. 17.2 Process Model results

revealed, contrary to what may be expected, that the arrival rate of new deals did not
necessarily predict deal rejection rates or team utilization rates (see Figure 17.2).
However, the relationship between the team’s utilization rate and deal rejection is
much more pronounced, especially in the latter months of the fund when VCs are
operating at a high utilization rate as a result of on-going portfolio management
activities.
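As a minimal sketch of the kind of association test reported above, the following Python snippet runs a chi-square test of independence with scipy instead of SPSS; the contingency counts are invented for illustration and do not come from the study’s 3,340 coded decisions.

    # Sketch: chi-square test of association between the stated rejection
    # reasons and the VC team's utilization band. Counts are invented.
    from scipy.stats import chi2_contingency

    # Rows: utilization band (low, high); columns: stated reason category
    # (deal-specific vs. capacity/time-related).
    observed = [[412, 38],
                [295, 96]]

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")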

17.4 Discussion and Conclusion

In this study we explore the relationship between the utilization rate and decision making, specifically the rejection rate, taking contextual factors into consideration. Based upon both qualitative and quantitative analysis we find that there are considerable dynamics in the VC decision-making process, especially over time. In this context, we have shown, first, that operations management, and in particular capacity management, provides further valuable insight. Second, our findings extend prior conceptualiza-
tions of the VC decision-making process by showing that the importance of decision
making criteria can significantly change over the lifecycle of a fund. Developing and
managing a portfolio places constraints on the resources (i. e., human resources) of
the VC firm so that during periods of extensive due diligence, deal closings and
managing existing portfolio companies there is less time to spend screening and
evaluating new deals. Additionally, this study raises the question of whether or not
the existing research focused on individual VC decision making is in fact represen-
tative of the actual decision making within the firm. VC decision making is but one
aspect of an organizational process and therefore researchers should not assume that
a study of individual VC decision making is the same as VC firm level decision mak-
ing. Much of the research that has been conducted on VC decision making appears to be suited to the initial screening phase of the process, but without considering the current needs or requirements of the firm it is not practical to assume that we are developing models that accurately depict the firm-level processes and actions.

17.4.1 Implications

Looked at from the perspective of the VC firm, our findings suggest that VCs should
evaluate their existing management capacity. They should develop strategies to accommodate times when they experience increased deal flow, above-average due diligence activity, numerous deal closings and hands-on management of the port-
folio firms. While there has been limited research on how VC firms are organized
and managed (Gorman and Sahlman, 1989), there is no evidence that VCs acknowl-
edge or attempt to address any potential under-capacity in terms of management
time, especially in the latter years of a fund. This missing focus on capacity man-
agement in combination with the performance evaluation of VC firms is addressed in our research study by integrating state-of-the-art service operations management
knowledge. The apparent constraint on VC management time over the life of a fund
adds to the existing literature focused on the allocation of a VC’s time. Although
the issue of VC time allocation has typically been concerned with the management
of the fund’s portfolio (Gorman and Sahlman, 1989; Jääskeläinen et al, 2006), our
findings provide additional evidence that both pre and post-investment activities
(Gifford, 1997; Shepherd et al, 2005) may influence the decisions of the VCs within
a firm. Furthermore, considering that VC management capacity has a considerable
impact on the deal selection process, entrepreneurs need to be aware that there will
be instances when the entrepreneur is basically at the right place at the wrong time:
their business proposal is compelling, yet the VC firm does not have the capacity
to evaluate it and therefore, out of necessity, makes the decision to reject it. As is
the case in most service settings, entrepreneurs may be limited in what they can do
to influence the experts’ opinions of their proposals. Yet they may find it well worth the effort to learn about the current capacity of the firm in order to avoid those times when the firm is overloaded with work in terms of management time, thus improving the chance that the decision maker will be able to devote their full attention to the proposal. Finally, given that other service firms (e. g. legal, advertis-
ing, financial, and consulting) are also characterized by “make-to-order” processes,
lean staffing, and demand variability, there may be possibilities for generalization
of the research results across the service sector.

17.4.2 Limitations and Future Research

A key limitation of this study is that the researchers were not present in the VC
firm during the actual deal evaluation process so there is no way to guarantee that
all of the relevant information was recorded in the firm’s database. Additionally, the
fact that the data was obtained from a single VC firm limits the ability to generalize
the findings across the industry as a whole. Although not addressed in this study,
there are many other factors, such as the size and location of the VC firm (Gupta
and Sapienza, 1992) and biases on the part of individual VCs (Franke et al, 2008;
Matusik et al, 2008; Shepherd et al, 2003), which may also have an effect on an indi-
vidual VC’s investment decisions. Despite these limitations this study shows that the
criteria used by VCs are not consistent over time and provides evidence that there is much about the VC decision making process still to be discovered that cannot be captured using existing approaches. Based upon the results of this study, selected
firm-specific factors possessing a variable nature (Cyert and March, 1963) appear to
have a greater impact on managerial action and decision making than previously re-
ported. As such, more longitudinal research is required within the context of service
firms that will enable researchers to capture the complexity of the task environment
as well as the resulting decisions. Although each firm pursues a different strategy suited to its specific goals, all venture capitalists operate under similar constraints when it comes to staffing, portfolio demands, and investment restrictions, so similar results are expected in studies conducted in other firms. A further limitation of our empirical quantitative model is that we did not take into consideration the use of resources for sourcing deals and the termination (or exit) of the portfolio companies. However, in reality, the majority of deals received by a VC firm are unsolicited, so the sourcing activity is more passive in nature (Fried and Hisrich, 1988), which would imply that the time requirements on the part of the VC are quite minimal. Further research should also deal with the identification and evaluation of potential VC firm process improvements (capacity management, etc.) and finally the implementation of these process improvements, to be able to complete the entire research cycle (Mitroff et al, 1974) in combination with longitudinal research.

References

Barney J (1991) Firm Resources and Sustained Competitive Advantage. Journal of Management 17(1):99–120
Barney JB (1986) Organizational Culture: Can It Be a Source of Sustained Compet-
itive Advantage? Academy of Management Review 11(3):656 – 665
Bertrand J, Fransoo J (2002) Operations management research methodologies using
quantitative modeling. International Journal of Operations and Production Man-
agement 22(2):241–264
Cyert R, March J (1963) A behavioral theory of the firm. Prentice-Hall, Englewood Cliffs, NJ
Davis J, Eisenhardt K, Bingham C (2007) Developing theory through simulation methods. Academy of Management Review 32(2):480–499
Dixon R (1991) Venture Capitalists and the Appraisal of Investments. Omega
19(5):333 – 344
Eisenhardt KM (1989) Making fast strategic decisions in high-velocity environ-
ments. Academy of Management Journal 32(3):543 – 576
Eisenhardt KM, Zbaracki MJ (1992) Strategic Decision Making. Strategic Manage-
ment Journal 13(Special Issue):17–37
Franke N, Gruber M, Harhoff D, Henkel J (2008) Venture Capitalists’ Evaluations
of Start-Up Teams: Trade-Offs, Knock-Out Criteria, and the Impact of VC Expe-
rience. Entrepreneurship: Theory & Practice 32(3):459 – 483
Fried VH, Hisrich RD (1988) Venture Capital Research: Past, Present and Future.
Entrepreneurship: Theory & Practice 13(1):15 – 28
Fried VH, Hisrich RD (1994) Toward a Model of Venture Capital Investment
Decision Making. FM: The Journal of the Financial Management Association
23(3):28 – 37
Gifford S (1997) Limited attention and the role of the venture capitalist. Journal of
Business Venturing 12(6):459 – 482
Glaser B, Strauss A (1967) The discovery of Grounded Theory: Strategies for qual-
itative research. Aldine Publishing Co., Chicago
Gorman M, Sahlman WA (1989) What do venture capitalists do? Journal of Busi-
ness Venturing 4(4):231 – 248
Gupta AK, Sapienza HJ (1992) Determinants of venture capital firms’ preferences
regarding the industry diversity and geographic scope of their investments. Jour-
nal of Business Venturing 7(5):347 – 362
Hall J, Hofer CW (1993) Venture capitalists’ decision criteria in new venture evalu-
ation. Journal of Business Venturing 8(1):25 – 42
Hitt M, Tyler B (1991) Strategic decision models: Integrating different perspectives.
Strategic Management Journal 12(5):327–351
Hopp WJ, Spearman ML (1996) Factory Physics: Foundations of Manufacturing
Management. Irwin, Chicago
Jääskeläinen M, Maula M, Seppä T (2006) Allocation of Attention to Portfolio Com-
panies and the Performance of Venture Capital Firms. Entrepreneurship: Theory
& Practice 30(2):185 – 206
Jammernegg W, Reiner G (2007) Performance improvement of supply chain pro-
cesses by coordinated inventory and capacity management. International Journal
of Production Economics 108(1-2):183 – 190
Macmillan IC, Siegel R, Narasimha PNS (1985) Criteria used by venture capitalists
to evaluate new venture proposals. Journal of Business Venturing 1(1):119 – 128
Mahoney JT, Pandian JR (1992) The Resource-Based View Within the Conversation
of Strategic Management. Strategic Management Journal 13(5):363–380
Matusik S, George J, Heeley M (2008) Values and judgment under uncertainty: Evi-
dence from venture capitalist assessments of founders. Strategic Entrepreneurship
Journal 2(2):95–115
Mitroff II, Betz F, Pondy LR, Sagasti F (1974) On managing science in the sys-
tems age: Two schemas for the study of science as a whole systems phenomenon.
Interfaces 4(3):46 – 58
Naylor J, Naim M, Berry D (1999) Leagility: Integrating the lean and agile manu-
facturing paradigms in the total supply chain. International Journal of Production
Economics 62(1-2):107–118
Roberts C (1997) Text analysis for the social sciences: Methods for drawing statis-
tical inferences from texts and transcripts. Lawrence Erlbaum Associates
Shepherd DA, Zacharakis A (1999) Conjoint analysis: A new methodological ap-
proach for researching the decision policies of venture capitalists. Venture Capital
1(3):197 – 217
Shepherd DA, Zacharakis A, Baron RA (2003) VCs’ decision processes: Evidence
suggesting more experience may not always be better. Journal of Business Ven-
turing 18(3):381 – 401
Shepherd DA, Armstrong MJ, Lévesque M (2005) Allocation of attention within
venture capital firms. European Journal of Operational Research 163(2):545 –
564
Strauss A, Corbin J (1998) Basics of qualitative research: Techniques and proce-
dures for developing grounded theory. Sage Publications Inc
Tyebjee TT, Bruno AV (1984) A Model of Venture Capitalist Investment Activity.
Management Science 30(9):1051–1066
Yin R (2003) Case Study Research Design and Methods, 3rd edn. Sage, Thousand
Oaks, CA
Zacharakis A, Shepherd DA (2005) A non-additive decision-aid for venture capital-
ists’ investment decisions. European Journal of Operational Research 162(3):673
– 689
Chapter 18
What Causes Prolonged Lead-Times in Courts
of Law?

Petra Pekkanen, Henri Karppinen and Timo Pirttilä

Abstract The paper highlights the challenges of process performance in large public sector professional organizations. Factors causing process inefficiencies and prolonged lead-times in two Finnish Courts of Law are introduced and analyzed.

18.1 Introduction

The Finnish Constitution states that everyone has the right to have his/her legal case heard properly and without undue delay before a legally competent court of law. The same right is enshrined in the European Convention on Human Rights. Finnish courts have struggled with prolonged lead-times, and Finland has regularly received reprimands from the European Court of Human Rights concerning unreasonable duration in the handling of judicial cases. Complaints about delays in courts are neither a solely Finnish phenomenon nor something new. The court systems in many
countries have been criticized for years for being inflexible, for taking too long, and
for demanding more and more resources (Martins et al, 2007; McWilliams, 1992;
Smolej, 2006).
This research started with a call for help from the Finnish Ministry of Justice, which wanted to study the court system processes in order to find ways to reduce the

Petra Pekkanen
Department of Industrial Management, Lappeenranta University of Technology, P.O. Box 20, FIN-
53851 Lappeenranta, Finland,
e-mail: petra.pekkanen@lut.fi
Henri Karppinen
Department of Industrial Management, Lappeenranta University of Technology, P.O. Box 20, FIN-
53851 Lappeenranta, Finland
Timo Pirttilä
Department of Industrial Management, Lappeenranta University of Technology, P.O. Box 20, FIN-
53851 Lappeenranta, Finland


time that cases stay in the process without endangering the quality of decisions
or increasing the resources. The backbone of court system operations is, as in all professional organizations, the autonomous work of highly motivated and educated individuals (Brock et al, 1999; Lowendahl, 2005; Mintzberg, 1983). In the court system, the judges also need to be completely independent and “beyond control” to ensure objective rulings. Still, at the same time the court system is a process with a set of sequential tasks and activities linked together, involving different participants. It is a process that demands a continuous and coordinated flow of a very large number
of individual and infinitely different types of cases. Court systems are organizations balancing between the needs of independent professional work and effective mass-production processes. These features are often considered, at least to some degree, opposing. There is a fear that increasing the process viewpoint and process performance will lead to unfavorable circumstances for professional work and thus weaken the quality of the decisions made. The problems with process performance indicate, however, that process effectiveness issues have not been given the attention they need in the different areas of justice organizations’ operations.
The aim of the research project was to help court system organizations find new ways of working that take the divergent requirements into account
better. The first part of the task was to find out what the exact problem was and what
caused it. This paper concentrates on defining the lead-time problem and identifying
and analyzing the reasons and sources for inefficiencies and prolonged lead-times.
The main research question is:
• What are the main factors in the court systems’ current way of working which have caused and influenced the problems in process performance and prolonged lead-times?
The analysis is based on experiences gained from large process improvement
projects in two Finnish Courts of Law. The case organizations are introduced first.
After that, the improvement projects and the data collection methods are presented.
In section 18.4, an analysis of the factors behind the prolonged lead-times is introduced.
Finally, concluding remarks are made.

18.2 Case Organizations

The Finnish court system is tripartite for civil and criminal cases. The first level is
the District Courts. The decisions of District Courts can normally be appealed in a
Court of Appeal. The decisions of the Courts of Appeal, then, can be restrictively
appealed in the Supreme Court. In addition, there are special courts, for example the Insurance Court and the Administrative Courts.

18.2.1 Helsinki Court of Appeal

The first case court in this study is the largest Court of Appeal in Finland. It handles about 4000 cases annually, which is about 30 % of all the cases handled by the Courts of Appeal in Finland. The cases are prepared and presented for decision by legally trained referendaries, who are called Senior Assistant Justices. After preparation, one of the judge members, who are called Senior Justices, goes through and verifies the prepared case. A responsible judge and referendary are appointed for every case. The cases are then decided in a court session by a composition of three Senior Justices. The case court has 170 employees and operates in seven departments. Each department operates independently and is headed by one of the Senior Justices. The case handling operations are presented in Fig. 18.1.

Fig. 18.1 Case handling operations in Helsinki Court of Appeal

The needed preparation time varies according to the complexity of the case. The cases are divided into five size groups: S, M, L, XL and XXL. There are two types of cases: criminal cases and civil cases. The civil cases are usually more complex and require more preparation time. The cases are also prioritized and categorized into three classes according to the assessed urgency of the case. The first priority level concerns “emergency” cases, which need to be handled immediately, for example child guardianship issues or restraining orders. Other cases are divided into priority level 2 or priority level 3 according to several criteria concerning the nature of the felony or dispute.
There are two main ways to handle individual cases: a written procedure or a
main hearing. In a main hearing, the parties involved are present and witnesses are
heard. The average lead-time in 2006 was 12 months, but the dispersion of lead-
times was huge, from weeks to several years. From the year 2003 on, the case court has annually solved more cases than have arrived. The proportion of very old cases was considerable when the improvement project started: 34 % of the pending cases were
older than 12 months. The age of the pending cases at the start of the improvement
project is shown in Fig. 18.2.

Fig. 18.2 Pending cases in Helsinki Court of Appeal on 4 May 2006

18.2.2 Insurance Court

The Insurance Court is a special court for social security issues. It handles 10 000
cases annually. There are 120 employees and the court operates in three depart-
ments. There is only one case handling procedure, a written procedure. At the mo-
ment there is no formal division of cases, either by size or urgency. The average
lead-time was 14 months in 2007, varying from a couple of months to several years.
In recent years, the Insurance Court has solved many more cases annually than have arrived, and the number of pending cases is diminishing fast. The problem is that while the number of pending cases has almost halved in one year, the large dispersion of lead-times has not changed, nor has the number of very old cases dropped. The number of cases pending and their age in the years 2007-2008 are presented in Fig. 18.3.

Fig. 18.3 Age of pending cases in the Insurance Court (30 September 2007 and 30 September
2008)

18.3 Process Improvement Projects and Data Collection

The research and data collection were carried out using the action research approach, which is a generic term covering many forms of action-oriented research (Coughlan and Coghlan, 2002; Gummesson, 2000).
The process improvement project in Helsinki Court of Appeal started in May
2006 and in the Insurance Court in June 2008. The improvement team in both courts
consisted of members from all organizational levels, altogether 15 persons per team.
The main stages of the improvement projects were data gathering, data analysis, ac-
tion planning, implementation, and evaluation. The work was done in several work-
shop meetings of the improvement teams. The project in the Helsinki Court of Appeal is now in the evaluation stage and the project in the Insurance Court in the action planning stage.
The research group has actively participated in the process improvement projects
as external experts and change facilitators from the beginning. The main source of data has been active observation and monitoring of the improvement work in group workshop meetings. Complementary data collection methods have included, for example, collecting operational statistics, generating numerical analyses from the database of clients, and interviews with 60 members of the personnel (30 in each case court) concerning the problems behind process performance and the process improvement potential.

18.4 Analyzing the Factors Behind the Process Inefficiencies and Prolonged Lead-Times

The most apparent problem in the lead-times of both case courts was the fact that
complex and large cases get stuck in the process for some reason. This has created
both large dispersion of lead-times and several complaints concerning unreasonable
duration. Because both case courts solve more cases annually than arrive, the piling
up of large cases is not strictly related to a lack of resources. The next task was to analyze the process and the working practices in order to find out what causes and furthers this phenomenon. The analysis revealed four main categories of factors, introduced in Fig. 18.4.

Fig. 18.4 Sources for prolonged lead-times

18.4.1 Inappropriate Goal Setting and Performance Measurement Systems

The performance indicators used in the courts are selected by the Ministry of Jus-
tice, which also sets targets and monitors their accomplishment. Public sector or-
ganizations are said to be still facing more problems associated with performance
measurement than private sector organizations. One very typical trap is the use of
too simplified output measures or the concentration on managing one single success
factor at a time (Rantanen et al, 2007).
The most important goal and performance indicator used in the courts is the annual output. It is strongly emphasized and carefully monitored. This does not
encourage preparing the complex, often badly overdue cases, and makes it feasible
to increase the total output by ignoring the more complex cases. The overemphasis
on the annual output indicator has also led to competition between departments and
restrained cooperation between them. A lot of energy is used in optimizing the number of solved cases, and the last part of the year is always spent solving the small cases to get the output goal filled. In the Insurance Court, the output of the referendaries is monitored even more carefully. They need to prepare eleven cases every week, which has also led to a quite inflexible system.
The only monitored goal and indicator for lead-time is the average lead-time of solved cases, which is monitored annually. This also makes it more feasible to solve the smaller cases. All the indicators used describe past performance and output; there is no indicator showing what is left behind, what the current situation is, or what a goal for a maximum lead-time would be. It is quite obvious that not much attention has been paid to the choice of the most appropriate performance indicators, or to the dangerous and negative effects of wrong goal and performance indicator choices. The use of simplified output goals and indicators is very likely due to a lack of time and know-how in the Ministry of Justice and to the difficulties in defining more comprehensive outcome goals and measures when the final product is quite abstract and variable.
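To see why the average lead-time of solved cases is a weak indicator, consider the following small Python sketch contrasting the mean with tail-oriented measures; the lead-time values are invented purely for illustration.

    # Sketch: a modest average lead-time can hide a heavy tail of very
    # old cases, which is what an average-only indicator fails to show.
    lead_times = [1, 2, 2, 3, 3, 4, 5, 6, 8, 10, 14, 18, 26, 38, 52]  # months, invented

    mean = sum(lead_times) / len(lead_times)
    share_old = sum(t > 12 for t in lead_times) / len(lead_times)
    p95 = sorted(lead_times)[int(0.95 * len(lead_times))]

    print(f"mean = {mean:.1f} months, share > 12 months = {share_old:.0%}, "
          f"95th percentile = {p95} months")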

18.4.2 Lack of Process Efficiency-Oriented Management Practices

A typical feature in professional organizations is the fact that managers are chosen for their substance skills rather than their managerial capabilities, which means that the best professional becomes the manager - not the best manager. In addition, independent professionals are not the easiest to manage, and the management of professional organizations depends much on negotiation, consensus and the good work ethic of the professionals (Lowendahl, 2005; Mintzberg, 1983; Rantanen et al, 2007).
The experience, knowledge and interest in process performance issues vary a lot between individual managers in the case courts. Some managers follow lead-times and process performance very carefully, while others do not follow them at all and do not even consider them important. The promotion system in the case courts is based only on achievement in judicial issues, and the training of the referendaries concentrates only on these issues. The courts are places for the referendaries to get experience and training and to qualify as Senior Justices. This is why the referendaries change departments, or even courts, every couple of years. This tradition has its benefits, but it also leads to a lack of clear responsibilities and of a sense of duty over the complex cases in the current department. The high turnover of referendaries makes the whole system very vulnerable. The process is very referendary-led, and the referendaries have a huge responsibility for the start-up and smooth running of the handling process. The judges’ responsibilities are unclear, and their responsible role is quite trivial in practice.
The fact that judges need to be completely independent poses a lot of challenges
for the management. Practically nothing can be done in situations where the work ethic of an individual judge fails. The judges cannot be fired, nor can their salaries be reduced. Even though the need to be totally independent was originally meant to cover only the content issues of a ruling, the convention has spread also to working methods.

While the management must respect this status, they must also be able to intervene if the backlog of cases increases without a good reason. Some of the managers do not feel superior to the other judges and do not see it as their place to intervene in colleagues’ work. On the other hand, there are individual managers who follow the situation and intervene almost too much, which is also experienced as quite oppressive. However, the general opinion is that more managerial feedback (positive and negative) is needed and that the managers should follow the lead-times and backlogs of individual employees more carefully and take action if necessary, but that this should be done in a constructive and respectful manner.
The management system relies on a very clearly articulated hierarchy, where sta-
tus and ranking are emphasized, but clear responsibilities and chains of command
are missing. The follow-up and management duties are almost solely the responsi-
bility of the Head of the Department. Lower-level superiors (Superior Referendaries) are appointed, but they do not have any formal manager status. Their role is mainly to distribute the cases evenly to the referendaries and to monitor the case load of the whole department. By expanding the management duties, there would be more time and resources to concentrate also on managing the process performance-related issues.

18.4.3 Lack of Production Planning at All Levels

At the start of the improvement project there was a general concern and notion that it is
not possible to increase productivity without increasing resources. A large number
of cases are solved but without any orderliness, leading to the aging of complex
cases. Restraining the number of very old cases is not a question of doing more cases
but a question of doing things according to some kind of a plan and order. There
is practically no production planning or production scheduling in the case courts.
The measures and follow-up indicators used have led to the fact that individual
workers plan their work mostly by picking out the cases that best help to meet the
expectations. A very common opinion at the start of the projects was also that one cannot plan professional work, or that one should not chain brainwork to schedules. It is true that the lack of deadlines for cases makes planning seem quite useless in the eyes of an individual professional.
Because no planning practices are used, it is very hard to find uninterrupted time for preparing the complex cases. When the time for longer preparation is not planned and scheduled, new, higher-priority cases always emerge and the preparation of the complex cases stops. The setup times grow enormously when getting acquainted with the case material has to be done several times all over again. The weekly case quota for the referendaries makes it even harder to find the preparation time needed for complex cases. One fact that complicates the planning is that the individual buffers of cases are so large and unmanageable that their holders no longer know or recall the situation of an individual case nor the age
of it. The piles of cases just keep growing on the desks, with no plan for preparing them.
There is a lot of compulsory waiting time in the case handling process that is
completely wasted without long-term planning. The arranging and convening of the court session is not started until the preparation of the case is completed. The arrangements for a court session are a difficult coordination task of getting all parties present. This leads to situations where the court session may take place months or even years after the preparation has been completed. In practice the referendary and the judge then need to get acquainted with the case material all over again nearer the court session.

18.4.4 Traditional Values and Attitudes of the Legal Profession

Every organization and profession creates its own set of values, beliefs and attitudes. In the legal profession, which has its origins in antiquity, these values and attitudes have a very long and traditional history. The legal professionals take much pride in their occupation, and they enjoy a lot of respect from society as a whole. This status relies heavily on the complex knowledge base of judges and lawyers and on the long traditions of methods and routines (Becher, 1999; Schein, 2004). In this sense there is very little room for an appreciation of productivity and effectiveness. This thinking is reflected in all aspects of court operations: what is valued is measured, managed, educated and done. The quality of the rulings made, for example, is followed very carefully, and a lot of time is spent on the spelling and phrasing of the final acts and on checking every little detail in the final decisions. So far, lead-time has not ranked high among these values and has even been seen as an enemy of the traditionally valued aspects of quality.
While legal professionals might not be the most dynamic or change-oriented professionals, they are very ethical and hard-working and do enormous work in order to fulfill what is expected of them. The prolonged lead-times are a cause of stress, but the professionals have felt quite powerless in the face of this ever-growing problem. When they see that something really works and helps, despite preconceptions, the acceptance of change initiatives is good, especially among younger professionals. Changes in values and attitudes will not happen overnight, and the improvement steps need to be justified and taken gradually.

18.5 Conclusions

When starting to describe courts of law as organizations, the first term that comes to mind is professional organization. But unlike, for example, small private law firms, the justice courts can also be described as production plants operating in a multi-staged process. Adding the features arising from public administration, it can be concluded that the courts of justice are organizations with diversified requirements for their operations. If the organization and its practices are designed with too much emphasis on one single feature, the other ones will suffer.
This paper has concentrated on describing the lead-time problem and its causes in courts of law. It can be concluded that the process performance problems are not a matter of producing too few decisions, nor a problem of average lead-time. The problem is the fact that some cases get prolonged so much that the courts receive complaints and backlogs emerge. Since the number of produced cases is very good in both case courts presented in this study, the source of the problem is not a lack of resources, nor is it the outcome of any other single factor. It is a complex mixture of causes and effects connected to unsuitable design parameters and working practices, combined with management practices and a value system with a long history and traditions. Several public sector professional organizations are facing a similar situation as the pressures for better efficiency and productivity constantly increase. They need to examine their operational practices thoroughly, pinpoint the sources of inefficiency, and find ways to transform. In justice courts the first step is to recognize and accept the fact that they also have process effectiveness-related demands that need to be fulfilled alongside the other quality criteria. It is largely an issue of broadening the image of the organization and its tasks, and of translating that into operation.
Big changes have been made to the measurement and follow-up systems, the management practices, and the planning and scheduling practices in the case courts. For example, in reforming the planning practices it became evident that scheduling supports professional work rather than impedes it, and that the time that can be spent on an individual case is longer when uninterrupted preparation time has been scheduled. All the improvement initiatives planned and taken in the case courts have proved that good results can be achieved and that the quality of justice decisions and process effectiveness are not mutually exclusive; they even support each other. While long traditions of working methods and values take a long time to change, it is possible to find ways to better balance the requirements of professional work and process efficiency.

References

Becher T (1999) Professional practices: Commitment & capability in a changing environment. Transaction Publishers, London
Brock D, Powell M, Hinings C (1999) Restructuring the professional organization:
Accounting, health care and law. Routledge
Coughlan P, Coghlan D (2002) Action research for operations management. International Journal of Operations and Production Management 22(2):220–240
Gummesson E (2000) Qualitative methods in management research. Sage Publica-
tions
Lowendahl B (2005) Strategic management of professional service firms, 3rd edn.
Copenhagen Business School Press
Martins AL, Helgheim BI, de Carvalho JC (2007) A logistics approach to shorten lead-times in Courts of Law - A case study. In: Proceedings of the 19th Annual NOFOMA Conference, Reykjavik, Iceland
McWilliams JM (1992) Setting the record straight: Facts about litigation costs and
delay. Business Economics 27(4):19
Mintzberg H (1983) Structure in fives: Designing effective organizations. Prentice-
Hall International
Rantanen H, Kulmala H, Lönnqvist A, Kujansivu P (2007) Performance measurement systems in the Finnish public sector. International Journal of Public Sector Management 20(5)
Schein E (2004) Organizational culture and leadership, 3rd edn. Jossey-Bass
Smolej M (2006) Time Management in Nordic Courts: Review of proposals and policies aimed at reducing delays in courts. Report presented at the 8th plenary meeting of the European Commission for the Efficiency of Justice (CEPEJ), Strasbourg
Chapter 19
Logistics Clusters - How Regional Value Chains
Speed Up Global Supply Chains

Ralf Elbert and Robert Schönberger

Abstract Concerning the topic of “How Regional Value Chains Speed Up Global Supply Chains”, the research question of this paper is the acceleration potential of logistics clusters in the entire supply chain. Acceleration potential can occur, e.g., as lead time reductions or as agile and quick responses. Logistics clusters can make it easier for companies to gain an increase in innovation ability and productivity, and so enable a higher reactivity. The increase is caused by a time advantage, which in turn results from a better product-market position, a connection of resources, core competencies and knowledge, and declining transaction costs within the cluster. Therefore Porter’s diamond model, which he used to transfer his own model of national advantages to the field of regional clusters, will be extended by an additional factor: time.

19.1 Introduction

“The world seems to move faster” - this statement is not new, but it has never felt truer than today. For companies it implies that traditional business models encounter limitations and, in addition, a strong conflict occurs: always being one innovation ahead of the competitors, securing immediate reactivity and a 100 percent stock availability are opposed to an “it-may-cost-nothing” mentality and the findings of lean management.
It is clear that it is harder for a single company, as a “lone standing fighter”, to meet these challenges of time and money. But cooperation in networks can offer

Ralf Elbert
University of Technology Berlin, Chair of Logistics Services and Transportation
e-mail: elbert@logistik.tu-berlin.de
Robert Schönberger
University of Technology Darmstadt, Chair of Clusters & Value Chain
e-mail: schoenberger@tud-cluster.de


an additional potential of responsiveness and flexibility in the long run. As a special form of inter-organisational cooperation, clusters¹ were discussed intensively in recent years. Almost 20 years ago it was Michael E. Porter who transferred the term “clusters” into managerial economics, connected it with competitive advantages and so set the focal point for the current enthusiasm for clusters.
Manifold studies reveal and document that clusters are a source of success for corporations (see e.g. Sölvell et al 2003; Rosenfeld 1997; Porter 1998; Roelandt and den Hertog 1999; Porter 2003). Thus, cluster development has emerged as a legitimate instrument of economic policy - the economic potential of clusters re-
legitimate instrument of economic policy - the economic potential of clusters re-
ceives attention from policy-makers at all political levels throughout Europe: The
European Commission, national, regional as well as city governments are promot-
ing regional clusters. On the European policy level see e.g. “The European Clus-
ter Memorandum” from December 2007 that is supported by national and regional
agencies for innovation and economic development and addressed to policy makers
at the national and European levels. In Germany, the Federal Ministry of Education and Research will spend 600 million euros on the development of 15 clusters all over the country in the next years. Furthermore, there are cluster development programs
on the federal state level e.g. in Bavaria (“Bayerische Clusterpolitik”), Hesse (“1.
Clusterwettbewerb des Landes Hessen”) and North Rhine-Westphalia (“RegioClus-
ter.NRW”) where cluster development is supported with financial means as well as
training and consulting of cluster managers (see Haasis and Elbert, 2008, p. 21-22).
Clusters as a specific inter-organisational form were also recognized immediately by logisticians. Because of their general knowledge about inter-organisational cooperation, e.g. in networks and supply chains, they quickly realized the opportunities of this kind of cooperation (see Pfohl and Trumpfheller, 2004, p. 3). So it is no surprise that to date more than 40 logistics clusters have been established in Germany (see Elbert et al, 2009, p. 65).
To find out how logistics clusters can create time advantages for the involved companies, a case study with an explorative research design, based on a text analysis, points out which statements are made by the logistics clusters regarding time and speed. This paper presents an examination of the websites of the 40 identified logistics clusters in Germany in order to detect passages which refer to time and speed. First references will be discussed: whether acceleration potential can be identified in the Porter diamond, which logistics concepts are made possible through regional cooperation, and how logistics clusters enable - if speed is the determining factor - a reduction of lead times and a fast turnaround at comparably small costs in the new mega-hubs, the logistics clusters.

1 Often also described with the terms industrial districts (see Markusen (1996) and Marshall (1920)), industry clusters (see Cortright, 2005, p. 8), innovative milieus (see Franz, 1999, p. 112-114), hot spots (see Pouder and St. John, 1996, p. 1194), sticky places, regional innovation networks and hub-and-spoke districts (see Markusen, 1996, p. 296-297).

19.2 Competitive Advantages through Clusters

In its broadest sense, a network can be thought of as social, economic and/or political relations between individuals and organizations (see Schubert, 1994, p. 9). Value creation networks are characterized by cooperation between economically and legally independent companies in order to realize competitive advantages (see Jarillo 1988, p. 32 and Liebhart 2002, p. 113). A regional value creation network consists of a large number of companies from the same region, which are tied to each other by cooperative and competitive horizontal and vertical links (see Ritsch, 2005, p. 28). This is what Porter defined as clusters: a geographic concentration of companies, specialized suppliers and service providers, as well as companies in related sectors and institutions, which are all connected in a special field of interaction. The characteristic of this connection could be a common branch and industry or related process technologies.
According to Porter “the enduring competitive advantages in a global economy
lie increasingly in local things - knowledge, relationships, and motivation that dis-
tant rivals cannot match” (see Porter, 1998, p. 78). It is again Porter who was the first to analyze clusters from his competitive strategy framework and to describe them as
a superior economic structure. Clusters affect competition in three broad ways: they
improve the productivity of cluster companies, increase their innovation capability
and stimulate the creation of new demand. Porter explains these effects with envi-
ronmental influences that converge in six attributes that have the greatest influence
on a company’s ability to innovate and upgrade. These attributes, which he terms the diamond, “shape the information firms have available to perceive opportunities, the
pool of inputs, skills and knowledge they can draw on, the goals that condition in-
vestment, and the pressures in firms to act” (see Porter, 1990, p. 111).
His diamond model, represented in the illustration below, covers the following four determinants: The factor conditions describe the position of a region concerning the availability of production factors. This covers all factors relevant for an industry, such as work force, raw materials and services. The demand conditions describe the kind of regional demand for the products or services of an industry. The related and supporting industries mark the presence of internationally competitive related industries or suppliers. The firm strategy, structure and rivalry finally determine the conditions of a cluster: how companies are organized and led, how they cooperate and what the regional competition looks like. Chance and government were later added as two further determinants, which can shape the regional competition sustainably and at the same time give important impulses for the cluster development.
The development of clusters is either founded on particularly favourable conditions in one of the four original determinants of the diamond model, or it can be triggered by acts of business which cannot be traced to special local conditions. Porter explains the further development of clusters with a reciprocal process which runs between the determinants of the diamond model. He understands the diamond as a self-reinforcing model, in which a positive development in one determinant leads to an improvement of the other competitive conditions. He assumes that a certain number of participants in the cluster - a critical mass - must be achieved, so that the
Fig. 19.1 The Porter diamond model (source: Porter 1990, p. 72)

interactions in the cluster initiate an inherent dynamism and automatically lead to a


settlement of specialized subcontracting firms, special training and research institu-
tions, as well as to a further development of the infrastructure (see Porter, 1990, p.
216).
The mostly macro-oriented analyses of Porter and others were very important in order to establish the topic ‘cluster’ in the minds of scientists, politicians and company leaders. But these mostly economically and economic-geographically driven studies do not explain how the success impact of clusters is generated. Even if regional agglomerations, and also clusters in various industries, have been a research subject for decades, there has until now been no focus on how regional value chains in clusters speed up global supply chains.
Bringing clusters and logistics together, it is safe to say that logistics clusters are clusters where logistics services are the central attribute of the connection between the participants (see Elbert et al, 2008, p. 315). Following the argumentation
of Porter, logistics clusters can be described as concentrations of logistics compa-
nies, specialized suppliers for logistics companies, logistics service providers, firms
in logistics-intensive industries as well as public and private institutions related to
logistics, which can compete and cooperate with one another. The term logistics
company covers both, the logistics industry (engineering and plants for the intralo-
gistics and material handling, software producer for logistic applications and system
integrators, etc.) and the logistics service providers (players like contract logistics,
shipping companies, warehouse operators, IT service providers, consulting firms,
etc.). Referring to the related devices, research institutions with an emphasis on
logistics as well as logistics associations and logistics-relevant institutes for stan-
dardization could be named as potential cluster participants. Depending upon the
development of a logistics cluster, vertical interdependences by supplier-customer
relations on the one hand and horizontal interdependences by companies on the
same stage in the value chain on the other hand can occur (see Elbert et al, 2008, p.
316).

19.3 Research Framework

In section 19.2 it was shown that clusters can generate competitive advantages re-
garding the Porter diamond. In order to deduce the emergence of these competitive
advantages, it is required to analyze the sources of value creation in clusters, where
basically three categories can be identified: “product-market position”, “resources,
core competencies and knowledge”, as well as “transaction costs” (see Elbert et al,
2009, p. 63).

19.3.1 Product-Market Position

According to the market-based view, value chains in clusters allow, on the one hand, the bundling of existing regional product-market positions and, on the other hand, the development of new product-market positions (see Möller, 2006, p. 40). By pooling already developed markets, companies can profit from the market positions of their cooperation partners and individually develop new markets on their own. For companies without network participation such a market entrance is much more difficult and more time and cost intensive. In particular, the bundling of financial and organisational resources within networks enables an accelerated and, for the single firm, more economical development of new markets (see Wrona and Schell, 2003, p. 320). In addition, regional cooperation with already established competitors can reduce the existing rivalry, whereby economic rents can increase temporarily (see Zahn and Foschiani, 2002, p. 271).

19.3.2 Resources, Core Competencies and Knowledge

According to the resource-based view (see Penrose, 1959; Prahalad and Hamel, 1990; Barney, 1991; Amit and Schoemaker, 1993) and its enhancements, the creation of value in networks is induced through the bundling and generation of resources, core competencies and knowledge. The aggregation and composition of complementary resources enables the development of individual strengths as well as the compensation of existing weaknesses. At the same time, network-specific intangible knowledge, which can be imitated only with difficulty, can be generated as a new core competency, which leads to a competence-based ability to innovate for the involved companies (see Duschek, 2002, p. 172). The capacity for innovation of the companies represents a central competitive advantage, which positively affects the value creation within the network. Regarding the knowledge that can be developed, the bundling in a regional network also enables research and development within specialized areas and thereby reduces the risk for each individual company. Beyond that, network-specific value creation results in particular from the transfer and enhancement of knowledge as well as from learning from and with the partners in the cluster (see Zahn and Foschiani, 2002, p. 271). The bundling of existing and the generation of new knowledge is based on the confidence between the partners as well as on the existing competence in relationships as another central core competency (see Wrona and Schell, 2003, p. 320).

19.3.3 Transaction Costs

According to the transaction cost theory, different kinds of transaction costs can be affected by cooperation in regional networks (see Woratschek and Roth, 2003, p. 156). Cost advantages arise in networks in particular as a result of scale effects, which cannot be realized by a single company individually (see Zahn and Foschiani, 2002, p. 270-271). From the transaction cost theory view, the costs per transaction can be reduced by investments in relational capital, since repetitive transactions between a small group of regional network participants reduce the initiation and arrangement costs; in addition, scale and scope effects with a rising contract volume in the region can be obtained and the average completion costs can be reduced. An extensive exchange of information leads to a reduction of the information asymmetries and, connected therewith, a reduction of the control costs in the region (see Dyer, 1997, p. 543-544). However, these advantages do not result automatically. On the one hand, inter-organisational cooperation can require a stronger coordination and organization of the activities. On the other hand, a lack of confidence and reputation of the companies can lead to opportunistic behaviour in the regional network, thus requiring higher safety precautions (see Williamson, 1991, p. 291). In both cases higher transaction costs would result from the cooperation in a regional network.
As is obvious from the discussion above, value creation within clusters is based on two mechanisms: first, clusters can improve a firm’s productivity and, second, they can increase its innovation capability. Both allow companies to produce a superior output at similar or lower costs, thereby improving their competitive position. Whereas agglomerations lead to shared commonalities across companies, these still act independently in the market place. It is only through cooperation that companies engage in joint regional value creation systems, in which an independent transformation process takes place. The companies’ input, consisting of a joint configuration of value activities as well as their combined resources and capabilities, is transformed by the reinforcing effects of the Porter diamond and leads to upgraded products and innovations.
The following figure illustrates how clusters - starting from a simple agglomer-
ation of companies - can be a source of innovation and productivity through coop-
eration, in combination with an activating cluster management. The Porter diamond reinforces the underlying sources of competitive advantage, leading to superior productivity and enhanced innovation capabilities of the related companies. Time as a determining factor is added as an additional advantage.

Fig. 19.2 Time advantages on the way to innovation and productivity

Thus, it becomes evident how a cluster generates time advantages and thus acceleration potential for the involved companies. What remains is the transfer to logistics and the empirical confirmation that it is possible to gain time advantages through clusters in the global supply chain.

19.4 Empirical Analysis

Within a really short time more than 40 logistics clusters were established in Germany. A deeper contemplation shows that all sizes, ages and forms of organization can be found among the logistics clusters. On the one hand there are big and established ones, which are supported by ministries or regional governments; on the other hand there are small and young logistics clusters which, it seems, are still searching for their role in the global supply chain. Most important, however, is that logistics clusters do not look like a phenomenon that is going to disappear during the next years. Since most logistics clusters have a long-ranging business model, it can be assumed that their positions will even strengthen. The logistics clusters which could be identified in Germany and were analyzed are listed in the Appendix.

If logistics clusters are to be relevant players in the supply chain, it is important to know what kind of advantages companies are effectively able to gain. As shown in section 19.3, the Porter diamond makes it possible to realize time advantages through cluster activities. Although most logistics clusters are young and new actors in the supply chain, all forty clusters have web pages and an internet presence. Therefore it is reasonable to use this form of publication for desktop research and to find out more about what the clusters think about their own acceleration potential in achieving innovations and higher productivity (a minimal sketch of how such a website keyword scan could be operationalized follows the list below). Several logistics clusters are obviously working on questions of speed, agile and quick responses, as well as reactivity. Most clusters refer to:
• Infrastructure and the possibilities of working together on its improvement within the cluster to achieve faster movements.
• Cooperation with customs to reduce latency and to increase throughput.
• Local traffic jam warning systems to reduce waiting times, energy usage, environmental pollution and the negative effects of traffic.
• Increase of transparency to reduce contact times.
• Building up a freight exchange to increase the productivity of all carriers.
• Reducing throughput times through area-wide, intelligent traffic control systems to reduce the risk of traffic jams in the region.
• Connecting the regional players through regional networking and the formulation and implementation of joint logistics projects to speed up the ability to react within the cluster and the region.
• Making it easier to provide logistics services through special infrastructural offers and to be able to innovate.
• Capturing, structuring and operationalizing the term flexibility to create the basis for advantages for small and medium-sized businesses and to increase their productivity.
• Knowledge exchange in several ways to accelerate knowledge spill-over.
• Establishing ideas on how to segment customers by lead-times to increase reactivity.
• Promotion and acceleration of research and development to transfer creative impulses into marketable logistics products and services.
Figure 19.2 shows how, on the basis of an expedient “Product-Market Position”, a beneficial combination of “Resources, Core Competencies and Knowledge” as well as a reduction of “Transaction Costs”, regional logistics clusters can generate time advantages. The model thus draws direct connections from these foundations to time advantages and on to increased innovation and higher productivity. A look at logistics cluster practice provides concrete examples of the importance of these connections:
• Product-Market Position → Innovation & Productivity
In many logistics clusters - for example in the LogistikNetz Berlin-Brandenburg or in the competence centre logistics Bremen - the cluster participants bundle their interests, also with regard to the development and use of existing industrial real estate, and thus promote the settlement of new companies as well as the growth of existing companies and of the entire region. At the same time, they improve the development of the labor force through joint education and vocational training, or through joint investment in infrastructure, thereby enabling its development and a higher degree of utilization. These activities affect, on the one hand, the factor conditions, i.e. the existing human, capital and natural resources. On the other hand, the settlement of new companies affects the demand conditions and the competition between the logistics cluster participants. Beyond that, some logistics clusters concentrate on networking between the logistics industry and related, supporting industries. A very good example of this is the logistics cluster metropolis Ruhr, which wants to lead the Ruhr region, through the interdisciplinary connection of logistics and IT, from being the “traffic-technical center” to becoming the “information-logistics center of Europe”: a regional bundling of existing product-market positions combined with the development of new ones.
• Resources, Core Competencies and Knowledge → Innovation & Productivity
Knowledge transfer between the cluster participants - as a basis for strengthening and developing common strengths - is a goal frequently specified by the analyzed logistics clusters. In this context, interdisciplinary cooperation between companies, universities and research institutions is sought, as, for example, “Bavaria innovatively: Cluster logistics” particularly emphasizes among its activities. In this way it should become easier, especially for small and medium-sized companies, to get in touch with research institutions. Opportunities for knowledge transfer are offered by workshops, lectures, working groups and further meetings. Common core competencies are developed when the cluster selects a certain topic within logistics in order to create a basis for competence-based innovation capability. The intralogistics network in Baden-Württemberg can be named as an example.
• Transaction Costs → Productivity
The majority of the logistics clusters' current activities - e.g. newsletter mailings, the implementation of an internet portal and the organization of various events, as for example the logistics cluster North Rhine-Westphalia or the network logistics Leipzig are doing - have the goal of creating trust between the cluster participants. These activities are to be regarded as investments in relational capital and thus form the basis for cooperation in the logistics clusters. Beyond that, the information transfer also enables the networking of business, science, research and politics. The exchange between the companies creates proximity, which leads, for example, to a reduction of initiation and arrangement costs. The resulting needs for coordination and organization make a dedicated cluster office necessary. Nearly two thirds of the observed logistics clusters already have their own cluster office; an example is the logistics network Thuringia.
The discussion shows that the logistics clusters pursue several paths to create a basis for cooperation and to develop the regional conditions needed to accelerate logistics services. At present, most of the acceleration-related issues named on the websites concern improvements in infrastructure to speed up traffic and activity in the cluster area. Joint research and development activities in particular, however, show the intention to create new logistics services. And bringing the local logistics actors together in working groups or discussion forums offers a chance to exchange knowledge and, even more, establishes the basis for cooperation. Because people are close together and exploit the advantage of short distances, a time advantage translates into faster innovation and higher productivity.

19.5 Further Research and Outlook

By describing the sources of value creation and combining them with the self-reinforcing effects of the Porter diamond, it was possible to show how clusters can create a time advantage on the way to increased innovation and higher productivity. Short distances, fast contacts and response times, as well as knowledge spill-overs within a cluster, lead to these time advantages and allow redundant resources to be reduced.
The web pages of the 40 identified logistics clusters in Germany give an impression of the acceleration potential that lies in these regional networks. Since in most regions the logistics clusters are relatively new, the results of the cluster work are not yet measurable. For future research it is therefore necessary to keep an eye on cluster development and to follow up on what the clusters are doing in their regions with regard to time and speed.
If logistics clusters succeed in expanding their position in the supply chain as new mega hubs, the further question arises whether such regional networks can activate additional value creation and whether, in the future, production will follow logistics.

19.6 Appendix

Logistics Clusters in Germany


• Arbeitsgruppe Logistik
• Arbeitskreis Logistik Oldenburg
• Cluster Nutzfahrzeuge Schwaben (CNS)
• Cluster-Offensive Bayern: Cluster Logistik
• CNA Center for Transportation & Logistics Neuer Adler e.V.
• Cross Border Logistics CBLog
• FAV (Forschungs- und Anwendungsverbund Verkehrssystemtechnik) Berlin
• ForLog
• Gesamtzentrum für Verkehr Braunschweig e.V. (GZVB)
• GVZ (Güterverkehrszentrum) Augsburg
• Intralogistik-Netzwerk in Baden-Württemberg e.V.
• KLOK Kooperationszentrum Logistik e.V.
• Kompetenzzentrum Logistik Bremen
• KTMC - Kompetenzzentrum Telematik . Mobile Computing . Customer Care
• Last Mile Logistik Netzwerk GmbH
• LOGIS.NET
• Logistics-Ruhr.com
• Logistik in Schwaben
• Logistik Initiative Rhein-Erft
• Logistik Netzwerk Thüringen e.V.
• Logistik RheinMain
• Logistikagentur Oberfranken
• Logistikcluster “LogistikLand NRW”
• Logistik-Image-Kampagne Baden-Württemberg
• Logistik-Initiative Baden-Württemberg
• Logistik-Initiative Duisburg/Niederrhein
• Logistikinitiative Emsland
• Logistik-Initiative Greven
• Logistik-Initiative Hamburg
• Logistikinitiative Log-I SH
• Logistikinitiative Mecklenburg-Vorpommern
• Logistikinitiative Niedersachsen
• LogistikNetz Bayerischer Untermain
• LogistikNetz Berlin-Brandenburg
• LogistikRuhr
• Logport77
• logRegio Lübeck
• MoWiN.net - Mobilitätswirtschaft in Nordhessen
• Netzwerk Logistik Leipzig/Halle
• Research Cluster for Dynamics in Logistics

References

Amit R, Schoemaker PJH (1993) Strategic Assets and Organizational Rent. Strate-
gic Management Journal 14(1):33–46
Barney J (1991) Firm Resources and Sustained Competitive Advantage. Journal of
Management 17(1):99
Cortright J (2005) Making Sense of Clusters: Regional Competition and Economic Development. Last checked 30.07.2008, URL http://www.brook.edu/metro/pubs/20060313_Clusters.pdf
Duschek S (2002) Innovation in Netzwerken: Renten-Relationen-Regeln. Wies-
baden
Dyer JH (1997) Effective Interfirm Collaboration: How Firms Minimize Trans-
action Costs and Maximize Transaction Value. Strategic Management Journal
18(7):535–556
Elbert R, Schönberger R, Müller F (2008) Regionale Gestaltungsfelder für robuste
und sichere globale Logistiksysteme. Strategien zur Vermeidung, Reduzierung
und Beherrschung von Risiken durch Logistik-Cluster. In: Pfohl HC, Wimmer T
(eds) Wissenschaft und Praxis im Dialog. Robuste und sichere Logistiksysteme,
Hamburg, pp 294–322.
Elbert R, Schönberger R, Tschischke T (2009) Wettbewerbsvorteile durch Logistik-
Cluster. In: Wolf-Kluthausen H (ed) Jahrbuch Logistik 2009, Korschenbroich, pp
61–67.
Franz P (1999) Innovative Milieus: Extrempunkte der Interpenetration von
Wirtschafts- und Wissenschaftssystem. Jahrbuch für Regionalwissenschaft
19:107–130
Haasis HD, Elbert R (2008) Bringing regional networks back into global supply chains: Strategies for logistics service providers as integrators of logistics clusters. In: Kersten W, Blecker T, Flämming H (eds) Global Logistics Management, Berlin, pp 21–31
Jarillo JC (1988) On Strategic Networks. Strategic Management Journal 9(1):31–41
Liebhart UE (2002) Strategische Kooperationsnetzwerke. Entwicklung, Gestaltung
und Steuerung. Wiesbaden
Markusen A (1996) Sticky Places in Slippery Space: A Typology of Industrial Dis-
tricts. Economic Geography 72(3):293–313
Marshall A (1920) Principles of Economics. London
Möller K (2006) Wertschöpfung in Netzwerken. München
Penrose ET (1959) The Theory of the Growth of the Firm. Oxford
Pfohl HC, Trumpfheller M (2004) Das ATHENE-Projekt. Auf dem Weg zur Net-
zkompetenz in Supply Chains. In: Pfohl HC (ed) Netzkompetenz in Supply
Chains. Grundlagen und Umsetzung, Wiesbaden, pp 1–10
Porter ME (1990) The Competitive Advantage of Nations. London
Porter ME (1998) Clusters and Competition: New Agendas for Companies, Gov-
ernments, and Institutions. In: Porter ME (ed) On Competition, Boston
Porter ME (2003) The Economic Performance of Regions. Regional Studies
37(6/7):549–578
Pouder R, St John CH (1996) Hot spots and blind spots: Geographical clusters of
firms and innovation. Academy of Management Review 21(4):1192–1225
Prahalad CK, Hamel G (1990) The Core Competence of the Corporation. Harvard
Business Review 68(3):79–91
Ritsch K (2005) Wissensorientierte Gestaltung von Wertschöpfungsnetzwerken.
Aachen
Roelandt T, den Hertog P (1999) Cluster Analysis and Cluster-Based Policy Mak-
ing in OECD Countries: An Introduction to the Theme, Ch 1. In: OECD (ed)
Boosting Innovation: The Cluster Approach, Paris, pp 9–23
Rosenfeld SA (1997) Bringing business clusters into the mainstream of economic
development. European Planning Studies 5(1):1–15
Schubert K (1994) Netzwerke und Netzwerkansätze: Leistungen und Grenzen eines
sozialwissenschaftlichen Konzeptes. In: Kleinaltenkamp M, Schubert K (eds)
Netzwerkansätze im Business-to-Business-Marketing. Beschaffung, Absatz und
Implementierung Neuer Technologien, Wiesbaden, pp 8–49
Sölvell O, Lindqvist G, Ketels C (2003) The Cluster Initiative Greenbook. Stock-
holm
Williamson OE (1991) Comparative Economic Organization: The Analysis of Dis-
crete Structural Alternatives. Administrative Science Quarterly 36(2):269–296
Woratschek H, Roth S (2003) Kooperation: Erklärungsperspektive der Neuen Institutionenökonomik. In: Zentes J, Swoboda B, Morschett D (eds) Kooperationen, Allianzen und Netzwerke. Grundlagen - Ansätze - Perspektiven, Wiesbaden, pp 141–166
Wrona T, Schell H (2003) Globalisierungsbetroffenheit von Unternehmen und die
Potenziale der Kooperation. In: Zentes J, Swoboda B, Morschett D (eds) Koop-
erationen, Allianzen und Netzwerke. Grundlagen - Ansätze - Perspektiven, Wiesbaden, pp 305–332
Zahn E, Foschiani S (2002) Wertgenerierung in Netzwerken. In: Albach H, Kaluza
B, Kersten W (eds) Wertschöpfungsmanagement als Kernkompetenz, Wiesbaden,
pp 265–275
Part IV
Survey and Longitudinal Research
Chapter 20
Measuring the Effects of Improvements in
Operations Management

Vedran Capkun, Ari-Pekka Hameri and Lawrence A. Weiss

Abstract This paper examines the relationship between a managerial focus on re-
ducing inventory and improvements in value added. We analyze financial informa-
tion on large non-service US based firms over the 25 year period from 1980 to 2004.
Our results show a very strong correlation between the increase in value added and the decrease in days of supply across all manufacturing industries. The results strongly support the operations management literature, which claims that a managerial focus on efficiency, in particular on increasing the speed of operations, will result in significant value creation for firms. The results also imply that the concept of competition based on operational speed has not been transferred across all firms and that the potential for improvement still exists in most industries.

20.1 Introduction

Operational speed, defined as the lead time from order handling, through production
and delivery, to the customer, has long been recognized as one of the common char-
acteristics of successful companies in competitive business environments. All oper-
ational management methods to improve operations are supposed to make processes

Vedran Capkun
HEC School of Management, 1, rue de la Liberation, 78351 Jouy-en-Josas cedex, France, tel. +33-
1-39-67-96-11 fax. 70-86
e-mail: capkun@hec.fr
Ari-Pekka Hameri, corresponding author
Ecole des HEC, University of Lausanne, Internef, Lausanne 1015 Switzerland, tel +41 21 692 3460
fax 3495
e-mail: Ari-Pekka.Hameri@unil.ch
Lawrence A. Weiss
McDonough School of Business, Georgetown University, Old North G01A, Washington, DC
20057-1147, USA, tel. +1-202-687-3802 fax. 4031
e-mail: law62@georgetown.edu


faster, more controllable and accurate. This includes business process re-engineering,
total quality management, vendor managed inventories, supply chain integration,
just-in-time, lean thinking, and activity based management. Among the best known
firms which focus on operational speed are the computer assembler Dell and the
apparel company Zara. These companies avoid the perilous impact of supply chain
dynamics by operating at speeds where the capital bound by their operations is a
fraction of the overall volume of their business. This provides them with the agility
to react to sudden demand variations and outperform their competitors. These com-
panies are also less dependent on forecasts and preplanned operations giving them
an additional advantage and cost efficiency over their slower competitors. Schmenner (1988), Stalk and Hout (1990), and Womack and Jones (1996) all find a strong positive relationship between financial results and firms that set operational speed as their key strategic approach.
According to the operations management literature, each operation that is part
of a business process should add to the value of the end-product. The more effi-
ciently a company creates value which customers are willing to pay for, the greater
the firm’s ability to deliver value to its stakeholders. One of the first steps to in-
creasing the value creation capability of a process is to remove those operations that
do not add value (in the sense of providing something a customer is willing to pay
for). Essentially a process should create value without bottlenecks and the process
variability, inherent or external, should be minimized (see Schmenner and Swink,
1998). Whether the term used is swiftness, operational speed, or reduced days of
supply, the aim is to improve operational speed by reducing the lead times of the
value creation processes of the company.
The majority of success stories in operations management stem from automo-
tive, machinery and job-shop (i.e. assembly) industries. It is important to review
how just-in-time and other operational methodologies have affected these and other
industries since their introduction in the early 1980s. The paper begins by presenting
the relevant literature on lead time reduction and competition based on speed. This
is followed by a description of the research hypothesis, sample description, and ap-
plied methodologies. Then we document the relationship between value creation
and operational speed by using financial information from large U.S. companies.
Next, we illustrate how development in operational speed has taken place in general
and across different industries. Finally, we present a summary of our key findings
with their managerial implications.

20.2 Literature Review

Manufacturing processes have come a long way since Henry Ford’s moving assem-
bly line and mass production. Ford's focus was on output and cycle time; however,
he also provided examples of how repeating tasks improve lead time along the tradi-
tional learning curve - like the case of disassembling excess war ships after the First
World War (Ford, 1922). Scale and cost centric manufacturing were a managerial

focus until the 1970s. Then, the quality movement turned the focus to continuous
improvement and errorless operations. By the end of the 1970s, quality management
had become entrenched in manufacturing operations. The arrival of Just-in-Time
(JIT), with its emphasis on waste elimination, inventory reduction, and operational flexibility,
created a managerial focus on operational speed in the early 1980’s.
Schonberger (1982), in one of the first books on JIT, describes the benefits stem-
ming from reduced set-up times and smaller lot-sizes. He also warns of the perils
of monster machines and excessive investments in technological marvels which re-
quire near 100% utilization levels to justify the financial investment. Hall (1983) and
Monden (1983) document the increase in efficiency obtained by Japanese produc-
tion facilities. All these books use examples from car assemblers and their suppliers
with a few references to other machine assembly industries. Thus, the initial fo-
cus of the early JIT literature was on a relatively repetitive type of production with
assembly of complex products. This trend continued with other supportive studies
reporting on job shops with huge product variety, and numerous different operations
and product routings.
Goldratt and Cox (1984) and Suri (1994, 1998) refined the managerial focus to
a relentless reduction of bottlenecks and lead time. These approaches, theory of
constraints and quick response manufacturing, rooted their message on flow and
lead time reduction by representing a plethora of cases from job shops and machine
assembly companies. The same message was being delivered by other scholars and
practitioners under different labels like time based competition (Stalk, 1988; Stalk
and Hout, 1990) and lean manufacturing (Womack et al, 1990; Womack and Jones,
1996).
Little (1961) shows that, for a given throughput, flow time is directly proportional to average inventory. This means speed will increase with inventory reductions1. Schmenner (2001) provides
an overall perspective on the history of manufacturing and operations management
with the simple phrase “swift, even flow”. He argues that the operational management emphasis should be on an expeditious, well-controlled flow of material through various value-adding operations, without bottlenecks and excess variability. According to Schmenner (2001), throughout history, companies that focused on flow with an emphasis on speed and variability reduction have outperformed companies emphasizing other goals. This is consistent with the mathematical principles of operations management, based on queuing theory, which demonstrate the relationships between lot sizes, bottlenecks, lead times, and process variability (see Hopp and
Spearman, 1996).
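As a brief aside, the algebra behind this claim can be written out directly; the notation below is ours (it anticipates the days-of-supply measure defined in Section 20.3) rather than Little's (1961) original symbols:

L = λW, hence W = L / λ = Inventory / (COGS / 365) = DoS (in days),

where L is the average inventory (in dollars), λ the throughput rate (COGS per day), and W the average time a dollar spends in inventory. For a given throughput, halving inventory therefore halves days of supply, i.e. doubles operational speed.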
Despite the sound mathematical grounds and numerous documented cases, speed
based competition is still principally rooted in high volume and repetitive industries.
The flagship of this trend remains the automotive industry, followed by job shops
assembling machinery. Studies like Holmström (1995) and Hameri and Lehtonen
(2001) indicate that most industries are dormant when it comes to operational speed.
Some industries, like pulp and paper, have improved their productivity by investing
in automation and through vertical integration including strong merger and acqui-
1 Unless of course there is a supply chain glitch, which would then cause a decline in speed. See
Hendricks and Singhal (2003)

sition strategies (Hameri and Weiss, 2006). The Internet has also pushed opera-
tional speed into a new focus over the past few years. Firms now use the internet
to improve information transparency across organizational boundaries. Piszczalski
(2000) and Bruun and Mefford (2004) provide examples of how new communica-
tion technologies speed-up operations and reduce mistakes. They show how com-
pletely new business models based on automatic order handling and verification
have emerged. There are some anecdotal studies on JIT and speed based competi-
tion in services (Barlow, 2002) and other manufacturing industries. Unfortunately,
these are few and sporadic. The vast majority of the examples of speed based com-
petition are based on industries related to automobiles, assembly of machinery, and
computer equipment.
Over half a century ago Forrester’s (1961) non-linear simulations on information
and delivery delays in supply chains helped academics and managers understand
how information distortion and order batching lead to ever longer lead times and
inventory build up. JIT emerged in the early 1980s and brought supplier relations to
the forefront. Activity Based Costing systems arrived a few years later, in the mid
1980’s. These systems allowed managers to quantify the cost of resource demands
and operational processes. This provided managers with an improved understanding
of the underlying economics of their operations and allowed them to make more
informed decisions on their product and process choices (see Kaplan and Cooper,
1998). In the early 1990’s, the focus on the entire supply chain (from elementary
suppliers to the end customer) gained extensive momentum. Today, supply chain
management is a corner stone of modern operations management and supply chain
structures and control principles are the subject of extensive academic research (e.g.
Houlihan, 1987; Fisher and Raman, 1996; Fisher, 1997; Frohlich and Westbrook,
2002).
As noted above, speed and lead time related research has focused principally on
the automotive, machinery, and computer assembly operations. By contrast, sup-
ply chain research extends to all industries. It reviews supply chain structures with distribution centers and various forwarding parties included in the chain. The un-
derlying theme in supply chain management is on information transparency, reliable
lead times, and the clever positioning of various value adding operations in the long
logistical chains. Companies with efficient in-house operations are also more likely
to display competence in managing their supply chains.
To summarize, operational speed, defined as the lead time from order handling,
through production and delivery, to the customer, has been recognized as one of
the common characteristics of successful companies in competitive business en-
vironments. Most of the research in the field of operational speed documents the
numerous cases where lead time reduction has resulted in major advantages for
the company. The vast majority of this research concerns industries where JIT and
speed based competition was first introduced - namely automotive, job shops, and
electronic equipment. Surprisingly, there are no studies which review different in-
dustries and their development over longer periods of time. Our paper aims to fill
this gap by focusing on the longitudinal development in the reduction of operational
speed (defined as days of supply) and the related increase in value creation.

20.3 Research Questions, Sample Selection, and Methodology

This paper examines the relationship between value creation and three key elements
ascribed to a managerial focus on operational efficiency: days of supply, new in-
vestments in plant and equipment, and expenditures on research and development.
The key findings of the operations management literature indicate that changes in value added should be at least partially explained by these three key variables, which defines the model to be tested in this study (Fig. 20.1). Following the literature review
we set our research questions as:
1. What is the impact of a managerial focus on operations on value creation?
2. How does this link vary across industries? and
3. Is there a stronger correlation among industries that were early adopters of speed
based operations strategies?

Fig. 20.1 The underlying model of the study: How capital expenditures, days of supply and R&D
costs (our three independent variables), correlate with increases in value added (our dependent
variable)

The Thomson Financial database is used to retrieve the Standard Industrial Classification (SIC) codes of the firms in our sample and we use these codes to split our sample into two sub-samples - manufacturing firms and non-manufacturing firms. Table 20.1 documents the classification of firms based on their SIC codes. We
begin with 2,160 non-financial US based firms with assets in excess of US$ 100
million. On average, firms in our sample have total assets of $ 4.2 billion with a
median of $ 783 million. The range includes firms with assets from $ 100 million to
$462 billion. The sample is then divided into 1,166 manufacturing firms (SIC codes
from 20-39) and 994 non-manufacturing firms (SIC codes from 40-59). Manufac-
turing firms are significantly smaller than the non-manufacturing firms. The sample
of the manufacturing firms has a mean in total assets of $ 3.9 billion (median of $
646 million) compared to $4.6 billion (median $1.0 billion) for the sample of non-
manufacturing firms. To compute the change in our variables (as defined below) we
require at least two consecutive periods and exclude data without a consecutive pe-
riod. We also remove negative value added data from the sample. Our final samples
consist of 915 firms and 10,882 observations (an average of 12 periods per firm)
for the regressions 1 & 2, and 927 firms with 11,244 observations (on average 12
periods per firm) for regression 3, over the 25 year period 1980-2004.

Table 20.1 SIC industry classification codes for firms used in our sample. Our sample consists of
firms with SIC primary codes from 01-59
01-09 Agriculture, Forestry, And Fishing
10-14 Mining
15-17 Construction
20-39 Manufacturing
40-49 Transportation, Communications, Electric, Gas, And Sanitary Services
50-51 Wholesale Trade
52-59 Retail Trade
60-67 Finances
70-89 Services
91-99 Public Administration
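As a minimal sketch of the sample-split rule described above (the two-digit ranges come from Table 20.1; the function name and the assumption of a four-digit primary SIC code as input are our own illustrative choices, not part of the original study):

def classify_firm(primary_sic: int) -> str:
    # Split follows Table 20.1 and the surrounding text: two-digit SIC
    # codes 20-39 are manufacturing, 40-59 non-manufacturing; financial
    # firms (60-67) and all other codes fall outside the two sub-samples.
    two_digit = primary_sic // 100 if primary_sic >= 100 else primary_sic
    if 20 <= two_digit <= 39:
        return "manufacturing"
    if 40 <= two_digit <= 59:
        return "non-manufacturing"
    return "outside sub-samples"

# Example: classify_firm(3711) -> "manufacturing" (SIC 37, motor vehicles);
#          classify_firm(5411) -> "non-manufacturing" (SIC 54, retail trade).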

We collect data on our dependent variable and the three independent variables
from the annual report of each firm over the 25 year period 1980-2004. Our variables
are defined as follows:
(1) VAD = (Sales-COGS) / Number of Employees

VAD is the value added defined as gross profit per employee. Sales are annual sales
taken from the firm’s annual income statement. COGS are annual costs of goods
sold taken from the firm's annual income statement. Number of employees is the year-end total number of employees as reported in the annual report.
(2) DoS = (End of the year inventory / COGS) × 365

DoS is the days of supply. End of the year inventory is taken from the firm’s balance
sheet. COGS are the annual costs of goods sold taken from the firm’s annual income
statement. A reduction in DoS proxies for improvements in operations and should
lead to an increase in VAD.

(3) CAPEX = net capital expenditures on new investments in long term assets
reported on the cash flow statement.

CAPEX is the capital expenditure on long term assets. An increase in CAPEX indi-
cates new investments for operations which should lead to an increase in VAD.

(4) R&D = research and development expenses reported on the income state-
ment.

R&D is the annual expenditure on research and development. An increase in R&D includes improvements in operating processes which should lead to an increase in VAD. We specify our regression as follows2:

(5) Δ ln(VAD) = β1 Δ ln(DoS) + β2 Δ ln(CAPEX) + β3 Δ ln(R&D) + ε

We measure the percentage change in value added per employee in relation to the percentage change in speed of inventory, investments, and research and develop-
opment. We use the fixed effects regression to analyze the relationship between the
variables3 . This method was originally developed and used by economists and has
gained broad acceptance and use throughout the social sciences. The fixed effects
method eliminates major biases from regression models with multiple observations,
especially longitudinal ones as we have here, for each subject.
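To make the estimation procedure concrete, the following sketch shows how the variable construction (definitions (1)-(5)) and the within-firm fixed effects regression could be reproduced in Python with pandas and statsmodels. The column names and data layout are illustrative assumptions, not the authors' actual code, and the standard errors from this bare within-transformation ignore the degrees-of-freedom correction for the absorbed firm effects:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# df holds one row per firm-year with (assumed) columns:
# firm, year, sales, cogs, employees, inventory, capex, rd
def fixed_effects_regression(df: pd.DataFrame):
    df = df.sort_values(["firm", "year"]).copy()
    df["vad"] = (df["sales"] - df["cogs"]) / df["employees"]  # definition (1)
    df["dos"] = df["inventory"] / df["cogs"] * 365            # definition (2)
    df = df[df["vad"] > 0]  # negative value added is removed from the sample

    # Percentage changes as within-firm log differences; this assumes
    # strictly positive values and consecutive periods, as in the text.
    for col in ["vad", "dos", "capex", "rd"]:
        df["dln_" + col] = np.log(df[col]).groupby(df["firm"]).diff()
    df = df.dropna(subset=["dln_vad", "dln_dos", "dln_capex", "dln_rd"])

    # Firm fixed effects via the within transformation: demean every
    # variable by firm, then run OLS on the demeaned data.
    cols = ["dln_vad", "dln_dos", "dln_capex", "dln_rd"]
    within = df[cols] - df.groupby("firm")[cols].transform("mean")
    y = within["dln_vad"]
    X = within[["dln_dos", "dln_capex", "dln_rd"]]
    return sm.OLS(y, X).fit()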
There is a direct link between inventory reduction and gross margin; however,
this link is very small. As inventory is reduced there will be profit improvements
due to interest savings as well as a reduction in storage fees, handling, and waste.
These savings have been estimated in the literature to be on the order of 20-30% of the inventory value per year (see Brigham and Gapenski, 1993). The interest savings are the reduction in inventory times the firm's cost of capital. This will have no effect on gross margin as interest
revenue or a reduction in interest expense is reported after operating profits. This
leaves an impact of approximately 20% in other savings which could appear as
an improvement in gross margin. The impact on value added (gross margin/# of
employees) will be on the order of 2%.
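A back-of-the-envelope calculation makes this order of magnitude visible; all input figures below are assumed purely for illustration:

# Assumed illustrative figures for a firm cutting days of supply from 90 to 75.
cogs = 100e6                    # annual cost of goods sold, $
gross_margin = 35e6             # sales - COGS, $
freed = cogs * (90 - 75) / 365  # inventory reduction, about $4.1M

carrying_rate = 0.30  # total annual carrying cost (upper end of the 20-30% range)
interest_rate = 0.10  # assumed cost of capital; booked below operating profit
other_savings = freed * (carrying_rate - interest_rate)  # the ~20% non-interest part

# Only the storage, handling and waste savings can reach gross margin:
print(round(other_savings / gross_margin, 3))  # 0.023, i.e. on the order of 2%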

20.4 Empirical Analysis, Results and Discussion

We begin our examination of the link of days of supply, capital expenditures and R&D costs with value creation across all firms and then separate our sample
between manufacturing and non-manufacturing firms. Table 20.2 shows the results
2 Specifying the regression differently does not change our conclusions. Using CAPEX/Total Assets and RD/Sales instead of changes in CAPEX and RD does not change our conclusions, nor does using absolute values of VAD and DoS.
3 Our conclusions do not change if we use random effects regression or if we lag value added. Using other definitions of days of supply and value added also does not change our conclusions, nor does using absolute values of variables.

for the whole sample and the two sub-samples for the three regression models. The
first model (regression 1) is the original model with value added per employee as the
dependent variable. To test the sensitivity of our results to the choice of dependent
variable and macroeconomic conditions, in the second model (regression 2), we ad-
just value added per employee for inflation resulting in value added per employee in
1980 US dollars. This adjustment does not change our conclusions. We run the third
model (regression 3) with value added divided by total assets (instead of divided
by employees) as the dependent variable. Our conclusions remain the same. For the
whole sample there is a strong and statistically significant (1% level) link between
our three independent variables and value added, regardless of the choice of regres-
sion model. A closer look shows that this holds only for manufacturing firms. For
non-manufacturing firms, the only statistically significant coefficient is days of sup-
ply, and in most cases only at the 10% significance level. This was expected based
both on prior research and common sense. Value creation in non-manufacturing op-
erations is by default less related to our chosen independent variables. Many of the
non-manufacturing firms have little or no inventory, and others have a constant inventory level, making their value added insensitive to changes in inventory
management. The detailed results for non-manufacturing industries (industry anal-
ysis) are mixed and do not indicate a relationship between the dependent variable
and any of the three independent variables4 .
N is the number of firms in the sample and the sub-samples. Value Added is
defined as (Sales - COGS)/Number of Employees. Days of Supply is defined as
(End of the year inventory/COGS)×365. Capital expenditure is defined as the net
capital expenditures on new investments in long term assets. R&D cost is defined as
the research and development expenses. Values in parentheses are standard errors of
the coefficients, while ***, ** and * represent significance levels of 1, 5, and 10%
respectively.
As noted above, for manufacturing firms the coefficient of speed is significant
and negatively related to value added. The coefficients associated with both capital
expenditures and R&D expenditures are positive and significant for manufacturing
firms. These results are consistent with the argument that improving operational
speed, increasing capital investments and/or increasing funding of R&D all lead to
an increase in value added. The results are consistent with our model depicted in
Fig. 20.1 for manufacturing companies.
To analyze manufacturing industries in more depth, we further divide our sam-
ple of manufacturing firms into sub-samples based on the first two digits of their SIC code. We eliminate those industries in our sample with fewer than 20 firms. The
results are presented in Table 20.3. While the coefficient associated with days sup-
ply remains negative and significant, the capital expenditure and R&D expenditure
coefficients differ across industries. The coefficient of capital expenditure is signif-
icant and positive for primary metal, machinery and computer equipment, electron-
ics, transportation equipment, and instruments. By contrast, an increase in capital
expenditure does not seem to have any relationship to the value added increase in
4 We analyzed all non-manufacturing industries but we do not present the results in this paper. The
results are not significant for days of supply in any of the analyzed non-manufacturing industries.

Table 20.2 The table presents fixed effects regression coefficients of the percentage increase in
Value Added during the 25 year period 1980 to 2004 on the percentage change in days of supply,
capital expenditures and research and development costs
All firms                                        Days of Supply  R&D costs  Capital Expenditures
Regression 1 (915 firms):
Value added per employee                         -0.185***       0.042***   0.061***
                                                 (0.01)          (0.009)    (0.006)
Regression 2 (915 firms):
Value added per employee adjusted for inflation  -0.184***       0.042***   0.061***
                                                 (0.01)          (0.009)    (0.006)
Regression 3 (927 firms):
Value added per total assets                     -0.216***       0.051***   0.0214***
                                                 (0.01)          (0.01)     (0.006)

Manufacturing                                    Days of Supply  R&D costs  Capital Expenditures
Regression 1 (826 firms):
Value added per employee                         -0.192***       0.046***   0.065***
                                                 (0.01)          (0.01)     (0.006)
Regression 2 (826 firms):
Value added per employee adjusted for inflation  -0.191***       0.045***   0.064***
                                                 (0.011)         (0.01)     (0.006)
Regression 3 (838 firms):
Value added per total assets                     -0.219***       0.056***   0.022***
                                                 (0.011)         (0.01)     (0.022)

food, chemical products or fabricated metal industries. The coefficient of R&D ex-
penses also differs across industries. It remains positive and significant for paper
products, machinery and computer equipment, electronics, and instruments. How-
ever, it is not significant in food, primary metal, fabricated metal, or transportation
equipment.
All manufacturing industries with more than 20 firms were analyzed. Value
Added is defined as (Sales - COGS)/Number of Employees. Days of Supply is de-
fined as (End of the year inventory/COGS)×365. Capital expenditure is defined as
the net capital expenditures on new investments in long term assets. R&D cost is
defined as the research and development expenses. Values in parentheses are stan-
dard errors of the coefficients, while ***, ** and * represent significance levels of
1, 5, and 10% respectively.
Capital expenditures appear to play a major role in capital-intensive industries and in some high-tech industries producing complex products. A similar pattern is found for the impact of increasing R&D investments. The underlying message seems to be that

Table 20.3 The table presents fixed effects regression coefficients of the percentage increase in
Value Added during the 25 year period 1980 to 2004 on the percentage change in days of supply,
capital expenditures and research and development costs
SIC  Industry name                   # of firms  Days of Supply  Capital Expenditure  R&D costs
20   Food                            21          -0.313***       -0.012               -0.062
                                                 (0.119)         (0.060)              (0.095)
28   Chemical and allied products    141         -0.110***       0.015                0.0712**
                                                 (0.025)         (0.016)              (0.0344)
33   Primary metal                   23          -0.256**        0.139***             -0.107**
                                                 (0.108)         (0.036)              (0.054)
34   Fabricated metal                28          -0.374***       0.019                -0.007
                                                 (0.051)         (0.027)              (0.041)
35   Machinery and allied products   124         -0.184***       0.054***             0.077***
                                                 (0.022)         (0.013)              (0.027)
36   Electronics                     194         -0.223***       0.116***             0.064**
                                                 (0.024)         (0.013)              (0.027)
37   Transportation equipment        58          -0.425***       0.042**              0.037*
                                                 (0.034)         (0.020)              (0.022)
38   Instruments                     136         -0.099***       0.047***             0.048***
                                                 (0.024)         (0.012)              (0.016)

competition in different industries has been increasing over the past two and a half
decades and survival has required some level of improved operational efficiency.
We next track trends in the relationship between value added and days of supply by
industry.
As noted above, some managers of job-shop and assembly industries began shift-
ing to JIT and other related methods to improve speed and efficiency in the early
1980s. To provide evidence on this shift in managerial focus, we compute indus-
try medians of value added and days of supply for different industries. Figure 20.2
shows four different industries with their very different development profiles over the
past 25 years.
The transportation equipment industry (Fig. 20.2a, SIC 37) has reduced days of
supply from around 100 days to 40 days, while at the same time the value added has
almost doubled. The improvement in speed occurred primarily during the 1980s.
This was the period when JIT and a focus on small lot sizes was introduced to the
industry, and is clearly reflected in the graph. This period of speed improvement
also witnessed a strong improvement in value creation. Lieberman and Demeester (1999) document, in their in-depth study of Japanese automakers, that each 10% reduction in inventory leads to approximately a 1% gain in labor productivity, with a lag of about one year. Although their result is from a limited specific sample, our analysis of US companies in the transportation equipment industry also finds
our sample of US companies in the transportation equipment industry also finds
that a focus on speed results in improved value creation. A more detailed look at
this industry reveals that those firms with large relative reductions in the days of
supply have higher increases in value creation, regardless of the absolute starting
level. Companies which are speedier but do not improve their speed are stagnant

Fig. 20.2 Inflation adjusted value added per employee and days of supply for four industries. Value
Added, adjusted to 1980 US dollars, is defined as (Sales - COGS)/Number of Employees. Days of
Supply is defined as (End of the year inventory/COGS)×365

in their value generation. This indicates that the important element is to focus on speed improvement as opposed to having the lowest absolute days of supply. Effectively, continuous improvement in speed appears to be as vital for value creation as quality management.
The electronic equipment and component industry (Fig. 20.2b, SIC 36) shows a continuous increase in value added over our sample period. The median of value added increased threefold while speed improved (days of supply fell) by 20%. This can perhaps be explained by a more in-depth examination of the sample. SIC code 36 covers several different industries. The conventional electronics industries, like household appliances, operate in well-established commodity-type markets. These firms have maintained a fairly constant speed level throughout the 25 year period. The highly competitive telecommunication industries have had a major drive to improve their speed. Here, the top performing companies halved their days of supply and increased their value creation ten times.
Next, we examine the 25th and 75th percentiles for the industrial machinery and
computer equipment industries (SIC code 35, see Fig. 20.3). Here we expect a strong
correlation between improvements in operational speed and value creation. The top
companies of this industry have more than halved their days of supply, from 160 to 75 days, during the past 25 years, while their value added has increased more than three times. During the same period the worst performing companies have reduced their days of supply by 20%, but their value added remained virtually unchanged, with only a 10% increase in 25 years. We note that speed for the worst companies started at a significantly better level than for the best performing companies. This means that, in relative terms, the speed improvement is very drastic among the top value cre-
ating companies. This gives strong support to the hypothesis that companies with
a focus on the speed of their operations have a greater potential to increase value
added.
The most efficient firm in the machinery and computer equipment industry (SIC code 35) in 2004 was Dell, with days of supply of only 4.15, followed by Apple Computer (6.19 days). Dell outperformed its competition in terms of speed in most years since it entered our sample in 1987, resulting in an above-average value added per employee of $73,000 in 2004 (in 1980 dollars). Figure 20.4 shows the speed and performance of Dell in the period 1987-2004. Days of supply decreased from 80 to 4 while the firm's value added per employee doubled from $36,000 to $73,000 (in constant 1980 dollars).
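To see what a figure like this implies, definition (2) can simply be inverted; the COGS figure below is a purely hypothetical placeholder, not Dell's actual number:

# Inventory implied by a given days-of-supply level, from DoS = inventory/COGS*365
cogs = 40e9                       # assumed annual COGS, $
inventory = 4.15 / 365 * cogs     # about $0.45 billion on hand
print(round(inventory / 1e9, 2))  # 0.45 - less than half a week of supply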

20.5 Summary and Conclusions

This paper examines the relationship between a managerial focus on inventory reduction and improvements in value added. We analyze financial information on large non-service US based firms over the 25 year period from 1980 to 2004. Our results show a very strong correlation between the percentage increase in value added (defined as gross margin per employee) and the percentage reduction in days of supply

Fig. 20.3 Companies in SIC 35 belonging to top and bottom 25th percentiles when measured in
value added and days of supply. Value Added is defined as (Sales - COGS)/Number of Employees.
Days of Supply is defined as (End of the year inventory/COGS)× 365

across all manufacturing industries. The relationship between value added and the
capital expenditure and R&D expenses is also strong with differences across indus-
tries based on the standard industrial classification codes. The longitudinal analysis
shows that speed based competition was especially pronounced in the machine pro-
duction, transportation, and computer equipment industries. Other industries also
displayed improvements in value added but without a similar relationship to in-
ventory. The results strongly support the operations management literature, which claims that a managerial focus on efficiency, in particular on increasing the speed of operations, will result in significant value creation for firms. The results also imply that the concept of competition based on operational speed has not been transferred across all firms and that the potential for improvement still exists in most industries.
Our results are based on correlations, not on experimental research. This means
our results do not prove causality between our variables. However, the many docu-
mented cases in the literature on positive turnarounds in manufacturing companies
through the reduction of inventories and the speeding up of operations serve as experimental research which, through the manipulation of the variables concerned, has strongly demonstrated the causal relationship between the variables. Our statistical analysis
combined with the visualization of the time series clearly supports our hypotheses.
There are several implications of the analyses which provide additional insight to
the studied variables and their relationships. The following list summarizes the main

Fig. 20.4 Inflation adjusted value added per employee and days of supply for Dell Computer. Value
Added, adjusted to 1980 US dollars, is defined as (Sales - COGS)/Number of Employees. Days of
Supply is defined as (End of the year inventory/COGS) × 365. Dell’s value added per employee
(in 1980 dollars) and days of supply in the 1980-2004 period

findings of our study and how manufacturing companies should perceive them in an
operational sense:
• Slow companies will eventually fail. The analyzed data demonstrate that com-
panies which are unable to improve their operational speed will gradually lose
their capability to produce value. Even the worst performing companies show
progress in operational speed. Without this improvement they probably would
not have been able to remain competitive. Improving operational speed appears
to be vital for the survival of a firm, independent of the firm’s current level of
performance. Naturally, companies with a monopolistic position are excluded
from our discussion as they are removed from the competitive environment. The
strong correlation between days of supply and value creation across all manufac-
turing industries should invite managers to emphasize operational speed, i.e. the reduction of days of supply, in their strategies. A managerial focus on operational speed should be
perceived to be as important as ROI, market share, and profit.
• Each industry sector has companies that exploit speed based operational supe-
riority over others. Our data demonstrate that, despite differences across industries, in each industry there are companies that enjoy larger value creation due to their speedier operations. Although some industries are competing based on
factors other than speed, there appear to be additional opportunities for firms to
improve relative to the industry by focusing on expediting operations.
• Slower companies, in general, have a higher potential to improve their value
added, while already faster companies can further increase their lead with addi-
tional improvements in operational speed. Our analysis indicates that all top percentile companies improved their speed in absolute terms relative to the lower percentile companies. However, in some industries the lower
level companies improved at a faster rate than the top companies. This demon-
strates the importance of both dimensions of speed improvement - the first is improving the absolute level, the second is improving the rate of change.
As noted above, the savings from inventory costs related to reduced inventory
levels have very little impact on COGS and the Gross Margin per employee. This
means that companies that improve their speed are doing things better in gen-
eral. They tap outsourcing, technology, or whatever new operational principles
emerge faster than the other companies and incorporate the benefits more efficiently.
Clearly, doing things faster is not simply having people work harder and having
trucks move faster. Rather, it means a wide range of operations are being done in a
different and more intelligent manner.

References

Barlow G (2002) Just-in-Time: Implementation within the hotel industry - a case study. International Journal of Production Economics 80(2):155–167
Brigham E, Gapenski L (1993) Intermediate financial management. The Dryden
Press, New York
Bruun P, Mefford R (2004) Lean production and the Internet. International Journal
of Production Economics 89(3):247–260
Fisher M (1997) What is the right supply chain for your product? Harvard Business
Review 75(2):105–116
Fisher M, Raman A (1996) Reducing the cost of demand uncertainty through accu-
rate response to early sales. Operations Research 44(1):87–99
Ford H (1922) My life and work. Ayer Company, Salem, New Hampshire
Forrester J (1961) Industrial Dynamics. MIT Press, and John Wiley & Sons, Inc,
New York
Frohlich M, Westbrook R (2002) Demand chain management in manufacturing and
services: Web-based integration, drivers and performance. Journal of Operations
Management 20(6):729–745
Goldratt EM, Cox J (1984) The Goal. North River Press, Croton-on-Hudson, New
York
Hall R (1983) Zero inventories. Homewood, IL, Dow Jones-Irwin
Hameri A, Lehtonen J (2001) Production and supply management strategies in
Nordic paper mills. Scandinavian Journal of Management 17(3):379–396
Hameri A, Weiss L (2006) Value creation and days of supply in major pulp and paper companies. Paper and Timber (forthcoming)
Hendricks K, Singhal V (2003) The effect of supply chain glitches on shareholder
wealth. Journal of Operations Management 21(5):501–522
Holmström J (1995) Speed and efficiency - a statistical enquiry of manufacturing in-
dustries. International Journal of Production Economics 39(3):185–191
Hopp W, Spearman M (1996) Factory Physics. Irwin, Chicago
Houlihan J (1987) International supply chain management. International Journal of
Physical Distribution and Materials Management 17(2):51–66
Kaplan R, Cooper R (1998) Cost and Effect: Using integrated cost systems to
drive profitability and performance. Harvard Business School Press, Boston, Mas-
sachusetts
Lieberman M, Demeester L (1999) Inventory reduction and productivity growth:
linkages in the Japanese automotive industry. Management Science 45(4):466–
485
Little J (1961) A proof for the queuing formula: L = λW. Operations Research
9:383–387
Monden Y (1983) Toyota production system: An integrated approach to just-in-
time. Industrial Engineering and Management Press, Institute of Industrial Engi-
neers, Norcross, GA
Piszczalski E (2000) Lean vs. Information Systems. Automotive Manufacturing &
Production 112(8):26–28
Schmenner R (1988) The merit of making things fast. Sloan Management Review
30(1):11–17
Schmenner R (2001) Looking ahead by looking back: Swift, even flow in the history
of manufacturing. Production and Operations Management 10(1):87–96
Schmenner R, Swink M (1998) On theory in operations management. Journal of
Operations Management 17(1):97–113
Schonberger R (1982) Japanese manufacturing techniques: Nine hidden lessons in
simplicity. Free Press, New York
Stalk G (1988) Time - the next source of competitive advantage. Harvard Business
Review 66(4):41–51
Stalk G, Hout T (1990) Competing against time: How time-based competition is
reshaping global markets. The Free Press, New York
Suri R (1994) Common misconceptions and blunders in implementing quick re-
sponse manufacturing. In: Proceedings of the SME Autofact 94 Conference, So-
ciety of Manufacturing Engineers, p 23
Suri R (1998) Quick Response Manufacturing. Productivity Press: Portland, OR
Womack J, Jones D (1996) Lean thinking: Banish waste and create wealth in your
corporation. Simon & Schuster, New York
Womack J, Jones D, Roos D (1990) The machine that changed the world. Rawson
Associates, New York
Chapter 21
Managing Demand Through the Enablers of
Flexibility: The Impact of Forecasting and
Process Flow Management

Matteo Kalchschmidt, Yvan Nieto and Gerald Reiner

Abstract In recent years increased attention has been paid to integrated demand and
supply chain management. The present research study discusses the main concept
in this context, i. e., flexibility. In particular, we have learned from existing research work that it is difficult to link the construct “flexibility” with performance (efficiency as well as effectiveness). Therefore, we analyzed important enablers of flexibility. Based on our conceptual flexibility framework, we discussed the impact of layout, process flow management and forecasting performance (error). Our results provide some interesting insights. In particular, only process flow management is linked with both effectiveness and efficiency performance. Layout and forecasting performance, on the other hand, are only linked with efficiency. These results demonstrate that, e. g., forecasting is not directly linked with external results like customer satisfaction. Based on these results it is possible to motivate further research activities that should investigate these complex relationships in more detail.

21.1 Introduction

In recent years, literature has devoted more and more attention to the problem of
supply and demand management in uncertain contexts. On the one hand, literature

Matteo Kalchschmidt
Department of Economics and Technology Management, Università di Bergamo – Viale Marconi 5,
24044 Dalmine
e-mail: matteo.kalchschmidt@unibg.it
Yvan Nieto
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: yvan.nieto@unine.ch
Gerald Reiner
Institut de l’entreprise, Université de Neuchâtel – Rue A.-L. Breguet 1, CH-2000 Neuchâtel
e-mail: gerald.reiner@unine.ch


in the field of demand management has discussed this topic from several points of
view; considerable attention has been paid to building better and more effective fore-
casting techniques and approaches, in order to reduce the uncertainty companies
perceive (Hanssens et al, 2003). Other authors have applied models to learn how to
reduce demand uncertainty through the adoption of marketing actions, e. g., every
day low price strategies (Lee and Tang, 1997). In this context, improvement of infor-
mation sharing based on partnerships with customers is also of interest (Cachon and
Fisher, 2000). On the other hand, in the supply management literature several con-
tributions have been provided on the adoption of flexibility as a means to cope with
uncertainty (Lee, 2002). Limited contributions, however, can be found regarding the
interaction among these levers (i. e. forecasting, information sharing and process
flow management) to manage demand in uncertain contexts as well as regarding
their joint effect on company performance. In the end, it should be one of the core
objectives in OM research to match capacity and inventory management with cus-
tomer demand management in order to maximize business results. In other words,
in terms of our specific research study, what is flexibility and what is the impact of
flexibility with regard to dynamic interactions with forecast accuracy/characteristic,
process flow, etc.? Based on these results, the next research question could be to identify the “right” flexibility level. The aim of this work is thus to study
the relationship between enablers (i. e., practices under consideration of contingency
factors) of flexibility and company performance.

21.2 The “Concept” of Flexibility

Despite flexibility being commonly recognized as a key answer to environmental uncertainty (Ho et al, 2005), understanding the relations existing between flexibility and performance still presents open challenges. Of course, evidence of the positive impact of flexibility on performance has been provided (e. g. Suarez et al, 1996; Das, 2001; Jack and Raturi, 2002; Hallgren and Olhager, 2009). Nevertheless, important knowledge is still assumed to be missing in order to clearly understand the interrelationship between flexibility and performance. A first explanation for this gap is of course the complexity of the flexibility concept, which, for example, still lacks a unanimous definition (Zhang et al, 2002). Due to the vast and multi-dimensional nature of the concept, numerous difficulties arise when trying to encapsulate flexibility as a whole in a single measure (see also below). Flexibility has been widely studied at the
manufacturing level (see e. g. Kara and Kayis, 2004; Koste and Malhotra, 1999, for
good reviews), and knowledge is nowadays also increasing regarding supply chain
flexibility (see Stevenson and Spring, 2007, for a review). Considering the latter,
Reichhart and Holweg (2007) proposed an interesting conceptual framework of re-
sponsiveness and flexibility at the supply chain level. In fact, that framework focuses
on responsiveness but the proximity of those concepts makes it easy to transpose it
to a flexibility framework. Nevertheless, the proximity of the two notions leads to considerable ambiguity in the existing literature, and both terms have even been used interchangeably, even though they are distinct concepts. Reichhart and Holweg’s framework makes the distinction between external and internal flexibility: external flexibility is linked to achieving a competitive advantage (‘what the customer sees’), as opposed to internal flexibility, which comprises the internal means by which external flexibility can be achieved (‘what can we do’). Following Slack (1987) and
Upton (1994) a system’s flexibility is based on internal resources that can be used
to achieve different types of internal flexibility, which in turn can support the sys-
tem’s ability to demonstrate external flexibility to its environment. This distinction
separates the capabilities of operations resources from the market requirements, the
dual influences that need to be reconciled by operations strategy (Slack, 2002). This
discrepancy between internal and external flexibility may explain contradicting re-
sults concerning the relationship between uncertainty and flexibility. E. g., Swami-
dass and Newell (1987) found that flexibility improves performance in uncertain
environments, in contrast Pagell and Krause (1999) found no relationship between
measures of environmental uncertainty and operational flexibility in a survey among
North-American manufacturers.

[Fig. 21.1 (figure): enablers and their interrelations (capacity management, flexibility, inventory management, …), driven by direct customer requirements and demand and by enhanced forecasting characteristics, linked to internal results (average on-hand inventory, inventory obsolescence, bullwhip effect measure, utilization of resources, costs, …) and to external results (delivery performance, delivery time, sales, lost sales, customer satisfaction, …)]

Fig. 21.1 Flexibility: conceptual framework

A second dimension of Reichhart and Holweg’s framework is its distinction be-


tween potential and demonstrated responsiveness. Based on Upton (1994), this el-
ement also applies to flexibility, differentiating the available flexibility (potential)
from the flexibility actually needed to fulfill customer requirements (demonstrated).
Based on the above mentioned research, it can be summarized that the approaches
applied (e. g., inventory management, forecasting) are the flexibility enablers to ful-
fill the customer requirements and finally increase customer satisfaction. Furthermore, these approaches will also have an impact on efficiency, i.e. costs. The total success (effectiveness as well as efficiency) of flexibility can only be evalu-
ated under consideration of both aspects. The overall framework is presented in Fig.
21.1. Flexibility and performance are known to be influenced by contingency fac-
tors which also affect their relation. Numerous examples of influences from contin-
gent factors have been provided, including e. g. perceived environmental uncertainty
(Swamidass and Newell, 1987) or company’s business strategy (Gupta and Somers,
1996) and, reviewing literature on manufacturing flexibility, Vokurka and O’Leary-
Kelly (2000) identify four general contingent factors. Specifically, the authors men-
tioned strategy, environmental factors, organizational attributes and technology as
being exogenous variables impacting on flexibility and performance, highlighting
once more the complexity of the topic. Finally, when considering flexibility as the result of a set of business practices, we can summarize that contingency is of main importance, as a selected practice may not be feasible in all settings (e. g. Ke-
tokivi, 2006). In the context of demand management, forecasting is also known to
be a strong lever against uncertainty and thus can contribute to gaining better perfor-
mances. Literature traditionally considers accuracy as the relevant performance to
be evaluated in a forecasting process (Mentzer and Bienstock, 1998; Chase, 1999).
When forecast accuracy increases, cost and delivery performances consequently im-
prove, as they are typically correlated with forecast error. Inventory levels, and thus
related costs, can be reduced; manufacturing systems are better managed, as equip-
ment utilization improves and companies can effectively plan in advance actions to
be undertaken (Vollmann et al, 1992; Ritzman and King, 1993; Fisher and Raman,
1996). In turn manufacturing and product costs decrease. Delivery performances
(e. g., order fulfillment and delivery speed/punctuality) also improve as, when fore-
cast accuracy is higher, it is more probable that products are available when the cus-
tomer orders (Enns, 2002; Kalchschmidt et al, 2003). Forecast inaccuracy causes
major rescheduling and cost difficulties for manufacturing (Ebert and Lee, 1995)
and it may impact on logistic performances such as delivery timeliness and quality
(Kalchschmidt and Zotteri, 2007). For these reasons, it is no surprise that several surveys show accuracy as the most important criterion in selecting a forecasting approach (Dalrymple, 1987; Mahmoud et al, 1988). Because forecasts are nevertheless inevitably imperfect, some authors have even recommended eliminating them entirely (Goddard, 1989). Other pos-
sibilities to hedge uncertainty are inventory management as well as capacity man-
agement. Traditionally, inventory management is challenging because uncertain de-
mand and uncertain supply and/or production flow times make it necessary to hold
inventory at certain positions to provide adequate service to the customers. As a
consequence, increasing process inventories will increase customer service and rev-
enue, but it comes at a higher cost. Therefore, management has to resolve this trade-
off by identifying possibilities to decrease inventories while simultaneously improving
customer service. A well known management lever in this respect is risk pooling by
different types of centralization or standardization, e. g. central warehouses, product
commonalities, postponement strategies (see e. g. Tallon, 1993). In this way, it is
usually possible to reduce inventory costs to a large extent. However, this reduction of inventory costs is often related to an increase in other costs, such as transportation costs or production costs. If activities are postponed downstream in the process
by shifting the customer order decoupling point upstream in the process, the order
flow time is affected. E. g., if no additional resources are allocated to the postponed
activities, the order flow time and thus the delivery time for a customer will be in-
creased. Therefore, additional resources (labour and/or equipment) have to be taken
into account for the evaluation of such process changes and the additional produc-
tion costs have to be traded off with the reduction in inventory costs (Jammernegg
and Reiner, 2007).
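
To make the risk pooling lever concrete, the following minimal sketch (in Python, with purely illustrative numbers that are not taken from the chapter) contrasts the safety stock needed by several decentralized warehouses with that of a pooled central warehouse, assuming independent, normally distributed demands; the resulting halving of safety stock is the well-known square-root law behind the centralization strategies discussed above.

import math

def safety_stock(z: float, sigma_d: float, lead_time: float) -> float:
    """Safety stock at one stocking point with demand std dev sigma_d
    per period and replenishment lead time in periods (normal demand)."""
    return z * sigma_d * math.sqrt(lead_time)

# Illustrative numbers: 4 regional warehouses, each facing independent
# demand with std dev 100 units/week, 2-week lead time, 95% cycle
# service level (z ~ 1.645).
z, sigma_d, lt, n = 1.645, 100.0, 2.0, 4

decentralized = n * safety_stock(z, sigma_d, lt)
# Pooling independent demands: variances add, so the pooled std dev
# is sqrt(n) * sigma_d -- the "square-root law" of centralization.
centralized = safety_stock(z, math.sqrt(n) * sigma_d, lt)

print(f"decentralized safety stock: {decentralized:.0f} units")  # ~931
print(f"centralized safety stock:   {centralized:.0f} units")    # ~465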

21.3 Objectives and Methodology

The aim of this work is to study the relationship between enablers of flexibility and performance. In particular, the paper addresses the following research question: what is the impact of flexibility enablers (forecasting, layout, process flow management, etc.) on companies’ operational performances? In order to analyze
this research question, we considered two different performances: efficiency and ef-
fectiveness. Analytical literature suggests that enablers may have some impacts on
both performances, but as we mentioned, limited empirical evidence can be found
on these relationships. Thus the theoretical model we are considering is represented
in Fig. 21.2. Empirical analysis is based on data collected from the 4th edition of the
GMRG survey. The Global Manufacturing Research Group (GMRG) collects information regarding manufacturing practices in several countries all over the world. Currently, full data sets have been provided by 598 companies in 13 different countries (Austria, Australia, China, Germany, Hungary, Italy, Korea, Mexico, Poland, Sweden, Switzerland, Taiwan, USA), all belonging to the manufacturing and assembly industry.

[Fig. 21.2 (figure): layout, process flow management and forecasting as drivers of efficiency and effectiveness]

Fig. 21.2 The theoretical model

Table 21.1 synthesizes the distribution of the sample in terms of size and Table
21.2 the distribution among the different countries. The sample contains mostly medium and large companies, but some small companies are also present in the dataset.

Table 21.1 Distribution of the sample in terms of size


Company size Frequency
Small (less than 50 employees) 19.6 %
Medium (50 – 250 employees) 38.0 %
Large (more than 250 employees) 42.4 %

Table 21.2 Distribution of the sample between countries


Country % Country %
Australia 5.0 Mexico 13.1
Austria 2.8 Poland 9.5
China 9.5 Sweden 2.2
Germany 0.7 Switzerland 5.6
Hungary 8.8 Taiwan 8.3
Italy 9.0 USA 6.6
Korea 19.1

In order to analyze the research questions previously mentioned we proceed as follows.
• First we define proper items and constructs in order to measure the relevant variables we have considered. Based on the GMRG database we were able to collect information regarding the different variables. Reliability of the constructs is tested through confirmatory factor analysis and reliability analysis.
• Then we adopt linear regression to evaluate the relationships between the considered variables (an illustrative sketch of the construct-building step follows this list).
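
As an illustration of the first step, a minimal sketch is shown below that builds construct scores by averaging their items and computes Cronbach’s Alpha; the column names and data are hypothetical stand-ins, since the GMRG items are not reproduced here.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert items standing in for the survey items.
df = pd.DataFrame({
    "cellular_mfg":   np.random.randint(1, 8, 200),
    "factory_autom":  np.random.randint(1, 8, 200),
    "jit":            np.random.randint(1, 8, 200),
    "throughput_red": np.random.randint(1, 8, 200),
    "setup_red":      np.random.randint(1, 8, 200),
    "tqm":            np.random.randint(1, 8, 200),
})

layout_items = df[["cellular_mfg", "factory_autom"]]
pfm_items = df[["jit", "throughput_red", "setup_red", "tqm"]]

# Constructs are defined by averaging their specific items.
df["layout"] = layout_items.mean(axis=1)
df["process_flow_mgt"] = pfm_items.mean(axis=1)

print("alpha(layout):", round(cronbach_alpha(layout_items), 2))
print("alpha(process flow mgt):", round(cronbach_alpha(pfm_items), 2))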

21.4 Empirical Analysis

In order to define the different constructs we applied a confirmatory factor analysis based on the items that, according to current literature, should be influenced by
these variables. All items considered are measured on a 7 point Likert scale ranging
from 1 (not at all) to 7 (a great extent). In order to evaluate flexibility we defined
two separate constructs based on previous literature on the enablers of flexibility:
layout and process flow management. Since our goal is to study the impact of flex-
ibility on performances, theoretically we should have considered flexibility perfor-
mances. However, it is very difficult to identify flexibility performances that are
strictly related to specific practices. For this reason we decide to evaluate flexibility
by means of practices, thus assuming that a relationship exists between what com-
panies do (i. e. practices) and what they gain (i. e., performances). The use of proper
layout solutions can influence flexibility capabilities, e.g. through the use of cellular manufacturing systems or by leveraging automation. Coherently with previous literature, in order to measure the extent of investment in layout for flexibility,
we considered the extent to which companies have invested in: (1) cellular manufacturing and (2) factory automation. The two items are correlated with each other
(the Pearson correlation index is 0.44, significant at the 0.01 level). To measure the extent of investment in responsiveness, we considered the extent to which companies have invested in: (1) Just-In-Time, (2) manufacturing throughput time reduction, (3) setup time reduction and (4) Total Quality Management. The items are correlated with each other (all Pearson correlation indexes are above 0.40 and significant at the 0.001 level). Thus the constructs layout and process flow management are defined by averaging the specific items. We assessed convergent validity and unidimensionality of the defined constructs with a confirmatory factor analysis model. Literature recommends using the normed fit index (NFI) and the comparative fit index (CFI) together in assessing model fit. NFI is 0.98 and CFI is 0.99, which lets us consider the model acceptable (Hu and Bentler, 1999). In addition, the root mean square error of approximation (RMSEA) is 0.05, which suggests that the model fit is acceptable. Factor loadings are all significant and above the suggested lower bound of 0.40 (Gefen et al, 2000). Cronbach’s Alpha was also measured in order to verify the reliability of the constructs; constructs were considered reliable if the Alpha value is above the minimum requirement of 0.60 (Nunnally and Bernstein, 1994). To evaluate forecasting accuracy, companies were asked to provide the average error for a single product over a two-month period; thus we evaluate short-term forecast performance. In the end, efficiency and effectiveness performances were considered. As for efficiency,
three items were examined, as we asked respondents to provide an evaluation of
the following performances compared with their competitors on a 7 point Likert
scale (1 is for “far worse than” and 7 for “far better than”): (1) direct manufacturing
costs, (2) total product costs, (3) raw material costs. As to the effectiveness per-
formances, a similar question was asked for the following: (1) product quality, (2)
delivery speed and (3) delivery as promised. It can be noted that, as it is difficult
to compare performances between companies operating within different contexts,
this research focuses on perceptual and relative measures of cost and delivery per-
formances. Thus the constructs efficiency and effectiveness are defined by averaging
the specific items. NFI is 0.99 and CFI is 1.00, which lets us consider the model acceptable. In addition, RMSEA is 0.00, which suggests that the model fits well. Factor loadings are all significant and the Cronbach’s Alpha value is significantly above the
minimum requirement of 0.60. When dealing with survey data, common method
bias (CMB) can affect statistical results. As suggested by Podsakoff et al (2003),
we checked for this problem by means of confirmatory factor analyses (CFA) on
competing models that increase in complexity (Podsakoff et al, 2003). If method
variance is a significant problem, a simple model (e. g., single factor model) should
fit the data as well as a more complex model (in this case a five factor model). The
hypothesized model, containing five factors yielded a better fit of the data than the
simple model (one factor model: CFI 0.56 and RMSEA 0.17; five factor model: CFI
0.98 and RMSEA 0.04). Furthermore, the improved fit of the five factor model over the simple model was statistically significant: the change in χ² is 1030.40 and the change in df is 9 (p < .001). Thus, CMB did not appear to be of concern in our analysis.
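
As a worked check, the χ² difference test reported above can be reproduced as follows; the figures are those given in the text, and scipy is assumed only for the χ² survival function.

from scipy.stats import chi2

# Figures reported above: the five-factor model improves on the
# one-factor model by delta_chi2 = 1030.40 with delta_df = 9.
delta_chi2, delta_df = 1030.40, 9

# p-value of the chi-square difference test (survival function).
p_value = chi2.sf(delta_chi2, delta_df)
print(f"p = {p_value:.3g}")  # far below .001: the five-factor model
                             # fits significantly better, so CMB is
                             # unlikely to drive the results.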

In order to study the research questions we adopted linear regression between the different variables considered. In particular we ran two different regression analyses
by considering efficiency and effectiveness as dependent variables. We also control
the regression results by adding the size of the company as a control variable; to
evaluate the contribution of the independent variables we consider the coefficient of
determination R2 increment (specifically whether it is significant or not) and check
for multicollinearity problems through the analysis of the variance inflation factor
(VIF). Table 21.3 summarizes the results of the linear regression between forecast-
ing performances and flexibility practices on efficiency performances.

Table 21.3 Results of the regression analysis: dependent variable is efficiency


Model  Variables         Unstd. coef.  Stand. coef.  Sig.   VIF
1¹     (Constant)        4.36                        0.000
       Employees         0.00          0.19          0.000  1.00
2²     (Constant)        3.51                        0.000
       Employees         0.00          0.11          0.015  1.08
       Layout            0.13          0.19          0.000  1.58
       Process flow mgt  0.12          0.16          0.003  1.61
       Forecast error    -0.59         -0.14         0.001  1.01

As we can see from the first regression analysis, the size of the company is posi-
tively related to its capability of being efficient. This is no surprise since size is typically related to the ability to gain economies of scale. However, in the second
model, the variables we are considering are all significantly related to efficiency and
the model fit is significantly better. In particular both flexibility-related practices are
positively related to efficiency, thus the more companies invest in layout and respon-
siveness the more they are able to improve their performances. Coherently, forecast
error is negatively related to efficiency. Multicollinearity doesn’t seem to be a main
concern since VIF is lower than 2 for all variables. Table 21.4 provides the same
analysis for effectiveness performances.
As we can see, size is no longer significant. Quite interest-
ingly, only process flow management is related to effectiveness performances. This
provides evidence that layout investments for improving flexibility and forecasting
accuracy mainly have impact on efficiency.
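
A minimal sketch of the regression procedure used above, with the control variable entered first, the R² increment evaluated via an F test and VIF checked for each predictor, might look as follows; statsmodels is applied to synthetic stand-in data, so the variable names and coefficients are illustrative, not the GMRG data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "employees": rng.integers(10, 2000, n),
    "layout": rng.normal(4, 1, n),
    "process_flow_mgt": rng.normal(4, 1, n),
    "forecast_error": rng.uniform(0, 0.5, n),
})
df["efficiency"] = (3.5 + 0.13 * df["layout"] + 0.12 * df["process_flow_mgt"]
                    - 0.59 * df["forecast_error"] + rng.normal(0, 1, n))

# Model 1: control variable only.
X1 = sm.add_constant(df[["employees"]])
m1 = sm.OLS(df["efficiency"], X1).fit()

# Model 2: add the flexibility enablers and forecast error.
X2 = sm.add_constant(df[["employees", "layout", "process_flow_mgt",
                         "forecast_error"]])
m2 = sm.OLS(df["efficiency"], X2).fit()

# R-squared increment of model 2 over model 1, and its F test.
print("R2 change:", round(m2.rsquared - m1.rsquared, 3))
print(m2.compare_f_test(m1))  # (F statistic, p-value, df difference)

# Variance inflation factors for the model-2 predictors.
for i, name in enumerate(X2.columns[1:], start=1):
    print(name, round(variance_inflation_factor(X2.values, i), 2))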

¹ Regression is significant at 0.000 level; R² = 0.037; R²adj = 0.035.
² Regression is significant at 0.000 level; R² = 0.154; R²adj = 0.147; R² change is significant at 0.000 level.
³ Regression is not significant.
⁴ Regression is significant at 0.000 level; R² = 0.102; R²adj = 0.094; R² change is significant at 0.000 level.

Table 21.4 Results of the regression analysis: dependent variable is effectiveness


Model  Variables         Unstd. coef.  Stand. coef.  Sig.   VIF
1³     (Constant)        5.23                        0.000
       Employees         0.00          0.07          0.158  1.00
2⁴     (Constant)        4.39                        0.000
       Employees         0.00          -0.02         0.725  1.07
       Layout            0.04          0.006         0.261  1.59
       Process flow mgt  0.18          0.27          0.000  1.62
       Forecast error    -0.20         -0.05         0.236  1.01

21.5 Discussion

A first interesting result is that a relationship between flexibility enablers and per-
formance exists. This is important empirical evidence because we are able to show
that flexibility enablers (layout as well as process flow management) can be very
effective in gaining better performances both internally (efficiency) and externally
(effectiveness). Quite interestingly, however, this relationship is very strong with
efficiency performances, while only process flow management has an impact on ef-
fectiveness. This means that companies investing in layout should expect to achieve
better internal performances, but at the same time, they may have to pay attention to
the impact on the customer. A second interesting result relates to forecasting. In fact
we can see that forecasting accuracy impacts efficiency performances, coherently with what other contributions provide (see the previous literature discussion). However, this impact disappears when effectiveness is considered, suggesting that forecasting accuracy doesn’t directly translate into customer satisfaction. This result is consis-
tent with previous works (see Danese and Kalchschmidt, 2008) that show that this
missing link doesn’t mean that forecasting is useless, but rather that this relation-
ship is far more complex than expected. We argue thus that the relationship between
forecasting and process flow management deserves more attention. These results do
not provide evidence of a clear preference between the two practices. Therefore, it would be interesting to understand whether any synergic relationship exists between the two or whether process flow management simply compensates for forecasting error, thus
limiting the negative impact that forecast error may have on performances. In the
end the provided results emphasize that the impact of flexibility on performances
is very complex: specifically, different levers can have very different impacts on
performances. Thus, what companies are doing to increase flexibility deserves serious attention. To conclude, we would also like to address some
limitations of this work. First of all, as we mentioned, we compared flexibility en-
ablers (i.e. practices) with forecasting performances. This approach was due to the difficulty of defining specific measures for flexibility. The comparison is not completely fair since we are not putting together homogeneous variables. Future studies should take this issue into consideration by, for example, looking at forecasting

practices and not simply at the outcome of this process. A second issue relates to
contingencies: we didn’t consider specifically any contingent factor that may influ-
ence the different variables and relationships here described. Future works should
devote attention to those factors that may change how variables are defined. We argue that the general results will not be drastically affected by these variables, also
because several degrees of freedom are left to companies in terms of which practices
they can adopt.

Acknowledgements Partial funding for this research has been provided by the PRIN 2007 fund
“La gestione del rischio operativo nella supply chain dei beni di largo consumo” as well as by the
project “Matching supply and demand - an integrated dynamic analysis of supply chain flexibility
enablers” supported by the Swiss National Science Foundation.

References

Cachon GP, Fisher ML (2000) Supply chain inventory management and the value
of shared information. Management Science 46(8):1032–1048
Chase C (1999) Sales forecasting at the dawn of the new millennium? Journal of
Business Forecasting Methods and Systems 18:2–2
Dalrymple D (1987) Sales forecasting practices: Results from a United States sur-
vey. International Journal of Forecasting 3(3):379–91
Das A (2001) Towards theory building in manufacturing flexibility. International
Journal of Production Research 39(18):4153–4177
Enns S (2002) MRP performance effects due to forecast bias and demand uncer-
tainty. European Journal of Operational Research 138(1):87–102
Fisher M, Raman A (1996) Reducing the cost of demand uncertainty through accu-
rate response to early sales. Operations Research 44(1):87–99
Gefen D, Straub D, Boudreau M (2000) Structural equation modeling and regres-
sion: Guidelines for research practice. Structural Equation Modeling 4(7)
Goddard W (1989) Let’s scrap forecasting. Modern Materials Handling 39:39
Gupta Y, Somers T (1996) Business strategy, manufacturing flexibility, and orga-
nizational performance relationships: a path analysis approach. Production and
Operations Management 5(3):204–233
Hallgren M, Olhager J (2009) Flexibility configurations: Empirical analysis of vol-
ume and product mix flexibility. Omega 37(4):746–756
Hanssens D, Parsons L, Schultz R (2003) Market response models: Econometric
and time series analysis. Kluwer Academic Publishers
Ho C, Tai Y, Tai Y, Chi Y (2005) A structural approach to measuring uncertainty in
supply chains. International Journal of Electronic Commerce 9(3):91–114
Hu L, Bentler P (1999) Cutoff criteria for fit indexes in covariance structure anal-
ysis: Conventional criteria versus new alternatives. Structural Equation Modeling
6(1):1–55

Jack E, Raturi A (2002) Sources of volume flexibility and their impact on perfor-
mance. Journal of Operations Management 20(5):519–548
Jammernegg W, Reiner G (2007) Performance improvement of supply chain pro-
cesses by coordinated inventory and capacity management. International Journal
of Production Economics 108(1-2):183–190
Kalchschmidt M, Zotteri G (2007) Forecasting practices: empirical evidence and a
framework for research. International Journal of Production Economics 108:84–
99
Kalchschmidt M, Zotteri G, Verganti R (2003) Inventory management in a multi-
echelon spare parts supply chain. International Journal of Production Economics
81:397–413
Kara S, Kayis B (2004) Manufacturing flexibility and variability: an overview. Jour-
nal of Manufacturing Technology Management 15:466–478
Ketokivi M (2006) Elaborating the contingency theory of organizations: The case
of manufacturing flexibility strategies. Production and Operations Management
15(2):215–228
Koste L, Malhotra M (1999) A theoretical framework for analyzing the dimensions
of manufacturing flexibility. Journal of Operations Management 18(1):75–93
Lee H (2002) Aligning supply chain strategies with product uncertainties. California
Management Review 44(3):105–119
Lee H, Tang C (1997) Modelling the costs and benefits of delayed product differen-
tiation. Management Science 43(1):40–53
Mahmoud E, Rice G, Malhotra N (1988) Emerging issues in sales forecasting
and decision support systems. Journal of the Academy of Marketing Science
16(3):47–61
Mentzer J, Bienstock C (1998) Sales forecasting management. Sage Beverley Hills,
CA
Nunnally J, Bernstein I (1994) Psychometric theory. McGraw-Hill, New York, NY
Pagell M, Krause D (1999) A multiple-method study of environmental uncertainty
and manufacturing flexibility. Journal of Operations Management 17(3):307–325
Podsakoff P, MacKenzie S, Lee J, Podsakoff N (2003) Common method biases in
behavioral research: A critical review of the literature and recommended reme-
dies. Journal of Applied Psychology 88(5):879–903
Reichhart A, Holweg M (2007) Creating the customer-responsive supply chain:
a reconciliation of concepts. International Journal of Operations & Production
Management 27(11):1144–1172
Ritzman L, King B (1993) The relative significance of forecast errors in multistage
manufacturing. Journal of Operations Management 11(1):51–65
Slack N (1987) The flexibility of manufacturing systems. International Journal of
Operations & Production Management 7(4):35–45
Slack N (2002) Operations Strategy. Prentice Hall
Stevenson M, Spring M (2007) Flexibility from a supply chain perspective: defini-
tion and review. International Journal of Operations & Production Management
27(7):685–713

Suarez F, Cusumano M, Fine C (1996) An empirical study of manufacturing flexibility in printed circuit board assembly. Operations Research 44(1):223–240
Swamidass P, Newell W (1987) Manufacturing strategy, environmental uncertainty
and performance: a path analytic model. Management Science 33(4):509–524
Tallon W (1993) The impact of inventory centralization on aggregate safety stock:
the variable supply lead time case. Journal of Business Logistics 14:185–185
Upton D (1994) The management of manufacturing flexibility. California Manage-
ment Review 36(2):72–89
Vokurka R, O’Leary-Kelly S (2000) A review of empirical research on manufactur-
ing flexibility. Journal of Operations Management 18(4):485–501
Vollmann T, Berry W, Whybark D (1992) Manufacturing planning and control sys-
tems
Zhang Q, Vonderembse M, Lim J (2002) Value chain flexibility: a dichotomy
of competence and capability. International Journal of Production Research
40(3):561–583
Chapter 22
Threats of Sourcing Locally Without a Strategic
Approach: Impacts on Lead Time Performances

Ruggero Golini and Matteo Kalchschmidt

Abstract This paper analyses the impact of local sourcing on lead time perfor-
mances. In particular attention is devoted to the effect of choosing to source locally
without having made a proper strategic analysis of the purchasing process. Analyses
are based on data provided by the IMSS research project regarding more than 500
companies in different countries around the world. Results show that local sourcing can lead to poor performances if it is adopted without a clear sourcing strategy.

22.1 Introduction

During the last twenty years companies have witnessed a considerable expansion
of supply chains into international locations (Taylor, 1997; Dornier et al, 1998).
This growth in globalization has motivated both practitioner and academic interest
in global supply chain management (Prasad and Babbar, 2000).
Looking only at the upstream part of the supply chain, global sourcing (i.e. the management of supplier relationships from a global perspective) has been considered and analyzed (e.g., Kotabe and Omura, 1989; Murray et al, 1995). One major issue regarding global sourcing is why companies extend their relationships internationally and to what extent this practice contributes to increasing their competitive advantage (e.g., Alguire et al, 1994; Womack and Jones, 1996; Trent and Monczka, 2003).
Bozarth et al (1998) identify different motivators for global sourcing: offset require-

Ruggero Golini, corresponding author


Department of Economics and Technology Management, Università degli Studi di Bergamo, Viale
Marconi 5, 24044 Dalmine (BG), Italy, Tel. +39 035 205 2360, Fax. +39 035 205 2077,
e-mail: ruggero.golini@unibg.it
Matteo Kalchschmidt
Department of Economics and Technology Management, Università degli Studi di Bergamo, Viale
Marconi 5, 24044 Dalmine (BG), Italy, Tel. +39 035 205 2360 Fax. +39 035 205 2077,
e-mail: matteo.kalchschmidt@unibg.it


ments, currency restrictions, local content and countertrade, lower prices, quality,
technology access, access to new markets, shorter product development and life cy-
cles, competitive advantage. In some cases, internal factors (e.g., company image)
can be the principal motivators (Alguire et al, 1994).
However, sourcing globalization is still not widely diffused (Trent and Monczka, 2003; Cagliano et al, 2008). This is because global sourcing can imply longer supply lead times that can lead to higher inventory levels and other hidden costs (Handfield, 1994). Moreover, in a global sourcing context it becomes more difficult to have an integrated and efficient supply chain (Das and Handfield, 1997). Nevertheless, sourcing locally can lead to a loss of competitiveness if other companies are able to efficiently exploit globalization opportunities. In fact, thanks to experience and investment in the supply chain, companies can achieve better performances, also on lead times and inventories, even with a globalized supply base (Bozarth et al, 1998; Golini and Kalchschmidt, 2008). On the other side, local sourcing allows companies to invest in JIT with suppliers, thereby improving procurement performances. In fact many companies prefer to source locally, and in some cases they invest in Just-in-Time practices - that require physical proximity - to keep inventories under control (Das and Handfield, 1997; Prasad and Babbar, 2000).
Actually, several studies have failed to detect any significant impact of global supply chains (Kotabe and Omura, 1989; Steinle and Schiele, 2008), and specifically of global sourcing, on general business success. Only weak evidence has been found: it seems that global sourcing can improve product and process innovation, but it seems to have no impact on strategic performances (Kotabe, 1990; Murray et al, 1995). Companies, however, have to carefully select their globalization strategy, using, for example, hybrid global/local approaches according to the type of goods purchased (Steinle and Schiele, 2008). In the rather developed literature on global and local sourcing, however, there is limited empirical research regarding the impact of this practice on lead times. This paper aims at contributing
to this issue, by providing evidence on the relationship between global and local
purchasing, supply chain management and lead time performances.
The remainder of the paper is structured as follows. In the next section literature
regarding the relationship between global sourcing, supply chain management and
lead time performances is taken into account. Next, research objectives and methodology are detailed and empirical analyses are described. Then a discussion of
empirical results is provided and, in the end, we draw some conclusions and suggest
potential future developments.

22.2 Literature Review

Lead times are a major concern for suppliers as these impact directly on customers’ performances: lower lead times induce lower inventories and allow companies to be more flexible. Also from a supply chain perspective, lead time reduction contributes positively to reducing the bullwhip effect (Chen et al, 2000), making the entire supply chain
more efficient. On the other side competitive pressures (lower costs, higher qual-
ity, innovativeness) drive many companies in scouting suppliers abroad (Alguire
et al, 1994; Ettlie and Sethuraman, 2002; Frear et al, 1992; Smith and Reece, 1999;
Trent and Monczka, 2003; Birou and Fawcett, 1993; Womack and Jones, 1996).
This practice may intuitively cause higher procurement lead times, mainly because of geographical distances. This is confirmed by Handfield (1994): among the top five cost problems experienced in using international sources are long lead times and inventory costs. The same study also shows that international sourcing systematically causes fewer on-time deliveries and longer lead times.
However the problem is more complex, as at least three aspects of the system dynamics have to be considered. First of all, companies may compensate for a higher procurement lead time with higher inventories, thus not affecting the lead time for the customer. The second aspect is the position of the decoupling point: companies can hold higher material inventories if they produce make-to-stock or make-to-order. For engineer-to-order companies, instead, the procurement lead time has a direct impact on the total lead time. Moreover, the quality level of the supply may affect the manufacturing lead time, as scrap and reworks can make it longer. Again, companies can react to this with higher work-in-progress and finished goods inventories if their production model allows it (i.e. make-to-stock).
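
A stylized sketch of the decoupling-point argument follows: only in the engineer-to-order case does the procurement lead time reach the customer directly. The numbers and the zero-time assumption for stocked stages are illustrative simplifications, not figures from the study.

def customer_lead_time(procurement_lt: float, manufacturing_lt: float,
                       delivery_lt: float, decoupling: str) -> float:
    """Order lead time seen by the customer for different decoupling
    points (deliberately stylized: stocked stages take no time)."""
    if decoupling == "make_to_stock":      # ship from finished goods
        return delivery_lt
    if decoupling == "make_to_order":      # materials held in stock
        return manufacturing_lt + delivery_lt
    if decoupling == "engineer_to_order":  # nothing held in stock
        return procurement_lt + manufacturing_lt + delivery_lt
    raise ValueError(decoupling)

# Illustrative numbers (weeks): a 6-week global procurement lead time
# only reaches the customer directly in the engineer-to-order case.
for mode in ("make_to_stock", "make_to_order", "engineer_to_order"):
    print(mode, customer_lead_time(6, 2, 1, mode))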
Finally supply chain practices have to be considered. In fact literature suggests
reducing procurement lead time by means of investments in supply chain integration
with suppliers (e.g., Droge et al, 2004; Fliedner, 2003). Many contributions (e.g.,
Frohlich and Westbrook, 2001) show how a higher level of integration provides
better operational performances (also in terms of lead time), thus suggesting that
firms should invest in this direction.
In this area, researchers have identified two complementary ways in which sup-
ply chain integration can be applied (Cagliano et al, 2005; Vereecke and Muylle,
2006): information sharing and system coupling. The first regards exchanging in-
formation on market demand, inventories, production plans, delivery dates, etc. Lee
et al (1997) provide several examples of information sharing as an effective instru-
ment to face the bullwhip effect. Regarding information sharing domain, significant
importance has been given to the use of electronic tools for information exchange
and integration. The adoption of electronic communication channels between firms
has been a relevant issue for several years (e.g., Malone et al, 1987), and the liter-
ature contains several contributions on the role of ICT in SCM from different per-
spectives (for a review, see Gunasekaran and Ngai, 2005; Power and Singh, 2007).
More recently, literature has focused on Internet-based electronic communication
(i.e. eBusiness). According to this literature, purchasing performances can be im-
proved through the adoption of internet tools (McIvor et al, 2000), and the flow of
information along the supply chain can be easily transferred, which helps companies
to be more responsive (e.g., Naylor et al, 1999; Aviv, 2001; Devaraj et al, 2007).
The second area of integration, system coupling, is represented by coordinating
physical activities, through mechanisms as VMI, CPFR or JIT-Kanban to obtain a
smooth material flow and a seamless supply chain (see for example Childerhouse
et al, 2002; Disney and Towill, 2003). From this point of view, an integrated supply
chain offers the opportunity for firms to compete on the basis of speed and flexibility,
while at the same time holding minimum levels of inventory in the chain (Power,
2005). In particular some studies have highlighted that JIT sourcing requires specific
conditions (e.g., frequent and fast deliveries, small lots, etc.) that can be difficult to
be performed in an international environment. So, even if it is possible to achieve
efficiency in a global sourcing context through JIT, they are not yet comparable to
what can be gained at domestic level (Das and Handfield, 1997).
Finally, some authors have highlighted the importance of designing a proper
sourcing process in all its subphases and applying proper tools (Zeng, 2003; Pe-
tersen et al, 2005; Quintens et al, 2005; Gelderman and Semeijn, 2006). All this
evidence suggests to companies with global sourcing processes to invest in supply
chain management and integration, e.g. to keep lead times under control (Bozarth
et al, 1998).
In conclusion, literature highlights how companies can improve operational per-
formances (and lead time ones specifically) through supply chain integration, i.e.
information sharing and system coupling. Nevertheless, integration - and especially
system coupling - can be difficult to be performed in a global sourcing context be-
cause of suppliers’ distance. This can make more difficult for companies to control
sourcing globalization counter effects - mainly longer lead times - and neutralize
lower cost seeking strategies.

22.3 Research Objectives and Methodology

The objective of this paper is to explore the relationship between global sourcing,
supply chain investments and lead time performances.
Literature suggests that global sourcing can have an impact on lead time performances; however, this relationship is not completely straightforward. For example, on one side global sourcing means that companies purchase from suppliers that are farther from the plant than in a local sourcing situation. This should increase the procurement lead time, mainly due to transportation over longer distances. However, companies sourcing locally have to choose among a limited set of potential suppliers, while when global sourcing is adopted companies are able, at least potentially, to choose the best suppliers, and thus to gain better performances. For this reason, the first research question this work
wants to address is:
RQ1 What is the impact of local sourcing on lead time performances?

Some companies often source locally simply because they don’t consider other
alternatives, either because they are too small or because they rely on consolidated
relationships with local suppliers. The impact of local sourcing on performances
may not be completely straightforward also because some companies choose to
source locally through a clear and rational analysis. Other companies, on the con-
trary, choose to purchase locally because that’s enough to achieve their business
goals. For this reason we also aim at considering the impact of how the choice of
local sourcing is made, whether it is likely the result of a structured analysis or not.
We argue that companies that source locally based on a strategic analysis of their
context have better performances compared to those that simply choose to purchase
locally without taking into account strategic issues. Thus we formulate the following
research question:
RQ2 Does local sourcing chosen as the result of a strategic analysis lead to better lead time performances than local sourcing without one?
Literature suggests that improvements on procurement lead time can be achieved
by leveraging on supply chain investments such as JIT, information sharing with
suppliers, coupling between customer and supplier production systems, etc. Limited
evidence however can be found regarding the extent to which these investments
are related to local sourcing approaches. In particular some investments (e.g., JIT)
are positively influenced by local sourcing, but previous works (e.g., Golini and
Kalchschmidt, 2008) have shown that other investments are negatively related to
local sourcing (e.g., purchasing process improvement programs).
Thus our third research question is:
RQ3 What is the impact of local sourcing on supply chain investments that influence
lead time performances?
In order to investigate the above research questions, data have been collected
within the fourth edition of the International Manufacturing Strategy Survey (IMSS),
a research project carried out in 2005 by a global network. This project, originally
launched by London Business School and Chalmers University of Technology, stud-
ies manufacturing and supply chain strategies within the assembly industry (ISIC codes 28-35), through a detailed questionnaire that is administered simulta-
neously in many countries by local research groups; responses are gathered in a
unique global database (Lindberg and Trygg, 1991).
The sample consisted of 660 companies from 21 countries, with an average re-
sponse rate of 34 %. The usable sample included 620 companies, which provided
enough information for the purpose of this study. Among these companies we lim-
ited our analyses to those companies that don’t rely on engineer-to-order manufacturing systems. This is due to the fact that these companies face different challenges compared to assemble/make-to-order or make-to-stock ones (e.g. inventories have a different role). The distribution of the sample in terms of country, size and ISIC code is shown in Table 22.1 and Table 22.2.
In order to measure the extent of localization of sourcing activities, we collected
information regarding the percentage of purchases inside the country where the plant
is based. On the other side, to evaluate the extent to which companies decide to
purchase locally for strategic reasons, we collected information regarding the extent to which companies consider the proximity of suppliers a key element in selecting a supplier. This item is measured on a 1-5 Likert scale where 1 equals “not at all” and 5 “to a great extent”.

Table 22.1 Sample distribution in terms of country (a) and size (b) - Small: less than 250 employees, Medium: 251-500 employees, Large: over 501 employees

(a)
Country N %          Country N %
Argentina 36 6.8     Italy 39 7.4
Australia 8 1.5      Netherlands 49 9.3
Belgium 27 5.1       New Zealand 21 4.0
Brazil 11 2.1        Norway 15 2.8
Canada 12 2.3        Portugal 8 1.5
Denmark 26 4.9       Sweden 74 14.0
Estonia 19 3.6       Turkey 28 5.3
Germany 17 3.2       UK 13 2.5
Hungary 48 9.1       United States 30 5.7
Ireland 9 1.7        Venezuela 22 4.2
Israel 15 2.8
Total 527 100

(b)
Size N %
Small 300 56.9
Medium 100 19.0
Large 124 23.5
NA 3 0.6
Total 527 100

Table 22.2 Distribution of the sample in terms of ISIC code


ISIC Industry description N %
Code
28 Manufacture of fabricated metal products, except machinery and equipment 213 40.4
29 Manufacture of machinery and equipment not elsewhere classified 95 18
30 Manufacture of office, accounting and computing machinery 11 2.1
31 Manufacture of electrical machinery and apparatus not elsewhere classified 63 12
32 Manufacture of radio, television and communication equipment and apparatus 31 5.9
33 Manufacture of medical, precision and optical instruments, watches and clocks 26 4.9
34 Manufacture of motor vehicles, trailers and semi-trailers 51 9.7
35 Manufacture of other transport equipment 32 6.1
NA 5 0.9
Total 527 100

We created three clusters based on these two variables (a detailed methodological description is reported in the following) and then compared different lead time performances among the clusters. In particular we considered: i) procurement lead time compared to competitors; ii) manufacturing lead time compared to competitors; iii) delivery speed compared to competitors; iv) throughput time efficiency compared to competitors.
In order to control our results we also considered inventories since they are typi-
cally influenced by lead time performances. Specifically we measured: i) Number of
days of production in raw materials inventory; ii) Number of days of production in
work in progress inventory; iii) Number of days of production in finished products
inventory.
Then we considered manufacturing conformance performance (relative to competitors and in terms of scrap and rework costs), since it could depend on the quality of the purchased products and can negatively affect the manufacturing lead time. In the end, to evaluate investments in supply chain management, we considered the following variables:¹
• Adoption of Just In Time for managing procurement (measured by the percent-
age of deliveries from suppliers managed through JIT).
• Extent of information sharing with suppliers. We defined a latent variable based
on the degree of adoption with suppliers of: Share inventory level knowledge,
Share production planning decisions and demand forecast knowledge, Order
tracking/tracing, Agreements on delivery frequency;
• Extent of system coupling with suppliers. We defined a latent variable based on
the degree of adoption with suppliers of: Vendor Management Inventory systems,
Collaborative Planning, Forecasting and Replenishment, Physical integration;
• Extent of use of Internet for managing suppliers. We defined a latent variable based on the degree of use with suppliers of: scouting/pre-qualification, auctions, RFx, data analysis, access to catalogues, order management and tracking, content and knowledge management, collaboration support services.
• Purchasing process improvements. We defined a latent variable based on the company’s investments in: rethinking and restructuring the supply strategy and the organization and management of the supplier portfolio (through e.g. tiered networks, bundled outsourcing and supply base reduction); implementing supplier development and vendor rating programs; and increasing the level of coordination of planning decisions and flow of goods with suppliers, including dedicated investments (in e.g. Extranet/EDI systems, dedicated capacity/tools/equipment, dedicated workforce, etc.).

22.4 Results

In order to analyze our research questions we proceeded as follows. First we applied cluster analysis in order to identify different groups based on the use of local
sourcing and whether strategic decision making is applied. Then we analyzed the
differences among clusters on some contingent variables in order to check for clus-
ters’ reliability. In the end, we compared the different clusters on performances and
supply chain practices.

¹ Reliability of these latent variables was checked by controlling Cronbach’s Alpha values (for all variables over 0.6) and the items’ factor loadings (for all variables over 0.5).

22.4.1 Cluster Analysis

The cluster analysis has been performed on two variables: degree of local sourcing
(measured as percentage of purchases within the same country where the plant is
located - scale from 0% to 100%) and the extent to which physical proximity is
important in the suppliers’ selection. Through the analysis of the dendrogram, it is
possible to identify that the best solution is three or four clusters. For interpretability’s sake we selected three clusters because the two variables are positively correlated (sig. 0.000); by looking at how the points are jointly distributed in a scatter diagram it is possible to identify a priori three high-density areas:
• Low local sourcing (i.e. global sourcing) and relatively low importance to phys-
ical proximity (named in the following Globals)
• High local sourcing and high importance to physical proximity (named Patriots).
• High local sourcing and low importance to physical proximity (named Idlers).
Companies with global sourcing and high importance to physical proximity are,
as one could expect, very few and we put them together with the Globals.
Because of this data structure, when we perform a Two Step cluster analysis (log-
likelihood distance on standardized variables) specifying three as the cluster number
we obtain the results reported in Table 22.3.

Table 22.3 Cluster analysis results. For each cluster the average values and the significant dissimilarities with respect to the other clusters are reported (** Mann-Whitney U sig. < 0.000)
N. Local Sourcing Physical proximity
1 Patriots 233 81.725 (3)** 3.624 (2,3)**
2 Idlers 105 78.933 (3)** 1.762 (1,3)**
3 Globals 189 26.060 (1,2)** 2.476 (1,2)**
Average 56.558 2.746
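
For illustration, a minimal sketch of a clustering on these two standardized variables might look as follows; Ward hierarchical clustering is used here as a simple stand-in for the SPSS two-step procedure, and the data are randomly generated rather than the IMSS sample.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(1)
# Hypothetical stand-ins for the two clustering variables: % local
# sourcing (0-100) and importance of proximity (1-5 Likert).
local_sourcing = rng.uniform(0, 100, 300)
proximity = rng.integers(1, 6, 300).astype(float)

X = np.column_stack([zscore(local_sourcing), zscore(proximity)])

# Inspect the dendrogram to choose the cut, then extract 3 clusters.
Z = linkage(X, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")

for c in (1, 2, 3):
    mask = labels == c
    print(f"cluster {c}: n={mask.sum():3d}, "
          f"local sourcing {local_sourcing[mask].mean():5.1f}%, "
          f"proximity {proximity[mask].mean():.2f}")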

In order to check for the reliability of the defined clusters we tested differences
among clusters on different contingency variables related to company size, busi-
ness objectives, manufacturing globalization, industrial sector and position of de-
coupling point (see Table 22.4 for details). As we can see there are no differences
related to the business objectives and industrial sector. Global sourcers are larger
companies and have a more globalized manufacturing network than local ones. This
is quite intuitive, even if it is not always confirmed by literature (e.g., Cavusgil
et al, 1993; Quintens et al, 2005). Finally it is interesting to notice that companies
sourcing locally have higher direct salaries/wages costs and lower direct materi-
als/parts/components. This suggests that Globals, since they have a higher incidence
of direct materials and components on total costs, are more driven to scout for the best and most convenient suppliers around the world. Given the relevance of these purchases, longer lead times or higher supply chain investments can be acceptable. We can summarize results by stating that no relevant differences arise from this analysis and thus we can argue that the clusters are not biased. After the contingency analy-
sis, we looked for differences among clusters on lead time and other related perfor-
mances and, separately, on SC practices. Since analyzed variables are not normally
distributed (based on Kolmogorov-Smirnov test) we adopted non-parametric tests.

Table 22.4 Contingent factors analysis results


Contingency Results
Business Objectives No differences
Company size Globals are bigger than Patriots and Idlers
Manufacturing globalization Globals have a more globalized production network
Industry (ISIC code) No significant patterns.
Cost structure Globals have a lower incidence of direct salaries/wages
and higher incidence of direct materials/parts/components

22.4.2 Performances

As regards performances, we took into account lead time performances (procurement, manufacturing and delivery lead times), manufacturing conformance, throughput time efficiency, scrap and rework costs, raw materials/components inventory, WIP and finished products inventory. In particular we made two-by-two comparisons among clusters using the Mann-Whitney U test. Results are reported below for each comparison.
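
A minimal sketch of these pairwise tests, with synthetic stand-in scores and the cluster sizes from Table 22.3, might look as follows.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
# Hypothetical performance scores per cluster (the survey data are
# not reproduced); cluster sizes are those of Table 22.3.
clusters = {
    "Globals":  rng.normal(4.2, 1.0, 189),
    "Patriots": rng.normal(4.1, 1.0, 233),
    "Idlers":   rng.normal(3.8, 1.0, 105),
}

# Two-by-two Mann-Whitney U tests, as used for Tables 22.5-22.7.
pairs = [("Globals", "Idlers"), ("Globals", "Patriots"),
         ("Patriots", "Idlers")]
for a, b in pairs:
    u, p = mannwhitneyu(clusters[a], clusters[b], alternative="two-sided")
    print(f"{a} vs {b}: U={u:.0f}, p={p:.3f}")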
Globals vs Idlers. Looking at Table 22.5, Globals have superior performances compared to Idlers on procurement lead time, manufacturing lead time, delivery speed and manufacturing conformance. This suggests that even though Globals have more distant suppliers, they are still able to achieve competitive lead times without carrying higher inventories. The shorter manufacturing lead time can be partly explained by the higher manufacturing conformance, even if there is no difference on scrap and rework costs. Quite interestingly, there is no difference on throughput time efficiency.
Globals vs Patriots. Patriots don’t show particular differences from Globals apart from superior throughput time efficiency (Table 22.6). This shows that a strategic approach to local sourcing can provide the opportunity to remain competitive on lead time performances (not only procurement, but also manufacturing and delivery lead times). On the other side, global sourcing doesn’t always imply worse lead times.
Patriots vs Idlers. Finally we compared Patriots with Idlers (Table 22.7). Results show almost no differences between the clusters apart from the superior throughput time efficiency of the Patriots. Patriots therefore probably have reasons other than lead times for selecting suppliers locally.
Table 22.5 Average rank of the analyzed clusters on the different performances
Globals Idlers Sig.
Procurement lead time 130 113 0.043
Manufacturing lead time 132 107 0.004
Delivery speed 135 115 0.027
Manufacturing conformance 136 118 0.045
Throughput Time Efficiency 134 138 0.720
Scrap and rework costs 149 163 0.217
Material/ components inventory 160 158 0.814
WIP inventory 153 172 0.077
Finished products inventory 159 158 0.940

Table 22.6 Average rank of the analyzed clusters on the different performances
Globals Patriots Sig.
Procurement lead time 163 153 0.252
Manufacturing lead time 165 148 0.061
Delivery speed 167 159 0.394
Manufacturing conformance 170 161 0.311
Throughput Time Efficiency 151 194 0.000
Scrap and rework costs 183 203 0.077
Material/ components inventory 204 186 0.121
WIP inventory 192 203 0.350
Finished products inventory 203 187 0.146

Table 22.7 Average rank of the analyzed clusters on the different performances
Idlers Patriots Sig.
Procurement lead time 104 112 0.290
Manufacturing lead time 103 113 0.194
Delivery speed 106 118 0.142
Manufacturing conformance 110 119 0.251
Throughput Time Efficiency 99 123 0.008
Scrap and rework costs 131 133 0.793
Material/ components inventory 143 131 0.262
WIP inventory 139 131 0.411
Finished products inventory 138 129 0.344

22.4.3 Supply Chain Management Practices

We also took into consideration the supply chain management practices put in place by companies. Namely we considered: Just-in-Time adoption, information sharing, system coupling, eBusiness and purchasing process improvements, as defined in the methodology section. Following the same approach as in the previous section, we compared the clusters two-by-two on the different practices.
Globals vs Idlers. Globals use information sharing, system coupling and eBusiness more than Idlers, while there are no significant differences on Just-in-Time and supply chain investments (Table 22.8).
Globals vs Patriots. Globals tend to adopt JIT less than Patriots, while there are no significant differences on the other dimensions (Table 22.9).
Patriots vs Idlers. Patriots use all supply chain practices to a greater extent than Idlers. The only exception is supply chain investments, where no difference arises (Table 22.10).

Table 22.8 Average rank of the analyzed clusters on the different supply chain practices
Globals Idlers Sig.
Just in time 156 155 0.980
Information Sharing 174 144 0.007
System Coupling 178 139 0.001
eBusiness 170 134 0.001
Supply chain investments 164 144 0.069

Table 22.9 Average rank of the analyzed clusters on the different supply chain practices
Globals Patriots Sig.
Just in time 180 208 0.013
Information Sharing 207 203 0.739
System Coupling 211 200 0.343
eBusiness 204 201 0.816
Supply chain investments 206 187 0.084

Table 22.10 Average rank of the analyzed clusters on the different supply chain practices
Idlers Patriots Sig.
Just in time 124 144 0.043
Information Sharing 130 153 0.028
System Coupling 127 153 0.010
eBusiness 121 152 0.003
Supply chain investments 135 139 0.648

22.5 Discussion

The previous results highlight several issues we discuss here.


A first interesting result comes from the clustering procedure itself. Several companies adopt local sourcing without any strategic reason for that, but
probably due to common practice. Even if a large part of our sample considers strategy a key element in deciding how to manage sourcing, some companies still pay little attention to this topic (20% of our sample). Moreover, even if
global sourcing is an ever growing practice (see Cagliano et al, 2008), several com-
panies decide to keep sourcing local and prefer not to extend their supply chains
abroad (Patriots constitute 36% of our sample). We argue that this result contributes
to the debate on the impact of globalization on managerial practices and the extent
of the phenomenon.
A second important result relates to the impact on performance. Analyses clearly show that no significant differences emerge between those companies that source globally and those that deliberately decide to stay local. This lack of difference may well be due to several contingency factors (e.g., product characteristics, production organization, strategic goals, etc.), even if the contingency analysis does not reveal any relevant difference in some of these items. We argue that this result shows that there is no “one best way” to manage sourcing activities (i.e., local vs. global), but that different options are available to companies. Quite interestingly, however, the only difference between those companies that choose to stay local (Patriots) and all others (Idlers and Globals) is that the former are capable of achieving a better throughput time efficiency than the latter. This result provides evidence that local sourcing, if properly managed, can have a significant impact in terms of internal efficiency. Companies should then take into proper consideration the impact of going global not only for their sourcing activities, but also for their manufacturing processes. Quite interestingly, no significant differences (besides that on throughput time efficiency previously mentioned) arise between Patriots and Idlers. This may be because companies decide to stay local for different reasons, either related to performances that they want to improve directly (e.g., transportation cost, delivery lead time, flexibility, etc.) or to access specific local competences. This means that each Patriot may focus its attention only on some specific performances, thus increasing variance in the sample and limiting the difference between the two clusters.
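As an aside, the throughput time efficiency measure that drives this result can be illustrated with a short calculation. The common definition of the measure (value-added time divided by total throughput time) is assumed here, and the figures are hypothetical rather than taken from the survey.

# Illustrative calculation, assuming throughput time efficiency is defined as
# value-added time divided by total throughput time. Figures are hypothetical.
def throughput_time_efficiency(value_added_time: float, total_time: float) -> float:
    """Fraction of total throughput time spent on value-adding work."""
    return value_added_time / total_time

# A well-managed local supply base cuts waiting time (transport, queuing)
# from 90 h to 60 h while processing time stays at 10 h:
print(throughput_time_efficiency(10, 10 + 90))   # 0.10
print(throughput_time_efficiency(10, 10 + 60))   # about 0.14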
A third important result arises again from the comparison of Idlers with the other groups. Quite interestingly, besides the differences in performance, Idlers are also characterized by lower investments in supply chain practices. In fact, compared to both Globals and Patriots, these companies tend not to invest in information sharing with suppliers, system coupling and eBusiness tools. This result explains, at least partially, why these companies also have lower performances compared to the others. It seems that what often matters is not where companies source from, but how sourcing is managed. The only exception to this observation is the adoption of JIT deliveries with suppliers which, due to its characteristics, is less adopted by Globals compared to those that source locally.
In the end, the paper provides evidence that sourcing locally will not provide companies with better performance if this choice is not part of a company's purchasing strategy. Companies should thus devote proper attention to defining a structured approach to sourcing by means of analysis tools and proper investments.

22.6 Conclusion

This paper contributes to the understanding of the impact of global and local sourcing on both companies' performance and behavior. The paper provides evidence of the importance of carefully choosing a sourcing strategy and of considering how it can be supported by leveraging supply chain management. Finally, we would like to highlight the limitations of this work. First of all, the statistical analyses were based on a specific sample; thus, replications of this study should be considered in the future in order to verify the reliability of the results provided.
Second, due to space limitations, we did not consider why companies choose to adopt different sourcing approaches. We argue that this element should help in better understanding the differences and similarities among the considered clusters.
Finally, we clustered companies based on the extent of local or global sourcing. However, some companies adopt hybrid approaches between these two alternatives according to the type of goods purchased. This aspect has not been considered here, and future studies should examine how companies manage such hybrid configurations.

Acknowledgements Financial support for this research was provided by the PRIN 2007 fund “La
gestione del rischio operativo nella supply chain dei beni di largo consumo”.

References

Alguire M, Frear C, Metcalf L (1994) An examination of the determinants of global sourcing strategy. Journal of Business and Industrial Marketing 9(2):62–74
Aviv Y (2001) The effect of collaborative forecasting on supply chain performance.
Management Science 47(10):1326–1343
Birou LM, Fawcett SE (1993) International Purchasing: Benefits, Requirements,
and Challenges. International Journal of Purchasing & Materials Management
29(2):28–37
Bozarth C, Handfield R, Das A (1998) Stages of global sourcing strategy evolution:
An exploratory study. Journal of Operations Management 16(2-3):241–255
Cagliano R, Caniato F, Spina G (2005) E-business strategy: How companies are
shaping their supply chain through the internet. International Journal of Opera-
tions & Production Management 25(12):1309–1327
Cagliano R, Golini R, Caniato F, Kalchschmidt M, Spina G (2008) Supply chain configurations in a global environment: A longitudinal perspective. Operations Management Research DOI 10.1007/s12063-008-0012-0
Cavusgil S, Yaprak A, Yeoh P (1993) A decision-making framework for global
sourcing. International Business Review 2(2):143–156
Chen F, Drezner Z, Ryan J, Simchi-Levi D (2000) Quantifying the bullwhip effect
in a simple supply chain: The impact of forecasting, lead times, and information.
Management Science 46(3):436–443
Childerhouse P, Aitken J, Towill D (2002) Analysis and design of focused demand
chains. Journal of Operations Management 20(6):675–689
Das A, Handfield R (1997) Just-in-time and logistics in global sourcing: An empir-
ical study. International Journal of Physical Distribution and Logistics Manage-
ment 27(3/4):244–259
Devaraj S, Krajewski L, Wei J (2007) Impact of eBusiness technologies on opera-
tional performance: The role of production information integration in the supply
chain. Journal of Operations Management 25(6):1199–1216
Disney S, Towill D (2003) The effect of vendor managed inventory (VMI) dynam-
ics on the Bullwhip Effect in supply chains. International Journal of Production
Economics 85(2):199–215
Dornier P, Ernst R, Fender M, Kouvelis P (1998) Global operations and logistics:
Text and cases. John Wiley & Sons, Inc., New York, NY
Droge C, Jayaram J, Vickery S (2004) The effects of internal versus external integra-
tion practices on time-based performance and overall firm performance. Journal
of Operations Management 22(6):557–573
Ettlie J, Sethuraman K (2002) Locus of supply and global manufacturing. Interna-
tional Journal of Operations and Production Management 22(3):349–370
Fliedner G (2003) CPFR: an emerging supply chain tool. Industrial Management
and Data Systems 103(1-2):14–21
Frear C, Metcalf L, Alguire M (1992) Offshore sourcing: Its nature and scope. In-
ternational Journal of Purchasing and Materials Management 28(3):2–11
Frohlich M, Westbrook R (2001) Arcs of integration: An international study of sup-
ply chain strategies. Journal of Operations Management 19(2):185–200
Gelderman C, Semeijn J (2006) Managing the global supply base through pur-
chasing portfolio management. Journal of Purchasing and Supply Management
12(4):209–217
Golini R, Kalchschmidt M (2008) Moderating the impact of global sourcing on in-
ventories through supply chain management. ISIR 2008 Conference Proceedings,
Budapest
Gunasekaran A, Ngai E (2005) Build-to-order supply chain management: A litera-
ture review and framework for development. Journal of Operations Management
23(5):423–451
Handfield R (1994) US global sourcing: Patterns of development. International Jour-
nal of Operations and Production Management 14(6):40–51
Kotabe M (1990) The relationship between offshore sourcing and innovativeness of US multinational firms: An empirical investigation. Journal of International Business Studies 21(4):623–638
Kotabe M, Omura G (1989) Sourcing Strategies of European and Japanese Multina-
tionals: A Comparison. Journal of International Business Studies 20(1):113–130
Lee H, Padmanabhan V, Whang S (1997) Information distortion in a supply chain:
The Bullwhip Effect. Management Science 43(5):546–558
Lindberg P, Trygg L (1991) Manufacturing strategy in the value system. Interna-
tional Journal of Operations & Production Management 11(3):52–62
Malone T, Yates J, Benjamin R (1987) Electronic markets and electronic hierarchies.
Communications of the ACM 30(6):484–497
McIvor R, Humphreys P, Huang G (2000) Electronic commerce: Re-engineering the
buyer-supplier interface. Business Process Management Journal 6(2):122–138
Murray J, Kotabe M, Wildt A (1995) Strategic and financial performance implica-
tions of global sourcing strategy: A contingency analysis. Journal of International
Business Studies 26(1):181–202
Naylor B, Naim M, Berry D (1999) Leagility: Integrating the lean and agile manu-
facturing paradigms in the total supply chain. International Journal of Production
Economics 62(1-2):107–118
Petersen K, Handfield R, Ragatz G (2005) Supplier integration into new product
development: Coordinating product, process and supply chain design. Journal of
Operations Management 23(3-4):371–388
Power D (2005) Supply chain management integration and implementation: A liter-
ature review. Supply chain management: An international journal 10(4):252–263
Power D, Singh P (2007) The e-integration dilemma: The linkages between Inter-
net technology application, trading partner relationships and structural change.
Journal of Operations Management 25(6):1292–1310
Prasad S, Babbar S (2000) International operations management research. Journal
of Operations Management 18(2):209–247
Quintens L, Matthyssens P, Faes W (2005) Purchasing internationalisation on both
sides of the Atlantic. Journal of Purchasing and Supply Management 11(2-3):57–
71
Smith T, Reece J (1999) The relationship of strategy, fit, productivity, and business
performance in a services setting. Journal of Operations Management 17(2):145–
161
Steinle C, Schiele H (2008) Limits to global sourcing? Strategic consequences of
dependency on international suppliers: cluster theory, resource-based view and
case studies. Journal of Purchasing and Supply Management 14(1):3–14
Taylor D (1997) Global Cases in Logistics and Supply Chain Management. Boston, MA
Trent R, Monczka R (2003) Understanding integrated global sourcing. International
Journal of Physical Distribution and Logistics Management 33(7):607–629
Vereecke A, Muylle S (2006) Performance improvement through supply chain col-
laboration in Europe. International Journal of Operations & Production Manage-
ment 26(11):1176–1198
Womack J, Jones D (1996) Lean Thinking: Banish Waste and Create Wealth in Your
Corporation. Simon & Schuster New York, NY
Zeng A (2003) Global sourcing: Process and design for efficient management. Sup-
ply Chain Management: An International Journal 8(4):367–379
Chapter 23
Improving Lead Times Through Collaboration
With Supply Chain Partners: Evidence From
Australian Manufacturing Firms

Prakash J. Singh

Abstract Whilst it is well recognized that lead time reductions can be of strategic
benefit to firms, most existing methods for generating these outcomes are seen as
being too complex and difficult to implement. In this paper, the possibility of using
supply chain collaboration for the purpose of reducing lead times was examined.
Data from a study involving 416 Australian manufacturing plants showed that there
were strong albeit indirect links between collaborative practices that firms develop
with key customers and suppliers, and lead time performance. From this, it is sug-
gested that firms consider, amongst other strategies, developing strong collabora-
tive relationships with their trading partners if they wish to reduce lead times.

23.1 Introduction

Benefits of reducing lead times associated with new product development and man-
ufacturing are well documented with publications such as Quick Response Manufac-
turing (Suri, 1998), Competing Against Time (Stalk and Hout, 1990) and Clockspeed
(Fine, 1998) bringing sharp attention to this issue. However, a significant challenge persists: how firms can actually achieve significant reductions in lead time. Although the literature provides some prescriptions on how lead times can be reduced, many firms continue to struggle to do so. The existing ideas appear to be too steeped in mathematical modeling, leading managers to believe that lead time reduction methods are too difficult and costly to implement (Suri, 1998; De Treville et al, 2004; Tersine and Hummingbird, 1995; Suri, 1999).
There is a need to develop methods for lead time reduction that are simpler and
practically achievable. One such idea emanates from the supply chain management
(SCM) body of knowledge. More specifically, the strong emphasis that SCM theory places on close inter-organizational relationships between key trading partners would appear, prima facie, to provide the opportunity for firms to achieve significant reductions in lead times, along with many other benefits.

Prakash J. Singh
Department of Management & Marketing, University of Melbourne, Parkville, 3010, Australia
e-mail: pjsingh@unimelb.edu.au
A number of terms are used to describe the close relationships between supply
chain partners. These include cooperation, coordination and collaboration. Although
these terms are sometimes used interchangeably, there are subtle but important dif-
ferences between them. In this paper, the focus is on collaboration. An attempt is
made to shed light on what characterizes successful collaboration in order to enable
firms to reduce lead times associated with new product development and manufac-
turing. Since much of the SCM literature is replete with the idea that firms should
work closely with their key customers and suppliers, this collaboration model is
essentially based on this principle. To inform the model, both the demand-side and
supply-side literature were examined for practices on how firms should collabo-
rate with customers and suppliers respectively. This led to the development of three
key collaboration constructs: internal organizational processes, relationships with
customers and relationships with suppliers. It was predicted that relationships with
customers and relationships with suppliers affected performance (measured in terms
of reductions in new product development and manufacturing lead times), both di-
rectly and through internal organizational processes. This model was empirically
tested with survey data from 416 Australian manufacturing firms.

23.2 Literature Review

23.2.1 Lead Time Reduction

Lead time can generally be defined as the time period between the initiation of a
task and its completion. More specific definitions depend on context. For example,
in the new product development area, lead time is conceived in terms of the time it
takes to identify a market need, design and test the product, and develop the pro-
cesses for manufacturing (Tennant and Roberts, 2001; Yazdani and Holmes, 1999).
Manufacturing lead time, on the other hand, is the “elapsed time between releasing
an order and receiving it” (Hsu and Lee, 2008, p.1). A number of different terms
are used to describe manufacturing lead time. Examples include “manufacturing
throughput time” (Johnson, 2003) and “order-to-delivery time” (Zhang et al, 2007).
From a SCM perspective, supply chain lead time is the “time spent by the supply
chain to process the raw materials to obtain the final products and deliver them to
the customer” (Bertolini et al, 2007, p.199).
For the purpose of this study, only new product development and manufactur-
ing lead times will be considered. There are two reasons for this. Firstly, these lead
times are more directly under the control of firms; supply chain lead times can often
be too difficult for individual firms to control and influence. Secondly, the interest is
in more closely examining the concepts that come under “time-based competition”
(Stalk and Hout, 1990; Stalk, 1988). These include “fast-to-market” and “fast-to-
product”. Firms that compete with fast-to-market strategy emphasize reductions in
new product development lead times. On the other hand, fast-to-product firms em-
phasize speed in responding to customer demands for existing products. This in-
volves reducing the time it takes to manufacture products (throughput time) as well
as the ability to reduce the time between taking a customer’s order and actually
delivering the product (delivery speed).
Studies have shown that lead times can involve tremendous amounts of waste,
with about 85 percent of time spent waiting between value-adding steps (Holweg
and Pil, 2004). Therefore, there are many benefits of reducing lead times. Ceteris
paribus, firms that are able to reduce new product development lead times can gain
a market edge over others that are not able to do the same (Tennant and Roberts, 2001).
Also, these firms move along the learning curve faster than their competition. Both
these factors increase barriers to competitors. Methods for reducing this form of lead
time include application of concurrent engineering practices (Yazdani and Holmes,
1999; Wilding and Yazdani, 1997), careful project management policies (Tennant
and Roberts, 2001) and formal stage-gate processes (Cooper, 1995). Similarly, reduc-
tions in manufacturing lead times can generate numerous benefits, including lower
inventory levels, improved quality, reduced forecasting errors and increased flexi-
bility (Johnson, 2003; Ouyang et al, 2007). These then improve customer service
and satisfaction levels, which in turn contribute to competitive advantage accruing
to the firms (Tersine and Hummingbird, 1995). Practical methods for generating
manufacturing lead time reductions include process reengineering (Bertolini et al,
2007), reducing product variety (Zhang et al, 2007), implementing lean and JIT
manufacturing practices (De Treville et al, 2004; Ouyang et al, 2007), using cellu-
lar manufacturing arrangements (Suri, 1998), and using ICT tools (Bertolini et al,
2007).
A number of researchers have complained that many of the methods suggested in the literature have not been taken up and implemented in firms for the purpose of reducing lead times (Suri, 1998; De Treville et al, 2004; Tersine and Hummingbird, 1995; Suri, 1999). Reasons for this include the complexity of some of these ideas and the perceived difficulties associated with them. Hence, there is a need to develop simpler and easier-to-implement ideas for reducing
new product development and manufacturing lead times. Ideas from the SCM area
have some potential.

23.2.2 Lead Time Reductions through Collaboration between


Supply Chain Partners

Collaboration involves firms working together and sharing resources and benefits,
with the expectation of generating joint improvements in customer service, and
achieving competitive advantage (Simatupang and Sridharan, 2002; Foster, 2005).
SCM proponents claim that firms that collaborate with their customers and suppliers
are able to generate many benefits, including reduced new product development and
manufacturing lead times (Christopher, 2005; Lee, 2004). The empirical evidence
for these claims is in the form of case studies and survey type multi-organizational
cross-sectional studies. For example, case studies of high profile firms such as Nokia
(Heikkilä, 2002) and Toyota (Stalk, 1988) demonstrate the ability of these firms to
achieve lead time reductions through, inter alia, pursuing close collaborative ar-
rangements with their trading partners. In a similar vein, survey based studies show
that collaboration has a positive impact on the operational (including lead time re-
ductions) and financial performance of firms (Vickery et al, 2003; Wisner, 2003).
Despite collaboration’s inherent attractiveness, many firms have found it is not
all that easy to sustainably develop these types of relationships. There appear to be two main reasons for this. Firstly, collaboration is a relatively 'hard' concept to
implement because it requires a lot more effort from firms relative to other forms of
inter-organizational relationships such as cooperation and coordination. Secondly,
firms have predominantly focused on technical aspects of collaboration, with the
belief that these systems would provide the infrastructure for strong relationships
(Narasimhan and Kim, 2001; Patterson et al, 2003). As such, large investments
have been made into establishing ICT systems (Bendoly and Kaefer, 2004). Rela-
tively less investment has gone into establishing appropriate social systems (Burgess
and Singh, 2006; Nahapiet and Ghoshal, 1998; Tsai and Ghoshal, 1998). But, as
Spekman et al (1998) and Golicic et al (2003) contend, collaboration has as much to do with the technical system as with the social
system. The relative neglect of social issues has led to firms not achieving the pur-
ported benefits of collaboration. For firms to be able to establish and benefit from
collaboration, it is self-evident that the above mentioned issues need to be resolved.
We attempt to (at least partially) address this in this paper.
We focus on one key element of collaboration: the nature and extent of integration
that firms need to enter into with their trading partners in order to realize significant
benefits in the form of lead time reductions. Frohlich and Westbrook (2002) separate this into supply and demand integration. Supply integration includes JIT (frequent, small lots) delivery, a small supply base, suppliers selected on the basis of quality and delivery performance, long-term contracts with suppliers
and elimination of paperwork. Demand integration, on the other hand, includes in-
creased access to demand information throughout the supply chain to permit rapid
and efficient delivery, coordinated planning and improved logistics communication.

23.2.3 Hypotheses

Based on the above discussion and the purpose of this study, we hypothesize that firms that
are able to develop high levels of collaboration (through integration) with supply
chain partners would be able to generate significant reductions in both new product
development and manufacturing lead times. Presenting this formally, it is hypothe-
sized that:
• H1: Collaboration based internal organizational processes are positively related to lead time performance of firms.
• H2a - H2b: Collaboration based relationships with customers are positively re-
lated to lead time performance of firms, both directly and through their internal
organizational processes.
• H2c - H2d: Collaboration based relationships with suppliers are positively re-
lated to lead time performance of firms, both directly and through their internal
organizational processes.
• H2e: Collaboration based relationships with suppliers and relationships with cus-
tomers that firms develop are positively inter-related.
These hypotheses can be summarized in diagrammatic form as shown in Fig. 23.1.

Fig. 23.1 Theoretical model

23.3 Research Method

23.3.1 Study Participants

Data for the empirical testing of the above hypotheses was obtained through a postal
survey targeting firms in the manufacturing industry in Australia. The JASANZ Reg-
ister (Standards, 2004) was used for selecting the sample of firms. The unit of anal-
ysis was at the plant level. A target list of 1,053 unique plants was selected from
the database. The respondents were senior managers (general, operations, quality,
production, etc). The survey was carried out in two stages. The final usable re-
sponse rate was 41 percent (n=416). The study participants were predominantly
small plants with almost half having fewer than 100 employees and $A10 million
in annual revenue. Also, the plants were mainly from the machinery and equipment
manufacturing (26 percent) and metal products (17 percent) manufacturing industry
sub-categories.

23.3.2 Measurement Instrument

The items used in this study were drawn from a measurement instrument that was developed for a large study of quality and operations management practices (Singh, 2003). Since this instrument was original in many respects, a full set of tests for reliability and validity was performed to ensure that the various types of errors were within acceptable levels. These included a pretest with eight practitioners and academicians and a pilot test with 21 firms. The instrument contained a total of 146 items, each measured on a five-point Likert scale. For this paper, a subset of the
items relevant to the key constructs of internal organizational processes, relation-
ships with suppliers, relationships with customers, and lead time performance was
used. These constructs along with their associated items, and the scales that were
used to measure the items, are shown in Table 23.1.

23.4 Data Analysis Procedures and Results

23.4.1 Psychometric Properties of Measurement Models

Content Validity. The lists of items assigned to the constructs were based on litera-
ture (summarized in the Literature Review section earlier). This provided evidence
that the items associated with the four constructs had sufficient grounding in rele-
vant literature and therefore had content validity.
Correlation Coefficients and Descriptive Statistics. The inter-item Pearson correla-
tion coefficients were low to moderate in magnitude, suggesting that multicollinear-
ity related problems were not present, with all coefficients being less than the thresh-
old value of 0.9 (Hair, 2006). Further, the mean and standard deviation values of
all the items suggested that the item measures did not suffer from excessive non-
normality.
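Such a screening is easy to reproduce. The helper below, written against a hypothetical pandas DataFrame with one column per survey item, flags any item pair whose correlation exceeds the 0.9 threshold; the function and variable names are illustrative.

# Inter-item multicollinearity screening with the 0.9 threshold cited above
# (Hair, 2006). `items` is a hypothetical DataFrame, one column per item.
import pandas as pd

def flag_collinear_pairs(items: pd.DataFrame, threshold: float = 0.9):
    """Return (item_a, item_b, r) for every pair whose |Pearson r| exceeds threshold."""
    corr = items.corr()  # pairwise Pearson correlation matrix
    cols = corr.columns
    return [(a, b, corr.loc[a, b])
            for i, a in enumerate(cols)
            for b in cols[i + 1:]
            if abs(corr.loc[a, b]) > threshold]

# An empty list indicates no multicollinearity concern at this threshold.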
Reliability. The Cronbach's alpha reliability coefficients for the internal organizational processes, customer relationship, supplier involvement and firm performance constructs were 0.830, 0.654, 0.634 and 0.774 respectively. These coefficients exceeded the minimum threshold level of 0.6 for acceptable reliability (Nunnally and Bernstein, 1978) for all the constructs. Therefore, the selected items reliably estimated the constructs.
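Cronbach's alpha can be computed directly from the item responses. The sketch below applies the standard formula to synthetic five-point data standing in for the nine IOP items; the data, not the formula, is hypothetical.

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the
# summed scale). The synthetic data stands in for the survey responses.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    scale_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / scale_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=416)  # one latent score per plant (n = 416)
iop = pd.DataFrame({f"IOP{i}": np.clip(np.round(3 + latent + rng.normal(0, 0.9, 416)), 1, 5)
                    for i in range(1, 10)})
print(f"alpha = {cronbach_alpha(iop):.3f}")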

Table 23.1 Constructs and associated items


1. Internal organizational processes
IOP1 The quality assurance processes ensure that the organization’s own output requirements
are consistently met
IOP2 The actual manufactured products are checked against customer orders before they are
delivered
IOP3 The equipment to carry out tests and inspections are available when needed
IOP4 Products and processes are inspected and/or tested
IOP5 Processes that produce products that cannot be tested or inspected are continuously
monitored
IOP6 Everyone is aware of what needs to be done with raw materials, work-in-progress and
finished products that fail inspections
IOP7 The handling, storage, packaging and delivery methods help to minimize quality-related
problems
IOP8 It is possible to identify clearly when a raw material, work-in-progress or finished
product has been inspected
IOP9 It is possible to establish relevant details (such as parts suppliers, place and date
of manufacture, persons-in-charge) of all finished products
2. Relationships with customers
RC1 The organization is aware of the requirements of its customers
RC2 Processes and activities of the organization are designed to increase customer
satisfaction levels
RC3 Misunderstandings between customers and the organization about customer orders are
rare
RC4 All contracts are systematically reviewed before acceptance, even if they are
routine ones
RC5 The organization has systematic processes for handling complaints
RC6 Changes made to contracts lead to confusion in the organization (Reverse coded)
3. Relationships with suppliers
RS1 The organization seeks assurance of quality from suppliers
RS2 The main criterion for choosing suppliers is the quality of their products
RS3 Misunderstandings between suppliers and the organization about orders placed with them
are rare
RS4 The quality of supplied products and services are assessed
RS5 Materials provided by the customer for incorporation into products are treated the same
as materials from any other suppliers
4. Operating performance
OP1 Total lead-time of manufacturing products
OP2 Time taken for new product development
OP3 Delivery performance



Convergent and Discriminant Validities. Convergent validity (i.e., the assigned
items yield roughly the same results) and discriminant validity (i.e., the items es-
timate only the assigned construct and not any others) were both assessed by us-
ing a confirmatory factor analysis (CFA) model testing approach. The CFA model
is a structural equation model (SEM) where the constructs are all co-varied with
each other. The SEM analysis was conducted using the AMOS 5.0 (Arbuckle and
Wothke, 2004) software package. The maximum likelihood (ML) estimation tech-
nique was used to fit the model to the data because it is a reasonably scale- and
distribution-free procedure (Hair, 2006). A number of commonly reported indices
for assessing the goodness-of-fit of SEM models with data were obtained for the
CFA model. These were as follows: χ2(246) = 876 with p-value < 0.001; normed χ2 = 3.567; goodness-of-fit index (GFI) = 0.844; adjusted goodness-of-fit index (AGFI) = 0.810; Tucker-Lewis index (TLI) = 0.764; comparative fit index (CFI) = 0.790; root mean square residual (RMR) = 0.045; and root mean square error of approximation (RMSEA) = 0.079. The χ2 fit measure has a tendency to produce negative results when sample sizes are greater than 200, and so was disregarded. For all other fit indices, applying the cutoff criteria proposed by researchers (Hair, 2006; Marsh et al, 2004; Sharma et al, 2005; Schermelleh-Engel et al, 2003), the 'acceptable' descriptor reasonably accurately captures the level of fit that has been
obtained here. The parameters associated with the CFA showed the convergent va-
lidity of the constructs was generally supported; all the estimated factor loadings of
items on constructs were significant (at p-values < 0.001), the signs were all positive
and only one was below 0.4, with the minimum being +0.346. Further, from the
squared multiple correlation coefficient values, the variances of the items explained
by their constructs were reasonably high (with the average being 34 percent). As
for discriminant validity, correlations between the constructs were mostly moder-
ate, suggesting that items assigned to one construct were not significantly highly
loading on others.
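A measurement model of this kind can also be estimated with open-source tooling. The sketch below assumes the semopy package rather than AMOS, and feeds it synthetic stand-in data, so the fit values it prints will not match those reported above; it only illustrates the structure of the four-construct CFA.

# Four-construct CFA sketch, assuming the semopy package (the original
# analysis used AMOS 5.0). Data are synthetic stand-ins for the survey items.
import numpy as np
import pandas as pd
import semopy

items = ([f"IOP{i}" for i in range(1, 10)] + [f"RC{i}" for i in range(1, 7)]
         + [f"RS{i}" for i in range(1, 6)] + [f"OP{i}" for i in range(1, 4)])
rng = np.random.default_rng(1)
survey_df = pd.DataFrame(rng.normal(size=(416, len(items))), columns=items)

# lavaan-style measurement model; semopy estimates covariances among the
# latent factors by default, matching a CFA in which all constructs co-vary.
cfa_description = """
IOP =~ IOP1 + IOP2 + IOP3 + IOP4 + IOP5 + IOP6 + IOP7 + IOP8 + IOP9
RC =~ RC1 + RC2 + RC3 + RC4 + RC5 + RC6
RS =~ RS1 + RS2 + RS3 + RS4 + RS5
OP =~ OP1 + OP2 + OP3
"""

model = semopy.Model(cfa_description)
model.fit(survey_df, obj="MLW")    # Wishart maximum likelihood objective
print(semopy.calc_stats(model).T)  # chi2, GFI, AGFI, CFI, TLI, RMSEA, ...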

23.4.2 SEM Results for Structural Model

23.4.2.1 Evaluation of Goodness-of-Fit Indices

As with the CFA model, the SEM analysis procedure was used to assess the hypoth-
esized relationships in the theoretical model presented in Fig.23.1. The fit indices for
the hypothesized model were as follows: χ2(df = 246) = 876 with p-value = 0.000; χ2/df = 3.567; GFI = 0.844; AGFI = 0.810; TLI = 0.764; CFI = 0.790; RMR = 0.045; and RMSEA = 0.079. These indices suggest that the theoretical model has an adequate level of empirical support.1

1 To further confirm this result, a χ2 difference test is traditionally used, whereby the χ2 value for the hypothesized model is compared with that for the CFA model (Anderson and Gerbing, 1988). However, in our case, the number of parameters in the hypothesized model is the same as that in the CFA, resulting in all fit indices being the same for the two models. Hence, this χ2 difference test could
not be meaningfully performed.

23.4.3 Evaluation of Theoretical Model Parameter Estimates

Figure 23.2 shows all the structural model parameters (regression, relevant squared
multiple correlation, and correlation coefficients) within the theoretical model.
These are in standardized form. The results have several noteworthy aspects. In
terms of the magnitude and sign of the relationships, as Fig. 23.2 shows, only two
out of six relationships (H2a and H2c) are statistically insignificant in magnitude,
having p-values greater than 0.05. The other relationships are all statistically sig-
nificant in magnitude (p-values less than 0.05) and positive in sign. Also, the inter-
correlation between the two exogenous constructs is positive, statistically significant
and moderate in magnitude. Finally, the squared multiple correlation values for the
two endogenous constructs, internal organizational processes and lead time perfor-
mance, were 0.726 and 0.334 respectively. The exogenous constructs therefore ac-
counted for large proportions of variances in these constructs. We further analyzed
the regression and correlation data presented in Fig. 23.2 by examining the standard-
ized effect sizes between the constructs. Effect size is the increase/decrease in the
endogenous construct (in standard deviation units) when there is a one standard de-
viation increase in the exogenous construct. The standardized direct effects, indirect
effects (calculated using the path analysis tracing rules described by Kline (2005))
and total effects of all the exogenous constructs on the endogenous constructs of the
model are shown in Table 23.2.

Fig. 23.2 Theoretical model, showing maximum likelihood estimates of standardized regression
coefficients (on straight lines with single-arrowheads), squared multiple correlation coefficients (on
constructs) and correlation coefficients (on curved lines with double-arrowheads). * 0.05 < p-value
≤ 0.1; ** 0.01 < p-value ≤ 0.05; *** p-value ≤ 0.01
Table 23.2 Effects decomposition of paths in the hypothesized model


Endogenous construct: Internal organizational processes
Exogenous construct:
Direct effect Indirect effect Total effect
Relationships with customers 0.521 0.251 0.772
Relationships with suppliers 0.431 0.304 0.735

Endogenous construct: Operating performance


Exogenous construct:
Direct effect Indirect effect Total effect
Relationships with customers 0.156 0.783 0.939
Relationships with suppliers 0.065 0.38 0.445
Internal organizational processes 0.398 - 0.398
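The decomposition itself follows the tracing rules: a compound effect is the product of the coefficients along a trace, and effects are summed over all admissible traces (Kline, 2005). The sketch below illustrates this with hypothetical placeholder coefficients (the actual standardized estimates appear in Fig. 23.2); note that the nonzero indirect effects reported on internal organizational processes suggest that traces through the correlation between the two exogenous constructs were counted as well.

# Effects decomposition by path tracing (Kline, 2005). All coefficient values
# below are hypothetical placeholders; the actual standardized maximum
# likelihood estimates appear in Fig. 23.2.
rc_iop, rs_iop = 0.52, 0.43   # RC -> IOP and RS -> IOP paths
rc_op, rs_op = 0.16, 0.07     # direct paths to lead time performance (OP)
iop_op = 0.40                 # IOP -> OP path
r_rc_rs = 0.58                # correlation between the exogenous constructs

# Indirect effect of RC on OP: the trace through IOP, plus traces that pass
# once through the RC <-> RS correlation before reaching OP.
indirect_rc_op = (rc_iop * iop_op               # RC -> IOP -> OP
                  + r_rc_rs * rs_op             # RC <-> RS -> OP
                  + r_rc_rs * rs_iop * iop_op)  # RC <-> RS -> IOP -> OP
total_rc_op = rc_op + indirect_rc_op

print(f"direct {rc_op:.3f}  indirect {indirect_rc_op:.3f}  total {total_rc_op:.3f}")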

23.5 Discussion

The SEM model-data fit results suggested that, in an overall sense, there was empirical support for the SCM-based collaboration - lead time performance model described in Fig. 23.1. However, in terms of the specific hypothesized relationships, the
same analysis showed that two direct hypothesized relationships (H2a: Relationship
with customers → lead time performance, and H2c: Relationship with suppliers →
lead time performance) were not supported. All other hypothesized relationships
had empirical support. One could therefore conclude that SCM based collaboration
strategies such as development of greater integration with supply chain partners are
not so effective. This however would be an erroneous conclusion. Effects analysis
results in Table 23.2 show that the indirect effects are strong and more than compensate for the weak direct effects. As Table 23.2 shows, the total effect of relationships with customers on lead time performance was 0.939, indicating that a one standard de-
viation improvement in the relationships with customers construct is associated with
a 0.939 standard deviation change in lead time improvement. This is a very strong ef-
fect. Similarly, the total effect of relationships with suppliers construct on lead time
performance was 0.445, indicating strong effect. These strong total effects indicate
that collaboration practices combine in a synergistic manner to affect lead time per-
formance. As a result, it would be myopic to assess and evaluate each individual collaboration construct for its effect in isolation.
The literature on SCM collaboration has shown a number of benefits that firms
can generate. To date, however, it has not been clear if lead time performance could
also improve through supply chain collaboration. This study has shown that there is
a strong albeit indirect link between supply chain collaboration practices and lead
time performance. As such, firms can pursue such strategies with the understanding
that a wide range of benefits that include lead time performance improvements can
be achieved.
Due to the manner in which the key supply chain collaboration constructs have
been defined, measured and empirically validated, there is guidance to managers
as to how these constructs can be operationalised in practice. From the items in
Table 23.1, it is clear that most of the actual practices relating to supply chain col-
laboration are quite straightforward and simple in nature, and therefore realistic to
achieve. One could indeed consider most of these items as being commonsensical
and generally good practices that could be found in most well managed firms. As
such, a key contribution of the paper is the articulation of a simple set of practices
that enable and facilitate supply chain collaboration. Firms can be confident that
if they put into practice the items listed in Table 23.1, then by the empirical links
established in the study, there will be a good chance that lead time performance
improvements would follow. In doing so, this would overcome the belief that many
managers have that the existing methods for lead time reductions are too complex
and difficult to implement.

23.6 Conclusion

In this study, the aim was to establish if it is possible to use ideas from the SCM
field, specifically those relating to collaboration between firms and their trading
partners, for the purpose of reducing lead times for new product development and
manufacturing. The empirical data from a sample of 416 Australian manufacturing
plants showed that this may indeed be possible: relationships
that firms develop with key customers and suppliers, acting directly and indirectly
through internal organizational processes, strongly affected lead time performance.
Given that lead time reduction has been recognized as playing a vital strategic role in
enabling firms to successfully and sustainably compete, managers should therefore
consider supply chain management collaboration as one more tool available to them,
amongst other existing methods, for the purposes of generating lead time reductions.

References

Anderson J, Gerbing D (1988) Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin 103(3):411–423
Arbuckle J, Wothke W (2004) Amos 5.0 user's guide. Chicago: Smallwaters
Bendoly E, Kaefer F (2004) Business technology complementarities: impacts of the
presence and strategic timing of ERP on B2B e-commerce technology efficien-
cies. Omega 32(5):395–405
Bertolini M, Bottani E, Rizzi A, Bevilacqua M (2007) Lead time reduction through
ICT application in the footwear industry: A case study. International Journal of
Production Economics 110(1-2):198–212
Burgess K, Singh P (2006) A proposed integrated framework for analysing supply
chains. Supply Chain Management: An International Journal 11(4):337–344
Christopher M (2005) Logistics and supply chain management: Creating value-
adding networks, 3rd edn. London, Prentice Hall
Cooper RG (1995) Developing new products on time, in time. Research Technology Management 38(5):49–57
De Treville S, Shapiro R, Hameri A (2004) From supply chain to demand chain:
The role of lead time reduction in improving demand chain performance. Journal
of Operations Management 21(6):613–627
Fine C (1998) Clockspeed: Winning Industry Control in the Age of Temporary Ad-
vantage. Perseus Books
Foster S F and Srikanth (2005) Seven imperatives for successful collaboration. Sup-
ply Chain Management Review 9(1):30–37
Frohlich M, Westbrook R (2002) Demand chain management in manufacturing and
services: Web-based integration, drivers and performance. Journal of Operations
Management 20(6):729–745
Golicic S, Foggin J, Mentzer J (2003) Relationship magnitude and its role in interor-
ganizational relationship structure. Journal of Business Logistics 24(1):57–76
Hair J (2006) Multivariate Data Analysis, 5th edn. New Jersey: Pearson Prentice-Hall
Heikkilä J (2002) From supply to demand chain management: Efficiency and cus-
tomer satisfaction. Journal of Operations Management 20(6):747–767
Holweg M, Pil F (2004) The second century: Reconnecting customer and value
chain through build-to-order. MIT Press Cambridge, MA
Hsu SL, Lee CC (2008) Replenishment and lead time decisions in manufacturer-
retailer chains. Transportation Research Part E: Logistics and Transportation Re-
view 45(3):398 – 408
Johnson D (2003) A framework for reducing manufacturing throughput time. Jour-
nal of Manufacturing Systems 22(4):283–298
Kline R (2005) Principles and practice of structural equation modeling, 2nd edn.
Guilford Press, New York
Lee H (2004) The triple-A supply chain. Harvard Business Review (10):102–112
Marsh H, Hau K, Wen Z (2004) In search of golden rules: Comment on hypothesis-
testing approaches to setting cutoff values for fit indexes and dangers in over-
generalizing Hu and Bentler’s (1999) findings. Structural Equation Modeling
11(3):320–341
Nahapiet J, Ghoshal S (1998) Social capital, intellectual capital and the organiza-
tional advantage. Academy of management review 23(2):242–266
Narasimhan R, Kim S (2001) Information system utilization strategy for supply
chain integration. Journal of Business Logistics 22(2):51–76
Nunnally J, Bernstein I (1978) Psychometric theory, 2nd edn. New York: McGraw-
Hill
Ouyang L, Wu K, Ho C (2007) An integrated vendor–buyer inventory model with
quality improvement and lead time reduction. International Journal of Production
Economics 108(1-2):349–358
Patterson K, Grimm C, Corsi T (2003) Adopting new technologies for supply chain
management. Transportation Research Part E: Logistics and Transportation Re-
view 39(2):95 –121
Schermelleh-Engel K, Moosbrugger H, Müller H (2003) Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research Online 8(2):23–74
Sharma S, Mukherjee S, Kumar A, Dillon W (2005) A simulation study to investi-
gate the use of cutoff values for assessing model fit in covariance structure mod-
els. Journal of Business Research 58(7):935–943
Simatupang T, Sridharan R (2002) The Collaborative Supply Chain. International
Journal of Logistics Management 13(1):15–30
Singh P (2003) What Really Works in Quality Management: A Comparison of Ap-
proaches. Consensus Books, Sydney
Spekman R, Kamauff J, Myhr N (1998) An empirical investigation into supply chain
management: A perspective on partnerships. International Journal of Physical
Distribution & Logistics Management 28(8):630–650
Stalk G (1988) Time: The next source of competitive advantage. Harvard Business Review, vol 66
Stalk G, Hout T (1990) Competing against time: How time-based competition is
reshaping global markets. New York : Free Press
Standards A (2004) Joint Accredited Systems - Australia and New Zealand (JASANZ) Register [cited Oct 2004]. URL http://www.jas-anz.com.au
Suri R (1998) Quick response manufacturing: A companywide approach to reducing lead times. Productivity Press, Portland, OR
Suri R (1999) How Quick Response Manufacturing Takes the Wait Out. Journal for
Quality & Participation 22(3):46–49
Tennant C, Roberts P (2001) A faster way to create better quality products. Interna-
tional Journal of Project Management 19(6):353–362
Tersine R, Hummingbird E (1995) Lead-time reduction: The search for competi-
tive advantage. International Journal of Operations and Production Management
15(2):8–18
Tsai W, Ghoshal S (1998) Social capital and value creation: The role of intrafirm
networks. Academy of management journal 41(4):464–476
Vickery S, Jayaram J, Droge C, Calantone R (2003) The effects of an integrative
supply chain strategy on customer service and financial performance: An anal-
ysis of direct versus indirect relationships. Journal of Operations Management
21(5):523–539
Wilding R, Yazdani B (1997) Concurrent Engineering in the Supply-Chain. Logis-
tics Focus 5:16–22
Wisner J (2003) A structural equation model of supply chain management strategies
and firm performance. Journal of Business Logistics 24(1):1–25
Yazdani B, Holmes C (1999) Four models of design definition: sequential, design
centered, concurrent and dynamic. Journal of Engineering Design 10(1):25–37
Zhang X, Chen R, Ma Y (2007) An empirical examination of response time, prod-
uct variety and firm performance. International Journal of Production Research
45(14):3135–3150
Appendix A
International Scientific Board

The chair of the international scientific board of the 1st Rapid Modelling Conference "Increasing Competitiveness - Tools and Mindset" was:
• Gerald Reiner (University of Neuchâtel, Switzerland)
The members of the international scientific board, who also served as referees, were:
• Djamil Aïssani (LAMOS, University of Béjaia, Algeria)
• Michel Bierlaire (EPFL, Switzerland)
• Bnamar Chouaf (University of Sidi Bel Abes, Algeria)
• Lawrence Corbett (Victoria University of Wellington, New Zealand)
• Krisztina Demeter (Corvinus University of Budapest, Hungary)
• Suzanne de Treville (University of Lausanne, Switzerland)
• Gerard Gaalman (University of Groningen, The Netherlands)
• Petri Helo (University of Vaasa, Finland)
• Werner Jammernegg (Vienna University of Economics and Business Adminis-
tration, Austria)
• Matteo Kalchschmidt (University of Bergamo, Italy)
• Doug Love (Aston Business School, UK)
• Jose Antonio Dominguez Machuca (University of Sevilla, Spain)
• Jeffrey S. Petty (Lancer Callon, UK)
• Boualem Rabta (University of Neuchâtel, Switzerland)

Appendix B
Sponsors

The sponsors of the 1st Rapid Modelling Conference "Increasing Competitiveness - Tools and Mindset" are:
• Journal of Modelling in Management http://www.emeraldinsight.com/jm2.htm
• LANCER CALLON http://www.lancercallon.com
• REHAU http://www.rehau.at
• SOFTSOLUTION http://www.softsolution.at
• SWISS OPERATIONS RESEARCH SOCIETY http://www.svor.ch
• THENEXOM http://www.thenexom.net
• University of Lausanne, Faculty of Business and Economics (HEC) http://www.hec.unil.ch
• University of Neuchâtel, Faculty of Economics http://www.unine.ch/seco
