
SPRINGER BRIEFS IN

APPLIED SCIENCES AND TECHNOLOGY

Miryam Barad

Strategies and Techniques for Quality and Flexibility

SpringerBriefs in Applied Sciences
and Technology

Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Systems Research Institute,
Warsaw, Poland
SpringerBriefs present concise summaries of cutting-edge research and practical
applications across a wide spectrum of fields. Featuring compact volumes of 50–
125 pages, the series covers a range of content from professional to academic.
Typical publications can be:
• A timely report of state-of-the art methods
• An introduction to or a manual for the application of mathematical or computer
techniques
• A bridge between new research results, as published in journal articles, and a contextual literature review
• A snapshot of a hot or emerging topic
• An in-depth case study
• A presentation of core concepts that students must understand in order to make
independent contributions
SpringerBriefs are characterized by fast, global electronic dissemination,
standard publishing contracts, standardized manuscript preparation and formatting
guidelines, and expedited production schedules.
On the one hand, SpringerBriefs in Applied Sciences and Technology are
devoted to the publication of fundamentals and applications within the different
classical engineering disciplines as well as in interdisciplinary fields that recently
emerged between these areas. On the other hand, as the boundary separating
fundamental research and applied technology is more and more dissolving, this
series is particularly open to trans-disciplinary topics between fundamental science
and engineering.
Indexed by EI Compendex and SpringerLink.

More information about this series at http://www.springer.com/series/8884


Miryam Barad

Strategies and Techniques
for Quality and Flexibility

Miryam Barad
Department of Industrial Engineering
The Iby and Aladar Fleishman Faculty of Engineering
Tel Aviv University
Tel Aviv, Israel

ISSN 2191-530X ISSN 2191-5318 (electronic)


SpringerBriefs in Applied Sciences and Technology
ISBN 978-3-319-68399-7 ISBN 978-3-319-68400-0 (eBook)
https://doi.org/10.1007/978-3-319-68400-0
Library of Congress Control Number: 2017956326

© The Author(s) 2018


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

It has been a long time since I first thought of writing a book. I told my friends that
when I retire I would write a book. They laughed and said that academic people
working at universities, who really want to write a book, do not wait until their
retirement. I do not write easily. It takes me a long time to write a paper, but I enjoy
doing it. On the other hand, writing a book is a complex and sizable undertaking, so I decided
to wait until my retirement. However, upon my retirement from Tel Aviv
University, the Rector of a college asked me to join its academic staff as Head of the
Industrial Engineering and Management Department, in order to enable the students
to get an academic degree similar to that of universities, i.e., Bachelor of Science
(B.Sc.). Therefore, again the book had to wait.
The idea of writing a book came back to me after a few more years. The editor of
an important professional journal asked me, as a member of its editorial board, to
write a paper for a special volume commemorating 50 years since the journal's first
issue.
He mentioned that I was free to select any topic for the paper, possibly
expressing my specific experience related to the topic. I selected ‘Flexibility’ as the
topic to write about. The paper title became Flexibility development—a personal
retrospective. It integrated several of my published papers, emphasizing my
personal views and experience on the topic.
Since the publication of this paper, the road toward writing the book became
clear to me. As the papers I had published dealt with many topics, I started to write
building blocks for the book, in terms of papers integrating my main research
topics. The book had to provide the relationships between them.
At the beginning of the 1980s, Quality started to become an important topic in
industry. That was before Quality Management or Total Quality Management
emerged as vital strategies. At that period, I started my research on quality-oriented
organizations through a survey, investigating Quality Assurance Systems in Israeli
industries. I selected a particular industrial area, the electric and electronics
industry, because this area made use of the most highly developed technologies,
which are sensitive to quality aspects. Hence, it seemed the most promising area for the development of
quality-oriented organizations and for application of quality methods. However,
even in that area, we found that managers were not sufficiently aware of the
economic opportunities offered by quality systems. They acted toward their development
mainly because of the pressure exerted upon them by strong buyers.
Since that period, I continued intermittently to study and research this topic
using various techniques such as Design of Experiments and later on Quality
Function Deployment. I also published invited chapters on Total Quality
Management in several encyclopedias.
In the meantime, a more exciting topic emerged, Flexibility. For many years,
flexibility has been my main topic of research. It is a complex and challenging topic
with never-ending research possibilities. It is important in the human body and,
according to recent research, it seems to be important in brain performance
as well. In manufacturing and other man-made systems, such as information,
logistics, or supply chain systems, there is consensus that flexibility means
adaptation to change.
The early approaches to flexibility research were associated with Flexible
Manufacturing Systems (FMSs). These early approaches to flexibility had a
bottom-up structure related to a manufacturing hierarchy, i.e., from basic flexibility
types, such as ‘machine flexibility,’ to system flexibility such as ‘volume flexibil-
ity,’ or ‘mix flexibility.’ My first research on flexibility used a bottom-up structure
described in two papers published at the end of the 1980s. Both papers used Petri nets
to model flexibility in manufacturing systems. By the end of the 1990s, the
importance of flexibility gained its main recognition from a strategic perspective.
Accordingly, my next research projects were devoted to flexibility-oriented
strategies, through a top-down approach. Many of these projects used Quality
Function Deployment.

Chapter Organization and Topical Coverage

The book has two parts and six chapters.


Part I is about strategies. It is an overview of Quality and Flexibility as linked to
my professional development and comprises three chapters.
Chapter 1 has one section that includes some general Definitions of Strategy and
its importance. Chapter 2 describes Quality-oriented Strategies and contains seven
sections. Chapter 3 describes Flexibility-oriented Strategies and comprises six
sections.
Part II is about techniques. It describes several of my published papers that
apply multipurpose techniques for assessing quality and flexibility and contains
three chapters as well.

Chapter 4 is about Design of Experiments (DOE) and has five sections.


Chapter 5 describes Petri Nets and contains six sections. Chapter 6 is about Quality
Function Deployment (QFD) and contains six sections as well.
At the end of each chapter, there is a list of references.

Tel Aviv, Israel Miryam Barad


Contents

Part I Strategies
1 Definitions of Strategies . . . 3
  References . . . 4
2 Quality-oriented Strategies . . . 5
  2.1 Introduction . . . 5
  2.2 Quality Definitions . . . 6
  2.3 Quality Development in the US (Before the Quality Revolution in the Mid-1980s) . . . 7
  2.4 Quality Organization in Japan . . . 10
    2.4.1 Implementation of the American Quality Methods . . . 10
    2.4.2 The Integrated Japanese Quality System . . . 11
  2.5 Total Quality Strategies After the Quality Revolution—Universal Views . . . 13
    2.5.1 USA—Malcolm Baldrige National Quality Award (MBNQA) . . . 13
    2.5.2 Europe . . . 16
    2.5.3 Australia . . . 17
    2.5.4 Cultural/Geographical Styles—East Versus West (China Versus Australia) . . . 18
  2.6 Quality Management Theories and Practices—Usage Aspects . . . 22
    2.6.1 Logistics Versus Manufacturing . . . 22
    2.6.2 Contribution of QM Tools and Practices to Project Management Performance . . . 24
  2.7 Soft Versus Hard Quality Management Practices . . . 27
  References . . . 28
3 Flexibility-oriented Strategies . . . 31
  3.1 Introduction . . . 31
  3.2 Flexibility in Manufacturing Systems . . . 33
    3.2.1 Flexible Manufacturing Systems (FMSs) . . . 33
    3.2.2 Classifications of Flexibility in Manufacturing Systems . . . 35
    3.2.3 Flexibility Types and Measures (Dimensions/Metrics) . . . 36
  3.3 Flexibility in Logistic Systems . . . 42
  3.4 Flexibility of Generic Objects . . . 44
    3.4.1 Clouds Representation . . . 44
    3.4.2 Flexibility Aspects and Analysis of Several Generic Objects . . . 46
    3.4.3 Context Perspective—Information, Manufacturing (or Service) . . . 50
  3.5 Flexibility in Supply Chains . . . 52
  3.6 Strategic Flexibility . . . 53
  References . . . 55

Part II Techniques
4 Design of Experiments (DOE) . . . 61
  4.1 Introduction . . . 61
    4.1.1 Guidelines . . . 61
    4.1.2 Fisher's Basic DOE Principles . . . 62
    4.1.3 Factorial Experiments . . . 62
  4.2 Impact of Flexibility Factors in Flexible Manufacturing Systems—A Fractional Factorial Design of a Simulation Experiment . . . 64
    4.2.1 The Selected Factors (Independent Variables), Their Levels and the Simulation Design . . . 64
    4.2.2 Response (Dependent Variable) . . . 66
    4.2.3 Simulation Analysis . . . 66
    4.2.4 Some Results . . . 67
    4.2.5 Concluding Remarks . . . 67
  4.3 Prior Research Is the Key to Fractional Factorial Design—A Fractional Factorial Design of a Physical Experiment . . . 68
    4.3.1 The Selected Factors (Independent Variables) and Their Levels . . . 68
    4.3.2 Response (Dependent Variable) . . . 69
    4.3.3 Choice of the Design . . . 69
    4.3.4 Analysis of the First Data Set . . . 71
    4.3.5 Analysis of the Second Data Set and Some Concluding Remarks . . . 71
  4.4 Flexibility Factors in Logistics Systems—A Full Factorial Design to Investigate a Calculated Complex Deterministic Expression . . . 72
    4.4.1 The Design and Its Rationale . . . 73
    4.4.2 Analysis and Results . . . 74
    4.4.3 Interpretation of the Interaction Results . . . 75
  4.5 Taguchi's Experimental Design Techniques . . . 75
    4.5.1 A Quality Engineering Strategy . . . 75
    4.5.2 Classical DOE Versus Taguchi's Parameter Design Structure . . . 77
    4.5.3 Parameter Design Experimental Structure (Inner and Outer Arrays) . . . 78
    4.5.4 Signal to Noise Ratios (S/N) . . . 78
    4.5.5 Summary . . . 79
  References . . . 79
5 Petri Nets . . . 81
  5.1 Introduction . . . 81
  5.2 Petri Nets and Their Time Modeling . . . 82
  5.3 Decomposing TPNs of Open Queuing Networks . . . 84
  5.4 TPN Based Expected Station Utilization at Steady State . . . 87
    5.4.1 Modeling Disturbances . . . 89
  5.5 TPNs as a Verification Tool of Simulation Models at Steady State . . . 90
  5.6 Weaving Processes—An Additional Application of Timed Petri Nets . . . 94
    5.6.1 Manufacturing Assumptions Describing the Weaving Process . . . 94
  References . . . 98
6 Quality Function Deployment (QFD) . . . 101
  6.1 Introduction . . . 101
  6.2 Quality Function Deployment—The Original Version . . . 101
  6.3 Quality Function Deployment—An Enhanced View . . . 103
  6.4 Linking Improvement Models to Manufacturing Strategies . . . 104
    6.4.1 Deployment of the Improvement Needs . . . 104
  6.5 Strategy Maps as Improvement Paths of Enterprises . . . 108
    6.5.1 Building a Strategy Map . . . 112
    6.5.2 A Detailed Case . . . 113
  6.6 A QFD Top–Down Framework for Deploying Flexibility in Supply Chains . . . 117
  References . . . 120
Abbreviations

Acad Manage Exec  Academy of Management Executive
Account Horiz  Accounting Horizons
Ann Oper Res  Annals of Operations Research
AM  Applied Mathematics
Cal Manage Rev  California Management Review
Decision Sci  Decision Sciences
Eur J Marketing  European Journal of Marketing
Eur J Oper Res  European Journal of Operational Research
Harv Bus Rev  Harvard Business Review
Int J Agile Manuf Sys  International Journal of Agile Manufacturing Systems
Int J Comp Integ M  International Journal of Computer Integrated Manufacturing
Int J Flex Manuf Sys  International Journal of Flexible Manufacturing Systems
Int J Gen Sys  International Journal of General Systems
Int J Prod Oper Prod Man  International Journal of Operations and Production Management
Int J Prod Econ  International Journal of Production Economics
Int J Prod Res  International Journal of Production Research
Int J Proj Manag  International Journal of Project Management
J Acad Marketing Sci  Journal of the Academy of Marketing Science
J Bus Strategy  Journal of Business Strategy
J Inform Technol  Journal of Information Technology
J Intell Manuf  Journal of Intelligent Manufacturing
J Manage Stud  Journal of Management Studies
J Marketing  Journal of Marketing
J Oper Res Soc  Journal of the Operational Research Society
JOM  Journal of Operations Management
Manage Decis  Management Decision
Manag Rev  Management Review
Manage Sci  Management Science
Marketing Sci  Marketing Science
Oper Res  Operations Research
Psychol Bull  Psychological Bulletin
Qual Prog  Quality Progress
Res Policy  Research Policy
Strategic Manage J  Strategic Management Journal
Transport Res  Transportation Research
List of Figures

Fig. 2.1  Deming's cycle of improvement (PDCA) . . . 11
Fig. 2.2  Company-Wide Quality Control (CWQC) . . . 13
Fig. 2.3  Baldrige Award criteria framework (1993) . . . 15
Fig. 2.4  Baldrige Performance Excellence Program (Overview 2016–2017) . . . 15
Fig. 2.5  Porter's generic competitive strategies . . . 16
Fig. 2.6  Framework of the Quality prize in Europe (1991) . . . 17
Fig. 3.1  A planning decision sequence by hierarchy and short/medium time horizon . . . 34
Fig. 3.2  A logistic system with varying levels of trans-routing flexibility . . . 43
Fig. 3.3  Analysis of the flexibility of an object [Phase 1—designed object] . . . 45
Fig. 3.4  Analysis of the flexibility of an object [Phase 2—object in use] . . . 46
Fig. 4.1  Taguchi loss function and the classical tolerance interval . . . 76
Fig. 5.1  An ordinary Petri Net . . . 84
Fig. 5.2  An open queuing network . . . 85
Fig. 5.3  Disturbances as TPN models—Versions A and B . . . 89
Fig. 5.4  PN modeling of an AGV: Graphical representation and Incidence Matrix . . . 92
Fig. 6.1  House of Quality (HOQ) . . . 102
Fig. 6.2  QFD conceptual model for deploying the strategic improvement needs of an enterprise to its improvement actions by areas . . . 105
Fig. 6.3  The top-down, level by level, recursive perspective . . . 111
Fig. 6.4  Strategy map of the pharmaceutical firm . . . 116
Fig. 6.5  QFD conceptual model for deploying flexibility in supply chains . . . 118
Part I
Strategies
Chapter 1
Definitions of Strategies

Strategy (from Greek) is a high-level plan to achieve one or more goals under
conditions of uncertainty. In the sense of the ‘art of the general’, which included
several subsets of skills such as tactics, siegecraft and logistics, the term came
into use in the 6th century AD in Eastern Roman terminology. It was translated into
Western languages only in the 18th century. From then until the 20th century, the
word ‘strategy’ came to denote ‘a comprehensive way to try to pursue political
ends, including the threat or actual use of force, in a dialectic of wills’ in a military
conflict, in which both adversaries interact.
Strategy (of an organization) generally involves setting goals, determining
actions to achieve the goals, and mobilizing resources to execute the actions.
A strategy describes how the ends (goals) are to be achieved by the means
(resources).
Strategy is important because the resources available to achieve these goals are
usually limited. Generally, the senior leadership of an organization determines its
strategy. Strategy can be intended or can emerge as a pattern of activity as the
organization adapts to its environment or competes.
Mintzberg (1978) defined strategy as ‘a pattern in a stream of decisions’ to
contrast with a view of strategy as planning. Kvint (2009) defines strategy as ‘a
system of finding, formulating and developing a doctrine that will ensure long-term
success if followed faithfully’.
Strategy typically involves two major processes: formulation and implementa-
tion. Formulation involves analyzing the environment or situation, making a
diagnosis, and developing guiding policies. It includes such activities as strategic
planning and strategic thinking. Implementation refers to the action plans taken to
achieve the goals established by the guiding policy (Mintzberg 1996).
In his book, ‘What is Strategy and does it Matter?’ Whittington (2000) described
four generic approaches: Classical, Evolutionary, Processual and Systemic. The
first two aim at profit maximization, while the last two are more pluralistic.
Whittington differentiates between two strategy dimensions: outcomes (what is
strategy?) and process (how is it done?).

Here, we shall present two strategies, Quality-oriented Strategies and
Flexibility-oriented Strategies, in the order of their association with my professional
evolution and matching the timing of their global importance.
Hence, we shall start with Quality-oriented Strategies.

References

Kvint V (2009) The global emerging market: strategic management and economics. Routledge,
London
Mintzberg H (1978) Patterns in strategy formation. Manage Sci 24(9):934–948
Mintzberg H (1996) The strategy process: concepts, contexts cases. Prentice Hall, New York
Whittington R (2000) What is Strategy—and does it matter? 2nd edn. Cengage Learning, London
Chapter 2
Quality-oriented Strategies

This chapter introduces quality-oriented strategies through the concepts of the
well-known Total Quality Management (TQM). It presents TQM's core perception
of quality and its historical development, emphasizing its underlying principles and
philosophy (continuous improvement and the customer/supplier quality chain) and its
system-oriented approach, with the infrastructure required to support the
improvement process. It also briefly describes Quality Organization in Japan and
attempts to compare cultural/geographical styles of Quality Management (QM),
East versus West. It shows usage aspects of QM theories and practices, including
soft versus hard practices.

2.1 Introduction

In the worldwide changing environment of the 1980s, organizations strove for
competitive advantages. In the 1970s and early 1980s, the economic advantage
gained by Japanese firms through their strategic approach to quality made firms in
the Western world recognize the tremendous competitive potential of quality. This led to
the quality revolution, the recognition of quality by top management as a ‘strategic
competitive edge’, and to the partnership between quality and management,
expressed by Total Quality Management.
TQM is a management practice/philosophy that emerged in the mid-1980s as a
new way of thinking aimed at achieving competitive advantages in the marketplace
through quality improvement. It has evolved from a narrow focus on
quality-oriented tools and techniques to embrace system structure and behavioral
methods intended to achieve an ‘overall improvement’ of organizational
performance. Within TQM, the term ‘quality’ also includes ‘time’ as a universal
quality characteristic of products and services. Reducing cycle time or improving
productivity is another aspect of quality improvement under the auspices of TQM.


The TQM approach to improving business performance focuses on customer needs
and satisfaction, achieved with the participation of all employees.
Within the quality function itself, a stage-by-stage evolution of methods and
techniques could be observed. By the 1960s, this organic development reached (in
the Western world) a stage known as ‘total quality control’. This stage was dis-
tinguished by breadth and functional integration as expressed by the word ‘total’,
but still limited in scope, as imposed by the name ‘quality control’. By the 1970s,
the Japanese developed and practiced their special version of quality control called
‘company-wide quality control’. By mid 1980s, in the Western world, the separate
goals of quality and management got closer.
The operational means, which provided the binding ties between quality and
management, were: (a) The quality tools and techniques used as ‘universal’ tools
for management improvement. (b) Management’s perception of the importance of
time transformed into a ‘universal’ product quality characteristic.
The quality tools (e.g. statistical process control and cause-and-effect diagrams)
were able to support company-wide improvement programs. They reached areas not
directly related to quality such as sales, purchasing, invoicing, finance, products,
training and service.
Time is an important characteristic of the processes involved in designing,
producing and marketing products. Reducing performance time (of any kind) was given
high priority by management, simply because less time meant more quantity. By
using a broad interpretation of product quality characteristics so that they include
‘timeliness’, any conflicts arising between improvement of quality and improvement
of productivity (reducing performance time) are resolved.

2.2 Quality Definitions

Here are some basic quality definitions.


Quality gurus (such as Juran, Deming, Feigenbaum, Crosby and Taguchi) define
quality from three viewpoints: customer, producer and society. Product quality is
described as ‘fitness for use’ (Juran 1991) or ‘conformance to requirements’
(Crosby 1979); it has ‘to satisfy the needs of customer present and future’ (Deming
1986). As seen, Deming is also concerned with the time-oriented ability of a
product to perform satisfactorily. Feigenbaum’s definition is more elaborate: ‘the
composite product and service characteristics of marketing, engineering, manu-
facture and maintenance, through which the product and service in use will meet the
expectation by the customer’ (Feigenbaum 1991). This definition is an expression
of the ‘total quality control concept’, stressing an integrative view of quality, as
reflected on other functions; also, the notion of product quality is extended to
quality of service.
A different way of defining quality, providing a new way of thinking, is
Taguchi’s approach: ‘quality is the loss imparted to the society from the time a
product is shipped’ (Taguchi 1986). This approach considers costs and benefits of

quality improvement projects from an overall and interactive economic view rep-
resenting the ‘society’ (economics of producers and customers). According to
Taguchi’s philosophy, a producer that fails to invest in a prevention project, even
when the future customer costs it would avoid exceed the project’s investment cost,
will later incur a much higher loss in terms of lost market share.

2.3 Quality Development in the US (Before the Quality Revolution in the Mid-1980s)

The quality definitions above reflect the organizational development of quality over
the years in different parts of the world.
The historical development of total quality management in the USA is rooted in
the evolution of the quality function over the years before the management
recognition of quality as a strategic competitive edge. The three main evolutionary
stages are Inspection, Statistical Quality Control and Quality Assurance (Barad
1996).
Inspection
Inspection of final manufactured products is related to the development of mass
production, and as such, it started before the end of the nineteenth century. It
focused on conformance of production to product specifications. The goal of
inspection was to separate ‘conforming’ from ‘non-conforming’ units
through sieving.

Statistical Quality Control


Statistical process control and statistical sampling procedures developed next. The
goal of statistical process control is to control the quality of products during their
manufacturing (on-line) and thus (by stopping the process and readjusting it as
necessary) to prevent the manufacturing of non-conforming units.
Commonly, process control activities were carried out under a shared responsibility
of the production and the engineering departments. Statistical sampling procedures,
intended to reduce inspection costs, constituted another aspect of the statistical
quality control methodology.
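As standard background on statistical process control (general textbook material, not specific to this book): on a Shewhart control chart, the limits for the mean of samples of size $n$ are conventionally set three standard errors around the process mean,

$$\mathrm{UCL} = \mu + 3\,\frac{\sigma}{\sqrt{n}}, \qquad \mathrm{LCL} = \mu - 3\,\frac{\sigma}{\sqrt{n}},$$

so a sample mean falling outside $[\mathrm{LCL}, \mathrm{UCL}]$ signals that the process may have drifted and should be stopped and readjusted.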
Quality Assurance
The quality methods gradually grew into a body of knowledge comprising wider
topics, such as ‘quality costs’, ‘total quality control’ and ‘zero defects’, known
together as ‘quality assurance’ methods.
Quality costs represented the first formalization of the economic aspects of
quality tasks. Juran introduced this topic in the first edition of his Quality Control
Handbook, published in 1951. The approach trades off ‘unavoidable’ costs
(stemming from ‘appraisal’ of product quality and from investments preventing the
manufacturing of defective units) against ‘failure’ costs, which are considered
‘avoidable’. The latter are the costs incurred by the defective units produced, some
of which are discovered prior to shipment to customers, causing internal failures, while
others reach the customers, thus incurring external failures that may also
result in loss of reputation. The model prescribes a cost minimizing ‘optimal’
quality level, as appropriate to a specific company, with lower than 100%
non-defective (conforming) units, implying an ‘optimal higher than zero’ per-
centage of defective units. This ‘optimal’ percentage of defective units is expected
to vary with the type of company, its size and type of industry, and in principle it can
be low enough to be expressed in terms of parts per million. This classical model used to
be generally accepted, but it was in direct opposition to the theories advocating
‘zero defects’ as an optimal conformance level. Hence, it became controversial and
was abandoned, although its logic was undeniable. Within TQM, there is renewed use
of quality costs as an important quality economics tool.
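To make the classical trade-off concrete, here is a stylized formulation (an illustration with assumed cost forms, not a model from the original text). Let $q \in (0,1)$ be the fraction of conforming units, with prevention and appraisal costs growing steeply as conformance approaches 100% and failure costs falling linearly:

$$C(q) = a\,\frac{q}{1-q} + b\,(1-q), \qquad 0 < a < b .$$

Setting $dC/dq = a/(1-q)^2 - b = 0$ yields the cost-minimizing conformance level $q^{*} = 1 - \sqrt{a/b}$, which is strictly below 100%: exactly the ‘optimal higher than zero’ percentage of defectives that the classical model prescribes.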
By the 1960s, the evolution of the quality organization reached a stage known
among quality practitioners and analysts as total quality control. This expression is
from Feigenbaum’s book, which appeared in 1961. It presented a first version of
quality integration within a company, and its goal was to achieve quality through
cross-functional coordination and collaboration. This stage was characterized by breadth
and functional integration, as expressed by the word ‘total’, but was still relatively
limited in scope, as imposed by the name ‘quality control’. Quality was presented as
a ‘system’, within which the traditional (on-line) quality control activities on the
shop floor were expanded (off-line) to include preparation activities such as quality
design of new products and control of raw materials. Feigenbaum’s pioneering
principle was that quality responsibility had to be shared by all departments.
Zero Defects expressed an entirely different aspect of quality development,
ultimately intended to achieve workers’ participation in reducing the alarming level
of defectives among American products during the 1960s and the 1970s. The
zero defects ‘movement’ started with Crosby’s book ‘Quality is Free’, published in
1979, which was a great commercial success. In Crosby’s view, quality is free
because the cost of prevention will always be lower than the costs of appraisal
(detection) and failure (correction, scrap, warranty, reputation).
Quality Assurance (QA) Systems in Israeli industries
At the beginning of the 1980s, as quality started to become an important topic in
industry, we began to investigate Quality Assurance Systems in Israeli industries
(Barad 1984). We selected a particular industrial area, the electric and electronics
industry, because this area made use of the most highly developed technologies,
which are sensitive to quality aspects. Hence, it seemed the most promising area for the development of
quality-oriented organizations and for the application of quality methods. Besides the
factual aspects of the survey, we were also interested in collecting and analyzing the
opinions and views of managers and heads of quality assurance departments
regarding the existing quality assurance systems in their companies, and in comparing
them to our evaluation of the same systems.

We chose to describe the quality assurance systems by three types of variables,


representing status, structure and activities. The status of the quality system in the
organization is an expression of the importance attributed to quality by manage-
ment. The position of the quality manager and the relative size of the unit were
criteria defining status. The functional capability of the unit depends on its internal
organization, its structure. We chose to express it by the distribution of quality
assurance personnel by type of jobs. A high percentage of quality personnel in
testing and inspection versus a low percentage in planning and analysis indicates a
system emphasizing detection and sieving of the defects rather than their elimi-
nation. The major quality activities in our survey were: specifications, designing
inspection and testing, assurance of inspection, testing and control, quality costs,
data analysis and the human factor.
The survey
The sampling population comprised manufacturing enterprises with 50 or
more employees. Our final sample consisted of 32 companies, representing a
response rate of 40%. We interviewed two people in each company, a manager (not
QA) and the head of the QA unit. Besides questions covering factual data on the
system, we asked both the managers and the QA heads (separately) a group of
identical questions on their views regarding the activities of the QA units and on the
factors likely to affect the development of the quality system.
The statistical analysis
The aim of the statistical analysis was to detect significant influences of external
factors (size of company, geographical region, type of ownership, use of
quantitative methods, types of final products, types of buyers, etc.) and also to find out
how these factors affect one another.
The techniques we used were multiple linear regression and correlation, as well
as nonparametric correlation.
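As a brief illustration of the kind of analysis described (a sketch with invented data; the variable names are hypothetical, not the study's actual measurements), multiple linear regression and a nonparametric (Spearman rank) correlation can be computed as follows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 32                                    # sample size, as in the survey

# Hypothetical external factors and response, one row per company
company_size = rng.integers(50, 500, n)   # number of employees
defense_share = rng.uniform(0, 1, n)      # share of output sold to strong buyers
activity_level = (2 + 0.002 * company_size + 1.5 * defense_share
                  + rng.normal(0, 0.5, n))  # evaluated QA activity level

# Multiple linear regression of the activity level on the external factors
X = np.column_stack([np.ones(n), company_size, defense_share])
beta, *_ = np.linalg.lstsq(X, activity_level, rcond=None)
print("regression coefficients (intercept, size, buyers):", beta)

# Nonparametric (Spearman rank) correlation between a factor and the response
rho, p = stats.spearmanr(defense_share, activity_level)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```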
Findings
1. Findings based on factual data
Status: There were great variations in the relative size of the QA unit and no
factor had a significant effect on the relative size. About 60% of the QA heads
reported directly to the managing director. The reporting procedure was sig-
nificantly affected by the percentage of products (by value) supplied to the
ministry of defence or the aeronautical industry.
Structure: About 80% of quality personnel were employed in ‘testing and
inspection’.
Major activities: ‘Assurance of Inspection testing and control’ and
‘Specifications’ were the activities which attained the highest performance level.
‘Data Analysis’ and ‘Quality costs’ had the lowest performance level. The
companies whose QA heads reported directly to the managing director had a
significantly higher level of activity performance.

2. Facts versus opinions


We obtained two average evaluations of each activity, one based on managers
and the other based on QA heads. Comparing our factual evaluations with the
evaluations of these two groups, we found that the QA heads’ evaluations
were closer to the factual evaluations.
We tested the measure of agreement between the two groups and obtained a
significant difference of opinions about ‘Assurance of Inspection testing and
control’ and ‘Recording and reporting’, which received a much lower evaluation
from managers. This implies that managers considered these activities the main
tasks of QA and therefore judged them more severely. ‘Data Analysis’ was
evaluated significantly higher by managers, which suggests that they did not
consider this activity an important task of QA and were therefore more
lenient towards it.
The two groups were asked to point out the three most important factors
affecting (positively or negatively) the development of the QA systems. The
most important positive factor according to the QA heads was ‘Management
Commitment’, while according to managers it was ‘Buyers reaction’. The two
groups were in agreement regarding the two most negative factors. These were
‘Lack of professional manpower’ and ‘Lack of cooperation among
departments’.
Conclusions
Both facts and opinions implied that management commitment was the most
important factor in the development and proper functioning of the quality systems.
But managers were not sufficiently aware of the economic potential of these
systems. They acted towards their development only because of the pressure exerted
upon them by strong buyers. They did not regard ‘data analysis’ and ‘quality costs’
as important quality activities. As a result, these were the most neglected activities.

2.4 Quality Organization in Japan

2.4.1 Implementation of the American Quality Methods

One of the most successful American export areas after World War II was Quality
Control methods, particularly well received in Japan. Until the mid-1960s, the Japanese
absorbed the principles and philosophies of quality management and quality
economics introduced by the well-known quality control ambassadors such as Deming,
Juran and Feigenbaum. Deming presented to managers a series of seminars
focusing on statistical process control.
In his well-known fourteen points, besides stressing the importance of process
quality ‘improvement’ and the advantages of the statistical methods over mass
inspection, Deming also stressed the need to preserve workers’ pride and the
importance of ‘training’ for stimulating workers’ motivation to improve quality. He
considered management commitment and leadership crucial to achieving quality
improvement. Deming’s cycle of improvement, PDCA (Plan, Do, Check, Act), is a
methodology intended to support his fundamental concept of ‘continuous
improvement’ (see Fig. 2.1). It consists of defining the problem objectives and
planning the necessary data collection (Plan), gathering the data (Do) and analyzing
the results (Check). If these are all right, the next step is implementing them (Act);
if not, the next step is starting anew.
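The cycle's control flow can be sketched in a few lines of code (an illustration only; the four callables are hypothetical placeholders for the activities described above):

```python
def pdca(plan, do, check, act, max_cycles=10):
    """Minimal sketch of Deming's Plan-Do-Check-Act improvement cycle."""
    for cycle in range(1, max_cycles + 1):
        objectives = plan()        # Plan: define objectives and data collection
        data = do(objectives)      # Do: gather the data
        if check(data):            # Check: analyze the results
            act(data)              # Act: results are all right, implement them
            return cycle
        # results not satisfactory: start anew with a revised plan
    return max_cycles
```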
Juran emphasized the necessity of a ‘quality system’ approach and the impor-
tance of ‘managing’ quality. He viewed the management role as defining quality
goals, providing recognition and communicating results. He encouraged analysis of
quality costs, which could offer good opportunities for improving quality.
Feigenbaum brought his total quality control principles, the tools for system
integration.
These gurus had some themes in common. They believed that the ‘management’
and the ‘system’ are the cause of poor quality, that ‘commitment’ and ‘leadership’
by top management are essential ingredients for successful quality programs, that
quality programs need organization-wide ‘long term efforts’ and that ‘quality
precedes timeliness’.

2.4.2 The Integrated Japanese Quality System

The total quality control methods described in Feigenbaum’s book were applied in
Japan by the 1970s under the name Company-Wide Quality Control (CWQC). This
enhanced Japanese version of total quality control is actually the first
materialization of TQM.
The quality tools and techniques imported from the West were assimilated by
top management and disseminated downwards. This movement did not produce
satisfactory participation at the lower levels and resistance to applying the new
methods occurred. The Japanese answer to the lack of work motivation at the
bottom was the creation of Quality Control circles. These were voluntary, homo-
geneous small groups intended to open implementation channels for quality
improvement.

[Fig. 2.1 Deming’s cycle of improvement (PDCA): Plan → Do → Check; if the check is OK → Act, otherwise return to Plan]

Among the Japanese quality gurus who extensively contributed to this integrated
approach were Kaoru Ishikawa and Genichi Taguchi. Ishikawa gathered simple
graphical tools to be used by members of the Quality Control circles. The seven
basic tools for employees participating in these improvement programs are: Pareto
diagrams, flow charts, cause-and-effect diagrams, histograms, check sheets,
scatter diagrams and quality control charts. Deming’s cycle of improvement
provided the logical connections between these tools, some of which pertain to
‘plan’ and others to ‘do’, ‘check’ or ‘act’.
Taguchi developed methods promoting the use of Design of Experiments to
improve product and process quality by reducing variability. It was important that
these methods be applied during the development stage of products and processes,
the ultimate result being products exhibiting on-target, low-variance quality
characteristics (features that, according to Taguchi’s philosophy, made products
attractive to customers) and reduced costs. Two essential components make up
Taguchi’s strategy aimed at ‘selling’ design of experiments:
1. Providing ‘economic’ motivation for management to use design of experiments,
in terms of a loss function expressing customers’ discontent with products
whose quality characteristics are not on target and/or exhibit variations.
2. Offering ‘easy-to-use’ instructions for implementing the methodology of design
of experiments, originally an elitist western statistical method, known to
statisticians and very few engineers.
Taguchi’s loss function is related to the deviation of a quality characteristic from
its target value. It can be expressed in terms of its variance, showing that the higher
the variance, the higher the loss. Hence, reducing performance variance means
reducing loss. As design of experiments is an effective tool for reducing performance
variance, it is also, according to Taguchi’s logic, a tool for reducing loss and
attracting customers (see also Taguchi’s experimental design techniques in Part II of
this book).
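In symbols (a standard textbook rendering of Taguchi's quadratic loss, given here for concreteness): for a quality characteristic $y$ with target value $T$,

$$L(y) = k\,(y - T)^2, \qquad E[L] = k\left(\sigma^{2} + (\mu - T)^{2}\right),$$

where $k$ is a cost coefficient, $\mu$ the process mean and $\sigma^{2}$ its variance; the expected loss grows with the variance and with the squared deviation of the mean from target, so reducing variance directly reduces loss.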
Both types of techniques, the simple graphical techniques used by Japanese
workers in quality control circles and the sophisticated design of experiments used
by Japanese engineers, brought about the great quality improvement of the Japanese
products and the improved efficiency of their manufacturing processes.
Figure 2.2 displays three themes of the integrated quality system: Main
Objective, Tactics and Quality Techniques. Each theme is viewed from three
perspectives: Workers, Customers and Management. From the workers’ perspective, the
main quality system objective is achieving work motivation. From the customers’
perspective, the product quality properties have to fit the specifications, while from
the management perspective the most important objectives are attracting customers
and reducing costs. The workers-oriented tactic is Quality Control circles, while the
appropriate tactic for both customers and management is low variability of the
product properties, e.g. low ‘loss’ according to Taguchi. As mentioned above,
Ishikawa supplied the workers-oriented techniques within the quality control circles
(the seven basic tools) and Taguchi promoted the technique for reducing variability
of quality characteristics and thus reducing loss (design of experiments).

2.5 Total Quality Strategies After the Quality Revolution—Universal Views

A corporate strategy is the pattern of decisions in a company that determines its
goals, produces the policies and plans to achieve those goals, and defines the kind of
economic and human organization it intends to be. In the mid-1980s, quality as a
strategy was a new idea. Traditionally, strategic ‘quantity’ goals always preceded
‘quality’ goals. As mentioned before, the recognition of quality as a strategic
competitive edge by managers in the West came because of the extensive
market-share losses to the Japanese in the 1960s and 1970s.

2.5.1 USA—Malcolm Baldrige National Quality Award (MBNQA)

At the core of Total Quality Management (TQM) philosophy is Deming’s


pioneering concept of ‘continuous improvement’ (initially adopted in Japan as
kaizen), implying constant change: ‘Create constancy of purpose towards
improvement of product and service. Adopt the new philosophy’ (Deming 1986).
The Deming Prize, established in Japan in 1951, defined the first, self-auditing,
process of quality improvement. The name TQM is related to the first American
quality prize, the Malcolm Baldrige National Quality Award (MBNQA), instituted
in 1987 in the USA to encourage the implementation of the same concept. It is
worth mentioning that this happened 36 years after the Deming Prize.

Theme               Workers                       Customers                      Management
Main Objective      Work motivation               Product properties on target   Attracting customers, reducing costs
Tactics             Quality Control circles       Reducing variability, e.g. reducing loss (customers and management)
Quality Techniques  Ishikawa's seven basic tools  Design of Experiments, Taguchi style (customers and management)

Fig. 2.2 Company-Wide Quality Control (CWQC). Reproduced from Barad (1996)

The major rationale for the creation of the Malcolm Baldrige Award was
foreign competition. Realizing that customers desired high quality, the original aims
of the Award were to support national efforts to improve quality and thus satisfy
customers’ desires. Soon it became clear that the principle of satisfying customers’
desires could be applied not only to separate companies but within the same
company as well. A manufacturing department is the customer of the engineering
department that produces the design, whose quality has to meet the manufacturing
requirements. For example, tolerances are product requirements, achieved through
the designed product. In each section and department there are various series of
customers and suppliers and all are part of a quality chain. Thus, the ‘quality chain’
became another fundamental concept of TQM (Barad 2002).
The tremendous impact of the Award on US and Western industry and later on
global industry can be attributed to its excellent structural quality framework,
enabling companies to assess themselves against it. The framework is a product of
the National Institute of Standards and Technology (NIST).
Based on the early MBNQA framework, TQM consists in essence of:
1. Provision of high quality products/services, to satisfy customer wishes (a
dynamic goal achieved through a continuous quality improvement process).
2. Achievement of high total quality in products and processes at low cost
(managing process quality so as to increase productivity, gain suppliers’
collaboration and reduce waste).
3. Management of total quality through involvement of all employees, measure-
ment of progress and communication of results.
Figure 2.3 describes the early MBNQA framework (1993). It has (1) a ‘driver’ (top
management leadership), (2) a ‘system’ whose elements are management of process
quality, human resources, strategic quality planning and information and analysis,
(3) a ‘goal’ focusing on customer satisfaction and market share gain and (4) mea-
sures of progress in terms of quality and operational results.
The Baldrige quality prize is awarded each year in a variety of sectors. After a
few years, to reflect the evolution of the field of quality from a focus on product,
service and customer quality to a broader, strategic focus on overall organizational
quality, the name of the prize was changed to the Baldrige Performance Excellence
Program. The framework evolved over the years. According to Fig. 2.4 (2016–
2017), it consists of seven integrative ‘critical aspects’ (in some academic papers,
see e.g. Samson and Terziovski (1999), they are called ‘empirical constructs’):
‘leadership’, ‘strategy’, ‘customers’, ‘workforce’, ‘operations’, ‘results’ and
‘measurement, analysis and knowledge management’. According to the results of
the much-cited research paper mentioned above, only three among the empirical
constructs (leadership, people management and customer focus) had a significant
impact on performance.
In recent years, many of the Award recipients belonged to the health care,
service, education and small business sectors.

[Figure: the Driver (senior executive leadership) drives a System (management of process quality; human resource development and management; strategic quality planning; information and analysis) toward the Goal (customer focus and satisfaction), with Measures of progress (quality and operational results)]
Fig. 2.3 Baldrige Award criteria framework (1993). Source NIST 1993: 33

[Figure: within an Organizational Profile, seven integrated categories: Leadership, Strategy, Customers, Workforce, Operations and Results, supported by Measurement, Analysis, and Knowledge Management, all resting on Core Values and Concepts]
Fig. 2.4 Baldrige Performance Excellence Program (Overview 2016–2017). Source NIST 2016–2017

Porter’s generic competitive strategies


A competitive strategy is a broad formula for how a business is going to compete
(Porter 1985). There are two essential types of competitive advantages: (1) differ-
entiation—product driven (unique and superior value to the buyer in terms of
product quality and/or after sale service) and (2) lower cost—process driven.
Generic quality strategies combine the above two types of competitive
advantages, each achieved by one of two types of competitive strategies: (1) Total Quality
Management and (2) Time Based Management (see Fig. 2.5).
Differentiation is achieved by high product design quality and by reduced time to
market. Lower costs are achieved by reduced level of non-conformance and by
reduced cycle time and inventories. In contrast to the quality gurus who believed
quality precedes timeliness, the TQM philosophy as expressed by the MBNQA does
not impose a hierarchical discrimination between these two notions, Total Quality
Management per se and Time Based Management (TBM). Both represent quality in
the broad sense of the word. The improvement priority (product or process) is to be
dynamic, i.e. determined by the most critical improvement needs of a company, at
any given point in time, seen from a customer and business view.

2.5.2 Europe

To create a model of excellence for business across Europe, in 1991 the European
Foundation for Quality Management instituted a European Quality Award for
business (see Fig. 2.6). The model distinguished between two major types of cri-
teria, ‘results’ and ‘enablers’ (the means to achieve results).
The results concerned ‘customers’ satisfaction’, ‘people satisfaction’ (employ-
ees) and ‘impact on society’ (including meeting the demands of shareholders)
ultimately leading to excellence in ‘business results’. The enablers are five criteria:
‘leadership’, which drives ‘human management’, ‘strategy and tactics’, ‘resources’ and
‘processes’. The results are assessed internally (self-assessment, representing a
pre-requisite for improvement) as well as externally, through comparisons
with competitors and best-in-class organizations. Self-assessment enables an
organization to discover areas for improvement.

Competitive Advantages:
Competitive Strategies      Lower cost                            Differentiation
Total Quality Management    Reducing non-conformance level        Improving quality of design
Time Based Management       Reducing cycle time and inventories   Reducing time to market

Fig. 2.5 Porter’s generic competitive strategies

[Figure: the enablers (Leadership, driving Human Management, Strategy and tactics, and Resources, which feed Processes) lead to the results (People satisfaction, Customers’ satisfaction and Impact on society, ultimately producing Business Results)]
Fig. 2.6 Framework of the Quality prize in Europe (1991). Source European Quality Award Framework (1991)

Challenging problems for managers were how to create a ‘best path’ to


improvement in terms of organizational structures, how to measure and communicate
progress and how to achieve and sustain active participation of employees at all levels.

2.5.3 Australia

During my sabbatical at the School of Mechanical and Manufacturing Engineering,


University of New South Wales (1993–1994), I worked with a young Australian
colleague on a research project studying a sample of Australian companies on their
way to becoming continuous improvement systems.
At that period, in Australia following the US example, companies became aware
of the need to change their organizational orientation from a standard maintaining
approach to a continuous improvement system. To that end, they could get support
from the Australian National Industry Extension Service (NIES).
An important factor in the successful implementation of this change process
was quality teams, called in our project Improvement Support Systems, ISS (Barad
and Kayis 1994). We modeled the team infrastructure by a three-stage sequential
process with simple but systematic measures to evaluate the infrastructure elements.
Stage 1: Setting the process in motion through a steering committee, basic training
for managers and other employees and initiatory improvement teams. Stage 2:
Monitoring and control of the teams. The challenge to management was to find
the right extent to which monitoring and control should be applied to improvement
teams. Stage 3: Keeping the process alive to avoid stagnation of the improvement
process. To realize that, we suggested extending the active participation of employees
and systematically generating new improvement topics (possibly through splitting and
continuing old ones), as well as continuously upgrading training.

The findings exposed different levels and patterns of team infrastructure. We considered a steady output flow of successfully finished projects as evidence of an active (as opposed to a stagnant) improvement system. Companies in the sample had selected a variety of topics for their improvement projects. By differentiating between time-based management (TBM) projects and TQM quality-per-se projects, we found that the TBM projects (such as reducing cycle time and delivery duration) constituted a majority. Focusing on these topics may represent the right decision at a certain period, but on a long-term basis it may not be enough to keep the process alive. Quality-per-se improvement projects have to be undertaken on a larger scale. For these topics, more sophisticated techniques are needed, among them Design of Experiments, a technique not practiced by any of the surveyed companies.

2.5.4 Cultural/Geographical Styles—East Versus West (China Versus Australia)

My sabbatical in P R China (1991–1992) at the Department of Precision Machinery, University of Science and Technology of China, Hefei, enabled me to have a glimpse into quality practices in P R China at that period. One year later, I continued my sabbatical in Australia at the School of Mechanical and Manufacturing Engineering, University of New South Wales and, as described above, I participated in a small-scale research project on quality systems.
Culture is related to social and environmental processes and appears in different guises. The internal consistency of what human beings (such as managers and employees) do, say and accomplish discloses their distinctive cultural pattern, or style. We may associate cultural styles with geographical regions.
To reveal some basic cultural/geographical styles in quality strategy principles and practices, I attempted to compare my glimpse into quality practices in P R China with the evaluation of some quality practices in Australia (Barad 1995). Company Wide Quality Control (CWQC) represents a strategic approach to quality according to a Japanese style. The principles of Total Quality Management (TQM) represent the North American style strategic approach to quality. Continuous improvement is the core of all these quality strategies and, like other cultural manifestations, is bound to take different forms in different countries. Whilst collaboration is an important element of any quality strategy, the way people communicate and collaborate with one another is strongly dependent on their cultural/geographical background. Indisputably, there are distinctive differences between Eastern and Western cultures.
Our basic research questions were: Do similar marked differences also appear in
the practice of quality strategies? Should we consider a distinctive ‘Chinese’ or
‘Australian’ style or rather an Eastern versus a Western quality style?
Before describing our research, let us first discuss multi-cultural styles in quality
management by comparing the two quality management pillars: USA and Japan.

Multi-cultural style in Quality Management (QM)


Examining the basic principles of TQM and CWQC, we easily arrive at the con-
clusion that they are similar. Both emphasize the importance of Continuous
Improvement, Customer-oriented Policy, participation of all employees and
deployment of these principles in all of a company's functions. The differences are in the extent to which each of the two famous approaches emphasizes each element. In TQM there is more emphasis on Leadership and less emphasis on education and training in general and on quality control in particular. This point seems very significant. In CWQC, theoretical knowledge of Quality Control (QC) and the use of statistical tools are strongly encouraged. A tradition of education and learning seems to be more deeply rooted in Japan than in the US, related to the high level of Japanese basic education in mathematics. This educational background enabled Japanese managers to grasp the application potential of the theories preached by the excellent teachers who came from the Western world, where managers did not listen to statistical wisdom.
Groupism—an Eastern cultural feature
As cultural style is a universal human feature, it is expected to be reflected in a company's approach to human resources management. 'Groupism', or collectivity orientation, is strongly mirrored in the CWQC strategy.
The QC circles can be regarded as a hybrid organization, drawing from both
formal and informal elements. The pattern that makes the Japanese quite different
from people from the Western part of the globe is that submission of an individual
to the group decisions happens without any loss of personal dignity.
Within QC circles, the intense use of statistical methods is apparent. To enable understanding of statistical principles, simple graphical statistical tools were developed by Japanese scientists and engineers. During the 1980s, Japan promoted technology transfer. Methods of QC were successfully applied in Korea, Singapore, Thailand and Malaysia. But the application of QC circles was not commonly accepted in Western countries, as revealed by a large-scale study, which compared the practice of QC circles in Japan and Korea versus Denmark.
Western reply to groupism—cellular organizations
In order to extend participation of employees in continuous improvement pro-
cesses, ‘cellular organizations’ were introduced. There is not much formal theory to
support these organizations, but some Western organizations adopted them suc-
cessfully. They emerged as a result of several needs and principles.
First, there was the need to simplify material handling and material flow in order
to shorten throughput time of parts and products. Group technology is an engi-
neering concept of technical efficiency that consists of grouping physical facilities
into cells. Under these conditions, it is also easier to apply a Just in time policy and
thus benefit from reduced Work in Process. Secondly, according to management
‘thinkers’, process redesign can be combined with organizational changes, leading
to a kind of 'groupism' on the shop floor. By creating 'Continuous Improvement Cells', comprising a group of employees working in the same cell, and by training and empowering them to make decisions, the active participation of employees in the improvement process could be boosted. There is a marked difference between the QC circles and cells within a cellular organization. The QC circles are practiced on a voluntary basis, while cells in a cellular organization are not voluntary but part of the system's formal organization.
Let us now return to our research, which compared some quality practices in P R
China and Australia.
The survey
In P R China, the information was gathered from several visits to state-owned
enterprises in Anhui Province and Beijing area in the period May–June 1992. In
Australia, it was collected through a small but systematic study conducted in New
South Wales in the period September 1992–January 1993. It should be mentioned that the information gathered in P R China was based on a selective, non-random group of industrial enterprises, while in Australia it relied on a random, stratified sample. Objective difficulties, mostly due to language barriers, prevented us from applying in China the systematic methodology of the Australian study. In spite of the above differences, there was a similarity between the surveyed enterprises in China and Australia: both had formally embarked on a quality improvement program.
State enterprises in P R China receive basic information and training from the National Society of TQM. The material is disseminated top-down through established hierarchical TQM channels (Province, County). Hence, it is not surprising that the group of state enterprises in P R China in our survey exhibited some common traits, having some similarity to a Japanese style of quality management.
The small number of enterprises of the study did not enable a statistical com-
parison between the TQM practices in China versus Australia. Nevertheless, we
noticed some distinctive differences between the surveyed companies in the two
countries. The differences concerned (a) the reason for commencing the improve-
ment program; (b) practice of QC circles; (c) quality costs reporting; (d) scope of
jobs and (e) topics of improvement projects.
(a) In Australia, the commencement of TQM was motivated by a product demand crisis. In China, the visited companies had product certification as their main objective. In some of the Chinese companies, the manufacturing capacity was fully utilized. There were no such occurrences among the Australian companies, whose revenues were solely limited by product demand.
(b) In some of the Australian companies, a 100% TQM participation at the shop
level was ensured through a division of the shop floor into cells (cellular
organizations). We did not find such cellular organization in any of the visited
Chinese companies. There, TQM participation at the shop level was only
apparent through QC circles (50% average participation). By contrast, none of
the Australian companies practiced QC circles.

(c) None of the Australian companies practiced quality cost reporting, in sharp contrast with the visited Chinese companies, where quality costs were reported on a regular basis.
(d) Broad scope jobs were encouraged in Australia, while in China we found a
tendency to have narrow scope jobs.
(e) In Australia, most improvement projects had a time-based orientation. In China
they were mostly quality-per se oriented.
Concluding remarks
Keeping in mind the limitations of the study, one can still draw some conclusions.
1. Distinctive differences between the way TQM was applied in China and
Australia were noticed. The different organization of active employees’ partic-
ipation in the continuous improvement process at the shop level, namely vol-
untary QC circles in China, versus cellular organizations in Australia, can be
definitely attributed to cultural influences.
2. In spite of the fact that in China we did not find any reference to CWQC, but only to TQM, the visited Chinese enterprises seemed to be closer to what may be called 'a Japanese style'. Evidence is provided by the overall practice of QC circles, as well as by the early commencement of quality improvement management (1982). While no TQM principles existed in 1982, CWQC principles were well established at that time. On the other hand, contrary to Japanese quality practices, which encourage quick reaction and process simplification, the Chinese companies did not exhibit these practices. Another difference between China and other countries is the tendency toward narrow-scope jobs, possibly dictated by its huge population.
3. The Australian improvement style looks rather similar to the North American
improvement style. Its principles seemed to be rooted in the MBNQA. The
reported quality improvement commencement occurred at the end of the 1980s,
when TQM was already established. This may supply some circumstantial
evidence.
4. Future perspectives:
– In Australia, as in other Western countries guided by the general practices of the MBNQA, companies may, based on some economic successes, continue to boost active participation of employees through cellular organizations.
– China is a class by itself. What makes it special is its deeply rooted edu-
cational heritage, coupled with the natural curiosity and creative thinking of
its people, its size and its socialistic regime. This factor combination makes it
difficult to predict future development of TQM in P R China.
– On a global scale, there is an exchange of cultural principles and learning on
quality management. The Eastern world learned from the Western world
general principles of Management, while Japan in particular also learned
modern statistical theories. The Western world learned from the Eastern world principles of teamwork collaboration. Hopefully, this exchange of cultural information will continue to enrich the quality management topic, providing a deeper understanding of the multitude and complexity of the factors involved in applying it.

2.6 Quality Management Theories and Practices—Usage Aspects

Quality is now universally accepted as a major concern for every organization. To improve quality, companies implement various Quality Management
(QM) practices. Numerous quality management philosophies, methodologies,
practices and tools have been designed, developed and applied. Many studies have
investigated the extent and nature of their contribution to organizational perfor-
mance. A main objective was to define and measure a variety of QM components.
Most of the studies, such as Saraph et al. (1989), Black and Porter (1996) and Samson and Terziovski (1999), used the MBNQA criteria and dealt with application aspects of QM in manufacturing and service industries.
In this section, we try to investigate the adaptation of global QM tools, proven effective in manufacturing and service, to additional areas such as logistics and project management (Barad and Raz 2000).
Our research query was:
Are the QM tools and practices, originally developed for the manufacturing area, fit to deal with the needs of areas other than manufacturing? In other words, are QM tools generic?

2.6.1 Logistics Versus Manufacturing

To address the usage aspect, we first compare the study of Ahire et al. (1996),
whose data were from the automotive and the manufacturing areas, with the study
carried out by Anderson et al. (1998) in logistics, a specific area different from
manufacturing.
Ahire et al. considered questions regarding a holistic versus a piecemeal implementation of the MBNQA criteria. Their study identified twelve QM components and developed items to measure them. The components' list comprises:
‘management commitment’, ‘internal quality information usage’, ‘benchmarking’,
‘design QM’, ‘employee empowerment’, ‘employee involvement’, ‘employee
training’, ‘supplier QM’, ‘supplier performance’, ‘SPC usage’, ‘customer focus’,
‘product quality’.
Their findings reveal (a) the critical importance of the human aspect and its
development (employee training, employee involvement and employee empower-
ment) relative to the other QM components. They imply that people are a key element in the successful implementation of QM strategies. (b) 'Management commitment' was found to be highly correlated with the practice of customer focus,
supplier QM and employee empowerment but not so highly correlated with product
quality. According to the authors, this indicates that top management commitment
is a necessary but not a sufficient condition for attaining superior product quality.
(c) The application of practices in isolation, such as SPC and Benchmarking, was
not highly correlated with product quality.
Anderson et al. collected data from members of the American Society of
Transportation and Logistics. They considered ten QM components, expressing the
seven criteria of the MBNQA: ‘leadership’, ‘information and analysis’, ‘mea-
surement’, ‘training’, ‘teamwork’, ‘morale’, ‘benchmarking’, ‘supplier manage-
ment’, ‘operational results’ and ‘customer satisfaction’. The questionnaire design
enabled assessment of QM influences on logistic performance. The data analysis
intended to find a causal model of the QM component practices. The main findings
were:
(a) There was no direct effect of leadership on operational results. The direct effect of leadership was on team and training (human resource focus) and on benchmarking. Information, supplier management (process management) and operational results exhibited indirect effects of leadership.
(b) Supplier management, training and information did directly affect operational results.
(c) Operational results as well as morale and team organization (human resource focus) directly affected customer satisfaction.
Let us now integrate the building blocks of the two studies. As mentioned above, both studies looked for linkages between input QM practices and their outcomes (operational results and customer satisfaction).
Input QM practices
There is much similarity between the input QM components/practices as defined
in each study. Actually, the components in both studies reflected the seven
MBNQA criteria. ‘Management commitment’ is similar to ‘leadership’ while ‘in-
ternal quality information usage’ is similar to ‘measurement, analysis and
knowledge’.
Operational results
The operational results as considered by the two studies were formulated dif-
ferently. Ahire et al., who investigated a manufacturing area, only referred to
‘product quality’ while Anderson et al., who investigated the logistics area referred
to ‘operational results’ from a broader perspective. These were expressed by three
indicators: logistics cost performance, order cycle time and effectiveness and effi-
ciency of transaction processes. The indicators used to express customer satisfaction
were also specifically related to customer expectations in the logistics area.
We may conclude that, from a generic perspective, the input QM components and their indicators as defined for a manufacturing area are equivalent to those defined for logistics, thus indicating their universality. Hence, they may fit other application areas such as project management. The operational components of the logistics research were specific to the investigated area. In the next section, we analyze quality management practices considered in a survey of project managers.

2.6.2 Contribution of QM Tools and Practices to Project Management Performance

In its Guide to the Project Management Body of Knowledge (1996), the Project
Management Institute defines a project as “a temporary endeavor undertaken to
create a unique product or service”.
Raz and Michael (1999) carried out a survey to find out which tools and practices are associated with successful project management in general, and with effective project risk management in particular. The survey was carried out between April and June 1998. The authors gave a wide interpretation to the term 'practice', meaning special-purpose tools and processes. A questionnaire, written in Hebrew, was distributed, either personally or via email, to a random sample of about 400 project managers from the software and high-tech sectors in Israel. Finally, there were 84 usable completed questionnaires.
The questionnaire consisted of several parts, each containing a number of brief
questions, to be answered on a 1–5 scale.
Although the main emphasis of the survey was on project risk management, two
of its parts are relevant to this section. In their paper, whose title is identical with the
title of this section, Barad and Raz (2000) detailed the analysis of the two relevant
parts of the above questionnaire.
The first relevant part dealt with the extent of contribution of individual practices to project success in general, and included 13 Project Management (PM) generic practices. Our interpretation here of the term 'perceived contribution' is that a practice with a high perceived contribution is likely to have a high usage level. According to the findings of a pilot version of the questionnaire, the perceived contribution was highly correlated with the 'extent of use' in the organization. Hence, we will alternatively make use of 'perceived contribution' or 'usage'.
The second relevant part consisted of six questions dealing with the effectiveness
and efficiency of the manner in which projects are managed in the respondent’s
organization and with project outcomes, such as product quality and customer
satisfaction.
The data were analyzed in several steps. First, the authors assessed the perceived
contribution of each individual practice in PM. Next, in order to assess the actual
contribution of the practices, they calculated the coefficients of correlation between
the perceived contribution of the practice, and the project management outcomes.
Finally, they compared perceived contribution with actual contribution of practices.

Perceived contributions of practices


The 13 PM practices were ranked by their perceived contribution to the success of project management. 'Simulation', 'Subcontractor management' and 'Brainstorming' were perceived as the practices with the highest contribution to project management success (ranks 1–3). At the bottom of the list, we find 'Cause and Effect analysis during control', 'Control of trend and deviations' and 'Training programs' (ranks 11–13).
Actual contribution
To assess the actual contribution of the practices, we looked for significant
correlations between the usage level of a practice and the level of the outcome
variables.
Three outcome variables evaluated the PM ‘process performance’ level and the
other three outcome variables evaluated the ‘operational outcomes’ of the PM
process.
The PM process performance was evaluated by C1—extent and frequency of
plan changes; C2—frequency of emergency meetings; C3—ratio of effort invested
versus effort required. C1 and C2 measure process stability, while C3 measures
process efficiency.
The operational outcomes of the PM process were evaluated by C4—satisfaction
of participants including project manager; C5—customer satisfaction; C6—
Product quality measured by absence of product errors.
First, we examined the scores received by each outcome variable. Process stability, measured as plan stability (C1), received the lowest score. It was closely followed by process efficiency (C3) and then by process stability measured by the frequency of emergency meetings (C2). Similarly to process stability and process efficiency, product quality (C6) was also among the low-scored outcome variables.
Customer satisfaction (C5) was scored the highest, and this result is not surprising. After all, this score is an expression of customer satisfaction as perceived by the respondent, i.e. by the project manager. The satisfaction experienced by participants, including project managers (C4), was also high.
Next, we used the above outcome oriented evaluations to assess the actual
contribution of the PM practices to the success of a project, by ranking them
according to the number of their significant correlations with the six outcome
variables, Ci, i = 1,2,…,6, as indicators of project success.
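Before turning to the results, here is a minimal sketch of this ranking step, using hypothetical 1–5 scale responses; the simulated data, the 0.05 significance threshold and all names are illustrative assumptions, not the survey's actual figures.

# Sketch: rank PM practices by the number of outcome variables (C1..C6)
# whose levels are significantly correlated with the practice's usage level.
# All data below are randomly generated placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_respondents, n_practices, n_outcomes = 84, 13, 6
usage = rng.integers(1, 6, size=(n_respondents, n_practices))    # 1-5 scale answers
outcomes = rng.integers(1, 6, size=(n_respondents, n_outcomes))  # C1..C6

ALPHA = 0.05
sig_counts = []
for p in range(n_practices):
    n_sig = sum(
        1 for c in range(n_outcomes)
        if pearsonr(usage[:, p], outcomes[:, c])[1] < ALPHA
    )
    sig_counts.append(n_sig)

# More significant correlations -> higher 'actual' rank (rank 1 is best).
for rank, p in enumerate(sorted(range(n_practices), key=lambda i: -sig_counts[i]), 1):
    print(f"actual rank {rank}: practice T{p + 1} ({sig_counts[p]} significant correlations)")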
Only 8 out of 13 PM practices were highly correlated with one or more outcome
variables.
T12—Training programs was the practice highly correlated with all six outcome variables and thus reached the highest rank (1). It was way ahead of all the other PM practices.
The next three practices were highly correlated with two outcome variables
(ranks 2–4).
T4—Process Control, and T5—Process Control Analysis were highly correlated
with plan stability (C1) and with satisfaction of participants (C4).

T13—Customer focus was highly correlated with satisfaction of participants (C4) and, naturally, with customer satisfaction (C5).
The next four practices were highly correlated with one outcome variable (ranks
5–8).
T11—Quality Management and T3—Usage of internal information were highly
correlated with customer satisfaction (C5). T7—Benchmarking was highly corre-
lated with satisfaction of participants (C4), while T9—Supplier management was
highly correlated with product quality (C6).
The remaining five practices were not significantly correlated with any outcome
variable.
Thus, we obtained two sets of ranks for each generic PM practice. The first set
expresses the practice contribution to a project success as ‘perceived’ by the
respondents. The second set expresses an ‘actual’ contribution of the practice
according to its strong correlations with the outcome-oriented indicators. To
improve the project managers’ understanding of the PM process we calculated and
analyzed the discrepancies between the two sets of ranks.
We marked as 'underestimated practices' those practices whose actual rank was much higher than their rank by perceived contribution (usage), and as 'overestimated practices' those whose actual rank was much lower than their rank by perceived contribution (usage).
The underestimated PM practices were 'process control analysis', 'training' and 'process control' (their respective rank discrepancies: 10, 9 and 8).
The overestimated PM practices were 'simulation' and 'brainstorming' (their respective rank discrepancies: 10 and 8). It is worthwhile mentioning that the usage of these two (overestimated) practices was not significantly related to that of any other PM practice, i.e. they were used in isolation. This result supports Ahire et al.'s hypothesis that a piecemeal implementation of QM practices is not likely to yield outcome results and hence is bound to fail.
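A minimal sketch of the discrepancy analysis just described; the rank values and the flagging threshold below are illustrative placeholders, chosen only to mirror the magnitudes reported above.

# Sketch: flag practices whose perceived rank diverges strongly from their
# 'actual' (correlation-based) rank. Ranks here are illustrative placeholders.
perceived_rank = {"Training": 12, "Process control": 11, "Simulation": 1, "Brainstorming": 3}
actual_rank    = {"Training": 1,  "Process control": 2,  "Simulation": 11, "Brainstorming": 11}

THRESHOLD = 8  # minimal rank discrepancy considered 'much higher/lower'
for practice, p_rank in perceived_rank.items():
    discrepancy = p_rank - actual_rank[practice]
    if discrepancy >= THRESHOLD:     # actual rank much higher (better) than perceived
        print(f"{practice}: underestimated (discrepancy {discrepancy})")
    elif discrepancy <= -THRESHOLD:  # actual rank much lower (worse) than perceived
        print(f"{practice}: overestimated (discrepancy {-discrepancy})")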
Concluding remarks
The specific findings of the survey point to certain critical quality needs of the project management process, as identified by this study:
1. Improvement of 'process control' (control of trends and deviations) and 'process control analysis' is likely to improve process stability (extent and frequency of plan changes).
2. Improvement of ‘training’, whose currently reported usage was relatively low,
is likely to improve all outcome-oriented variables.
The results of this study exhibit certain similarities with the findings reported in
the manufacturing and logistics area. They concern:
– The importance of the human resources development (here training), on quality
oriented operational results.
– The influence of management commitment (here quality management) on the
practice of training and on customer focus.

– No direct effect of management commitment (leadership) on operational results.


From a methodological perspective, the analysis reported here suggests that most
of the input QM components in the manufacturing and other areas such as logistics
or project management are equivalent from a generic viewpoint. Accordingly,
similar indicators can be used to describe them, regardless of the specific area of the
empirical research. This is particularly true for ‘supplier management’, ‘bench-
marking’ and ‘training’, which are perceived by QM researchers as universal
quality oriented practices of major importance in any application area and deserve
to be investigated accordingly. By contrast, ‘information’ and ‘operational results’
have to be described by indicators specific to the application area.

2.7 Soft Versus Hard Quality Management Practices

Many scientists have investigated the effect of quality management practices on a firm's performance. Recent explorations of quality management practices studied the relationships of hard versus soft factors with organizational performance.
Rahman and Bullock (2005) carried out an empirical investigation of the effect of soft TQM and hard TQM on organizational performance. They analyzed 260 manufacturing enterprises in Australia, looking for direct and indirect impacts on performance. They suggested that soft TQM plays a number of roles. One is to create an environment where implementation of hard TQM can take place, and the other is to directly affect organizational performance. They considered six elements of soft TQM: work force commitment, shared vision, customer focus, personnel training, use of teams and supplier relationships. They suggested four elements of hard TQM: computer-based technology, Just in Time principles, technology utilization and continuous improvement enablers (Flexible Manufacturing Systems, Statistical Process Control and Value Added Management). They measured organizational performance by customer satisfaction, employee morale, productivity, percentage of defects, on-time delivery, and cost of quality as a percentage of sales.
Findings
(a) Work force commitment, shared vision and supplier relationships were sig-
nificantly related to Just in Time principles, technology utilization and con-
tinuous improvement enablers.
(b) Three out of four hard TQM elements (excluding computer-based technology)
had significant relationships with all elements of soft TQM.
(c) Five out of six soft TQM elements (excluding training), had significant rela-
tionships with organizational performance.
(d) Use of Just in Time principles was significantly related to several measures of
organizational performance (employee morale, productivity and cost of
quality).

The findings suggest that in general soft TQM elements are significantly related
to organizational performance.
Certain hard TQM elements also have a significant effect on performance. For
hard TQM to impact performance it is essential that such hard elements be sup-
ported by elements of soft TQM.
Another paper, by Gadenne and Sharma (2009), described a similar survey whose objective was to investigate the hard and soft quality management factors and their association with firm performance. The survey considered Australian Small and Medium Enterprises.
Their findings were quite similar to those of Rahman and Bullock.
They suggested that improved overall performance was favorably influenced by
a combination of ‘hard’ QM factors and ‘soft’ QM factors. Their hard QM factors
were benchmarking, quality measurement, continuous improvement, and efficiency
improvement. Their soft QM factors consisted of top management philosophy and
supplier support, employee training and increased interaction with employees and
customers. The QM factors of employee training, efficiency improvement, and
employee and customer involvement were important in maintaining customer sat-
isfaction, whilst employee and customer involvement were also important in
maintaining a competitive edge in terms of return on assets.
The next chapter is about flexibility-oriented strategies.

References

Ahire SL, Golhar DY, Waller MA (1996) Development and validation of TQM implementation constructs. Decis Sci 27(1):23–56
Anderson RD, Jerman RE, Crum MR (1998) Quality management influences on logistics performance. Transp Res Part E 34(2):137–148
Barad M (1984) Quality assurance systems in Israeli industries, part i: electric and electronics
industries. Int J Prod Res 22(6):1033–1042
Barad M (1995) Some cultural/geographical styles in quality strategies and quality costs (P.R.
China versus Australia). Int J Prod Econ 41:81–92
Barad M (1996) Total quality management. In: Warner M (ed) International Encyclopedia of Business and Management (IEBM). Thomson Business Press, London, pp 4884–4901
Barad M (2002) Total quality management and information technology/systems. In: Warner M (ed) International Encyclopedia of Business and Management (IEBM) Online. Thomson Learning, London
Barad M, Kayis B (1994) Quality teams as Improvement Support Systems (ISS): an Australian
perspective. Manage Decis 32(6):49–57
Barad M, Raz T (2000) Contribution of quality management tools and practices to project
management performance. Int J Qual Reliab Manag 17:571–583
Black SA, Porter LJ (1996) Identification of the critical factors of TQM. Decis Sci 27(1):1–21
Crosby PB (1979) Quality is free. McGraw-Hill, New York
Deming WE (1986) Out of the crisis. MIT Press, Cambridge, MA
Feigenbaum AV (1991) Total quality control, 3rd edn. McGraw-Hill, New York
Gadenne D, Sharma B (2009) An investigation of the hard and soft quality management factors of
Australian SMEs and their association with firm performance. Int J Qual Reliab Manag 26
(9):865–880
Juran JM (1991) Strategies for world-class quality. Qual Prog 24:81–85

Porter ME (1985) Competitive advantage. McGraw-Hill, New York


Rahman S, Bullock P (2005) Soft TQM, hard TQM and organizational performance relationships: an empirical investigation. Omega 33(1):73–83
Raz T, Michael E (1999) Use and benefits of tools for project risk management. Int J Proj Manag
19(1):9–17
Samson D, Terziovski M (1999) The relationship between total quality management practices and operational performance. J Oper Manag 17(4):393–409
Saraph JV, Benson PG, Schroeder RG (1989) An instrument for measuring the critical factors of
quality management. Decis Sci 20(4):810–829
Taguchi G (1986) Introduction to quality engineering. Asian Productivity Organization, Tokyo
Chapter 3
Flexibility-oriented Strategies

This chapter describes the development of flexibility through my personal involvement in researching this complex concept. It starts with flexibility in manufacturing systems, where the early researchers modeled Flexible Manufacturing Systems (FMSs) from a bottom-up perspective (flexibility of elements, system and aggregate flexibility). It continues with other modeling perspectives of flexibility, such as flexibility in logistic systems, flexibility of generic objects and supply chain flexibility. A variety of changes that flexibility has to cope with are modelled, enabling us to view flexibility as a powerful ingredient for responding to changes and reducing risks in uncertain and changeable environments.

3.1 Introduction

Flexibility is literally defined as "the capability to bend without breaking" (Encarta Dictionary). Flexibility as a concept is a potential and multi-dimensional capability. It is important in the human body and recently it has been recognized as a useful capability of the human brain. The complexity of flexibility stems from its nature. De Groote (1994) defined it as 'an attribute of a system technology for coping with the variety of its environmental needs'. Neely et al. (1995) viewed it as an organizational performance measure, similar to quality, time and cost. However, while time and cost are measurable by nature, flexibility and quality are both complex, ill-defined concepts and consequently difficult to measure (Upton 1994). When viewed as a performance measure, flexibility becomes an expression of the effectiveness and efficiency of performing a system activity in an environment characterized by a diversity of requirements (such as diversity of decision problems, diversity of cost types or diversity of manufacturing parts).
Unlike quality, which is always needed in order to satisfy customers, flexibility is mostly needed under changing requirements and/or changing conditions. Consequently, the frequency and magnitude of the envisaged changes in requirements that, according to its policy, a company is prepared to cope with are bound to play an essential role in establishing the importance of flexibility and its measurement in the company's strategies. In manufacturing and other systems, such as information systems, logistics, supply chains or architecture, there is consensus that flexibility means adaptability to changes, predicted or unpredicted (Buzacott 1982; Zelenović 1982). System adaptability to changes can be provided through appropriate decision making at different development stages such as design, planning, operation and control. In its application, the relevant aspects of flexibility are highly dependent on the particular environment in which the changes are embedded.
In the early 1970s flexibility became an important feature of manufacturing systems in industries which had to manufacture or assemble products in small or medium batches. In batch manufacturing, work-in-process levels were high and machine utilization was low. Jobs spent a high proportion of time waiting to be moved, waiting for a machine to be set up or waiting for other jobs on a machine to be completed. It was recognized that some means of automatically routing jobs through the manufacturing system from one machine to the next, and some ways of reducing the set-up of jobs on a machine, could improve the matter. These ideas led to the development of the basic concept of a Flexible Manufacturing System (FMS). FMSs combined the technology of NC manufacturing, automated material handling and computer hardware and software to create an integrated system for the automatic processing of palletized parts across various workstations in the manufacturing system. FMSs became widely used by industries that manufactured or assembled products in small to medium batches (Yao and Buzacott 1985).
In an increasingly competitive, continuously changing and highly uncertain business environment, flexibility has become one of the most useful attributes of any system. From the end of the 1980s, the number of papers on flexibility grew almost exponentially. Researchers paid more and more attention to flexibility in areas other than manufacturing. Gradually, flexibility became an important aspect in many firms, not necessarily related to Flexible Manufacturing Systems. By the 1990s, the importance of flexibility started to gain its main recognition from a strategic perspective.
This chapter is entitled Flexibility-oriented Strategies, but we cannot talk about a flexibility-oriented strategy in the way we talked about a quality-oriented strategy in the previous chapter. In 1987, the Malcolm Baldrige National Quality Award (MBNQA) formally recognized quality as a strategic competitive edge. The award framework, a product of the National Institute of Standards and Technology, enabled companies to assess themselves against it and thus guided them to realize a quality-oriented strategic management: Total Quality Management, the outcome of the quality 'revolution'.
Many researchers, such as Aaker and Mascarenhas (1984), De Meyer et al. (1989), Evans (1991), Sanchez (1995), Narain et al. (2000), Combe and Greenley (2004) and others, wrote papers and books on strategic flexibility, but these are not equivalent to a formal recognition of flexibility as a strategic competitive edge. There was no flexibility 'revolution' and no national prize dedicated to flexibility.

The need for flexibility developed gradually until in some companies it reached the
strategic level. The last section of this chapter, entitled Strategic Flexibility,
addresses this topic.
The next section reviews Flexibility in Manufacturing Systems, the first area that
recognized the beneficial results of flexibility.

3.2 Flexibility in Manufacturing Systems

According to Buzacott (1982), any attempt to evaluate the flexibility of a manufacturing system must begin by taking into consideration the nature of the change that the system has to cope with. A manufacturing system should cope with two types of changes:
that the system has to cope with. A manufacturing system should cope with two
types of changes:
(1) External changes stemming from alterations of the requirements imposed on the
system such as a variety of product types, mixes, and quantities. These are
dictated by technological progress, market and firm policy.
(2) Internal changes, regarded as disturbances, brought about by breakdowns and
variability in processing time, leading to congestion.
Some Definitions of Manufacturing Flexibility
Gupta and Goyal (1989)—The ability of a Manufacturing System to cope with
changing circumstances or instability caused by the environment.
Gerwin (1993)—The ability to manage production resources and uncertainty to
meet customer requests.
Upton (1994)—The ability to change or react with little penalty in time, effort, cost
or performance.
Koste and Malhotra (1999)—The ability of the Manufacturing function to react to
changes in its environment, without significant sacrifices to firm performance.
These definitions reflect the diversity in understanding the subject. We may also
mention that all definitions include the ability to respond to changes and the use of
flexibility to accommodate uncertainty.

3.2.1 Flexible Manufacturing Systems (FMSs)

As mentioned above, the first organizational form for applying flexibility in man-
ufacturing was through the design and operation of Flexible Manufacturing
Systems (FMSs). Flexible Manufacturing Systems are a class of manufacturing
systems. Their name implies that the feature that characterizes them among all other
features is their flexibility. Their primary expected capability was adaptation to
changing environmental conditions and process requirements.

An FMS has a hierarchical structure which, starting at the bottom, consists of elements (processing modules or machines, material handling units and computers), followed at the next level by flexible cells. An FMS is literally a collection of flexible cells.
A flexible cell typically comprises a few machines and storage elements inter-
faced with automated material handling under central computer control. A main
feature of any FMS is its capability to maintain a stable performance under
changing conditions. The flexibility of such a system is therefore dependent upon
its components’ capabilities, their interconnections and upon the operation and
control mode (Buzacott 1982; Browne et al. 1984; Barad and Sipper 1988).
A typical planning-decisions sequence (short-medium term) in a manufacturing
system with flexible cells is described in Fig. 3.1 (Barad 1992).
Fig. 3.1 A planning decision sequence by hierarchy and short/medium time horizon (reproduced from Barad 1992). The figure relates each hierarchical level (plant; FMS flexible cells; elements—machines, material handling and transfer) to its decisions and time horizons: parts aggregate planning (months) and parts detailed planning (weeks) at the plant level; selecting the parts to be simultaneously manufactured (batching, balancing) and loading at the FMS level (days–weeks); operations assignment at the elements level (minutes–hours); and re-routing in response to failures or other interruptions.

At the highest hierarchical level there is the plant. The cellular Flexible Manufacturing System is at the next level, followed by the cell elements (machines, material handling devices and computers). In terms of time units, the time horizon (short, medium or long) is different for each hierarchical level.
The decision sequence is initiated by the annual aggregate production planning at the plant level. The next decision involves the detailed parts production planning, still at the plant level but over a shorter horizon (weeks). Then, at the FMS level, the set of part types to be simultaneously manufactured is selected (batching), followed by a partitioning of machines into groups of identical machines. Finally, the groups of identical machines are loaded, i.e. operations are
assigned to machine groups. At the machine level, a balanced load is the objective
of the decisions regarding operations assignments to each machine. Manufacturing
and transfer proceed at the elements level for as long as there are no failures or other
changes. When these occur, special decisions have to be made, such as re-routing of
parts to other machines.
Failures: By their nature, failures occur at a short-term horizon and induce disturbances at the lowest hierarchical level. Machines or tools within machines, and handling and transfer components or devices within these components, are subject to failures (so are computers and data-transmission devices). However, such local interruptions may generate further disturbances at a higher production level (cell or plant). Hence, in order to cope with failures, decisions at higher levels (cell or plant) sometimes have to be considered. An example is the re-routing of parts from the failed machine to another one.
Changes in Requirements: These may appear at any of the three time horizon levels (short, medium or long). Changes in due dates, and quality control or delivery problems, may alter priorities at short-term intervals (hours–days), necessitating re-routing at the machine or cell level. Other changes, such as in part mix or volume, may occur over a short span at the plant-level planning stage. These could also necessitate re-routing. Decisions at the plant level, associated with long-term intervals (months–years), are concerned with changes stemming from the introduction of new products.

3.2.2 Classifications of Flexibility in Manufacturing Systems

Among the comprehensive classifications of flexibility in manufacturing, we may mention Browne et al. (1984), Sethi and Sethi (1990), De Toni and Tonchia (1998) and Koste and Malhotra (1999).
Several authors have presented frameworks for implementing manufacturing flexibility in organizations (Slack 1987; Kochikar and Narendran 1992; Gerwin 1993; De Groote 1994; Olhager and West 2002; Oke 2005; Rogers et al. 2011). Jain et al. (2013) presented a recent and comprehensive review of manufacturing flexibility.
Manufacturing Flexibility hierarchical levels
The early modelling frameworks of manufacturing flexibility were built bottom-up, matching a manufacturing hierarchy. The bottom-up approach typically comprised three hierarchical manufacturing flexibility levels.

(1) Basic flexibility, related to the manufacturing elements, such as 'machine' flexibility.
(2) System flexibility, related to composite activities such as 'routing' flexibility, i.e. the system capability of processing a part through different routes/machines, or 'process (mix)' flexibility.
(3) Aggregate flexibility, consisting in ‘program flexibility’, ‘production flexibility’
and ‘marketing flexibility’.
Our discussion will consider the manufacturing flexibility classifications of Browne et al., Sethi and Sethi, and Koste and Malhotra, the most cited in the literature.
Browne et al. (1984) identified eight flexibility types: Machine, Operation, Process, Routing, Product, Volume, Expansion and Production.
Sethi and Sethi (1990) added three more flexibility types: material handling,
program and market. They categorized these eleven flexibility types by the hier-
archical levels mentioned above.
Basic flexibility level: machine, material handling and operation
System flexibility level: process, routing, product, volume and expansion
Aggregate flexibility level: program, production and marketing
Koste and Malhotra (1999) added 'labour flexibility' to the above eleven flexibility types.
We may see that most of the flexibility types in Browne et al.'s classification (the earliest one) belong to the system flexibility level.

3.2.3 Flexibility Types and Measures (Dimensions/Metrics)

Many papers analyzed these types or dimensions of flexibility and proposed mea-
suring units to quantify and measure them (Slack 1983; Ramasesh and Jayakumar
1991; Barad and Nof 1997; Malhotra and Subbash 2008; Jain et al. 2013). Some
flexibility types carried the name of the type of change that the manufacturing
system had to cope with (such as change in the output ‘volume’, or changes in the
product ‘mix’).
It seems worthwhile mentioning that in the literature, manufacturing flexibility 'types' and 'dimensions' are used interchangeably. Here we use types of flexibility (for flexibility of elements or activities), while dimensions express measuring units or metrics.
The primary role of flexibility in any system is its adaptation to changing
environmental conditions and process requirements. The system adaptation to
foreseen and unforeseen changes can be provided at different stages (design,
planning or operation) and through different means (hardware or software).
The capability needed to cope with failures at the machine level is a quick tool change or repair. To cope with changes in requirements, a quick changeover is required (at machine and plant level), coupled with process and transfer variety. At the plant level, the answer is alternative machines with a variety of operations.
According to the above, two fundamental system characteristics are needed to
cope with changes:
1. Short changeover time (set-up)
2. Variety (operations and transfer)
Flexibility dimensions/metrics
Two flexibility dimensions or metrics, ‘response’ and ‘range’, measure the above
characteristics. Slack (1987) suggested two primary dimensions of flexibility in any
system: ‘range’ of states and ‘time’ (or cost) to move from one state to another.
Browne et al. (1984) indicated that these two primary dimensions originate from 'machine flexibility' measures. Response represents machine set-up duration, while range represents the operations' variety of a machine, its versatility. Olhager and West (2002) added a third dimension, 'distension'. They defined it as the invested
West (2002) added a third dimension ‘distension’. They defined it as the invested
effort/time/cost for enhancing the current variety of operations and transfer, if
needed, in order to cope with a given change that currently is outside the action
space. Summarizing the above:
Range measures the current variety of alternatives—action space—for coping with
given changes.
Response is the preparation time/cost for coping with a change within the current
action space.
Distension is the invested effort/time/cost for enhancing the current action space if
needed thus enabling the element or activity flexibility to accommodate a given
change that currently is outside the action space.
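As a compact illustration only, the three dimensions can be recorded for any element or activity; the field names and units in this sketch are ours, not a standard from the flexibility literature.

# Sketch: the three flexibility dimensions as a simple record (illustrative).
from dataclasses import dataclass

@dataclass
class FlexibilityDimensions:
    range_size: int           # size of the current action space, e.g. number of operations
    response_time_h: float    # preparation (set-up) time to act within the action space
    distension_cost_h: float  # effort to enlarge the action space for an out-of-space change

# Example: a machine performing 7 operation types, with a 0.5 h set-up,
# where adding a new operation type would cost roughly 40 h of invested effort.
machine_flexibility = FlexibilityDimensions(range_size=7, response_time_h=0.5,
                                            distension_cost_h=40.0)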
Let us now present the flexibility types in their bottom up hierarchical order.
Basic flexibility types
Machine flexibility is 'ease of making a change for producing a given set of part types'. It is related to the set of different tasks the machine is capable of performing (its operation set), representing its 'range' dimension, also called 'versatility'. The operation set may also include the relative time efficiency with which the machine is capable of processing the operations (Brill and Mandelbaum 1989).
Machine flexibility is also related to the duration of its preparation tasks (set-up),
e.g. part loading, software accessibility, fault diagnostics, representing its ‘re-
sponse’ dimension. The ‘response’ dimension of machine flexibility may separately
consider various preparation times such as the time needed to change tools in a tool
magazine; the positioning time of a tool; and the time necessary to mount new
fixtures, which may be different for each part type (Chandra and Tombak 1992).
Machine flexibility provides adaptation to operations variety and copes with failures as well as with changeovers from job to job. This flexibility comprises all three dimensions: set-up time (response), variety of operations (range) and, eventually, the invested effort for enhancing the current variety of operations (distension).

Production managers consider machine flexibility one of the most important flexibility types (Slack 1983). Its impact is strong at both the off-line and on-line decision stages. Off-line, its operations variety, its versatility, affects the decisions at the pre-release planning level; the versatility of the machines improves the balancing and batching options. On-line, machine flexibility affects the decisions that follow failures/changes, such as re-routing of parts to other machines.
Labour flexibility was defined by Koste and Malhotra as 'number and variety of tasks/operations a worker can execute without incurring high transitions penalty, or large changes in performance'.
Actually, this is an individual, human-related flexibility, resembling machine flexibility. We may also define a human flexibility at a system level: team flexibility. Barad (1998) defined it as 'the expected number of individuals in a team capable of actively participating in the performance of any task within a given set'.
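Assuming each task in the given set is equally likely to be required, this definition can be written as a simple expectation; the notation below is ours, introduced only for illustration:
\[
  F_{team} = \frac{1}{|T|} \sum_{t \in T} \sum_{w \in W} x_{wt},
\]
where $T$ is the given set of tasks, $W$ is the set of team members, and $x_{wt} = 1$ if member $w$ is capable of actively participating in task $t$ (0 otherwise).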
Material Handling flexibility is defined as ‘the ability of material handling
systems to move different part types effectively through the manufacturing facility,
including loading and unloading of parts, inter-machine transportation and storage
of parts under various conditions of the manufacturing facility’. Hence, it concerns
attributes of the transporting devices, the transporting paths, the layout design as
well as the storage system (local and central).
It is worthwhile to mention that for measuring purposes ‘material handling’
flexibility is ill-defined. Although classified as a ‘basic’ flexibility, actually it is a
composite flexibility that should be decomposed to be measured. Barad and Nof
(1997) proposed to decompose material handling flexibility into ‘transporting
device’ and ‘transporting network’. A specific transporting device pertaining to the
material handling system is conceptually similar to a processing machine. In other
words it can also be measured in the above three dimensions. Its ‘range’ dimension
is expressed by the set of tasks it can accomplish; the ‘response’ dimension is
represented by its preparation (setup) time while the ‘distension’ dimension is the
invested effort to increase the current set of tasks. However, in contrast to a pro-
cessing machine, a decision that concerns a transporting device has to take into
account its travelling space limitations as well, i.e. the transporting network flexi-
bility. To measure the range dimension of material handling flexibility (actually that of the transporting network), Gupta and Somers (1996) proposed a ratio, namely the number of transporting paths in the system to the number of paths in a 'universal' linking network connecting every pair of machines in the system. To consider the transporting paths in terms of a set in a universal linking network, as suggested above, we have to assume a given layout.
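Assuming an undirected network over n machines (for directed paths the denominator would be n(n−1)), this ratio can be written in our own notation as:
\[
  F_{network}^{range} = \frac{|P|}{n(n-1)/2}, \qquad 0 \le F_{network}^{range} \le 1,
\]
where $P$ is the set of transporting paths present in the system and $n(n-1)/2$ is the number of paths in the 'universal' linking network connecting every pair of machines.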
Operation flexibility is the ability to interchange the ordering of several opera-
tions on each part while complying with the design restrictions. Buzacott (1982)
pointed out that if control costs are ignored, any restrictions on the options available
at the operational level are disadvantageous. The more variety available to the
controller, the less will be the effect of disturbances on system performance.

System flexibility types


The system flexibilities are composites of the basic flexibilities.
Process flexibility (Browne et al. 1984), alias job flexibility (Buzacott 1982), alias mix flexibility (Gerwin 1993), represents the variety of parts that can be processed within a mix (with minor set-ups); as such, it is dependent on the flexibility of the machines in the system. Process flexibility refers to the ability to produce different part types at the same time. It is useful for reducing batch sizes, work in process and inventory costs (Sethi and Sethi 1990). Process flexibility is also dependent on the range dimension of the transporting devices.
Routing flexibility is the capability of processing a part through varying routes or,
in other words, through using different machines. Hence, for a given number of
machines in the system (machine set size), routing flexibility (similarly to process
flexibility) will increase with the individual range flexibility of the machines in the set.
By their definitions, similar system processing capabilities are required by both
the process and the routing flexibilities. Most of the definitions of process flexibility
in the literature refer to routing flexibility as well. The definitions of these flexi-
bilities also imply that for measuring and comparison purposes, mainly the ‘range’
flexibility dimension of the machines in the set is to be taken into account; even-
tually the response dimension should not exceed a required threshold (minor set-
ups). To measure both process and routing flexibilities, a simple composite range
measure combining the ‘range’ flexibility (versatility) of the individual machines in
the system (or cell), was suggested: ‘the expected number of machines capable of
processing an operation’ (Barad 1992).
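A minimal sketch of this composite range measure, assuming a binary machine–operation capability matrix and equally likely operations; the matrix below is illustrative, not taken from Barad (1992).

# Sketch: 'expected number of machines capable of processing an operation'.
# Rows = machines, columns = operations; 1 means the machine can process
# the operation. The capability matrix is an illustrative placeholder.
import numpy as np

capability = np.array([
    [1, 1, 0, 1],   # machine 1 can process operations 1, 2 and 4
    [0, 1, 1, 1],   # machine 2 can process operations 2, 3 and 4
    [1, 0, 1, 0],   # machine 3 can process operations 1 and 3
])

machines_per_operation = capability.sum(axis=0)    # capable machines per operation
expected_machines = machines_per_operation.mean()  # operations assumed equally likely
print(machines_per_operation)  # -> [2 2 2 2]
print(expected_machines)       # -> 2.0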
It seems important to stress that these two flexibility types are meant to be used in
different situations. The rationale of process flexibility is to enable loading of a great
variety of parts in the same mix and is thus associated with scheduling (off-line)
decisions meant to cope with a diversity of products, expressing various priority
requirements. In contrast, the rationale of routing flexibility is to cope (through
re-routing control) with disrupting events (such as machine breakdowns). Routing
flexibility is associated with control decisions intended to enable a smooth flow of
parts through a system (whose environment is featured by a diversity of short-term
disturbances and changes in requirements). To realize it, not only versatile machines
are needed but also material handling flexibility (of transporting paths and devices).
In the literature, the above rationale is not always well understood.
The beneficial effects of process flexibility and routing flexibility are similar, i.e. short throughput time of parts and low manufacturing costs stemming from low work-in-process and high machine utilization.
Product flexibility typically expresses the time/cost efforts to add new parts to the system or modify existing parts (response dimension). Sethi and Sethi (1990) assert that product flexibility is very important from a market perspective. It measures how quickly the firm responds to market changes.
Volume flexibility is the capability to produce economically at varying pro-
duction volumes including low production volumes. When interpreted as the
minimal economic production volume, volume flexibility is dependent on the
system set-up, hence again on the response dimension. It may be measured in terms
of the manufacturing costs per unit, namely, A/V + C, where A and C are,
respectively, the fixed (set-up related) and variable costs, while V is the production
volume.
Volume flexibility is also defined as the ‘possible range of production volumes at
which the firm can run profitably’ (Sethi and Sethi 1990).
Expansion flexibility has been defined as ‘time/cost efforts necessary to increase/
change the existing manufacturing capacity by a given percentage’ (response
dimension). It enables expanding production gradually. Expansion flexibility is
concerned with modifications of the manufacturing system capacity, enabling its
adaptation to perceived future changes in product demand. It reduces the time and cost of manufacturing new products.
Aggregate flexibility types
The aggregate flexibilities represent the combined attributes of a manufacturing
system technology, intended to cope with the diversity of its exogenous environ-
ment. As such, this environment represents strategic requirements imposed on a
manufacturing system.
Program flexibility is the system capability for unattended production as needed for special (environmental) manufacturing conditions where, because of physical or other constraints, human collaboration is not possible. Sethi and Sethi defined it as the 'ability of a system to run virtually unattended for long enough periods'. It is
especially important for automated manufacturing systems for processing parts
during night shifts with no working personnel.
Production flexibility represents the universe of part types that can be manu-
factured by the system, without adding major capital equipment (Sethi and Sethi
1990). Although this measure can be considered an aggregate measure (at the plant
level), it is still an inherent measure and not related to specific requirements of an
exogenous environment. Hence, production flexibility is a potential flexibility,
appropriate for a strategic approach intended to cope with external customers,
requesting a high diversity of ‘regular’ parts.
The ‘range’ dimension seems to be the more important dimension for measuring
this flexibility type.
Marketing flexibility alias new product flexibility has been defined as time/cost
required for introducing a new product. In contrast to ‘production’ flexibility,
‘marketing’ flexibility is intended to cope with strategic decisions in an external
environment that requests a diversity of ‘special’ product types. The external
customers are bound to appreciate frequent launching of new and more attractive
products and thus, a beneficial effect of marketing flexibility would be an improved
product demand. In the literature, the attempts to assess marketing flexibility typ-
ically focus on the investment costs and not on benefits such as the above.
By its definition, the capabilities involved in achieving marketing flexibility can
be decomposed into ‘new product design’ activities and activities related to mod-
ifications of the processing resources and eventually their re-configuration. These
activities are also associated with expansion flexibility and product flexibility.
A summary of the flexibility types and their measuring dimensions is presented
in Table 3.1.
Table 3.1 Flexibility types by hierarchical levels and measuring dimensions

Basic flexibilities
– Machine. Range: variety of operations (versatility). Response/distension: set-up for a change in operation / set-up for enhancing the variety of operations
– Labour (individual). Range: variety of tasks/operations a worker can execute without high transition penalties
– Labour (team). Range: the expected number of individuals in a team capable of actively participating in performing any task within a given set
– Transporting device (material handling). Range: variety of operations. Response/distension: set-up for a change in operation / set-up for enhancing the variety of operations
– Transporting network (material handling). Range: variety of transporting paths. Response/distension: set-up for enhancing the variety of transporting paths
– Operation. Range: variety of interchanging order of operations. Response/distension: set-up for enhancing the variety of interchanging order of operations

System flexibilities
– Process. Range: the expected number of machines capable of processing an operation. Response/distension: set-up for increasing the number of machines capable of processing an operation
– Routing. Range and response/distension: the same as for process flexibility
– Product. Response/distension: time/cost to modify or add new parts to the system
– Volume. Response/distension: system set-up
– Expansion. Response/distension: effort to increase/change manufacturing capacity by a given percentage

Aggregate flexibilities
– Program. Process and routing range metrics
– Production. Process range metrics and product response metric
– Marketing. Product, volume and expansion response metrics

Impact of manufacturing flexibility on performance


In the literature, we find empirical research that explored the influence of manufacturing flexibility on firm performance, such as Vokurka and O'Leary-Kelly (2000) and Zhang et al. (2003), associating firm performance with achieving customer satisfaction. Their framework differentiated between competence and capability. In their view, flexible competence is provided by machine, labour, material handling and routing flexibilities. Flexible capability, needed to satisfy customers, is represented by volume flexibility and mix flexibility. These two flexibility types directly affect customer satisfaction.
According to Vokurka and O'Leary-Kelly, the results differ among companies. Increasing manufacturing flexibility will not automatically increase a firm's performance; the outcome depends on the active environmental factors. As manufacturing flexibility is composed of many flexibility types, these authors proposed a contingency perspective. This topic has yielded different results for different researchers and needs much further investigation.
Let us now examine another area where flexibility may have a beneficial effect.

3.3 Flexibility in Logistic Systems

In contrast to flexibility in manufacturing systems, which has been widely researched, research on flexibility in logistics, distribution and marketing systems has been conspicuous by its absence (Jensen 1997).
Flexibility research in decision theory has been carried out with regard to the
value of flexibility in sequential decisions under uncertainty (see e.g. Mandelbaum
and Buzacott 1990). A ‘flexible decision’ is amendable in the future whereas a
non-flexible decision is irreversible. Hence, flexibility may reduce the risk of an
early commitment to an alternative whose value is uncertain. Benjaafar and Ramakrishnan (1996) defined flexibility in multi-stage decision making as 'the
degree of future decision making freedom an action leaves, once it is implemented’.
Thus, flexibility allows decision makers to alter their position upon receipt of new
information. A flexible approach to multi-stage decision making is justified in an
environment where future information on values and costs is expected.
This section reviews Barad and Even-Sapir (2003). In their paper, the authors
emphasized the importance of flexibility in logistic systems and proposed a
framework for research, focusing on a suggested flexibility oriented type, which
they denoted trans-routing flexibility. It combined principles of decision flexibility,
transshipment and routing flexibility. In the reviewed paper, trans-routing flexibility
involves movement of stock between locations at the same echelon level where
physical distances between the demand and the supply locations are small. This
capability can be interpreted in the sense of physical postponement of assigning the
stock to a specific location. The scenario is appropriate to any inventory situation
where there is a common distribution center and several closely located similar base
demands with trans-routing capabilities and shared on-line information on their
inventory status. For instance, blood supply can be improved by having such
trans-routing capabilities among hospitals in the same region.
Trans-routing flexibility can be measured on each of the two primary dimen-
sions: response and range. On a response dimension it is measured as the transfer
time between two locations at the same echelon. The mathematical model presented
in the paper assumes that this value is negligible. The focus is on the range
dimension where trans-routing is measured by u, the number of transshipment links
per end user at the echelon level: u = 0, 1,… N − 1, where N is the number of end
users. In a rigid system u = 0. Figure 3.2 (reproduced from the original paper)
depicts such a system with N = 5.
Modeling the benefits of trans-routing flexibility
The model presented in the original paper stemmed from a military logistics scenario. The problem focused on the logistics performance of combat units (end users), including movement of stock among them (end users at the same echelon level) and the logistic relations with their common supply location.
As conventional cost-oriented logistic performance measures may not always be appropriate, the paper investigated the performance of a logistic system possessing trans-routing flexibility, focusing on flexibility's benefits exclusive of cost considerations. As an advantageous alternative, a customer-oriented logistic performance measure, logistic dependability, was suggested; it emphasizes quick-response delivery through reduced supply uncertainty during a replenishment cycle.
In a reliability context, dependability is the probability that an item successfully completes a mission, given its reliability and assuming that if a failure occurs it is repaired in due time. In the reviewed paper, logistic dependability was
defined as ‘the probability that there is no supply shortage until the end of a
replenishment cycle’.
To match dependability in the reliability literature, a simplified, conceptual equation calculated logistic dependability (DL) as follows:

DL = Pr + (1 − Pr) · Ps|u

where Pr is the probability that demand does not exceed the inventory on hand during a cycle at all the echelon end users (N) and Ps|u is the probability of overcoming a shortage for a given trans-routing flexibility, measured by u, u = 0, 1, …, N − 1.
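To make the roles of Pr and Ps|u concrete, the sketch below estimates both terms, and hence DL, by Monte Carlo simulation for N end users arranged in a ring; the demand distribution, stock levels and link topology are illustrative assumptions and not the model of the reviewed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_dependability(n_users=5, u=1, stock=10.0, n_cycles=100_000):
    """Monte Carlo estimate of DL = Pr + (1 - Pr) * Ps|u (illustrative assumptions:
    i.i.d. exponential cycle demands, identical stock per user, negligible transfer
    time, and u transshipment links to the nearest neighbours on a ring)."""
    demand = rng.exponential(scale=8.0, size=(n_cycles, n_users))
    surplus = stock - demand                    # per-user slack in each cycle
    no_shortage = (surplus >= 0).all(axis=1)    # the event whose probability is Pr
    covered = np.ones(n_cycles, dtype=bool)
    for i in range(n_users):
        links = [(i + k) % n_users for k in range(1, u + 1)]  # i's transshipment links
        pool = surplus[:, i] + np.clip(surplus[:, links], 0, None).sum(axis=1)
        covered &= pool >= 0                    # can a shortage at user i be overcome?
    return no_shortage.mean(), covered.mean()   # estimates of Pr and DL

for u in range(5):
    pr, dl = logistic_dependability(u=u)
    print(f"u={u}: Pr ~ {pr:.3f}, DL ~ {dl:.3f}")
```

As u grows from 0 to N − 1, the DL estimate rises from Pr towards the dependability of a fully pooled system, illustrating the beneficial effect of trans-routing flexibility.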
The reviewed paper investigated the beneficial effects of trans-routing flexibility
on logistic dependability. To get a better understanding of the logistics decision
problems and their eventual solutions, a multifactor statistical experiment was
designed, carried out and analyzed. The experiment is detailed in Part II of this
book, Chap. 4.
The outlook of the next section is very different. It depicts flexibility in terms of
the flexibility of a generic object and uses tools related to information systems.

Fig. 3.2 A logistic system with varying levels of trans-routing flexibility: u = 1, u = 2, u = 4 (= N − 1) (reproduced from Barad and Even-Sapir 2003)
3.4 Flexibility of Generic Objects

This section presents a framework for analyzing flexibility of generic objects uti-
lizing cloud diagrams. It reviews a paper (Fitzgerald et al. 2009) written by authors
who have been working in the domain of flexibility in their respective different
areas: information systems and manufacturing systems. The starting point was work
undertaken by one of the authors, which portrayed some information systems
flexibility concepts in diagrammatic form, known as cloud diagrams. First, the
potential flexibility of a designed object was portrayed and then its actual flexibility
expressing its ability to adapt to changes. The visual nature of this representation
was regarded as useful and original. The framework is presented here to illustrate
these ideas. In the original paper examples of its applications in a manufacturing
system and in an information system are also displayed.

3.4.1 Clouds Representation

One of the key aspects is that the framework operates at the level of a generic object, which is, or is desired to be, flexible. The object can be physical or
logical and the analysis is performed with respect to a given achievement. For
instance, if the object is a software product the desired achievement may be its
operation on any platform/operating system. The framework is divided into two
phases. In the first phase the object is analyzed in terms of its design characteristics
which are identified as potential flexibility capabilities, before they have been tested
in use under change. The second phase of the framework looks at the object under
change and examines its behavior in terms of its ability to adapt to new circum-
stances. This is its actual flexibility, which may be evaluated through how well it keeps in line with the desired achievement.
Figure 3.3 depicts Phase 1 of the framework. The object is the entity under
investigation and is characterized by a number of attributes that determine its
flexibility. Examples of a generic object can be a car, a computer, a piece of
software, a manufacturing system or an information system. The object is analyzed
with a desired achievement in mind. For example, if the object is a manufacturing system, the desired achievement expressed by the managers of the manufacturing company (the stakeholders) can be 'on-time delivery' of a major product.
Characterizing a generic object (such as a product or a system) as physical or
logical or both helps in narrowing down the flexibility aspects, which need to be
analyzed in relation to the object's desired achievement. A most important tool in the
analysis here is decomposition. To analyze the potential flexibility of an object we
decompose its flexibility into subsets of flexibility aspects.
An aspect is determined by the focus of interest and by the analyst's selected perspective. For example, for the manufacturing system whose desired achievement is
‘on-time delivery’, its flexibility can be decomposed into four subsets of flexibility
aspects: 'human resources', 'information/analysis/decision making', 'communication links/relationships' and 'manufacturing processes'.

Fig. 3.3 Analysis of the flexibility of an object [Phase 1—designed object] (reproduced from Fitzgerald et al. 2009)
Each flexibility aspect is viewed within a flexibility measurable domain: ‘range’,
‘response’ or ‘distension’. Range represents the scope of an aspect and the action
space within which the object may be used. The response is the preparation time
and/or cost involved for the object to cope with a change within the existing action
space, while distension is the elasticity an object possesses for extending its action
space.
Additionally, the change is expected to happen at a low or high frequency and to have a long or a short duration. Apart from the change itself, the effect of the change on the desired achievement is the most important. In order to investigate this effect we compare the desired achievement after the change with the one measured under normal conditions of use. The change effect is relevant to the flexibility aspects of the object and its flexibility measurement domain: the pertinent range, response and possible distension.
Figure 3.4 depicts Phase 2 of the framework. In contrast to Phase 1, the object is now in use and we analyze its behavior under change.
The change is triggered by a change agent who can be a person, normally the
user of the object, or an environmental change (event/action such as a machine
failure or addition of a new product). The change can be unexpected such as
a physical disaster or planned as the decision to sell the object in a new market.
Fig. 3.4 Analysis of the flexibility of an object [Phase 2—object in use] (reproduced from Fitzgerald et al. 2009)

The magnitude of the change can be minor or significant according to its impact on
the object’s behavior.

3.4.2 Flexibility Aspects and Analysis of Several Generic Objects

Flexibility aspects
As flexibility is a complex concept involving a combination of factors, which may
be viewed from a variety of aspects or perspectives, we decompose the flexibility of
an object into subsets of flexibility aspects. An aspect is determined by the focus of interest and by the analyst's selected perspective. We use the flexibility aspects of generic objects in an analytical way, showing that they can be applied as a methodological tool that simplifies the assessment of flexibility and improves the understanding of this complex concept and the variety of its interactions.
We emphasize that flexibility is driven by changes, here the changes that a
generic object may be exposed to. Some changes are anticipated while others are
not, some are not even envisaged. We deem that it is easier to visualize types of
changes by associating them with specific flexibility aspects (see also Fitzgerald
and Siddiqui 2002).
The flexibility aspects of five generic objects appear in Table 3.2 (reproduced from the original paper). The objects were approximately ordered on a complexity scale:
method, product, strategy, process and system. Complexity of an object was con-
ceptualized as ‘many elements with relatively simple interrelations’ (see e.g.
Cooper et al. 1992). It was measured in a simplified way, through the number of its
flexibility aspects. The flexibility aspects were defined in a broad way so that some
are shared by several generic objects in the set while others are specific to one
generic object.
Let us examine the flexibility aspects of the generic objects listed in Table 3.2
considering the type of changes they can cope with.
Method and product
A method here is a systematic way for performing an activity, implying an orderly
logical arrangement.
A product/service is a usable result, an artifact that has been created by someone
or by some process.
Shared flexibility aspects
These two generic objects share two flexibility aspects: ‘usage’ and ‘design/
architecture’.
The implied change associated with ‘usage’ flexibility is a change in application
or a change in user. Usage flexibility can be measured on two dimensions: range
and response. On a range dimension it expresses the variety of possible applica-
tions or the variety of possible users of a method or a product. On a response
dimension it expresses the preparation time or effort needed to move from one
existing application to another when both applications are included in the action
space.
The ‘design’ flexibility perspective concerns quick/easy adaptation of the
existing product or method to new requirements not included in the action space,
leading to a new (similar) product or to an enhanced method. We dare say that this
flexibility aspect is dependent on the object's structure/architecture. Its measuring dimension, expressing the adaptation effort (time/cost) for enhancing the action space, is distension.
Product specific flexibility aspects
The 'maintenance' flexibility aspect is concerned with a product's quick/easy maintenance. We suggest measuring it on a response dimension, whose magnitude
is dependent on the product structure/architecture. The implied change is related to
the product functioning status. Similarly to maintenance, ‘transfer’ of a product
implying a change in location, can be measured on a response dimension whose
magnitude is also dependent on the product structure/architecture. It is worthwhile
mentioning that this flexibility aspect is mainly relevant for a physical product.
Strategy, process and system
Strategy here is a long term and systematic action plan for achieving a goal.
Process is defined here as a set of interrelated activities, which transform inputs
into outputs.
A system is a more complex generic object. Among the many and dissimilar
definitions in the literature we selected the following: ‘an organized assembly of
resources and procedures united and regulated by interaction or interdependence to accomplish a set of specific functions'. In Table 3.2 we used this definition to encapsulate processes in systems.

Table 3.2 Generic objects and their flexibility aspects—shared or specific (reproduced from Fitzgerald et al. 2009)

– Method: usage; design/architecture
– Product/service: usage; design/architecture; maintenance; transfer
– Strategy: HR; information/analysis/decision making; communication links/relationships; scale
– Process: HR; information/analysis/decision making; communication links/relationships; order of activities' execution; input sources/suppliers; tools
– System: HR; information/analysis/decision making; communication links/relationships; equipment configuration/network; processes
Shared flexibility aspects
Strategy, process and system share three flexibility aspects: ‘Human Resources (HR)’,
‘information/analysis/decision making’ and ‘communication links/relationships’.
From a strategy point of view, flexibility of human resources is related to
potential technological changes to be eventually reflected in a strategic policy
encouraging enhancement of the current skill range of employees. To envisage such technological changes, a versatile strategic team, i.e. one knowledgeable in a variety of domains, is needed.
From a process point of view flexibility of human resources mainly concerns
team versatility, determined by the individual versatility of the team members and
by the team size (Barad 1998). Team versatility may be assessed with respect to a
universal set of desired skills as relevant to the process under consideration. We
may differentiate between an operational process, such as a manufacturing process
with eventual repetitive activities, and a design process with non-repetitive activi-
ties. For an operational process the universal set of skills will comprise skills related
to the current operational methods/tools. These skills will enable coping with
anticipated changes such as absenteeism, job rotation and eventually with tools’
failures. If the process under consideration is a design process the appropriate
universal set of skills will comprise a variety of high level expertise. Interaction
between designers possessing such expertise variety is likely to enhance team
creativity and synergy leading to an innovative process output, i.e. an innovative
new product. The changes that the versatile design team has to cope with are
associated with the design requirements of the new product as compared to the
design requirements of a (similar) previous product.
From a system point of view the flexibility of human resources is expressed by
the current versatility of its employees with respect to a given set of functions. This
versatility is needed for eventual re-deployment following changes in demand type
(e.g. different mixes of products). Another feature of the HR flexibility at a system
level is related to flexible employment (by hours, weeks and months) expected to
accommodate fluctuations in the required demand volume (overall output per time
unit).
Flexibility of ‘Information/analysis/decision making’ can be measured on a
range dimension as the variety of information contents and the variety of analysis
tools available to the decision maker. On a response dimension it can be measured
as the time needed for analysis and decision making following new information
related to a change that occurred. The change can be at the strategic level such as
the loss of a major customer, or at a process level such as a tool failure, or at a
system level such as a change in the due date of an order. Accordingly, the analysis
and decision making may be carried out respectively by a versatile strategic team or
by a versatile process team or by a versatile individual/team responsible for the
short/medium term system planning.
The flexibility of communication links/relationships can be measured on a range
dimension expressing the variety of these communication links. Again we may
consider a variety of links at a strategic level such as international links and other
types of links appropriate for a process or a system.
Specific flexibility aspects
‘Scale’ is suggested here as a specific flexibility aspect of strategy to be measured
on a range dimension.
We deem that an important specific flexibility aspect of a process is its capability
to change the ‘order of its activities’ execution’. From this perspective, a serial
process with a fixed execution order of its activities is a most rigid one. Other
specific flexibility aspects of a process are the versatility of its ‘input sources/
suppliers’ and the versatility of its ‘tools’. All these aspects can be measured on a
range dimension.
We suggest two specific flexibility aspects of a system: ‘equipment
configuration/network’ and ‘processes’ (whose respective flexibility aspects were
considered above). A flexible equipment configuration/network, capable of adapting the existing configuration to the manufacturing/development of new products, is important when new products are frequently introduced (see e.g. Mehrabi et al. 2002). Range and distension are the appropriate flexibility metrics of this aspect.
Flexibility integrative evaluation
To assess the overall flexibility of any generic object in an integrative way, the
relative importance of all its flexibility aspects has first to be determined.
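One possible way to operationalize such an integrative assessment is a weighted sum; the aspect scores and importance weights below are invented for illustration, and the framework itself does not prescribe this particular aggregation rule.

```python
# Hypothetical flexibility aspect scores (0-1) for a process-type generic object
scores = {
    "human resources": 0.7,
    "information/analysis/decision making": 0.5,
    "communication links/relationships": 0.6,
    "order of activities' execution": 0.3,
    "input sources/suppliers": 0.8,
    "tools": 0.4,
}
# Relative importance weights, e.g. elicited from stakeholders (they sum to 1)
weights = {
    "human resources": 0.25,
    "information/analysis/decision making": 0.20,
    "communication links/relationships": 0.15,
    "order of activities' execution": 0.20,
    "input sources/suppliers": 0.10,
    "tools": 0.10,
}
overall = sum(weights[aspect] * scores[aspect] for aspect in scores)
print(f"Overall flexibility score: {overall:.3f}")  # 0.545 for these inputs
```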

3.4.3 Context Perspective—Information, Manufacturing (or Service)

Are the defined flexibility aspects of generic objects affected by context? Is a
manufacturing system similar to an information system? Is a manufactured physical
product similar to a software product?
It seems there is a noteworthy difference between a manufacturing system and an
information system. In contrast to a manufacturing system, which is typically
embedded in a manufacturing enterprise, an information system can be embedded
not only in a manufacturing enterprise or in a service enterprise (such as healthcare)
but it can be shared by several enterprises. In the literature, this issue is not always
clearly presented.
Frequently, an information system is viewed as a complex multifunctional
product, which may explain its ambivalent relationship to its environment, its
embedding enterprise. The emphasis is on its design/development stage. Its desired achievements, which should be measurable, may even prove difficult to define. By contrast, under no circumstance can we envisage a manufacturing system
as a multifunctional product. The emphasis is typically on how well it executes its
principal function (in use) which is to manufacture products. It seems easier to
define desired achievements of a manufacturing system. In many cases they are
measured in time or cost units. Thus, for evaluating the flexibility of a manufac-
turing system, we may measure the time/cost of a desired achievement under
normal conditions of use and compare it with the value obtained when the system
has to function under altered conditions, e.g. following a change.
Such comparisons are more difficult to realize in an information system,
implying that evaluating the flexibility of an information system seems to be a more
difficult task. Fitzgerald (1990) has proposed a technique, 'flexibility analysis', which is undertaken when an information system is developed. It seeks to identify potential changes, the respective probabilities of their actually occurring, and the cost of accommodating them in the current design. This information is used to
support decision making for improving (or not) the flexibility of the system under
consideration during its development/design stage.
The simplified conceptualized complexity in the previous section disregarded
subtlety. However, when examining an information system we may realize that it is
more complex than a manufacturing system because it has properties that are
associated with subtlety. The interrelations between its elements are not simple. An
information system comprises software components, which are products, task
scheduling representing a specific process feature, a specific hardware configuration
representing a specific system feature and development elements eventually com-
mon to systems and processes.
This assertion is supported by Kruchten’s paper (1995) on software architecture.
He considers an information system from four different angles: logical, process,
physical and environmental. The logical view supports the services or functional
requirements an information system has to provide to its end users, i.e. it is a
product view. The process view is concerned with its performances. Process loads
affect message flow and hence performance time. Concurrent activities are useful
for improving performances under changing loads. The physical view maps the
various elements identified in the logical, process and environmental views into the
network of computers. Different physical configurations are used: some for
development and testing, others for system deployment at various sites or for
different customers.
As mentioned at the beginning of the section, the original paper also presented
applications of the conceptual frame to a manufacturing system and to an infor-
mation system. A concise comparison between them shows that the flexibility
metrics considered were appropriate to both systems. However, contrary to the
manufacturing system, decomposition of the information system into flexibility
aspects could not be carried out according to the proposed frame. To be analyzed,
the information system had to be first decomposed into functional building blocks.
Also, consensus on a desired and measurable achievement of the information
system could not be reached.
The paper showed that in the information context flexibility was loosely defined, and that future research would be useful for designers of new systems in cases where flexibility is an important feature of a potential system.
3.5 Flexibility in Supply Chains

A supply chain is a collection of suppliers, component manufacturers, end-product
manufacturers, logistics providers, and after-sales service providers, intended to
leverage their strategic positioning and to improve their operating efficiency
(Bowersox et al. 2002).
The causes for the emergence of supply chains are manifold: global competition; customization, where the emphasis is on tailoring a product to the exact requirements of a customer; rapid innovations that have drastically reduced product lifecycles, thus forcing manufacturers to build closer links with their suppliers and distributors; and advances in information technology, enabling partners to exchange information and to plan jointly to form a seamless supply chain.
Intense competition and very high levels of uncertainties characterize the current
markets. The added dimension of economic downturn is putting serious stress on
individual links. Given such difficult circumstances, a methodology for studying the industry's needs and for finding pathways for companies to address these needs becomes very important.
Why flexibility?
Flexibility is an expression of effectiveness and efficiency in an environment
characterized by diversity and uncertainty. From a performance perspective, flex-
ibility is a powerful ingredient that enables stable performances under changing
conditions (see e.g. Swamidass and Newell 1987; De Meyer et al. 1989; Gerwin
1993). As most of the supply chain characteristics exhibit changes and uncertainties
(Bowersox et al. 2002) flexibility in these systems may well represent a potential
source of improved efficiency. Hence, for long-term sustainability, each firm in a
supply chain, and the chain as a whole, need flexibility.
In the 90s, the importance of flexibility started to get its main recognition from a strategic perspective. Accordingly, later frameworks of flexibility research were built top-down, i.e. from strategies to resources, focusing on the strategic importance of flexibility (Olhager and West 2002). Selected flexibility types became directly linked to competitive priorities of enterprises. Some authors, e.g. Narasimhan and Jayaram (1998), considered flexibility as such a competitive priority or, in other words, regarded it as a 'global' concept. However, the contingency view of flexibility postulates that different flexibility components are important to different strategic components (see also Slack 1987; Gupta and Sommers 1996).
In the top-down flexibility studies, an approach called ‘flexibility tunnel’ was
used (Olhager et al. 1991). Corporate objectives (growth, survival, profit) or
competitive priorities were linked to first order flexibilities (featured by high
market/customer visibility) such as delivery, volume and new products. In most of
these studies, flexibility types at a lower level such as routing (denoted ‘system
flexibilities' in the semantics of the bottom-up perspective) were not specifically
considered. Instead, the first order flexibilities were directly linked to resources (e.g.
process, labor and suppliers), which were considered ‘providers’ or ‘enablers’ of
flexibility. Top-down flexibility researchers did not bother to discuss flexibility of elements.
Otto and Kotzab (2003) examined the Supply Chain Management (SCM) goals
and performance metrics through six perspectives: System Dynamics, Operations
Research, Logistics, Marketing, Organization and Strategy. Their argument was
that each perspective perceived the nature of SCM in a different way and hence
each had different goals, standard problems and solutions. From a strategic per-
spective, SCM is perceived as an arrangement of competencies and profit, which
matches the fundamental competitive priorities in the manufacturing strategies. It is
worthy of note that ‘quality’ does not explicitly figure among the goal oriented
metrics of Supply Chains. There are two explanations for this: either quality is
nowadays taken for granted or quality is implicitly considered as a driver of cus-
tomer satisfaction.
As the competitive priorities in Supply Chain Management strategies are con-
sistent with the competitive priorities in manufacturing, Barad (2012) considered a
set of such priorities. It comprises the typical competitive advantages found in the
manufacturing strategy literature, excluding quality and price: delivery (fast,
dependable), product diversity and new products. She viewed flexibility in supply
chains in terms of an ordered top-down perspective. The flexibility framework she
adopted was Quality Function Deployment (QFD), originally a product planning
methodology expressing the voice of the customer. The QFD top-down framework
for deploying flexibility in supply chains is detailed in Part II of the book, Chap. 6.

3.6 Strategic Flexibility

Strategic flexibility is the capability of an organization to respond to major changes
that take place in its external environment by allocating the resources necessary to
respond to those changes. The organization should be able to identify change
markers so that it can go back to its previous state when the external environmental
change is reversed.
Some definitions and discussions of this topic
Strategic flexibility consists of managerial capabilities related to the goals of the organization (Aaker and Mascarenhas 1984).
In their 1989 paper, De Meyer et al. reported that research teams from INSEAD (Fontainebleau), Boston University and Waseda University (Tokyo) had administered a yearly survey on the manufacturing strategy of the large manufacturers of the three industrialized regions of the world (Europe, the US and Japan). The results of
the Japanese manufacturing futures survey seemed to indicate that an increasing
number of Japanese manufacturers suffered from shortening product life cycles and
increasing market/demand fluctuations. To cope with these changes, production is
required to offer a vast variety of products and designs and to absorb volume fluctuations. The
results of the 1989 Manufacturing Futures Survey indicated that at that time the
average Japanese competitor seemed to be farther down the road in implementing a
flexible strategy and discovering the problems related to it as compared to European
and American competitors.
Strategic flexibility is implicit in the adoption of the marketing concept, because
implementing marketing suggests that firms should change according to the
changes in present and potential customer needs. To achieve this requires options in
strategic decision making so that adaptation takes place. Strategic flexibility can be
an offense as well as a defense mechanism. In other words, a reactive form of
strategic flexibility deals with changes that have already occurred; a preemptive
form of strategic flexibility identifies niches or gaps in the markets and thus wins
the considerable advantage of the first mover. For example, when a new player
enters the market with a new product, it is a major change in the external envi-
ronment of an existing organization. If the organization takes steps after a new
player has entered the market, such as investing more in research and development
(R&D) and/or market research and sales, this is a reactive form of strategic flexi-
bility. If the organization takes these steps before a potential new player has entered
the market, this is a preemptive form of strategic flexibility (Evans 1991; Sanchez
1995). Stalk et al. (1992) emphasized the role of capabilities-based competition, combining scale with flexibility. Companies need very flexible processes enabling them to use the same capabilities to serve different businesses simultaneously. To achieve this they need speed, to respond quickly to changes in customer/market demands, and agility, to adapt simultaneously to different business environments.
Narain et al. (2000) classify the implications of flexibility in manufacturing
systems in terms of Necessary Flexibility (machine, product, labor, material han-
dling, routing and volume) which they consider operational, Sufficient Flexibility
(process, operations, program and material) which they consider tactical, and
Competitive Flexibility (production, expansion and marketing) which they consider strategic.
Strategic flexibility can be applied at two levels: at the level of the firm (its ability to respond and adapt to environmental change) and at the level of the decision makers, for generating and considering new and alternative options in strategic decision-making (Grewal and Tansuhaj 2001). These authors investigated the role of strategic flexibility and market orientation in helping Thai firms manage an economic crisis. The strategic flexibility tools and skills were useful in the surveyed economic crisis. The researchers stressed that in a highly competitive environment, firms' performance after the crisis was positively related to their strategic flexibility and not to their market orientation.
Johnson et al. (2003) presented market oriented strategic flexibility. They
started by reviewing flexibility concepts in terms of operational flexibility (short
term capabilities), tactical flexibility to deal with changes in product design and mix
and strategic flexibility dealing with actions following environmental changes.
Then they linked strategic flexibility with marketing concepts. They define market
oriented strategic flexibility as ‘firm’s intent and capabilities to generate firm
specific real options for configuration and re-configuration of appreciably superior
customer value propositions’. The research work presented by these authors is
deeply rooted in marketing concepts, which are enhanced by linking them to
flexibility concepts.
The strategic flexibility of a firm is its capability to respond quickly to envi-
ronmental changes (Combe and Greenley 2004). The organization has to identify
the changes and quickly alter commitment of resources to new courses of action to
counter the change. The capabilities for strategic flexibility are dynamic capabili-
ties, because they are associated with new resource allocations required to lead or
deal with change. In their article, the authors were also concerned with the pos-
session of the capabilities for different forms of strategic flexibility. The ability to
demonstrate strategic flexibility can increase the value of an organization, as the
business is able to adapt more quickly to change and therefore manage risk better.

References

Aaker DA, Mascarenhas B (1984) The need for strategic flexibility. J Bus Strategy 5(2):74–82
Barad M, Sipper D (1988) Flexibility in manufacturing systems: definitions and Petri net
modelling. Int J Prod Res 26(2):237–248
Barad M (1992) The impact of some flexibility factors on FMSs—a performance evaluation approach. Int J Prod Res 30(11):2587–2602
Barad M, Nof SY (1997) CIM flexibility measures: a review and a framework for analysis and
applicability assessment. Int J Comp Integ M 10(1–4):296–308
Barad M (1998) Flexibility performance measurement systems—a framework for design. In: Neely AD, Waggoner DB (eds) Proceedings of the first international conference on performance measurement, Centre for Business Performance, University of Cambridge, UK, pp 78–85
Barad M, Even-Sapir D (2003) Flexibility in logistic systems—modeling and performance evaluation. Int J Prod Econ 85:155–170
Barad M (2012) A methodology for deploying flexibility in supply chains. IFAC Proc Vol 45
(6):752–757
Benjaafar S, Ramakrishnan R (1996) Modelling, measurement and evaluation of sequencing
flexibility in manufacturing systems. Int J Prod Res 34(5):1195–1220
Bowersox DJ, Closs DJ, Cooper MB (2002) Supply chain logistics management. McGraw-Hill, New York
Brill P, Mandelbaum M (1989) Measures of flexibility in manufacturing systems. Int J Prod Res 27
(5):747–756
Browne J, Dubois D, Rathmill K, Sethi SP, Stecke K (1984) Classification of flexible
manufacturing systems. FMS Magazine 2:114–117
Buzacott JA (1982) The fundamental principles of flexibility in manufacturing system. In:
Proceedings of the 1st international conference on FMSs, Brighton, pp 13–22
Chandra P, Tombak MM (1992) Models for the evaluation of routing and machine flexibility.
Eur J Oper Res 60:156–165
Combe IA, Greenley GE (2004) Capabilities for strategic flexibility: a cognitive content
framework. Eur J Marketing 38(11/12):1456–1480
Cooper WW, Sinha KK, Sullivan RS (1992) Measuring complexity in high-technology
manufacturing: indices for evaluation. Interfaces 22(4):38–48
De Groote X (1994) The flexibility of production processes: a general framework. Manage Sci
40:933–945
De Meyer A, Nakane J, Miller JG, Ferdows K (1989) Flexibility: the next competitive battle. Strategic Manage J 10(2):135–144
De Toni A, Tonchia S (1998) Manufacturing flexibility: a literature review. Int J Prod Res 36
(6):1587–1617
Evans S (1991) Strategic flexibility for high technology manoeuvres: a conceptual framework.
J Manage Stud 28(1):69–89
Fitzgerald G (1990) Achieving flexible information systems: the case of improved analysis. J Inform Technol 5:5–11
Fitzgerald G, Siddiqui FA (2002) Business process reengineering and flexibility. Int J Flex Manuf Sys 14(1):73–86
Fitzgerald G, Barad M, Papazafeiropoulou A, Alaa G (2009) A framework for analyzing flexibility of generic objects. Int J Prod Econ 122(1):329–339
Gerwin D (1993) Manufacturing flexibility: a strategic perspective. Manage Sci 39:395–410
Grewal R, Tansuhaj P (2001) Building organizational capabilities for managing economic crisis: the role of market orientation and strategic flexibility. J Marketing 65:67–80
Gupta YP, Goyal S (1989) Flexibility in the manufacturing system: concepts and measurement. Eur J Oper Res 43:119–135
Gupta YP, Sommers TM (1996) The measurement of manufacturing flexibility. Eur J Oper Res 60:166–182
Jain A, Jain PK, Chan FTS, Singh S (2013) A review on manufacturing flexibility. Int J Prod Res
51(19):5946–5970
Jensen A (1997) Inter-organizational logistics flexibility in marketing channels. In: Tilanus B
(ed) Information systems in logistics and transportation. Pergamon, New York, pp 57–75
Johnson JL, Lee RP, Saini A, Grohmann B (2003) Market-focused strategic flexibility: conceptual advances and an integrative model. J Acad Mark Sci 31(1):74–89
Kochikar VP, Narendran TT (1992) A framework for assessing manufacturing flexibility. Int J
Prod Res 30(12):2873–2895
Koste LL, Malhotra MK (1999) A theoretical framework for analyzing the dimensions of
manufacturing flexibility. JOM 18(1):75–93
Kruchten PB (1995) The 4 + 1 view model of architecture. IEEE Softw 12(6):42–50
Malhotra MK, Sharma S (2008) Measurement equivalence using generalizability theory: an examination of manufacturing flexibility dimensions. Decision Sci 39(4):643–669
Mandelbaum M, Buzacott J (1990) Flexibility and decision making. Eur J Oper Res 44:17–27
Mehrabi MG, Ulsoy AG, Koren Y, Heytler P (2002) Trends and perspectives in flexible and reconfigurable manufacturing systems. J Intell Manuf 13(2):135–146
Narain R, Yadav RC, Sarkis J, Cordeiro JJ (2000) The strategic implications of flexibility in manufacturing systems. Int J Agile Manuf Sys 2(3):202–213
Narasimhan R, Jayaram J (1998) Causal linkages in supply chain management. Decision Sci 29(3):579–605
Neely AD, Gregory M, Platts K (1995) Performance measurement system design: a literature review and research agenda. Int J Oper Prod Man 15(4):80–116
Oke A (2005) A framework for analyzing manufacturing flexibility. Int J Oper Prod Man 25
(10):973–996
Olhager J, West BM (2002) The house of flexibility: using the QFD approach to deploy
manufacturing flexibility. Int J Oper Prod Man 22(1):50–79
Otto A, Kotzab H (2003) Does supply chain management really pay? Six perspectives to measure
the performance of managing a supply chain. Eur J Oper Res 144:306–320
Ramasesh RV, Jayakumar MD (1991) Measurement of manufacturing flexibility: a value based
approach. JOM 10(4):446–468
Rogers PP, Ojha D, White RE (2011) Conceptualizing complementarities in manufacturing flexibility: a comprehensive view. Int J Prod Res 49(12):3767–3793
Sanchez R (1995) Strategic flexibility in product competition. Strategic Manage J 16:135–159
Sethi AK, Sethi SP (1990) Flexibility in manufacturing: a survey. Int J Flex Manuf Sys 2:289–328
Slack N (1983) Flexibility as a manufacturing objective. Int J Oper Prod Man 3(3):3–13
Slack N (1987) The flexibility of manufacturing systems. Int J Oper Prod Man 7(4):35–45
Stalk G, Evans P, Shulman LE (1992) Competing on capabilities: the new rules of corporate
strategy. Harv Bus Rev 70(2):57–69
Swamidass PM, Newell WT (1987) Manufacturing strategy, environmental uncertainty and
performance: a path analytic model. Manage Sci 33(4):509–524
Upton DM (1994) The management of manufacturing flexibility. Calif Manage Rev 36(2):72–89
Vokurka RJ, O’Leary-Kelly SW (2000) A review of empirical research on manufacturing
flexibility. JOM 18(4):485–501
Yao DD, Buzacott JA (1985) Modelling the performance of flexible manufacturing systems. Int J
Prod Res 23:945–959
Zelenović DM (1982) Flexibility—a condition for effective production systems. Int J Prod Res 20:319–337
Zhang M, Vonderembse A, Lim JS (2003) Manufacturing flexibility: defining and analyzing relationships among competence, capability and customer satisfaction. JOM 21:173–191
Part II
Techniques
Chapter 4
Design of Experiments (DOE)

4.1 Introduction

Design of Experiments (DOE) is a multi-purpose technique (Box et al. 2005). It consists of a series of tests in which changes are made to input variables of a
consists in a series of tests in which changes are made to input variables of a
process, so that we may observe and identify corresponding changes in the output
response. It can be applied to physical experiments, to simulation experiments (Law
and Kelton 2000) or to other decision problems where the effects of several factors
are examined. Experimental Design methods may be used to design processes or
products, to improve process performance or to obtain a robust process, insensitive
to sources of external variations.
The origins of DOE were in the UK, where at the beginning of the previous
century the method was developed by Sir Fisher and was successfully used in
agriculture. However, its application in western industries was meager. ‘Taguchi
methods’ that were developed in Japan during the 1960s and were disseminated in
the west during the 1980s represent a specific style of experimental design with
specific advantages and pitfalls (see e.g. Dehnad 1989).
Applying DOE to physical or to simulation experiments as well as to other
purposes consists of some well-defined activities (see Montgomery 2012).

4.1.1 Guidelines
– Statement of the problem
– Choice of the factors to be varied (independent variables) and the levels to be
investigated
– Selection of the response (dependent variable)
– Choice of the experimental design

– Performing the experiment/series of experiments
– Data analysis
– Conclusions and recommendations.
DOE is an economical technique, because reliable results can be obtained based
on a relatively small number of observations. Correctly analyzing data from a small
but well-designed experiment will yield statistically sound interpretations. Its great
advantage is a methodical examination of the factor main effects and especially of
their interactions that may shed light on more complex aspects of a decision
problem (see Sect. 4.1.3).

4.1.2 Fisher’s Basic DOE Principles


– Randomization: Each unit of the investigated population has the same chance of
being selected in a sample of units on which the experiment is performed.
– Statistical replications: As measurements are subject to measurement uncer-
tainty, to better estimate the true effects of treatments, experiments are
replicated.
– Blocking: Arrangement of experimental units into homogeneous groups to
reduce variation between units within such group and thus achieve greater
estimating precision.
– Orthogonality: An experimental design is orthogonal if each factor can be evaluated independently of all the other factors. It is achieved by matching each level of each factor with an equal number of each level of the other factors.

4.1.3 Factorial Experiments

A full factorial experiment is an experiment whose design consists of two or more
independent variables (factors), each with discrete possible values or ‘levels’, and
whose experimental units take on all possible combinations of these levels across
all such factors. Such an experiment allows the investigator to study the effect of
each factor on the response (dependent variable), as well as the effects of interac-
tions between factors on the response variable.
A main effect is the effect of an independent variable on a dependent variable
averaged across the levels of the other independent variables.
Interaction between two factors occurs when the effect of one factor, as esti-
mated at a given level of another factor, is not the same at all levels of the other
factor and vice versa. Interaction between factor effects is a basic, important concept
in the experimental design methodology.
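A small numerical sketch of these definitions for a 2 × 2 full factorial with invented response values; the contrast columns A, B and AB yield the two main effects and the interaction.

```python
import itertools

# 2x2 full factorial: factor levels coded -1 (low) and +1 (high)
runs = list(itertools.product([-1, 1], repeat=2))  # all (A, B) combinations
response = {(-1, -1): 20.0, (1, -1): 30.0, (-1, 1): 25.0, (1, 1): 45.0}

def effect(contrast):
    """Effect estimate: contrast-weighted sum of responses over 2^(k-1)."""
    return sum(contrast(a, b) * response[(a, b)] for a, b in runs) / 2

main_a = effect(lambda a, b: a)        # (30 + 45)/2 - (20 + 25)/2 = 15.0
main_b = effect(lambda a, b: b)        # (25 + 45)/2 - (20 + 30)/2 = 10.0
inter_ab = effect(lambda a, b: a * b)  # 5.0: the effect of A depends on the level of B
print(main_a, main_b, inter_ab)
```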
If the number of combinations in a full factorial design is too high to be
logistically feasible, a fractional factorial design may be used, in which some of the
possible combinations are omitted.
For readers who do not have any knowledge of fractional factorial experiments
(discussed in Sects. 2 and 3 of this chapter), we present below two instances.

4.1.3.1 Instance 1 (A Half-Fractional Factorial)

Suppose we want to investigate five factors A, B, C, D, E, each at two levels. A full factorial design will need 2⁵ = 32 combinations. A half-fractional factorial design will solely need 2⁵⁻¹ = 16 experimental units, equivalent to a full factorial design of four factors. We may construct the design via confounding the effects. Confounded effects are effects that cannot be separated. Let us confound the fifth factor with a high-order interaction, E = ABCD. The product of E with ABCD then equals the identity column, I = ABCDE.
This represents a defining relation (contrast). The length of the defining contrast
determines the resolution of the fractional design, i.e. its ability to separate main
effects and low-order interactions from one another. Important fractional designs
are those with resolutions IV and V. As the current defining relation is ABCDE, this
example has resolution V.
Resolution V: It estimates main effects un-confounded by three-factor (or less)
interactions. It estimates two-factor interaction effects un-confounded by other
two-factor interactions.
Resolution IV: It estimates main effects un-confounded by two-factor interac-
tions. It estimates two-factor interaction effects, but these are confounded with other
two-factor interactions. The confounded effects are aliases.
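A sketch of how the half fraction of Instance 1 can be generated from its generator, coding each factor level as ±1 so that E is simply the product column ABCD.

```python
from itertools import product

# 2^(5-1) half fraction: choose E = ABCD, giving the defining relation I = ABCDE
runs = [(a, b, c, d, a * b * c * d) for a, b, c, d in product([-1, 1], repeat=4)]
assert len(runs) == 16  # a full factorial in A, B, C, D plus the generated E column
for run in runs:
    print(run)  # columns A, B, C, D, E
```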

4.1.3.2 Instance 2 (A Quarter-Fractional Factorial)

Suppose we want to investigate six factors A, B, C, D, E, F, each at two levels. A full factorial design will need 2⁶ = 64 combinations. A quarter-fractional factorial design will only need 2⁶⁻² = 16 experimental units, again equivalent to a full factorial design of four factors. Again, we may construct the design via confounding the effects. However, now we need to confound two factors, i.e. two defining relations are needed. Let us confound E = ABC and F = BCD. The product of E with ABC equals the identity column, I = ABCE. The product of F with BCD equals the identity column, I = BCDF. The two defining relations induce a third relation, their product: I = ABCE·BCDF = ADEF.
The defining contrasts, which cannot be estimated, are: I = ABCE = BCDF = ADEF. Their length, which determines the design resolution, is IV. In a quarter-fractional design four different effects are considered as one because they cannot be separated: they are aliases. Below is the alias table.
A = BCE = ABCDF = DEF        AD = BCDE = ABCF = EF
B = ACE = CDF = ABDEF        AE = BC = ABCDEF = DF
C = ABE = BDF = ACDEF        AF = BCEF = ABCD = DE
D = ABCDE = BCF = AEF        BD = ACDE = CF = ABEF
E = ABC = BCDEF = ADF        BF = ACEF = CD = ABDE
F = ABCEF = BCD = ADE        ABD = CDE = ACF = BEF
AB = CE = ACDF = BDEF        ABF = CEF = ACD = BDE
AC = BE = ABDF = CDEF

We see that the two-factor interactions are confounded with other two-factor
interactions.
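The alias table can be reproduced mechanically: multiplying two effects corresponds to the symmetric difference of their letter sets, since any squared letter cancels. A sketch:

```python
# Aliases in the 2^(6-2) design with defining contrasts I = ABCE = BCDF = ADEF
contrasts = [frozenset("ABCE"), frozenset("BCDF"), frozenset("ADEF")]

def aliases(effect):
    eff = frozenset(effect)
    # multiplying by a defining contrast = symmetric difference of the letter sets
    return ["".join(sorted(eff ^ c)) for c in contrasts]

for e in ["A", "B", "AB", "AE", "ABD"]:
    print(e, "=", " = ".join(aliases(e)))
# e.g. prints: AE = BC = ABCDEF = DF (two-factor interactions aliased together)
```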

4.1.3.3 Remark

It is beyond the scope of this chapter to present the standard technical statistical
analysis of the experimental results (Analysis of Variance). Our aim here is for the
reader to understand the capability of the DOE technique as an analysis tool. It is
not an ‘exercise’ in statistical theory.

4.2 Impact of Flexibility Factors in Flexible Manufacturing Systems—A Fractional Factorial Design of a Simulation Experiment

Simulation is a computer experimentation technique that uses input/output models
for gaining information about a real-world system or process. Hence, as part of any
simulation project, experiments have to be designed to yield the desired information
needed for meeting the project’s objectives.
The aim of the simulation experiment here was to investigate the impact of
several factors in a flexible manufacturing system (FMS) on its performance (Barad
1992). See also Part I, Chap. 3, Sect. 3.

4.2.1 The Selected Factors (Independent Variables), Their Levels and the Simulation Design

R—a design factor representing system versatility (R1 low level, R2 high level)
B, C—two control factors, expressing scheduling rules assigning priorities to the
parts waiting to be processed based on the available information.
Factor B is part oriented, meant to prevent long delays, by assigning a higher
priority to a part according to its waiting time in system.

Factor C is system oriented, meant to avoid machine starvation. It assigns a
higher priority to a part if its next processing machine is idle. Each rule may be
considered or not considered, a factor being at its respective high level (B2 = 1,
C2 = 1) when the rule is considered, and at its low level (B1 = 0, C1 = 0) otherwise.
U—an environmental factor, planned machine utilization (U1 = 0.60,
U2 = 0.80).
Factor E represents alterations in type of demand (change factor) and its two
levels are two different mixes generated from an aggregate mix: E1 a mix with
skewed load distribution, E2 a mix with uniform load distribution of the various
part types.
A complete five-factor experiment of the above factors, each at two levels,
involves 2⁵ = 32 treatment combinations. We used a half-fractional factorial
experimental design, 2⁵⁻¹, resulting in 16 combinations, as in instance 1 in the
introduction. The design of the simulation experiment is detailed in Table 4.1. This
design has resolution V: the main effects and the two-factor interactions are
un-confounded with one another, and two-factor interactions are not confounded
with other two-factor interactions.
There was a sixth and very important change factor, F, representing equipment
failures, at two levels as well: no failures, 10% failures. The percentage expresses
the ratio MTTR/MTBF (Mean Time to Repair/Mean Time between Failures).
Each of the 16 combinations was tested at the two failure levels.
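
The level pattern in Table 4.1 is consistent with generating the half-fraction from the relation U = RBCE, i.e. the defining contrast I = RBCEU of length five (this generator is inferred from the table, not stated explicitly in the text). A minimal Python sketch of how such a design can be generated:

```python
from itertools import product

# Generate the half-fraction 2^(5-1) by fixing U = R*B*C*E (I = RBCEU),
# a length-5 defining relation, hence resolution V. Levels coded -1/+1.
rows = []
for r, b, c, e in product((-1, 1), repeat=4):
    u = r * b * c * e          # the generator fixes the fifth factor
    rows.append((r, b, c, e, u))

print(" R  B  C  E  U")
for row in rows:
    print(" ".join(f"{v:+d}" for v in row))
```

The 16 printed rows match the 16 treatment combinations of Table 4.1.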

Table 4.1 The half-fractional factorial design of the simulation experiment—equipment failures
not considered (reproduced from Barad 1992)

Machine versatility   Mixes        Control methods   Utilization level
Low (R1)              Mix 1 (E1)   None              High
                                   B2                Low
                                   C2                Low
                                   B2 and C2         High
                      Mix 2 (E2)   None              Low
                                   B2                High
                                   C2                High
                                   B2 and C2         Low
High (R2)             Mix 1 (E1)   None              Low
                                   B2                High
                                   C2                High
                                   B2 and C2         Low
                      Mix 2 (E2)   None              High
                                   B2                Low
                                   C2                Low
                                   B2 and C2         High

4.2.2 Response (Dependent Variable)

Two on-line performance measures of the simulated manufacturing system were considered:
1. Td—Standardized tardiness, a unified service-oriented measure defined as ‘total
waiting time in system of a part unit as a ratio of its processing time’.
2. Work in Process (WIP), a design oriented measure.
The system was investigated under unlimited storage and the values of the
planned machine utilization were selected in such a way as to ensure that each
machine is below saturation. Under these conditions, work in process (WIP) was a
performance measure able to supply relevant information at the design stage due to
its expected sensitivity to failures and to the planned utilization level.

4.2.3 Simulation Analysis

The simulation model and the experimental conditions were developed using SIMAN
network modeling. A simulation run represented a particular treatment combination.
The length of each simulation run was 14,400 min (240 h).
The analysis concentrated on the steady-state results, considering the
principles prevailing in a 'non-terminating' simulation environment. The SIMAN
output processor was extensively utilized. Through the 'filter' command, the
transient period for each run was determined, the transient results were deleted and
the remaining steady-state period was divided into batches, which are the 'blocks'
of the experimental design. The means of the batches approximately represent
independent observations. The output processor automatically performs statistical
testing of the independence between batches (Fishman's method).
Lengths of the transient periods for runs with no equipment failures varied
between 10 and 15 h. Under the same conditions, the size of the independent
batches varied between 80 and 140 consecutive observations (part units manufac-
tured). On average, we obtained nine batches per run. Lengths of the transient
periods under equipment failures exceeded 40 h. We assumed an exponential
distribution for both the time between failures and the repair duration. The mean
time between failures was MTBF = 600 min, coupled with a mean time to repair,
MTTR = 60 min representing 10% failures.
It is worth mentioning that if replicated runs had been used instead of batches
(blocks), a minimum of 120 h would have been required for one replication.
With nine independent replications per run (equivalent to our nine batches),
the total simulated time per replicated run becomes 9 × 120 = 1080 h,
as compared to our 240 h per run. Hence, a relative efficiency of more than 4:1 was
obtained.
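
The batch-means logic described above is easy to reproduce. The following Python sketch (illustrative only; it deletes a transient period and treats batch means as roughly independent observations, but does not reproduce Fishman's independence test) shows the essential computation on synthetic autocorrelated data:

```python
import numpy as np

def batch_means(series, warmup, n_batches):
    """Steady-state estimate via batch means: drop the warmup (transient)
    observations, split the remainder into batches, and use the
    between-batch variability for the standard error."""
    steady = np.asarray(series, dtype=float)[warmup:]
    usable = len(steady) - len(steady) % n_batches   # drop the remainder
    means = steady[:usable].reshape(n_batches, -1).mean(axis=1)
    return means.mean(), means.std(ddof=1) / np.sqrt(n_batches)

# Synthetic AR(1)-like series standing in for a simulation output trace
rng = np.random.default_rng(1)
x = np.empty(2000)
x[0] = 10.0
for t in range(1, 2000):
    x[t] = 0.9 * x[t - 1] + rng.normal(0.5, 1.0)

estimate, stderr = batch_means(x, warmup=300, n_batches=9)
print(f"mean = {estimate:.2f} +/- {stderr:.2f}")
```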

4.2.4 Some Results

The following partial results of the above simulation experiments show the
designed experiment's capability to investigate main and interaction effects (positive
or negative) on the responses. Owing to the highly relevant effect of equipment
failures on the system's performance, the results are summarized separately for each
of the two failure levels considered (0 and 10%).
For each of the two equipment failure levels the main effect of ‘planned uti-
lization level’ (factor U) was the most significant, leading to a high increase
(positive effect) of each of the two responses, standardized tardiness and WIP.
• Under no failures, the most important counter effect (leading to a decrease in
both standardized tardiness and WIP) was the interaction RU (negative effect).
Its interpretation is that a high system versatility level (R2), is especially
effective under high planned machine utilization (U2). The second best counter
effect (negative) was the main effect of factor R (system versatility) meaning that
it improved the system performance under all operating conditions. The inter-
action EU (negative) was also found significant. It implies that the system with
no equipment failures can better cope with a uniform mix (E2) and that this is
especially important at a high utilization level (U2). The control strategies
B (part oriented) and C (system oriented) were not effective under no equipment
failures.
• When 10% failures were considered, the most important effect in reducing
standardized tardiness and WIP was the interaction BU (negative). This implies
that under equipment failure, the on-line control strategy assigning scheduling
priority by tardiness (factor B at its high level B2) is especially effective when
the system is operated at a high planned utilization level (U2). The next factor in
order of significance was the main effect of the control strategy B, meaning that
this strategy was effective under both low and high planned utilization. The
main effect of system versatility R was also significant, though less important
than the control factor B.

4.2.5 Concluding Remarks

Simulation is a versatile technique for modelling complex systems. However, in order to achieve successful and reliable results, it is necessary to use experimental design for designing and then systematically analyzing the simulation experiments.

4.3 Prior Research Is the Key to Fractional Factorial Design—A Fractional Factorial Design of a Physical Experiment

The aim of the experiment detailed in this section was to introduce some changes in
the design and the manufacturing of a special type of Nickel–Cadmium battery to
improve its capacity at low temperature (Barad et al. 1989). The R&D engineers
agreed to investigate six factors, each at two levels. A full factorial
experiment would require 64 observations. Because of budgetary and time
constraints, they decided to start with a condensed pilot experiment involving
one-quarter of the full experiment, i.e. 16 observations, as in instance 2 in the
introduction. This experimental design has resolution IV, where the main effects are
un-confounded with two-factor interactions but two-factor interactions are con-
founded with other two-factor interactions. Many textbooks detail such designs
(see e.g. Montgomery 2012).
The original paper title, which is the title of this section, emphasized that in such
compact designs reliable previous information is necessary in order to avoid
erroneous conclusions.

4.3.1 The Selected Factors (Independent Variables) and Their Levels

Factor A type of plate. The levels of factor A were A1, negative plate and A2,
positive plate, measured by the quantity of the active material in each
type of plate.
Factor B additions to electrolyte. The two levels of factor B were B1, no addition
to the electrolyte, B2, a given mixture of additions.
Factor C manufacturing method. Two methods for manufacturing plates, method
I, C1 and method II, C2 were to be compared to determine their effect on
capacity.
Factor D impurities in the electrolyte. The two levels of factor D were D1, no
impurities, D2 a given impurities concentration.
Factor E testing temperature. This factor was of particular interest in the
experiment because of the intended usage conditions of the battery.
The low level of this factor, E1 was −20 °C and the high level, E2 was
20 °C.
Factor F discharge rate. F1 and F2 represented respectively, the low and high rate
of discharge and were respectively selected as half and twice the number
of ampere hours of the battery’s nominal capacity.

4.3.2 Response (Dependent Variable)

The considered response was the capacity expressed in ampere-hours per plate unit
area.

4.3.3 Choice of the Design

As mentioned above, the selected fractional factorial experiment had resolution IV,
where two-factor interactions are confounded with other two-factor interactions. In
this experiment, a one-quarter fraction of a complete factorial, groups of four different
effects are considered as one because their individual effects cannot be sepa-
rated; they are aliases. If any of them has a significant influence on the measured
dependent variable (here, the capacity), the statistical analysis will eventually show
it but will be unable to point to the source. They will be confounded.
The a priori requirements of the fractional factorial design in this study were to
analyze and estimate the main factor effects of each of the six factors and each of
three two factor-interactions (BC, CD and EF). The R&D engineers considered all
the other two factor-interactions to be insignificant. Another objective of the
experiment was to supply an estimate of the variance caused by randomness.
The testing conditions expressing the selected design appear in Table 4.2.

Table 4.2 The fractional factorial design of the physical experiment (reproduced from Barad
et al. 1989)
Type of plate   Manufacturing   Electrolyte        Temperature (°C)   Discharge rate
Negative Method I KOH −20 Low
Negative Method I KOH + AD +20 Low
Negative Method I KOH + IMP +20 High
Negative Method I KOH + AD + IMP +20 High
Negative Method II KOH +20 High
Negative Method II KOH + AD −20 High
Negative Method II KOH + IMP +20 Low
Negative Method II KOH + AD + IMP −20 Low
Positive Method I KOH + IMP +20 Low
Positive Method I KOH + AD + IMP −20 Low
Positive Method I KOH +20 High
Positive Method I KOH + AD −20 High
Positive Method II KOH + IMP −20 High
Positive Method II KOH + AD + IMP +20 High
Positive Method II KOH −20 Low
Positive Method II KOH + AD +20 Low

Each row represents a treatment combination whose capacity had to be measured in the experiment, thus providing one observation.
Sixteen standard area plates were prepared: eight negatives and eight positives.
Four of the negatives and four of the positives were manufactured by method I.
Four of the negatives and four of the positives were manufactured by method II.
Each group of four plates of the same type (negative or positive) manufactured by
the same method (I or II) comprised a physical experimental unit. One of the four
was randomly selected to be immersed in a standard volume of electrolyte with no
additions or impurities, another was immersed in electrolyte with additions, a third
was immersed in electrolyte with impurities and the remaining one was immersed in
electrolyte with additions and impurities. The particular temperature and discharge
rate for each immersed plate were according to the table. The testing order of the 16
observations was random as well. As mentioned earlier, because of the fractional
structure of the experiment confounded effects or aliases were created.
The main issue in the design of a fractional experiment is to enable the sepa-
ration, through statistical analysis, of the effects that are considered important prior
to designing the experiment. That means not including two or more such important
effects in the same group of aliases. If a certain alias group is found to have a
significant effect, it is attributed solely to the main effect or the particular interaction
previously considered important within the group.
The aliases emerging from the designed experiment here are listed in Table 4.3.
This table does not include the four defining contrasts, namely,
I = ABCE = ACDF = BDEF, which define the design resolution and cannot be
analyzed. The six main effects and the three factor interactions considered important
prior to the designing of the experiment are underlined. There is not more than one
underlined set of letters in the same row. The significance of the important nine

Table 4.3 List of the confounded effects or aliases (reproduced from Barad et al. 1989)

A = BCE = CDF = ABDEF
B = ACE = ABCDF = DEF
C = ABE = ADF = BCDEF
D = ABCDE = ACF = BEF
E = ABC = ACDEF = BDF
F = ABCEF = ACD = BDE
AE = BC = CDEF = ABDF
AF = BCEF = CD = ABDE
BD = ACDE = ABCF = EF
AB = CE = BCDF = ADEF
AC = BE = DF = ABCDEF
AD = BCDE = CF = ABEF
BF = ACEF = ABCD = DE
ABD = CDE = BCF = AEF
ABF = CEF = BCD = ADE

effects selected to be investigated will be separated as a result of the statistical analysis, which will test them against the estimate of the random variation.

4.3.4 Analysis of the First Data Set

The findings, based on the statistical data analysis, showed that the factors that
significantly affected the battery capacity were:
F—discharge rate, E—temperature and BC—interaction between the addition to
the electrolyte and the manufacturing method. The evaluated standard deviation
caused by randomness was 0.51 ampere-hour/plate unit area.
The R&D engineers were reluctant to accept all the results of the experiment.
They expected some imbalance between the active material in the positive and the
negative plates, i.e. that the effect of factor A would be significant. But, as men-
tioned above, the data analysis did not find the effect of factor A significant. Also, it
was hard to logically explain the highly significant effect of BC, the interaction
between the addition to the electrolyte and the manufacturing method. But, luckily,
at this stage we got supplementary data on the already conducted experiment. It
should be remembered that the experiment comprised 16 physical experimental
units that had to be tested at a certain temperature and discharge rate, as detailed in
Table 4.2. In practice, each capacity test according to the design conditions was
preceded by a required preliminary capacity test, conducted for all units at normal
conditions, high temperature level (E2) and low discharge rate (F1). After that, each
physical unit was recharged and retested as designed. These preliminary 16
observations were performed because of technical requirements. They represented a
full factorial with 4 factors A, B, C and D (the first 3 columns of Table 4.2),
executed at constant values of the remaining factors E and F (high temperature and
low discharge rate).

4.3.5 Analysis of the Second Data Set and Some Concluding Remarks

The analysis of the second data set revealed that factor A (type of plate) was the
most significant. Factor C (manufacturing method) was also significant and so was
their interaction, AC. At high temperature (E2) and low rate of discharge (F1), the
estimated standard deviation caused by randomness was 0.11 ampere-hour/plate unit
area, much lower than its previous estimate (0.51). Integrating the findings of the
two data sets, several conclusions could be reached.

• The main effects of factors A and C, which were significant at normal testing
conditions (high temperature level and low rate of discharge) were obscured and
thus insignificant when the results were averaged over the entire range of test
conditions imposed on the originally designed experiment. The results of the
second data set indicated that the negative plates provided significantly more
capacity under normal test conditions (as the R&D engineers expected). But
they were affected more than the positives under severe test conditions, see
below.
• The statistical analysis of the first data set found the interaction BC very sig-
nificant. By examining its aliases (Table 4.3) it was seen that the alias group
comprising interaction BC also comprised interaction AE, between the type of
plate and temperature. The meaning of this significant interaction is that the
temperature effect is not the same for the two types of plates.
• Looking at the combined findings of the two data sets it can be seen that this
result fits as a piece in a puzzle. In designing the experiment AE was not
considered an important interaction. In its alias group the only considered
interaction was BC. Therefore, the highly significant effect that was attributed to
BC (and was hard to explain) was actually the effect of AE.
• The apparent inconsistency of results with regard to C (manufacturing method)
between the two data sets could be explained similarly. Presumably, the inter-
actions CE and CF, initially assumed to be negligible, were actually important.
They could not be tested because they were included in the estimate of the
random variance. Some confirmation of this hypothesis is provided by the
significant increase of the random standard deviation when carrying out the
experiment at varying temperature and discharge rate (0.51) as compared to its
estimate under normal testing conditions (0.11).
• Fractional factorial design is an efficient technique. It can analyze the effects of
many factors using a low number of observations. Here, 16 observations were
used to investigate the effects of 6 main factors and 3 two-factor interactions.
However, it should be used with caution. As mentioned above in the choice of
the design, the design had resolution IV, where two-factor interactions are
confounded with other two-factor interactions. Reliable prior information,
theoretical and/or empirical, is necessary to avoid erroneous conclusions, as illus-
trated by the analysis here of the first data set.

4.4 Flexibility Factors in Logistics Systems—A Full Factorial Design to Investigate a Calculated Complex Deterministic Expression

The third example (Barad and Even-Sapir 2003) examined the numerical value of a
complex analytical expression representing a customer oriented logistics perfor-
mance measure, denoted in the original paper logistics dependability, as calculated

for different values of its parameters (the given numerical values of the investigated
factors). Accordingly, the dependent variable (or response) was not a stochastic
variable but a deterministic one. The experimental design was a full factorial with
five factors and enabled a methodical examination of all factor effects and espe-
cially their interactions on this deterministic expression, thus shedding light on
some complex aspects of the logistics decision problem. The scenario of this
example was framed in an inventory system.

4.4.1 The Design and Its Rationale

One aim of the original paper was to get a better understanding of the logistics
decision problems and their eventual solutions. To achieve that aim it was necessary
to examine the effects of several factors on the logistics dependability, a perfor-
mance measure of a given logistics system.
Logistics dependability emphasized quick response delivery through reduced
supply uncertainty in an inventory replenishment system. It was defined as the
probability that there is no supply shortage until the end of the replenishment cycle,
for a given starting condition and inventory policy.
As the complexity of the equations for calculating logistics dependability (see
original paper) did not permit an analytical approach, the paper showed how the
statistical experimental technique could be used in an innovative manner to easily
obtain effective results. Several factor types that may affect logistics dependability
were considered: design factors, which could be manipulated in order to increase
the system dependability, and environmental factors, which were selected to
represent different normal operating conditions.
As the paper dealt with flexibility, a factor representing changes was also
selected. Such a factor is particularly relevant for emphasizing the value of flexibility,
represented in the paper by trans-routing flexibility, a design factor measured by
variable u, the number of transshipment links per end user at the echelon level. In a
rigid system which has no transshipment links, u = 0. In a flexible system with N
end users at the same echelon level, maximal trans-routing flexibility is obtained for
u = N − 1, meaning that all end users at the same echelon level are inter-linked. See
also Part I, Chap. 3, Sect. 3.
The experiment was designed as a full two-level factorial experimental design in
five factors, 2⁵. The investigated system consisted of four end users, N = 4.
The design factors were:
A Trans-routing flexibility supporting decision making in real time, measured by
u (u = 0 low level, u = 3 high level)
B Service level supporting static planning based decisions, measured by 1 − α
(0.8—low level, 0.9—high level).
The environmental factors were:
C Lead time, measured in terms of a fraction of the replenishment cycle time L
(L/8 low level, L/4 high level)

D Demand variability, measured by the coefficient of variation σ/μ (0.25 low level,
0.30 high level)
E A change factor representing an increase in the demand variability during the
replenishment cycle (no change—low level, a given change—high level).
The following questions were investigated in the original paper:
– How effective is trans-routing flexibility, as compared to an increased service
level, for coping with changes that occur during a replenishment cycle?
– Is trans-routing equally effective for any demand variability?
– Does lead-time affect the benefits of trans-routing?
These questions were answered through an analysis of the interaction effects
between the investigated factors.

4.4.2 Analysis and Results

Typically, in a statistical experimental design the response is a stochastic variable.
As mentioned before, in this example the response (logistics dependability) was
not a stochastic variable, but was calculated for varying values of the parameters
(factors) detailed above. The role of the experimental design was to discriminate in a
methodical way between the important factor effects and the negligible ones. As the
analysis results were not used to estimate variances or to build confidence intervals,
we deemed it legitimate to use this technique.
The data analysis in the original paper combined ANOVA (analysis of variance)
with a graphical procedure relying on a Normal Probability plot. The procedure is
described in the experimental design literature (see e.g. Montgomery 2012).
According to this procedure, standardized effects are calculated for each factor and
each interaction. The Normal Probability Plot is used to graphically estimate the
important effects. The negligible effects will be normally distributed with mean zero
and random variance and will tend to fall along a straight line. The significant
effects will have non-zero means and will not lie along the straight line. The random
variance needed for applying ANOVA is estimated by pooling the variances of the
negligible effects that lie along the straight line. Here the random variance stands for
the negligible factor effects but, as mentioned above, no further usage will be made
of its numerical value.
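
The standardized-effects procedure sketched above is generic and easy to reproduce. The following Python sketch (an illustration under the usual ±1 coding conventions, not the original paper's code; it uses SciPy for the normal quantiles) computes factorial effects and the coordinates of a normal probability plot of effects:

```python
import numpy as np
from scipy.stats import norm

def effects_2k(contrast_columns, y):
    """Effects of a two-level factorial: each effect is the -1/+1 contrast
    column dotted with the response, divided by half the number of runs."""
    X = np.asarray(contrast_columns, dtype=float)
    y = np.asarray(y, dtype=float)
    return X.T @ y / (len(y) / 2)

def normal_plot_coords(effects):
    """Ordered effects against standard normal quantiles. Negligible
    effects fall along a straight line; important ones deviate from it."""
    e = np.sort(np.asarray(effects))
    p = (np.arange(1, len(e) + 1) - 0.5) / len(e)
    return norm.ppf(p), e

# Illustrative 2^3 example with a made-up response dominated by factor 1
X = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
y = 10 + 3 * X[:, 0] + np.random.default_rng(0).normal(0, 0.1, 8)
print(effects_2k(X, y))            # first effect near 6, others near 0
```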
The results of an analysis of variance (ANOVA) are represented by F-Ratios and
p-values. The zero p-values in ANOVA disclose the very important factor effects.
These were:
A-trans-routing flexibility and B-service level, the two design factors, as well as
E-the change in the demand variability.
Several interaction effects, namely AD, AE and BE, were also very important.
Among the factors exhibiting a zero p-value we deliberately ignore the effect of
factor D-demand variability. In this particular case the mean effect of factor D,

which proved to be 'significant', was actually confounded with the interaction AD
(see original paper). Thus, the analysis only considered the effect of interaction AD.
When there is no flexibility, i.e. factor A is at a low level, the service level neu-
tralizes the effect of any increase in the demand variability (see original paper for
further explanations and calculations).
The zero p-values do not show the effect direction, whether it is positive, i.e. it
increases logistics dependability or it is negative, i.e. it decreases logistics
dependability. The Normal Probability Plot supplies this information.
Four factor effects were positively placed with respect to the line and two were
negatively placed. The two most significant positive effects were the main effects of
A and B, the design factors. Let us recall that the effect of factor A represented the
increase in logistics dependability obtained for a system with the highest number of
transshipment links per end user at the echelon level, u = N − 1 = 3, as compared
to a rigid system, u = 0. The effect of factor B represented the increase in logistics
dependability following an increase in the service level 1 − α from 0.8 to 0.9. As
judged by their magnitude, these two effects were equivalent. The two other positive
effects were the two interactions, AD and AE.
The most negative factor effect was the effect of factor E, the change in the
demand variability that increased the risk level, α. The next negative effect was the
interaction BE. The lead-time did not affect the logistics dependability.

4.4.3 Interpretation of the Interaction Results

The negative interaction between the change factor E and the design factor B,
means that under changing conditions increasing the service level is not an effective
procedure. By contrast, under changing conditions trans-routing flexibility is more
effective. The positive effect of interaction AE illustrates this. As shown by the
positive significant effect of interaction AD, trans-routing flexibility (factor A) is
more effective for higher demand variability than it is for lower demand variability.
This result can be explained by the fact that high demand variability necessitates a
high stock level. This stock is not well utilized in a rigid system. By contrast, a
flexible system utilizes this higher stock in a dynamic way during a replenishment
cycle thus improving the system dependability.

4.5 Taguchi’s Experimental Design Techniques

4.5.1 A Quality Engineering Strategy

Taguchi succeeded in making engineers and other people use Design of
Experiments for improving quality at the design stages of products and processes.
The essence of his work is a combination of statistical methods with a deep
understanding of engineering problems and a quality strategy based on societal economic motivation (see e.g. Dehnad 1989; Ross 1996).
It starts with his definition of quality, which is very different from all the other
definitions. According to Taguchi, ‘quality is the loss imparted to the society from
the time a product is shipped’ (Taguchi 1986). In other words, all societal losses
due to poor performance of a product should be attributed to the product quality.
Investments in quality improvement projects are justified as long as the resulting
savings to customers are more than the cost of improvement.
The Loss Function was Taguchi’s selling approach to Design of Experiments.
His logic was very simple. The smaller the loss, the more desirable is the product.
The loss function is related to the variation of a product performance. Statistically
planned experiments can be used to identify the settings of products and process
parameters that reduce performance variation and consequently reduce loss.
In his approach to DOE, he classified the variables that affect the performance
into two categories: design factors and noise factors. The nominal settings of the
design factors define the desired value of any product or process characteristic, its
target value, τ. The noise factors cause the performance characteristics to deviate
from their target values. The widespread practice is to set a tolerance interval for a
characteristic y, meaning that as long as the characteristic is within the interval there
is no loss. According to Taguchi, any deviation from the target value τ incurs a
loss, l(y). It is difficult to determine the actual form of the loss function. Often, a
quadratic approximation is used (see Fig. 4.1):

l(y) = k(y − τ)²

k is a constant which can be determined when the loss for a particular value of y
is known. For instance, suppose that the tolerance interval for y is (τ − Δ, τ + Δ)
and the cost of discarding the product is D £.
Using this information, let y = τ + Δ. Then l(τ + Δ) = kΔ² = D and we obtain
k = D/Δ².
A numerical example
A thermostat is set up to a nominal value, τ = 22 °C. Its tolerance interval is
(τ − 0.5, τ + 0.5).

Fig. 4.1 Taguchi loss function and the classical tolerance interval (cost versus y: the quadratic loss function rises from 0 at the target τ to the discard cost D at the tolerance limits τ − Δ and τ + Δ)
4.5 Taguchi’s Experimental Design Techniques 77

If the thermostat is outside its tolerance interval it has to be replaced, incurring a
cost of 50 £. Accordingly, k = 50/0.25 = 200 and the loss function is
l(y) = 200(y − 22)².
Assume y = 22.3 °C. As this value is within the tolerance interval, by the
classical view there is no loss. But according to Taguchi there is a loss,
l(22.3) = 200(0.3)² = 18 £.
The loss function shows the importance of reducing the variability of product characteristics.
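
The thermostat arithmetic is easily packaged as a small function. A minimal Python sketch (names are illustrative) that reproduces the computation above:

```python
def taguchi_loss(y, target, delta, cost_at_limit):
    """Quadratic loss l(y) = k (y - target)^2, with k = D / delta^2,
    where D is the loss when y sits exactly at a tolerance limit."""
    k = cost_at_limit / delta ** 2
    return k * (y - target) ** 2

# Thermostat example: target 22 C, tolerance +/- 0.5 C, replacement cost 50
print(taguchi_loss(22.3, target=22.0, delta=0.5, cost_at_limit=50.0))  # 18.0
```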
As mentioned above, the loss function was Taguchi's selling approach to
statistically designed experiments. Before comparing his technique with the classical
approach to DOE, we may start by stating that Taguchi's use of experimental design
embodied his quality engineering strategy to improve product (or process)
quality. The classical approach to DOE had no such specific purpose.

4.5.2 Classical DOE Versus Taguchi's Parameter Design Structure

Common points
1. Investigating the influence of several factors on a measured variable.
2. Orthogonal structure of the multi-factor experiments.
Main differences
1. Taguchi classified the factors into design factors and noise factors. The design
factors are those product or process parameters, whose nominal settings can be
chosen by the engineers (R&D) and define the product or process specifications.
The noise factors cause the performance characteristics to deviate from their
nominal settings and are not under control. The ‘key’ noise factors should be
identified, and included in the experiment. During the experiment, they should
be under control.
There is no such classification in the classical DOE.
2. The object of the Taguchi experiment is to identify the settings of the design
parameters at which the effect of the noise factors is minimal. These ‘optimal’
settings are identified by systematically varying the settings of the design
parameters in the experimental runs and comparing the effect of noise factors for
each test run.
The previous point implies the difference.
3. Taguchi’s performance statistics is ‘signal to noise’ (s/n) ratios and not the
measured value of variables as in the classical approach. According to Box,
1986, it is better to study the mean ȳ and the variance, s2 separately rather than
combining them into a single ratio.

4.5.3 Parameter Design Experimental Structure (Inner and Outer Arrays)

A parameter design experiment consists of two parts: a design matrix and a noise
matrix. The design matrix is the ‘inner array’, while the noise matrix is the ‘outer
array’. The columns of a design matrix represent the design parameters (factors).
The rows represent the different levels of the parameters in a design test run (trial).
The rows of a noise matrix represent key noise factors and the columns represent
different combinations of the levels of the noise factors. In a complete parameter
design experiment, each trial of the design matrix is tested at each of the n columns
(trials) of the noise factors. These represent n replications of the design trials.
Assuming there are m design trials, the total number of runs is m*n.

4.5.3.1 A Taguchi Style Experiment

Let us recall the example presented in Sect. 4.3, which investigated the effects of
factors on the capacity of a battery. Here, five of those factors are used, each tested
at two levels, say 1 and 2: A—active material by type of plate, B—additions to
electrolyte, C—manufacturing method, D—testing temperature, E—discharge rate.
Assume that A, B and C are design factors, while D and E are key noise factors
and the aim of the experiment is to find the ‘optimal’ value of each design factor
that minimizes the effect of the noise factors (see Table 4.4). Table 4.4 shows a
Taguchi experimental layout with an inner array for design (control) factors A, B
and C and an outer array for factors D and E. In this experimental arrangement,
there are 8 design trials and 4 test conditions with respect to the noise factors.
Accordingly, for each design trial we obtain four resulting data points: y1, y2, y3, y4.
These data are used to calculate the value of Taguchi's performance statistic, the S/N
ratio.

4.5.4 Signal to Noise Ratios (S/N)

In the classical DOE we investigated the factor effects on the measured value (of a
continuous variable) or on its average, for replicated experiments. Taguchi
created a transformation of the replications to another variable, which measures the
variation. The transformation is the signal-to-noise (S/N) ratio. There are several S/N
ratios, depending on the ideal value of the measured characteristic: nominal is best,
lower is best (say, cost), higher is best (say, strength).
An example of an S/N ratio for nominal is best: S/N = 20 log₁₀(ȳ/s).
In terms of this transformed variable, the optimization problem is to determine
the optimum factors levels so that the S/N ratio is maximum, while keeping the
mean on target.
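
A minimal Python sketch of the nominal-is-best statistic (the data values are made up purely for illustration; each row stands for one inner-array trial and the columns for the outer-array noise conditions of Table 4.4):

```python
import numpy as np

def sn_nominal_is_best(replications):
    """Nominal-is-best signal-to-noise ratio, S/N = 20 log10(ybar / s),
    computed per design trial from its outer-array replications."""
    y = np.asarray(replications, dtype=float)
    return 20 * np.log10(y.mean(axis=1) / y.std(axis=1, ddof=1))

data = [[9.8, 10.1, 10.0, 9.9],    # trial 1: insensitive to noise
        [8.5, 11.2, 9.0, 11.5]]    # trial 2: strongly noise-affected
print(sn_nominal_is_best(data))    # trial 1 gets the much higher S/N
```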
4.5 Taguchi’s Experimental Design Techniques 79

Table 4.4 Inner/outer Taguchi's parameter design experiment

Outer array:              D    1    2    1    2
                          E    1    1    2    2

Inner array                         Data
Trial no    A    B    C             y1   y2   y3   y4   S/N
1           1    1    1             *    *    *    *    *
2           2    1    1             *    *    *    *    *
3           1    2    1             *    *    *    *    *
4           1    1    2             *    *    *    *    *
5           2    2    1             *    *    *    *    *
6           2    1    2             *    *    *    *    *
7           1    2    2             *    *    *    *    *
8           2    2    2             *    *    *    *    *

Each of the 8 inner-array trials is run under the 4 outer-array noise conditions, 8 × 4 = 32 runs in all.

4.5.5 Summary

This section discussed Taguchi's methodology of experimental design, emphasizing
his parameter design strategy for improving product and process quality design. It
presented his loss function concept for quantifying the cost of product and process
variations. It compared the classical approach to DOE with Taguchi’s techniques
and displayed a typical parameter design arrangement with an inner/outer layout.

References

Barad M (1992) The impact of some flexibility factors in fmss—a performance evaluation
approach. Int J Prod Res 30:2587–2602
Barad M (2014) Design of experiments (DOE)—a valuable multi-purpose methodology. Appl
Math 5:2120–2129
Barad M, Even-Sapir D (2003) Flexibility in logistics systems—modeling and performance
evaluation. Int J Prod Econ 85:155–170
Barad M, Bezalel C, Goldstein JR (1989) Prior research is the key to fractional factorial design.
Qual Prog 22:71–75
Box GE, Hunter WG, Hunter JS (2005) Statistics for experimenters, 2nd edn. Wiley, New York
Dehnad K (ed) (1989) Quality control. Robust design and the Taguchi method. Wadsworth &
Brooks/Cole, California
Law AM, Kelton WD (2000) Simulation modeling and analysis, 3rd edn. McGraw Hill, New York
Montgomery DC (2012) Design and analysis of experiments, 5th edn. Wiley, New York
Ross PJ (1996) Taguchi techniques for quality engineering, 2nd edn. McGraw Hill, New York
Taguchi G (1986) Introduction to quality engineering. Asian Productivity Organization, Tokyo
Chapter 5
Petri Nets

This chapter describes the fundamental concepts and characteristics of Petri Nets
(PNs). It follows the extensions that improved the implementations capabilities of
the original PNs and presents some applications. Their first and most relevant
extension was time modeling, a vital aspect of system performances not considered
in the original version.
The chapter comprises six sections. The first section is an introduction to Petri
Nets. The second section describes basic definitions of PNs and their time modeling
which transformed them into Timed Petri Nets (TPNs). The third section illustrates
the decomposition of a Timed Petri Net representing an open queuing network. The
fourth section describes a TPN based method for estimating the expected utilization
at steady state of a workstation including disturbances. The fifth section presents
TPNs as verification tool of simulation models at steady state, including an example
where the net comprised processing resources and Automated Guided Vehicles
(AGVs). The sixth section describes weaving processes, as an additional applica-
tion of TPNs. An account of Petri Nets as a versatile modeling structure was recently
published in Applied Mathematics; see Barad (2016).

5.1 Introduction

A Petri Net (PN) is both a graphical and an analytical modeling tool. Carl Adam
Petri developed it in 1962, in his Ph.D. thesis at Bonn University, Germany, as
a special class of generalized graphs or nets. The chief attraction of this tool is the
way in which the basic aspects of various systems are identified conceptually,
through a graphical representation, and mathematically, through formal program-
ming languages.
As a graphical tool, PNs can be used as a visual-communication aid similar to
flow charts, block diagrams, and networks. In addition, in these nets tokens sim-
ulate the dynamic activities of systems. As a mathematical tool, it allows setting up

algebraic equations for formally analyzing the mathematical models governing the
systems’ behavior, similarly to other approaches to formal analysis, e.g. occur-
rences nets and reachability trees.
PNs are capable of describing and analyzing asynchronous systems with concurrent
and parallel activities. Using PNs to model such systems is simple and straight-
forward, making them a valuable technique for analyzing complex real-time systems
with concurrent activities. They are capable of modeling material and information
flow phenomena, detecting the existence of deadlock states and inconsistency and
are well suited for the study of Discrete Event Systems (DES). As such, they have
been applied in manufacturing and logistics, hardware design and business pro-
cesses, as well as in distributed databases, communication protocols and DES
simulation (Moody and Antsaklis 1998; Yakovlev et al. 2000).
Early attempts to use PNs for modeling systems’ behavior revealed some
drawbacks. There was no time measure in the original PN and no data concepts.
Hence, their modeling capability was limited and models often became excessively
large, because all data manipulation had to be represented directly into the net
structure. To overcome these drawbacks, over the years PNs have been extended in
many directions including, time, data and hierarchy modeling. Currently, there
exists a vast literature on this subject, covering both its theoretical and its appli-
cation aspects. This chapter is intended to help the reader to get acquainted with the
basic notions and properties of PNs and then, to offer a more general view on their
capabilities by discussing some of its many extensions. It does not attempt to cover
PNs in a comprehensive way (see also Barad 2003).

5.2 Petri Nets and Their Time Modeling

Let us present a formal definition of Petri Nets, first as a graphical tool and then
as an analytical tool.
PN as a Graphical Tool
A graph consists of nodes (vertices) and arcs (edges) and the manner in which they
are interconnected. PNs have two types of nodes: places and transitions.
Formally, a Petri net is a graph, N = (P, T, A, M) where:
P is a set of places (P1,P2,…,Pm), portrayed by circles, representing conditions.
T is a set of transitions (T1,T2,…,Tn), portrayed by bars, representing instanta-
neous or primitive events.
A is a set of directed arcs that connect them; input arcs of transitions (P*T)
connect places with transitions and output arcs (T*P) start at a transition.
M a marking of a Petri net is a distribution of tokens (or markers) to the places of
a Petri net. As tokens move according to firing rules, the net marking changes.
A transition can fire (meaning the respective event occurs) if there is at least one
token in each of its input places (the necessary conditions for its occurrence are
fulfilled). When a transition fires it removes a token from each input place and adds
a token to each output place.

A simple example of an ordinary PN, say PN1, is illustrated in Fig. 5.1.
It is seen that the initial marking of the net contains one token in place P0,
meaning the machine is available. The firing of transition T1 will insert one token in
P1. As P0 and P1 are the input places of T2, they enable the firing of transition T2
which will remove the tokens in P0 and P1 and will insert a token in P2. Then
transition T3 will fire removing the token in P2 and inserting a token in P0. This
process represents a firing sequence.
PNs as an Analytical Tool
The structure of a PN is represented by a matrix C called an incidence matrix. C is
an m*n matrix whose m rows correspond to the m places and whose n columns
correspond to the n transitions of the net. An element of C is defined as follows:

1 for Pi eIðTj Þ
Cij ¼ þ 1 for Pi eO(Tj )
0 Otherwise
The meaning of the above equation is that upon firing transition Tj, j = 1,2,…,n,
one token is removed from each input place Pi, i = 1,2,…,m and one token is added
to each output place. The incidence matrix C1 of PN1 in Fig. 5.1 is given below.
Incidence matrix C1

T1 T2 T3
P0 −1 1
P1 1 −1
P2 1 −1
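
The incidence matrix drives the 'token game' directly: firing transition Tj adds the j-th column of C to the marking vector. A minimal Python sketch (illustrative; the place/transition encoding follows C1 above):

```python
import numpy as np

# Incidence matrix C1 (rows P0..P2, columns T1..T3)
C1 = np.array([[ 0, -1,  1],   # P0: machine available
               [ 1, -1,  0],   # P1: job is waiting
               [ 0,  1, -1]])  # P2: job is being processed

def enabled(marking, C, j):
    """Tj is enabled when every input place holds a token, i.e. the
    marking covers the negative part of column j."""
    return bool(np.all(marking + C[:, j].clip(max=0) >= 0))

def fire(marking, C, j):
    assert enabled(marking, C, j), "transition not enabled"
    return marking + C[:, j]

m = np.array([1, 0, 0])        # initial marking: one token in P0
for j in (0, 1, 2):            # firing sequence T1, T2, T3
    m = fire(m, C1, j)
print(m)                       # back to the initial marking [1, 0, 0]
```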

Time Modeling of PNs—Timed Petri Nets (TPNs)

There is no inherent time measure in the original PNs. To adapt them to the study of
system performances it was necessary to include a time concept into the model.
There are several possibilities for introducing time in PNs. Time was first intro-
duced in PNs by associating it with transitions (Ramchandani 1974); when a
transition is enabled, its firing is delayed by s(T) time units. A few years later,
Sifakis (1977) associated it with places; tokens in an input place become available
to fire a transition only after a delay s(P) has elapsed. The delay may be modeled
either as a deterministic or as a stochastic variable. However, if a stochastic model
is used, calculations will only consider the expected values.
Here, we adopt Sifakis’ approach i.e. we associate time with places. A token
arriving at a timed place Pi, becomes available only after zi time units. This delay is
independent of the token’s arrival instant. A Timed Petri Net graph looks exactly
like a PN graph, except that adjacent to a timed place there is a label z.
Turning back to Fig. 5.1, we may observe that P2, representing ‘job is being
processed’ is a time elapsing activity. To transform the graph of PN1 into a graph of
the timed PN1, TPN1, we have to add a label z2 close to the timed place P2,
representing the processing time.

Fig. 5.1 An ordinary Petri Net. T = {T1, T2, T3}, P = {P0, P1, P2}; T1 job arrives, T2 start machining, T3 end machining; P0 machine available, P1 job is waiting, P2 job is being processed

To convert PN1 into TPN1 we use incidence matrix C1 and delay matrix Z1. Z1
is defined as a square matrix m*m (here m = 3) with elements zi, i = 0,1,2 for i = j,
and zero otherwise.

Z1 = diag(z0, z1, z2)        I = (I1, I2, I3)ᵀ

By convention, the flow of tokens through transition Tj is denoted Ij, j = 1,2,…,n. Here n = 3.

5.3 Decomposing TPNs of Open Queuing Networks

In open queueing networks, customers enter and leave the network in an uncon-
strained manner. Barad (1994) provided a detailed illustration and proof for
decomposing a TPN model of an n-part type open queueing network with M
workstations into M independent TPNs.
The essence of the approach consists in proving that, under steady state con-
ditions, as defined by Sifakis and applied to open systems, the arrival flow of any
such job to its assigned workstation equals the input flow to the system of its parent
part type.
Figure 5.2 graphically illustrates an open queueing network. It consists of M = 3
workstations and n = 4 part types. Parts of type j, (exogenous customers com-
prising the mix served by the queueing network) arrive at the system with given
flow rate I0j , j = 1, 2, 3, 4. Processing of a part is composed of a number of
operations (jobs) to be performed in a given order by previously assigned stations
(deterministic routing). Upon arrival, each part is routed to its initial station.

Fig. 5.2 An open queuing network (reproduced from Barad 2003)

According to Sifakis, a TPN model of an n-part type open queueing network


with M workstations can be decomposed into M independent TPNs, provided
steady state conditions are fulfilled at each workstation, m = 1, 2,…,M.
Steady state can be reached if there exist constant flows, representing expected
frequency of firing, for which the total number of tokens in the net is bounded. As
stated by Sifakis, given a timed Petri net by its incidence matrix C and delay matrix
Z, the net functions at its steady-state rate for a given flow vector I0 iff I0 satisfies
two equations:

C I0 = 0   (I0 > 0)   (5.1)

and

Js Q(0) = Js Z (C+) I0   (5.2)

where [Js] s = 1,2,…,k is a generator of the set of solutions of (5.1), [Js C = 0] and
Q(0) is a vector representing the initial marking of the net.
(C+) is obtained by replacing the ‘−1’ elements in the incidence matrix C by
zeros.
Intuitively Eqs. (5.1) and (5.2) can be respectively associated with conservation
of flow and conservation of tokens.

Application of the decomposition rules

Let us now apply the decomposition rules to the queuing network in Fig. 5.2.
Equations (5.1) and (5.2) should be fulfilled at each of the three stations. Here we
shall only analyze station 1. The station processes part type 1 and part type 2. Their
entering flows are respectively I01 and I02 . Part type 1 proceeds to station 2, while part
type 2 proceeds to station 3.
The structure of the net is:

P = {P0, P1, P2, P3, P4}   T = {T1, T2, T3, T4, T5, T6}

The incidence matrix C2 of station 1, as a TPN is presented below:

T1 T3 T4 T2 T5 T6
P0 −1 1 −1 1
P1 1 −1
C2 = P2 1 −1
P3 1 −1
P4 1 −1

The machine is available when there is a token in P0.
Part type 1 enters through T1 (flow I01), proceeds to P1 (waiting), T3 (start
processing), P2 (processing) and exits through T4 (end processing), thus making the
machine available again (the token is restored to P0).
Part type 2 enters through T2 (flow I02), proceeds to P3 (waiting), T5 (start pro-
cessing), P4 (processing) and exits through T6 (end processing). Then the token is
restored to P0.
The diagonal delay matrix of the net is Z = diag(z0, z1, z2, z3, z4), with three timed places, z0, z2 and z4.

Applying Eq. (5.1) to incidence matrix C2, C2*I0 = 0 [I0 > 0] we obtain:
Flow of part type 1: I01 = I3; I3 = I4 Flow of part type 2: I02 = I5; I5 = I6
Combining the equations:

I01 = I3 = I4   and   I02 = I5 = I6   (5.3)

It is seen that under steady-state conditions, the entering flow of part type 1 to
station 1, I01 , is equal to its exiting flow, I4, and the entering flow part type 2, I02 , is
equal to its exiting flow from the station, I6. According to the above, indeed
Eq. (5.1) represents conservation of flow.

Let us now apply Eq. (5.2) representing preservation of tokens. In this example,
a generator of the set of solutions of Eq. (5.1) is J1,

J1 = [1, 0, 1, 0, 1]   J1 C2 = 0   (5.4)

The “1” values represent all the possible steady states of workstation 1: P0-idle,
P2-processing part type 1, P4-processing part type 2.
Substituting Eq. (5.4) in Eq. (5.2) and using Eq. (5.3) we obtain:

Q0 + Q2 + Q4 = z0 (I01 + I02) + z2 I01 + z4 I02   (5.5)

where Q0, Q2, Q4 are the token content of the corresponding places representing the
possible steady states of workstation 1.
The left hand side (LHS) of Eq. (5.5) represents the sum of tokens over the
steady state of workstation 1. Since there is but one resource entitled workstation 1
and these three states represent all its mutually exclusive states, LHS = 1. The three
components of the right hand side (RHS) represent the respective proportion of time
that workstation 1 is expected to spend in each state in steady state conditions. After
substitution, Eq. (5.5) becomes:

1 = z0 (I01 + I02) + z2 I01 + z4 I02   (5.6)

z2 I01 and z4 I02 are the respective contributions of part type 1 and part type 2 to the
utilization of station 1. As I01 + I02 represents the total entering flow to station 1,
intuitively, z0 can be interpreted as the station's expected idle time between any two
consecutive arrivals of parts.
Any feasible solution has to satisfy z0 > 0; leading to:

z2 I01 + z4 I02 < 1   (5.7)

The LHS of this inequality is the expected utilization of the station at steady
state.

5.4 TPN Based Expected Station Utilization at Steady State

Decomposing Timed PNs of multi-source open queuing networks enables us to deal separately with each station and thus to easily consider a variety of entering and exiting flows of parts as well as eventual flows of disturbances and/or set-ups.
The main result of the decomposition as proved in Barad (1994), is summarized
as follows.

The total expected steady-state utilization rm at station m, m = 1, 2,…,M, is the sum of its expected utilization by each class of customers, including disturbances:

rm = Σj rj(m) + rf(m),   rj(m) = I0j · tj(m), j = 1, 2,…,n,   rf(m) = If · tf(m)   (5.8)

rj(m) and rf(m) are the respective contributions of I0j, the input flow of part type j, j = 1,2,…,n, and of If, the flow of disturbances, to the station m utilization. tj(m) and tf(m) are the respective expected operation duration of part type j, j = 1,2,…,n, and the expected treatment duration of a disturbance at station m, m = 1,2,…,M.
Provided the constraint rm < 1 is satisfied for each station in the open queuing network, Eq. (5.8) is a valid expression of the TPN based expected steady state utilization rm of any processing station m in the system.
A numerical example
Let us assign numerical values to the queuing network in Fig. 5.2 and calculate the
expected utilization of each workstation. The queueing network comprises M = 3
work stations, processing n = 4 part types, according to the deterministic routing
illustrated in Fig. 5.2.
According to the preservation of flow, at steady state the entering flows of each
part is equal to its exiting flow, provided the expected steady state utilization of
each workstation rm, m = 1,2,…,M is less than 1.
The entering flow of part type 1, I01 = 1/10 parts/sec, is to Station 1, where its processing time t1(1) = 5.0 secs. It proceeds to Station 3 with processing time t1(3) = 1.7 secs and exits.
The entering flow of part type 2, I02 = 1/100 parts/sec, is also to Station 1 and its processing time t2(1) = 2.0 secs. It proceeds to Station 2 with processing time t2(2) = 2.7 secs, then to Station 3 with processing time t2(3) = 3.5 secs, returns to Station 2 with processing time t′2(2) = 3.8 secs and exits.
The entering flow of part type 3, I03 = 1/25 parts/sec, is to Station 2 where its processing time t3(2) = 2.9 secs. It proceeds to Station 3 with processing time t3(3) = 2.5 secs, returns to Station 2 with processing time t′3(2) = 4.0 secs and exits.
The entering flow of part type 4, I04 = 1/20 parts/sec, is also to Station 2 with processing time t4(2) = 4.4 secs. It proceeds to Station 3 with processing time t4(3) = 7.0 secs and exits.
We shall now apply Eq. (5.8) to calculate the TPN based expected utilization of each of the three stations, r1, r2 and r3.

r1 = I01 · t1(1) + I02 · t2(1) = 1/10 × 5.0 + 1/100 × 2.0 = 0.520
r2 = I02 (t2(2) + t′2(2)) + I03 (t3(2) + t′3(2)) + I04 · t4(2)
   = 1/100 (2.7 + 3.8) + 1/25 (2.9 + 4.0) + 1/20 × 4.4 = 0.561
r3 = I01 · t1(3) + I02 · t2(3) + I03 · t3(3) + I04 · t4(3)
   = 1/10 × 1.7 + 1/100 × 3.5 + 1/25 × 2.5 + 1/20 × 7.0 = 0.655

As the above calculated expected utilization of each station is less than 1, the
numerical results are correct.
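
The computation generalizes directly from Eq. (5.8). A minimal Python sketch (the data structures are illustrative) that reproduces the three utilizations above:

```python
# TPN-based expected steady-state utilization (Eq. 5.8): a station's
# utilization is the sum over customer classes of flow x operation time.
flows = {1: 1/10, 2: 1/100, 3: 1/25, 4: 1/20}       # parts per second

# processing[station][part] lists that part's operation times at the
# station (part types 2 and 3 visit Station 2 twice).
processing = {
    1: {1: [5.0], 2: [2.0]},
    2: {2: [2.7, 3.8], 3: [2.9, 4.0], 4: [4.4]},
    3: {1: [1.7], 2: [3.5], 3: [2.5], 4: [7.0]},
}

for m, ops in processing.items():
    r = sum(flows[j] * sum(times) for j, times in ops.items())
    assert r < 1, "steady state requires r_m < 1"
    print(f"r{m} = {r:.3f}")    # r1 = 0.520, r2 = 0.561, r3 = 0.655
```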

5.4.1 Modeling Disturbances

Disturbances of different types can be modeled as additional classes of customers. In Barad (1994), two versions of modeling disturbances with TPNs are described. Figure 5.3, reproduced from Barad (1994), shows these two TPN versions and their respective incidence matrices.
respective incidence matrices.
In version A it is assumed that following the disturbance, the part whose pro-
cessing was interrupted, is discarded. Accordingly, after the repair whose duration
is z5, the resource is returned to its idle place. The processing duration of a part is
denoted z3.
In version B, after repair the resource is returned to resume its interrupted
activity. Since it is not possible to keep track of the remaining processing time, a

Version A Version B
T1 T4 T1 T4

4
1 T2 4 T5 T6 T2 T5 T6
1

3 5 3 5
0
0
T3 T3

T1 T2 T3 T4 T5 T6
T1 T2 T3 T4 T5 T6 P0 -1 1
P0 -1 1 1 P1 1 -1
P1 1 -1 P3 1 -1 -1 1
P3 1 -1 -1 P4 1 -1
P4 1 -1 P5 1 -1

Fig. 5.3 Disturbances as TPN models—Versions A and B (reproduced from Barad 1994)

preemptive repeat assumption is applied. In each version it is assumed that when
the resource fails, the repair process starts instantaneously (transition T5 represents
both events). Both versions imply that a failure has a higher priority than a pro-
cessed part. However, as the calculated results express the total expected resource
utilization at steady state as a sum of its expected utilization by each class of
customers regardless of their priority, special modeling of priorities is superfluous.
By applying version A to the steady-state conditions [Eqs. (5.1) and (5.2)] we
obtain Eq. (5.9), whose RHS structure is identical to that of the LHS of inequality
(5.7), thus implying that the contribution of the failures to the expected steady-state
machine utilization is no different from the contribution of any other customer (part
type). The flows I1 and I4 originate respectively from transitions T1 and T4,
representing the entering flow of parts and failures.

rm(A) = z3 I1 + z5 I4   (5.9)

Again, rm < 1 is the condition for achieving steady-state.
However, the results of applying Eq. (5.1) to the incidence matrix of version A
show that the exiting flow of parts, I5, no longer equals the entering flow of parts I1;
instead it reduces to, I5 = I1 − I4. This result stems from our modeling assumption
in version A according to which each station failure causes the part, currently
processed, to be discarded.
Through the same procedure, version B yields a different formula for the con-
tribution of failures to the expected steady-state state machine utilization:

rm(B) = z3 I1 + (z5 + z3) I4   (5.10)

Due to the preemptive repeat assumption in B, rm(B) = rm(A) + z3 I4. This
equation shows that the failure flow inflates the resource utilization because each
failure causes a part to be serviced again. Naturally, the resource productive uti-
lization is not increased, as under this version the exiting flow of parts stays equal to
the entering flow, I5 = I1. This may demonstrate how much of the resource uti-
lization is wasted. Again, steady-state necessitates rm(B) < 1.
This simple model can easily be extended to other, more realistic scenarios,
such as a more complex repair system, as well as to the modeling of defective
units.
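A small numerical sketch (in Python) may help contrast the two versions; the processing time z3, repair time z5 and flows I1, I4 below are assumed values chosen solely for illustration.

# Contributions of parts and failures to machine utilization, per
# Eqs. (5.9) and (5.10); all numerical values are assumed.
z3, z5 = 4.0, 2.5     # processing and repair durations (hours)
I1, I4 = 0.2, 0.02    # entering flows of parts and failures (per hour)

rm_A = z3 * I1 + z5 * I4           # version A: interrupted part is discarded
rm_B = z3 * I1 + (z5 + z3) * I4    # version B: preemptive repeat after repair

I5_A = I1 - I4   # version A: each failure discards one part
I5_B = I1        # version B: all entering parts eventually exit

print(f"rm(A) = {rm_A:.3f}, rm(B) = {rm_B:.3f}")   # rm(B) - rm(A) = z3 * I4
print(f"wasted utilization under B: {rm_B - rm_A:.3f}")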

5.5 TPNs as a Verification Tool of Simulation Models at Steady State

An important and difficult task in any simulation project is the validation and
verification of the simulation model. Validation is the process intended to confirm
that the conceptual simulation model, within its domain of applicability, is an
accurate representation of the real world system it describes. The verification

process seeks to determine if the computerized simulation model was built right and
operates as intended. There is a rich literature on these topics (see e.g. Shannon
1981; Sargent 1991; Balci 1994).
From an output analysis perspective, system simulation is classified into ter-
minating or steady state. Terminating simulations aim to estimate system param-
eters for periods well defined in terms of starting and ending conditions. Steady-state
simulations aim to estimate system parameters obtained for infinite simulation time.
Before reaching steady-state conditions, when the system parameters attain stable
values, the system behavior undergoes transition periods, which reflect the initial
system operating conditions (see e.g. Kelton 1989). In the analysis of such systems,
an important objective is to avoid bias in parameter estimation that may be intro-
duced by data collected during transition periods. One among the many approaches
to simulation verification is comparing simulation outputs with analytical results
(Kleijnen 1995). In many simulation studies dealing with manufacturing systems or
computers and communication systems, networks of queues represent the reality
modeled by the system simulation. Hence, for simulation verification of such
systems analytical approaches describing queueing networks may be appropriate.
Section 5.3 described a TPN based decomposition approach of a queueing
network consisting of M workstations. Equation (5.8) in Sect. 5.4 is a formula for
calculating the TPN based expected utilization at steady state of each workstation in
the decomposed network. Accordingly, TPNs can be considered an appropriate
analytical approach for verification of computerized queueing network simulations
at steady state. Barad (1998) presented this approach at the 1998 Winter Simulation
Conference. Two simulation cases were described. In case 1 the network consisted
solely of processing resources. In case 2 the network comprised processing
resources as well as Automated Guided Vehicles (AGVs) in a Segmented Flow
Topology (SFT), based on the work of Barad and Sinriech (1998). These authors
developed a method for verifying the segmentation phase in the SFT procedure
through Petri Nets based approximations. They extended the TPN decomposition
approach developed in Barad (1994) to include, besides time, the distance dimen-
sion, which applies to the AGV movement through the shop floor. To methodically
examine the accuracy of this proposed simulation verification method, they
designed and carried out a multifactor simulation experiment of an investigated
system. Here we shall solely show the graphical presentation of the decomposed
TPN modeling of the AGV and some results of the simulation experiment.
PN based activity cycle of an AGV
Figure 5.4 describes the PN based activity cycle associated with an AGV assigned to
serve a bi-directional segment and its incidence matrix (the time labels are omitted).
The cycle starts with a part j, whose operation at a workstation on its processing
route has been completed and calls for an AGV to transport it to its next station. The
part arrival is described by the firing of transition T1, with input flow I0j, which places a
token in P1, denoting the part is waiting for transportation. The token will stay there
until a token arrives in P9 meaning the AGV is available and is waiting at the
pick-up location. Hence, transportation can start. This event is described by
transition T2, which is thus enabled.

Fig. 5.4 PN modeling of an AGV: graphical representation and incidence matrix
(reproduced from Barad and Sinriech 1998). Transitions: T1 part arrival; T2 start
part transportation; T3 transportation completed; T4 start moving to new pick-up
from last drop-off; T5 start moving to staging; T6 staging reached; T7 start
moving to new pick-up from staging; T8, T9 new pick-up reached. Places: P1 part
is waiting; P2 part is being transported; P3 AGV reached destination; P4 AGV
travels empty from drop-off station to new pick-up; P5 AGV travels empty from
drop-off to staging location; P0 AGV available at staging; P7 AGV travels empty
from staging to new pick-up; P8 a call for AGV; P9 AGV available at pick-up.

Transportation of the part to its delivery station,
whose duration is z2, is denoted by a token in P2. After z2 time units the loaded
AGV will reach its delivery station. This event is described by the firing of tran-
sition T3, which inserts a token in P3, meaning the AGV has reached the drop-off
point, where it may have to wait.
Let us return for a moment to transition T1, representing the arrival of part j. We see
that besides P1, T1 has a second output place, P8. A token in this place signals a call for
the AGV, i.e. that a part is waiting to be transported. P8 is input to transition T4 and also to
transition T7. This information will reach the AGV either immediately after the
drop-off (enabling firing of T4) or at its staging location P0 (enabling firing of T7).
Accordingly, upon its leaving P3, the AGV starts moving empty, either towards
a new pick-up station (firing of T4) or, if T4 didn’t fire, meaning there was no call,
towards its staging location P0 (firing of T5). Firing of T4 places a token in P4,
which resides there z4 time units (duration of empty travel from last drop-off to new
pick-up). It ends when the AGV reaches the new pick-up. This event is expressed by the
firing of T9 that will place a token in P9, meaning the AGV is ready to transport the
part that required service. Firing of T5 places a token in P5 meaning that since there

were no calls, the AGV will start moving empty to its staging location P0, with
expected duration z5. The token in P5 will stay there z5 time units until T6 fires,
placing a token in P0. This means that the AGV is available at its staging location.
Summary of the simulation experiment results
As mentioned above, the analytical TPN model and the multi-factor simulation are
detailed in Barad and Sinriech (1998).
Here is a short summary of the simulation experiment.
a. The effects of four factors were investigated:
– Planned utilization of the resources
– Inter-arrival statistical distributions
– Generating seeds of Siman software
– Queue discipline (solely for the processing resources)
b. The dependent variable of the experiment, the accuracy of the TPN based
utilization estimates, was expressed by the difference (in percentage) between
the steady state simulation results (respective mean utilizations of the resources)
and their TPN counterpart.
c. The experiment investigated the utilizations of three resources in the queuing
network:
Two moveable resources, AGV1 and AGV2, and one of the network's processing
resources.
The separate results of the three resources are as follows.
Moveable resources
AGV 1—Its low and high planned utilizations were 0.5 and 0.6 (both relatively
low). The overall accuracy was 0.2%. The results were not significantly affected by
any of the investigated factors.
AGV 2—Its low and high planned utilizations were 0.65 and 0.78 (higher than
those of AGV 1). The accuracy of the low planned level, 0.65, was 1.6%, while that
of the high planned level, 0.78, was 2.9%.
The accuracy of the moveable resources was significantly affected by the
planned utilization. It was also significantly affected by the generating seeds of the
Siman software as well as by the interaction between the seeds and the inter-arrival
distributions.
Processing Resource—Its low and high planned utilizations were 0.75 and 0.9
(both relatively high). The accuracy was not significantly affected by the planned
utilization. As both planned levels (0.75 and 0.9) were high, the overall accuracy
was about 2.9%, similar to that of AGV 2 at its high planned utilization (0.78).
All Resources
The accuracy of the TPN based utilization estimates for the simulation experi-
ment varied from 0.2% at low traffic to about 2.9% at higher traffic. The relatively
low differences between the numerical results of the two methods provided evi-
dence that the TPN technique can be used to verify simulation models. Possible
flaws in the computerized simulation model can be detected through observed

discrepancies between the TPN based and the simulation results. An important
advantage of the proposed method is its ease of use. Also, the method imposes
no restrictions on the size of the network to be decomposed.
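Operationally, the verification step amounts to comparing, per resource, the simulated steady-state utilization with its TPN counterpart against a tolerance. The Python sketch below illustrates the idea with hypothetical figures; the 3% tolerance loosely mirrors the upper end of the accuracy range reported above and is not a prescribed threshold.

# Flag resources whose simulated utilization departs from the TPN estimate
# by more than a chosen tolerance; values here are invented for illustration.
TOLERANCE = 0.03

results = {
    # resource: (simulated mean utilization, TPN based utilization)
    "AGV1":    (0.602, 0.600),
    "AGV2":    (0.805, 0.780),
    "Station": (0.921, 0.900),
}

for resource, (sim, tpn) in results.items():
    gap = abs(sim - tpn)
    status = "OK" if gap <= TOLERANCE else "CHECK THE SIMULATION MODEL"
    print(f"{resource}: |sim - tpn| = {gap:.3f} -> {status}")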

5.6 Weaving Processes—An Additional Application of Timed Petri Nets

Barad and Cherkassky (2010) applied the TPN based method for estimating the
expected resources utilization at steady state to weaving processes.
They broadened the TPN decomposition approach, as applied to manufacturing
systems of discrete items, to include the weaving process, which is a continuous
process. Their study focused on the conceptual planning stage, during which valid
approximation methods are needed. The discrete input to the system, representing
orders of varying length and fabric types, was transformed into continuous flows of
fabric length by types. The procedure considered both the continuous weaving
process as well as the beam change, which is a discrete operation. Here is a
summary of their study.
The weaving process
The weaving process produces fabrics by interlacing the weft yarns with the warp
yarns according to the pattern of the desired fabric. In the reviewed study the
weaving operation performed by the weaving looms is the main manufacturing
operation.
The inputs to the weaving process are fabric orders, characterized by the required
fabric type and order length. The study focuses on adaptation of the technical
parameters of the existing looms in the plant (loom types) to produce the style type
and design of the desired fabric. Incoming orders are stored and wait for processing.
Waiting time in storage depends on equipment availability and possibly on pri-
orities. There are two main operation categories: equipment oriented operations
(looms) and manpower oriented operations such as beam change, fabric change and
loom breakdown repairs (or preventive maintenance).
To apply the TPN decomposition approach for calculating the expected (long
run) utilization at steady state of the looms and eventually of the manpower ser-
vicing them, the authors adapted the basic assumptions in Barad (1994) to a
weaving processing system, as detailed below.

5.6.1 Manufacturing Assumptions Describing the Weaving Process

(a) Orders for type j fabric, j = 1,2,…,n and class length k, k = 1,2,…,K, arrive at
the system with given mean rate Ij,k (orders per hour). The processing of an

order is composed of a number of operations (jobs) to be performed in a given


order by previously assigned stations (deterministic routing). Upon arrival, each
order type is routed to its initial station, where it is stored and waits for pro-
cessing. Here the initial station is a weaving workstation.
(b) The weaving workstation is served by M looms.
(c) $R_{j,k}^{(m)}$ defines the given assignment of orders as follows:
$R_{j,k}^{(m)} = 1$ if a type j order of class length k is assigned to loom m,
m = 1,2,…,M, and 0 otherwise.

Since each order has to be assigned to one loom, it follows that:

\[ \sum_{m=1}^{M} R_{j,k}^{(m)} = 1 \quad \text{for } j = 1,2,\ldots,n; \; k = 1,2,\ldots,K \]

(d) The system is open, meaning that each loom has unlimited buffer space and the
expected utilization of each server (loom) is less than 1 (unsaturated system).
(e) $X_{j,k}^{(m)}$ are stochastic variables with means $t_{j,k}^{(m)}$, representing the processing
duration of a type j fabric order of class length k, k = 1,2,…,K, at loom m,
m = 1,2,…,M.

Evaluating the mean utilization of the looms at steady state using TPN
The mean utilization of loom m by visits of type j orders of class length k,
$r_{j,k}^{(m)}$, is the relative contribution of type j fabric of class length k to the
utilization of loom m. To calculate it we shall use Eq. (5.8) from Sect. 5.4,
adapted to the present situation, in which the specific entering material has to be
assigned to a specific loom. This is represented by the additional term $R_{j,k}^{(m)}$,
representing the given assignment of type j fabric of class length k to loom m,
defined above in (c).

\[ r_{j,k}^{(m)} = t_{j,k}^{(m)} R_{j,k}^{(m)} I_{j,k} \quad j = 1,2,\ldots,n; \; k = 1,2,\ldots,K; \; m = 1,2,\ldots,M \qquad (5.11) \]

$t_{j,k}^{(m)}$ is the expected processing duration of a type j fabric order of class length k at
station m.
$I_{j,k}$ is the mean rate of type j fabric orders of class length k.
Set-ups such as beam changes are additional types of customers. Consider for
example a warp beam, which has a certain length, say L meters. Accordingly, every
L meters the beam should be changed. The beam change takes ts hours and is
performed by a worker team. During a beam change, loom m is not available. Given
the numerical value of L, the mean entering rate of all fabric types of class length k
orders (I.k), and the length distribution of class k, the mean rate of beam
changes per class k can be easily calculated.
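As a sketch of this calculation in Python, assume, as stated, that a beam change is needed after every full L = 1600 m and that order lengths within a class are uniformly distributed; the class-2 figures below anticipate Table 5.1, while the grid-averaging estimator of nk is merely one simple choice.

import numpy as np

def expected_beam_changes(a, b, L=1600, n=100_000):
    """Estimate n_k = E[floor(length / L)] for lengths uniform on (a, b)."""
    lengths = np.linspace(a, b, n)        # dense grid over the class range
    return np.floor(lengths / L).mean()   # grid average approximates the mean

I_k = 0.141                               # class-2 order rate (orders/h)
n_k = expected_beam_changes(1001, 2000)   # class-2 length range (m)
I_prime_k = I_k * (1 + n_k)               # mean rate including beam changes
print(f"n_k = {n_k:.3f}, I'_k = {I_prime_k:.3f} per hour")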
The following equation considers set-ups s as an additional type of customers
(orders) contributing to rm, the long run utilization of loom m.

\[ r_m = \sum_{k=1}^{K} \sum_{j=1}^{n} r_{j,k}^{(m)} + r_s^{(m)} \qquad (5.12) \]

$r_{j,k}^{(m)}$ and $r_s^{(m)}$ represent the respective contributions of the input flow of type j
orders of class length k and of the input flow of set-ups s to the utilization of
station m.

$r_s^{(m)}$ is defined as follows:

\[ r_s^{(m)} = t_s^{(m)} \cdot I_s \]

where $t_s^{(m)}$ is the expected duration of set-up s at machine m and $I_s$ is the input flow
of set-ups s. Disturbances and/or other set-ups can also be considered.

Provided the constraint rm < 1 is satisfied for each loom m, m = 1,2,…,M, and
all buffers are of unlimited size, Eq. (5.12) is a valid expression of the TPN based
expected steady-state utilization rm of any loom m, m = 1,2,…,M, in the weaving
work system.
As the weaving operation executed by the looms is a continuous operation, Eq. (5.12)
has to be modified. The data for modifying the equation are as follows:
A loom m has a given velocity, $v_j^{(m)}$, for processing a type j fabric, j = 1,2,…,n,
m = 1,2,…,M.
The loom velocity for performing this operation depends on the loom's technical
capability (loom type) and on the required fabric type j. It does not depend on the
length class k.
Let $L_k$ define the midrange of class length k. Accordingly, $t_{j,k}^{(m)}$ is replaced by
$L_k / v_j^{(m)}$.
The following equation replaces Eq. (5.11) above:

\[ r_{j,k}^{(m)} = \left[ L_k / v_j^{(m)} \right] I_{j,k} R_{j,k}^{(m)} \quad j = 1,2,\ldots,n; \; k = 1,2,\ldots,K; \; m = 1,2,\ldots,M \qquad (5.13) \]
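A minimal Python sketch of Eq. (5.13) follows; the velocity is taken from Table 5.3 below, while the per-loom order rate is an assumed value.

# Contribution of type-j fabric of length class k to loom m, per Eq. (5.13):
# (midrange length / loom velocity) x order rate, if the order is assigned.
def loom_contribution(L_k, v_jm, I_jk, assigned):
    return (L_k / v_jm) * I_jk if assigned else 0.0

# e.g. class-2 midrange 1500 m, fabric 1 on a type-1 loom (9.60 m/h),
# with an assumed per-loom order rate of 0.004 orders/h:
r = loom_contribution(L_k=1500, v_jm=9.60, I_jk=0.004, assigned=True)
print(f"r = {r:.3f}")   # fraction of the loom's time spent on this (j, k)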

A numerical example
The example is based on a real weaving plant. The data represent a working period
of about four months, during which the plant was operated 5 days per week, 24 h
per day (3 shifts). We shall use the input data and the processing data to assign the
entering orders to the existing looms.
Input data (entering orders)
The mean overall order rate is 0.343 orders per hour.
There are five fabric types (j = 1, 2,…,5) and four length classes (k = 1,2,…,4).
The length ranges of the four length classes and their respective input mean rates I.k
(orders/h) are detailed in Table 5.1.
Given that every L = 1600 m the beam should be changed, we may calculate the
expected number of beam changes for each class length k (nk) by assuming that the
fabric length in each class follows a uniform distribution. The mean rate of beam
changes, I′k, as a function of I.k, becomes: I′k = I.k (1 + nk), k = 1,2,3,4. The
distribution of orders by fabric type j, pj, is the same for all length classes and is
given in Table 5.2.
We may now calculate the mean rates of type j orders of class k length:

\[ I_{j,k} = p_j \cdot I_{.k} \quad j = 1,2,\ldots,5; \; k = 1,2,\ldots,4 \]

Processing data
The velocity of a loom is dependent on the loom type m and on the desired fabric
type j. Two loom types are considered, m = 1,2. There are 60 looms of type 1
(m = 1) and 40 looms of type 2 (m = 2). The velocity matrix, $v_j^{(m)}$ (m/h), is given in
Table 5.3.

Table 5.1 Input orders by class length

k   Range (m)    Midrange (Lk)   Orders/h (I.k)
1   1–1000       500             0.078
2   1001–2000    1500            0.141
3   2001–3000    2500            0.073
4   3001–4000    3500            0.051

Table 5.2 Distribution of orders by fabric type

j    1    2    3     4     5
pj   0.4  0.3  0.15  0.10  0.05

Table 5.3 The velocity matrix $v_j^{(m)}$ (m/h)

m\j   1      2     3      4      5
1     9.60   8.88  –      –      7.98
2     12.0   –     10.32  13.62  10.20

Table 5.4 Number of utilized looms by loom types and fabric types

m\j   1    2    3    4    5    Total number of utilized looms
1     12   28   –    –    –    40 (out of 60)
2     18   –    12   6    4    40 (out of 40)

The relative contribution of type j fabric of length class k is calculated using
Eq. (5.13), to which we add the beam change (set-up) contribution:

\[ r_{j,k}^{(m)} = \left[ L_k / v_j^{(m)} \cdot I_{j,k} + I'_{j,k} \cdot t_s \right] R_{j,k}^{(m)} \quad j = 1,2,\ldots,5; \; k = 1,2,\ldots,4; \; m = 1,2 \]

It takes ts = 2 h to perform a beam change for any fabric type j, j = 1,2,…,5.


Operational decisions regarding loom assignments
1. To eliminate the fabric change set-up we assign each fabric type j to a different
loom.
2. According to the velocity matrix above, fabric type j = 2 has to be assigned to a
type 1 loom, while fabric types j = 3,4 have to be assigned to a type 2 loom.
3. For fabric types j = 1 and j = 5, which can be processed on both loom types,
type 2 looms are preferred because of their higher velocity.
4. To avoid long queues, the planned utilization of a loom should not exceed 0.75.
The formula for calculating $N_j^{(m)}$, the number of type m looms required for fabric j orders,
including set-up changes, given a 0.75 utilization of the looms, is given below:

\[ N_j^{(m)} = \left\{ \sum_{k=1}^{4} \left[ I_{j,k} \cdot L_k / v_j^{(m)} \right] + \sum_{k=1}^{4} I'_{j,k} \cdot n_k \right\} \Big/ 0.75 \]
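The Python sketch below evaluates this formula for fabric j = 1 on type 1 looms. The per-class rates follow from Tables 5.1 and 5.2 (Ij,k = pj · I.k with p1 = 0.4), but the expected numbers of beam changes nk are assumed values, so the resulting loom count is illustrative only.

import math

L_k  = [500, 1500, 2500, 3500]             # midrange lengths (Table 5.1)
I_jk = [0.0312, 0.0564, 0.0292, 0.0204]    # fabric-1 rates: 0.4 x I.k
n_k  = [0.0, 0.4, 1.1, 1.7]                # assumed mean beam changes per order
v_jm = 9.60                                # type-1 loom velocity for fabric 1

I_prime = [I * (1 + n) for I, n in zip(I_jk, n_k)]   # beam-change-inflated rates

processing = sum(I * L / v_jm for I, L in zip(I_jk, L_k))   # loom-hours per hour
setups = sum(Ip * n for Ip, n in zip(I_prime, n_k))         # set-up term above
N_jm = math.ceil((processing + setups) / 0.75)              # round up to whole looms
print(f"looms needed: {N_jm}")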

Based on the above decisions, the final assignment of orders by fabric types j,
j = 1,2,…,5 to loom types m = 1,2 is detailed in Table 5.4. We see that the TPN
based calculated utilization of the looms enabled full utilization of the preferred
type 2 looms.
Conclusion
The TPN decomposition approach, as applied to manufacturing systems of discrete
items, was extended to include the weaving process, a continuous process.

References

Balci O (1994) Validation, verification and testing techniques throughout the life cycle of a
simulation study. Ann Oper Res 53:121–173
Barad M (1994) Decomposing timed Petri Net models of open queueing networks. J Oper Res Soc
45(12):1385–1397

Barad M (1998) Timed Petri Nets as a verification tool. In: 1998 Winter Simulation Conference
Proceedings, Washington, USA, pp 547–554
Barad M, Sinriech D (1998) A Petri Net model for the operational design and analysis of
Segmented Flow Topology (SFT) AGV systems. Int J Prod Res 36(12):1401–1425
Barad M (2003) An introduction to Petri Nets. Int J Gen Sys 32:565–582
Barad M, Cherkassky A (2010) A Timed Petri Nets perspective on weaving processes. IFAC Proc
Vol 43(12):438–444
Barad M (2016) Petri Nets—a versatile modeling structure. Appl Math 7:829–839
Kelton WD (1989) Random initialization methods in simulation. IIE Trans 21:355–367
Kleijnen JPC (1995) Verification and validation of simulation models. Eur J Oper Res 82:145–162
Moody JO, Antsaklis PJ (1998) Supervisory control of discrete event systems using Petri Nets.
Kluwer, Boston
Ramchandani C (1974) Analysis of asynchronous concurrent systems by timed Petri Nets.
Ph.D. Thesis, MIT Department of Electrical Engineering, Cambridge, Mass
Sargent RG (1991) Simulation model verification and validation. In: 1991 Winter Simulation
Conference Proceedings, Phoenix, USA, pp 37–47
Shannon RE (1981) Tests for the verification and validation of computer simulation models. In:
1981 Winter Simulation Conference Proceedings, Atlanta, USA, pp 573–577
Sifakis J (1977) Use of Petri Nets for performance evaluation. Acta Cybernetica 4:185–202
Yakovlev A, Gomes L, Lavagno L (eds) (2000) Hardware design and Petri Nets. Kluwer, Boston
Chapter 6
Quality Function Deployment (QFD)

6.1 Introduction

Quality Function Deployment (QFD), originally a product oriented quality tech-
nique, was developed for extracting the customer needs or desires, expressed in
his/her own words, and then translating them into measurable product quality
characteristics and further processes. However, its usage is not limited to product
planning. By its structure it is a generic, multi-purpose planning framework, which
can be applied to translate improvement needs of various systems into prioritized
planning. By its structure it is a generic/multi-purpose planning framework, which
can be applied to translate improvement needs of various systems into prioritized
improvement activities. QFD is a team-based technique and there are many
informative publications about its various usages (see e.g. Chan and Wu 2002).
A main advantage of the QFD framework is its structured and consistent
approach, which may start at the highest level, i.e. with the strategic improvement
needs of an enterprise, may lead through the functions required to answer these
needs and finally achieve the best improvement oriented results. This chapter does
not present new applications of the framework. Its objective is solely to show how
usefully QFD can be applied in a variety of circumstances: realizing the manu-
facturing strategy of an enterprise, establishing improvement paths of enterprises, or
deploying flexibility in supply chains.

6.2 Quality Function Deployment—The Original Version

The QFD technique has its roots in Japan of the late 60s and the early 70s. The
Japanese created a methodology to support the development process for complex
products by linking the planning elements of the design and construction processes
to specific customer requirements. Quality Function Deployment expresses the
voice of the customer. The voice of the customer represents a set of customer needs


where each need has assigned to it a priority, which indicates its importance to the
customer.
The methodology was applied at Mitsubishi in 1972 (Akao 1990). During the
80s and the 90s of the previous century, U.S. and Japanese firms gradually and
successfully adopted it (see e.g. Bossert 1991 and King 1995). The method is
typically carried out by teams of multidisciplinary representatives from all stages of
product development and manufacturing (Lai et al. 1998). In recent years, the
application domains of the QFD methodology have been expanded, and its popu-
larity increased tremendously.
The essence of the original QFD is to extract the prioritized customer needs or
desires, expressed in his/her own words (WHATs), to translate them into prioritized
technical product quality characteristics (HOWs) and subsequently into compo-
nents’ characteristics, operating decisions and other decisions. Each translation of
customer ‘voices’ and subsequent processes uses a matrix relating the HOWs with
the WHATs, associated with any specific QFD stage. The HOWs of one matrix
become the WHATs of the next matrix. Important parameters in the translation
process are the numerical values of the matrix elements representing the strength of
the relations between the variables involved.
Griffin and Hauser (1993) presented a comparison of different approaches for
collecting customer preferences in QFD. They considered the gathering of customer
information to be a qualitative task, carried out through interviews and focus
groups, and found both person to person interviews and focus groups to be equally
effective methods in extracting the customer needs.
The QFD methodology is implemented through sequential matrices. The first and
best documented matrix which translates the customer requirements expressed in
his/her own words into measurable product technical characteristics is called ‘The
House of Quality’ (see e.g. Hauser and Clausing 1988).
The House of Quality is presented in Fig. 6.1. The matrix inputs (the house’s
western wall) are the customer needs, the WHATs and their respective numerical
importance to the customer. They are translated into the HOWs (the house’s ceiling),
which represent the measurable product technical characteristics, or specifications.

Fig. 6.1 House of Quality HOWs


(HOQ)

Technical
characteristics
WHATs
Competitive
Importance
Customer

priorities
needs

RELATIONSHIP
MATRIX

Importance
(normalized)

The relationships between the technical characteristics and the customer needs
are the core of the matrix; they show how well each technical characteristic expresses
the related customer need. The typical relationship strengths are weak, strong and
very strong; they are all positive and assessed by the technical team. The triangle
(the house’s roof) describes the relationships between the technical characteristics.
Some are positively related, while others are negatively related and typically do not
explicitly appear in the calculations; they are mainly used as trade-off. Rows rep-
resenting competitive customer needs may be emphasized by multiplying their
original values by indices (competitive priorities on the house’s eastern wall). The
translated values (the matrix output) represent the calculated importance of each
technical characteristic. As mentioned above, the output of matrix I becomes the
input of matrix II. This sequential approach continues from matrix to matrix.
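To make the translation mechanics concrete, a minimal Python sketch of the House of Quality computation is given below. The needs, characteristics and relationship strengths are invented for illustration, and the 1–3–9 strength convention is one common choice rather than a fixed rule.

import numpy as np

needs = ["easy to carry", "long battery life", "durable"]
importance = np.array([3.0, 5.0, 4.0])   # customer-assigned weights (WHATs)

hows = ["weight", "battery capacity", "case strength"]
R = np.array([          # rows: needs, columns: HOWs
    [9, 0, 1],          # 1 = weak, 3 = strong, 9 = very strong, 0 = none
    [3, 9, 0],
    [0, 0, 9],
])

how_importance = importance @ R                    # translate WHATs into HOWs
normalized = how_importance / how_importance.sum()
for h, w in zip(hows, normalized):
    print(f"{h}: {w:.2f}")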
In this original version of QFD, as a product design method, there is but one
external numerical input, the customer needs with their respective numerical
importance. This input is introduced in the first matrix, and translated along the
entire sequence of matrices. The next section presents an enhanced view of this
methodology.

6.3 Quality Function Deployment—An Enhanced View

As mentioned in the previous section the basic concept of the original QFD is the
customer’s voice as an input to improvement. No other external numerical inputs
are introduced. Its implementation framework is the QFD sequential matrices. The
published examples of an enhanced QFD view that will be presented here adhere to
the above implementation framework. However, they exhibit changes in structure
and context.
The changes in structure are as follows:
(1) The matrices have no roofs.
(2) Each matrix has two inputs.
One input preserves the initial sequential path, i.e. the output of any given matrix
(HOWs), becomes the input of the next matrix, its WHATs.
The second input is the given weights of the HOWs. In the original QFD the
weights of the HOWs are calculated from the weights of the WHATs multiplied by
the strength of their relationships with the HOWs. Here, the HOWs have given
input weights. Their output weights are calculated in a similar manner as the output
weights of the HOWs in the original QFD, but the calculating formulae take into
account their input weights as well.
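Under these assumptions, the two-input rule can be written compactly as below; the notation (wi for the weights of the WHATs, gj for the given input weight of HOW j, and Rij for the relationship strengths) is ours, introduced here only for clarity.

\[
\mathrm{out}_j \;=\; g_j \sum_i w_i \, R_{ij}
\]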
The change in context is related to the main topic.
Instead of ‘product’ as the main topic, with its planning, design and manufac-
turing processes, a variety of topics may be considered.
As mentioned in the introduction, the examples here express different topics.

6.4 Linking Improvement Models to Manufacturing Strategies

The title of this section is the title of the first example (published application). Barad
and Gien (2001) carried out research whose main objective was to develop a
structured and integrative methodology for supporting the improvement of manu-
facturing systems in Small Manufacturing Enterprises (SMEs). The QFD specific
methodology of the paper relied on three basic concepts: ‘strategic priorities’,
‘concerns’ and ‘strategic improvement needs’.
The first concept was to define a set of strategic priorities of a manufacturing
enterprise. It comprised the typical competitive advantages found in the manufac-
turing strategy literature: delivery (fast, dependable), quality (high design quality,
consistent product quality), price. The set also contained an internal, human ori-
ented strategic priority: employees’ involvement. This strategic priority represented
the possibility of achieving a competitive advantage through people (Pfeffer 1995).
The second basic concept consisted of asking interviewees (in the empirical
study) about specific concerns instead of about improvement needs, as in the
original QFD. According to Flanagan (1954), ‘the improvement needs of a system
stem from concerns that express unsatisfied needs’. It implies that negative, rather
than positive past experiences, are likely to be registered by customers.
The study differentiated between two types of concerns: strategic concerns and
operating concerns. All concerns were evaluated on a ‘gravity’ scale. Information
supplied by open questions in a pilot investigation was used to formulate specific
concerns for each of the four performances (time, quality, costs and human
oriented).
The third concept regarded the evaluation of the strategic improvement needs of
an enterprise by combining the ‘importance’ attributed by the enterprise to each
strategic priority with the ‘gravity’ of its associated concerns. ‘The higher the
importance of a strategic priority and the higher the gravity of its concerns, the
higher are the enterprise improvement needs’. The concept is equivalent to the
‘importance-performance matrix as a determinant of improvement priority’ devel-
oped by Slack (1994).
Based on this concept, the strategic improvement needs of an enterprise were
calculated by multiplying their evaluated importance with the evaluated gravity of
the respective concern.

6.4.1 Deployment of the Improvement Needs

We adapted the QFD technique to accommodate propagation of the improvement


needs of an enterprise from the strategic level to the action level as displayed in
Fig. 6.2. First, as mentioned above, the strategic improvement needs were evaluated
by multiplying the importance of each strategic priority with the gravity of its

Fig. 6.2 QFD conceptual model for deploying the strategic improvement needs of
an enterprise to its improvement actions by areas (reproduced from Barad and Gien
2001): the importance of the strategic priorities and the gravity of the strategic
concerns yield the strategic improvement needs (SNi); matrix I, with the gravity of
the operating concerns as second input, yields the operating improvement needs
(ONk); matrix II, with the potential improvement actions (PAm(a)) as second input,
yields the deployed improvement actions (DAm(a))

respective concern. Two deployment matrices were built. Matrix I performed the
strategic deployment, from strategic improvement needs to prioritized operating
improvement needs. Its second input was the gravity of the operating concerns.
Matrix II deployed the prioritized operating improvement needs into improvement
actions by areas. Its second input was the potential improvement actions by areas.
Survey and questionnaire
To test the developed methodology, the authors planned a small empirical study
based on a sampling population of SMEs. This consisted of manufacturing com-
panies with 50–200 employees and less than US$50 million annual sales. The
survey sample comprised 21 enterprises from two types of industry (metal products
and plastic products) and was carried out through interviews with three managers/
engineers in each sampled enterprise. As the interviews were person to person and
took place at the plant sites, there were no incomplete responses.
An interview questionnaire comprising four sections was developed, using
information from the manufacturing strategy literature, as well as information
provided by a pilot investigation of 12 enterprises, that preceded the formal
questionnaire based investigation. Each interviewee was asked to fill out the first
three sections of the questionnaire. The answers had to reflect his/her views on each
topic. The fourth section was intended to provide background data (characteristic
features) of the investigated enterprises and was usually filled out by the CEO or
his/her representative.
The first section of the questionnaire supplied data on the importance of each
strategic priority of an enterprise. The defined set of strategic priorities comprised
seven items: low price, fast delivery, dependable delivery, high product quality,
consistent product quality, product variety and employees’ involvement. The

importance of each strategic priority had to be assessed by the interviewees on a


scale from 1 (not important) to 9 (most important).
The second section supplied data on the concerns facing the enterprise. The
interviewees had to consider concerns regarding the performances of their products
as compared with those of their competitors (to match each of the six strategic
priorities) and concerns regarding the involvement of their employees in solving
problems (to match the human oriented strategic priority). They also had to assess
concerns associated with the operating capability of their enterprise. The gravity of
each specific concern had to be assessed on a scale from 1 (not serious) to 9 (very
alarming). The operating concerns in the questionnaire were extracted from inter-
viewed managers/engineers in six of the enterprises in the pilot study. Each
interviewee in these enterprises had to formulate specific operating concerns by four
performance scales: costs, delays, quality and human resources. The 22 operating
concerns in the final version of the questionnaire are detailed below.
Cost oriented operating concerns: raw materials, manufacturing, inventory,
quality control, coping with distribution of manufacturing overheads.
Delay oriented operating concerns: frequent changes in requirements, short term
capacity shortages, in process delays, in transfer delays, lengthy supply time, supply
not on time, lengthy changes in design.
Quality oriented operating concerns: inadequate quality techniques, high per-
centage defectives, deficient storage conditions, quality of purchased materials not
satisfactory, quality of purchased materials not consistent, difficulties in satisfying
customer required tolerances.
Human oriented operating concerns: deficient communications among
departments/functions, low skill/versatility of employees, low motivation, high
absence level.
The third section of the questionnaire supplied data on potential actions within
several improvement areas. We considered eight improvement areas: production
planning and control, inventory and logistics, vendor relations, equipment main-
tenance and reconfiguration, quality management, human management, information
systems and technology, research and development. The potential improvement
contribution of each listed action within those areas had to be assessed on a scale
from 1 (not relevant) to 9 (highly significant potential contribution). Specific
improvement actions within each area (questionnaire items), were formulated by the
researchers, using published material on manufacturing strategies as well as their
own expertise.
Data Analysis
The Strategic improvement Needs, SNi, i = 1, 2, … 7, of an enterprise were
calculated as follows:

\[ SN_i = X_i \cdot Y_i \quad i = 1,2,\ldots,7 \qquad (6.1) \]

where Xi is the importance of strategic priority i and Yi is the gravity of concerns
regarding strategic priority i, i = 1, 2, …, 7 (normalized data from the questionnaire). The

higher the score, the higher was the improvement need of the respective strategic
priority.
Matrix I performed the strategic deployment. It projected the gravity of the
Operating Concerns, OCk, k = 1, 2, …, 22, on the strategic improvement needs. Its
output represented the prioritized Operating improvement Needs, ONk, k = 1, 2, …,
22, of an enterprise which were calculated as follows:

\[ ON_k = OC_k \sum_{i=1}^{7} SN_i \cdot R1_{i,k} \quad k = 1,2,\ldots,22 \qquad (6.2) \]

R1i,k represents the impact of operating concern k on the realization of strategic
priority i, i = 1, 2, …, 7.
As customary in all QFD matrices, three possible positive relationship levels
were considered (weak, strong and very strong). The numerical values detailed
below were arbitrarily selected to accommodate these levels.
R1i,k = 0: removing operating concern k will have no effect on strategic priority i
R1i,k = 0.33: a weak effect on strategic priority i is expected
R1i,k = 0.67: a strong effect on strategic priority i is expected
R1i,k = 1.00: a very strong effect on strategic priority i is expected
(i = 1, 2, …, 7; k = 1, 2, …, 22)
Matrix II linked the prioritized operating improvement needs (output of matrix I)
with the potential contributions of actions pertaining to several improvement areas.
The considered improvement areas were planning, inventory and logistics, vendor
relations, equipment maintenance, quality management, human management, in-
formation systems and research and development. Thus, matrix II deployed the
contributions of the improvement action, by considering their potential effect on the
operating improvement needs of the enterprises. The strength of the relationships
between the potential contribution of an improvement action and an operating
improvement need expressed the potential impact of an improvement action on an
operating improvement need. Ultimately, matrix II was intended to detect the
critical improvement actions of an enterprise by areas.
Similarly to matrix I, matrix II had two inputs. One was the output of matrix I,
ONk, while the other, PAm(a), represented the Potential contribution of Action m
pertaining to improvement area a. The output of matrix II, DAm(a), expressing the
Deployed contribution of improvement Action m(a), and was calculated using
Eq. 6.3. Its structure is similar to that of Eq. 6.2.

\[ DA_{m(a)} = PA_{m(a)} \sum_{k=1}^{22} ON_k \cdot R2_{k,m(a)} \quad m(a) = 1,2,\ldots,q(a); \; a = 1,2,\ldots,8 \qquad (6.3) \]

R2 k,m(a) represents the potential effect of the improvement action m(a) on the
operating improvement need k.

Again, three possible positive relationship levels are considered. The numerical
values selected are similar to those in matrix I.
R2k,m(a) = 0: action m(a) has no influence on operating improvement need k
R2k,m(a) = 0.33: action m(a) is expected to slightly reduce operating improvement
need k
R2k,m(a) = 0.67: action m(a) is expected to substantially reduce operating
improvement need k
R2k,m(a) = 1.00: action m(a) is expected to solve operating improvement need k
(k = 1, 2, …, 22; m(a) = 1, 2, …, q(a); a = 1, 2, …, 8)
It is seen that since matrix II is built separately for each improvement area a,
a = 1, 2, …, 8, it actually represents a union of eight matrices, namely, matrices II
(a), a = 1, 2, …, 8. To build each of those matrices, say matrix II(a0), the
improvement actions m(a0), m(a0) = 1, 2, …, q(a0), for area a0, are considered.
A partial view of Matrix II (R2 k,m(a) > 0) in the area of Quality Management is
presented in Table 6.1. The specific numerical values of R2 k,m(a) are replaced by
√.
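The following Python sketch traces the whole cascade of Eqs. (6.1)–(6.3) on a toy instance, with 3 strategic priorities, 4 operating concerns and 3 actions in a single area instead of the study's 7 × 22 × q(a); every numerical value is invented for illustration.

import numpy as np

X = np.array([0.9, 0.6, 0.4])         # importance of strategic priorities
Y = np.array([0.7, 0.8, 0.3])         # gravity of strategic concerns
SN = X * Y                            # Eq. (6.1): strategic improvement needs

OC = np.array([0.6, 0.9, 0.4, 0.7])   # gravity of operating concerns
R1 = np.array([                       # impact of concern k on priority i
    [1.00, 0.67, 0.00, 0.33],
    [0.33, 1.00, 0.67, 0.00],
    [0.00, 0.33, 0.00, 1.00],
])
ON = OC * (SN @ R1)                   # Eq. (6.2): operating improvement needs

PA = np.array([0.8, 0.5, 0.9])        # potential contributions of actions
R2 = np.array([                       # effect of action m(a) on need k
    [1.00, 0.33, 0.00],
    [0.67, 0.00, 1.00],
    [0.00, 0.67, 0.33],
    [0.33, 0.00, 0.67],
])
DA = PA * (ON @ R2)                   # Eq. (6.3): deployed action contributions
print(np.round(DA, 3))                # prioritized improvement actions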
Some results and conclusions
• The most urgent operating improvement needs of the sampled SMEs at the end
of the last century were associated with delay concerns and human oriented
concerns. The delay concerns were caused by short term capacity shortages, low
dependability of supply delivery and frequent changes in customer require-
ments. The human oriented concerns were caused by low skill/versatility of the
employees and low motivation.
• The research identified three generic improvement models: (1) time reducing
techniques coupled with flexibility (2) improved quality oriented organization
(3) time reducing techniques coupled with vendor relations. It was interesting to
note that some of the tendencies observed in large European manufacturers (De
Meyer et al. 1989) were also revealed in the study of Small Manufacturing
Enterprises:
(a) Delivery as a competitive priority was strongly stressed. (b) Manufacturers
tended to put more emphasis on human resources and less on stand-alone
technology.

6.5 Strategy Maps as Improvement Paths of Enterprises

The second published QFD example is based on a paper entitled ‘Strategy maps as
improvement paths of enterprises’. The main research question of the authors
(Barad and Dror 2008) was: How to define the improvement path of an enterprise
across a generic hierarchical structure, for enhancing the realization of its business
objectives?
Table 6.1 Partial view of matrix II for the area of quality management. The
improvement actions (columns) are: quality teams, autonomous quality control,
SPC, raw material quality control, quality oriented training, product quality
control, DOE, and collaboration with customers. A √ marks each positive
relationship (R2k,m(a) > 0) with the operating improvement needs (rows): higher
quality techniques, lower quality control costs, lower percentage defectives
(related to seven of the eight actions), better raw materials, satisfying customer
tolerances, improving communications, and improving motivation.

They selected the well-known Balanced Scorecard (BSC) as an appropriate


modeling framework for building such a path. Its authors, Kaplan and Norton had
published several versions of this framework (1996, 2001, 2004).
Strategic improvement paths
The Balanced Scorecard (BSC) introduced by Kaplan and Norton (1996) is a
strategic frame with four perspectives: financial, customer, internal processes, and
learning and growth. It develops strategic cause and effect relationships by linking
measures of financial performances to measures on their drivers. To improve their
well-known framework, Kaplan and Norton (2001) introduced strategy maps.
A strategy map is a diagram describing how an enterprise creates values by con-
necting strategy objectives with each other in explicit cause-and-effect relationships
along the BSC perspectives. Thus, it provides a global perspective on the important
linkages between its weakest components at different hierarchical levels. The map
had some limitations. It loosely defined the customer perspective and did not
provide a mechanism for selecting the competitive priorities to be improved.
Barad and Dror (2008) dealt with those limitations. They applied strategy maps
to prioritize improvement needs of enterprises at different hierarchical levels.
(a) They adopted the approach of Dror and Barad (2006) to replace the loosely
defined ‘customer’ perspective of the BSC map by a set of well-defined
‘competitive priorities’; these are price (low), delivery (reliable, fast), product
quality (high, stable), product variety, new products and employee
involvement.
(b) They replaced the ‘learning’ perspective of the BSC map with ‘organizational
profile’ components, merging principles of the Malcolm Baldrige National
Quality Award and the European Foundation for Quality Management. Their
assumption was that improving the organizational profile of an enterprise will
later on lead to an improvement of its processes.
(c) They used Quality Function Deployment (QFD) to build ‘strategy paths’ and
thus provide the linking mechanism of the strategic improvement path.
The essence of the generic QFD hierarchical framework of Barad and Dror was
to extract the desired improvement needs of the business objectives (as viewed by
its managers), to translate them into required improvement of its competitive pri-
orities, required improvement in its core processes and, ultimately, into the required
improvement in its organizational profile/its infrastructure. This is a top down
recursive perspective path, from the highest hierarchical level, business objective, to
the lower hierarchical level, organizational profile/infrastructure, revealing
improvement needs.
For building the strategy map of an individual company, Barad and Dror utilized
three QFD matrices. Figure 6.3 presents a QFD overview of the top down ‘re-
cursive perspective’ on the generic hierarchical objectives. It starts by prioritizing
the improvement objectives at the highest hierarchical level (business). This is done

Fig. 6.3 The top-down, level by level, recursive perspective (reproduced from Barad and Dror
2008): prioritized improvement needs of the business objectives are translated from business
objectives to competitive priorities, from competitive priorities to core processes, and from
core processes to the organizational profile

by multiplying the importance attributed to each business objective with the ca-
pability gap existing between the desired state of a business objective and its
realization (the ‘concerns’ in the previous example). The higher the importance and
the higher the gap, the higher is the improvement need of a business objective. The
latter are translated downward, level by level, until they reach the prioritized
improvement needs of the organizational profile elements. Figure 6.3 is not a
strategy map. We view strategy maps as bottom up improvement paths, which start
at the lower hierarchical level (organizational profile), follow the upward causality
links and end at the higher hierarchical level (business objectives), see Fig. 6.4.
Matrix 1: From business objectives to competitive priorities
This matrix translates the relative improvement needs of the business objectives
(market share, marginal profitability and return on investment), the WHATs, into
the relative improvement needs of its competitive priority measures (fast delivery,
reliable delivery, low price, high product quality, stable product quality, product
variety, new products and employee involvement), the HOWs. The core of matrix 1
expresses the relationship strength between the improvement of each competitive
priority and its expected effect on the improvement of each business objective.
Matrix 2: From competitive priorities to core processes
As customary in QFD, the output of matrix 1, its HOWs, is fed as input to the
current matrix, its WHATs. Construction of matrix 2 necessitates an additional
input in terms of information from interviewees on the core processes in their
organization and on their respective relevant measures. Matrix 2 translates the
improvement need of each competitive priority into the relative improvement need
of each core process. The core of matrix 2 expresses the relationship strength

between the improvement of each core process and its expected effect on the
improvement of each competitive priority. According to the MBNQA, the core
processes are specific to an organization and are related to the variables of its
organizational profile, MBNQA (2016–2017).
Matrix 3 From core processes to components of the organizational profile
Again, according to the QFD principles, the output of matrix 2 representing the
relative improvement need of each core process is fed as input to matrix 3. Matrix 3
translates the latter into the relative improvement need of each component of the
organizational profile. These components are defined in the reviewed paper by
means of the MBNQA organizational profile as follows: human resources (em-
ployee features, teamwork), technology (equipment, IS/IT), planning (strategic),
and organizational relationships (internal, external). A firm has to define relevant
measures for the components of its organizational profile whose improvement
might later on lead to an improvement in its core processes. The output of this
matrix is calculated similarly to that of the previous matrix.

6.5.1 Building a Strategy Map

Data have to be extracted from interviewed managers in each enterprise, by means


of a questionnaire, which consists of:
(a) Importance and capability gap of the company’s business objectives (for
calculating the weight of each WHAT in matrix 1).
(b) Strengths of relationships between the WHATs and the HOWs in all three
matrices (for translating the WHATs into HOWs).
(c) Core processes of the investigated enterprise for matrix 2.
(d) Performance measures of the core processes and of the organizational profile.
The scores assigned by the interviewees have to be blended, and replaced by
their median. In each matrix, the scores are ‘normalized’, i.e. presented as relative
weights.
The procedure for building a graphical strategy map consists in the following
steps:
(1) Calculate the normalized improvement scores of the items at each level of the
strategic frame.
(2) Select the items to be improved.
(3) Use arrows (wide arrows for very strong relationships and thin arrows for
strong relationships) to describe up-going relationships, linking nodes at a
lower level with nodes at the next higher level.

Data analysis
Given data for matrix 1 (calculated from the questionnaires)
– Median importance of each business objective i, (say, Xi, i = 1, 2, 3)
– Median capability gap for each business objective i, (say, Yi, i = 1, 2, 3).
– Median strength of relationship between each business objective i and each
competitive priority j (say Aij, i = 1, 2, 3, j = 1, 2, …, 8).
Calculations for matrix 1
(a) Calculate improvement need of business objective i, say Ni

\[ N_i = X_i \cdot Y_i \quad i = 1,2,3 \]

(b) Calculate normalized improvement need of business objective i, (input to


matrix 1):
\[ I_i = N_i \Big/ \sum_i N_i \quad i = 1,2,3 \]

(c) Calculate improvement need of competitive priority j, say NPj


\[ NP_j = \sum_i I_i \cdot A_{ij} \quad j = 1,2,\ldots,8 \]

(d) Calculate normalized improvement need of competitive priority j, say Oj


(output of matrix 1)
X
Oj ¼ NPj = NPj ; j ¼ 1; 2; . . .; 8
j

Remark
The calculations for matrices 2 and 3 are conducted in a similar manner but they
only contain steps (c) and (d).
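As an illustration of steps (a)–(d), the Python sketch below runs matrix 1 on assumed median scores and invented relationship strengths covering four of the eight competitive priorities; none of these numbers are the test case's actual data.

import numpy as np

X = np.array([4.0, 5.0, 3.0])   # assumed importance of the 3 business objectives
Y = np.array([3.0, 4.0, 2.0])   # assumed capability gaps

N = X * Y                       # (a) improvement needs, N_i = X_i * Y_i
I = N / N.sum()                 # (b) normalized needs, input to matrix 1

A = np.array([                  # invented strengths A_ij (4 priorities shown)
    [1.00, 0.33, 0.00, 0.67],
    [0.67, 1.00, 0.33, 0.00],
    [0.00, 0.67, 1.00, 0.33],
])
NP = I @ A                      # (c) improvement needs of competitive priorities
O = NP / NP.sum()               # (d) normalized output, fed into matrix 2
print(np.round(O, 2))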

6.5.2 A Detailed Case

A test case, a pharmaceuticals firm, is presented for illustrating the construction of a


strategy map. The firm develops, manufactures and markets high-quality generic
and branded pharmaceuticals, both prescription and over-the-counter, which are
used by patients in dozens of countries worldwide.

Matrix 1: From business objectives to competitive priorities.


The production manager, the R&D manager and the purchasing manager were
interviewed and supplied input regarding the importance and capability gap of the
business objectives. The possible values of the input were based on a Likert scale
with values ranging from 1 to 5. The individual input supplied by the three man-
agers was blended and replaced by the medians.
The results showed that marginal profit was the most important business
objective, followed by market share, while return on investment was the last one.
The same order was retained with respect to the capability gap. Following the
rationale developed by Barad and Gien (2001), the improvement need of
each objective was calculated as a multiple of its importance score and its capability
gap score and then normalized. The highest normalized improvement need (0.62)
was marginal profit; it was followed by market share (0.38) and both were input to
the first QFD matrix.
The core of this matrix presented the median relationship strength between each
competitive priority and each business objective thus enabling the translation of the
normalized improvement needs of the business objectives into the improvement
needs of the competitive priorities (matrix output). These results were normalized
i.e. presented in relative weights.
The group of the selected competitive priorities (to be improved) included two
competitive priorities: new products and reliable delivery.
By summing up the results obtained from the first QFD matrix we see that
reliable delivery was the driver of market share, while new products was the driver
of both market share and marginal profit.
Matrix 2: From competitive priorities to core processes
The three interviewees considered ‘R&D’, ‘purchasing’ and ‘production’ as core
processes. One of the interviewees also considered the delivery process to be a core
process but only core processes considered as such by at least two interviewees
were further analyzed. The calculated normalized scores show that the improvement
needs of the production process were the highest (0.42), followed by R&D (0.36)
and purchasing (0.22). Most of the main (active) ingredients of the medicines come
from external suppliers, which may explain the importance assigned to the pur-
chasing process. The significant impacts of R&D are on ‘high products quality’ and
‘new products’. The production process highly affects ‘delivery’ (fast and reliable)
and ‘stable products quality’. The main functions of the investigated company
involve producing and developing medicines. Its marketing, sales and distribution
processes are located abroad and are managed as a different unit. Hence, the
interviewees had no information about the core processes related to this part of the
organization. The interviewees were also asked to suggest relevant measures for the
core processes and for the important variables of the organizational profile.

The relevant measures of the core processes are detailed below:


(1) R&D process: ‘Mean product development time’, ‘mean product development
cost’ and ‘percentage of successful development projects’.
(2) Purchasing process: ‘On-time delivery rate’, ‘raw material cost relative to the
conventional market prices’ and a ‘raw material defects percentage’.
(3) Production processes: ‘Mean manufacturing time’, ‘average utilization of
resources’ and ‘percentage non-conforming’.
We may classify these measures, suggested by the interviewees, into three
measurement categories: time/efficiency indices, cost and quality indices.
The firm used the strategy map as input for a combined application of Six Sigma
and Lean Manufacturing procedures. Two teams were established for this purpose—a
production team for improving the delivery reliability and an R&D team for reducing
the new product development time.
As the percentage of non-conforming units in one of the manufacturing stages
was high, to improve delivery reliability the production team considered applying
Statistical Process Control (SPC) procedures more rigorously. Thus the percentage
of non-conforming units could be reduced and the output of conforming units be
stabilized.
To shorten the product development time, the R&D team investigated the pos-
sibility to introduce the Quality Function Deployment (QFD) methodology for
improving the product design process. It is expected that the usage of this method
will save a significant amount of time and effort spent in correcting conceptual
design errors.
Matrix 3: From core processes to components of the organizational profile
Three components of the organizational profile were selected as the vital few to be
improved: equipment (normalized score 0.24), employee features (0.24), and IS/IT
(0.22). Customer relationships and particularly those related to the marketing net-
work in the USA, are managed by a marketing unit located in New York. These
processes (and the organizational profile variables that might affect them) were not
included in the analysis.
The scores assigned by the interviewed managers emphasized the effect of R&D
employees features on the R&D process, the impact of equipment on the production
process, as well as the influences of IS/IT and of the external relationships on the
purchasing process (information technology is useful for detecting potential
suppliers).
The interviewees suggested the following measures for the organizational
profile:
(1) R&D employee features: ‘Average number of education years’, ‘average
number of hours of continuing education per R&D employee’, ‘average
number of development projects per employee’, and ‘average number of
training hours per R&D employee’.

(2) Production employee features: ‘Average number of training hours per pro-
duction employee’.
(3) Equipment: ‘Investment in equipment upgrading’.
(4) IS/IT: ‘Investment in IS/IT’, and ‘investment in developing a production
planning and supervision system’.
(5) External relationships: ‘Active participation of pharmaceuticals developers in
product exhibitions’ (for buying raw materials).
The investment in increasing the average number of training hours per R&D
employee and per production employee was made to support the implementation
of the QFD and SPC tools, respectively.
The strategy map (improvement path)
Figure 6.4 depicts the strategy map of the investigated enterprise, drawn according
to the stepwise procedure detailed in the previous section. The normalized improve-
ment score of each node (selected component) appears on the map. The competitive
priorities (‘new products’ and ‘reliable delivery’) were selected by means of
statistical analysis. Production, purchasing and R&D processes were considered by
the three interviewees as important processes and therefore appear on the map as
well. Statistical analysis selected ‘employee features’, ‘equipment’ and ‘IS/IT’ as
the most important components of the organizational profile that needed to be
improved. An additional component, ‘external relationships’, was added to the map
owing to its strong upward influence on the purchasing process.

Fig. 6.4 Strategy map of the pharmaceutical firm (reproduced from Barad and Dror
2008). The map links the business objectives (marginal profit 0.62, market share
0.38) to the competitive priorities (new products 0.30, reliable delivery 0.18), the
core processes (R&D 0.36, purchasing 0.22, production 0.42) and the components
of the organizational profile (external relations 0.09, employee features 0.24,
equipment 0.24, IS/IT 0.22).
Concluding remarks
Typically, improvement methods highlight selected improvement features for
achieving goals they emphasize. By contrast, our approach highlights selected
improvement features for realizing the most critical improvement needs of the
enterprise business objectives.
Prioritized improvement needs can be identified at four levels through the
generic hierarchical structure we have methodically developed. This generic
structure, which integrates the BSC hierarchical perspective with principles of the
Malcolm Baldrige National Quality Award and the European Foundation for
Quality Management, significantly extends the use of simple causal loop diagrams
as used in BSC strategy maps that only serve to identify major causal linkages.
The well-structured QFD, selected as the methodology for implementing this
structure in an individual enterprise, enabled quantification of the strengths of the
causal linkages, while providing an aggregation mechanism for building the
enterprise's upward improvement path/strategy map, from root causes (infrastructure)
to the business objectives. Since the output at each hierarchical level is a numerically
weighted list of improvement priorities, it helps to efficiently utilize limited budgets
for upgrading the weakest infrastructure components and to uncover process
improvement opportunities that would be missed if only the most obvious
dependencies were considered.
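To make the aggregation mechanism concrete, here is a minimal sketch of a single QFD translation step of the kind chained along the improvement path: the normalized priorities of the WHATs are pushed through a relationship matrix and renormalized to yield the priorities of the HOWs. The element names, weights and 1/3/9 linkage strengths below are hypothetical, not the figures elicited in the case study.

```python
# One generic QFD aggregation step: translate prioritized WHATs into
# prioritized HOWs through a relationship matrix (1/3/9 scale assumed).
def qfd_step(what_weights, relationships):
    """what_weights: {what: weight}; relationships: {what: {how: strength}}."""
    raw = {}
    for what, weight in what_weights.items():
        for how, strength in relationships.get(what, {}).items():
            raw[how] = raw.get(how, 0.0) + weight * strength
    total = sum(raw.values())
    return {how: round(score / total, 3) for how, score in raw.items()}

# Hypothetical level: competitive priorities -> core processes
priorities = {"new products": 0.6, "reliable delivery": 0.4}
links = {
    "new products":      {"R&D": 9, "production": 3},
    "reliable delivery": {"production": 9, "purchasing": 3},
}
print(qfd_step(priorities, links))
# -> {'R&D': 0.45, 'production': 0.45, 'purchasing': 0.1}
```

Repeating the same step level by level, from the business objectives down to the organizational profile, produces the numerically weighted priority lists described above.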
The main limitations of the approach are its low level of detail and its disregard
of possible interactions between system elements that may affect system outcomes.

6.6 A QFD Top-Down Framework for Deploying Flexibility in Supply Chains

The third published QFD example addresses flexibility in supply chains. The main
objectives of the methodology presented in Barad (2012) and reviewed here were:
(1) Building a structured framework for matching flexibility improvement needs to
the strategic priorities of enterprises within a supply chain.
(2) Considering changes and uncertainties in a systematic way.
The conceptual model
The deployment of flexibility in supply chains is modeled by a QFD matrix
structure comprising three matrices (see Fig. 6.5).
First, a set of strategic elements related to flexibility is considered. It comprises
the typical competitive advantages found in the manufacturing strategy literature
(excluding quality and cost): delivery (fast, dependable), product diversity and new
products.

Fig. 6.5 QFD conceptual model for deploying flexibility in supply chains
(reproduced from Barad 2012). Prioritized strategic improvement needs (strategic
importance combined with competitive incapability) enter Matrix I (Strategic
Flexibility Deployment), which, given changes of type 1, outputs prioritized
customer flexibility metrics (due date, volume, mix, time to market). Matrix II
(Internal Flexibility Deployment), given changes of type 2, outputs prioritized
internal flexibility metrics by facets (design, manufacturing, collaboration).
Matrix III (Providers Deployment) outputs prioritized resources/infrastructures
(human, information, logistics).
products. These strategic metrics are prioritized by combining the strategic
importance attributed by an enterprise to each element, with its competitive inca-
pability (Narasimhan and Jayaraman 1998). As in Barad and Gien (2001), we
multiply (and normalize) the scores of these two attributes, so that the higher the
importance of a strategic element and the higher its competitive incapability, the
higher its priority for improvement.
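A minimal sketch of this prioritization rule follows; the 1–5 scoring scale and all scores are invented for illustration, since the source only states that the two attributes are multiplied and normalized.

```python
# Strategic priority = normalized (importance x competitive incapability).
# A 1-5 scale and all scores below are assumed for illustration.
strategic_elements = {
    #                   (importance, incapability)
    "fast delivery":        (4, 2),
    "dependable delivery":  (5, 4),
    "product diversity":    (3, 3),
    "new products":         (5, 5),
}

raw = {e: imp * inc for e, (imp, inc) in strategic_elements.items()}
total = sum(raw.values())
for element, score in sorted(raw.items(), key=lambda kv: -kv[1]):
    print(f"{element}: {score / total:.3f}")
# 'new products' ranks first: it is both very important and poorly mastered.
```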
Matrix I performed the Strategic Flexibility deployment. The prioritized
strategic improvement needs represented its first input (the WHATs). Its second
input consisted of changes of type 1 that the enterprises in the supply chain had to
cope with, such as changes in due dates, volume, mix, new products and time to
market. Coping with these types of changes necessitates flexibility types with the
same names, e.g. due date flexibility and volume flexibility. This set, called
customer or first-order flexibility, represents the HOWs of Matrix I. Matrix I translated
the prioritized strategic metrics into prioritized customer oriented flexibility metrics
(due date, volume, mix and time to market) by considering the type and impact of
changes or uncertainties that affect a given strategic metric. For instance, delivery
reliability is translated into due date flexibility (whose role is to make delivery
reliability robust to modifications of agreed due dates) and into volume flexibility
(for making delivery reliability robust to changes in the required volume). New
products need time to market flexibility, whose role is to make the time to market
duration robust to additional information on the new product features necessitating
changes.
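The sketch below mimics this Matrix I translation, analogous to the generic step shown earlier: each prioritized strategic metric spreads its weight over the customer-oriented flexibility types in proportion to how strongly the corresponding type-1 changes affect it. The weights and impact strengths (again a 1/3/9 scale) are hypothetical.

```python
# Matrix I sketch: prioritized strategic needs -> customer-oriented
# flexibility types, weighted by the impact of type-1 changes.
# All weights and strengths below are hypothetical.
strategic_needs = {"dependable delivery": 0.4, "new products": 0.6}

impact = {  # which flexibility type absorbs which change (assumed 1/3/9 scale)
    "dependable delivery": {"due date": 9, "volume": 3},
    "new products":        {"time to market": 9, "mix": 1},
}

raw = {}
for need, weight in strategic_needs.items():
    for flex_type, strength in impact[need].items():
        raw[flex_type] = raw.get(flex_type, 0.0) + weight * strength
total = sum(raw.values())
for flex_type, score in sorted(raw.items(), key=lambda kv: -kv[1]):
    print(f"{flex_type} flexibility: {score / total:.2f}")
# time to market 0.50, due date 0.33, volume 0.11, mix 0.06
```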
Matrix II translated the customer oriented flexibility metrics into prioritized
internal flexibility capabilities by facets (design, manufacturing and collaboration).
Table 6.2 is a concise representation of matrix II in the conceptual model. The
linkages between some specific customer-oriented flexibility types—the WHATs—
and specific internal flexibility capabilities—the HOWs—are symbolically repre-
sented by √.
Table 6.2 A concise view of flexibility capabilities and their relationships
(reproduced from Barad 2012). The columns group the internal flexibility
capabilities by facets. Design facet: interchange flexibility, R&D flexibility,
configuration flexibility. Manufacturing facet: flexible employment, versatile
operators, versatile machines, short set-up. Collaboration facet: (late) product
customization, trans-routing flexibility, outsourcing. The rows list the
customer-oriented flexibility types, each marked by √ under the capabilities that
support it: due date (six linkages), volume (four), mix (three), product (two) and
time to market (three).

We envisage flexibility in design in terms of interchange flexibility, R&D
flexibility and configuration flexibility. Interchange flexibility stands for flexible/
interchangeable products. It is related to principles of modularity and can reduce
design complexity (see Ernst and Kamrad 2000). Flexibility in R&D has a human
orientation and is related to flexible designers (versatile individuals with multiple
capabilities) and flexible synergetic teams. It can reduce design time of a product
given its complexity (see Barad 1998). From a design perspective, flexible hard-
ware is related to layout and configuration flexibility enabling a quick adaptation
(reconfiguration) of the existing layout to new products (see Mehrabi et al. 2002).
In the manufacturing facet, we entwine flexibility capabilities from the earlier
bottom-up studies of flexibility (versatile machines and short set-ups) with more
recent ‘human’ aspects of manufacturing (see Hopp and Van Oyen 2003). These are
related to flexible employment and to versatile operators/teams.
The flexibility types in the collaboration facet are mainly intended to enable a
reduction of stock levels without increasing risk. Among others, we suggest flexible
(delayed) product customization with respect to price marking, mixing or packaging
(see Narasimhan and Jayaraman 1998; Ernst and Kamrad 2000) and, if needed,
trans-routing, i.e. the transfer of stock between users at the same echelon, combined
with postponement (see Barad and Even-Sapir 2003).
Matrix III translated the internal flexibility capabilities into flexibility providers.
The providers considered were human, informational and logistic infrastructures,
as well as collaborators (suppliers, customers and possibly even competing firms).
Summary
The multi-purpose capabilities of the QFD framework were displayed through three
conceptual models.
• The first model propagated the improvement needs of a manufacturing enter-
prise from the strategic level to prioritized improvement actions.
• The second model defined an improvement path of an enterprise across a
generic hierarchical structure based on the Balanced Scorecard framework.
• The third model propagated the strategic improvement needs of enterprises in a
supply chain to prioritized flexibility providers (human resources, information/
communication, logistics/infrastructures).

References

Akao Y (1990) Quality function deployment: integrating customer requirements into product
design. Productivity Press, Cambridge
Barad M (1998) Flexibility performance measurement systems—A framework for design. In:
Neely AD, Waggoner DB (eds) Proceedings of the first international conference on
performance measurement, Cambridge, pp 78–85

Barad M (2012) A methodology for deploying flexibility in supply chains. IFAC Proc Vol 45
(6):752–757
Barad M, Dror S (2008) Strategy maps as improvement paths of enterprises. Int J Prod Res 46
(23):2675–2695
Barad M, Even-Sapir D (2003) Flexibility in logistic systems—modeling and performance
evaluation. Int J Prod Econ 85:155–170
Barad M, Gien D (2001) Linking improvement models to manufacturing strategies. Int J Prod Res
39(12):6627–6647
Bossert J (1991) Quality function deployment—a practitioner’s approach. ASQC Quality Press,
Milwaukee
Chan LK, Wu ML (2002) Quality function deployment: a literature review. Euro J Oper Res
143:463–497
De Meyer A, Nakane J, Miller JG, Ferdows K (1989) Flexibility: the next competitive battle.
The manufacturing futures survey. Strategic Manage J 10(2):135–144
Dror S, Barad M (2006) House of strategy (HOS)—from strategic objectives to competitive
priorities. Int J Prod Res 44(18/19):3879–3895
Ernst R, Kamrad B (2000) Evaluation of supply chain structures through modularization and
postponement. Euro J Oper Res 124(3):495–510
Flanagan JC (1954) The critical incident technique. Psychol Bull 51:327–358
Griffin A, Hauser JR (1993) The voice of the customer. Market Sci 12(2):141–155
Hauser JR, Clausing D (1988) The house of quality. Harv Bus Rev 66(3):63–73
Hopp WJ, Van Oyen MP (2003) Agile workforce evaluation: a framework for cross-training and
coordination. IIE Trans 36:919–940
Kaplan RS, Norton DP (1996) Linking the balanced scorecard to strategy. Calif Manage Rev
39:53–79
Kaplan RS, Norton DP (2001) Transforming the balanced scorecard from performance
measurement to strategic management: part I. Acc Horiz 15(1):87–104
Kaplan RS, Norton DP (2004) Strategy maps: converting intangible assets into tangible outcomes.
Harvard Business School Press, Boston
King R (1995) Designing products and services that customers want. Productivity Press, Portland
Lai YJE, Ho A, Chang SI (1998) Identifying customer preferences in QFD using group
decision-making techniques. In: Usher JM, Roy U, Parsaei HR (eds) Integrated product and
process development. Wiley, New York
Malcolm Baldrige National Quality Award (MBNQA) (2016–2017) Criteria for Performance
Excellence (NIST, Department of Commerce: Gaithersburg, MD)
Mehrabi MG, Ulsoy AG, Koren Y, Heytler P (2002) Trends and perspectives in flexible and
reconfigurable manufacturing systems. J Intell Manuf 13(2):135–146
Narasimhan R, Jayaraman J (1998) Causal linkages in supply chain management. Decis Sci 29
(3):579–605
Pfeffer J (1995) Producing sustainable competitive advantage through the effective management of
people. Acad Manage Exec 9(10):55–69
Slack N (1994) The importance—performance matrix as a determinant of improvement priority.
Int J Oper Prod Man 14(5):59–75
