Miryam Barad

Strategies and Techniques for Quality and Flexibility
SpringerBriefs in Applied Sciences
and Technology
Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Systems Research Institute,
Warsaw, Poland
SpringerBriefs present concise summaries of cutting-edge research and practical applications across a wide spectrum of fields. Featuring compact volumes of 50–125 pages, the series covers a range of content from professional to academic. Typical publications can be:
• A timely report of state-of-the-art methods
• An introduction to or a manual for the application of mathematical or computer
techniques
• A bridge between new research results, as published in journal articles, and a contextual literature review
• A snapshot of a hot or emerging topic
• An in-depth case study
• A presentation of core concepts that students must understand in order to make
independent contributions
SpringerBriefs are characterized by fast, global electronic dissemination,
standard publishing contracts, standardized manuscript preparation and formatting
guidelines, and expedited production schedules.
On the one hand, SpringerBriefs in Applied Sciences and Technology are
devoted to the publication of fundamentals and applications within the different
classical engineering disciplines as well as in interdisciplinary fields that recently
emerged between these areas. On the other hand, as the boundary separating
fundamental research and applied technology is more and more dissolving, this
series is particularly open to trans-disciplinary topics between fundamental science
and engineering.
Indexed by EI-Compendex and SpringerLink.
Miryam Barad
Department of Industrial Engineering, The Iby and Aladar Fleishman Faculty of Engineering
Tel Aviv University
Tel Aviv, Israel
Preface

It has been a long time since I first thought of writing a book. I told my friends that
when I retire I would write a book. They laughed and said that academic people
working at universities, who really want to write a book, do not wait until their
retirement. I do not write easily. It takes me a long time to write a paper, but I enjoy doing it. Writing a book, on the other hand, is a large and complex undertaking, so I decided to wait until my retirement. However, upon my retirement from Tel Aviv University, the Rector of a college asked me to join its academic staff as Head of the Industrial Engineering and Management Department, in order to enable the students to obtain an academic degree similar to that of universities, i.e., a Bachelor of Science (B.Sc.). Therefore, the book had to wait again.
The idea of writing a book came back to me a few years later. The editor of an important professional journal asked me, as a member of its editorial board, to write a paper for a special volume commemorating 50 years since the journal's first issue. He mentioned that I was free to select any topic for the paper, possibly expressing my specific experience related to the topic. I selected 'Flexibility' as the topic to write about. The paper's title became Flexibility development—a personal retrospective. It integrated several of my published papers, emphasizing my personal views and experience on the topic.
Since the publication of this paper, the road toward writing the book became
clear to me. As the papers I had published dealt with many topics, I started to write
building blocks for the book, in terms of papers integrating my main research
topics. The book had to provide the relationships between them.
At the beginning of the 1980s, Quality started to become an important topic in
industry. That was before Quality Management or Total Quality Management
emerged as vital strategies. At that period, I started my research on quality-oriented
organizations through a survey, investigating Quality Assurance Systems in Israeli
industries. I selected a particular industrial area, the Electrical and Electronics industry, because this area made use of the most highly developed technologies and was sensitive to quality aspects. Hence, it seemed the most promising area for the development of quality-oriented organizations and for the application of quality methods. However, even in that area, we found that managers were not sufficiently aware of the economic opportunities of quality systems. They acted toward developing such systems because of the pressure exerted upon them by strong buyers.
Since that period, I continued intermittently to study and research this topic
using various techniques such as Design of Experiments and later on Quality
Function Deployment. I also published invited chapters on Total Quality
Management in several encyclopedias.
In the meantime, a more exciting topic emerged: Flexibility. For many years, flexibility has been my main topic of research. It is a complex and challenging topic with never-ending research possibilities. It is important in the human body and, according to recent research, it seems to be important in brain performance as well. In manufacturing and other man-made systems, such as information, logistics, or supply chains, there is consensus that flexibility means adaptation to changes.
The early approaches to flexibility research were associated with Flexible
Manufacturing Systems (FMSs). These early approaches to flexibility had a
bottom-up structure related to a manufacturing hierarchy, i.e., from basic flexibility
types, such as ‘machine flexibility,’ to system flexibility, such as ‘volume flexibility’ or ‘mix flexibility.’ My first research on flexibility used a bottom-up structure, described in two papers published at the end of the 1980s. Both papers used Petri nets to model flexibility in manufacturing systems. By the end of the 1990s, the importance of flexibility gained its main recognition from a strategic perspective.
Accordingly, my next research projects were devoted to flexibility-oriented
strategies, through a top-down approach. Many of these projects used Quality
Function Deployment.
Contents

Part I Strategies
1 Definitions of Strategies
   References
2 Quality-oriented Strategies
   2.1 Introduction
   2.2 Quality Definitions
   2.3 Quality Development in the US (Before the Quality Revolution in the Mid-1980s)
   2.4 Quality Organization in Japan
      2.4.1 Implementation of the American Quality Methods
      2.4.2 The Integrated Japanese Quality System
   2.5 Total Quality Strategies After the Quality Revolution—Universal Views
      2.5.1 USA—Malcolm Baldrige National Quality Award (MBNQA)
      2.5.2 Europe
      2.5.3 Australia
      2.5.4 Cultural/Geographical Styles—East Versus West (China Versus Australia)
   2.6 Quality Management Theories and Practices—Usage Aspects
      2.6.1 Logistics Versus Manufacturing
      2.6.2 Contribution of QM Tools and Practices to Project Management Performance
   2.7 Soft Versus Hard Quality Management Practices
   References
3 Flexibility-oriented Strategies
   3.1 Introduction
   3.2 Flexibility in Manufacturing Systems
      3.2.1 Flexible Manufacturing Systems (FMSs)
      3.2.2 Classifications of Flexibility in Manufacturing Systems
      3.2.3 Flexibility Types and Measures (Dimensions/Metrics)
   3.3 Flexibility in Logistic Systems
   3.4 Flexibility of Generic Objects
      3.4.1 Clouds Representation
      3.4.2 Flexibility Aspects and Analysis of Several Generic Objects
      3.4.3 Context Perspective—Information, Manufacturing (or Service)
   3.5 Flexibility in Supply Chains
   3.6 Strategic Flexibility
   References

Part II Techniques
4 Design of Experiments (DOE)
   4.1 Introduction
      4.1.1 Guidelines
      4.1.2 Fisher's Basic DOE Principles
      4.1.3 Factorial Experiments
   4.2 Impact of Flexibility Factors in Flexible Manufacturing Systems—A Fractional Factorial Design of a Simulation Experiment
      4.2.1 The Selected Factors (Independent Variables), Their Levels and the Simulation Design
      4.2.2 Response (Dependent Variable)
      4.2.3 Simulation Analysis
      4.2.4 Some Results
      4.2.5 Concluding Remarks
   4.3 Prior Research Is the Key to Fractional Factorial Design—A Fractional Factorial Design of a Physical Experiment
      4.3.1 The Selected Factors (Independent Variables) and Their Levels
      4.3.2 Response (Dependent Variable)
      4.3.3 Choice of the Design
      4.3.4 Analysis of the First Data Set
      4.3.5 Analysis of the Second Data Set and Some Concluding Remarks
Part I
Strategies
Chapter 1
Definitions of Strategies
Strategy (from Greek) is a high-level plan to achieve one or more goals under conditions of uncertainty. In the sense of the ‘art of the general’, which included several subsets of skills such as tactics, siegecraft and logistics, the term came into use in the 6th century in East Roman terminology. It was translated into Western languages only in the 18th century. From then until the 20th century, the word ‘strategy’ came to denote ‘a comprehensive way to try to pursue political ends, including the threat or actual use of force, in a dialectic of wills’ in a military conflict, in which both adversaries interact.
Strategy (of an organization) generally involves setting goals, determining
actions to achieve the goals, and mobilizing resources to execute the actions.
A strategy describes how the ends (goals) are to be achieved by the means
(resources).
Strategy is important because the resources available to achieve these goals are
usually limited. Generally, the senior leadership of an organization determines its
strategy. Strategy can be intended or can emerge as a pattern of activity as the
organization adapts to its environment or competes.
Mintzberg (1978) defined strategy as ‘a pattern in a stream of decisions’ to
contrast with a view of strategy as planning. Kvint (2009) defines strategy as ‘a
system of finding, formulating and developing a doctrine that will ensure long-term
success if followed faithfully’.
Strategy typically involves two major processes: formulation and implementation. Formulation involves analyzing the environment or situation, making a
diagnosis, and developing guiding policies. It includes such activities as strategic
planning and strategic thinking. Implementation refers to the action plans taken to
achieve the goals established by the guiding policy (Mintzberg 1996).
In his book, ‘What is Strategy—and Does it Matter?’, Whittington (2000) described four generic approaches: Classical, Evolutionary, Processual and Systemic. The first two aim at profit maximization, while the last two are more pluralistic. Whittington differentiates between two strategy dimensions: outcomes (what is strategy?) and process (how is it done?).
© The Author(s) 2018 3
M. Barad, Strategies and Techniques for Quality and Flexibility,
SpringerBriefs in Applied Sciences and Technology,
https://doi.org/10.1007/978-3-319-68400-0_1
References
Kvint V (2009) The global emerging market: strategic management and economics. Routledge,
London
Mintzberg H (1978) Patterns in strategy formation. Manage Sci 24(9):934–948
Mintzberg H (1996) The strategy process: concepts, contexts cases. Prentice Hall, New York
Whittington R (2000) What is Strategy—and does it matter? 2nd edn. Cengage Learning, London
Chapter 2
Quality-oriented Strategies
2.1 Introduction
quality improvement projects from an overall and interactive economic view representing the ‘society’ (economics of producers and customers). According to Taguchi’s philosophy, not investing in a prevention project that could avoid future customer costs higher than the project’s investment cost will later inflict a much higher loss on the producer in terms of lost market share.
The quality definitions above reflect the organizational development of quality over
the years in different parts of the world.
The historical development of total quality management in the USA is rooted in the evolution of the quality function over the years, before management’s recognition of quality as a strategic competitive edge. The three main evolutionary stages are Inspection, Statistical Quality Control and Quality Assurance (Barad 1996).
Inspection
Inspection of final manufactured products is related to the development of mass production, and as such, it started before the end of the nineteenth century. It focused on the conformance of production to product specifications. The goal of inspection was to separate ‘conforming’ from ‘non-conforming’ units through screening.
One of the most successful American export areas after World War II was Quality Control methods, which were particularly well received in Japan. Until the mid-1960s, the Japanese absorbed the principles and philosophies of quality management and quality economics introduced by well-known quality control ambassadors such as Deming, Juran and Feigenbaum. Deming presented a series of seminars to managers, focusing on statistical process control.
In his well-known fourteen points, besides stressing the importance of process
quality ‘improvement’ and the advantages of the statistical methods over mass
inspection, Deming also stressed the need to preserve workers’ pride and the
importance of ‘training’ for stimulating workers’ motivation to improve quality.
The total quality control methods described in Feigenbaum’s book were applied in Japan by the 1970s under the name Company-Wide Quality Control (CWQC). This enhanced Japanese version of total quality control is actually the first materialization of TQM.
The quality tools and techniques imported from the West were assimilated by top management and disseminated downwards. This movement did not produce satisfactory participation at the lower levels, and resistance to applying the new methods arose. The Japanese answer to the lack of work motivation at the bottom was the creation of Quality Control circles. These were voluntary, homogeneous small groups intended to open implementation channels for quality improvement.
Fig. 2.1 Deming’s cycle of improvement (Plan, Do, Check, Act)
Among the Japanese quality gurus who extensively contributed to this integrated approach were Kaoru Ishikawa and Genichi Taguchi. Ishikawa gathered simple graphical tools to be used by members of the Quality Control circles. The seven basic tools for employees participating in these improvement programs are: Pareto diagrams, Flow charts, Cause and effect diagrams, Histograms, Check sheets, Scatter diagrams and Quality control charts. Deming’s cycle of improvement provided the logical connections between these tools, some of which pertain to ‘plan’ and others to ‘do’, ‘check’ or ‘act’.
Taguchi developed methods promoting the use of Design of Experiments to
improve product and process quality by reducing variability. It was important that
these methods be applied during the development stage of products and processes,
the ultimate result being products exhibiting on-target and low-variance quality characteristics (features that, according to Taguchi’s philosophy, made products attractive to customers) and reduced costs. Two essential components make up
Taguchi’s strategy aimed at ‘selling’ design of experiments:
1. Providing ‘economic’ motivation for management to use design of experiments,
in terms of a loss function expressing customers’ discontent with products
whose quality characteristics are not on target and/or exhibit variations.
2. Offering ‘easy-to-use’ instructions for implementing the methodology of design
of experiments, originally an elitist western statistical method, known to
statisticians and very few engineers.
Taguchi’s loss function is related to the deviation of a quality characteristic from its target value. It can be expressed in terms of the characteristic’s variance, showing that the higher the variance, the higher the loss. Hence, reducing performance variance means reducing loss. As design of experiments is an effective tool for reducing performance variance, it is also, according to Taguchi’s logic, a tool for reducing loss and attracting customers (see also Taguchi’s experimental design technique in Part II of this book).
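Taguchi’s loss function is commonly written as the quadratic L(y) = k(y − T)², where T is the target and k a cost coefficient; averaged over production, the loss decomposes into k times the sum of the variance and the squared bias. The short sketch below illustrates this decomposition; the target, the value of k and the measurement values are illustrative assumptions, not data from this book:

```python
import statistics

def taguchi_loss(y, target, k):
    """Quadratic loss for one unit: k * (deviation from target)^2."""
    return k * (y - target) ** 2

def mean_loss(values, target, k):
    """Average loss over produced units."""
    return sum(taguchi_loss(y, target, k) for y in values) / len(values)

# Illustrative data: a 10.0 mm target, k = 2.0 cost units per mm^2
target, k = 10.0, 2.0
units = [9.5, 10.4, 10.6, 9.6]

# Mean loss equals k * (population variance + squared bias)
bias = statistics.fmean(units) - target
decomposed = k * (statistics.pvariance(units) + bias ** 2)
assert abs(mean_loss(units, target, k) - decomposed) < 1e-9
```

Reducing the variance of the measured units, or centering them on target, directly reduces the mean loss, which is the economic argument behind variance-reducing experiments.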
Both types of techniques, the simple graphical techniques used by Japanese
workers in quality control circles and the sophisticated design of experiments used
by Japanese engineers, brought about the great quality improvement of the Japanese
products and the improved efficiency of their manufacturing processes.
Figure 2.2 displays three themes of the integrated quality system: Main Objective, Tactics and Quality Technique. Each theme is viewed from three perspectives: Workers, Customers and Management. From the workers’ perspective, the main quality system objective is achieving work motivation. From the customers’ perspective, the product quality properties have to fit the specifications, while from the management perspective the most important objective is attracting customers and reducing costs. The workers-oriented tactic is Quality Control circles, while the appropriate tactic for both customers and management is low variability of the product properties, e.g., low ‘loss’, according to Taguchi. As mentioned above, Ishikawa supplied the workers-oriented techniques within the quality control circles (the seven basic tools) and Taguchi promoted the technique for reducing variability of quality characteristics and thus reducing loss (design of experiments).
Perspective:      Workers           Customers                      Management
Main Objective:   Work motivation   Product properties on-target   Attracting customers, reducing costs
Fig. 2.2 Company-Wide Quality Control (CWQC). Reproduced from Barad (1996)
14 2 Quality-oriented Strategies
The major rationale for the creation of the Malcolm Baldrige Award was foreign competition. Realizing that customers desired high quality, the original aims
of the Award were to support national efforts to improve quality and thus satisfy
customers’ desires. Soon it became clear that the principle of satisfying customers’
desires could be applied not only to separate companies but within the same
company as well. A manufacturing department is the customer of the engineering department that produces the design, whose quality has to meet the manufacturing requirements. For example, tolerances are product requirements achieved through the designed product. In each section and department there are various series of customers and suppliers, and all are part of a quality chain. Thus, the ‘quality chain’ became another fundamental concept of TQM (Barad 2002).
The tremendous impact of the Award on US and Western industry and later on
global industry can be attributed to its excellent structural quality framework,
enabling companies to assess themselves against it. The framework is a product of
the National Institute of Standards and Technology (NIST).
Based on the early MBNQA framework, TQM consists in essence of:
1. Provision of high quality products/services, to satisfy customer wishes (a
dynamic goal achieved through a continuous quality improvement process).
2. Achievement of high total quality in products and processes at low cost (managing process quality so as to increase productivity, gain suppliers’ collaboration and reduce waste).
3. Management of total quality through involvement of all employees, measure-
ment of progress and communication of results.
Figure 2.3 describes the early MBNQA framework (1993). It has (1) a ‘driver’ (top management leadership), (2) a ‘system’ whose elements are management of process quality, human resources, strategic quality planning, and information and analysis, (3) a ‘goal’ focusing on customer satisfaction and market share gain and (4) measures of progress in terms of quality and operational results.
The Baldrige quality prize is awarded each year in a variety of sectors. After a few years, to reflect the evolution of the field of quality from a focus on product, service and customer quality to a broader, strategic focus on overall organizational quality, the name of the prize was changed to the Baldrige Performance Excellence Program. The framework evolved over the years. According to Fig. 2.4 (2016–2017), it consists of seven integrative ‘critical aspects’ (in some academic papers, see e.g. Samson and Terziovski (1999), they are called ‘empirical constructs’): ‘leadership’, ‘strategy’, ‘customers’, ‘workforce’, ‘operations’, ‘results’ and ‘measurement, analysis and knowledge management’. According to the results of the much-cited research paper mentioned above, only three of the empirical constructs (leadership, people management and customer focus) had a significant impact on performance.
In recent years, many of the Award recipients belonged to the health care,
service, education and small business sectors.
Fig. 2.3 Baldrige Award criteria framework (1993). Source NIST 1993: 33
Fig. 2.4 Baldrige Performance Excellence Program (Overview 2016–2017). Source NIST 2016–2017
2.5.2 Europe
To create a model of excellence for business across Europe, in 1991 the European Foundation for Quality Management instituted a European Quality Award for business (see Fig. 2.6). The model distinguished between two major types of criteria, ‘results’ and ‘enablers’ (the means to achieve results).
The results concerned ‘customers’ satisfaction’, ‘people satisfaction’ (employees) and ‘impact on society’ (including meeting the demands of shareholders), ultimately leading to excellence in ‘business results’. The enablers are five criteria: ‘leadership’, which drives ‘human management’, ‘strategy and tactics’, ‘resources’ and ‘processes’. The results are assessed internally (self-assessment, representing a prerequisite for improvement), as well as externally, through comparisons with competitors and best-in-class organizations. Self-assessment enables organizations to discover areas for improvement.
Fig. 2.6 Framework of the Quality prize in Europe (1991). Source European Quality Award Framework (1991)
2.5.3 Australia
Cells’, comprising a group of employees working in the same cell; by training and empowering them to make decisions, the active participation of employees in the improvement process could be boosted. There is a marked difference between QC circles and cells within a cellular organization. QC circles are practiced on a voluntary basis, while cells in a cellular organization are not voluntary but part of the system’s formal organization.
Let us now return to our research, which compared some quality practices in P R
China and Australia.
The survey
In P R China, the information was gathered from several visits to state-owned
enterprises in Anhui Province and the Beijing area in the period May–June 1992. In
Australia, it was collected through a small but systematic study conducted in New
South Wales in the period September 1992–January 1993. It should be mentioned
that the information gathered in P R China was based on a selective, non-random
group of industrial enterprises, while in Australia it relied on a random, stratified
sample. Objective difficulties, mostly due to language barriers, prevented us from applying in China the systematic methodology of the Australian study. In spite of the above differences, there was a similarity between the surveyed enterprises in China and Australia: both had formally embarked on a quality improvement program.
State enterprises in P R China receive basic information and training from the National Society of TQM. The material is disseminated top-down through hierarchically established TQM channels (Province, County). Hence, it is not surprising that the group of state enterprises in P R China in our survey exhibited some common traits, bearing some similarity to a Japanese style of quality management.
The small number of enterprises in the study did not enable a statistical comparison between the TQM practices in China versus Australia. Nevertheless, we noticed some distinctive differences between the surveyed companies in the two
countries. The differences concerned (a) the reason for commencing the improvement program; (b) practice of QC circles; (c) quality costs reporting; (d) scope of jobs and (e) topics of improvement projects.
(a) In Australia, the commencement of TQM was motivated by a product demand crisis. In China, the visited companies had product certification as their main objective. In some Chinese companies, the manufacturing capacity was fully utilized. There were no such occurrences among the Australian companies, whose revenues were solely limited by product demand.
(b) In some of the Australian companies, a 100% TQM participation at the shop
level was ensured through a division of the shop floor into cells (cellular
organizations). We did not find such cellular organization in any of the visited
Chinese companies. There, TQM participation at the shop level was only
apparent through QC circles (50% average participation). By contrast, none of
the Australian companies practiced QC circles.
(c) None of the Australian companies practiced quality cost reporting being thus in
sharp contrast with the visited Chinese companies where reporting of quality
costs was practiced on a regular basis.
(d) Broad scope jobs were encouraged in Australia, while in China we found a
tendency to have narrow scope jobs.
(e) In Australia, most improvement projects had a time-based orientation. In China, they were mostly quality-per-se oriented.
Concluding remarks
Keeping in mind the limitations of the study, one can still draw some conclusions.
1. Distinctive differences between the way TQM was applied in China and Australia were noticed. The different organization of active employees’ participation in the continuous improvement process at the shop level, namely voluntary QC circles in China versus cellular organizations in Australia, can definitely be attributed to cultural influences.
2. In spite of the fact that in China we found no reference to CWQC, but only to TQM, the visited Chinese enterprises seemed to be closer to what may be called ‘a Japanese style’. Evidence is provided by an overall practice of QC circles, as well as by the early commencement of quality improvement management (1982): while no TQM principles existed in 1982, CWQC was well established at that time. On the other hand, contrary to Japanese quality practices, which encourage quick reaction and process simplification, the Chinese companies did not exhibit these practices. Another difference between China and other countries is the tendency toward narrow-scope jobs, possibly dictated by its huge population.
3. The Australian improvement style looks rather similar to the North American
improvement style. Its principles seemed to be rooted in the MBNQA. The
reported quality improvement commencement occurred at the end of the 1980s,
when TQM was already established. This may supply some circumstantial
evidence.
4. Future perspectives:
– In Australia, as in other Western countries guided by the general practices of the MBNQA, some economic successes may encourage organizations to continue boosting active participation of employees through cellular organizations.
– China is in a class by itself. What makes it special is its deeply rooted educational heritage, coupled with the natural curiosity and creative thinking of its people, its size and its socialistic regime. This factor combination makes it difficult to predict the future development of TQM in P R China.
– On a global scale, there is an exchange of cultural principles and learning on
quality management. The Eastern world learned from the Western world
general principles of Management, while Japan in particular also learned
modern statistical theories. The Western world learned from the Eastern
To address the usage aspect, we first compare the study of Ahire et al. (1996),
whose data were from the automotive and the manufacturing areas, with the study
carried out by Anderson et al. (1998) in logistics, a specific area different from
manufacturing.
Ahire et al. considered questions regarding a holistic versus a piecemeal implementation of the MBNQA criteria. Their study identified twelve QM components and developed items to measure them. The components’ list comprises: ‘management commitment’, ‘internal quality information usage’, ‘benchmarking’, ‘design QM’, ‘employee empowerment’, ‘employee involvement’, ‘employee training’, ‘supplier QM’, ‘supplier performance’, ‘SPC usage’, ‘customer focus’ and ‘product quality’.
Their findings reveal (a) the critical importance of the human aspect and its development (employee training, employee involvement and employee empowerment) relative to the other QM components. They imply that people are a key
In its Guide to the Project Management Body of Knowledge (1996), the Project
Management Institute defines a project as “a temporary endeavor undertaken to
create a unique product or service”.
Raz and Michael (1999) carried out a survey to find out which tools and
practices are associated with successful project management in general, and with
effective project risk management in particular. The survey was carried out between
April and June 1998. The authors gave a wide interpretation to the term ‘practice’,
meaning special purpose tools and processes. A questionnaire, written in Hebrew,
was distributed either personally or via email, to a random sample of about 400
project managers from the software and high-tech sectors in Israel. Finally, there
were 84 usable completed questionnaires.
The questionnaire consisted of several parts, each containing a number of brief
questions, to be answered on a 1–5 scale.
Although the main emphasis of the survey was on project risk management, two
of its parts are relevant to this section. In their paper, whose title is identical with the
title of this section, Barad and Raz (2000) detailed the analysis of the two relevant
parts of the above questionnaire.
The first relevant part dealt with the extent of contribution of individual practices
to project success in general, and included 13 Project Management (PM) generic
practices. Our interpretation here of the term 'perceived contribution' is that a
practice with a highly perceived contribution is likely to have a high usage level.
According to the findings of a pilot version of the questionnaire, the perceived
contribution was highly correlated with the ‘extent of use’ in the organization.
Hence, we will use 'perceived contribution' and 'usage' interchangeably.
The second relevant part consisted of six questions dealing with the effectiveness
and efficiency of the manner in which projects are managed in the respondent’s
organization and with project outcomes, such as product quality and customer
satisfaction.
The data were analyzed in several steps. First, the authors assessed the perceived
contribution of each individual practice in PM. Next, in order to assess the actual
contribution of the practices, they calculated the coefficients of correlation between
the perceived contribution of the practice, and the project management outcomes.
Finally, they compared perceived contribution with actual contribution of practices.
The findings suggest that in general soft TQM elements are significantly related
to organizational performance.
Certain hard TQM elements also have a significant effect on performance. For
hard TQM to impact performance it is essential that such hard elements be sup-
ported by elements of soft TQM.
Another paper, Gadenne and Sharma (2009), described a similar survey whose
objective was to investigate the hard and soft quality management factors and their
association with firm performance. The survey considered Australian Small and
Medium Enterprises.
Their findings were quite similar to those of Rahman and Bullock.
They suggested that improved overall performance was favorably influenced by
a combination of ‘hard’ QM factors and ‘soft’ QM factors. Their hard QM factors
were benchmarking, quality measurement, continuous improvement, and efficiency
improvement. Their soft QM factors consisted of top management philosophy and
supplier support, employee training and increased interaction with employees and
customers. The QM factors of employee training, efficiency improvement, and
employee and customer involvement were important in maintaining customer sat-
isfaction, whilst employee and customer involvement were also important in
maintaining a competitive edge in terms of return on assets.
The next chapter is about Flexibility oriented strategies
References
Ahire LS, Golhar DY, Waller MA (1996) Development and validation of TQM implementation
constructs. Decis Sci 27(1):23–56
Anderson RD, Jerman RE, Crum R (1998) Quality management influences on logistics
performance. Transport Res 43(2):137–148
Barad M (1984) Quality assurance systems in Israeli industries, part i: electric and electronics
industries. Int J Prod Res 22(6):1033–1042
Barad M (1995) Some cultural/geographical styles in quality strategies and quality costs (P.R.
China versus Australia). Int J Prod Econ 41:81–92
Barad M (1996) Total quality management. In: Warner M (ed) IEBM Thomson Business Press,
London, pp 4884–4901
Barad M (2002) Total quality management and information technology/systems. In: Warner M
(ed) IEBM Online Thomson Learning, London
Barad M, Kayis B (1994) Quality teams as Improvement Support Systems (ISS): an Australian
perspective. Manage Decis 32(6):49–57
Barad M, Raz T (2000) Contribution of quality management tools and practices to project
management performance. Int J Qual Reliab Manag 17:571–583
Black SA, Porter LJ (1996) Identification of the critical factors of TQM. Decis Sci 27(1):1–21
Crosby PB (1979) Quality is free. McGraw-Hill, New York
Deming WE (1986) Out of the crisis. Cambridge MIT Press, Mass USA
Feigenbaum AV (1991) Total quality control, 3rd edn. McGraw-Hill, New York
Gadenne D, Sharma B (2009) An investigation of the hard and soft quality management factors of
Australian SMEs and their association with firm performance. Int J Qual Reliab Manag 26
(9):865–880
Juran JM (1991) Strategies for world-class quality. Qual Prog 24:81–85
3.1 Introduction
The need for flexibility developed gradually until in some companies it reached the
strategic level. The last section of this chapter, entitled Strategic Flexibility,
addresses this topic.
The next section reviews Flexibility in Manufacturing Systems, the first area that
recognized the beneficial results of flexibility.
As mentioned above, the first organizational form for applying flexibility in
manufacturing was the design and operation of Flexible Manufacturing Systems
(FMSs). Their name implies that, among all their features, the one that
characterizes them is flexibility. Their primary expected capability was
adaptation to changing environmental conditions and process requirements.
Fig. 3.1 A planning decision sequence by hierarchy and short/medium time horizon (reproduced
from Barad 1992)
Many papers analyzed these types or dimensions of flexibility and proposed mea-
suring units to quantify and measure them (Slack 1983; Ramasesh and Jayakumar
1991; Barad and Nof 1997; Malhotra and Subbash 2008; Jain et al. 2013). Some
flexibility types carried the name of the type of change that the manufacturing
system had to cope with (such as change in the output ‘volume’, or changes in the
product ‘mix’).
It seems worthwhile mentioning that in the literature, manufacturing flexibility
types and dimensions are used interchangeably. Here we use types of flexibility
(for flexibility of elements or activities), while dimensions express measuring
units or metrics.
The primary role of flexibility in any system is its adaptation to changing
environmental conditions and process requirements. The system adaptation to
foreseen and unforeseen changes can be provided at different stages (design,
planning or operation) and through different means (hardware or software).
The capability needed to cope with failures at the machine level is a quick tool
change or repair. To cope with changes in requirements, a quick changeover is
required (at machine and plant level), coupled with process and transfer variety.
At the plant level, the answer is alternative machines offering a variety of
operations.
According to the above, two fundamental system characteristics are needed to
cope with changes:
1. Short changeover time (set-up)
2. Variety (operations and transfer)
Flexibility dimensions/metrics
Two flexibility dimensions or metrics, ‘response’ and ‘range’, measure the above
characteristics. Slack (1987) suggested two primary dimensions of flexibility in any
system: ‘range’ of states and ‘time’ (or cost) to move from one state to another.
Browne et al. (1984) indicated that these two primary dimensions originate from
‘machine flexibility’ measures. Response represents machine set up duration, while
range represents the operations’ variety of a machine, its versatility. Olhager and
West (2002) added a third dimension ‘distension’. They defined it as the invested
effort/time/cost for enhancing the current variety of operations and transfer, if
needed, in order to cope with a given change that currently is outside the action
space. Summarizing the above:
Range measures the current variety of alternatives—action space—for coping with
given changes.
Response is the preparation time/cost for coping with a change within the current
action space.
Distension is the invested effort/time/cost for enhancing the current action space
if needed, thus enabling the element or activity flexibility to accommodate a given
change that currently is outside the action space.
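As a sketch of how these three dimensions can be recorded for a single element, the class below (with invented names and numbers) keeps a range (action space), a response time and a distension cost, and answers whether a required change falls inside the action space:

```python
from dataclasses import dataclass

@dataclass
class FlexibilityProfile:
    """Range/response/distension record for one element (e.g. a machine)."""
    action_space: set       # range: current variety of alternatives
    response_time: float    # response: preparation time within the action space
    distension_cost: float  # distension: effort to enlarge the action space

    def can_cope(self, required_op: str) -> bool:
        # A change is absorbed directly only if it lies inside the action
        # space; otherwise accommodating it would require distension.
        return required_op in self.action_space

# Illustrative machine: versatile over three operations, 15-min set-up.
mill = FlexibilityProfile({"drill", "bore", "tap"}, 15.0, 40.0)
print(mill.can_cope("bore"))   # True: within the current range
print(mill.can_cope("grind"))  # False: outside the action space
```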
Let us now present the flexibility types in their bottom up hierarchical order.
Basic flexibility types
Machine flexibility is ‘ease of making a change for producing a given set of part
types’. It is related to the set of different tasks it is capable of performing, (its
operation set) representing its ‘range’ dimension, also called ‘versatility’.
Eventually, the operation set may also include the relative time efficiency with
which the machine is capable of processing the operations (Brill and Mandelbaum
1989).
Machine flexibility is also related to the duration of its preparation tasks (set-up),
e.g. part loading, software accessibility, fault diagnostics, representing its ‘re-
sponse’ dimension. The ‘response’ dimension of machine flexibility may separately
consider various preparation times such as the time needed to change tools in a tool
magazine; the positioning time of a tool; and the time necessary to mount new
fixtures, which may be different for each part type (Chandra and Tombak 1992).
Machine flexibility provides adaptation to operations variety and copes with
failures as well as with changeovers from job to job. This flexibility comprises all
three dimensions: Set-up time (response), variety of operations (range) and even-
tually invested effort for enhancing the current variety of operations (distension).
of the manufacturing costs per unit, namely, A/V + C, where A and C are,
respectively, the fixed (set-up related) and variable costs, while V is the production
volume.
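The unit-cost expression can be transcribed directly; the numeric values below are illustrative only:

```python
def unit_cost(A: float, C: float, V: int) -> float:
    """Manufacturing cost per unit: fixed (set-up related) cost A spread
    over production volume V, plus variable cost C per unit."""
    return A / V + C

# Illustrative: set-up cost 900, variable cost 4 per unit.
print(unit_cost(900, 4, 100))  # 13.0 - small volumes carry a heavy set-up share
print(unit_cost(900, 4, 900))  # 5.0 - a larger volume dilutes the fixed cost
```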
Volume flexibility is also defined as the ‘possible range of production volumes at
which the firm can run profitably’ (Sethi and Sethi 1990).
Expansion flexibility has been defined as the 'time/cost efforts necessary to
increase/change the existing manufacturing capacity by a given percentage' (a
response dimension). It enables expanding production gradually. Expansion
flexibility is concerned with modifications of the manufacturing system capacity,
enabling its adaptation to perceived future changes in product demand. It reduces
the time and cost of manufacturing new products.
Aggregate flexibility types
The aggregate flexibilities represent the combined attributes of a manufacturing
system technology, intended to cope with the diversity of its exogenous environ-
ment. As such, this environment represents strategic requirements imposed on a
manufacturing system.
Program flexibility is the system's capability for untended production, as needed
for special (environmental) manufacturing conditions where, because of physical or
other constraints, human collaboration is not possible. Sethi and Sethi defined it
as the 'ability of a system to run virtually unattended for long enough periods'.
It is especially important for automated manufacturing systems processing parts
during night shifts with no working personnel.
Production flexibility represents the universe of part types that can be manu-
factured by the system, without adding major capital equipment (Sethi and Sethi
1990). Although this measure can be considered an aggregate measure (at the plant
level), it is still an inherent measure and not related to specific requirements of an
exogenous environment. Hence, production flexibility is a potential flexibility,
appropriate for a strategic approach intended to cope with external customers,
requesting a high diversity of ‘regular’ parts.
The ‘range’ dimension seems to be the more important dimension for measuring
this flexibility type.
Marketing flexibility, alias new product flexibility, has been defined as the
time/cost required for introducing a new product. In contrast to 'production' flexibility,
‘marketing’ flexibility is intended to cope with strategic decisions in an external
environment that requests a diversity of ‘special’ product types. The external
customers are bound to appreciate frequent launching of new and more attractive
products and thus, a beneficial effect of marketing flexibility would be an improved
product demand. In the literature, the attempts to assess marketing flexibility typ-
ically focus on the investment costs and not on benefits such as the above.
By its definition, the capabilities involved in achieving marketing flexibility can
be decomposed into ‘new product design’ activities and activities related to mod-
ifications of the processing resources and eventually their re-configuration. These
activities are also associated with expansion flexibility and product flexibility.
A summary of the flexibility types and their measuring dimensions is presented
in Table 3.1.
time between two locations at the same echelon. The mathematical model presented
in the paper assumes that this value is negligible. The focus is on the range
dimension where trans-routing is measured by u, the number of transshipment links
per end user at the echelon level: u = 0, 1, …, N − 1, where N is the number of end
users. In a rigid system u = 0. Figure 3.2 (reproduced from the original paper)
depicts such a system with N = 5.
Modeling the benefits of trans-routing flexibility
The model presented in the original paper stemmed from a military logistics
scenario. The problem focused on the logistics performance of combat units (end
users), including movement of stock among them (end users at the same echelon
level) and the logistic relations with their common supply location.
As conventional cost-oriented logistic performance measures may not always be
appropriate, the paper investigated the performance of a logistic system
possessing trans-routing flexibility, focusing on the flexibility's benefits
exclusive of cost considerations. An advantageous alternative was suggested: a
customer-oriented logistic performance measure, logistic dependability,
emphasizing quick-response delivery through reduced supply uncertainty during a
replenishment cycle.
In a reliability context, dependability is a probability measure of an item to
successfully complete a mission, given its reliability and assuming that if a failure
occurs it is repaired in due time. In the reviewed paper, logistic dependability was
defined as ‘the probability that there is no supply shortage until the end of a
replenishment cycle’.
To match dependability in the reliability literature, a simplified, conceptual
equation calculated logistic dependability (DL) as follows:
DL = Pr + (1 − Pr) · Ps|u
where Pr is the probability that demand does not exceed the inventory on hand
during a cycle at all the echelon end users (N), and Ps|u is the probability of
overcoming a shortage for a given trans-routing flexibility, measured by u,
u = 0, 1, …, N − 1.
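The conceptual equation can be transcribed directly; the probability values below are invented for illustration, since the paper derives Ps|u from its underlying model:

```python
def logistic_dependability(p_r: float, p_s_given_u: float) -> float:
    """DL = Pr + (1 - Pr) * Ps|u: probability of no supply shortage
    until the end of a replenishment cycle."""
    return p_r + (1.0 - p_r) * p_s_given_u

# In a rigid system (u = 0) a shortage cannot be overcome, so DL = Pr.
print(logistic_dependability(0.80, 0.0))
# Trans-routing links raise DL above Pr.
print(logistic_dependability(0.80, 0.60))
```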
The reviewed paper investigated the beneficial effects of trans-routing flexibility
on logistic dependability. To get a better understanding of the logistics decision
problems and their eventual solutions, a multifactor statistical experiment was
designed, carried out and analyzed. The experiment is detailed in Part II of this
book, Chap. 4.
The outlook of the next section is very different. It depicts flexibility in terms of
the flexibility of a generic object and uses tools related to information systems.
This section presents a framework for analyzing flexibility of generic objects uti-
lizing cloud diagrams. It reviews a paper (Fitzgerald et al. 2009) written by authors
who have been working in the domain of flexibility in their respective areas:
information systems and manufacturing systems. The starting point was work
undertaken by one of the authors, which portrayed some information systems
flexibility concepts in diagrammatic form, known as cloud diagrams. First, the
potential flexibility of a designed object was portrayed and then its actual flexibility
expressing its ability to adapt to changes. The visual nature of this representation
was regarded as useful and original. The framework is presented here to illustrate
these ideas. In the original paper examples of its applications in a manufacturing
system and in an information system are also displayed.
One of the key aspects is that the framework operates at the level of a generic
object, which is flexible or is intended to be flexible. The object can be physical or
logical and the analysis is performed with respect to a given achievement. For
instance, if the object is a software product the desired achievement may be its
operation on any platform/operating system. The framework is divided into two
phases. In the first phase the object is analyzed in terms of its design characteristics
which are identified as potential flexibility capabilities, before they have been tested
in use under change. The second phase of the framework looks at the object under
change and examines its behavior in terms of its ability to adapt to new
circumstances. This is its actual flexibility, which may be evaluated by how well
the object keeps in line with the desired achievement.
Figure 3.3 depicts Phase 1 of the framework. The object is the entity under
investigation and is characterized by a number of attributes that determine its
flexibility. Examples of a generic object can be a car, a computer, a piece of
software, a manufacturing system or an information system. The object is analyzed
with a desired achievement in mind. For example, if the object is a manufacturing
system, the desired achievement expressed by the managers of the manufacturing
company (the stakeholders) can be 'on-time delivery' of a major product.
Characterizing a generic object (such as a product or a system) as physical or
logical or both helps in narrowing down the flexibility aspects, which need to be
analyzed in relation to the object's desired achievement. A most important tool in the
analysis here is decomposition. To analyze the potential flexibility of an object we
decompose its flexibility into subsets of flexibility aspects.
An aspect is determined by the focus of interest and by the analyst's selected
perspective. For example, for the manufacturing system whose desired achievement
is 'on-time delivery', its flexibility can be decomposed into four subsets of flexibility
Fig. 3.3 Analysis of the flexibility of an object [Phase 1—designed object] (reproduced from
Fitzgerald et al. 2009)
Fig. 3.4 Analysis of the flexibility of an object [Phase 2—object in use] (reproduced from
Fitzgerald et al. 2009)
The magnitude of the change can be minor or significant according to its impact on
the object’s behavior.
Flexibility aspects
As flexibility is a complex concept involving a combination of factors, which may
be viewed from a variety of aspects or perspectives, we decompose the flexibility of
an object into subsets of flexibility aspects. An aspect is determined by the focus of
interest and by the analyst's selected perspective. We use the flexibility aspects of
generic objects in an analytical way, showing that they can be applied as a
methodological tool that simplifies the assessment of flexibility and improves
the understanding of this complex concept and the variety of its interactions.
We emphasize that flexibility is driven by changes, here the changes that a
generic object may be exposed to. Some changes are anticipated while others are
not, some are not even envisaged. We deem that it is easier to visualize types of
changes by associating them with specific flexibility aspects (see also Fitzgerald
and Siddiqui 2002).
The flexibility aspects of five generic objects appear in Table 3.2 (reproduced
from the original paper). The objects were approximately ordered on a complexity
scale: method, product, strategy, process and system. Complexity of an object was
conceptualized as 'many elements with relatively simple interrelations' (see e.g.
Cooper et al. 1992). It was measured in a simplified way, through the number of
its flexibility aspects. The flexibility aspects were defined broadly, so that
some are shared by several generic objects in the set while others are specific
to one generic object.
Let us examine the flexibility aspects of the generic objects listed in Table 3.2
considering the type of changes they can cope with.
Method and product
A method here is a systematic way for performing an activity, implying an orderly
logical arrangement.
A product/service is a usable result, an artifact that has been created by someone
or by some process.
Shared flexibility aspects
These two generic objects share two flexibility aspects: ‘usage’ and ‘design/
architecture’.
The implied change associated with ‘usage’ flexibility is a change in application
or a change in user. Usage flexibility can be measured on two dimensions: range
and response. On a range dimension it expresses the variety of possible applica-
tions or the variety of possible users of a method or a product. On a response
dimension it expresses the preparation time or effort needed to move from one
existing application to another when both applications are included in the action
space.
The ‘design’ flexibility perspective concerns quick/easy adaptation of the
existing product or method to new requirements not included in the action space,
leading to a new (similar) product or to an enhanced method. We dare say that this
flexibility aspect is dependent on the object structure/architecture. Its measuring
dimension, expressing the adaptation effort (time/cost) for enhancing the action
space, its distension.
Product specific flexibility aspects
The ‘maintenance’ flexibility aspect is concerned with a product quick/easy
maintenance. We suggest measuring it on a response dimension, whose magnitude
is dependent on the product structure/architecture. The implied change is related to
the product functioning status. Similarly to maintenance, ‘transfer’ of a product
implying a change in location, can be measured on a response dimension whose
magnitude is also dependent on the product structure/architecture. It is worthwhile
mentioning that this flexibility aspect is mainly relevant for a physical product.
Strategy, process and system
Strategy here is a long term and systematic action plan for achieving a goal.
Process is defined here as a set of interrelated activities, which transform inputs
into outputs.
A system is a more complex generic object. Among the many and dissimilar
definitions in the literature we selected the following: ‘an organized assembly of
Table 3.2 Generic objects and their flexibility aspects—shared or specific (reproduced from
Fitzgerald et al. 2009)
Method: Usage; Design/architecture
Product/service: Usage; Design/architecture; Maintenance; Transfer
Strategy: HR; Information/analysis/decision making; Communication links/relationships; Scale
Process: HR; Information/analysis/decision making; Communication links/relationships; Order
of activities' execution; Input/suppliers; Tools
System: HR; Information/analysis/decision making; Communication links/relationships; Equipment
configuration/network; Processes
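Measuring complexity through the number of flexibility aspects can be checked mechanically; the aspect lists below are transcribed from Table 3.2. The count places 'process' above 'system', consistent with the table being only approximately ordered:

```python
# Flexibility aspects per generic object, transcribed from Table 3.2.
aspects = {
    "method": ["usage", "design/architecture"],
    "product/service": ["usage", "design/architecture", "maintenance",
                        "transfer"],
    "strategy": ["HR", "information/analysis/decision making",
                 "communication links/relationships", "scale"],
    "process": ["HR", "information/analysis/decision making",
                "communication links/relationships",
                "order of activities' execution", "input/suppliers", "tools"],
    "system": ["HR", "information/analysis/decision making",
               "communication links/relationships",
               "equipment configuration/network", "processes"],
}
# Simplified complexity measure: the number of flexibility aspects.
by_complexity = sorted(aspects, key=lambda obj: len(aspects[obj]))
print(by_complexity)  # method first, process last
```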
References
Aaker DA, Mascarenhas B (1984) The need for strategic flexibility. J Bus Strategy 5(2):74–82
Barad M, Sipper D (1988) Flexibility in manufacturing systems: definitions and Petri net
modelling. Int J Prod Res 26(2):237–248
Barad M (1992) The impact of some flexibility factors on FMSs—a performance evaluation
approach. Int J Prod Res 30(11):2587–2602
Barad M, Nof SY (1997) CIM flexibility measures: a review and a framework for analysis and
applicability assessment. Int J Comp Integ M 10(1–4):296–308
Barad M (1998) Flexibility Performance Measurement Systems—A Framework for Design. In:
Neely AD, Waggoner DB (eds), Proceedings of the first international conference on
performance measurement centre for business performance, University of Cambridge, UK,
pp 78–85
Barad M, Even-Sapir D (2003) Flexibility in logistic systems—modeling and Performance
Evaluation. Int J Prod Econ 85:155–170
Barad M (2012) A methodology for deploying flexibility in supply chains. IFAC Proc Vol 45
(6):752–757
Benjaafar S, Ramakrishnan R (1996) Modelling, measurement and evaluation of sequencing
flexibility in manufacturing systems. Int J Prod Res 34(5):1195–1220
Bowersox DJ, Closs DJ, Cooper MB (2002) Supply Chain Logistics Management McGraw Hill,
New York
Brill P, Mandelbaum M (1989) Measures of flexibility in manufacturing systems. Int J Prod Res 27
(5):747–756
Browne J, Dubois D, Rathmill K, Sethi SP, Stecke K (1984) Classification of flexible
manufacturing systems. FMS Magazine 2:114–117
Buzacott JA (1982) The fundamental principles of flexibility in manufacturing system. In:
Proceedings of the 1st international conference on FMSs, Brighton, pp 13–22
Chandra P, Tombak MM (1992) Models for the evaluation of routing and machine flexibility.
Eur J Oper Res 60:156–165
Combe IA, Greenley GE (2004) Capabilities for strategic flexibility: a cognitive content
framework. Eur J Marketing 38(11/12):1456–1480
Cooper WW, Sinha KK, Sullivan RS (1992) Measuring complexity in high-technology
manufacturing: indices for evaluation. Interfaces 22(4):38–48
De Groote X (1994) The flexibility of production processes: a general framework. Manage Sci
40:933–945
56 3 Flexibility-oriented Strategies
De Meyer A, Nakane J, Miller JG, Ferdows K (1989) Flexibility the next competitive battle.
Strategic Manage J 10(2):135–144
De Toni A, Tonchia S (1998) Manufacturing flexibility: a literature review. Int J Prod Res 36
(6):1587–1617
Evans S (1991) Strategic flexibility for high technology manoeuvres: a conceptual framework.
J Manage Stud 28(1):69–89
Gerwin D (1993) Manufacturing flexibility: a strategic perspective. Manage Sci 39:395–410
Grewal R, Tansuhaj P (2001) Building organizational capabilities for managing economic crisis:
the role of market orientation and strategic flexibility. J Marketing 65:67–80
Gupta YP, Goyal S (1989) Flexibility in the manufacturing system: concepts and measurement.
Eur J Oper Res 43:119–135
Gupta YP, Sommers TM (1996) The measurement of manufacturing flexibility. Eur J Oper Res
60:166–182
Fitzgerald G (1990) Achieving flexible information systems: the case of improved analysis.
J Inform Technol 5:5–11
Fitzgerald G, Siddiqui FA (2002) Business process reengineering and flexibility. Int J Flex Manuf
Sys 14(1):73–86
Fitzgerald G, Barad M, Papazafeiropoulou A, Alaa G (2009) A framework for analyzing flexibility
of generic objects. Int J Prod Econ 122(1):329–339
Jain A, Jain PK, Chan FTS, Singh S (2013) A review on manufacturing flexibility. Int J Prod Res
51(19):5946–5970
Jensen A (1997) Inter-organizational logistics flexibility in marketing channels. In: Tilanus B
(ed) Information systems in logistics and transportation. Pergamon, New York, pp 57–75
Johnson JL, Lee RP, Saini A, Grohman B (2003) Market focus strategic flexibility: conceptual
advances and integrative Mode. J Acad Mark Sci 31(1):74–89
Kochikar VP, Narendran TT (1992) A framework for assessing manufacturing flexibility. Int J
Prod Res 30(12):2873–2895
Koste LL, Malhotra MK (1999) A theoretical framework for analyzing the dimensions of
manufacturing flexibility. JOM 18(1):75–93
Kruchten PB (1995) The 4 + 1 view model of architecture. IEEE Softw 12(6):42–50
Mandelbaum M, Buzacott J (1990) Flexibility and decision making. Eur J Oper Res 44:17–27
Malhotra MK, Subbash S (2008) Measurement equivalence using generalizability theory: an
examination of manufacturing flexibility dimensions. Decision Sci 39(4):643–669
Mehrabi MG, Ulsoy AG, Koren Y, Heytler P (2002) Trends and perspectives in flexible and
reconfigurable manufacturing systems. J Intell Manuf 13(2):135–146
Narain R, Yadav RC, Sarkins J, Cordeiro JJ (2000) The strategic implications of flexibility in
manufacturing systems. Int J Agile Manuf Sys 2(3):202–213
Narasimhan R, Jayaranan J (1998) Causal linkages in supply chain management. Decision Sci 29
(3):579–605
Neely AD, Gregory M, Platts K (1995) Performance measurement system design-a literature
review and research agenda. Int J Oper Prod Man 15(4):80–116
Oke A (2005) A framework for analyzing manufacturing flexibility. Int J Oper Prod Man 25
(10):973–996
Olhager J, West BM (2002) The house of flexibility: using the QFD approach to deploy
manufacturing flexibility. Int J Oper Prod Man 22(1):50–79
Otto A, Kotzab H (2003) Does supply chain management really pay? Six perspectives to measure
the performance of managing a supply chain. Eur J Oper Res 144:306–320
Ramasesh RV, Jayakumar MD (1991) Measurement of manufacturing flexibility: a value based
approach. JOM 10(4):446–468
Rogers PP, Ohja D, White RE (2011) Conceptualizing complementarities in manufacturing
flexibility: a comprehensive view. Int J Prod Res 49(12):3767–3793
Sanchez R (1995) Strategic flexibility in product competition. Strategic Manage J 16:135–159
Sethi AK, Sethi SP (1990) Flexibility in manufacturing: a survey. Int J Flex Manuf Sys 2:289–328
Slack N (1983) Flexibility as a manufacturing objective. Int J Oper Prod Man 3(3):3–13
Slack N (1987) The flexibility of manufacturing systems. Int J Oper Prod Man 7(4):35–45
Stalk G, Evans P, Shulman LE (1992) Competing on capabilities: the new rules of corporate
strategy. Harv Bus Rev 70(2):57–69
Swamidass PM, Newell WT (1987) Manufacturing strategy, environmental uncertainty and
performance: a path analytic model. Manage Sci 33(4):509–524
Upton DM (1994) The management of manufacturing flexibility. Calif Manage Rev 36(2):72–89
Vokurka RJ, O’Leary-Kelly SW (2000) A review of empirical research on manufacturing
flexibility. JOM 18(4):485–501
Yao DD, Buzacott JA (1985) Modelling the performance of flexible manufacturing systems. Int J
Prod Res 23:945–959
Zhang M, Vonderembse A, Lim JS (2003) Manufacturing flexibility: defining and analyzing
relationships among competence, capability and customer satisfaction. JOM 21:173–191
Zelenovic’ DM (1982) Flexibility—a condition for effective production systems. Int J Prod Res
20:319–337
Part II
Techniques
Chapter 4
Design of Experiments (DOE)
4.1 Introduction
4.1.1 Guidelines
– Statement of the problem
– Choice of the factors to be varied (independent variables) and the levels to be
investigated
– Selection of the response (dependent variable)
– Choice of the experimental design
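The design-choice step can be illustrated with a small sketch (not from the book): for k two-level factors, the full factorial enumerates all 2^k treatment combinations, from which a fraction may later be selected. The factor names are those of the simulation study in Sect. 4.2.

```python
from itertools import product

def two_level_design(factors):
    """Full 2^k factorial design: every combination of low (-1) / high (+1)
    levels of the chosen factors, one row per treatment combination."""
    return [dict(zip(factors, levels))
            for levels in product((-1, +1), repeat=len(factors))]

# Three factors give 2^3 = 8 treatment combinations.
design = two_level_design(["R", "B", "C"])
```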
We see that the two-factor interactions are confounded with other two-factor
interactions.
4.1.3.3 Remark
It is beyond the scope of this chapter to present the standard technical statistical
analysis of the experimental results (Analysis of Variance). Our aim here is for the
reader to understand the capability of the DOE technique as an analysis tool. It is
not an ‘exercise’ in statistical theory.
R—a design factor representing system versatility (R1 low level, R2 high level)
B, C—two control factors, expressing scheduling rules assigning priorities to the
parts waiting to be processed based on the available information.
Factor B is part oriented, meant to prevent long delays by assigning a higher
priority to a part according to its waiting time in the system.
4.2 Impact of Flexibility Factors in Flexible Manufacturing …
Table 4.1 The half-fractional factorial design of the simulation experiment—equipment failures
not considered (reproduced from Barad 1992)

Machine versatility   Mixes        Control methods   Utilization level
Low (R1)              Mix 1 (E1)   None              High
                                   B2                Low
                                   C2                Low
                                   B2 and C2         High
                      Mix 2 (E2)   None              Low
                                   B2                High
                                   C2                High
                                   B2 and C2         Low
High (R2)             Mix 1 (E1)   None              Low
                                   B2                High
                                   C2                High
                                   B2 and C2         Low
                      Mix 2 (E2)   None              High
                                   B2                Low
                                   C2                Low
                                   B2 and C2         High
The simulation model and the experimental conditions were developed using SIMAN
network modeling. A simulation run represented a particular treatment combination.
The length of each simulation run was 14,400 min (240 h).
The analysis concentrated on the steady-state results, considering the
principles prevailing in a 'non-terminating' simulation environment. The SIMAN
output processor was extensively utilized. Through the 'filter' command, the
transient period for each run was determined, the transient results were deleted and
the remaining steady state period was divided into batches, which are the 'blocks'
of the experimental design. The means of the batches approximately represent
independent observations. The output processor automatically performs statistical
testing of the independence between batches (Fishman's method).
Lengths of the transient periods for runs with no equipment failures varied
between 10 and 15 h. Under the same conditions, the size of the independent
batches varied between 80 and 140 consecutive observations (part units manufactured).
On average, we obtained nine batches per run. Lengths of the transient
periods under equipment failures exceeded 40 h. We assumed an exponential
distribution for both the time between failures and the repair duration. The mean
time between failures was MTBF = 600 min, coupled with a mean time to repair,
MTTR = 60 min, representing a 10% failure level.
It is worth mentioning that if replicated runs had been used instead of batches
(blocks), a minimum of 120 h would have been required for one replication.
When nine independent replications per run (equivalent to our nine batches) are
considered, the total simulated time per replicated run becomes 9 × 120 = 1080 h,
as compared to our 240 h per run. Hence, a relative efficiency of more than 4:1 was
obtained.
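The batch-means procedure described above can be sketched as follows; the warm-up length, batch count and generated data are placeholders for illustration, not SIMAN's actual 'filter' mechanics:

```python
import random

def batch_means(observations, warmup, n_batches):
    """Batch-means analysis for a non-terminating simulation: discard the
    transient (warm-up) observations, split the steady-state remainder into
    equal batches, and treat the batch means as approximately independent."""
    steady = observations[warmup:]
    size = len(steady) // n_batches
    return [sum(steady[b * size:(b + 1) * size]) / size
            for b in range(n_batches)]

random.seed(1)
obs = [random.gauss(50, 5) for _ in range(1000)]   # stand-in output series
means = batch_means(obs, warmup=100, n_batches=9)  # nine batches, as in the text
```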
The following partial results of the above simulation experiments show the
designed experiment's capability to investigate main and interaction effects (positive
or negative) on the responses. Owing to the highly relevant effect of equipment
failures on the system's performance, the results are summarized separately for each
of the two failure levels considered (0 and 10%).
For each of the two equipment failure levels, the main effect of 'planned
utilization level' (factor U) was the most significant, leading to a high increase
(positive effect) in each of the two responses, standardized tardiness and WIP.
• Under no failures, the most important counter effect (leading to a decrease in
both standardized tardiness and WIP) was the interaction RU (negative effect).
Its interpretation is that a high system versatility level (R2) is especially
effective under high planned machine utilization (U2). The second best counter
effect (negative) was the main effect of factor R (system versatility), meaning that
it improved the system performance under all operating conditions. The interaction
EU (negative) was also found significant. It implies that the system with
no equipment failures can better cope with a uniform mix (E2) and that this is
especially important at a high utilization level (U2). The control strategies
B (part oriented) and C (system oriented) were not effective under no equipment
failures.
• When 10% failures were considered, the most important effect in reducing
standardized tardiness and WIP was the interaction BU (negative). This implies
that under equipment failure, the on-line control strategy assigning scheduling
priority by tardiness (factor B at its high level B2) is especially effective when
the system is operated at a high planned utilization level (U2). The next factor in
order of significance was the main effect of the control strategy B, meaning that
this strategy was effective under both low and high planned utilization. The
main effect of system versatility R was also significant, though less important
than the control factor B.
The aim of the experiment detailed in this section was to introduce some changes in
the design and the manufacturing of a special type of nickel–cadmium battery to
improve its capacity at low temperature (Barad et al. 1989). The R&D engineers
agreed to investigate six factors, each at two levels. A full factorial
experiment would require 64 observations. Because of budgetary and time
constraints, they decided to start with a condensed pilot experiment involving
one-quarter of the full experiment, i.e. 16 observations, as in instance 2 in the
introduction. This experimental design has resolution IV, where the main effects are
unconfounded with two-factor interactions but two-factor interactions are
confounded with other two-factor interactions. Many textbooks detail such designs
(see e.g. Montgomery 2012).
The original paper title, which is the title of this section, emphasized that in such
compact designs reliable previous information is necessary in order to avoid
erroneous conclusions.
The considered response was the capacity expressed in ampere-hours per plate unit
area.
As mentioned above, the selected fractional factorial experiment had resolution IV,
where two-factor interactions are confounded with other two-factor interactions. In
this experiment, a one-quarter fraction of a complete factorial, groups of four different
interactions are treated as one because their individual effects cannot be separated;
they are aliases. If any of them has a significant influence on the measured
dependent variable (here, the capacity), the statistical analysis will eventually show
it but will be unable to point to the source. They will be confounded.
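How aliases arise in such a quarter fraction can be sketched by multiplying effect "words" by the defining relation. The generators below (E = ABC, F = BCD) are an assumption for illustration — the paper does not state the generators actually used — but they do yield a resolution IV design in which BC is aliased with AE, the situation discussed later in this section:

```python
from itertools import combinations

FACTORS = "ABCDEF"

def mult(w1, w2):
    # Product of effect "words" modulo squared letters: shared letters cancel.
    return "".join(sorted(set(w1) ^ set(w2)))

# Assumed generators for a 2^(6-2) resolution IV design: E = ABC, F = BCD.
# Defining relation: I = ABCE = BCDF = (ABCE)(BCDF) = ADEF.
DEFINING = ["ABCE", "BCDF", mult("ABCE", "BCDF")]

def alias_group(effect):
    """Every effect estimated together with (aliased to) the given one."""
    return sorted({effect} | {mult(effect, w) for w in DEFINING})

# Alias group of every two-factor interaction:
for pair in ("".join(p) for p in combinations(FACTORS, 2)):
    print(pair, "=", alias_group(pair))
```

With these generators, main effects are aliased only with three-factor or higher interactions (the resolution IV property), while BC falls in one group with AE and DF.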
The a priori requirements of the fractional factorial design in this study were to
analyze and estimate the main effects of each of the six factors and each of
three two-factor interactions (BC, CD and EF). The R&D engineers considered all
the other two-factor interactions to be insignificant. Another objective of the
experiment was to supply an estimate of the variance caused by randomness.
The testing conditions expressing the selected design appear in Table 4.2.
Table 4.2 The fractional factorial design of the physical experiment (reproduced from Barad
et al. 1989)
Type of plate   Manufacturing method   Electrolyte   Temperature (°C)   Discharge rate
Negative Method I KOH −20 Low
Negative Method I KOH + AD +20 Low
Negative Method I KOH + IMP +20 High
Negative Method I KOH + AD + IMP +20 High
Negative Method II KOH +20 High
Negative Method II KOH + AD −20 High
Negative Method II KOH + IMP +20 Low
Negative Method II KOH + AD + IMP −20 Low
Positive Method I KOH + IMP +20 Low
Positive Method I KOH + AD + IMP −20 Low
Positive Method I KOH +20 High
Positive Method I KOH + AD −20 High
Positive Method II KOH + IMP −20 High
Positive Method II KOH + AD + IMP +20 High
Positive Method II KOH −20 Low
Positive Method II KOH + AD +20 Low
The findings, based on the statistical data analysis, showed that the factors that
significantly affected the battery capacity were:
F—discharge rate, E—temperature and BC—the interaction between the addition to
the electrolyte and the manufacturing method. The estimated standard deviation
caused by randomness was 0.51 ampere-hour/plate unit area.
The R&D engineers were reluctant to accept all the results of the experiment.
They expected some imbalance between the active material in the positive and the
negative plates, i.e. that the effect of factor A would be significant. But, as men-
tioned above, the data analysis did not find the effect of factor A significant. Also, it
was hard to logically explain the highly significant effect of BC, the interaction
between the addition to the electrolyte and the manufacturing method. But, luckily,
at this stage we got supplementary data on the already conducted experiment. It
should be remembered that the experiment comprised 16 physical experimental
units that had to be tested at a certain temperature and discharge rate, as detailed in
Table 4.2. In practice, each capacity test according to the design conditions was
preceded by a required preliminary capacity test, conducted for all units at normal
conditions, high temperature level (E2) and low discharge rate (F1). After that, each
physical unit was recharged and retested as designed. These preliminary 16
observations were performed because of technical requirements. They represented a
full factorial with 4 factors A, B, C and D (the first 3 columns of Table 4.2),
executed at constant values of the remaining factors E and F (high temperature and
low discharge rate).
The analysis of the second data set revealed that factor A (type of plate) was the
most significant. Factor C (manufacturing method) was also significant, and so was
their interaction, AC. At high temperature (E2) and low rate of discharge (F1), the
estimated standard deviation caused by randomness was 0.11 ampere-hour/plate unit
area, much lower than its previous estimate (0.51). Integrating the findings of the
two data sets, several conclusions could be reached.
• The main effects of factors A and C, which were significant at normal testing
conditions (high temperature level and low rate of discharge) were obscured and
thus insignificant when the results were averaged over the entire range of test
conditions imposed on the originally designed experiment. The results of the
second data set indicated that the negative plates provided significantly more
capacity under normal test conditions (as the R&D engineers expected). But
they were affected more than the positives under severe test conditions, see
below.
• The statistical analysis of the first data set found the interaction BC very sig-
nificant. By examining its aliases (Table 4.3) it was seen that the alias group
comprising interaction BC also comprised interaction AE, between the type of
plate and temperature. The meaning of this significant interaction is that the
temperature effect is not the same for the two types of plates.
• Looking at the combined findings of the two data sets it can be seen that this
result fits as a piece in a puzzle. In designing the experiment AE was not
considered an important interaction. In its alias group the only considered
interaction was BC. Therefore, the highly significant effect that was attributed to
BC (and was hard to explain) was actually the effect of AE.
• The apparent inconsistency of results with regard to C (manufacturing method)
between the two data sets could be explained similarly. Presumably, the inter-
actions CE and CF, initially assumed to be negligible, were actually important.
They could not be tested because they were included in the estimate of the
random variance. Some confirmation of this hypothesis is provided by the
significant increase of the random standard deviation when carrying out the
experiment at varying temperature and discharge rate (0.51) as compared to its
estimate under normal testing conditions (0.11).
• Fractional factorial design is an efficient technique. It can analyze the effects of
many factors using a low number of observations. Here, 16 observations were
used to investigate the effects of 6 main factors and 3 two-factor interactions.
However, it should be used with caution. As mentioned above in the choice of
the design, the design had resolution IV, where two-factor interactions are
confounded with other two-factor interactions. Reliable prior information,
theoretical and/or empirical, is necessary to avoid erroneous conclusions, as
illustrated by the analysis here of the first data set.
4.4 Flexibility Factors in Logistics …

The third example (Barad and Even-Sapir 2003) examined the numerical value of a
complex analytical expression representing a customer oriented logistics
performance measure, denoted in the original paper logistics dependability, as calculated
for different values of its parameters (the given numerical values of the investigated
factors). Accordingly, the dependent variable (or response) was not a stochastic
variable but a deterministic one. The experimental design was a full factorial with
five factors and enabled a methodical examination of all factor effects and espe-
cially their interactions on this deterministic expression, thus shedding light on
some complex aspects of the logistics decision problem. The scenario of this
example was framed in an inventory system.
One aim of the original paper was to get a better understanding of the logistics
decision problems and their eventual solutions. To achieve that aim it was necessary
to examine the effects of several factors on the logistics dependability, a perfor-
mance measure of a given logistics system.
Logistics dependability emphasized quick response delivery through reduced
supply uncertainty in an inventory replenishment system. It was defined as the
probability that there is no supply shortage until the end of the replenishment cycle,
for given starting conditions and inventory policy.
As the complexity of the equations for calculating logistics dependability (see the
original paper) did not permit an analytical approach, the paper showed how the
statistical experimental technique could be used in an innovative manner to easily
obtain effective results. Several factor types that may affect logistics dependability
were considered: design factors, which could be manipulated in order to increase
the system dependability, and environmental factors, which were selected to
represent different normal operating conditions.
As the paper dealt with flexibility, a factor representing changes was also
selected. Such a factor is particularly relevant for emphasizing the value of flexibility,
represented in the paper by trans-routing flexibility, a design factor measured by
variable u, the number of transshipment links per end user at the echelon level. In a
rigid system, which has no transshipment links, u = 0. In a flexible system with N
end users at the same echelon level, maximal trans-routing flexibility is obtained for
u = N − 1, meaning that all end users at the same echelon level are inter-linked. See
also Part I, Chap. 3, Sect. 3.
The experiment was designed as a full two-level factorial experimental design in
five factors, 2⁵. The investigated system consisted of four end users, N = 4.
The design factors were:
A Trans-routing flexibility, supporting decision making in real time, measured by
u (u = 0 low level, u = 3 high level)
B Service level, supporting static planning based decisions, measured by 1 − α
(0.8—low level, 0.9—high level). The environmental factors were
C Lead time, measured in terms of a fraction of the replenishment cycle time L
(L/8 low level, L/4 high level)
The negative interaction between the change factor E and the design factor B
means that under changing conditions increasing the service level is not an effective
procedure. By contrast, under changing conditions trans-routing flexibility is more
effective; the positive effect of interaction AE illustrates this. As shown by the
positive significant effect of interaction AD, trans-routing flexibility (factor A) is
more effective for higher demand variability than it is for lower demand variability.
This result can be explained by the fact that high demand variability necessitates a
high stock level. This stock is not well utilized in a rigid system. By contrast, a
flexible system utilizes this higher stock in a dynamic way during a replenishment
cycle, thus improving the system's dependability.
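The way main and interaction effects are estimated from a two-level full factorial can be sketched as follows; the response function below is a made-up stand-in for the logistics-dependability computation, not the paper's actual model:

```python
from itertools import product

def effects(response, factors):
    """Estimate main and two-factor interaction effects in a two-level full
    factorial: each effect is the average response at the '+1' level of its
    contrast minus the average at the '-1' level."""
    runs = list(product((-1, 1), repeat=len(factors)))
    ys = [response(dict(zip(factors, run))) for run in runs]
    half = len(runs) / 2
    est = {}
    for i, f in enumerate(factors):                      # main effects
        est[f] = sum(r[i] * y for r, y in zip(runs, ys)) / half
    for i in range(len(factors)):                        # two-factor interactions
        for j in range(i + 1, len(factors)):
            est[factors[i] + factors[j]] = sum(
                r[i] * r[j] * y for r, y in zip(runs, ys)) / half
    return est

# Hypothetical deterministic response in two of the paper's factors,
# A (trans-routing flexibility) and E (changes):
resp = lambda x: 0.8 + 0.05 * x["A"] - 0.04 * x["E"] + 0.03 * x["A"] * x["E"]
est = effects(resp, ["A", "E"])
```

For a deterministic response such as this one, the estimated effect of each factor is exactly twice its coefficient, and a positive AE estimate reproduces the "flexibility pays off more under change" pattern described above.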
k is a constant which can be determined when the loss for a particular value of y
is known. For instance, suppose that the tolerance interval for y is (τ − Δ, τ + Δ)
and the cost of discarding the product is D £.
Using this information, let y = τ + Δ. Then, l(τ + Δ) = kΔ² = D and we obtain
k = D/Δ².
A numerical example
A thermostat is set up to a nominal value, τ = 22 °C. Its tolerance interval is
(τ − 0.5, τ + 0.5).
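The loss computation with k = D/Δ² can be sketched directly; since this excerpt does not give the thermostat's discard cost, D = 8 currency units is an assumed value:

```python
def quadratic_loss(y, target, Delta, D):
    """Taguchi's quadratic loss l(y) = k (y - target)^2 with k = D / Delta^2,
    so the loss at a tolerance limit (target +/- Delta) equals the discard cost D."""
    k = D / Delta ** 2
    return k * (y - target) ** 2

# Thermostat of the text: nominal 22 degC, tolerance interval (21.5, 22.5).
# The discard cost is not given in this excerpt; D = 8 is an assumed value.
loss_at_limit = quadratic_loss(22.5, 22.0, 0.5, 8)
```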
[Figure: Taguchi's quadratic loss function l(y) = k(y − τ)², zero at the target τ, with tolerance limits τ − Δ and τ + Δ]

4.5 Taguchi's Experimental Design Techniques
Common points
1. Investigating the influence of several factors on a measured variable.
2. Orthogonal structure of the multi-factor experiments.
Main differences
1. Taguchi classified the factors into design factors and noise factors. The design
factors are those product or process parameters, whose nominal settings can be
chosen by the engineers (R&D) and define the product or process specifications.
The noise factors cause the performance characteristics to deviate from their
nominal settings and are not under control. The ‘key’ noise factors should be
identified, and included in the experiment. During the experiment, they should
be under control.
There is no such classification in the classical DOE.
2. The object of the Taguchi experiment is to identify the settings of the design
parameters at which the effect of the noise factors is minimal. These ‘optimal’
settings are identified by systematically varying the settings of the design
parameters in the experimental runs and comparing the effect of noise factors for
each test run.
This difference follows from the previous point.
3. Taguchi’s performance statistics is ‘signal to noise’ (s/n) ratios and not the
measured value of variables as in the classical approach. According to Box,
1986, it is better to study the mean ȳ and the variance, s2 separately rather than
combining them into a single ratio.
A parameter design experiment consists of two parts: a design matrix and a noise
matrix. The design matrix is the ‘inner array’, while the noise matrix is the ‘outer
array’. The columns of a design matrix represent the design parameters (factors).
The rows represent the different levels of the parameters in a design test run (trial).
The rows of a noise matrix represent key noise factors and the columns represent
different combinations of the levels of the noise factors. In a complete parameter
design experiment, each trial of the design matrix is tested at each of the n columns
(trials) of the noise matrix. These represent n replications of the design trials.
Assuming there are m design trials, the total number of runs is m·n.
Let us recall the example presented in Sect. 4.3, which investigated the effects of
several factors on the capacity of a nickel–cadmium battery. Each factor was tested
at two levels, say 1 and 2. A—active material by type of plate, B—additions to
electrolyte, C—manufacturing method, D—testing temperature, E—discharge rate.
Assume that A, B and C are design factors, while D and E are key noise factors
and the aim of the experiment is to find the ‘optimal’ value of each design factor
that minimizes the effect of the noise factors (see Table 4.4). Table 4.4 shows a
Taguchi experimental layout with an inner array for design (control) factors A, B
and C and an outer array for factors D and E. In this experimental arrangement,
there are 8 design trials and 4 test conditions with respect to the noise factors.
Accordingly, for each design trial we obtain four resulting data: y1, y2, y3, y4. These
data are used to calculate the value of Taguchi’s performance statistics, the S/N
ratio.
In the classical DOE we investigated the factor effects on the measured value (of a
continuous variable) or, for replicated experiments, on the average. Taguchi
created a transformation of the replications to another variable, which measures the
variation. The transformation is the signal-to-noise (S/N) ratio. There are several S/N
ratios, depending on the ideal value of the measured characteristic: nominal is best,
lower (say, cost) is best, higher (say, strength) is best.
An example of an S/N ratio for nominal is best: S/N ratio = 20 log [ȳ/s].
In terms of this transformed variable, the optimization problem is to determine
the optimum factors levels so that the S/N ratio is maximum, while keeping the
mean on target.
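The crossed-array layout and the nominal-is-best statistic can be sketched as follows; the capacity readings are invented for illustration:

```python
import math
from itertools import product

def sn_nominal_is_best(ys):
    # Taguchi's nominal-is-best statistic: S/N = 20 * log10(ybar / s),
    # where s is the sample standard deviation of the replications.
    n = len(ys)
    ybar = sum(ys) / n
    s = math.sqrt(sum((y - ybar) ** 2 for y in ys) / (n - 1))
    return 20 * math.log10(ybar / s)

# Inner array: 2^3 full factorial over design factors A, B, C (8 design trials).
inner = list(product((1, 2), repeat=3))
# Outer array: 2^2 over noise factors D, E (the four noise test conditions).
outer = list(product((1, 2), repeat=2))

# Hypothetical replications y1..y4 of one design trial under the four
# noise conditions; a larger S/N means less sensitivity to noise.
sn = sn_nominal_is_best([10.1, 9.8, 10.3, 9.9])
```

The complete experiment runs every inner-array trial under every outer-array condition, m·n = 8 × 4 = 32 runs, and the trial with the largest S/N (mean held on target) gives the 'optimal' design-factor settings.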
4.5 Taguchi’s Experimental Design Techniques 79
Outer array
D  1 2 1 2
E  1 1 2 2
4.5.5 Summary
References
Barad M (1992) The impact of some flexibility factors in FMSs—a performance evaluation
approach. Int J Prod Res 30:2587–2602
Barad M (2014) Design of experiments (DOE)—a valuable multi-purpose methodology. Appl
Math 5:2120–2129
Barad M, Even-Sapir D (2003) Flexibility in logistics systems—modeling and performance
evaluation. Int J Prod Econ 85:155–170
Barad M, Bezalel C, Goldstein JR (1989) Prior research is the key to fractional factorial design.
Qual Prog 22:71–75
Box GE, Hunter WG, Hunter JS (2005) Statistics for experimenters, 2nd edn. Wiley, New York
Dehnad K (ed) (1989) Quality control, robust design, and the Taguchi method. Wadsworth &
Brooks/Cole, California
Law AM, Kelton WD (2000) Simulation modeling and analysis, 3rd edn. McGraw Hill, New York
Montgomery DC (2012) Design and analysis of experiments, 5th edn. Wiley, New York
Ross PJ (1996) Taguchi techniques for quality engineering, 2nd edn. McGraw Hill, New York
Taguchi G (1986) Introduction to quality engineering. Asian Productivity Organization, Tokyo
Chapter 5
Petri Nets
This chapter describes the fundamental concepts and characteristics of Petri Nets
(PNs). It follows the extensions that improved the implementation capabilities of
the original PNs and presents some applications. The first and most relevant
extension was time modeling, a vital aspect of system performance not considered
in the original version.
The chapter comprises six sections. The first section is an introduction to Petri
Nets. The second section describes basic definitions of PNs and the time modeling
which transformed them into Timed Petri Nets (TPNs). The third section illustrates
the decomposition of a Timed Petri Net representing an open queuing network. The
fourth section describes a TPN based method for estimating the expected steady
state utilization of a workstation, including disturbances. The fifth section presents
TPNs as a verification tool for simulation models at steady state, including an example
where the net comprised processing resources and Automated Guided Vehicles
(AGVs). The sixth section describes weaving processes as an additional application
of TPNs. A discussion of Petri Nets as a versatile modeling structure was recently
published in Applied Mathematics; see Barad (2016).
5.1 Introduction
A Petri Net (PN) is both a graphical and an analytical modeling tool. Carl Adam
Petri developed it in 1962, in his Ph.D. thesis at Bonn University, Germany, as
a special class of generalized graphs or nets. The chief attraction of this tool is the
way in which the basic aspects of various systems are identified conceptually,
through a graphical representation, and mathematically, through formal programming
languages.
As a graphical tool, PNs can be used as a visual-communication aid similar to
flow charts, block diagrams, and networks. In addition, in these nets tokens sim-
ulate the dynamic activities of systems. As a mathematical tool, it allows setting up
© The Author(s) 2018
M. Barad, Strategies and Techniques for Quality and Flexibility,
SpringerBriefs in Applied Sciences and Technology,
https://doi.org/10.1007/978-3-319-68400-0_5
algebraic equations for formally analyzing the mathematical models governing the
systems’ behavior, similarly to other approaches to formal analysis, e.g. occur-
rences nets and reachability trees.
PNs are capable of describing and analyzing asynchronous systems with concurrent
and parallel activities. Using PNs to model such systems is simple and straightforward,
making them a valuable technique for analyzing complex real time systems
with concurrent activities. They are capable of modeling material and information
flow phenomena, detecting the existence of deadlock states and inconsistency, and
are well suited for the study of Discrete Event Systems (DES). As such, they have
been applied in manufacturing and logistics, hardware design and business processes,
as well as in distributed databases, communication protocols and DES
simulation (Moody and Antsaklis 1998; Yakovlev et al. 2000).
Early attempts to use PNs for modeling systems' behavior revealed some
drawbacks. There was no time measure in the original PN and no data concepts.
Hence, their modeling capability was limited and models often became excessively
large, because all data manipulation had to be represented directly in the net
structure. To overcome these drawbacks, over the years PNs have been extended in
many directions, including time, data and hierarchy modeling. Currently, there
exists a vast literature on this subject, covering both its theoretical and its
application aspects. This chapter is intended to help the reader get acquainted with the
basic notions and properties of PNs and then to offer a more general view of their
capabilities by discussing some of their many extensions. It does not attempt to cover
PNs in a comprehensive way (see also Barad 2003).
5.2 Petri Nets and Their Time Modeling
Let us present a formal definition of Petri Nets, first as a graphical tool and then
as an analytical tool.
PN as a Graphical Tool
A graph consists of nodes (vertices) and arcs (edges) and the manner in which they
are interconnected. PNs have two types of nodes: places and transitions.
Formally, a Petri net is a graph, N = (P, T, A, M) where:
P is a set of places (P1,P2,…,Pm), portrayed by circles, representing conditions.
T is a set of transitions (T1,T2,…,Tn), portrayed by bars, representing instanta-
neous or primitive events.
A is a set of directed arcs that connect them; input arcs of transitions (P × T)
connect places with transitions and output arcs (T × P) start at a transition.
M a marking of a Petri net is a distribution of tokens (or markers) to the places of
a Petri net. As tokens move according to firing rules, the net marking changes.
A transition can fire (meaning the respective event occurs) if there is at least one
token in each of its input places (the necessary conditions for its occurrence are
fulfilled). When a transition fires it removes a token from each input place and adds
a token to each output place.
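The firing rule just described can be sketched in a few lines; the place names follow the PN1 example of this section:

```python
# Minimal Petri-net firing sketch: a transition is enabled when every one of
# its input places holds at least one token; firing removes one token from
# each input place and adds one token to each output place.
def enabled(marking, inputs):
    return all(marking[p] >= 1 for p in inputs)

def fire(marking, inputs, outputs):
    assert enabled(marking, inputs), "transition not enabled"
    m = dict(marking)          # the marking changes, so work on a copy
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] += 1
    return m

# Firing T1 moves the single token from P0 to P1:
m0 = {"P0": 1, "P1": 0, "P2": 0}
m1 = fire(m0, inputs=["P0"], outputs=["P1"])
```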
Cij = −1 for Pi ∈ I(Tj)
      +1 for Pi ∈ O(Tj)
       0 otherwise
The meaning of the above equation is that upon firing transition Tj, j = 1,2,…,n,
one token is removed from each input place Pi, i = 1,2,…,m and one token is added
to each output place. The incidence matrix C1 of PN1 in Fig. 5.1 is given.
Incidence matrix C1
       T1   T2   T3
P0     −1    0   +1
P1     +1   −1    0
P2      0   +1   −1
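Using C1, the marking change caused by firings follows the state equation M′ = M + C·x, where x counts how many times each transition fires. The cyclic reading in the comments (P0 → T1 → P1 → T2 → P2 → T3 → P0) is inferred from the matrix, not stated explicitly in the text:

```python
# Incidence matrix C1 of PN1, rows P0..P2, columns T1..T3.
C1 = [
    [-1,  0,  1],   # P0: input place of T1, output place of T3
    [ 1, -1,  0],   # P1: output place of T1, input place of T2
    [ 0,  1, -1],   # P2: output place of T2, input place of T3
]

def next_marking(M, x):
    """State equation M' = M + C1 @ x for firing-count vector x."""
    return [m + sum(C1[i][j] * x[j] for j in range(3)) for i, m in enumerate(M)]

M0 = [1, 0, 0]                      # one token in P0
M1 = next_marking(M0, [1, 0, 0])    # fire T1 once: token moves to P1
```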
To convert PN1 into TPN1 we use incidence matrix C1 and delay matrix Z1. Z1
is defined as an m × m square matrix (here m = 3) with diagonal elements zi,
i = 0, 1, 2, for i = j, and zeros otherwise:

     | z0   0    0  |        | I1 |
Z1 = | 0    z1   0  |    I = | I2 |
     | 0    0    z2 |        | I3 |
5.3 Decomposing TPNs of Open Queuing Networks
In open queueing networks, customers enter and leave the network in an
unconstrained manner. Barad (1994) provided a detailed illustration and proof for
decomposing a TPN model of an n-part-type open queueing network with M
workstations into M independent TPNs.
The essence of the approach consists in proving that, under steady state con-
ditions, as defined by Sifakis and applied to open systems, the arrival flow of any
such job to its assigned workstation equals the input flow to the system of its parent
part type.
Figure 5.2 graphically illustrates an open queueing network. It consists of M = 3
workstations and n = 4 part types. Parts of type j, (exogenous customers com-
prising the mix served by the queueing network) arrive at the system with given
flow rate I0j , j = 1, 2, 3, 4. Processing of a part is composed of a number of
operations (jobs) to be performed in a given order by previously assigned stations
(deterministic routing). Upon arrival, each part is routed to its initial station.
C · I = 0,  I > 0    (5.1)

and

Js · Q = Js · Q(0),  with Q = Z · C⁺ · I    (5.2)

where [Js], s = 1, 2, …, k, is a generator of the set of solutions of (5.1), [Js · C = 0],
and Q(0) is a vector representing the initial marking of the net.
(C⁺) is obtained by replacing the '−1' elements in the incidence matrix C by
zeros.
Intuitively, Eqs. (5.1) and (5.2) can be respectively associated with conservation
of flow and conservation of tokens.
P = {P0, P1, P2, P3, P4}    T = {T1, T2, T3, T4, T5, T6}
          T1   T3   T4   T2   T5   T6
     P0    0   −1   +1    0   −1   +1
     P1   +1   −1    0    0    0    0
C2 = P2    0   +1   −1    0    0    0
     P3    0    0    0   +1   −1    0
     P4    0    0    0    0   +1   −1
Applying Eq. (5.1) to incidence matrix C2, C2 · I = 0 (I > 0), we obtain:
Flow of part type 1: I01 = I3; I3 = I4. Flow of part type 2: I02 = I5; I5 = I6.
Combining the equations:

I01 = I3 = I4,    I02 = I5 = I6    (5.3)

It is seen that under steady-state conditions, the entering flow of part type 1 to
station 1, I01, is equal to its exiting flow, I4, and the entering flow of part type 2, I02, is
equal to its exiting flow from the station, I6. According to the above, indeed
Eq. (5.1) represents conservation of flow.
Let us now apply Eq. (5.2), representing conservation of tokens. In this example,
a generator of the set of solutions of Eq. (5.1) is J1:

J1 = [1, 0, 1, 0, 1],    J1 · C2 = 0    (5.4)

The "1" values represent all the possible steady states of workstation 1: P0—idle,
P2—processing part type 1, P4—processing part type 2.
Substituting Eq. (5.4) in Eq. (5.2) and using Eq. (5.3) we obtain:

Q0 + Q2 + Q4 = z0 (I01 + I02) + z2 I01 + z4 I02    (5.5)

where Q0, Q2, Q4 are the token contents of the corresponding places representing the
possible steady states of workstation 1.
The left hand side (LHS) of Eq. (5.5) represents the sum of tokens over the
steady states of workstation 1. Since there is but one resource entitled workstation 1
and these three states represent all its mutually exclusive states, LHS = 1. The three
components of the right hand side (RHS) represent the respective proportions of time
that workstation 1 is expected to spend in each state under steady state conditions.
After substitution, Eq. (5.5) becomes:

1 = z0 (I01 + I02) + z2 I01 + z4 I02    (5.6)

z2 I01 and z4 I02 are the respective contributions of part type 1 and part type 2 to the
utilization of station 1. As I01 + I02 represents the total entering flow to station 1,
intuitively, z0 can be interpreted as the station's expected idle time between any two
consecutive arrivals of parts.
Any feasible solution has to satisfy z0 > 0, leading to:

z2 I01 + z4 I02 < 1    (5.7)

The LHS of this inequality is the expected utilization of the station at steady
state.
X
n
ðmÞ ðmÞ ðmÞ ðmÞ ðmÞ ðmÞ
rm ¼ rj þ rf rj ¼ I0j tj j ¼ 1; 2; . . .; n rf ¼ If t f ð5:8Þ
j
ðmÞ ðmÞ
rj and rf are the respective contributions of I0j , the input flow of part types j,
j = 1,2,…,n, and that of If, the flow of disturbances to the station m utilization.
ðmÞ ðmÞ
tj and tf are the respective expected operation duration of part type j, j = 1,2,
…,n, and the expected treatment of a disturbance at station m, m = 1,2,…, M.
Provided the constraint rm < 1 is satisfied for each station in the open queuing
network, Eq. (5.8) is a valid expression of the TPN based expected steady state
utilization rm of any processing station m in the system.
A numerical example
Let us assign numerical values to the queuing network in Fig. 5.2 and calculate the
expected utilization of each workstation. The queueing network comprises M = 3
work stations, processing n = 4 part types, according to the deterministic routing
illustrated in Fig. 5.2.
According to the preservation of flow, at steady state the entering flow of each
part type is equal to its exiting flow, provided the expected steady state utilization of
each workstation rm, m = 1, 2, …, M, is less than 1.
The entering flow of part type 1, I01 = 1/10 parts/s, is to Station 1, where its
processing time t1^(1) = 5.0 s. It proceeds to Station 3, with processing time
t1^(3) = 1.7 s, and exits.
The entering flow of part type 2, I02 = 1/100 parts/s, is also to Station 1, where its
processing time t2^(1) = 2.0 s. It proceeds to Station 2, with processing time
t2^(2) = 2.7 s, then to Station 3, with processing time t2^(3) = 3.5 s, returns to
Station 2, with processing time t′2^(2) = 3.8 s, and exits.
The entering flow of part type 3, I03 = 1/25 parts/s, is to Station 2, where its
processing time t3^(2) = 2.9 s. It proceeds to Station 3, with processing time
t3^(3) = 2.5 s, returns to Station 2, with processing time t′3^(2) = 4.0 s, and exits.
The entering flow of part type 4, I04 = 1/20 parts/s, is also to Station 2, with
processing time t4^(2) = 4.4 s. It proceeds to Station 3, with processing time
t4^(3) = 7.0 s, and exits.
We shall now apply Eq. (5.8) to calculate the TPN based expected utilization of
each of the three stations, r1, r2 and r3.
5.4 TPN Based Expected Station Utilization at Steady State 89
r1 = I01 t1^(1) + I02 t2^(1) = 1/10 × 5.0 + 1/100 × 2.0 = 0.520
r2 = I02 (t2^(2) + t′2^(2)) + I03 (t3^(2) + t′3^(2)) + I04 t4^(2)
   = 1/100 × (2.7 + 3.8) + 1/25 × (2.9 + 4.0) + 1/20 × 4.4 = 0.561
r3 = I01 t1^(3) + I02 t2^(3) + I03 t3^(3) + I04 t4^(3)
   = 1/10 × 1.7 + 1/100 × 3.5 + 1/25 × 2.5 + 1/20 × 7.0 = 0.655
As the calculated expected utilization of each station is less than 1, the steady
state condition holds and the numerical results are valid.
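The calculations above can be reproduced with a short script that applies Eq. (5.8) directly to the routing and timing data of the example (there are no disturbances here, so the rf^(m) term is zero); the data structures themselves are of course our own.

```python
# Eq. (5.8) applied to the example of Fig. 5.2: r_m = sum_j I0j * tj^(m).
# A station revisited by the same part type accumulates both processing
# times (e.g. t + t' for part types 2 and 3 at Station 2).

rates = {1: 1 / 10, 2: 1 / 100, 3: 1 / 25, 4: 1 / 20}  # I0j (parts/s)

# visit_times[j][m]: total expected processing time of part type j at station m
visit_times = {
    1: {1: 5.0, 3: 1.7},
    2: {1: 2.0, 2: 2.7 + 3.8, 3: 3.5},
    3: {2: 2.9 + 4.0, 3: 2.5},
    4: {2: 4.4, 3: 7.0},
}

r = {m: sum(rates[j] * t[m] for j, t in visit_times.items() if m in t)
     for m in (1, 2, 3)}
print({m: round(v, 3) for m, v in r.items()})  # → {1: 0.52, 2: 0.561, 3: 0.655}
assert all(v < 1 for v in r.values())          # steady state condition holds
```

Each visit of a part type contributes its flow times its expected processing time, exactly as in the r2 calculation above.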
[Figure: two TPN models of a workstation with disturbances, Versions A and B, each shown as a net with transitions T1–T6 together with its incidence matrix over places P0, P1, P3, P4, P5.]
Fig. 5.3 Disturbances as TPN models—Versions A and B (reproduced from Barad 1994)
90 5 Petri Nets
rm(A) = z3 I1 + z5 I4   (5.9)
5.5 TPNs as a Verification Tool of Simulation Models at Steady State
An important and difficult task in any simulation project is the validation and
verification of the simulation model. Validation is the process intended to confirm
that the conceptual simulation model, within its domain of applicability, is an
accurate representation of the real world system it describes. The verification
process seeks to determine whether the computerized simulation model was built right and
operates as intended. There is a rich literature on these topics (see e.g. Shannon
1981; Sargent 1991; Balci 1994).
From an output analysis perspective, system simulation is classified as ter-
minating or steady state. Terminating simulations aim to estimate system param-
eters for periods well defined in terms of starting and ending conditions. Steady state
simulations aim to estimate system parameters obtained for infinite simulation time.
Before reaching steady state conditions, in which the system parameters attain stable
values, the system behavior undergoes transition periods, which reflect the initial
system operating conditions (see e.g. Kelton 1989). In the analysis of such systems,
an important objective is to avoid bias in parameter estimation, possibly intro-
duced by data collected during transition periods. One among the many approaches
to simulation verification is comparing simulation outputs with analytical results
(Kleijnen 1995). In many simulation studies dealing with manufacturing systems or
computers and communication systems, networks of queues represent the reality
modeled by the system simulation. Hence, for simulation verification of such
systems analytical approaches describing queueing networks may be appropriate.
Section 5.3 described a TPN based decomposition approach of a queueing
network consisting of M workstations. Equation (5.8) in Sect. 5.4 is a formula for
calculating the TPN based expected utilization at steady state of each workstation in
the decomposed network. Accordingly, TPNs can be considered an appropriate
analytical approach for verification of computerized queueing network simulations
at steady state. Barad (1998) presented this approach at the 1998 Winter Simulation
Conference. Two simulation cases were described. In case 1 the network consisted
solely of processing resources. In case 2 the network comprised processing
resources as well as Automated Guided Vehicles (AGVs) in a Segmented Flow
Topology (SFT), based on the work of Barad and Sinriech (1998). These authors
developed a method for verifying the segmentation phase in the SFT procedure
through Petri Net based approximations. They extended the TPN decomposition
approach developed in Barad (1994) to include, besides time, the distance dimen-
sion, which applies to the AGV movement through the shop floor. To methodically
examine the accuracy of this proposed simulation verification method, they
designed and carried out a multifactor simulation experiment of an investigated
system. Here we shall solely show the graphical presentation of the decomposed
TPN modeling of the AGV and some results of the simulation experiment.
PN based activity cycle of an AGV
Figure 5.4 describes the PN based activity cycle associated with an AGV assigned to
serve a bi-directional segment and its incidence matrix (the time labels are omitted).
The cycle starts with a part j whose operation at a workstation on its processing
route has been completed and which calls for an AGV to transport it to its next station. The
part arrival is described by the firing of transition T1, with input flow I0j, which places a
token in P1, denoting that the part is waiting for transportation. The token will stay there
until a token arrives in P9, meaning the AGV is available and waiting at the
pick-up location. Hence, transportation can start. This event is described by
[Figure: PN based activity cycle of the AGV serving a bi-directional segment, defined by the following transitions and places.]
Transitions: T1 part arrival; T2 start part transportation; T3 transportation completed; T4 start moving to new pick-up from last drop-off; T5 start moving to staging; T6 staging reached; T7 start moving to new pick-up from staging; T8, T9 new pick-up reached.
Places: P1 part is waiting; P2 part is being transported; P3 AGV reached destination; P4 AGV travels empty from drop-off station to new pick-up; P5 AGV travels empty from drop-off to staging location; P0 AGV available at staging; P7 AGV travels empty from staging to new pick-up; P8 a call for AGV; P9 AGV available at pick-up.
Fig. 5.4 PN modeling of an AGV: Graphical representation and Incidence Matrix (reproduced
from Barad and Sinriech 1998)
transition T2, which is thus enabled. Transportation of the part to its delivery station,
whose duration is z2, is denoted by a token in P2. After z2 time units the loaded
AGV will reach its delivery station. This event is described by the firing of tran-
sition T3, which inserts a token in P3, meaning the AGV has reached the drop-off
point, where it may possibly have to wait.
Let us return for a moment to transition T1, representing the arrival of part j. We see
that besides P1, T1 has a second output place, P8. A token in this place signals a call for
the AGV: a part is waiting to be transported. P8 is input to transition T4 and also to
transition T7. This information will reach the AGV either immediately after the
drop-off (enabling the firing of T4) or at its staging location P0 (enabling the firing of T7).
Accordingly, upon leaving P3, the AGV starts moving empty, either towards
a new pick-up station (firing of T4) or, if T4 did not fire, meaning there was no call,
towards its staging location P0 (firing of T5). Firing of T4 places a token in P4,
which resides there z4 time units (the duration of the empty travel from the last drop-off
to the new pick-up). It ends when the AGV reaches the new pick-up. This event is
expressed by the firing of T9, which places a token in P9, meaning the AGV is ready
to transport the part that required service. Firing of T5 places a token in P5, meaning
that since there were no calls, the AGV will start moving empty to its staging location
P0, with expected duration z5. The token in P5 will stay there z5 time units until T6
fires, placing a token in P0. This means that the AGV is available at its staging location.
Summary of the simulation experiment results
As mentioned above, the analytical TPN model and the multi-factor simulation are
detailed in Barad and Sinriech (1998).
Here is a short summary of the simulation experiment.
a. The effects of four factors were investigated:
– Planned utilization of the resources
– Inter-arrival statistical distributions
– Generating seeds of Siman software
– Queue discipline (solely for the processing resources)
b. The dependent variable of the experiment, the accuracy of the TPN based
utilization estimates, was expressed by the difference (in percentage) between
the steady state simulation results (respective mean utilizations of the resources)
and their TPN counterpart.
c. The experiment investigated the utilizations of three resources in the queueing
network: two moveable resources, AGV 1 and AGV 2, and one of the network's
processing resources.
The separate results of the three resources are as follows.
Moveable resources
AGV 1—Its low and high planned utilizations were 0.5 and 0.6 (both relatively
low). The overall accuracy was 0.2%. The results were not significantly affected by
any of the investigated factors.
AGV 2—Its low and high planned utilizations were 0.65 and 0.78 (higher than
those of AGV 1). The accuracy at the low planned utilization, 0.65, was 1.6%,
while that at the high planned utilization, 0.78, was 2.9%.
The accuracy of the moveable resources was significantly affected by the
planned utilization. It was also significantly affected by the generating seeds of the
Siman software as well as by the interaction between the seeds and the inter-arrival
distributions.
Processing Resource—Its low and high planned utilizations were 0.75 and 0.9
(both relatively high). The accuracy was not significantly affected by the planned
utilization. As both planned levels (0.75 and 0.9) were high, the overall accuracy
was about 2.9%, similar to that of AGV 2 at its high planned utilization (0.78).
All Resources
The accuracy of the TPN based utilization estimates for the simulation experi-
ment varied from 0.2% at low traffic to about 2.9% at higher traffic. The relatively
low differences between the numerical results of the two methods provided evi-
dence that the TPN technique can be used to verify simulation models. Possible
flaws in the computerized simulation model can be detected through observed
discrepancies between the TPN based and the simulation results. An important
advantage of the proposed method is its ease of use. Also, the method imposes
no restrictions on the size of the network to be decomposed.
Barad and Cherkassky (2010) applied the TPN based method for estimating the
expected resource utilization at steady state to weaving processes.
They broadened the TPN decomposition approach, as applied to manufacturing
systems of discrete items, to include the weaving process, which is a continuous pro-
cess. Their study focused on the conceptual planning stage, during which valid
approximation methods are needed. The discrete input to the system, representing
orders of varying length and fabric types, was transformed into continuous flows of
fabric length by types. The procedure considered both the continuous weaving
process as well as the beam change, which is a discrete operation. Here is a
summary of their study.
The weaving process
The weaving process produces fabrics by interlacing the weft yarns with the warp
yarns according to the pattern of the desired fabric. In the reviewed study the
weaving operation performed by the weaving looms is the main manufacturing
operation.
The inputs to the weaving process are fabric orders, characterized by the required
fabric type and order length. The study focuses on the adaptation of the technical
parameters of the existing looms in the plant (loom types) to produce the style type
and design of the desired fabric. Incoming orders are stored and wait for processing.
Waiting time in storage depends on equipment availability and possibly on pri-
orities. There are two main operation categories: equipment oriented operations
(looms) and manpower oriented operations such as beam change, fabric change and
loom breakdown repairs (or preventive maintenance).
To apply the TPN decomposition approach for calculating the expected (long
run) utilization at steady state of the looms, and possibly of the manpower ser-
vicing them, the authors adapted the basic assumptions in Barad (1994) to a
weaving processing system, as detailed below.
(a) Orders for type j fabric, j = 1,2,…,n and class length k, k = 1,2,…,K, arrive at
the system with given mean rate Ij,k (orders per hour). The processing of an
5.6 Weaving Processes—An Additional Application of Timed Petri Nets 95
Rj,k^(m) = 1 if a type j order of class length k is assigned to loom m, m = 1, 2, …, M;
0 otherwise

Σ_{m=1}^{M} Rj,k^(m) = 1 for j = 1, 2, …, n; k = 1, 2, …, K
(d) The system is open, meaning that each loom has unlimited buffer space and the
expected utilizations of all servers (looms) are less than 1 (unsaturated system).
(e) Xj,k^(m) are stochastic variables with means tj,k^(m), representing the processing
duration of a type j fabric order of class length k, k = 1, 2, …, K, at loom m,
m = 1, 2, …, M.
Evaluating the mean utilization of the looms at steady state using TPN
rj,k^(m), the mean utilization of loom m as defined for visits of type j orders of class
length k, is the relative contribution of type j fabric of class length k to the
utilization of loom m. To calculate it we shall use Eq. (5.8) from Sect. 5.4, as
adapted to the present situation, where the specific entering material has to be
assigned to a specific loom. This is represented by the addition of the item Rj,k^(m),
representing the given assignment of type j fabric of class length k to loom m,
defined above in (c):

rj,k^(m) = tj,k^(m) Ij,k Rj,k^(m), j = 1, 2, …, n; k = 1, 2, …, K; m = 1, 2, …, M   (5.11)

tj,k^(m) is the expected processing duration of a type j fabric order of class length k at
station m.
Ij,k is the mean rate of type j fabric orders of class length k.
Set-ups, such as a beam change, are additional types of customers. Consider for
example a warp beam, which has a certain length, say L meters. Accordingly, every
L meters the beam should be changed. The beam change duration is ts hours and is
performed by a worker team. During a beam change, loom m is not available. Given
the numerical value of L, the mean entering rate of all fabric types of class length k
orders (I.k), and the length distribution of class length k, the mean rate of beam
changes per class k can be easily calculated.
The following equation considers set-ups s as an additional type of customer
(orders) contributing to rm, the long run utilization of loom m:

rm = Σ_{k=1}^{K} Σ_{j=1}^{n} rj,k^(m) + rs^(m)   (5.12)
rj,k^(m) and rs^(m) represent the respective contributions of the input flow of type j
orders of class length k, and of the input flow of set-ups s, to the utilization of
station m.
rs^(m) is defined as follows:

rs^(m) = ts^(m) Is

where ts^(m) is the expected duration of set-up s at machine m and Is is the input flow
of set-ups s. Possible disturbances and/or other set-ups can be considered as well.
Provided the constraint rm < 1 is satisfied for each loom m, m = 1, 2, …, M, and
all buffers are of unlimited size, Eq. (5.12) is a valid expression of the TPN based
expected steady state utilization rm of any loom m, m = 1, 2, …, M, in the weaving
work system.
As the weaving operation executed by looms is a continuous operation, Eq. (5.12)
has to be modified. The data for modifying the equation are as follows:
A loom m has a given velocity, vj^(m), for processing a type j fabric, j = 1, 2, …, n,
m = 1, 2, …, M.
The loom velocity for performing this operation depends on the loom's technical
capability (loom type) and on the required fabric type j. It does not depend on the
length class k.
Let Lk define the midrange of class length k. Accordingly, tj,k^(m) is replaced by
Lk / vj^(m).
The following equation replaces Eq. (5.11) above:

rj,k^(m) = [Lk / vj^(m)] Ij,k Rj,k^(m), j = 1, 2, …, n; k = 1, 2, …, K; m = 1, 2, …, M   (5.13)
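Under Eqs. (5.12) and (5.13), the utilization of a single loom can be sketched as follows; the order rates, class midranges, velocities and set-up load below are illustrative assumptions, not the plant data of the numerical example.

```python
# Sketch of Eqs. (5.12)-(5.13): TPN based steady state utilization of one loom.
# All numbers below are illustrative assumptions, not the plant data.

# Orders of fabric type j and length class k assigned to this loom, with mean
# order rates I[j][k] (orders/h), class midranges L[k] (m), velocities v[j] (m/h).
I = {1: {1: 0.005, 2: 0.002}, 2: {1: 0.003}}  # I_{j,k}, assigned (j,k) pairs only
L = {1: 400.0, 2: 1200.0}                     # L_k, midrange of class length k
v = {1: 9.60, 2: 8.88}                        # v_j^(m), velocity for fabric j

def loom_utilization(I, L, v, r_setup=0.0):
    """Eq. (5.12): r_m = sum_k sum_j (L_k / v_j) * I_{j,k} + r_s^(m)."""
    r = r_setup
    for j, by_class in I.items():
        for k, rate in by_class.items():
            r += (L[k] / v[j]) * rate         # Eq. (5.13) term, R_{j,k}^(m) = 1
    return r

r_m = loom_utilization(I, L, v, r_setup=0.05)  # 0.05: assumed set-up load
print(round(r_m, 3))                           # → 0.643
assert r_m < 1                                 # unsaturated-system condition
```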
A numerical example
The example is based on a real weaving plant. The data represent a working period
of about four months, during which the plant was operated 5 days per week, 24 h
per day (3 shifts). We shall use the input data and the processing data to assign the
entering orders to the existing looms.
Input data (entering orders)
The mean overall order rate is 0.343 orders per hour.
There are five fabric types (j = 1, 2, …, 5) and four length classes (k = 1, 2, …, 4).
The length ranges of the four length classes and their respective input mean rates I.k
(orders/h) are detailed in Table 5.1.
Given that every L = 1600 m the beam should be changed, we may calculate the
expected number of beam changes for each class length k (nk) by assuming that the
fabric length in each class follows a uniform distribution. The mean rate of beam
changes, I′k, as a function of I.k, becomes: I′k = I.k (1 + nk), k = 1, 2, 3, 4. The
distribution of orders by fabric type j, pj, is the same for all length classes and is
given (Table 5.2).
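Since Table 5.1 is not reproduced here, the beam-change calculation can only be sketched with assumed class ranges and rates (chosen so that the I.k values sum to the stated overall 0.343 orders/h); nk is approximated by the mean order length of the class divided by L, per the uniform length assumption.

```python
# Sketch of the beam-change rate per length class: I'_k = I.k (1 + n_k).
# The class ranges and per-class order rates below are assumed for illustration.

L_BEAM = 1600.0                    # beam length (m): change every L_BEAM meters

classes = {                        # k: (length range (m), mean rate I.k (orders/h))
    1: ((0, 800), 0.15),
    2: ((800, 2400), 0.10),
    3: ((2400, 4800), 0.06),
    4: ((4800, 8000), 0.033),      # rates sum to the stated 0.343 orders/h
}

I_prime = {}
for k, ((a, b), rate) in classes.items():
    mean_len = (a + b) / 2.0       # uniform length distribution on the class range
    n_k = mean_len / L_BEAM        # approx. expected beam changes per class-k order
    I_prime[k] = rate * (1 + n_k)  # total rate incl. beam-change "customers"
print({k: round(v, 3) for k, v in I_prime.items()})
```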
We may now calculate the mean rates of type j orders of class length k:

Ij,k = pj I.k, j = 1, 2, …, 5; k = 1, 2, …, 4
Processing data
The velocity of a loom depends on the loom type m and on the desired fabric
type j. Two loom types are considered, m = 1, 2. There are 60 looms of type 1
(m = 1) and 40 looms of type 2 (m = 2). The velocity matrix, vj^(m) (m/h), is given in
Table 5.3.
Table 5.3 The velocity matrix vj^(m) (m/h)
m\j    1      2      3      4      5
1      9.60   8.88   –      –      7.98
2      12.0   –      10.32  13.62  10.20
Table 5.4 Number of utilized looms by loom types and fabric types
m\j    1    2    3    4    5    Total number of utilized looms
1      12   28   –    –    –    40 (out of 60)
2      18   –    12   6    4    40 (out of 40)
Based on the above decisions, the final assignment of orders by fabric types j,
j = 1,2,…5 to loom types m = 1,2 is detailed in Table 5.4.
We see that the TPN based calculated utilization of the looms enabled the
preferred type 2 looms to be fully utilized.
Conclusion
The TPN decomposition approach, as applied to manufacturing systems of discrete
items, was extended to include the weaving process, a continuous process.
References
Balci O (1994) Validation, verification and testing techniques throughout the life cycle of a
simulation study. Ann Oper Res 53:121–173
Barad M (1994) Decomposing timed Petri Net models of open queueing networks. J Oper Res Soc
45(12):1385–1397
Barad M (1998) Timed Petri Nets as a Verification Tool. In 1998 Winter Simulation Conference
Proceedings, Washington, USA, pp 547–554
Barad M, Sinriech D (1998) A Petri Net model for the operational design and analysis of
Segmented Flow Topology (SFT) AGV systems. Int J Prod Res 36(12):1401–1425
Barad M (2003) An introduction to Petri Nets. Int J Gen Sys 32:565–582
Barad M, Cherkassky A (2010) A Timed Petri Nets perspective on weaving processes. IFAC Proc
Vol 43(12):438–444
Barad M (2016) Petri Nets—a versatile modeling structure. Appl Math 7:829–839
Kelton WD (1989) Random initialization methods in simulation. IIE Trans 21:355–367
Kleijnen JPC (1995) Verification and validation of simulation models. Eur J Oper Res 82:145–162
Moody JO, Antsaklis PJ (1998) Supervisory control of discrete event systems using Petri Nets.
Kluwer, Boston
Ramchandani C (1974) Analysis of asynchronous concurrent systems by timed Petri Nets.
Ph.D. Thesis, MIT Department of Electrical Engineering, Cambridge, Mass
Sargent RG (1991) Simulation model verification and validation. In 1991 Winter Simulation
Conference Proceedings, Phoenix, USA, pp 37–47
Shannon RE (1981) Tests for the verification and validation of computer simulation models. In
1981 Winter Simulation Conference Proceedings, Atlanta, USA, pp 573–577
Sifakis J (1977) Use of Petri Nets for performance evaluation. Acta Cybernetica 4:185–202
Yakovlev A, Gomes L, Lavagno L (eds) (2000) Hardware design and Petri Nets. Kluwer, Boston
Chapter 6
Quality Function Deployment (QFD)
6.1 Introduction
The QFD technique has its roots in Japan of the late 60s and the early 70s. The
Japanese created a methodology to support the development process for complex
products by linking the planning elements of the design and construction processes
to specific customer requirements. Quality Function Deployment expresses the
voice of the customer. The voice of the customer represents a set of customer needs
where each need has assigned to it a priority, which indicates its importance to the
customer.
The methodology was applied at Mitsubishi in 1972 (Akao 1990). During the
80s and the 90s of the previous century, U.S. and Japanese firms gradually and
successfully adopted it (see e.g. Bossert 1991 and King 1995). The method is
typically carried out by teams of multidisciplinary representatives from all stages of
product development and manufacturing (Lai et al. 1998). In recent years, the
application domains of the QFD methodology have been expanded, and its popu-
larity increased tremendously.
The essence of the original QFD is to extract the prioritized customer needs or
desires, expressed in his/her own words (WHATs), to translate them into prioritized
technical product quality characteristics (HOWs) and subsequently into compo-
nents’ characteristics, operating decisions and other decisions. Each translation of
customer ‘voices’ and subsequent processes uses a matrix relating the HOWs with
the WHATs, associated with any specific QFD stage. The HOWs of one matrix
become the WHATs of the next matrix. Important parameters in the translation
process are the numerical values of the matrix elements representing the strength of
the relations between the variables involved.
Griffin and Hauser (1993) presented a comparison of different approaches for
collecting customer preferences in QFD. They considered the gathering of customer
information to be a qualitative task, carried out through interviews and focus
groups, and found both person to person interviews and focus groups to be equally
effective methods in extracting the customer needs.
The QFD methodology is implemented through sequential matrices. The first and
best documented matrix, which translates the customer requirements expressed in
his/her own words into measurable product technical characteristics, is called 'The
House of Quality' (see e.g. Hauser and Clausing 1988).
The House of Quality is presented in Fig. 6.1. The matrix inputs (the house’s
western wall) are the customer needs, the WHATs and their respective numerical
importance to the customer. They are translated into the HOWs (the house’s ceiling),
which represent the measurable product technical characteristics, or specifications.
[Fig. 6.1 The House of Quality: the customer needs (WHATs), with their importance and competitive priorities, enter the relationship matrix; the technical characteristics (HOWs) form the house's ceiling; the output is the normalized importance of each technical characteristic.]
6.2 Quality Function Deployment—The Original Version 103
The relationships between the technical characteristics and the customer needs
are the core of the matrix; they show how well each technical characteristic expresses
the related customer need. The typical relationship strengths are weak, strong and
very strong; they are all positive and are assessed by the technical team. The triangle
(the house's roof) describes the relationships between the technical characteristics.
Some are positively related, while others are negatively related; they typically do not
explicitly appear in the calculations and are mainly used for trade-offs. Rows rep-
resenting competitive customer needs may be emphasized by multiplying their
original values by indices (the competitive priorities on the house's eastern wall). The
translated values (the matrix output) represent the calculated importance of each
technical characteristic. As mentioned above, the output of matrix I becomes the
input of matrix II. This sequential approach continues from matrix to matrix.
In this original version of QFD, as a product design method, there is but one
external numerical input, the customer needs with their respective numerical
importance. This input is introduced in the first matrix, and translated along the
entire sequence of matrices. The next section presents an enhanced view of this
methodology.
As mentioned in the previous section, the basic concept of the original QFD is the
customer's voice as an input to improvement. No other external numerical inputs
are introduced. Its implementation framework is the QFD sequential matrices. The
published examples of an enhanced QFD view that will be presented here adhere to
the above implementation framework. However, they exhibit changes in structure
and context.
The changes in structure are as follows:
(1) The matrices have no roofs.
(2) Each matrix has two inputs.
One input preserves the initial sequential path, i.e. the output of any given matrix
(HOWs), becomes the input of the next matrix, its WHATs.
The second input is the given weights of the HOWs. In the original QFD the
weights of the HOWs are calculated from the weights of the WHATs multiplied by
the strength of their relationships with the HOWs. Here, the HOWs have given
input weights. Their output weights are calculated in a similar manner as the output
weights of the HOWs in the original QFD, but the calculating formulae take into
account their input weights as well.
The change in context is related to the main topic.
Instead of ‘product’ as the main topic, with its planning, design and manufac-
turing processes, a variety of topics may be considered.
As mentioned in the introduction, the examples here express different topics.
104 6 Quality Function Deployment (QFD)
The title of this section is the title of the first example (published application). Barad
and Gien (2001) carried out research whose main objective was to develop a
structured and integrative methodology for supporting the improvement of manu-
facturing systems in Small Manufacturing Enterprises (SMEs). The QFD specific
methodology of the paper relied on three basic concepts: 'strategic priorities',
'concerns' and 'strategic improvement needs'.
The first concept was to define a set of strategic priorities of a manufacturing
enterprise. It comprised the typical competitive advantages found in the manufac-
turing strategy literature: delivery (fast, dependable), quality (high design quality,
consistent product quality), price. The set also contained an internal, human ori-
ented strategic priority: employees’ involvement. This strategic priority represented
the possibility of achieving a competitive advantage through people (Pfeffer 1995).
The second basic concept consisted of asking interviewees (in the empirical
study) about specific concerns instead of about improvement needs, as in the
original QFD. According to Flanagan (1954), 'the improvement needs of a system
stem from concerns that express unsatisfied needs'. This implies that negative, rather
than positive, past experiences are likely to be registered by customers.
The study differentiated between two types of concerns: strategic concerns and
operating concerns. All concerns were evaluated on a 'gravity' scale. Information
supplied by open questions in a pilot investigation was used to formulate specific
concerns for each of the four performance categories (time, quality, costs and human
oriented).
The third concept regarded the evaluation of the strategic improvement needs of
an enterprise by combining the ‘importance’ attributed by the enterprise to each
strategic priority with the ‘gravity’ of its associated concerns. ‘The higher the
importance of a strategic priority and the higher the gravity of its concerns, the
higher are the enterprise improvement needs’. The concept is equivalent to the
‘importance-performance matrix as a determinant of improvement priority’ devel-
oped by Slack (1994).
Based on this concept, the strategic improvement needs of an enterprise were
calculated by multiplying their evaluated importance with the evaluated gravity of
the respective concern.
[Figure: the importance of the strategic priorities and the gravity of the strategic concerns yield the Strategic Improvement Needs (SNi); MATRIX I combines these with the gravity of the operating concerns to produce the Operating Improvement Needs (ONk); MATRIX II combines the latter with the Potential Improvement Actions (PAm(a)) to produce the Deployed Improvement Actions (DAm(a)).]
Fig. 6.2 QFD conceptual model for deploying the strategic improvement needs of an enterprise to
its improvement actions by areas (reproduced from Barad and Gien 2001)
Two deployment matrices were built. Matrix I performed the
strategic deployment, from strategic improvement needs to prioritized operating
improvement needs. Its second input was the gravity of the operating concerns.
Matrix II deployed the prioritized operating improvement needs to deployed
improvement actions by areas. Its second input was the potential improvement
actions by areas.
Survey and questionnaire
To test the developed methodology, the authors planned a small empirical study
based on a sampling population of SMEs. This consisted of manufacturing com-
panies with 50–200 employees and less than US$50 million annual sales. The
survey sample comprised 21 enterprises from two types of industry (metal products
and plastic products) and was carried out through interviews with three managers/
engineers in each sampled enterprise. As the interviews were person to person and
took place at the plant sites, there were no incomplete responses.
An interview questionnaire comprising four sections was developed, using
information from the manufacturing strategy literature, as well as information
provided by a pilot investigation of 12 enterprises, that preceded the formal
questionnaire based investigation. Each interviewee was asked to fill out the first
three sections of the questionnaire. The answers had to reflect his/her views on each
topic. The fourth section was intended to provide background data (characteristic
features) of the investigated enterprises and was usually filled up by the CEO or
his/her representative.
The first section of the questionnaire supplied data on the importance of each
strategic priority of an enterprise. The defined set of strategic priorities comprised
seven items: low price, fast delivery, dependable delivery, high product quality,
consistent product quality, product variety and employees' involvement.

SNi = Xi Yi, i = 1, 2, …, 7   (6.1)

The higher the score, the higher was the improvement need of the respective strategic
priority.
Matrix I performed the strategic deployment. It projected the gravity of the
Operating Concerns, OCk, k = 1, 2, …, 22, onto the strategic improvement needs. Its
output represented the prioritized Operating improvement Needs, ONk, k = 1, 2, …,
22, of an enterprise, which were calculated as follows:

ONk = OCk Σ_{i=1}^{7} SNi R1i,k, k = 1, 2, …, 22   (6.2)
DAm(a) = PAm(a) Σ_{k=1}^{22} ONk R2k,m(a), m(a) = 1, 2, …, q(a); a = 1, 2, …, 8   (6.3)
R2k,m(a) represents the potential effect of the improvement action m(a) on the
operating improvement need k.
Again, three possible positive relationship levels are considered. The numerical
values selected are similar to those in matrix I.
R2_{k,m(a)} = 0: action m(a) has no influence on operating improvement need k
R2_{k,m(a)} = 0.33: action m(a) is expected to slightly reduce operating improvement need k
R2_{k,m(a)} = 0.67: action m(a) is expected to substantially reduce operating improvement need k
R2_{k,m(a)} = 1.00: action m(a) is expected to solve operating improvement need k

for k = 1, 2, …, 22, m(a) = 1, 2, …, q(a), a = 1, 2, …, 8.
It is seen that since matrix II is built separately for each improvement area a,
a = 1, 2, …, 8, it actually represents a union of eight matrices, namely matrices
II(a), a = 1, 2, …, 8. To build each of those matrices, say matrix II(a0), the
improvement actions m(a0) = 1, 2, …, q(a0) for area a0 are considered.
A partial view of matrix II (entries with R2_{k,m(a)} > 0) in the area of quality
management is presented in Table 6.1, where the specific numerical values of
R2_{k,m(a)} are replaced by √.
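The two-stage deployment of Eqs. (6.1)–(6.3) can be sketched numerically. The scores and relationship values below are purely illustrative (the survey's actual data are not reproduced here), and PA_{m(a)}, used as a weight for improvement action m(a), is an assumed stand-in for the action-level input not shown in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)
LEVELS = [0.0, 0.33, 0.67, 1.0]            # relationship levels used in both matrices

# Eq. (6.1): strategic improvement needs of the 7 strategic priorities.
X = np.array([3., 2., 3., 3., 2., 1., 2.])  # importance of priority i (illustrative)
Y = np.array([1., 2., 2., 1., 3., 2., 2.])  # capability gap of priority i (illustrative)
SN = X * Y                                  # SN_i = X_i * Y_i

# Eq. (6.2): matrix I projects the gravity of the 22 operating concerns
# on the strategic improvement needs.
OC = rng.integers(1, 4, size=22).astype(float)   # gravity of operating concern k
R1 = rng.choice(LEVELS, size=(7, 22))            # strategic/operating relationships
ON = OC * (SN @ R1)                              # ON_k = OC_k * sum_i SN_i R1_{i,k}

# Eq. (6.3): matrix II is a union of eight per-area matrices II(a), a = 1..8,
# each with q(a) improvement actions.
q = {1: 4, 2: 3, 3: 5, 4: 2, 5: 4, 6: 3, 7: 3, 8: 2}  # q(a), illustrative
DA = {}
for a, qa in q.items():
    PA = rng.integers(1, 4, size=qa).astype(float)    # assumed action weights PA_m(a)
    R2 = rng.choice(LEVELS, size=(22, qa))            # action/need relationships
    DA[a] = PA * (ON @ R2)                            # desirability of actions in area a

print(SN)                  # [3. 4. 6. 3. 6. 2. 4.]
print(ON.shape, DA[1].shape)
```

The per-area loop reflects the structure described above: each improvement area a contributes its own q(a)-column block, so the eight output vectors DA[a] are never mixed across areas.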
Some results and conclusions
• The most urgent operating improvement needs of the sampled SMEs at the end
of the last century were associated with delay concerns and human oriented
concerns. The delay concerns were caused by short term capacity shortages, low
dependability of supply delivery and frequent changes in customer require-
ments. The human oriented concerns were caused by low skill/versatility of the
employees and low motivation.
• The research identified three generic improvement models: (1) time reducing
techniques coupled with flexibility; (2) improved quality oriented organization;
(3) time reducing techniques coupled with vendor relations. It was interesting to
note that some of the tendencies observed in large European manufacturers (De
Meyer et al. 1989) were also revealed in the study of Small Manufacturing
Enterprises: (a) delivery as a competitive priority was strongly stressed;
(b) manufacturers tended to put more emphasis on human resources and less on
stand-alone technology.
The second published QFD example is based on a paper entitled ‘Strategy maps as
improvement paths of enterprises’. The main research question of the authors
(Barad and Dror 2008) was: How to define the improvement path of an enterprise
across a generic hierarchical structure, for enhancing the realization of its business
objectives?
Table 6.1 Partial view of matrix II for the area of quality management

Improvement actions (columns): quality teams, autonomous quality control, SPC, raw material quality control, quality oriented training, product quality control, DOE, collaboration with customers.

Operating improvement needs (rows), with √ marking a positive relationship:
– Higher quality techniques: √ for three of the actions
– Lower quality control costs: √ for two of the actions
– Lower percentage defectives: √ for seven of the actions
6.5 Strategy Maps as Improvement Paths of Enterprises
Fig. 6.3 The top-down, level by level, recursive perspective (reproduced from Barad and Dror 2008). The figure chains four blocks: prioritized improvement needs of the business objectives; from business objectives to competitive priorities; from competitive priorities to core processes; from core processes to organizational profile.
The prioritized improvement needs of the business objectives are obtained by
multiplying the importance attributed to each business objective with the
capability gap existing between the desired state of a business objective and its
realization (the 'concerns' in the previous example). The higher the importance and
the higher the gap, the higher is the improvement need of a business objective. The
latter are translated downward, level by level, until they reach the prioritized
improvement needs of the organizational profile elements. Figure 6.3 is not a
strategy map. We view strategy maps as bottom up improvement paths, which start
at the lower hierarchical level (organizational profile), follow the upward causality
links and end at the higher hierarchical level (business objectives), see Fig. 6.4.
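The top-down, level by level translation just described can be sketched in a few lines. The matrix sizes and relationship values below are illustrative assumptions; renormalizing after each step mirrors the normalized improvement scores used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Prioritized improvement needs at the top level: importance x capability gap.
importance = np.array([3.0, 2.0, 3.0])        # three business objectives (illustrative)
gap        = np.array([2.0, 3.0, 1.0])
needs = importance * gap
needs = needs / needs.sum()                   # normalized improvement needs

# One relationship matrix per downward translation step (rows: upper level,
# columns: lower level): 3 objectives -> 8 competitive priorities ->
# 5 core processes -> 7 organizational profile elements (sizes are assumed).
shapes = [(3, 8), (8, 5), (5, 7)]
links = [rng.choice([0.0, 0.33, 0.67, 1.0], size=s) for s in shapes]

for R in links:
    needs = needs @ R                         # translate needs one level down
    needs = needs / needs.sum()               # renormalize at the new level

print(np.round(needs, 3))                     # prioritized profile improvement needs
```

Because each step is a matrix product followed by normalization, the final vector can be read directly as the relative improvement needs of the organizational profile elements.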
Matrix 1: From business objectives to competitive priorities
This matrix translates the relative improvement needs of the business objectives
(market share, marginal profitability and return on investment), the WHATs, into
the relative improvement needs of its competitive priority measures (fast delivery,
reliable delivery, low price, high product quality, stable product quality, product
variety, new products and employee involvement), the HOWs. The core of matrix 1
expresses the relationship strength between the improvement of each competitive
priority and its expected effect on the improvement of each business objective.
Matrix 2: From competitive priorities to core processes
As customary in QFD, the output of matrix 1, its HOWs, is fed as input to the
current matrix, its WHATs. Construction of matrix 2 necessitates an additional
input in terms of information from interviewees on the core processes in their
organization and on their respective relevant measures. Matrix 2 translates the
improvement need of each competitive priority into the relative improvement need
of each core process. The core of matrix 2 expresses the relationship strength
between the improvement of each core process and its expected effect on the
improvement of each competitive priority. According to the MBNQA (2016–2017)
criteria, the core processes are specific to an organization and are related to the
variables of its organizational profile.
Matrix 3: From core processes to components of the organizational profile
Again, according to the QFD principles, the output of matrix 2 representing the
relative improvement need of each core process is fed as input to matrix 3. Matrix 3
translates the latter into the relative improvement need of each component of the
organizational profile. These components are defined in the reviewed paper by
means of the MBNQA organizational profile as follows: human resources
(employee features, teamwork), technology (equipment, IS/IT), planning (strategic),
and organizational relationships (internal, external). A firm has to define relevant
measures for the components of its organizational profile whose improvement
might later on lead to an improvement in its core processes. The output of this
matrix is calculated similarly to that of the previous matrix.
Data analysis
Given data for matrix 1 (calculated from the questionnaires)
– Median importance of each business objective i (say X_i, i = 1, 2, 3)
– Median capability gap for each business objective i (say Y_i, i = 1, 2, 3)
– Median strength of relationship between each business objective i and each
competitive priority j (say A_ij, i = 1, 2, 3; j = 1, 2, …, 8)
Calculations for matrix 1
(a) Calculate improvement need of business objective i, say Ni
N_i = X_i · Y_i,   i = 1, 2, 3.
Remark
The calculations for matrices 2 and 3 are conducted in a similar manner but they
only contain steps (c) and (d).
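Step (a) can be illustrated directly on hypothetical interviewee answers (the scores below are invented; the later steps are not reproduced in this excerpt). Taking the median over the three respondents reproduces the questionnaire aggregation described above:

```python
import statistics

# Hypothetical answers of the three interviewees (1 = low .. 5 = high);
# the study takes the median over the respondents for each business objective.
importance = {"market share":           [4, 5, 4],
              "marginal profitability": [3, 3, 4],
              "return on investment":   [5, 4, 4]}
capability_gap = {"market share":           [2, 3, 3],
                  "marginal profitability": [4, 4, 3],
                  "return on investment":   [1, 2, 2]}

# Step (a): N_i = X_i * Y_i (median importance x median capability gap).
N = {obj: statistics.median(importance[obj]) * statistics.median(capability_gap[obj])
     for obj in importance}

total = sum(N.values())
relative_needs = {obj: n / total for obj, n in N.items()}  # normalized, fed to matrix 1

print(N)   # {'market share': 12, 'marginal profitability': 12, 'return on investment': 8}
```

The normalized values are what matrix 1 then translates into the relative improvement needs of the competitive priorities.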
(2) Production employee features: ‘Average number of training hours per pro-
duction employee’.
(3) Equipment: ‘Investment in equipment upgrading’.
(4) IS/IT: ‘Investment in IS/IT’, and ‘investment in developing a production
planning and supervision system’.
(5) External relationships: ‘Active participation of pharmaceuticals developers in
product exhibitions’ (for buying raw materials).
The investment in increasing the average number of training hours per R&D
employee and per production employee was intended to support the implementation
of the SPC and QFD tools.
The strategy map (improvement path)
Figure 6.4 depicts the strategy map of the investigated enterprise, drawn according
to the stepwise procedure detailed in the previous section. The normalized
improvement score of each node (selected component) appears on the map. The
competitive priorities ('new products' and 'reliable delivery') were selected by
means of statistical analysis. Production, purchasing and R&D processes were
considered by the three interviewees as important processes and therefore appear
on the map as well. Statistical analysis selected 'employee features', 'equipment'
and 'IS/IT' as the most relevant components of the organizational profile.
Fig. 6.4 Strategy map of the pharmaceutical firm (reproduced from Barad and Dror 2008). Among the nodes shown are the core processes R&D, Purchasing and Production, with normalized improvement scores 0.36, 0.22 and 0.42.
The third published QFD example addresses flexibility in supply chains. The main
objectives of the methodology presented in Barad (2012) and reviewed here were:
(1) Building a structured framework for matching flexibility improvement needs to
the strategic priorities of enterprises within a supply chain.
(2) Considering changes and uncertainties in a systematic way.
The conceptual model
The deployment of flexibility in supply chains is modeled by a QFD matrix
structure comprising three matrices (see Fig. 6.5).
First, a set of strategic elements related to flexibility is considered. It comprises
the typical competitive advantages found in the manufacturing strategy literature
(excluding quality and cost): delivery (fast, dependable), product diversity and new
products.

Fig. 6.5 QFD conceptual model for deploying flexibility in supply chains (reproduced from Barad 2012). The model feeds strategic importance and competitive capability into the strategic improvement needs; the prioritized strategic metrics enter Matrix I (strategic flexibility deployment, subject to changes of type 1), followed by Matrix II (internal flexibility deployment, subject to changes of type 2) and Matrix III (providers deployment).

These strategic metrics are prioritized by combining the strategic
importance attributed by an enterprise to each element, with its competitive
incapability (Narasimhan and Jayaram 1998). As in Barad and Gien (2001), we
multiply (and normalize) the scores of these two attributes, so that the higher the
importance of a strategic element and the higher its competitive incapability, the
higher its priority for improvement.
Matrix I performed the Strategic Flexibility deployment. The prioritized
strategic improvement needs represented its first input (the WHATs). Its second
input consisted of the type 1 changes that the enterprises in the supply chain had to
cope with, such as changes in due dates, volume, mix, new products and time to market.
Coping with these types of changes necessitates flexibility with the same names,
e.g. due date flexibility, volume flexibility. This flexibility set is called customer or
first order flexibility types and represents the HOWs of Matrix I. Matrix I translated
the prioritized strategic metrics into prioritized customer oriented flexibility metrics
(due date, volume, mix and time to market) by considering the type and impact of
changes or uncertainties that affect a given strategic metric. For instance, delivery
reliability is translated into due date flexibility (whose role is to make delivery
reliability robust to modifications of agreements regarding due dates) and into
volume flexibility (for making delivery reliability robust to changes in the required
volume). New products need time to market flexibility, whose role is to make the
time to market duration robust to additional information on the new product
features necessitating changes.
Matrix II translated the customer oriented flexibility metrics into prioritized
internal flexibility capabilities by facets (design, manufacturing and collaboration).
Table 6.2 is a concise representation of matrix II in the conceptual model. The
linkages between some specific customer-oriented flexibility types—the WHATs—
and specific internal flexibility capabilities—the HOWs—are symbolically repre-
sented by √.
Table 6.2 A concise view of flexibility capabilities and their relationships (reproduced from Barad 2012)

Internal flexibility capabilities by facets:
– Design: interchange flexibility, R&D flexibility, configuration flexibility
– Manufacturing: flexible employment, versatile operators, versatile machines, short set-up
– Collaboration: late product customization, trans-routing flexibility, outsourcing

Customer oriented flexibility types (rows) and their linkages (√) to the internal capabilities:
– Due date: √ for six capabilities
– Volume: √ for four capabilities
– Mix: √ for three capabilities
– Product: √ for two capabilities
– Time to market: √ for three capabilities
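The linkage structure of Table 6.2 can be held in code as an incidence mapping. The facet grouping follows the table, but the specific check-mark placements below are hypothetical (the matrix entries are only shown symbolically), and `facets_serving` is an illustrative helper, not part of the published methodology:

```python
# Internal flexibility capabilities grouped by facet (the columns of Table 6.2).
capabilities = {
    "design": ["interchange flexibility", "R&D flexibility",
               "configuration flexibility"],
    "manufacturing": ["flexible employment", "versatile operators",
                      "versatile machines", "short set-up"],
    "collaboration": ["late product customization", "trans-routing flexibility",
                      "outsourcing"],
}

# Hypothetical check marks: which internal capabilities support each
# customer-oriented flexibility type (illustrative placements only).
supports = {
    "volume": {"flexible employment", "versatile machines",
               "trans-routing flexibility", "outsourcing"},
    "mix": {"versatile operators", "versatile machines", "short set-up"},
}

def facets_serving(flex_type):
    """Group the capabilities linked to a customer flexibility type by facet."""
    linked = supports.get(flex_type, set())
    return {facet: sorted(linked & set(caps))
            for facet, caps in capabilities.items() if linked & set(caps)}

print(facets_serving("mix"))
```

Grouping the checked capabilities by facet makes it easy to see, for a given customer-oriented flexibility type, whether its support comes from design, manufacturing or collaboration.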
6.6 A QFD Top–Down Framework for Deploying Flexibility in Supply Chains
References
Akao Y (1990) Quality function deployment: integrating customer requirements into product
design. Productivity Press, Cambridge
Barad M (1998) Flexibility performance measurement systems—A framework for design. In:
Neely, AD, Waggoner DB (eds) Proceedings of the first international conference on
performance measurement, Cambridge, pp 78–85
Barad M (2012) A methodology for deploying flexibility in supply chains. IFAC Proc Vol 45
(6):752–757
Barad M, Dror S (2008) Strategy maps as improvement paths of enterprises. Int J Prod Res 46
(23):2675–2695
Barad M, Even-Sapir D (2003) Flexibility in logistic systems—modeling and performance
evaluation. Int J Prod Econ 85:155–170
Barad M, Gien D (2001) Linking improvement models to manufacturing strategies. Int J Prod Res
39(12):6627–6647
Bossert J (1991) Quality function deployment—a practitioner’s approach. ASQC Quality Press,
Milwaukee
Chan LK, Wu ML (2002) Quality function deployment: a literature review. Eur J Oper Res
143:463–497
De Meyer A, Nakane J, Miller JG, Ferdows K (1989) Flexibility: the next competitive battle. The
manufacturing futures survey. Strategic Manage J 10(2):135–144
Dror S, Barad M (2006) House of strategy (HOS)—from strategic objectives to competitive
priorities. Int J Prod Res 44(18/19):3879–3895
Ernst R, Kamrad B (2000) Evaluation of supply chain structures through modularization and
postponement. Eur J Oper Res 124
Flanagan JC (1954) The critical incident technique. Psychol Bull 51:327–358
Griffin A, Hauser JR (1993) The voice of the customer. Market Sci 12(2):141–155
Hauser JR, Clausing D (1988) The house of quality. Harv Bus Rev 66(3):63–73
Hopp WJ, Van Oyen MP (2003) Agile workforce evaluation: a framework for cross training and
coordination. IIE Trans 36:919–940
Kaplan RS, Norton DP (1996) Linking the balanced scorecard to strategy. Calif Manage Rev
39:53–79
Kaplan RS, Norton DP (2001) Transforming the balanced scorecard from performance
measurement to strategic management: part I. Acc Horiz 15(1):87–104
Kaplan RS, Norton DP (2004) Strategy maps: converting intangible assets into tangible outcomes.
Harvard Business School Press, Boston
King R (1995) Designing products and services that customers want. Productivity Press, Portland
Lai YJE, Ho A, Chang SI (1998) Identifying customer preferences in QFD using group
decision-making techniques. In: Usher JM, Roy U, Parsaei HR (eds) Integrated product and
process development. Wiley, New York
Malcolm Baldrige National Quality Award (MBNQA) (2016–2017) Criteria for Performance
Excellence (NIST, Department of Commerce: Gaithersburg, MD)
Mehrabi MG, Ulsoy AG, Koren Y, Heytler P (2002) Trends and perspectives in flexible and
reconfigurable manufacturing systems. J Intell Manuf 13(2):135–146
Narasimhan R, Jayaram J (1998) Causal linkages in supply chain management. Decis Sci 29
(3):579–605
Pfeffer J (1995) Producing sustainable competitive advantage through the effective management of
people. Acad Manage Exec 9(10):55–69
Slack N (1994) The importance—performance matrix as a determinant of improvement priority.
Int J Oper Prod Man 14(5):59–75