
ENTERPRISE
EXCELLENCE
A Practical Guide to World-Class
Competition

NORMAND L. FRIGON
HARRY K. JACKSON JR.

JOHN WILEY & SONS, INC.



This book is printed on acid-free paper. 



Copyright © 2009 by Normand L. Frigon & Harry K. Jackson Jr. All rights reserved

Published by John Wiley & Sons, Inc., Hoboken, New Jersey


Published simultaneously in Canada

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or
by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as
permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior
written permission of the Publisher, or authorization through payment of the appropriate per-copy fee
to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax
(978) 646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should
be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken,
NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and the author have used their best efforts
in preparing this book, they make no representations or warranties with respect to the accuracy or
completeness of the contents of this book and specifically disclaim any implied warranties of
merchantability or fitness for a particular purpose. No warranty may be created or extended by sales
representatives or written sales materials. The advice and strategies contained herein may not be suitable
for your situation. You should consult with a professional where appropriate. Neither the publisher nor
the author shall be liable for any loss of profit or any other commercial damages, including but not
limited to special, incidental, consequential, or other damages.

For general information about our other products and services, please contact our Customer Care
Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or
fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may
not be available in electronic books. For more information about Wiley products, visit our web site at
www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

ISBN: 978-0-470-27473-6
Frigon, Normand L.
Enterprise excellence : a guide to world class competition / Normand
L. Frigon, Harry K. Jackson Jr.
p. cm.
ISBN 978-0-470-27473-6 (cloth)
1. Management. 2. Organizational effectiveness. I. Jackson, Harry K.
II. Title.
HD31.F756 2009
658.40013–dc22 2008022804

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1

This book is dedicated to Lucille Frigon and Sally Baron.


Their patience, love, and support made this book possible.

CONTENTS

FOREWORD ix

ACKNOWLEDGMENTS xi

1 Introduction 1
Law of Unintended Consequences, 1
Enterprise Excellence, 3
Enterprise Excellence Model, 7
Continuous Measurable Improvement, 10
Achieving Enterprise Excellence, 12
Key Points, 15

2 Managing and Leading Enterprise Excellence 20


Management Systems, 21
Leading Enterprise Excellence, 28
Understanding and Overcoming Resistance to Change, 54
Key Points, 73

3 Enterprise Excellence Deployment 79


Enterprise Excellence Infrastructure, 80
Deployment Measurement, Analysis, and Reporting, 83
Enterprise Excellence Deployment Planning, 104
Establishing Enterprise Excellence Policies, Guidelines,
and Infrastructure, 136
Key Points, 139


4 Enterprise Excellence Implementation 144


Management and Operations Plans, 144
Enterprise Excellence Projects, 145
Enterprise Excellence Project Decision Process, 149
Planning the Enterprise Excellence Project, 154
Tollgate Reviews, 164
Project Notebook, 169
Key Points, 169
5 Listening to the Voice of the Customer 176
Voice of the Customer (VOC), 177
Quality Function Deployment, 180
CDOV Process, 184
Key Points, 207
6 Define: Knowing and Understanding Your Processes 212
Understanding Process Variation, 214
Acquire All Process Documentation, 224
Process Mapping, 225
Value Stream Mapping, 237
Value Stream Analysis, 244
Failure Modes and Effects Analysis, 253
Key Points, 269
7 Measure 274
Process Measurement, 274
Statistical Process Control, 277
Statistical Process Control Charts, 281
Types of Control Charts and Applications, 285
Attribute Control Charts, 298
Process Capability Analysis, 307
Measurement Systems Evaluation (MSE), 311
Gage Reproducibility and Repeatability (R&R), 315
Transactional MSE, 323
Key Points, 326
8 Analyze and Improve Effectiveness 329
Analysis of Variance, 329
One-Way ANOVA, 331
Two-Way ANOVA, 340
Multivariate ANOVA, 349
Linear Contrasts, 363
Design of Experiments, 370
Key Points, 393

9 Analyzing and Improving Efficiency 394


5S Process, 396
The Seven Forms of Waste, 399
Takt Time, 403
Cycle Time, 405
Routing Analysis, 405
Spaghetti Diagram, 406
Work Content Analysis, 407
Process Availability Analysis, 411
Process Yield Measures, 412
Calculating Cycle Time, 414
Just-in-Time, 416
Kanban, 420
Mixed-Model Production, 428
A, B, C Material Handling, 428
Workable Work, 429
Workload Balancing, 430
One-Piece Flow, 432
Work Cell Design, 434
Kanban Sizing, 435
Key Points, 436

10 Control and Continuous Measurable Improvement 438


Management Systems, 439
Statistical Process Control, 440
Visual Controls, 441
Graphic Work Instructions, 443
Mistake-Proofing (Poka-Yoke), 443
Single-Minute Exchange of Die (SMED), 444
Total Productive Maintenance, 446
Rapid Improvement Events, 449
Continuous Measurable Improvement, 449
Key Points, 450

Appendix A: Basic Math Symbols 454

Appendix B: List of Acronyms 456

Glossary 464

Bibliography 475

Index 480

FOREWORD

When asked to provide a foreword to this unique approach to enterprise implementation
of Lean Six Sigma, I jumped at the opportunity. In this latest book by
Dr. Frigon and Mr. Jackson, we are taken on a journey that begins with a cus-
tomer’s needs and walks us through what an organization committed to learning
and customer goals can do to blaze a path towards recognized excellence. A
diverse group of experts and practitioners provides the reader with a host of
well-defined and accepted methodologies, along with the authors’ own personal
touch. What makes this book exceptional, however, is the way that it provides
depth of detail with a successful systemic deployment of Lean Six Sigma proto-
cols within the Department of Defense as well as commercial enterprises. As a
leader who has embraced the process and methodology of Enterprise Excel-
lence, it is especially gratifying to know that its impact brings us to a new appre-
ciation of the ‘‘bottom line.’’
Almost every Six Sigma or Lean Six Sigma publication drives the student/
reader to learn about and focus on specific areas of discipline, achieving a return
on investment, both in terms of hitting bottom line goals as well as changing the
culture of an organization. While the former is a strong motivation, the military
looks at efficiencies and effectiveness not only in dollars and cents but in the
delivery of products and services in what can literally be hostile environments.
The concept of Enterprise Excellence that led to the foundation and eventual
framework for the Army’s Armaments Research, Development and Engineering
Center (ARDEC) originated in 2003 with Dr. Frigon and Mr. Jackson as the
principal authors. Enterprise Excellence served as a core for the leadership at
the ARDEC to develop strategy over the next few years; survived the Base Re-
alignment and Closure Committee; and now serves as a Government benchmark
for how to ‘‘get it right.’’ It has been an unbelievable experience over the years
to help develop the concept and push a systemically-deployed and well-defined
process that led to the Malcolm Baldrige National Quality Award for the
ARDEC in 2008.
The book provides insights into the difficulties faced by many other indus-
tries on similar journeys, all of which must be overcome if successful deploy-
ment is to be achieved. Dr. Frigon and Mr. Jackson do a masterful job of
providing the keys to success in overcoming these many obstacles, demonstrat-
ing the true commitment of leaders who really want to change the status quo.
Continuous commitment to excellence is only part of the story. It is the
understanding of the underlying principles of continuous process improvement
and a desire to always do better that really tell the story. Voice of the Customer,
Lean Six Sigma, and a strong hybrid quality management system provide disci-
pline, while leadership provides the focus.
Dr. Frigon and Mr. Jackson clearly highlight the government’s ability to em-
brace the Enterprise Excellence concept, import best practices, and adapt them
with a solid government/industry partnership–forging the vision and turning it
into reality. I highly recommend this unique book, which fuses the combination
of methods and techniques, and connects with the human commitment that
brings the continuous improvement model to a new level.

Paul Chiodo
Former Director, Quality Evaluation and Systems Assurance
Armaments Research, Development and Engineering Center

ACKNOWLEDGMENTS

This book is the result of the accumulation of our life experiences, both in the
workplace and in the business of life. In our journeys we have struggled to
understand and to apply the many tools that lead to Enterprise Excellence. We
have found there is no single tool or methodology appropriate for all situations,
all industries, or business cultures. We discovered, however, that a holistic ap-
proach to managing our enterprises, using the most appropriate tools and techni-
ques to achieve our goals, leads to excellence and enables us to thrive.
We would like to acknowledge VSE Corporation, whose foresight and in-
spirational leadership provided the platform for launching Enterprise Excellence.
We would also like to acknowledge Bill Barkau, Roy Weber, Mark Wood-
house, Robert Scott and Paul Chiodo. These management, quality and reliability
professionals, whose professionalism, depth of knowledge, and experience have
made significant contributions to the discipline of continuous measurable im-
provement, have had a significant impact upon the enterprise capabilities of the
country, our Armed Forces, and on this book.
We would like to thank the publishing and editing team at John Wiley and
Sons, especially Robert Argentieri, Daniel Magers, and Amy Odum. We appre-
ciate their support and forbearance, without which this book would not have
been possible.


1
INTRODUCTION
‘‘Dammit Jackson, if you don’t have time to do it right the first time, how
do you expect to have time to fix it later?’’

Thus, in 1969, began an education in Enterprise Excellence in the U.S.
Air Force—albeit unknowingly at the time.
SGT Harry Jackson, USAF

The challenges facing business and industry are unparalleled in history: uncer-
tainty associated with the war on terrorism, failing confidence in business lead-
ers, dynamic global marketplace, skyrocketing energy costs, shrinking
budgets—and the list goes on. These challenges have led to ‘‘management by
best seller’’: grasping for the silver bullet that will solve immediate problems
and enable the enterprise to meet monthly or quarterly numbers. Yet some avoid
the ‘‘silver bullet paradigm’’ and continue to prosper and thrive (e.g., Toyota,
General Electric, and U.S. Army Armaments Research, Development and Engineering
Center, a 2007 Malcolm Baldrige National Quality Award Winner).
Government agencies, like business and industry, face the same challenges;
however, these are compounded by shrinking budgets, broadening commit-
ments, legacy systems maintained long after their planned life, unrealistic finan-
cial and schedule pressure to meet milestone commitments, and the ever-present
urgency to satisfy constituents. The military environment is further complicated
by the need to maintain systems and equipment, often beyond their intended life
and, with the war on terror, beyond the intended tempo of operation. As in busi-
ness and industry, some government agencies have also resorted to ‘‘manage-
ment by best seller,’’ seeking the silver bullet. All of these organizations are
searching for a quick resolution to problems and situations that have been years
in the making and are supported by well-entrenched cultures and bureaucracies.

LAW OF UNINTENDED CONSEQUENCES

The serious challenges facing business, industry, and government require, to
paraphrase Einstein, different thinking than that which created them. The silver
bullet approach focuses on creating immediate savings, not long-term investment
for success. This search for the silver bullet makes organizations suscepti-
ble to the law of unintended consequences (i.e., unexpected consequences
derailing our ability to achieve our intended consequences—our goals).
The law of unintended consequences has been a recognized part of econom-
ics, politics, and sociology for centuries. It can, in fact, be found in all aspects of
human endeavor. It is the result of not carefully evaluating a course of action and
exploring all potential consequences. Robert Merton, an American sociologist,
in his 1936 article ‘‘The Unintended Consequences of Purposive Social Action,’’
identified the five sources of unanticipated consequences for human activity.

1. Lack of knowledge: Inadequate data prevents accurate identification of
consequences. Proper planning requires fact-based decisions. Fact-based
decisions can be made only if all possible critical factors are explored,
data is collected, and confidence levels for the data are known (a brief
illustration follows this list).
2. Error: Data analysis is essential for fact-based decisions. The analysis
needs to be accurate and statistically sound or it will provide incorrect
information leading to erroneous decisions about consequences.
3. Imperious immediacy of interest: The desire for the intended consequence
of an action is so great that one purposefully chooses to ignore any un-
intended effects.
4. Basic values: The possibility of unintended consequences is ignored, since
the planned action is a direct result of the fundamental values of the deci-
sion makers.
5. Self-defeating predictions: The knowledge of the prediction of the in-
tended consequence of the action inspires individuals to change behavior
and thereby changes the resulting consequence.
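The ‘‘confidence levels’’ mentioned in the first source above can be made concrete with a small calculation. The sketch below is ours, not the authors’, and the sample data are hypothetical; it computes a 95 percent confidence interval for a sample mean, the kind of statement a fact-based decision would rest on.

```python
# Illustrative only: a 95% confidence interval for a sample mean, the kind of
# "confidence level" a fact-based decision would report. Data are hypothetical.
import math
import statistics

cycle_times = [41.2, 39.8, 42.5, 40.1, 43.0, 38.7, 41.9, 40.4]  # minutes, hypothetical

n = len(cycle_times)
mean = statistics.mean(cycle_times)
std_err = statistics.stdev(cycle_times) / math.sqrt(n)

t_crit = 2.365  # two-sided 95% t value for n - 1 = 7 degrees of freedom
margin = t_crit * std_err

print(f"Mean cycle time: {mean:.2f} minutes")
print(f"95% confidence interval: {mean - margin:.2f} to {mean + margin:.2f} minutes")
```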

These limiting factors are centric to ‘‘management by best seller.’’ Deploying
a single program of improvement strategies ignores the interactions of the many
factors, systems, and processes of an organization. It focuses on quick fixes to
save the day and ignores, or at least trivializes, negative consequences. It also
leads to manipulation of results to match the required goals. ‘‘We must succeed;
therefore, we will declare success for the stockholders, the board of directors,
the boss! And we will demonstrate it even if we have to creatively manipulate
the numbers.’’ This approach will always, therefore, appear to solve the imme-
diate need, but will in the long term create an opportunity to look for another
‘‘best seller.’’
Several methodologies are currently being implemented throughout business,
industry, and government that provide significant benefits to their organizations
(e.g., Six Sigma, Lean thinking, Lean Six Sigma, theory of constraints, continu-
ous process improvement, and Design for Lean Six Sigma). Some are interre-
lated, and all aim at doing the right things efficiently and profitably. In most
instances, they are deployed to improve operations by reducing costs and
improving cycle times. In some instances, efficiency is sought without regard to
quality, which leads to efficiently producing scrap, rework, repair, and increas-
ing customer dissatisfaction. Furthermore, this approach unwittingly develops a
philosophy of ‘‘saving their way’’ to prosperity, which of course is not possible.
This may solve an immediate need, but will in the long run lead to failure. On
the other hand, one can spend one’s way to bankruptcy. So, how do we lead our
organizations to success?
Many point to Toyota and others as examples of companies to learn from and
emulate. The focus is usually limited to the efficiency aspects of their opera-
tions, ignoring the other critical elements of their success: Toyota is a great
company by all metrics because it sells its way to prosperity . . . it satisfies and
delights its customers with cost-effective and efficient operations. The Toyota
Production System is an important part of its corporate DNA, but so is the com-
pany’s focus on customer satisfaction, the way it develops products and ser-
vices, and its management and leadership. The Toyota secret to success is a way
of thinking that provides long-term focus on satisfying customers in the most
cost-effective and efficient manner. This requires an integrated approach involv-
ing all aspects of the enterprise and all the members of the enterprise. The lead-
ership teams in continuously successful enterprises like Toyota understand a
principle of physics that seems to elude many: When a ship sinks, the entire ship
sinks! There is no place in today’s environment for protectionism and the rice-
bowl-defending mentality. We need to establish enterprises with collaborative
and supportive infrastructures focused on achieving the mission, vision, and
goals of our enterprise.
Over the past 20 years we have worked with many successful companies and
government agencies that have become agile and flexible in their operations
with a commitment to their customers leading them to prosper and thrive as they
satisfy and delight their customers. They have accomplished this by taking a
long-term, enterprise view of their business and then tailoring and innovating
the best practices and methodologies for achieving their goals. Their successes
are easily measured by their profits, increased value, increased stakeholder sat-
isfaction, increased market share, increased employee satisfaction, and winning
awards such as Malcolm Baldrige National Quality Award. We call this Enter-
prise Excellence.

ENTERPRISE EXCELLENCE

An enterprise is defined as a systematic purposeful activity. This applies to busi-
ness, industry, academic, government, and military organizations. Every new
enterprise starts with a vision. This vision is translated into an enterprise-level
mission statement and set of goals. In order to achieve the goals, satisfy the
mission requirements, and achieve the vision, eight critical functions need to be
performed. All of these functions need to be performed within the enterprise, no
matter how large or small, whether business, industry, or government:

1. Strategic planning
2. Market and customer research and communication
3. Research and technology development
4. Product, service, and process design
5. Product and service commercialization
6. Postlaunch production
7. Product and service support
8. Measurement, analysis, and knowledge management

The specific organization structure and the attendant roles and responsibili-
ties will need to be guided by your environment, key working relationships,
strategic challenges, advantages, industry, and culture. But in all cases, each of
the eight functions needs to be accounted for and the appropriate infrastructure,
policies, guidelines, and processes established.

Strategic Planning
An enterprise starts with an idea, a need, or an opportunity. This is formulated in
a vision and mission. These need to be clear and concise, providing unequivocal
guidance for the direction of the enterprise—what are you trying to accomplish?
Success will depend on every individual in the organization knowing, under-
standing, and fully embracing the purpose and direction of the enterprise. Suc-
cessfully achieving the vision and mission requires members of the leadership
team to develop and deploy their enterprise values, vision, mission, goals, and
objectives in the enterprise strategic plan. This plan documents the direction for
the organization (i.e., what customer base it will serve, what technology it will
pursue, what types of products and services it will provide, how it will measure
success). It also provides the foundation for the structure of the enterprise as
well as the roles and responsibilities of each function and the workforce within
each work center.
Regular reviews of the strategic plan are required to adjust to the changing
circumstances. Requirements are needed within the strategic plan for regular,
periodic monitoring, measuring, evaluating, and reporting of progress. At a min-
imum, the plan needs to be reviewed, revised, and published annually. Conform-
ance to the plan needs to be deployed to all organizations and incorporated into
all employee performance goals and objectives.

Market and Customer Research and Communication


Market and customer research and communication are critical to collect the data
necessary to develop the strategies for strategic planning. This is the function
that identifies which opportunities within the customer groups and market seg-
ments to pursue. This is the function that develops the voice of the customer
(VOC) data that identifies customers’ requirements and expectations and
measures their satisfaction. The information developed here provides the basis
for the fact-based decisions about what technology to pursue and what products
and services to offer. How the enterprise communicates with the customers and
the marketplace, and the nature of that communication, will influence and even
shape customer expectations and requirements.

Research and Technology Development


When we rush to the marketplace with immature technology or with a product
or service that is the result of poor technology transfer, the resulting problems
with production and support are costly in scrap, rework, repair, customer con-
siderations, warranty repairs, and lost customer and marketplace confidence.
Therefore, sustainable growth requires development of mature technologies
transferred effectively and efficiently to the mix of products and services pro-
duced by the enterprise.

Product, Service, and Process Design


After the enterprise strategies are established, it is necessary to define the offer-
ings for the customer and design the total customer experience. This is accom-
plished by determining the customer requirements and expectations, defining
what our differentiating characteristics will be, and designing the customer-
performance model.

Product and Service Commercialization


Our goal is to develop products, services, and processes that are robust to sour-
ces of destabilizing variation. This function is a systems engineering and inte-
gration approach to efficiently realizing the customer performance model in the
product, service, and process delivered to the customer. This function includes
selection of materials, design of production processes, make-buy decisions, and
variability reduction activities. Design for Lean Six Sigma is the methodology
that effectively accomplishes this goal.

Postlaunch Production
Our purpose is to cost-effectively produce the products and services, on sched-
ule, that meet or exceed the expectations of the customer, as defined by the cus-
tomer. This includes all in-house activities that add value to the materials to
produce the products and services offered to the customers. Postlaunch produc-
tion includes all activities to produce the products and services after initial
development.

[FIGURE 1.1 Enterprise Functions: strategic planning; market and customer research and communication; research and technology development; product, service, and process design; product, service, and process commercialization; postlaunch production; product and service support; and measurement, analysis, and knowledge management, linked by the flow of information, products, and services.]

Product and Service Support

After the product or service is delivered to a customer, there are continued op-
portunities to serve the customer: troubleshooting, repair, replacement, data
collection, and communication activities. These support activities are critical to
customer satisfaction and provide an opportunity to collect further information
about customer wants and desires. Such follow-up provides a mechanism for
communicating information to the customers and thereby provides an opportu-
nity to shape expectations. In the military environment, this function takes on a
different dynamic: product and service support in-
cludes organic and depot maintenance and disposal at the end of the product
life.

Measurement, Analysis, and Knowledge Management


The effectiveness of the previous seven functions depends on the quality, reli-
ability, timeliness, and availability of data and information. This function is
therefore the central nervous system for the enterprise. It provides the policies,
guidelines, and requirements for the processes for selecting, collecting, align-
ing, and integrating data and information for tracking daily operations. It estab-
lishes the key enterprise performance metrics and provides for regular periodic
performance reviews. This function is the foundation for fact-based decision
making within the enterprise.
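As a minimal illustration of what this function manages, the sketch below (our example, not the book’s; all names and values are hypothetical) shows the kind of record a key enterprise performance metric might carry to support regular, periodic, fact-based reviews.

```python
# A sketch (hypothetical names and values) of a key enterprise performance
# metric record supporting regular, periodic, fact-based reviews.
from dataclasses import dataclass

@dataclass
class PerformanceMetric:
    name: str          # what is measured
    owner: str         # function or work center accountable for the data
    unit: str          # unit of measure
    target: float      # goal deployed from the strategic plan
    actual: float      # latest measured value
    review_cycle: str  # how often leadership reviews it

    def gap(self) -> float:
        """Shortfall (positive) or surplus (negative) against the target."""
        return self.target - self.actual

on_time_delivery = PerformanceMetric(
    name="On-time delivery",
    owner="Postlaunch Production",
    unit="percent",
    target=98.0,
    actual=94.5,
    review_cycle="monthly",
)
print(f"{on_time_delivery.name}: gap to target = {on_time_delivery.gap():.1f} {on_time_delivery.unit}")
```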
The relationship of the enterprise functions and the flow of information,
products, and services is illustrated in Figure 1.1.
Enterprise Excellence is a holistic approach for establishing an agile, flexible
enterprise and managing it to thrive in the twenty-first century. The Enterprise
Excellence methodology focuses on optimizing the critical success factors of
quality, cost, schedule, and risk, to achieve your goals. It facilitates the improve-
ment of the operations of the organization and focuses the leadership, manage-
ment, and technology on the critical systems and processes of the enterprise.
The successful deployment of Enterprise Excellence results in an organization
with a fact-based decision-making culture. The infrastructure and processes of
Enterprise Excellence create an agile and flexible organization capable of
quickly addressing problems, changing requirements, changing markets, chang-
ing technology, changing missions, and so on. These traits will lead to reduced
costs, reduced cycle time, reduced risk, maximized customer satisfaction, and
increased value of the organization.

ENTERPRISE EXCELLENCE MODEL

Each organization operates as an enterprise in that it is a collection of processes
focused on producing products and/or services for customers with the goal of
producing a profit and increasing its value. Profit and value may be defined as
money, capital equipment, real estate, improved efficiency, experience, influ-
ence, and so on. This principle applies regardless of the type of enterprise (com-
mercial, industrial, nonprofit, or government), its size, or the technology
involved. In all instances, an enterprise is focused on staying healthy and in-
creasing its value to its customers. Customer satisfaction is therefore the key to
increasing the value of the enterprise. The goal is to cost-effectively satisfy the
customers’ requirements and expectations, to increase market share, and to raise
value to the stakeholders.
As indicated in Figure 1.2, Enterprise Excellence begins with establishing
a management system and a voice of the customer system (VOCS). These first
two elements of the model ensure the organization is focused on the require-
ments and expectations of the customer and that it has the infrastructure in place
for managing the enterprise to achieve a competitive edge. The management
system establishes the infrastructure, processes, and procedures necessary for
leading and managing the organization. The VOCS is accomplished through
implementation of Design for Lean Six Sigma (DFLSS), which establishes the
infrastructure, processes, and procedures for performing research, technology
development, and transfer and developing the products, services, and processes
necessary for cost-effectively satisfying customer requirements and expecta-
tions. Throughout its life cycle, the enterprise needs to continually monitor,
evaluate, and report performance of the enterprise (continuous measurable im-
provement). This is critical to establishing the desired agility and flexibility to
thrive in the twenty-first century.

Enterprise Management System


The management system represents the basic management approach of the en-
terprise. This basic approach will reflect the culture of the organization and how
the enterprise will be managed.

[FIGURE 1.2 Enterprise Excellence Model: the Enterprise Management System (1. Leadership: A. Strategic Planning, B. Customer and Market Focus, C. Workforce Focus; 2. Quality Management System) and the Voice of the Customer System (1. Market & Customer Research & Communication, 2. Research & Technology Development, 3. Product, Service & Process Design, 4. Product, Service & Process Commercialization, 5. Postlaunch Production, 6. Product & Service Support), together with Continuous Process Improvement (1. Six Sigma, 2. Lean, 3. Theory of Constraints), lead to Enterprise Excellence.]

The elements of the management system in-
clude leadership and the quality management system.
The leadership element is where we establish the organizational values, vi-
sion, mission, goals, and objectives. We establish the methodology for deploy-
ing these throughout the organization and communicate them to the workforce,
key suppliers, partners, customers, and other stakeholders. This process pro-
vides for a collaborative and supportive deployment of goals and objectives
from the executive leadership team throughout the organization.
The quality management system (QMS) provides an organization with a set
of processes that ensure a structured, logical approach to the management of the
organization. These processes are geared to ensure consistency and improve-
ment of working practices, which in turn should provide products and services
that meet customers’ requirements. The most commonly used international
standard that provides a framework for an effective quality management system
is ISO 9001:2000.
While ISO 9001:2000 doesn’t define what quality is for a particular product
or industry, it does define the requirements for a management system to control
processes for quality. The standards represent a consensus on what constitutes
good management practices that will enable an organization to reliably deliver
products or services that meet the requirements of the customer. By using
procedures and processes like those presented in ISO 9001:2000, organizations
will reliably produce goods and services that meet the needs and requirements
of their customers.
Baseline QMS requirements are:

• Processes and procedures with controls
• An organizational structure with defined management roles and responsibilities
• Processes communicated throughout organization
• A method for decision making
• Commitment to continuous measurable improvement

Some standardized systems are:

• Voluntary (e.g., ISO 9001:2000)
• Regulatory requirements (e.g., FDA)
• Some support going beyond minimum compliance to Enterprise Excellence

One of the most critical elements of a quality management system is a com-
mitment to continuous process improvement (CPI). There are many stan-
dardized quality management systems, such as ISO 9001:2000, QS-9000,
ISO 14000, and so forth. All of these provide a good structured approach for
establishing and maintaining a basic quality management system.

Voice of the Customer System


VOCS includes the policies and guidelines, infrastructure, and processes that
address the requirements and expectations of both internal and external custom-
ers: internal customers for developing the processes to design, develop, produce,
and deliver the products and services to the external customers; external cus-
tomers to ensure satisfying their requirements and expectations for the products
and services of the enterprise. This is the essence for providing a customer-
focused enterprise.
Voice of the customer system refers to a commitment and systems engineer-
ing approach for knowing and understanding the full scope of customer require-
ments and needs, then using this knowledge to cost-effectively satisfy the
customers from concept to obsolescence and disposal. The systems engineering
approach provides for both the design of the system’s components and the inte-
gration of those components into a qualified system acceptable to the entire cus-
tomer set across the life cycle of the system. The key concept underlying the
implementation of this process is concurrent engineering. The tools and tech-
niques for execution of each step of the process are the engineering methodolo-
gies. This approach is supported by Design for Lean Six Sigma (DFLSS), which
consists of a focused process for identifying customer requirements and expect-
ations; establishing robust products, services, and processes; and using inte-
grated product and process development (IPPD) to develop the products,
services, and the processes for producing them.
Design for Lean Six Sigma provides a process, discipline, and methodology
that supports systems engineering and ensures effectively and efficiently satisfy-
ing the requirements and expectations of the entire customer set during the con-
ceptual and preliminary design, detailed design and integration, and production
periods of the life cycle. Continuous measurable improvement brings a
methodology to cost-effectively improve the processes and products during pro-


duction and the use, refinement, and disposal periods of the life cycle.
There are two processes and sets of tools and techniques in Design for Lean
Six Sigma. The first is for the development of new technologies and preparing to
transfer them to new products and services. This process is Invent/Innovate-
Develop-Optimize-Verify (I2DOV). The second is for the design and development
of new products, services, and processes. This is Concept-Design-Optimize-
Verify. The two processes are similar, use many of the same tools and concepts,
but are focused on different goals. The first is focused on developing new technol-
ogies and preparing them for transfer to new products, services, and processes.
The second is focused on developing new products, services, or processes.

CONTINUOUS MEASURABLE IMPROVEMENT

Fact-based decision making is centric to Enterprise Excellence. Continuous
measurable improvement (CMI) is the methodology for monitoring, measuring,
and evaluating our operations to provide the data for fact-based decision making
and to continually improve operations in order to achieve our vision. There are
two major methodologies in continuous measurable improvement (CMI). One
focuses on effectiveness (Six Sigma) by reducing variability, improving quality,
and creating robust processes. The other (Lean) focuses on efficiency by re-
moving waste. Which approach we pursue when a problem or opportunity is
identified depends on its nature and scope. We would always prefer to be effec-
tive and then determine how to eliminate waste and become efficient. There are
instances when emergent requirements drive us to become efficient first (e.g.,
schedule or waste issues are overpowering the organization). As illustrated in
Figure 1.3 if we pursue effectiveness we must then Lean the process, and if we
Lean first we must then evaluate the effectiveness to eliminate variability.

[FIGURE 1.3 Continuous Measurable Improvement: project selection feeds two parallel tracks, Six Sigma (process measurement, process analysis, process improvement) and Lean (process measurement, process analysis, process improvement), both of which lead to process control.]

Six Sigma (6σ)

Six Sigma is a disciplined, structured approach for process, product, and service
optimization focused on quality improvements, reducing process variability, and
increasing process and product robustness. Additionally, the goal of Six Sigma
activities is improving the bottom line of the organization (i.e., improving prod-
ucts, services, and processes to collaboratively support achieving the vision,
mission, goals, and objectives of the enterprise).
Six Sigma provides an infrastructure, a well-defined tool set, and a process
intended to be used in new product/process development and for improvement
projects for existing products and services. In the development of products, ser-
vices, and processes Six Sigma provides the methodology and tools for achiev-
ing the required robustness and effectiveness of processes. Once in production,
Six Sigma provides a focused approach and well-defined tool set for achieving
continuous measurable improvement. If used appropriately, Six Sigma will re-
sult in directly improving the bottom line of an organization by improving qual-
ity and meeting operating schedules while reducing costs and risks. Six Sigma
provides a specific tool set and instructions for applying the tools. The Six Sigma
methodology for reducing variability and improving effectiveness is referred
to as Define-Measure-Analyze-Improve-Control (DMAIC). The Six Sigma
methodology is focused on process, product, and service effectiveness improve-
ment and therefore includes tools and techniques unique to variability reduction,
but also uses some that are also part of Lean.
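To make the Six Sigma yardstick concrete, the short sketch below (ours, not the book’s; the defect counts are hypothetical) estimates a process sigma level from defect data using the conventional defects-per-million-opportunities calculation and the customary 1.5-sigma shift.

```python
# A sketch (hypothetical counts) of the conventional sigma-level estimate:
# defects per million opportunities (DPMO) converted to a short-term sigma
# level using the customary 1.5-sigma shift.
from statistics import NormalDist

defects = 350
units = 10_000
opportunities_per_unit = 5  # defect opportunities per unit (assumed)

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
process_yield = 1 - dpmo / 1_000_000

sigma_level = NormalDist().inv_cdf(process_yield) + 1.5  # z of the yield plus the shift

print(f"DPMO: {dpmo:,.0f}")
print(f"Approximate sigma level: {sigma_level:.2f}")
```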

Lean
The Lean methodology, sometimes referred to as Lean enterprise or Lean think-
ing, represents the manner in which organizations must be managed in a highly
competitive environment. This concept embodies a collective set of principles,
tools, and application methodologies that enable organizations to remove waste
from the system and achieve dramatic competitive advantages in development,
cost, quality, and delivery performance. It is a methodology intended to increase
the efficiency of an organization’s operation by eliminating or minimizing
waste. Lean provides a systems engineering approach to the efficiency of the
enterprise. It is concerned with eliminating waste, streamlining operations, and
coordinating activities that will directly affect the bottom line of an organization
or company. Integrated in the voice of the customer system, Lean ensures the
optimal efficiency in the production of products and services and assists in early
detection and correction of problems. The Lean methodology for eliminating
waste and improving efficiency is referred to as Define-Measure-Analyze-Lean-
Control (DMALC). The overall approach is the same as Six Sigma. It uses some
of the same tools and techniques that are part of Six Sigma but has some that are
unique to Lean.
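As a small illustration of how Lean efficiency is quantified, the sketch below (ours, not the book’s; all inputs are hypothetical) computes takt time, a pacing measure covered in Chapter 9, and a simple value-added time ratio.

```python
# A sketch (hypothetical inputs) of two common Lean efficiency measures:
# takt time (pace required to meet customer demand) and a value-added
# time ratio (share of lead time that actually transforms the product).
available_minutes_per_day = 450    # scheduled production time per day
customer_demand_per_day = 300      # units the customer requires per day

takt_time = available_minutes_per_day / customer_demand_per_day  # minutes per unit

value_added_minutes = 12           # time spent actually transforming the product
total_lead_time_minutes = 90       # total elapsed time through the process

value_added_ratio = value_added_minutes / total_lead_time_minutes

print(f"Takt time: {takt_time:.2f} minutes per unit")
print(f"Value-added ratio: {value_added_ratio:.1%}")
```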
The collaborative effect of the enterprise management system, voice of the
customer system, and continuous measurable improvement (Six Sigma and
Lean) is the clear understanding of the requirements and expectations of the
customers (internal and external) and the establishment of an infrastructure,
methodology, and comprehensive implementation strategy for ensuring that
high-quality products and services are cost-effectively provided. The focus is
on listening to the customer, understanding what the customer values, and effec-
tively and efficiently delivering customer satisfaction throughout the life cycle
of the system, product, and services.
In other words, the enterprise management system provides the answer to
‘‘what needs to be done and why?’’ This is collaborative and supportive of the
voice of the customer system, which provides the answers to ‘‘what, where, and
when?’’ Six Sigma provides the answer to ‘‘how do we achieve and maintain the
required product and process robustness?’’ Lean provides the answer to ‘‘what
is the waste in our system/environment and how do we eliminate it?’’
In this way we see the strategies do not conflict, nor are they meant to be in
competition with each other, but are collaborative and supportive. They not only
provide positive contributions in their own right, but are enhanced and suppor-
tive when used together. For this reason, the use of multiple strategies, in a ho-
listic manner, needs to be seriously considered when an organization realizes
the need for improvement. However, it is important to remember that whenever
more than one of these improvement strategies is adopted, their common char-
acteristics and complementary aspects should be taken into account.
To achieve this balance between effectiveness (Six Sigma) and efficiency
(Lean), it is important not to segregate or departmentalize any of these strategies
from the others when deploying them. Segregating them would yield little or no
improvement at best and, at worst, waste activities that use up precious resources. There-
fore, the integrated deployment of these strategies ensures the cultural and
organizational changes essential for the success of the enterprise. Existing busi-
ness processes must be made to be effective and then efficient. This is the road
map to process optimization and a direct route to improving the bottom line of
any organization.

ACHIEVING ENTERPRISE EXCELLENCE

Once the decision is made to implement Enterprise Excellence the question is,
where do we start? In other words, how do we deploy Enterprise Excellence?
How do we change the way the enterprise operates and institute a new way of
thinking and operating? This is in fact a change in the culture of the enterprise.
There are three common deployment strategies: deployment by pilot study,
project-by-project deployment, and enterprise-wide deployment.

Deployment by Pilot Study


Many organizations will choose to use the deployment by pilot study strategy.
This is a low-risk and low-investment approach. If people are unsure of the con-
sequences of a particular course of action and are cautious and fearful of the
consequences of failure, this is the strategy they will select. In this strategy we
select a single function within the organization and implement Enterprise Ex-
cellence. If it achieves the desired goals, we select other areas to begin ‘‘trying’’
to implement Enterprise Excellence. This strategy does not foster commitment
due to limited involvement of the leadership team and the narrow scope of im-
plementation. Deployment by pilot study will not yield a large return on invest-
ment or effect a cultural change. In fact, just the opposite is true. The functions
involved with the pilot studies will be viewed with skepticism by the rest of the
organization. This will limit success, since all processes within the enterprise
are, by their nature, cross-functional and multidisciplined. Each process has
customers and suppliers within and outside of its organization.

Project-by-Project Deployment
If people are confident that a selected course of action is a good idea but still
have reservations and concerns about the consequences of failure, this is the
strategy they may select. This is the typical strategy used in organizations that
implement Six Sigma or Lean. In this strategy, improvement projects are identi-
fied, and cross-functional, multidiscipline teams are developed to address spe-
cific problems or opportunities. Typically, the interrelationship of problems and
opportunities are ignored in the implementation of this strategy. This strategy
frequently results in optimizing one area of the enterprise at the cost of subopti-
mizing the enterprise. This strategy may show a moderate to high return on in-
vestment with moderate risk, but despite a wide scope will result in only a
shallow effect on the organization. The cultural change resulting from this
method of deployment is slow and uncoordinated.

Enterprise-Wide Deployment
Enterprise-wide deployment requires executive commitment. It begins with the
decision to implement a change in the culture of the enterprise through the im-
plementation of new processes and techniques. Deployment by pilot study and
project-by-project are limiting strategies that cautiously test Enterprise Excel-
lence. A strategy focused on the entire enterprise is required to achieve the full
benefits of Enterprise Excellence. This is an enterprise-wide deployment strat-
egy led by the executive leadership team and deployed throughout the entire
organization in a structured, planned method. It requires major commitment of
resources, yet has a very low risk of failure and is the quickest way to achieve
the organizational transformation to Enterprise Excellence. This strategy,
through its top-down coordination, results in a broad and deep implementation.
This results in a collaborative and sustained cultural transformation. This trans-
formation is facilitated through the creation of cross-functional, multidiscipline
team members working together to improve the effectiveness and efficiency of
enterprise processes.

Enterprise Excellence Deployment


Enterprise Excellence deployment begins first with a decision and commitment
to deploy Enterprise Excellence. Once that decision has been made, the next
step is to perform the assessment. An enterprise senior review group (ESRG) is
established to lead the assessment and to prepare for the development of the
Enterprise Excellence deployment plan. This executive leadership team’s initial
responsibility is to ensure the assessment is a thorough evaluation of the organ-
ization infrastructure and performance against the Enterprise Excellence model.
This includes review of the organization’s management system, voice of the
customer system, and continuous measurable improvement infrastructure and
implementation. This evaluation is analyzed to define the existing state and
compare it with the desired state. The results of the assessment are used to de-
velop a recommended course of action to close the gap and achieve the desired
state. The assessment is documented in an assessment report to include findings,
conclusions, recommendations, and plan of action and milestones (POA&M).
The enterprise senior review group (ESRG) is the senior leadership team of the
enterprise who will lead the deployment of Enterprise Excellence. Initially, the
ESRG members will meet three days per month for six months. Thereafter, they
will meet at least monthly. The first six sessions will be focused on using the
Enterprise Excellence assessment (POA&M) to define specific actions, roles,
and responsibilities for implementing the plan. They will also receive training
on the Enterprise Excellence methodology, process, and leading the transforma-
tion. These early sessions will also be used to (1) refine the enterprise values,
vision, goals, and objectives, (2) develop the enterprise value stream, (3) define
a portfolio of enterprise improvement projects, (4) identify a group of Lean Six
Sigma Black Belt candidates, and (5) identify a resource to provide the requisite
Lean Six Sigma Black Belts and continuous measurable improvement subject
matter experts (SMEs) development. The initial Black Belt candidates are indi-
viduals who will lead the first set of enterprise-level projects identified by the
ESRG. These individuals will be the core internal support for the deployment of
Enterprise Excellence.
In parallel with the formation of the enterprise senior review group, the initial
group of Lean Six Sigma Black Belt candidates will be-
gin their training. Additionally, Champion workshops will be initiated for the sen-
ior managers. These workshops will prepare the senior managers to lead the
implementation of the Enterprise Excellence strategy. These workshops will pro-
vide the senior managers an awareness and appreciation of Enterprise Excellence,
its methodology, processes, and tools. It will prepare the managers to sponsor and
champion Enterprise Excellence deployment activities. This will include an ap-
plication skill level for leading Enterprise Excellence implementation. Monthly
reviews will be performed by the ESRG to evaluate implementation progress, re-
view and approve improvement projects, and reprioritize actions, as required.
As defined in the assessment report, VOCS activities are prioritized by the
enterprise senior review group. This includes defining (1) research and technol-
ogy development, (2) product, service, and process design, and (3) product and
service commercialization processes for the enterprise. Implementation plans
are developed and implemented. At this time, customer and employee feedback
systems are established or improved, as required.
After the initial wave of Black Belts begins training, and as the Champion
workshops are being conducted, Green Belt and additional Black Belt training
will begin. The training goals will be established by the enterprise senior review
group; however, generally, all supervisory personnel need to be Green Belts.
Requirements for Black Belts will depend on the size of the organization and
the nature of its business. In addition, the ESRG will have developed a deploy-
ment plan that will define requirements for the Master Black Belts. This plan will
provide for developing a cadre of Black Belts and Master Black Belts for estab-
lishing a self-sufficient infrastructure and strategy.
Depending on the commitment and resources, this strategy will deliver a self-
sustaining, organizational transformation within three years. This will be an
agile organization capable of quickly addressing problems, changing require-
ments, changing markets, changing technology, changing missions, and so on.
It will be an organization with a culture based on fact-based decision making,
and a self-sufficient workforce that is trained to employ the tools and methods
of fact-based decision making.
The deployment of Enterprise Excellence enables the organization to opti-
mize the critical success factors of quality, cost, schedule, and risk. It uses a
holistic, collaborative approach for managing and improving operations of the
organization, and it focuses the leadership, management, and technology on the
critical systems and processes of the enterprise. This is accomplished through a
focused, collaborative deployment of the three critical elements of Enterprise
Excellence: enterprise management system, voice of the customer system, and continu-
ous measurable improvement. The details of the deployment depend on the spe-
cific situation of your organization. The assessment points to those areas of
immediate concern and prioritizes the actions necessary for success. The appro-
priate methods, processes, and tools are then selected and the solution is adapted
for the need. This focused, collaborative, and holistic approach leads to achiev-
ing the competitive edge for business, industry, and government agencies.

KEY POINTS

Enterprise Excellence
An enterprise is defined as a systematic purposeful activity. Every new enter-
prise starts with a vision. This vision is translated into an enterprise-level mis-
sion statement and set of goals. To achieve the vision requires eight critical
functions:

1. Strategic planning
2. Market and customer research and communication
3. Research and technology development
4. Product, service, and process design
5. Product, service, and process commercialization
6. Postlaunch production
7. Product and service support
8. Measurement, analysis, and knowledge management

Strategic Planning
Successfully achieving the vision and mission requires the leadership team
members to develop and deploy their enterprise values, vision, mission, goals,
and objectives in the enterprise strategic plan. This plan documents the direction
for the organization. It provides the foundation for the structure of the enterprise
as well as the roles and responsibilities of each function and the workforce with-
in each work center.

Market and Customer Research and Communication


This is the function that develops the voice of the customer (VOC) data that
identifies customers’ requirements and expectations and measures their satis-
faction. The information developed here provides the basis for the fact-based
decisions about what technology to pursue and what products and services to
offer. What and how the enterprise communicates with the customers and the
marketplace will influence and even shape their expectations and requirements.

Research and Technology Development


Sustainable growth requires development of mature technologies transferred ef-
fectively and efficiently to the mix of products and services produced by the
enterprise.

Product, Service, and Process Design


Products, services, and processes require knowing and understanding customer
requirements and expectations, defining differentiating characteristics, and then
designing the products and services.

Product, Service, and Process Commercialization


This function is a systems engineering and integration approach that includes
selection of materials, design of production processes, make-buy decisions, and
variability reduction activities.

Postlaunch Production
Postlaunch production includes all activities to produce the products and ser-
vices after initial development.

Product and Service Support


After the product or service is delivered to a customer, there are continued op-
portunities to serve the customer. These support activities are critical to custom-
er satisfaction and provide an opportunity to collect further information about
customer wants and desires.

Measurement, Analysis, and Knowledge Management


The effectiveness of the previous seven functions depends on the quality, reli-
ability, timeliness, and availability of data and information. This function pro-
vides the policies, guidelines, and requirements for the processes for selecting,
collecting, aligning, and integrating data and information for tracking daily op-
erations. It establishes the key enterprise performance metrics, and provides for
regular periodic performance reviews.

Enterprise Excellence Model


Enterprise Excellence begins with establishing a management system and a
voice of the customer system (VOCS). These first two elements ensure the or-
ganization is focused on the requirements and expectations of the customer and
has the infrastructure in place for managing the enterprise to achieve a competi-
tive edge. The continuous measurable improvement element of Enterprise Ex-
cellence is critical to establishing the desired agility and flexibility to thrive in
the twenty-first century.

Enterprise Management System


The management system represents the basic management approach of the en-
terprise. This basic approach reflects the culture of the organization and how the
enterprise is managed. The elements of the management system include leader-
ship and the quality management system.

Voice of the Customer System


VOCS includes the policies and guidelines, infrastructure, and processes that
address the requirements and expectations of both internal and external custom-
ers. Voice of the customer system refers to a commitment and systems engineer-
ing approach for knowing and understanding the full scope of customer
requirements and needs, then using this knowledge to cost-effectively satisfy
the customers from concept to obsolescence and disposal.
There are two processes and sets of tools and techniques in Design for
Lean Six Sigma. The first is for the development of new technologies and
preparing to transfer them to new products and services. This process is
Invent/Innovate-Develop-Optimize-Verify (I2DOV). The second is for the
design and development of new products, services, and processes. This is
Concept-Design-Optimize-Verify (CDOV).

Continuous Measurable Improvement


Fact-based decision making is central to Enterprise Excellence. Continuous
measurable improvement (CMI) is the methodology for monitoring, measuring,
and evaluating our operations to provide the data for fact-based decision making
and to continually improve operations in order to achieve our vision. There are
two major methodologies in CMI: one, Six Sigma, focuses on effectiveness by
reducing variability, improving quality, and creating robust processes; the other,
Lean, focuses on efficiency by removing waste.

Six Sigma (6σ)


Six Sigma is a disciplined, structured approach for process, product, and service
optimization focused on improving quality, reducing process variability, and
increasing process and product robustness.
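For a concrete sense of the arithmetic behind the sigma metric, the hedged sketch
below converts a defect count into defects per million opportunities (DPMO) and an
approximate process sigma using the conventional 1.5-sigma long-term shift. The
unit, defect, and opportunity counts are invented for illustration only.

```python
# Illustrative only: DPMO and approximate process sigma with the conventional
# 1.5-sigma shift. The unit, defect, and opportunity counts are invented.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def process_sigma(defects, units, opportunities_per_unit, shift=1.5):
    """Approximate sigma level: z-score of the yield plus the long-term shift."""
    defect_rate = defects / (units * opportunities_per_unit)
    return NormalDist().inv_cdf(1 - defect_rate) + shift

if __name__ == "__main__":
    # 27 defects across 1,500 units with 4 opportunities each -> 4,500 DPMO.
    print(round(dpmo(defects=27, units=1_500, opportunities_per_unit=4)))
    print(round(process_sigma(defects=27, units=1_500, opportunities_per_unit=4), 2))
```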

Lean
The Lean methodology represents the manner in which organizations must be
managed in a highly competitive environment. This concept embodies a collec-
tive set of principles, tools, and application methodologies that enable organiza-
tions to remove waste from the system and achieve dramatic competitive
advantages in development, cost, quality, and delivery performance.
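Lean improvement work often begins with simple capacity arithmetic such as takt
time, the available work time divided by customer demand. The short sketch below
shows that calculation with made-up shift and demand figures; the numbers are
assumptions for illustration, not data from this book.

```python
# Illustrative takt-time arithmetic with invented shift and demand figures.
def takt_time_minutes(available_minutes_per_day, units_demanded_per_day):
    """Takt time: available work time divided by customer demand."""
    return available_minutes_per_day / units_demanded_per_day

if __name__ == "__main__":
    # One 8-hour shift less 60 minutes of breaks, demand of 420 units per day.
    available = 8 * 60 - 60            # 420 minutes of available work time
    print(takt_time_minutes(available, 420))   # 1.0 minute per unit
```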

Achieving Enterprise Excellence


Enterprise Excellence Deployment
Enterprise Excellence deployment begins with a decision and commitment
to deploy Enterprise Excellence. Once that decision has been made, the next
step is to perform the assessment. An enterprise senior review group (ESRG) is
established to lead the assessment and to prepare for the development of the
Enterprise Excellence deployment plan. This executive leadership team’s initial
responsibility is to ensure the assessment is a thorough evaluation of the organ-
ization's infrastructure and performance against the Enterprise Excellence model.
This includes review of the organization’s management system, voice of the
customer system, and continuous measurable improvement infrastructure and
implementation. The results of the assessment are used to develop a recom-
mended course of action to close the gap and achieve the desired state.
The enterprise senior review group (ESRG) is the senior leadership team of the
enterprise that will lead the deployment of Enterprise Excellence. The ESRG
will (1) refine the enterprise values, vision, goals, and objectives, (2) develop
the enterprise value stream, (3) define a portfolio of enterprise improvement
projects, (4) identify a group of Lean Six Sigma Black Belt candidates, and
(5) identify a resource to provide development of the requisite Lean Six Sigma
Black Belts and continuous measurable improvement subject matter experts
(SMEs).
the first set of enterprise-level projects identified by the ESRG. These individu-
als will be the core internal support for the deployment of Enterprise
Excellence.
Monthly reviews will be performed by the ESRG to evaluate implementation
progress, review and approve improvement projects, and reprioritize actions as
required.

As defined in the assessment report, VOCS activities are prioritized by the
enterprise senior review group. This includes defining (1) research and technol-
ogy development, (2) product, service, and process design, and (3) product and
service commercialization processes for the enterprise. Implementation plans
are developed and implemented. At this time, customer and employee feedback
systems are established or improved, as required.
After the initial wave of Black Belts begins training and as the Champion
workshops are conducted, Green Belt and additional Black Belt training will
begin. The training goals will be established by the enterprise senior review
group; however, generally, all supervisory personnel need to be Green Belts.
Depending on the commitment and resources, this strategy will deliver a self-
sustaining, organizational transformation within three years.
Deployment of Enterprise Excellence enables the organization to optimize
the critical success factors of quality, cost, schedule, and risk. It uses a holistic,
collaborative approach for managing and improving operations of the organiza-
tion and focuses the leadership, management, and technology on the critical sys-
tems and processes of the enterprise. This is accomplished through a focused,
collaborative deployment of the three critical elements of Enterprise Excel-
lence: the quality management system, the voice of the customer system, and
continuous measurable improvement. The details of the deployment depend on the specific
situation of your organization.

2
MANAGING AND LEADING
ENTERPRISE EXCELLENCE
There is nothing more difficult to manage, or more doubtful of success, or
more dangerous to handle than to take the lead in introducing a new
order of things.
Niccolò Machiavelli

Managing and leading the implementation of Enterprise Excellence is a chal-
lenge to management and leadership skills at every level of an organization. We
see change today in many forms: changes in our industrial infrastructure and
changes in our business, administrative, and government processes. The chal-
lenge today is that change is not narrowly focused on a single organizational
element, government or business process, or problem area. Change therefore
has become broad-based and continuous, including people, organizations, infra-
structure, policies, programs, business processes, and requirements. Leading
successful change requires skillful management and inspired leadership.
Management consists of the allocation and control of resources necessary to
accomplish a task. This resource management includes such things as finance,
personnel, time, facilities, technology, and equipment. The role of the leader is
to motivate others to act in such a way as to achieve specific goals and
objectives.

Lead People—Manage Things

In this chapter we discuss the management systems needed to form the founda-
tion of Enterprise Excellence and the leadership needed to achieve the goals and
objectives of your business transformation. Some of the discussions in this chapter
may seem very basic to the reader. These basic principles are presented to provide
a complete picture of the ‘‘whats and hows’’ of managing and leading change.

 Management systems
 Leading Enterprise Excellence
 Overcoming resistance to change

MANAGEMENT SYSTEMS

A management system is your philosophy of management; your organization of
management and staff; and your processes and procedures. It describes how you will do
business and deploy your requirements throughout the organization. It provides
an organization with a set of processes that ensure a structured, logical approach
to the management of the organization. These processes are geared to ensure
consistency and improvement of working practices, which in turn should pro-
vide products and services that meet customers’ requirements. ISO 9000 is the
most commonly used international standard that provides a framework for an
effective quality management system.
The term ISO 9000 is a generic name given to a family of standards de-
veloped to provide a framework around which a quality management system
can effectively be implemented. The standards define state-of-the-art
processes and procedures for defining and implementing a management
system.
These standards are not mandatory; organizations choose to adopt them. The
standards don’t define what quality is for a particular product or industry,
but they do define the requirements for a management system to control pro-
cesses for quality. The standards represent a consensus on what constitutes
good management practices that allow an organization to reliably deliver
products or services that meet the requirements of the customer. By using the
procedures and processes like those supplied by ISO, organizations reliably pro-
duce goods and services that meet the needs and requirements of their
customers.

Baseline QMS Requirements


In any management system there are baseline requirements. The following re-
quirements are imperative to any successful management system:

 Documented processes and procedures with controls that are fully implemented
 An organizational structure with defined management roles, responsibili-
ties, and accountability
 A specific method to communicate and promulgate the management sys-
tem throughout the organization
 A documented and implemented method for decision making
 Commitment to continuous measurable improvement

Many standardized systems are:
 Voluntary (e.g., ISO 9001:2000)
 Regulatory requirements (e.g., FDA)
 Specifically designed for the organization and its mission

Documented Processes and Procedures with Controls That Are Fully Implemented

The term documented processes simply means that they are in writing: proce-
dures, desktop guides, standard operating procedures, checklists, and so on.
Documented processes and procedures are important to the success of your
management system because they provide consistent direction on how to per-
form tasks. Without this documentation, ad hoc steps always creep into a pro-
cess and it is not executed effectively or efficiently. The documentation of the
process must be written for the individuals executing it. There is nothing more
frustrating than trying to execute a production procedure on the shop floor that
has been written by an engineer for engineers. These procedures must contain a
means for controlling the process. The procedure must:

 Be clear and concise
 Be written at the level of the individuals executing the procedure
 Be auditable from higher-level procedures down to work instructions
 Provide easily understood graphical work instructions and checklists
 Contain standards and expectations for the tasks to be performed
 Provide a quantifiable means of controlling the process

Documenting your business processes is the first step to knowing and under-
standing your process sufficiently to control and improve it. What is not docu-
mented cannot be measured; what cannot be measured cannot be controlled; and
what cannot be controlled cannot be improved.

An Organizational Structure with Defined Management Roles, Responsibilities, and Accountability

Being a manager is not a position or title; a manager performs a specific set of
roles and responsibilities within an organization. These roles and responsibili-
ties may change from organization to organization, but basically are:

 Organizational decision making


 Resource management
 Personnel management
 Time management

Your role as a manager is to guide and give direction so that your organiza-
tion (team) can perform effectively. You provide coaching, training, and sup-
port. In order for individuals to meet the needs and objectives, they may need
extra input, information, or skills. A manager is called upon to make a variety
of decisions and handle problems on a daily basis. You must identify problems,
create choices, and select courses of action. Your daily routine of management
will include how to communicate with employees, how to handle adversarial
situations, and how to bring about needed changes in the organization and your
management team. Management roles always involve thinking, planning, tac-
tics, strategies, alternatives, effectiveness, and efficiency.
A Specific Method to Communicate and Promulgate the Management
System throughout the Organization
This is a system by which you will communicate your management system from
the top to the bottom of your organization. This is how you will inform each and
every person in your organization of your policies, programs, expectations, and
requirements. It is accomplished in many ways, both automated and manual.
The methods follow the Hoshin Kanri approach to management by policy de-
ployment, discussed in Chapter 3 of this book. It will provide you with a trace-
able, auditable method to cascade your management system, ensuring that all
documentation is available to each employee.

A Documented and Implemented Method for Decision Making


The decision-making process is one of the most critical and most ignored in
business: How do you make business decisions? What data is needed? What is
your decision-making process? A typical decision-making process may comprise
the following steps; a brief worked sketch follows the list:

1. Clearly identify the decision to be made.


2. Gather all available information and data.
3. Identify alternatives.
4. Use the information, data, and alternatives to analyze your decision.
5. Make a fact-based selection among the alternatives.
6. Select and implement your decision.
7. Review the consequences of your decision and adjust as needed.
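
To make the fact-based selection in steps 4 and 5 concrete, here is a minimal
weighted-scoring sketch. The criteria, weights, alternatives, and scores are
hypothetical; they illustrate only how quantified data can rank options, not a
prescribed scoring scheme.

```python
# Hypothetical weighted-scoring matrix for choosing among alternatives.
# Criteria, weights, and scores are invented for illustration only.
def rank_alternatives(weights, scores):
    """Return alternatives sorted by weighted score, best first."""
    totals = {
        name: sum(weights[c] * score for c, score in criteria.items())
        for name, criteria in scores.items()
    }
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    weights = {"cost": 0.4, "quality": 0.35, "schedule": 0.25}
    scores = {
        "Alternative A": {"cost": 7, "quality": 9, "schedule": 6},
        "Alternative B": {"cost": 8, "quality": 6, "schedule": 9},
        "Alternative C": {"cost": 6, "quality": 8, "schedule": 7},
    }
    for name, total in rank_alternatives(weights, scores):
        print(f"{name}: {total:.2f}")
```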

Commitment to Continuous Measurable Improvement


This commitment must be unequivocal. Change agents, change managers, and
change leaders are in a very difficult position in any organization. The only way
to accomplish change is to be fully committed to a structured, disciplined
change process.

Management System Principles


There are eight management principles on which an effective and efficient man-
agement system is based. These principles can be used by you as a framework to
guide your organization toward improved performance. The eight quality man-
agement principles are derived from the ISO 9000:2000 standard. These stand-
ards and principles serve as an excellent guide to management systems. This is
true whether you are seeking to achieve ISO 9001:2000 certification or using the princi-
ples and standards as a guideline for your organization. The eight management
system principles are:

1. Customer focus
2. Leadership
3. Involvement of people
4. Process approach
5. System approach to management
6. Continuous measurable improvement
7. Fact-based decision making
8. Mutually beneficial supplier relationships

Principle 1: Customer Focus


Organizations (commercial, government, and industry) depend on their custom-
ers and therefore should understand current and future customer needs, meet
customer requirements, and strive to exceed customer expectations. Here are
some specific benefits of customer focus:

 Increased revenue and market share obtained through flexible and fast re-
sponses to market opportunities
 Increased effectiveness in the use of the organization’s resources to en-
hance customer satisfaction
 Improved customer loyalty leading to repeat business

Applying the principle of customer focus typically leads to:

 Researching and understanding customer needs and expectations


 Ensuring that the objectives of the organization are linked to customer
needs and expectations
 Communicating customer needs and expectations throughout the
organization
 Measuring customer satisfaction and acting on the results
 Systematically managing customer relationships
 Ensuring a balanced approach between satisfying customers and other in-
terested parties (e.g., owners, employees, suppliers, financiers, local com-
munities, society as a whole).

Principle 2: Leadership
Leaders establish unity of purpose and direction of the organization. They create
and maintain the internal environment in which people can become fully in-
volved in achieving the organization’s objectives. Enterprise Excellence leader-
ship will provide for:

 People who understand and are motivated toward the organization’s goals and objectives

 Activities that are evaluated, aligned, and implemented in a unified way
 Miscommunication between levels of an organization being minimized

Applying the principle of leadership typically leads to:

 Considering the needs of all interested parties, including customers, owners,


employees, suppliers, financiers, local communities, and society as a whole
 Establishing a clear vision of the organization’s future
 Setting challenging goals and targets
 Creating and sustaining shared values, fairness, and ethical role models at
all levels of the organization
 Establishing trust and eliminating fear
 Providing people with the required resources, training, and freedom to act
with responsibility and accountability
 Inspiring, encouraging, and recognizing people’s contributions

Principle 3: Involvement of People


People at all levels are the essence of an organization, and their full involvement
enables their abilities to be used for the organization’s benefit. Involving people
in the change process will lead to:

 Motivated, committed, and involved people within the organization


 Innovation and creativity in furthering the organization’s objectives
 People being accountable for their own performance
 People eager to participate in and contribute to continual improvement

Applying the principle of involvement of people typically leads to:

 People understanding the importance of their contribution and role in the


organization
 People identifying constraints to their performance
 People accepting ownership of problems and responsibility for solving
them
 People evaluating their performance against their personal goals and
objectives
 People actively seeking opportunities to enhance their competence, knowl-
edge, and experience
 People freely sharing knowledge and experience
 People openly discussing problems and issues

Principle 4: Process Approach


A desired result is achieved more efficiently when activities and related re-
sources are managed as a process. Knowing and understanding processes (ad-
ministrative, managerial, and industrial) will lead directly to:

 Lower costs and shorter cycle times through effective use of resources
 Improved, consistent, and predictable results
 Focused and prioritized improvement opportunities

Applying the principle of process approach typically leads to:

 Systematically defining the activities necessary to obtain a desired result


 Establishing clear responsibility and accountability for managing key
activities
 Analyzing and measuring of the capability of key activities
 Identifying the interfaces of key activities within and between the functions
of the organization
 Focusing on the factors such as resources, methods, and materials that will
improve key activities of the organization
 Evaluating risks, consequences, and impacts of activities on customers,
suppliers, and other interested parties

Principle 5: System Approach to Management


Identifying, understanding, and managing interrelated processes as a system
contributes to the organization’s effectiveness and efficiency in achieving its
objectives. Understanding the organization as a system of systems will lead
to:

 Integration and alignment of the processes that will best achieve the desir-
ed results
 Ability to focus effort on the key processes
 Providing confidence to interested parties regarding the consistency, effec-
tiveness, and efficiency of the organization

Applying the principle of system approach to management typically leads to:

 Structuring a system to achieve the organization’s objectives in the most


effective and efficient way
 Understanding the interdependencies between the processes of the system
 Structured approaches that harmonize and integrate processes
 Providing a better understanding of the roles and responsibilities necessary
for achieving common objectives and thereby reducing cross-functional
barriers
 Understanding organizational capabilities and establishing resource con-
straints prior to action
 Targeting and defining how specific activities within a system should
operate
 Continually improving the system through measurement and evaluation

Principle 6: Continuous Measurable Improvement (CMI)


Continuous measurable improvement is a principle that provides for organiza-
tional effectiveness and efficiency in a continuously competitive environment.
The competition may be for market share, resources, funding, or programs. This
principle applies equally in commercial and government environments. CMI
therefore must be a permanent function within an organization. On a continuous
basis we must seek:
 Business processes that are effective and efficient from concept to
customer
 Performance advantage through improved organizational capabilities
 Alignment of improvement activities at all levels to an organization’s stra-
tegic intent
 Flexibility to react quickly to opportunities

Applying the principle of continual improvement typically leads to:


 Employing a consistent organization-wide approach to continual improve-
ment of the organization’s performance
 Providing people with training in the methods and tools of continual
improvement
 Making continual improvement of products, processes, and systems an ob-
jective for every individual in the organization
 Establishing goals to guide, and measures to track, continual improvement
 Recognizing and acknowledging improvements

Principle 7: Fact-Based Decision Making


This means moving the decision-making process from the subjective to the ob-
jective. Effective decisions are based on the analysis of data and information that
are evaluated using a structured, documented process. This will lead directly to:
 Informed decisions
 An increased ability to demonstrate the effectiveness of past decisions
through reference to factual records
 Increased ability to review, challenge, and change opinions and decisions

Applying the principle of factual approach to decision making typically leads to:

 Ensuring that data and information are sufficiently accurate and reliable
 Making data accessible to those who need it
 Analyzing data and information using valid methods
 Making decisions and taking action based on factual analysis balanced
with experience and intuition

Principle 8: Mutually Beneficial Supplier Relationships


The products and services you produce are only as good as the materials and
services you receive from your supply chain. Supply chain management deploys
the requirements of Enterprise Excellence and CMI to your suppliers. This will
create value for both organizations and will provide:
 Increased ability to create value for both parties
 Flexibility and speed of joint responses to changing market or customer
needs and expectations
 Optimization of costs and resources

Applying the principles of mutually beneficial supplier relationships typically
leads to:

 Establishing relationships that balance short-term gains with long-term


considerations
 Pooling of expertise and resources with partners
 Identifying and selecting key suppliers
 Clear and open communication
 Sharing information and future plans
 Establishing joint development and improvement activities
 Inspiring, encouraging, and recognizing improvements and achievements
by suppliers

Many of these baseline requirements and principles are part of any manage-
ment system and of leadership itself. The following discussion of leading Enterprise
Excellence expands on many of these requirements and principles.
‘‘Artificial ignorance’’ occurs when people sacrifice truth in favor of rever-
ence or ritual. They follow the rules, practices, procedures, or law exactly with-
out thinking of the implications and results. People who practice artificial
ignorance behave without thinking about the reason behind the actions.

LEADING ENTERPRISE EXCELLENCE

A leader motivates others to action. Thus it is the motivation of others and their
actions that define a successful leader. In other words, leadership is the art and
science of getting others to perform and achieve a vision. Therefore, leadership
is not only reflected in performance, no matter how good that performance is,
but in accomplishment. The motivation and actions of your followers are an im-
portant measure of your leadership, but the only measure of your success is in
achieving your leadership vision.
As a leader, your focus is on accomplishing that leadership vision, whether
the forum is personal, community-oriented, charitable, business, political, or
industrial. All of these environments share a set of basic principles, traits, and
skills that work in concert with your personal values to achieve successful lead-
ership. The question for you to answer is this: Knowing what is required to be-
come a leader, do you have the desire?
Here we identify the basic requirements for leadership, pinpoint why they are
important to your leadership, examine how you can assess your leadership capa-
bilities, and help you determine what you need to accomplish to become a lead-
er. Then you can concentrate on achieving your vision.

The Building Blocks of Leadership


Everybody wants leaders who are competent, honest, forward looking, inspir-
ing, and successful. Those leaders know how to create an atmosphere of trust.
They care about their own contributions as a member of the team and as individ-
uals. Followers build a bond with their leaders based on honesty and trust, so it
is essential that leaders are always honest with their followers. They are forth-
right about bad news as well as good news. But it is also true that they do not
have to tell followers everything, as long as they demonstrate that they care
about the followers and have established a trusting bond. Demonstrated integ-
rity has great meaning and builds the trust bond needed to achieve a leadership
vision.
Leaders focus on the future and move in a progressive direction. The leader-
ship vision is a view of the future shared with followers.
You may be a single individual leading a large effort, or you may have a
leadership team to assist you. No matter your environment, you can become
a competent, respected, and successful leader by understanding the basic
principles, traits, and skills and using them as the underlying building blocks
of your leadership vision. You also need to comprehend how these three
leadership elements mesh with your personal values, as displayed in the fol-
lowing list:

Principles: Integrity; Effective communication; Responsibility, accountability,
and authority; Positive mental attitude; Consideration and respect; Constancy of
purpose; Teamwork; Effective resources management; Fact-based decision making.

Traits: Controlled emotions; Adaptability; Initiative; Courage; Determination and
resolution; Ethical behavior; Sound judgment; Endurance; Desire; Dependability.

Leadership Principles
The leadership principles are the comprehensive and fundamental concepts that
are the foundation necessary for becoming a leader. You need to practice these
principles in all aspects of your life, personal and professional. Begin by devel-
oping self-discipline and self-leadership. You are, after all, a leader to yourself
and therefore need to possess personal or self-leadership.
Just as individuals seek to lead, organizations also seek to be leaders within
their peer group (e.g., industry, social group, or athletic group). The leadership
principles therefore need to be a part of the culture of the organization or team
in which you participate or lead. There are nine leadership principles:

1. Integrity
2. Effective communication
3. Responsibility, accountability, and authority
4. Positive mental attitude
5. Consideration and respect
6. Constancy of purpose
7. Teamwork
8. Effective resources management
9. Fact-based decision making

Integrity
Integrity is the adherence to a high standard of honesty and character. It is a set
of established values, with your actions consistent with these values. Character
is what you are; reputation is what others think you are. When the values, char-
acter, and actions you present to others are consistent with your personal beliefs,
that’s integrity.
The most important leadership principle that you will demonstrate to your
leadership team and your followers is integrity. From the perception of integrity
will flow the consistency of purpose and the character that will motivate your
leadership team and bind your followers to your vision, goals, and objectives.
The most important component of integrity is the quality of personal character.
When you act consistently with your values, others will notice, and your rep-
utation for living with integrity will develop. Those you choose to lead will no-
tice and know they can count on you to act decisively according to your
convictions. Integrity requires action.

Integrity is not passive. Integrity requires more than not doing anything that is
contrary to your stated values. You must be active and consistently act on your
beliefs and values.

Acting in accordance with your values is living with honesty. This means
avoiding deceptive communication, either overtly or by omission, and being
open and frank about your values. Your values will be evident in the decisions
you make and the actions you take. If your decisions or actions are inconsistent
with your stated set of values, you will quickly earn a reputation as a fraud.
Living with rigorous honesty will do the following:

 Establish a basis for confidence in your leadership for yourself and others
 Build your self-reliance and self-respect
 Establish a clear understanding of your motives and desires for yourself
and others
 Protect you from destructive controversies
 Inspire you to progress toward your vision with great initiative
A facade of integrity will never work. Pretending to have and abide by a set
of values that are not truly yours and putting on a character that is alien to your
true beliefs is difficult, stressful, and always counterproductive. Eventually your
facade will crack, revealing your true values and character. If your true character
is different from the face you have presented, your leadership team will lose
confidence and trust, and your followers will lose heart and belief in your vision.
Consistency of purpose will establish a basis for confidence in your leader-
ship for yourself, your leadership team, and your followers. Acting in accord-
ance with your true beliefs and values allows you to be free and open. By
answering the question ‘‘Who am I?’’ clearly and honestly, you know your-
self—your values, character, and skills. This understanding builds your self-
reliance and self-respect to make you the leader you want to be.
Being an integrity-bound leader will establish a clear understanding of your
motives and desires for yourself and others. It will protect you from destructive
controversies that always arise when the integrity of your actions is called into
question.
Organizations are made up of individuals. Just as it is important for individu-
als to live with integrity, so must organizations. A group that acts without integ-
rity will not have loyal members.
Effective Communication
Effective communication is clear, concise, and comprehensible communication
through any medium. Leaders must be capable of communicating their values,
vision, goals, and objectives in many different ways: in meetings and in person-
al, written, electronic, and organization communications.
Personal communication is the daily one-on-one communication that occurs
with individuals on your team, with followers, or with others in your organiza-
tion. This important form of communication is used in giving instructions, ask-
ing and answering questions, listening to concerns, and all the other daily
communications that occur in any organization. You must make every personal
communication a motivating one for your team and for your followers.

The written and electronic forms of communication include memorandums,


formal reports, letters, meeting minutes, proposals, and e-mail. The leader must
have the capability to convert his or her ideas into coherent written form. Al-
though written communication is different from personal communication in its
form, the focus is the same: an effective communication that results in under-
standing and action. Written communications can be just as easily misunder-
stood as any oral communication.
You probably spend significant amounts of time in meetings, and you must
have the ability to conduct productive, effective, and efficient meetings. These
meetings are a critical way in which you communicate with your leadership
team and others, so be sure to avoid the pitfalls of a lack of focus, poorly defined
purpose, no clear agenda, and no closure or follow-up.
Communication is your most important tool for carrying out all of your or-
ganizational functions: planning, organizing, staffing and staff development, di-
recting and leading, and evaluating and controlling. Poor communication is the
most frequent cause of many organizational problems, such as lack of manage-
ment credibility and lack of trust. The resolution of these impediments could
transform a failing enterprise into a successful one.
Communication is so important to leadership that we have devoted an entire
section in Chapter 6 to this subject. In that chapter, we provide methods for
improving communications.

Responsibility, Accountability, and Authority


Responsibility, accountability, and authority are inextricably tied to each other by
definition and by function. A responsible leader must have the authority to act to
be an effective leader, and everyone in a position of authority must be held ac-
countable for his or her performance.
Responsibility is having the burden or obligation to accomplish something that
is within your power to achieve. You and you alone have the responsibility for
achieving your leadership vision. Your followers and your leadership team will
hold you accountable for your actions and for your success or failure as a leader.
Accountability is the necessity to report on your actions, performance, or
achievements. As a leader, you are, of course, accountable for your actions to
your leadership team and to your followers, and you may be accountable to
some higher authority. Accountability implies some obligation and even a per-
sonal (or business) liability for your actions. Just as you should and must hold
others accountable for the authority you have delegated to them, you are ac-
countable as a leader in a much broader way and to a much greater degree.
Authority is the power to act—both a legal right to act (e.g., the authority to
raise funds, write checks, and submit documents) and often the right to act for
the leader in his or her absence by signing letters and documents, issuing in-
structions, and managing resources. The authority to act and accomplish goals
and objectives is a serious one. Everyone who has been delegated authority to
act—the leader, the leadership team, and followers—must also be held account-
able for the exercise of that authority.

You can and should delegate authority to your leadership team. Everyone in a po-
sition of authority and trust must be held accountable, but responsibility always
rests with the leader.

The success of many of your leadership activities depends on appropriate
responsibility, accountability, and authority—both given and accepted. This ap-
plies in your personal, social, and professional life. You need to seek responsi-
bility and be accountable for your actions. In performing your responsibilities,
you must also have the authority to act. Often this authority is given with the
assignment, position, or mission. If it is not, you need to take action to ensure
that you have the necessary authority.
Each individual needs to know what is expected, to understand that he or she
has the authority necessary to accomplish a responsibility, and to realize he or
she is accountable. Remember that what gets measured gets done.

Positive Mental Attitude


A positive mental attitude is the ability to focus on the positive aspects of your
leadership and accomplishing your leadership vision, goals, and objectives. This
attitude does not disregard the negative aspects that need to be considered, but
does not dwell on them. You are always looking forward and have a solution to
every problem; failure is not an option. As a leader with a positive mental atti-
tude, you would never say, ‘‘Oh my. We are short of funds for the project again.
What will we ever do?’’ Rather, you would say, ‘‘The project requires additional
funding for successful completion. This is what we are going to do.’’

Failure is not an option. Decisions are made with positive direction for your lead-
ership team and followers.

Understand that you cannot control people, places, or things; you can influ-
ence them only by controlling your own behavior and attitudes. You can also
influence the environment around you and the people you come in contact with
each day. To influence others and to motivate them to achieve the things that you
believe need to be done is your challenge. A positive mental attitude is vital to
instilling this winning attitude in your leadership team and your followers.
We have all been exposed to managers and leaders who constantly emphasize
the negative aspects of every situation. They are always gloomy and always dis-
appointed. The influence they have is a negative one.
Your positive mental attitude will be reflected in your energy level. You will
feel charged, and others will see you as someone with high energy and with a
capacity for accomplishment, an important attribute for attracting loyal
followers.

As you lead yourself and others, adhere to the principle of a positive mental
attitude. This does not mean ignoring mistakes, failures, or negative influences,
but rather, recognizing an equivalent benefit in every negative event. It is clear-
ing your mind of influences that do not support a positive mental attitude, deter-
mining what you want, maintaining focus, and working to achieve your goals
with steady persistence.

Consideration and Respect


Consideration and respect means the thoughtful and sympathetic regard for
others. It is a return of the loyalty that has been given to you as a leader. You
understand the impact of your decisions on others and are considerate of the
consequences. You respect the rights and privileges of people for who they are
and what they can accomplish. Leadership requires difficult decisions.

Difficult decisions concerning individuals can be made in a very positive way


when you show consideration and respect for your leadership team and followers.

Leadership often requires that you make difficult decisions that affect the fu-
ture of others—disciplinary action, job assignments, promotions, training, and
the like. You need to be able to make these tough decisions and to act on them,
while treating all people fairly and without prejudice. By developing a genuine
concern for the welfare, morale, and professional development of the individu-
als you lead, you are respecting them as individuals.
Effective leaders recognize that they cannot achieve success unless they
achieve success for those they lead. When an individual or an organization is
perceived as not caring about the members of the team, disaster looms. There
will not be the loyalty and enthusiasm for your vision that is necessary for
achieving great goals and objectives.

Constancy of Purpose
Constancy of purpose is the steadfast adherence to a set of principles, vision,
goals, and objectives. This is more than just leadership focus: It is how you
achieve your goals and objectives on your way to achieving your vision. Not
only must you be focused on achieving your vision; you must be consistent in
the application of the principles you employ to achieve it.

The outward behavior and performance of individuals and organizations are the re-
sult of constancy of purpose.

The effective daily behavior of individuals and organizations reflects constancy
of purpose. It provides you, your leadership team, and your followers with a known,
unswerving direction for organizational activities and individual efforts, and it
keeps a sharp focus on your leadership vision.
Develop a clear vision and a passion for its accomplishment; then develop
the appropriate planning and infrastructure for ensuring all actions are collabo-
rative and supportive of achieving what you as a leader know needs to be
done. This includes selecting the resources necessary for achieving the goals
and objectives and managing their application to all of the required actions.

Teamwork
Teamwork is the contribution of individuals through collaboration to meet a
common goal. This is a cooperative and coordinated effort on the part of a group
acting together. All important leadership accomplishments are the result of a
team effort. Effective leaders recognize this important principle and develop
groups of people focused on achieving a measurable benefit.

There is no identifiable vision, goal, or objective that teamwork cannot accomplish.

The growing complexity of leadership, business, and civic problems requires
the collective knowledge of a team to overcome. The Wright brothers alone de-
signed and constructed the first flying machine. The design, development, and
production of the Boeing 777 required the collaborative efforts of thousands of
people—a massive effort that started with a leader and a leadership team. Indi-
viduals are limited by time, talent, capability, and capacity to achieve complex
goals. Teams can achieve what cannot be accomplished by individuals.
Sometimes people seem to lack an ability to work together to solve problems.
Establishing teams to accomplish a mission promotes a sense of collaborative
goals and creates a sense of ownership so the team feels empowered.
Selecting and building a leadership team is so critical to your success as a
leader that we have devoted a complete chapter to the subject. Chapter 4 imparts
the skill and knowledge you will need to select and build an effective team.

Effective Resources Management


Effective resources management starts with a clear understanding of the re-
sources (time, people, facilities, and finances) needed to accomplish your vision
and the actions you must take to distribute and control them effectively. The opti-
mum use of the resources available to you is effective resources management.

Leadership failures are frequently attributed to poor management of resources.



Effective leaders are skilled in developing plans to achieve their goals, which
includes the judicious application of the resources to execute the plan. A leader
who fails never attributes that failure to a lack of vision, instead citing the lack
of resources to complete the goals and objectives: ‘‘We ran out of time [or funds
or people]’’; ‘‘The proper technology was not in place’’; ‘‘We failed to estimate
correctly’’; and so forth.
You must identify the resources necessary to achieving your leadership vi-
sion and practice good planning, management, and financial skills. Then you
will need to track and report on your plans. These important skills are covered
in detail in Chapter 5.

Fact-Based Decision Making


Fact-based decision making is the selection of options based on demonstrable
fact—usually quantifiable data, such as marketing studies, financial analysis,
and statistical analysis. Decisions may also be made with qualitative data, al-
though these may be more subjective because they depend on someone’s deter-
mination of good and bad. But qualitative decisions can be just as fact based as
quantifiable decisions if the information is gathered and evaluated properly.

Leadership is a continuous selection of options. Each option you select as a leader
must be based in fact.

As a leader, almost every decision you make and every option you select will
be challenged and scrutinized. The best way to avoid any controversy is to use a
fact-based decision-making process that is clear and definable. Leadership re-
quires the continuous selection among choices—that is, choices among goals,
courses of action, or individuals in planning, problem solving, or analysis. The
most effective leadership is founded on the principle of fact-based decision
making.
Making fact-based decisions means collecting all available data, performing
the appropriate analysis, and selecting the best option. From time to time your
decision may be contrary to the results of the analysis—perfectly acceptable as
long as you document the reasons for your decision and have the analysis for
future reference. Chapter 5 examines some tools used to make fact-based
decisions.

Leadership Traits
Leadership traits are the distinguishing characteristics and qualities that set you
as a leader apart from others. These traits are the personal attributes that you
consistently demonstrate in exercising your leadership and management respon-
sibilities. The importance of these traits cannot be overemphasized; failing to
demonstrate any of these traits clearly to your leadership team will demoralize
them and cause them to lose respect and confidence in you as a leader. An indi-
vidual who successfully applies the leadership principles must possess these 10
traits:

1. Controlled emotions
2. Adaptability
3. Initiative
4. Courage
5. Determination and resolution
6. Ethical behavior
7. Sound judgment
8. Endurance
9. Desire
10. Dependability

Controlled Emotions
Emotions are the demonstrated states of joy, sorrow, fear, hate, rage, and so forth.
We all feel these emotions. As a leader, you will be disappointed in some indi-
viduals, will be joyful for successes, and may feel anger toward people or events.
Whatever you may feel is fine; however, you must carefully control the public
display of emotion and make leadership decisions based on fact, not emotion.

Nothing will alienate your leadership team or followers more quickly or more per-
manently than a temper tantrum.

People do not want to follow a leader they fear, and everyone fears a leader
whose emotions are out of control. This does not mean you are emotionless;
there are certainly appropriate displays of emotion—joy in success and sorrow
in loss, for example. It is the inappropriate display of emotion—rage, anger,
peevishness—that will discredit you as a leader.
It is important to control your emotions rather than let your emotions control
you. This does not mean that you need to become cold and dispassionate. It does
mean that you need a strong sense of self-discipline. Avoid sarcasm and person-
al comments. Never use profanity. Focus on principles and facts, not personalit-
ies. If there is corrective action to be taken, do so without any display of
pleasure or displeasure.

Adaptability
Adaptability is the ability to adjust to different situations, conditions, and cir-
cumstances. As a leader, you must be adaptable in two different ways: in the
way you address and approach different people and as you face changes in your
business, civic, or personal life. Your ability to adapt to changes in the environ-
ment may well define whether you are successful in achieving your ultimate
leadership vision.

Adaptability is a defining trait for all leaders—and one that differentiates a leader
from a manager.

Change is one of the constant facts of life today, and the inability to adapt to
it leads to sure failure. The realities of leadership are that circumstances will
continuously change. Being a change agent is always risky. There are numerous
forces that you will be unable to control, and they divert you from your chosen
path of action. Be sure to develop the skill of rolling with the punches: accepting
the things you cannot change, learning from them, and adjusting to maintain your
constancy of purpose. Remember that your goal is always your leadership vision.

Initiative
Initiative means to be ready and able to initiate action. World-class leaders are
always aware of what needs to be done; they do not need someone else to point
out what to do. When the facts justify a decision, a leader must initiate action.
One epitaph often heard for failed leaders is, ‘‘He lost the initiative and some-
one else got to his vision first.’’

Lead or follow? The difference between people who exercise initiative and those
who don’t is the difference between leading and following.

Individuals with the ability to see the path to a leadership vision and take the
initiative to get there will be the leaders of tomorrow. Taking the initiative, with
the associated risks, defines the difference between leaders and followers.
Taking the initiative is not a spur-of-the-moment reaction but is coupled with
fact-based decision making, with full knowledge of the risks and rewards. Once
you have sufficient facts, make the decision. Notice we said sufficient facts, not
all the facts, because often the initiative is lost in endless fact-finding and dis-
cussions. There is always risk in taking the initiative. If the only actions you
ever take are risk-free, then you are a manager or follower, not a leader. Allow-
ing your leadership team to take the initiative is part of delegating authority and
accountability.

Courage
Courage is the quality of mind and spirit that enables you to face difficulty, dan-
ger, and pain (physical or emotional) with firmness and determination. This
physical and moral control of fear gives you control over yourself and enables
you to act in a threatening environment. Understand that courage is not the ab-
sence of fear; rather, it is resolve and determination that overcomes fear.

Courage is not the absence of fear but rather the presence of resolve and
determination.

You will win the respect and commitment of others by standing up for what
you believe in and making tough decisions despite ambiguity. Effective leaders
act in the best interest of the team, the organization, and their vision in spite of
external threats. They confront problems and take action based on what they
believe is right.
You can develop the trait of courage by understanding all the consequences
of making a decision and deciding to accept those consequences.

Determination and Resolution


The characteristic of being resolute means being firmly fixed on a purpose or
goal by deliberate choice and will. Determination and resolve are the traits that
will get you from where you are today to achieving your leadership vision. Both
are tied to the trait of courage.
If you are determined and resolved to achieve your leadership vision, then
you will have the courage to face all situations necessary to achieve that goal.
Moreover, your displaying these traits will encourage your leadership team and
followers to perform during difficult times. If they see you staying the course
and seeing the goals and objectives through to the end, they will do the same.
But if you waver and grow weak, you will soon find yourself alone. You will
have lost your credibility.

Demonstrating determination and resolve will encourage your leadership team
and followers during difficult times.

Once you have made a fact-based decision, you must maintain a focused ef-
fort to see it through to implementation. Do not alter your position on an issue
except in the light of new facts.

Ethical Behavior
Ethical behavior is the system of values and moral principles that guides your
conduct as a leader. It thus requires that you determine your values and act con-
sistently with them at all times. Your ethical behavior reflects a basic philosophy
for dealing with values and conduct with respect to the rightness or wrongness
of your actions and the goodness or badness of your motives and vision. Ethical
behavior and integrity are bound together in the same continuum. They are part
of a single whole—one a principle, the other a trait—working together for the
inspirational leader.
Ethical behavior is essential for establishing and maintaining your credibility
and the loyalty of those you would lead. The best leaders exemplify honesty and
integrity. They are forthright and honest in their dealings with peers, subordi-
nates, and superiors.
By its very nature, ethical behavior extends beyond the workplace into your
personal life. Leaders can become discredited and fail due to their lack of ethi-
cal behavior in their personal lives. That is why the honest assessment of your
basic values and desire to become a leader are so critical to your success. If
these self-assessments are not totally honest, you will never have the ability to
behave ethically. To demonstrate ethical behavior, you must clearly understand
what is expected of you (principles and traits) and be willing to live in that way.

Sound Judgment
Sound judgment is the ability to make a decision or form an opinion that dem-
onstrates good sense and discretion. This means more than simply being deci-
sive; your decisions must be well founded and make sense. Sound judgments
are based on facts, knowledge, and understanding. This means you must review
and analyze all of the facts prior to making any decision.

All leaders are decisive. Effective leaders have the ability to make sound judg-
ments as well.

Effective leaders are decisive, and they possess the ability to reach sound
decisions promptly and to communicate them powerfully, directly, and clearly.
Demonstrating this kind of sound judgment will win the respect of the leader-
ship team, followers, and even adversaries.

Endurance
Endurance is the power to sustain your efforts without impairment or yielding to
fatigue and time. There is a physical and mental dimension to endurance. As a
leader, you will be expected to be physically able to meet your obligations. You
may have to work long hours, travel extensively, attend apparently endless
meetings, and be readily available to your leadership team and followers. Men-
tally, you will be expected to function as alertly at 10 p.m. as you did at 7 a.m.
and to sustain that mental effort for as long as your obligations demand.

Desire
Desire is a strong craving that impels you to the attainment or possession of
something that is real and achievable. This desire can be worthy or unworthy,
and the possession can be good or bad. The point is that as a leader, your desire
to attain leadership must be worthy of the efforts you are about to invest in at-
taining the vision. You must understand all the traits and principles of leader-
ship, possess all the skills that will be required, and have the desire to attain
your leadership vision.

Leadership is personal. It is based on your desire to achieve your vision.

Having a desire to meet your goal is the essential catalyst for achievement.
Desire is the trait that differentiates you from all others. It will give you the
fortitude to develop all the skills necessary to be an effective leader, compel
you to demonstrate leadership traits, and hold you to your leadership principles
when the going gets rough, as it certainly will. To desire is to have a fire within
that can be extinguished only by achieving your leadership vision. This desire
will ignite your leadership team and sustain them.

Dependability
Dependability is the ability to place trust in someone else’s actions. As a leader
you must be relied on to keep your word in large things and small. This is the
quality of character and self-discipline that means others can rely on you. They
can rely on your word, your acting with integrity, and your loyalty. They know
what you stand for, and they know you will stand up for what you believe in.
Keep your word. If you say that you will be at a meeting at 8 a.m., be there. If
you are committed to meeting a deadline, meet it. If you make a promise, keep
it. Dependability is essential if others will follow your leadership.

Dependability is basic to leadership. Leaders who cannot be depended on will not
be leaders for long.

The Leadership Culture


A leadership culture is the pattern of activities that has been used to influence
people, establish goals, perform planning, and make decisions. It determines
the way people perceive and feel about their organization, its infrastructure, and
its leadership. The leadership culture is the formal and informal way work gets
done. Inherent in this definition is its tangibility. You can literally touch, feel,
taste, hear, and read about the leadership culture, and you can certainly measure
it by its results.

Despite the extraordinary changes in modern business practices and princi-
ples, our leadership culture too often remains closely associated with the classic
autocratic approach to management. Whenever achieving a vision, goal, or ob-
jective becomes difficult, it is easy to abandon the basic leadership principles,
traits, and skills and become an autocratic manager. This dinosaur makes deci-
sions without consideration, then gives orders and expects immediate obedi-
ence. Since the autocrat does not seek the opinion of subordinates, creativity
and innovation are squelched.
The traditional leadership culture requires close supervision and motivates
through negative reinforcement. In this environment, the basis for legitimate
leadership is formal authority. This system is task oriented and places little val-
ue on the development of good working relationships with subordinates. The
workforce has reacted to this form of leadership by doing only what is compul-
sory and attempting to suppress its frustrations. Often, we have seen these frus-
trations play out in aggressive behavior, verbal abuse, work stoppage, and
sometimes sabotage.
In today’s changing world, the pure autocrat has become an increasingly in-
effective leader. As a successful leader, you should be prepared to challenge an
autocratic culture and to act as a change agent. To do so, you must be able to
assess your organization’s culture or environment and establish a baseline of
where you are now. You probably have an opinion about the culture, but this
opinion may not be based on fact. The most effective method to assess culture
is to conduct a survey—verbal or written, formal or informal. Typically, it con-
tains some of the following questions:

What is your leadership path?
 What must you accomplish to succeed?
 What is expected of you?
 What are the taboos?
 What are the rivalries?
 Who holds the power?
 How do you get ahead?
 How do you stay out of trouble?
 What does this culture really value?
 What skills are needed to win?
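If you conduct the survey in writing, even a short script can help you tabulate the answers and establish the baseline you will compare against after changes are made. The sketch below is illustrative only and is not part of the authors' method; it assumes responses arrive as one question-to-answer dictionary per respondent and simply counts the most common answers to each question.

from collections import Counter, defaultdict

# The survey questions listed above, reused verbatim.
QUESTIONS = [
    "What is your leadership path?",
    "What must you accomplish to succeed?",
    "What is expected of you?",
    "What are the taboos?",
    "What are the rivalries?",
    "Who holds the power?",
    "How do you get ahead?",
    "How do you stay out of trouble?",
    "What does this culture really value?",
    "What skills are needed to win?",
]

def summarize(responses):
    """Tally recurring answers per question across all respondents.

    `responses` is a list of dictionaries, one per respondent, mapping a
    question to that respondent's short written answer (an assumed format).
    """
    tallies = defaultdict(Counter)
    for respondent in responses:
        for question, answer in respondent.items():
            tallies[question][answer.strip().lower()] += 1
    for question in QUESTIONS:
        print(question)
        for answer, count in tallies[question].most_common(3):
            print(f"  {count} x {answer}")

if __name__ == "__main__":
    sample = [
        {"Who holds the power?": "Department heads",
         "What does this culture really value?": "Hitting the schedule"},
        {"Who holds the power?": "Department heads",
         "What does this culture really value?": "Not making mistakes"},
    ]
    summarize(sample)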

Your challenge is to be a principal cause of positive change in your environ-
ment, whereby the autocrat becomes a leader of people, and autocratic compa-
nies stand up as leaders of industry in the new global economy. There is nothing
else more difficult to accomplish, less likely to succeed, and more hazardous to
a career than to initiate change. This ability to lead change is the most signifi-
cant management skill needed today. Leaders who develop the skill to facilitate
change in an organization will be the guiding force for the future.

FIGURE 2.1 Leadership Model.

Change and reorganization are universally feared. Change upsets the estab-
lished order, introduces risk, and disturbs the status quo. For this reason, change
and reorganization are often deferred, to the detriment of the organization,
which experiences a loss of effectiveness, quality, and throughput and increases
in cost.

The Leadership Model


Leaders today are no longer defined by business, political, or social position or
power. Rather, they are defined by their ability to perform and achieve their vi-
sion. The measure of a successful leader has become not what position he or she
has achieved but what that person is doing in that position.
Realization of your vision as a leader occurs not through happenstance but
by following a clear path. That path is the leadership model shown in Figure
2.1, which builds on the basic leadership principles, traits, and skills you have
acquired through education, training, and experience. These and your values
are the basis for defining your leadership vision. Once you have identified your
vision, you must do a leadership self-assessment to determine what you need to
do to achieve that vision. Perhaps this self-assessment indicates a need for
more training or more education or new skills. All this preparation and self-
assessment will lay the groundwork needed to achieve your leadership vision.

This model is not intended to imply that the leadership process is linear.
Change is one of the few constants we can be sure of, so you will be reassessing
your leadership vision continually and updating the basic skills you will need to
remain a leader.
The first step in the leadership model is for you to understand the theory of
leadership and how leadership practices relate to that theory. This is basically
your initial decision point when you ask for the first time, ‘‘Do I have the desire
to be a leader?’’ In the second step of the model, you must consider how you
will lay the foundation for your leadership vision. Examine the leadership prin-
ciples, traits, and skills and determine how these skills relate to your education
level, knowledge, and values. After establishing your leadership vision (in the
third step) and the goals and objectives needed to achieve that vision, perform
the self-assessment at the back of this chapter to determine your status as a lead-
er. This assessment (step 4) will answer the following questions:

Do your personal values support the leadership principles?
 Do your value system and character provide for effecting the leadership
traits?
 Do you have the skills, education, and experience to achieve and maintain
your vision?

This self-assessment is designed to provide you with an understanding of
where you are today and what you need to do to become an effective leader. The
assessment is divided into specific areas, as denoted by the following questions:

What is your leadership profile for principles?
 What is your leadership profile for traits?
 What is your leadership profile for skills?
 What is your leadership profile for vision?
 What is your leadership profile for planning?
 What is your leadership profile for teaming?
 What is your leadership profile for communicating?
 What is your leadership profile for achieving?

A leadership self-assessment is appended to this chapter. Take a few minutes
and complete your leadership evaluation. Based on this assessment, you may
want to reevaluate and perhaps adjust your leadership vision or consider changes
that may need to be made in your personal values, education, or training.

Leading and Managing Teams


A team is only as effective as its leader. A team is composed of a group of peo-
ple brought together for a common purpose. That group does not become a team
until a leader provides the vision, goals, objectives, and resources needed to
bring the team together and achieve the goal. A leadership team and project
teams are especially appropriate to the implementation of Enterprise
Excellence.

Team Members
Team members are those individuals close to the process, but may also be stake-
holders in the project or process. Team members are typically the individuals
working in the process and appointed by the sponsor/guidance team in consulta-
tion with the team leader. Team members may also include:

 Project sponsor
 Champions
 Green/Black Belts
 Ad hoc members

Team members should encompass ranks, professions, trades, or work areas
impacted by the project (if the project cuts across departmental boundaries, so
should team members). Effective teams are those that are composed of three to
six core members, with other members added as needed. Each team member is
expected to:

 Be supportive
 Contribute
 Be creative
 Add value
 Learn from others
 Be constructive
 Be objective

Team Building
The challenge for an organizational leader is in developing and achieving the
organizational vision. The leader must determine and accomplish the long-
term, intermediate, and short-term goals and objectives, including improve-
ment initiatives and problem resolution. These goals and objectives are usu-
ally interdepartmental, cutting across many functions and requiring special
skills, knowledge, and abilities. Therefore, it is important that the leader mo-
tivate others to perform what needs to be accomplished. This means
teamwork.
Ownership. Teamwork creates a sense of ownership. When individuals have
a sense of ownership, they feel empowered. And when empowered people link
together, they are likely to use their energies to produce extraordinary results.

TABLE 2.1

Team Type          Definition
Functional         Composed of individuals who all do the same type of job in a
                   given process
Cross-functional   Composed of individuals who have different jobs, but
                   contribute to the same process

Exemplary leaders enlist the support and assistance of all those who must
live with the results, and they make it possible for others to do good work.
Therefore, it is imperative that leaders understand the team-building process.

Types of Teams
There are two types of teams: functional and cross-functional, as shown in
Table 2.1. A functional team is composed of individuals who all do the same
type of job in a given process, such as circuit card assemblers, in-process in-
spectors, or purchasers. A cross-functional team is composed of individuals
who have different jobs, but contribute to the same process. For example, indi-
viduals in the process of building circuit cards might be brought together to
form a cross-functional team while all the department managers in a division
are brought together to form another cross-functional team.
A cross-functional team is preferred for all activities because it brings togeth-
er all of the individuals necessary to understand the entire process. When select-
ing a team, be sure to consider all relevant functional areas, such as:
Finance       Purchasing
Marketing     Human resources
Operations    Information technology
Quality       Engineering
Contracts     Research and development
Legal

Teams and Their Missions


Teams are also defined by their mission. For example, if a team is formed to
address organizational issues, the team is then known as a management team.
Table 2.2 identifies the other types of teams that can exist, as defined by mission.

TABLE 2.2

Team Type         Mission                                  Members
Management        Addresses organizational issues,         Typically composed of a manager
team              such as planning, policy,                or executive and his or her direct
                  guidelines, and infrastructure           reports. Example: steering councils
Improvement or    Established to resolve a specific
project team      problem, improve a specific
                  process, or achieve a specific goal
Work cell         Complete a well-defined segment          All the members of a particular
                  of a finished product or service         process
Self-managing     Assumes a major role in activities
work team         such as planning, priority setting,
                  organizing, coordinating, problem
                  solving, scheduling, and assigning work

Team Members
Teams should contain three to six members, each of whom must contribute and
add significant value to the team. Team members should also be:

Supportive and good team players
Creative
Open-minded and learn from others
Respected
Constructive
Objective
Vested and interested in success

Team Dynamics
There are many types of teams and team members. You must understand team
dynamics to manage and lead your team. Regardless of the type of team formed,
team dynamics will be the same. Team dynamics include:

Team structure. The fundamental organization of a team—who the mem-
bers are, what their relationships are, and what roles they take.
 Team activities. The generic actions each team must take for success.
 Team phases. The predictable stages each team goes through during its life.

Team Structure
Studies and experience have repeatedly demonstrated that a successful team be-
gins with a well-established structure. This structure includes active manage-
ment support and a membership with clearly defined roles. Additionally, teams
will be more effective if they are assisted by people with training in project
management, group process, statistical process control, and the scientific
method. In the Enterprise Excellence model, each member of a successful team
performs one of the following roles:

 Manager/supervisor/senior adviser
 Team leader
 Facilitator
 Team members

Manager/Supervisor/Senior Adviser The formation of a team may stem
from a single supervisor who sees a problem that needs to be addressed. Howev-
er, it could just as easily stem from a steering council that determines a need.
Regardless of why a team is formed, teams will be effective only if there is
active management support.
Management has the responsibility of developing a draft mission statement
for the team. It also has the responsibility of setting preliminary goals and as-
signing a team leader and facilitator. Once a team leader is assigned, manage-
ment must work with the leader to ensure that the resources necessary to
accomplish the mission are available. Finally, management must clear the or-
ganizational paths for action when necessary.

Team Leader The team leader manages the team. He or she calls and facili-
tates meetings, handles and assigns administrative tasks, orchestrates team activi-
ties, and oversees preparation of reports and presentations. When selecting the
team leader, it is important to choose an individual who has a stake in the process.
He or she needs to be interested in solving the problems and must be reasonably
good at working with individuals and groups. It is also important to ensure that
the team leader is trained in the tools and techniques of the team process.

Facilitator The facilitator is a specialist trained in all the total quality tools.
His or her role is to work with the team leader and the team to help keep them
on track, to provide training as needed, and to facilitate the application of the
appropriate tools. The facilitator must possess a broad range of skills—group
process, effective meetings, conflict resolution, effective communications, the
total quality tools, and training.

Team Members The team members are the individuals who form the bulk of
the team. They are the individuals who carry out assignments and make im-
provements. The team members usually are individuals from the functions in-
volved in the process, but the team may also include people from functions
necessary to make changes developed by the team. The standing team members
are those individuals who meet regularly with the team and who are essential for
the team’s activities. The standing team members should be kept to the mini-
mum necessary to accomplish the mission, usually fewer than eight. There are
normally three to six members who make up the core team.

At this point, you will want only the best and brightest on your team. When
selecting team members, ensure they are:

Creative and open-minded
 Good team players
 Respected among peers, stakeholders and other business leaders (this could
make or break the project if the person is difficult)
 Vested in success of the project

If others are helping with the selection process, you should make them aware
of the criteria.
The recommended size for a team is three to six members, depending on the
scope of the project and its impact on organizational units. Smaller teams (three
to four) work faster and tend to produce results more quickly. Teams larger
than seven or eight members require additional facilitation and often need
subteams to be formed to be effective. The most effective teams:

Consist of three to six members
 Include stakeholders
 Are knowledgeable
 Are cross-functional and multidisciplinary
Include ad hoc members providing specific expertise

There may be other individuals whom you will employ during some team
meetings, called ad hoc members, who contribute to the success of the team but
do not necessarily meet regularly with it. Ad hoc members possess special-
ized skills or knowledge that the team needs in order to address particular is-
sues or actions.

Stages of Team Development


In 1965, educational psychologist Dr. Bruce W. Tuckman described the stages
of group development. Looking at the behavior of small groups in a variety of
environments, he recognized four distinct phases that groups go through (See
Figure 2.2). His work also suggests that groups need to experience all four
stages before they can achieve maximum effectiveness.
Teams are groups. As such, teams go through the same four phases that Tuck-
man describes as they mature. Therefore, it is important to understand these
stages of team development—especially if teaming problems are to be handled
when they arise. The four phases of group development are described here.

FIGURE 2.2 Stages of Team Development.

Forming
When a team is first formed, team members spend time getting acquainted. (See
Table 2.3.) There may be some excitement and optimism about the team and the
project at hand, but there may also be feelings of uncertainty and cautiousness
for some team members. Since individual roles and responsibilities will initially
be unclear, many team members will feel anxious. Therefore, there will be a
high degree of dependence on the team leader for guidance and direction.

TABLE 2.3 Forming

Feelings    Excitement, anticipation, optimism
            Initial tentative attachment to team
            Anxiety, suspicion, and fear about the work ahead
Behaviors   Defining the task and strategy
            Developing group norms
            Deciding on information needs
            Digressing on concepts and issues
            Talking about symptoms or problems not related to the project
            Complaining about the company

Storming
This is the phase during which conflict may arise between individual members
or groups of members of the team. (See Table 2.4.) As important issues start to
be addressed, emotions will start to rise. Some members will be impatient at the
lack of progress, some will observe that it’s good to be getting into the ‘‘real
issues,’’ and others will wish to remain in the comfort and security of the Form-
ing stage.

TABLE 2.4 Storming

Feelings    Resistance to the project
            Erratic swings in attitude toward team and project
            Defensiveness, tension, jealousy
Behaviors   Competing, forming coalitions, and choosing sides
            Advocating unrealistic goals and concern over extra work
            Arguing when there is disagreement over the real issues
            Questioning the wisdom of the leader

Depending on the culture of the organization and individual team members,
this stage can be difficult to work through. However, there is good news! During
the Storming phase, team members are beginning to understand one another.
They are also beginning to think of the group as a team.

Norming
During this phase, big decisions are made by group agreement. (See Table 2.5.)
In fact, agreement and consensus become the norm during the Norming stage.
Smaller decisions may be delegated to individuals or small teams within the
group. Team members respond well to facilitation by the leader; roles and re-
sponsibilities are clear and accepted.

TABLE 2.5 Norming

Feelings    Acceptance of roles and of others’ individuality
            A sense of belonging and purpose
            Intimacy and trust toward other members
Behaviors   Expressing feelings and constructive criticism
            Avoiding conflict to achieve harmony
            Friendliness, confiding in one another, discussing team dynamics
            Establishing and maintaining team rules

Processes will be developed during the Norming phase and working styles
established. Commitment and unity will be strong. The team may even engage
in having some fun during meetings!

Performing
In this phase, the team members are a cohesive unit, working in concert. (See
Table 2.6.) They understand the team process and accept and appreciate individ-
ual differences. They are performing as a team that clearly understands why it is
doing what it is doing.
The team has a shared vision and is able to stand on its own feet with no
interference or participation from the leader. The team now has a high degree of
autonomy and makes most of the decisions using previously agreed-to criteria.
Although disagreements occur, they are resolved within the team positively.
Necessary changes to processes and structure are made by the team during
the Performing phase. The team is working toward achieving the goal and at-
tending to relationship, style, and process issues along the way. Although the
team does not need to be instructed or assisted, the team will still require the
leader to delegate tasks. Team members might ask for assistance from the leader
regarding personal and interpersonal development. The leader delegates and
oversees.

TABLE 2.6 Performing

Feelings    Satisfaction with group’s progress
            Closer attachment to team
            Members realize strengths and weaknesses of themselves and others
            Feeling like things are clicking . . . everything is going to be fine
Behaviors   Modifying personal behavior to adapt to group dynamics
            Working through or circumventing difficulties
            Accomplishing a lot of work

Tuckman’s original work describes the way he observed groups evolve. In
the real world, groups are often forming and changing, and each time that hap-
pens, they can move to a different stage of the model. Although a group might
be happily ‘‘norming’’ or ‘‘performing,’’ adding a new member might force the
group back into ‘‘storming.’’ Seasoned leaders will be ready for this and will
help the group get back to performing as quickly as possible.

Effective Meetings
In any organization, meetings are an important vehicle for exchanging ideas and
information. However, in many organizations, meetings are so common and per-
vasive that people take them for granted and forget that, unless properly planned
and executed, meetings can be a terrible waste of time and human resources.

Meeting Plan
A meeting plan is critical to the success of any meeting. Without a plan, your
meeting will become a happening with few effective results. We have all at-
tended this kind of meeting: no clear direction, no goals or objectives, much
griping, and few results. They usually happen ‘‘every Tuesday at 8:00 a.m.’’. A
meeting can be effective only if:

There is a clearly stated purpose for the meeting
 There are goals and objectives for the meeting
 Attendees are knowledgeable of their role and are prepared
 There is a written agenda published before the meeting

Agenda
The key to having a successful and productive meeting can be summed up in
three words: Have an agenda. An agenda helps keep the discussion focused on
the project and limits room for deviation from that discussion. At a minimum, an
agenda should include:

 Purpose of meeting
Date, time, and location
 List of invitees
 Topics to discuss
 Time allocated for each topic
 Action items
 Expected results
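To make these items concrete, here is a minimal sketch of an agenda captured as a reusable template. It is illustrative only; the class names, fields, and rendering format are assumptions rather than a template prescribed in this book.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AgendaTopic:
    # One row per topic: who presents it, how long it gets, and the expected result.
    title: str
    presenter: str
    minutes_allotted: int
    expected_result: str  # e.g., "decision", "information", "action items"

@dataclass
class Agenda:
    # The minimum agenda contents listed above, captured as fields.
    purpose: str
    date_time: str
    location: str
    invitees: List[str]
    topics: List[AgendaTopic] = field(default_factory=list)

    def render(self) -> str:
        lines = [
            f"Purpose:  {self.purpose}",
            f"When:     {self.date_time}",
            f"Where:    {self.location}",
            f"Invitees: {', '.join(self.invitees)}",
            "Topics:",
        ]
        for t in self.topics:
            lines.append(
                f"  - {t.title} ({t.presenter}, {t.minutes_allotted} min) "
                f"-> {t.expected_result}"
            )
        return "\n".join(lines)

if __name__ == "__main__":
    agenda = Agenda(
        purpose="Review improvement-project status and assign actions",
        date_time="Tuesday 8:00-9:00 a.m.",
        location="Conference room B",
        invitees=["Team leader", "Facilitator", "Core team members"],
        topics=[
            AgendaTopic("Open action items", "Team leader", 15, "action items"),
            AgendaTopic("Process data review", "Facilitator", 30, "decision"),
        ],
    )
    print(agenda.render())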

Rules for Effective Meetings


Once the agenda is published and everyone knows their roles and responsibili-
ties for the meeting, you then must conduct an effective meeting. Here are a few
simple rules for effective meetings:

 Start on time.
 Have a clear need for the meeting (agenda).
 Assign a timekeeper/facilitator.
 Assign a minutes’ recorder (scribe).
 Stick to the agenda.
 End on time.
 Publish the minutes within 24 hours.

Key Roles
Effective meetings also include filling key roles, which are summarized in
Table 2.7.

TABLE 2.7

Role                  Responsibility
Meeting leader or     Opens the meeting, reviews the agenda, manages participation,
facilitator           helps with the evaluation of the meeting, and ensures that a
                      timekeeper and note taker are assigned
Timekeeper            Helps the team keep track of time during the meeting
Note taker            Keeps a record of key topics and main points raised during the
                      discussion, collects future agenda items, and ensures that
                      minutes are distributed
Scribe                Responsible for posting ideas on a flip chart or whiteboard as
                      the discussion unfolds so that everyone can see them (posting
                      ideas helps to keep the team focused)

Meeting Minutes
Proper planning will ensure that you meet the required goals defined for the
project. If you have followed the steps previously laid out (assigning a note tak-
er, scribe, meeting facilitator, etc.), you will be well prepared.
Meeting minutes are used to document attendance, points of discussion, ma-
jor decisions, and other relevant information required for maintaining the
history of the project, as well as providing data required for effectively manag-
ing the project.

Action Log
What gets measured gets done, and what gets reported gets done quicker. This
truism also applies to meetings. It is important to establish an action log, such as
the one shown in Table 2.8, and rigorously maintain it. The action log needs to
include a description of the action required, the person to whom it is assigned,
and the scheduled completion date. It also needs to provide for remarks. These
may include status, background information, potential problems, ongoing prog-
ress, and so forth.

TABLE 2.8 A Sample Action Log

Action No.   Action Item   Assigned To   Start Date   Due Date   Remarks

Purposes of action logs are as follows:

Ensures that action items are identified and assigned
 Facilitates tracking and reporting
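If you keep the action log electronically, a structure as simple as the following sketch is enough to support both purposes. The fields mirror the columns of Table 2.8; the closed flag, the overdue rule, and the rest of the code are illustrative assumptions rather than a format specified in this book.

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class ActionItem:
    # Fields mirror the Table 2.8 columns; "closed" is an added assumption.
    number: int
    description: str
    assigned_to: str
    start_date: date
    due_date: date
    remarks: str = ""
    closed: bool = False

def open_items(log: List[ActionItem], as_of: Optional[date] = None) -> List[ActionItem]:
    """Return open items, flagging those past their due date in remarks."""
    as_of = as_of or date.today()
    result = []
    for item in log:
        if item.closed:
            continue
        if item.due_date < as_of and "OVERDUE" not in item.remarks:
            item.remarks = (item.remarks + " OVERDUE").strip()
        result.append(item)
    return result

if __name__ == "__main__":
    log = [
        ActionItem(1, "Publish meeting minutes", "Note taker",
                   date(2009, 1, 6), date(2009, 1, 7)),
        ActionItem(2, "Collect process defect data", "Green Belt",
                   date(2009, 1, 6), date(2009, 1, 20), "Weekly status"),
    ]
    for item in open_items(log, as_of=date(2009, 1, 15)):
        print(item.number, item.description, "-", item.assigned_to,
              "due", item.due_date, item.remarks)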

UNDERSTANDING AND OVERCOMING RESISTANCE TO CHANGE

The essence of Enterprise Excellence is change. As with all change, resistance
and your ability to overcome it will make or break the deployment and imple-
mentation of Enterprise Excellence. Communication, Cooperation, and Coordi-
nation are the three strategies that you as a leader will need to win over your
leadership team, gain followers, and overcome resistance, overt or covert, to
your leadership.

Understanding Resistance to Change


All leaders, whether business managers, supervisors, civic leaders, politicians, or
employees, encounter resistance, a natural response to any change. If you are com-
peting to win a leadership position, you are surely proposing change.
Experienced leaders are aware of this fact, yet rarely do they perform a system-
atic analysis to determine who might resist and for what reasons.

To anticipate what form resistance might take, leaders need to understand its
most common causes:

Narrow-minded motivation. There are always individuals in any organiza-
tion who are very narrowly motivated; self-interest is their only motivator.
 Lack of understanding and confidence. Some individuals have a distinct
lack of understanding of what needs to be accomplished and why. On the
basis of past experience, their confidence in the organization or individual
may be low. This lack of confidence leads to mistrust and resistance.
 Different analysis of leadership needs. Based on the information available
or the information you have provided, some people in your organization
may have a completely different assessment of the needs for leadership
and change.
 Low tolerance for change. This type of resistance usually comes from em-
ployees who feel they cannot cope with the change required.

Each of these causes has distinct motivations and must be understood. Once
you have this knowledge, you can determine which of these causes apply to
your situation and use that knowledge to counter the resistance.

Narrow-Minded Motivation
The fear of losing something of value—position, salary, or status—is always a
motivation to resist change. Self-interest causes people to consider first their
personal situations and not that of the organization. The following changes can
be expected to result in resistance:

Changes that alter an individual’s status—for example, changes in level of
authority (real or perceived), salary or salary status, or title
Changes that reduce decision-making power—for example, reducing the
scope of someone’s responsibilities, centralizing decision making, or using
team consensus to make decisions
Changes that interfere with existing relationships—for example, moving in-
dividuals outside their existing network or moving employees or
supervisors

People often attempt to subvert new leadership before and during planning
and implementation if they do not view the proposed change as personally bene-
ficial. The resistance is rarely open. Instead, subtle approaches are used and tend
to occur beneath the surface, using back channels of communication. Many in-
dividuals in an organization are in positions to resist your leadership in this way:

 The financial manager who just cannot release the funds at this time
 The shop supervisor who stops operations daily for safety hazards but who
offers no preventive action
The administrative assistant who frequently calls in sick at critical times
 The planning coordinator who never seems to return your phone calls
 Individual team members who never offer opinions or suggestions

Lack of Understanding and Confidence


Individuals and groups may resist your leadership if they do not understand its
implications and if they perceive that it will cost them more than they will gain.
These situations occur most often where there is a lack of trust between the
individual initiating the change and the employees. Rarely is there a high level
of trust among executives, managers, supervisors, and employees. Unless clear
and precise communication, cooperation, and coordination accompany change,
misunderstandings surface when the following types of change are introduced:

Changes in individual status that are not clearly defined
 Changes that require a level of trust between the change agent and
individuals
 Changes that have been poorly communicated and coordinated

Misunderstandings must be recognized and resolved rapidly, for if they are
not addressed, they often lead to resistance.

Different Analysis of the Situation


Commonly, people resist leadership change when they evaluate the situation dif-
ferently from you. Assumptions are the damaging elements here. Frequently,
those initiating change assume that they have all the relevant data necessary to
conduct a thorough analysis and that everyone else in the organization is work-
ing with the same data. This problem arises when:

Change data and information are not thoroughly disseminated
 Evaluation of change data is performed using different methods
 Assumptions leading to the need for change are not clear

Low Tolerance for Change


People resist change because they fear that they will not be able to cope with the
new skills and behavior that will be required. Organization changes sometimes
require people to adapt more rapidly than they are able. These are the changes
that may lead to conflict:

Changes that require skills beyond the individual’s perceived capabilities
 Changes that are beyond the training and education of individuals
 Changes that are beyond an individual’s ability to absorb them
 Changes implying that previous actions and decisions were incorrect

Resistance stemming from limited tolerance is emotional, even when the
change is clearly in the individual’s and the organization’s best interests.
This low tolerance also surfaces when individual egos are threatened by a
belief that the change is an admission that previous decisions and methods
were wrong.

Overcoming Resistance to Change


To implement your leadership fully, it is necessary to overcome resistance. As
you do, you will achieve a transition from traditional leadership and manage-
ment methods to world-class leadership.
You will have several strategies at your disposal, as shown in Table 2.9:
the 3C approach, always the most successful and desirable; negotiation
and agreement, a strategy used in many environments that can be
successfully implemented; and manipulation, the least desirable, which will
provide only short-term gains. If none of these work, termination may be in
order.
Achieving your leadership vision is always characterized by skillful leader-
ship in the application of a combination of approaches that best fit your situa-
tion. Most successful efforts are based on approaches with sensitivity to your
strengths and limitations and a realistic appraisal of the situation. The most
common mistake leaders make is to use only one approach or a limited set of
them, regardless of the situation. Typical examples are the hard-boiled boss
who often coerces people, the people-oriented leader who constantly tries to be
overinvolved, and the cynic who always manipulates and co-opts. The point is
that leadership cannot be confined to a single principle. The best approach fits
the situation.
The most desirable strategy is communication, cooperation, and coordina-
tion. Use it first, and you will achieve a large part of your leadership vision.
Based on your assessment of the situation, you may need to negotiate and reach
agreement with some elements of your organization. This strategy will become
necessary when dealing with people in a strong position or in a union environ-
ment. Finally, there is the Machiavellian strategy of manipulation and coer-
cion—the least desirable strategy and one to turn to only after very careful
consideration and as a last resort. Employment of this leadership strategy al-
ways leads to difficulty and eventually erodes your leadership position. (Table
2.9 compares these strategies.)
A second common mistake that managers make is to approach change in an
unstructured way rather than as part of a clearly considered strategy. Table 2.9
compares the approaches to change and the strategies for overcoming
resistance.
The greater the anticipated resistance to your leadership, the more difficult it
will be to overcome. This is especially true in industrial organizations or bu-
reaucracies with entrenched resisters. Your leadership approach depends on four
basic factors:
1. The amount and kind of resistance you anticipate. If resistance is strong,
it may be necessary to move down the strategy list. The application of
good negotiation skills is always helpful here.
2. Your position with respect to the potential resisters. If you are in a strong
position, the 3C approach will always work best.
3. The availability of relevant data. If there is an excellent, well-documented
basis for leadership change, the 3C approach will work best.
4. The stakes. Higher stakes for you and your organization may require you
to move down the list to manipulation and co-opting.

TABLE 2.9

Strategy            Situation                Strengths                    Limitations
3Cs (communicate,   Leadership based on      Creates leadership           Requires significant time and
cooperate,          correct information      advocates and change         effort, including the expertise
coordinate)         and data                 agents; very positive        to communicate
                                             attitude of the followers
Negotiation and     Leadership based on      Easy way to avoid major      Can be expensive and
agreement           individual benefits      resistance; creates          time-consuming
                    for the followers        followers motivated by
                                             need
Manipulation and    When other tactics       Relatively quick and         Will lead to future problems
co-optation         will not work or are     inexpensive                  with personnel; never a
                    too expensive                                         long-term solution
Coercion and        Used when speed is       Can very quickly             Always the last resort; very
termination         essential and the        overcome resistance          risky; leaves angry people;
                    initiators have the                                   coercion always eventually
                    power and authority                                   requires termination

Communication, cooperation, and coordination are the three strategies that
you as a leader will need to win over your leadership team, gain followers, and
overcome resistance, overt or covert, to your leadership.

The 3Cs: Communication, Cooperation, and Coordination

The most effective method of dealing with potential resistance to leadership is
through communication, cooperation, and coordination—the 3Cs of achieving
your leadership vision, shown in Figure 2.3. This method of communicating,
gaining followers, and countering resistance is the most effective path to achiev-
ing your vision because it frequently eliminates resistance before it starts.

FIGURE 2.3 Communication, Cooperation, Coordination.

Communication
Communication is defined in the broadest sense here as the transmission of
meaning to others. In leadership, it has two distinct components: (1) formal
communication, which is the dissemination of information through some specif-
ic medium, whether written or electronic, and (2) informal communication,
which transmits much more than words. By its nature, personal communication
also transmits nonverbal messages—your feelings, motives, and attitudes. In in-
terpersonal communications, your expressions, attitude, tone of voice, and body
language transmit much more than your words do.
There is a relationship among the words you use, the symbols of your leader-
ship, and your nonverbal communications. These three elements form a tri-
angle of effective communication. All three corners of the triangle must be in
concert for your communication to be effective.
At the top of the triangle are the words you use; they are the basis for your
communication. What you say orally or in writing is the beginning of your com-
munication, but only the beginning. These words must communicate your
meaning directly and clearly. You must say what you mean and mean what you
say. (See Figure 2.4.)

FIGURE 2.4 Communication.

Symbols, such as slogans and graphics, can be powerful
tools in communicating your thoughts and meaning. They must be in concert
with what you are trying to communicate or they may negate your words. Some-
one standing under a banner that reads ‘‘People Are Our Most Important Asset’’
to announce the layoff of thousands of employees will not be taken very
seriously.
Nonverbal communications—body language, attitude, and expressions—
matter very much in communicating your real thoughts. An executive at a board
meeting can ask a simple question, ‘‘Does any board member have a problem
with this program?’’ If she speaks with an expression of concern and makes
direct eye contact with the board members, she will elicit responses. The same
sentence spoken with a scowl and a steely-eyed glare will receive no response.

If you cannot communicate effectively, you cannot lead. It is crucial to com-
municate effectively and frequently with everyone involved early in the leader-
ship process. This is a process of one-on-one discussions, presentations to
organizations and groups, and communication by every medium possible.
The best form of communication is a clearly defined and formally stated
goal. The vision and goals stated for effecting leadership must be clearly con-
sistent with the goals and objectives of the followers. The need for change and
the logic leading to a change decision should be outlined, explained, and docu-
mented. This documentation transforms the needed change from your idea to a
reality. Finally, for maximum effectiveness, a broad dissemination of all the fac-
tors affecting leadership is necessary.
To lead, you must communicate effectively.
Leadership communication is a dialogue, because one-way communication
does not work. Anyone can give directions; that is simply the process of manag-
ing and supervising, not leading. Leadership requires effective communication
that is a two-way dialogue. That dialogue consists of five specific forms of com-
munication: personal communication, written and electronic communication,
meetings, presentations, and organization communication. A few basic rules of
communication apply to all communication media:

Communicate clearly, whether verbally, in writing, or electronically. Avoid
using questions or allegories that obscure a point.
 Communicate directly. Do not use abstractions to disguise direct
statements.
 Be brief. Do not use long anecdotes and unwieldy examples.
 Use active listening. Reflect each communication back to the speaker until
you both are clear on the meaning. Be sure not to debate each statement.
 Do not conduct a running commentary as others are speaking.
 Share information on many levels of understanding.

Personal Communication Personal communication is the daily one-on-one
communication that occurs in your organization: providing direction, asking
and answering questions, listening to your followers’ concerns, and so forth. To
your leadership team, this can be a motivating experience or a distressing one.
This experience carries over into all other aspects of achieving your vision. If it
is negative, it will inhibit the accomplishment of your leadership vision. If it is
positive, it will be a motivating factor and will lay the groundwork for your
leadership. First and foremost you must be able to communicate personally with
the members of your leadership team and the others in your organization.
The key to successful individual communication is dialogue: open, two-way
communication (i.e., listening and talking). This is a personal exchange. Every
time you have an individual communication with one of your team members,
followers, or employees, it should be a personal discussion. That person should
come away from that conversation with the strong feeling that you were talking
to him or her personally and listening closely. There are a number of barriers to
individual communication:

You are not present.
 You are not listening.
 You are making assumptions or premature evaluations.
 You are playing one-upmanship.
 You are hostile or negative.
 You do not have control of the situation.

The word you appears in every one of these barriers to communication. This
means that you as a leader have to be responsible for the quality of each and
every communication. Following are a few guidelines for effective personal
communications:

Be there. When having a personal communication, do not appear to be pre-
occupied with other thoughts. Rather, give that person your undivided at-
tention. Talk directly to the person, and use his or her name. Your body
language communicates as well. Do not let it give the impression that you
do not want to be there.
 Listen. Be an active listener. Paraphrase key points made by the other person
and reflect them back to the individual. Verify your understanding of the per-
son’s statements, feelings, and facts by asking relevant questions and linking
the statements together into statements of fact or specific actions. This kind of
listening requires effort, but the reward is a very effective communication.
 Consider new ideas. Do not make assumptions before the other person has
finished talking. Never (never) cut the other person short or interrupt until
he or she has finished, or you will inhibit, if not immediately stop, any
effective communication. Hold your evaluation until you have clearly dem-
onstrated that you comprehend the idea and understand the facts. Whenev-
er an idea is being presented to you, take time to listen, consider, and
answer in positive terms, even if you are turning down the proposal.
Be ‘‘we-win’’ oriented. Make every conversation a winning conversation.
Do not play one-upmanship with your team members, followers, or em-
ployees. This game is juvenile and wearisome and diminishes your position
and stature as a leader. Do not trap yourself in an ‘‘I win—you lose’’ mode
of thought. Remember that your objective is to win as a leader and achieve
your leadership vision, not to win every point of every discussion.
 Be positive. If you appear threatened and distressed by any new or oppos-
ing view, your team members and employees will be reluctant to discuss
anything with you in the future. They will find another leader who is not
hostile or negative to every input. Try to understand others’ views. When a
new idea is appropriate, accept it. If it is not appropriate, give due credit to
the individual for the effort and thought, and be clear about why the idea
cannot be incorporated.
 Communicate clearly. Plan and organize your thoughts so you think before
you speak. Speak in clear, concise sentences using the language of the lis-
tener. Strive mightily for clarity through active listening. To lead, you need
to be understood.
 Manage conflict. You must be willing to confront conflict, an integral part
of being a leader. Conflict will come, and it will come to you personally.
Never deny another person’s conflicting views, pretending that they do not
exist or are irrelevant. Face it, and then manage, minimize, and channel it
into positive directions by dealing with it on a factual basis. You cannot
resolve conflict with opinions or hyperbole. You can resolve it with facts.

Written and Electronic Communication As a leader, you are expected to be
an effective writer in various forms: memorandums, formal reports, letters,
meeting minutes, proposals, and e-mail. (Notice that we left e-mail for last. To-
day a glut of information is available through electronic media. Some leaders sit
before their monitors all day sending instructions and asking questions on e-
mail and believe they are leading or managing an organization through that ex-
change. Electronic systems can be effective communication tools, but they are
just that—tools—and not substitutes for personal communications.)
Written communication in some ways is more difficult than the spoken word;
since you are not present, your attitude, body language, and expressions cannot
be read or interpreted by the reader. Nevertheless, the focus of this form of com-
munication is the same: an effective communication that results in understand-
ing and action. This requires that you put yourself in the place of your audience
and write to their needs, then visualize what their likely reaction will be to your
written communication. Your writing should be as alive as if you were address-
ing another person in an animated conversation.
Following are some common barriers to effective written and electronic
communication:

 Lack of clarity
 Lack of focus
 Lack of coherence
 Improper language for the audience
 Dull and boring presentation
 Not to the point

You can overcome them by using the following simple and direct rules:

Employ commonality of language. This means writing to your audience’s
education level, experience, and interests. Before you begin writing, think
about your reader and visualize how your communication will be received.
Remember that it is you who desire to communicate. Write so that your
audience can clearly understand your ideas, thoughts, and concerns. Re-
member the simple rules of good grammar. Write in short, declarative sen-
tences that are not open to misinterpretation, and avoid clouding the issue
with unnecessary verbiage. Write good lead sentences to each paragraph
and several supporting sentences. Be sure the paragraphs are supportive of
the focus of your communication.
Focus on the key issues. Organize your document to support the central idea or
fact. Whether the written communication is a one-page memo or a forty-
five-page report, maintain your focus. Write so that the document supports
the central theme of your correspondence. Develop other ideas as second-
ary and in support of your main idea.
 Write coherently. Organize your material to tell a story from beginning to
end. Do not inject new ideas in the middle of a paragraph or in the middle
of several paragraphs discussing a single subject. In each paragraph, the
first sentence should state a specific idea or question, with the following
two to four sentences supporting that idea or amplifying it. Similarly, your
lead paragraph will introduce your theme and the following paragraphs
will support it. Here, too, short declarative sentences communicate best.
Here is a step-by-step guide to writing your document:
 Write a clear, concise theme or subject as the lead sentence and the lead
paragraph.
 Present the relevant data or information concerning the subject.
 Discuss the relevant facts and the data in detail. Make no assumptions
about the reader’s knowledge.
 Draw conclusions based on the data and discussion. You cannot draw
conclusions about data or information you have not discussed.
Make recommendations. You cannot make recommendations for which
you have not drawn conclusions.
 Use clear language. Your writing style will have an impact on the reader,
so select your style with the reader in mind and follow it throughout the
communication. Examples of styles are formal business style, speaking in
the first person, using scientific descriptions and notation, and conversa-
tional tone. Always use the active voice rather than the passive voice, and
select the shortest and most commonly used words because they are most
effective.

Meetings Much of your leadership team’s work will be accomplished in meet-
ings. Team members, followers, and employees will carry out assignments and
perform many tasks between meetings; it is during meetings that discussions
occur, decisions are made, and actions are decided upon. Conducting productive
meetings can be quite difficult since few leaders and fewer team members, fol-
lowers, or employees know and understand the skills needed. The best way to
have a productive meeting is to understand the barriers and rules to conducting
good meetings. Some of the barriers are:

 Lack of focus
 Poorly defined purpose
 Lack of agenda control
 No closure
 No follow-up

The key to conducting a productive meeting is to follow these guidelines.

Always have an agenda. This is true of all meetings, even ad hoc meetings of
only a few people. Take a moment to establish an agenda and define why
you are meeting, what you hope to accomplish, and how long it should
take. Once you have an agenda, stick to it to keep the meeting on track.
Agendas should include:
Specific agenda items. These are the topics that will be discussed at the
meeting. Items should be presented in a short sentence or phrase, with
an agenda item for each subject of discussion.
Presenter of the topics. This is usually the person who requested the item
be added to the agenda. That person will come to the meeting prepared
to present and discuss the item.
Time frames for presentation and discussion. These include specific start and
end times for the meeting, with each agenda item allotted time for
presentation and discussion.
Type of presentation. Identify whether each item is for discussion, information,
or decision, and pinpoint decision agenda items that require action. They are
usually preceded by information presentations, or a point paper is provided to
the team members before the meeting. Never make a cold decision presentation
at a leadership meeting.
Be clear about the purpose. When you have taken the time to schedule a meet-
ing, you should have a very clear idea of why you are all there. Meetings
should be conducted when you have a problem to solve, or a decision to make
or a need to inform or to be informed. Before asking for a meeting, determine
what the outcome should be. That is your purpose for meeting. Just because
you ‘‘always have a meeting on Friday at 2:00’’ is never a good reason.
Encourage participation by all meeting attendees. Do not ask people to
come to a meeting if you do not expect or want them to participate. To
encourage participation, call on them individually if necessary, and ask for
an opinion. Be prepared for disagreement. (If this were easy, you probably
wouldn’t need a meeting to discuss it.) Listen to the participants and pro-
cess their information.
Always summarize at the end of the meeting. Give the participants a sense of
closure—that this was a good meeting and accomplished its goals. Clearly
define decisions that have been made and action items assigned and select-
ed. Be sure that everyone understands their action items and that they ap-
pear in the minutes of the meeting.
If a meeting is worth having, it is worth writing minutes. Minutes are the
written record of your meeting that set out agreements, decisions, and ac-
tions. Having this information in writing will prevent misunderstandings.
These minutes should include:
 The time, location, and purpose of the meeting
Names of all attendees and names of presenters
 The subjects presented and discussed
 The decisions made and actions assigned
 The scheduled time and location of the next meeting and known agenda
items
 A request for the attendees to review the minutes, correct inaccuracies,
recommend changes, and add agenda items for the next meeting

Presentations Leaders must be dynamic and effective public speakers. In
many situations, you will be called upon to speak before a large group and de-
liver a presentation. These presentations will give you significant public expo-
sure and the chance to have an impact on others.
There are some barriers to being an effective public speaker. Just like many
of the other barriers to leadership, they can be overcome with education, prac-
tice, and personal discipline. The most common barriers to effective public
speaking are:

 Lack of preparation. This occurs when you are not totally familiar with
your subject or with the material you are going to present.
 Canned talks. The same talk or speech given over and over again, or the
presentation you have memorized that puts audiences to sleep.
 Failure to speak to your audience. This is the ‘‘you people’’ syndrome. Do
not talk over the heads of your audience or look and sound disinterested in
them.
 Poor visual aids.
 Too much material in too little time.

The key to effective public speaking is focusing not on covering the material
or reading a canned script but on having a dialogue with your audience. The
following simple rules will greatly assist you in making effective presentations:

 Be prepared. This is a cliché, but very true. Carefully research and prepare
your materials, and prepare yourself on them by reviewing the information
until you know it. Be sure that your material is authentic and that your visu-
al aids are ready and correct. Know your audience too—their educational,
technical, and professional level. Try to determine whether they have any
agendas concerning the material you are about to present. Finally, be flexi-
ble. You may well be surprised by the size and makeup of your audience.
Perhaps you prepare for a formal presentation with transparencies or slides,
but it becomes clear that your audience would rather sit down and discuss
the subject. Do not get flustered in this situation. Remember that this is your
material and your subject, and you can present it in any way you see fit.
 State your purpose. Clearly define the purpose of your presentation: what
you want the listeners to get from it. Organize your material, your visual
aids, and your thoughts around a central purpose or theme. Understand if
your presentation is:
 Educational or training information only
 A decision brief
 Asking your audience to do something
 Asking your audience to believe something
 Trying to inform, train, or educate your audience
 Speak from an outline, never a script. It is always a mistake to memorize
or read your talk. Your audience will lose interest and maybe even doze a
bit while you are droning on in a monotone presentation. Instead, use an
outline that you glance at occasionally, and deliver your talk directly to the
audience with passion. Impart that this is a subject of interest and impor-
tance to you, and you feel strongly about it.
 Speak directly to the audience. Do not speak over their heads, at their feet, or to the podium. Speaking to the audience is one of the most important
traits of a successful public speaker. Look at the audience and individuals
in it. Hold their eye contact for several seconds, or long enough to make a
point. This is a captivating trait, and one that you can learn. As you speak,
move about and gesture, as appropriate. Get excited about your subject,
and the audience will also. One caution: This will work only if you are
authentic. If you are talking from a script or are insincere about the subject,
your acting will show.

Organization Communications

As a leader, you use communication to carry out all of your organizational functions: planning, organizing, staffing and staff development, directing and leading, and evaluating and controlling. Communication is your most important tool. Here are some barriers associated with organization communication:

 Failure to communicate the mission and goals of the organization


 Failure to communicate where the organization is going and how it plans to
get there
 Failure to provide your team, followers, or employees with the information they need to carry out their jobs
 Failure to provide accurate and timely feedback

These impediments underlie many leadership, business, and organization problems, such as lack of leadership credibility and lack of trust. The resolution
of these impediments could transform an enterprise in trouble into a successful
one. Following are guidelines for achieving effective organization
communication.

Frequently communicate your leadership direction. Let everyone know where the vision, goals, and objectives are headed. This is especially important during times of change.
Inform everyone about the major leadership issues that are influencing the
organization. A proactive leadership approach will be far more effective
than a reactive approach. Rather than being placed in a defensive position
of reacting to various rumors about specific issues, leaders should take the
initiative in establishing an ongoing program that periodically informs
everyone about the issues of the day and what is being done to resolve
them.
Provide information to your followers frequently. Providing accurate and
complete information to everyone is a principal way of achieving trust and
getting the job done. It is your responsibility as a leader.
Lead by walking around. This is not a new concept, but it is indispensable to
organization communication. A significant portion of your workweek
should be devoted to interacting with your followers on a one-to-one basis.
This is organization communication at its best. Nothing can replace it.

No matter how intelligent or innovative you are or how important your posi-
tion is, these qualities are of no use if you cannot communicate your ideas
effectively.

Cooperation
Cooperation, the second of the 3Cs, is the act or instance of people working and
acting together for a common purpose or benefit. (See Figure 2.5.) It will be
necessary for your leadership team to be cooperative for more than an instant if
you are to be successful. Cooperation needs to become a way of life for you,
your leadership team, and your followers.
A deeper look at the nature of cooperation will reveal why people do not cooperate. Often there is no reward for cooperating, and there may well be no consequence for not cooperating. Moreover, the rewards for being cooperative may be tentative and undefined.
FIGURE 2.5 Cooperation.

Some leaders are the biggest block to cooperation. If they talk about cooperation, insist on it, yet pass out rewards based on individual competitive results, cooperation will not be a hallmark of their team. You as the leader must implement a win–win attitude toward cooperation and avoid fostering an I win–you lose atmosphere among your leadership team, followers, or employees.
Directly involving potential resisters to your leadership in the design and im-
plementation of change can forestall resistance. Form a network of those poten-
tial resisters because they may have something positive to contribute. This
strategy is practical if those individuals can perceive some benefit from your
leadership or can limit the negative effects of the change on them personally. It
is not very practical to involve them if they are potential net losers in the change
process.
In the spirit of cooperation, display some flexibility. Be prepared to compro-
mise with your network, and do not expect 100 percent acceptance. There are
known barriers to achieving cooperation among your leadership team, follow-
ers, or employees:

 No reward system for cooperation


 A competitive environment
 Little trust
 No commitment to change

A few basic rules of cooperation will help overcome the natural resistance to
it:

Network those affected. As you implement your leadership vision, changes affecting people will necessarily be made. Networking those affected, and integrating them into your change process, is an effective way of overcoming potential resistance. Make them part of the team or give them specific responsibilities in achieving your leadership vision. If they participate in achieving your vision, it is difficult for them to resist your leadership.

FIGURE 2.6 Coordination.
Be prepared to compromise. Compromise is a basic rule of cooperation.
When working with your team or networking those affected by your lead-
ership, promote compromise along the way.
Instill a commitment to cooperate. This is done first by removing the incen-
tives for your leadership team members to be competitive with each other.
(There will still be plenty of competition with outside forces.) The leader-
ship team or your organization as a whole should be cooperating to win (i.e.,
to achieve your leadership vision). Provide incentives for cooperation; re-
ward team efforts and minimize individual rewards.

Coordination
Coordination, the last of the 3Cs, is the active interaction of functions or ele-
ments of a system for a common purpose. (See Figure 2.6.) This definition is
close to that of cooperation; indeed, the two functions are closely related.
A well-coordinated leadership effort can help deal with resistance by being
supportive of the elements required to implement change. This process includes
providing the support needed to facilitate your leadership changes among the
individuals and elements of the organization, providing the training and educa-
tion necessary to implement new skills and standards, planning and structuring the change so that it can be transitioned effectively, and executing the planned change rather than just allowing it to happen in a haphazard way. Some barriers
to coordination are:
 Poor lines of communication between team members


 Conflicting support requirements among such diverse activities as facilities
management, finance, transportation, and writing issues
 Assumptions and/or outdated information being provided to and/or used by
team members
 Failure to document (in writing) complex plans or schedules

Many of these barriers to coordination can be overcome with good planning and attention to detail on the part of the leader. Communication also plays a
vital role in coordination; the leader, as the catalyst in the dissemination of in-
formation, must communicate frequently with the team.
A 3C program can be very effective when any resistance is based on in-
adequate information or incomplete data. The program facilitates the change
agent’s acquisition of help from all employees, including potential resisters. It
fosters good relationships between the change agent and the resisters. The pro-
gram requires time and effort, however, and it will not negate all resistance to
change. The change will always have a negative effect on some individuals and
parts of an organization. Utilizing the 3Cs will win the most adherents to your
leadership and eliminate most of the resistance. Nevertheless, inevitably there
will be resistance. You must understand the causes of it and be prepared with
techniques to overcome it.

Negotiation and Agreement


Offering incentives to potential or active resisters is another way of countering
resistance to change. This method is frequently used throughout industry.
Changes in work rules, benefits, and productivity can be balanced with higher
wages, early retirement, and production incentives. Negotiation is an effective
way of dealing with change when there is clearly someone who will lose and
when that individual has the power to resist. Negotiated agreements can be an easy way to overcome resistance, but, like some other processes, they can be time-consuming and expensive. This strategy is:

 Used with employees who have power to resist change


 Used to avoid major causes of resistance
 Expensive in time, cost, and assets

Manipulating and Co-opting


The manipulation and selective use of information is an effective way to deal
with resistance to change. Manipulation, in this context, involves the very selec-
tive use of information and the conscious structuring of events.
Co-opting an individual usually involves giving the person a role in the design
or implementation of change. Similarly, co-opting a group involves giving one of
its leaders a key role in the design or implementation of change. This is not a
form of participation, however, because the initiators of change do not want the
participation of the person co-opted, merely his or her passive endorsement.
Co-opting can be a relatively easy and inexpensive way to gain an individu-
al’s or group’s support. This method is quicker than participation and cheaper
than negotiation, but there are some drawbacks. If individuals and groups feel
they are being tricked into not resisting, are not being treated equally, or are
being lied to, they often respond in a very aggressive and negative way. Another serious drawback to manipulation and co-opting is that, if a manager develops a reputation as a manipulator, it can undermine his or her ability to use other needed approaches such as the 3C method.
The co-opting strategy is:

 Used when other strategies have not worked


 Used when change is urgent, and there is insufficient time to implement the
first two strategies
 Not preferred because it can lead to future problems with personnel

Coercion and Termination


In the final instance, managers must at times deal with resistance coercively.
They must force people to accept change by explicitly or implicitly threatening
them with loss of jobs, promotion, position, or authority. The employee may
actually be transferred or terminated. Like manipulation, coercion is a risky pro-
cess. Inevitably, people strongly resent forced change. These employees must be
terminated (or retired or transferred) to facilitate the change and to establish the
leadership team required to achieve a competitive edge. In some situations,
where speed is essential and where change will not be popular regardless of
how it is introduced, coercion sometimes is the manager's only available tool.
Some employees, supervisors, and managers always resist change no matter
what leadership efforts are exerted. In these cases, termination sometimes is
necessary to facilitate the needed change. This strategy is:

 A last resort when all other strategies have failed


 Able to overcome all sources of resistance very quickly
 Very risky, because it always damages the trust bond and leaves people
angry and alienated
 One that almost always leads to the necessity to terminate an employee

Implications for Leaders


All of the strategies have implications for leaders. To determine what they are
and how they will affect your efforts, conduct an analysis of the factors relevant
to producing the needed change. This analysis focuses on the potential resist-
ance to change:

 Determine how and where within an organization each of the methods for
leadership needs to be applied.
 Select a leadership strategy and specify where on the strategic continuum
the strategy will lie.
 Monitor the process and adjust as necessary.
 No matter how well you plan your initial strategy and tactics, something
unexpected will occur. It is always necessary to adjust the strategy and
methods as the change process progresses.

You can significantly improve your chances of success in any change effort
by following these guidelines:

 Conduct an organization analysis.


 Evaluate the factors relevant to producing the needed change.
 Select the methods to be applied.
 Select a change strategy.
 Monitor the implementation process.

Communication skills are a key to this method, but not even the most out-
standing leadership will make up for a poor choice of strategy, lack of planning,
or ineffectively applied methods for overcoming resistance. In a world that is
becoming more and more dynamic, the consequences of poor leadership will
become increasingly severe.

KEY POINTS

A management system is your philosophy of management, organization of management, and staff, processes, and procedures. It describes how you will do
business and deploy your requirements throughout the organization.
Six requirements are imperative to any successful management system:

1. Documented processes and procedures with controls, which are fully implemented
2. An organizational structure with defined management roles, responsibili-
ties, and accountability
3. A specific method to communicate and promulgate the management system throughout the organization
4. A documented and implemented method for decision making
5. Commitment to continuous measurable improvement
The eight management system principles are:

1. Customer focus
2. Leadership
3. Involvement of people
4. Process approach
5. System approach to management
6. Continuous measurable improvement
7. Fact-based decision making
8. Mutually beneficial supplier relationships

It is important to know and understand the traits, principles, and skills of


leadership discussed in this chapter.
Understanding resistance to change includes:

 Narrow-minded motivation
 Lack of understanding and confidence
 Different analysis of leadership needs
 Low tolerance for change

Overcoming resistance to change includes:

 The 3Cs: communication, cooperation, and coordination


 Negotiation and agreement
 Manipulation and co-optation
 Coercion and termination

Leadership Self-Assessment
This self-assessment is designed to provide a baseline for determining where
you are in achieving your leadership vision and where you have to go. The
purpose of this assessment is to provide the basis for completing Appendix B,
your plan to achieve your leadership vision. You and the members of your
leadership team should complete this self-assessment. It will assist you in
measuring how far you must go to achieve your vision and what you need to
do.
This assessment includes the five elements of the leadership model. Each of
these elements is to be measured using a scale of 0 through 100, in increments
of 20. For each item, assess your preparation, training, education, knowledge, or
ability to lead in that area. Mentally answer each question in a category. Then,
on the basis of all your responses, assign yourself a numerical value according
to the following scale:
No preparation, education, training, knowledge, or ability 0
Very slight preparation, education, training, knowledge, or ability 20
Some preparation, education, training, knowledge, or ability 40
A working knowledge 60
A journeyman ability to implement 80
Total mastery and confidence in this area 100

This assessment promotes thought about what leadership characteristics you most value. Record your scores in the leadership profile in Table 2.10.
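If you prefer to tabulate the assessment electronically, the following is a minimal Python sketch of one way to record the category scores and snap them to the 0-to-100 scale in increments of 20. The category names mirror the profile in Table 2.10; the helper names and the idea of averaging yes/no answers into a provisional score are illustrative assumptions, since the text asks you to assign the final value based on your overall judgment of each category.

# Minimal sketch (not prescribed by the text): tallying self-assessment
# scores on the 0-100 scale in increments of 20.

PROFILE_CATEGORIES = [
    "Principles", "Traits", "Skills", "Vision",
    "Teams", "Communicating", "Achieving",
]

def snap_to_scale(value):
    """Round a provisional score to the nearest increment of 20 (0-100)."""
    return min(100, max(0, int(round(value / 20.0)) * 20))

def provisional_score(answers):
    """Illustrative assumption: use the share of 'yes' answers in a category
    as a provisional 0-100 score before applying your own judgment."""
    return 100.0 * sum(answers) / len(answers) if answers else 0.0

def leadership_profile(responses):
    """Build the Table 2.10 profile from per-category yes/no responses."""
    return {cat: snap_to_scale(provisional_score(responses.get(cat, [])))
            for cat in PROFILE_CATEGORIES}

if __name__ == "__main__":
    # Hypothetical data: ten yes/no answers per category.
    example = {cat: [True] * 6 + [False] * 4 for cat in PROFILE_CATEGORIES}
    for category, score in leadership_profile(example).items():
        print(f"{category:15} {score:3}")

The snapped value is only a starting point; as noted above, the number you record should reflect your overall judgment of each category, not a mechanical average.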

Leadership Principles
1. Do you make decisions in a timely manner?

2. Are you a self-confident person?


3. Do you have a sense of responsibility?
4. Do you understand and can you describe your basic values?
5. Can you communicate effectively orally and in writing?
6. Are you tactful in dealing with your peers and subordinates?
7. Do you seek responsibility?
8. Are you guided by a clear set of values?
9. Are you able to set the example for your followers?
10. Do you consider yourself competent to fill a leadership position?
Numerical value _____________

Leadership Traits
1. Is your desire to be a leader strong enough to make personal changes and
sacrifices?
2. Can you adapt to different situations quickly?

TABLE 2.10 Your Leadership Profile


Profile 0 20 40 60 80 100
Principles
Traits
Skills
Vision
Teams
Communicating
Achieving
3. Do you have the mental and physical endurance to be a leader?


4. Do you exercise sound judgment in making decisions?
5. Do you have the courage to face difficult situations and difficult people?
6. Do you have the self-confidence to be a leader?
7. Do others have the confidence in you to be a leader?
8. Can you focus yourself, your time, and your resources on becoming a
leader?
9. Do you exhibit enthusiasm? Are you enthusiastic about what you do?
10. Do you have the integrity needed to be a leader?
Numerical value _____________

Leadership Skills
1. Do you have the education required to be a leader in your chosen field?

2. Do you have the training necessary to be a leader in your chosen field?


3. Do you have the practical knowledge and experience to be a leader in
your chosen field?
4. Do you possess the required financial skills needed to achieve your
vision?
5. Do you have the business skills needed to achieve your vision?
6. Do you have the planning skills needed to achieve your vision?
7. Do you have the personal communication skills needed to achieve your
vision?
8. Can you effectively network with others in your selected area of leader-
ship to build a supporting structure for your leadership?
9. Do you have the team-building and -facilitating skills you will need?
10. Can you communicate effectively orally and in writing?
Numerical value _____________

Vision
1. Do you clearly understand your basic values?

2. Do you clearly understand the basic values of your organization?


3. Do you clearly understand the values of your followers?
4. Do you clearly know and understand your competencies and those of
your followers and the organization?
5. Is your vision based on the reality of your underlying values and
competencies?
6. Have you achieved consensus about your values and vision with your
leadership team?
7. Can your leadership vision capture the imagination and ignite the enthu-
siasm of your followers?
8. Is your vision based on the financial facts of your business situation?
9. Is your leadership vision achievable?
10. Is your leadership vision compatible with the needs of you, your followers, and the organization?
Numerical value _____________

Teams
1. Do you know what skills you need in a leadership team?

2. Do you have team members with planning skills?


3. Do you have team members with financial skills?
4. Do you have team members with business skills?
5. Do you have team members with technical skills?
6. Do you actively involve team members in establishing goals and
objectives?
7. Do you have a team structure? Have you selected your team members?
8. Do you have a team facilitator?
9. Do you know how to run effective team meetings?
10. Have you determined how you are going to measure team success?
Numerical value _____________

Communicating
1. Do you understand the natural causes of resistance to your leadership?

2. Do you understand how to use communication to overcome this


resistance?
3. Are you effective in individual communication?
4. Can you communicate effectively in writing and electronically?
5. Can you lead effective meetings?
6. Are you an effective public speaker?
7. Can you communicate to your organization effectively?
8. Do you practice active listening?
9. Are your communications brief and to the point?
10. Do you communicate in clear, distinct sentences that can be clearly un-
derstood by the listener?
Numerical value _____________
Achieving
1. Do you understand the strengths and limitations of your leadership team?

2. Can you fulfill your team’s needs for improvement?


3. Do you have a complete and realistic plan of action and milestones to
achieve leadership?
4. Do you know and understand your strengths?
5. Do you know and understand your needs for improvement?
6. Do you have the training and education resources required?
7. Do you have the experience needed to be a successful leader in your cho-
sen field?
8. Do you have the budget available?
9. Do you have the time available?
10. Do you know how you will measure success once it is achieved?
Numerical value _____________
3
ENTERPRISE EXCELLENCE
DEPLOYMENT
Deploying Enterprise Excellence is a matter of defining what you want to
accomplish, how to accomplish it, who you need to help you accomplish
it, and then convincing all to follow you.

Deployment of Enterprise Excellence is accomplished through a series of collaborative steps. The steps include determining what you want to accomplish,
understanding why you want it, determining where you are on the journey to
the goals, developing a plan to achieve the goals, and establishing the resources
to implement the plan. This is followed by continuous performance monitoring,
evaluating, reporting, and adjusting the plan. This journey begins with the recognition of a need to address a crisis, a need to improve organizational performance, or a need to achieve a culture of continuous measurable improvement where one doesn’t exist. Frequently, a crisis will be the equivalent of a business-related 911 call (e.g., a sudden increase in scrap, rework, and repair costs; skyrocketing warranty returns; loss of customers; dramatic increases in uncontrollable costs; or, in the case of public enterprises, diminishing budgets). In some instances, there may be no immediate crisis, yet we may see competitors achieving levels of success that potentially threaten our existence, or we may see an opportunity to achieve a competitive edge if we can create a culture of continuous measurable improvement. When the need is accompanied by an understanding that a holistic approach, not a silver bullet, is the most effective way to achieve the goal, you have reached the first step in achieving Enterprise Excellence. The next step is to establish the infrastructure required for
deploying and implementing Enterprise Excellence, or Enterprise Excellence
infrastructure. After the infrastructure is established, you need to measure the
enterprise against the Enterprise Excellence requirements, that is, deployment
measurement, analysis, and reporting. The results of the evaluation are then
used to develop a plan of action to close any gaps between the current state and
the desired state of the enterprise, which is called enterprise planning. The plan
of action will be a series of collaborative actions and projects to achieve the
desired enterprise state. Most actions and projects will be accomplished by
teams. The plan will be deployed collaboratively throughout the organization, that is, Enterprise Excellence implementation.

ENTERPRISE EXCELLENCE INFRASTRUCTURE

Successful deployment and implementation of Enterprise Excellence is dependent upon executive and management commitment demonstrated through a for-
mal infrastructure and implementation activities. Deployment requires an
executive group to establish policies and guidelines, a management group to
establish the mechanisms to implement the policies and guidelines, organiza-
tional functions to translate the policies and guidelines into action, and Enter-
prise Excellence subject matter experts to lead and facilitate implementation.
The deployment and implementation infrastructure includes the following roles, as shown in Figure 3.1:
 Enterprise senior review group
 Enterprise Excellence deployment Champion
 Project sponsors
 Master Black Belt
 Black Belt
 Green Belt
 Team members
It is virtually impossible to deploy Enterprise Excellence without external
help in the first year or two. Carefully select help from consultants or companies
that have a substantial amount of experience in a wide variety of businesses,
industries, and organizational cultures. Before making the selection, carefully
vet their references to ensure they have the experience in achieving the results
you are seeking. Those selected need to be willing and capable of tailoring the
fundamentals of Enterprise Excellence to the needs of your organization, its
business, industry, technology, and culture.

FIGURE 3.1 Enterprise Excellence Deployment Structure.


Enterprise Senior Review Group


The enterprise senior review group (ESRG) is the executive group responsible
for the deployment of Enterprise Excellence. It is made up of the senior leaders
within the enterprise (senior executives, division managers, and managers from
finance, quality, HR, engineering, etc.). The ESRG establishes the mission,
vision, goals, and objectives for the enterprise. It establishes the business plan,
or strategic plan, for the enterprise. The ESRG develops the enterprise value
stream map. It formulates policies and defines enterprise-level projects and
actions required to implement the enterprise plans. The ESRG also establishes
the enterprise improvement project selection criteria and establishes the process
for prioritizing projects to ensure collaborative and supportive projects for
achieving the mission, goals, and objectives of the enterprise (i.e., aligned with
the Enterprise strategic plan or business plan). The ESRG needs the support of a
Master Black Belt to facilitate the application of the Enterprise Excellence tools,
techniques, and processes and to serve as an adviser for Enterprise Excellence.

Deployment Champions
Deployment Champions are senior leaders in major organizational elements of
the enterprise. They are individuals with significant overall operational respon-
sibility. The Champions are deployment leaders for their organization. Within
their organization, they ensure the deployment of the Enterprise Excellence plan
and enterprise policies and guidelines. They monitor and report the performance
of their organizations to the ESRG and also serve as mentors to Black Belts and
Green Belts. They guide the collaborative implementation of projects and
improvement activities within their organizations, including selecting individu-
als to be trained as Enterprise Excellence (EE) subject matter experts (Master
Black Belts, Black Belts, and Green Belts) and ensuring that improvement proj-
ects are selected in accordance with the enterprise’s selection criteria.
In addition, Champions identify situations that cannot be resolved by the team members and provide support.
Some of the most critical and challenging responsibilities of the Champion
include:

 Selecting and mentoring Black Belts


 Selecting projects
 Removing barriers and ensuring Black Belts receive the support they need
 Driving cultural change through Enterprise Excellence

Project Sponsors
A project sponsor is the ‘‘owner’’ of the process or product being developed or
improved. By owner, we mean the individual who has the authority and respon-
sibility to effect changes. The project sponsor has a vested interest in the success
c03_1 10/09/2008 82

82 ENTERPRISE EXCELLENCE DEPLOYMENT

of the project. The sponsor works closely with the team leader to ensure the
project definition, scope, goals, and deliverables meet the needs of the organiza-
tion and are collaborative and supportive of the enterprise’s vision, mission,
goals, and objectives. The project sponsor does not necessarily attend all project
meetings, but does maintain close communication, coordination, and coopera-
tion with the project leader and the deployment Champion. The project sponsor
is responsible for providing guidance to the project team leader, ensuring the team has the resources necessary to implement its plan, and removing barriers.

Master Black Belts


Master Black Belts are organizational transformation professionals who are
continuous measurable improvement experts. Master Black Belts possess
experience and advanced knowledge of the processes, tools, and techniques for
process and product improvement. Master Black Belts advise the organization,
ESRG, and Champions about Enterprise Excellence deployment and implemen-
tation. In addition, Master Black Belts need to have experience in multiple busi-
ness environments, which makes them well suited to train and mentor Black Belts
and Green Belts. They need to possess good formal and ad hoc teaching skills.
Master Black Belts also lead Enterprise Excellence projects and therefore
need to have a mastery of Enterprise Excellence technical skills, as well as
change management skills for strategic deployment and tactical implementa-
tion. They also need to possess strong facilitating skills and an ability to com-
municate effectively with all levels of the organization.

Black Belts
Black Belts are specialists in continuous process and product improvement. Black
Belts are a technical resource to the organization for the deployment and imple-
mentation of Enterprise Excellence. They lead projects and provide technical
assistance and facilitation for other improvement project teams. They are directly
responsible for supporting Green Belts and providing coaching to Green Belts and
team members. The Black Belts focus on deployment and implementation proj-
ects that are typically broad in nature, crossing organizational boundaries.
Black Belts need to be open-minded, customer-oriented individuals with a
propensity to learn new ideas. They also need to have the respect of the execu-
tive leadership team, their peers, and subordinates. Effective Black Belts need to
be cognizant of the current organizational culture and know when to push back
and when to back off. Black Belts, like the Master Black Belts, need excellent
facilitation skills and must be able to conduct formal and ad hoc training.
In many organizations, the Black Belts are full-time leaders of Enterprise
Excellence activities. This is very desirable because it provides them with increased EE experience and capitalizes on their talents. For many small to midsized organiza-
tions, this is not practical in the early stages of the journey to Enterprise
Excellence. This is a decision and policy that the ESRG needs to address during
its initial planning and that needs to be revisited during the annual deployment
review. Remember, the Black Belts are the tactical leaders for Enterprise Excel-
lence implementation. Furthermore, the skills and knowledge that will make
them successful change agents will be the skills and knowledge that will pro-
vide them with the credentials to become the future leaders of the enterprise.

Green Belts
Green Belts are experienced and trained in leading improvement projects within
their work center and are therefore good stewards of their processes. Green
Belts are trained in the basics of process and product improvement tools and
techniques, facilitation techniques, and project management. Black Belts are as-
signed to mentor each Green Belt as they lead improvement projects in their
work center. Like the Black Belts, the Green Belts need to have the respect of
the executive leadership team, their peers, and subordinates. Green Belts are not
full-time in that role. Through their Enterprise Excellence implementation as-
signments, the Green Belts will gain the process improvement and project man-
agement experience that will provide them the credentials to become future
leaders of the enterprise.

Team Members
Successful projects require cross-functional, multidisciplinary members encom-
passing disciplines, professions, trades, or work areas impacted by the project
(if the project cuts across departmental boundaries, so should team member-
ship). Effective teams are those that are composed of three to six core members,
with other members added as needed. Team members are those individuals
close to the process, but may also be stakeholders in the project or process.
Team members, typically individuals working in the process, are selected by
the project leader in consultation with the project sponsor. Team members are
the individuals who carry out assignments and make improvements. Team
members are of two types: standing members and ad hoc members. Standing
members are those individuals who meet regularly with the team and who are
essential for the team’s activities. The ad hoc members are those individuals
who possess specialized skills or knowledge but do not meet regularly with the
team. They participate only as their subject matter expertise is needed.

DEPLOYMENT MEASUREMENT, ANALYSIS, AND REPORTING

Once the decision has been made to deploy Enterprise Excellence and the
ESRG has been formed, the next steps are to evaluate the management system
and perform an Enterprise Excellence maturity assessment. The results of the
assessments will reveal strengths and weaknesses in the organization. They will also reveal opportunities to enhance the ability of the enterprise to become a high-performing organization able to achieve the competitive edge.
The management system assessment evaluates the state of the management
system of the enterprise. This is the system that establishes the infrastructure,
processes, and procedures necessary for leading and managing the organization.
Throughout its life cycle, the enterprise needs to be continually monitored and
evaluated and results reported to the ESRG. The results of this review will be
used to determine gaps between the current system and the desired state and
then to develop the actions needed to close the gaps, prioritize the actions and pro-
jects, and establish a plan of action and milestones (POA&M).
The Enterprise Excellence maturity assessment reviews the critical factors
and elements for achieving the desired Enterprise Excellence state. As with the
management assessment, the results of this review will be used to determine
gaps between the current system and the desired state and then to develop
the actions needed to close the gaps, prioritize the actions and projects, and estab-
lish a plan of action and milestones (POA&M). These initial evaluations are
critical to establishing the desired agility and flexibility needed to thrive in the
twenty-first century. It is critical that the ESRG measure, evaluate, and report
progress of the plans and then annually perform the assessments to determine
the current state and make adjustments as necessary.
Management System Assessment
The Enterprise Excellence management system is a documented system that
establishes the quality management system requirements, defines management
responsibility, and specifies the policies, guidelines, and processes for resource
management, product and service realization, and measurement, analysis, and
improvement. The management system assessment reviews each of these areas
for good practices. A set of questions for each aspect of the management system
provides the basis for the evaluation and subsequent gap analysis. Each answer
needs to be supported by objective evidence. Where objective evidence is not
available, anecdotal data may be used if the source is identified. Verification
is essential to understand the accuracy of data so that confidence can be ade-
quately assessed for the information used to draw conclusions about the system
and to make decisions about the state of the management system and future
actions. The questions support the design of a basic quality management system
(QMS). This is the foundation upon which you may want to tailor your QMS to
your specific business, industry, and culture.

I. Quality management system requirements


A. General quality system requirements
1. Has the organization established, documented, implemented, and
maintained a quality management system in accordance with rec-
ognized good practices (e.g., ISO 9001:2000)?
2. Are there policies, guidelines, and procedures for continually evaluating and improving the quality management system (QMS)?
3. Is there an enterprise value stream map?
4. Does the QMS provide processes and procedures to ensure opera-
tions and process controls are effective?
5. Has the organization provided the resources and information neces-
sary to support the operation and maintenance of its processes?
6. Does the QMS require process measurement, monitoring, and
analysis?
7. Does the QMS implement corrective and preventive actions needed
to achieve the planned results as documented in the quality plans?
8. Does the QMS contain documentation to show continuous im-
provement of the process?
9. Does the organization have documentation (quality manual, proce-
dures, and records) to demonstrate that it manages its processes?
10. Does the QMS provide policies, guidelines, and procedures to
ensure control and quality of processes performed outside of the
facility?
B. Documentation requirements. The basic quality system documen-
tation needs to include a quality manual describing the QMS and
providing governing processes. This needs to include a documented
company quality policy and quality objectives; requirements for
documented level 1 and level 2 procedures; policies and guidelines
for effectively planning, operating, and controlling the enterprise
processes; and policies and guidelines establishing processes and
procedures for creating and managing records demonstrating com-
pliance with the QMS.
1. Quality manual
(a) Does the quality manual define the scope of the quality manage-
ment system and any justification for exclusion to the require-
ments (e.g., obsolete product lines)?
(b) Does the quality manual describe the interaction between the
processes of the quality management system?
(c) Does the quality manual include the basic required quality pro-
cedures as defined in ISO 9001:2000 or other recognized quality
system requirement?
2. Document control system
(a) Are all QMS documents (manuals, procedures, data sheets,
work instructions, records, and procedures) controlled, includ-
ing revision control, access control of original documents, and
control of distributed copies?
(b) Does the document control system include requirements and procedures for:
(1) Notification and preapproval of changes to controlled docu-
ments?
(2) Approval, revision, and distribution for changes to all quali-
ty documents?
(3) Ensuring current revision documents are available at the
point of use without confusion about which is the current
document?
(4) Ensuring documents remain legible and easily identifiable?
(5) Controlling documents of external origin?
(6) Controlling obsolete documents to ensure they are not used
in current production?
3. Records control system
(a) Does the organization create and maintain records to provide
evidence of conformity to the requirements (e.g., test results,
production records, or customer order files)?
(b) Does the organization have records to provide evidence the
QMS is effective (e.g., management review records, corrective
and preventive actions logs, or customer survey)?
(c) Is there a documented procedure defining the record control
mechanisms (e.g., identification and retrieval, storage and pro-
tection, or retention time and disposition)?
II. Management responsibility
A. Management commitment
1. Has the executive leadership team provided evidence of its commit-
ment to the development and maintenance of the QMS, including
items 2 through 6?
2. Has the executive leadership team communicated the importance of
meeting customer, regulatory, and legal requirements?
3. Does the executive leadership team have procedures in place to
maintain and communicate the quality policy?
4. Does the executive leadership team regularly evaluate the quality
objectives?
5. Does the executive leadership team conduct annual management
reviews of the QMS?
6. Does the executive leadership team ensure that the QMS is given
adequate resources?
B. Customer focus
1. Are policies, guidelines, and procedures established to ensure that
customer requirements and expectations are identified and then
satisfied?
2. Are policies, guidelines, and procedures established for increasing customer satisfaction?
C. Quality policy
1. Has the executive leadership team established policies, guidelines,
and procedures for establishing a quality policy that meets the needs
of the enterprise and provides for continuous improvement?
2. Has the executive leadership team deployed the quality policy
throughout the enterprise and provided training on the QMS and its
deployment for all employees?
3. Has the executive leadership team established a process for periodi-
cally reviewing the policy and revising as appropriate?
D. Quality planning
1. Has the executive leadership team established policies, guidelines,
and procedures for using policy deployment for developing and de-
ploying the quality policies? Are the plans documented?
2. Are the quality objectives consistent with the quality policy?
3. Are the quality objectives measurable?
4. Are quality objectives set for appropriate levels of the organization?
5. Has the executive leadership team established the policies, guide-
lines, and procedures to ensure availability of resources needed to
achieve the quality objectives?
E. Responsibility, authority, and communication
1. Has the executive leadership team ensured responsibilities and au-
thority are defined and communicated within the organization (or-
ganization charts, mission statements, etc.)?
2. Has the executive leadership team established policies, guidelines,
and procedures for designating authority and responsibility and pro-
viding for alternates?
3. Has the executive leadership team delegated the responsibility and
defined the authority for ensuring processes are established, imple-
mented, and maintained?
4. Has the executive leadership team delegated the responsibility and
defined the authority for reporting the status of the QMS, including
areas that need improvement?
5. Has the executive leadership team delegated the responsibility and
defined the authority for ensuring that quality issues are communicated throughout the organization?
F. Management review
1. Does the executive leadership team review the QMS at planned
intervals to ensure the effectiveness of the plan?
2. Are records of the management review maintained as quality records?
3. Are the minimum requirements for a management review described in a procedure?
4. Does the executive leadership team meet at least monthly?
5. Does the management review include:
(a) Customer feedback?
(b) Results of both internal and external audits?
(c) Action items from previous management review meetings?
(d) Corrective and preventive actions?
(e) Significant items that could affect the QMS (e.g., personnel
changes, new training requirements, or new technology)?
(f) Product and process performance metrics?
6. Is the feedback from management reviews tracked to ensure quality
is continuously improving?
7. Are outputs from the management reviews given adequate resources for corrective and preventive action?
III. Resource management
A. Does the management system require all employees who affect the
quality of product or services to be qualified or trained to ensure con-
sistent output?
B. Does the management system provide procedures to ensure employ-
ees are trained to achieve customer requirements?
C. Are there quality records identifying training requirements and train-
ing completion?
D. Does the management system provide the procedures to ensure all
employees receive quality awareness training on the importance of
quality, the quality system, and the importance of meeting the quality
objectives and the customer requirements?
E. Are records maintained to prove that all employees are either train-
ed or qualified to perform their jobs in accordance with quality
standards?
F. Does the organization maintain equipment used to control the quality
of the products and services?
G. Are there policies, guidelines, and procedures for controlling hard-
ware and software used in the production of product?
H. Are there policies, guidelines, and procedures to ensure the environ-
ments are maintained to provide consistent quality?
IV. Product and service realization. The design, development, and delivery
of specific products or services to the marketplace is referred to as prod-
uct or service realization or commercialization. Success depends on a
QMS that includes the following.
A. Planning
1. Is the planning of the product or service realization process inte-
grated into the entire quality system?
2. Are the following being reviewed during the product planning?
(a) Quality objective, product specification
(b) The need for additional processes, resources, and documentation
(c) Requirements for verification, validation, monitoring, inspection,
and test requirements
(d) Records from all stages of the development process
B. Listening to the voice of the customer
1. Does the QMS establish requirements and procedures for determin-
ing:
(a) Requirements specified by the customer, including delivery and
postdelivery activities (training and installation, support, etc.)?
(b) Requirements not specified by the customer but necessary to use
the product for its intended purpose (proper application, safe op-
eration, integration with other products)?
(c) Regulatory and statutory requirements related to the product?
(d) Any additional requirements determined by the organization
(limitations, warranty, special requirements)?
2. Review of the requirements related to the product
(a) Are the requirements reviewed prior to commitment to the cus-
tomer (acceptance of contracts or change orders)?
(b) Does the company have the following information prior to com-
mitment?
(1) Full product specification
(2) Contract or order requirement differences from previous orders
(3) Ability of the organization to meet the customer require-
ments
(c) Are the order review and actions from the order review
recorded and maintained as a quality record?
(d) Is the critical order information 100 percent complete prior to
acceptance?
(e) Are changes to the product requirement (including change
orders) communicated to all appropriate levels of the
organization?
3. Customer communication
(a) What system does the company use for customer communica-
tion of:
(1) Product specifications and information (salespeople, web site, spec sheets)?
(2) Inquiries, contracts, change orders, and new orders?
(3) Customer feedback, including customer complaints (e.g.,
corrective and preventive actions)?
C. Design and development
1. Are product design and development activities planned and
controlled?
2. Is the I2DOV process used to develop new technologies?
3. Is the CDOV process used to develop new products or services?
4. Are the interfaces between stakeholders defined and managed to
ensure proper communication?
5. Does the design and development process provide for specification
revision control?
6. Are results of design reviews maintained as quality records?
7. Are the verification results maintained as quality records?
8. Are the results of validation testing maintained as quality records?
9. Are design specification changes identified and recorded?
10. Do design changes include evaluation of effects on subcomponents
and existing products in the field?
11. Are design specification changes reviewed, verified, and validated
before implementation?
12. Are design specification changes recorded and maintained as a
quality record?
D. Purchasing control
1. Are purchasing processes controlled to ensure that the purchased
products and services conform to the specifications or
requirements?
2. Are suppliers selected and evaluated based on their ability to supply
conforming product?
3. How does the organization establish criteria for selection and evalu-
ation of suppliers?
4. Are suppliers periodically reevaluated to determine their compli-
ance with requirements?
5. Are evaluations maintained as quality records?
6. Are copies of corrective actions sent to suppliers maintained as
quality records?
7. Does purchasing information describe requirements for approval of
products, procedures, and equipment?
8. Does purchasing information describe the requirements for a quali-
ty management system?
9. Are inspection and incoming testing requirements specified to ensure that purchased parts conform to the specification?
E. Control of operations and processes
1. Does the QMS establish requirements to ensure the availability of
product requirements to production and service personnel?
2. Does the QMS include written work instructions?
3. Are monitoring and measuring equipment available to production
and service personnel?
4. Are processes validated against the quality plan?
5. Are criteria for review and approval of production and service pro-
cesses defined?
6. Is there an established process to ensure personnel are trained and
qualified for their processes?
7. Are record requirements established as part of the quality plan?
8. Does the QMS provide the mechanisms to ensure each product
is uniquely identified throughout the product commercialization
process?
9. Is the product status (conforming/nonconforming) identified with
respect to measurement and monitoring requirements?
10. Are traceability requirements defined, and does the QMS provide
the mechanisms to ensure that unique identification, traceability,
and control information is collected and maintained as part of the
quality records?
11. When there is a requirement to receive and store customer property,
is there a system for identifying it as customer property, protecting
it, maintaining a chain of custody, and recording history of the
custody?
12. Does the QMS provide for the preservation of products, compo-
nents, and subassemblies during handling, production, packaging,
storage, and shipment?
F. Control of measuring and monitoring equipment
1. Has the executive leadership team established policies, guidelines,
and procedures for determining the monitoring and measurement
requirements needed to ensure conformity of the product to the
requirements?
2. Does the QMS establish processes to ensure accurate and repeatable
monitoring and measuring during all operations?
3. Has a metrology system been established that provides for:
(a) Calibration or verification of measurement and monitoring
equipment at specified intervals prior to use?
(b) Calibration or verification of measurement and monitoring
equipment traceable to international or national standards?
(c) Identification of calibration status on measurement and monitoring equipment?
(d) Safeguarding measurement and monitoring equipment against
improper adjustment that could invalidate the measurements?
(e) Protection of measurement and monitoring equipment against
damage and deterioration during use, maintenance, and storage?
(f) Maintaining calibration and verification records as part of the
quality records?
4. Does the QMS provide for recording and maintaining a quality record of the disposition of product that was later found, after delivery, to be invalid due to measurement or monitoring errors?
V. Measurement, analysis, and improvement. Process management begins
with the leadership team establishing policies, guidelines, goals, and
objectives for effective management of its operations. These are trans-
formed into requirements in the management system to (1) ensure
conformity of the product to customer requirements, (2) control nonconforming product, and (3) implement Lean Six Sigma tools, techniques, and processes to manage the operations.
A. Customer satisfaction. Does the QMS include requirements and pro-
cesses for monitoring and evaluating customers’ perceptions and the
ability of the organization to meet customer requirements?
B. Internal audits. Does the QMS include systematic, periodic internal
audits of the QMS to evaluate the effectiveness of the deployment of
the quality policy?
1. Does the internal audit process ensure selecting qualified, impartial
auditors?
2. Does the internal audit process include requirements for verifying,
tracking, and recording audit nonconformities and their causes?
3. Does the internal audit process require follow-up audits, including
verification of the actions taken and reporting of the results?
C. Monitoring and measurement of processes. Has the enterprise leader-
ship team established policies, guidelines, goals, and objectives for
implementing Lean Six Sigma management of processes? Have these
requirements been transformed into management system (MS)
processes?
1. Does the MS provide for developing an enterprise value stream?
Does it further require level 1 and 2 value stream maps be used to
manage the key processes of the enterprise?
2. Does the MS provide for incorporating customer requirements in
process management requirements?
3. Does the MS provide for identifying process control points, estab-
lishing control metrics, implementing statistical process control,
and process control plans?
D. Monitoring and measuring products. Has the enterprise leadership team established policies, guidelines, goals, and objectives for moni-
toring, measuring, and evaluating product characteristics to confirm
that products meet requirements?
1. Does the QMS provide for identifying control points, establishing
product control metrics, implementing statistical process control,
and product control plans?
2. Does the QMS include procedures for identifying nonconforman-
ces, recording corrective action, and ensuring all planned corrective
actions are completed prior to release of product?
3. Does the QMS include procedures for documenting conformity
with the acceptance criteria and maintaining this information as a
quality record?
E. Control of nonconforming product. Does the QMS provide procedures
to identify nonconforming material or products, determine corrective
action, establish disposition, and record actions?
1. Does the procedure for nonconforming product have sufficient con-
trols to prevent unintended use or delivery?
2. Does the procedure for nonconforming product specifically estab-
lish authority and responsibility for processing of nonconforming
materials?
3. Does the procedure for handling nonconforming product establish
requirements and processes for maintaining records identifying the
nonconformities and any subsequent actions taken to use the prod-
uct with concessions?
4. Does the QMS require reverification or retesting of reworked or
repaired nonconforming product?
F. Implementation of continuous measurable improvement
1. Has the Enterprise Excellence infrastructure been established?
2. Has the enterprise executive leadership team established policies,
guidelines, goals, and objectives for deploying Lean Six Sigma?
3. Has the enterprise value stream map been deployed throughout the
organization?
4. Does the management system provide for the results of audits, man-
agement reviews, corrective and preventive actions, and analysis of
data to be used for continuous improvement of the QMS, processes,
and products?
5. Has the enterprise executive leadership team established policies, guidelines, goals, and objectives for detection of nonconformities, cause-and-effect analysis, and preventive action, and have these been integrated into the QMS and transformed into processes and procedures?
The answers to the management system assessment will aid in the develop-
ment of the actions and projects necessary to ensure your management system is
at the required level for achieving Enterprise Excellence. The answers to the
questions in the management system assessment will identify strong areas in
your system. These need to be continued. It will also identify weak areas, and
you will need to develop tactics and plans to strengthen these areas. For those
areas in which required elements are missing, you will need to develop plans
for implementing the necessary processes and techniques. The sum of these
plans and actions are the gap analysis between a good management system and
your current system. You may need to include additional details and require-
ments depending on your business, industry, technology, or culture.

Enterprise Excellence Maturity Assessment


The Enterprise Excellence maturity assessment has been developed for evaluat-
ing your progress in achieving the desired state. It is intended to be used to
develop a baseline status for your organization. The baseline assessment estab-
lishes the readiness level of your organization to deploy Enterprise Excellence.
After the baseline is established, semiannual assessments need to be performed to
measure and evaluate progress. At the end of the fourth assessment, if you deter-
mine the deployment progress is on track with the plan, you may want to reduce
the evaluation cycle to annual and synchronize it with the annual management
system assessment.
The Enterprise Excellence maturity assessment needs to be performed by a
team of three to six people. The team needs to be led by a senior member of the
enterprise senior review group (ESRG). One of the members needs to be a Lean
Six Sigma Master Black Belt experienced with Enterprise Excellence deploy-
ment and assessments. Since much of the data will be anecdotal and the scores
therefore subjective, it is recommended that the Master Black Belt be from out-
side the enterprise to serve as an objective adviser. The additional members
need to be senior members of the leadership team, experienced and knowledge-
able about the enterprise value stream. The appraisal team needs to conduct a
measurement systems evaluation of its process. This is necessary to maximize
consistency, accuracy, and repeatability. This will also aid in reducing, as much
as possible, bias in the appraisers due to the evolving state of the deployment
over time.
The Enterprise Excellence maturity assessment is divided into five major
categories:

1. Leadership, vision, alignment, and deployment


2. Change management
3. Infrastructure
4. Corporate culture/workplace
5. Process metrics
TABLE 3.1 Assessment Categories and Corresponding Metrics

Leadership, vision, alignment, and deployment: mission/vision; alignment; scope; value assessment.
Change management: leadership; performance; project selection; self-sufficiency; management system.
Infrastructure: personnel; information systems; communication and promotion; customers and suppliers; training; expert support.
Culture/workplace: idea generation; incentives and recognition; hiring; investment.
Process metrics: quality; Lean; reliability; documentation; work breakdown.

Metrics and Scoring. Each individual metric in each category is scored 1, 3, or 5, and a description of the meaning of each score is provided. The evaluators score each metric as a group. A score of 1.0 generally marks the absence of the metric in the organization, or the isolated presence of the characteristic without any coordination or continuity outside the isolated organization level or function. A score of 3.0 is generally a measure of coordination: different organization levels and functions successfully coordinate their efforts such that organization and function "walls" are transparent to the metric. A score of 5.0 is considered best-in-class, and calculated scores can be measured against a score of 5.0 for this purpose. Table 3.1 displays the assessment categories and the corresponding metrics.

Leadership, Vision, Alignment, and Deployment


This category evaluates how the Enterprise Excellence strategy is developed and
deployed. (See Table 3.2.) In a mature deployment, the ESRG will stratify the
elements of the mission into goals and goal measurements (metrics), which be-
come the basis for the Enterprise Excellence deployment plan.
TABLE 3.2 Leadership, Vision, Alignment, and Deployment (each metric is scored 1, 3, or 5)

1. Mission/vision
   Score 1: Mission has not been translated into a specific Enterprise Excellence plan.
   Score 3: An Enterprise Excellence plan has been developed, but quantitative measures for monitoring performance are not in place.
   Score 5: Mission has been translated into a specific Enterprise Excellence plan with performance goals and established metrics.
2. Alignment
   Score 1: The need for improvement has been verbally given to subordinates.
   Score 3: Some levels of the enterprise have established goals and metrics traceable to the Enterprise Excellence plan.
   Score 5: Performance goals have been given to subordinates through goals and metrics appropriate for local enterprise levels.
3. Scope
   Score 1: CMI activities are one improvement activity among many.
   Score 3: Different levels are implementing improvement activities, but there is no coordination or standard methodology.
   Score 5: All improvement activities have been captured in one unified enterprise program with a common purpose using standard Lean Six Sigma methodologies.
4. Value assessment
   Score 1: The enterprise is unable to determine whether CMI efforts are achieving stated goals.
   Score 3: Some levels measure the value of their CMI activities.
   Score 5: The enterprise is able to continuously measure the value of its CMI activities.
Subtotal
The plan is deployed through policy deployment for execution. This deployment is traceable from top to bottom through written transfer mechanisms (e.g., policy de-
ployment waterfall of matrices, performance evaluations, or project
assignments). The plan needs to integrate all ongoing Enterprise Excellence
activities and resources into one unified strategy. Finally, the ESRG needs to
establish processes for evaluating its deployment performance in order to take
corrective actions as necessary.

Change Management
This category evaluates the presence of a control function to manage the de-
ployment, the flow of information to the control function, and feedback to exe-
cuting levels from the information flow. (See Table 3.3.) The control function is
the ESRG that develops the Enterprise Excellence infrastructure. The ESRG
also monitors, measures, evaluates, and reports progress. The ESRG’s ability to
acquire valid data for deployment decision-making purposes is critical, so the
presence of standardized data collection across all enterprise levels is an impor-
tant metric in this category.
The other metrics assess how the ESRG acts on data and the performance
results from that action. To begin, there must be a deployment infrastructure that
yields qualified personnel capable of initiating and obtaining results from de-
ployment and implementation projects. Then the process of data evaluation is
reviewed to see whether performance goals are being met and how progress
toward goals is being evaluated. From that evaluation, corrective actions to the
deployment process should be evident from the ESRG to executing levels. One
primary source of corrective action should be the project selection process. In
a mature deployment, a clear and timely path exists between deficiency
recognition and the selection of next projects to address the deficiency.

Infrastructure
This assesses the ability of the enterprise to properly support deployment activi-
ties and begins with voice of the customer and supplier inputs. (See Table 3.4.)
The most successful deployments begin with an assessment and understanding
of customer needs, which in turn sets the direction of the goals and objectives of
the enterprise. Suppliers are an obvious input to most processes, and while their
needs should be understood, their goals should be aligned with customer goals
as well.
Internally, areas of weakness need to be discovered. Early in a deployment,
qualified personnel are usually in short supply, so the presence of outside assess-
ment personnel may be necessary to direct early efforts to where they’ll do the
most good. Such assessment requires data collection, analysis, and reporting so
the need for enterprise-wide data gathering systems is part of this category. This
metric differs from the management system metric in the change management category: the management system metric is concerned with normalizing data moving up enterprise levels, whereas the information systems metric concerns the means to gather data. Data gathering should be a requirement of the enterprise's quality management system.
TABLE 3.3 Change Management (each metric is scored 1, 3, or 5)

5. Leadership
   Score 1: Local efforts at some enterprise levels.
   Score 3: Personnel assigned at the enterprise level on an as-required basis in an attempt to coordinate activities. Some relationship exists between the executive leadership team's actions and roll-up data from lower enterprise levels.
   Score 5: Enterprise senior review group is in place and full-time Champions are designated to lead EE efforts. The EE infrastructure is defined and documented.
6. Performance
   Score 1: Local efforts at some enterprise levels.
   Score 3: Personnel assigned at the enterprise level on an as-required basis report the status and value of CMI activities.
   Score 5: Documented management review system relating total enterprise performance to the mission.
7. Project selection
   Score 1: Local efforts at some enterprise levels.
   Score 3: Roll-up enterprise levels coordinate project selection and resource distribution for selected projects.
   Score 5: Documented project selection process aligning project selection and resource distribution to the mission. Business cases exist for all projects.
8. Self-sufficiency
   Score 1: CMI activities are minimal due to the lack of qualified (trained) personnel.
   Score 3: Qualified personnel exist within the enterprise, but CMI activities are ancillary duties. There are insufficient qualified resources to meet the demand.
   Score 5: The enterprise is able to implement its Enterprise Excellence plan with qualified in-house personnel.
9. Management system
   Score 1: Processes to manage change are largely verbal in nature and are modified as necessary to fit the situation.
   Score 3: Efforts at different levels to document and follow the processes developed at that level. Processes differ between levels.
   Score 5: An active management system with a QMS has been established. Processes have been documented and standardized.
Subtotal
TABLE 3.4 Infrastructure (each metric is scored 1, 3, or 5)

10. Personnel
   Score 1: Each enterprise level uses its own criteria to nominate personnel for training, development, or participation in CMI activities.
   Score 3: Attempts have been made to standardize the process for nominating individuals for training, development, or participation in CMI activities.
   Score 5: Standardized criteria and process established for training, development, and participation in EE activities. Deployment core teams and support offices are adequately staffed.
11. Information systems
   Score 1: Local efforts at some enterprise levels.
   Score 3: Some systems are common at roll-up enterprise levels.
   Score 5: Common communication systems for EE efforts and for collecting and analyzing data. A documented QMS exists with regular management review.
12. Communication and promotion
   Score 1: Local efforts at some enterprise levels.
   Score 3: Efforts have been made at the enterprise level to announce significant developments and activities across the enterprise on an ad hoc basis.
   Score 5: Standardized, regularly published EE promotion. Accomplishments reported in a timely manner. Roll-up maturity assessment used to obtain feedback on EE deployment progress.
13. Customers and suppliers
   Score 1: Local efforts at some enterprise levels.
   Score 3: Some major customers' or suppliers' inputs have been incorporated into the Enterprise Excellence plan at the enterprise level.
   Score 5: All suppliers and customers have been aligned with enterprise-level goals and their input incorporated into the Enterprise Excellence plan. DFLSS is used for developing products, services, and processes.
14. Training
   Score 1: Local efforts at some enterprise levels.
   Score 3: Some enterprise levels tie training to levels of responsibility and authority.
   Score 5: Documented training plan tied to the EE deployment plan, with implementation records tied to personnel advancement criteria.
15. Expert support
   Score 1: Rapid review team in place to assist in enterprise assessments, both internally and crossing functional areas.
   Score 3: Formal teams are brought in to assess complex processes crossing functional lines.
   Score 5: Qualified internal resources exist for assessments, with no outside expertise necessary.
Subtotal
Other metrics in this category involve the written policies concerning the sys-
tematic training and selection of qualified Lean Six Sigma Master Black Belts,
Black Belts, and Green Belts. The ESRG needs to establish goals to ensure per-
sonnel nominated for training, qualification, and work meet established require-
ments and receive the proper training. The enterprise can support only a limited
number of projects at one time using qualified Black Belts and Green Belts to
ensure coverage at all levels and functional areas.
Another metric in this category is a communication program for Enterprise
Excellence efforts. In a mature deployment, the Enterprise Excellence culture is
always expanding due to the communication of implementation successes.

Culture/Workplace
Enterprise Excellence deployment is a success when the culture of the organiza-
tion becomes one of continuous measurable improvement. (See Table 3.5.) This
is not normally achieved for the first few years after deployment begins. For
this reason, this category is weighted less heavily in the first deployment
assessments.
In the early stages of deployment, the ESRG may choose to hire consultants
to provide support as they develop plans to establish internal resources of Enter-
prise Excellence subject matter experts. New employees with Lean Six Sigma credentials can also be brought in early to support the deployment. In the early
stages of the deployment, as the ESRG establishes the Enterprise Excellence
infrastructure, the ESRG needs to establish position descriptions, career paths,
and individual personnel development plans to select, train, and develop internal
Enterprise Excellence subject matter experts.
The success of the Enterprise Excellence culture change to one of continuous
measurable improvement can often be seen in the personnel generating
improvement ideas. Initially, management normally dictates where improve-
ment projects are directed, since they should best see areas of weakness in their
management roles. In the early stages of the deployment, improvement ideas
flow from the enterprise value stream and are deployed through lower-level val-
ue stream analysis. As the deployment matures, the generation of improvement
ideas will spread horizontally and vertically throughout the enterprise, pushing
improvement ideas upward for review and approval. As the culture change takes hold over time, continuous measurable improvement becomes self-sustaining.

Process Metrics
Metrics at the process level are often easy to implement since there are many common ones, but they can be difficult to implement correctly. (See Table 3.6.) Defect tracking, cycle time, mean time between failures, and so forth are well known, and many will certainly exist as a result of the Enterprise Excellence deployment. The difficulty lies in knowing where to implement such metrics.
TABLE 3.5 Culture/Workplace (each metric is scored 1, 3, or 5)

16. Idea generation
   Score 1: Potential projects and CMI recommendations are developed on an ad hoc basis.
   Score 3: A queue of potential projects and EE deployment improvement recommendations exists, generated largely at the enterprise level.
   Score 5: A queue of potential projects exists at all levels. Policy deployment is used to ensure activities are collaborative and supportive of enterprise goals.
17. Incentives/recognition
   Score 1: None.
   Score 3: Efforts have been made at the enterprise level to announce significant performance by teams and individuals across the enterprise.
   Score 5: Documented plan ensuring incentives for individual/team performance and recognition of efforts across the enterprise.
18. Hiring
   Score 1: CMI background is not part of hiring criteria.
   Score 3: CMI background is used as a factor in the hiring of new personnel but is not documented in job descriptions.
   Score 5: Established hiring plan. Position descriptions incorporate CMI skills.
19. Investment
   Score 1: No change in requested improvement-related funding traceable to the results of CMI activities.
   Score 3: Some reduction in requested improvement-related funding traceable to the results of CMI activities.
   Score 5: Improvement results are quantified and savings are reinvested in improvement activities.
Subtotal

TABLE 3.6 Process Metrics (each metric is scored 1, 3, or 5)

20. Quality
   Score 1: Some processes are documented. Some QMS control points and quality metrics are defined.
   Score 3: A documented, centralized QMS is in place, but implementation is set at the local level.
   Score 5: QMS is implemented. All processes are documented, control points identified, metrics monitored, and process performance evaluated, reported, and improved. DFLSS is used for developing new products, services, and processes.
21. Lean
   Score 1: No data collection.
   Score 3: Different levels use Lean metrics to determine the efficiency of their operations. Intermittent implementation of select Lean tools.
   Score 5: Process value stream maps exist for the enterprise and all critical functions. Systematic Lean implementation is part of the deployment plan. DFLSS is used for developing new products, services, and processes.
22. Reliability
   Score 1: No data being taken to help prevent asset breakdown.
   Score 3: Local efforts to collect, review, and act on data related to the readiness and efficiency of assets.
   Score 5: Documented enterprise system to collect data, review it, and implement corrective actions related to the readiness and effectiveness of assets. DFLSS is used for developing new products, services, and processes.
23. Documentation
   Score 1: None.
   Score 3: Some enterprise levels with documentation systems developed in-house.
   Score 5: Documentation requirements established in the QMS. All processes mapped and written procedures established.
24. Work breakdown
   Score 1: None.
   Score 3: Some levels have a WBS system or equivalent in place.
   Score 5: WBS system or equivalent used at all enterprise levels, with work organized into documented finite work elements with relationships described.
Subtotal

Metrics must align across all enterprise levels such that their roll-up provides an accurate assessment of progress toward the enterprise goals. Inappropriate or excessive measurements are wasteful: they reduce efficiency and fail to provide guidance to management. For this reason, process-level metrics are weighted less heavily in the early stages of a deployment, though they become critical as the deployment matures.

Assessment Scores
1. Total assessment score: This is the sum of the category scores and will
range from 24 to 120.
2. Category average score: This is the category subtotal divided by the num-
ber of metrics. This will result in a category assessment between 1 and 5.
3. Plotting the category average scores on a radar chart (Figure 3.2) will present a visual representation of the readiness of the organization by category and depict the areas requiring action to improve the maturity of the deployment (a minimal scoring and plotting sketch follows this list).
4. Plotting the individual metrics on a radar chart (Figure 3.3) will present a
more detailed visual representation of the readiness of the organization
and the maturity of the Enterprise Excellence deployment.
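The arithmetic behind the scoring can be automated for repeated assessments. The following is a minimal sketch (ours, not the book's): Python with matplotlib is assumed, the category names follow Table 3.1, and the 1-3-5 scores shown are invented purely for illustration.

```python
# Minimal sketch (not from the book): tally the 1-3-5 metric scores, compute the
# category averages, and draw a category radar chart like Figure 3.2.
# All scores below are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

scores = {
    "Leadership, vision, alignment, and deployment": [1, 3, 1, 1],
    "Change management": [3, 3, 1, 3, 3],
    "Infrastructure": [3, 3, 1, 3, 3, 1],
    "Culture/workplace": [1, 1, 3, 1],
    "Process metrics": [3, 3, 3, 1, 3],
}

total = sum(sum(v) for v in scores.values())                 # total assessment score, 24 to 120
averages = {k: sum(v) / len(v) for k, v in scores.items()}   # category averages, 1 to 5

labels = list(averages)
values = list(averages.values())
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values += values[:1]                                         # close the polygon
angles += angles[:1]

ax = plt.subplot(111, polar=True)
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels, fontsize=7)
ax.set_ylim(0, 5)                                            # 5.0 = best-in-class reference
plt.title(f"Maturity assessment: total score {total} of 120")
plt.show()
```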

In the example in Figure 3.2, the leadership, vision, alignment, and deployment category
appears to need the most improvement. It is followed by culture/workplace and
then infrastructure. An examination of the radar chart for the individual metrics
(Figure 3.3) will reveal the details.
An examination of the metrics chart reveals that metrics 3 (scope) and 4
(value assessment) of the leadership, vision, and alignment category have the
lowest readiness score for that category. This indicates that EE is competing
with other initiatives and the organization hasn’t defined and established a

FIGURE 3.2 Example of Category Assessment Radar Chart.


c03_1 10/09/2008 104

104 ENTERPRISE EXCELLENCE DEPLOYMENT

FIGURE 3.3 Example of Enterprise Excellence Maturity Assessment Radar Chart.

process for capturing the value of the EE activities and projects. This helps iden-
tify gaps for inclusion in the EE deployment plan.

ENTERPRISE EXCELLENCE DEPLOYMENT PLANNING

The enterprise senior review group (ESRG) leads the deployment for the organ-
ization. The deployment requires a detailed plan of action with assignments and
scheduled completion dates. After the plan has been established and implemen-
tation has begun, the ESRG needs to provide regular, periodic monitoring and
evaluation of progress to plan. The ESRG is also responsible for developing and
implementing recovery plans when performance deviates from the path to the
desired goals.
In the early stages (approximately the first six months) of Enterprise Excellence deployment, the ESRG needs to meet two to three days per month. These sessions need to be working sessions that include training on the Enterprise Excellence methodology and processes and on leading the deployment. During these first sessions, the ESRG will perform the following functions:

 Review the ESRG roles and responsibilities and define the membership of
the ESRG.
 Review the assessment report, determine actions, assign responsibility, pri-
oritize actions, and begin implementation.
 Develop the enterprise-level process maps.
 Evolve the enterprise-level process maps to enterprise value stream maps.
 Define and establish, or improve, existing infrastructure for deploying and
maintaining Enterprise Excellence.
 Define enterprise-level actions and projects.
 Prioritize actions and projects.

 Identify critical success factors for the deployment.


 Define and prioritize deployment goals and objectives.
 Define, prioritize, assign, and monitor directed projects and tasks for
deploying Enterprise Excellence.
 Perform a stakeholder analysis for the deployment plan.
 Define the Enterprise Excellence deployment strategy with a plan of action
and milestones (POA&M).

After the initial development and deployment sessions, the ESRG will con-
tinue to meet regularly and lead the deployment, but it can reduce the meetings
to one two-hour session per month. During these ongoing sessions the ESRG
will:
 Monitor, evaluate, and report status of the deployment
 Roll up subordinate programs
 Review and approve enterprise-level projects
 Continue to perform periodic reviews of the enterprise value stream map
for improvement opportunities
 Reprioritize actions, as required

The report of the management system and Enterprise Excellence maturity assessments will provide findings, conclusions, and recommendations. The rec-
ommendations will include actions that can be accomplished immediately
(‘‘just-do-its’’), requirements for revising or developing policies and guidelines,
or projects to implement or improve systems or processes. The ESRG needs to
review the assessment report, classify the recommendations, and determine ap-
propriate actions. Before initiating action, the ESRG needs to develop the enter-
prise process map and enterprise value stream map.

Process Mapping
Process mapping is a systematic approach to documenting the steps and activities required to complete a task. Process maps are diagrams that show, in varying levels of detail, what an organization does and how it delivers services.
Process maps are graphic representations of:

 What an organization does


 How it delivers services
 How it delivers products

Process mapping also identifies the major processes in place, the key activi-
ties that make up each process, the sequencing of those activities, the inputs and
resources required, and the outputs produced by each activity. Process maps are
a way of ensuring that the activities making up a particular process are properly
understood and properly managed in order to deliver appropriate products and services.
A process is a series of repetitive activities or steps used to transform input(s)
into output(s). It is a transformation of inputs such as people, materials, equip-
ment, methods, and environment into finished products through a series of
value-added work activities. The absence of clearly defined processes makes
any activity subject to variation and thereby subject to ineffectiveness and in-
efficiencies. Effective processes are understood and documented. Four control-
lable factors are key to any process: quality, cost, schedule, and risk (Q$SR).
Each of these must be described, quantified, and analyzed as part of the process.
Process maps reveal hidden activities, identify opportunities for improvement, and show how to improve process layout or flow.
Levels of Process Mapping
Process maps help us understand, manage, and improve processes and can be
developed for various levels in the process. Each level will depend on your ser-
vice or product and the approach to the improvement process. Each level of
system complexity adds an analytical burden in the amount and type of data
taken at the various points in the process. The purpose of the process map is to
describe the process properly so it can be quantified and analyzed.
At the simplest level, process maps help ensure that you thoroughly under-
stand your own process: how it works, who does the work, what inputs and re-
sources are required, what outputs are produced, and the constraints under
which work is completed.
Process mapping is accomplished at four levels:
Level 0: Enterprise level
Level 1: Organizational/functional level
Level 2: Operations level
Level 3: Work activities level

Level 0: The Enterprise Level. A level 0 process map represents the "executive management" view of the process. It defines the enterprise and identifies
measures of performance. These metrics include:
 Market share
 Profit and loss
 Mission achievement
 Customer satisfaction
Each element is numbered, and those numbers are carried over to the lower-
level process mapping.
To create a level 0 process map, as shown in Figures 3.4 to 3.6:
1. Establish mission and requirements of the enterprise.
2. Define the functions necessary to achieve the requirements.

FIGURE 3.4 Level 0 Process Map.

FIGURE 3.5 Enterprise Process Map.

FIGURE 3.6 Level 0 Process Map.



3. Define the processes necessary for the functions.


4. Map the enterprise.

When mapping the enterprise, number each element. The numbering used
here will carry over to lower-level process maps.
Level 1 is the organizational/functional level of the process work centers
within the enterprise. At this level, we are defining the process work centers of
the overall process. Level 1 describes the "operational management" view of
the process.
Level 1 also defines organizational elements (e.g., departments, programs,
and functions). Each element (work center) is numbered, and the numbers are
later carried over to the lower-level process mapping.
 At this point you will define the management metrics and the process to be
used.
 Process control and control metrics will be at a lower level.
 Process footnotes are used at all levels of process mapping.
 The footnotes will help identify the process requirements, system require-
ments, stakeholders, and management metrics.
Process requirements, system requirements, stakeholders, and management metrics are identified on the process map; process control and control metrics will be defined at a lower level.
Level 2 process maps are at the operations level and normally consist of work
activities. At the second level, there is a mixture of functional elements and
work activities. The measures at this level are the process control metrics for
the operations-level work activities.
Process maps can be developed for various levels in the process. Each level of
system complexity adds an analytical burden in the amount and type of data taken
at the various points in the process. Therefore it is important to map the process to
the level that enables management of the process to meet the requirements.
Remember, the purpose of the process map is to describe the process pro-
perly so it can be quantified and analyzed.
Level 3 process maps expose deeper processes from within a level 2 function
or activity. At level 3, all elements are work activities. Therefore, only work
instructions and control metrics are shown.
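Because the numbering assigned at level 0 must carry down through every lower-level map, it can help to treat the map set as a simple hierarchy. The sketch below is ours, not the book's; the element names and metrics are invented, and only the numbering-and-levels idea is taken from the text.

```python
# Minimal sketch (assumed structure, not from the book): a four-level process-map
# hierarchy in which each element carries its level 0 numbering as a prefix
# (e.g., 2, 2.1, 2.1.3), so any work activity traces back to an enterprise function.
from dataclasses import dataclass, field

@dataclass
class ProcessElement:
    number: str                      # hierarchical number carried down from level 0
    name: str
    level: int                       # 0 enterprise, 1 organization/function, 2 operations, 3 work activity
    metrics: list[str] = field(default_factory=list)
    children: list["ProcessElement"] = field(default_factory=list)

enterprise = ProcessElement("0", "Enterprise", 0,
                            metrics=["market share", "profit and loss",
                                     "mission achievement", "customer satisfaction"])
operations = ProcessElement("2", "Operations", 1, metrics=["management metrics"])
assembly = ProcessElement("2.1", "Final assembly", 2, metrics=["process control metrics"])
torque = ProcessElement("2.1.3", "Torque fasteners", 3, metrics=["control metrics", "work instructions"])

assembly.children.append(torque)
operations.children.append(assembly)
enterprise.children.append(operations)

def trace(element: ProcessElement) -> None:
    """Walk the map set so the numbering trail from level 0 downward is visible."""
    print(f"Level {element.level}: {element.number} {element.name} ({', '.join(element.metrics)})")
    for child in element.children:
        trace(child)

trace(enterprise)
```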

Enterprise Excellence Planning Toolkit


Enterprise Excellence planning, deployment, and implementation is accom-
plished with the aid of processes, tools, and techniques of modern process and
product improvement. Success also depends on the application of sound project
management. Both aspects of planning, deployment, and implementation re-
quire the use of tools for collecting and analyzing communication data. The
Enterprise Excellence planning toolkit includes the tools most often used in the

collection and analysis of communication data. Communication data can be gathered from a variety of sources. The most common sources are:
Conversation. Any verbal exchange of ideas, opinions or information.
Surveys. Any detailed questionnaire, inspection, or investigation that gathers
data or opinions considered to be representative of a population.
Written reports. Any formal document describing the findings, conclusions,
and recommendations of an individual or group.
Brainstorming. A method of problem solving in which all members of a
group spontaneously contribute ideas (usually by way of free association).
The ideas are collected, but are not immediately evaluated. Instead, all dis-
cussion is focused on clarification and drawing out additional ideas. After
all possible ideas have been exhausted, associations are formed from the
responses through the use of an affinity diagram and further evaluated via
an interrelationship digraph.

Affinity Diagram
The result of a brainstorming session may be a large set of data. Initially, the
relationships among these elements may not be clear. The first task is to distill
the data into key ideas or common themes. The affinity diagram is a very effec-
tive tool for achieving this result. It organizes language data into groupings and
determines the key ideas or common themes. The results can then be used for
further analysis in the planning or problem solving process.

Uses for the Affinity Diagram


 Finding a starting point for promoting new policies by creating a consensus
among the group or team
 Aiding in the development and improvement of systems, processes, or
products
 Determining trends and patterns among language data
 Refining and defining language data

Developing an Affinity Diagram The development of an affinity diagram is a


creative task, requiring analysis of ideas, association of common thoughts, and
determination of patterns from large amounts of data. Although an individual
can develop an affinity diagram, group participation is often more effective be-
cause more ideas may be generated. Here are the three steps for developing an
affinity diagram:

1. Brainstorm and group ideas into columns.


2. Select titles for the groupings.
3. Refine and consolidate the groupings.

FIGURE 3.7 Randomly Arranged Ideas.

Step 1. Brainstorm and group the ideas into columns. Begin by collating
the ideas, opinions, perceptions, desires, or issues as individual data elements.
Write each one on an individual piece of paper, such as a Post-it note. (See
Figure 3.7.) Arrange the pieces of paper on a flat surface such as a wall, white-
board, or window, clustering the ideas together in logical associations.

Some people like to write tentative titles and cluster the ideas under them.
We do not recommend this, because it can stifle creativity. Instead, let the asso-
ciations and patterns drive the title.
All team members should participate during this step. However, there should
be no discussion or evaluation of the choices, and each person is allowed to
move the Post-it notes around at will. This may seem chaotic, but it is a neces-
sary part of the process. To add structure to this step, set a time limit (e.g., 15
minutes). Soon, order and agreement will come out of the seeming chaos. (See
Figure 3.8.)

Step 2. Select titles for groupings. The next step is to decide on a title for
each grouping. (See Figure 3.9.) The title needs to represent an action that re-
flects the main idea or theme of the grouping. The titles, therefore, need to be
complete sentences, stated as actions. In some instances, determining the title
requires a compromise among the ideas in the grouping. Keep in mind that, at
this point in developing the affinity diagram, the title is important because it
defines the action to be taken. Avoid evaluating the ideas in the groupings. The
next step will further clarify the issues and the titles.
Step 3. Refine and consolidate the groupings. After the groupings have ap-
propriate titles, it is time to review each item under each title to see whether it
still fits or whether it should be included under a different title. At the same
time, the titles should be reviewed to ascertain if any of the groupings can be consolidated.

FIGURE 3.8 Associations Formed in Columns.

The resulting affinity diagram will bring order to the original col-
lection of apparently unrelated ideas. (See Figure 3.10.)

Application of the affinity diagram can extend from simple personal planning
to the most complex industrial problems. A single individual or a team can use
the affinity diagram as the starting point for planning. The results of this analy-
sis become the input for the interrelationship digraph.

FIGURE 3.9 Title the Groupings.



FIGURE 3.10 Refine and Consolidate Like Columns.

Affinity Diagram—Training System Example


Provide for Identifying and Conducting Retraining
 Retraining requirements
 Periodic audits/retraining
 Audits/internal audits

Provide Traceability Online


 Training records
 Training outline
 Signed hard copies
 General scope and job-specific
 Signing responsibilities

Provide Training at all Levels


 Management support
 Training for managers
 Required for all levels
 Incentives

Provide Effective Training Methods and Materials


 Link training program to Individual Development Plan
 In-house videos

 More pictures and videos


 Fewer words
 Visual versus text-based
 Test of knowledge
 Performance appraisals
 Practice test
 Glossary of terms
 Multiple trainers
 Hands-on/simulation
 End-user input
 Third-party testing
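The grouped output of an affinity session can also be captured as simple structured data, so the group titles can later be scored against one another. A minimal sketch (ours, not the book's), using an abridged version of the training-system groupings above:

```python
# Minimal sketch (not from the book): the affinity-diagram output as a mapping
# from action-oriented group titles to the raw brainstormed ideas (abridged here).
# The group titles become the "issues" compared in the interrelationship digraph.
affinity_groups = {
    "Provide for identifying and conducting retraining": [
        "Retraining requirements", "Periodic audits/retraining", "Audits/internal audits",
    ],
    "Provide traceability online": [
        "Training records", "Training outline", "Signed hard copies", "Signing responsibilities",
    ],
    "Provide training at all levels": [
        "Management support", "Training for managers", "Required for all levels", "Incentives",
    ],
    "Provide effective training methods and materials": [
        "Link training program to Individual Development Plan", "In-house videos",
        "Hands-on/simulation", "End-user input",
    ],
}

issues = list(affinity_groups)   # these become the axis entries of the L matrix
for title, ideas in affinity_groups.items():
    print(f"{title}: {len(ideas)} ideas")
```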

Interrelationship Digraph
The relationships among language data elements are not linear and are often
multidirectional. In other words, an idea or issue can affect more than one idea
or issue. Furthermore, the magnitude of these effects can vary. The interrelation-
ship digraph is an effective tool for understanding these relationships among ideas and issues.
The input for the interrelationship digraph is the result of an affinity diagram.
The information developed from the interrelationship digraph is used to es-
tablish priorities and to determine optimum sequencing of actions. Frequently,
teams develop an affinity diagram, skip the interrelationship digraph, and go on
to use another tool to develop their plan. This is a big mistake. The interrelation-
ship digraph always provides an important understanding about data you are
analyzing.
There are three methods for designing the interrelationship digraph. The
original method is called the arrow method. The second method is the matrix
method. The third method, which we prefer, is called the J-F matrix method.
This is a cross between the matrix method and the arrow method. Only the J-F
matrix method will be presented.

J-F Matrix. The J-F matrix method is a cross between the original matrix method and the prioritization matrix. It is similar to the matrix method, but the symbols are different and the interrelationships are summed along both axes. The J-F matrix method consists of the following four steps.

Step 1. Develop L matrix of issues. The first step in developing the interrela-
tionship digraph using the J-F matrix method is to develop an L matrix of the
issues. Enter each issue on the horizontal and vertical axes. Add a total column
and a total row to the matrix, as shown in Figure 3.11.

Step 2. Determine causal relationships. The second step in developing the matrix is to determine the causal relationships between each pair of issues.
FIGURE 3.11 L Matrix for Interrelationship Digraph. [Matrix not reproduced: under the goal "Design and Develop Training System," the 13 training-system issues are entered on both the horizontal and vertical axes, with an empty Total column and Total row.]


Take each issue on the vertical axis and compare it to each of the other issues on the
horizontal axis. For this method, the question is: "Does the vertical issue depend on, or is it caused by, the horizontal issue?" Note that the question needs
to be worded the same way each time.

For this method we evaluate the extent of each causal or dependency rela-
tionship: strong, medium, weak, or none. Numeric values are assigned to each
of the attributes. Figure 3.12 is an example of a matrix with numeric values
added.

Step 3. Sum the interrelationships. After determining all the relation-


ships, score them in both the vertical and horizontal axes. Place the totals in
the appropriate column or row. Figure 3.13 shows the matrix with the totals
added.
Step 4. Set priorities for the issues. Review the results of step 3. The issues
having the largest sum totals have the greatest impact on the other issues. In the
matrix shown in Figure 3.14, this corresponds to the column total at the bottom
of the matrix—these are the ‘‘critical few’’ issues. Solving these problems, im-
plementing these actions, or providing these services will have the greatest in-
fluence on the problem or customer requirement.

The totals in the column on the right side of the matrix reflect issues that are affected by the other issues. In this case, the high total of 63 indicates the issue that is most affected by other issues. The next highest score, 61, corresponds to the next most affected issue. The bottom-row totals indicate the extent to which each issue affects the others; in this example, the issue "establish a formalized training system," with a corresponding value of 69, affects the most other issues under evaluation. Remember, for any digraph, the question is: Does the vertical issue depend on, or is it caused by, the horizontal issue?
You can use the insight provided by this evaluation to prioritize actions or to
determine the issues necessary for further planning. It is always valuable to per-
form this step even if all of the issues are to be acted on. The resulting under-
standing is always of value.
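The J-F matrix arithmetic itself is easy to automate once the issues and relationship scores are in hand. The following is a minimal sketch (ours, not the book's); the issue list is shortened and the 1-3-9 scores are invented for illustration.

```python
# Minimal sketch (not from the book): sum a J-F interrelationship matrix along both
# axes. rel[i][j] answers "does issue i depend on, or is it caused by, issue j?" on
# the 1-3-9 scale (0 = no relationship). Issues and scores are illustrative only.
issues = [
    "Provide for identifying and conducting retraining",
    "Provide effective training methods and materials",
    "Establish a formalized training system",
    "Provide traceability online",
]
rel = [
    [0, 3, 9, 0],
    [9, 0, 9, 1],
    [3, 9, 0, 3],
    [0, 0, 9, 0],
]

row_totals = [sum(row) for row in rel]   # how strongly each issue is affected by the others
col_totals = [sum(rel[i][j] for i in range(len(rel))) for j in range(len(rel))]  # how strongly it affects the others

# The "critical few": issues with the largest column totals drive the most other issues.
ranked = sorted(zip(issues, col_totals, row_totals), key=lambda t: t[1], reverse=True)
for name, affects, affected_by in ranked:
    print(f"{name}: affects others = {affects}, affected by others = {affected_by}")
```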

Cause-and-Effect Analysis
After identifying a problem, it is necessary to determine its cause. The cause-
and-effect relationship is at times obscure. A considerable amount of analysis
often is required to determine the specific cause or causes affecting the problem.
Cause-and-effect analysis uses diagramming techniques to identify the rela-
tionship between an effect and its cause. Cause-and-effect diagrams are also
known as fishbone diagrams because the diagrams resemble the skeleton of a
fish (see Figure 3.15).
FIGURE 3.12 L Matrix of Issues. [Matrix not reproduced: the 13 training-system issues appear on both axes under the goal "Design and Develop Training System," with the numeric strength of each causal relationship (1, 3, or 9) entered in the body of the matrix; the Total column and row are still empty.]


FIGURE 3.13 Interrelationship Digraph with Row and Column Totals. [Full matrix not reproduced. The row totals (how strongly each issue is affected by the others) are: identifying and conducting retraining 22; effective training methods and materials 63; establish a formalized training system 60; consistent product training throughout the company 61; traceability on-line 12; training required for all levels 50; training reality 36; evaluation of training, trainee, and trainers 28; training tied to performance metrics 16; what training metrics should be used 20; cross-functional awareness training 19; include "why" purpose and where it fits in 14; workshops, on-line, and OJT 54. The column totals (how strongly each issue affects the others), left to right, are 35, 59, 69, 43, 23, 37, 20, 37, 51, 23, 16, 15, and 27, for a grand total of 455.]
FIGURE 3.14 Completed Interrelationship Digraph Matrix. [Matrix not reproduced; it shows the same matrix as Figure 3.13 with all row and column totals entered.]

FIGURE 3.15 Cause-and-effect diagram.

Step 1. Identify the problem. This step often involves the use of other statis-
tical process control tools as well as brainstorming. The result is a clear, concise
problem statement.
Step 2. Select interdisciplinary brainstorming team. Select an interdiscipli-
nary team, based on the technical, analytical, and management knowledge re-
quired to determine the causes affecting the problem.
Step 3. Draw problem box and prime arrow. The problem contains the pro-
blem statement being evaluated for cause and effect. The prime arrow functions
as the foundation for the major categories. Establish the problem box and prime
arrow. (See Figure 3.16.)
Step 4. Specify major categories. Identify the major categories of causes
contributing to the problem stated in the problem box. As shown in Figure 3.17,
the six basic categories for the primary causes of the problems are most
frequently:

FIGURE 3.16 The prime arrow and problem box.

FIGURE 3.17 Major categories of causes.



FIGURE 3.18 Corrective Action Fishbone.

 Personnel
 Method
 Materials
 Machinery
 Measurements
 Environment
Other categories may be specified based upon the needs of the analysis.
Step 5. Identify defect causes. When you have identified the major causes
contributing to the problem, then you can determine the causes related to each
of the major categories. There are three methods to approach this analysis:
 Random method
 Systematic method
 Process analysis method
For our purposes here, we focus on the random method: listing all six major categories contributing to the problem at the same time, then identifying the possible causes related to each of the categories.
Step 6. Identify corrective action. Based on (1) the cause-and-effect analysis
of the problem and (2) the determination of causes contributing to each major
category, identify corrective action. The corrective action analysis is performed
in the same manner as the cause-and-effect analysis. The cause-and-effect dia-
gram is simply reversed so that the problem box becomes the corrective action
box. (See Figure 3.18.)
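Before it is drawn, a fishbone can also be captured as plain structured data, which makes the reversal described in step 6 mechanical. A minimal sketch (ours, not the book's); the problem statement and causes are invented for illustration.

```python
# Minimal sketch (not from the book): a cause-and-effect (fishbone) structure as a
# mapping of the six standard categories to candidate causes, then reversed so the
# problem box becomes the corrective-action box. All content is illustrative.
problem = "Late delivery of customer orders"
causes = {
    "Personnel":    ["staff not trained on new packing procedure"],
    "Method":       ["no standard work for order picking"],
    "Materials":    ["packaging stock-outs"],
    "Machinery":    ["label printer downtime"],
    "Measurements": ["cycle time not tracked per order"],
    "Environment":  ["congested staging area"],
}

# Reverse the diagram: restate each cause as a candidate corrective action
# (in practice the team drafts these during the corrective action analysis).
corrective_actions = {
    category: [f"Action to address: {cause}" for cause in cause_list]
    for category, cause_list in causes.items()
}

print(f"Corrective action box: {problem}")
for category, actions in corrective_actions.items():
    print(f"  {category}: {'; '.join(actions)}")
```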
Process Decision Program Chart
The process decision program chart (PDPC) is a tool that assists in anticipating
events and in developing countermeasures for undesired occurrences. The
PDPC is similar to the tree diagram. It leads you through the identification of
the tasks and paths necessary to achieve a goal and its associated subgoals. The

FIGURE 3.19 Vertical Tree Diagram as Basis for PDPC.

PDPC then leads you to answer the questions "What could go wrong?" and "What unexpected events could occur?" Next, by providing effective contin-
gency planning, the PDPC leads to developing appropriate countermeasures.
The process for developing the process decision program chart is less struc-
tured than the tools previously discussed. Therefore, the steps listed are in-
tended only as guidelines.

Step 1. Construct a tree diagram. As originally developed, the PDPC is a


graphic chart. It begins with the development of a tree diagram of the process or
activity under evaluation. For the PDPC, we prefer to orient the tree diagram
vertically (versus horizontally). This convention is not a rigid requirement, but
it does seem to provide a logical direction for the flow of activities when devel-
oping contingencies. (See Figure 3.19.)
Step 2. Answer key questions. At each branch of the tree diagram, ask the
questions:
 "What can go wrong at this point?"
 "What unexpected events could occur?"
The answers to the first question are documented on the chart. The alternative
paths are added to the chart as well. (See Figure 3.20.)
Step 3. Develop and select countermeasures. Countermeasures are devel-
oped for each action that could go wrong. Use any of the following tools:
 Affinity diagram
 Interrelationship digraph
 Tree diagram

FIGURE 3.20 Potential Problems are Marked on the PDPC.

Annotate each countermeasure below the potential problems. Then evaluate each countermeasure for feasibility, cost impact, quality impact, and schedule
impact in order to make an informed decision when selecting the countermeas-
ures to implement. As you evaluate the countermeasures, mark them to indicate
whether they are to be implemented or not.

Matrix Diagram
The matrix diagram is a tool for organizing language data (ideas, opinions, per-
ceptions, desires, and issues) so that they can be compared to one another. The
procedure is to organize the data on a vertical and a horizontal axis, examine the
connecting points, and graphically display the relationships. The matrix dia-
gram reveals the relationships among ideas and visually demonstrates the influ-
ence each element has on every other element.
Matrices can be two-dimensional or three-dimensional. A 2-D matrix is in
the shape of an L or a T. A 3-D matrix is in the shape of an X, Y, or C. The L
matrix is used for two sets of variables, the T, Y, and C matrices for three sets of
variables, and the X matrix is used for four sets of variables.

Constructing a Matrix Diagram


Step 1. Select the matrix elements. The matrix elements fall into categories,
which are sets of data. You can derive these elements from a brainstorming ses-
sion, affinity diagrams, interrelationship digraphs, or tree diagrams.
Step 2. Select the matrix format. As stated previously, the matrix format
depends on the number of sets of data to be analyzed. The most common for-
mats are the two-dimensional L and T matrices. (See Figure 3.21.)

FIGURE 3.21 Two-dimensional Matrices.

Step 3. Complete the matrix headings. After the language data is collected,
sorted, and divided into sets, and the matrix format is selected, fill in the head-
ings of the matrix.
Step 4. Determine relationships or responsibilities. Examine each of the in-
terconnecting nodes in the matrix and determine if there is a relationship. As in
the J-F matrix method for developing an interrelationship digraph, evaluate the
relationships and mark the matrix accordingly. (See Figure 3.22.) At this point,
sum the rows and columns and interpret the matrix. Again, use the 1, 3, 9 scale.

Project Selection Matrix


Successful Enterprise Excellence projects drive value, are lean and mean, are
centralized and autonomous, and communicate effectively.

FIGURE 3.22 What-how Matrix Relationship.



Project selection is one of the most critical and challenging activities. The
goal of any project selection process is to create a clear path to implementing
process improvements that benefit the business as a whole. Picking the right
projects to work on will ensure that you leverage your limited resources wisely
while also making sure you solve business problems that are most critical to
your bottom line.
Most organizations can identify a host of project opportunities but have diffi-
culty sizing and packaging those opportunities to create meaningful projects. To
be successful, the project selection process must be well defined and
disciplined.
The project selection matrix makes the identification, selection, and prioriti-
zation of Enterprise Excellence projects more objective and easier to validate.
By adopting this matrix, key management (top-down-driven) projects can be
more easily identified and approved by the senior management team.
The selection process provides a straightforward way to gather the appro-
priate data from all areas of the business, segregate them by improvement
categories, and apply a rating for prioritization. The frustrations, issues,
problems, and opportunities visible inside the company are key sources of
potential projects.
Enterprise Excellence project selection starts as a "what-how" matrix that
identifies the wants, desires, and needs of the customer. These customer require-
ments are translated into technical requirements as the matrix is constructed.
The process for developing the matrix clarifies the relationships between the
means and the goals, thus ensuring that all of the customers’ requirements are
addressed. "Goal" is used to denote "what" is to be achieved, and "means" refers to "how" it is to be achieved. In a matrix, we list the goals on the vertical axis as the "whats," and the means on the horizontal axis as the "hows." (See
Figure 3.23.)
This project selection process also provides a logical basis for determining
the impact of each action on the other actions. Optional enhancements can be
added to the matrix to provide greater understanding and to facilitate the next
phases of the product development project.
Figure 3.23 is an example of a completed project selection matrix. This matrix is an
invaluable brainstorming tool to assist your team with aligning limited resources
to the projects that will give the biggest bang for the buck.
There are six steps to creating the project selection matrix. Let’s briefly look
at the steps and then examine each in more detail.

1. Establish the project selection criteria.


2. Establish a list of candidate projects.
3. Evaluate candidate projects against the selection criteria.
4. Evaluate the risk of completing each project.
5. Create an interrelationship digraph for candidate projects.
6. Prioritize and select a project.
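Before walking through each step in detail, here is a minimal sketch (ours, not the book's) of the scoring mechanics behind the matrix; the criteria, candidate projects, and 1-3-9 ratings are invented for illustration.

```python
# Minimal sketch (not from the book): a project selection matrix with the selection
# criteria on the vertical axis ("whats") and candidate projects on the horizontal
# axis ("hows"). Each cell holds a 1, 3, or 9 rating (0 = no relationship); column
# totals rank the candidates. All names and ratings are illustrative only.
criteria = [
    "Cost of poor quality (COPQ)",
    "Risk",
    "Schedule performance",
    "Customer satisfaction",
]
candidates = ["Reduce invoice errors", "Shorten order cycle time", "Improve first-pass yield"]

ratings = {   # criterion -> rating for each candidate, in candidate order
    "Cost of poor quality (COPQ)": [9, 3, 9],
    "Risk":                        [3, 3, 1],
    "Schedule performance":        [1, 9, 3],
    "Customer satisfaction":       [3, 9, 3],
}

column_totals = [sum(ratings[criterion][j] for criterion in criteria)
                 for j in range(len(candidates))]

for project, total in sorted(zip(candidates, column_totals), key=lambda t: t[1], reverse=True):
    print(f"{project}: total rating {total}")
```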
FIGURE 3.23 Projects Evaluated Against Selection Criteria.

Step 1. Establish project selection criteria. First determine the project selec-
tion evaluation criteria and list them along the far left column (or y-axis). Proj-
ect selection evaluation criteria can include such items as:
 Cost of poor quality (COPQ)
 Risk
 Cost performance
 Schedule performance
 System/product performance
 Rolled throughput yield (RTY)

Consider all of the following criteria when evaluating potential projects for
the project selection matrix:
 Performance to customer requirements
 Customer satisfaction
 Internal work process performance
 Project completion schedule requirements
 Strategic business goals
 Cycle time
 Process performance
 Program goals

Step 2. Establish a list of candidate projects. Next, develop a list of potential
(candidate) projects that will have the greatest potential for success. These proj-
ects should be strategically tied to the organization’s goals and objectives—the
business plan. The projects should also be focused on satisfying customers and
delivering world-class products and services. Tools such as the affinity diagram,
the cause-and-effect diagram, and Pareto charts can be useful in focusing your
efforts. Another potential tool to help in this endeavor is the voice of the customer
table, which is discussed later.

Hopefully, this list is long enough to require you to begin prioritizing projects
based on their significance and potential impact to the business. Begin by listing
the projects in the column headings along the top row (or x-axis).
As you select candidate projects, make sure they are in line with your busi-
ness plan. Each candidate project should have the potential to benefit your or-
ganization and customers through:

 Reduced costs
 Improved quality
 Improved performance
 Improved schedule

 Improved reliability
 Reduced risk
Candidate projects should be selected on the basis of quality, cost, schedule,
and risk. Consider the following project selection categories.
Recurring events. We usually dedicate the most resources (both financial and
human) to the repetitive tasks performed most frequently.
Narrow scope. You can’t ‘‘boil the ocean’’ or ‘‘solve world hunger.’’ The best
projects are scoped an inch wide and a mile deep to enable the rigorous
data collection and analysis required for the permanent solution you seek.
It would be better to do several smaller projects aligned along a common
problem than to try to solve them all at once.
Available metrics or measurements developed quickly. You will need data on
current process performance (process inputs, or X variables) and not just
what we produce (process outputs, or Y variables).
Ownership of the process and solution. Focus on processes that directly
touch your budget and head count.
Direct link to customer satisfaction. Think about your customers in the
broadest possible terms, and don’t even begin a project unless you can
make this connection.
What if you do nothing? When considering and choosing candidate projects,
keep in mind that there may be potential costs if you do nothing. Ask yourself
what effects are possible in terms of cost, quality, schedule, and risk if you were
to do nothing. Note that most successful projects tend to fall into one of four
broad categories.
1. Defect reduction. ‘‘Opportunities’’ are the things that must go right in or-
der to satisfy the customer. Any undesired result would be considered a
defect. Look for projects where you can clearly measure the rate of defects
as a function of opportunities. Examples might be found by looking at
customer complaints, one-call resolution, training enrollment or attend-
ance, recruiting yield, and reducing duplication, to name a few.
2. Cycle time reduction. If your process is measured as a function of time,
reducing the cycle time by which you complete the process will often have
significant impact. Approval time, time to fill/hire, new-hire on-boarding
and relocation are some relevant examples.
3. Cost per unit. This is a great metric to consider for many processes where
executive management is the primary customer. By reducing the overall
cost per unit, you almost always impact bottom-line cost and your budget.
Cost per hire (with or without relocation), search fees, disability claims,
transaction processing cost, vendor management, contingent workforce,
and training costs are examples of processes that can be measured
this way.

4. Customer satisfaction (external or internal). Yes, this is another refer-
ence to customer satisfaction; hopefully you're getting the point. Suc-
cessful Lean Six Sigma projects are tied to improving a primary metric
that links directly to the customer. Employee turnover or retention,
applicant tracking, and recruiter market share have direct links to cus-
tomer satisfaction. Ask critical questions regarding your customer’s
requirements, expectations, complaints, and problems. Ask what keeps
the boss awake at night.

Still can’t identify a project? If you are still having difficulty identifying poten-
tial projects, ask yourself the following types of questions. Then go back and
review the selection categories and broad project categories introduced earlier.
Providing answers to these questions helps.
 Do you have multiple projects for fixing a critical process?
 Do you find yourself fixing the same problem over and over?
 Is there a problem or situation that is adversely affecting the organization?
 Are customers experiencing problems with your products or services?
– Quality deficiency reports
– Returned product
– Late or incorrect shipments
 Do you believe your customers might take their business to one of your
competitors?
 Is the product or service quality from your competition better than yours?
 Are cycle times too long?
 Are costs too high in any process?
 Do you have regulatory/compliance problems?
 Where do you seem to be using the most resources?
 What are the biggest scrap-producing processes?

Step 3. Evaluate candidate projects against selection criteria. Your next
task is to determine a significance rating for each candidate project based on
each evaluation criterion.
Now your team can begin the exercise of filling in the individual boxes by
scoring each project selection criterion as it relates to each project idea. You will
rate each item in the matrix based on business or executive management priori-
ties—the business plan. This rating system adds some power to the matrix and
enables more weight to be placed on selection criteria viewed as most important
by your business. (See Figure 3.24.)
1. For each project, you must determine how well the project might satisfy
each of the selection criteria. You will apply a value to each project/criterion
pair, as follows:

FIGURE 3.24 Completed Project Selection Matrix.


 Highly satisfies = 9
 Moderately satisfies = 3
 Weakly satisfies = 1
 Doesn't satisfy = blank

2. Calculate a total for each column and each row.


3. Now you can see that we will need to assign both a priority and a degree
of risk to each potential project. Let’s first talk about determining potential
risk.
Step 4. Evaluate risk to completing the projects. Your next task is to deter-
mine the potential risk for completing each of the candidate projects in your
matrix. As you consider a prospective project, identify and evaluate potential
risks to the opportunity so that you can plan how to maximize the probability of
project success. Consider the following important measures when determining
each potential risk:
 Resources
 Cost
 Technology
 Internal/external resistance
Step 5. Perform interrelationship digraph for candidate projects. It is time
to add the matrix roof. The roof on the project selection matrix is an interrela-
tionship digraph for the candidate projects. We use an expanded scale for the
roof, with a range from −9 (for the most negative relationships) to +9 (for the
most positive relationships). We also use −3, −1, +1, +3, and blanks to complete
the range.
Step 6. Prioritize and select project. The final step is to prioritize and select
a project based on all the information you have entered and tabulated thus far in
the project selection matrix. You will finally be able to rank the projects
by their individual cumulative scores.
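To make the arithmetic behind Steps 3 through 6 concrete, here is a minimal sketch with hypothetical criteria, projects, and scores (none of them from the text). It applies the 9/3/1/blank scale, totals each project column, and ranks the candidate projects by cumulative score.

# Hypothetical sketch of the project selection matrix tally.
# Scale: highly satisfies = 9, moderately = 3, weakly = 1, blank = 0.
criteria = ["COPQ", "Risk", "Cost performance", "Schedule performance"]
projects = ["Reduce scrap", "Cut approval time", "Improve on-time delivery"]

# scores[criterion][project] -- illustrative values only
scores = {
    "COPQ":                 {"Reduce scrap": 9, "Cut approval time": 1, "Improve on-time delivery": 3},
    "Risk":                 {"Reduce scrap": 3, "Cut approval time": 3, "Improve on-time delivery": 1},
    "Cost performance":     {"Reduce scrap": 9, "Cut approval time": 3, "Improve on-time delivery": 3},
    "Schedule performance": {"Reduce scrap": 1, "Cut approval time": 9, "Improve on-time delivery": 9},
}

# Cumulative score for each candidate project (column totals).
totals = {p: sum(scores[c][p] for c in criteria) for p in projects}

# Rank projects from highest to lowest cumulative score.
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
for rank, (project, total) in enumerate(ranked, start=1):
    print(f"{rank}. {project}: {total}")

The same structure can be extended with a risk rating per project (Step 4) and the roof relationships (Step 5) before the final ranking.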

Knowing where to start looking, using the right selection criteria, and think-
ing about broad categories of project type will allow you to focus on projects
with the greatest potential for success and measurable impact on the perform-
ance of the enterprise. Evaluating these project ideas against the selection criteria
enables you to rank them against business or executive priorities. With some
preparation and thought, the process can work smoothly, enabling you to deliver
near-term results with long-term business benefits. Too often, we see people
rush to get started without evaluating the impact of the projects on the goals of
the enterprise. The process for project selection should involve the same rigor as
project execution. Not doing so will result in projects that consistently fall short
of the mark.

Stakeholder Analysis
Stakeholder analysis is a technique to identify and assess the key people, groups,
and institutions that could significantly influence the success of a project with
respect to the requirements or funding for the project, product, and process.
Stakeholder analysis not only identifies stakeholders, but it also:
 Assesses stakeholders’ interests
 Identifies any effects of their interests
 Is linked with institutional appraisal and social analysis
Let’s look more carefully at how the various stakeholders are defined so we
can better understand stakeholder analysis:
Stakeholders. Persons, groups, or institutions with interests in a project or
program.
Key stakeholders. Those who can significantly influence, or are important to,
the success of the project.
Primary stakeholders. Those ultimately affected, either positively or
negatively.
Secondary stakeholders. Intermediaries in the project, including both win-
ners and losers and those involved or excluded from the decision-making
processes.

What Does Stakeholder Analysis Do?


Stakeholder analysis can:
 Identify people, groups, and institutions that will influence your initiative
(either positively or negatively).
 Assess the interest of the stakeholders in your project and determine their
position toward it. Anticipate the kind of influence, positive or negative,
these groups will have on your initiative.
 Develop strategies to mitigate opposition, garner support, and begin devel-
oping the voice of the customer (VOC).
 Stakeholder analysis identifies:
 Stakeholder interests
 Conflicts of interest among stakeholders
 Relationships among stakeholders
 Appropriate stakeholder participation

Performing a Stakeholder Analysis


1. Develop a four-column table and list the project stakeholders in the first
column.

2. Use the affinity diagram process to develop an interrelationship digraph.


3. Identify the interests stakeholders have in your project, and list these inter-
ests in the second column of the table.
4. Create a checklist to assess the impact of the stakeholders’ interests, and
list the impact items in the third column of the table.
5. Consider and record the kinds of things that you could do to get stakehold-
er support and reduce opposition, and then list these items in the last col-
umn of the table.
Step 1. Develop a table and list the stakeholders. An example of a com-
pleted stakeholder analysis table is shown in Figure 3.25.
Step 2. Organize brainstorming sessions. Use the affinity diagram process to
organize group brainstorming sessions; identify all the relevant people, groups,
and institutions that will affect or be affected by your initiative, and list them in
the column under ‘‘Stakeholder,’’ as shown in Figure 3.25. Brainstorm the re-
quirements. Once you have completed your affinity efforts, develop an interrela-
tionship digraph.
Step 3. Identify specific stakeholder interests. Once you have a list of all
potential stakeholders, review the list and identify the specific interests these
stakeholders have in your project. Consider issues like:
 The project’s benefit(s) to the stakeholder
 The changes that the project might require the stakeholder to make
 The project activities that might cause damage or conflict for the
stakeholder
Record these under the column ‘‘Stakeholder Interest(s) in the Project,’’ as
shown in Figure 3.25.

FIGURE 3.25 Stakeholder Analysis.



Step 4. Identify impact of stakeholder interests. Now review each stakehold-
er listed in column one. Ask the question: How important are the stakeholder's
interests to the success of the proposed project? Consider:
 The role the key stakeholder must play for the project to be successful and
the likelihood that the stakeholder will play this role and contribute as
needed
 The likelihood and impact of a stakeholder’s negative response to the
project

Create a checklist for identifying stakeholders’ interests:


 What are the stakeholders’ expectations of the project?
 What benefits are there likely to be for the stakeholders?
 What resources will the stakeholder wish to commit (or avoid committing)
to the project?
 What other interests does the stakeholder have that may conflict with the
project?
 How does the stakeholder regard others in the list?
 Assess the impact of each interest, and enter your conclusions into the third
column of the stakeholder table as shown in Figure 3.25.
Step 5. Develop strategies to achieve support. The final step is to consider
the kinds of things that you could do to get stakeholder support and reduce op-
position. Consider how you might approach each of the stakeholders.

 What kind of information will he or she need?


 How important is it to involve the stakeholder in the planning process?
 Are there other groups or individuals who might influence the stakeholder
to support your initiative?

Record your strategies for obtaining support or reducing obstacles to your
project in the last column in the matrix, as shown in Figure 3.25. Once you have
completed the stakeholder matrix, you may want to develop a hierarchical rela-
tionship stakeholder table to depict your stakeholders. This helps in visually de-
picting where you may need to exert influence to bolster your efforts.
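The four-column table itself can be kept in something as simple as a spreadsheet or a short script. The sketch below uses hypothetical stakeholders and entries (not from the text) to show the structure built up in Steps 1 through 5.

# Hypothetical sketch of the four-column stakeholder analysis table.
stakeholder_table = [
    {
        "stakeholder": "Operations manager",
        "interests": "Less rework in the assembly cell",
        "impact": "High: must release staff for the project team",
        "strategy": "Weekly status briefings; involve in pilot planning",
    },
    {
        "stakeholder": "Purchasing department",
        "interests": "Possible change of approved suppliers",
        "impact": "Medium: may resist new supplier qualification work",
        "strategy": "Share cost-of-poor-quality data; invite to design reviews",
    },
]

columns = ["stakeholder", "interests", "impact", "strategy"]
for row in stakeholder_table:
    print(" | ".join(row[c] for c in columns))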
The checklist for identifying stakeholders should include answers to the fol-
lowing questions:

 Have all potential supporters and opponents of the project been identified?
 Have all primary and secondary stakeholders been listed?
 Have primary stakeholders been divided into user/occupational groups or
income groups?
 Have the interests of vulnerable groups been identified?

 Are there any new primary or secondary stakeholders that are likely to
emerge as a result of the project?

Formulating and Deploying the Enterprise Excellence Strategy


Three specific types of plans are required to deploy the mission of an enterprise:
executive, management, and operating. An executive plan develops the organ-
ization’s policies and basic principles of operation; it also develops the structure
of the company, including the executive offices, company staff, subsidiaries, and
company-level standing committees. The management-level plans establish the
methods, practices, and procedures necessary to translate the company’s polic-
ies into action. The operating plans use the methods, practices, and procedures
established through the management plans to focus their operations. Figure 3.26
illustrates the relationships among the various types of plans, and how they deploy
their requirements throughout the company.
In simpler terms: executive plans provide for establishment of policy; manage-
ment plans provide for the mechanisms to implement the policies; and operating
plans use these mechanisms to implement the policies. At each level, the goals
are what is to be accomplished and the objectives are how they are to be accom-
plished. The mission of the enterprise is deployed through a waterfall of matrices.
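As a rough illustration of this waterfall, the sketch below (with hypothetical goals and objectives, not taken from the text) shows how an executive-level objective becomes a management-level goal, and a management-level objective becomes an operating-level goal.

# Hypothetical sketch of the waterfall of matrices: the objectives at one
# level become the goals of the level below.
executive = {
    "goal": "Become the low-cost producer in our market",
    "objectives": ["Reduce cost of poor quality by 20%"],
}

management = {
    "goal": executive["objectives"][0],           # executive objective -> management goal
    "objectives": ["Deploy improvement projects in every value stream"],
}

operating = {
    "goal": management["objectives"][0],          # management objective -> operating goal
    "objectives": ["Complete a DMAIC project on the top scrap-producing process"],
}

for level, plan in [("Executive", executive), ("Management", management), ("Operating", operating)]:
    print(f"{level:10s} goal: {plan['goal']}")
    print(f"{'':10s} objectives: {plan['objectives']}")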
The executive-level objectives are the broad, top-level categories of action
that must be accomplished to achieve the mission of the enterprise. These objec-
tives require the establishment of policies, guidelines, and infrastructure by the
executive level of the company. The establishment of these policies, guidelines,
and infrastructure is achieved by implementation of the executive plan. (See
Figure 3.27.)

FIGURE 3.26 Relationships Among Plan Types: Executive, Management, and


Operating.

FIGURE 3.27 Executive Planning Matrix.


The management-level plans provide the mechanism to transition the
executive-level objectives to management-level goals, which in turn are accom-
plished by the management-level objectives. At this point, the management-
level objectives become the operating-level goals. The operating-level objec-
tives necessary to achieve the operating-level goals are specific actions. The
management-level plan is therefore the transition level for the policies, guide-
lines, and infrastructure. At this level, the management plan is established—the
plan that provides for the establishment of the methods, practices, and proce-
dures necessary to implement the policies, guidelines, and infrastructure. The
actual implementation is accomplished at the operating level. (See Figure 3.28.)

FIGURE 3.28 Management Planning Matrix.

The enterprise-level Enterprise Excellence deployment plan includes polic-
ies, guidelines, and infrastructure actions. It also contains some management-
level elements to establish executive-level methods, procedures, and practices
pertaining to the enterprise-level operations. Additionally, it will include specif-
ic actions needed to implement the policies, guidelines, and infrastructure in the
processes of the enterprise offices.
A company with several business units might have more than one layer of
executive plans, but this complexity can be a trap. We need to avoid developing
‘‘policies for policies’’ and creating unnecessary complexity and bureaucracy.
We must strive for simplicity in becoming a world-class competitor. A company
with only one business unit consolidates the executive and management plans
into one plan.

Developing Enterprise Excellence Deployment Plans


The enterprise senior review group (ESRG) has the responsibility for devel-
oping the executive Enterprise Excellence (EE) deployment plan and ensuring
it is deployed throughout the organization. The ESRG begins by reviewing
the reports from the management system assessment and the EE maturity
assessment. Based on this review, the ESRG will determine actions, assign
responsibility, prioritize actions, and begin implementation. This is accom-
plished by:

 Defining and establishing or improving existing infrastructure for deploy-
ing and maintaining Enterprise Excellence
 Developing the enterprise maps
 Defining and prioritizing strategic objectives
 Identifying critical success factors
 Establishing a plan of action and milestones (POA&M) with assigned
responsibility
 Selecting, defining, assigning, and monitoring directed projects and tasks
to implement Enterprise Excellence

ESTABLISHING ENTERPRISE EXCELLENCE POLICIES, GUIDELINES, AND INFRASTRUCTURE

An executive deployment plan consists of a plan of action and milestones. It
addresses the major elements that are essential if one is to deploy Enterprise
Excellence. For each of these elements, the plan needs to provide a systematic
approach for:

 Identifying policies and guidelines


 Defining infrastructure
 Deploying the policies, guidelines, and infrastructure

Identifying Policies and Guidelines


The process for developing policies and guidelines begins with studying the ar-
ea or topic about which you wish to develop policies and guidelines as well as
the external forces affecting it. This data is then organized and evaluated to pro-
vide the basis for drafting the policy or guideline.
The first action is to define clearly the objective or purpose for the policy or
guideline. The purpose needs to be carefully stated, periodically validated, and
continuously refined and clarified as you work through this process. Each policy
or guideline needs to be consistent with the company values and vision. The
team members need to establish what they know about their business and its
culture, including the traditions and history of their organization. Central to this
effort is a careful review of the existing policies, guidelines, and infrastructure
for implementation of those policies and guidelines.
The ESRG needs to review all of this information, discuss it, analyze it,
and reach consensus about it. For the pieces that already exist or are handed
to the team, it is wise to discuss the meaning and implications of each and
to write a short paragraph that describes the meaning of each item. This
understanding will serve as the baseline for the next step: brainstorming the
vital issues.
In order to establish a policy or guideline, it is recommended that the ESRG
brainstorm a series of questions. This process develops supplementary informa-
tion about the vital issues affecting the business or organization. Before the poli-
cy statement or guideline is established, however, the brainstorming results must
be integrated with the data collected earlier.
The following questions, although not all-inclusive, are recommended to
help focus on the vital issues affecting a given element of the executive deploy-
ment plan:

 What are the control issues for this element of the executive plan?
 What action would be consistent with the model for achieving the competi-
tive edge?
 What are the necessary authority, responsibility, and accountability levels
for this issue?
 What requirements, if any, do my customers levy on me regarding this
issue?
 How does this issue relate to the company vision?

At this point in the process, the ESRG possesses a large amount of data about
the issue. The ESRG is now ready to start formulating the policy or guideline. The
policy or guideline statement needs to state clearly what its purpose is, who has
the authority for this issue, what action is required, and why it is necessary to
provide this level of control.

Defining Infrastructure
Infrastructure refers to the facilities, personnel, training, systems, and core
competencies that are required for implementing the policies and guidelines.
For each element of the executive plan, a careful study must be done to define
the infrastructure necessary to implement and deploy the policies and guide-
lines. A plan of action is then developed to ensure that these elements are put in place. The
answers to the following questions will provide the data necessary to define
your infrastructure requirements:
 What facilities are required to implement the policy or guideline?
 Which personnel, in what functions, are necessary to implement the policy
or guideline?
 What support systems are required to implement the policy or guideline?
 What training is required to implement the policy or guideline?
 What budgets are required to implement the policy or guideline?
 What core competencies are required to implement the policy or guideline?

Deploying Policies, Guidelines, and Infrastructure


The actual deployment of the policies, guidelines, and infrastructure is accom-
plished by the management plan and the operating plans. The management and
operating plans are derived from the single executive deployment plan to ensure
that collaborative planning is accomplished throughout the organization. These
plans are deployed to lower-level organizations using the waterfall of matrices
method. This method ensures that there is a link at each successive level in the
organization to the activities above and below. In this manner, one is able to
trace the relationship of an activity to the Enterprise Excellence vision of the
company and each component organization. (See Figure 3.29.)

FIGURE 3.29 Operating Planning Matrix.



KEY POINTS

Enterprise Excellence Infrastructure


Successfully deploying and implementing Enterprise Excellence is dependent
upon executive and management commitment demonstrated through a formal
infrastructure and implementation activities. The deployment and implementa-
tion infrastructure includes the following roles:
 Enterprise senior review group (ESRG)
 Enterprise Excellence deployment champion
 Project sponsors
 Master Black Belt
 Black Belt
 Green Belt
 Team members
Enterprise Senior Review Group
The enterprise senior review group (ESRG) is the executive group responsible
for the deployment of Enterprise Excellence.

Deployment Champions
Deployment Champions are senior leaders in major organizational elements of
the enterprise. They are individuals with significant overall operational respon-
sibility. The Champions are deployment leaders for their organization.

Project Sponsors
A project sponsor is the ‘‘owner’’ of the process or product being developed or
improved. By owner, we mean the individual who has the authority and respon-
sibility to effect changes. The project sponsor has a vested interest in the success
of the project.

Master Black Belts


Master Black Belts are organizational transformation professionals who are
continuous measurable improvement experts. Master Black Belts possess ex-
perience and advanced knowledge of the processes, tools, and techniques for
process and product improvement. Master Black Belts advise the organization,
ESRG, and Champions about Enterprise Excellence deployment and
implementation.

Black Belts
Black Belts are specialists in continuous process and product improvement.
Black Belts are a technical resource to the organization for the deployment and
implementation of Enterprise Excellence. They lead projects and provide tech-
nical assistance and facilitation for other improvement project teams. They are

directly responsible for supporting Green Belts and providing coaching to Green
Belts and team members.

Green Belts
Green Belts are experienced and trained in leading improvement projects within
their work center and are therefore good stewards of their processes. Green
Belts are trained in the basics of process and product improvement tools and
techniques, facilitation techniques, and project management. A Black Belt is
assigned to mentor each Green Belt as the Green Belt leads an improvement project
in their work center.

Team Members
Successful projects require cross-functional, multidisciplinary members encom-
passing disciplines, professions, trades, or work areas impacted by the project
(if the project cuts across departmental boundaries, so should team member-
ship). Effective teams are composed of three to six core members, with other
members added as needed.

Deployment Measurement, Analysis, and Reporting


Once the decision has been made to deploy Enterprise Excellence and the
ESRG has been formed, the next steps are to evaluate the management system
and perform an Enterprise Excellence maturity assessment. The results of the
assessment will reveal strengths and weaknesses in the organization and will
also reveal opportunities to enhance the ability of the enterprise to become a
high-performing organization able to achieve the competitive edge.
The management system assessment evaluates the state of the management
system of the enterprise.
The Enterprise Excellence maturity assessment reviews the critical factors
and elements for achieving the desired Enterprise Excellence state. It is critical
that the ESRG measure, evaluate, and report progress of the plans, and then
annually perform the assessments to determine the current state and to make
adjustments as necessary.

Management System Assessment


The Enterprise Excellence management system is a documented system that
establishes the Quality management system requirements, defines Management
responsibility, and specifies the policies, guidelines, and processes for resource
management, product and service realization, and measurement, analysis, and
improvement. The management system assessment reviews each of these areas
for good practices.

Enterprise Excellence Maturity Assessment


The Enterprise Excellence maturity assessment is intended to be used to devel-
op a baseline status for your organization. The baseline assessment establishes

the readiness level of your organization to deploy Enterprise Excellence. After
the baseline is established, biannual assessments need to be performed to
measure and evaluate progress.

Enterprise Excellence Deployment Planning


The enterprise senior review group (ESRG) leads the deployment for the organ-
ization. The deployment requires a detailed plan of action with assignments and
scheduled completion dates. After the plan has been established and implemen-
tation has begun, the ESRG needs to provide regular, periodic monitoring and
evaluation of progress to plan. The ESRG is also responsible for developing and
implementing recovery plans when performance deviates from the path to the
desired goals.

Process Mapping
Process mapping is a systematic/systems approach to documenting the steps/
activities required to complete a task. Process maps are diagrams that show—in
varying levels of detail—what an organization does and how it delivers services.
Process maps are graphic representations of:

 What an organization does


 How it delivers services
 How it delivers products

Process mapping is accomplished at four levels:

Level 0: Enterprise level


Level 1: Organizational/functional level
Level 2: Operations level
Level 3: Work activities level

Enterprise Excellence Planning Toolkit


Enterprise Excellence planning, deployment, and implementation are accom-
plished with the aid of processes, tools, and techniques of modern process and
product improvement. Success also depends on the application of sound project
management. Both aspects of planning, deployment, and implementation re-
quire the use of tools for collecting and analyzing communication data. The
Enterprise Excellence planning toolkit includes the tools most often used in the
collection and analysis of communication data.

Affinity Diagram The result of a brainstorming session may be a large set of
data. Initially, the relationships among these elements may not be clear. The first
task is to distill the data into key ideas or common themes. The affinity diagram

is a very effective tool for achieving this result. It organizes language data into
groupings and determines the key ideas or common themes. The results can then
be used for further analysis in the planning or problem-solving process.

Interrelationship Digraph The relationships among language data elements
are not linear and are often multidirectional. In other words, an idea or issue can
affect more than one other idea or issue. Furthermore, the magnitude of these
effects can vary. The interrelationship digraph is an effective tool for under-
standing these relationships. The input for the interrelationship digraph is the
result of an affinity diagram.

Cause-and-Effect Analysis After identifying a problem, it is necessary to de-
termine its cause. The cause-and-effect relationship is at times obscure. A con-
siderable amount of analysis often is required to determine the specific cause or
causes affecting the problem.

Process Decision Program Chart The process decision program chart
(PDPC) is a tool that assists in anticipating events and in developing counter-
measures for undesired occurrences.
The PDPC is similar to the tree diagram. It leads you through the identifica-
tion of the tasks and paths necessary to achieve a goal and its associated sub-
goals. The PDPC then leads you to answer the questions, ‘‘What could go
wrong?’’ and ‘‘What unexpected events could occur?’’ Next, by providing ef-
fective contingency planning, the PDPC leads to developing appropriate
countermeasures.

Matrix Diagram The matrix diagram is a tool for organizing language data
(ideas, opinions, perceptions, desires, and issues) so that they can be compared
to one another. The procedure is to organize the data on a vertical and a horizon-
tal axis, examine the connecting points, and graphically display the relation-
ships. The matrix diagram reveals the relationships among ideas and visually
demonstrates the influence each element has on every other element.
Matrices can be two-dimensional or three-dimensional. A 2-D matrix is in
the shape of an L or a T. A 3-D matrix is in the shape of an X, Y, or C. The L
matrix is used for two sets of variables, the T, Y, and C matrices for three sets of
variables, and the X matrix is used for four sets of variables.

Project Selection Matrix Successful Lean Six Sigma projects drive value, are
lean and mean, are centralized and autonomous, and communicate effectively.
The project selection matrix makes the identification, selection, and prioriti-
zation of Enterprise Excellence projects more objective and easier to validate.
By adopting this matrix, key management (top-down-driven) projects can be
more easily identified and approved by the senior management team.
Candidate projects should be selected on the basis of quality, cost, schedule,
and risk. Consider the following project selection categories.

Stakeholder Analysis
Stakeholder analysis is a technique to identify and assess the key people, groups,
and institutions that could significantly influence the success of a project with
respect to the requirements or funding for the project, product, and process.

Stakeholders. Persons, groups, or institutions with interests in a project or
program.
Key stakeholders. Those who can significantly influence, or are important to,
the success of the project.
Primary stakeholders. Those ultimately affected, either positively or
negatively.
Secondary stakeholders. Intermediaries in the project, including both win-
ners and losers and those involved or excluded from the decision-making
processes.

Formulating and Deploying the Enterprise Excellence Strategy


Three specific types of plans are required to deploy the mission of an enterprise:
executive, management, and operating. An executive plan develops the organ-
ization’s policies and basic principles of operation; it also develops the structure
of the company, including the executive offices, company staff, subsidiaries, and
company-level standing committees. The management-level plans establish the
methods, practices, and procedures necessary to translate the company’s polic-
ies into action. The operating plans use the methods, practices, and procedures
established through the management plans to focus their operations.

Establishing Enterprise Excellence Policies, Guidelines, and Infrastructure
An executive deployment plan consists of a plan of action and milestones. It
addresses the major elements that are essential if one is to deploy Enterprise
Excellence. For each of these elements, the plan needs to provide a systematic
approach for:

 Identifying policies and guidelines


 Defining infrastructure
 Deploying the policies, guidelines, and infrastructure

4
ENTERPRISE EXCELLENCE
IMPLEMENTATION

MANAGEMENT AND OPERATIONS PLANS

Each directorate or division within the enterprise develops a management plan
using the policy deployment described in Chapter 3. These plans are then
flowed down to operations plans. And, as the enterprise senior review group
(ESRG) did, each level will develop the value stream map for their organization.
Then they will identify actions and projects needed to deploy the policies and
guidelines from the enterprise plan. The management plans and operations
plans will therefore include collaborative and supportive projects deploying pol-
icies and guidelines to action. Thus, Enterprise Excellence implementation
occurs at all levels of the organization. The scope and complexity of the imple-
mentation projects will vary from the executive level to the management level to
the operational level. Each plan, as it is developed and deployed, will include
projects to be accomplished.
Each project will have a project sponsor as described in Chapter 2. This is the
person who ‘‘owns’’ the process (i.e., who has the responsibility and authority to
make changes to the process). Each project will also have a project leader and
team members. Depending on the depth, breadth, and complexity of the project,
the project leader will be a Green Belt or Black Belt. A Master Black Belt will be
assigned to coach and mentor the team through the project management process.
An Enterprise Excellence project, like any other project, is a one-of-a-kind
undertaking that starts when something needs to be accomplished. It is not an on-
going activity. It is an undertaking that ends with a specific accomplishment. An
Enterprise Excellence project has a definable scope of work and creates a specific
result. It has identifiable start and end points, which can be associated with a time
scale. An Enterprise Excellence project passes through several distinct phases.

Although the interfaces between phases may not be clearly separated, formal ap-
proval and authorization to proceed are recommended at each phase boundary to
ensure a smooth transition and that the project's goals are accomplished.
Uncertainty related to time and cost diminishes as a project progresses to-
ward completion. The specified result, the time, and the cost to achieve it are
inseparable. The uncertainty related to each factor of the project is reduced with
the completion of each succeeding phase. The requirements for project planning
and for control systems capable of predicting the final end point, as early and
accurately as possible, come directly from this project characteristic.
The cost of accelerating a project increases greatly as the project nears comple-
tion. Recovery of lost time becomes increasingly more expensive for each suc-
ceeding phase of the project. This project characteristic demands integrated
control through all phases. The review between phases is critical to controlling and
ensuring the efficient and effective completion of a project.
Conflicts occur between the requirements of quality, cost, and schedule when
executing a project. On one hand, some individuals may want to take more time
than necessary, making the project a little bit better, but overrunning the budget
and causing a late delivery or completion. On the other hand, some will push an
inferior product or report out the door for the sake of on-time delivery. Neither
situation is good. Good, effective planning and project management are there-
fore essential for the success of all Enterprise Excellence projects and the exe-
cution of the Enterprise Excellence plans.

ENTERPRISE EXCELLENCE PROJECTS

Enterprise Excellence projects are about satisfying requirements and expecta-
tions on time and within budget. In all cases, the end product of the project must
satisfy the project sponsor’s requirements and expectations. An Enterprise Ex-
cellence project will be one of three types:

1. Technology invention or innovation


2. New product, service, or process development
3. Product, service, or process improvement

In all three types of projects, Enterprise Excellence requires the use of stand-
ard procedures, standard criteria, and statistically valid analysis tools. Because
logical methods and techniques are being applied to improve processes, Enter-
prise Excellence can be said to use the scientific method.
The scientific method is the process of organizing empirical facts and their
interrelationships in a manner that allows a hypothesis to be developed and
tested. The scientific method consists of the following four steps:

1. Observe and describe the situation.


2. Formulate a hypothesis.

3. Use the hypothesis to predict results.


4. Perform controlled tests to confirm the hypothesis.

Figure 4.1 presents the Enterprise Excellence project decision process. Like
the scientific method, it begins with an observation. In Enterprise Excellence,
this is the identification of an opportunity (or need) to (1) invent or innovate a
technology, (2) develop a new product, service, or process, or (3) improve a
product, service, or process. The first is Invent/Innovate-Develop-Optimize-
Verify (I2DOV), the second Concept-Design-Optimize-Verify (CDOV), and
the last, depending on the focus, Define-Measure-Analyze-Improve-Control
(DMAIC) or Define-Measure-Analyze-Lean-Control (DMALC).
After a project has been identified, the next step is to develop clarification
and definition of the project so that the project charter can be established. The
initial description of a project usually comes from the customer or project spon-
sor. A common pitfall is to move ahead into planning without a complete proj-
ect definition. Another common pitfall is not getting the project sponsor to agree
on the definition. This is the time to clarify expectations of management and the
project sponsor in terms that are meaningful and measurable.
The initial challenge is to develop a definition of the project that includes a
clear understanding of the deliverables, constraints, objectives, scope, and proj-
ect strategy. There needs to be agreement between the project leader, project
team, and the project sponsor about the project definition. We begin, therefore,
by defining the opportunity and developing the business case for the project
(i.e., what we need to do and why we need to do it). This information is used to
establish the charter for the project. The project charter defines the project and
presents the business case for doing the project. It defines roles and responsibili-
ties, what is in scope and what is out of scope for the project. It also establishes
the initial estimates of resources and schedule for achieving the goals.
The following guidelines will assist in collecting and collating the informa-
tion for the project charter:

1. Define the problem/opportunity. Specifically state the issue needing to be
addressed. If there is only anecdotal data to define the problem or oppor-
tunity, identify the source and approximate confidence in its accuracy.
2. Work with the project sponsor to establish the scope of the project. This
is the breadth and depth of the project (i.e., the project constraints). The scope usually
references policies, guidelines, tasking documents, and other instructions
that shape the breadth and depth of the project. A well-written scope will
include the extent of the responsibility and authority for the project team. It
will also define the constraints for the project and the project team.
3. Define the impact of the problem/opportunity. Specify, as accurately as
possible, the impact of the problem/opportunity on the value stream, the
organization, and the customers.

FIGURE 4.1 Enterprise Excellence Project Decision Process.


4. Establish project goals. The project goals define the purpose of the proj-
ect. Achieving the goals is how the purpose of the project is accomplished.
Project goals need to be SMART, an acronym for the following:
Specific. Everyone involved in accomplishing the project needs to know
what the objectives are. When the project objectives are vague or not
clearly communicated to all involved, they may be interpreted differ-
ently. Differences in interpretation may lead to team members working
at cross-purposes and will invariably lead to inaccurate project plan-
ning. Therefore, project objectives need to be established within the
constraints of the project (e.g., available and anticipated resources).
The objectives also need to be consistent with established organization-
al policies and guidelines. The degree of detail needed to specify the
objectives will vary depending on the purpose of the project. For exam-
ple, a manufacturing project will have very detailed objectives, where-
as a research project will have a generalized set of objectives.
Measurable. This means the goals need to be expressed in terms of metrics.
You need to be able to measure the achievement of each goal. Remem-
ber, what gets measured gets done! What gets reported gets done faster!
Achievable. The project goals need to be achievable within the constraints
of the resources.
Realistic. The goals for the project need to be attainable within the con-
straints of the existing technology and resources. If the team doesn’t
believe it can be done, it won’t get done.
Timely. When establishing the project goals, define when they need to be
accomplished. This time element is important for scheduling and en-
sures that the goal is timely for achieving the purpose of the project.
Timeliness also helps establish priorities and urgency for the tasks nec-
essary for achieving the goals.
5. Define the project deliverables. Establish completion requirements for the
project.
6. Define the project benefits. Specify the impact on the value stream, the
organization, and the customers after achieving the goals.
7. Define the type of project. This could be, for example, I2DOV, CDOV,
DMAIC, or DMALC.
8. Establish an initial plan of action and milestone chart (POA&M) for
achieving the project goals.
9. Estimate the resource requirements for the project. Define the standing
and ad hoc team members to achieve the proposed plan. Estimate addi-
tional resource requirements (e.g., travel, materials, or support services).
10. Define the risks to accomplishing the goals and successfully completing
the project. Define all the potential risks, their impact, and ways to elimi-
nate the risks or mitigate their impact (e.g., perform a project stakeholder
analysis and an FMEA on executing the proposed plan).

This information establishes the baseline for building the business case, or
justification for the project. It is essential for establishing the charter and ensur-
ing success. Initially you may not have all this information; however, collect/
collate what you have and complete it as the project progresses. This informa-
tion will change as the project is conducted and the process evolves. It is critical
for these issues to be continually addressed throughout the life of the project and
that changes be immediately raised with the project sponsor.
A typical Enterprise Excellence project charter will include:

 Project title
 Identification of the deployment champion, project sponsor, and team
members
 Description of the opportunity
 Impact of the opportunity on the enterprise and customers
 Project goals
 Benefits of achieving the goals
 Definition of in scope and out of scope
 Initial estimate of the schedule and resource requirements and type of proj-
ect (i.e., I2DOV, CDOV, DMAIC, or DMALC)

The charter needs to be signed by the deployment champion, project sponsor,
and cognizant Black Belt and/or Master Black Belt. This formally certifies that all
parties agree on the need, the definition, and the approach. It also certifies they
will provide the necessary support and resources to achieve the goals.
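A charter of this kind can be captured in a simple structured record. The sketch below is one hypothetical way to hold the elements listed above; the field names and sample values are illustrative, not prescribed by the text.

from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a project charter record holding the elements above.
@dataclass
class ProjectCharter:
    title: str
    deployment_champion: str
    project_sponsor: str
    team_members: List[str]
    opportunity: str
    impact: str
    goals: List[str]
    benefits: str
    in_scope: List[str]
    out_of_scope: List[str]
    project_type: str            # e.g., "I2DOV", "CDOV", "DMAIC", or "DMALC"
    schedule_estimate: str
    resource_estimate: str
    signatures: List[str] = field(default_factory=list)

charter = ProjectCharter(
    title="Reduce invoice processing cycle time",
    deployment_champion="J. Rivera",
    project_sponsor="Accounts payable manager",
    team_members=["Green Belt lead", "AP clerk", "IT analyst"],
    opportunity="Invoices take 18 days on average to process",
    impact="Late-payment penalties and supplier dissatisfaction",
    goals=["Cut average cycle time to 5 days within two quarters"],
    benefits="Eliminate penalties; improve supplier relations",
    in_scope=["Domestic supplier invoices"],
    out_of_scope=["International invoices", "Capital purchases"],
    project_type="DMALC",
    schedule_estimate="Two quarters",
    resource_estimate="Three core team members, part time",
)
charter.signatures = ["Deployment champion", "Project sponsor", "Master Black Belt"]
print(charter.title, "-", charter.project_type)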

ENTERPRISE EXCELLENCE PROJECT DECISION PROCESS

After a project is identified and the business case is initiated, the type of project
is established (i.e., inventing or innovating technology; developing new prod-
ucts, services, or processes; or improving products, services, or processes). Each
type of project uses a structured process.

Inventing/Innovating Technology
The development of new products, services, and processes begins with the de-
velopment of new technology or the innovation of existing technology for new
and unique applications. Once a new technology is developed, we need to eval-
uate it for application to existing or new products, services, or processes. The
application of the technology is referred to as technology transfer.
Technology development is accomplished using system engineering. This sys-
tem approach enables critical functional parameters and responses to be quickly
transferred into new products, services, and processes. The process has four phases:
Invention and Innovation-Develop-Optimize-Verify (I2DOV).

Invention and Innovation


Technology invention and innovation begins with a review of the vision, mission,
goals, and objectives of the enterprise. The long-term voice of the customer is
collected and analyzed. Technology needs and trends are identified. Current of-
ferings are evaluated for viability and application of new technology. Decisions
are then made to innovate or invent new technologies. New product, service, or
process functions are then identified and modeled.

Develop
In this phase, concepts are evaluated and selected. The selected technological
concepts are characterized. The characterizations are analyzed and the ideal
transfer functions are quantified.

Optimize
This phase focuses on establishing robust critical functional responses. This in-
cludes evaluating the functional response of the technology concept under con-
ditions of intentionally induced changes to control and noise factors. This
determines the effects of noise and enables us to determine the optimal set
points to achieve a cost-effective robustness. In this way, we are able to select
the optimal technology for transfer to products, services, and processes so they
are insensitive to noise without removing the sources of variability.

Verify
The final phase of this process focuses on the integration and verification of new
technologies into an existing or new product architecture. In this phase, stress
testing and other evaluation methods are employed to ensure the technology is
mature enough to transfer to existing or new products, services, or processes.

Development of Products, Services, and Processes


The Enterprise Excellence approach for developing products, services, and pro-
cesses is the Design for Lean Six Sigma strategy. This strategy ensures that cus-
tomer requirements and expectations are incorporated in the customer offering.
Concept-Design-Optimize-Verify (CDOV) is a specific, sequential design and
development process used to execute the design strategy.

Concept
This phase of CDOV initiates the design and development activities. The voice
of the customer is collected and analyzed. The customer requirements model is
developed, defining the requirements to satisfy the customer. The initial require-
ments matrix (house of quality) is developed and evaluated. The customer per-
formance model is developed, establishing the concept for the offering to be
developed.

Design
This phase of CDOV is the system development stage. These are the design and
development activities. The engineering design and design characteristics are
selected. The system measurement strategy is established for evaluating the de-
sign; the project risks are identified and evaluated; and, finally, the product, ser-
vice, or process design is established.

Optimize
Design optimization completes the development stage. This phase focuses on
establishing a robust design that meets the goals of the design team. In this
phase, test and evaluation are performed to determine optimum set points for
processes and to establish the desired robust design. In addition, make-buy strat-
egies and production control strategies are established to ensure the producibil-
ity of the design.

Verify
This phase is the demonstration stage of the process. In this phase, we verify the
capability of the design to meet the requirements. Based on the test and evalua-
tion results, the risk assessment and reliability assessments are updated. We are
now ready to begin full-scale production.

Improving Products, Services, and Processes


Improving products, services and processes involves improving the effective-
ness and efficiency of our operations. A product or service is said to be effective
when it meets all of its customer requirements. Effectiveness can be simply ex-
pressed as ‘‘doing the right things the first time and every time.’’ Efficiency can
be simply expressed as ‘‘doing the right things faster and with minimum re-
sources.’’ The process is Define-Measure-Analyze-Improve or Lean-Control.

Define
During the define phase, the project team maps the current process. All available
process, product, and service data are collected. The process maps are evolved
into value stream maps. The maps are used to evaluate the processing and define
value-added, business value-added, and non-value-added activities. During this
phase, a process failure modes and effects analysis is performed to identify and
prioritize potential problem areas. A value-added process step is anything that:

 Is done right the first time


 The customer is willing to pay for
 Transforms the product or service

Steps that do not meet these criteria often contribute to waste and can lead to
defects. Business value-added steps do not meet the criteria for value added but

are required by regulatory, safety, or security concerns. Our goal is to improve
the effectiveness of our processes by reducing variability. Once that is accom-
plished, we will focus on improving efficiency by eliminating non-value-added
steps and minimizing business value-added steps. This strategy is accomplished
in the DMALC process.

Measure
The next phase is measure. The goal of the measure phase is to focus the im-
provement effort by gathering information on the current situation. During this
phase, the project team collects baseline data on process performance and actual
or suspected problems, displays the data, and calculates the variance level of the
process. In addition, historic data may be used to further define the problem.
The following steps are performed at this stage:

 Process capability requirements or ‘‘specs’’ are determined.


 Measurement method and tools are established.
 A sampling plan is devised to meet project goals.
 Process performance data is collected.

Analyze
During the analyze phase, the project team first determines whether the process is
stable. If it is not stable, the team will take the appropriate steps to stabilize the
process. Once the process has been determined to be stable, the team focuses on
problems that were identified in the measure phase: Are these related to variability
or waste? The goal of the analyze phase is to identify root causes and to confirm
them with data. During this phase, the project team verifies the causes of problems
before moving on to solutions and displays potential causes for further analysis.
Process performance measures are analyzed to evaluate the efficiency of the
process. To do so, the following steps must be taken:

 Perform routing analysis.


 Calculate cycle times, takt time, and rolled throughput yield (a brief sketch of these two calculations follows this list).
 Update failure modes and effects analysis (FMEA).
 Develop improvement plans.
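As a reminder of the arithmetic behind two of these measures, the following sketch (with made-up step yields and demand figures) computes rolled throughput yield as the product of the first-pass yields of each process step, and takt time as available work time divided by customer demand.

# Hypothetical sketch: rolled throughput yield and takt time.
from math import prod

step_yields = [0.98, 0.95, 0.99, 0.97]       # first-pass yield of each process step
rty = prod(step_yields)                      # rolled throughput yield
print(f"Rolled throughput yield: {rty:.3f}") # approximately 0.894

available_minutes = 450                      # net available work time per shift
customer_demand = 300                        # units required per shift
takt_time = available_minutes / customer_demand
print(f"Takt time: {takt_time:.2f} minutes per unit")  # 1.50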

After all the initial steps are performed, the team identifies opportunities for
improvement and prepares a preliminary plan for improving the effectiveness or
efficiency of the operations. Issues related to quality and variability are ad-
dressed in the improve phase. Issues related to cycle time and waste reduction
are addressed in the lean phase.

Improve
The improve phase of a project allows the team to begin testing solutions. The
goals of the improve phase are to develop, pilot, and implement solutions that

address the causes identified in the analyze phase and to use data to evaluate
both the solutions and the implementation of the solutions. During this phase,
the project team:

 Identifies potential solutions


 Pilots one or more solutions
 Applies the results of the pilot
 Implements the solution

Lean
This phase guides the team to specific action to develop lean processes and
achieve the performance and financial goals.

 Implement improvement plan.


 Perform designed experiment if applicable.
 Measure improvements.
 Develop conclusions, recommendations, and next steps.
 Update documentation.
 Present status report.

Control
The control phase focuses on institutionalizing the gains achieved through the
improvement project. In this phase, the following activities occur:
 Policies, guidelines, procedures, and checklists are revised to reflect the
changes.
 A control system is established for each critical parameter. This is essential
to ensure the process is monitored, evaluated, managed, and reported.
 There will inevitably be special circumstances that will cause the process
to go out of control. An out-of-control plan is established for the process.
This plan defines actions to be taken when an out-of-control condition
occurs.
 An internal audit plan is established for the process to ensure that regular,
periodic checks are performed. This is critical to ensure the gains are main-
tained and that the process doesn’t slip back to the old way.
 Personnel are trained in the new procedures, out-of-control plan, and inter-
nal audit plan.
 A final report is prepared documenting the completion of the project. This
report is critical for recognition of the achievements and for maintaining
the lessons learned.

This phase is for documentation and monitoring of the new process condi-
tions via statistical process control methods.

The tools are put in place to ensure that the process remains within the maxi-
mum acceptable ranges over time. If implemented properly, you should be able to:

 Establish control system for each critical parameter


 Establish data collection plan
 Establish out-of-control plan
 Establish internal audit plan
 Develop and present final report

At the completion of the improve or lean phase, the process is evaluated to


determine whether it has attained a state of continuous measurable im-
provement. If the process, product, or service has not attained the level of
improvements required by the enterprise, a new project opportunity is devel-
oped and the process starts over.

PLANNING THE ENTERPRISE EXCELLENCE PROJECT

After the charter is established, the project leader assembles the team and begins
the next level of project planning. Effective planning is essential for successful
projects. An effective plan is flexible. It provides alternative paths and functions
to accommodate changes that may occur during implementation. The project
plan consists of the WBS, the schedule, and the resource requirements. During
the life of the project, the plan will need to be periodically reviewed and up-
dated to ensure it accurately reflects the project.
The work breakdown structure (WBS) is a powerful tool for breaking a task
into subtasks. The WBS approach translates the deliverable, constraints, and scope
into a detailed project plan. This technique will ensure that all tasks are identified
and will focus attention on those tasks most critical to project success. The WBS
becomes the basis from which scheduling, budgeting, and staffing can be planned.
WBS provides a framework and systematic method for up-front planning of
the project. It describes the project as the sum of smaller work elements (tasks).
The WBS is the basis for developing schedules, cost estimates, and assigning
resources. The WBS is a graphical representation of the project that shows the
relationship between product and tasks, but not time. It is a planning tool that
breaks the project into manageable pieces.
The WBS is a hierarchy chart with the final product at the top. The second
level breaks the procedure into major component tasks. These component tasks
are in turn broken into subcomponent tasks. This process is repeated until the
lowest level is reached. This is the level that identifies individual activities and
tasks that can be assigned and performed.
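
A WBS can be represented as a simple tree. The Python sketch below uses a nested dictionary and a hypothetical improvement project to illustrate the hierarchy of tasks and subtasks described above.

# A hypothetical three-level WBS for an improvement project, represented as a nested dict.
wbs = {
    "Improve order-entry process": {
        "1 Define": {
            "1.1 Map current process": {},
            "1.2 Build value stream map": {},
        },
        "2 Measure": {
            "2.1 Establish measurement method": {},
            "2.2 Collect baseline data": {},
        },
    }
}

def print_wbs(node, level=0):
    """Walk the hierarchy top-down, printing each task indented by its level."""
    for task, subtasks in node.items():
        print("    " * level + task)
        print_wbs(subtasks, level + 1)

print_wbs(wbs)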
A top-down approach is used to guide planning instead of allowing detailed
plans to be generated without a common framework. If the required networks
and components are well established and easy to comprehend, a WBS may not
be used. There is a possibility, however, that the planning may be incomplete or
inconsistent with the project objectives.

FIGURE 4.2 Sample WBS.
The number of levels in the WBS depends on the number of levels of man-
agement that will network and schedule the work. Thus, if the project is
managed at two levels, a two-level WBS is adequate. In more complex situa-
tions, the number of levels would increase. If the WBS levels increase beyond
four, consider dividing the project into subprojects. (See Figure 4.2.)
The implementation of the project is the controlled execution of the plan.
The project leader is responsible for coordinating all elements of the project
during this phase. Progress needs to be monitored and evaluated.

Scheduling
Scheduling is deciding when work will be performed. In order to make these
decisions, you need to consider the following questions:

 When does the work need to be completed?
 What actions are required?
 How long will it take?
 What resources are needed?
 Where will the work be done?
 What are the prerequisites for the tasks?
 What is the priority of tasks, objectives, and deliverables?

There are three basic tools for scheduling:

 Bar charts
 Milestone charts
 Network diagrams

Bar Charts
The bar chart is also known as a Gantt chart. (See Figure 4.3.) This chart con-
sists of an x-axis and a y-axis. The y-axis reflects a numerical value, resources
allocated, or tasks. The x-axis contains a timeline. This shows the relationship
between tasks, the required resources, and the time required to complete the
tasks. The Gantt chart doesn’t show the interdependencies between tasks.
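
For a short project with few tasks, a bar chart can be produced from nothing more than a task list with start dates and durations. The Python sketch below renders a crude text Gantt chart from hypothetical data.

# A tiny text rendering of a bar (Gantt) chart for a short project with few tasks;
# task names, start weeks, and durations are hypothetical.
tasks = [
    ("Collect data",   0, 2),   # (task, start week, duration in weeks)
    ("Analyze data",   2, 3),
    ("Pilot solution", 5, 2),
]

horizon = max(start + dur for _, start, dur in tasks)
for name, start, dur in tasks:
    bar = " " * start + "#" * dur + " " * (horizon - start - dur)
    print(f"{name:<15}|{bar}|")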
A bar chart is best used in the following circumstances:

 Project is short term.


 Project has a small number of tasks.
 It is easy to estimate the time required to perform each task.
 There are few interrelationships between project tasks.

Milestone Charts
A milestone chart is a special innovation of a bar chart. (See Figure 4.4.) It is a
chart oriented to project milestones. Milestones are points in time. They are
usually the beginning or end of an activity. Every event in a project could be a
milestone. Designating milestones is done for monitoring progress of the proj-
ect. Milestones are therefore limited to major and minor. A major milestone is a
significant event dependent upon many activities over a relatively long period of
time. Minor milestones are events that occur in a short period of time.

FIGURE 4.3 Project Gantt Chart.
FIGURE 4.4 Project Plan of Action and Milestones.
Milestone charts also have an x-axis and a y-axis. The x-axis is the calendar
for the project. The y-axis contains the tasks and specific milestones. There is a
current date line that marks the date when the chart is completed.
Milestone chart innovations may include:

 Planned versus actual


 Slipped milestones with new dates
 Major versus minor milestones
 Percentage complete

Network Diagrams
Network diagrams are flowcharts of project tasks. These diagrams indicate the
dependency relationships among the tasks of the project. Once the network dia-
gram is developed, time evaluations can be performed to determine the total
duration for the project.
Two types of network diagram analysis are used:

 Critical path method (CPM)


 Project evaluation and review technique (PERT)

If the project tasks are well defined and if reasonably accurate time duration
can be established, then the CPM method is used. If, on the other hand, activi-
ties and time duration are nebulous (e.g., as with research projects), then PERT
is used.

There are two techniques for developing CPM charts:

 Arrow diagram
 Precedence diagram

The arrow diagram uses arrows connected by nodes to represent activities.


Each node (circle) is a start or finish event. The finish node for one event is the
start node of another event.
The precedence diagram uses boxes to represent the project activities. The
boxes are connected by arrows representing the relationships of the activities to
each other. (See Figure 4.5.)

Developing the Network Diagram The activities for the network diagram
come from the lowest level of the WBS. Duration and dependencies are
assigned to each task. The logic used to develop the dependencies needs to
be carefully considered. The basis for the relationships is only the require-
ments of the task, not plans or resource constraints.
When establishing times for each task use the following definitions.

Early start time (ES). The earliest point at which an activity can begin. In
calculating the critical path, use the latest date of completion for all pre-
requisite activities for determining the ES of a subsequent activity.

FIGURE 4.5 Project Pert Chart.



Late start time (LS). This is the latest possible start time a task can begin and
still support the project completion date. This calculation is done in a
backward flow beginning with the end of the project. The earliest late start date
of all activities requiring completion of the activity is the late finish date for
the activity; subtracting the duration gives the late start date.
Slack time. The amount of time that a particular activity can be delayed with-
out impacting the overall project schedule. This is the difference between
the early start time and the late start time.
Critical path. The path through the network that requires the longest time
duration from the beginning to project completion. There is no slack along
this path.

Critical Path Method (CPM) CPM uses tasks in the network diagram. Start
with a forward pass through the network. Identify the earliest start date for an
activity, add the duration to determine the early finish date. For the first activity,
use the project start date. For each succeeding activity, use the latest early finish
date of all predecessor activities as the early start date.
The next step is a backward pass. The early finish date for the last activity is
the late finish date for that activity. Subtract the duration to find the late start
date. Working toward the front of the network, use the earliest late start date for
all succeeding activities as the late finish date.
The difference between the early finish date and late finish date for a task is
the slack. The smaller the slack, the less room there is for scheduling error and
the more critical the activity is.
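
The forward and backward passes described above can be expressed in a few lines of code. The following Python sketch uses a small hypothetical network (tasks A through D) to compute early/late start and finish dates, slack, and the critical path.

# A minimal CPM sketch. Task names, durations, and dependencies are hypothetical;
# the forward and backward passes follow the steps described above.
tasks = {
    # name: (duration in days, list of predecessor names)
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

# Forward pass: earliest start/finish (predecessors are listed before successors here).
es, ef = {}, {}
for name in tasks:
    dur, preds = tasks[name]
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur

# Backward pass: latest finish/start, working back from the project finish date.
project_finish = max(ef.values())
lf, ls = {}, {}
for name in reversed(list(tasks)):
    dur, _ = tasks[name]
    successors = [n for n, (_, preds) in tasks.items() if name in preds]
    lf[name] = min((ls[s] for s in successors), default=project_finish)
    ls[name] = lf[name] - dur

for name in tasks:
    slack = ls[name] - es[name]
    flag = "  <-- critical path" if slack == 0 else ""
    print(f"{name}: ES={es[name]} EF={ef[name]} LS={ls[name]} LF={lf[name]} slack={slack}{flag}")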

Project Evaluation and Review Technique (PERT) If the activities of the network
diagram and their durations cannot be established accurately, use the PERT meth-
od. For this method, the milestones are used for the network diagram. This
method uses a mathematical model to predict the time required for each
milestone.
First develop three time estimates:
 The optimistic time (TO) of completion. The shortest expected time if
everything goes as smoothly as possible.
 The pessimistic time (TP) of completion. The time it would take if all that
can go wrong does go wrong.
 The most likely time (TL) of completion. The time it would take if all goes
as it has in the past for similar tasks under similar circumstances.
The calculated time estimate (TE) is:

TE = (TO + 4TL + TP) / 6

Standard deviation = (TP - TO) / 6

FIGURE 4.6 Project Critical Path Analysis.

For 99.73 percent of the time, the work will be completed in the range TE
plus or minus 3 standard deviations. Then TE can be used to calculate early start
dates, late start dates, early finish dates, and late finish dates. (See Figure 4.6.)
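
Applying the TE and standard deviation formulas to one hypothetical milestone might look like the following Python sketch.

# PERT time estimate for one milestone, using the TE and standard-deviation formulas above.
# The three time estimates are hypothetical.
t_optimistic, t_likely, t_pessimistic = 4.0, 6.0, 11.0   # e.g., days

te = (t_optimistic + 4 * t_likely + t_pessimistic) / 6
sigma = (t_pessimistic - t_optimistic) / 6

print(f"TE = {te:.2f} days, sigma = {sigma:.2f} days")
print(f"99.73% range: {te - 3 * sigma:.2f} to {te + 3 * sigma:.2f} days")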

Cost Estimating and Budgeting


Credible and accurate cost estimates are needed for funding approval and plan-
ning of activities. When the cost estimate is combined with the project schedule,
the project manager can monitor the progress of the project with sufficient
information to maintain optimum control, develop trends, and forecast
performance.
When preparing the cost estimate, review the following:

 Project scope
 WBS
 Cost performance on past projects
 Projected resource requirements

Next, consolidate the cost estimates from the various sources. Apply the ap-
propriate scaling and contingency factors.
There are several types of estimates:
Order-of-magnitude estimate. An approximation based on historical data for
similar projects, with a probable error of 10 to 50 percent.
Study estimate. Better than order-of-magnitude estimate. Requires knowl-
edge of major items. Probable error of less than 30 percent.
Preliminary estimate. Also known as a budget authorization estimate. More
detailed information is needed than for a study estimate. The probable er-
ror is less than 20 percent.
Definitive estimate. Based on considerable data obtained before preparing
completed drawings and specifications. The probable error is less than
10 percent.
Detailed estimate. Requires detailed drawings, equipment specifications, and
site surveys. The probable error is within 5 percent.
The required degree of accuracy depends on the purpose of the estimate.
Order-of-magnitude estimates and study estimates are often used for prelimi-
nary decisions to develop a project. A project plan will normally use a definitive
estimate for establishing and controlling the project budget.
Scaling Factors and Contingencies
Scaling factors are used to adjust for known constraints. These include:
 Geographic locations
 Inflation over the project life
 Unusual schedule impacts
 Union or nonunion workforce
 Climate conditions affecting operations

Contingency allowances are for circumstances that are uncertain but proba-
bly will impact the project. These factors need to be included in the cost esti-
mate for the project. These contingencies are for variations caused by:
 Design changes
 Estimating errors
 Variations to the contract or purchasing plan

Contingencies are not for changes in scope, shifts in milestones, or slips in


performance.
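
The following Python sketch illustrates one way the scaling and contingency factors described above might be applied when consolidating cost estimates; the line items, factors, and rates are hypothetical.

# A hypothetical consolidation of cost estimates with scaling and contingency factors applied.
base_estimates = {"labor": 120_000, "materials": 45_000, "equipment": 30_000}

scaling_factors = {"labor": 1.08, "materials": 1.03, "equipment": 1.00}  # e.g., location, climate, inflation
contingency_rate = 0.10   # allowance for design changes, estimating errors, contract variations

scaled = {item: cost * scaling_factors[item] for item, cost in base_estimates.items()}
subtotal = sum(scaled.values())
total_estimate = subtotal * (1 + contingency_rate)

print(f"scaled subtotal = {subtotal:,.0f}")
print(f"estimate with contingency = {total_estimate:,.0f}")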

Budget
The budget for the project is a written plan covering the planned expenditures
for a defined period of time. The budgets are in monetary terms for specific
periods of time (e.g., month, quarter, or year). Budgets deal with actual informa-
tion from cost estimates and accounting records.
The project budget should be itemized to the smallest organization unit to
which a cost can be clearly traced.

Variance and Earned Value Analysis Variance is any deviation in schedule,


cost, or performance from a previously specified plan. When evaluating the
project performance, it is necessary to identify variances and examine the most
significant ones in detail in order to develop recovery plans and ensure success-
ful completion of the project.
Earned value analysis assesses the cumulative effect of variances over a proj-
ect. This process combines the effects of schedules with budgets to assess the
value of work performed.

Budgeted cost for work scheduled (BCWS) is the budget for the project. This
is the cost for completing the project. This assumes that all assumptions
made during the estimating process are correct and do not change.
Actual cost for work performed (ACWP) is the amount spent to complete
work to date.
Budgeted cost for work performed (BCWP) is the amount budgeted for the
work that has actually been completed to date. This is the earned value to
date.

Variances for the project can be determined at any level of the work break-
down structure.

Cost variance (CV) = BCWP - ACWP

Schedule variance (SV) = BCWP - BCWS

A negative cost variance indicates a cost overrun condition. A negative


schedule variance indicates a behind-schedule condition.
The challenge in earned value analysis is the projection of BCWP. The deci-
sion needed is how to take credit without overestimating the value of the work
performed.
Three methods are commonly used:

1. 50/50—Half the budget is earned when the work task begins. The other
half is earned when the task is complete.
2. 0/100—No value is earned until the task is complete.
3. Proportional credit is taken according to an established milestone chart or
algorithm for determining percent complete.

Estimate at Completion (EAC) At any given point in time, the estimated
budget at completion can be calculated:

EAC = (ACWP / BCWP) × total budget

This estimate implies that the balance of the project work will be accomplished
at the same rate of efficiency and performance.
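
The variance and EAC formulas above can be applied directly. The Python sketch below uses hypothetical figures for one reporting period.

# Earned value sketch for one reporting period, using the BCWS/ACWP/BCWP definitions
# and the variance and EAC formulas above. The dollar figures are hypothetical.
total_budget = 500_000
bcws = 200_000    # budgeted cost for work scheduled to date
acwp = 230_000    # actual cost of work performed to date
bcwp = 180_000    # budgeted cost of work actually performed (earned value)

cost_variance = bcwp - acwp         # negative -> cost overrun
schedule_variance = bcwp - bcws     # negative -> behind schedule
eac = (acwp / bcwp) * total_budget  # estimate at completion at the current efficiency

print(f"CV = {cost_variance:,}  SV = {schedule_variance:,}  EAC = {eac:,.0f}")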

Project Plan Review and Approval The written project plan will vary in size
and complexity depending on the size and complexity of the project. The plan
will include:

 Deliverables
 Goals
 Schedule
 Cost estimate
 Resource requirements
 Project controls

The project plan needs to be completed and agreed upon by the project team
members. It is then reviewed for agreement and approval by the project sponsor.

Implementing the Project In this phase, the project leader coordinates all the
elements of the project. These responsibilities include:

 Controlling work in progress


 Providing feedback to those working on the project
 Negotiating for resources
 Resolving conflicts
 Developing recovery plans
 Forecasting

Implementing begins with written task statements for each WBS activity.
Each task statement needs a specific set of deliverables and associated task per-
formance standards in terms of schedule, cost, and performance. Each WBS element
will have a performance plan—a milestone chart and budget—that can be reviewed
and evaluated for performance.
Communication is key to successful project implementation. It is necessary
for problems or changes to be communicated to all the team members and to the
project sponsor and customer as soon as possible. This enables the team to work
together to solve problems with minimum impact on the schedule and budget.
Implementing the project requires constant review and evaluation of prog-
ress. As progress is made and changes occur, the project plan needs to be re-
vised. This analysis will provide early identification of schedule and budget
problems. When these occur, it is imperative that the project team develop and
execute recovery plans to get the project back on plan.

TOLLGATE REVIEWS

A tollgate is a formal review and progress report that an Enterprise Excellence


project must pass through in order to proceed to the next phase. Tollgate reviews
provide a methodical, objective review to determine whether a project should go
forward, be held back for more effort in the present phase, or be cancelled. The toll-
gates are between each of the improvement project process phases.
It is recommended that a tollgate review presentation take no more than
20 minutes, followed by 30 to 40 minutes for questions and answers. At the end
of the tollgate review, a decision is reached to proceed to the next phase, contin-
ue working in the present phase, or cancel the project altogether.
Topics that should be covered during a tollgate review include:
 Status of project deliverables
 Overall project status
 New risks that may have been discovered or encountered (e.g., technical
roadblocks or resource allocation issues)
 Progress against the project schedule
 Recommended changes in project scope or charter
 Links of the project to strategic goals of the organization
Key stakeholders should attend all tollgate reviews. As appropriate, primary and
secondary stakeholders as well as any applicable subject matter experts (SME)
also need to attend. In addition to the project leader, attendees should include
the deployment champion, project sponsor, senior staff (as appropriate), Master
Black Belt, or other technical advisers.

Design for Lean Six Sigma Tollgate Reviews


The I2DOV and CDOV processes collectively constitute the Design for Lean Six
Sigma methodology. The tollgate reviews for each are different, as are the tasks.

I2DOV Tollgate Reviews


Invent and Innovate Technology Tollgate At this tollgate, the team will re-
view project progress and summaries of the phase deliverables, which include:

 Technology road maps


 Summary of technology trends
 Technology requirements
 Technology concepts
 Technology concept risk profiles


 Project plan

Develop Technology Tollgate At this tollgate, the team will review project
progress and summaries of the phase deliverables, which include:

 Technology measurement system concepts


 Results of math modeling of concepts
 Performance data
 Updated risk assessment
 Project plan

Optimize Technology Tollgate At this tollgate, the team will review project
progress and summaries of the phase deliverables, which include:

 Robustness reports for technologies evaluated


 Critical parameters
 Report on technology stress testing
 Updated risk assessment
 Project plan

Verify Technology Tollgate At this tollgate, the team will review project
progress and summaries of the phase deliverables, which include:

 Capability assessments
 Reliability assessments
 Critical parameters
 Technology risk profiles
 Technology transfer control plans

CDOV Tollgate Reviews


Concept Tollgate At this tollgate, the team will review project progress and
summaries of the phase deliverables, which include:

 Voice of the customer report


 House of quality
 Customer requirements model
 Customer performance model
 Project risk assessment
 Project plan

Design Tollgate At this tollgate, the team will review project progress and
summaries of the phase deliverables, which include:
 Sublevel house of quality
 Sublevel design concept alternatives
 Sublevel functional models
 Baseline design capability
 Project risk assessment
 Project plan

Optimize Tollgate At this tollgate, the team will review project progress and
summaries of the phase deliverables, which include:
 Robustness report for each sublevel
 Reliability growth plans
 Make-buy strategy
 Control strategies
 Updated risk assessment
 Project plan

Verify Tollgate At this tollgate, the team will review project progress and
summaries of the phase deliverables, which include:
 Robustness test reports
 Verified reliability growth plan
 Updated risk assessment
 System-level critical parameters
 Capability report of sublevel design
 Launch plans

Improvement Project Tollgate Reviews


As in the Design for Lean Six Sigma design review tollgates, improvement proj-
ect tollgates serve to report status, progress, and future plans and ensure the
project is on course and has the requisite support of the leadership team.

Define Tollgate
The important questions to address during the tollgate at the end of the define
phase are as follows:

 Has the project charter been validated?


 Is there a valid business case, and does it support the goals and objectives
of the enterprise?
 Is there a clear, specific problem or opportunity statement?


 Are the goals of the project delineated and measurable?
 Is the project scope appropriate, manageable, and realistic?
 Have all constraints and assumptions been identified and defined?
 Have the stakeholders/customers been identified?
 Have stakeholders/customers requirements been identified?
 Are requirements measurable? If not, how are they going to be addressed?
 What role will key stakeholders/customers play in the project?
 Has the core project team been identified?
 Have any subject matter experts outside of the core team been identified?
Are they able to participate as needed?
 Has a second-level project map been started?
 Is there a POA&M with project milestones delineated by date?
 Have roadblocks been identified and addressed?
 Have key performance parameters (KPP) and critical characteristics been
identified?
 Is there a plan going forward to measure KPP/critical characteristics?

Measure Tollgate
The measure tollgate is performed after the measurement system is validated
and baseline data on the current system/process has been collected. The impor-
tant questions to address during this tollgate are as follows:

 Have inputs, process activities, and outputs been identified and measured?
 Has the team looked for existing data already available?
 Has the team determined what new data is needed?
 Has the team developed standard operational metrics for each KPP?
 Has the team addressed data collection issues, such as forms and sample size?
 Has the team addressed data stratification needed to reach root causes?
 Has baseline performance been adequately measured or quantified?
 Has the gap between ‘‘as is’’ and ‘‘should be’’ performance been quantified?

Analyze Tollgate
The analyze tollgate is performed after the root causes have been determined
and prioritized. The important questions to address during this tollgate are as
follows:

 Has the team reviewed the data for accuracy and timeliness?
 Has the team performed descriptive and/or graphical analysis of the data?
 Have any data anomalies, such as outliers, been addressed?
 Has the team performed the appropriate inferential statistical testing?
 Has the team addressed value-added and/or cycle time analysis?
 Has the team addressed use of resources?
 Has the team identified bottlenecks, disconnects, and redundancies?
 Does the team understand the causes of the problem?
 Have conclusions from the analyses been documented and communicated?

Improve/Lean Tollgate
The improve/lean tollgate is performed after a pilot run has been completed
and evaluated. The important questions to address during this tollgate are as
follows:

 What techniques were employed to generate solution alternatives?


 Were all potential solutions generated and evaluated?
 What criteria were used to evaluate alternatives?
 How were alternative solutions eliminated?
 How was the final solution chosen?
 How was the final solution documented?
 Does the proposed final solution address all root causes?
 Does proposed solution meet the project goals?
 Was proposed solution communicated to stakeholders?
 Was a pilot run made to verify solution effectiveness?
 Have all proposed changes to the process been communicated to all stake-
holders? Do you have consensus, acceptance, and approval?
 Has necessary preparation and planning for implementing these changes
been made?

Control Tollgate
The control tollgate is performed after changes to the process have been imple-
mented and required performance achieved. The important questions to address
during this tollgate review include the following:
 Has the solution been 100 percent implemented?
 What are the final gains achieved?
 Do these gains improve significantly beyond the baseline performance?
 Have the team goals been met?
 Has proper documentation of the new operation been made?
 Have metrics been put in place to monitor the new operation?
 Can this solution or a similar solution be used elsewhere?
 Have lessons learned and other issues been documented and communicated?

PROJECT NOTEBOOK

The project book is a valuable tool for recording and maintaining up-to-date
information regarding the project. It can facilitate project management and fact-
based program decisions by providing ready reference and a repository of proj-
ect status and data. It provides for easy retrieval of information in ad hoc situa-
tions. Use your detailed and chronological project book throughout the life of
your project. The contents of the project book need to include:

I. Project selection
A. Project proposal/business case
B. Project charter
C. Supporting data
II. Project plan
A. Project process map
B. Project FMEA/risk assessment
C. Plan of action and milestones
D. Action log
E. Meeting minutes
F. Correspondence
G. Project reports
III. Tollgate reviews
IV. Improvement project data
A. Process maps/value stream maps
B. Product work breakdown structures
C. Failure modes and effects analyses
D. Process and product data
E. Designed experiments and tests
F. Improvement plans
G. Control plans

KEY POINTS

Management and Operations Plans


Each directorate or division within the enterprise develops a management plan
using the policy deployment described in Chapter 3. These plans are then
flowed down to operations plans. As the ESRG did, each level will develop
the value stream map for their organization. Then they will identify actions
and projects needed to deploy the policies and guidelines from the enterprise
plan. The management plans and operations plans will therefore include
collaborative and supportive projects deploying policies and guidelines to action.
Therefore, Enterprise Excellence implementation occurs at all levels of
the organization. The scope and complexity of the implementation projects
will vary from the executive level to the management level to the operational
level. Each plan, as it is developed and deployed, will include projects to be
accomplished.

Enterprise Excellence Projects


Enterprise Excellence projects are about satisfying requirements and expecta-
tions, on time and within budget. An Enterprise Excellence project will be one
of three types:

1. Technology invention or innovation


2. New product, service, or process development
3. Product, service, or process improvement

In all three types of projects, Enterprise Excellence requires the use of stand-
ard procedures, standard criteria, and statistically valid analysis tools. Because
logical methods and techniques are being applied to improve processes, Enter-
prise Excellence can be said to use the scientific method.
The initial challenge is to develop a definition of the project that includes
a clear understanding of the deliverables, constraints, objectives, scope, and
project strategy. There needs to be agreement between the project leader, the
project team, and the project sponsor about the project definition.
The following guidelines will assist in collecting and collating the informa-
tion for the project charter:
1. Define the problem/opportunity.
2. Work with the project sponsor to establish the scope of the project.
3. Define the impact of the problem/opportunity.
4. Establish project goals.
5. Define the project deliverables.
6. Define the project benefits.
7. Define the type of project (e.g., I2DOV, CDOV, DMAIC, or DMALC).
8. Establish an initial plan of action and milestone chart (POA&M) for
achieving the project goals.
9. Estimate the resource requirements for the project.
10. Define the risks to accomplishing the goals and successfully completing
the project.

The charter needs to be signed by the deployment champion, project sponsor,


cognizant Black Belt and/or Master Black Belt. This formally certifies that all
parties agree on the need, the definition, and the approach.

Enterprise Excellence Project Decision Process


After a project is identified and the business case is initiated, the type of project
is established. Each type of project uses a structured process.

Inventing/Innovating Technology
Technology development is accomplished using system engineering. This sys-
tem approach enables critical functional parameters and responses to be quickly
transferred into new products, services, and processes. The process is a four-
phase process: Invention/Innovation-Develop-Optimize-Verify (I2DOV).

Invention and Innovation Technology invention and innovation begins with


a review of the vision, mission, goals, and objectives of the enterprise. The long-
term voice of the customer is collected and analyzed. Technology needs and
trends are identified. Current offerings are evaluated for viability and applica-
tion of new technology. Decisions are then made to innovate or invent new tech-
nologies. New product, service, or process functions are then identified and
modeled.

Develop In this phase concepts are evaluated and selected.

Optimize This phase focuses on establishing robust critical functional


responses.

Verify The final phase of this process focuses on the integration and verifica-
tion of new technologies into an existing or new product architecture.

Development of Products, Services and Processes


The Enterprise Excellence approach for developing products, services, and pro-
cesses is the Design for Lean Six Sigma strategy.

Concept This phase of CDOV initiates the design and development activities.
The voice of the customer is collected and analyzed. The customer performance
model is developed, establishing the concept for the offering to be developed.

Design This phase of CDOV is the system development stage. The engineer-
ing design and design characteristics are selected. The system measurement
strategy is established for evaluating the design, and finally, the project risks are
identified and evaluated. The product, service, or process design is established.

Optimize This phase focuses on establishing a robust design that meets the
goals of the design team. In this phase, test and evaluation are performed to
determine optimum set points for processes and to establish the desired robust
design.

Verify This phase is the demonstration stage of the process. In this phase, we
verify capability of the design to meet the requirements. We are now ready to
begin full-scale production.

Improving Products, Services, and Processes


Improving products, services, and processes involves improving the effective-
ness and efficiency of our operations.

Define During the define phase, the project team maps the current process. All
available process, product, and service data are collected. The process maps are
evolved into value stream maps. The maps are used to evaluate the processing and
to define value-added, business value-added, and non-value-added activities. Dur-
ing this phase, a process failure modes and effects analysis is performed to identify
and prioritize potential problem areas. A value-added process step is anything that:

 Is done right the first time


 The customer is willing to pay for
 Transforms the product or service

Steps that do not meet these criteria often contribute to waste and can lead to
defects. Business value-added steps do not meet the criteria for value added but
are required by regulatory, safety, or security concerns. Our goal is to improve
the effectiveness of our processes by reducing variability. Once that is accom-
plished, we will focus on improving efficiency by eliminating non-value-added
steps and minimizing business value-added steps. This strategy is accomplished
in the DMALC process.

Measure The goal of the measure phase is to focus the improvement effort by
gathering information about the current situation. During this phase, the project
team collects baseline data on process performance and actual or suspected
problems, displays the data, and calculates the variance level of the process.

Analyze During the analyze phase, the project team first determines whether
the process is stable. If it is not stable, the team will take the appropriate steps to
stabilize the process. Once the process has been determined to be stable, the
team focuses on problems that were identified in the measure phase: Are these
related to variability or waste? The goal of the analyze phase is to identify root
causes and to confirm them with data.
After all the initial steps are performed, the team identifies opportunities for
improvement and prepares a preliminary plan for improving the effectiveness or
efficiency of the operations. Issues related to quality and variability are
addressed in the improve phase. Issues related to cycle time and waste reduction
are addressed in the lean phase.

Improve The improve phase of a project allows the team to begin testing solu-
tions. The goals of the improve phase are to develop, pilot, and implement so-
lutions that address the causes identified in the analyze phase and to use data to
evaluate both the solutions and the implementation of the solutions.

Lean This phase guides the team to specific action to develop lean processes
and achieve the performance and financial goals.

Control The control phase focuses on institutionalizing the gains achieved


through the improvement project.
At the completion of the improve or lean phase, the process is evaluated to
determine whether it has attained a state of continuous measurable im-
provement. If the process, product, or service has not attained the level of
improvements required by the enterprise, a new project opportunity is devel-
oped and the process starts over.

Planning the Enterprise Excellence Project


After the charter is established, the project leader assembles the team and begins
the next level of project planning. Effective planning is essential for successful
projects. An effective plan is flexible. It provides alternative paths and functions
to accommodate changes that may occur during implementation. The project
plan consists of the WBS, the schedule, and the resource requirements. During
the life of the project, the plan will need to be periodically reviewed and up-
dated to ensure it accurately reflects the project.
The work breakdown structure (WBS) is a powerful tool for breaking a task
into subtasks. The WBS approach translates the deliverable, constraints, and
scope into a detailed project plan. This technique will ensure that all tasks are
identified and will focus attention on those tasks most critical to project success.
The WBS becomes the basis from which scheduling, budgeting, and staffing can
be planned.
The implementation of the project is the controlled execution of the plan.
The project leader is responsible for coordinating all elements of the project
during this phase. Progress needs to be monitored and evaluated.

Scheduling
Scheduling is deciding when work will be performed. There are three basic
tools for scheduling:

 Bar charts
 Milestone charts
 Network diagrams

Gantt Charts This chart consists of an x-axis and a y-axis. The y-axis reflects
a numerical value, resources allocated, or tasks. The x-axis contains a timeline.
This shows the relationship between tasks, the required resources, and the time
required to complete the tasks. The Gantt chart doesn't show the interdepen-
dencies between tasks.

Milestone Charts A milestone chart is a special innovation of a bar chart. It


is a chart oriented to project milestones. Milestones are points in time. They
are usually the beginning or end of an activity. Every event in a project could
be a milestone. Designating milestones is done for monitoring progress of the
project. Milestones are therefore limited to major and minor. A major mile-
stone is a significant event dependent upon many activities over a relatively
long period of time. Minor milestones are events that occur in a short period of
time.

Network Diagrams Network diagrams are flowcharts of project tasks. These


diagrams indicate the dependency relationships among the tasks of the project.
Once the network diagram is developed, time evaluations can be performed to
determine the total duration for the project.

Cost Estimating and Budgeting


Credible and accurate cost estimates are needed for funding approval and plan-
ning of activities. When the cost estimate is combined with the project schedule,
the project manager can monitor the progress of the project with sufficient
information to maintain optimum control, develop trends, and forecast
performance.
When preparing the cost estimate, review:

 Project scope
 WBS
 Cost performance on past projects
 Projected resource requirements

Next, consolidate the cost estimates from the various sources. Apply the ap-
propriate scaling and contingency factors.
Communication is key to successful project implementation. It is necessary
for problems or changes to be communicated to all the team members and to
the project sponsor and customer as soon as possible. This enables the team to
work together to solve problems with a minimum impact on the schedule and
budget.

Implementing the project requires constant review and evaluation of progress.
As progress is made and changes occur, the project plan needs to be re-
vised. This analysis will provide early identification of schedule and budget
problems. When these occur, it is imperative that the project team develop and
execute recovery plans to get the project back on plan.

Tollgate Reviews
A tollgate is a formal review and progress report that an Enterprise Excellence
project must pass through in order to proceed to the next phase. Tollgate reviews
provide a methodical, objective review to determine whether a project should go
forward, be held back for more effort in the present phase, or be cancelled. The toll-
gates are between each of the improvement project process phases.
It is recommended that a tollgate review presentation take no more than
20 minutes, followed by 30 to 40 minutes for questions and answers. At the end
of the tollgate review, a decision is reached to proceed to the next phase, contin-
ue working in the present phase, or cancel the project altogether.
Key stakeholders should attend all tollgate reviews. As appropriate, primary
and secondary stakeholders as well as any applicable subject matter experts
(SME) also need to attend. In addition to the project leader, attendees should
include the deployment champion, project sponsor, senior staff (as appropriate),
Master Black Belt, or other technical advisers.

Project Notebook
The project book is a valuable tool for recording and maintaining up-to-date
information regarding the project. It can facilitate project management and fact-
based program decisions by providing ready reference and a repository of proj-
ect status and data.

5
LISTENING TO THE VOICE
OF THE CUSTOMER
No matter how effective the enterprise processes, how efficient the opera-
tions, or how motivated and skilled the workforce, the market for well-
designed concrete life preservers is limited!

Every enterprise, large or small, manufacturing or service, private or public,


matrix or line, is organized by functional areas. These areas are often called
departments, branches, sections, or groups. In most organizations, each of these
functional areas jealously guards its area of responsibility and authority. This
creates an organization of protected fiefdoms where the good of the enterprise
is suboptimized for the good of the subordinate elements of the enterprise.
The Enterprise Excellence model calls for breaking down these artificial
barriers, uniting all functions in a focused approach to achieve the goals of the
enterprise, and thereby to optimize the enterprise. In previous chapters, we fo-
cused on how each element of the enterprise develops its plans and manages its
processes in a collaborative and supportive manner to achieve the vision of the
enterprise. In this chapter, we focus on how the company establishes the strat-
egies for developing and improving the products and services to answer the
voice of the customer.
Regardless of the nature of your enterprise, its goal is to make a profit and
grow wealth. Individuals who work in the public sector may argue that their
‘‘company’’ doesn’t make money. This is not true—they provide a value to the
organization that gives them their funding. That organization will also benefit
from increased effectiveness and efficiency. The value to the sponsor, and the
increased effectiveness and efficiency, are forms of profit. This broader under-
standing of profit motivates us to implement the Enterprise Excellence model in
all organizations.
The quickest and most reliable way to make money is to provide your cus-
tomers the best-value product or service for the lowest cost in the shortest time
frame. The customers define what ‘‘best value’’ is. We must therefore focus on
understanding and satisfying their requirements. This strategy ensures that the
customers will buy our products and services, thereby ensuring our market share.
Market share is meaningless, however, if you lose money on your products or


services. The method for developing and improving the products and services
also needs to optimize activities to ensure that a profit is made. We need to mini-
mize development and implementation cycle times, eliminate waste, and
optimize all processes.
There are many benefits to be gained from a short development cycle:

 A longer product sales life


 Product in the marketplace longer
 Customer loyalty due to the high cost of switching suppliers (i.e., increased
market share)
 Higher profit margins in the absence of competition
 The perception of excellence resulting from the speed of introduction of
new products or improvements

The benefits of eliminating waste and optimizing processes are reduced oper-
ating expenses and higher profit margins. The ideal situation is to have a very
short development cycle time, with a minimum of waste, in an environment of
continuous measurable improvement.
The product and service development and improvement process needs to be
an integral part of the company’s strategy to become a world-class competitor.
This process needs to be an asset to competitiveness, not a liability.

VOICE OF THE CUSTOMER (VOC)

Products and services are usually described in terms of attributes of perform-


ance. Customers, however, assess the quality of a product or service in terms of
their reaction to their experience with that product or service. The entire cus-
tomer experience, which includes presales, sales, delivery, operation, and post-
sales support, must be evaluated in defining a product or service. The data about
our products and services, derived from customers and the marketplace, are re-
ferred to as the voice of the customer (VOC).
The voice of the customer (VOC) is the oxygen that enables the enterprise to
survive and thrive. Answering the voice of the customer requires a commitment
to knowing and understanding the full scope of customer requirements and
needs. To accomplish this, a process is used to acquire customer requirements,
understand them in a structured way, and translate those requirements into prod-
ucts and services. It consists of a focused process for identifying customer re-
quirements and expectations; establishing robust products, services, and
processes; and using a structured, integrated process to develop the products,
services, and processes for producing them.
Creating sustainable growth therefore begins with listening to the voice of the
customer. After understanding the requirements and expectations of the
customer, we define the customer requirements model (CRM). This is the basic
offering that the customer ‘‘wants.’’ At this point, we evaluate the CRM with
the mission, vision, goals and objectives of the enterprise, and our technology to
develop the customer performance model (CPM). This is the offering we will
design, develop, and commercialize that will provide a competitive edge. At this
point, we have developed a concept that we are confident will provide the cus-
tomer the motivation to choose our offering over that of the competition.

Answering the Voice of the Customer


Answering the voice of the customer begins with identifying the markets, seg-
ments, and potential opportunities. The voice of the customer is collected and
evaluated. Associated products and services and attendant technologies are
benchmarked to determine competitive levels and trends. The next step is to
define a portfolio of products and services for the market that are consistent with
the mission, goals, and objectives of the enterprise, and then to evaluate them
against the competition’s portfolio of offerings. This information is used to be-
gin developing the CRM and to evaluate the technology required for the CPM.
If the required technology doesn’t exist, and the decision has been made to pro-
ceed with developing the products or services, the decision needs to be made to
invent or innovate the requisite technology.

Technology Development
After the voice of the customer has been evaluated and the CRM developed, we
begin the development of the CPM. This requires the application of technology
concepts. In some cases, the application will use an existing technology in a
previously identified manner. In most cases, however,
the CPM will require the development of new products, services, and pro-
cesses through the application of new technology or innovative applications of
existing technology. The application of the technology is referred to as technol-
ogy transfer.
The invention of technology or the application of existing technology in
unique and innovative ways is referred to as technology development and is ac-
complished using system engineering. This systems approach enables critical
functional parameters and responses to be quickly transferred into new prod-
ucts, services, and processes. The process is a four-phase process: Invention/
Innovation-Develop-Optimize-Verify (I2DOV).

Invention and Innovation


Technology invention and innovation begins with a review of the vision, mis-
sion, goals, and objectives of the enterprise. The long-term voice of the custom-
er is collected and analyzed. Technology needs and trends are identified.
Current offerings are evaluated for viability and application of new technology.
Decisions are then made to innovate or invent new technologies. New product,
service, or process functions are then identified and modeled.

Develop
In this phase, concepts are evaluated and selected. The selected technological
concepts are characterized. The characterizations are analyzed and the ideal
transfer functions are quantified.

Optimize
This phase focuses on establishing robust critical functional responses. This in-
cludes evaluating the functional response of the technology concept under con-
ditions of intentionally induced changes to control and noise factors. This
determines the effects of noise and enables us to determine the optimal set
points to achieve a cost-effective robustness. In this way, we are able to select
the optimal technology for transfer to products, services, and processes so they
are insensitive to noise without removing the sources of variability.

Verify
The final phase of this process focuses on the integration and verification of
new technologies into an existing or new product architecture. In this phase,
stress testing and other evaluation methods are employed to ensure the tech-
nology is mature enough to transfer to existing or new products, services, or
processes.

Development of Products, Services, and Processes


The Enterprise Excellence approach for developing products, services, and pro-
cesses is the Design for Lean Six Sigma strategy. This strategy ensures the cus-
tomer requirements and expectations are incorporated in the customer offering.
Concept-Design-Optimize-Verify (CDOV) is a specific, sequential design and
development process used to execute the design strategy. CDOV is a disciplined
and accountable process. It is used to achieve:

 Designs based on the voice of customers (VOC)
 Understanding of baseline functional requirements
 Products/processes/services that are reliable, producible, and serviceable
 Robust systems that meet or exceed the needs of customers

Concept
This phase of CDOV initiates the design and development activities. The voice
of the customer is collected and analyzed. The customer requirements model is
developed defining the requirements to satisfy the customer. The initial require-
ments matrix (house of quality) is developed and evaluated. The customer
performance model is developed, establishing the concept for the offering to be


developed.

Design
This phase of CDOV is the system development stage. These are the design and
development activities. The engineering design and design characteristics are
selected. The system measurement strategy is established for evaluating the de-
sign, and finally, the project risks are identified and evaluated. The product, ser-
vice, or process design is established.

Optimize
Design optimization completes the development stage. This phase focuses on
establishing a robust design that meets the goals of the design team. In this
phase, test and evaluation are performed to determine optimum set points for
processes and to establish the desired robust design. In addition, make-buy strat-
egies and production control strategies are established to ensure the producibil-
ity of the design.

Verify
This phase is the demonstration stage of the process. In this phase, we verify
capability of the design to meet the requirements. Based on the test and evalua-
tion results, the risk assessment and reliability assessments are updated. We are
now ready to begin full-scale production.

QUALITY FUNCTION DEPLOYMENT

Quality function deployment (QFD) is the methodology that gives CDOV a fo-
cused process for translating the voice of the customer, as reflected in product or
process requirements, into a working design. QFD provides a structured method
that quickly and effectively identifies and prioritizes customers’ expectations.
Customer expectations are analyzed and turned into information to be used in
the design and development of products, services, and processes. Using QFD
will significantly reduce the concept-to-customer time, cost, and cycles.
Quality function deployment was developed in Japan in the 1970s. It was first
applied at the Kobe Shipyard of Mitsubishi Heavy Industries, Ltd. Since that
time, it has become the accepted methodology for development of products and
services in Japan. QFD has enabled businesses to successfully develop and in-
troduce products in a fraction of the time required without it.
In the early 1980s, Dr. Don Clausing introduced QFD to Xerox. Since that
time, American business has shown growing interest in using QFD. The Ameri-
can Supplier Institute and GOAL/QPC have been the leaders in this movement.
They have studied QFD, have helped businesses apply it, and have contributed
greatly to the development and innovation of QFD techniques. QFD is an inte-
gral part of Design for Six Sigma.

QFD is a structured method that uses the seven management and planning
(M&P) tools to identify and prioritize customer requirements and to translate
those requirements into engineering requirements for systematic deployment
throughout the company at each stage of product or process development and
improvement. The implementation of QFD requires a multifunctional team with
representatives from the functional organizations responsible for research and
development, engineering, sales/marketing, purchasing, quality operations,
manufacturing, and packaging.
QFD is driven by what the customer wants, not by technology. It therefore
demands that we clearly identify who the customers are and what they want.
This knowledge drives the need for new technology, innovations, improve-
ments, new products, or new services. Collecting and analyzing this information
increases the time necessary to define the project. This information enables the
development team to focus only on the characteristics that are important to the
customer and to optimize the implementation of those attributes. The result is
increased responsiveness to customer needs, shortened product design times,
and little or no redesign. These improvements mean an overall improved prod-
uct design cycle in terms of cost, quality, and time.
QFD uses the what-how matrix relationship (Figure 5.1). This relationship
generates a family of matrices in a matrix waterfall fashion (Figure 5.2). This
family of matrices deploys the customer requirements and related technical re-
quirements throughout all related design and manufacturing processes for the
development of a product or service.
The words goal and objective are often used interchangeably. The dictionary
definitions for these words are, in fact, similar enough to be considered the
same. In practice, however, one is used to denote what is to be achieved and the
other how it is to be achieved. Confusion results when there is inconsistency in
the application of the terms, so it is important to establish a convention and to
use it consistently.

FIGURE 5.1 What-How Matrix.



FIGURE 5.2 QFD Waterfall of Matrices.

Our convention is to use the word goal to designate what is to be achieved
and the word objective to designate how it is to be achieved. In a matrix, we list
the goals on the vertical axis as the ‘‘whats,’’ and the objectives on the horizon-
tal axis as the ‘‘hows’’ (Figure 5.1).
If our goal is to make a profit, one objective might be to sell a product for
more than it costs to produce it. But then the question is raised, ‘‘How do we sell
a product for more than it costs to produce?’’ This question contains a goal, that
is, what we want to accomplish. The method that we use to achieve that goal is
an objective. We see, then, that there are successive levels of goals and objec-
tives. A top-level objective becomes a lower-level goal when we seek to deter-
mine the specific action that is necessary to achieve the top-level objective.
There are five phases in the implementation of QFD. The appropriate tools or
processes (the seven M&P tools, failure modes effect analysis, designed experi-
ments, SPC, SQC, customer information systems, etc.) are used in each phase to
ensure the systematic deployment of the customers’ requirements throughout
the design, manufacture, and service of the product. The requirements matrix,
which is also referred to as the house of quality (HOQ), is the starting point for
the QFD method; we deploy each matrix or activity from this starting point.
This basic QFD approach is flexible and can be adapted to any given situation.
In the QFD process, we use ‘‘what’’ to designate the goal and ‘‘how’’ to des-
ignate the objective. In the matrix, we list the goals on the vertical axis and the
objectives on the horizontal axis. In this way, we can take an objective at each
level and cascade it to the next level to further define the details for achieving
the goal. This is referred to as a waterfall of matrices. Thus, the process ensures
collaborative and supportive designs.

The relationship between what we want to do and how it can be accomplished
is illustrated in each matrix. This starts with the top-level goals, entered on the
vertical axis. We determine what actions need to be accomplished to achieve
each of the goals and then enter these across the top, on the horizontal axis.
Knowing that the goals are the ‘‘whats’’ and the objectives are the ‘‘hows,’’ QFD
works like this: level 1 goals are answered by level 1 objectives, level 1 objectives
become level 2 goals, and so on.
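To make the cascade concrete, here is a minimal sketch (in Python, used only for illustration) of one way to represent successive levels of goals and objectives; the entries build on the profit example above and are purely illustrative, not a prescribed QFD data format.

```python
# A minimal sketch of the goal/objective cascade, assuming a nested-dictionary
# representation; the goal and objective names are illustrative only.
cascade = {
    "make a profit": {                                 # level 1 goal (what)
        "sell the product for more than it costs": {   # level 1 objective (how) and level 2 goal
            "reduce unit production cost": {},         # level 2 objectives, level 3 goals
            "price to the value delivered": {},
        },
    },
}

def walk(node, level=1):
    """Each objective at one level is treated as a goal at the next level."""
    for goal, objectives in node.items():
        print("  " * (level - 1) + f"Level {level} goal: {goal}")
        walk(objectives, level + 1)

walk(cascade)
```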
This matrix will now be used as the basis for QFD. In many applications, the
process and scope of QFD application vary according to the needs of the design.
As QFD is applied to DFLSS, we will use five phases.

1. Design requirements and house of quality


2. Engineering design
3. Product characteristics
4. Manufacturing and purchasing operations
5. Quality control

These five phases of QFD are represented in the waterfall of matrices. This
waterfall method is especially powerful in developing products or services that
satisfy and delight customers. It begins with the customer requirements, which
are deployed in the waterfall fashion. Figure 5.2 illustrates this process through
the design requirements and requisite engineering design to the product charac-
teristics and illustrates how the organization deploys these product characteris-
tics through the manufacturing and purchasing operations to the production and
quality controls.
This method yields the optimum design, developed to satisfy the customers’
requirements. The organization then implements the design, using the optimum
material and processes. QFD thus provides a methodology for ensuring that the
design and production of products and services are focused on achieving cus-
tomer satisfaction. At each step, it leads us to select the optimum objective to
achieve a goal.
A QFD project must be carefully planned to tailor each application to your
specific needs. The process begins by stepping through each phase.
The design requirements matrix translates the voice of the customer and the
customer requirements into the initial engineering design. It is the highest level
of design with the least detail. This matrix also forms the basis for the house of
quality (HOQ) that is an important planning and prioritizing tool for QFD.
The engineering design matrix takes this upper-level design and formulates
the product characteristics. At this point, there is sufficient information to write
specifications and develop drawings for the product or service.
The quality control matrix is the final phase in implementation. At this point,
you are in production. Before you begin this phase, you have to integrate the
process capabilities of the enterprise and the operational requirements for all
other product and service requirements. This is a critical scheduling process that
organizes the activities of the enterprise to ensure that the needed resources are
available at the right places, at the necessary times, in the needed quantities.
As production proceeds, you implement statistical process control (SPC),
conduct process capability studies, implement designed experiments, and
achieve continuous measurable improvement. These activities are also referred
to as variability reduction. This is the point at which you make the design and
processes robust. This means that the design, selection of materials, and pro-
cesses are such that there is little variation in quality in spite of diverse or chang-
ing environmental conditions.

CDOV PROCESS

The CDOV process uses the quality function deployment methodology to provide the struc-
ture for implementing the project for developing a product or service. The pro-
cess begins with the selection of the project. (See Figure 5.3.)

Project Selection
Before using the CDOV methodology, we must first understand how to select a
project from a group of candidate projects. The steps for selecting a new prod-
uct, service, or process development project include the following:

 Identify candidate projects using the VOC and R&D initiatives.


 Evaluate each project using prioritization matrices.
 Select a project using a decision matrix.
 Confirm the benefits of the project.
 Determine who needs to lead the project team.
 Develop the project charter.

As in all projects, the initial step in organizing the project is to perform a
stakeholder analysis. Use this information to plan your actions to gain the requi-
site support for the project.

Concept Development
During the concept development phase, customer requirements must be deter-
mined. These requirements are important, as they will be turned into system
designs, which will be used to develop the engineering functions for developing
the design concepts.
The goal of the design team in this phase is to define a robust design that is
effective and efficient—one that meets (or exceeds) the needs of all the
customers.
FIGURE 5.3 CDOV Process. (Flow: Start; Identify Potential Projects; Confirm DFLSS Benefit; Select DFLSS Project; Select and Train Team; then Concept Development, Design Development, Optimize Design, Verify Capability, and Production Launch, with gate design reviews 1 through 4 between the development stages.)

It is important for the design team to concentrate on generating feasible and
viable designs and not move too quickly into the optimize phase.
The concept development phase is driven by six basic steps:

Steps                                        Tools
1. Obtain customer requirements.             Listening, VOCT
2. Organize customer requirements.           VOCT, affinity diagram
3. Prioritize customer requirements.         Trade-off studies, house of quality (HOQ)
4. Develop design requirements.              HOQ
5. Score/relate design requirements to CR.   HOQ
6. Perform assessment.                       FMEA, FMECA

Establish Customer Requirements


Establishing customer requirements involves defining what the customer needs,
wants, and desires. This can be done through active listening. Users assess the
utility of products and services in terms of how well they meet their expectations.
Therefore, products and services must be described by designers in terms of
specific attributes and performance requirements. The voice of the customer
table is designed to capture and characterize customer requirements.
Voice of the customer tables (VOCTs) provide an effective and efficient
method for collecting and organizing the voice of the customer. VOCT comes
in two parts:
1. VOCT part 1 deals with the questions what, when, where, who, why, and
how and provides a descriptive analysis of customer segments and their
requirements. (See Figure 5.4.)
2. VOCT part 2 deals with the specific performance requirements associated
with the demands identified in part 1. (See Figure 5.5.)
The information derived from the voice of the customer tables is used as
input to the house of quality.
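As a concrete illustration, here is a minimal sketch of how the two parts of a VOCT might be captured in code; the field names follow the column headings shown in Figures 5.4 and 5.5, and the sample entry is hypothetical rather than taken from the text.

```python
from dataclasses import dataclass

@dataclass
class VOCTPart1Entry:
    """One row of VOCT part 1: who the customer is and the context of use."""
    customer: str
    internal_external: str   # "I" (internal) or "E" (external)
    what: str
    when: str
    where: str
    why: str
    how: str

@dataclass
class VOCTPart2Entry:
    """One row of VOCT part 2: the performance requirement behind a demand."""
    customer_data: str       # the customer's verbatim statement
    expectation: str         # the restated expectation
    characteristic: str      # measurable characteristic of the expectation
    function: str            # function the characteristic supports
    comments: str = ""

row = VOCTPart1Entry(
    customer="Family photographer", internal_external="E",
    what="Compact camera", when="Vacations", where="Outdoors",
    why="Capture sharp photos easily", how="Point and shoot",
)
print(row)
```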
While the VOCT provides valuable information, it works from the premise
that customers know and understand what they want. We need to also address
how to understand what the customer tacitly is looking for. Only then will the
rest of the concept development process be truly meaningful. Kano analysis is a
tool to help us further understand the customer requirements and aids in priori-
tizing the requirements.
Kano Analysis In the 1970s, Japanese camera manufacturer Konica sought a
competitive edge. Management wanted engineering to develop a camera that
FIGURE 5.4 Voice of the Customer Table, Part 1. (Columns: #, Customer, Use [I/E], What, When, Where, Why, How.)

FIGURE 5.5 Voice of the Customer Table, Part 2. (Columns: #, Customer Data, Customer Expectation, Expectation Characteristic, Function, Comments.)

would radically differentiate Konica from what other camera manufacturers
were producing. Yet Konica’s sales and research groups reported that customers
were asking for only minor modifications to existing models. To address the
situation, executives at Konica enlisted the help of Dr. Noriaki Kano, a Japanese
engineer and consultant.
Kano understood that success was not just to listen to what customers were
saying, but to develop a deep understanding of the customers’ world and then
address their latent needs. Konica staffers went to commercial photo processing
labs to examine customers’ print runs. What they found included under- and
overexposure, blurred images, double exposures, and so on. Solving these prob-
lems led to features available in cameras today that we now take for granted
(e.g., autofocus, built-in flash, automatic film winding).
From this experience, Dr. Kano developed what is now referred to as Kano
analysis—a systematic means to show how a customer’s sense of satisfaction is
affected when a product or service succeeds (or fails) to meet that customer’s
expectations.
Kano states that there are four types of customer needs, or reactions to prod-
uct characteristics/attributes:

1. ‘‘Surprise and delight’’ factors


2. ‘‘More is better’’ element
3. ‘‘Must be’’ features (i.e., must-haves)
4. ‘‘Dissatisfiers’’
Kano demonstrates these reactions graphically as shown in Figure 5.6.

FIGURE 5.6 The Kano Model.



The Kano model shows the relationship between performance and value
(expressed as customer satisfaction). The model identifies three types of
characteristics: basic, performance, and exciters. How a customer responds
in terms of satisfaction will be determined by the specific mix of character-
istics and the level of performance the organization achieves in each charac-
teristic type. The basics are the quality characteristics that are normally
unspoken and expected. If they are missing or not at a satisfactory level, the
customer satisfaction is severely damaged, but if they are met, they won’t
provide sufficient motivation to be a discriminating factor. The performance
quality characteristics are the stated requirements. The exciters are the qua-
lity characteristics that the customer doesn’t expect but provide the ‘‘wow.’’
These are the characteristics that are the discriminators that will motivate the
customer to choose your offering. The exciters of today will become the ba-
sics of tomorrow. However, the enterprise that offers the exciters first will
garner name recognition and marketplace advantage as the one who offered
it first (e.g., facial tissue is often referred to as ‘‘Kleenex’’ and copiers as
‘‘Xerox’’).

Expectors Those features the customer takes for granted are thought of as ‘‘ex-
pectors.’’ If expector features are omitted, extreme customer dissatisfaction will
result (even though the customer may not ask about them). Interestingly, these
attributes may not satisfy the customer if they are present, but their omission
will produce significant dissatisfaction; that is, customers will not buy a product
or service if the expectors are omitted or poorly done.

Spoken The second type of attributes are the spoken ones: the characteristics
the customer specifies. Often, these appear in written descriptions expressing
performance characteristics demanded by the customer. In some industries, they
are the requirements specified in a request for proposal, a contract, a product/
service specification sheet, or a purchase order. These are the requirements; the
customer consciously wants them and believes it necessary to tell you what they
are (e.g., color, weight, speed, size, or capability).

Unspoken The third type of features are as important to the customer as the
second type, but these are unspoken. They are attributes the customer forgot
about, did not know about, or did not want to talk about. These are high-risk
requirements, because failure to provide them will result in lost sales (even
though customers do not discuss them). Sometimes, such requirements are sim-
ply overlooked and you can mention them to the customer. At other times, you
will have to be persistent in your customer/market research and communication
to determine what ‘‘other’’ requirements exist.

Exciters The last type of characteristics are those that excite or ‘‘wow’’
customers. These features are known as ‘‘exciters.’’ Because customers
rarely think about these features (or haven’t even considered their exis-
tence), these are features that customers seldom talk about. Consequently,
exciters will be satisfiers, but will never be dissatisfiers. Exciters eventually
become expectors.
In developing an offering (i.e., product or service) for your customers, it is
necessary to develop information about all four types of characteristics that will
motivate your customers to give you an order. The information you develop is
used to describe the offering in terms of requirements, discriminating character-
istics, and performance.
Market research is critical for developing the information necessary to deter-
mine the basic marketing and sales considerations. Satisfying your customers
depends on understanding what they expect, what their motivation is, and how
to delight them. Be ever mindful that an exciter today will be specified tomor-
row and an expected characteristic the day after.
Excitement attributes are unspoken and unexpected by customers, but can
result in high levels of customer satisfaction. However, their absence does not
lead to dissatisfaction. Excitement attributes often satisfy latent needs, of which
customers are currently unaware. In a competitive marketplace where manufac-
turers’ products provide similar performance, providing excitement attributes
can provide a competitive advantage.

Organize Customer Requirements Organizing the customer’s requirements


helps to make sense of what the customer seeks and expects. This step builds
upon what was learned from creating the VOC table.

Concept Development Steps 3 through 5


The presentation of customer requirements in a logical format allows us to start
building the house of quality. As steps 3, 4, and 5 are accomplished, the HOQ
will start to form, as shown here:

Concept Development Steps                     HOQ Development Steps
Step 3. Prioritize customer requirements.     Step 1. Add and score customer priority.
Step 4. Develop design requirements.          Step 2. Develop the relationship matrix.
Step 5. Score/relate design requirements      Step 3. Develop the interaction matrix
        to customer requirements.                     (roof of the HOQ).

The basic house of quality is presented in Figure 5.7; a completed sample HOQ
is presented in Figure 5.8. Customer requirements are listed on the left.

FIGURE 5.7 A Basic House of Quality.

Design requirements are listed individually at the top of each column, just be-
low the ‘‘roof.’’ Technical and competitive benchmarking is added to the HOQ.
Technical benchmarking means determining how well the competition is ful-
filling the customers’ requirements in terms of the design requirements. We ex-
press this evaluation in terms of a score, which is plotted on the horizontal axis.
Some score the design requirements on a scale of 1 to 5, with 5 being the best.
This method results in a plot across the bottom of the house of quality.
For competitive benchmarking, a column is added on the right side of the house
of quality to reflect how well you and the competition are satisfying the cus-
tomer requirements that are identified on the vertical axis on the left side of the
matrix. As in the case of the technical benchmarking, this evaluation is scored
and plotted as a graph.
Comparing the results of the technical and competitive benchmarking data
should show a consistency. If your product scores high in the competitive com-
parison, it should also score high on the technical comparison. Inconsistencies
are flags that there may be a problem with a design requirement.
We can add more columns to the right side of the matrix for including other
information such as level of effort, cost, or priorities for the customer require-
ments. The possibilities are unlimited and should be driven by your imagination
and capacity for innovation. This begins the waterfall of matrices as presented
in Figures 5.9 and 5.10.

FIGURE 5.8 Sample House of Quality.

Develop Design Requirements


Developing the Relationship Matrix
 This is the body of the HOQ, as shown in Figure 5.11
 It graphically displays the relationships between customer requirements
and design requirements

During this step, identify customer needs, prioritize those needs, and develop
design requirements, as shown in Figure 5.12. Determine the relationship be-
tween the customer requirements and how the design requirements meet those
needs. Ask the question, ‘‘What is the strength of the relationship between

FIGURE 5.9

the design requirements and the customer’s needs?’’ Relationships can either be
weak (1), moderate (3), or strong (9), as indicated by a numeric value. Figure
5.12 presents an example.
Careful completion of the relationship matrix will reduce or even eliminate
the need for engineering changes later in the product’s life cycle.

Score/Relate Design Requirements


In this step, the design requirements will be scored against the customer
requirements.

FIGURE 5.10

FIGURE 5.11 Building the House of Quality—Requirements Matrix.

FIGURE 5.12 House of Quality Example.



FIGURE 5.13 House of Quality Roof.

Developing the Interaction Matrix The interaction matrix, also known as the
correlation matrix, is the roof on the house of quality. It is established to deter-
mine the technical interrelationships between the design requirements (the
‘‘how’’). This information is valuable as the basis for decisions regarding tech-
nical trade-offs.
Construct the roof by evaluating the interactions between the ‘‘hows’’ and
placing the appropriate numbers at each intersection point. (See Figure 5.13.)
This number reveals the impact of a change in a given characteristic. It answers
the question, ‘‘Does this characteristic have an effect on another characteris-
tic?’’ Repeat this question for each combination.
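As an illustration, here is a minimal sketch of how the roof’s pairwise evaluation might be recorded; the design requirements and interaction scores are hypothetical, and the 9/3/1 scale is simply an assumption borrowed from the relationship-matrix convention used earlier.

```python
from itertools import combinations

# A minimal sketch of the interaction (correlation) matrix that forms the HOQ
# roof; requirement names and scores are illustrative only.
design_requirements = ["case weight", "battery capacity", "shutter speed"]

interactions = {pair: 0 for pair in combinations(design_requirements, 2)}

# The team answers, for each pair: "Does changing one characteristic affect
# the other?" and enters a number at the intersection point.
interactions[("case weight", "battery capacity")] = 9    # strong interaction
interactions[("battery capacity", "shutter speed")] = 3  # moderate interaction

for (req_a, req_b), score in interactions.items():
    print(f"{req_a} x {req_b}: {score}")
```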

Perform Risk Assessment


The sixth and final step of the concept development phase requires that we es-
tablish priorities for the requirements and determine the risk. When establishing
priorities, two scores are taken into account: absolute weight and relative
weight. (See Figures 5.14 and 5.15.)

Absolute Weight This score refers to just the sum of all scores in each
column. It reveals how the customer requirements associate with the de-
sign requirements.
Relative Score The second score is the relative score. It reveals the priority
of the customer’s needs, which may be more reflective of the true
importance.
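To make the two scores concrete, here is a minimal sketch of the HOQ basement calculation; it assumes the absolute weight is the column sum of the 1/3/9 relationship scores and that the relative weight is the customer-priority-weighted sum (the ‘‘Customer Relative Weight’’ row of Figure 5.14). The risk-weighted variant described under ‘‘Determining the Risk’’ below would instead multiply by the risk value. All requirement names, priorities, and scores are illustrative.

```python
# A minimal sketch of HOQ absolute and relative weights; customer requirements,
# priorities (1-5), design requirements, and 1/3/9 relationship scores are
# illustrative only.
customer_priority = {"easy to carry": 5, "long battery life": 3}

relationship = {   # relationship[customer requirement][design requirement]
    "easy to carry":     {"case weight": 9, "battery capacity": 3},
    "long battery life": {"case weight": 1, "battery capacity": 9},
}

design_requirements = ["case weight", "battery capacity"]
for dr in design_requirements:
    absolute = sum(relationship[cr][dr] for cr in customer_priority)
    relative = sum(p * relationship[cr][dr] for cr, p in customer_priority.items())
    print(f"{dr}: absolute weight = {absolute}, customer relative weight = {relative}")
```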

Establishing Priorities for the Requirements Establishing the priorities for


design requirements is necessary to identify key elements. This understanding
is valuable for determining where to focus resources and refining the design
FIGURE 5.14 Establishing HOQ Priorities.
Total (Absolute Weight)               54    27    33    42    27    27    28    31    36    24    24
Normalized Absolute Weight            15.3% 7.6%  9.3%  11.9% 7.6%  7.6%  7.9%  8.8%  10.2% 6.8%  6.8%
Priority (Key Elements - Absolute)    1     7     4     2     7     7     6     5     3     10    10
Customer Relative Weight              360   165   246   375   225   171   164   191   198   108   99
Normalized Relative Weight            15.6% 7.2%  10.7% 16.3% 9.8%  7.4%  7.1%  8.3%  8.6%  4.7%  4.3%
Priority (Key Elements - Relative)    2     8     3     1     4     7     9     6     5     10    11
Risk                                  (not yet entered)

FIGURE 5.15 House of Quality Basement with Risk. (Same rows as Figure 5.14, with the risk row completed.)
Risk                                  9     3     3     1     1     1     3     9     3     3     9

concept. The relative score provides an input to the prioritization. However, it is


also imperative that the technical and competitive benchmarking be evaluated.
Collectively, this information will provide the best input for establishing the
priorities for the requirements.

Determining the Risk


 Identify the risk.
 Calculate the absolute value.
 Calculate the relative weight.
 Identify the key design elements.
 Calculate the absolute value for each design requirement by assigning the
associated values in the appropriate column and adding them.
 Enter the total in the associated cell for the absolute weight.
 Calculate the relative weight by multiplying the absolute weight by the val-
ue for the associated risk.
 Place the result in the associated cell for the relative weight.
 FMEA is used to develop risk data.

At this point, the team will have decided the direction they will be taking.
After the concept development activities have been accomplished, the first gate
review is performed. Next, a comprehensive review of the following documen-
tation is performed:

 Project plan
 VOCT
 HOQ/QFD matrix 1
 FMEA

Design Development
The next phase of the CDOV methodology is the design development phase.
This phase includes the following tasks:

 Generation of feasible design ideas


 Evaluation of designs
 Analysis of design alternatives
 Selection of design for prototyping
 Prototype development/testing
 Design evaluation measurements

The steps involved in this phase include the following:

1. Generate design ideas.


2. Evaluate design ideas.
3. Develop and test prototypes.
4. Measure key characteristics.
5. Analyze design alternatives.
6. Down-select design for optimization.

The following table presents a cross-reference of steps and recommended
tools.

Steps                               Tools
1. Generate design ideas            Prioritized customer requirements/DFX
2. Evaluate design ideas            Functional/reliability block diagrams
                                    Fault tree analysis
                                    Trade-off studies
                                    FMEA/FMECA
3. Prototype development/testing    Characterization DOE
                                    Relative comparison testing
4. Measure key characteristics      Measurement systems design (MSD)
                                    Measurement systems evaluation (MSE)
5. Analyze design alternatives      Decision matrices
                                    Pugh selection matrix
6. Down-select design for           Engineering and data analysis
   optimization

Pugh Analysis
This analytical tool provides a method for analyzing alternatives using a scoring
matrix. It is implemented by establishing an evaluation team and setting up a
matrix of evaluation criteria. The scoring matrix is a form of prioritization ma-
trix. Usually, the options are scored relative to criteria using a symbolic ap-
proach (such as +, S, and -). These are then converted into scores and
combined in the matrix to yield a total for each option. Comparison of the
scores generated gives insight into the best alternative(s).

 Choose or develop the criteria to establish the requirements column.


 Examine customer requirements to do this, and then generate a list of re-
quirements and targets.
 Establish the requirements metrics or the reference system column.

 This column represents the absolutes that the designs will be compared
to. For example, if weight is an important factor to the customer, then
a specific weight should be entered into the appropriate cell of the
matrix.
 Select the designs to be compared, and then establish columns for each.
 The designs represent the ideas developed during the concept generation
phase. All concepts should be compared using the same criteria.
 Generate scores for each criterion of each design.
 Typically, + is used to indicate ‘‘better,’’ S is used to indicate ‘‘the same,’’
and - is used to indicate ‘‘worse.’’
 If the matrix is developed with a spreadsheet such as Excel, the numbers
+1, 0, and -1 are ideal substitutes for the original ratings.
 Compute the scores.
 Count up the number of + scores for each design, and then count up the
number of - scores for each.
 Draw conclusions based on the totals for better (+) and worse (-).

If scoring is very close or very similar, the designs must be examined more
closely to make a better decision.
Figure 5.16 shows a completed Pugh matrix. The requirements are entered in
column 1, and the metrics are entered in the second column. Alternatives are
then evaluated to determine the best design alternative. ‘‘S’’ means the alterna-
tive satisfies the requirement at the required level. ‘‘+’’ means it exceeds the
requirement and ‘‘-’’ means it fails to meet it. The pluses and minuses are then
totaled.
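A minimal sketch of this scoring, using the +1/0/-1 numeric substitutes mentioned above; the criteria, candidate designs, and ratings are hypothetical.

```python
# A minimal sketch of Pugh matrix scoring, assuming the +1/0/-1 numeric
# substitutes for the +, S, and - symbols; criteria and ratings are
# illustrative only.
candidates = {
    # rating of each design against the reference design, per criterion:
    # +1 = better, 0 = same (S), -1 = worse
    "Design A": {"weight": +1, "battery life": 0, "cost": -1},
    "Design B": {"weight": 0, "battery life": +1, "cost": +1},
}

for name, ratings in candidates.items():
    pluses = sum(1 for score in ratings.values() if score > 0)
    minuses = sum(1 for score in ratings.values() if score < 0)
    net = sum(ratings.values())
    print(f"{name}: {pluses} better, {minuses} worse, net {net:+d}")
```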

Down-Select Design
In this step, the idea is to down-select the design. This means that only those
designs that have high potential for performance and optimization should go on
for further development. Data analyses from engineering will help in making
these decisions.

FIGURE 5.16 Pugh Matrix.



Gate Review 2
To finalize this phase of the CDOV process, conduct a second gate review. In
doing so, complete and review:
 Design ideas
 Functional/reliability block diagrams (FBD/RBD)
 FMEA/FTA
 Measurement systems design (MSD)
 Measurement systems evaluation (MSE)
 Prototype test data
 Analysis of alternatives (AoA)
 Engineering design QFD 2

Optimize Design Phase


The purpose of the optimize phase is to achieve a balance of quality, cost, sched-
ule, and risk to maximize customer satisfaction. Statistical tools and modeling are
used during this phase to predict quality level, reliability, and performance. Other
statistical approaches help to optimize the product’s design and performance.
 Optimize inputs (operational settings).
 Determine design weaknesses.
 Redesign as necessary.

Optimize Inputs
Multiple product design and/or process variables can be studied at the same
time by using DOE instead of a hit-and-miss approach. Best of all, DOE pro-
vides reproducible results.
Due to the statistical balance of the designs, thousands of potential combina-
tions of numerous variables (at different settings or levels) can be evaluated for
the best overall combination by using a very small number of experiments. This
not only saves experimental costs, but it greatly increases the odds of identify-
ing the hard-to-find solution to nagging quality problems.
The DOE helps to optimize a process or product by illustrating which con-
trollable factors are affecting variation.

Determine Design Weaknesses


Despite comprehensive qualification testing using traditional techniques, prod-
ucts or services can have unacceptable reliability in the field:
 A problem is discovered before the consumer notices, leading to a product
recall and modification or replacement at the supplier’s expense.
 Customers discover the problem, leading to warranty claims. Here again,
the product is modified or replaced at the supplier’s expense.

Although there are usually remedies for product or service failures, there is
no easy remedy for what occurs emotionally due to the problem: lack of cus-
tomer confidence in your enterprise.
Highly accelerated life testing (HALT) is a fast way to ensure your products
stay one step ahead of your competitors’ products. Ensuring a product is fully
mature at the time of release has long been a desirable objective. This is now
possible through the use of HALT.
HALT is a process developed to uncover design defects and weaknesses. It
addresses reliability issues at an early stage in product development, offering
significant advantages over traditional techniques. Using HALT as a develop-
ment tool will:

 Significantly reduce field failures


 Minimize costly warranty claims
 Maintain the company’s brand image
 Reduce time to market

Figure 5.17 shows an example of a HALT test procedure history sheet.


Accelerated degradation testing (ADT) is another type of accelerated testing
that can be employed to test for product failures.

Increase Robustness
The focus of the design process is to create a design that is robust, that can per-
form acceptably despite variations in design parameters, operating parameters,
and processes. The team works to make the processes capable of meeting the
design requirements and critical design parameters.

FIGURE 5.17 HALT History Sheet.



At this point of the CDOV process, the desire is to increase the robust charac-
ter that was designed into the product. Specifically, we are seeking to:

 Develop optimal improvement


 Incorporate improvement into design
 Retest, evaluate, and improve until the level of robustness is achieved

The test, analyze, and fix (TAAF) process is helpful in optimizing and im-
proving a product. It is a closed-loop reliability growth methodology that delib-
erately searches out and eliminates deficiencies. In TAAF, failures are welcome.
The TAAF concept is often necessary because complex systems (especially
those with new technologies) have reliability deficiencies that are difficult to
fully detect and eliminate through traditional design analysis. The TAAF pro-
cess allows these problems to surface during the optimize phase and eliminates
them before beginning full-scale production. When using TAAF, 10-fold reli-
ability improvements are not unusual.

Gate 3 Review
To finalize this phase of the CDOV process, conduct a third gate review. In do-
ing so, complete and review:

 Robustness/ruggedness test data


 TAAF (test, analyze, and fix) activities
 Robust/ruggedness design improvements

Verify Capability
In this phase, the capability of the product and process designs to meet or exceed
requirements is verified. Critical parameters are the focus, and these should be
well identified and documented by now.
Critical parameters are defined as those that are necessary for successful
operational performance of the product or service.
The most obvious critical parameter is functional performance (i.e., can the
product function as designed?). But other critical parameters exist and test such
attributes as:

 Functional performance
 Manufacturability
 Producibility
 Testability
 Interoperability
 Reliability
 Maintainability

 Serviceability
 Survivability
 Availability

The steps for verifying capability are as follows:

1. Verify process
2. Production built units
3. Nominal testing
4. Nominal evaluation
5. Stress testing
6. Stress evaluation

The tools used during the verification of capability phase include:

 Process capability
 Performance testing
 Design qualification testing
 Reliability life testing
 Robust design testing

1 & 2: Verify Process/Production Built Units


Process capability studies are done to verify that manufacturing processes can
produce the product as designed within the constraints of:

 Design specification
 Program scheduling
 Economics

Steps                       Tools
1. Verify process           Process capability
2. Production built units   Low rate initial production (pilot run)
3. Nominal testing          Characterization DOE
4. Nominal evaluation       ANOVA/RSM analysis
5. Stress testing           Characterization DOE
6. Stress evaluation        ANOVA/RSM analysis

FIGURE 5.18 Allowable versus Actual Process Spread.

Process capability compares the output of an in-control process to the specifica-


tion limits by using capability indices. This method compares the process varia-
bility directly to the stated system tolerance. This is basically the voice of the
customer in the form of specification limits divided by the spread of your pro-
cess. The comparison is made by forming the ratio of the spread between the
process specifications (the specification ‘‘width’’) to the spread of the process
values, as measured by 6 process standard deviation units (the process ‘‘width,’’
which is also 6σ).

Cp = tolerance / process width = (USL - LSL) / 6σ

A capable process is one where almost all the measurements fall inside the
specification limits. The graph in Figure 5.18 represents a ‘‘capable’’ process.
Several statistics can be used to measure the capability of a process: Cp, Cpk,
Cpm. The Cp, Cpk, and Cpm statistics assume that the population of data values is
normally distributed.
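As a small worked sketch of these indices, the code below computes Cp from the formula above and Cpk using its usual definition (Cpk is named but not spelled out in the text); the specification limits and sample data are invented for illustration.

```python
import statistics

# A minimal sketch of process capability, assuming Cp = (USL - LSL) / (6 * sigma)
# and the usual Cpk = min(USL - mean, mean - LSL) / (3 * sigma); the limits and
# measurements are illustrative only.
usl, lsl = 10.5, 9.5
measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)   # sample standard deviation

cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mean, mean - lsl) / (3 * sigma)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```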

3 & 4: Nominal Testing/Evaluation


Precise simulations have long been a critical step in the execution of all success-
ful endeavors. For example, architects use scale models to validate design deci-
sions before a construction project starts. Because modeling usually leads to
success, just about every manufactured object—be it a child’s toy or a jumbo
aircraft—is modeled before production. The following table presents associated
testing steps with recommended tools.

Steps                                       Tools
1. Optimize inputs (operational settings)   Factorial DOE
2. Determine design weaknesses              Highly Accelerated Life Testing (HALT)
                                            Accelerated Degradation Testing (ADT)
3. Redesign to desired                      Test, Analyze and Fix (TAAF) Process
   robustness/ruggedness

Performance Testing Performance testing measures the capability of the sub-


systems of the product or service to perform their specific functions. In doing
so, performance testing evaluates compliance with customer requirements.
Performance testing of a complex system is usually difficult (due to multiple
input and output variables). For this reason, small to medium response surface
methodology (RSM) experimental designs are typically employed. These de-
signs are very effective in characterizing system performance for both linear
and quadratic parameters. Also, when the experimenter must manipulate multi-
ple factors (independent variables) and measure multiple responses (dependent
variables), RSM is very efficient.
RSM is usually applied following a set of designed experiments intended to
screen out the unimportant factors. The primary purpose of RSM is to find the
optimum settings for the factors that influence the response.
Design Qualification Testing Design qualification testing is done to verify
actual operational performance. As in performance testing, it is important to
ensure that all critical parameters are tested here. However, unlike performance
testing, design qualification testing requires that the unit/system be tested under
actual (or close to actual) environmental and operational conditions. Large to
full RSM experimental designs are more appropriate to use here. Also, the test-
ing environment should mimic, as closely as possible, actual field operating con-
ditions. For these reasons, design qualification testing requires more time and
resources than performance testing.

5 & 6: Stress Testing/Evaluation


This type of testing provides statistics that estimate the useful operational life for
a product/system. There are two methodologies for performing reliability testing:

1. Testing to a designated time or number of failures under nominal environ-


mental/operating conditions.
2. Accelerated life testing (ALT), a form of life testing in which one or more
environmental/operational parameter is increased to stress the unit/system
so that it will fail earlier than it would under expected conditions.

Robust Design Testing Robust design testing is done to verify that the prod-
uct/service can function with minimal degradation at various environmental and
operational extremes. Plackett-Burman and Taguchi robust design of experi-
ments are the preferred test designs to use here.
There are several dangers of not verifying, such as premature deployment of
production, insufficient robustness evaluation, and capability not being calculat-
ed. Other dangers include incomplete characterization and high maintenance
and service costs.

Gate 4 Review
Complete and review the following:
 Critical parameters tested/verified
 Robust process
 Robust product
 Reliable process/product
 Verification test data
 Verification data analyses
 HOQ/QFD matrices 4 and 5

Transition to Production
QFD Matrix 4
In the manufacturing and purchasing operations matrix, we allocate the respon-
sibilities for manufacturing processes and quality requirements. This matrix is
then used to make the final manufacturing and purchasing (‘‘make versus
buy’’) decisions.

QFD Matrix 5
The quality control matrix is the final phase in implementation; at this point, the
design is in production. Before beginning this phase, integrate the enterprise
process capabilities and operational requirements for all other product and ser-
vice requirements. This is a critical scheduling process that organizes the activi-
ties of the enterprise.
We are now ready for product launch and full production.
Figure 5.19 presents the CDOV process template.

KEY POINTS

The Enterprise Excellence model calls for breaking down artificial barriers, and
uniting all functions in a focused approach to achieve the goals of the enterprise,
thereby optimizing the enterprise.
Regardless of the nature of your enterprise, its goal is to make a profit and
grow wealth. Individuals who work in the public sector may argue that their
FIGURE 5.19 CDOV Process Template. (Columns: Stage, QFD, Gate, Activity, Objective, Tools, Techniques, and Design Levels; the rows step through the Concept, Design, Optimize, and Verify stages with QFD matrices 1 through 5 and gate reviews 1 through 4.)

‘‘company’’ doesn’t make money. This is not true—they provide a value to the
organization that gives them their funding.
The quickest and most reliable way to make money is to provide your cus-
tomers the best-value product or service for the lowest cost in the shortest time
frame. The customers define what ‘‘best value’’ is. Therefore, we must focus on
understanding and satisfying their requirements.
There are many benefits to be gained from a short development cycle:

 A longer product sales life


 Product in the marketplace longer
 Customer loyalty due to the high cost of switching suppliers (i.e., increased
market share)
 Higher profit margins in the absence of competition
 The perception of excellence resulting from the speed of introduction of
new products or improvements

Voice of the Customer (VOC)


Products and services are usually described in terms of attributes of perform-
ance. Customers, however, assess the quality of a product or service in terms of
their reaction to their experience with that product or service. The entire cus-
tomer experience, which includes presales, sales, delivery, operation, and post-
sales support, must be evaluated in defining a product or service.
The voice of the customer (VOC) is the oxygen that enables the enterprise to
survive and thrive.

Answering the Voice of the Customer


Answering the voice of the customer begins with identifying the markets, seg-
ments, and potential opportunities.

Technology Development
After the voice of the customer has been evaluated and the CRM developed, we
begin the development of the CPM. This requires the application of technology
concepts. In some cases the application of the technology will be an existing
technology in a previously identified manner. In most cases, however, the CPM
will require the development of new products, services, and processes through
the application of new technology or innovative applications of existing tech-
nology. The application of the technology is referred to as technology transfer.

Development of Products, Services, and Processes


The Enterprise Excellence approach for developing products, services, and pro-
cesses is the Design for Lean Six Sigma strategy. This strategy ensures the

customer requirements and expectations are incorporated in the customer offer-


ing. Concept-Design-Optimize-Verify (CDOV) is a specific, sequential design
and development process used to execute the design strategy. CDOV is a disci-
plined and accountable process. It is used to achieve:

 Designs based on the voice of customers (VOC)


 Understanding of baseline functional requirements
 Products/process/services that are reliable, producible, and serviceable
 Robust systems that meet or exceed the needs of customers

Quality Function Deployment


Quality function deployment (QFD) is the methodology that gives CDOV a fo-
cused process for translating the voice of the customer, as reflected in product or
process requirements, into a working design. QFD provides a structured method
that quickly and effectively identifies and prioritizes customers’ expectations.
Customer expectations are analyzed and turned into information to be used in
the design and development of products, services, and processes.

CDOV Process
The CDOV process uses the quality function deployment methodology to provide the struc-
ture for implementing the project for developing a product or service. The pro-
cess begins with the selection of the project.
While the VOCT provides valuable information, it works from the premise
that customers know and understand what they want. Kano analysis is a tool to
help us further understand the customer requirements and aids in prioritizing the
requirements.

Kano Analysis
Kano states that there are four types of customer needs, or reactions to product
characteristics/attributes. They are:

1. Surprise and delight


2. More is better
3. Must be
4. Dissatisfiers

Pugh Analysis
This analytical tool provides a method for analyzing alternatives using a scoring
matrix.
Highly accelerated life testing (HALT) is a fast way to ensure your products
stay one step ahead of your competitors’ products. Ensuring a product is fully
mature at the time of release has long been a desirable objective. This is now
possible through the use of HALT.

HALT is a process developed to uncover design defects and weaknesses. It


addresses reliability issues at an early stage in product development, offering
significant advantages over traditional techniques. Using HALT as a develop-
ment tool will:

 Significantly reduce field failures


 Minimize costly warranty claims
 Maintain the company’s brand image
 Reduce time to market

The test, analyze, and fix (TAAF) process is helpful in optimizing and im-
proving a product. It is a closed-loop reliability growth methodology that delib-
erately searches out and eliminates deficiencies. In TAAF, failures are welcome.
The TAAF concept is often necessary because complex systems (especially
those with new technologies) have reliability deficiencies that are difficult to
fully detect and eliminate through traditional design analysis. The TAAF pro-
cess allows these problems to surface during the optimize phase and eliminates
them before beginning full-scale production. When using TAAF, 10-fold reli-
ability improvements are not unusual.

QFD Matrix 4
In the manufacturing and purchasing operations matrix, we allocate the respon-
sibilities for manufacturing processes and quality requirements. This matrix is
then used to make the final manufacturing and purchasing (‘‘make versus
buy’’) decisions.

QFD Matrix 5
The quality control matrix is the final phase in implementation; at this point, the
design is in production. Before beginning this phase, integrate the enterprise
process capabilities and operational requirements for all other product and ser-
vice requirements. This is a critical scheduling process that organizes the activi-
ties of the enterprise.
We are now ready for product launch and full production.

6
DEFINE: KNOWING AND
UNDERSTANDING YOUR
PROCESSES

The goals of the define phase are to know, understand, and become intimate
with your business processes. This detailed level of knowledge about your busi-
ness processes is critical to the success of any improvement activity. What you
cannot define you cannot measure; what you cannot measure you cannot con-
trol; what you do not control you cannot improve. Therefore, defining is the first
and most critical step in the improvement process. As indicated in Figure 6.1,
‘‘Enterprise Excellence decision process,’’ it is the first step in the improvement
initiative methodology.
The define phase is accomplished using a series of proven tools such as pro-
cess mapping, process walkthroughs, and failure modes and effects analysis.
These tools will lead you through an ever-increasing depth of understanding
about your business process—the as-is process state and the measurements and
data available from the current process.
Understanding process variation and its genesis includes the following:

 Acquiring all process documentation and data


 Developing process maps and value stream maps at cascading levels, as
needed, to understand and measure the process
 Performing a series of process walkthroughs to verify and validate the pro-
cess and value stream maps
 Performing an FMEA to better understand each process work center and
step and to measure the risk associated with each process element. Figure 6.1
describes the Enterprise Excellence decision process.


FIGURE 6.1 Enterprise Excellence Decision Process. (Flowchart steps: 1. Start; 2. Identify Opportunity; 3. Develop Business Case; 4. Invent/Innovate; 5. Develop Technology; 6. Optimize Technology; 7. Verify Technology Transfer; 8. Concept Development; 9. Design Development; 10. Optimize Design; 11. Verify Capability; 12. Production Launch; 13. Define; 14. Measure; 15. Analyze; 16. Improve; 17. Lean; 18. Control; then CMI and End.)

UNDERSTANDING PROCESS VARIATION

Understanding variation is a key to understanding and measuring your process.


As you begin to collect and understand the data associated with your process,
you will understand variation and how it affects your process, its metrics, and
controls. A defect is any error that results in customer dissatisfaction or down-
stream rework. Ultimately, defects negatively impact your bottom line. By min-
imizing defects, companies can increase customer satisfaction, profitability, and
competitiveness. How do you minimize defects? Moreover, how do you ensure
that the measures taken to control defects are maintained? One process im-
provement model strives to minimize defects by reducing variation in processes:
It’s a customer-focused methodology that relies on data to point to the root
causes of problems. It’s called Six Sigma.

Defining Six Sigma


Six Sigma began in the 1980s when Motorola developed this strategy to reduce
defects in its products. It was soon adopted by the manufacturing divisions of
other companies, such as IBM, Texas Instruments, and Kodak. In the late
1990s, GE Capital successfully applied Six Sigma to nonmanufacturing aspects
of its operations. Other companies have been applying Six Sigma to the retail
industry. It is now being implemented in service industries and throughout the
Department of Defense (DoD).
Sigma, the eighteenth letter of the Greek alphabet, is used as the symbol for
standard deviation. Six Sigma is a term that refers to the statistical measure of
variation, an associated improvement methodology, and a level of quality. You
may see Six Sigma either spelled out or written numerically (6σ). Six Sigma
can be defined in several ways:
 A management philosophy
 A process measurement
 A level of quality
As a management philosophy, Six Sigma is a cultural and behavioral change
methodology that focuses on business processes and how those processes im-
pact customers. The Six Sigma method seeks to improve processes by reducing
variation.
As a process measurement, Six Sigma is a statistical concept that represents
how much variation there is in a process relative to the customer’s specifications.
For example, the checkout lines in a store may move quickly one day and slow-
ly the next.
Normal fluctuations in a process are always present to some degree. These
are called common-cause variations. The variation observed from one proc-
essed checkout to the next and the normal differences between the different
checkout employees are examples of common-cause variation. Variations that
are not normally present in a process are called special-cause variations. Power

failures, for example, occur infrequently and unexpectedly, and they can bring
checkout lines to a standstill. This type of variation would be characterized as
special-cause variation. Decreasing process variation increases the process sig-
ma. The end result is greater customer satisfaction and lower costs. Less varia-
tion in a process provides numerous benefits, including:
 Less waste and rework (which lowers costs)
 Products and services that perform better and last longer
 Happier customers
 Greater predictability in the process
A defect is any error that results in customer dissatisfaction or downstream
employee rework. As a level of quality, Six Sigma measures the number of de-
fects per opportunity, or DPO. Defects per opportunity (DPO) consists of total
defects divided by total opportunities. The relationship between the traditional
three sigma and the Six Sigma process measures is demonstrated in Figure 6.2.
Three sigma equals 2,700 total defects per million opportunities outside the
lower and upper specification limits. Six Sigma equals 3.4 total defects per mil-
lion opportunities below and above the specification limits.

Sigma DPMs
Let’s look at defects per million (DPMs) and percentage acceptable per sigma
values 1, 2, 3, and 6. Decreasing process variation (DPMs) increases the process

FIGURE 6.2 Three Sigma versus Six Sigma Process.



TABLE 6.1

Sigma Value    Percent Acceptable (%)    Defects per Million (DPM)

1σ             68.27                     317,300
2σ             95.45                     45,500
3σ             99.73                     2,700
6σ             99.99966                  3.4

sigma. The end result is greater customer satisfaction and lower costs. Table 6.1
demonstrates the relationship between the process sigma value, percentage ac-
ceptable, and defects per million (DPM).

Defects Are Variation


Defects are variation, and variation is the enemy of effectiveness. A defect is
any variation of a required characteristic of a product or service that renders it
not fit for use. ‘‘Not fit for use,’’ as stated earlier, results in customer dis-
satisfaction or downstream employee rework. Anything that the customer cares
about that is not done right the first time is a defect.
All defects must be recorded. If scrap is created, if rework/repair is neces-
sary, or if work to adjust, correct, or modify the process is required—record it.
A unit that is defective may have one or more defects. For the purposes of calcu-
lating sigma levels, each defect is counted separately.

1. Determine the number of defect opportunities per unit.


2. Determine the number of units processed.
3. Determine the total number of defects made. This includes defects made
and later fixed.
4. Calculate defects per opportunity (DPO) by dividing the total number of
defects by number of units times opportunities.

DPO = total defects / (number of units × opportunities)

5. Use the DPO to calculate the yield.


6. Use the yield to look up sigma in the sigma table (Table 6.1).

Using these steps, let’s calculate the sigma value for the product shown
in Table 6.2.
If there are 28 defect opportunities, 100 units processed, and 3 defects made,
then the DPO is 3 divided by 100 times 28, or 0.00107.
DPO(electric light switch) = 3 / (100 × 28) = 0.00107

TABLE 6.2

Product Units Processed Defect Opportunities Actual Defects

Electric light switch 100 28 3

With a DPO of 0.00107, the resulting yield is 99.893 percent.

(1 − 0.00107) × 100 = 99.893%

Using the yield, we can look up the sigma value in Table 6.3. The yield listed
in the table that is equal to or less than 99.893 percent is 99.865 percent, which
equals a sigma value of approximately 4.5. At 4.5 sigma, there would be 1,350
defects per million opportunities. In stark contrast to 4.5 sigma, a sigma value of
6 refers to just 3.4 defects per million opportunities, or a 99.99966 percent yield.
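The six steps above can be scripted. The sketch below is illustrative only; the sigma lookup uses just the conversion points quoted in this chapter, not a full sigma table.

```python
# Light-switch example: 100 units, 28 opportunities per unit, 3 defects.
units = 100
opportunities = 28
defects = 3                                  # includes defects made and later fixed

dpo = defects / (units * opportunities)      # step 4: 3 / 2,800 = 0.00107
yield_pct = (1 - dpo) * 100                  # step 5: 99.893 percent

# Step 6: abridged (yield %, sigma) pairs quoted in this chapter only.
sigma_points = [(68.27, 1.0), (95.45, 2.0), (99.73, 3.0),
                (99.865, 4.5), (99.99966, 6.0)]
sigma = max(s for y, s in sigma_points if y <= yield_pct)

print(f"DPO = {dpo:.5f}, yield = {yield_pct:.3f}%, sigma is approximately {sigma}")
```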
A sigma level of 3.8 results in approximately 99 percent good yield. That
may sound sufficient, but think how critical processes would function at 3.8 sig-
ma. There would be:
• 20,000 lost articles of mail per hour
• Unsafe drinking water for almost 15 minutes each day
• 5,000 incorrect surgical operations each week
• 2 short or long landings at each airport each day
• 200,000 wrong prescriptions each year
• No electricity for almost 7 hours each month

Sigma levels vary greatly, even within the same industry. For example, a
study done in 1995 showed that airline flight fatalities were greater than 6 sig-
ma, but baggage handling was approximately 3.7 sigma.

Defects and Defective Units


To understand defects and defective units, let’s look at a simple example. A
manufacturing process produces six units of the company’s premier product.
However, quality control finds that five of the units produced in the run

TABLE 6.3

Sigma Value     Percent Acceptable (%)     Defects per Million (DPM)

1σ              68.27                      317,300
2σ              95.45                       45,500
3σ              99.73                        2,700
6σ              99.99966                         3.4

TABLE 6.4

Unit Defects Defective Units

1 1 1
2 1 1
3 2 1
4 1 1
5 0 0
6 4 1

are defective. Furthermore, some pieces exhibit more than one defect
(with one unit having as many as four defects). The raw data is presented in
Table 6.4.
As shown in Table 6.4, six units were produced with a total of nine defects
and five defective units. Using the data presented, we can measure quality in
four different ways:

1. By the total number of defects


2. By the total number of defective units
3. By the proportion of defective units to total units produced
4. By the average value of ‘‘defects per unit’’

Total number of defects is simply the number of defects observed throughout


the process run. In our example, the total number of defects is equal to nine.
Total defective units, easily measured, is simply the number of units that are
affected by at least one defect. In our example, five units have at least one de-
fect. This also means that, out of the six units produced, only one unit is without
any defects.
The proportion of defective units is calculated by dividing the number of
defective units by the total number of units produced. In our example, the
equation is:

Proportion defective = 5 / 6 = 0.833

In contrast to the preceding equation, the number of defects per unit is calcu-
lated by dividing the total number of defects by the total number of units pro-
duced. The equation looks like this:
9
Defects per unit ¼ ¼ 1:5
6
But why did these variations arise? What could have been the sources of the
variability? Figure 6.3 provides a graphic representation of the sources of
variability.

FIGURE 6.3 Sources of Variability.

Sources of Variability
Two sources of variability can be identified: systematic variability and error
variability. Systematic variability comes from the treatment performed in the
experiment. It is variability between two or more groups, and it helps us to de-
termine whether the treatment has had any effect. Error variability, on the other
hand, comes from unidentifiable sources. It is variability within groups, and it
makes it difficult for us to determine whether the treatment has had any effect.
Figure 6.3 represents the three major causes of variability.
Process variability is the variability we are most used to dealing with, and it
comes from our everyday working processes, the human, technical, and proce-
dural factors that contribute to variability. This is often the first area we look at
when attempting to improve effectiveness or efficiency. That is a correct place
to start, but we must also recognize that process variability may come from
the variability of materials or design. Material or process input variability is the
second major cause of process defects. This is often the case even when re-
ceived products and services are within specification. Often, the variability of
the process inputs is a major cause of product or service inefficiency and in-
effectiveness. The lack of robustness and variability in design is also a major
cause of product and process rework, repair, and failures. This most often comes
from the lack of CDOV and IDOV understanding by designers.

First-Pass Yield
First-pass yield (FPY) is a quality metric that measures the amount of rework in
a given process. Specifically, first-pass yield equals the number of good units the
process yields divided by the number of units going into the start of the process:

FPY = pieces out / pieces in

FIGURE 6.4 Six-Step Process Example.

To better understand FPY, let's look at an example: Process 1 has 1,000 units
entering into it, but only 986 exit as good units. Therefore, the FPY of
process 1 is:

Y1 = 986 / 1,000 = 0.986
Now let’s get a bit more complex. Process 1 is, in fact, a work activity of a
much larger work center. The work center uses six work activities (1, 2, 3, 4, 5,
and 6), as illustrated in Figure 6.4.
Table 6.5 shows the units in the six-step process example and the resulting
units in and out for each subprocess and the resulting FPY.
The values displayed in Table 6.5 represent the FPY for each of the subpro-
cesses. This is applied and demonstrated in Figure 6.5.
But what is the FPY of the process itself? The calculation of the FPY for the
total process is simply the product of the FPYs for all subprocesses. Using the
data from Table 6.5, the process yield equals:

Y = (0.986)(0.998)(0.968)(0.952)(0.986)(0.990) = 0.885
This value is known as the total process yield (TPY). Although the value of
0.885 indicates that each step in the process may require rework, the value itself
does not indicate the amount of rework that would be necessary.
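A minimal, illustrative sketch computes the FPY of each step in Table 6.5 and multiplies them together to obtain the total process yield.

```python
import math

# (units in, good units out) for each of the six work activities in Table 6.5.
steps = [(1000, 986), (986, 984), (984, 953),
         (953, 907), (907, 894), (894, 885)]

fpys = [units_out / units_in for units_in, units_out in steps]
tpy = math.prod(fpys)                        # total process yield

print([round(f, 3) for f in fpys])           # [0.986, 0.998, 0.968, 0.952, 0.986, 0.99]
print(round(tpy, 3))                         # about 0.885
```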
Unfortunately, FPY and TPY are not effective as quality metrics. The values
they generate are too vague to assist in determining where process improvement
needs to be addressed. To achieve the desired levels of effectiveness, another
metric must be used—a metric that focuses on total defects.

TABLE 6.5

Work Activity Units In Units Out FPY

1 1,000 986 0.986


2 986 984 0.998
3 984 953 0.968
4 953 907 0.952
5 907 894 0.986
6 894 885 0.990

FIGURE 6.5 Six-Step Process with FPY.

Focusing on total defects helps reduce cycle time per unit, work-in-progress (WIP)
inventory carrying costs, delivered defects, early-life failure rate, and defect
analysis and repair cost per unit.

Rolled Throughput Yield


In the previous section, we examined the use of first-pass yield (FPY) as a pro-
cess metric. In doing so, we saw that the FPY considers only two things: what
went into a process step and what came out. This means that FPY is not a good
indicator of the nature of the process; it cannot adequately indicate which steps
in the process need to be improved.
In contrast, rolled throughput yield (RTY) is the probability that a single unit
can pass through a series of process steps free of defects. This means that RTY
takes into account rework, which makes it a better indicator of the nature of the
process. To understand how it is calculated, let’s begin by looking at a metric
that is similar to FPY. It is known as defects per unit.
Defects per unit (DPU) represents the number of product defects divided
by the number of finished products. For example, if there are 34 defects in
750 units, the DPU will be 34 divided by 750, or 0.045.

DPU = total number of defects / total population = 34 / 750 = 0.045
DPU is the average number of defects observed when sampling a population.
DPU = total number of defects / total population
In other words, DPU is the ratio of the number of defects over the number of
units tested.
Consider 100 electronic assemblies going through a functional test. If 10 of
these fail the first time around, we know that the FPY would be:
Y(assemblies) = pieces out / pieces in = 90 / 100 = 0.90

DPU takes a fundamentally different approach to the traditional measurement
of yield. In the preceding example, the DPU is:

DPU = total number of defects / total population = 10 / 100 = 0.10

Since most processes consist of multiple subprocesses, it is important that we


understand how to calculate the DPU for an entire process. Total defects per unit
(DPU value for the entire process) equals the summation of each DPU from the
various subprocesses.

TDPU = DPU1 + DPU2 + DPU3 + ... + DPUn

As you may recall, TPY is the product of all subprocess FPY values.
If you want to reduce cycle time per unit, reduce carrying cost for work in
progress, reduce delivered defects and early failure rates, then you need to be
able to calculate RTY. As we show in the next section, RTY can be calculated
only after determining the TDPU for the process that needs improvement.

Calculating RTY
The calculation of RTY is the constant e (2.71828182845904) raised to the neg-
ative power of total defects per unit (TDPU) for the process.

RTY = e^(−TDPU)

To understand the mechanics of calculating RTY, let’s look at an example.


During step 1 of a process, the following data is collected:

Units produced 340


Total defects 10
Units reworked 5
Units scrapped 5

Using the formula for DPU, the DPU is calculated to be:


DPU = total number of defects / total population = 10 / 340 = 0.029
Therefore, the RTY for this process is

RTY1 = e^(−0.029) = 0.971
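A short sketch extends the calculation to a multistep process: sum the per-step DPUs into TDPU, then take e to the negative TDPU. Only step 1 comes from the example above; the figures for steps 2 and 3 are hypothetical values added just to show the summation.

```python
import math

# (units through the step, defects observed at the step); step 1 is from the text,
# steps 2 and 3 are hypothetical values added to illustrate the summation.
step_data = [(340, 10), (340, 4), (340, 2)]

tdpu = sum(defects / units for units, defects in step_data)
rty = math.exp(-tdpu)        # probability a unit passes every step defect-free

print(round(tdpu, 3), round(rty, 3))
# For step 1 alone: DPU = 10/340 = 0.029 and RTY = e**-0.029 = 0.971
```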

RTY can be used to estimate resource requirements. By subtracting the value


of RTY from 1, we can determine the probability of a unit having at least one
defect.
Unit has 1 or more defects = 1 − e^(−TDPU)

The value generated from the formula can then be used to estimate the minimum
additional resources required, where X is defined as the number of units that
must be produced:

Minimum additional resources = X(1 − e^(−TDPU))

Example 1 You are required to produce 500 units, and your process has a
TDPU of 0.1899. Based on the preceding equations, the minimum number of
additional resources that will be required by this process equals:

Minimum additional resources = 500(1 − e^(−TDPU))
                             = 500(1 − e^(−0.1899))
                             = 500(1 − 0.826)
                             = 500(0.174)
                             = 87

From the calculation, we now know that to produce 500 nondefective


units, we need to rework and/or scrap at least an additional 87 units to get
500 perfect units out of the current process. What type of impact do you
think this increase will have on labor costs, material costs, and planned
cycle time?

Example 2 If you are required to deliver 500 units per day, and your process
has a TDPU of 0.1899, how many units need to be produced? To answer this
question, we will use the reciprocal of the RTY value in the following way:

Production units = 500 (1 / e^(−TDPU))
                 = 500 (1 / e^(−0.1899))
                 = 500(1.209)
                 = 605

You need to produce 605 units per day to deliver 500 good units daily. The
remainder will either be reworked or scrapped during the same period. Eventu-
ally, those pieces that are reworked will enter the product stream as input units
and be delivered to customers.
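Both planning calculations follow directly from the RTY formula. A minimal sketch, assuming the same TDPU of 0.1899 and rounding up because partial units cannot be produced:

```python
import math

tdpu = 0.1899
required_good_units = 500
rty = math.exp(-tdpu)                                            # about 0.827

# Example 1: minimum additional units that will be reworked or scrapped.
additional_units = math.ceil(required_good_units * (1 - rty))    # 87

# Example 2: units that must be started to deliver 500 good units.
units_to_produce = math.ceil(required_good_units / rty)          # 605

print(additional_units, units_to_produce)
```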

RTY Summary
How does the RTY affect your business processes? It affects Q$SR:
• Quality
• Cost

• Schedule
• Risk

RTY is a better indicator of the nature of the process because it:
• Shows the extent of the hidden factory
• Indicates which process steps need to be improved
• Can be used to plan for the effect of the hidden factory while working to improve the process

ACQUIRE ALL PROCESS DOCUMENTATION

We cannot emphasize enough the need to acquire all process data at the begin-
ning of the improvement process. There is nothing more embarrassing or waste-
ful of your team’s time than to start this process and find out during the FMEA
or when implementing an improvement initiative that a regulation or safety re-
quirement has been overlooked. Use the following as a checklist for your pro-
cess documentation.

Process requirements. Process requirements include all necessities to run the
process, such as:
• Personnel
• Materials
• Facilities
• Support services

Contractual requirements. There may be more than one contract associated
with any given process. Acquire all contractual information, such as:
• Deliverables to customers
• Quality and reliability
• Packaging
• Rates and schedules

Regulatory and safety. Safety and regulatory requirements often parallel
each other. In most cases, safety includes:
• EPA standards
• OSHA standards
• Applicable ISO standards
• Personal protective equipment
• Fire-prevention equipment

Procedures and work instructions. These include every written procedure:
• Procedures
• Work instructions

• Work standards
• Graphical work aids
• Checklists

Supply chain management information. This includes:
• Who are the suppliers, internal and external, to the process?
• Supplier contracts and contract requirements
• Supplier data such as SPC, reject rates, and warranty returns
• Supplier qualifications and certifications
• Supply chain management programs such as Lean Six Sigma, ISO 9000, RCM, and others, if applicable
Process data. During the define phase, you begin to understand your process
through its metrics and to learn whether the process is under control. The general
types of data that will provide you with this information are:
• Statistical process control (SPC) data
• Warranty data
• Process up and down times
• Process rolled throughput yield (RTY)
• Process defects per unit (DPU)
• Process cost, overall and by work center
• Process personnel requirements
• Process takt times and waiting times

Now that we understand process variation and the need for process documen-
tation and data, we can begin to better understand our process by developing the
process and value stream maps.

PROCESS MAPPING

Process mapping is a systematic/systems approach to documenting the steps/


activities required to complete a task. Process maps are diagrams that show—in
varying levels of detail—what an organization does and how it delivers services.
The process map is the key and first step in knowing and understanding your
business processes. If you have not completed a process map, or recently updated
it, you will always be surprised about what is actually being executed. Process
maps are graphic representations of:
• What an organization does
• How it delivers services
• How it delivers products
Process mapping also identifies the major processes in place, the key activi-
ties that make up each process, the sequencing of those activities, the inputs and

resources required, and the outputs produced by each activity. Process maps are
a way of ensuring that the activities making up a particular process are properly
understood and properly managed in order to deliver appropriate customer ser-
vice. They show a series of repetitive activities or steps used to transform in-
put(s) into output(s).

What Is a Process?
A process is a series of repetitive activities or steps used to transform input(s)
into output(s). More specifically, it is a transformation of inputs, such as people,
materials, equipment, methods, and environment, into finished products through
a series of value-added work activities.
The absence of clearly defined processes makes any activity subject to varia-
tion and thereby subject to ineffectiveness. Effective processes are understood
and documented. Three controllable factors are key to any process. They are
quality, cost, and schedule (Q$S). Each of these must be described, quantified,
and analyzed as part of the process.
Process maps reveal hidden processes, identify opportunities for improve-
ment, identify how to improve process layout/flow, and aid in developing the
process FMEA.
Apply process mapping whenever you need to understand, manage, or im-
prove a process; whenever you are developing a new process or improving an
existing process; and whenever you need to evaluate defects in any process.
You should always apply process mapping! The process map is the best tool to
capture all of the inputs to the particular process under scrutiny. Earlier, we
talked about the transfer function:
Y = f(X1) + f(X2) + ... + f(Xn)

In order to find the right factors, we need to first identify them. Think of this
process as a funnel that, as we progress through the tools, filters out the mun-
dane Xs to leave us with those critical few Xs that drive the process. These are
known as the key process input variables (KPIV). There are three steps in pro-
cess mapping.

1. Identifying the process


2. Defining the process
3. Mapping the process

The three-step process is shown in Figure 6.6.

Identifying the Process


Implementing a Six Sigma improvement project starts with identifying the pro-
cess that is going to be reviewed for improvement opportunities. You must first

FIGURE 6.6 Three Process Mapping Steps.

determine what the process is and why you want to analyze it. To do this, the
items shown in Table 6.6 need to be defined.

Defining the Process


Defining a process begins with understanding what the process does and how
the associated activities flow. All processes have the same basic functions, as
indicated in Figure 6.7.

TABLE 6.6

Item Description

Name of the process What is the name used to identify or describe the process?
Process owner Who manages the process and has the authority to change
the process? The person who has the authority to
change the process is the process owner.
Goals and objectives Identify the goals and objectives of the process. What is
the process trying to accomplish? What do you want to
achieve by reviewing your processes?
Process improvement What type of improvements do you want to make, and
purpose and scope why? How would you like to accomplish these
improvements?
Customer of the process Who does the process serve, and who benefits? Who is
the customer of the process? What are the output
requirements: type, volume, quality, and schedule?
How do the output requirements affect the input
requirements?
Products and services What are the products or services produced by the
process? What material or information is required as
input to produce the product or service? Who are the
suppliers? What are all the elements of the required
input? How do they affect the output?
Documentation What are the documents, regulations, and procedures that
govern the process? Review the process documentation.

FIGURE 6.7 Process Basic Functions.

You will need to identify and define:
• The work activity or work center that begins the process
• The work activity or work center that ends the process
• Any work groups or work cells within the process
• The element or work activity that begins and ends each process element
• The subprocesses or elements of the overall process

Types of Processes
Implementing continuous measurable improvement begins with identifying the
process we are going to analyze and determining who is specifically responsi-
ble for the process and who has the authority to change it. But first we must be
able to distinguish between the four types of processes: industrial, administra-
tive, management, and engineering. (See Figure 6.8.)
We begin our discussion of process types by examining first what is meant by
industrial process.

FIGURE 6.8 Four Types of Processes.



TABLE 6.7

Form Examples

Basic materials Iron ore, steel, coal


Subassemblies Computer boards, engine parts
Equipment for repair A faulty automobile engine
Equipment for rework Aircraft requiring upgrade and modification

Industrial Processes Industrial processes are thought of as ‘‘production


processes’’—processes that produce, repair, rebuild, or upgrade things. The
inputs to industrial processes are raw materials that can take various forms, as
shown in Table 6.7.
Industrial processes lend themselves most easily to the technical resources
for process improvement. As indicated in Figure 6.9, the output from one indus-
trial process can be the raw material of another industrial process.
Processes such as repairing, rebuilding, or upgrading things are also indus-
trial processes. In these cases, the items to be repaired, rebuilt, or upgraded,
together with the new parts, rework kits, or upgrades, are the raw materials of
the process.

Administrative Processes Administrative processes are processes that


produce the paper, data, and information that other processes use. They also
produce products used directly by the customers (internal and external), such as
tax returns, paychecks, reports, and data. (See Figure 6.10.)
Administrative processes include some of the most complex and bureaucratic
challenges in the pursuit of world-class competitiveness. The streamlining of
administrative processes affects all other processes in an organization. Special
attention must be paid to the dilatory effect of inefficient and ineffective

FIGURE 6.9 Industrial Processes.



FIGURE 6.10 Administrative Processes.

administrative processes on personnel morale, the team spirit, management pro-


cesses, and industrial processes.
Management Processes Management processes produce decisions. They include
all of the important decision-making processes we deal with on a daily basis. Using
a structured, fact-based decision-making process is critical to the success of an
organization. (See Figure 6.11.)
Engineering Processes Engineering processes produce a broad range of prod-
ucts, services, and decisions. These processes are critical to every phase of the
product life cycle. (See Figure 6.12.)
In addition, there are numerous engineering processes related to industrial
engineering, manufacturing engineering, quality engineering, reliability engi-
neering, software engineering, and test engineering.

FIGURE 6.11 Management Processes.



FIGURE 6.12 Engineering Processes.

Mapping the Process


Before any business process can be known, understood, controlled, measured,
and improved, it must be process- and value-stream-mapped. There are many
process mapping approaches, software, and symbols. Here we present a basic
approach to process mapping. Whatever approach you decide to use to map
your business process, you must use a standard and consistent method through-
out your organization. Select one approach to process mapping and use it every-
where in your organization. There are five steps for developing process maps:

1. Collect all information available on the process (procedures, work instruc-


tions, specifications, regulatory requirements, etc.).
2. Based on the documentation and the knowledge of your team, develop an
initial process map. This process map represents the documentation you
have collected and the knowledge your team has of the process.
3. Take the process map your team has developed and walk the process.
Walk the process several times. If there is more than one shift, walk
through the process during each shift. If there is more than one person
executing an administrative process, sit down at the desk of each person
and review the process map with that person. You will find differences
between the documentation and the process, and you will find differences
between individuals performing the process.
4. Resolve the differences in the process map. Should the process be con-
ducted as stated in the documentation, or should the documentation be
changed to reflect the procedure?
5. Develop the as-is process and develop the process map to reflect the pro-
cess as it is actually being executed.

Process Map Symbols


Many process mapping symbols are available for use. Symbols are designed
specifically for electronic flowcharting, mechanical process flowcharting, and

FIGURE 6.13 Process Map Symbols.

computer program flowcharts, among others. This book uses only 11 basic sym-
bols, as shown in Figure 6.13.

Process direction arrow. Shows the direction of the process flow. This arrow
points to the next step in the process.
Process input arrow. Shows inputs into the process flow. Usually a list of
inputs appears beneath the arrow.
Process output arrow. Shows outputs from the process flow. Usually a list of
the outputs appears beneath the arrow.
Process function box. Represents a process work center that contains more
than one work activity. A brief description of the function is written in the
box. Example process functions:
• System integration
• Finishing
• Fault isolation
• Rework
• Repair

Process work activity box. Represents a single work activity within the pro-
cess flow. A brief description of the work activity is written in the box
(e.g., grinding, welding, brazing, or document review).
Decision point. A decision or branching point for a process function or work
activity. Lines representing different decisions emerge from different
points of the diamond. The diamond contains a brief description of the
process decision point (e.g., inspection, test, or evaluation).
Automated input and output. Indicates a sequence of commands that will
continue to repeat until stopped manually.

Delay. Represents any process delay.


Manual/paper input/output. Represents material or information entering or
leaving the system, such as a customer order (input) or a product (output).
Process inception, termination, or connection. As a connector, this symbol
indicates that the flow continues where a matching symbol (containing the
same letter) has been placed. As an inception or termination point, this
symbol marks the starting or ending point of the system and usually con-
tains the word start or end.

Levels of Process Mapping


Process maps help us understand, manage, and improve processes and can be
developed for various levels in the process. Each level will depend on your ser-
vice or product and the approach to the improvement process. Each level of
system complexity adds an analytical burden in the amount and type of data
taken at the various points in the process.
At the simplest level, process maps help ensure that you thoroughly under-
stand your own process: how it works, who does the work, what inputs and re-
sources are required, what outputs are produced, and the constraints under
which work is completed. Process mapping is accomplished at four levels:
• Level 0. Enterprise level
• Level 1. Organizational/functional level
• Level 2. Operations level
• Level 3. Work activities level

Level 0: Process Map the Enterprise Level


A Level 0 process map represents the top view of the overall enterprise. It is the
‘‘executive management’’ vision of the organization’s requirements, functions,
and business processes. It defines the enterprise and identifies the overall mea-
sures of performance at the management level for each center (function).
The first step in developing the Level 0, or enterprise, process map is to deter-
mine the three elements of the enterprise, as indicated in Figures 6.14 and 6.15.
Once you know and understand the requirements of the enterprise at the
highest level, you are prepared to develop the initial as-is process map. Create
this process map with your team from the documentation and the information
developed in Figure 6.15. Do not crowd too much information onto a single page;
remember, you should be able to look at the process map and understand the
process inputs, value-added steps, and outputs for each work center. Figure 6.16
represents the first page of a Level 0 process map.
The process map for the casting enterprise in Figure 6.16 begins with the
beginning symbol (S), indicating the start of the process. Process inputs are then
designated with an input arrow. The first enterprise level function is technology

FIGURE 6.14 Level 0 Process Map Development.

development. The process work center output is designated with an output


arrow. The process map continues in this way until you come to the process
connection symbol (5/6) indicating that the process continues on the following
sheet with process work centers 5 and 6, as indicated in Figure 6.17.

Level 1 Process Map: The Organizational/Functional Level


Level 1 is the organizational/functional level of the process work centers within
the enterprise. At this level, we are defining the process work centers of the

FIGURE 6.15 Level 0 Process Map Enterprise Elements Example.


FIGURE 6.16 Level 0 Enterprise Process Map Part 1.
FIGURE 6.17 Enterprise Level 0 Process Map Page 2.



overall process. Level 1 describes the ‘‘operational management’’ view of the


process.
Level 1 also defines organizational elements (e.g., departments, programs,
and functions). Each element (work center) is numbered (the numbers are later
carried over to the lower-level process mapping).
• At this point, you will define the management metrics and the process to be used.
• Process control and control metrics will be at a lower level.
• Process footnotes are used at all levels of process mapping.
• The footnotes will help identify what the process requirements, system requirements, stakeholders, and management metrics are.

Process requirements, system requirements, stakeholders, and management
metrics are identified on the process map (process control and control metrics
will be defined at a lower level). Figure 6.18 is
an example of a Level 1 process map for the casting enterprise manufacturing
process.
The Level 1 process maps shown in Figures 6.19 and 6.20 provide examples
for purchasing and knowledge management work centers.

Level 2 Process Map: Operations Level


Level 2 process maps are at the operations level and normally consist of work
activities. At this level, there is a mixture of functional elements and work activ-
ities. The measures at this level are the process control metrics for the opera-
tions level work activities. Figures 6.21 and 6.22 are examples of Level 2
process maps.

Level 3 Process Map: Work Activities Level


Process maps can be developed for various levels in the process. Each level of
system complexity adds an analytical burden in the amount and type of data
taken at the various points in the process. Therefore, it is important to map the
process to the level that enables management of the process to meet the require-
ments. Remember, the purpose of the process map is to describe the process
properly so it can be quantified and analyzed.
Level 3 process maps expose deeper processes from within a Level 2 func-
tion or activity. At Level 3, all elements are work activities. Therefore, only
work instructions and control metrics are shown.

VALUE STREAM MAPPING

Value stream mapping (VSM) is a unique kind of process mapping that lists and
relates all of the elements and actions required to bring a product/service from
required inputs to delivery to the customer. The scope of the VSM needs to
FIGURE 6.18 Level 1 Process Map for Manufacturing Work Center.


FIGURE 6.19 Level 1 Process Map for Purchasing Work Center.
FIGURE 6.20 Level 1 Process Map for Knowledge Management Work Center.
FIGURE 6.21 Level 2 Process Map Casting Example.
FIGURE 6.22 Level 2 Process Map Purchasing Example.



TABLE 6.8

Application Begins with . . . Ends with . . .

Product Raw material Customer receipt


Process Initiation Completion
Administrative Assignment Acceptance
Design Concept Production

include the appropriate supply chain for the level of the enterprise being evaluated, for example:
• The enterprise functions
• The divisions of the enterprise
• Work centers
• Product lines or processes

Applications for VSM are listed in Table 6.8.


Of course, for VSM to be of use during the define phase, it is essential that
the following statements be true:
• Standard metrics are used.
• The process is thoroughly understood.
• Leveling has been implemented.
• There is a desire to continually eliminate waste (e.g., information, time, materials).
• The goal is a continuous-flow pull system based on takt time.
• Value is identified from the customer's perspective.
• VSM boundaries are established (and isolated improvements avoided).
• An initial process walk is performed.
• Actual times and data are captured.

VSM can be performed at different levels of detail and focus. Mapping out
the activities in your production process will help you to know and understand
the current state of the process activities and guide you toward the future desired
state. Value stream mapping at this operating process level is called operational
value stream mapping (OpVSM).
However, two other important applications of value stream maps correlate
directly to Lean enterprise and organization assessments. The first is enterprise
value stream mapping (enterprise VSM) and the other is organizational VSM
(Org VSM). All three of these value stream maps are important in the Lean im-
provement DMALC process and will be presented and discussed in more detail
later. First, let us take a closer look at value stream analysis.

VALUE STREAM ANALYSIS

Value stream analysis will identify and rate the activities performed by an or-
ganization according to customer requirements and expectations. Value stream
analysis involves identifying and evaluating three types of activities:

1. Value-added (VA)
2. Non-value-added (NVA)
3. Necessary non-value-added (NNVA)

Value-Added Activities
Value-added activities are those activities that contribute directly to achieving
customer requirements and expectations. These activities are not candidates for
elimination, but may be optimized for effectiveness and efficiency using Enter-
prise Excellence methods. These are (1) activities the customer would be willing
to pay for, (2) activities done correctly the first time, and (3) activities that
transform inputs to produce an output.

Non-Value-Added Activities
Non-value-added (NVA) activities are those activities that do not contribute to
achieving customer needs, wants, and requirements. NVA activities are pure
waste and should be eliminated immediately. Examples include excess inven-
tory, waiting time, and double inspection. It is important here to note that a
non-value-added activity in one process or organization may be a value-added
activity in another. The crucial question to ask here is: ‘‘Does this activity re-
late directly to achieving the customer’s needs, wants, or requirements?’’ And
don’t forget to address the needs, wants, and requirements of the internal and
intermediate customers as well as the end user.

Necessary Non-Value-Added/Business Value-Added Activities


Those activities that do not contribute to achieving customer needs, wants, and
requirements but are necessary to meet enterprise or organizational constraints
are considered to be necessary non-value-added activities. They are sometimes
referred to as business value-added activities as they are ‘‘necessary’’ because
of enterprise operational requirements or regulatory requirements relating to
safety, security, law, and so forth. The evaluation of NNVA activities is not as
straightforward as VA and NVA activities. The crucial question to ask here is:
‘‘Is this activity really necessary?’’ If an NNVA activity is really necessary, it
certainly cannot be eliminated, and controls need to be in place to ensure that it
is being performed and maintained to the necessary level. However, if an NVA
is masquerading as an NNVA, then it is a pseudo-NNVA activity and is a prime
candidate for Lean enterprise streamlining or elimination.

The two most prevalent conditions in which pseudo-NNVA activities turn up
are when employees blindly accept legacy practice (‘‘but we have always done it
this way’’), and when management insists on performing their workers’ tasks
instead of delegating and empowering. When analyzing a VSM, always check
for pseudo-NNVA activities, such as required checking and approval routings
that are needlessly extensive, require approval authority beyond that which is
needed, and require management review that is not justified. These conditions,
coupled with the law of unintended consequences, are what W. Edwards Deming
was referring to when he said that management is responsible for 80 percent of
their problems.
Elimination or streamlining of true NNVA activities requires a change in the
regulatory environment (FAR, FDA, FAA, EPA, etc.), new/reorganized facili-
ties, or other major investments by management. Though some NNVA activi-
ties, such as logistic activities involving the movement of material, cannot be
eliminated, they may be optimized using such tools as time-motion studies and
facilities layout planning. Examples of NNVA activities include transportation
of materials, unpacking supplies, regulatory data collection and reporting, and
safety precautions.
It is important to note that a change in the business context/environment usu-
ally results in a change in makeup of NNVA activities. Many companies that
moved their manufacturing operations offshore in the 1980s and 1990s found
this out the hard way. For instance, while some regulatory constraints may be
less stringent in foreign countries, other constraints are more stringent, or there
may be new, unfamiliar laws and/or cultural or labor restrictions. Other costly
NNVA activities that managers of offshore manufacturing facilities soon dis-
covered were the problems of translating work instructions into different lan-
guages and/or dealing with an unskilled, illiterate workforce.
In order to stay competitive, an organization must continuously improve all
aspects of its business to cope with existing as well as new competitors and
increasing customer demands. The goal of value stream analysis is to identify
waste in the value stream that can then be targeted for reduction or elimination.
Value stream analysis is used to determine opportunities for improved effi-
ciency that will enhance the performance of the organization in achieving Enter-
prise Excellence.

VSM Symbols
In the flowchart in Figure 6.23, you may notice a symbol that you have not seen
before (the W in a triangle). This is one of several VSM symbols that follow
established conventions.

Example Process Map


The process map presented in Figure 6.24 identifies VA, NVA, and NNVA activ-
ities. Activities in the mapping can be summarized as shown in Table 6.9.
Note that inspection operations are always non-value-added unless they are
specifically required by the customer. Cleaning, on the other hand, becomes a

FIGURE 6.23 Value Stream Map Symbols.

necessary non-value-added activity and then an NVA as you get closer and
closer to optimum cleaning level. This particular cleaning operation is a chemi-
cal clean and is therefore required. What do you think would happen if the
cleaning operation were eliminated? Figure 6.24 represents the initial transition
of a process map to a value stream map, applying the value-added and non-
value-added notations.
The example of an operational-level value stream map shown in Figure 6.25
demonstrates how these symbols and the value stream data you have acquired
integrate into a process/value stream map.
Just as with process maps, value stream maps can be developed at different
levels of the organization, from Level 0, the enterprise value stream map, to
Level 3, work activities. There are some additional considerations and data
requirements for establishing value stream maps that we will discuss at each level.

TABLE 6.9

Activity                        VA    NNVA/BVA    NVA

Receipt and inspection                              ✓
Cleaning                        ✓        ✓          ✓
Assembly                        ✓
Brazing                         ✓
Inspection and test                      ✓
Rework and repair                                   ✓
Package, mark, and inspect      ✓
FIGURE 6.24 Process Map with VA, NVA, and NNVA Noted.
FIGURE 6.25 Completed Value Stream Map Example (lead time = 360 days; value-added time = 38 days).

Enterprise Value Stream Mapping


Enterprise value stream mapping (enterprise VSM) is an application of value
stream mapping to the overall enterprise system. It correlates to the system-level
focus of a Level 0 process map and is directly applicable to enterprise
assessments.
The purpose of enterprise VSM is to understand the overall enterprise and all
the opportunities for Leaning within it. It is necessary to visualize the business
activities that are carried out on a daily basis in the enterprise. This will provide
a view of the enterprise as a single business entity integrating all of its business
functions.
Through this view, it becomes clear that decisions made by sales affect engi-
neering, decisions made by legal affect HRO, decisions made by R&D affect man-
ufacturing, and so on, and that all these processes and decisions are not as remote
from customer satisfaction as the traditional stovepipe view would indicate.
Enterprise VSMs need to be completed before other Lean assessments/
activities are started. This will ensure that suboptimization of the overall system
does not occur due to improper leaning, or over leaning, of a subsystem process.
Other advantages of performing an enterprise VSM before starting other Lean
activities include:
• Ensuring that Lean improvement benefits are tied directly to the enterprise's bottom line
• Finding the most opportune areas in the enterprise for Lean improvement
• Helping to prioritize organizational areas for Lean improvement
• Aiding in finding and coordinating multiple areas for Lean improvement

Enterprise value stream mapping involves applying the define and measure
phases of the DMALC process to the higher-level management, business, and
operational functions of an enterprise. All top-level functions/elements of the
enterprise, including supply chain management, need to be included in the
enterprise VSM.
It is critical that any improvement effort start with a clear understanding of
the value of the product as perceived by the customer. Otherwise, you may end
up improving a value stream that efficiently provides customers with something
they simply don’t want.
Mapping your enterprise is an extensive job. It requires an understanding of
the different functions within the enterprise and their interrelationships. This
needs to be accomplished by a cross-functional, multidisciplinary team. The
Lean enterprise assessment provides an evaluation of what the enterprise does
and how well it does it. The data collected will include:
• Vision
• Mission
• Objectives

• Historical data
• Core competencies
• Infrastructure
• Goals

An enterprise-level value stream map is developed to provide a visual insight


to the overall enterprise system and includes the supply chain, functional acti-
vities, and management control points. The enterprise VSM provides an insight
to the efficiency of the enterprise and identifies opportunities for improvement
at the system level.
The enterprise value stream map satisfies the define and measure portion of
the DMALC process at the system level and will include:
• The enterprise-level inputs
• Each functional area of the enterprise
• Management control points
• The enterprise-level outputs

At this level, the metrics for each input, functional area, and output are the
higher-order business measures of departmental budgets, worker-hours, over-
head costs, and other resources needed to accomplish each function.
When used appropriately, an enterprise value stream map provides the basis
for measuring, identifying, and optimizing the enterprise. The steps in establish-
ing the enterprise value stream map are:

1. Form a cross-functional team with members from each organizational


function.
2. Make a first draft of the enterprise process flow using Post-it notes.
3. Illustrate the separate activities of the enterprise using map symbols, cop-
ies of documentation, and illustrations.
4. Collect data and measure, at the highest level, cycle times, hours per unit,
rework and reject rates, and operating costs.

The team develops a basic understanding of the enterprise flow by creating


a map with Post-it notes. This is done in a brainstorming environment with
input being made from each of the team members from the diverse functional
areas.

Organizational Value Stream Mapping


Organizational value stream mapping is an application of value stream mapping
to a specific organizational function, department, or subsystem. This correlates
to the level of focus of a Level 1 process map. Org VSM is directly applicable to
organizational assessments.

An organizational value stream includes the various elements, processes, and


activities that make up an enterprise VSM functional block. For instance, the
supply chain management functional block from the enterprise VSM could be
expanded into more detail in an organizational value stream. Or, continuing
with the example started in the enterprise VSM analysis section, the operational
processes and their relationships that make up the timekeeping function could
be developed by use of an organizational value stream for timekeeping.
The organizational value stream map satisfies the define and measure portion
of the DMALC process at the organizational level and will include:
• The organizational-level inputs
• Each process of the organization
• The boundaries and interfaces between the processes
• The organizational-level outputs

The purpose of the organizational value stream analysis is to find and priori-
tize the most opportune processes within an organization for Lean improvement
projects. In rare instances, an Org VSM analysis reveals that the best approach
would be to lean the overall organization. This would require extensive planning,
reorganization, and process reengineering. Due to their scope, organization-wide
projects, like enterprise-wide Lean projects, are usually reserved for Black Belts.
Figure 6.26 is an example of an organizational value stream map.
In Figure 6.26, you will begin to see the utility of a value stream map. The
overall lead or processing time is 360 days, of which 38 days are value-added.
The delay time from step 100, customer need, to step 200, concept developed, is
85 days waiting time. You can easily see which steps are value-added and non-
value-added. This scenario can be followed through the entire process map.
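The arithmetic behind the figure is worth making explicit: lead time is the sum of value-added and waiting (non-value-added) time across all steps, and the value-added ratio is value-added time divided by lead time. The short Python sketch below illustrates the roll-up; the step names echo Figure 6.26, but the durations are illustrative placeholders rather than the figure's actual values.

# Minimal sketch: roll up value stream timing from step-level data.
# Step names and durations are hypothetical placeholders.
steps = [
    # (step, value_added_days, waiting_days)
    ("Customer need is voiced", 0, 85),
    ("Concept developed", 5, 25),
    ("Submitted for funding", 1, 7),
    ("Prototype developed", 24, 95),
    ("Prototype tests", 3, 20),
    ("Evaluate results", 5, 25),
]

value_added = sum(va for _, va, _ in steps)
waiting = sum(w for _, _, w in steps)
lead_time = value_added + waiting

print(f"Lead time:         {lead_time} days")
print(f"Value-added time:  {value_added} days")
print(f"Value-added ratio: {value_added / lead_time:.1%}")

A value-added ratio of only a few percent, as in the 38-of-360-days example above, is typical of an unimproved value stream and points directly at the waiting time between steps as the first target for improvement.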

Process Walkthrough
You have created the process maps from the existing documentation and knowl-
edge of the team members. Following the completion of the process and value
stream maps by the team, the next step is to perform a walkthrough of the pro-
cess. This is a detailed review of the process, comparing the process being per-
formed on the shop floor or in the office with the process map and all associated
documentation. The purpose of the walkthrough is to:
• Determine whether the process and value stream maps represent the true as-is process
• Determine whether the process documentation reflects the true as-is process
• Determine which are the correct steps or procedures and modify the process map or documentation, so that you then have an as-is process that is stable and can be measured and controlled
[Figure 6.26 maps a development value stream from customer need is voiced (100) through concept developed (200), submitted for funding (300), approved (400), prototype developed (500), prototype tests (600), and evaluate results (700), with the value-added and waiting time recorded for each step. Lead time = 360 days; value-added time = 38 days.]
FIGURE 6.26 Organizational Value Stream Map.
One key point to the success of the process walkthrough is that it is not an audit. This is very important for acquiring all the information: if individuals think they are being audited, you will not get the full and open disclosure concerning the process that you need to make improvements. When performing process walkthroughs, always conduct multiple reviews of the process: break your team down into two-person teams and have multiple teams perform the walkthrough. Figures 6.27 to 6.29 provide examples of process walkthrough worksheets.
The process walkthrough worksheet in Figure 6.27 relates to the process
map presented in Figure 6.22 (Level 2 process map) for casting. It is the
process walkthrough for work activity 230.7 (projectiles in conditioning
oven).
The process walkthrough worksheet in Figure 6.28 relates to the process map
presented in Figure 6.23 (Level 2 process map) for purchasing. It is the process
walkthrough for work activity 3.3 (review bid).
The process walkthrough example in Figure 6.29 is focused on collecting
Lean data from the process. The process work centers or work activities are
listed across the top and lean process data requirements down the left side.
We develop this process walkthrough further in the measure and analyze
chapter.
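Because the Lean walkthrough worksheet in Figure 6.29 is essentially a matrix of work centers against Lean metrics, it can also be captured as one record per work center. The Python sketch below shows one possible structure for the data requirements listed down the left side of the worksheet; the field names mirror the worksheet rows, but the class, its helper method, and the sample values are illustrative assumptions rather than part of the worksheet itself.

# Minimal sketch of one column of the Lean walkthrough worksheet (Figure 6.29).
# Field names follow the worksheet rows; the sample values are hypothetical.
from dataclasses import dataclass

@dataclass
class LeanWalkthroughRecord:
    work_center: str
    setup_time_min: float            # SU
    machine_time_min: float          # MA
    labor_time_min: float            # LA
    cycle_time_min: float            # CT
    availability: float              # Ao, expressed as 0.0-1.0
    changeover_time_min: float       # CO
    batch_size: int
    operators: int
    scrap_rate: float                # SR, fraction of units scrapped
    rework_rate: float               # RW, fraction of units reworked
    value_added_time_min: float      # VAT
    non_value_added_time_min: float  # NVAT

    def value_added_ratio(self) -> float:
        total = self.value_added_time_min + self.non_value_added_time_min
        return self.value_added_time_min / total if total else 0.0

# Example usage with made-up numbers for an "Assembly" work center.
assembly = LeanWalkthroughRecord(
    work_center="Assembly", setup_time_min=20, machine_time_min=12,
    labor_time_min=8, cycle_time_min=15, availability=0.92,
    changeover_time_min=25, batch_size=50, operators=2,
    scrap_rate=0.01, rework_rate=0.03,
    value_added_time_min=10, non_value_added_time_min=5,
)
print(f"{assembly.work_center}: VA ratio = {assembly.value_added_ratio():.0%}")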

FAILURE MODES AND EFFECTS ANALYSIS

Performing failure modes and effects analysis (FMEA) is the final step in the
define phase. It develops a much more detailed understanding of your process
and is the first opportunity to start measuring risk. FMEA is a systematic evalua-
tion procedure used to identify, analyze, prioritize, and document:
• Potential failure modes
• Their effects on a system, product, and process
• The failure causes
• The controls used to mitigate the causes or modes
• A measurable level of risk
• Potential corrective actions

Conducting an FMEA is a preemptive strike on future failures! It is a proactive approach that should be done when new products are designed or existing products are changed.
In this section, we first discuss the FMEA concept, the benefits of FMEA,
and the types of FMEA that exist. We then turn our attention to how two partic-
ular types of FMEA are used (i.e., product FMEA and process FMEA).
[Figure 6.27 is the process walkthrough worksheet for work activities 230.7 (projectiles in conditioning oven) and 230.8 (monitor conditioning oven), recording for each element its number and name, process inputs, process equipment, key metrics, process outputs, and process owner. The remarks note that the ovens do not have water flow or temperature sensors for each individual cabinet, so flow and temperature anomalies during the cooling cycle cannot be monitored, and that insufficient stopwatches are available to cover pouring, conditioning ovens, and probing occurring simultaneously.]
FIGURE 6.27 Process Walkthrough Worksheet Example.
FIGURE 6.28 Process Walkthrough Worksheet Example.

The FMEA Concept

FMEA may be focused on any area of the enterprise and is usually performed for a specific purpose (such as fault isolation failure analysis). Generally, FMEA provides information regarding:
• Reliability
• Maintainability
• Safety
• Manufacturability
• Quality
FMEA is recommended because it can identify and evaluate potential failures of a product or process. It will:
• Rate the effect or severity of the potential failure
• Identify critical product characteristics and process variables
• Identify and prioritize the risks to the process, products, and/or services
• Rank potential system, product, and process deficiencies
• Focus on prevention of product and process problems by identifying actions that could eliminate or reduce the chance of a potential failure occurring
The FMEA assists in identifying the causes for failure in our processes. It
also focuses our attention on those processes that are the biggest risks.
[Figure 6.29 is the Lean process walkthrough worksheet. The work centers (receipt and verification, materials/parts, assembly, loading, curing, x-ray, and marking and packaging) are listed across the top, and the Lean data requirements run down the left side: setup time (SU), machine time (MA), labor time (LA), cycle time (CT), availability (Ao), changeover time (CO), batch size, number of operators, operational time (OT), scrap rate (SR), rework rate (RW), value-added time (VAT), non-value-added time (NVAT), required NVAT (RNVAT), the waste categories (overproduction, waiting, transport, inappropriate processing, unnecessary inventory, unnecessary motion, defects), and the 5S elements (sifting, sorting, sweeping, standardizing, sustaining).]
FIGURE 6.29 Process Walkthrough Example.


FMEA Team
For FMEA to be effective, it must be accomplished by a cross-functional, multi-
disciplinary team. This is important because FMEA will touch every aspect of the
process: work activities, environment, safety, personnel, cost, schedules, quality,
suppliers, customers, and management. The FMEA team will be made up of
the process improvement core team and ad hoc members (safety, environmental,
financial, etc.). FMEA team members should come from functions such as:
• Design engineering
• Manufacturing
• Contracting
• Finance
• Project management
• Customer service
• Quality engineering
• Reliability engineering
• Customers
• Suppliers

The team members must be prepared for a very detailed, in-depth evaluation
of every step of the process. FMEA is an arduous and difficult evaluation pro-
cess to be performed by the team, and it should be managed and facilitated
accordingly.

Benefits of FMEA
When FMEA is properly conducted during the concept and development stage
of products and processes, it has a positive effect and will produce the following
benefits:
• Improve the quality, reliability, and safety of products
• Increase customer satisfaction
• Reduce product development time and cost
• Reduce the amount of rework, repair, and scrap
• Document and track actions taken
• Prioritize deficiencies to focus improvement efforts

FMEA can be applied at any time during the life cycle of a system, product, or process for any of the following functions:
• As a fault isolation tool
• As a qualifier for design changes/improvements
• As a response to field failures
• As part of a product or process redesign
TABLE 6.10 Types of FMEA

Type     Description
System   Examine and analyze systems and subsystems in early concept and design stages
Product  Examine and analyze products and identify potential product failure modes early in the development cycle
Process  Examine and analyze processes and identify potential process failure modes
Defect   Examine and analyze defects to prevent recurrence

Types of FMEA
There are four types of FMEA: system, product, process, and defect. Table 6.10 describes each type.
This section focuses on two of the four types of FMEA, namely, process FMEA and product FMEA. Figure 6.30, the FMEA process flowchart, guides you through both product and process FMEA. The steps for both forms of FMEA are the same, but the input varies: a work breakdown structure for product FMEA and a process or value stream map for process FMEA.
Process FMEA
Process FMEA allows the cross-functional team to analyze manufacturing,
assembly, administrative, and management processes. When performing a pro-
cess FMEA, the team can begin reducing the occurrence and increasing the de-
tection of defects. Process FMEA assists in the development of process control plans and the establishment of priorities for improvement activities. Reasons for process changes are documented, and focus is provided for future improvement. Using process FMEA, the team is expected to be proactive. Process FMEA should be started whenever new processes are designed or old processes are changed.

FIGURE 6.30 The FMEA Process Flow.
Product FMEA
Product FMEA allows the cross-functional team to identify potential product
failure modes early in product development. Product FMEA increases the likeli-
hood that all potential failure modes and their effects will be considered, and it
assists in evaluating product design requirements and test methods. Performing a
product FMEA establishes the priorities for design improvement, documents the
rationale behind design changes, and helps guide future development projects.
The following section provides you with the step-by-step process for per-
forming a product and process FMEA, including all the tools and forms used
during this process.
Performing FMEA
Performing an FMEA is a process. It is a step-by-step procedure performed in a
spreadsheet format. When the process is performed by the appropriate, cross-
functional, multidisciplinary team, it leads you to identify, evaluate, and priori-
tize the following:
• Failure modes
• Failure effects
• Failure causes
• Failure severity/occurrence/detection
• Control factors
• Risk priority numbers
• Corrective actions
The definitions in Table 6.11 will be of help when working on an FMEA.
Otherwise, the FMEA team can become bogged down trying to determine the
difference between a failure mode, a failure effect, and a failure cause.

TABLE 6.11 FMEA Definitions

Failure mode    The manner by which a failure is observed. Generally describes the way the failure occurs and its impact on equipment operation.
Failure effect  The consequence(s) a failure mode has on the operation, function, or status of an item. Failure effects are usually classified according to how the entire system is impacted.
Failure cause   The physical or chemical process, design defects, part misapplication, quality defects, or other processes that are the basic reason for failure or that initiate the physical process by which deterioration proceeds to failure.
The FMEA process begins with a definition of what's to be evaluated. A work breakdown structure (WBS) is used for products and systems and a process map for processes. For defects, a WBS is used for product defects, and a process map is used for process defects. Cause-and-effect analysis, interrelationship digraphs, and process decision program charts are useful tools for identifying and evaluating failure modes. The results of the analyses are documented on the FMEA
form using a 12-step process:
1. Identify part or process element.
2. Identify failure modes.
3. Identify failure effects.
4. Identify failure causes.
5. Identify controls.
6. Rate severity, occurrence, and detection.
7. Calculate risk priority number (RPN).
8. Determine preventive and corrective action.
9. Calculate planned RPN.
10. Implement preventive and corrective action.
11. Reevaluate the process to validate that the actions taken have netted the desired level of risk.
12. Update the FMEA with the appropriate severity, occurrence, and detection scores.

Figure 6.31 is an example of an FMEA form that serves to guide us through the FMEA process. The headers of the form show the 12-step FMEA process. We now go through the FMEA process step by step.

1. Identify Part or Process Element

Product: Use WBS and the associated cause-and-effect analysis to select items for FMEA.
Process: Identify each work center, process step, or work activity using the process map.
First identify the system, product, assembly, subassembly, or component
using WBS and the cause-and-effect analysis.
Using the system drawings, specifications, and your knowledge of the sys-
tem, create a WBS to the lowest indenture. Use this WBS and the associated
cause-and-effect analysis to select items for FMEA. It is critical that all mem-
bers of the team are aware of the procedure before going forward. The following
items should be available before starting the process:
• WBS, if one has already been created
• Specifications
• Drawings
• Failure history (failure reporting and corrective action system, or FRACAS, data) for this item or for like items
• Developmental test history, if any
• Bill of materials (BOM)
• FMEA from suppliers for assemblies or disposable materials

[Figure 6.31 is a blank FMEA form with header fields for company, project, team members, team leader, goal, date, and revision, and columns for FM number, process element, failure mode, failure effect, severity, causes, occurrence, controls, detection, RPN, corrective action plan, responsible name, due date, planned severity/occurrence/detection, planned RPN (pRPN), and remarks.]
FIGURE 6.31 FMEA Form.

2. Identify Failure Modes  Based on the preceding information, identify and describe the anticipated failure modes. Ask how the design, part, product, or process could possibly fail. Don't concentrate on whether it will fail, but on how it could possibly fail. Failure mode is the manner in which a desired result is not achieved.

3. Identify the Failure Effects Describe the effect of the failure. Ask how
the failure manifests itself in the operation of the product in the customers’ eyes.
Ask how this operation could fail to complete its intended function. Effect of the
failure mode is the change in the desired characteristics of the product as the
result of the failure.

4. Identify the Failure Causes Identify and describe the cause(s) of the fail-
ure. The cause of a failure is a condition or action that precipitated the failure
mode. Ask what conditions brought about the failure mode.

5. Identify the Control Factors Identify, list, and describe the current con-
trol factors to prevent the failure mode. Ask whether there are controls designed
to prevent the failure. To what degree are these controls effective? Ask how
effective the controls are and whether these failures will be detected before they
reach the customer.

6. Rate Severity, Occurrence, and Detection  Estimate (rate) the severity of the failure: What are the consequences of the failure to the customer? Then estimate (rate) the probability of occurrence: What is the likelihood that the failure mode will occur? Finally, estimate (rate) the detection of the failure: What is the probability that the failure will be detected by the control designed to detect or mitigate it before the product advances to the next or subsequent processes?
There are two rating scales. The first uses a scale of 1–10, and the second uses a scale of 1–5, with the higher number representing the higher seriousness or risk. Adopt a single scale for the organization. This will ensure uniformity across the organization and eliminate confusion when reviewing and evaluating an FMEA. Tables 6.12 to 6.17 show the various rating scales.

7. Calculate the RPN  The risk priority number (RPN) methodology is a technique for analyzing the risk associated with potential problems identified during an FMEA.
TABLE 6.12 1–10 Scale for Severity: Product FMEA

Scale  Severity                     Situation
10     Hazardous, without warning   May endanger equipment, personnel, or environment and/or involves noncompliance with regulatory requirements, without warning.
9      Hazardous, with warning      May endanger equipment, personnel, or environment and/or involves noncompliance with regulatory requirements, with warning.
8      Very high                    Product inoperable, with loss of primary function.
7      High                         Product operable at a reduced performance level. Customer dissatisfied.
6      Moderate                     Product operable. Comfort/convenience item(s) inoperable.
5      Low                          Product operable. Comfort/convenience item(s) operable at reduced level of performance.
4      Very low                     Product fit and finish does not conform to requirements. Defect noticed by most customers.
3      Minor                        Product fit and finish does not conform to requirements. Defect noticed by some customers.
2      Very minor                   Product fit and finish does not conform to requirements. Defect noticed by discriminating customers.
1      None                         No effect.

TABLE 6.13 1–10 Scale for Occurrence: Product FMEA

Scale  Probability  Situation                  % Failure
10     Very high    Failure almost inevitable  >50%
9                                              >33%
8      High         Repeated failures          12.5%
7                                              5.0%
6      Moderate     Occasional failures        1.25%
5                                              0.25%
4      Low          Relatively few failures    0.05%
3                                              0.007%
2                                              0.0007%
1      Remote       Failure unlikely           0.00007%
TABLE 6.14 1–10 Scale for Detectability: Product FMEA

Scale  Detectability      Situation                                                                                    % Detection
10     Almost impossible  Design control will not and/or cannot detect failure mode; or there are no known controls.  <10%
9      Very remote        Very remote probability that current controls will detect failure mode.                     <25%
8      Remote             Remote likelihood current controls will detect failure mode.                                <50%
7      Very low           Very low probability current controls will detect failure mode.                             <75%
6      Low                Low probability current controls will detect failure mode.                                  <80%
5      Moderate           Moderate probability current controls will detect failure mode.                             <85%
4      Moderately high    Moderately high probability current controls will detect failure mode.                      <90%
3      High               High probability current controls will detect failure mode.                                 <95%
2      Very high          Very high probability controls will detect failure mode.                                    <99%
1      Almost certain     Almost certain probability current controls will detect failure mode.                       >99%

TABLE 6.15 1–5 Scale for Severity: Product FMEA

Scale  Severity                       Situation
1      Minor                          Opportunity for improvement; corrective action not required.
2      Low                            Fault isolation and corrective action required; not urgent. Minor disruption to production line. The product may have to be sorted and a portion (less than 100%) reworked. Defect noticed by most customers.
3      Moderate                       Product, service, or process operational with impaired capability. Minor disruption to production line. Up to 100% of product may have to be reworked. Item operable, but some comfort/convenience item(s) operable at reduced level of performance. Customer experiences some dissatisfaction.
4      Severe                         Product, service, or process severely degraded; immediate corrective action required. Product may have to be sorted and a portion (less than 100%) scrapped. Item operable, but at a reduced level of performance. Customer dissatisfied.
5      Catastrophic, without warning  Product, service, or process is nonoperational or there is a direct safety risk to personnel, product, or environment.
TABLE 6.16 1–5 Scale for Occurrence: Product FMEA

Scale  Probability  Meaning                                                                                                      How Often
1      Remote       Failure is unlikely. No failures ever associated with almost identical processes.                            <0.1%
2      Low          Isolated failures.                                                                                           <1.0%
3      Moderate     Process failure is intermittent. Not able to maintain long uninterrupted production runs without failure.    <10%
4      High         Process failure is constant but not continuous. Many stops and starts. Very disruptive.                      >10%
5      Very high    Failure almost inevitable.                                                                                   >15%

In FMEA, the RPN is used to determine which failure modes
deserve the most attention and where preventive measures should be focused
first. FMEA allows many failure modes to be laid out on a worksheet and con-
sidered and prioritized simultaneously. The RPN is calculated as the product of
the severity, occurrence, and detection.

RPN = S × O × D

What is the risk to the customer if the defect occurs? The higher the RPN, the
more serious the failure.
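Because the RPN is just the product of three ratings, it is trivial to compute and rank for a whole worksheet of failure modes. The Python sketch below illustrates this; the failure-mode descriptions, the 1–10 ratings, and the action threshold are invented for illustration and are not taken from the worked example later in this section.

# Minimal sketch: compute and rank risk priority numbers (RPN = S x O x D).
# Failure modes, ratings, and the action threshold are hypothetical.
failure_modes = [
    # (description, severity, occurrence, detection) on 1-10 scales
    ("Cooling water flow too low",    8, 5, 5),
    ("Operator skips checklist step", 6, 3, 4),
    ("Sensor drift goes unnoticed",   7, 2, 8),
]

ACTION_THRESHOLD = 120  # corrective action recommended above this RPN (example value)

scored = [(desc, s * o * d) for desc, s, o, d in failure_modes]
for desc, rpn in sorted(scored, key=lambda item: item[1], reverse=True):
    flag = "ACTION" if rpn > ACTION_THRESHOLD else "monitor"
    print(f"{rpn:4d}  {flag:8s}  {desc}")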

8. Determine Corrective Action for High RPN Items The RPN value for
each potential problem can then be used to compare the issues identified within
the analysis. Typically, if the RPN falls within a predetermined range, corrective
action may be recommended or required to reduce the risk (i.e., to reduce the
likelihood of occurrence, to increase the likelihood of prior detection, or, if pos-
sible, to reduce the severity of the failure effect). Normally, the severity of a
process failure cannot be diminished.
When using this risk assessment technique, it is important to remember that
RPN ratings are relative to a particular analysis (performed with a common set
of rating scales and an analysis team that strives to make consistent rating

assignments for all issues identified within the analysis). Therefore, an RPN in one analysis is comparable to other RPNs in the same analysis, but it may not be comparable to RPNs in another analysis.

TABLE 6.17 1–5 Scale for Detection: Process FMEA

Scale  Detectability      Test Content Detects
1      Almost certain     >99% of failures
2      High               96–99% of failures
3      Moderately high    80–95% of failures
4      Very low           70–79% of failures
5      Almost impossible  <70% of failures

9. Plan and Take Corrective Action Using your team, brainstorming, and
the other Six Sigma tools, techniques, and technology you have learned, imple-
ment the previously determined corrective actions.
Develop a plan of action and schedule for implementation of the recom-
mended corrective action. At a minimum, this plan of action should have the
following elements:
• Named individual responsible for the action. (Do not designate a group responsible for an action. Name a specific individual as the responsible person.)
• Date the action is to be completed.

Corrective actions should be taken to address potential causes of failures that have severe effects, a high rate of occurrence, and low levels of detection. A new RPN value will be calculated (see next step) after the corrective actions have been implemented. This new calculation is used to determine whether the new RPN value is below the cutoff value for acceptable risk.

10. Recalculate the RPN  Review the estimated impact of the corrective action on the severity, occurrence, and detection after implementation of the corrective action. Then recalculate the RPN. This new value is called the planned risk priority number (PRPN).

11. Reevaluate the Process Once the corrective actions have been taken, the
team should reevaluate the respective issue to see whether the desired result was
obtained. If not, then further actions may be required to reduce the risk.

12. Update the FMEA The FMEA is a living document and should be kept
up-to-date. This will help if this process ever needs to be reexamined.
The application of FMEA to a process can best be demonstrated by following
an example from process mapping to process walkthrough to FMEA. Figure
6.32 is a Level 2 process map for a manufacturing process. The FMEA was per-
formed on each work activity within this process. We follow work activity 230.7
from this process map through process walkthrough to FMEA.
The process walkthrough was performed by the team members, who walked
the entire process from beginning to end using the process walkthrough
form. The example we are following for work activity 230.7 is demonstrated in
Figure 6.33.
During this walkthrough, the team notes several comments in the remarks
block. Note that one of the key metrics is cooling water flow; in remarks, the
team found that there was no means to measure cooling water temperature or
[Figure 6.32 is a Level 2 process map of the projectile loading process, showing work activities from 210 (prepare kettle, grid, and loader for operation) and 220 (receive, melt, and dispense filler) through the 230-series activities (equipment loading, filler mixing, projectile loading, and monitoring), including 230.7 (projectiles in conditioning oven) and 230.8 (monitor conditioning oven), with the inputs, equipment, and outputs noted for each activity.]
FIGURE 6.32 Level 2 Process Map.
[Figure 6.33 is the process walkthrough worksheet for work activities 230.7 (projectiles in conditioning oven) and 230.8 (monitor conditioning oven), listing each element's number and name, process inputs, process equipment, key metrics, process outputs, and process owner. The remarks record that the ovens do not have water flow or temperature sensors for each individual cabinet, so flow and temperature anomalies during the cooling cycle cannot be monitored and could contribute to cast defects, and that insufficient stopwatches are available to cover pouring, conditioning ovens, and probing occurring simultaneously.]
FIGURE 6.33 Level 2 Process Walkthrough.
flow in the process. This was corrected when the team pointed out that it was a
requirement of the procedure.
The team next performed the FMEA for the process. Figure 6.34 demon-
strates the results of the FMEA for work activity 230.7.
The RPN scores for this work activity clearly indicate a significant risk for
two failure modes: Water flow and fill rates incorrect, and PLC (controller) can-
not accept entire process. The corrective action for these failure modes was as-
signed, with a due date, to a specific person. The failure mode was significantly
mitigated by this corrective action, as can be seen by the recalculated PRPN.
Figures 6.35 and 6.36 provide examples of FMEA for administrative and ser-
vice processes.

KEY POINTS

Process Mapping
• Collect all data prior to starting the process mapping. It is very frustrating to begin the define phase and then discover that you do not have a required safety standard and must start over.
• Use the SIPOC approach: Who are the suppliers, and what are the inputs? Who are the customers, and what are the outputs?
• Do not walk through the process before completing the initial process map draft. This will allow you to see whether the process being performed complies with the process documentation and the team members' concept of how the process is being performed.

Process Walkthrough
• Walk through the process more than once and with different team members. If this is an industrial process, walk through the process on each shift. Break your team down into groups of two or three members to perform each walkthrough.
• This is not an audit. Make sure that all the employees in the process know and understand why you are there and what the purpose is (improvement, not audit). Do not surprise the second shift of an industrial process by showing up with six people dressed in white shirts and suits and carrying clipboards. It will not be pleasant.
• Walk through the process slowly and thoroughly, and talk to each individual on the process.

Process FMEA
• Process FMEA is your preemptive strike for future failures. Perform a complete and comprehensive FMEA for the process.
[Figure 6.34 is the completed FMEA for work activity 230.7. Failure modes include water circulation failure, failure to hook up hot water to the tanks, water flow and fill rates incorrect, and the PLC inlet/outlet being unable to accept the entire process, each with its effects, causes, controls, and severity, occurrence, and detection ratings. The two highest-risk failure modes, water flow and fill rates incorrect (RPN 100) and PLC inlet/outlet cannot accept entire process (RPN 75), were assigned corrective actions with a named owner and due date; the recalculated planned RPNs dropped to 4 and 18, respectively.]
FIGURE 6.34 Level 2 Process Work Activity 230.7 FMEA.
[Figure 6.35 is an example FMEA for an administrative process. Failure modes such as a stopped consolidation process, decreasing capabilities, and diminishing services are traced to causes such as a stopped funding stream, management policy changes, inadequate funding, and lack of management support, with the strategic plan serving as the primary control.]
FIGURE 6.35 Example FMEA.
[Figure 6.36 is an example FMEA for a service process (process element 170, negotiations). Failure modes include unsupported technical requirements, whose effect is research delay, and unavailable key personnel, whose effect is halted negotiations; causes such as insufficient communications, conflicting priorities, wrong personnel sent, and vacations are addressed through corrective actions (for example, a directed priority list, a critical personnel matrix, and a vacation matrix), each with a named owner and due date.]
FIGURE 6.36 Example FMEA.
• FMEA is a difficult and tedious process for your team. Always double the time you estimate for performing FMEA. Try to work the FMEA in half-day sessions if possible.
• The FMEA team meetings must be well facilitated. The complex nature of FMEA will result in team storming if meetings are not carefully facilitated.
• In many cases, the define phase highlights areas in the process that need improvement. Many of these are "just-do-it" improvements that can be made without further analysis or a formal improvement initiative.
7
MEASURE
Now that you have defined your process and know and understand all of the
process inputs, outputs, and value-added steps at every level, you are ready to
measure your process. The goal of the measure phase is to gather information on the current as-is process. During this phase, the project team gathers all
of the data available from the process, determines whether the process is in con-
trol, and measures the capability of the process.
As indicated in Figure 7.1, this is the next step for your improvement oppor-
tunity. It builds on the knowledge you have gathered during the define phase and
provides the data needed to go on to the analyze phase of your improvement
initiative. During this process, you will also have the opportunity to determine
whether your process is in control and take steps to measure the degree of con-
trol you exercise over your process. The following subjects are discussed in this
chapter:
• Introduction to process measurement
• Statistical process control
• Statistical process control charts
• Process capability analysis
• Measurement systems evaluation

PROCESS MEASUREMENT

Process measurement uses statistical measures to collect, interpret, and communicate data. Data analysis, measurement, and metrics (quantitative analysis) are extremely important to managing processes and projects. In order for this to be successful, the appropriate data must be determined, collected, and then analyzed. Data are factual pieces of information used as a basis for reasoning, discussion, or calculation; often, this term refers to quantitative information.

Types of Data
• Attribute data. Attribute data comes from counting things, such as defective units, defects, anomalies, conditions present or not present, and so on. Attribute data is the lowest level of data. It is always purely qualitative and binary in nature: good or bad, yes or no. Very little analysis can be performed on attribute data, which is unusable for the purpose of quantification. Once you convert it to discrete data by counting the number of good or bad, it becomes discrete variable data.
• Variable data. Variable data comes from measuring things, such as temperature, weight, distance, and so on. Variable data is what you would call quantitative. This type of data can be thought of as infinitely variable. Variable data allows you to use most of the statistical tools that have been developed.

[Figure 7.1, referenced at the start of this chapter, shows the Enterprise Excellence decision process: concept development, design development, design optimization, capability verification, and production launch, together with the improve path (identify opportunity, develop the business case, then define, measure, analyze, improve, control, and CMI), the Lean path, and the innovate path (invent/innovate, develop technology, verify technology, optimize, and transfer).]
FIGURE 7.1 Enterprise Excellence decision process.

Sources of Data
• Primary (active) data. Current data collected under known, controlled conditions. Control is a state in which all special causes of variation have been removed from a process.
• Secondary (passive) data. This is data not directly collected by the team. It is collected from historical databases or other external sources such as logs or field service reports.

Origin of Data

In addition to primary and secondary sources, data are also classified according to origin. There are three categories of data based on origin: (1) historical records, (2) experimental tests, and (3) operational, or field, data. The relationship between the types and origins of data is demonstrated in Figure 7.2.

FIGURE 7.2 Relationship of origins and types of data.

STATISTICAL PROCESS CONTROL

Statistical process control (SPC) is an analytic method for managing processes. SPC focuses on the variability in a process. This variability is due to common causes (randomly occurring variations) or special causes (assignable events). Special causes will result in an out-of-control condition. When special causes are identified, a decision can be made to adjust the process in order to bring the process back into control. Common cause variation can be eliminated only by a capital change in the process, that is, a change in the equipment or procedures.
SPC can be used for managing processes that produce products or services, and it can be used for processes that produce variables or attribute data. The goals of SPC are:
• Understand the variability within a process
• Understand the capability of a process
• Identify and eliminate special cause variation
• Develop and implement strategies for managing processes

There are many different strategies and tools to use when analyzing and controlling processes. These strategies and tools exist because of the many reasons for using SPC. SPC is the voice of the process (VOP). It allows you to determine whether a process has special causes of variation or only common causes, to tell whether a process is stable (in control) and therefore predictable, to determine the capability of a process, and to understand the genesis of process variability. Among the reasons for using SPC are:
• Detection of problems
• Identification of root causes
• Fact-based decision making
• Documentation of problems and solutions
• Reduction of the cost of poor quality
• Identification of opportunities for improvements in efficiency
Statistical process control is primarily used to monitor process stability and detect process changes. In this way, it serves as an early warning system to detect potential problems before they cause process downtime, inefficiency, or equipment failure. This allows you to be proactive and interactive with system processes rather than reactive:
• To monitor quality parameters
• To identify special causes of variation (chemical, material, equipment, environmental changes, etc.)
• To identify common causes of variation (i.e., those inherent to the system)
• To monitor and eliminate system variance due to special and common causes of variation
• To determine process capability
• To indicate when adjustments or process corrections are necessary
• To indicate when to leave a process alone because it is working well

There are a number of process control strategies. These strategies are quanti-
tative and qualitative. The process control strategies grow in strength as they
become more preventive, from the weakest nonquantitative strategies to the
strongest variable process control. Figure 7.3 represents the relationship among
these strategies and how they are used operationally.
We use the process map in Figure 7.4 to present the statistical process control strat-
egy. In this section, we discuss in detail each of the first three steps in the implementa-
tion strategy flow chart. Steps 4 and 5 (establishing and evaluating statistical process
controls) are discussed in the next section, ‘‘Statistical Process Control Charts.’’

Identify a Process
To properly identify a process, all relevant documentation needs to be available, including:
• Specifications for the item under production
• Process layout and production equipment
• Drawings for all of the preceding items
• Current process procedures

Select the Metrics

Process metrics and control points are used to:
• Reduce or mitigate risk
• Improve process quality
• Reduce defects
• Improve processes
• Reduce queues
• Improve throughput
• Improve timeliness
• Reduce cost
• Review process output requirements
• Review process input requirements
• Review all process elements
• Review existing process data
• Define the measures of the process elements
• Establish measures and control points

[Figure 7.3 summarizes the process control strategies, from the strongest (continuous variable process data collected automatically in real time and monitored by the operator) through sampled variable process data, sampled product attribute data, and 100% product inspection, to the weakest nonquantitative strategies (process certification, training, and procedure review and audit). For each strategy it lists the metric, the data, the period, who collects it, the media, the actions taken, and the accountability.]
FIGURE 7.3 Process control strategies.

[Figure 7.4 is the SPC implementation strategy flowchart referred to in the text.]
FIGURE 7.4 SPC implementation strategy.

Develop a Sampling Strategy

Process data acquisition is used to:
• Identify differences between analysis of process performance and requirements
• Document all analysis findings
• Perform further analysis of selected elements
• Identify critical problem elements or work activities and tie those problems directly to key metrics

Sampling
Table 7.1 explains sampling parameters.
TABLE 7.1 Sampling Parameters

Parameter          Description
Randomness         Every object, event, or individual has an equal probability of being chosen
Rational samples   Grouping data in a logical manner to minimize variation within a sample
Frequency          Often enough to detect all process changes (common and special causes)
Sample size        Variable data: normally 5 readings per sample. Attribute data: at least 25 or more per sample
Number of samples  At least 25 samples for control charts
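To make the rules of thumb in Table 7.1 concrete, the Python sketch below lays out a simple sampling plan: it picks the subgroup size and the number of subgroups from the data type and spaces the subgroups evenly across a production run. The function name, the run length, and the even-spacing logic are illustrative assumptions, not a prescription from the text.

# Minimal sketch: turn the Table 7.1 rules of thumb into a sampling plan.
# The run length and the even spacing of subgroups are illustrative assumptions.
def sampling_plan(data_type: str, units_per_run: int):
    if data_type == "variable":
        subgroup_size, num_subgroups = 5, 25    # ~5 readings per sample, >= 25 samples
    elif data_type == "attribute":
        subgroup_size, num_subgroups = 25, 25   # >= 25 items per sample
    else:
        raise ValueError("data_type must be 'variable' or 'attribute'")

    interval = max(1, units_per_run // num_subgroups)   # space subgroups evenly
    start_points = [i * interval for i in range(num_subgroups)]
    return subgroup_size, start_points

size, starts = sampling_plan("variable", units_per_run=5000)
print(f"Take {size} consecutive readings starting at units {starts[:5]} ...")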

Developing a Sampling Strategy
• Identify process control metric.
• Determine source of data.
• Determine type of data.
• Determine what constitutes a rational sample.
• Decide on frequency of sampling.
• Determine size of sample.
• Determine number of samples to be taken.
• Develop data collection tools.
• Determine who will collect data.
• Decide when to begin sampling.

STATISTICAL PROCESS CONTROL CHARTS

In the previous section, we reviewed in detail the first three steps of the imple-
mentation strategy flowchart: Identifying a process, selecting the metrics, and
developing a sampling strategy.
Now we discuss establishing and evaluating statistical process controls, the
last two steps in the flowchart.
Statistical process control (SPC) uses statistical tools and techniques to mea-
sure and evaluate the performance of a process. SPC focuses on the variability
in a process. This variability is due to common causes (randomly occurring vari-
ations) or special causes (assignable events). Special causes will result in an out-
of-control condition. When special causes are identified, a decision can be made
to adjust the process in order to bring the process back into control. Common
cause variation can be eliminated only by a capital change in the process, that
is, a change in the equipment or procedures.
Control charts provide a graphic depiction of the process over time. Control
charts display the plotted values of the process performance. Statistical methods
are used to evaluate the control charts to determine whether the process is capa-
ble of the required performance, to monitor performance, to detect out-of-
control conditions, and to predict future performance. Control charts therefore
serve to focus on the prevention of defects rather than detection. In everyday
business applications, control charts have five uses:

1. Determine if a process is trending
2. Determine if a process is in control
3. Achieve statistical process control
4. Improve processes by reducing process variability
5. Forecast requirements

The construction, use, and interpretation of control charts are based on the normal statistical distribution, as indicated in Figure 7.5. In this distribution, you can see that at one sigma (1s) there is a 68.3 percent probability that your data will fit in that area under the curve. At three sigma (3s), there is a 99.7 percent probability that your data will fit in the area under the curve.
We can use the normal probability distribution to better understand statistical process control charts. (See Figure 7.6.) The centerline of the control chart represents the average, or mean, of the data (X for individual charts and X-bar for subgrouped charts). The upper and lower control limits (UCL and LCL), respectively, represent this mean plus and minus three standard deviations of the data (X-bar ± 3s). Either the lowercase s or the Greek letter σ (sigma) represents the standard deviation.

FIGURE 7.5 The normal probability distribution.


FIGURE 7.6 Control charts and the normal distribution.

The normal distribution and its relationship to control charts are represented on the right of Figure 7.6. The normal distribution can be described entirely by its mean and standard deviation. The normal distribution is bell-shaped; that is, it is symmetrical about the mean, slopes downward on both sides to infinity, and theoretically has an infinite range. In the normal distribution, 99.73 percent of all measurements lie within X-bar + 3s and X-bar - 3s, which is why the limits on control charts are called three-sigma limits.
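Because the control limits are simply the mean plus and minus three standard deviations, they are easy to compute once data are in hand. The Python sketch below estimates the centerline and three-sigma limits for an individuals chart directly from the sample standard deviation; this is a simplified illustration (X-bar and R charts in practice estimate sigma from the subgroup ranges using tabulated constants, which are not shown here), and the measurement values are hypothetical.

# Minimal sketch: centerline and three-sigma limits for an individuals chart.
# Real X-bar/R charts estimate sigma from subgroup ranges; this simplified
# version uses the sample standard deviation. The data values are hypothetical.
import statistics

measurements = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2]

center = statistics.mean(measurements)
sigma = statistics.stdev(measurements)   # sample standard deviation
ucl = center + 3 * sigma                 # upper control limit
lcl = center - 3 * sigma                 # lower control limit

print(f"Centerline: {center:.3f}  UCL: {ucl:.3f}  LCL: {lcl:.3f}")
out_of_control = [x for x in measurements if not (lcl <= x <= ucl)]
print("Points outside the limits:", out_of_control or "none")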
A certain amount of data encountered in our daily operations is not normally
distributed. However, due to the central limit theorem and the work of Dr.
Walter Shewhart, we can still use a control chart based on the normal distribu-
tion. Shewhart developed the method of subgrouping and sampling process data
such that, even though individual process values are not normally distributed,
the values of the subgroups demonstrate a normal distribution. In other words,
if you make a distribution of the means of the subgroups, they will be normally
distributed. Using the same basic theory, we can use the subgrouped ranges to
control process variation. For variable control charts, the recommended sample
size for each subgroup is 5; for attribute control charts, sample size should be
25. These are rules of thumb for standard sizes of subgroups. You will need to
determine the best rational subgroup size for the process that you are working
with. Each of these control charts should be established using 25 subgroups for
variables and 20 for attribute data. You will see this theory in practical applica-
tion as we develop the different types of SPC charts.

Control Charts Analysis

Control chart analysis determines whether the inherent process variability and the process average are operating at stable levels, whether one or both are out of statistical control (not stable), and whether some type of appropriate action

needs to be taken. Another purpose of using control charts is to distinguish be-


tween the inherent random variability of a process and the variability attributed
to an assignable cause. The sources of the random variability are often referred
to as common causes, whereas assignable cause variations are thought of as spe-
cial causes. Common cause variability sources are those sources that cannot be
readily, changed without significant restructuring of the process. Special cause
variability, by contrast, is subject to correction within the process under process
control.

Common Cause Variation


This source of random variation is always present in any process. It is that part
of the variability inherent in the process itself. The cause of this variation can be
corrected only by a capital change to the process. Common causes are the many
sources of variation that always exist within a process that is in a state of statis-
tical control.

Special Cause Variation


This variation can be controlled at the process level. Special causes are indicat-
ed by a point on the control chart that is beyond the control limit or by persistent
trends that indicate that there is something out of the ordinary occurring in the
process. This shows a source of variation that is intermittent, unpredictable, and
unstable. Sometimes called assignable causes, special causes are any factors
causing variation that cannot be adequately explained by any single distribution
of the process output, as would be the case if the process were in statistical con-
trol. Unless all the special causes of variation are identified and corrected, they
will continue to affect the process output in unpredictable ways.

Variation
To use process control measurement data effectively, it is important to under-
stand the concept of variation. No two product or process characteristics are
exactly alike, because all processes contain many sources of variability. The
differences between products may be large, or they may be almost im-
measurably small, but they are always present. Some sources of variation in
the process can cause immediate differences in the product (e.g., a change in
suppliers or the accuracy of an individual’s work). Other sources of varia-
tion, such as tool wear, environmental changes, or increased administrative
control, tend to cause changes in the product or service only over a longer
period of time.
To control and improve a process, we must trace the total variation back to its
sources. Again, the sources are common cause and special cause variation.
The factors that cause the most variability in the process are the main factors
found in cause-and-effect analysis charts: people, machines, methodology, ma-
terials, measurement, and environment. These causes can result from special
causes, or they can be common causes inherent in the process.
The Theory of Control Charts

The theory of control charts suggests that, if the source of variation is from chance alone, the process will remain within the three-sigma limits. When the process goes out of control, special cause(s) exist. These causes need to be investigated so that corrective action can be taken.
Control charts focus on prevention of defects rather than on detection and rejection. The power of control charts lies in their ability to determine whether the cause of variation is a special cause, which can be affected at the process level, or a common cause, which requires a capital change.

TYPES OF CONTROL CHARTS AND APPLICATIONS

Just as there are two types of data (continuous and discrete), there are two types
of control charts: variable charts, for use with continuous data, and attribute
charts, for use with discrete data. Each type of control chart can be used with
specific types of data.

Variable Control Charts

X-bar and R charts are used to measure and control processes whose characteristics are continuous variables such as weight, length, ohms, time, or volume. (See Figure 7.7.)
Control charts for variables are powerful tools that we can use when measurements from a process are variable. Examples of variable data are the diameter of a bearing, electrical output, or the torque of a fastener.

FIGURE 7.7 Variable control chart.

Attribute Control Charts


Although control charts are most often thought of in terms of variables, there are
also versions for attributes. Attribute data have only two values (conforming/
nonconforming, pass/fail, go/no-go, present/absent), but they can still be
counted, recorded, and analyzed. Some examples are:
• The presence of a required label
• The installation of all required fasteners
• The presence of solder drips
• The continuity of an electrical circuit

We also use attribute charts for characteristics that are measurable if the results are recorded in a simple yes/no fashion, such as the conformance of a shaft diameter when measured on a go/no-go gage or the acceptability of threshold margins to a visual or gage check. (See Figure 7.8.)

FIGURE 7.8 Attribute control charts.
The p and np charts are used to measure and control processes displaying
attribute characteristics in a sample. The p charts represent the number of fail-
ures as a fraction; the np charts express the failures as a number.
The c and u charts are used to measure the number or proportion of defects
in a single item. A c chart is applied when the sample size or area is fixed; a u
chart is applied when the sample size or area is not fixed.

Control Chart Elements


The centerline is a solid (unbroken) line that represents the mean of the sample
means (X-double-bar). There are two statistical control limits: the upper control limit, for
values greater than the mean, and the lower control limit, for values less than
the mean. These relationships are demonstrated in Figure 7.9.

Specification Limits versus Control Limits


Specification limits are used when specific parametric requirements exist for a
process, product, or operation. These limits usually apply to the data and are the
pass/fail criteria for the operation. They differ from statistical control limits in
that they are prescribed for a process rather than resulting from the measure-
ment of the process. (See Figure 7.10.) Specifications can be thought of as the voice of the customer (VOC), whereas control limits are considered the voice of the process (VOP).

FIGURE 7.9 Control chart elements.

The data elements of control charts vary somewhat between variable and attribute control charts. We show specific examples as a part of the discussion on control charts.
FIGURE 7.10 Spec limits versus control limits.

There are many possibilities for interpreting various kinds of patterns and shifts on control charts. If properly interpreted, a control chart can tell us much more than simply whether the process is in or out of control. Experience and training can lead to much greater skill in extracting clues regarding process behavior, such as that shown in Figure 7.11. Statistical guidance is invaluable, but an intimate knowledge of the process being studied is vital in bringing about improvements.

FIGURE 7.11 Control chart interpretation.
A control chart can tell us when to look for trouble, but it cannot by itself tell
us where to look or what cause will be found. Actually, in many cases, one of
the greatest benefits from a control chart is that it tells when to leave a process
alone. Sometimes the variability is increased unnecessarily when an operator
keeps trying to make small corrections rather than letting the natural range of
variability stabilize. This is called tinkering, and it usually leads to increased
variability.

X-bar and R Charts


An X-bar and R chart is a control chart used for processes with a subgroup size of three or more. It is the standard chart for use with variables data. The X-bar chart presents how the mean values change with time, and the R chart presents changes in the ranges of the subgroups over time. As the standard, the X-bar and R chart will work in place of the X-bar and s chart or the median and R chart.

 Used when the critical process parameter is a variable metric


 Used to describe process data in terms of its variation and the process
average

The X-bar and R charts are used to manage processes with variables data when the data subgroup size falls in the range of 3 to 11 and the time order of subgroups is
preserved. The X and R chart is a powerful tool used when measurements from a
process are available. These measurements can be mechanical (diameter, length,
width), electronic (resistance, ohms, EF output), or related to time (process time, on time, waiting time). The X-bar and R variables control chart is used to describe process data in terms of its variation (item-to-item variability) and the process average (location). X-bar is the average value in each subgroup and is a measure of location; R is the range of values within each subgroup and is a measure of spread. UCLx and LCLx represent the control limits for the process averages, and UCLR and LCLR are the control limits for the ranges.
Key elements in the construction of control charts of all types are the selec-
tion and collection of data. Data must be selected, gathered, recorded, and plot-
ted on a chart for a specific purpose and according to a plan. The placement of
data into rational subgroups according to date, time, lot, size, run, or other varia-
ble is an important step in variables control charting.
For an initial study of a process, each subgroup should typically consist of
five or more consecutive actions representing only a single process factor. The
measured actions within a subgroup should all be produced in a very short time
interval. Sample sizes must remain constant for all subgroups when using X and
R control charts. When using X charts, it is important to use the R chart (or one
of the other charts mentioned earlier) so that the subgroups can be scrutinized
for excess variability. The R chart (or other charts, like the s chart) shows the spread within the subgroups that would otherwise be hidden. Remember that
the point plotted on the X chart is the average of the values within that subgroup.
Without characterizing that spread (range or standard deviation, etc.), an error in
analysis could occur. Figure 7.12 is an example of an XR control chart.
We use Figure 7.12 to build an X and R chart step by step. During an initial
process study, the subgroups themselves are often taken consecutively, or at short intervals, to detect whether the process can shift or show other instability over brief time periods. As the process demonstrates stability (or as process improvements are made), we can increase the time between subgroups. Subgroup frequencies for ongoing production monitoring could be twice per shift, hourly, or at some other feasible rate.

FIGURE 7.12 X and R chart example.

Constructing X and R Charts


Upon completion of data selection and collection, we are ready to start con-
structing the control chart in the following seven steps:

1. Compute X and R.
2. Plot average data.
3. Plot range data.
4. Compute and draw R.
5. Compute and draw X
6. Compute and draw UCL/LCL for R.
7. Compute and draw UCL/LCL for X.

Step 1. Compute Mean and Range Values The mean is the most common
measure of central tendency. It is computed by summing all the scores in a dis-
tribution and then dividing that sum by the total number of scores. The follow-
ing formula is the mathematical expression of the mean:

X-bar = (X1 + X2 + X3 + ... + Xn) / n

where
Xi = individual sample data
n = sample size within each subgroup (the number of cells)

The range is the difference between the lowest and highest readings in the sub-
group. The formula for each (mean and range) is presented here:

R = Xmax − Xmin

where
Xmax, Xmin = the largest and smallest individual readings of X in that subgroup
R = the range between the highest reading in a subgroup and the lowest reading in the same subgroup

Figure 7.13 is an example of an X and R data table. We use the data from sub-
group 14 to calculate an example of the mean and range.

FIGURE 7.13 X and R data table.

Calculating the Mean

X-bar = (X1 + X2 + X3 + ... + Xn) / n = (70 + 75 + 71 + 74 + 73) / 5 = 72.6

Calculating the Range

R = Xmax − Xmin = 75 − 70 = 5
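These two calculations are easy to script. The following is a minimal Python sketch using the subgroup 14 readings shown above:

```python
# Minimal sketch: mean (X-bar) and range (R) for a single subgroup.
# The readings are the subgroup 14 values from the example above.
subgroup = [70, 75, 71, 74, 73]

x_bar = sum(subgroup) / len(subgroup)   # (70 + 75 + 71 + 74 + 73) / 5 = 72.6
r = max(subgroup) - min(subgroup)       # 75 - 70 = 5

print(x_bar, r)                         # 72.6 5
```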

Step 2. Plot Average Data First it is necessary to establish the scales for the
X-bar and R charts, respectively. These general rules for establishing the control
chart scales are helpful, although they may need to be modified in particular
circumstances. In most cases, the control chart program you select will provide
the proper scales. For X charts, the difference between the highest and lowest
values on the scale should be at least twice the difference between the highest
and lowest subgroup average. For the R charts, the scale should extend from
zero to about 1.5 times the largest range encountered.
Plot the data from the averages table on the chart shown in Figure 7.14.

Step 3. Plot Range Data Plot the individual range data using the same data
table as indicated in Figure 7.15.

FIGURE 7.14 Plotting average data.

Step 4. Compute and Draw Range Average Calculate the average of the ranges (R-bar) from the data table used in step 1. The average R-bar is calculated by summing (Σ) the individual values of R (R1 . . . Rk) for each subgroup and dividing that sum by the total number (k) of subgroups.

R-bar = ΣR / k = 144 / 25 = 5.76
After calculating the average of the ranges, use the resulting value of R to
draw the centerline of the range chart, as indicated in Figure 7.16.

Step 5. Compute and Draw X-double-bar Calculate the grand average (X-double-bar) of the process measurement data from the data table by summing (Σ) each subgroup average X-bar and dividing this sum by the total number of subgroups.

X-double-bar = ΣX-bar / k = (X-bar_1 + X-bar_2 + X-bar_3 + ... + X-bar_k) / k = 1753 / 25 = 70.12

After calculating the grand average of the process measurement data, use the resulting grand average (X-double-bar) to draw the centerline of the control chart, as indicated in Figure 7.17.

Step 6. Compute and Draw UCLR and LCLR Calculate and draw the upper
and lower control limits for the range by using the R calculated in step 4 and the
factor (for subgroup size 5) from the control limit factor table in Figure 7.18.

FIGURE 7.15 Plotting range data.

FIGURE 7.16 Compute and draw the range average.



FIGURE 7.17 Compute and draw X.

Using the following equations, the control limit factors from Figure 7.18, and data from the table in Figure 7.13, calculate the upper and lower control limits for range.

LCLR = (LCLR factor)(R-bar) = (0)(5.76) = 0

UCLR = (UCLR factor)(R-bar) = (2.114)(5.76) = 12.18


After calculating the upper and lower control limits of the R charts, apply the
limits to the chart, as shown in Figure 7.19.

FIGURE 7.18 Factors for range control limits.



FIGURE 7.19 Draw UCL and LCL for range chart.

Step 7. Compute and Draw UCLx and LCLx Calculate and draw the upper
and lower control limits for the process measurement averages using the value
of X (calculated in step 5) and the factor (for subgroup size 5) from the control
limit factor chart.
Using the following equations, the A2 factor from Figure 7.20, and data from the table in Figure 7.13, calculate the upper and lower control limits for the averages.

LCLx = X-double-bar − (A2)(R-bar) = 70.12 − (0.577)(5.76) = 66.80

UCLx = X-double-bar + (A2)(R-bar) = 70.12 + (0.577)(5.76) = 73.44

FIGURE 7.20 Factors for average control limits.



FIGURE 7.21 UCL and LCL for averages control chart.

After calculating the upper and lower control limit for the averages chart,
apply the limits to the chart, as indicated in Figure 7.21.
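The seven construction steps can be collected into one small routine. The sketch below is a Python outline, assuming subgroups of size 5 so that the factors A2 = 0.577, D3 = 0, and D4 = 2.114 from Figures 7.18 and 7.20 apply; the function and variable names are illustrative, not taken from the text.

```python
# Sketch: X-bar and R chart centerlines and control limits for subgroups of size 5.
# A2, D3, D4 are the standard control limit factors for n = 5 (Figures 7.18 and 7.20).
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """subgroups: a list of lists, each holding the readings of one subgroup."""
    xbars = [sum(s) / len(s) for s in subgroups]    # subgroup averages (points on the X-bar chart)
    ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges (points on the R chart)

    x_dbar = sum(xbars) / len(xbars)                # grand average: centerline of the X-bar chart
    r_bar = sum(ranges) / len(ranges)               # average range: centerline of the R chart

    lcl_r, ucl_r = D3 * r_bar, D4 * r_bar                      # range chart limits
    lcl_x, ucl_x = x_dbar - A2 * r_bar, x_dbar + A2 * r_bar    # averages chart limits
    return x_dbar, r_bar, (lcl_r, ucl_r), (lcl_x, ucl_x)
```

With R-bar = 5.76 and X-double-bar = 70.12, this reproduces the limits computed in steps 6 and 7 (0 and 12.18 for the range chart, 66.80 and 73.44 for the averages chart).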

Completed Variable Control Charts


The variable control charts for R and X-bar are now complete and ready for inter-
pretation. The completed X-bar and R control chart is demonstrated in Figure 7.22.

FIGURE 7.22 Completed X-bar and R control chart.



FIGURE 7.23 Evaluating variable control charts.

Evaluating Variable Control Charts


The completed X-bar and R control chart is now ready for evaluation. We use
the evaluation rules established in Figure 7.11 to interpret this X-bar and R con-
trol chart.
The control chart in Figure 7.23 is obviously not in control. First, interpret
the range chart. In Figure 7.23, the range chart does not go beyond the upper or
lower control limits. But it does exhibit several process anomalies. The data has
such wide swings that the process average cannot be used to indicate or plan the
variability that can be expected from this process.
At the beginning of the range chart, the process is exhibiting some cycling
above the centerline. This often indicates that the process may be shifting peri-
odically due to shift (personnel) change, time of day, or some other cause of
cycling. The process data then exhibits several data points with zero (0) range.
This is not a normal occurrence and is most often associated with the failure of a
measurement system to record the process data correctly.
The averages control chart is also out of control. It has several points out of
the upper and lower control limits. In the beginning of the control chart, there is
cycling of the data over a number of data points, which is usually associated
with changes of personnel, different equipment, and so on. The process then
plunges below the lower control limit very quickly. This is associated with
equipment failure or possibly a change in supplied materials. The process then
goes directly out of control beyond the upper control limit. This is associated
with someone intervening in the process by making an overcorrection to com-
pensate for the data point below the lower control limit.

ATTRIBUTE CONTROL CHARTS

Although control charts are most often thought of in terms of variables, there
are also versions for attributes. Attribute data have only two values (con-
forming/nonconforming, pass/fail, go/no-go, present/absent), but they can
still be counted, recorded, and analyzed. Some examples are the presence of
a required label, the installation of all required fasteners, the presence of sol-
der drips, or the continuity of an electrical circuit. We also use attribute charts
for characteristics that are measurable if the results are recorded in a simple
yes/no fashion, such as the conformance of a shaft diameter when measured
on a go/no-go gage or the acceptability of threshold margins to a visual or
gage check.
It is possible to use control charts for operations in which attributes are the
basis for inspection. These charts are similar to those for variables, but with
certain differences. If we deal with the fraction rejected out of a sample, the type
of control chart used is called a p chart. If we deal with the actual number reject-
ed, the control chart is called an np chart. If articles can have more than one
nonconformity and all are counted for subgroups of fixed size, the control chart
is called a c chart. Finally, if the number of nonconformities per unit is the quan-
tity of interest, the control chart is called a u chart. There are several charts
available to use when dealing with go/no-go, qualitative data:
p charts. Fraction of parts nonconforming or defective in a sample of varying
size.
np charts. Number of parts nonconforming or defective in a sample of con-
stant size.
c charts. Number of attributes nonconforming or defective within a sub-
group, lot, or sample of constant size.
u charts. Number of attributes per unit nonconforming or defective within a
subgroup, lot or sample area of varying size.

p Control Chart
The p control chart measures the proportion defective. Therefore, the p control
chart is applied to quality characteristics that are attribute data (pass/fail, pres-
ent/absent). This chart provides the capability to evaluate processes when sam-
ple sizes vary throughout the process. (See Figure 7.24.)
This chart is used to control processes by evaluating the percentage rejected
as nonconforming to some specific requirement or specification. It is applied to
quality characteristics or to processes that produce variable data (such as meas-
ured dimensions) with specific pass/fail criteria. The chart has its best applica-
tion in measuring inspection results during a process.
Upon completion of a process analysis and the selection of the critical pro-
cess elements and their metrics, you are ready to collect data and construct a
p control chart. The construction of p control charts involves six steps:

FIGURE 7.24 The p control chart.

1. Select the size, frequency, and number of subgroups.
2. Compute and record the percent of nonconforming or defective parts (d).
3. Determine the scales for the p chart.
4. Compute and plot each p value and calculate the average sample size (n).
5. Compute and draw the process average (p-bar).
6. Compute and draw the process upper and lower control limits (UCL, LCL).

Step 1. Select the size, frequency, and number of subgroups.

 Minimum subgroup sample size = 50 (recommended). The subgroup sizes must be sufficient to detect moderate shifts in performance. A sample of 50 actions should be taken for each subgroup to enable the analyst/manager to interpret the chart for trends and patterns and to identify a process that is not in control.
 Subgroup sample size should not vary by more than 25 percent. The p-bar control chart provides the capability to analyze subgroups of differing sample sizes; however, it is recommended that sample sizes do not vary by more than 25 percent.
 Frequency should correlate with production periods. The subgroup frequency (how often sample data is acquired) must be correlated with production periods. The analyst needs to understand the relationship between the frequency and the production periods (work shifts, machine runs, reporting cycles). To provide a reliable estimate of process performance, the data acquisition period should be long enough to capture likely sources of variation and contain at least 20 subgroups.
 Minimum number of subgroups to establish control limits = 20 (recommended). A minimum of 20 subgroups enables the manager or analyst to determine whether the process is in control and whether the cause of being out of control is special or random.

FIGURE 7.25 Data table for p.
Step 2. Compute and record the percent of non-conforming or defective
parts. To compute the percent failing, we need the following data: the number
of items inspected or tested (n) and the number of failing items (d). This infor-
mation can be applied to a data table as indicated in Figure 7.25.
From the data in Figure 7.25, you can calculate the percent failing as follows:

p = d / n

Record all data and the computed percent failing on a data table, as follows:

p = d / n = 25 / 500 = 0.050
Step 3. Determine the scales for the p chart. The horizontal scale of the
p control chart identifies the subgroup by increments in hours, days, shifts, runs,
and other appropriate units of measure. The vertical scale represents the per-
centage failing and extends from 0 to 1.5 times the highest percentage failing in
any subgroup. Figure 7.26 displays a properly drawn p control chart. Scale is
1.5 times the highest percent failing.

FIGURE 7.26 Scale for the p chart.



FIGURE 7.27 Plot each p value.

Step 4. Compute and plot each p value, calculating the average sample
size (n). From the data in the table used in step 2, plot the values for each subgroup.
Connect the points to visualize patterns and trends, as indicated in Figure 7.27.
Step 5. Compute and draw the process average (p-bar). Calculate and draw
a line for the process average (p), using the following equation and the data
from the p chart in Figure 7.28:
Process average = p-bar = Σd / Σn = total defective / total inspected

Process average = p-bar = 636 / 12,500 = 0.05088

FIGURE 7.28 Compute and draw the p chart average.



Step 6. Compute and draw UCL and LCL. The process control limits are the process average plus and minus the 3-sigma allowance for the common cause variation inherent in any process. The limits determine the parameters within which the process is in statistical control. When p-bar is low and/or n is small, the lower control limit can be a negative number. In these cases, the lower control limit is zero; essentially, then, there is no lower limit. The p control chart upper and lower control limits are calculated as follows.

UCLp = p-bar + 3√[p-bar(1 − p-bar)] / √n-bar

LCLp = p-bar − 3√[p-bar(1 − p-bar)] / √n-bar

where

n-bar = total inspected / number of subgroups

UCLp = 0.05088 + 3√[0.05088(1 − 0.05088)] / √500 = 0.08036

LCLp = 0.05088 − 3√[0.05088(1 − 0.05088)] / √500 = 0.02140

Draw the process upper and lower control limits, as indicated in Figure 7.29.
This control chart indicates the process is in control. There is one anomaly between sample groups 10 and 20, indicating the process trending up and then down. This anomaly should be looked into to determine whether there are any
causes for concern.
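The six p chart steps reduce to a few lines of arithmetic. The sketch below is a Python outline; the (inspected, defective) pair layout for each subgroup is an assumed format, not the table in Figure 7.25.

```python
import math

def p_chart_limits(subgroups):
    """subgroups: list of (n_inspected, n_defective) pairs, one per subgroup."""
    total_inspected = sum(n for n, d in subgroups)
    total_defective = sum(d for n, d in subgroups)

    p_values = [d / n for n, d in subgroups]          # points plotted on the chart
    p_bar = total_defective / total_inspected         # process average (centerline)
    n_bar = total_inspected / len(subgroups)          # average sample size

    sigma_p = math.sqrt(p_bar * (1 - p_bar)) / math.sqrt(n_bar)
    ucl = p_bar + 3 * sigma_p
    lcl = max(0.0, p_bar - 3 * sigma_p)               # a negative lower limit is set to zero
    return p_values, p_bar, lcl, ucl
```

For the totals used above (636 defective out of 12,500 inspected, with an average subgroup size of 500), this returns p-bar = 0.05088, LCL = 0.02140, and UCL = 0.08036.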

FIGURE 7.29 Completed p chart.



FIGURE 7.30 Control chart data table for np.

np Control Chart
The np control chart is used when the actual number of items failing is a better
indicator of the process than the percent failing. One essential requirement for
this use of the np chart is that the sample sizes for the subgroups must all be the
same. (This is a limitation that does not apply to the use of p charts.)

 Measure the number of parts nonconforming or defective in a sample of constant size.
 np control charts are the best tool for measuring discrepancies in lot inspections.

Using the data from Figure 7.30 and the following equations, calculate the
process average and upper and lower control limits:
Process average = np-bar = Σd / k = total defects / number of subgroups = 4.96

Upper control limit (UCL) for np charts = np-bar + 3√[np-bar(1 − p-bar)] = 11.57

Lower control limit (LCL) for np charts = np-bar − 3√[np-bar(1 − p-bar)] = 0

Draw the process average and upper and lower control limits on the np con-
trol chart, as demonstrated in Figure 7.31. This control chart is in control, but it
displays several anomalies that require attention. The trend starting at sample
number 5 is of special concern. This trend would have to be carefully followed
and the process owner would have to intervene before the trend reaches the up-
per control limit.
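A minimal sketch of the np chart arithmetic, assuming a constant subgroup size and a simple list of defect counts (the names are illustrative):

```python
import math

def np_chart_limits(defect_counts, sample_size):
    """defect_counts: number defective in each subgroup; sample_size: the constant subgroup size n."""
    np_bar = sum(defect_counts) / len(defect_counts)  # average number defective (centerline)
    p_bar = np_bar / sample_size                      # corresponding fraction defective

    half_width = 3 * math.sqrt(np_bar * (1 - p_bar))
    ucl = np_bar + half_width
    lcl = max(0.0, np_bar - half_width)               # a negative lower limit is set to zero
    return np_bar, lcl, ucl
```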

c Control Chart
The c control chart is used to measure the number of defects in an inspected
item when the sample size is constant. It is used to measure the number failing
in an inspection item when we are concerned with the variance between items.

FIGURE 7.31 Completed np control chart.

This chart also requires a constant sample or lot size. The chart is applied in two
specific process control situations:

1. Where there is a continuous flow of product and the failures can be ex-
pressed as defects per unit of production
2. Where failures of different types can be found in a single inspection
procedure

In a c chart, the sample size remains fixed (constant), and the chart shows the
total number of defects per item in a sample or subgroup. The c chart is very
easy to construct, but it requires that all subgroups be the same size. First, find
the average number of defects per group, using at least 20 groups. Call this aver-
age c-bar and use it for the centerline on the control chart. Then compute the
control limits.
The best use for c charts is in short studies to ascertain the variation in quality
of a characteristic or piece. These have been used for periodic sampling of pro-
duction where a certain number of defects per unit is tolerable. The c chart can
be used successfully for 100 percent inspection, where the primary aim is to
reduce the cost of rework or scrap. Another good application of c charts is for
acceptance sampling procedures based on defects per unit.
Using the data from Figure 7.32 and the following equations, calculate the
process average and upper and lower control limits:
c-bar = Σd / k = total defects in all subgroups / total number of subgroups = 14.24

UCLc = c-bar + 3√c-bar = 25.56

LCLc = c-bar − 3√c-bar = 2.919

FIGURE 7.32 Control chart data table for c.

Draw the process average and upper and lower control limits on the c control
chart, as demonstrated in Figure 7.33. This control chart indicates a process that
is out of control due to special cause variation, and it exhibits some serious
anomalies.
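Because the c chart needs only the defect counts and a constant subgroup size, the calculation is short. A Python sketch under those assumptions:

```python
import math

def c_chart_limits(defect_counts):
    """defect_counts: total defects found in each constant-size subgroup (use at least 20 groups)."""
    c_bar = sum(defect_counts) / len(defect_counts)   # average defects per subgroup (centerline)
    ucl = c_bar + 3 * math.sqrt(c_bar)
    lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))      # a negative lower limit is set to zero
    return c_bar, lcl, ucl
```

With c-bar = 14.24, this gives approximately 25.56 and 2.92, matching the limits in the example above.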

u Control Chart
The u control chart is used to measure the percent not conforming (failing) in an
inspection subgroup when we are concerned with the variance between groups.
It is used to count the defects per unit in production runs or lots of varying sizes.
This chart does not require a constant sample or lot size. It is applied in the same
two instances as the c control chart:

1. Where there is a continuous flow of product, and the failures can be ex-
pressed as a ratio to total product
2. Where failures of different types can be found in a single inspection
procedure

FIGURE 7.33 Completed c control chart.



FIGURE 7.34 Control chart data table for u.


In a u chart, the sample size is flexible. First, find the number of defects per unit in each group (u). Next, calculate the process average (u-bar) and use it for the centerline on the control chart. Plot the process data and compute the control limits. The best use of u control charts is to ascertain the variation in quality of a characteristic or piece.
Using the data from Figure 7.34 and the following equations, calculate the
process average and upper and lower control limits:
u-bar = Σd / Σn = total defects / total inspected = 0.2659

UCLu = u-bar + 3√(u-bar / n_i) = 0.611

LCLu = u-bar − 3√(u-bar / n_i) = 0

Draw the process average, and upper and lower control limits on the u control
chart, as demonstrated in Figure 7.35.

FIGURE 7.35 Completed u control chart.



Notice the effect of uneven sample sizes on this control chart. This makes
evaluation more difficult. This is a good example of why you should not allow
your samples on a u control chart to vary by more than 25 percent.
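Because the limits change with each subgroup size, a u chart needs per-subgroup limits. A minimal Python sketch, assuming each subgroup is recorded as a (units inspected, defects found) pair:

```python
import math

def u_chart_limits(subgroups):
    """subgroups: list of (n_units, n_defects) pairs; sample sizes may vary between subgroups."""
    total_units = sum(n for n, d in subgroups)
    total_defects = sum(d for n, d in subgroups)
    u_bar = total_defects / total_units               # process average (centerline)

    limits = []
    for n_i, _ in subgroups:                          # limits are recomputed for each subgroup size
        half_width = 3 * math.sqrt(u_bar / n_i)
        limits.append((max(0.0, u_bar - half_width), u_bar + half_width))
    return u_bar, limits
```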
There are additional control charts that can be used for more complex and
sophisticated processes. We discuss these control charts in the advanced topics
in Chapter 10.

PROCESS CAPABILITY ANALYSIS

The basic statistical application in process control is to establish stability in the process and maintain that state of control over an extended period. It is equally
important to adjust the process to the point where virtually all of the product and
services meet specifications. The latter situation relates to process capability
analysis.
Once we have established stability, it follows that we must adjust the process
to a level where the output will conform to specifications. A state of control
usually exists when process control charts (VOP) do not show points out of con-
trol over an interval of 20 subgroups. Once this control is established, we can
analyze process capability to determine conformance to specifications (VOC).
The primary function of a process control system is to provide statistical signals
when special causes of variation are present and to enable appropriate action to
eliminate those causes and prevent their reappearance.
Process capability is determined by the total variation that comes from com-
mon causes and the minimum variation that remains after all special causes
have been eliminated. Thus, capability represents the performance potential of
the process itself, as demonstrated when the process is operating in a state of
statistical control.
We measure capability by the proportion of output that is within product
specification tolerances. Since a process in statistical control can be described
by a predictable distribution, we can express capability in terms of this distri-
bution and evaluate the proportion of out-of-specification parts realistically. If
this variation is excessive, actions are required to reduce the variation from
common causes to make the process capable of consistently meeting customer
requirements, as indicated by the upper and lower specification limits (USL
and LSL). Figure 7.36 demonstrates the relationship between process control
chart limits (UCL and LCL), specification requirements, and process
capability.
The following apply to the relationship between process capability and the
control charts:

 Once you have brought a process under statistical control, you can calcu-
late a process capability index.
 In addition to the process being under statistical control (so that it is repeat-
able or predictable), the raw data must be normally distributed.
 This is because you will be using an estimate of the process standard deviation to calculate the process capability.

FIGURE 7.36 Process capability and control chart relationship.

Figure 7.36 demonstrates the relationship between process capability, control charts, and the normal curve.

Cp Index
The most commonly used capability indices are Cp and Cpk. Capability of process, Cp, is the ratio of tolerance to six sigma (6σ). The formula is:

Cp = tolerance / 6σ

Tolerance = upper spec − lower spec

Estimate σ using R-bar/d2 (choose d2 from Figure 7.18 based on subgroup size). The 6σ in the Cp formula comes from the fact that, in a normal distribution, 99.73 percent of the parts will be within a 6σ (±3σ) spread when only random variation (common cause) is occurring. As an example:

Cp = tolerance / 6σ = 4.0 / 3.0 = 1.33

FIGURE 7.37 Cp index relationships.

Only dispersion, or spread of the distribution, is measured by Cp. It is not a measure of centeredness, that is, of where the distribution lies in relation to the midpoint (nominal or target value). Figure 7.37 shows two distributions with the same
specification limits and with the same Cp index value. In the top distribution,
almost all product being produced is within specification; in the lower distribu-
tion, a significant number of products would be out of tolerance. This is why Cp
is not used alone as a measure of capability; Cp only demonstrates how good the
process could be if it were centered and doesn’t take into account the location of
the distribution, as is evident from the absence of the mean in its formula. Usually, Cp is
used with Cpk. It is easier to think of Cp as ‘‘Can I?’’ In other words, ‘‘Can I
meet the customer specifications with the variability exhibited by my process?’’
Figure 7.37 demonstrates the hazard of using only the Cp index.

Cpk Index
Whereas Cp is only a measure of dispersion, Cpk is a measure of both dispersion
and centeredness. That is, the formula for Cpk takes into account both the spread
of the distribution and where the distribution is in regard to the specification
midpoint. As we see in the following formulas, reference is made to the location
of the distribution by using the mean. Cpk equals the lesser of:

Cpu = (USL − X-bar) / 3σ    or    Cpl = (X-bar − LSL) / 3σ
We choose the lesser of the two values, calculated as the Cpk index. Using
this value, we find out how capable our process is on the worst side of the dis-
tribution. Using the same data used for the Cp calculations, we calculate the

value for Cpk:

Cpk = min(Cpu, Cpl)

Cpk = min[(USL − X-bar) / 3σ, (X-bar − LSL) / 3σ]

Cpk = min[(5 − 2) / 1.5, (2 − 1) / 1.5]

Cpk = min(2.0, 0.67)

Cpk = 0.67

The greater the value of Cpk, the better. A Cpk value of greater than 1 means that the 6σ (±3σ) spread of the data falls completely within the specification limits. A Cpk between 0 and 1 means that part of the 6σ spread falls outside the specification limits. A negative Cpk indicates that the mean of the data is not within the specification limits. Figure 7.38 presents a process capability example.

FIGURE 7.38 Process capability example.



Using the data from this example, we can calculate Cp and Cpk and see clearly how the two indices work.

Cp = tolerance / 6σ = (5 − 1) / 6(1.56/2.326) = 4 / 6(0.671) = 0.994

Cpu = (USL − X-bar) / 3σ = (5 − 2) / 3(1.56/2.326) = 3 / 3(0.671) = 1.49

Cpl = (X-bar − LSL) / 3σ = (2 − 1) / 3(1.56/2.326) = 1 / 3(0.671) = 0.50 = Cpk

In this example, the process capability Cp is 0.994 and the Cpk is 0.50.
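The same arithmetic in a short Python sketch; the specification and process values (USL = 5, LSL = 1, process mean = 2, R-bar = 1.56, d2 = 2.326 for subgroups of five) are read off the worked example above.

```python
# Capability indices for the worked example above.
USL, LSL = 5.0, 1.0                    # specification limits
x_bar, r_bar, d2 = 2.0, 1.56, 2.326    # process mean, average range, d2 factor for n = 5

sigma = r_bar / d2                     # estimated process standard deviation (about 0.671)
cp = (USL - LSL) / (6 * sigma)         # 0.994: measures spread only
cpu = (USL - x_bar) / (3 * sigma)      # 1.49
cpl = (x_bar - LSL) / (3 * sigma)      # 0.50
cpk = min(cpu, cpl)                    # 0.50: measures spread and centering

print(round(cp, 3), round(cpk, 2))     # 0.994 0.5
```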

MEASUREMENT SYSTEMS EVALUATION (MSE)

A measuring device will generally introduce an error into the value of a mea-
surement of any physical quantity. To characterize these errors, we can use
measurement systems evaluation (MSE), which can be defined as a collection
of statistical techniques for characterizing measurement error. MSE assesses
the variation of the measurement system and confirms that the measurement
system can:

 Measure consistently and accurately


 Adequately discriminate between parts
 Adequately depict process variability

MSE assesses the variation of the measurement system and determines whether the measurement system is acceptable for monitoring the process.
MSE is a tool that can characterize the quality of a set of measurements. It is a
collection of statistical techniques for characterizing measurement error (bias
and repeatability). A measuring device will generally introduce an error into the
value of a measurement of any physical quantity. MSE is used to characterize
these errors.
Before reviewing the steps involved in MSE, it is important to understand the
terminology involved. Terms that are relevant to MSE are listed in Table 7.2.

Repeatability
Repeatability is the variability portion of the measurement system that re-
fers to the machine or device used to measure the part. Repeatability can
be demonstrated using Figure 7.39. In this example, measurement system 1
exhibits good repeatability, and measurement system 2 exhibits poor
repeatability.

TABLE 7.2

Repeatability: The variation in measurements obtained with one instrument when used several times by one appraiser measuring identical characteristics on the same part.
Reproducibility: The variation in the average of the measurements made by different appraisers using the same instrument measuring identical characteristics on the same part.
Bias: The difference between the observed average of measurements and the reference value. It is an accuracy-related term.
Stability: The total variation in measurements obtained with a measurement system on the same master or parts when measuring single characteristics over an extended period of time.
Linearity: The difference in bias values through the expected operating range of the instrument.

Reproducibility
Reproducibility is the variability portion of the measurement system that refers
to the operators. Reproducibility can be demonstrated using Figure 7.40. In this
example, measurement system 1 exhibits good reproducibility, and measure-
ment system 2 exhibits poor reproducibility.

FIGURE 7.39 Repeatability.



FIGURE 7.40 Reproducibility.

Bias
Bias is the systematic offset from the true value. Although it is not ideal to have
a bias, you can compensate for it by applying the inverse of the bias to your
process readings. Figure 7.41 provides a graphic representation of bias.

Linearity
Linearity refers to the consistency of the measurement bias across the operating
range. Linearity problems, although related to the bias, are more serious and
must be dealt with immediately. Linearity issues require you to track where in

FIGURE 7.41 Bias.



the operating range each particular part is produced so that the proper offset can
be applied. It is suggested that linearity issues be resolved prior to proceeding
with any further measurements.

Variance
The variance of the observed or total process variation is expressed mathematically as follows:

σT² = σP² + σM²

where

σT² = total variance
σP² = process variance
σM² = measurement variance

If we look at the measurement portion, it is expressed mathematically as follows:

σM² = σRpt² + σRpd²

If our measurement system shows a high contribution score, then by breaking out these components we are able to focus our efforts on the part of the system that is broken.
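A minimal sketch of this decomposition, expressing each component as a percent contribution to the total observed variance (the function name and inputs are illustrative):

```python
def percent_contribution(var_process, var_repeatability, var_reproducibility):
    """Break the total observed variance into part-to-part and measurement components."""
    var_measurement = var_repeatability + var_reproducibility   # sigma_M squared
    var_total = var_process + var_measurement                   # sigma_T squared = sigma_P squared + sigma_M squared
    return {
        "part-to-part %": 100 * var_process / var_total,
        "repeatability %": 100 * var_repeatability / var_total,
        "reproducibility %": 100 * var_reproducibility / var_total,
        "total gage R&R %": 100 * var_measurement / var_total,
    }
```

Feeding in the variance components reported later in Table 7.4 (1.43965, 0.01000, and 0.00167) returns roughly 99.2, 0.69, 0.11, and 0.80 percent, the same percent contribution column shown there.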

Measurement System Requirements


 Stability. Measurement system must be in statistical control (statistical
stability). Variation is from common cause only.
 Consistency. Variability of measurement system is small compared with
manufacturing process and specification limits. A measurement system is
consistent when results for operators are repeatable and results between
operators are reproducible.
 Resolution. Increments of measurement must be small relative to both pro-
cess variability and specification limits (rule of thumb is increments less
than 10 percent of the smaller of either process variability or specification
limits).

Preparing for the MSE


1. Select multiple operators and samples, where each operator measures each
sample multiple times.

2. Select samples that are representative of the process. Here it is important to try to spread the samples across the tolerance range.
3. Perform measurements in random order, which will ensure that drift or
changes that occur will be spread randomly throughout the study.
4. Ensure that the measurement instrument is at the required resolution.
5. Make readings to the nearest one-half of the smallest graduation.
6. Ensure that each appraiser uses same procedure.

GAGE REPRODUCIBILITY AND REPEATABILITY (R&R)

There are two gage R&R types, crossed and nested, as shown in Table 7.3.
Gage R&R analysis is used to compare the measurement system variation to
total process variation or tolerance. If the measurement system variation is a
large proportion of the total variation, then the system is not capable of distin-
guishing between parts. This allows a gage R&R study to answer questions such
as the following:

 Is the variability of the measurement system small compared with the man-
ufacturing process variability?
 Is the variability of my measurement system small compared with the pro-
cess specification limits?
 Is the variability in my measurement system caused by differences between
operators (reproducibility), or is it attributable to the instruments (repeat-
ability) used?
 Is my measurement system capable of discriminating between different
parts?

A good measurement system is one in which the part-to-part variation makes up most of the total variation, with gage R&R making up less than 10 percent of the total variation. The measurement system examined below looks very good on this basis. Let's now take a graphical view of the system and its data; the graphs that follow were produced using the Minitab program.
If you need to use destructive testing, you must be able to assume that all parts
within a single batch are identical enough to claim that they are the same part. If
you are unable to make that assumption, then part-to-part variation within a
batch will mask the measurement system variation.

TABLE 7.3

Crossed: Used when the part is not destroyed in the measurement.
Nested: Used when the part is destroyed in the measurement.

If you can make that assumption, then choosing between a crossed or nested
gage R&R study for destructive testing depends on how your measurement pro-
cess is set up. If all operators measure parts from each batch, then use gage
R&R study (crossed). If each batch is measured only by a single operator, then
you must use gage R&R study (nested). In fact, whenever operators measure
unique parts, you have a nested design.

Crossed Gage R&R


Crossed gage R&R analysis is used to compare the measurement system varia-
tion to total process variation or tolerance. If the measurement system variation
is a large proportion of the total variation, then the system is not capable of
distinguishing between parts. This allows a crossed gage R&R study to answer
questions such as:

 Is the variability of the measurement system acceptable?


 How much variability is caused by operators?
 Is the measurement system capable of discriminating between different
parts?

Specifically, a crossed gage study identifies:

 The amount of total measurement system variation


 Part-to-part variation
 Measurement system variation

Crossed gage R&R is used when testing does not destroy the product (i.e., it is
nondestructive). It tests for the following types of variation:

 Total measurement system variation


 Within-operator variation (repeatability)
 Among-operators variation (reproducibility)
 Part-to-part variation

Crossed Gage R&R X-bar and R Method


The X-bar and R method of performing MSE relies on process control and
graphical analysis to determine whether your MSE is capable of measuring the
unit under test. There are five components to the crossed gage R&R study using
the X-bar and R method.

1. Components of variation
2. X-bar and R chart
3. Operator by part interaction
4. By operator
5. Measurement by part

FIGURE 7.42 Components of variation chart.

We review and evaluate each of these components.

Components of Variation The components of variation chart is a graphical representation of the gage R&R table. Each cluster of bars represents a source
of variation. By default, each cluster will have two bars, corresponding to per-
cent contribution and percent study variance. If you add a tolerance and/or
historical sigma, bars for percent tolerance and/or percent process are added.
In a good measurement system, the largest component of variation is part-to-
part variation, as demonstrated in Figure 7.42. In this figure, the components
of variation are good, with the largest contribution to variation being part-
to-part.

Percent Contribution The percent contribution is 100 times the variance compo-
nent for that source divided by the total variance. The higher the part-to-part
percent contribution the better. If part-to-part percent contribution is 99.2, then
99.2 percent of the variation is between parts, indicating a good measurement
system.

Percent Study Variation The percent study variation is 100 times the study var-
iation for that source divided by the total study variation. This is the component
variation expressed in standard deviation terms. It is used to compare the measurement system variation to the total variation.

Percent Tolerance The percent tolerance is 100 times the study variation for
that source divided by the process tolerance. Percent tolerance compares the
measurement variation to customer specs.

FIGURE 7.43 X-bar and R chart by operator.

Based on the percent contribution, percent study variation, and percent toler-
ance, Figure 7.42 represents a very good measurement system capability. There
is very little (less than 10 percent) contribution to variability from repeatability and reproducibility (measurement system and operator).

X-bar and R Chart by Operator The X-bar and R chart for MSE provides a
graphical assessment of the consistency of the operator measurements. This is
demonstrated in Figure 7.43.

R Chart The R chart illustrates the repeatability for each operator. The plotted
points represent for each operator the difference between the largest and smallest
measurements on each part. If the measurements are the same, then the range
equals zero. Because the points are arranged by operator (1 and 2), you can com-
pare the consistency of each operator. If any of the points on the graph go above
the upper control limit (UCL), then that operator is having problems measuring
parts consistently. If the operators are measuring consistently, then these ranges
should be small relative to the data, and the points should stay in control.

X-bar Chart The X-bar chart compares the part-to-part variation to repeatability.
Parts chosen for a gage R&R study should represent the entire range of possi-
ble parts; this graph should ideally show lack of control. Lack of control exists
when many points are above the upper control limit and/or below the lower con-
trol limit. For the parts data, there are many points beyond the control limits,
which indicates the measurement system is adequate.

FIGURE 7.44 Operator by part interaction.

Evaluating the MSE X-bar and R chart in Figure 7.43, you can see that the R
chart is in control and that the operators are consistent. The X-bar chart is out of
control, as it should be, and the operators' measurements are consistent here also. This is an indication of good operator measurement consistency.

Operator by Part Interaction Plot The operator by part interaction plot dis-
plays the variation in measurement. All measurements are arranged by part. The
operator by part interaction plot shows the average measurements taken by each
operator on each part in the study, arranged by part. Ideally, the lines will follow
the same pattern and the part averages will vary enough that differences be-
tween parts are clear. The circled cross symbol represents the mean. If the lines
are virtually identical, the operators are measuring the parts the same. If one line
is consistently higher or lower than the other, one operator is consistently meas-
uring parts higher or lower than the other. If lines are crossed or not parallel,
there is an interaction between the part being measured and the operator. This is
demonstrated in Figure 7.44.
Evaluating the operator by part interaction in Figure 7.44 indicates that the
operators are measuring the parts consistently.

By Operator Graph The by operator graph shows all study measurements arranged by operator. Dots represent the measurements; the circled cross sym-
bol represents the means. The line connects the average measurements for each
operator. If they are parallel to the x-axis, the operators are measuring the parts
similarly. If they are not parallel to the x-axis, the operators are measuring the
parts differently.
In Figure 7.45, the lines are parallel, indicating again that the operators are
measuring parts similarly.

FIGURE 7.45 Measurement by operator.

Measurement by Part The by part graph shows all of the measurements taken in the study, arranged by part. The measurements are represented by dots, the
means by the circled cross symbol. The line connects the average measurements
for each part. Ideally, multiple measurements for each individual part have little variation (the dots for one part will be close together), and averages will vary enough that differences between parts are clear. The by part plot is presented in Figure 7.46. The mea-
surement by part is tightly distributed around each sample, indicating the MSE is
doing its job well. Next, we review the ANOVA method of assessing MSE.

ANOVA Gage R&R Method


The X-bar and R method was developed first because calculations stem from
control charts and are simpler. However, the ANOVA method is more accurate
for the following reasons:

FIGURE 7.46 Measurement by part.



TABLE 7.4 Gage R&R

Source            VarComp   % Contribution   StdDev (SD)   Study Var (6 × SD)   % Study Var (%SV)
Total gage R&R    0.01167   0.80             0.10801       0.64807              8.97
Repeatability     0.01000   0.69             0.10000       0.60000              8.30
Reproducibility   0.00167   0.11             0.04082       0.24495              3.39
  Oper            0.00167   0.11             0.04082       0.24495              3.39
Part-to-part      1.43965   99.20            1.19986       7.19913              99.60
Total variation   1.45132   100.00           1.20471       7.22824              100.00

Number of distinct categories = 15.

 The ANOVA method accounts for the possible interaction between opera-
tors and parts whereas the X-bar and R method does not.
 The variance components used by the ANOVA method are better estimates
of variability than the ranges used by the X-bar and R method.

Now let’s look at Table 7.4.

Total gage R&R. All of the variability in the response except that due to dif-
ferences in parts. This takes into account variability due to the gage, the
operators, and the operator by part interaction. Note that in this study, the
total gage R&R is small.
Repeatability. The variability in the measurements due to the measuring de-
vice is less than 1 percent, very small.
Reproducibility. The variability in measurements due to operators and the
operator by part interaction. Here again, the contribution to measurement
system variability is small.
Operator. The operator contribution to variability is very small.
Part-to-part. The variability in measurements that is due to different parts.
Ideally, most of the variability will be part-to-part variability. As you can
see in Table 7.4, the part-to-part contribution is significant. That is where
the variability belongs.
Number of distinct categories. This number is the number of distinct catego-
ries of parts that the process is currently able to distinguish. The lower the
total gage R&R, the higher this number will be. If a process is incapable of
distinguishing at least five types of parts, it is probably not adequate.
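The summary statistics in Table 7.4 can be reproduced from the two top-level variance components. The sketch below is a Python outline; the square-root-of-two rule of thumb for the number of distinct categories is a common convention and is an assumption here, since the text does not state that formula.

```python
import math

def gage_rr_summary(var_gage_rr, var_part_to_part):
    """Summarize a gage R&R study from its variance components."""
    var_total = var_gage_rr + var_part_to_part
    sd_gage = math.sqrt(var_gage_rr)
    sd_part = math.sqrt(var_part_to_part)
    sd_total = math.sqrt(var_total)

    pct_contribution = 100 * var_gage_rr / var_total   # measurement share of the variance
    pct_study_var = 100 * sd_gage / sd_total           # measurement share of the spread (SD terms)
    ndc = int(math.sqrt(2) * sd_part / sd_gage)        # number of distinct categories (rounded down)
    return pct_contribution, pct_study_var, ndc
```

For the values in Table 7.4 (total gage R&R variance 0.01167 and part-to-part variance 1.43965), this gives about 0.80 percent contribution, 8.97 percent study variation, and 15 distinct categories.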

Nested Gage R&R


Use gage R&R study (nested) when each part is measured by only one operator,
such as in destructive testing. In destructive testing, the measured characteristic
is different after the measurement process than it was at the beginning. Because of this, no part-by-part charts are created for nested gage R&R. Crash testing, missile launch, and fuel burning temperatures are examples of destructive testing.

FIGURE 7.47 Nested gage R&R. (Gage name: UV/Vis; date of study: 4/24/02; reported by: NLF; tolerance: 10. Panels: components of variation; R chart by operator with R-bar = 0.09, UCL = 0.2941, LCL = 0; X-bar chart by operator with mean = 34.07, UCL = 34.23, LCL = 33.90; by part (operator); and by operator.)

TABLE 7.5 Gage R&R

Source            VarComp   % Contribution   StdDev (SD)   Study Var (6 × SD)   % Study Var (%SV)
Total gage R&R    1.56325   100.00           1.25030       7.50180              100.00
Repeatability     0.41148   26.32            0.64147       3.84881              51.31
Reproducibility   1.15177   73.68            1.07321       6.43923              85.84
Part-to-part      0.00000   0.00             0.00000       0.00000              0.00
Total variation   1.56325   100.00           1.25030       7.50180              100.00

Number of distinct categories = 1.
The graphs shown in Figure 7.47 are evaluated exactly the same way the crossed gage R&R studies are evaluated. Notice in this figure that the components of variation are not in the part-to-part column. This indicates a poor measurement system capability, with significant variability attributed to repeatability and reproducibility. Also, the X-bar and R chart indicates clearly that the operators are not reading the measurements in the same way. The part by operator plot also appears scattered. Operator 2 is clearly doing something different, possibly not following the procedure. Let's take a look at the ANOVA method for this gage R&R, shown in Table 7.5.
Evaluation of the ANOVA table confirms the X-bar and R method. This is not a capable measurement system. There is too much variability in both repeatability and reproducibility. Note that the variability attributed to part-to-part is 0 percent.

TRANSACTIONAL MSE

Production processes are not the only systems that need to be verified. Many
transactional processes can benefit from MSE. Let’s use contract approval as an
example. Errors and missing data on a contract require follow-up calls and have
a negative effect on the overall cycle time for contract approval.
One measurement tool that can be used for this purpose is known as the attri-
bute gage R&R. In this case, the test measures the ability of the inspectors to
find and fix errors on contracts.
The four items measured are:

1. Repeatability
2. Accuracy

3. Overall repeatability and reproducibility
4. Overall repeatability, reproducibility, and accuracy

Repeatability. This looks at an individual inspector's ability to get the same results when reviewing the same contract multiple times.
Accuracy. This measures how well the individual inspector can match the
known, or standard, value.
Overall repeatability and reproducibility. This compares all of the inspectors
to each other.
Overall repeatability, reproducibility, and accuracy. This compares all in-
spectors to each other, including how well they match the known value.

Contract Approval Example


1. A defect is any incorrect or missing data that requires a follow-up call to
resolve.
a. Any defects found = Fail
b. No defects found = Pass
2. Have two or more inspectors review a minimum of 20 contracts, and have
each contract reviewed at least twice by the same inspector.
3. Have the contracts reviewed in random order by the inspectors on both the
first pass and the second pass.
4. After the inspectors have recorded their results, transfer the data to the
attribute gage R&R test form.
5. Have an expert or master evaluator fill in the column for the known value.
This is the standard against which you will judge your inspectors.

Figure 7.48 represents the data collected during the attribute measurement
systems analysis.
FIGURE 7.48 Attribute MSE data table.

Scoring for the Attribute Gage R&R

1. Repeatability = the number of times out of 20 pairs that an individual inspector rates the same contract the same way.

(x / 20) × 100 = repeatability %

> 90% = acceptable
80% to 90% = marginal
< 80% = unacceptable

2. Accuracy = the number of times out of 20 pairs that an individual inspector matches the known or standard.

(x / 20) × 100 = accuracy %

3. Overall repeatability and reproducibility = the number of times out of 20 pairs that all inspectors rate the same contract the same way.

(x / 20) × 100 = overall R&R %

4. Overall R&R and accuracy = the number of times all inspectors rate the contract the same and match the known value as well.

(x / 20) × 100 = overall R&R and accuracy %

Example for scoring an attribute gage R&R
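A minimal Python sketch of this scoring, assuming each inspector's two passes over the 20 contracts are recorded as lists of pass/fail calls keyed by inspector name; the data layout and names are hypothetical, and a pair is counted as matching the standard only when both of its readings do.

```python
def attribute_gage_scores(first_pass, second_pass, known):
    """first_pass / second_pass: {inspector: [call per contract]}; known: the expert's call per contract."""
    n = len(known)                                     # typically 20 contracts
    inspectors = list(first_pass)
    scores = {}

    for insp in inspectors:
        pairs = list(zip(first_pass[insp], second_pass[insp]))
        repeat = sum(a == b for a, b in pairs)                          # same call on both reviews
        accurate = sum(a == b == k for (a, b), k in zip(pairs, known))  # and the call matches the standard
        scores[insp] = {"repeatability %": 100 * repeat / n,
                        "accuracy %": 100 * accurate / n}

    # Overall scores: every inspector gives the same call on both passes (and matches the standard).
    all_same = all_match = 0
    for j, k in enumerate(known):
        calls = {first_pass[i][j] for i in inspectors} | {second_pass[i][j] for i in inspectors}
        if len(calls) == 1:
            all_same += 1
            if calls.pop() == k:
                all_match += 1
    scores["overall R&R %"] = 100 * all_same / n
    scores["overall R&R and accuracy %"] = 100 * all_match / n
    return scores
```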

KEY POINTS

Process Measurement
Process measurement uses statistical measures to collect, interpret, and communi-
cate data. Data analysis, measurement, and metrics (quantitative analysis) are ex-
tremely important to managing processes and projects. In order for this to be
successful, the appropriate data must be determined, collected, and then analyzed.

Types of Data
Attribute data comes from counting, such as defective units, defects, anomalies,
conditions present or not present, and so forth.
Variable data comes from measurements, such as temperature, weight, dis-
tance, and so forth. Variable data is what you would call quantitative. This type
of data can be thought of as infinitely variable.

Sources of Data
Primary (active) data. Current data collected under known, controlled condi-
tions. Control is a state in which all special causes of variation have been
removed from a process.
Secondary (passive) data. This is data not directly collected by the team, but
rather collected from historical databases or other external sources, such as
logs or field service reports.

Statistical Process Control Charts


Statistical process control (SPC) uses statistical tools and techniques to measure
and evaluate the performance of a process. SPC focuses on the variability in a
process. This variability is due to common causes (randomly occurring variations) or special causes (assignable events). Special causes will result in an out-of-
control condition. When special causes are identified, a decision can be made to
adjust the process in order to bring it back into control. Common cause variation
can be eliminated only by a capital change in the process (i.e., a change in the
equipment or procedures).

Control Charts Analysis


Control chart analysis determines whether the inherent process variability and
the process average are operating at stable levels, whether one or both are out
of statistical control (not stable), and whether some type of appropriate action
needs to be taken. Another purpose of using control charts is to distinguish
between the inherent, random variability of a process and the variability attrib-
uted to an assignable cause. The sources of the random variability are often re-
ferred to as common causes, whereas assignable cause variations are thought of
as special causes.

Variation
To control and improve a process, we must trace the total variation back to its
sources. Again, the sources are common cause and special cause variation.
The factors that cause the most variability in the process are the main factors
found in cause-and-effect analysis charts: people, machines, methodology, ma-
terials, measurement, and environment. These causes can result from special
causes, or they can be common causes inherent in the process.

Types of Control Charts and Applications


Variable Control Charts
X-bar and R charts are used to measure and control processes whose characteristics
are continuous variables such as weight, length, ohms, time, or volume.

Attribute Control Charts


Although control charts are most often thought of in terms of variables, there are
also versions for attributes. Attribute data are those that have only two values
(conforming/nonconforming, pass/fail, go/no-go, present/absent), but they can
still be counted, recorded, and analyzed.
The p and np charts are used to measure and control processes displaying
attribute characteristics in a sample. The p charts represent the number of fail-
ures as a fraction; the np charts express the failures as a number.
The c and u charts are used to measure the number or proportion of defects in
a single item. A c chart is applied when the sample size or area is fixed; a u chart
is applied when the sample size or area is not fixed.

Specification Limits versus Control Limits


Specification limits are used when specific parametric requirements exist for a
process, product, or operation. These limits usually apply to the data and are the
pass/fail criteria for the operation. They differ from statistical control limits in
that they are prescribed for a process rather than resulting from the measure-
ment of the process. Specifications can be thought of as the voice of the custom-
er (VOC), whereas control limits are considered the voice of the process (VOP).

Process Capability Analysis


The basic statistical application in process control is to establish stability in the
process and maintain that state of control over an extended period. It is equally
important to adjust the process to the point where virtually all of the product and
services meet specifications. The latter situation relates to process capability
analysis.

Measurement Systems Evaluation (MSE)


A measuring device will generally introduce an error into the value of a measure-
ment of any physical quantity. To characterize these errors, measurement systems
evaluation (MSE) can be used. MSE can be defined as a collection of statistical
techniques for characterizing measurement error. MSE assesses the variation of
the measurement system and confirms that the measurement system can:

 • Measure consistently and accurately
 • Adequately discriminate between parts
 • Adequately depict process variability
8
ANALYZE AND IMPROVE EFFECTIVENESS

In reviewing statistical analysis, we need to address the purpose of statistical
analysis: why it is used and how it can be used. We also need to understand
what statistical analysis is not. For example, it is not a method of proof or a
substitute for thinking. Our examination of statistical analysis then continues
with a review of the formal procedure involved. Understanding how statistical
analysis is conducted brings into the conversation the null hypothesis and how
it differs from the alternative hypothesis. In this chapter, we also look at the
types of statistical tests that exist, one-tailed versus two-tailed tests, what
makes them different, how they are used, and when to use one versus the other.
Risk assessment factors must be considered when working through a statistical
analysis. We also look at test planning and the concepts of power and sample
size, especially how they are used with Minitab. The statistical tools addressed
in this chapter are:

 • Analysis of variance
 • Linear contrasts
 • Design of experiments

ANALYSIS OF VARIANCE

Analysis of variance (ANOVA) is a statistical technique by which the source of


variability within a process is identified. ANOVA is widely used in industry to
help identify the source of potential problems in a production process and iden-
tify whether variation in measured output values is due to variability between
various manufacturing processes or within them. By varying the factors in a
predetermined pattern and then analyzing the output, an accurate assessment
can be made about the cause of variation in a process. Figure 8.1 presents the
concepts of between and within variation as applied to process treatments.

                                                   A            B
  No variance between or within treatments:        75, 75, 75   75, 75, 75
  Variance only between treatments:                75, 75, 75   77, 77, 77
  Variance both between and within treatments:     75, 73, 76   77, 77, 72

FIGURE 8.1 Concepts of variation.
ANOVA tells us whether the variance we see in our data is significant and
measures that significance. ANOVA accomplishes this by measuring the vari-
ance of different treatments and treatment levels.
A treatment is a process input such as time, temperature, cost, a supplier, or a
process. Treatments have specific values called input variables, or independent
variables. These are the variables that we can control as process inputs and
levels. The concept of treatments and levels is demonstrated in Figure 8.2.
FIGURE 8.2 Process treatment matrix. Each treatment (e.g., supplier, equipment,
temperature) is set at two levels (A1/A2, B1/B2, C1/C2), and each experimental
run is a treatment combination such as A1B1C1 or A2B1C2.

Hypothesis Testing
In dealing with these types of issues, it is best to turn them into a statistical prob-
lem in order to best analyze them. We do this by using hypothesis testing. When
performing an ANOVA, the assumption is always made that the population is ho-
mogeneous. This is done by creating a statement of "equality" or "no change,"
which we call the null hypothesis. It is designated by H0. The null hypothesis is a
statement that the means (μ) of the population for all treatments are equal:

    H0: μA = μB = μC

The opposite of the null hypothesis is the alternative hypothesis. It is desig-
nated by Ha. The alternative hypothesis is that at least one of the population
means is not equal:

    Ha: μA ≠ μB ≠ μC

Fisher’s F Statistic
ANOVA determines whether the mean values for several treatments are equal by
examining population variances using a value known as the F statistic. The F
statistic is based on the evaluation of the variance (s²) of the data. The F statistic
is so named in honor of the statistician R. A. Fisher.
ANOVA compares two estimates of this variance: one estimate attributable to
the variance within treatments (s²within), which is also called error, and one esti-
mate from between treatment means (s²treatment).
We calculate the first estimate from the variance within all the data from sev-
eral distinct treatments or different levels of one treatment. This within/error
treatment estimate is the unbiased estimate of variance that remains the same
whether the means of the treatments are the same or different. It is the average,
or mean, of the variances found within all the data.
The second estimate of the population variance is calculated from the vari-
ance between the individual treatment means (Streatments). This estimate is a
true representation of the variance only if there is no significant difference
between it and the variance within sample means (Swithin). Fisher’s F statistic
measures that difference in means based on the ratio of the variance between
and the variance within treatments, and we compare that ratio to the critical
value of F (Fcritical).
The computation of ANOVA can be complex and tedious. In most cases, the
analysis is accomplished using a software program (and this approach is
strongly recommended). However, for the purpose of gaining a firm understand-
ing of variance, we review the manual processing of ANOVA calculations, start-
ing with the one-way ANOVA.

ONE-WAY ANOVA

One-way ANOVA provides for the analysis of two populations with a single treat-
ment. This assists in determining whether there are differences in such things as:

 • The quality of materials coming from two suppliers
 • The warranty returns from two different areas
 • The differences between two processes producing the same product, or any
   other combination of inputs to a single treatment
Source of     Sum of the     Degrees of     Mean           F             F'            %
Variation     Squares (SS)   Freedom (df)   Squares (MS)   Ratio         Critical      Contribution
Treatment     SStreatment    dftreatment    MStreatment    Ftreatment    F'critical    % treatment
Within        SSwithin       dfwithin       MSwithin                                   % within
Total         SStotal        dftotal

FIGURE 8.3 One-way ANOVA table.

There are six steps in our approach to the ANOVA:

1. Calculate the sum of the squares.


2. Determine the degrees of freedom.
3. Calculate the mean squares.
4. Calculate the F ratio.
5. Look up the critical value of F.
6. Calculate the percent contribution.

The steps can also be represented graphically, using a tabular format, as


shown in Figure 8.3.
To demonstrate the steps involved in an ANOVA, we assume the following
scenario: Two suppliers have delivered to us six lots of goods. Each lot contains
100 items (which are supposed to be exactly the same). However, defective
product is found in each of the lots.
The information in Figure 8.4 represents the acceptance yield of the lots
received from the two suppliers. Using this information, we can perform an
ANOVA. To do so, we first proceed to calculate the sum of the squares.

Sum of the Squares


In performing any ANOVA, the sum of the squares must be calculated for the
following:

 • Total variation (SStotal)
 • Variation attributed to the treatment (SStreatment)
 • Variation attributed to error within (SSwithin)

Sum of the Squares Total (SStotal)

Supplier A1    Supplier A2
    98             89
    94             99
    97             94
    98             99
    97             92
   100             96

FIGURE 8.4 Data from six lots, received from two different suppliers.

The sum of the squares for the total variation SStotal is calculated by subtracting
the squared value of the summation of y, (Σy)², divided by the total number of
data points (N) from the summation of the squared individual values of y, Σy²,
as indicated:

    SStotal = Σy² − (Σy)²/N

This equation is easily translated into a simple spreadsheet, as shown in
Figure 8.5. The spreadsheet is then used to calculate the SStotal.

    SStotal = Σy² − (Σy)²/N
            = 110,901.00 − 110,784.08
            = 116.92
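The same arithmetic can also be checked in a couple of lines of code rather than a spreadsheet; a minimal sketch (ours, not from the book) using the twelve yields from Figure 8.5:

```python
# Reproduce SStotal = Σy² − (Σy)²/N for the twelve acceptance yields.
y = [98, 94, 97, 98, 97, 100, 89, 99, 94, 99, 92, 96]
ss_total = sum(v * v for v in y) - sum(y) ** 2 / len(y)
print(round(ss_total, 2))  # 116.92
```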

Sum of the Squares Treatments (SStreatment)

To calculate the sum of the squares for the treatment variation (SStreatment), first
sum the squared values of each level of our data for each treatment (A1, A2, etc.)
divided by the number of data points in each level (n), and subtract the squared
value of the summation of y, (Σy)², divided by the total number of data points (N),
as indicated in the equation:

    SStreatment = [(ΣyA1)²/n + (ΣyA2)²/n] − (Σy)²/N

n y y2
1 98 9,604
2 94 8,836
3 97 9,409
4 98 9,604
5 97 9,409
6 100 10,000
7 89 7,921
8 99 9,801
9 94 8,836
10 99 9,801
11 92 8,464
12 96 9,216
Total: 1,153 110,901

(∑y)2 1,329,409 n/a

(∑y)2/n 110,784.08 n/a

FIGURE 8.5 Spreadsheet calculations for SStotal.

Applying this equation to our data from the data table, we can use a spread-
sheet to calculate the values for the treatment sum of the squares, as shown in
Figure 8.6.
"P P # P
ð yA1 Þ2 ð yA2 Þ2 ð yÞ2
SStreatment ¼ þ 
n n N

SStreatment ¼ ½56;842:67 þ 53;960:17  110;784:08 ¼ 18:75

             Supplier A1        Supplier A2
n            yA1                yA2
1            98                 89
2            94                 99
3            97                 94
4            98                 99
5            97                 92
6            100                96
Total:       584                569
(ΣyA1)²      341,056            (ΣyA2)²      323,761
(ΣyA1)²/n    56,842.67          (ΣyA2)²/n    53,960.17

FIGURE 8.6 Sum of squares treatment (SStreatment).


FIGURE 8.7 ANOVA table with sum of the squares (SS) applied (Treatment 18.75,
Within 98.17, Total 116.92).

Sum of the Squares Within (SSwithin )


The sum of the squares for within is the total sum of the squares minus the sum
of the squares for the treatment.
    SSwithin = SStotal − SStreatment

Using this equation with the data we used in developing the calculation of the
sum of the squares element of the ANOVA table, we can determine the SSwithin
as follows:

    SSwithin = SStotal − SStreatment = 116.92 − 18.75 = 98.17

The calculations for the sum of the squares can now be added to the ANOVA
table, as shown in Figure 8.7.

Degrees of Freedom
Degrees of freedom (df) are the number of independent comparisons available
to evaluate the data. It is necessary to determine the degrees of freedom for
treatment, within, and total.
The degrees of freedom for treatment is the number of treatments minus one
(T – 1). The total degrees of freedom is the total number of data elements in
the analysis minus one (N – 1). The degrees of freedom for within is calculated
by subtracting the treatment degrees of freedom from the total degrees of
freedom.

    dftotal = total data points (N) minus one
    dftreatment = number of treatment levels (T) minus one
    dfwithin = dftotal minus dftreatment
FIGURE 8.8 ANOVA table with degrees of freedom (df) applied (Treatment: SS 18.75,
df 1; Within: SS 98.17, df 10; Total: SS 116.92, df 11).

For our example:

    dftotal = 12 − 1 = 11
    dftreatment = 2 − 1 = 1
    dfwithin = 11 − 1 = 10
This information can then be applied to the ANOVA table, as shown in
Figure 8.8.

Mean Squares
The mean squares (MS) element of the ANOVA table is the quotient of the sum
of the squares of the treatment and within and the degrees of freedom (df), as
indicated in the following equations.
    MStreatment = SStreatment / dftreatment
    MSwithin = SSwithin / dfwithin

Applying this equation to our data from the sum of the squares, we can calcu-
late the values for the mean squares, as indicated in the following equations.

    MStreatment = SStreatment / dftreatment = 18.75 / 1 = 18.75
    MSwithin = SSwithin / dfwithin = 98.17 / 10 = 9.82
This information can then be applied to the ANOVA table, as shown in
Figure 8.9.
FIGURE 8.9 ANOVA table with the mean squares (MS) applied (MStreatment = 18.75/1
= 18.75; MSwithin = 98.17/10 = 9.82).

F Ratio
The F ratio is the quotient of the MStreatment and MSwithin. It is calculated using
the following equation:
    Fratio = MStreatment / MSwithin

Applying this equation to the mean squares data, we can calculate the value for
the Fratio as follows:

    Fratio = MStreatment / MSwithin = 18.75 / 9.82 = 1.91

This information can then be applied to the ANOVA table, as shown in Figure 8.10.

F Critical
We compare the Fratio to the critical value of F' (Fcritical) to determine whether
the variance demonstrated is significant. The critical value of F is determined by
referring to an F table for the applicable degrees of freedom and the significance
level selected for your evaluation.

FIGURE 8.10 ANOVA table with the F ratio applied (Treatment: SS 18.75, df 1,
MS 18.75, F 1.91; Within: SS 98.17, df 10, MS 9.82; Total: SS 116.92, df 11).

TABLE 8.1 Typical significance levels by type of decision

Decision                                  Example                            Level
Critical systems parameters               System reliability                 .01
                                          System safety requirements
                                          System performance
                                          Systems competitive capability
Process efficiency or effectiveness       Process improvement options        .05
                                          Process selection
                                          Process differentiation
                                          Equipment selection
Administrative/business decisions         Payment of bonuses                 .10
                                          Return on investment
                                          Marketing decisions
The level of significance applied to the analysis can be a very subjective
choice where no specific standards exist. It is important that some standard for
selection of the significance level be implemented and applied to analysis uni-
formly throughout. The selection of the level of significance often reflects the
consequences of the decision that will result from the analysis. A typical deci-
sion table for level of significance is provided in Table 8.1.
We will select a significance level (α) of .05. Therefore, our F' critical will be:

    F': df 1, df 10, .05 = 4.96

    Fratio > Fcritical = significant
    Fratio < Fcritical = not significant
    1.91 < 4.96 = not significant
This information can then be applied to the ANOVA table, as shown in Figure 8.11.
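Rather than interpolating in a printed F table, the critical value can also be looked up in software. A minimal sketch (ours, not from the book), assuming the SciPy library is available:

```python
# Critical value of F at the .05 significance level for df = (1, 10).
from scipy.stats import f

f_critical = f.ppf(1 - 0.05, dfn=1, dfd=10)
print(round(f_critical, 2))  # 4.96
```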

FIGURE 8.11 ANOVA table with the F critical (F') applied (Treatment: 18.75, 1, 18.75,
F 1.91, F' 4.96; Within: 98.17, 10, 9.82; Total: 116.92, 11; 95% confidence).



Percent Contribution
The % contribution element of the ANOVA table is the quotient of the sum of
the squares for the treatment and within and the sum of the squares for total, as
follows:
    % contribution treatment = SStreatment / SStotal
    % contribution within = SSwithin / SStotal

Using this equation, we can calculate the % contribution using the information
from our calculations of the sum of the squares:

    % contribution treatment = 18.75 / 116.92 = .16
    % contribution within = 98.17 / 116.92 = .84
The final step can now be completed by applying the data to the ANOVA
computation and decision table, as shown in Figure 8.12.

Evaluating the Results


We can now use the completed ANOVA table as a fact-based decision-making
tool. Evaluating the data in this form becomes almost intuitive. A few of the
important facts that we can extract from the table are:

 • The product variance caused by the treatment (supplier) was not significant.
 • The supplier treatment is contributing 16 percent to the overall product variability.
 • In this analysis, 84 percent of the product variability is not accounted for.

Source of     SS        df     MS       F Ratio    F' Critical    % Contribution
Variation
Treatment     18.75      1     18.75    1.91       4.96           16%
Within        98.17     10      9.82                              84%
Total        116.92     11

FIGURE 8.12 Completed ANOVA computation and decision table.



These facts form the basis for informed business and engineering decisions
concerning such critical business success factors as design, materials selection,
procurement, training, variability reduction programs, process selection, and the
management of further designed experiments.
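The six manual steps above can also be captured in a short script for reuse. The sketch below is ours, not from the book; the helper name is illustrative, and the critical value of F still comes from a table or a statistics package.

```python
# Build the one-way ANOVA quantities (SS, df, MS, F, % contribution) for the
# supplier yields analyzed above.
def one_way_anova(*treatments):
    all_y = [y for t in treatments for y in t]
    n_total = len(all_y)
    correction = sum(all_y) ** 2 / n_total              # (Σy)²/N
    ss_total = sum(y * y for y in all_y) - correction
    ss_treatment = sum(sum(t) ** 2 / len(t) for t in treatments) - correction
    ss_within = ss_total - ss_treatment
    df_treatment = len(treatments) - 1
    df_total = n_total - 1
    df_within = df_total - df_treatment
    ms_treatment = ss_treatment / df_treatment
    ms_within = ss_within / df_within
    return {
        "SS": (ss_treatment, ss_within, ss_total),
        "df": (df_treatment, df_within, df_total),
        "MS": (ms_treatment, ms_within),
        "F": ms_treatment / ms_within,
        "% contribution": (ss_treatment / ss_total, ss_within / ss_total),
    }

table = one_way_anova([98, 94, 97, 98, 97, 100], [89, 99, 94, 99, 92, 96])
print(round(table["F"], 2))                       # 1.91, below the critical value of 4.96
print([round(p, 2) for p in table["% contribution"]])  # [0.16, 0.84]
```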

TWO-WAY ANOVA

One-way ANOVA deals only with one treatment. One hypothesis is tested,
namely, that all means are equal. It is often necessary to determine whether
two different treatments are affecting a process or product and whether their
effect is significant. Two-way ANOVA provides a tool to make that assess-
ment of two treatments. To demonstrate this, the data table used for one-way
ANOVA has been further divided into two treatments, treatment A (supplier)
and treatment B (test set). This division is often accomplished as a result of
the cause-and-effect analysis performed before every design of experiment
(DOE) or through the evaluation of existing data and using engineering as-
sessments. These data now represent the acceptance yield of six lots of 100
received from two different suppliers and accepted on two different accep-
tance test sets. (See Figure 8.13.)
Two-way ANOVA can be accomplished in the same six steps as one-way
ANOVA. The difference is the addition of the second treatment to the ANOVA
computation and decision table, as shown in Figure 8.14.

                     Supplier
                  A1          A2
Test Set B1       97          89
                  94          93
                  97          94
Test Set B2       97          99
                  97          99
                 100          99

FIGURE 8.13 Two-way ANOVA data.


Source of      SS             df             MS             F              F'             %
Variation                                                   Ratio          Critical       Contribution
Treatment A    SStreatmentA   dftreatmentA   MStreatmentA   FtreatmentA    F'criticalA    % A
Treatment B    SStreatmentB   dftreatmentB   MStreatmentB   FtreatmentB    F'criticalB    % B
Within         SSwithin       dfwithin       MSwithin                                     % within
Total          SStotal        dftotal

FIGURE 8.14 Two-way ANOVA calculation and decision table.

Just as in one-way ANOVA, there are six steps in our approach to two-way
ANOVA:

1. Calculate the sum of the squares.


2. Determine the degrees of freedom.
3. Calculate the mean squares.
4. Calculate the F ratio.
5. Look up the critical value of F.
6. Calculate the percent contribution.

Sum of the Squares


The sum of the squares is the first calculation on all ANOVA tables. We must
calculate the sum of the squares for total variation (SStotal) for the variation attrib-
uted to the treatments (SSA and SSB) and the variation attributed to error (SSwithin).

Sum of the Squares Total (SStotal )


Calculating the sum of the squares for total is accomplished in exactly the same
way for all ANOVA calculations (one-way, two-way, and multivariate). The
process is repeated here only for continuity.
The sum of the squares for the total variation SStotal is calculated by subtract-
ing the squared value of the summation of y, (Σy)², divided by the total number
of data points (N) from the summation of the squared individual values of y,
Σy², as indicated in the formula:

    SStotal = Σy² − (Σy)²/N

This equation is easily translated into a simple spreadsheet, as shown in


Figure 8.15. The spreadsheet is used to calculate the SStotal.

n y y
1 97 9,409
2 94 8,836
3 97 9,409
4 97 9,409
5 97 9,409
6 100 10,000
7 89 7,921
8 93 8,649
9 94 8,836
10 99 9,801
11 99 9,801
12 99 9,801
Total: 1,155 111,281
(∑y)2 1,334,025 n/a
(∑y)2/n 111,168.75 n/a

FIGURE 8.15 Calculate the total sum of squares (SStotal).

Sum of the Squares Treatments (SStreatments)


To calculate the sum of the squares for the variation of two treatments
(SStreatments), sum the squared values of each level (e.g., A1 and A2) of one treat-
ment (A) and then divide each of the values by the number of data points (n) in
each level. Perform the same function for the other treatment. Subtract
the squared value of the summation of y, (Σy)², divided by the total number of
data points (N) from each summed value for each treatment.

    SSA = [(ΣyA1)²/nA1 + (ΣyA2)²/nA2] − (Σy)²/N
    SSB = [(ΣyB1)²/nB1 + (ΣyB2)²/nB2] − (Σy)²/N

Applying these equations to our data from the data table in Figure 8.15, we can use
a spreadsheet to calculate the values for the treatments sum of the squares, as
shown in Figures 8.16 and 8.17.

    SSA = [(ΣyA1)²/nA1 + (ΣyA2)²/nA2] − (Σy)²/N
        = [56,454.00 + 54,721.50] − 111,168.75 = 6.75

    SSB = [(ΣyB1)²/nB1 + (ΣyB2)²/nB2] − (Σy)²/N
        = [53,016.00 + 58,213.50] − 111,168.75 = 60.75
             Supplier A1        Supplier A2
n            yA1                yA2
1            97                 89
2            94                 93
3            97                 94
4            97                 99
5            97                 99
6            100                99
Total:       582                573
(Σy)²        338,724.00         328,329.00
(Σy)²/n      56,454.00          54,721.50

FIGURE 8.16 Calculate sum of squares for treatment A (supplier).

Sum of the Squares Within (SSwithin)


The sums of the squares for the treatments are cumulative. Therefore, we can
calculate the sum of the squares for within as the total sum of the squares minus
the sum of the squares for the treatments, as follows:
    SSwithin = SStotal − SStreatments

Using this equation with the data we have used in developing the calculation of
the sum of the squares element of the ANOVA table (Figure 8.14), we can deter-
mine the SSwithin as follows:

    SSwithin = SStotal − SStreatments = 112.25 − (6.75 + 60.75) = 44.75

             Test Set B1        Test Set B2
n            yB1                yB2
1            97                 97
2            94                 97
3            97                 100
4            89                 99
5            93                 99
6            94                 99
Total:       564                591
(Σy)²        318,096.00         349,281.00
(Σy)²/n      53,016.00          58,213.50

FIGURE 8.17 Calculate sum of squares for treatment B (test set).


FIGURE 8.18 Apply the sum of squares to the ANOVA table (Supplier 6.75, Test Set
60.75, Within 44.75, Total 112.25).

The calculations for the sum of the squares can now be added to the ANOVA
table, as shown in Figure 8.18.

Degrees of Freedom
Degrees of freedom (df) are the number of independent comparisons available
to evaluate the data. It is necessary to determine the degrees of freedom for each
treatment, for within, and for total. The degrees of freedom for a treatment is the
number of treatment levels minus one, T − 1; in this case, each treatment has two
levels (e.g., A1 and A2). The total degrees of freedom is the total number of data
elements in the analysis minus one, N − 1; in this case, we have 12 data elements.
The degrees of freedom for within are calculated by subtracting the treatment
degrees of freedom from the total degrees of freedom. This information can then
be applied to the ANOVA table, as shown in Figure 8.19.

    dftotal = total data points minus one
    dftreatment = number of treatment levels minus one
    dfwithin = dftotal minus dftreatments

For our example:

    dftreatment A = 2 − 1 = 1
    dftreatment B = 2 − 1 = 1
    dftotal = 12 − 1 = 11
    dfwithin = dftotal − dftreatments = 11 − 2 = 9
FIGURE 8.19 Apply the degrees of freedom to the ANOVA table (Supplier df 1,
Test Set df 1, Within df 9, Total df 11).

Mean Squares
The mean squares (MS) element of the ANOVA table is the sum of the squares
for each treatment and the sum of the squares for within divided by the degrees
of freedom (df ).
    MSA = SSA / dfA        MSB = SSB / dfB
    MSwithin = SSwithin / dfwithin

Applying this equation to our data from the sum of the squares, we can calcu-
late the values for the mean squares, as indicated in the following equations. This
information can then be applied to the ANOVA table, as shown in Figure 8.20.

    MSA = 6.75 / 1 = 6.75
    MSB = 60.75 / 1 = 60.75
    MSwithin = 44.75 / 9 = 4.97
F Ratio
The Fratio is the quotient of MStreatment and MSwithin for each treatment. It is
calculated using the following equation:
    Fratio = MStreatment / MSwithin
FIGURE 8.20 Apply the MS to the ANOVA table (MSsupplier 6.75, MStest set 60.75,
MSwithin 4.97).

Applying this equation to the mean squares data, we can calculate the value
for the Fratio, as indicated in the following equations.
    FA = MSA / MSwithin = 6.75 / 4.97 = 1.36
    FB = MSB / MSwithin = 60.75 / 4.97 = 12.22
This information can then be applied to the ANOVA table, as shown in
Figure 8.21.

FIGURE 8.21 Apply the F ratio to the ANOVA table (Fsupplier 1.36, Ftest set 12.22).


FIGURE 8.22 Apply the F' critical value to the ANOVA table (F' critical = 5.12 for
both treatments).

F Critical
We compare the Fratio with the critical value of F' (Fcritical) to determine
whether the variance demonstrated is significant. The critical value of F is deter-
mined by referring to the F table for the applicable degrees of freedom and sig-
nificance level selected for your evaluation.
We select a significance level (α) of .05. Therefore, our F' will be:

    F': df 1, df 9, .05 = 5.12

    Fratio > Fcritical = significant
    Fratio < Fcritical = not significant
    FA = 1.36 < 5.12 = not significant
    FB = 12.22 > 5.12 = significant
This information can then be applied to the ANOVA table, as shown in
Figure 8.22.

Percent Contribution
The % contribution element of the ANOVA table is the sum of the squares for
the treatment and within divided by the sum of the squares for total.

    % contribution treatment = SStreatment / SStotal
    % contribution within = SSwithin / SStotal
Source of     SS        df    MS       F Ratio    F' Critical    % Contribution
Variation
Supplier      6.75       1    6.75     1.36       5.12            6
Test Set     60.75       1    60.75    12.22      5.12           54
Within       44.75       9    4.97                               40
Total       112.25      11

FIGURE 8.23 Apply % contribution to ANOVA table.

Using this equation, we can calculate the % contribution using the information
from our calculations of the sum of the squares.
    % contribution A = 6.75 / 112.25 = .06
    % contribution B = 60.75 / 112.25 = .54
    % contribution within = 44.75 / 112.25 = .40
The final step can now be completed by applying the data to the ANOVA
table, as shown in Figure 8.23.
The critical value of F is determined by referring to the F table for the appli-
cable degrees of freedom and significance level, as shown in Table 8.2, for
your evaluation.

Evaluating the Results


We can now use the two-way ANOVA table Figure 8.23 we just completed as a
fact-based decision-making tool. These data indicate the following:

 • The product variance caused by treatment A (supplier) is not significant.
 • The product variance caused by treatment B (test set) is significant.
 • The test set treatment is contributing 54 percent to the overall product
   variability.
 • In this analysis, 40 percent of the product variability is not accounted for.
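The same spreadsheet logic fits in a short script. The sketch below is ours, not from the book; it reproduces the two-way table (without an interaction term, as above) for the supplier and test-set yields of Figure 8.13, and the names are illustrative.

```python
# Two-way ANOVA (no interaction term) for the supplier / test set yields.
data = {
    ("A1", "B1"): [97, 94, 97],  ("A2", "B1"): [89, 93, 94],
    ("A1", "B2"): [97, 97, 100], ("A2", "B2"): [99, 99, 99],
}
all_y = [y for ys in data.values() for y in ys]
N = len(all_y)
correction = sum(all_y) ** 2 / N                  # (Σy)²/N
ss_total = sum(y * y for y in all_y) - correction

def ss_factor(index):
    """Sum of squares for one factor: pool the observations by that factor's level."""
    levels = {}
    for key, ys in data.items():
        levels.setdefault(key[index], []).extend(ys)
    return sum(sum(ys) ** 2 / len(ys) for ys in levels.values()) - correction

ss_a, ss_b = ss_factor(0), ss_factor(1)           # supplier, test set
ss_within = ss_total - ss_a - ss_b
df_within = (N - 1) - 1 - 1                       # 11 − 1 − 1 = 9
ms_within = ss_within / df_within
print(round(ss_a, 2), round(ss_b, 2), round(ss_within, 2))      # 6.75 60.75 44.75
print(round(ss_a / ms_within, 2), round(ss_b / ms_within, 2))   # F_A ≈ 1.36, F_B ≈ 12.22
```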
TABLE 8.2 Critical values of F

                              DF Numerator
DF Denominator   1 − α        1        2        3        4        5
5                0.90        4.06     3.78     3.62     3.52     3.45
                 0.95        6.61     5.79     5.41     5.19     5.05
                 0.99       16.3     13.3     12.1     11.4     11.0
6                0.90        3.78     3.46     3.29     3.18     3.11
                 0.95        5.99     5.14     4.76     4.53     4.39
                 0.99       13.7     10.9      9.78     9.15     8.75
7                0.90        3.59     3.26     3.07     2.96     2.88
                 0.95        5.59     4.74     4.35     4.12     3.97
                 0.99       12.2      9.55     8.45     7.85     7.46
8                0.90        3.46     3.11     2.92     2.81     2.73
                 0.95        5.32     4.46     4.07     3.84     3.69
                 0.99       11.3      8.65     7.59     7.01     6.63
9                0.90        3.36     3.01     2.81     2.69     2.61
                 0.95        5.12     4.26     3.86     3.63     3.48
                 0.99       10.6      8.02     6.99     6.42     6.06
10               0.90        3.28     2.92     2.73     2.61     2.52
                 0.95        4.96     4.10     3.71     3.48     3.33
                 0.99       10.0      7.56     6.55     5.99     5.64

MULTIVARIATE ANOVA

Multivariate ANOVA (a.k.a. MANOVA) can be used for the evaluation of pro-
cess data and in designed experiments. It extends analysis of variance methods
to handle cases where there is more than one dependent variable and where the
dependent variables cannot simply be combined. It identifies whether changes
in the independent variables have a significant effect on the dependent variables.
It also seeks to identify the interactions among the independent variables and
the association between dependent variables. MANOVA involves six steps:

1. Calculate the sum of the squares.


2. Determine the degrees of freedom.
3. Calculate the mean squares.
4. Calculate the F ratio.
5. Look up the critical value of F.
6. Calculate the percent contribution.

Step              1          2     3         4          5             6
Source of         SS         df    MS        F Ratio    F' Critical   % Contribution
Variation
A                 15.042      1    15.042    13.371     3.05          12
B                 70.041      1    70.041    62.259     3.05          54
C                  1.042      1     1.042     0.926     3.05           1
AB                12.041      1    12.041    10.703     3.05           9
AC                 0.041      1     0.041     0.036     3.05           0
BC                 9.376      1     9.376     8.334     3.05           7
ABC                3.376      1     3.376     3.001     3.05           3
Within            18.000     16     1.125                             14
Total            128.959     23

FIGURE 8.24 Multivariate ANOVA calculation and decision table.

These steps follow directly the construction of the ANOVA computation and
decision table, as shown in Figure 8.24.
To demonstrate these steps, we use the data from the table in Figure 8.25. To
provide a reasonable level of confidence in this data, we have selected a mini-
mum sample size of three.
                     Supplier
                 A1                A2
Test Set B1   10   12   11     10   11   10     Vacuum C1
Test Set B1    8   11    9     12    9   10     Vacuum C2
Test Set B2   11   11   11     13   16   15     Vacuum C1
Test Set B2   13   14   13     16   15   16     Vacuum C2

FIGURE 8.25 Multivariate ANOVA data.

The Sum of the Squares
The sum of the squares is the first calculation on our ANOVA table. We must
calculate the sum of the squares for total variation (SStotal), for the variation attrib-
uted to treatment main effects (SSmain effects) and interactions (SSinteractions),

and the variation attributed to error (SSerror). The classic view of the calculation
of the sums of the squares is shown here:
    SStotal = Σi Σj Σk Σl y²ijkl − y²..../(abcn)

    SStreatment = Σi y²i.../(bcn) − y²..../(abcn)

    SSinteractions = Σi Σj y²ij../(cn) − y²..../(abcn)

We have significantly simplified this approach and designed statistical nota-


tion that is comprehensible and easily translated to a spreadsheet. For this rea-
son, we do not discuss the intricacies of this dot and summary notation.

Sum of the Squares Total (SStotal) The sum of the squares for the total varia-
tion SStotal is calculated by subtracting the squared value of the summation of y,
(Σy)², divided by the total number of data points (N) from the summation of the
squared individual values of y, Σy², as indicated in the formula:

    SStotal = Σy² − (Σy)²/N

This equation is easily translated into a simple spreadsheet. The spreadsheet
is then used to calculate the SStotal, as shown in Figure 8.26.

    SStotal = Σy² − (Σy)²/N
            = 3,561 − 82,369/24
            = 3,561.000 − 3,432.042
            = 128.958

Sum of the Squares Treatment (SStreatment) To calculate the sum of the
squares for treatment variation (SStreatment) for treatment main effects (SSA,
SSB), sum the squared values of each level of our data for each treatment
(A1, A2, etc.) divided by the number of data points in each level (n) and subtract
the squared value of the summation of y, (Σy)², divided by the total number of
data points (N), as indicated in the formula:

    SSmain effect = [(Σyi1)²/n + (Σyi2)²/n] − (Σy)²/N
FIGURE 8.26 Calculate the total sum of the squares (Σy = 287, Σy² = 3,561,
(Σy)² = 82,369, (Σy)²/N = 3,432.042).

Applying this equation, we can calculate the value for the treatment main
effects sum of the squares for treatment A (SSA), as shown in Figure 8.27.
"P P # "P #
ð yA1 Þ2 ð yA2 Þ2 ð yÞ2
SSA ¼ þ 
nA1 nA2 N
   
17;956:000 23;409:000 82;369:000
SSA ¼ þ 
12 12 24
SSA ¼ ð1;496:333 þ 1;950:750Þ  3;432:042
SSA ¼ 15:042

In the same way, SSB and SSC can be calculated using the same equation and
spreadsheet format, as demonstrated in Figures 8.28 and 8.29 and the following
equations.
"P P # "P #
ð yB1 Þ2 ð yB2 Þ2 ð yÞ2
SSB ¼ þ 
nB1 nB2 N
FIGURE 8.27 Sum of the squares—treatment A (ΣyA1 = 134, ΣyA2 = 153;
(ΣyA1)²/n = 1,496.33, (ΣyA2)²/n = 1,950.75; SSA = 15.042).

   
    SSB = [15,129.000/12 + 26,896.000/12] − 82,369.000/24
        = (1,260.750 + 2,241.333) − 3,432.042
        = 70.042

    SSC = [(ΣyC1)²/nC1 + (ΣyC2)²/nC2] − (Σy)²/N

FIGURE 8.28 Sum of the squares—treatment B (ΣyB1 = 123, ΣyB2 = 164;
(ΣyB1)²/n = 1,260.75, (ΣyB2)²/n = 2,241.33; SSB = 70.042).


    SSC = [19,881.000/12 + 21,316.000/12] − 82,369.000/24
        = (1,656.750 + 1,776.333) − 3,432.042
        = 1.042

Sum of the Squares for Interactions: AB, AC, and BC For treatment inter-
actions (SSA×B, SSA×C, SSB×C), sum the squared values of each level combination
for the pair of treatments (A1B1, A1B2, etc.) divided by the number of data points
in each combination (n), subtract the squared value of the summation of y, (Σy)²,
divided by the total number of data points (N), and subtract the sums of squares
for the associated main effects (SSA, SSB, etc.):

    SSinteraction = [(Σyi1j1)²/n + (Σyi1j2)²/n + (Σyi2j1)²/n + (Σyi2j2)²/n]
                    − (Σy)²/N − SSi − SSj
Applying this equation to our data, we can use a spreadsheet to calculate the val-
ues for the treatment interactions of treatments AB, as shown in Figure 8.30.
With the information from the table, SSAB can now be calculated.
"P # "P # "P # "P #
ð yA1B1 Þ2 ð yA1B2 Þ2 ð yA2B1 Þ2 ð yA2B2 Þ2
SSAB ¼ þ þ þ
nA1B1 nA1B2 nA2B1 nA2B2
P 2
ð yÞ
  SSA  SSB
N
FIGURE 8.29 Sum of the squares—treatment C (ΣyC1 = 141, ΣyC2 = 146;
(ΣyC1)²/n = 1,656.75, (ΣyC2)²/n = 1,776.33; SSC = 1.042).



n            yA1B1      yA1B2      yA2B1      yA2B2
1            10         11         10         13
2            12         11         11         16
3            11         11         10         15
4             8         13         12         16
5            11         14          9         15
6             9         13         10         16
Total:       61         73         62         91
(Σy)²        3,721      5,329      3,844      8,281
(Σy)²/n      620.167    888.167    640.667    1,380.167

FIGURE 8.30 Sum of the squares—treatment interaction AB.

       
    SSAB = [3,721/6 + 5,329/6 + 3,844/6 + 8,281/6] − 82,369/24 − 15.042 − 70.042
         = (620.167 + 888.167 + 640.667 + 1,380.167) − 3,432.042 − 15.042 − 70.042
         = 12.041

Using this same method, we can calculate the interactions A × C and B × C, as
shown in Figures 8.31 and 8.32.

FIGURE 8.31 Sum of the squares—treatment interaction AC (ΣyA1C1 = 66,
ΣyA1C2 = 68, ΣyA2C1 = 75, ΣyA2C2 = 78; SSAC = 0.041).


FIGURE 8.32 Sum of the squares—treatment interaction BC (ΣyB1C1 = 64,
ΣyB1C2 = 59, ΣyB2C1 = 77, ΣyB2C2 = 87; SSBC = 9.375).

"P # "P # "P # "P #


ð yA1C1 Þ2 ð yA1C2 Þ2 ð yA2C1 Þ2 ð yA2C2 Þ2
SSAC ¼ þ þ þ
nA1C1 nA1C2 nA2C1 nA2C2
P 2
ð yÞ
  SSA  SSC ¼ 0:041
N

Sum of the Squares for Interactions: ABC Although similar to the calcula-
tions for the sum of the squares treatment interactions for AB, AC, and BC, the
calculation for SStreatment ABC appears to be more daunting.
"P # "P # "P #
ð yA1B1C1 Þ2 ð yA1B1C2 Þ2 ð yA1B2C1 Þ2
SSABC ¼ þ þ
nA1B1C1 nA1B1C2 nA1B2C1
"P # "P # "P #
ð yA1B2C2 Þ2 ð yA2B1C1 Þ2 ð yA2B1C2 Þ2
þ þ þ
nA1B2C2 nA2B1C1 nA2B1C2
"P # "P # P
ð yA2B2C1 Þ2 ð yA2B2C2 Þ2 ð yÞ2
þ þ   SSA  SSB  SSC
nA2B2C1 nA2B2C2 N

 SSAB  SSAC  SSBC ¼ 3:376

The spreadsheet used is similar to those that we developed previously. How-


ever, instead of looking at 12 samples per treatment level, we will be looking at 3
samples per treatment combination, as shown in Figure 8.33.
FIGURE 8.33 Sum of the squares—treatment interaction ABC (cell totals:
A1B1C1 = 33, A1B1C2 = 28, A1B2C1 = 33, A1B2C2 = 40, A2B1C1 = 31,
A2B1C2 = 31, A2B2C1 = 44, A2B2C2 = 47; SSABC = 3.376).

Using the values provided by the spreadsheet, the formula can now be filled in:

    SSABC = (363.000 + 261.333 + 363.000 + 533.333 + 320.333 + 320.333 + 645.333 + 736.333)
            − 3,432.042 − 15.042 − 70.042 − 1.042 − 12.041 − 0.041 − 9.376 = 3.376

Sum of the Squares Error The sums of the squares for main effects and in-
teractions are cumulative. Therefore, we can calculate the sum of the squares for
error as the total sum of the squares minus the sum of the squares for treatments
and interactions.
    SSerror = SStotal − (SSmain effects + SSinteractions)

Using this equation with the data we have used in developing the calculation of
the sum of the squares element of the ANOVA table, we can determine the
SSerror as follows:

    SSerror = SStotal − (SSmain effects + SSinteractions)
            = 128.958 − (86.126 + 24.842)
            = 18.002
Please note that although this is called the sum of the squares for error, it should
not imply that an error has been made in your data acquisition, calculations, or
analysis. It is also not related to sampling error or any other statistic. Rather, this
is the variance that has not been accounted for by your main effects or
interactions.
Step 1 can now be completed by applying the sum of the squares data to the
ANOVA calculation and decision table, as indicated in Figure 8.34.
FIGURE 8.34 Enter the sum of the squares in the ANOVA table (A 15.042, B 70.041,
C 1.042, AB 12.041, AC 0.041, BC 9.376, ABC 3.376, Within 18.002, Total 128.961).

Degrees of Freedom
Degrees of freedom (df) are the number of independent comparisons available
to estimate a specific treatment or level of a treatment. Therefore, if you have N
treatments, you would have N − 1 degrees of freedom. This is also an element of
the critical value of F. Degrees of freedom must be found for both the numerator
and denominator of the F statistic. The number of degrees of freedom for treat-
ment main effects is TA − 1, or the number of applicable treatment levels minus
one. The degrees of freedom for treatment interactions is (TA − 1)(TB − 1), or the
degrees of freedom for each of the effects in the interaction. The total degrees of
freedom are the total number of independent comparisons for all treatments,
treatment levels, and replicates, N − 1, or the total number of data elements in
the sample minus one. The degrees of freedom applicable to variance within
treatments (error) are the total degrees of freedom minus the degrees of freedom
for treatments and treatment interactions.

Treatment Main Effects The degrees of freedom derived from this data, for
the treatment main effects, are the number of treatment levels for each treatment
minus one. In this example, there are two treatment levels for each treatment;
dfmain effects are:

    dftreatment A = TA − 1
    dftreatment B = TB − 1
    dftreatment C = TC − 1
For our example:

    dftreatment A = 2 − 1 = 1
    dftreatment B = 2 − 1 = 1
    dftreatment C = 2 − 1 = 1

Treatment Interactions The degrees of freedom derived from this data, for
treatment interactions, are the multiplied degrees of freedom for the associated
treatments. In this example, there are four possible treatment interactions, A  B,
A  C, B  C, and A  B  C. Therefore, dfinteraction are:

    dfinteraction AB = (TA − 1)(TB − 1)
    dfinteraction AC = (TA − 1)(TC − 1)
    dfinteraction BC = (TB − 1)(TC − 1)
    dfinteraction ABC = (TA − 1)(TB − 1)(TC − 1)

For our example:

    dfinteraction AB = 1 × 1 = 1
    dfinteraction AC = 1 × 1 = 1
    dfinteraction BC = 1 × 1 = 1
    dfinteraction ABC = 1 × 1 × 1 = 1

The total degrees of freedom are the total number of samples (24) minus one:

    dftotal = N − 1 = 24 − 1 = 23

The degrees of freedom for within treatments (or error) are derived by sub-
tracting the dfmain effects and dfinteractions from the dftotal:

    dferror = dftotal − (dftreatments + dfinteractions)
            = 23 − 3 − 4 = 16

This step can now be completed by applying the degrees of freedom data to the
ANOVA table, as shown in Figure 8.35.
FIGURE 8.35 Enter degrees of freedom into the ANOVA table (df = 1 for each main
effect and interaction, Within 16, Total 23).

Mean Squares
The mean squares (MS) element of the ANOVA table is the quotient of the sum
of the squares for main effects and interactions and the degrees of freedom (df).
    MSmain effects = SSmain effects / dfmain effects
    MSinteractions = SSinteractions / dfinteractions
    MSerror = SSerror / dferror
This step can now be completed by applying the mean squares data to the
ANOVA table, as shown in Figure 8.36.

F Ratio
The Fratio is the quotient of the MSmain effects, MSinteractions, and MSwithin and can
be calculated using the following formula.
    Fratio = MStreatment / MSwithin
This step can now be completed by applying the Fratio data to the ANOVA
table, as shown in Figure 8.37.

F Critical
We compare the Fratio to the critical value of F' to determine if the variance
demonstrated is significant. The critical value of F' is determined by referring
to the F table for the applicable degrees of freedom and significance level se-
lected for your evaluation.

FIGURE 8.36 Enter mean squares into the ANOVA table (MS equals SS for each main
effect and interaction; MSwithin = 18.002/16 = 1.125).
The level of significance applied to the analysis can be a very subjective
choice. Based on the consequences of the decision that will result from the anal-
ysis of this experiment, a significance level (α) of .10 is reasonable. Therefore,
our F' will be:

    F': df 1, df 16, .10

FIGURE 8.37 Enter the F ratio into the ANOVA table (F: A 13.371, B 62.259, C 0.926,
AB 10.703, AC 0.036, BC 8.334, ABC 3.001).


FIGURE 8.38 Look up the critical F value (F' critical = 3.05 for each main effect
and interaction).

The degrees of freedom for the numerator are the degrees of freedom for
treatment main effects and interactions. The degrees of freedom for the denomi-
nator are the degrees of freedom for error. From our continuing example, we can
determine that the degree of freedom for the numerator is one (1) for all main
effects and interactions. The degrees of freedom for error are 16.
    F' = 3.05
From the F distribution table, we can determine that the critical value of F is
3.05 for each interaction and main effect. The treatments are significantly differ-
ent when the Fratio exceeds the Fcritical value. This step can now be completed by
applying the Fcritical data to the ANOVA table, as shown in Figure 8.38.

Percent Contribution
The % contribution element of the ANOVA table is the quotient of the sum of
the squares for main effects, interactions, and error and the sum of the squares
for total, as indicated:

    % contribution (main effect) = SStreatment / SStotal
    % contribution (interaction) = SSinteractions / SStotal
    % contribution (error) = SSerror / SStotal
Step              1          2     3         4          5             6
Source of         SS         df    MS        F Ratio    F' Critical   % Contribution
Variation
A                 15.042      1    15.042    13.371     3.05          12
B                 70.041      1    70.041    62.259     3.05          54
C                  1.042      1     1.042     0.926     3.05           1
AB                12.041      1    12.041    10.703     3.05           9
AC                 0.041      1     0.041     0.036     3.05           0
BC                 9.376      1     9.376     8.334     3.05           7
ABC                3.376      1     3.376     3.001     3.05           3
Within            18.002     16     1.125                             14
Total            128.961     23

FIGURE 8.39 Enter the % contribution into the ANOVA table.

Using the information shown in the ANOVA table’s SS column, we can now
calculate the % contribution for the various sources of variation and apply it to
the completed ANOVA table, as shown in Figure 8.39.

Evaluating the Results


A few of the important facts that we can extract from the table are listed here.

 • The main effect of treatment A is contributing 12 percent to the total
   variability.
 • The main effect of treatment B is contributing 54 percent to the total pro-
   cess, product, or system variability; this treatment would also be critical at
   the 99 percent confidence level.
 • Treatment C is not significant.
 • The interactions of AB and BC are significant.
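For completeness, the full multivariate table can be reproduced in plain code. The sketch below is ours, not from the book; it recomputes the sums of squares from the Figure 8.25 data, and small differences in the third decimal relative to Figure 8.39 are rounding. A statistics package (Minitab, or a library such as statsmodels) would produce the same table directly.

```python
# Recompute the three-factor sums of squares, F ratios, and % contributions.
data = {
    ("A1", "B1", "C1"): [10, 12, 11], ("A2", "B1", "C1"): [10, 11, 10],
    ("A1", "B1", "C2"): [8, 11, 9],   ("A2", "B1", "C2"): [12, 9, 10],
    ("A1", "B2", "C1"): [11, 11, 11], ("A2", "B2", "C1"): [13, 16, 15],
    ("A1", "B2", "C2"): [13, 14, 13], ("A2", "B2", "C2"): [16, 15, 16],
}
all_y = [y for ys in data.values() for y in ys]
N = len(all_y)
correction = sum(all_y) ** 2 / N                      # (Σy)²/N

def pooled_ss(positions):
    """Σ(group total)²/(group size) − (Σy)²/N, pooling by the factor positions kept."""
    groups = {}
    for key, ys in data.items():
        groups.setdefault(tuple(key[p] for p in positions), []).extend(ys)
    return sum(sum(ys) ** 2 / len(ys) for ys in groups.values()) - correction

ss_total = sum(y * y for y in all_y) - correction
ss_a, ss_b, ss_c = pooled_ss((0,)), pooled_ss((1,)), pooled_ss((2,))
ss_ab = pooled_ss((0, 1)) - ss_a - ss_b
ss_ac = pooled_ss((0, 2)) - ss_a - ss_c
ss_bc = pooled_ss((1, 2)) - ss_b - ss_c
ss_abc = pooled_ss((0, 1, 2)) - ss_a - ss_b - ss_c - ss_ab - ss_ac - ss_bc
ss_within = ss_total - (ss_a + ss_b + ss_c + ss_ab + ss_ac + ss_bc + ss_abc)
ms_within = ss_within / 16                            # df error = 16

for name, ss in [("A", ss_a), ("B", ss_b), ("C", ss_c), ("AB", ss_ab),
                 ("AC", ss_ac), ("BC", ss_bc), ("ABC", ss_abc)]:
    # df = 1 for every main effect and interaction, so MS = SS and F = SS / MSwithin
    print(name, round(ss, 3), round(ss / ms_within, 3), round(100 * ss / ss_total))
```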

LINEAR CONTRASTS

Accepting or rejecting the null hypothesis in the ANOVA analysis implies that
there is a difference in means. Using ANOVA, the exact nature of this difference
is not specified. A further understanding may be useful. Linear contrasts can
provide this additional understanding and also provide for a graphical represen-
tation of the data, as indicated in Figure 8.40. A contrast is the mean of the high-
level responses minus the mean of the low-level responses. Let's take a close look
at both one-way and two-way linear contrasts.
FIGURE 8.40 One-way linear contrast: effect = contrast = ȳA+ − ȳA−; when the
contrast is within the noise level, the effect may not be significant.

One-Way Linear Contrasts


Determining that the data from an ANOVA are significant implies that there is
a difference between treatment means (as measured by their variance). But the
exact nature of these differences is not specified. The effect of the difference
of the means is not measured; rather, it is only an assessment that they are
‘‘significantly different.’’ Using linear contrasts provides a tool to measure
that difference and also provides a visualization of that difference. This ap-
proach to measuring the effects empirically and graphically is demonstrated
in Figure 8.40.
The contrast between the two input variables (A− and A+) is measured as the
difference between the means of each treatment. In this way, we measure the
effects as indications of the linear contrasts; it also provides an assessment of
the effects of the different treatments and provides a graph of these effects.
The analysis of linear contrasts can be accomplished in four steps.

1. Calculate the effects.


2. Graph the effects.
3. Determine the significance of the linear contrast.
4. Evaluate the results.

To demonstrate these steps, we use the data we used in the evaluation of the
two-way ANOVA. Using this same data will assist you in understanding the
differences between ANOVA and linear contrasts.
Additionally, we have introduced the different types of notation used in DOE
to define levels of a treatment. These levels can be defined as level 1 and level 2
(A1, A2) or as minus and plus (A−, A+), as shown in Figure 8.41.

n A1(–) A2(+)

1 98 89

2 94 99

3 97 94

4 98 99

5 97 92

6 100 96

Total: 584 569

(∑y)/n 97.33 94.83

FIGURE 8.41 Data table.

1. Calculating the effects. The effect of a treatment is equal to the sum of the
treatment data at the plus level (ΣK+) divided by the total number of samples at
that level (n+), minus the sum of the treatment data at the minus level (ΣK−)
divided by the total number of samples at that level (n−).

    EffectK = ΣK+/n+ − ΣK−/n−

With this equation, we can calculate the effects as follows:

    EffectA = ΣA+/nA+ − ΣA−/nA−
            = 569/6 − 584/6
            = 94.83 − 97.33
            = −2.50
2. Graphing the effects. We can now graph the effect of the treatment levels to evaluate what effect the levels are having on the quality characteristic, as shown in Figure 8.42. The slope of the line indicates the significance of the effect: the steeper the slope, the more significant the effect. The graph also indicates which treatment level is producing the desired effect of optimizing the process.

FIGURE 8.42 Graph the effects.

This graph of the linear effects indicates that A+ would minimize yield; the calculation of the effects indicates that taking this action would decrease yield by 2.50 units.
3. Determining the significance. Determine the significance of the linear
contrasts by using Fisher’s F statistic (ANOVA). This can be accomplished
by using the same first steps as one-way ANOVA. Since we have covered
these functions in detail previously, the significance of the effects can be de-
termined from our previous one-way ANOVA table, as shown in Figure 8.43
and Figure 8.44.
4. Evaluating the results. Now that we have completed our one-way linear contrast evaluation, we can make several fact-based determinations:
•  A- yields 97.33.
•  A+ yields 94.83.
•  The effect of A is 94.83 - 97.33 = -2.50.
•  The F ratio is less than the critical value of F.

This contrast may be caused by sampling variation or other factors not tested.
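For readers who want to verify these numbers, the following minimal sketch (pure Python; the variable names are ours) recomputes the contrast and the one-way ANOVA F ratio from the Figure 8.41 data, reproducing the effect of -2.50 and the F ratio of 1.91 shown in Figure 8.43:

    from statistics import mean

    a_minus = [98, 94, 97, 98, 97, 100]   # A1 (-) column of Figure 8.41
    a_plus  = [89, 99, 94, 99, 92, 96]    # A2 (+) column of Figure 8.41

    effect = mean(a_plus) - mean(a_minus)                 # 94.83 - 97.33 = -2.50

    # One-way ANOVA by hand: between- and within-treatment sums of squares.
    grand = mean(a_minus + a_plus)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in (a_minus, a_plus))
    ss_within = sum((x - mean(g)) ** 2 for g in (a_minus, a_plus) for x in g)
    df_between, df_within = 1, len(a_minus) + len(a_plus) - 2
    f_ratio = (ss_between / df_between) / (ss_within / df_within)

    print(round(effect, 2), round(f_ratio, 2))            # -2.5 1.91 (F' critical = 4.96)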

Source of      Sum of       Degrees of    Mean         F        F'          %
Variation      Squares SS   Freedom df    Squares MS   Ratio    Critical    Contribution
Treatment        18.75          1           18.75      1.91      4.96        16
Within           98.17         10            9.82                            84
Total           116.92         11

FIGURE 8.43 Determine significance.



[Figure 8.44 plots the mean response at A- and A+ separately for the B- and B+ levels.]

FIGURE 8.44 Two-way linear contrasts.

Two-Way Linear Contrasts


Just as one-way linear contrasts complement one-way ANOVA, two-way linear contrasts provide a method to measure the effects of two treatments simultaneously. This is very important, because different treatments can affect the results of your analysis through both their individual effects and their interactions. In two-way analysis, we measure the effects of interactions for the first time. Figure 8.44 illustrates the linear relationship of two variables (A-, A+ and B-, B+).
The analysis of two-way linear contrasts can be accomplished in the same
four steps as one-way linear contrasts.

1. Calculate the effects.


2. Graph the effects.
3. Determine the significance of the linear contrast.
4. Evaluate the results.

To demonstrate these steps, we use the same data set that we have been work-
ing with throughout. The data is from the evaluation of two-way ANOVA, as
shown in Figure 8.45.

1. Calculating the effects. We calculate the effects here to introduce design of experiments (DOE) notation and orthogonal arrays. The two-way linear contrast data table in Figure 8.45 can be transitioned to a test matrix and then to a DOE orthogonal array, as indicated in Figure 8.46.
Building upon the information in Figure 8.46, we can then apply this in-
formation to a DOE orthogonal array and calculate the effects, as shown in
Figure 8.47.
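As a small numerical illustration of that calculation, the sketch below (Python; the list of runs and the helper are ours, and the cell means are those that appear in Figure 8.47) pairs the four cell means with the signs of the orthogonal array and recovers the effects reported there:

    # Each run: (sign of A, sign of B, sign of AB, mean response of the cell).
    runs = [
        (-1, -1, +1, 96.0),   # A1 B1
        (-1, +1, -1, 98.0),   # A1 B2
        (+1, -1, -1, 92.0),   # A2 B1
        (+1, +1, +1, 99.0),   # A2 B2
    ]

    def effect(column):
        plus = [y for *signs, y in runs if signs[column] > 0]
        minus = [y for *signs, y in runs if signs[column] < 0]
        return sum(plus) / len(plus) - sum(minus) / len(minus)

    print(effect(0), effect(1), effect(2))   # -1.5  4.5  2.5  (Supplier, Test Set, AB)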

                    Supplier
Test Set         A1         A2
B1               97         89
                 94         93
                 97         94
B2               97         99
                 97         99
                100         99

FIGURE 8.45 Two-way linear contrasts data table.

2. Graphing the effects. We can now graph the effect of the treatment levels to evaluate what effect they are having on the quality characteristic. The slope of the line indicates the significance of the effect: the steeper the slope, the more significant the effect. This also provides us the opportunity to understand the relationship of treatments, test cells, and orthogonal arrays demonstrated in Figure 8.46 and Figure 8.47. (See Figure 8.48.) The graph also indicates which treatment and treatment level are producing the desired effect of optimizing the process.

Test matrix of treatment combinations:
Treatments      A1(-)        A2(+)
B1(-)           A(-)B(-)     A(+)B(-)
B2(+)           A(-)B(+)     A(+)B(+)

Data table:
Treatments      A1(-)   A2(+)
B1(-)            97      89
                 94      93
                 97      94
B2(+)            97      99
                 97      99
                100      99

Orthogonal array:
Run    A    B    AB
 1     -    -    +
 2     -    +    -
 3     +    -    -
 4     +    +    +

FIGURE 8.46 Data table transition.



Run    Supplier   Test Set   Interaction    y1    y2    y3     ȳ
 1        -          -           +          97    94    97    96
 2        -          +           -          97    97   100    98
 3        +          -           -          89    93    94    92
 4        +          +           +          99    99    99    99
∑+      191.00     197.00      195.00
∑-      194.00     188.00      190.00
∑+/n+    95.50      98.50       97.50
∑-/n-    97.00      94.00       95.00
Effect   -1.50       4.50        2.50

FIGURE 8.47 Calculate the effects.

Graphing the linear effects indicates that A- and B+ would optimize yield, as shown in Figure 8.48.
•  Where lines run parallel, there is little or no interaction.
•  Where lines cross, there is a clear interaction.

[Figure 8.48 shows main-effect plots for treatment A (effect -1.50) and treatment B (effect 4.50), together with an interaction plot of the B- and B+ responses across the levels of A.]

FIGURE 8.48 Graph the effects.



Source of      Sum of       Degrees of    Mean         F        F'          %
Variation      Squares SS   Freedom df    Squares MS   Ratio    Critical    Contribution
Supplier          6.75          1            6.75      1.36      5.12         6
Test Set         60.75          1           60.75     12.22      5.12        54
Within           44.75          9            4.97                            40
Total           112.25         11

FIGURE 8.49 Determine significance.

The graph of the interactions indicates that there is little or no interaction effect.
3. Determining the significance. The significance of the linear contrasts is
determined by using Fisher’s F statistic. This can be accomplished by using the
same first steps from one-way ANOVA. The significance of the effects can be
determined from our previous two-way ANOVA table, as shown in Figure 8.49.
4. Evaluating the results. Now that we have completed our two-way linear contrast evaluation, we can make several fact-based determinations:
•  Effect of A = 95.50 - 97.00 = -1.5.
•  Effect of B = 98.50 - 94.00 = 4.5.
•  The F ratio for A is less than critical F.
•  The F ratio for B is greater than critical F.

Therefore, treatment B is statistically significant, while A is not.

DESIGN OF EXPERIMENTS

Factorial experiments are widely used in business, industry, and services to de-
fine, describe, optimize, and improve products and processes. They are used to
simultaneously evaluate the factors affecting the response of a product or pro-
cess. This class of designs is of great practical importance and is the most fre-
quently used technical decision-making tool.
In this section, we discuss the basic concepts of factorial experiments and the
uses, generation, and evaluation of 2k experimental designs.

Basic Concepts of DOE


Design of experiments (DOE) is one of the most powerful tools available for the
design, characterization, and improvement of products and services. DOE is a
group of techniques used to organize and evaluate testing so that it provides the
most valuable data and makes efficient use of assets.
This chapter explains how to carry out the calculations needed to identify the
factors that have the most influence on your process. The intent is to present a
simple technique that can withstand day-to-day use in a variety of industries
while providing a basic understanding that will ease further studies. With this
technique, you will be able to decide:

•  Whether a change in your process is worth the added cost
•  How to improve the yield or other quality characteristic
•  How to reduce the variability of your product or service

Variability breeds dissatisfaction, both in unlucky customers who get the


poorer item and in the lucky ones who then wonder whether their luck will hold
the next time they buy. Customers will (and do!) talk to each other. By using
your design parameters (dimensions, materials, etc.) as factors in the analysis,
you can make your quality characteristic insensitive to an outside influence
(e.g., ambient temperature).
A designed experiment is simply a test or trial project that has been well
structured to gain the most information from the response or yield caused by
certain inputs and treatments. Full factorial experimental designs provide a
comprehensive analysis of all treatments, their levels, and their interactions.
Evaluating full factorial experiments includes:

•  The analysis of the effects of the treatments and treatment levels
•  Graphing these effects
•  Performing an ANOVA to determine the significance of the treatments
•  Determining the percent contribution of the variance at each level

Optimizing the quality characteristic depends on the metric selected and the
nature of the process. If we are measuring a process yield, maximum is best; for
failure rates, minimum is best; for the deviation from a variable standard, nomi-
nal is best.
Experimental Matrix
Figure 8.50 displays a typical DOE experimental matrix or test matrix. The ma-
trix indicates the number of treatments, levels of the treatments, and the treat-
ment combinations for each experimental run (trial or test).
The DOE experimental matrix translates directly to an array suitable for
computation. This matrix lays out the DOE in runs, indicating the levels for
each run applied to the main effects and interactions of the treatments.

                        Supplier (A)
                     A1              A2
Equipment (B)  B1   A1B1C1          A2B1C1     C1
                    A1B1C2          A2B1C2     C2   Temperature (C)
               B2   A1B2C1          A2B2C1     C1
                    A1B2C2          A2B2C2     C2

Each cell is a treatment combination; each treatment (A, B, C) is run at two levels, and each column of cells is one experimental run.

FIGURE 8.50 DOE experimental matrix.

In Figure 8.51, the treatment combination A2B2C2D2 is represented as experimental run 1, with main effects for A set as low (-), B set as low (-), C set as low (-), and D set as low (-).

Main Effect
The main effect of a treatment is the measured change in the response as a result
of changing that specific treatment. In Figures 8.50 and 8.51, the main effects
are for treatments A, B, C, and D.

Interaction
Interaction is the measured change in the response as a result of the combined
effect of two or more treatments. In Figures 8.50 and 8.51, the interactions are
for treatment combinations AB, AC, BC, AD, BD, CD, ABC, ABD, ACD,
BCD, and ABCD.

Treatment
Treatments are the controllable factors used as inputs to the products and pro-
cesses under evaluation. These are the input variables, also called the indepen-
dent variables (Figure 8.51), that can be varied to change the effect on the output
or dependent variable. In Figures 8.50 and 8.51, the treatments are A (the sup-
plier), B (the equipment used in the process), C (the temperature), and D (wind
speed).

Level
Levels are the values of the treatments being studied. In most instances of de-
signed experiments, we can use two levels of the treatments:

1. High level: symbolized by a plus sign (+) or the number 1
2. Low level: symbolized by a minus sign (-) or the number 2
                                  Supplier (A)
                         A1                            A2
          B1   A1B1C1D1   A1B1C1D2      A2B1C1D1   A2B1C1D2    C1
               A1B1C2D1   A1B1C2D2      A2B1C2D1   A2B1C2D2    C2
Equipment B2   A1B2C1D1   A1B2C1D2      A2B2C1D1   A2B2C1D2    C1   Temperature
               A1B2C2D1   A1B2C2D2      A2B2C2D1   A2B2C2D2    C2
                  D1          D2            D1         D2
                              Wind Speed (D)

Run    A    B    C    D    AB   AC   BC   AD   BD   CD   ABC  ABD  ACD  BCD  ABCD
 1    -1   -1   -1   -1     1    1    1    1    1    1    -1   -1   -1   -1    1
 2     1   -1   -1   -1    -1   -1    1   -1    1    1     1    1    1   -1   -1
 3    -1    1   -1   -1    -1    1   -1    1   -1    1     1    1   -1    1   -1
 4     1    1   -1   -1     1   -1   -1   -1   -1    1    -1   -1    1    1    1
 5    -1   -1    1   -1     1   -1   -1    1    1   -1     1   -1    1    1   -1
 6     1   -1    1   -1    -1    1   -1   -1    1   -1    -1    1   -1    1    1
 7    -1    1    1   -1    -1   -1    1    1   -1   -1    -1    1    1   -1    1
 8     1    1    1   -1     1    1    1   -1   -1   -1     1   -1   -1   -1   -1
 9    -1   -1   -1    1     1    1    1   -1   -1   -1    -1    1    1    1   -1
10     1   -1   -1    1    -1   -1    1    1   -1   -1     1   -1   -1    1    1
11    -1    1   -1    1    -1    1   -1   -1    1   -1     1   -1    1   -1    1
12     1    1   -1    1     1   -1   -1    1    1   -1    -1    1   -1   -1   -1
13    -1   -1    1    1     1   -1   -1   -1   -1    1     1    1   -1   -1    1
14     1   -1    1    1    -1    1   -1    1   -1    1    -1   -1    1   -1   -1
15    -1    1    1    1    -1   -1    1   -1    1    1    -1   -1   -1    1   -1
16     1    1    1    1     1    1    1    1    1    1     1    1    1    1    1

FIGURE 8.51 DOE experimental runs.

In Figure 8.51 above, there are two levels for each treatment:

•  Supplier A1 (+) and A2 (-)
•  Equipment B1 (+) and B2 (-)
•  Temperature C1 (+) and C2 (-)
•  Wind speed D1 (+) and D2 (-)

Treatment Combination
A treatment combination is the set of treatments and the associated levels used for
an individual experimental run. Treatment combinations are displayed for each ex-
perimental run. For instance, A1B2C1D1 indicates that the experimental run will
be accomplished using material from treatment A (supplier) at level 1 (þ), treat-
ment B (equipment) at level 2 (), treatment C (temperature) at level 1 (þ), and
treatment D (wind speed) at level 1 (þ). These treatment combinations also de-
scribe the treatments and levels of an experiment and determine the number of
experimental runs required.

Experimental Run
An experimental run is the accomplishment of a treatment combination. It may
consist of one or more trials. In the example given in Figure 8.51 ‘‘Treatment
Combination,’’ there are four treatments at two levels (24), or eight experimental
runs in this designed experiment. The number of experimental runs needed in a
designed experiment can be determined by the number of treatments (k) and the
levels (L) of each treatment. These are stated as an exponential expression Lk,
with the levels being the base number and the treatments being the exponent.
A designed experiment can have as few as four experimental runs (22) or as
many as 2,187 (three levels with seven treatments, or 37). This clearly demon-
strates that DOE can cover a wide range of experimental combinations and lev-
els of data. As we progress through this chapter, we describe DOE methods for
dealing with large experiments.
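The run count L^k is trivial to compute; the short sketch below (Python; the helper name is ours) reproduces the figures quoted above:

    def runs_required(levels, treatments):
        """Number of runs in a full factorial: levels raised to the number of treatments."""
        return levels ** treatments

    print(runs_required(2, 2))   # 4     -> the smallest designed experiment (2^2)
    print(runs_required(2, 4))   # 16    -> the four-treatment design of Figure 8.51
    print(runs_required(3, 7))   # 2187  -> three levels with seven treatments (3^7)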

Orthogonal Array
Analyzing the results of a designed experiment includes calculating the mean
effect of the levels of each treatment. To accomplish this result, it is necessary
for the experiment to be balanced. A balanced set of experiments contains an
equal number of experiments for each level of each treatment. Figure 8.52 rep-
resents a balanced set of experiments.
Three criteria can be applied to determine whether a test array is orthogonal:

1. Sum all the levels in each column. If there is an equal number of levels in each column, then we have passed the first test for an orthogonal array. In each column of our data table (Figure 8.52), the treatment has four plus-level (+) and four minus-level (-) values.

Treatments
Run A B C AB AC BC ABC
1 - - - + + + -
2 + - - - - + +
3 - + - - + - +
4 + + - + - - -
5 - - + + - - +
6 + - + - + - -
7 - + + - - + -
8 + + + + + + +
+ 4 4 4 4 4 4 4
- 4 4 4 4 4 4 4

FIGURE 8.52 Orthogonal array.

2. All rows with a given symbol (+ or -) in one column must have an equal number of occurrences of each symbol in every other column. In our table, when B is minus (-), there are two minuses and two pluses in C.
3. The selected matrix must have the least number of rows that satisfy the first two criteria for the selected number of treatments. (A sketch of checks for the first two criteria follows this list.)
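The first two criteria are easy to check mechanically. The sketch below (Python; the array is the A, B, C portion of Figure 8.52 written as +1/-1, and the zero-dot-product test is an equivalent way of stating criterion 2 for balanced two-level columns) performs both checks:

    # The eight runs of Figure 8.52, main-effect columns A, B, C only.
    array = [
        [-1, -1, -1], [+1, -1, -1], [-1, +1, -1], [+1, +1, -1],
        [-1, -1, +1], [+1, -1, +1], [-1, +1, +1], [+1, +1, +1],
    ]
    columns = list(zip(*array))

    # Criterion 1: every column has as many plus levels as minus levels.
    balanced = all(col.count(+1) == col.count(-1) for col in columns)

    # Criterion 2 (equivalent form): every pair of columns has a zero dot product.
    orthogonal = all(sum(a * b for a, b in zip(c1, c2)) == 0
                     for i, c1 in enumerate(columns) for c2 in columns[i + 1:])

    print(balanced, orthogonal)   # True True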

Sample Size (Replication)


The sample size is the number of times all experimental runs are accomplished
as a repeat or replicate. A repeat sample is used when the experiment is simply
duplicated for each experimental run. A replicate sample is used when the ex-
perimental run is a measurement that is sensitive to setup, environment, or some
other factor outside the sample treatments. The minimum sample size for a de-
signed experiment is two. This minimum is required to establish a variance
about the mean of the response variable. The sample size then becomes depen-
dent on the levels of confidence needed in the experiment, as indicated during
our discussion on ANOVA. The sample size for each experimental run is indi-
cated by a lowercase n, and the total sample size for the complete experiment is
indicated by an uppercase N. The samples for a designed experiment then can
be described by:

(2)(2²) =   8        (2)(3²) =    18
(2)(2³) =  16        (2)(3³) =    54
(2)(2⁴) =  32        (2)(3⁴) =   162
(2)(2⁵) =  64        (2)(3⁵) =   486
(2)(2⁶) = 128        (2)(3⁶) = 1,458
(2)(2⁷) = 256        (2)(3⁷) = 4,374
c08_1 10/09/2008 376

376 ANALYZE AND IMPROVE EFFECTIVENESS

It is apparent that we can use the sample size and number of experimental
runs to plan our data management needs. If we run the simplest experiment with
the minimum number of samples, there will be only eight resulting data points.
In a 3⁷ full factorial experiment with the minimum number of samples, there
would be 4,374 data points.
Sample size is always a critical decision in any experimental design. This
decision is based on the need for experimental data (the smaller the differences
you are trying to detect, the larger your sample size needs to be), the economics
of the situation, and the resources being used to perform the experiment. Al-
though it is true that the minimum sample size required to evaluate a designed
experiment is two (2), you would not want to evaluate any kind of data based on
a sample of two.

Response
A response is a result of an experimental trial. It is the dependent variable (also
called the response variable). It is the measured effect on the product or process
of using the specific combination of treatments and levels. In a 2⁷ full factorial experiment with two repeats or replicates, there are 256 responses.

Randomization
We can assign a treatment combination to an experimental run by random
chance, using a randomization program or randomization table. Randomization
prevents uncontrolled environmental variables from biasing the data in any test run.
does not have total control over the environment or when there are input varia-
bles outside the experiment that may affect the process. A possible random or-
der of the experimental runs for our data is presented in Figure 8.53.
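Any randomization program will do; a one-line shuffle is enough. The sketch below (Python standard library; the run numbers are simply 1 through 8 for a 2³ design) produces one possible random order:

    import random

    run_order = list(range(1, 9))   # the eight runs of a 2^3 design
    random.shuffle(run_order)       # in-place random permutation
    print(run_order)                # one of the 8! = 40,320 possible orderings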

2^k Full Factorial DOE


Full factorial experimental designs provide a comprehensive analysis of all
treatments, levels, and interactions related to a selected quality characteristic.
The full factorial designs we discuss are commonly called the 2k designs, where
2 is the number of levels and k represents the number of treatments. Full factori-
al experiments can be accomplished in eight steps.

1. Identify a problem or opportunity for improvement.


2. Perform a cause-and-effect analysis to select treatments.
3. Select treatments and treatment levels.
4. Select a full factorial experimental format.
5. Conduct the experiment and acquire data.
6. Determine the effects.

FIGURE 8.53 Randomization.


7. Graph the results.


8. Perform ANOVA.

1. Identifying an opportunity. The first step in any designed experiment is to


identify the opportunity for improvement. The most effective way to accomplish
this is through a thorough understanding of your processes and products. These
opportunities can be derived from many sources in existing and new processes,
products, and services. Examples include:
•  Opportunities for variability reduction to improve the efficiency and effectiveness of existing processes
•  Opportunities to improve products and services based on customer requirements
•  Opportunities to reduce reject and scrap rates
•  Opportunities to improve the development of new products and services

2. Cause-and-effect analysis. Figure 8.54 is an example of a fishbone dia-


gram for cause-and-effect analysis. To identify the opportunities for improve-
ment, four subject experts spent approximately one hour of their time to
brainstorm the problem and create the cause-and-effect analysis. We follow this
example through our development of parameter design.
3. Selecting treatments and treatment levels. Based on the cause-and-effect
analysis, select which factors will be tested. Determine the levels of the factors
and assign a test value for each level. From our previous example, the factors
and levels are as indicated in Table 8.3. Two of the factors are qualitative
(equipment and suppliers), and the third factor is quantitative (temperature).
The ability to mix these distinctively different types of data into a single experi-
ment is an important feature of designed experiments.

[Figure 8.54 is a fishbone diagram for the effect "Production yield too low," with candidate causes (such as gear wear, speed, wrong sequence, resin type, additive %, worn calipers, wrong specifications, training, attitude, education, temperature, and humidity) grouped under the Machine, Method, Material, Measurement, Personnel, and Environment categories.]

FIGURE 8.54 Cause-and-effect.



TABLE 8.3
Treatments             Levels
                       +            -
A. Equipment           A1           A2
B. Supplier            B1           B2
C. Temperature         C1 (high)    C2 (low)

4. 2^k full factorial design. To select the appropriate full factorial design for your experiment, use the table of 2^k factorial designs shown in Figure 8.55. Note that the selected design is based on the number of treatments and levels selected for the experiment. The table demonstrates the full factorial designs for two-treatment (2²), three-treatment (2³), and four-treatment (2⁴) experimental designs at two levels. If we were to select a design for three treatments (ABC) at two levels from this table, the resulting DOE format would take the form of a full factorial for three treatments at two levels (2³).
5. Conducting the experiment and acquiring data. Using the design selected
during the preceding step, the experiment is conducted and the results are re-
corded. Record any notes on related circumstances that might provide informa-
tion concerning the test results, as necessary.
For our example, the data in Figure 8.56 was recorded. Notice that we performed the eight experimental runs and repeated each of them three times (annotated as y1, y2, and y3); these repeats are called iterations. This lends more power to the results of the DOE.

Run   A  B  AB  C  AC  BC  ABC  D  AD  BD  ABD  CD  ACD  BCD  ABCD
(The 2² design uses runs 1 through 4 and columns A, B, AB; the 2³ design uses runs 1 through 8; the 2⁴ design uses all 16 runs.)
1 - - + - + + - - + + - + - - +
2 + - - - - + + - - + + + + - -
3 - + - - + - + - + - + + - + -
4 + + + - - - - - - - - + + + +
5 - - + + - - + - + + - - + + -
6 + - - + + - - - - + + - - + +
7 - + - + - + - - + - + - + - +
8 + + + + + + + - - - - - - - -
9 - - + - + + - + - + + - + + -
10 + - - - - + + + + + - - - + +
11 - + - - + - + + - - - - + - +
12 + + + - - - - + + - + - - - -
13 - - + + - - + + - + + + - - +
14 + - - + + - - + + + - + + - -
15 - + - + - + - + - - - + - + -
16 + + + + + + + + + - + + + + +

FIGURE 8.55 Select a 2^k full factorial design.



Treatments Test Results


A B C AB AC BC ABC y1 y2 y3 y Range
1 - - - + + + - 81 79 74 78.00 7
2 + - - - - + + 78 72 76 75.33 6
3 - + - - + - + 76 71 74 73.67 5
4 + + - + - - - 75 71 75 73.67 4
5 - - + + - - + 79 80 76 78.33 4
6 + - + - + - - 76 71 74 73.67 5
7 - + + - - + - 75 76 77 76.00 2
8 + + + + + + + 74 77 78 76.33 4
∑-
∑+
∑+/n+
∑-/n-
Effect

FIGURE 8.56 Conduct the experiment and apply data.

6. Determining the effects. To calculate the effects of the treatments and treatment levels on the quality characteristic, we use the following equation:

    Effect_k = ∑y+/n+ - ∑y-/n-

The effects for each treatment and treatment level are calculated as follows:

    Effect_A   = ∑y_A+/n_A+ - ∑y_A-/n_A- = 299.00/4 - 306.00/4 = -1.75
    Effect_B   = 299.67/4 - 305.33/4 = -1.42      Effect_C   = 304.33/4 - 300.67/4 =  0.92
    Effect_AB  = 306.33/4 - 298.67/4 =  1.92      Effect_AC  = 301.67/4 - 303.33/4 = -0.42
    Effect_BC  = 305.67/4 - 299.33/4 =  1.58      Effect_ABC = 303.67/4 - 301.33/4 =  0.58

Determining the treatment interactions requires one additional calculation. It is necessary to break down the effects of the treatments into the effects of the treatment interactions. To accomplish this, we calculate the effects of A- when B is at the plus and minus levels and the effects of A+ when B is at the plus and minus levels. This is accomplished using the following equation:

    Interaction_xy = ∑y_xy / n_xy

Treatments Test Results


A B C AB AC BC ABC y1 y2 y3 y Range
1 - - - + + + - 81 79 74 78.00 7
2 + - - - - + + 78 72 76 75.33 6
3 - + - - + - + 76 71 74 73.67 5
4 + + - + - - - 75 71 75 73.67 4
5 - - + + - - + 79 80 76 78.33 4
6 + - + - + - - 76 71 74 73.67 5
7 - + + - - + - 75 76 77 76.00 2
8 + + + + + + + 74 77 78 76.33 4
∑- 306.00 305.33 300.67 298.67 303.33 299.33 301.33
∑+ 299.00 299.67 304.33 306.33 301.67 305.67 303.67
∑+/n+ 74.75 74.92 76.08 76.58 75.42 76.42 75.92
∑-/n- 76.50 76.33 75.17 74.67 75.83 74.83 75.33
Effect -1.75 -1.42 0.92 1.92 -0.42 1.58 0.58

FIGURE 8.57 Determine the effects.

These calculations can be accomplished directly on the same spreadsheet


used to calculate treatment main effects.
    Interaction_A-B- = ∑y_A-B- / n_A-B- = (78.00 + 78.33) / 2 = 78.17
Determining the effects can most easily be done by performing the calcula-
tions directly on the full factorial design spreadsheet, as indicated in Figure
8.57.
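The same spreadsheet arithmetic can be scripted. The sketch below (pure Python; the data are the eight runs and three replicates of Figure 8.56, and the variable names are ours) reproduces the Effect row of Figure 8.57:

    from statistics import mean

    # (signs of A, B, C, AB, AC, BC, ABC), followed by the three replicate responses.
    runs = [
        ((-1, -1, -1, +1, +1, +1, -1), (81, 79, 74)),
        ((+1, -1, -1, -1, -1, +1, +1), (78, 72, 76)),
        ((-1, +1, -1, -1, +1, -1, +1), (76, 71, 74)),
        ((+1, +1, -1, +1, -1, -1, -1), (75, 71, 75)),
        ((-1, -1, +1, +1, -1, -1, +1), (79, 80, 76)),
        ((+1, -1, +1, -1, +1, -1, -1), (76, 71, 74)),
        ((-1, +1, +1, -1, -1, +1, -1), (75, 76, 77)),
        ((+1, +1, +1, +1, +1, +1, +1), (74, 77, 78)),
    ]
    labels = ("A", "B", "C", "AB", "AC", "BC", "ABC")
    run_means = [mean(ys) for _, ys in runs]

    for i, label in enumerate(labels):
        plus = mean(m for (signs, _), m in zip(runs, run_means) if signs[i] > 0)
        minus = mean(m for (signs, _), m in zip(runs, run_means) if signs[i] < 0)
        print(label, round(plus - minus, 2))
    # A -1.75, B -1.42, C 0.92, AB 1.92, AC -0.42, BC 1.58, ABC 0.58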
7. Graphing the results. We can now graph the main effects and interactions to evaluate what effect the treatments and treatment levels are having on the quality characteristic. The top section of Figure 8.58 displays the graphs associated with the treatment main effects. This is simply a graph of the values of ∑+/n+ and ∑-/n- from the table. The slope of the line for main effects indicates the significance of the effect. The steeper the slope, the more significant the effect. The graph also indicates which treatment level is producing the desired effect of optimizing the process.
We are now ready to plot the treatment interactions, as shown in Figure 8.58. When the lines of the graph are intersecting or converging, there is an indication of an interaction. If the lines run parallel, there is no interaction.
The main effects and interactions shown in Figure 8.58 and the ANOVA in Figure 8.59 indicate that the yield for this process can be maximized by selecting treatment A at the low setting, treatment B at the low setting, and treatment C at the high setting. There are apparent interactions between A x B and B x C, but no apparent interaction between A x C.
8. Performing ANOVA. ANOVA, applied as part of a designed experiment, is
used to measure the significance of the main effects and interactions and the level
[Figure 8.58 shows main-effect plots for treatments A (effect -1.75), B (-1.42), and C (+0.92) and interaction plots for AB (1.92), AC (-0.42), and BC (1.58).]

FIGURE 8.58 Graph the results.



Source of      Sum of       Degrees of    Mean         F        F'          %
Variation      Squares SS   Freedom df    Squares MS   Ratio    Critical    Contribution
A                18.375         1          18.375      2.940     3.05       10%
B                12.042         1          12.042      1.927     3.05        7%
C                 5.042         1           5.042      0.807     3.05        3%
AB               22.042         1          22.042      3.527     3.05       13%
AC                1.042         1           1.042      0.167     3.05        1%
BC               15.042         1          15.042      2.407     3.05        9%
ABC               2.042         1           2.042      0.327     3.05        3%
Within          100.000        16           6.250                           57%
Total           175.627        23

FIGURE 8.59 Perform ANOVA.

of contribution. The ANOVA decision table for our current example (Figure 8.59) indicates that treatment A contributed 10 percent to the overall process variation. This points to the treatments that should be the targets of improvement processes such as CMI and variability reduction programs, and it indicates which of the treatments in our experimental design has the most effect on our outcome. Additionally, it is equally important to notice that the percent contribution from error is 56.94 percent of the total. This indicates that 56.94 percent of the variability is attributable to factors we have not evaluated in our designed experiment and/or to the uncontrolled environment.
As you can see in the table, the only significant factor at the 90 percent confidence level is the AB interaction. This is determined by comparing the calculated F ratio to the critical value of F.

Full Factorial DOE Summary

•  Full factorial designs are the best-designed experiments to use when all main effects and interactions are critical.
•  Full factorial designs with ANOVA provide the most comprehensive information for fact-based decision-making.
•  Full factorial designs are the most costly, time-consuming, and resource-consuming of all available designed experiments.

Fractional Factorial Experiments


As the number of treatments to be evaluated in a full factorial experiment grows
large, the costs in time, resources, and budget grow. The number of runs

required to complete a full factorial quickly outgrows the resources of most ex-
perimenters. As an example, completing a 23 full factorial experiment requires
eight runs; completing a 28 experiment requires 256 runs. The runs required to
execute these full factorial experiments contain higher-order interactions as well
as main effects. The number of runs required to evaluate these interactions is a
significant portion of the full factorial experiment. Where these higher-order in-
teractions are not clearly of concern, as is true in most experimental cases, in-
formation on main effects only or on main effects and lower-order interactions
can be obtained from fractional factorial experiments, thus providing great sav-
ings in resources.
Several key points contribute to our ability to fractionalize and use fractional
factorial experiments:

•  When several variables are to be evaluated, the system being studied is most likely driven by some subset of main effects and possibly a few lower-order interactions.
•  The subset of significant factors identified in a fractional factorial experiment can be projected into a full factorial experiment and fully evaluated.
•  Fractional factorial experiments can be used in a sequence of experiments to refine and identify significant factors.
•  Fractional factorial experiments can themselves be used to optimize systems where there is little or no direct concern about interactions.

Using fractional factorial designs leads directly to improved efficiency and


effectiveness of experimental design. The most effective use of fractional factorials is when runs are made sequentially, that is, when fractional factorial experiments are used for screening and characterizing purposes leading to a full factorial or a fractional factorial that can be used to optimize a system. For example, if you need to understand the relationship of six treatments (2⁶) to a quality characteristic, it would require 64 test runs. Running a one-half fractional factorial (2^(6-1)) experiment would require only 32 runs, and a one-quarter fractional factorial (2^(6-2)) would require only 16 runs. From this frac-
tional factorial data, you can determine which subset of the original six fac-
tors was significant. You could then conduct a full factorial on that subset,
thereby saving significant resources. Alternatively, you may learn enough
from the first fractional factorial to change levels, add or remove treatments,
or change the response characteristic.
All experiments should be sequential, beginning with fractional factorials, to
screen larger numbers of treatments and refine other experimental parameters.
This leads to effective use of full factorials to finalize the analysis and optimize
the system or process under study. As a basic rule, no more than 25 percent of
the experimental budget should be used to perform the first experiment.

Fractionalizing 2^(k-1) Factorial Experiments


One-half fractional factorial experiments are called 2^(k-1) designs. This is because the experiment is performed in 2^(k-1) runs. As an example, in a 2³ experimental design, there are eight runs. In a 2^(3-1) fractional factorial experimental design, there are only four runs.
The basic premise of separating experimental runs for fractional factorial ex-
periments is the same as for blocking a full factorial: to derive from the full
factorial a smaller subset that is orthogonal. In a fractional factorial, the blocks
are not likely to be rejoined into a full factorial, so they must be separated by
their higher-order interaction only.
This method is easier to visualize if we use the already familiar 23 full facto-
rial. The transition for this full factorial design to a fractional factorial is demon-
strated in Figure 8.60.
The higher-order interaction (ABC) here is the defining factor for confounding. The eight-run experiment can be separated into two four-run designs that separate treatment combinations according to the sign (+ or -) of the interaction ABC, as shown in Figure 8.60.
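The separation step can be sketched in a few lines of Python (the variable names are ours): build the full 2³ design, then keep the half whose ABC sign is plus, which is the defining relation I = ABC.

    from itertools import product

    full = list(product((-1, +1), repeat=3))            # the eight (A, B, C) runs of a 2^3 design
    half = [(a, b, c) for a, b, c in full if a * b * c == +1]

    print(len(full), len(half))   # 8 4
    print(half)                   # [(-1, -1, 1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]

The four retained runs are exactly runs 2, 3, 5, and 8 of Figure 8.60.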

Full Factorial
Run A B AB C AC BC ABC
1 - - + - + + -
2 + - - - - + +
3 - + - - + - +
4 + + + - - - -
5 - - + + - - +
6 + - - + + - -
7 - + - + - + -
8 + + + + + + +

1/2 Fractional Factorial


Run A B AB C AC BC ABC
2 + - - - - + +
3 - + - - + - +
5 - - + + - - +
8 + + + + + + +

1/4 Fractional Factorial


Run A B C
1 + - -
2 - + -
3 - - +
4 + + +

FIGURE 8.60 Fractional factorials 2^(k-1).



Treatments

Run A B AB C AC BC ABC

2 + - - - - + +

3 - + - - + - +

5 - - + + - - +

8 + + + + + + +

Alias BC AC AB Identity

FIGURE 8.61 Confounding and aliases.

The primary fraction is the identity column, where ABC is equal to plus (+); that is, I = ABC. This yields a fractional factorial experiment in four runs, as indicated in Figure 8.60.
This method separates the full factorial into two fractional factorials, each with four runs: the four runs in which ABC is equal to minus and the four runs in which ABC is equal to plus.

Confounding and Aliases


This method also creates aliases, that is, more than one factor with the same
treatment combination. The aliases result when more than one factor has the
same treatment levels in the same run. In the 2^(3-1) fractional factorial table shown in Figure 8.61, the aliases are:

    A = BC        B = AC        C = AB
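Aliasing can also be seen numerically. In the sketch below (Python; the four rows are runs 2, 3, 5, and 8 as in Figure 8.61), the A column of the half-fraction is identical to the BC column:

    half = [(1, -1, -1), (-1, 1, -1), (-1, -1, 1), (1, 1, 1)]   # (A, B, C) for runs 2, 3, 5, 8
    a_column = [a for a, b, c in half]
    bc_column = [b * c for a, b, c in half]
    print(a_column == bc_column)   # True -> A is aliased with BC (likewise B with AC, C with AB)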

Executing a Fractional Factorial Experiment


Fractional factorial experiments are conducted and evaluated exactly like full
factorial experiments. Fractional factorial experiments can be accomplished in
eight steps.

1. Identify a problem or opportunity for improvement.


2. Perform a cause-and-effect analysis to select treatments.
3. Select treatments and treatment levels.
4. Select a fractional factorial experimental format.
5. Conduct the experiment and acquire data.
6. Determine the effects.
7. Graph the results.
8. Perform ANOVA.

Step 1. Identifying the opportunity. Selecting the opportunity for improve-


ment in a fractional factorial is accomplished in exactly the same way as for full

factorials. A fractional factorial can be a screening experiment used to determine which of many factors are critical, a system characterization experiment used to determine how a system will react to several treatments and levels, or an optimizing experiment in cases where interactions may be of little concern. Therefore, the opportunity for improvement may relate to any of these purposes. The basic idea remains constant; however, your output variable that relates to the opportunity for improvement will relate to quality, cost, or schedule in some way. Some examples include:
•  Identifying the treatments and levels affecting the reliability of a system (quality)
•  Determining the treatments and levels that reduce cycle time (schedule)
•  Opportunities to reduce rework (cost)
•  Opportunities to improve process effectiveness and efficiency (quality, cost, and schedule)
Step 2. Performing a cause-and-effect analysis. Figure 8.62 illustrates an
Ishikawa diagram used to select treatments and treatment levels. This is accom-
plished using a cross-functional team during a brainstorming session. Other
methods for selecting treatments and levels are quality function deployment,
failure modes and effects analysis (FMEA) for systems and processes, and pro-
cess analysis. In using any of these methods, remember the importance of using
a cross-functional team. The team approach will bring a broad spectrum of
knowledge, disciplines, and experience to the evaluation and make your designed
experiment more effective and efficient.
Step 3. Selecting treatments, levels, and values. Based on the cause-and-effect analysis, select the treatments and treatment levels to be used in your designed experiment.

[Figure 8.62 is an Ishikawa (fishbone) diagram for the improvement opportunity "Reduce age of receivables," with candidate causes (such as manual and automated systems, procedures, lines of authority, report format and frequency, reporting media, training, education, customer lead time, tracking, and accountability) grouped under the Machine, Method, Material, Measurement, Personnel, and Environment categories.]

FIGURE 8.62 Perform a cause-and-effect analysis.



                                Level
Treatment                       +                                   -
A. Training                     Clerk                               Technician
B. Follow-up                    30-Day                              1-Day
C. Contractor                   Contractor                          Company
D. Procedure                    Formal                              Checklist
E. Automated                    Billing, Tracking, and Follow-up    Tracking
F. Level of Authority           Executive                           Management

FIGURE 8.63 Select levels and values.

The treatments and treatment levels selected are indicated in Figure 8.63.
Step 4. Selecting a 2^(k-p) fractional factorial experiment. Since the number of runs required for a full factorial 2⁶ experiment would be very large, it was determined that a fractional factorial experiment would be used. The resources available, and the fact that interactions may not be of concern, led the experimenters to use a 2^(6-2) fractional factorial. The resulting orthogonal array is demonstrated in Figure 8.64.
The applicable confounding and aliasing are listed across the bottom of the table in Figure 8.64. In this fractional factorial format, the treatment main effects are confounded with three-factor interactions.
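The Figure 8.64 array follows directly from its generators. The sketch below (Python; the loop order is chosen so that A varies fastest, matching the figure) writes the 2⁴ full factorial in A, B, C, and D and computes E = ABC and F = BCD, giving 16 runs instead of the 64 a full 2⁶ design would need:

    from itertools import product

    design = []
    for d, c, b, a in product((-1, +1), repeat=4):          # A varies fastest, D slowest
        design.append((a, b, c, d, a * b * c, b * c * d))   # columns A, B, C, D, E=ABC, F=BCD

    print(len(design))   # 16
    print(design[0])     # (-1, -1, -1, -1, -1, -1)  -> run 1 of Figure 8.64
    print(design[1])     # (1, -1, -1, -1, 1, -1)    -> run 2 of Figure 8.64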
Step 5. Conducting the experiment and acquiring data. Using the design
selected, conduct the experiment. Measure the quality characteristic using the

Treatments
A B C D E=ABC F=BCD
1 - - - - - -
2 + - - - + -
3 - + - - + +
4 + + - - - +
5 - - + - + +
6 + - + - - +
7 - + + - - -
8 + + + - + -
9 - - - + - +
10 + - - + + +
11 - + - + + -
12 + + - + - -
13 - - + + + -
14 + - + + - -
15 - + + + - +
16 + + + + + +
BCE ACE ABE BCF ABC BCD
DEF CDF BDF AEF ADF ADE

FIGURE 8.64 Select a 2^(k-p) fractional factorial design.



Treatments Test Results


A B C D E F y1 y2 y3 y
1 - - - - - - 76 70 69 71.67
2 + - - - + - 100 96 81 92.33
3 - + - - + + 121 105 117 114.33
4 + + - - - + 96 84 95 91.67
5 - - + - + + 101 96 115 104.00
6 + - + - - + 67 72 76 71.67
7 - + + - - - 96 106 100 100.67
8 + + + - + - 115 122 119 118.67
9 - - - + - + 96 89 98 94.33
10 + - - + + + 121 122 135 126.00
11 - + - + + - 165 173 181 173.00
12 + + - + - - 119 127 132 126.00
13 - - + + + - 112 124 99 111.67
14 + - + + - - 86 82 84 84.00
15 - + + + - + 100 109 111 106.67
16 + + + + + + 176 184 195 185.00
∑+
∑-
∑+/n+
∑-/n-
Effect

FIGURE 8.65 Conduct the experiment.

measure determined (step 2). Record the results on your worksheet along with
any notes on related circumstances that might provide information concern-
ing the test results. The resulting data can be recorded as indicated in
Figure 8.65.
Step 6. Determining the effects. To determine the effects of a fractional fac-
torial designed experiment, we use the same equation as we did for determining
the effects of a full factorial experiment. (See Figure 8.66.)
Step 7. Graphing the results. Graph the main effects to evaluate what effect
the treatments and treatment levels are having on the quality characteristic. Fig-
ure 8.67 displays the graphs associated with the treatment main effects. This is simply a graph of the values of ∑+/n+ and ∑-/n-. The slope of the line for
main effects indicates the significance of the effect. The steeper the slope, the
more significant the effect. The graph also indicates which treatment level is
producing the desired effect of optimizing the process.
Step 8. Performing ANOVA. Analysis of variance, applied as part of a frac-
tional factorial designed experiment, is used to measure the significance of the
treatments and the level of contribution. The ANOVA decision table for the
fractional factorial experiment appears in Figure 8.68.

Treatments Test Results


A B C D E F y1 y2 y3 y
1 - - - - - - 76 70 69 71.67
2 + - - - + - 100 96 81 92.33
3 - + - - + + 121 105 117 114.33
4 + + - - - + 96 84 95 91.67
5 - - + - + + 101 96 115 104.00
6 + - + - - + 67 72 76 71.67
7 - + + - - - 96 106 100 100.67
8 + + + - + - 115 122 119 118.67
9 - - - + - + 96 89 98 94.33
10 + - - + + + 121 122 135 126.00
11 - + - + + - 165 173 181 173.00
12 + + - + - - 119 127 132 126.00
13 - - + + + - 112 124 99 111.67
14 + - + + - - 86 82 84 84.00
15 - + + + - + 100 109 111 106.67
16 + + + + + + 176 184 195 185.00
∑+ 895.33 1016.00 882.33 1006.67 1025.00 893.67
∑- 876.33 755.67 889.33 765.00 746.67 878.00
∑+/n+ 111.92 127.00 110.29 125.83 128.13 111.71
∑-/n- 109.54 94.46 111.17 95.63 93.33 109.75
Effect 2.38 32.54 -0.87 30.21 34.79 1.96

FIGURE 8.66 Determine the effects.

This fractional factorial experiment has indicated the following:

•  Treatments B, D, and E are clearly significant.
•  Treatments A, C, and F are not significant.
•  The variance may have been caused by the sampling or some other factor not attributed to the treatment under study.

The percent contribution indicates what you may gain by taking action and
optimizing the treatment under study. In this study the percent contribution is:

Treatment B 27%
Treatment D 24%
Treatment E 31%

This indicates that 83 percent of the variance may be eliminated by optimiz-


ing treatments B, D, and E.

[Figure 8.67 shows main-effect plots for treatments A through F, with effects of 2.38 (A), 32.54 (B), -0.87 (C), 30.21 (D), 34.79 (E), and 1.96 (F).]

FIGURE 8.67 Graph the results.

Source of      Sum of       Degrees of    Mean          F        F'          %
Variation      Squares SS   Freedom df    Squares MS    Ratio    Critical    Contribution
A                  67.21        1             67.21      0.350    7.31         0
B               12707.00        1          12707.00     65.840    7.31        27
C                   8.71        1              8.71      0.050    7.31         0
D               10950.00        1          10950.00     56.740    7.31        24
E               14525.04        1          14525.04     75.260    7.31        31
F                  45.54        1             45.54      0.240    7.31         0
Within           7925.50       41            193.30                           17
Total           46229.00       47

FIGURE 8.68 Perform ANOVA.



[Figure 8.69 diagrams the Enterprise Excellence decision process: (1) start and (2) identify opportunity lead to (3) develop business case; an invent/innovate branch runs through (4) invent/innovate, (5) develop technology, (6) optimize technology, and (7) verify technology transfer; a design branch runs through (8) concept development, (9) design development, (10) optimize design, (11) verify design capability, and (12) production launch; and an improve branch runs through (13) define, (14) measure, (15) analyze, (16) improve, (17) Lean, and (18) control, leading to CMI.]

FIGURE 8.69 Enterprise excellence decision process.



KEY POINTS

Caution: We have built a step-by-step process for Enterprise Excellence starting with deployment, implementation, leadership, knowing and understanding your processes, and measuring your processes. Do not shortcut that methodology! There is a propensity to jump to the analyze phase and begin performing designed experiments or evaluating data prematurely. This is called "coffee cup engineering" and will be very costly in time, treasure, and talent, yielding very little.
Use the Enterprise Excellence decision process and follow the procedure step
by step. Figure 8.69 shows a good step-by-step process to follow whenever ana-
lyzing data for process improvement.
Always evaluate the data to analyze existing processes. Be sure the data you
are collecting and using is relevant to the current process. Be cautious about
using historical data from a process when configuration changes or supplier
changes have occurred.
ANOVA will provide you with information concerning the significance of your process treatments but does not directly provide effects. Use linear contrasts to determine the measurable effects of your significant treatments. Use the critical process metrics (CSFs) as measures of effectiveness for quality, cost, schedule, and/or risk.
Designed experiments are costly in time, treasure, and talent. Plan them well,
and always use project management tools when performing a series of designed
experiments.
Rarely does the first designed experiment provide significant data from a process to move to the improve phase. Allocate your budget accordingly: spend about 25 percent of your available resources on the first DOE.
Always have team members monitor a designed experiment. A DOE works
only if the treatments and levels are executed according to plan.
The DOE scenarios presented in this chapter are basic 2^k designed experiments. You may need to use more sophisticated experiments.
Caution: Use the statistical measures of significance when making decisions.
Do not rely on the graphical representations of the data.

9
ANALYZING AND IMPROVING
EFFICIENCY
Effectiveness is the foundation of success—efficiency is a minimum condi-
tion for survival after effectiveness has been achieved. Effectiveness is do-
ing the right things. Efficiency is doing things right.
Peter F. Drucker

Once effectiveness has been achieved, efficiency improvements must begin im-
mediately and continue for the life cycle of the enterprise. In this chapter, we
build upon the define and measure phases using the value stream maps we created
and the measurements we collected to analyze and improve the effectiveness and
efficiency of our processes. (See Figure 9.1.)
Lean methodology has a proven track record of increasing efficiency. But
to be effective, Lean methods and tools must be applied to processes and
operations that work in harmony. Only then can total efficiency be increased.
Improved efficiency is meaningful only when it results in overall cost
reduction.
In a typical business, it is not unusual to find isolated parts of the system
running at peak efficiency (95 to 100 percent). However, efficiency for the
entire system is usually less than 40 percent. This often happens because the
more efficient operations create problems (such as bottlenecks or increased
inventory) for the system as a whole. When making a system more efficient,
the whole system must be considered to avoid suboptimization. This ‘‘sys-
tem’’ focus must go beyond your business processes to customers and
suppliers.
It is important that suppliers’ capabilities match customer requirements
(regardless of whether the customer is internal or external). This means that
what the supplier delivers must be:

•  As requested
•  Within the time needed
•  In the quantities desired


[Figure 9.1 shows the Enterprise Excellence decision process diagram described in Figure 8.69.]

FIGURE 9.1 Enterprise Excellence decision process.


TABLE 9.1
Analyze                              Improve
6S program                           Just-in-time
Seven forms of waste                 Mixed-model production
Takt time                            ABC material handling
Cycle time                           Workable work
Routing analysis                     Workload balancing
Spaghetti diagrams                   One-piece flow
Work content analysis                Work cell design
Process availability analysis        Kanban sizing
Process yield analysis               Rapid improvement events (kaizen)

As an example, consider the following: A customer needs to receive parts


every week to maintain a desired level of efficiency. However, the parts supplier
initially schedules shipments at a rate of once a month.
To facilitate the alignment of customers and suppliers, there must be direct
lines of communication. Suppliers need to understand how their parts are used
and whether they are satisfying the customer’s requirements for fit, form, func-
tion, and availability.
Table 9.1 shows the tools we use to analyze and improve efficiency.
The first two analyze and improve methods we use can be accomplished con-
currently with the value stream mapping and process walkthroughs. They are
the seven forms of waste and the 6S analysis. Not only are these analyses and
actions important to the overall analyze and improve phase, they very often
achieve significant improvement upon their application. After you implement a
6S program and understand the seven forms of waste, you can then move on to
analyze the efficiency data. We therefore begin the analysis and improvement of
efficiency with 6S and the seven forms of waste.

6S PROCESS

The 6S process, or simply 6S, is a structured program to systematically achieve


total organization, cleanliness, and standardization in the workplace. (See
Table 9.2.) A well-organized work place results in a safer, more efficient, and
more productive operation. It boosts the morale of the workers, promoting a
sense of pride in their work and ownership of their responsibilities. 6S is more
than mere housekeeping. It can and does change a work environment and the
attitude of those working in the environment.
Figure 9.2 is an example of a work space needing the 6S program.

Sift
This term refers to removing unneeded and unused items from the workplace. Sift-
ing makes it easier for material to flow and for operators to move about. Clearing
c09_1 10/09/2008 397

6S PROCESS 397

TABLE 9.2
Sift. Required actions: Remove unneeded and unused items from the workplace. Desired outcome: A safe and uncluttered work site, free of hazards and workarounds.
Sort. Required actions: Arrange work site tools, equipment, and materials in the most convenient location for process use; identify, label, and color code. (Note: Follow the principles of Safe.) Desired outcome: Tools, equipment, and materials are located within safe and easy reach; waste of motion is at a minimum.
Shine. Required actions: Clean the work area, tools, and equipment; tag equipment abnormalities. Desired outcome: The work site, required tools, equipment, and materials are clean, defect-free, and ready for use.
Standardize. Required actions: Document the work site layout and the location of tools, equipment, and materials; establish a plan and team assignments to maintain operational readiness. Desired outcome: A documented, graphic work site layout showing the proper location and amounts of required tools, equipment, and materials, including visual controls and coding, with required team member actions and assignments.
Sustain. Required actions: Follow the plan; improve the plan and work site. Desired outcome: A continuously ready operational work site with excellent housekeeping.
Safe. Required actions: All appropriate safety controls in place; safety equipment properly identified; all safety equipment unobstructed and accessible. Desired outcome: All hazards eliminated and a safe work environment provided.

the area of those items that are not being used should occur on a regular basis (e.g.,
every 30 days). Since everything has a place, everything should be in its place.
Create a safe and uncluttered work site, free of hazards and workarounds.

Sort
Identifying and arranging items that belong in the area defines the sort process.
All needed items should be sorted and labeled as belonging in the area. This
makes recognition of the proper tooling, resources, materials, and so on very
easy. If it does not warrant a label, it does not warrant a place in the area.
Tools, equipment, and materials are located within safe and easy reach of
use. Waste of motion is kept to a minimum.

FIGURE 9.2 Work area in need of 5S.

Shine
Order can be maintained by cleaning the work area on a regular basis. In
fact, the work area should be cleaned at the end of every shift. There should
be nothing missing or out of place. All tools and materials should be ac-
counted for.
The work site, required tools, equipment, and materials are clean, defect-free, and ready for use. Additionally, anything that changes the state of the so-called shined area becomes much more noticeable (e.g., a hydraulic leak on a piece of equipment that previously went unnoticed now stands out).

Standardize
Standard activity must be enforced. If the housekeeping activity does not be-
come institutionalized within the operation, the area will not stay clean. Regu-
lar, formal audits—with quantitative and qualitative expectations—need to be
conducted and the results posted. Responsibility, accountability, and authority
are critical for this to work.
Specifically, it is recommended that you document the work site layout, loca-
tion of tools, and equipment and materials. Then establish a plan and team as-
signments to maintain operational readiness.
Create a documented, graphic work site layout showing proper location and
amounts of required tools, equipment and materials, visual controls, and coding,
as well as required team member actions and assignments.

FIGURE 9.3 Work area after 5S implemented.


Sustain
Set expectations. Conduct walkthroughs and demonstrate the importance of sift-
ing, sorting, shining, and standardizing.
What gets measured gets done . . . but what gets reported gets done quicker.
Setting up a continuous system that emphasizes the other four S factors pro-
motes a continuously ready operational work site with excellent housekeeping.

Safe
In addition to the 5S we add Safe, or Safety. This refers to the safety of personnel, materials, and the environment. Even though these elements are inherent in the activities of the first five elements of 6S, it is important, for emphasis, to add a focus on safety through inspection, measurement, evaluation, and reporting of safety metrics.
After the 6S program is fully implemented and sustained you can expect a
significant improvement in the work environment, as indicated in Figure 9.3.

THE SEVEN FORMS OF WASTE

The next step is to understand the seven forms of waste as they apply to your
business process. This can also be accomplished through a combination of the
process walkthrough and the data analysis that follows. The seven forms of
waste in business are:

1. Overproduction
2. Waiting
3. Transport
4. Inappropriate processing
5. Unnecessary inventory
6. Unnecessary motion
7. Defects

Each form of waste is described in detail.

Overproduction
Activities often are performed that add no value to the product or for which the customer is not willing to pay.
Overproduction occurs when operators make parts even though they are not
immediately needed (either by the next operation or as completed units, which
causes excess inventory). It is a key waste in manufacturing environments for
parts to stack up. But production of parts is not the only type of overproduction
that can occur.
Activities often are performed that add no value or for which the customer is
not willing to pay. In maintenance, this waste translates to performing preven-
tive and predictive maintenance at intervals that are in excess of what is truly
needed. Thus we have an overproduction of maintenance work. Unnecessary
maintenance is a form of overproduction. Unnecessary preventive maintenance
(PM) is 100 percent wasteful.
Overproduction is one of the most common and often tolerated forms of
waste, even though it is one of the most destructive. It prevents a smooth flow
of goods or services, promotes inefficiency, and inhibits quality by creating ex-
cessive lead and storage times. Defects take longer to detect, storage may dete-
riorate the product (e.g., damage or shortened shelf life), and communications
are delayed.
The defenders of excess production claim that it is wasteful to allow expen-
sive equipment and space to stand idle while waiting for customer orders: better
to use the resources even if it means keeping them in a warehouse for a period of
time. Actually, overproduction is a reaction to the waste in a process that must
generate still more waste in order to appear efficient.
Bonus systems that reward production without regard to demand and push
unwanted goods into inventory are the primary culprits of overproduction.
The goal of Lean enterprise is to produce only what is required, when it is
needed, and with zero defects. Improvements in this waste component will
require the identification and elimination of the root causes of unexpected
demand changes. This is the time to implement kanban, where customer
demand pulls the process. Kanban is a Japanese word for ‘‘signboard’’; historically,
kanbans were reorder cards or other methods of triggering the pull system based on actual use
of material. Kanbans are attached to the actual product at the point of use. Kan-
ban cards have information about the parts (name, part number, quantity, source,
destination, etc.), but carts, boxes, and electronic signals can also be used.
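
Since the card's contents are just a handful of fields, they map naturally onto a
simple record. The Python sketch below is purely illustrative; the class and field
names are our own, not a standard.

    from dataclasses import dataclass

    @dataclass
    class KanbanCard:
        """Illustrative kanban card record; field names are assumptions."""
        part_name: str
        part_number: str
        quantity: int       # container or lot quantity the card authorizes
        source: str         # supplying work center or supplier
        destination: str    # consuming work center (point of use)

    # Example: a card authorizing replenishment of 50 brazing kits.
    card = KanbanCard("Brazing kit", "HES737-K1", 50, "Cleaning", "Assembly")
    print(card)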

Waiting
When time is being used ineffectively, waiting occurs. Any time in which goods
are not being moved or worked is waste. This includes personnel who are idle
for any of a myriad of reasons. The ideal situation is a continual, efficient move-
ment of materials through the production process. Waiting time that cannot be
eliminated can be used for maintenance, training, or Six Sigma improvements.
The critical issue is that wasted time be identified, eliminated, reduced, or made
productive.
This waste can be reduced by eliminating the situations where materials, ser-
vices, and information are not moving or having value added. This will require
in-depth identification and evaluation of the actual processes rather than the
planned or intended ones.

Transport
The movement of goods is known as transport. In practical terms, it cannot
usually be eliminated, but it should be minimized. Excessive movement and
handling increase the likelihood of damage, increase communication require-
ments, and add time to corrective actions. Each time a part or product is moved,
it requires some form of tracking mechanism to identify its new location. So-
phisticated tracking systems are expensive, and basic ones increase the likeli-
hood of the part never being retrieved.
Transportation into, out of, or around the production facility cannot be elimi-
nated completely, but it can be reduced in the following ways:

 Minimize the distances between operational sites along the supply chain
internal and external to the production facility.
 Maximize the efficiency of transportation vehicles, equipment, and
techniques.
 As new processes are introduced or existing processes are modified, make
transportation flow a major priority in their layouts. Only transportation
that is essential and part of the overall flow should be included.

Inappropriate Processing
We all do this: We let processes build on themselves and become overly compli-
cated and inappropriate.
Inappropriate processing occurs when excessively complex operations or
equipment have been installed where simpler solutions were more appropriate.
An example would be a single high-performance, high-capital processor that
exceeds the capacity of upstream production. Maintenance on the single unit
may stop the flow of goods and encourage excessive production at inappropriate
times. Ideal processing involves using the simplest equipment and operations
capable of fulfilling quality and rate requirements and installing them near pre-
ceding and subsequent steps to minimize movement.
Machinery that has insufficient or excess capacity or capability is in-
appropriate and should be replaced. Each operation, including rework, should
be performed by a machine of optimum capacity capable of producing defect-
free products.

Unnecessary Inventory
Unnecessary inventory incurs carrying costs and, worse, hides other supply chain
problems. Keeping inventories minimal is the best way to identify the areas that
need improvement. Problems from poor supplier
communications, late or erratic deliveries, and production bottlenecks that have
been hidden will become visible and demand permanent solutions. There are
both direct and indirect costs associated with maintaining unnecessary invento-
ries. The numerous costs of unnecessary inventory include those listed in
Table 9.3.
This waste is dealt with by reducing inventories using workload balancing
and just-in-time (JIT) operations. Reducing inventory will also reduce lead
times, decrease problem identification times, free space, and release capital.

Unnecessary Movements
Each time an employee must unnecessarily bend, stretch, or reach to complete a
process step, it creates waste. In addition to the very real excess labor costs,
unnecessary movement increases fatigue and leads to quality problems. Less-
than-optimum location of tools, materials, spare parts, and product in relation to
the operator creates wasted motion. Unnecessary movement is the result of poor
ergonomics.
Ergonomics is defined by the American Heritage dictionary as ‘‘the applied
science of equipment design, as for the workplace, intended to maximize pro-
ductivity by reducing operator fatigue and discomfort.’’

TABLE 9.3 Costs of Unnecessary Inventory

Direct Costs            Indirect Costs
Handling                Late deliveries
Transporting            Rejected supplies and parts
Damage                  Rework
Obsolescence            Inefficient suppliers
Carrying costs          Incorrect materials
Floor space/storage     Production inefficiencies
Increased utilities
Capital investments

Improving production ergonomics will improve both personal safety and out-
put. This waste is addressed by ergonomics, safety, and mistake-proofing (i.e.,
eliminating unnecessary motion, eliminating hazards, reducing fatigue, and re-
ducing opportunities for mistakes).

Defects
Defects are direct costs, and they are the most prominent example of waste. Bill
Smith at Motorola and Taiichi Ohno at Toyota were pioneers in identifying the
exponential damage and costs created by defects that are allowed to be repeated
or passed along the production process. Rework, unrecoverable resources, and
customer ill will are only a few of the costs generated by defects. All defects
should become the target for variability reduction initiatives until they are com-
pletely eliminated. The intent is to continuously reduce the level of:

 Product defects that escape to the customer


 Rework defects that require resources to rectify
 Scrap defects that are lost to production
 Service defects that slow delivery, reduce reliability, and provide in-
adequate or inaccurate information for the product

Strategies for improving the effectiveness of the operations are normally
accomplished through variability reduction improvement projects. This waste
provides an opportunity to integrate Lean enterprise and variability reduction
activities.
Now that we have applied the 6S program and understand how the seven
forms of waste affect our process, we can begin to analyze and improve
the process. This step-by-step function builds upon our value stream map (see
Figure 9.4) and analyzes data. You should recognize this value stream map from
the define and measure chapters. In our continuing process, we now analyze and
measure our business processes using several analyze and improve tools previ-
ously discussed.

TAKT TIME

Lean production uses takt time as the rate at which completed products must be
finished. Takt time balances the workload of various resources and identifies
bottlenecks. It sets the ‘‘beat’’ of the organization in sync with customer de-
mand. In German, Takt means ‘‘beat’’ or ‘‘measure,’’ the rhythm an orchestra
conductor's baton sets to regulate the speed, beat, or timing at which musicians
play. Think of takt time as ‘‘beat time,’’ ‘‘rate time,’’ or ‘‘heartbeat.’’
FIGURE 9.4 Value stream map.



Takt time is calculated by dividing the available working time (AWT), which is
the effective work time available, by customer demand (a projected amount):

Takt time = available working time / customer demand

Takt time is expressed as ‘‘time per piece,’’ indicating that customers are
buying a product once every so many seconds. It is not expressed as ‘‘pieces per
time.’’
For available working time, minutes are used. For example, if the typical
workday is 8 hours, and if lunch and breaks take up an hour per day, then AWT
will be 7 hours × 60 minutes = 420 minutes per day.
To calculate the takt time for a process where the customer demand is pro-
jected to be 35 pieces per day, proceed as follows:

Takt time = available working time / customer demand
          = (420 min/day) / (35 pieces/day)
          = 12 min/piece

The required takt time to meet the projected customer demand of 35 pieces
per day is 12 minutes per piece.
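
As a quick check of the arithmetic, the takt time formula can be written as a
one-line function. This is only a sketch of the calculation described above; the
function and parameter names are our own.

    def takt_time(available_minutes_per_day: float, demand_units_per_day: float) -> float:
        """Takt time = available working time / customer demand (minutes per piece)."""
        return available_minutes_per_day / demand_units_per_day

    # An 8-hour day less 1 hour of lunch and breaks leaves 420 available minutes.
    awt = (8 - 1) * 60
    print(takt_time(awt, 35))   # 12.0 minutes per piece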
Takt time is the key to synchronizing all process operations. When all pro-
cesses run at takt time, unevenness and overburden are eliminated. When takt
time and cycle time are in balance, waste is eliminated. Takt time and cycle time
disparities identify bottlenecks or excess processing. However, takt time and
cycle time are not the same thing. Cycle time is the time it takes to complete
one task and may be less than, more than, or equal to takt time.

CYCLE TIME

Cycle time (‘‘order-to-delivery cycle’’) is the total time from the beginning to the
end of the process, as defined by you and your customer. (See Figure 9.5.) Cycle
time includes process time, during which a unit is acted upon to bring it closer
to an output, and delay time, during which a unit of work is waiting for the next
action.

ROUTING ANALYSIS

Routing analysis provides an assessment of work-flow patterns and cycle time


in each process work center and work activity you have mapped. It is based
upon the assessment of linear process flow analysis, process work content anal-
ysis, and cycle time.

FIGURE 9.5 Order-to-delivery cycle.

First show the process sequence (work-flow patterns) for each product
type. Group together products with the same process routes and distances and
analyze the mix of process routes. You are seeking value-added versus non-
value-added activities. An example of routing analysis is demonstrated in
Figure 9.6.

SPAGHETTI DIAGRAM

A useful tool for identifying and eliminating waste is what we refer to as a


‘‘spaghetti diagram.’’ Spaghetti diagrams show the original complexity of a
work flow with its many starts, stops, jumps, and backtracks. The challenge
is to untangle the strands of spaghetti. Spaghetti diagrams map out the facili-
ty layout with movement of product during processing. They record the

FIGURE 9.6 Routing analysis example.



FIGURE 9.7 Spaghetti diagram.

distance and direction each product is moved. Figure 9.7 demonstrates a spa-
ghetti diagram.

WORK CONTENT ANALYSIS

Building upon the process analysis, we now move to evaluating process cycle and
total time. This is accomplished by assessing setup time, machine time, and labor
time for each process element. This is baseline information that can and will be
used in the future for different calculations. This assessment begins with gather-
ing the data, which can be done manually or by using the data available from a
material requirements planning (MRP) system. If accomplished manually, the da-
ta will be gathered on a process time observation worksheet. The following are
examples of these worksheets:

Setup time. This worksheet is used to establish the average setup time
for each process step for the HES737. (See Figure 9.8.) For this
product, setup is accomplished daily. Other products may require setup
more or less frequently. This setup time is not the same as the line
turnover time for changes of casts and dies, which is addressed later in
our discussions.
Labor time. This worksheet is used to establish the direct labor times used in
each process step for each product produced. (See Figure 9.9.)
Machine time. In this final worksheet, machine time is collected. (See
Figure 9.10.) Notice that in some cases machine time is the same as labor
time and in other cases it is significantly different.

FIGURE 9.8 Setup time.

This data is collected directly from the production line during operations.
Different Lean team members at different times observe the operation and col-
lect the time data associated with the process. Ten observations should be suffi-
cient to establish the average times.
The alternative method to collecting this data is to use the information from
the MRP program if the organization has one. Caution should be exercised

FIGURE 9.9 Labor time.



FIGURE 9.10 Machine time.

when using this data, and it should be verified from separate records such as
labor costs and so forth.
Now that we have this information, we can use it to summarize the overall
times required for production and estimate the average time required to pro-
duce specific quantities of products. The process summary form can then be
used for planning and for comparison to the takt time, the time required to
meet the customer's needs. Figure 9.11 is an example of the process summary
form.
This basic information can then be used for several types of analysis. The
total times required for production of any quantity can then be calculated as
indicated in the following example with a weekly inherent capacity to produce
40 HES737 heat exchangers.

FIGURE 9.11 Work content analysis example.



Setup time      112.70
Labor time      400.30
Machine time    307.40
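
Once the per-unit setup, labor, and machine times are known, the totals for a
planned quantity are simple multiplications and sums. The following sketch
illustrates the roll-up with invented numbers; it is not the HES737 data, and the
once-per-day setup assumption is ours.

    # Hypothetical per-unit times in minutes by work center: (setup, labor, machine).
    work_centers = {
        "Inspection": (5.0, 2.0, 1.0),
        "Cleaning":   (10.0, 3.0, 4.0),
        "Assembly":   (8.0, 6.0, 2.0),
    }
    quantity = 40   # planned weekly quantity
    days = 5        # working days in the week

    # In this sketch setup is incurred once per day at each work center;
    # labor and machine times scale with the quantity produced.
    total_setup   = sum(s for s, _, _ in work_centers.values()) * days
    total_labor   = sum(l for _, l, _ in work_centers.values()) * quantity
    total_machine = sum(m for _, _, m in work_centers.values()) * quantity
    print(total_setup, total_labor, total_machine)   # 115.0 440.0 280.0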

Additionally, various charts and graphs can be used to evaluate this data, such
as the bar chart in Figure 9.12, in which it becomes obvious that the most labor-
intensive work center is Brazing. However, remember that by its nature and def-
inition, rework is a non-value-added process function. If you were to focus your
Lean efforts on this process, which work center(s) would you address first?
This data is focused on the efficiency of the process, that is, the times and
costs associated with the overall process and process work centers. This alone is
insufficient to make decisions concerning Lean implementation. We must also
understand the effectiveness of the process. In addition to rolled throughput
yield (RTY), discussed previously, the following measures are used in this
assessment:

 Process availability
 Actual yield
 Operational yield

FIGURE 9.12 Work content analysis example.



PROCESS AVAILABILITY ANALYSIS

Process availability, sometimes called operational availability (Ao ), is the time a


system or process is up and running. It is the probability that a process is availa-
ble to perform when called upon. Availability is calculated as the ratio of operat-
ing time over operating time plus downtime. There are several methods of
calculating this ratio (reliability engineering, systems engineering, and logistics).
For our purposes, we are simply looking for the availability of our production
system work centers, answering the question: What is the average operating
time and what is the average downtime for the work center? This can be calcu-
lated as follows:

Ao = MTBM / (MTBM + MDT)

where

MDT = mean downtime
MTBM = mean time between maintenance

Availability should be calculated for each work center and for the overall
process. The process of acquiring and calculating the MTBM and MDT for each
work center will provide valuable information for you to focus your Lean and
Six Sigma improvement efforts. To calculate Ao , we use the example of Inspec-
tion and test from our continuing example. The second-level process map and
process worksheet indicate that there are two pieces of equipment used in the
process. (See Figure 9.13.)
We calculate the availability for the Low Pressure Test work center 5.1 and
the High Pressure Test work center 5.3 in Figure 9.13 as follows.

Low pressure test: Using MTBM as 48 hours and MDT as 3 hours,


48
Ao ¼ ¼ 0:94
48 þ 3
High pressure test: Using MTBM as 30 hours and MDT as 8 hours,
30
Ao ¼ ¼ 0:79
30 þ 8
The probability that both test systems are available is simply the probability
(Pa )(Pb ) ¼ availability. In this case, the availability of test equipment in work
center 5 is 0.74. This information can then be added to our continuing example:

System availability ¼ ðPa ÞðPb Þ


¼ ð0:94Þð0:79Þ
¼ 0:74

FIGURE 9.13 Process availability example.

The probability that both test systems are available is 0.74. The Ao for each
process step can be calculated as indicated in Figure 9.14.
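
The availability arithmetic is easy to script when many work centers must be
evaluated. A minimal sketch, using the MTBM and MDT figures from the example
above; the function name is our own.

    def availability(mtbm_hours: float, mdt_hours: float) -> float:
        """Operational availability: Ao = MTBM / (MTBM + MDT)."""
        return mtbm_hours / (mtbm_hours + mdt_hours)

    low_pressure  = availability(48, 3)   # about 0.94
    high_pressure = availability(30, 8)   # about 0.79

    # Both test sets must be up for work center 5 to be fully available.
    work_center_5 = low_pressure * high_pressure
    print(round(low_pressure, 2), round(high_pressure, 2), round(work_center_5, 2))
    # 0.94 0.79 0.74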

PROCESS YIELD MEASURES

Process yield is the traditional way that yield has been calculated (units out/
units in). The process yield is calculated by subtracting the total number of de-
fects from the total number of opportunities, dividing by the total number of
opportunities, and finally multiplying the result by 100.

FIGURE 9.14 Operational availability added to work content analysis.



Rolled throughput yield (RTY) is the more accurate Lean Six Sigma method
of calculating yield because it takes into account the ‘‘hidden factory’’; it can
be calculated as e^(-TDPU), where TDPU is the total defects per unit. Used to
quantify processes, rolled throughput yield is the probability that a product will
pass through all the process steps defect-free and thus gives an accurate view of
how efficient a process is.
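
A minimal sketch of the two yield calculations just described, classic process
yield and rolled throughput yield, with the e^(-TDPU) approximation shown for
comparison; the step yields and defect counts are illustrative only.

    import math

    def process_yield(opportunities: int, defects: int) -> float:
        """Classic yield: (opportunities - defects) / opportunities * 100 (percent)."""
        return (opportunities - defects) / opportunities * 100

    def rolled_throughput_yield(step_yields: list[float]) -> float:
        """RTY: probability a unit passes every process step defect-free."""
        rty = 1.0
        for y in step_yields:
            rty *= y
        return rty

    print(process_yield(1000, 40))                      # 96.0 percent
    print(rolled_throughput_yield([0.98, 0.95, 0.99]))  # about 0.92

    # "Hidden factory" approximation: RTY is roughly e^(-TDPU),
    # where TDPU is the total defects per unit across the process.
    print(math.exp(-0.08))                              # about 0.923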
Theoretical yield is the predicted yield of a proposed process. It is predicted
from engineering analysis, not data. For known amounts of reactants, theoretical
amounts of products can be calculated in a process. The calculated amounts of
products are called theoretical yield. In these calculations, the limiting reactant
is the limiting factor for the theoretical yields of all products. Theoretical yield
is usually stated as worst case, best case, and most likely case.
Inherent yield is the designed yield, what can be expected if all conditions go
perfectly, and this can be changed only by reengineering the process.
Achieved yield is the estimated yield based on low rate production (pilot run)
data.
Operational yield is the yield of a process under normal operating conditions,
calculated using:

Operational yield = RTY × Ao

The inherent capacity of low-pressure test is 60 systems per day, that is, five
systems per hour for six hours per day on each of two test sets. (Remember to
deduct lunch, breaks, and setup times.) With the low-pressure test Ao of 0.94
calculated earlier, actual product yield is therefore:

Actual daily yield = 60 × 0.94 = 56.40
Actual weekly yield = 300 × 0.94 = 282
Quantity produced is the actual number of output units, calculated by the
number of input units times the operational yield. Figure 9.15 is an example of
operational yield.
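
Putting the effectiveness measures together, expected output follows from the
inherent capacity and Ao, and operational yield combines RTY with Ao. A short
sketch using the figures above; the RTY value of 0.92 is illustrative, not from
the text.

    def operational_yield(rty: float, ao: float) -> float:
        """Operational yield = RTY x Ao."""
        return rty * ao

    def actual_output(inherent_capacity: float, ao: float) -> float:
        """Expected output once downtime (Ao) is accounted for."""
        return inherent_capacity * ao

    print(actual_output(60, 0.94))          # 56.4 units per day
    print(actual_output(300, 0.94))         # 282.0 units per week
    print(operational_yield(0.92, 0.94))    # about 0.86 with an illustrative RTY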

FIGURE 9.15 Operational yield example.



FIGURE 9.16 Process yield analysis.


Applying all the information so far to the overall process at level 1, we can
establish the actual yield on a daily basis for the process work centers and the
overall process as indicated in Figure 9.16.
Based on our previous analysis, ask the following questions:

 Which of the work centers have excess capacity?


 Is there an associated cost with excess capacity?
 Is equipment downtime a significant problem?
 In which work centers?
 In work center 5, what is contributing to the low Ao?
 What is the work center with the highest labor cost?
 How would you approach a Lean process improvement for this work
center?
The bar chart in Figure 9.17 provides a graphical approach to answering
these questions.

CALCULATING CYCLE TIME

Cycle time is the actual time required to produce a part, assembly, or product. It
is the total time from the beginning to the end of your process, as defined by you
and your customer. Cycle time includes:

 Process time, during which a unit is acted upon to bring it closer to an


output
 Delay time, during which a unit of work is spent waiting for the next action

Cycle times can also apply to lots or batches, depending on the operation. It
is important to understand that cycle time, like many other factors we have

FIGURE 9.17 Capacity analysis graph.

evaluated, has two distinctly different elements: theoretical cycle time and op-
erational cycle time. Theoretical cycle time is calculated using the theoretical
yields, and operational cycle time is calculated using the operational yields. Cy-
cle time is calculated as follows:

Theoretical cycle time = time / input units

TCT WC5 = (480 min/shift) / (38 units/shift) = 12.63 minutes/unit

Operational cycle time = time / output units

OCT WC5 = (480 min/shift) / (25.5 units/shift) = 18.82 minutes/unit

We can now apply this to our continuing example in Figure 9.18.
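
Both cycle times reduce to a single division; a short sketch of the work center 5
numbers above, with our own function name:

    def cycle_time(available_minutes: float, units: float) -> float:
        """Minutes per unit for a shift: time / units."""
        return available_minutes / units

    tct_wc5 = cycle_time(480, 38)     # theoretical: divide by input units,  about 12.63
    oct_wc5 = cycle_time(480, 25.5)   # operational: divide by output units, about 18.82
    print(round(tct_wc5, 2), round(oct_wc5, 2))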


The next step is to compare theoretical and actual cycle times with the VOC
takt time. This comparison is demonstrated in Figure 9.19.
Takt time assessment results are shown in Figure 9.20. This graph can be
used to compare and analyze theoretical cycle times, operational cycle times,
and takt times.
Based on the cycle times and takt time assessments and all of our previous
analysis, ask the following questions:

FIGURE 9.18 TCT and OCT calculations.

 Which work units can meet the takt time requirements?


 What is one Lean strategy that we could use on work unit 4?
 Is equipment downtime a significant problem?
 In which work units?
 In work unit 5, what is the strategy to meet takt time?
 How would you approach a Lean process improvement for these work
units?

Now that we have analyzed our process we can move to the improve phase.

JUST-IN-TIME

This strategy refers to a body of practices that calls for goods to be produced as
close as possible to the time they are sold. The strategy also assumes that raw
materials are available within hours of their consumption or of the provision of
a service.
Just-in-time (JIT) is one of the pillars of the Toyota Production System
(which is virtually synonymous with Lean).

FIGURE 9.19 Comparison to takt time.



FIGURE 9.20 Graphing TCT, OCT, and takt time.

In pure manufacturing terms, JIT is a material requirement planning ap-


proach in which:

 Hardly any inventory of parts or raw materials is kept at the factory.


 Little to no incoming inspection of parts or raw material occurs.

Kanban (which is discussed later in this book) is the tool used to help imple-
ment just-in-time. The focus of all the JIT/kanban efforts is reduced inventory
and reduced WIP. It is crucial to their success that these efforts be understood in
light of your specific work environment. The effects of disruptions to your pro-
cess flow and product can be disastrous. It is important to keep in mind some of
the constraints of JIT and their just-in-case solutions.

Limited Applicability
There are several types of industry to which JIT could apply. Here we focus on
three of them.

Repetitive Manufacturing
Distinguished by production of discrete units, where products flow continu-
ously along a direct route until they are complete, these industries have little
in-process inventory, and parts rarely stop moving. Examples include:

 Automotive
 Consumer packaged goods
 Furniture manufacturers
 Medical devices

Challenges for this industry type include:

 Poor quality due to process variations


 High scrap rates
 Long setup and changeover times
 High batch sizes leading to poor inventory turns

Process Industries
These industries are distinguished by a production approach that has minimal
interruptions in actual processing in any one production run, or between produc-
tion runs of similar products. This approach produces multiple unique products
in relatively small batches flowing through different production operations
throughout the factory. Examples include:

 Food processing
 Pharmaceuticals
 Refineries, wineries, and so on

Challenges for this industry type include:

 A heavily government-regulated environment (GMP) increases documen-


tation requirements.
 Complexity of the operation makes training critical and time-consuming.
 Time-to-market can mean the difference between profit and loss.
 There is increasing price pressure from customers and governments.
 There is increasing government intervention and regulatory requirements.
 There is increasing sales and marketing competition.
 Quality deficiencies can be a life-and-death matter and affect the viability
of the company. There can be no process variations.
 There is a need to improve productivity in order to sustain growth in increas-
ingly complex organizations that are achieving growth through acquisition.
 Complex raw material needs make supplier and supply-chain management
critical.

Job Shops
Job shop manufacturing features a production process in which the manufacturer
receives all or most engineering specifications from the customer and utilizes
intermittent production methods due to limited customer demand. An example is


custom metal products for construction industries.
Challenges for this industry include:

 High direct labor cost


 High variation in cycle times leading to customer delivery delays
 High cost overruns due to poor job estimating
 High quality variation due to design inconsistencies
 High raw material costs due to specialized low-volume needs
 Significant customer and overseas manufacturer cost pressure
 Heavily impacted by steel and other metals costs and tariffs

The implementation of a JIT/kanban system is essentially restricted to repeti-


tive industries. It is not appropriate to implement a JIT logistical system either
in continuous process environments or in plants that do not produce relatively
large quantities of standardized products.
In process industries, finished goods are produced in a continuous process.
These industries are typically very capital-intensive, with the system specifi-
cally designed to produce large batches of a product. For reasons of process-
ing efficiency, this type of operation simply cannot be switched continuously
back and forth from one product to another. In addition, the amount of work-
in-process (WIP) inventory in the system is generally set by the constraints
and capacities of the processes.
Job shops will have trouble attaining the JIT objectives of small lot sizes and
reduced levels of inventory. The basic explanation of why job shop operations
have difficulty implementing JIT logistical systems is simple. In order to keep a
smooth product flow through the process, just-in-case inventory must be provided
at every workstation in the form of inventory buffers. This requires a measured
flow of product and a quantity normally not present in a job shop that does small
quantities without repeat orders.
In a JIT/kanban environment, workstations do not work independently and at
their own pace. Each station is a link in a chain. Whenever one workstation experi-
ences a disruption significant enough to cause a work stoppage—such as may be
caused by unavailability of required materials, poor-quality materials, or a machine
malfunction or breakdown—the entire logistical system product flow is in jeopardy.

Implementation Requirements
In order to implement a JIT/kanban system successfully, a number of significant
changes must occur in manufacturing and management cultures. The length of
time necessary to implement a working JIT system is generally several years.
This can be a trying time, inasmuch as the implementation process is disruptive
to normal operations and requires both financial staying power and great pa-
tience on the part of management.

JIT is a very different philosophy from that found in the traditional manufac-
turing environment, and it requires increased reliance on a more responsible,
better-trained, and better-educated workforce. Ultimately, in order to implement
a JIT system successfully, quality problems must be virtually eliminated, setup
times must be drastically cut throughout the operation, and other sources of pro-
duction fluctuations must be significantly reduced across the board. In a working
JIT system, required resources (material, equipment, and people) are available:

 At the required locations


 In the quantity needed
 At the time required

JIT also surfaces issues that impact production; it:

 Tests efficiency of material, equipment, and labor


 Can stop system/production line
 Stops excess inventory
 Demands correction/improvement

KANBAN

Two types of production systems exist: the push system and the pull system.
In a push system, production schedules are developed for each work area
based on sales forecasts. Components are produced in the work areas and
then pushed downstream. This means that the more efficient operations may
bury downstream operations with product. And when this happens, material
flow is interrupted, workstations become disconnected, and production lead
times increase.
In contrast, a pull system controls the flow of work through the factory by
releasing materials into production only when they are needed. Production is
always triggered by demand from the next work center. The system that signals
this demand is known as kanban.
In Japanese, kanban literally means ‘‘signboard’’; in Lean usage it is usually
translated as ‘‘signal card.’’ (See Figure 9.21.)
The kanban technique is similar to the process used by companies that deliver
bottled water. The driver, when arriving at a designated delivery site, immediate-
ly sees how many empty water bottles are left for removal. Two empty containers
authorize that driver to deliver two full bottles. The empty bottles signify exactly
what needs to be replenished.
Kanban is a way of controlling inventory. It works as a signal to replace what
has been used. If the kanban authorization is present, action is taken. If it is not,
no action is taken. As an inventory control mechanism, it provides an ideal way
of exposing problems or opportunities for improvement. It is an essential

FIGURE 9.21 Kanban card.

element of Lean manufacturing. The sum of all kanbans in a process represents


the total inventory, as indicated in Figure 9.22.
Kanban is the minimum level at which we know we can operate the factory.
The objective for us is to operate with fewer kanbans (inventory). If we don’t
know the minimum, we find out by carefully removing a few kanbans (reduc-
ing inventory). If we know the problem, we fix it and then remove a few kan-
bans to find the next constraint. The goal is to perfectly balance inventory with
process flow so there is a minimum of inventory, but sufficient to keep pro-
cesses working effectively and efficiently. This is a continuous process. The
goal is clearly to know how much, where, and when. Reorder points simply
tell us at what point to reorder, based on required lead times and economic
order quantities.

The Process of Kanban


As indicated earlier, JIT encourages the use of visual controls. Therefore, it
works best if you can see when your customer requires your completed product.

FIGURE 9.22 Kanban.

In most cases, you cannot see when a product is moved to a customer, so you
must make a visible kanban. Floor or work surfaces make excellent kanbans
when inventory is stored within sight. One of the best types of kanban is a re-
turnable container or some other type of returnable material-handling device,
such as a pallet or a cart. Another alternative is a chit that is returned once the
inventory has been consumed. In all instances, the empty kanban authorizes
replenishment.
When a chit is used to represent a kanban, there are a number of approaches.
Cards of different colors can be used, one authorizing production and others
authorizing movement. Alternatively, a single card may serve to authorize all
operations. No matter how many cards are used, the same rules of kanban apply.
Every piece of material must be authorized by a kanban.
Another way to handle a kanban is to use an electronic signal, e-mail, or a fax
kanban. Caution: Experience has taught us that the customer/supplier relation-
ship is often lost when electronic communication is adopted. JIT is a very visual
system, and that vision can be lost with electronic communication, particularly
when the electronic authorization resembles just another piece of e-mail. Al-
though electronic transfer of information is necessary in order to communicate
rapidly across great distances, care must be taken to avoid loss of the hands-on
visual control feature of just-in-time accountability. That accountability is easily
lost in e-mail. This does not mean that electronic kanbans are not to be used, but
that care needs to be exercised so that this type of kanban system does not get
lost or overlooked.
The use of work centers, JIT, and kanban seems to focus on internal and
external supplier communications. In conventional environments, an automat-
ed report initiates work activity. If something is questionable, we talk to plan-
ning or scheduling or ignore it. Obviously, we cannot communicate with the
computer. With JIT, we receive returnable kanbans directly from our customer.
The customer receives product and services directly from us. It is up to the
customer/supplier (internal and external) to communicate directly to learn how
to reduce the kanbans in the pipeline. This brings an exceptional amount of
accountability to the system.
We often perceive and act as if the computer or computer report is the cus-
tomer. The focus is on the report. With the kanban system, we physically link
the supplier and customer together, causing direct communications. The focus
is on the voice of the customer.
Technically, electronic kanbans should be an excellent alternative. We must
remember, however, that we are dealing with people. Experience has shown that
if we eliminate the personal contact, there is a danger of losing the customer/
supplier focus. The first question you must ask yourself is: ‘‘Are electronic kan-
bans really necessary, or are they just technologically appealing?’’ If they are
necessary, how can you minimize the side effects of this approach? Whenever
possible, it is preferable to use visible kanban locations. Returnable containers
are probably a close second choice, followed by cards. There is something in-
herently unifying about being able to touch, see, and know when a customer has
consumed a product. It stimulates communication and makes us focus on the
customer rather than on a computer report.

Establishing Maximum Requirements


Kanbans facilitate our use of visual controls by enabling us to establish a visible
ceiling on the amount of work authorized. We can visually assess the amount of
time and product by which one work center is allowed to get ahead of another.
When each product type flowing through a series of production processes re-
quires a relatively consistent amount of time, the kanban ceiling can be ex-
pressed in terms of a maximum number of pieces. When products vary in work
content, however, we must establish a kanban unit of measure that represents a
consistent amount of time.
A kanban could also represent an entire operation, allowing us to limit the
number of jobs one work center can complete in advance of customer need. In
some operations, for example, the amount of work does not vary greatly accord-
ing to the size of the batch, and the number of ‘‘jobs’’ in queue may serve as a
good kanban ceiling. But where jobs vary considerably in work content, the
number of jobs cannot be used to establish a consistent kanban ceiling.
Whatever your particular circumstances, the objective when establishing a
kanban is to keep it simple, keep it visual, and keep it consistent.

In most cases, kanban is a highly efficient process, but there are some special
considerations that must be taken into account, especially when it comes to
sporadic processes. If we have products that are needed just twice a year, we
can still use kanban: We would build the product; the card would come back;
and we would be authorized to build another. A problem arises when that newly
authorized product ends up in the warehouse for the next six months. One solu-
tion would be to have someone intercept the kanban card on its way back, and
then hold it for five months before returning it to the pipeline. A better approach
would be to use the demand cycle in conjunction with the takt time for the kan-
ban release to be authorized. In this manner, the product is built and delivered
just in time and the waste of storing it is eliminated. Only the products that are
‘‘shippable’’ would move through the pipeline.
In all businesses, seasonal demands and unexpected surges occur. The
worst way to handle surges is to carry enough inventory to cover all possible
situations. To solve this problem, we could use a combination of alternatives:
kanbans for more flexible capacity and a requirement that abnormal orders
receive longer lead times. Some inventory will be necessary; however, it must
be minimized.

Advantages of Kanban
Some of the advantages of a kanban system are as follows:

 Eliminates computer data entry errors (entering incorrect part numbers and
incorrect quantities)
 Eliminates errors with unit-of-measure conversions
 Offers a visible signal indicating whether material has been ordered and
when
 Offers mistake-proof system to limit quantity ordered
 Supports point-of-use inventory
 Eliminates stockouts
 Reduces inventory
 Offers clear and complete information to suppliers
 Links inventory directly to demand

One of the benefits provided by kanban is that it identifies problems. And if


you make a problem visible, somebody will solve it. You achieve results when
you identify a problem that you have to solve. Real improvements come from
the interaction and synergy of people. It is when customer, supplier, manage-
ment, and employee communicate about genuine opportunities that change
takes place. Just-in-time is the vehicle of communication. Ability is randomly
distributed throughout an organization; it exists everywhere. Once you recog-
nize this resource, improvement can begin.

Rules of Kanban
There are six hard-and-fast rules that must be used with kanban to make it an
effective and efficient tool. These rules are simple, but important. They are as
follows:

1. Buy or request material only when you have a known requirement.


2. First in, first out . . . always, no exceptions.
3. Quality is first.
4. Reduce WIP inventory.
5. Pull material/products through the process.
6. Allow only required WIP inventory at a work center.

Each of these rules, and the way in which they relate to kanban, is now discuss-
ed in more detail.

Have a Known Requirement


If you determine that a ceiling of 750 pieces in the production pipeline is needed,
then that quantity should never be exceeded. It is important to maintain this ceil-
ing. Companies can become very sloppy in how they use kanbans. If they want a
few extras, they make a few extras. That is very costly. Like work orders, kan-
bans provide a control system. A company should be using either work orders or
kanbans correctly. To get rid of work orders and fail to follow the rules of kanban
is to lose accountability. Never exceed the established ceiling.

First In, First Out


The first authorization that comes back, or the first set to add up to a lot size, is
the first one that generates the signal to begin the process. Where lot sizing still
occurs, production is authorized to begin whenever the lot size is reached. Lots
for various products should be processed in the sequence in which they are
reached. Where lots for a family of parts are run periodically, all available kan-
bans for the group should be collected and run consistently as scheduled.
If you are incapable of manufacturing the highest-priority kanban because
of, for example, material shortages, gaps in cross-training, or equipment prob-
lems, then you should select the first job in the queue that you are capable of
completing.

Quality Is First
The objective is to make high-quality, cost-effective products in a responsive
manner. Never pass on a known defect. If a product is known to be defective, it
will not move on, and until it is repaired or scrapped it occupies an authorized
kanban location in the pipeline. It is part of the work inventory. If a rework loca-
tion exists outside the normal production pipeline, the product can be moved to
that location, but the number of items in rework should also be under kanban
control.

Reduce WIP Inventory


While continuing to make shipments, reduce the total number of kanbans in the
pipeline one at a time, in order, and isolate anything that is constraining a faster
throughput. Each time a constraint is identified and resolved, repeat the process,
gradually achieving higher velocities and an improved ability to respond.
The reduction process is within our own area of control, but part of this pro-
cess must also involve the supplier and the customer. Our interactions have an
impact on the amount of inventory in the pipeline. Reducing the inventory will
expose constraints that require communications and changes on the part of both
the customer and the supplier.

Pull Material/Products through the Process


It generally works out best if the customer pulls material from the supplying
operation. The result of this action is very visual, and what needs to be replaced
will be immediately visible. However, this outbound-queue control may not
work well if a large number of operations or machines are feeding the same
operation or piece of equipment. In this case, it may be better to put an in-
bound-queue control in front of the bottleneck area.
When the input queue is nearly full, a yellow caution light will signal the
feeding operations to slow their rate of input. These operations might then work
on the items in their own queues that do not require the bottleneck operation. If
that is not possible, they may have to shift their resources.
This kanban process applies equally well to both manufacturing products and
administrative processes. However, we may be unable to control the supplier
shipment. When we reach a kanban ceiling, we do not shut off the incoming
product, but rather shift our resources to the operation that is falling behind.

Use Required WIP Inventory Only


The work center is not a general-purpose storage location. Set a time limit on
the amount of WIP allowed at a given station—ideally, one hour’s work. For
various reasons, it may be impractical for workers to go to a storage area for
every single hour’s work. Never have more than one day’s work at the work
center. WIP inventory should be moved at least every hour. We want smaller
batches to level the load throughout the process. Forcing hourly movement will
minimize the waves of work going through the process and eliminate the WIP
problem. Types of kanbans are shown in Table 9.4.

Kanban Examples
Kanbans can come in all shapes, sizes, and materials, ranging from pigeonholes
to cards to reusable containers to squares on the floor. Familiar examples of
kanban include:

 Empty containers
 Cards

TABLE 9.4 Types of Kanbans

Type        Description
Square      Marked, designated area to hold items
Signal      Used to signal production at the previous workstation
Material    Used to order material in advance of a process
Supplier    Rotates between the factory and suppliers

 Squares on the floor


 Returnable cart
 Supermarket kanban

Supermarkets are most often seen in product coming from suppliers. The
supermarket analogy comes from the way that product/parts are signaled for
resupply. Much like the replacement of inventory on a supermarket shelf, as a
part is used, it is restocked. Supermarkets are a collection of parts needed to
continue production. As product is produced, the parts used to build that product
are consumed. These parts are controlled by kanban signals, which cue the
upstream process to build more parts. As the raw materials are used by the up-
stream processes, they diminish the on-hand supply. When the reorder point is
reached, the raw material kanban signals the supplier to act and replenish
the raw materials to the appropriate level designated by the kanban. (See
Figure 9.23.)
Whatever your process, product, or service, the objectives when establishing
a kanban are to:

 Keep it simple
 Keep it visual
 Keep it consistent
 Control inventory

FIGURE 9.23 Supermarket kanban.



FIGURE 9.24 Mixed-model production example.

MIXED-MODEL PRODUCTION

Mixed-model production is a scheduling tool providing increased responsive-


ness and utilization of floor space. It supports production flexibility for the work
center.
For mixed-model production to be successful, production processes must be
relatively consistent from part to part or the processes must be able to rapidly
adapt to the different models presented for production. Furthermore, there can-
not be any significant variation in the process. Variation in the work content for
each operation must be minimal as well. Finally, a highly flexible workforce
must be in place.
As an example, consider the following scenario. Customer de-
mand requires:

Part 1: 3,500/week
Part 2: 500/week
Part 3: 1,000/week

In the typical production environment, the schedule may look like the one
shown in Figure 9.24.

A, B, C MATERIAL HANDLING

Varying parts are scheduled and controlled in different ways. For instance,
large, complex machined parts (such as parts weighing 100 pounds that use 30
hours of machine time) would be scheduled and controlled differently than a
small, inexpensive bracket. This means that parts are stratified according to
given criteria so that appropriate effort is spent on managing the part replenish-
ment process.
Parts are segregated along an A, B, C type of classification. This approach
differs from the typical 80/20 rule; the average part population falls along a
15/35/50 percentage split of volume (a short classification sketch follows the
list below).

‘‘A’’ parts (15% volume, 75% value). Parts that are expensive, more complex
to build, and often exhibit long lead times should be considered ‘‘A’’ parts.
These parts need to be scheduled with suppliers using transportation pipe-
line kanbans (especially with high-volume product), or directly through
MRP II (for low-volume product).
‘‘B’’ parts (35% volume, 20% value). Parts that are less complex, have
shorter and more predictable lead times, are less expensive, and are
small enough to be kitted should be considered ‘‘B’’ parts. These can
be replenished via kanbans and can possibly be built on demand. These
parts may be built and delivered in negotiated batch sizes or in predeter-
mined kits. Usage data should be used to establish the appropriate re-
order points, and this should be reflected in the kanban signal.
‘‘C’’ parts (50% volume, 5% value). Parts that experience low demand vol-
ume or are highly variable should be considered ‘‘C’’ parts. These parts are
usually replenished via MRP II or through nonrepetitive kanbans. The ma-
jority of parts (50 percent) are in the ‘‘C’’ category and can be managed
directly through a vendor-managed reorder point or kanban system.
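
One simple way to apply this stratification is to rank parts by annual usage value
and assign classes by cumulative share of value. The thresholds in the sketch below
(75 percent and 95 percent of cumulative value) mirror the 75/20/5 value split
described above, but they are assumptions on our part, not a fixed rule, and the
part values are invented.

    def abc_classify(parts: dict[str, float]) -> dict[str, str]:
        """Assign A/B/C classes by cumulative share of annual value (assumed thresholds)."""
        total = sum(parts.values())
        ranked = sorted(parts.items(), key=lambda kv: kv[1], reverse=True)
        classes, cumulative = {}, 0.0
        for name, value in ranked:
            cumulative += value
            share = cumulative / total
            classes[name] = "A" if share <= 0.75 else "B" if share <= 0.95 else "C"
        return classes

    # Illustrative annual usage value per part number.
    print(abc_classify({"P-100": 90000, "P-200": 25000, "P-300": 6000, "P-400": 900}))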

WORKABLE WORK

Workable work refers to those elements contained within the production process
that are necessary for work to begin on a product. Every production environ-
ment will have something that is specifically required in order to begin work:

 Material/parts
 Tooling
 Equipment
 Work instructions/specifications/checklists/routing
 Demand
 Skilled workers

Most MRP II systems plan and release work to the shop floor based on de-
mand information. Some check for component part availability before assembly
orders are launched, but that is normally where it stops. When work is released
to the floor without complete verification that it is workable, there
will invariably be delays. This is especially true when the lack of readiness is an
issue (missing parts, tooling out for repair, instruments out of calibration, essen-
tial work center members unavailable, etc.).
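
A workable-work gate can be expressed as a checklist evaluated before an order
is released. The sketch below only illustrates the idea; it is not a feature of any
particular MRP II system, and the element names are our own.

    REQUIRED_ELEMENTS = [
        "material", "tooling", "equipment",
        "work_instructions", "demand", "skilled_workers",
    ]

    def is_workable(order_status: dict) -> bool:
        """Release work to the floor only when every required element is ready."""
        return all(order_status.get(element, False) for element in REQUIRED_ELEMENTS)

    order = {"material": True, "tooling": False, "equipment": True,
             "work_instructions": True, "demand": True, "skilled_workers": True}
    print(is_workable(order))   # False -- tooling is out for repair, so hold the release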

WORKLOAD BALANCING

Cycle times for individual products and total times for producing product lots
are times derived directly from the existing process. Takt time, on the other
hand, is driven by the projected customer demand. The calculation for takt time
has the scheduled production time available as the numerator and the designed
daily production rate as the denominator.

Takt time = available working time / customer demand

Available work time is simply the effective work time available. Effective
work time for ‘‘on-line’’ labor is expressed in minutes. Since the typical work
day is eight hours, 480 minutes is the numerator we use (8 × 60 = 480). Let's
also say that the required takt time to meet customer demand of 35 units is
13.71 minutes for the overall process. This takt time can then be broken down
into takt times for each work center. When calculating takt time for work cen-
ters, remember that you must take into account machine downtime—mean time
between failure (MTBF), and mean time to repair (MTTR)—as well as scrap
rates and rework. (See Figure 9.25.)
Once a cell takt time has been determined, we can begin to design a balanced
cell. The operational elements (machine time, labor time, and setup time) of
each product are examined in relation to takt time.

FIGURE 9.25 Workload balancing, takt time.



Machine Time
Machine time is compared to takt time to determine whether the fixed cycle
time of any piece of equipment is greater than the takt time. If it is, action such
as the following must be taken:

 Changing the available time
 Off-loading work
 Reducing the cycle time
 Changing processes
 Adding equipment
 Splitting demand

If the operation remains greater than takt time, it will need to be balanced
with ‘‘in-process’’ kanban inventory, automation, autonomation, additional
work shifts, and the like.

Labor Time
Labor time is then compared to takt time to address the opportunities for auto-
mation, workload balance, and/or reducing the workforce.
The first opportunity, autonomation, means equipment does not need to be
watched in case something goes wrong. Autonomation equipment will automat-
ically shut off when an abnormality is discovered, thereby allowing the operator
to do other value-added work. This opportunity is invaluable for increasing pro-
ductivity and quality.
The second opportunity, workload balancing, has to do with examining the
individual work elements of each operation and determining whether they can
be reduced, shifted, resequenced, combined, or eliminated. This effort to bal-
ance the workload to takt time is a main enabler for achieving one-piece flow
and minimizing manufacturing lead times.
Once we know the cycle time for the process and we know the designed takt
time, we can take the known cycle time and divide it by the takt time to deter-
mine the maximum staffing requirements for the cell. For instance, the cycle
time from the preceding example was 5.0 minutes. If takt time for that process
were 2.5 minutes, then the required staffing would be two operators. Actual
head counts will vary with changes in required daily demand, which is why
cross-training and operator flexibility are so important in supporting one-piece
flow.
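
The staffing rule in the preceding paragraph is a ratio rounded up to whole
operators; a one-line sketch with our own function name:

    import math

    def required_operators(cell_cycle_time_min: float, takt_time_min: float) -> int:
        """Maximum staffing: cell cycle time divided by takt time, rounded up."""
        return math.ceil(cell_cycle_time_min / takt_time_min)

    print(required_operators(5.0, 2.5))   # 2 operators, as in the example above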

Setup Time
Setup times are almost always greater than takt time and need to be addressed as
part of the cell design process. By comparing setup time to takt time, you have a
greater appreciation of how far setups need to improve in order to create a flexi-
ble work environment. The initial stake in the ground is to plan on setting up
each high-volume product every day and then to schedule the product mix to
run accordingly. If this cannot be accomplished, then plan to run two to three
days’ worth at a time and hold the excess inventory until the customer or cus-
tomer cell asks for it (never allow this to extend beyond one week's
run). It will become very clear, very quickly, why setup reduction is so impor-
tant when the supplier cell has to physically hold the excess inventory until the
customer cell asks for it through a kanban. Once each of these three operational
elements is determined for each product, they are compared to the overall takt
time of the cell.
From this point, it is a matter of generating ideas and looking for cell design
solutions that will balance the cell workload for all parts and takt time. By re-
viewing the actual work elements and either improving the operations or shift-
ing the work content, the cell can become more balanced compared to the takt
time. This is accomplished much more easily in an assembly environment than
in a fabrication environment, but it can be done in both.

ONE-PIECE FLOW

One-piece flow contrasts with the traditional batch-and-queue approach, which
is characterized by:

 Functional layout
 Product routing
 Large batch manufacturing
 Material requirements planning (MRP)

When the operations are balanced to takt time, it is possible to take advantage
of a one-piece flow approach to work flow instead of running in large batch
quantities. With one-piece flow, the manufacturing lead time and level of inven-
tory are reduced, and feedback on quality issues is faster. In a batch-and-queue system, individ-
ual pieces are completed at an operation and sit waiting until the entire batch is
complete, at which point they are moved to the next operation in sequence and
wait in queue for other orders to be completed that arrived there first before
moving forward. In the one-piece flow approach, products are passed one piece
at a time from operation to operation with a first-in, first-out (FIFO) priority.
Product manufacturing lead times are now only as long as the total of all the
takt times they had to get through. For example, five operations, each with a
takt of 1.0 minute, require a manufacturing lead time of five minutes. Another
significant benefit to one-piece flow is the impact on quality. There are fewer
units in flow to rework or scrap; if a defect is found, the feedback is almost
instantaneous, and corrective action is taken on the spot, not several weeks
later. (See Figures 9.26 and 9.27.)
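
Under simplifying assumptions (every operation paced at its takt time and no
other queue time), the lead-time difference between one-piece flow and batch-
and-queue can be seen with a little arithmetic. The sketch below is illustrative,
not a model of any particular line.

    def one_piece_lead_time(takt_times: list[float]) -> float:
        """First unit out: just the sum of the operation takt times."""
        return sum(takt_times)

    def batch_lead_time(takt_times: list[float], batch_size: int) -> float:
        """Each operation finishes the whole batch before passing it on."""
        return sum(t * batch_size for t in takt_times)

    takts = [1.0, 1.0, 1.0, 1.0, 1.0]          # five operations, 1.0 minute each
    print(one_piece_lead_time(takts))          # 5.0 minutes, as in the text
    print(batch_lead_time(takts, 50))          # 250.0 minutes for a 50-piece batch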

FIGURE 9.26 One-piece flow.

FIGURE 9.27 One-piece flow batch system.



One-Piece Flow Versus Batch and Queue


Characteristics of one-piece flow are:

 Reduced product lead time


 FIFO
 Instantaneous defect feedback
 Immediate corrective action
 Reduced WIP

WORK CELL DESIGN

When designing a cell, a set of specific design objectives (criteria) needs to be


established. The following is a list of general criteria for good cell design:

 Ensure material flows in one direction.


 Reduce material and operator movement.
 Eliminate storage between operations.
 Eliminate double and triple handling.
 Locate parts as close as possible to point of use.
 Use task variation to reduce repetitive motion.
 Locate tools and parts within easy reach of operators.
 Reduce walking distances.

The cell layout shown in Figure 9.28 is a graphical representation of the op-
erator and material flow. It depicts the path of the overall material movement
through the cell and describes the designed operator sequence and operations.

FIGURE 9.28 Work cell design.



FIGURE 9.29 Kanban sizing high inventory levels.

In terms of driving continuous improvement initiatives, the work cell should
also be designed with the following in mind:

 The goal is to eliminate all wait time.


 Vertical storage requires less space than horizontal storage (include kanban
material).
 Equipment and materials should be prepared by process sequence.
 Operators should be included in the design process (incorporate economies
of motion).

KANBAN SIZING

Kanban sizing helps to uncover production problems, as illustrated in Figure
9.29. When the level of in-process inventory is high, problems like absentee-
ism and equipment breakdowns will be only an annoyance, as shown in Fig-
ure 9.30.
Lean methodology has a proven track record of increasing efficiency. But to
be effective, Lean methods and tools must be applied to processes and opera-
tions that work in harmony. Only then can total efficiency be increased.

FIGURE 9.30 Hidden problems masked with high inventory levels.
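
The text does not give a sizing formula in this section, but one commonly used rule computes the number of kanban cards from demand during the replenishment lead time, a safety factor, and the container quantity. The sketch below applies that common rule with assumed values, purely for illustration.

```python
import math

# Assumed values; the formula is a widely used convention, not quoted from the text.
daily_demand = 300        # units per day
lead_time_days = 0.5      # replenishment lead time
safety_factor = 0.10      # buffer for variation
container_qty = 20        # units authorized per kanban container

kanbans = math.ceil(daily_demand * lead_time_days * (1 + safety_factor) / container_qty)
print(f"kanban cards required: {kanbans}")
```

Lowering the safety factor or the container quantity reduces the inventory the kanbans authorize, which is exactly what exposes the hidden problems shown in Figure 9.30.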



KEY POINTS
 Remember, ‘‘effectiveness is the foundation of success—efficiency is a
minimum condition for survival after effectiveness has been achieved.’’ Al-
ways balance effectiveness and efficiency.
 Integrate effectiveness and efficiency in all process analysis. While doing a
DMAIC process improvement, do not hesitate to use value stream maps
and Lean tools. Frequently, an initial process walkthrough will generate a
number of rapid improvement events to be run in parallel.
 When initiating an improvement project, always look to start with a 5S
program, which changes the culture of a workplace and makes improve-
ment initiatives much more successful.
 During your initial process walkthrough, use a checklist for the seven
forms of waste:
1. Overproduction
2. Waiting
3. Transport
4. Inappropriate processing
5. Unnecessary inventory
6. Unnecessary motion
7. Defects
 Accumulate all of your process data into a work-flow analysis and use that
data and graphic as a basis to make fact-based decisions concerning mak-
ing your process a Lean one.

The seven forms of waste and the 5S analysis can offer early successes and
act as cultural change agents. Not only are these analyses and actions important
to the overall analyze and improve phase, they very often achieve significant
improvement upon their application.
Takt time is the key to synchronizing all process operations. When all pro-
cesses run at takt time, unevenness and overburden are eliminated. When takt
time and cycle time are in balance, waste is eliminated.
Routing analysis provides an assessment of work-flow patterns and cycle
time in each process work center and work activity you have mapped.
Work content analysis is baseline information that can and will be used in the
future for different calculations.
Process availability, sometimes called operational availability (Ao), is the
time a system or process is up and running. It is the probability that a process
will be available to perform when called upon.
Process yield is the traditional way that yield has been calculated (units out/
units in). Rolled throughput yield (RTY) is the more accurate Lean Six Sigma
method of calculating yield that takes into account the ‘‘hidden factory’’ (calculated as e^(-TDPU)).
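
As a brief illustration of the difference, the sketch below computes traditional yield and rolled throughput yield from the same assumed counts, using the e^(-TDPU) relationship noted above.

```python
import math

# Assumed counts, for illustration only.
units_in = 500
units_out = 480                   # units shipped after rework in the "hidden factory"
total_defects_observed = 45       # every defect found, including those reworked

traditional_yield = units_out / units_in
tdpu = total_defects_observed / units_in
rolled_throughput_yield = math.exp(-tdpu)

print(f"traditional yield:       {traditional_yield:.1%}")
print(f"TDPU:                    {tdpu:.3f}")
print(f"rolled throughput yield: {rolled_throughput_yield:.1%}")
```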

Cycle time is the actual time required to produce a part, assembly, or product.
It is the total time from the beginning to the end of your process, as defined by
you and your customer. Cycle time includes:

 Process time, during which a unit is acted upon to bring it closer to an


output
 Delay time, during which a unit of work is waiting for the next action.

Just-in-time strategy refers to a body of practices that calls for goods to be


produced as closely as possible to when they are sold.
Kanban is a way of controlling inventory. It works as a signal to replace what
has been used. If the kanban authorization is present, action is taken. If it is not,
no action is taken. As an inventory control mechanism, it provides an ideal way
of exposing problems or opportunities for improvement. It is an essential ele-
ment of Lean manufacturing.
Mixed-model production is a scheduling tool providing increased responsive-
ness and utilization of floor space. It supports production flexibility for the work
center.
Workable work refers to those elements contained within the production pro-
cess that are necessary for work to begin on a product. Every production envi-
ronment will have something that is specifically required in order to begin work:

 Material/parts
 Tooling
 Equipment
 Work instructions/specifications/checklists/routing
 Demand
 Skilled workers

Kanban sizing helps to uncover production problems.



10
CONTROL AND CONTINUOUS
MEASURABLE IMPROVEMENT

The goal of the control phase is to institutionalize and sustain the improvements
made during the analyze and improve phase and then transition to continuous
measurable improvement. After a change has been made to a process, it becomes
necessary to ‘‘lock it down’’ and maintain it as the new standard for operating;
however, after having set the new standard, the performance level should not be
limited to that standard. The process measure and analyze tools are used to moni-
tor a process, evaluate progress, and then, as appropriate and aligned with the
goals of the enterprise, develop new levels of improved performance. Process
control focuses on implementing process controls for measuring and evaluating
performance, documenting new processes and procedures in written work
instructions, documenting action plans for dealing with instances of special cause
variation, and training process operators on the new process and methods. The
controls also need to include audit plans for regular periodic verification to
ensure that process operators are following the new process instructions.
The control phase uses a number of tools for stabilizing standard methods of
working, managing their implementation, and achieving continuous measurable
improvement. The following strategies are used to control, sustain, and build
upon your improvements.

 Management systems
 Statistical process control
 Visual controls
 Graphic work instructions
 Mistake-proofing (poka-yoke)
 Single-minute exchange of die (SMED)

 Total productive maintenance (TPM)


 Rapid improvement events

MANAGEMENT SYSTEMS

The implementation of management systems is the first and most important con-
trol and sustainment strategy. We have discussed management systems previ-
ously and now extend that discussion to include codifying improvements and
sustaining improvement initiatives. The first step after achieving a successful
improvement is to document the process changes. These process changes should
be documented in the organization’s management system. By using the manage-
ment system, you ensure that the change is documented in policy documents,
procedures, manuals, and work instructions as they cascade through the docu-
ment control system. This will ensure that the change(s) are:

 Part of the enterprise policy and directives


 Integrated into the appropriate manuals and operating procedures
 Part of the process procedures and work instructions
 Integrated into the new employee training programs

Well-documented process and procedure changes are important to the success


of your improvement initiatives and must be fully integrated with the manage-
ment system because they provide consistent direction on how to perform the
changed tasks. Without this documentation, old procedures, methods, and work
instruction steps will always creep into a process and it will not be executed
effectively or efficiently. The documentation of the process must be written for
the individuals executing it. The changed documentation must:

 Be clear and concise
 Be written at the level of the individuals executing the procedure
 Be auditable, from higher-level procedures down to work instructions
 Provide for easily understood graphical work instructions and checklists
 Contain standards and expectations for the tasks to be performed
 Provide for a quantifiable means of controlling the process

Documenting your changed business processes is the first step to controlling and
sustaining your successful improvement initiatives. Remember:

What is not documented, you cannot measure; what you cannot measure, you can-
not control; what you cannot control, you cannot improve.

STATISTICAL PROCESS CONTROL

Statistical process control (SPC) is used to monitor process stability and detect
process changes. The key metrics and Critical Success Factors (CSF) from your
improvement initiative should be controlled using a process control chart. In
this way, SPC serves as a tool to ensure consistent and sustained implementation
of your improvement initiative. This allows you to be proactive and interactive
(rather than reactive) in monitoring system process changes:

 To monitor the key metrics and CSFs of the improvement initiative


 To identify special causes of variation (chemical, material, equipment,
environmental changes, etc.)
 To identify common causes of variation (i.e., those inherent to the system)
 To monitor and eliminate system variance due to special and common
causes of variation
 To determine process capability
 To indicate when adjustments or process corrections are necessary
 To indicate when to leave a process alone because it is working well

As an example, the control chart in Figure 10.1 was implemented to manage


a customer services telephone answering process. The process had become a
significant problem for clients who were on hold for extended periods of time

(up to five minutes). A DMAIC process improvement initiative resulted in
changed procedures that would provide for additional service representatives to
be brought on line when needed and that would implement statistical process
control to monitor, plan, and control the amount of time a client was on hold.
These process improvement initiatives resulted in an average hold time of
18 seconds. The X-bar and R chart in Figure 10.1 is used to monitor the plan,
to control the client hold time, and to adjust the number of on-line service
representatives.

FIGURE 10.1 SPC chart for waiting time: X-bar and R chart of client hold time (X-bar chart: UCL = 21.659, center line = 18.117, LCL = 14.575; R chart: UCL = 12.98, R-bar = 6.14, LCL = 0).
Using the chart in Figure 10.1, the process leader was able to determine
that the client hold time was increasing steadily starting at the fourth sub-
group and climbing through the seventh subgroup. The service representative
team leader was able to add on-line service representatives to reduce the cli-
ent waiting time closer to the 18-second target that had been achieved as the
process average.
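
For readers who want to reproduce a chart like Figure 10.1, the sketch below computes X-bar and R control limits from made-up hold-time subgroups, using the standard constants for subgroups of five (A2 = 0.577, D3 = 0, D4 = 2.114); the data are illustrative and are not the data behind the figure.

```python
# Made-up subgroups of hold time in seconds, five observations each.
subgroups = [
    [18.2, 17.5, 19.1, 18.0, 17.9],
    [18.8, 18.1, 17.6, 18.4, 19.0],
    [17.9, 18.3, 18.6, 17.7, 18.2],
    [18.5, 19.2, 18.0, 18.8, 18.3],
]

xbars = [sum(s) / len(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]
x_double_bar = sum(xbars) / len(xbars)
r_bar = sum(ranges) / len(ranges)

A2, D3, D4 = 0.577, 0.0, 2.114          # control chart constants for subgroup size n = 5
ucl_x, lcl_x = x_double_bar + A2 * r_bar, x_double_bar - A2 * r_bar
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

print(f"X-bar chart: LCL = {lcl_x:.2f}, CL = {x_double_bar:.2f}, UCL = {ucl_x:.2f}")
print(f"R chart:     LCL = {lcl_r:.2f}, CL = {r_bar:.2f}, UCL = {ucl_r:.2f}")
```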

VISUAL CONTROLS

Simple visual controls can enhance process efficiency, effectiveness, and safety.
Visual controls are used throughout the process at key points to provide opera-
tors with simple, easy-to-use, and difficult-to-overlook checks. Some examples
of visual controls are:

 Color-coded paper to indicate different processing requirements


 Computer screen warnings and cautions
 Painted arrows on concrete floors
 Signs in work areas indicating instructions and warnings
 Providing tools (ladders, brooms, etc.) throughout the plant rather than in
one location
 Providing shadow boards for the placement (and replacement) of tools and
so forth
 Shadow boards and graphics to use as comparators
 Labeling toolboxes, ladders, and so on
 Protecting levers, faucets, and such

One example of visual controls is the tool shadow board demonstrated in


Figure 10.2. This kind of visual control provides an instant indication of missing
tools.
FIGURE 10.2 Visual control of tools.

Another key aspect of visual control is performance measurement, accom-
plished through the display of measures for everyone to see and understand. A
communication board provides a means to display performance status and com-
municate problems. Figure 10.3 shows a typical communication board that pro-
vides visibility to work area production status. This type of visual control
provides the ability to walk into the work area and, in a matter of minutes,
know the status of the operation. The intent of a visual control is that the whole
workplace is set up with signs, labels, color-coded markings, and so on, such
that anyone unfamiliar with the process can, in a matter of minutes, know what
is going on, understand the process, and know what is being done correctly and
what is out of place.

FIGURE 10.3 Production work status.



A visual factory is made up of visual displays and visual controls. Visual dis-
plays and controls help keep things running as efficiently as they were designed
to run. Sharing information through visual tools helps keep production running
smoothly and safely. Shop floor teams are often involved in devising and imple-
menting these tools through 5S and other improvement activities.
Visual controls describe workplace safety, production throughput, material
flow, quality metrics, or other information. Visual controls supply the feedback
to an area, much the same way that SPC can give process feedback to the oper-
ator running a particular operation. A visual display relates information and data
to employees in the area.

GRAPHIC WORK INSTRUCTIONS

Graphic work instructions consistently convey, in an easily recognizable format,


how a job is to be performed according to documented standard work proce-
dures. Text-based work instructions are the most widely used. The problem with
text is that it is very dependent not only on an individual’s ability to learn from a
written format, but also on an individual’s ability to accurately describe actions
as part of a series of motions, not to mention the cross-cultural language barriers
that can exist within the plant or when communicating globally regarding prod-
ucts or production methodologies.
In the past, CAD drawings and blueprints were the only means of graphically
depicting work and were very time-consuming to update and maintain; however,
with the advent of digital cameras, video recorders, and presentation software,
there is no excuse for not providing graphic instructions in the shop area. Graphic-
based work instructions are a far more effective means of communication than
simple text. The information can be captured quickly through a digital camera and
manipulated with software to add color-coded legends that identify work content
by operation, quality checks, special notes, and so forth.
Each picture or slide can represent an operation or depict a bill of material for
that operation with a date, revision, and signature block for configuration con-
trol. When there is an improvement to the process or the introduction of a new
part, the old graphic can be pulled and replaced with a new one in as little as
30 minutes. The days of a manufacturing engineer having to spend several days
trying to maintain and update work instructions are over.

MISTAKE-PROOFING (POKA-YOKE)

Human beings will invariably make mistakes. It is not possible to remember


everything that has to be done at every step of producing every product
with every job. Errors cause defects. Errors need to be prevented. This is where
the mistake-proofing comes in. Mistake-proofing, or, to use the Japanese ter-
minology, poka-yoke, is accomplished through the deployment of simple, inex-
pensive devices designed to prevent or catch errors so they do not become
defects. These devices are placed in the process to ensure that it is very easy for
the operator to do the job correctly or very difficult for the operator to do the job
incorrectly. These may be physical, mechanical, or electrical (e.g., checklist for
the operator or technician to ensure that all steps in the process are performed or
work aids to eliminate fatigue or boredom).

Mistake-Proofing Successes
 Ignition locks
 Elevator door sensors
 Stamping machines
 Automatic toilet flushers
 Color-coded computer connectors
 Different-shaped nozzles for gas delivery systems
 Assembly keying

Mistake-Proofing Failures
 Seatbelt ignition lockouts
 Inventory packing checklists
 Warning signs

A good example of mistake-proofing derives from a printer manufacturer


whose number one complaint reported was ‘‘missing components,’’ with the pri-
mary missing component being the power cord. This was unacceptable for the
company, especially since each box was inspected for completeness prior to be-
ing sealed and shipped.
The existing system is relatively simple: An empty box passes along a series
of conveyor belts and is filled with the required components and packing material
by each work site. The company prides itself on the amount of training provided
to all of its personnel and possesses a well-deserved reputation for providing
quality products. Yet the ‘‘missing components’’ problem arose. The company
modified the packaging to include a plastic molded form that included color-
coded compartments for the power cord, ink cartridge, and package of instruction
and installation documentation. A DMAIC improvement team was responsible
for the improvement, which included the elimination of the inspector position.
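
A software version of the same idea is sketched below. It is purely hypothetical (the function and component names are invented for illustration), but it shows the poka-yoke principle: the seal station cannot release a box until every required component has been recorded, so a missing power cord is caught as an error rather than shipped as a defect.

```python
# Hypothetical poka-yoke interlock for the packing line; all names are invented.
REQUIRED_COMPONENTS = {"printer", "power_cord", "ink_cartridge", "documentation"}

def can_seal_box(scanned: set) -> bool:
    """Allow sealing only when every required component has been scanned."""
    missing = REQUIRED_COMPONENTS - scanned
    if missing:
        print(f"BLOCKED - missing: {sorted(missing)}")
        return False
    return True

can_seal_box({"printer", "ink_cartridge", "documentation"})   # blocked: power cord missing
can_seal_box(set(REQUIRED_COMPONENTS))                        # released
```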

SINGLE-MINUTE EXCHANGE OF DIE (SMED)

In processes where equipment needs to be changed over for different products,


rapid setup and teardown are essential if the goals of Lean manufacturing are to
be achieved. The dependency on flexibility (especially in fabrication) is para-
mount to allowing level production schedules to flow.

Single-minute exchange of die (SMED) refers to streamlining the change-


over process so it can be accomplished in a very short period of time. It was
developed by Shigeo Shingo over a period of years and implemented at Toyota
(1970) as part of its just-in-time system.
Shingo developed SMED to cut setup times, enabling smaller batch sizes to
be produced. The setup procedures were simplified by using common or similar
setup elements whenever possible. This approach was in complete contrast with
traditional manufacturing procedures, as Shingo pointed out:

It is generally and erroneously believed that the most effective policies for dealing
with setups address the problem in terms of skill. Although many companies have
setup policies designed to raise the skill level of the workers, few have imple-
mented strategies that lower the skill level required by the setup itself.

The success of this system was illustrated in 1982 at Toyota, when the die
punch setup time in the cold-forging process was reduced over a three-month
period from one hour and forty minutes to three minutes. These are the general
benefits of SMED:

 Minimal loss to throughput time on equipment


 Reduced operating costs
 Capability of processing a greater mix of product

SMED focuses on cutting changeover time so that a cell can perform many
more setups in the same amount of time. This achieves the primary objective of
building flexibility into the process. The three steps of the SMED approach are
described in detail next.

Step 1. Identify the activities. This step is self-explanatory. All that is hap-
pening here is the identification of all activities in the process. This is best ac-
complished by having the team create a process map for the piece of equipment
that is under scrutiny.
Step 2. Segregate activities into one of two categories. Activities can be
either:
External setup activities. Operations are performed while the machine is run-
ning (previous or current job).
Internal setup activities. Operations are performed while the machine is
stopped. By shifting activities from internal to external and conducting
some good housekeeping practices, changeover time can be reduced
significantly.
Step 3. Reduce/eliminate steps as they are performed today. The emphasis
here is simplifying the set-up process for both internal and external activities.
Investigate:
 Standardizing the setup
 Minimizing the utilization of bolts and adjustments
 Utilizing simple one-turn types of attachment methodologies
 Techniques such as cams, interlocking mechanisms, slotted bolts, secured washers, etc.

Strive to make the set-up process standard, consistent, repeatable, and one
that employees can learn. Eliminate the requirement for black art or years of
experience.
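
The classification in step 2 and the arithmetic behind the savings can be sketched with a small table of hypothetical activities: everything tagged internal still stops the machine, so simply moving work to external setup shrinks the changeover before any redesign is attempted.

```python
# Hypothetical changeover activities: (description, minutes, category).
activities = [
    ("retrieve next die from storage", 12, "external"),
    ("preheat next die",                8, "external"),
    ("remove current die",              6, "internal"),
    ("mount and align next die",        9, "internal"),
    ("first-piece inspection",          5, "internal"),
]

total_minutes = sum(minutes for _, minutes, _ in activities)
internal_minutes = sum(m for _, m, cat in activities if cat == "internal")

print(f"changeover if everything is done with the machine stopped: {total_minutes} min")
print(f"changeover after moving external work off line:            {internal_minutes} min")
```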

SMED Examples
 Die preheating
 Preset tooling
 Kitting replacement parts
 Equipment alignment jigs

TOTAL PRODUCTIVE MAINTENANCE

The reliability of equipment in a Lean environment is critical for success. As


inventory levels are reduced, uptime of machinery becomes more important
because there is little inventory to buffer unplanned downtime. When a ma-
chine does go down, the entire production line stops. Therefore, we must
have a tool or mechanism in place to address equipment problems before
they occur. Total productive maintenance (TPM) provides a method for re-
ducing (if not eliminating) unplanned machine downtime. Specifically, the
goals of TPM are to:

 Eliminate breakdowns
 Minimize setup and adjustment activities
 Eliminate minor stoppage and idling
 Eliminate equipment-created defects
 Improve start-up yield
 Prevent defects

To achieve these goals, TPM focuses on:

 Early detection of equipment problems


 Preventing breakdowns
 Improving the effectiveness and efficiency of equipment
 Standard activities and procedures
 Planned maintenance

There are three main components of TPM:

1. Preventive maintenance. The focus here is on preventing breakdowns


from happening. The concern here is with the uptime or availability of
equipment. Preventative maintenance is performed in a preplanned,
scheduled manner as opposed to reacting to breakdowns. Operators con-
duct daily maintenance on equipment and identify abnormalities as they
occur.
2. Corrective maintenance. This area focuses on improving repaired equip-
ment by identifying components from the original equipment that keep
breaking and replacing them with more reliable components or equipment.
3. Maintenance prevention. One of the key ingredients of TPM is the role of
the daily operator. It is imperative that equipment be easy to maintain on a
recurring basis. If machinery is difficult to lubricate, if bolts are difficult to
tighten, and if it is impossible to check critical fluid levels, it is unlikely
operators will be motivated to monitor equipment on a daily basis. Data
that assists in providing preventive maintenance includes downtime reports,
such as the following (a brief calculation sketch follows this list):
 Reason(s) for failure
 Actions or repairs
 Mean time between failure (MTBF)
 Mean time to repair (MTTR)
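
A minimal sketch of how that downtime data feeds the standard maintenance metrics, using assumed failure and repair records; the operational availability line uses the Ao relationship MTBF / (MTBF + MDT), with mean downtime approximated by MTTR for simplicity.

```python
# Assumed maintenance records, for illustration only.
hours_between_failures = [160, 190, 210, 175]   # operating hours between consecutive failures
repair_hours = [3.0, 2.0, 4.5, 2.5]             # corrective maintenance time per failure

mtbf = sum(hours_between_failures) / len(hours_between_failures)
mttr = sum(repair_hours) / len(repair_hours)
ao = mtbf / (mtbf + mttr)   # MDT approximated by MTTR in this simplified sketch

print(f"MTBF: {mtbf:.1f} h, MTTR: {mttr:.1f} h, Ao: {ao:.1%}")
```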

The origin of TPM can be traced back to the early 1950s, when the U.S. con-
cept of ‘‘preventive maintenance’’ was introduced to Nippondenso, a manufac-
turer of automotive parts. In 1960, Nippondenso was the first company to
introduce plantwide preventive maintenance. However, in doing so, manage-
ment exposed a problem: soaring demand for maintenance personnel.
To address the situation created, management noted that automated equip-
ment was still ‘‘manned’’ by an operator. Therefore, it was decided that basic
maintenance of automated equipment should be carried out by the operators.
This practice is known as autonomous maintenance and is one of the features
of TPM.

Overall Equipment Effectiveness


An important tool in the TPM improvement program is known as overall equip-
ment effectiveness (OEE). It offers a simple but powerful measurement tool to
get information on what is actually happening with equipment on the floor.
Three main categories of equipment-related loss exist:

1. Loss of time
2. Loss of speed
3. Loss of quality
These three categories form the ingredients for determining the overall equip-
ment effectiveness.
OEE is calculated by multiplying the availability rate (time loss factor), the
performance rate (speed loss factor), and the quality rate (defect/quality loss
factor). But just what does this all mean?

Availability Rate
The availability rate is the time the equipment is really running versus the time
it could have been running. A low availability rate reflects downtime losses
due to:
 Breakdown loss
 Setup and adjustment loss
It is calculated using the formula:

Availability rate = (operating time - downtime) / total operating time

Performance Rate
Performance rate is the quantity produced during the running time versus the
potential quantity, given the designed speed of the equipment. A low perform-
ance rate reflects speed losses due to:
 Minor stoppage loss
 Idling loss
 Reduced speed loss
It is calculated using the formula:

Performance rate = total output / potential output at rated speed

Quality Rate
The quality rate is the amount of good products versus the total amount of prod-
ucts produced. A low quality rate reflects defect losses:
 Scrap and rework
 Startup losses
It is calculated using the formula:

Quality rate = good output / total output

To calculate OEE, we multiply the three factors together:

OEE = availability rate × performance rate × quality rate
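
A short worked example with assumed numbers ties the three rates together; none of the figures are from the text.

```python
# Assumed values for one day of operation on one machine.
total_operating_time = 480.0   # scheduled minutes
downtime = 60.0                # breakdown plus setup/adjustment losses
total_output = 380             # units actually produced
rated_output = 420             # units possible at design speed in the run time
good_output = 361              # units needing no scrap or rework

availability_rate = (total_operating_time - downtime) / total_operating_time
performance_rate = total_output / rated_output
quality_rate = good_output / total_output
oee = availability_rate * performance_rate * quality_rate

print(f"availability {availability_rate:.1%}, performance {performance_rate:.1%}, "
      f"quality {quality_rate:.1%}, OEE {oee:.1%}")
```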

Total productive maintenance requires total participation and needs to be em-


bedded in the culture of the enterprise to be successful. TPM is a mainstay of
Enterprise Excellence. When properly implemented, TPM greatly reduces or
eliminates unplanned maintenance, slowdowns, and start-up loss, improves cy-
cle time, and aids in reducing defects (scrap, rework, and repair).

RAPID IMPROVEMENT EVENTS

Rapid improvement events are also known as kaizen events. Kaizen is a Japa-
nese word: kai means ‘‘change,’’ and zen means ‘‘good’’ (for the better). Kai-
zen refers to continuous small improvements involving all people in the
organization. The principle behind kaizen is that a very large number of small
improvements are often more effective in an organizational environment than a
few improvements of large value. Kaizen activities are aimed at reducing waste
in the workplace. By using a detailed and thorough procedure, we eliminate
losses in a systematic method using various kaizen tools. These activities are
not limited to production areas but are implemented in administrative areas
as well.
Rapid improvement events (RIE) may be used in the improve phase of the
Enterprise Excellence model as well as in the control and continuous measur-
able improvement phases. These events are initiated when a need for improve-
ment is identified that is narrow in scope and urgent. The rapid improvement process
involves careful preparation and analysis. This includes developing a value
stream map, collecting process data, analyzing the process performance, devel-
oping rapid improvement plans, and preparing for implementation. This is fol-
lowed by the implementation of the improvements. The initial planning phase of
the RIE is about one to three weeks. The actual implementation is normally one
week. Once the improvements are implemented, process monitoring is contin-
ued for the changed process. And we are back in the measuring, evaluating, and
improving of continuous measurable improvement.

CONTINUOUS MEASURABLE IMPROVEMENT

The culmination of all the process analysis and improvement activities is con-
tinuous measurable improvement. Process management and improvement is not
a one-time event; rather, it represents a continuous effort. This is accomplished
by continually measuring and monitoring the process, evaluating performance
against standards and requirements, and auditing operations to ensure proce-
dures are followed and the new methodology is adhered to. All of these activi-
ties are then used to identify opportunities to improve the operations to be even
more effective and efficient . . . continually looking for ways to improve the
operations by continually eliminating variation and waste.

KEY POINTS

The objective of the control phase is to institutionalize and sustain the improve-
ments made during the analyze and improve phase and then transition to contin-
uous measurable improvement.
Process control focuses on implementing process controls for measuring and
evaluating performance, documenting new processes and procedures in written
work instructions, documenting action plans for dealing with instances of special
cause variation, and training process operators on the new process and methods.
The controls also need to include audit plans for regular periodic verification to
ensure that process operators are following the new process instructions.
The following strategies are used to control, sustain, and build upon your
improvements.
 Management systems
 Statistical process control
 Visual controls
 Graphic work instructions
 Mistake-proofing (poka-yoke)
 Single-minute exchange of die (SMED)
 Total productive maintenance (TPM)
 Rapid improvement events
Management Systems
The implementation of management systems is the first and most important con-
trol and sustainment strategy. The first step after achieving a successful im-
provement is to document the process changes. These process changes should
be documented in the organization’s management system. Without this docu-
mentation, old procedures, methods, and work instruction steps will always
creep into a process and it will not be executed effectively or efficiently. The
changed documentation must:
 Be clear and concise
 Be written at the level of the individuals executing the procedure
 Be auditable, from higher-level procedures down to work instructions
 Provide for easily understood graphical work instructions and checklists
 Contain standards and expectations for the tasks to be performed
 Provide for a quantifiable means of controlling the process

What is not documented, you cannot measure; what you cannot measure, you can-
not control; what you cannot control, you cannot improve.

Statistical Process Control


Statistical process control is used to monitor process stability and detect process
changes. The key metrics and CSF from your improvement initiative should be
controlled using a process control chart. In this way, SPC serves as a tool to
ensure consistent and sustained implementation of your improvement initiative.

Visual Controls
Simple visual controls can enhance process efficiency, effectiveness, and safety.
Visual controls are used throughout the process at key points to provide opera-
tors with simple, easy-to-use and difficult-to-overlook checks.
Another key aspect of visual control is performance measurement, accom-
plished through the display of measures for everyone to see and understand. A
communication board provides a means to display performance status and com-
municate problems.
The intent of a visual control is that the whole workplace is set up with signs,
labels, color-coded markings, and so on, such that anyone unfamiliar with the
process can, in a matter of minutes, know what is going on, understand the pro-
cess, and know what is being done correctly and what is out of place.

Graphic Work Instructions


Graphic work instructions consistently convey, in an easily recognizable format,
how a job is to be performed according to documented standard work procedures.

Mistake-Proofing (Poka-Yoke)
Mistake-proofing, or, to use the Japanese terminology, poka-yoke, is accom-
plished through the deployment of simple, inexpensive devices designed to pre-
vent or catch errors so they do not become defects. These devices are placed in
the process to ensure that it is very easy for the operator to do the job correctly
or very difficult for the operator to do the job incorrectly. These may be physi-
cal, mechanical, or electrical (e.g., checklist for the operator or technician to
ensure that all steps in the process are performed or work aids to eliminate fa-
tigue or boredom).

Single-Minute Exchange of Die (SMED)


In processes where equipment needs to be changed over for different products,
rapid setup and teardown are essential if the goals of Lean manufacturing are to
be achieved. The dependency on flexibility (especially in fabrication) is para-
mount to allowing level production schedules to flow.
SMED focuses on cutting changeover time so that a cell can perform many
more setups in the same amount of time. This achieves the primary objective of
building flexibility into the process. Each step is described in detail in this
chapter.

Total Productive Maintenance


The reliability of equipment in an Enterprise Excellence environment is critical
for success. Total productive maintenance (TPM) provides a method for reduc-
ing (if not eliminating) machine downtime. TPM consists of three main
components:

1. Preventative maintenance. This focuses on preventing breakdowns from


happening. The concern here is with the uptime or availability of equip-
ment. Preventative maintenance is performed in a preplanned, scheduled
manner as opposed to reacting to breakdowns. Operators conduct daily
maintenance on equipment and identify abnormalities as they occur.
2. Corrective maintenance. This focuses on improving repaired equipment
by identifying components from the original equipment that keep break-
ing and replacing them with more reliable components or equipment.
3. Maintenance prevention. One of the key ingredients of TPM is the role of
the daily operator. It is imperative that equipment be easy to maintain on a
recurring basis.

Rapid Improvement Events


Rapid improvement events are also known as kaizen events. The principle be-
hind kaizen is that a very large number of small improvements are often more
effective in an organizational environment than a few improvements of large
value. Kaizen activities are aimed at reducing waste in the workplace. These
activities are not limited to production areas but are implemented in administra-
tive areas as well.
Rapid improvement events may be used in the improve phase of the Enter-
prise Excellence model as well as in the control and continuous measurable im-
provement phases. These events are initiated when a need for improvement is
identified that is narrow in scope and urgent. The rapid improvement process involves
careful preparation and analysis. Once the improvements are implemented, pro-
cess monitoring is continued for the changed process. And we are back in the
measuring, evaluating, and improving of continuous measurable improvement.

Continuous Measurable Improvement


The culmination of all the process analysis and improvement activities is con-
tinuous measurable improvement. Process management and improvement is not
a one-time event; rather, it represents a continuous effort. This is accomplished
by continually measuring and monitoring the process, evaluating performance


against standards and requirements, and auditing operations to ensure proce-
dures are followed and the new methodology is adhered to. All of these activi-
ties are then used to identify opportunities to improve the operations to be even
more effective and efficient . . . continually looking for ways to improve the
operations by continually eliminating variation and waste.

APPENDIX A

BASIC MATH SYMBOLS

+ plus, positive
− minus, negative
× multiplied by, times
/ divided by
= equal to
≠ not equal to
> greater than
< less than
≥ greater than or equal to
≤ less than or equal to
≈ approximately equal to
± plus or minus
Σ summation, add series
σ sigma, population standard deviation
c number of defects in a sample
df degrees of freedom
I individual chart
H0 null hypothesis
HA alternate hypothesis
k in design of experiments (DOE), the number of factors in a study;
in engineering notation, stands for kilo (10^3)
MR moving range chart
N population size
n sample size
np number of defects in a sample
p proportion of units defective


R range
R-bar mean of the range
s sample standard deviation
s² variance of a set of sample values
μ mu, population average
X-bar sample average
√ square root
α alpha, risk of incorrectly concluding the null hypothesis is false
β beta, risk of incorrectly concluding the null hypothesis is true
% percent

APPENDIX B

LIST OF ACRONYMS

Ao (operational availability) Actual equipment readiness, determined by the
equation: MTBF / (MTBF + MDT).
ADT (accelerated degradation testing) A method of reliability testing used to
find design weaknesses, where units are tested to failure by applying stressed
operational modes. ADT is used in conjunction with TAAF to improve
reliability and durability.
ALT (accelerated life test) A test that is performed on products and processes
designed to find and identify causes of wear out, but that does not identify
individual defects.
ANOVA (analysis of variance) A statistical method used to determine
important factors and interactions between sets of data.

BOM (bill of material) BOM provides important information for material


requirements planning (MRP), including the product structure and relative
quantities.
BVA (business value added) A process activity that does not contribute to
achieving customer requirements or expectations, but is mandated by
compulsory or obligatory requirements.

CAD (computer aided design) Software used in art, architecture, engineering,


and manufacturing to create precision drawings.
CDD (capability development document) A document that captures the
information necessary to develop a proposed program, normally using an
evolutionary acquisition strategy.


CDOV (Concept-Design-Optimize-Verify) A methodology used to identify


and eliminate design problems in the product or process development phase.
CE (concurrent engineering) A systematic approach for the integrated,
simultaneous design of products and processes.
CE (cause-and-effect) diagram A comprehensive tool to focus problem solv-
ing. Also known as a fishbone diagram or Ishikawa diagram.
CFP (critical function parameters) An input variable used at the subassembly
or subsystem level that controls the mean or variation of a critical function
response (CFR).
CFR (critical function response) A measured scalar or vector output variable
that is critical to the fulfillment of a critical customer requirement. Some
refer to these critical customer requirements as critical-to-quality (CTQ)
requirements.
CL (confidence level) Confidence level is the probability value (1 − α) that is
associated with a confidence interval. Confidence level is often expressed as
a percentage. For example, if α = 0.05 = 5%, then the confidence level is
(1 − 0.05) = 0.95, or 95%.
CMI (continuous measurable improvement) A fact-based, operations manage-
ment approach used to increase quality, reduce costs, meet or exceed
schedules, and lower risks.
CO (change over time) The time required to modify equipment or a process to
produce a different model or product.
COPQ (cost of poor quality) COPQ consists of costs that are generated as a
result of producing defective material and that result in rework, repair, and/or
warranty fulfillment.
COQ (cost of quality) COQ consists of the costs associated with the quality of
a product. COQ includes the cost of conformance and the cost of
nonconformance.
Cp A process capability index that assumes that the process is centered. This
measure predicts potential process capability, as opposed to process
performance (Pp) that is used to determine current process performance.
Cpk A process capability index that does not assume that the process is
centered. This measure predicts potential process capability, as opposed to
process performance (Ppk) that measures current process performance.
Cpl A process capability index used for single-sided, lower-specification
limits.
Cpm A process capability index used to measure the process mean relative to a
target. Used when the specifications limits are not symmetrical.
CPM (critical path method) A tool used to make deterministic decisions in
project management.
Cpu A process capability index used for single-sided, upper-specification limits.

CRM (critical review method) A tool used to make deterministic decisions in


project management.

df (degrees of freedom) The number of independent values associated with a


given factor (usually the number of factor levels minus 1).
DFLSS (Design for Lean Six Sigma) A research and development approach
that focuses on robustness in design, material, and processes.
DFX (design for X) A best practice during the design phase intended to
improve X, where X is machineability, manufacturability, testability,
durability, serviceability, reliability, and so on. DFX may include analyzing
the design of documents for production readiness and integrity; selecting the
best components for functionality, cost, and availability; designing the best
manufacturing and assembly of components to create a system; and
designing the most cost-effective tests.
DMAIC (Define-Measure-Analyze-Improve-Control) A process for achieving
Six Sigma quality.
DMALC (Define-Measure-Analyze-Lean-Control) A process to improve the
efficiency of a process.
DOE (design of experiments) A statistics-based method of experimentation
superior to trial-and-error and one-factor-at-a-time experiments.
DPM (defects per million) The defects per million units observed during an
average production run.
DPMO (defects per million opportunities) Average number of defects
observed during an average production run divided by the number of
opportunities to make a defect on the product under study during that run
normalized to one million.
DPU (defects per unit) Average number of defects per unit observed when
sampling a population.

EWMA (exponentially weighted moving average) A control chart used to


detect small shifts away from a target.
ESRG (enterprise senior review group) A panel of executives and managers
with overall authority and responsibility for the vision of the enterprise. The
ESRG is also responsible for ensuring development and implementation of
the vision throughout the enterprise.

FBD (function block diagram) A static analysis tool that uses a block diagram
to portray and analyze a product or process at the function level.
FIFA (fault isolation and failure analysis) A corrective maintenance method
of troubleshooting to determine subassembly and component failures.
FIFO First in, first out.

FMEA (failure mode effects analysis) A systematic method for determining


possible effects from failure modes (causes) and prioritizing them by a risk
priority number (RPN). The FMEA method is a bottom-up approach to
predictive failure analysis, in contrast to the fault tree analysis (FTA), which
is top-down.
FRACAS (failure reporting and corrective action system) A systematic, closed-
loop process for reporting and evaluating product and process failures.
F ratio The test statistic in ANOVA (F stands for Ronald Fisher, who
developed ANOVA).
FRP (full rate production) Contracting for economic production quantities
following stabilization of the system design and validation of the production
processes.
FTA (fault tree analysis) A logical method of determining possible causes
from failure effects. The FTA method is a top-down approach to predictive
failure analysis, in contrast to the failure mode effects analysis (FMEA),
which is bottom-up.

HALT (highly accelerated life testing) A method of reliability testing used to


find design weaknesses, where units are tested to failure by applying stressed
environmental factors. HALT is used in conjunction with the test, analyze,
and fix (TAAF) methodology to improve reliability and robustness.
HOQ (house of quality) The requirements matrix of quality function deployment.
HRO (human resources office) The business unit responsible for management
of the people employed by the business. Typical functions of a human
resources office include recruiting and interviewing employees, advising on
hiring decisions in accordance with policies and requirements that have been
established in conjunction with management, providing training to enhance
employee skills, and developing compensation plans and incentive programs
to motivate employees. In some businesses, this is called personnel
management.

ICD (initial capabilities document) An ICD documents the need for a matériel
approach to a specific capability gap derived from an initial analysis of
matériel approach executed by the operational user and, as required, by an
independent analysis of the matériel alternatives.
ID (interrelationship digraph) One of the seven management and planning
tools used to systematically identify and analyze the relationships that exist
among critical issues.
IPT (integrated product team) A team made up of cross-functional, multi-
disciplined members who work together to build successful programs,
identify and resolve issues, and make sound and timely recommendations in
order to facilitate decision making.

IPPD (integrated product and process development) A cross-functional


team approach to implementation that strives to optimize materials
and processes. This strategy is used for determining cause-and-effect
relationships.

J-F method A matrix approach for developing an interrelationship digraph


(ID).
JIT (just-in-time) A pull-system approach to operations.

LA (labor time) Human resource time required to complete a work task.


LCL (lower control limit) The bottom control line on a control chart placed
three sigmas below the control chart’s mean.
LRIP (low rate initial production) The first effort of the production and
deployment phase to establish an initial production base for the system. This
approach supports an orderly ramp-up to smoothly transition to full rate
production (FRP) and provides representative production articles for initial
operation test and evaluation.
LSL (lower specification limit) The lower specification limit line on a control
chart.
L/6σ Lean Six Sigma.

MA (machine time) The equipment time required to complete a work task.


MANOVA (multivariate analysis of variance) An extension of the usual
analysis of variance (ANOVA) procedure for handling multiple responses
and their interactions.
MDT (mean downtime) The average time a system is not available for use
during specified periods.
MRP (material requirements planning) The work function that develops the
purchasing strategy for materials required to support production and
schedules.
MS (mean square) The sum of squares (SS) divided by degrees of freedom.
MSD (measurement system design) The development, application, and
deployment of a measurement system that can provide required parameter
data for decision making.
MSE (measurement system evaluation) A way to analyze the variability
within operators and machines (repeatability), among operators and
machines (reproducibility), and part-to-part variability (sometimes referred
to as a gage R&R analysis).
MTBF (mean time between failures) Average time between failures for
repairable equipment in units of operating hours or other units of life, such as
time, cycles, miles, or events.
MTTR (mean time to repair) The average system repair time, shown by the
equation: (total corrective maintenance time) / (total number of corrective
maintenance actions in a given period of time).

NVA (non-value-added) A process activity that does not contribute to


achieving customer requirements or expectations.

OCT (operational cycle time) The time required to successfully complete the
tasks for a work process.
OEE (overall equipment effectiveness) An index measure of the availability,
performance efficiency, and quality rate for a piece of equipment.

PDPC (process decision program chart) One of the seven management and
planning tools used to develop contingency planning.
PERT (program evaluation and review technique) Similar to the critical path
method (CPM), but incorporates probabilities for nondeterministic decision
making based on most likely, most optimistic, and most pessimistic scenarios.
POA&M (plan of action and milestone chart) A Gantt chart with the
associated resource requirements.
Pp A process performance index that assumes that the process is centered.
This is a measure of current process performance, as opposed to process
capability (Cp) that predicts potential process capability.
Ppk A process performance index that does not assume the process is
centered. This is a measure of current process performance, as opposed to
process capability (Cpk) that predicts potential process capability.
PRPN (planned risk priority number) This is the final risk priority number
(RPN) that is calculated from failure mode effects analysis (FMEA) after
final corrective actions are complete and the results are documented.

QFD (quality function deployment) A systematic, focused method for


listening to customers and optimizing design, materials, and processes per
the customer’s expectations.
QMS (quality management system) Basic management of an enterprise that
reflects the culture of the organization, the philosophy of its management,
and how it provides products or services.
Q$SR (quality, cost, schedule, and risk) The four critical elements in project
management and in process and product improvement.

R (range) The highest value minus the lowest value for a dataset. Range is
used as a rough indication of dispersion.
RAM (reliability, availability, and maintainability) The three parameters used


to determine uptime.
RBD (reliability block diagram) A static analysis tool that uses a block
diagram to portray and analyze the reliability relationship of components and
subsystems for a system.
R&D (research and development) An early business phase where products
and services are conceived and designed.
ROI (return on investment) The profit from an investment as a percentage of
the amount invested.
RPN (risk priority number) In a failure mode effects analysis (FMEA), the
product of severity, occurrence, and detection values used to prioritize failure
modes.
RR (repeatability and reproducibility) The two parameters used to determine
the precision of a measurement system.
RSM (response surface methodology) An experimental design method that is
used for nonlinear optimization and process characterization.
RTR (rolled throughput rate) A measure of the true effectiveness of a process
using the natural antilog, e^(-TDPU).
RTY (rolled throughput yield) Another name for rolled throughput rate
(RTR).

SME (subject matter expert) A person who possesses detailed and in-depth
knowledge about a specific subject, process, product, or service.
SMED (single-minute exchange of die) An operational management strategy
that increases the efficiency of production changeover.
SPC (statistical process control) Graphical methods used to monitor the
effectiveness of processes.
SS (sum of squares) Numerator of the variance equation.
SU (setup) The time required to prepare a process for operation.

TAAF (test, analyze, and fix) A methodology used in conjunction with ADT/
HALT to determine the best redesign activities to increase reliability and
robustness.
Takt time The amount of time to produce one unit per customer demand. Takt
time is calculated by dividing total available working time by customer
demand. Customer demand is designated in terms of a number of units.
TCT (theoretical cycle time) Cycle time based on engineering principles and/
or secondary data.
TDPU (total defects per unit) The total number of defects divided by the total
number of units.
TPM (total productive maintenance) A production maintenance program that


utilizes reliability-centered maintenance principles to ensure that production
equipment is at maximum uptime and that corrective maintenance is
minimized.

UCL (upper control limit) The top control line on a control chart placed three
sigmas above the control chart mean.
UPC (unit production cost) The per-unit production cost.
USL (upper specification limit) The upper specification limit drawn on a
control chart.

VA (value added) A process activity that contributes to achieving customer


requirements or expectations and for which the customer is willing to pay.
VOC (voice of the customer) A complete summary of all stated and nonstated
customer requirements and expectations.
VOCT (voice of the customer tables)

VOCT part 1: A tool used to systematically capture and characterize design


requirements in terms of both stated and nonstated customer require-
ments.
VOCT part 2: A tool used by designers to systematically describe products
and services in terms of specific attributes and performance requirements.

VRP (variability reduction process) The procedures and techniques aimed at


increasing the reliability of a process and reducing the variability of
manufactured units.
VSM (value stream map) A special-case process map that is inclusive and
comprehensive in scope that quantifies needed and unneeded activities
according to customer requirements and expectations.

WBS (work breakdown structure) A method used in project management to


partition tasks into manageable segments.
WIP (work in progress) The units that are in the production process.

Y (yield) The number of pieces out divided by the number of pieces in.

BIBLIOGRAPHY

Altshuller, Genrich. The Innovation Algorithm. Massachusetts: Technical Innovation


Center, Inc., 1999.
Abegglen, James C., and George Stalk Jr. Kaisha: The Japanese Corporation. New
York: Basic Books, Inc., 1985.
Bernstein, Albert J., and Sydney Craft Rozen. Dinosaur Brains. New York: John Wiley
& Sons, 1989.
Benbow, Donald W., and T. M. Kubiak. The Certified Six Sigma Black Belt. Wisconsin:
ASQ Quality Press, 2005.
Beyer, William H. Handbook of Tables for Probability and Statistics, 2d ed. CRC Press,
1991.
Bonoma, Thomas V. The Marketing Edge: Making Strategies Work. New York: Free
Press, 1985.
Bossert, James L. Quality Function Deployment: A Practitioner’s Approach. Milwaukee:
Quality Press, 1991.
Bothe, Davis R. Measuring Process Capability. New York: McGraw-Hill, 1997.
Brassard, Michael. The Memory Jogger Plus+™. Methuen, MA: Goal/QPC, 1989.
Breyfogle, Forest W., Implementing Six Sigma. New York: John Wiley & Sons, 1999.
Breyfogle, Forest W., James M. Cupello, and Becki Meadows. Managing Six Sigma.
New York: John Wiley & Sons, 2001.
Brownlee, K. A. Statistical Theory and Methodology in Science and Engineering, 2d ed.
New York: John Wiley & Sons, 1967.
Byham, Dr. William C., with Jeff Cox. Zapp! The Lightning of Empowerment. Pitts-
burgh: Development Dimension International Press, 1989.
Creveling, C. M., Tolerance Design. Canada: Addison-Wesley Longman, Inc., 1997.
Creveling, C. M., Lynne Hambleton, and Burke McCarthy. Six Sigma for Marketing
Processes. New Jersey: Pearson Education Inc., 2006.
Creveling, C. M., J. L. Slutsky, and D. Antis, Jr., Design for Six Sigma. New Jersey:
Pearson Education, Inc., 2003.
Daetz, Doug, Bill Barnard, and Rick Norman. Customer Integration. Canada: John
Wiley & Sons, 1995.
Data Myte Corporation. Data Myte Handbook. Minnetonka, MN: Data Myte Corpora-
tion, 1993.
Deming, W. Edwards. Out of the Crisis. Boston: Massachusetts Institute of Technology,
1986.
———. Quality, Productivity, and Competitive Position. Boston: Massachusetts Insti-
tute of Technology, 1986.
Eckes, George. Making Six Sigma Last. Canada: John Wiley & Sons, 2001.
———. Six Sigma Team Dynamics. Canada: John Wiley & Sons, 2003.
Eureka, William E., and Nancy E. Ryan. The Customer-Driven Company: Managerial
Perspectives on QFD. Dearborn, MI: ASI Press, 1988.
Feigenbaum, Armand V. Total Quality Control. New York: McGraw-Hill, 1983.
Feld, William M. Lean Manufacturing. New York: St. Lucie Press, 2001.
Fox, Ronald J., with James L. Field. The Defense Management Challenge. Boston:
Harvard Business School Press, 1988.
Goldratt, Eliyahu M. Critical Chain. Massachusetts: North River Press, 1997.
———. It’s Not Luck. Massachusetts: North River Press, 1994.
———. Theory of Constraints. Croton-on-Hudson, NY: North River Press, 1990.
Goldratt, Eliyahu M., and Jeff Cox. The Goal. Croton-on-Hudson, NY: North River
Press, 1984.
Grant, Eugene L., and Richard S. Leavenworth. Statistical Quality Control, 6th ed. New
York: McGraw-Hill, 1988.
Hall, Robert W. Zero Inventories. Homewood, IL: Dow Jones-Irwin, 1993.
———. Attaining Manufacturing Excellence. Homewood, IL: Dow Jones-Irwin, 1987.
Hambleton, Lynne. Treasure Chest of Six Sigma. New Jersey: Pearson Education, Inc.,
2008.
Harrington, James. The Improvement Process—How America’s Leading Companies Im-
prove Quality. Milwaukee: Quality Press, 1987.
Hayes, Robert H., and Steven C. Wheelwright. Restoring Our Competitive Edge:
Competing through Manufacturing. New York: John Wiley & Sons, 1984.
Henderson, Bruce A., and Jorge L. Larco. Lean Transformation. Virginia: The Oaklea
Press, 1999.
Hesselbein, Frances, Marshall Goldsmith, and Richard Beckhard. The Leader of the Fu-
ture. New York: Drucker Foundation, 1996.
Hirano, Hiroyuki. Five Pillars of the Visual Workplace. New York: Productivity Press,
1995.
Hogg, Robert V., and Allen T. Craig. Introduction to Mathematical Statistics, 3d ed.
New York: Macmillan, 1970.
Hudiberg, John J. Winning with Quality: The FPL Story. White Plains, NY: Quality Re-
sources, 1991.
Imai, Masaaki. Kaizen. New York: Random House, 1986.
Ishikawa, Kaoru. Trans. John H. Loftus. Introduction to Quality Control. Tokyo: Juse
Press Ltd., 1994.
———. Guide to Quality Control. White Plains, NY: Quality Resources, 1982.
———. Trans. D. J. Lu. What Is Total Quality Control. Englewood Cliffs, NJ: Prentice-
Hall, 1985.
Jablonski, Joseph R. Implementing Total Quality Management: Competing in the Nine-
ties, Albuquerque: Technical Management Consultants, 1991.
Jackson, Harry K., and Normand L. Frigon. Achieving the Competitive Edge. New York:
John Wiley & Sons, 1996.
———. Fulfilling Customer Needs. New York: John Wiley & Sons, 1998.
———. The Leader. San Diego: M.A.F. Enterprises, 2003.
Juran, J. M. Managerial Breakthrough. New York: McGraw-Hill, 1964.
———. Juran on Planning for Quality. New York: Free Press, 1988.
Juran, J. M., and F. M. Gryna, Jr. Quality Planning and Analysis. New York: McGraw-
Hill, 1980.
Kaplan, Stan. An Introduction to TRIZ. Ideation International Inc., 1996.
Karatsu, Hajime. TQC Wisdom of Japan: Managing for Total Quality Control. Cam-
bridge, MA: Productivity Press, 1988.
Kececioglu, Dimitri. Reliability Engineering Handbook, 2 vols. Englewood Cliffs, NJ:
Prentice-Hall, 1991.
Kepner, Charles H., and Benjamin B. Tregoe. The New Rational Manager. Princeton, NJ:
Kepner-Tregoe, 1981.
Khazanie, Ramakant. Elementary Statistics in a World of Applications. Santa Monica:
Goodyear Publishing Co., 1979.
Kiemele, Mark J., Stephen R. Schmidt, and Ronald J. Berdine. Basic Statistics. Colorado:
Air Academy Press, 2000.
King, Bob. Better Designs in Half the Time: Implementing QFD in America. Methuen,
MA: Goal/QPC, 1987.
———. Hoshin Planning: The Developmental Approach. Methuen, MA: Goal/QPC,
1989.
Kushel, Gerald. Reaching the Peak Performance Zone. New York: AMACOM, 1994.
Lareau, William. Office Kaizen. Wisconsin: American Society for Quality, 2003.
Lewis, James P. Project Leadership. New York: McGraw-Hill, 2003.
Levin, Richard. Statistics For Management, 2d ed. Englewood Cliffs, NJ: Prentice-Hall,
1981.
Liker, Jeffrey K. Becoming Lean. Oregon: Productivity, Inc., 1998.
———. The Toyota Way. New York: McGraw-Hill, 2004.
Liker, Jeffrey K., and David Meier. The Toyota Way Fieldbook. New York: McGraw-
Hill, 2006.
Li, Jerome C. R. Statistical Inference. Ann Arbor, MI: Edwards Brothers, 1964.
Lubben, Richard. Just-in-Time Manufacturing: An Aggressive Manufacturing Strategy.
New York: McGraw-Hill, 1988.
Marsh, S., J. W. Moran, S. Nakui, and G. Hoffherr. Facilitating and Training in Quality
Function Deployment. Methuen, MA: Goal/QPC, 1991.
Mizuno, Shigeru. Management for Quality Improvement: The 7 New Quality Tools.
Cambridge, MA: Productivity Press, 1988.
Moubray, John. Reliability-Centered Maintenance. New York: Industrial Press Inc.,
1992.
Nadler, Gerald, and Shozo Hibino. Breakthrough Thinking. California: Prima Publish-
ing, 1990.
Newbold, Robert C. Project Management in the Fast Lane. Massachusetts: St. Lucie
Press, 1998.
O’Connor, Patrick D. T. Practical Reliability Engineering. London: John Wiley & Sons,
2002.
Ohno, Taiichi. Toyota Production System. New York: Productivity Press, 1988.
Ohmae, Kenichi. The Mind of the Strategist: The Art of Japanese Business. New York:
McGraw-Hill, 1982.
Ouchi, William G. The M-Form Society. Reading, MA: Addison-Wesley, 1984.
———. Theory Z. New York: Avon Books, 1981.
Peters, Tom. Thriving on Chaos. New York: Knopf, 1988.
ReVelle, Jack B. The New Quality Technology: An Introduction to Quality Function
Deployment (QFD) and the Taguchi Methods. Los Angeles: Hughes Aircraft Co.,
1990.
———. Quality Essentials. Wisconsin: Quality Press, 2004.
ReVelle, Jack B., Harry K. Jackson, and Normand L. Frigon. From Concept to Customer.
New York: Van Nostrand Reinhold, 1995.
Robson, George D. Continuous Process Improvement: Simplifying Work Flow Systems.
Westport, CT: Free Press, 1991.
Rosander, A. C. The Quest for Quality in Services. Milwaukee: Quality Press, 1989.
Ross, Phillip J. Taguchi Techniques for Quality Engineering. New York: McGraw-Hill,
1988.
Rubinstein, Moshe F. Tools for Thinking and Problem Solving. Englewood Cliffs, NJ:
Prentice-Hall, 1986.
Rubinstein, Moshe F. and Kenneth Pfeiffer. Concepts in Problem Solving. Englewood
Cliffs, NJ: Prentice-Hall, 1975.
Ryan, Thomas P. Statistical Methods for Quality Improvement. New York: John Wiley
& Sons, 1989.
Sage, Andrew P., and William B. Rouse. Handbook of Systems Engineering and Man-
agement. New York: John Wiley & Sons, 1999.
Savage, Charles M. 5th Generation Management. Bedford, MA: Digital Press, 1990.
Sanders, Donald H., A. F. Murphy, and Robert J. Eng. Statistics: A Fresh Approach,
2d ed. New York: McGraw-Hill, 1980.
Saaty, Thomas L. Decision Making for Leaders: The Analytic Hierarchy Process for
Decisions in a Complex World. Pittsburgh: University of Pittsburgh, 1988.
Scherkenbach, William W. Deming’s Road to Continual Improvement. Knoxville, TN:
SPC Press, 1991.
Schonberger, Richard J. Building a Chain of Customers. Westport, CT: Free Press, 1990.
Shingo, Shigeo. A Revolution in Manufacturing. Oregon: Productivity Press, 1985.
Shetty, Y. K., and V. M. Buehler, Productivity and Quality Through People. Westport,
CT: Free Press, 1985.
Smith, Dick, Jerry Blakeslee, and Richard Koonce. Strategic Six Sigma. New Jersey:
John Wiley & Sons, 2002.
Smith, Preston G., and Donald G. Reinertsen. Developing Products in Half the Time.
New York: Van Nostrand Reinhold, 1991.
Stephanson, S. E., and F. Spiegl, The Manufacturing Challenge from Concept to Pro-
duction. New York: Van Nostrand Reinhold, 1992.
Steudel, Harold J., and Paul Desruelle. Manufacturing in the Nineties. New York: Van
Nostrand Reinhold, 1992.
Sundararajan, C. Guide to Reliability Engineering. New York: Van Nostrand Reinhold,
1991.
Tenner, Arthur R., and Irving J. DeToro. Total Quality Management: Three Steps to
Continuous Improvement. Reading, MA: Addison-Wesley, 1992.
Thurow, Lester. Head to Head. New York: William Morrow and Co., 1992.
Tregoe, Benjamin B., John W. Zimmerman, Ronald A. Smith, and Peter M. Tobia.
Vision in Action: Putting a Winning Strategy to Work. New York: Simon and Schus-
ter, 1990.
Turban, Efraim, and Jack R. Meredith. Fundamentals of Management Science. Plano,
TX: Business Publications, 1981.
Turino, Jon. Managing Concurrent Engineering: Buying Time to Market. New York:
Van Nostrand Reinhold, 1992.
Walton, Mary. The Deming Management Method. New York: Putnam Publishing Group,
1986.
Welch, Jack, and Suzy Welch. Winning. New York: HarperCollins, 2005.
Weinberg, Gerald M. Becoming a Technical Leader: An Organic Problem-Solving Ap-
proach. New York: Dorset House Publishing, 1986.
Yang, Kai, and Basem El-Haik. Design for Six Sigma. New York: McGraw-Hill, 2003.

GLOSSARY

5S Traditional Lean manufacturing approach to cleaning up, organizing, and
standardizing work. Originally, five Japanese words starting with the letter S,
translated as several combinations of English words, one set of which is:

• Sort (organize)
• Stabilize (eliminate variations)
• Shine (clean)
• Standardize (make standard the best known way to do something)
• Sustain (consciously continue to work the previous four items)

Accuracy The deviation of measures or observed values from the true value.
Action item A formally assigned requirement to accomplish something with-
in an assigned time frame.
Action plan A time-phased schedule for executing tasks, events, projects, and
‘‘just-do-its’’ to accomplish a stated goal.
Activity-based costing A management accounting system that assigns cost to
products based on the amount of resources used (including floor space, raw
materials, machine hours, and human effort) in order to design, order, or
make a product.
Advanced planning system (APS) Computer program that seeks to analyze
and plan a logistics, manufacturing, or maintenance schedule to optimize re-
source use to achieve desired results.
Advanced statistical methods A term used by statisticians, members of
secret-handshake societies, and consultants to convince businesspeople that
they cannot survive without them.
Alias The alternative factor(s) that could cause an observed effect due to
confounding.
Analysis of variance (ANOVA/AOV) In design of experiments, a method of
investigation that determines (1) how much each factor contributes to the
overall variation from the mean and (2) the amount of variation produced by
a change in levels and the amount due to random error.
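The decomposition behind ANOVA can be illustrated with a short Python sketch; the three groups of measurements below are hypothetical and serve only to show how the sums of squares and the F ratio are formed.

# Minimal one-way ANOVA sketch (illustrative data): how much of the total
# variation is explained by the factor (treatment) versus random error.
import numpy as np

groups = [np.array([12.1, 11.8, 12.3]),   # factor level A
          np.array([12.9, 13.1, 12.7]),   # factor level B
          np.array([11.5, 11.9, 11.7])]   # factor level C

grand_mean = np.mean(np.concatenate(groups))
ss_treatment = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_treatment = len(groups) - 1
df_error = sum(len(g) - 1 for g in groups)
f_ratio = (ss_treatment / df_treatment) / (ss_error / df_error)
print(f"SS treatment={ss_treatment:.3f}  SS error={ss_error:.3f}  F={f_ratio:.2f}")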
Attribute data (pass/fail) Qualitative data that can be counted binomially. At-
tribute data includes the presence or absence of specific characteristics, such
as conformance to a specification, or pass/fail on a go/no-go gage.
Jidohka A Japanese word that describes a feature of the Toyota Production
System whereby a machine is designed to stop automatically whenever a de-
fective part is produced.
AWP (awaiting parts) A special status for an item held up in a repair process
while it waits for parts needed to complete the repair.
Balanced design An experimental design that has an equal distribution of lev-
els within each factor.
Balanced scorecard A strategic management system used to drive perform-
ance and accountability throughout the organization. The scorecard balances
traditional performance measures with more forward-looking indicators in
four key dimensions:

• Financial
• Integration/operational excellence
• Employees
• Customers

Baseline measure A statistic or numerical value for the current performance
level of a process or function. A baseline is taken before improvement activi-
ties are begun to accurately reflect the rate of improvement or new level of
attainment of the performance being measured.
Binomial distribution A discrete probability distribution for attribute data
that is applied to pass/fail and go/no-go attribute data.
Block A stratum of data that is homogeneous.
Blocking A technique to eliminate nuisance factors by setting them as extra
factors in the experiment, often using up the higher-order interaction col-
umns to save resources.
Buffer stock Maintaining some small portion of finished products/goods to
temporarily satisfy variations in demand.
Business case A written document describing why an organization is planning
to implement a process improvement initiative, including a goal and objec-
tives that are specific and measurable based on cost, performance, or schedule.
Business value Not identified by the customer, but required to satisfy some
other need (e.g., policy, law or regulation, operational security).
Capacity constraint Anything that hinders production or process flow (the
weak link in the chain).
Catch ball A participative approach to decision making. Used in policy de-
ployment to communicate across management levels when setting annual
business objectives. The analogy to tossing a ball back and forth emphasizes
the interactive nature of policy deployment.
Cause-and-effect diagram A comprehensive tool used to focus problem
solving. This is also called a fishbone diagram or an Ishikawa diagram.
Central line The line on a control chart that represents the average or median
value of the items being plotted.
Characteristic A distinguishing feature of a process or its output on which
variables or attributes data can be collected.
Checkpoints and control points Points in a process at which measurements
are taken to evaluate progress.
Common cause A source of variation that affects all the individual values of
the process output being studied; in control chart analysis, it appears as part
of the random process variation.
Comparative experiment An experiment whose objective is to compare the
treatments rather than to determine absolute values.
Confirmation experiment A designed experiment that defines improved con-
ditions of product/process design. An experimental run at these conditions is
intended to verify the experimental predictions.
Confounding Where two factors (or interactions) are inseparable in regard to
their effect on the response. Used to advantage by confounding high-order
interactions that have no practical value.
Continuous flow The mechanism to transform a product, service, or informa-
tion, by which the request for the item is triggered by a customer demand,
and the production process creates the needed item without delay or inven-
tory in just the right quantity and delivered at the right time to satisfy the
triggered demand.
CMI (continuous measurable improvement) A comprehensive philosophy
of operating that asserts there are always ways to improve and better meet
the needs of the customer.
Control chart A graphic representation of a characteristic of a process, show-
ing plotted values of some statistic gathered from that characteristic, a central
line, and control limits.
Control limit A line on a control chart used as a basis for judging the signifi-
cance of the variation from subgroup to subgroup. Variation beyond a control
limit is evidence that special causes are affecting the process. Control limits
are calculated from process data and are not to be confused with engineering
specifications.
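As a rough illustration of limits calculated from process data, the following Python sketch computes X-bar chart limits for hypothetical subgroups, using the conventional Shewhart constant A2 = 0.577 for subgroups of five; the measurements are invented for the example.

# Sketch of control limit calculation for an X-bar chart (illustrative data).
import numpy as np

subgroups = np.array([[5.1, 4.9, 5.0, 5.2, 4.8],
                      [5.0, 5.1, 4.9, 5.0, 5.1],
                      [4.8, 5.0, 5.2, 4.9, 5.0]])

xbar = subgroups.mean(axis=1)                                   # subgroup averages
rbar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()   # average range
center = xbar.mean()
A2 = 0.577                                                      # constant for n = 5
ucl, lcl = center + A2 * rbar, center - A2 * rbar
print(f"CL={center:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")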
Corrective action The action taken by an identified group to reverse a down-
ward trend in process metrics.
Current state The current or as-is process—how it actually works in terms of
operations, matériel, and information flow.
Customer Someone for whom a product is made or a service is performed.
There are internal and external customers. The external customer is the end
user of the products or services. Internal customers are those who take the
results of internal process as an input for their process.
Cross-functional management The interdepartmental coordination required
to realize the policy goals of policy deployment.
Degrees of freedom (df) The number of independent values associated with a
given factor (usually the number of factor levels minus 1).
Design of experiments The planned, structured, and organized observation of
two or more input/independent variables (factors) and their effect on the out-
put/dependent variable(s) under study.
DMSMS Diminished manufacturing sources and material shortages—an in-
clusive term for the general problem of parts becoming unavailable by be-
coming obsolete or through suppliers going out of business or leaving a
particular market.
Just-do-it A desired change to the current state that can be done quickly and
easily—usually within days.
Effect The change in level of the response variable due to the change in a
factor; the average response at the high level of a factor minus the average
response at the low level of a factor. There are both main effects (due to sin-
gle factors) and interaction effects.
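The following Python sketch illustrates the calculation for a hypothetical two-factor, two-level experiment; the responses are invented for the example.

# Sketch of main-effect and interaction calculations for a 2 x 2 experiment
# (one observation per treatment combination; data are illustrative).
import numpy as np

# columns: factor A level, factor B level (coded -1/+1), response
runs = np.array([[-1, -1, 20.0],
                 [+1, -1, 30.0],
                 [-1, +1, 25.0],
                 [+1, +1, 45.0]])
A, B, y = runs[:, 0], runs[:, 1], runs[:, 2]

main_A = y[A == 1].mean() - y[A == -1].mean()        # average response, high A minus low A
main_B = y[B == 1].mean() - y[B == -1].mean()        # average response, high B minus low B
interaction_AB = y[A * B == 1].mean() - y[A * B == -1].mean()
print(main_A, main_B, interaction_AB)                # 15.0, 10.0, 5.0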
Experiment A planned set of operations that leads to a corresponding set of
observations.
Experimental condition A specific combination of factors and levels to be
evaluated in a designed experiment.
Experimental error Failure of two identical treatments to yield the same
response.
Experimental run A combination of experimental conditions required to pro-
duce experimental results; a treatment combination; a cell in the design.
Experimental unit One item to which a single treatment is applied in one
replication of the basic experiment.
Five whys Asking why five times whenever a problem is encountered. Re-
peated questioning helps identify the root cause of a problem so that effective
countermeasures can be developed and implemented.
Flow The sequential, coordinated movement of information, product, or ser-
vice through a process.
Future state A vision of the optimum operating environment with new/im-
proved processes in place.
F test A means for determining statistical significance of a factor by compar-
ing calculated F values to those contained in an F table.
F value A ratio of the factor effect to the random error effect within a design-
ed experiment.
Factor A processing variable whose level may change the response variable; a
method, material, machine, person, environment, or measurement variable
changed during the experiment in an attempt to cause change in a response
variable; independent variable.
Factorial experiment An experiment in which at least one experimental ob-
servation is made for each distinct treatment combination.
Fractional factorial An abbreviated version of a full factorial designed ex-
periment that reduces the minimum number of experimental runs but introdu-
ces confounding.
Full factorial A balanced, designed experiment that tests each possible com-
bination of levels that can be formed from the input/independent factors.
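For illustration, the runs of a full factorial can be enumerated in a few lines of Python; the factor names and levels below are hypothetical.

# Enumerating the runs of a full factorial design with itertools.product
# (factor names and levels are illustrative).
from itertools import product

factors = {"temperature": [150, 175, 200],   # three levels
           "pressure":    [30, 50],          # two levels
           "catalyst":    ["X", "Y"]}        # two levels

runs = list(product(*factors.values()))      # 3 * 2 * 2 = 12 treatment combinations
for run in runs:
    print(dict(zip(factors.keys(), run)))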
Homoscedasticity Constant common cause variation across all levels of all
factors.
Ideal state A vision of the future state that depicts what the system should
look like if there were no constraints.
Interaction effect The effects on the output/dependent response variable caus-
ed by the combination of two or more factors, independent of their individual
effects. An interaction exists between two or more factors if the response
curve of one or more factors is dependent upon the level of other factor(s).
Just-in-time A strategy for inventory management in which raw materials
and components are delivered from the vendor or supplier immediately be-
fore they are needed in the transformation process.
Kaizen A term meaning ‘‘improvement.’’ Moreover, it means continuing im-
provement in personal life, home life, social life, and working life. When
applied to the workplace, it means continuing improvement involving every-
one—managers and workers alike.
Kanban A term that means ‘‘signal.’’ It is one of the primary tools of a just-
in-time system. The kanban signals a cycle of replenishment for production
and materials in order to maintain an orderly and efficient flow of materials.
It is usually a printed card that contains specific information such as part
name, description, quantity, and so on.
Levels of a factor The various values of a factor considered in the experiment
are its levels.
Lead time Interval of time between the established need for something and its
successful delivery.
Lean A systematic approach to identify waste and focus activities on elimi-
nating it.
Level scheduling Planning an output so that the processing of different items
is evenly distributed over time.
Main effect The average effect of a factor is the main effect of the factor.
Mean The average of values in a group of measurements.
Mean square (V) The average deviation from the target value or nominal
specification.
Mission The mission is a concise, unambiguous, and measurable description
of the organization’s role in the overall objectives of the enterprise.
Monument Part of a process that cannot easily be altered, whether because of
physical constraints or legal or regulatory requirements.
Muda A Japanese term meaning ‘‘waste.’’
Nested design An experimental design used to estimate the components of
variation at various stages of a sampling plan or analytical test method.
Noise factor Any uncontrollable factor that causes a product’s quality charac-
teristic to vary. There are three types of noise:

• Noise due to external causes, such as temperature, humidity, and so forth
• Noise due to internal causes, such as wear and deterioration
• Noise due to part-to-part variation

Nonconforming unit A unit that does not conform to a specification or
standard. Nonconforming units can also be called discrepant or defective
units.
Nonconformities Specific occurrences of a condition that do not conform to
specifications or other inspection standards; sometimes called discrepancies
or defects.
Non-value-added Any activity that takes time, matériel or space, but does not
add value to the product or service from the customer’s perspective.
Normal distribution A continuous, symmetrical, bell-shaped frequency
distribution.
Nuisance A factor that affects the process but is not of interest in this experi-
ment (e.g., a difference in raw materials that we are already managing).
On-line quality control Methods that focus on production, corrective actions,
and process control. Includes the use of the seven QC tools.
Optimal condition That combination of factors and levels that produces the
most desirable results.
Orthogonal array A matrix of numbers arranged in rows and columns. Each
row represents the state of the factors in a given experiment. Each column
represents a specific factor, variable, or condition that can be changed be-
tween experimental runs. The array is orthogonal because the effects of the
various factors resulting from an experiment can be separated, one from the
other.
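The standard L4 array for three two-level factors illustrates the idea; the short Python check below (an illustration, not from the text) confirms the balance property that makes the array orthogonal.

# An L4 orthogonal array for three two-level factors (standard Taguchi layout).
# The check below shows that every pair of columns contains each level
# combination the same number of times.
from itertools import combinations
from collections import Counter

L4 = [[1, 1, 1],
      [1, 2, 2],
      [2, 1, 2],
      [2, 2, 1]]

for c1, c2 in combinations(range(3), 2):
    counts = Counter((row[c1], row[c2]) for row in L4)
    print(f"columns {c1} and {c2}: {dict(counts)}")   # each level pair appears once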
Pareto diagram An important tool for problem solving that involves ranking
all potential problem areas or sources of variation according to their contribu-
tion to cost or to total variation. Typically, a few causes account for most of
the cost (or variation), so problem-solving efforts are best prioritized to con-
centrate on the ‘‘vital few’’ causes, temporarily ignoring the ‘‘trivial many.’’
PDCA (plan-do-check-act) A process based on the scientific method for ad-
dressing problems and opportunities.
Percent contribution The amount of influence each factor contributes to the
variation in the experimental results.
Performance measure A measurable characteristic of a product, service, pro-
cess, or operation the organization uses to track and improve performance.
Point of use (POU) The condition in which all supplies are within arm’s
reach and positioned in the sequence in which they are used to prevent hunt-
ing, reaching, lifting, straining, turning, or twisting.
Poisson distribution A discrete probability distribution for attributes data that
applies to nonconformities and underlies the c and u control charts.
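As an illustration of how the Poisson assumption is used, the sketch below computes c-chart limits for hypothetical defect counts (the Poisson variance equals its mean, which gives the familiar three-sigma limits).

# Sketch of c-chart limits based on the Poisson distribution (illustrative data).
import numpy as np

defects_per_unit_inspected = np.array([4, 7, 3, 5, 6, 2, 8, 5, 4, 6])
c_bar = defects_per_unit_inspected.mean()
ucl = c_bar + 3 * np.sqrt(c_bar)
lcl = max(0.0, c_bar - 3 * np.sqrt(c_bar))   # lower limit cannot be negative
print(f"c-bar={c_bar:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")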
Policy deployment The process of using a waterfall of matrices to deploy a
policy throughout the enterprise to create collaborative and supportive organ-
izational missions, goals, and objectives.
Precision The closeness of agreement among repeated measurements. Precision is
related to repeatability in terms of the deviation of a group of observations
from their mean value.
Process The combination of people, equipment, materials, methods, and envi-
ronment that produces output—a given product or service.
Production leveling Configuring the workload and output of a workstation,
through balancing and rebalancing, so that the workstation produces items at
a rate close to takt time, in an evenly distributed mix over time, with minimal
slack or nonproductive time.
Product life cycle management (PLCM) A technology for managing the
entire life cycle of a product, from initial development through end-of-life
management (EOL).
Process spread The extent to which the distribution of individual values of
the process characteristic varies; often shown as the process average plus or
minus some number of standard deviations (e.g., X-bar ± 3σ).
Product families Items of like kind or units linked to a specific material or a
common end product.
Pull A system by which nothing is produced by the upstream supplier until
the downstream customer signals a need.
Pure sum of squares A value not used in classical/traditional ANOVA/AOV,
but used in Taguchi-style ANOVA/AOV to account for the degrees of free-
dom and the mean square error when determining the percent contribution.
QC (quality control) A system of actions to produce goods or services that
satisfy customer requirements.
Quality assurance Action taken to ensure that the quality of a product or ser-
vice is satisfactory and reliable.
Quality deployment A technique to deploy customer requirements into de-
sign characteristics and deploy them into subsystems (e.g., as components,
parts, and production processes).
Quality loss function Parabolic approximation of the financial loss that results
when a performance characteristic deviates from its best (or target) value.
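A minimal sketch of the quadratic loss function follows; the cost coefficient k, the target, and the measurements are hypothetical.

# The quadratic (Taguchi) loss function: loss grows with the square of the
# deviation from the target value. The cost constant k below is illustrative.
def quality_loss(y, target, k=2.5):
    """Estimated loss (in dollars) for one unit with measured value y."""
    return k * (y - target) ** 2

for y in (10.0, 10.2, 10.5, 11.0):           # target = 10.0
    print(y, quality_loss(y, target=10.0))   # approx. 0.0, 0.1, 0.62, 2.5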
Quantitative Levels of the variable may be changed on some underlying con-
tinuous scale (e.g., pressure).
Quasi-interaction effect A crude estimate of an interaction from a screening
design.
Randomization Chance assignment of experimental units to treatment com-
binations so that any systematic trends do not bias the results.
Randomness A condition in which individual values are not predictable, al-
though they may come from a definable distribution.
Random sample A sample selected such that each individual in the popula-
tion has an equal chance of being selected.
Range The difference between the highest and lowest values in a subgroup.
Reflection A copy of the original design with the plus (+) and minus (−)
signs transposed. Retains attractive features when combined with the original
design. Allows main-effect estimates clear of two-factor interactions.
Repeat An additional experimental run that can be used to estimate part of the
experimental error from the range between repeats but that does not include
all sources of experimental error.
Repeatability The measurement variation obtained when one person mea-
sures the same dimension or characteristic several times with the same gage
or test equipment.
Replicate An additional experimental run that can be used to estimate exper-
imental error from the range between replicates.
Response The result of a trial based on a given treatment combination.
Response variable A characteristic whose distribution you wish to change
(e.g., its mean, variance, or shape); dependent variable; quality output char-
acteristic; usually one of the key variables.
Results-oriented management This style of management is well established
in the West and emphasizes controls, performance, results, rewards (usually
financial), or the denial of rewards—and even penalties.
Return on investment (ROI) The ratio of the predicted or realized benefit (gain
or savings) that will result from some action to the cost of completing that
action (the investment).
Robust Product or process that functions with reduced variability in spite of
diverse and changing environmental conditions.
Run chart A graphic representation of a characteristic of a process showing
plotted values of some metric gathered from the process and the mean of the
values.
Sample In process-control applications, a synonym for subgroup.
Setup time Also called changeover time. The time it takes to change a system
or subsystem from making one product to making the next.
Sigma (σ) The Greek letter used to designate a standard deviation.
Signal factor With dynamic characteristics, a factor that controls responses in
a specified or designed manner. In measurement studies, a factor used to gen-
erate different measurement results.
Signal-to-noise (S/N) ratio S/N is a metric used to project (from experimental
results) field performance. S/N is calculated in decibels and depends on
the type of characteristic being considered.
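The forms most often quoted for these ratios (not spelled out in the entry above) can be sketched as follows; the measurements are hypothetical.

# Commonly used Taguchi S/N ratios, in decibels (illustrative data; these are
# the standard textbook forms, assumed here rather than taken from the text).
import numpy as np

y = np.array([9.8, 10.1, 10.0, 9.9, 10.2])

sn_nominal = 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))   # nominal-the-best
sn_smaller = -10 * np.log10(np.mean(y ** 2))                # smaller-the-better
sn_larger  = -10 * np.log10(np.mean(1.0 / y ** 2))          # larger-the-better
print(sn_nominal, sn_smaller, sn_larger)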
Single-minute exchange of die (SMED) A detailed approach to reducing any
machine setup time to less than 10 minutes.
Single-piece flow The movement of a product or information, upon comple-
tion, one at a time through operations, without interruptions, backflow, or scrap.
Special cause A source of variation that is intermittent, unpredictable, and un-
stable; having an assignable cause. It is signaled by a point beyond the control
limits or a run or other nonrandom patterns of points within the control limits.
Stakeholder Person internal or external to an organization who has a stake in
the outcomes of an activity.
Standard work An agreed-upon set of work procedures that effectively com-
bines people, matériel, and machines to maintain quality, efficiency, safety,
and predictability.
Stability For control charts, the absence of special causes of variation; the
property of being in statistical control.
Stable process A process that is in statistical control.
Standard deviation A measure of the spread of a distribution.
Statistic A value calculated from or based on sample data (a subgroup average
or range), used to make inferences about the process that produced the output
from which the sample came.
Statistical control The condition describing a process from which all special
causes of variation have been eliminated and only common causes remain.
Statistical process control The use of statistical techniques to analyze a pro-
cess to take appropriate actions to achieve and maintain a state of statistical
control and to improve the process capability.
Subgroup One or more events or measurements used to analyze the perform-
ance of a process. Rational subgroups are usually chosen so that the variation
represented within each subgroup is as small as feasible for the process (rep-
resenting the variation from common causes), and any changes in the process
performance (special causes) will appear as differences between subgroups.
Supply chain management (SCM) Proactively directing the movement of
goods from raw materials to the finished product delivered to customers.
Takt time Takt is German for ‘‘beat’’ (as in the beat of music). In Enterprise
Excellence, takt time is the available production time divided by the rate of
customer demand. Takt time sets the pace of production to match the rate of
customer demand and becomes the heartbeat of the enterprise.
Total productive maintenance A set of techniques to ensure every machine
in a process is able to perform required tasks, including preventative mainte-
nance, corrective maintenance, maintenance prevention, and breakdown
maintenance.
Total value-added time The total time in a process during which the value of
the product going through the process to the customer is increased.
Throughput time The amount of time it actually takes a product, informa-
tion, or service to move through a process, including wait time.
Value A need the customer is willing to pay for, expressed in terms of a spe-
cific required product or service.
Value-added activities The parts of the process that add worth to the custom-
er’s product or service. To be considered value-added, the action must meet
all three of the following criteria:

• The customer must be willing to pay for this activity.
• It must be done right the first time.
• The action must somehow change the product or service in some manner.

Value stream The specific activities required to design, order, and provide a
specific product or piece of information, from concept to launch, order to
delivery, into the hands of the customer.
Value stream map Visual representation of all the activities occurring along
a value stream for a product or service.
Variability An aspect of an item or process that is likely to be unstable or
unpredictable.
Visual management Tools that allow one to visually determine whether a
process is proceeding as expected.
Waste Anything that adds cost or time without adding value. Waste is catego-
rized as:

• Overproduction
• Waiting
• Transportation
• Inappropriate processing
• Unnecessary inventory
• Unnecessary/excess motion
• Defects

Work in process (WIP) At any given moment, materials currently between
the start of a process and the end of the process.

INDEX

ABC material handling, 428 Control chart elements, 286
Achieving enterprise excellence, 12, Control charts analysis, 283, 327
13, 18 Cooperation, 59, 68
Affinity diagram, 109 Coordination, 59, 70
Analysis of variance, 329 Cost estimating and budgeting, 160,
Analyze tollgate, 167 174
ANOVA gage R&R method, 320 Cost per unit, 127
Attribute control charts, 298, 327 Cp index, 308
Availability rate, 448 Cpk index, 309
Customer requirements, 177
Bar chart, 156, 174 Customer satisfaction, 128
Baseline QMS requirements, 21 Cycle time reduction, 127
Bias, 312, 313 Cycle time, 396, 405, 414, 430, 436
Black belts, 139
Defect reduction, 127
c control chart, 303 Defects and defective units, 217
Cause and effect analysis, 109 Degrees of freedom, 335, 358
CDOV, 146, 164, 170, 184, 189 Deployment by pilot study, 12
Common cause variation, 284, 302 Deployment champions, 139
Communication, 31, 54, 59, 61 Deployment measurement, analysis
Communication, cooperation, and and reporting, 140
coordination, 59 Design development, 185, 198
Components of variation, 317 Design for Lean Six Sigma, 150, 164,
Concept development, 184, 191, 196 171
Confounding and aliases, 385 Design of experiments, 370
Continuous measurable improvement, Design qualification testing,
7, 9, 14, 437, 440 204, 206
Control and continuous measurable Design requirements, 182,
improvement, 438, 440, 448 196, 202

Design, 180 F critical, 332, 335, 383, 391
Developing Enterprise Excellence F ratio, 332, 358, 370, 383, 391
deployment plans, 136 Failure modes and effects analysis,
Development of products, services 212, 253, 263
and processes, 150, 171 First pass yield, 219, 221
DPM, 215, 240 Fisher’s F statistic, 331, 366, 370
FMEA, 212, 224, 253
Effective meetings, 52, 53 Fractional factorial experiments,
Enterprise Excellence deployment, 384
13, 14, 18, 79 Full factorial DOE summary, 383
Enterprise Excellence
implementation, 144 Graphic work instructions, 438, 443,
Enterprise Excellence infrastructure, 450
79, 93, 139 Green belts, 80, 100, 140
Enterprise Excellence maturity
assessment, 83, 94, 104, 140 HOQ, 182, 197, 207
Enterprise Excellence model, 7, 8, House of quality, 179, 186, 191
9, 14 Hypothesis testing, 330
Enterprise Excellence planning
toolkit, 108, 141 Improve/Lean tollgate, 168
Enterprise Excellence policies, Improving products, services and
guidelines and infrastructure, processes, 151, 172
136, 143 Interaction, 349
Enterprise Excellence project Interrelationship digraph, 109
decision process, 146, 171 Invent and innovate technology
Enterprise Excellence projects, tollgate, 164
145, 170 Inventing/innovating technology, 149,
Enterprise Excellence strategy, 95, 171
134, 143 Invention and innovation, 178
Enterprise Excellence, 3
Enterprise management system, Just in time, 396, 402, 416, 437
7, 8, 11
Enterprise senior review group, 80, Kanban, 396, 400, 417
94, 98, 136 Kano analysis, 186, 189, 210
Enterprise value stream mapping, Key roles, 46
243, 249
Equipment effectiveness, Labor time, 407, 430
447, 448 Law of Unintended
Evaluating the results, 339, 363 Consequences, 1
Excitement attributes, 191 Leadership principles, 30, 37
Exciters, 190 Leadership traits, 36, 41, 75
Expectors, 190 Leading and managing teams, 25
Experimental matrix, 371 Leading Enterprise Excellence, 20
Experimental run, 330, 371, Lean, 2, 5, 7
385 Levels, 372
Linear contrasts, 363, 393 One-way linear contrasts, 364
Linearity, 313 Optimize design phase, 201
Optimize technology tollgate, 165
Machine time, 407, 428 Optimize tollgate, 166
Main effect, 350 Optimize, 176, 201, 206
Management and operations plans, Organizational value stream mapping,
144, 169 250
Management system assessment, 84, Orthogonal array, 367, 374, 388
94, 136 Overcoming resistance to change, 20,
Management system principles, 54
23, 74
Management systems, 20, 23, 438, p control chart, 298, 303
439, 450 Percent contribution, 339, 362
Managing and leading Enterprise Performance rate, 448
Excellence, 20 Performance testing, 204
Manipulating and co-opting, 71 Planning the Enterprise Excellence
Market and customer research and project, 154, 173
communication, 4, 15, 16 Policies and guidelines, 80, 105,
Master black belts, 80, 100, 139 136
Matrix diagram, 122, 142 Process availability analysis, 396, 411
Mean squares, 332, 383, 391 Process capability analysis, 274, 307,
Measure tollgate, 167 328
Measurement by part, 317, 319 Process decision program chart, 120,
Measurement system requirements, 142
314 Process FMEA, 226, 253, 257, 265
Measurement systems evaluation Process identification, 227
(MSE), 311, 313, 328 Process mapping, 105, 141, 212, 225,
Measurement, analysis and knowledge 266
management, 4, 6, 16 Process measurement, 274, 292, 326
Milestone chart, 148, 156, 170 Process variation, 212, 214
Mistake proofing (poka-yoke), 438, Process walkthrough, 212, 251, 253,
443, 450 266
Mixed-model production, 396, 428, Process yield measures, 412
437 Product and service support, 8
Multivariate ANOVA, 349 Product FMEA, 253, 258, 263
Product service and process
Necessary non-value added/business design, 4, 14, 16, 19
value added activities, 244 Product, service and process
Negotiation and agreement, 58, 71, 74 commercialization, 16
Network diagrams, 156, 174 Project notebook, 169, 175
Non-value added activities, 244 Project selection matrix, 123
np control chart, 303, 304 Project selection, 184
Project sponsors, 80, 139
One piece flow, 396, 431 One-way ANOVA, 331, 366
One-way ANOVA, 331, 366 Pugh analysis, 199, 210
Quality function deployment (QFD), Sum of the squares total (SSTotal),
180, 198, 207, 210 351
Quality rate, 448 Sum of the squares treatment
(SSTreatment), 351
R charts, 285, 294, 327 Sum of the squares within (SSWithin),
Randomization, 376 343
Rapid improvement events, 439, 449 Sum of the squares, 333
Repeatability, 311, 315, 323 Sustain, 397, 399, 418
Repetitive manufacturing, 417
Research and technology Takt time, 396, 403, 415, 424, 430
development, 4, 14, 19 Team building, 45, 76
Response, 376 Team dynamics, 47, 51
Risk assessment, 180, 196 Team members, 45, 80, 110, 137
Robust design testing, 204, 207 Technology development, 178, 209
Robustness, 179, 202 The seven forms of waste, 396, 399,
Rolled throughput yield, 221 436
Routing analysis, 396, 405, 436 Tollgate reviews, 164, 175
RTY, 221, 222 Total productive maintenance
Transactional MSE, 323
Sample size (replication), 375 Treatment combination, 374, 379
Sampling strategy, 280 Treatment interactions, 359, 380
Sampling, 280, 304 Treatment main effects, 358, 381
Scheduling, 148, 154, 173 Treatment, 330
Secondary (passive) data, 277, 326 Two-way ANOVA, 340
Setup time, 407, 413, 464, 430 Two-way linear contrasts, 367
Shine, 397 Types of data, 274, 276, 285, 326
Sift, 396, 399 Types of FMEA, 253, 258
Single-minute exchange of die Types of processes, 228
(SMED), 438, 444, 450
6S, 396, 403, 436 u control chart, 305
Six Sigma, 2, 5, 7 Understanding and overcoming
Sort, 397, 399 resistance to change, 54
Sources of variability, 219
Spaghetti diagram, 396, 406 Value added activities, 244
Special cause variation, 277, 284, 305 Value stream analysis, 243, 249, 251
Stages of team development, 49, 50 Value stream mapping, 237, 243, 249
Stakeholder analysis, 105, 131, 143 Variable control charts, 283, 285, 296,
Standardize, 397, 419 327
Statistical process control, 274, 277, Variance, 278, 285, 303, 314
326, 438, 440 Variation, 212, 214, 221, 223, 276
Strategic planning, 4 Verify capability, 180, 185, 203
Stress testing/evaluation, 179, 204 Verify tollgate, 166
Sum of the squares error, 357 Verify, 178, 196, 203
Sum of the squares for interactions, Visual controls, 438, 441, 450
354, 356
Voice of the customer (VOC), 176, Work cell design, 396, 434
177, 209 Work content analysis, 396, 405, 436
Voice of the customer system, 4, 6, 8 Workable work, 396, 429, 437
VSM symbols, 245 Workload balancing, 396, 402, 430

Work activities level process map, 237 X-bar and R charts, 285, 288, 327
