
© ExpertRating Solutions

Six Sigma Green Belt Courseware

Contents

Chapter 1 Introduction to Six Sigma

1.1 Introduction to Six Sigma

1.2 Six Sigma Deployment Process

1.3 Six Sigma Implementation Process

1.4 The Six Sigma Toolkit

1.5 The Six Sigma Toolkit Continued...

1.6 Overview of DMADV or DFSS

Chapter 2 The Define Phase

2.1 The Define Phase

2.2 The Goal and Expected Benefits of the Project and Kano Model

2.3 Definition, Survey Construction and Margin of Error

2.4 Focus Groups and Critical-to-Quality Tree

2.5 Project Definition and Process Mapping

2.6 Process Mapping Continued...

2.7 The 7M Tools

2.8 Matrix Diagrams and Activity Network Diagrams

Chapter 3 The Measure Phase

3.1 The Measure Phase

3.2 Histograms and Probability Plots

3.3 Basic Process Capability and Process Variation



Chapter 4 The Analyze Phase

4.1 The Analyze Phase

4.2 Nature of Work and Flow of Work

4.3 Root Cause Analysis

4.4 Root Cause Analysis-Close

4.5 Scatter Diagram and Run Charts

4.6 Hypothesis Testing, T Tests and Chi-square Test

4.7 Analysis of Variance (ANOVA)

4.8 Quantifying the Opportunity

Chapter 5 The Improve Phase

5.1 The Improve Phase

5.2 The Improve Phase ...Continued

5.3 The Improve Phase ...Continued

Chapter 6 The Control Phase

6.1 The Control Phase

6.2 The Control Phase ...continued

6.3 Control Methods / Tools and Techniques for Control Planning

6.4 Control Methods / Tools and Techniques for Control Planning Continued...



Chapter 1 - Introduction to Six Sigma
1 Introduction to Six Sigma

Introduction

Six Sigma today is used across a wide range of industries such as banking, business process outsourcing,
telecommunications, insurance, construction, healthcare, and software. Many global companies, including General
Electric, Motorola, AlliedSignal, Honeywell, Honda, Sony, Canon, Polaroid, Texas Instruments and Whirlpool,
are using the Six Sigma methodology.

What is Six Sigma?

Six Sigma is a process-based methodology for pursuing continuous improvement. Companies use this
methodology to reduce defects in their processes.

Companies measure their performance by the sigma level of their business processes. Initially, companies
accepted three or four sigma performance levels as the standard, even though these processes created roughly
6,210 to 66,800 defects per million opportunities. The Six Sigma standard of 3.4 defects per million
opportunities is a response to rising customer expectations and to the fact that business processes and
products are becoming increasingly complex and competitive.

The primary aim of Six Sigma is to focus on the customer first and then use facts and data about customer
requirements to improve results and processes. A thorough understanding of the process and the
product drives the business and fulfils customers' expectations.

Sigma, σ, is the Greek letter used to denote the variability in a process. Six Sigma stands for six standard
deviations from the mean. Standard deviation is a statistical measure of how much variation exists in a set
of data or a process.

For example, a pizza home delivery company follows a rule of thumb of delivering a pizza within 30 minutes.
If the delivery arrives later than 30 minutes, the customer gets a 50% discount on the next purchase, and the
late delivery counts as a defect on the part of the company. If only 65% of the pizzas are delivered on time,
the process is at 'level 2' sigma. If 92% of the pizzas are delivered on time, the company's performance is at
'level 3' sigma. If the company delivers 99.4% of the pizzas on time, its performance is at 'level 4' sigma.

If the pizza company wants to operate at 'level 6' sigma, it has to deliver pizzas on time with 99.9997%
accuracy, that is, a process efficiency of 99.9997%.
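As a rough illustration of how an on-time percentage maps to a sigma level, the short sketch below converts a yield (the fraction of defect-free deliveries) into an approximate sigma level using the inverse of the standard normal distribution plus the conventional 1.5-sigma shift. The delivery rates are the hypothetical figures from the pizza example, and the sketch assumes the scipy library is available.

```python
from scipy.stats import norm

def sigma_level(yield_fraction, shift=1.5):
    """Approximate short-term sigma level from a long-term yield,
    using the conventional 1.5-sigma shift."""
    return norm.ppf(yield_fraction) + shift

# Hypothetical on-time delivery rates from the pizza example
for on_time in (0.65, 0.92, 0.994, 0.999997):
    print(f"{on_time:.4%} on time -> about {sigma_level(on_time):.1f} sigma")
```

Run as-is, these rates come out at roughly 1.9, 2.9, 4.0, and 6.0 sigma, matching the levels quoted above.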



Levels of Sigma Performance

The sigma level of a business or a firm should always be as high as possible, because it indicates the
performance of the process. Sigma measures focus primarily on performing defect-free work, where a defect
may be described as anything that causes customer dissatisfaction.

Today Six Sigma is delivering business excellence, higher customer satisfaction, and superior profits by
dramatically improving every process in an enterprise from financial to operational to production.

History of Six Sigma

The history of Six Sigma can be traced back to the time of Carl Friedrich Gauss (1777-1855), who is said to
have introduced the concept of the normal curve. Later, in the 1920s, Walter Shewhart showed that a process
needs correction once it drifts three sigma from the mean, establishing a standard of measurement for
product variation.

W. Edwards Deming, the 'godfather' of quality, brought about immense change in the approach and attitude
towards quality in the early 1950s. Later, many standard measurements such as Cpk and Zero Defects came
into use.

In the 1980s, Mikel Harry, working for Motorola, focused on Deming's concept of process variation as a way to
improve performance. But the real credit for coining the term 'Six Sigma' goes to Motorola engineer Bill Smith.
The engineers at Motorola had found that the traditional quality levels, which measured defects per thousand
opportunities, did not provide enough granularity.

Therefore, in 1987, with Bob Galvin as chairman of Motorola, a new standard methodology known as Six Sigma
was created. With the help of the Six Sigma methodology, Motorola produced powerful results; the company is
said to have documented more than $16 billion in savings as a result of its Six Sigma efforts.

Since then, many top companies have adopted Six Sigma for successful business. Companies like General
Electric, AlliedSignal, Honeywell, Honda, Sony, Canon, Polaroid, and many more are using Six Sigma
methodologies. It is more than just a quality system like TQM or ISO; it is a way of doing business.

The mean value usually represents the expectation of the customer, or the target. Therefore, in order to
eliminate defects, it is important to measure the variation occurring in the process, because variation is the
carrier of defects.



The Metrics of Six Sigma

Today it is very important not only to track simple metrics but also to understand and improve the numbers
behind them. Six Sigma and related methodologies not only measure activity but also strive to understand
the factors affecting the metrics. Six Sigma is a business-driven, multi-faceted approach that focuses
primarily on process improvement, reduced costs, and increased profits.

Common Six Sigma metrics include defect rate (parts per million or ppm), sigma level,
process capability indices, defects per unit, and yield. Many Six Sigma metrics can be mathematically
related to one another.

Defect Rate

Defect rate is the complementary measure of yield. If the yield is 90 percent, there must be 10
percent defects.

Sigma Level

From a quality perspective, Six Sigma is defined as 3.4 defects per million opportunities. This is called a sigma
level of quality.

Process Capability Indices

Another set of measures exists to quantify the capability of a process or characteristic to meet its
specifications. These indices directly compare the voice of the process to the voice of the customer.

Defects Per Unit

DPU provides a measurement of the average number of defects on a single unit.

Yield

Traditionally, yield is the proportion of correct items you get out of a process compared to the number of raw
items you put into it.

With the fundamental conviction and drive to improve customer satisfaction by reducing defects, the foremost
aim of Six Sigma is to produce defect-free processes and products (3.4 or fewer defective parts per million,
or ppm). The process to achieve this goal consists of the steps of the Six Sigma methodology:
"Define - Measure - Analyze - Improve - Control".



1.2 Six Sigma Deployment Process

From a practical viewpoint, it is essential to generate a master deployment plan as a road
map throughout the Six Sigma implementation cycle. The master plan can be developed and
divided into five phases: define, measure, analyze, improve, and control.

The detailed steps for each phase are described as follows:

Define Phase

Measure Phase

Analyze Phase

Improve Phase

Control Phase

Define Phase

The essence of Six Sigma is to solve problems that are impacting the business. The process of
improvement starts immediately with the "Define" step. When a Six Sigma project is launched in a firm,
goals are chalked out to gauge the degree of satisfaction among customers. These goals are further broken
down into secondary goals such as cycle time reduction, cost reduction, or defect reduction.

The Define Phase comprises baselining and benchmarking the process that needs improvement.
Further, goals and sub-goals are specified and the infrastructure to accomplish these goals is
established. An assessment of the changes required in the organization is also taken into consideration.



Measure Phase

Top management plays a very important role in the entire Six Sigma deployment process.
Therefore it is very important that, in the initial phase of implementation, the Six Sigma program
is fully accepted by the employees and that they show full commitment to the
improvement steps. In this initial phase, developing a thorough infrastructure helps run
the deployment process and manage the implementation. This phase
should be developed as follows:

Establish Leadership Commitment and Involvement

The Six Sigma implementation processes must involve the top-level management.
Total commitment and involvement must be there throughout the implementation
process.

Senior management must assign a Management Champion to lead the Six Sigma
implementation and make him the authoritative head for the entire Six Sigma project in
the organization.

Form a Core Team

Top management is responsible for forming a Six Sigma Core Team. The
Management Champion heads the Core Team.

The main role of the Core Team is to develop and manage the Six Sigma
implementation and to assure the readiness of the organization for the implementation.

Team up with outside Quality Facilitators

Assistance can be taken from experienced quality facilitators for the implementation
process, especially in SMEs (small and medium enterprises).

The facilitators coach the Core Team in deploying the Six Sigma implementation and
provide the necessary training for all Six Sigma project participants.



Provide Six Sigma Deployment Training

To understand the benefits and general approach of the Six Sigma implementation, the
top management and the core team should attend an overview on Six Sigma.

The Core Team should attend the training on Six Sigma development, deployment, and
management.

Schedule Periodic Top Management Reviews

Reviews must be scheduled periodically in the initial stages of the Six Sigma program,
including the defining, developing, and implementing stages.

The top management must be informed about the activities involved in the Six Sigma
implementation.

Analyze Phase

In this phase, the gaps are identified between current process performance and the business goals.
The gaps are further transformed into improvement projects, and an integrated system is
established to support the implementation.

Define Business Goals based on Organization's Strategic Plan

The organization's purpose, structure, and flow, including interfaces with other
organizations and primary customers, must be fully understood by the Core Team.

It is very important to understand the corporate policies and procedures that affect
the Six Sigma Quality Management System (QMS).

The short-term and long-term business goals must be defined by the Core team. These
goals must be based upon the organization's strategic plan.



Identify Existing Process Performance

It is important to identify the overall process of the organization, showing how
products or services are created and supplied to the customers.

The Core Team must perform a high-level “gap analysis”.

After it is performed, the gap analysis results have to be reviewed by top
management.

Define Six Sigma Improvement Projects

The scope and the goals of Six Sigma improvement projects are defined based on the
“gap analysis”, which should include: process management, human resource
development, training system, quality tools, supplier management, and customer
management.

Create Performance Measures for all Six Sigma Projects

The detailed performance measures for all Six Sigma projects are defined based upon
the gap analysis.

These performance measures should be consolidated into an organizational
information system.

The organizational information system is enhanced in order to provide information
about individual project progress and the overall Six Sigma implementation
performance.

Establish an Incentive/Recognition System

An incentive/recognition system is essential to Six Sigma implementation.

Top management is responsible for designing a system to motivate employees in the
Six Sigma implementation.

Improve Phase

In this phase, the improvement project teams are formed and Six Sigma project-related
training is provided to the team members. As the projects progress, it is important to
constantly monitor the status of each one.

Form the Six Sigma Project Teams

The Six Sigma project teams are formed by the Core Team.

The Six Sigma project teams are responsible for delivering the project goals
assigned to each project.

Plan and Provide Six Sigma Training to Members of Project Teams

It is very important to develop a training plan and strategy, and provide further
training to all members of Six Sigma project teams.

The training plan should focus on: Six Sigma overview, measure-analyze-improve-
control (MAIC) discipline, and utilization of quality tools.

Implement the Six Sigma Projects

Project teams should evaluate the existing processes and proceed with the MAIC
discipline.

Measure

In this stage, the existing systems are measured. The potential critical processes/products are
identified and described.

Analyze

In this stage, the system is to be analyzed to identify ways to eliminate the gap between the
current performance of the system or process and the desired goal.

Improve

In this stage, the improved outcome is measured to determine whether the revised method
produces results within customer expectations.

Control

In this stage, the new system is controlled and the original problems are prevented from recurring.

Monitor and Review the Status of each Project

The Core Team should obtain inputs from each project on an ongoing basis to monitor
and review its status.

The Core Team provides directions and support to the Six Sigma project teams.

Control Phase

The main aim of the Control Phase is to appraise the performance of the processes, to ascertain
the level of success of each project, to adjust the business strategic plan, and re-start the
implementation cycle.

Audit the Projects' Results

After the projects are completed, the results of the completed projects are audited by
the Core Team and top management.

The improved systems have to be maintained.

Institutionalize the Improved System

Policies, procedures, operating instructions, and other management systems are
modified to institutionalize the improved system.

Apply the Incentive/Recognition System

Appropriate incentives and recognition are applied to the project team members
based on the project performances.

Apply Continuous Improvement Mechanism

The organization's strategic plans and related action plans are revised according to the
project performances, and new Six Sigma projects are then derived from the revised
strategic plan.

1.3 Six Sigma Implementation Process

Structuring the Six Sigma Function

The structure of the Six Sigma function consists of the following roles:

The Executive Leadership is made up of the CEO and key top management team members.
They are responsible for setting up a vision for implementing Six Sigma. They empower the other
role holders with the freedom and resources to explore new ideas, believing this will achieve breakthrough results.

The Quality Leader/Manager improves operational effectiveness and takes
responsibility for representing the needs of the customer. To maintain impartiality, the quality
function is kept separate from the manufacturing or transactional processing functions.

The Master Black Belt acts as the in-house expert coach for the organization. Master Black Belts see that the
overall Six Sigma initiatives are accountable to the executives and the operational business units.
They work on Six Sigma full time. They assist champions, and guide Black Belts and Green
Belts. Beyond this routine, they spend their time ensuring integrated deployment of
Six Sigma across various functions and departments.

Process Owners (PO), as the name suggests, are the individuals responsible for a specific process.
These managers own the profit and loss, or the budget and productivity, of their processes.
Depending on the size of the business and its core activities, process owners may exist at lower
levels of the organizational structure.

Black Belt (BB) - Black Belts are the heart and soul of the Six Sigma quality initiative. Their main
purpose is to lead quality projects, on which they work full time. As team leaders, Black Belts use project
tracking and management tools as well as process optimization techniques.

Black Belts can typically complete four to six projects per year with savings of approximately
$230,000 per project. Black Belts also coach Green Belts on their project.

Green Belt (GB) - Green Belts are employees trained in Six Sigma who spend a portion of their
time completing projects, but maintain their regular work routine and responsibilities. Depending
on their workload, they may spend 10% to 50% of their time on their projects. They operate under
the guidance of Black Belts and support them in achieving the overall results. Green Belts use
similar tools as Black Belts with a lower level of analytical prowess.

Use of Technologies in Six Sigma Programs

In an organization seeking consistency in its processes, Six Sigma plays a very important role in
promoting and establishing such consistency. Six Sigma works towards the improvement of
existing processes.

It is difficult to briefly describe the ways in which Six Sigma may be interconnected with other
initiatives (or vice versa). But some understanding can be gained from the following:

The Six Sigma toolkit comprises many techniques that are already applicable to software
and are directly used by the software industry. For instance, "Voice of the Customer" and
"Quality Function Deployment" are useful for developing customer requirements. There are
numerous charting and calculation techniques that can be used to scrutinize cost, schedule, and quality
data as a project proceeds. As far as technical development is concerned, quantitative methods
are used for risk analysis and concept/design selection. The efficacy of Six Sigma therefore
comes from deliberately and consciously deploying the tools in such a way that the customer is
fully satisfied.

CMM, CMMI, and PSP/TSP are some of the improvement approaches that either complement
Six Sigma or mutually support it. Technologies like the Goal-Question-Metric (GQM) paradigm, the Initiating-
Diagnosing-Establishing-Acting-Leveraging (IDEAL) model, and Practical Software Measurement (PSM)
show some adaptability and coherence with Six Sigma.

The GQ(I)M technology merges well with the Define-Measure steps of Six Sigma. Many common
features are shared by IDEAL and Six Sigma: IDEAL mainly focuses on change management
and organizational issues, whereas Six Sigma deals more with tactical, data-driven analysis and
decision making. PSM provides a software-tailored approach to measurement that may well serve
the Six Sigma improvement framework.

Strategies Employed in the Six Sigma Programs

Only a bright, knowledgeable, and involved management can lead an organization to success
in its quality efforts. Individual management styles matter less than involvement and the
effort to succeed. The strategies explained below address what management must do to ensure
that the quality improvement program in the organization is successful. There are some essential
steps that need to be followed for the success of the improvement program.

The executives must have total commitment to the implementation of Six Sigma and accomplish
the following:

Creation and Agreement of Strategic Business Objectives, that is to identify the key
business issues

Creation of Core, Key Sub- and Enabling Processes, that is to establish a Six Sigma
Leadership Team

Identification of Process Owners, that is to Assign Masters to each key business issue

Creation and Validation of Measurement “Dashboards”

To allocate time for change agents (Experts) to make breakthrough improvements

To set aggressive Six Sigma goals

To incorporate Six Sigma performances into the reward system

To have finance directly validate all Six Sigma ROI

To evaluate the corporate culture to determine if intellectual capital is being infused
into the company

To evaluate continuously the Six Sigma implementation and deployment process and
make changes if necessary

Database Development

Six Sigma is about transformation, and it requires full support from top management. Any action
performed needs to be monitored so that its purpose and effect can be described. Hence effective Six Sigma
implementation requires an IT system to receive, organize, and help translate this information into
effective decisions for the organization. For such a system to be active and functional, it requires
an underlying IT infrastructure.

The following are some of the main roles an effective IT system would be required to play. (Kendall
and Fulenwider, 2000).

Support the collection of data from the process

Provide a means for effective communication and sharing of data/information across
the organization

Provide an easily accessible database holding information regarding all ongoing and
completed Six Sigma projects

Provide an interactive training tool for employees to learn the Six Sigma methodology
and the tools within the methodology for problem solving activities

Provide on-line coaching for Six Sigma tools and techniques

Provide software packages to assist with the selection and prioritization of projects

1.4 The Six Sigma Toolkit

Idea Collection Tools

Brainstorming

Brainstorming means generation of ideas. The basic purpose of brainstorming is to come up with
new ideas or solutions regarding some problem. Lists of ideas or solutions are chalked out and
then the final choice is made from the options that are available.

Tools like brainstorming generally support creative thinking. Advertising executive Alex Osborn
is credited with coining the term "brainstorm" in 1939.

Definition

According to Osborn, to brainstorm means "using the brain to storm a creative problem, and to do so
in commando fashion, each stormer audaciously attacking the same objective".

There are various factors that make a session of brainstorm a success:

It is very important to see that everyone involved in the session fully supports the
query that needs to be discussed for new ideas.

Noting down the ideas as they are generated makes it easier to remember them.
Therefore the team must be encouraged to do so.

A brainstorming session must be an open dialogue or discussion. Everyone in the session
must get a chance to present his or her set of ideas.

No judgment must be passed during the session.

In order to save time, all irrelevant ideas must be eliminated and duplication done away
with.

In a nutshell, we can say that a brainstorming session usually starts with a definite
question and ends with a list of candid ideas, which are further organized, developed,
and refined using Affinity diagrams.

Affinity Diagram

Affinity diagrams are used to show the representation of brainstormed ideas. When ideas
presented in the Brainstorming session are grouped into meaningful categories, an Affinity diagram
is formed.

Many of the ideas that are presented in the brainstorming session are sometimes too long,
complex, or raw. The Affinity diagram helps to organize the unorganized data and refine the
output of the brainstorming session. These diagrams are useful tools for
gathering, organizing, and correlating information from the customer, and they usually
present relationships between items and groups.

Multivoting

Multivoting is generally referred to as a follow-up to brainstorming. In the brainstorming session, a
large number of raw ideas are presented, which are then organized into meaningful categories as
shown in the Affinity diagram. These meaningful and organized sets of ideas are further
arranged in order of importance, including common problems and causes. The list may be narrowed
down to 3-5 items. Each member of the group ranks each item by importance, and the item
that receives the highest ranking from the group is taken up for further analysis.

Structure Tree

In a structure tree, the brainstormed ideas are presented as linked nodes in a hierarchical fashion. The
structure tree shows how the goals and solutions to the problems raised in the brainstormed ideas
can be connected. Tree diagrams are very simple and a routine part of Six Sigma projects.

There are three types of “tree diagrams” or “hierarchical diagrams” that form a part of
the Six Sigma project.

Cause and Effect Diagrams

Y-to-X Flowdown Diagrams

Functional Analysis Diagrams

Cause and Effect Diagrams

The Cause and Effect diagram is used to brainstorm possible causes of a problem or effect. It puts
the possible causes into groups and traces the causes that lead to other causes, linking them
as in a structure tree. The value of a Cause and Effect diagram is that it helps gather the collective
ideas of a team and supports discussion of various solutions to the problems that come up.

Y-to-X Tree Diagram

The Y-to-X Tree Diagram helps in the identification and classification of the factors (independent
variables) that may drive an important result variable. It helps to recognize the factors (x's) that
may drive changes in the result variable Y.

Each node in a Y-to-X tree diagram must describe a measure or a factor, either continuous
or categorical, that can take on different values.

Functional Analysis Diagram

As compared to the Y-to-X diagram, which takes the result measures into consideration, the
functional analysis diagram identifies and organizes the important functions of a process or a
product. It identifies general and specific functions that operate in a product or process. This tree
structure helps to check for completeness, and reports the analysis in ways that can hide or
expose details appropriate to different audiences.

SIPOC

SIPOC is an acronym for Supplier, Input, Process, Output, and Customer. SIPOC is generally used
in the Define phase of the DMAIC methodology. SIPOC diagrams are made to define the sub-
processes in major business processes and to identify possible measures. Software
tools like iGrafx, SigmaFlow, and Process Model help to organize and display the information.

The functioning of SIPOC involves the following six steps:

Categorizing the process

Categorizing the outputs

Defining the customers by their name, title, or organizational entity

Defining the customer requirements

Defining the inputs

Categorizing the sources of the inputs

Flowcharts

A flowchart is a graphical or pictorial representation of the flow of a process. This
pictorial view usually depicts the inputs, outputs, and units of activity in a process. It analyzes
and observes the process from its initiation to its completion. A flowchart is mostly presented in
the form of a diagram comprising boxes, diamonds, and other shapes, connected
with arrows. Each box or shape represents a step in the process, and each arrow
shows the order in which the activities are performed.

Flowcharts provide a very simple way to present data. If developed without errors, they can
describe the process with great clarity.

Fishbone Diagram

The Fishbone diagram, popularly known as the Ishikawa diagram, is primarily used in the
Define, Analyze, and Improve steps of the DMAIC process. This diagram provides an insight into
input variables. It is used to brainstorm possible causes of a problem or its effects.

The project team uses Fishbone diagrams to identify the input variables that are causing a problem
and also to focus on the cause and effect relationship.

As the name suggests, the Fishbone diagram looks like a fish, with the head identifying the specific
problem of interest and the six bones identifying six categories comprising the "five Ms and one E":
Measurements, Materials, Men, Methods, Machines, and Environment.

Data Gathering Tools

Statistical Sampling

Statistical sampling comprises two words, statistics and sampling. To get a clear idea of the
term statistical sampling, it is important to understand them separately.

Definition

“Statistics is defined as the mathematics of the collection, organization, and interpretation of
numerical data, especially the analysis of population characteristics by inference from sampling.”

Sampling is the practice of gathering a subset of the total data available from a process or a
population.

Therefore, we can define statistical sampling as the collection, organization, and interpretation of
numerical data collected randomly from a process or a population.

Statistical sampling methods usually refer to taking a small sample at random, which can then be used
to obtain reliable information about a much larger process or population. A sample is said
to be random when every item in the process or population has a defined probability of being
selected; inferences drawn from a non-random sample are of little use. Functions in
spreadsheets like Excel or Lotus 1-2-3 help to obtain a random sample.

We can say that Attribute Sampling, which is used to measure process audits, is a statistical
sampling method, and it is used extensively in Six Sigma improvement programs.
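Instead of a spreadsheet function, a random sample can also be drawn with a few lines of Python. The population below is generated artificially just to have something to sample from; the call to random.sample is the point of the sketch, since it gives every item the same chance of selection.

```python
import random

# Artificial population: cycle times (minutes) for 1,000 completed orders
population = [round(random.uniform(20, 45), 1) for _ in range(1000)]

# Simple random sample of 50 items; each item has an equal, defined
# probability of selection, which is what makes inferences from it valid
sample = random.sample(population, k=50)

sample_mean = sum(sample) / len(sample)
print(f"Sample size: {len(sample)}, sample mean cycle time: {sample_mean:.1f} minutes")
```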

Operational Definitions

Operational Definitions are definitions that state specifically how to measure the defined item, or
how to interpret data or events in a process.

When data is collected, it is important to define the terms clearly so that the definition has the
same meaning for all. Operational definitions must be written in such a way that their descriptions
are totally free from variation or ambiguity.

Voice of Customer

The Voice of the Customer represents the feedback of a customer regarding a process or an
organization. It plays a very important role, as it gives an insight into the
success of various techniques. These techniques help an organization collect customers' input and
assess and prioritize their requirements. The Voice of the Customer can be obtained through various
methods and tools such as focus groups, customer surveys, customer interviews, and the critical-to-quality
tree.

Checksheets

Checksheets are forms used to collect and organize data. They can be simple
tables, surveys, or diagrams that point out where errors might have occurred. A checksheet is defined as a
tool used to ensure that all the important steps or actions in an operation have been
taken.

Checksheets are designed by a Black Belt with two objectives in mind:

Firstly, to ensure that the right data, with all the facts, are included.

Secondly, to ensure that the collectors have no problem in gathering the data.

Spreadsheets

The data that is collected and organized in a checksheet is placed in a spreadsheet. A properly
designed spreadsheet makes it easier to use the data for further calculations.

It can also be said that a spreadsheet is an electronic ledger made up of rows and
columns. The rows and columns form cells, which are used for entering data and for
further calculations. A spreadsheet can also be called a computerized grid in which all the
information can be viewed and then used for making comparisons.

Measurement System Analysis (MSA)

The improvement and analysis methodologies rely on the collection of data. Therefore it is
very important to ensure that the various measurement systems used are capable of
measuring the data accurately before embarking on the improvement efforts.

Definition

“Measurement System Analysis, often referred to as MSA, is used to assess the statistical
properties of process measurement systems.”

The measurement system relies on gathering procedures, gauges, and other test equipment to
collect the data for analyzing problems. Measurement system analysis ensures that the system and
procedures of measurement are adequate and not biased in any way.

MSA usually helps to uncover problems in the measurement process itself. One MSA method is the Gage R&R
(repeatability and reproducibility) study, which helps in measuring the effectiveness of gauges, rulers, and
other measurement instruments.

Therefore when precision and accuracy form the main criteria of measurement and they are
assessed together, the analysis is known as MSA or Measurement System Analysis.
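To give a feel for what a Gage R&R study estimates, the sketch below splits the variation in a small, invented set of measurements into repeatability, reproducibility, and part-to-part components. It uses a deliberately simplified variance calculation rather than the standard ANOVA or average-and-range procedures, so it should be read as an illustration of the idea, not as study-grade analysis.

```python
import numpy as np

# Hypothetical data: measurements[operator, part, trial]
measurements = np.array([
    [[10.1, 10.2], [10.5, 10.4], [9.9, 10.0]],   # operator A
    [[10.0, 10.1], [10.6, 10.5], [10.0, 9.9]],   # operator B
])

# Repeatability: pooled variance of repeat trials within each operator/part cell
repeatability_var = measurements.var(axis=2, ddof=1).mean()

# Reproducibility: variance between the operators' overall averages
operator_means = measurements.mean(axis=2)                    # shape (operators, parts)
reproducibility_var = operator_means.mean(axis=1).var(ddof=1)

# Part-to-part variation: variance of the part averages
part_var = measurements.mean(axis=(0, 2)).var(ddof=1)

grr_var = repeatability_var + reproducibility_var
total_var = grr_var + part_var
print(f"%GRR (share of total variation): {100 * np.sqrt(grr_var / total_var):.1f}%")
```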

Process and Data Analysis Tools

Process Flow Analysis

Definition

“A process is usually defined as a set of interrelated work activities characterized by a set of
specific inputs and value-added tasks that make up a procedure for a set of specific outputs.”

A process flow consists of a flowchart of a particular piece of work or process. Analyzing this flowchart to
investigate the process and identify its problems is called process flow analysis. It
is the Black Belts who focus on processes and recognize the poor processes that result in
problems, high cost, and low quality. They are involved in creating a breakthrough strategy
to identify functional problems that are linked with operational issues.

Charts and Graphs

Charts and Graphs provide the best way to analyze measures of a process. They present data or
information in the form of visual displays.

There are many types of charts and graphs, which are used to display information. The following
are some common charts and graphs used for Six Sigma:

Pareto Chart

A Pareto chart is a bar chart in which the bars are ordered by frequency. It is used to rank the causes of a
problem from the most significant to the least significant.

The Pareto chart is based on the Pareto principle, which was first defined by J.M. Juran and
named after Vilfredo Pareto. The principle states that 80% of the effects come from
relatively few causes; that is, 80% of the effects come from 20% of the possible causes. In
simpler terms, Pareto charts help to identify the problems that have the most
prominent impact, and solutions are then worked out for those problems first.
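The arithmetic behind a Pareto chart is just a tally sorted in descending order with a running cumulative percentage. The sketch below does this for a set of invented defect causes; a charting package would then draw the bars and the cumulative line.

```python
# Hypothetical defect counts by cause for a delivery process
defects = {"late dispatch": 48, "wrong address": 21, "oven delay": 9,
           "order entry error": 7, "traffic": 5, "other": 4}

total = sum(defects.values())
cumulative = 0
print(f"{'Cause':<20}{'Count':>7}{'Cum %':>9}")
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:<20}{count:>7}{100 * cumulative / total:>8.1f}%")
```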

1.5 The Six Sigma Toolkit Continued...

Histogram

Definition

“Histograms are defined as bar graphs of a frequency distribution in which the widths of the bars
are proportional to the classes into which the variable has been divided and the heights of the bars
are proportional to the class frequencies.”

Histograms are used as a graphic tool to display the distribution of continuous data values, showing
which values occur most frequently and which least frequently. In a
histogram, the size is shown on the horizontal axis and the frequency of each size is shown on
the vertical axis. The bar lengths are proportional to the relative frequencies of the data.

Run (Trend) Chart

Unlike Pareto charts and histograms, which do not take the passage of time into consideration,
trend or run charts focus primarily on the changing trend or pattern of a process over a
specified period of time when measuring its performance.

Scatter Plot (Correlation) Diagram

A Scatter Plot is also known as a Scatter Diagram or Scattergram. This graphic tool is used to look
for a direct relationship between two factors in a process, to see if a correlation exists between the
two. The correlation shows the dependency of one factor on the other. Even though a
relationship may be shown between two different variables in a scatter plot, it does not by itself
indicate a cause and effect relationship. Scatter plot diagrams are used to show a relationship between
an input and an output.

Control Charts

Control charts are primarily used in the Control phase of the DMAIC process. A control chart is defined as a
graphical tool for monitoring changes that occur within a process and for separating routine (common-cause)
variation from unusual variation. Control charts help to show trends in the average or the variation, which
further helps in the debugging process. A control chart consists of a run chart, a centerline, and upper and
lower control limits determined statistically.
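As a minimal sketch of the statistics behind a control chart, the code below computes the centerline and control limits for an individuals chart from invented delivery-time data, using the common moving-range method (2.66 is 3 divided by the d2 constant for a moving range of two). Dedicated SPC software would normally be used in practice.

```python
import numpy as np

# Hypothetical daily delivery times (minutes) for an individuals (I) chart
data = np.array([27, 29, 31, 28, 26, 30, 33, 29, 28, 27, 32, 30])

center = data.mean()
avg_moving_range = np.abs(np.diff(data)).mean()
ucl = center + 2.66 * avg_moving_range     # upper control limit
lcl = center - 2.66 * avg_moving_range     # lower control limit

signals = data[(data > ucl) | (data < lcl)]
print(f"CL = {center:.1f}, UCL = {ucl:.1f}, LCL = {lcl:.1f}, out-of-control points: {signals}")
```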

Line Graphs

Line graphs are used to show the behavior of a process with changing time. The behavior of the
process is the specific characteristics of a process. Line graphs are used to depict the changes in
the process; whether the process is getting better or worse, or remains the same. These graphs
are perhaps the first step in defining a problem to be solved. These graphs can also show cycle
time, defects, or cost over time.

Pie-Charts
A Pie chart is a simple circular chart that is cut into slices. Each slice represents the frequency of
the collected data; the bigger the slice, the higher the number or percentage. These charts are
best used to represent discrete data.

Multi-Vari Charts

A Multi-Vari chart is used to display patterns of variation. It is used to identify the causes of
variation, such as variation within and between subgroups.

Statistical Analysis Tools

Tests of Statistical Significance

Definition

“The statistical significance of a result is the probability that the observed relationship or a
difference in a sample occurred by pure chance, and that in the population from which the sample
was drawn, no such relationship or differences exist.”

When a statistic is significant, it means that the result is reliable, not necessarily that it is important.
Significance tells us that a difference or relationship exists, but it does not tell us about the strength of
the relationship, whether it is strong or weak, large or small. Whether a result reaches significance
usually depends on the sample size.

Steps in Statistical Significance Testing:

Form the Research Hypothesis

Form the Null Hypothesis

Identify a probability of error level

Identify and calculate the test for statistical significance

Define the results
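A minimal walk-through of these five steps, assuming scipy is available, might use a one-sample t-test against a hypothetical 30-minute delivery target; the sample data and the 0.05 error level are assumptions made purely for illustration.

```python
from scipy import stats

# Steps 1-2: research hypothesis "mean delivery time differs from 30 minutes";
#            null hypothesis "mean delivery time equals 30 minutes"
# Step 3:    identify a probability of error (significance) level
alpha = 0.05

# Step 4: calculate the test statistic on hypothetical sample data
delivery_times = [31, 29, 33, 34, 30, 32, 35, 31, 33, 30]
t_stat, p_value = stats.ttest_1samp(delivery_times, popmean=30)

# Step 5: define (interpret) the results
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("Reject the null hypothesis" if p_value < alpha else "Fail to reject the null hypothesis")
```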

Chi-square Test

The Chi-square test is a non-parametric test of statistical significance that is used for bivariate
tabular analysis. Because Chi-square works on discrete (count) data, it can be applied
even to weaker and less precise data.

The Chi-square test uses three types of analysis:

Goodness of Fit

It determines whether the sample being used could have been drawn from a population with the
assumed distribution.

Test for Homogeneity

It tests the proposition that several populations are homogeneous (alike) with respect to some
characteristic.

Test for Independence

It tests the null hypothesis that two categorical variables are independent of each other.
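The test for independence is the variant most often applied to tabulated counts. The sketch below runs it on an invented 2x2 table of on-time versus late deliveries for two stores, assuming scipy is available.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are stores, columns are on-time and late deliveries
observed = [[90, 10],   # store A
            [75, 25]]   # store B

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}, degrees of freedom = {dof}")
# A small p-value suggests that store and delivery performance are not independent
```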

T-test

The t-distribution approaches the normal distribution as the sample size increases. The t-test involves
a statistic with degrees of freedom that is used to test a hypothesis about a population parameter. This
test, unlike the Chi-square test, uses interval and ratio data. A t-test is used to
investigate the differences between two groups on the same variable, and can also identify whether the
same group has different mean scores on different variables.
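A comparison of two groups on the same variable can be sketched as follows, using invented delivery times recorded before and after a process change and scipy's independent-samples t-test (the Welch variant, which does not assume equal variances).

```python
from scipy import stats

# Hypothetical delivery times (minutes) before and after a process change
before = [32, 35, 31, 34, 33, 36, 32, 35]
after = [29, 30, 28, 31, 30, 29, 32, 30]

t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)   # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates that the two group means differ
```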

Analysis of Variance

The Analysis of Variance is popularly known as ANOVA.

Definition

“Analysis of variance is a statistical technique for analyzing data that tests for a difference between
two or more means by comparing the variances *within* groups and variances *between*
groups.”

ANOVA is a statistical technique used for analyzing experimental data. The total variation in
the data is sub-divided into meaningful components in order to test hypotheses
about the parameters of those components. There are three models of components in ANOVA:
fixed effects, random effects, and mixed effects.
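A one-way ANOVA comparing the means of several groups can be sketched in a few lines with scipy; the three groups of delivery times below are invented for illustration.

```python
from scipy import stats

# Hypothetical delivery times (minutes) for three stores
store_a = [29, 31, 30, 32, 28]
store_b = [33, 35, 34, 32, 36]
store_c = [30, 29, 31, 30, 32]

f_stat, p_value = stats.f_oneway(store_a, store_b, store_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates that at least one store mean differs from the others
```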

Correlation and Regression

Correlation

Correlation is the relationship between two sets of variable data. It is a technique used to
quantify the relationship between two quantitative, continuous variables.

Correlation is said to be positive when the value of one variable increases as the value of the
other variable increases.

On the other hand, when the value of one variable decreases with the increase in the other
variable, it is said to be negatively correlated.

If one variable has no effect on the other variable, there is no correlation between the two.

Regression

Definition

“Regression analysis is a method of analysis that enables you to quantify the relationship between
two or more variables (X) and (Y) by fitting a line or plane through all the points such that they are
evenly distributed about the line or plane.”

In simple terms, regression means a relationship between the mean value of a random variable
and the corresponding values of one or more predictors. It helps in the prediction and association of
one variable from the other.

The most popular types of regression are:

Simple and multiple linear regression

Non-linear regression
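Correlation and simple linear regression can both be computed in a few lines. The sketch below uses numpy on invented distance and delivery-time data to obtain the correlation coefficient and a fitted straight line.

```python
import numpy as np

# Hypothetical data: order distance (km) versus delivery time (minutes)
distance = np.array([1.5, 2.0, 3.2, 4.1, 5.0, 6.3, 7.5])
time = np.array([18, 21, 25, 28, 31, 36, 40])

r = np.corrcoef(distance, time)[0, 1]                   # correlation coefficient
slope, intercept = np.polyfit(distance, time, deg=1)    # simple linear regression fit

print(f"r = {r:.3f}")
print(f"fitted line: time = {slope:.2f} * distance + {intercept:.2f}")
```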

Design of Experiments

The Design of Experiments is also known as DOE. DOE uses statistical methods to explore, from an
empirical perspective, the variables that influence the quality of processes, products, and services. Once
experiments have been run on the process using these methods, the results can be interpreted and
improvement measures taken to improve the productivity of the process.

DOE is different from OFAT (one factor at a time) experimentation because it takes multiple variables and
their mutual interactions into consideration. Experiments are essential tools of this activity. DOE allows
tests that examine the validity of a hypothesis, determine the efficacy of something previously
untried, or even demonstrate a known fact in a controlled environment. Apart from this, it is
crucial that the experiments are performed correctly.

The following tools are used by Six Sigma experts while developing the Design of Experiments.
They are:

Factorial Designs: These designs help to analyze the effects of multiple factors on the process or
the product. This enhances the efficiency of the experiments, and the factor levels for each
experimental run are determined by the design (a minimal sketch follows this list).

Response Surface Designs: These designs appraise the relationship between one or more
response variables and a set of experimental variables.

Taguchi Designs: These experiments help to locate the settings that aid in the consistency of the
product or service over a variety of conditions.
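As referenced under Factorial Designs above, a two-level full factorial design is simply every combination of each factor's low and high settings. The sketch below generates such a design for three invented factors using only the Python standard library; dedicated DOE software would normally be used for larger or fractional designs.

```python
from itertools import product

# Hypothetical factors and their low/high levels for a 2^3 full factorial design
factors = {
    "oven_temp": (230, 260),     # degrees Celsius
    "bake_time": (8, 10),        # minutes
    "dough_rest": (30, 60),      # minutes
}

# Every combination of factor levels is one experimental run
runs = list(product(*factors.values()))
for i, run in enumerate(runs, start=1):
    settings = ", ".join(f"{name}={level}" for name, level in zip(factors, run))
    print(f"Run {i}: {settings}")
print(f"Total runs: {len(runs)}")   # 2**3 = 8 runs
```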

Implementation and Process Management

Managing a project involves a medley of skills, including planning, budgeting, scheduling,
communication, people management, and the like. The tools used in process implementation and
management are discussed below:

FMEA

FMEA stands for Failure Mode and Effect Analysis. FMEA is one of the key problem-prevention
methods, both when implementing new processes and during their
regular use.

Definition

“A procedure and tools that help to identify every possible failure mode of a process or product, to
determine its effect on other sub-items and on the required function of the product or process”

The concept of FMEA, first introduced in the aerospace industry in the 1960s, is used to
reduce the risk of failures. Failures are unwanted outcomes such as customer
dissatisfaction or defective products. FMEA aids in classifying and prioritizing the causes of failure,
determining the impact of each failure, and identifying preventive measures to deal with it.

The Improve phase of the DMAIC process uses FMEA. It also plays a very important role in the
early design process.

Stakeholder Analysis

A stakeholder is defined as a person who has a share or an interest in an enterprise. Stakeholders
in a company may be shareholders, directors, management, suppliers, stockholders, bondholders,
customers, employees, and so forth.

Stakeholder analysis involves identifying the stakeholders and enlisting their support for
the project taken up by the Six Sigma team. The analysis is done to capture their views on the
project or solution.

Force Field Diagram

The Force Field diagram helps in the understanding of the driving forces that work towards the
process of improvement and restraining forces that block any improvement or change taking place.

The Force Field Diagram is used to compare opposites, actions, consequences, and different points of
view. In any process of improvement, the restraining forces stand in
the way of the driving forces that push for advancement. The Force
Field Diagram helps to analyze these restraining forces and paves the way for change to
happen.

Process Documentation

Process Documentation is the most important step in the Control phase of the DMAIC process. As the
project nears completion in the Control phase, with results in place, responsibility for the process has to be
handed over to the specialists who work with it on a daily basis. The process documentation,
which involves making process maps, task instructions, and so on, should be clear and simple.

Balanced Scorecards

A Balanced Scorecard is used to win the support of stakeholders. It promotes accountability
throughout the organization and helps in the alignment of individual and corporate
objectives. The Balanced Scorecard addresses operational excellence, the employees, the
customers, and the financial aspect.

Process Dashboards

Process dashboards summarize the measures that give feedback and point out the issues and
opportunities that need attention.

1.6 Overview of DMADV or DFSS

The DMAIC (Define, Measure, Analyze, Improve, Control) methodology, which has been discussed
previously, is used when a product or a process already exists in a company.

The DMADV/ DFSS methodology is used when a new process or a product has to be developed.
DFSS is an acronym for Design For Six Sigma.

DFSS describes how to use tools, training, measurements, and
verification so that the products and processes that are designed meet the demands of Six
Sigma.

A more specific version of DFSS is DMADV, i.e., Define, Measure, Analyze, Design, and Verify.
DMADV uses Six Sigma principles in product/process design in a new business process.

DFSS covers the DMADV framework for the design of processes; using statistical techniques,
simulation software to analyze variation and risks, and performing Design of Experiments.

DMADV methodology covers the following phases:

D- Define the project goals of the design activity consistent with the customer demand

M- Measure and determine customer needs and specifications and identify CTQs, product
capabilities, risk assessment and so on

A- Analyze the process options to meet the customer needs and design alternatives, create high
level design and evaluate design capability

D- Design the process to meet the customer needs and optimize the design; simulations are also
involved in this phase

V- Verify the design performance and its ability to meet customer needs, set up pilot runs, and
implement the production process

Similarities between DMAIC and DMADV

DMAIC and DMADV are both Six Sigma methodologies

Both these methodologies aim at driving defects to less than 3.4 per million
opportunities

They also aim to approach the problem with data intensive solutions

The Green Belts, Black Belts and Master Black Belts implement these methodologies

Both these methodologies are implemented with the support of a champion and
process owner

Chapter 2 - The Define Phase
2 The Define Phase

Overview of Green Belt

Six Sigma Green Belts are project leaders; they form and assist the Six Sigma team and
see it through from the commencement of the project to its completion.

A Green Belt is a primary executor of the Six Sigma methodology, trained in many of the same Six
Sigma skills as a Black Belt. Green Belts work part time on the chosen projects
and undergo less intensive training than Black Belts. They lead a project under the guidance of a
champion or Master Black Belt (MBB). A Green Belt applies project-specific DMAIC skills. Green
Belt projects are locally focused, i.e. they are smaller projects limited to a single
department.

The methodology used by Green Belts is known as DMAIC. DMAIC has five interconnected phases:
Define, Measure, Analyze, Improve, and Control.

Overview of DMAIC

DMAIC refers to a data-driven quality approach for improving processes. This methodology is used
in Six Sigma to improve existing business processes by constantly reviewing and re-tuning the
processes. It involves:

Defining the aims

Measuring the current procedure

Analyzing the current procedure to remove loopholes between the current procedure
and the aimed results

Improving on the current procedure and finding innovative techniques

Controlling improved performances

The Define Phase

Six Sigma aims to increase customer satisfaction. The Define phase is the first step in making
any system flawless. Here the goals of the project and its customer deliverables are defined, and the
customers and their requirements are identified.

This phase is an important phase of any Six Sigma project. It identifies the inherent problems in
the system and develops a problem statement. It explains the need of implementing Six Sigma,
the goal of implementing Six Sigma and its benefit to that particular organization. In this phase,
the strategic organizational support for that particular project is evaluated. The project plan and its
milestones are developed. This also covers process mapping and flowcharting, project charter
development, problem solving tools, and the 7M tools.

The champion, the process owner and the rest of the team for the project under scrutiny are
identified in this phase.

The Measure Phase

Six Sigma is a systematized, data-based methodology for improving processes by reducing defects,
waste, and quality-related problems in a business. The measure phase, the second step in
implementing Six Sigma, measures the existing system for defects, opportunities, units and
metrics. It establishes statistical process control techniques and data analysis methods. It also
develops a data collection plan to measure the defects; and it helps in monitoring the progress of
the project towards the defined goals.

This phase covers the principles of measurement, studies of continuous and discrete data,
graphical analysis techniques, basic capability indices and so forth.

The Analysis Phase

The analysis phase involves working on the data generated by the various methods used
in the measure phase and analyzing the defects to reach the root cause. This phase uses
various data analysis techniques to analyze the generated data. Here the team tries to pinpoint
the deviation from the identified goals and also to identify the sources of these variations.

The various statistical techniques used are Statistical Process Control Techniques, Process
Capability Analysis, Statistical analysis of cause and effect relationship using regression and
correlation, chi-square tables, specialized control charts and so forth.

The Improve Phase

In this phase, the analysis is used to improve the existing system by removing the identified flaws.
It is used to generate new, innovative and cost effective solutions and ideas to the problem and
implement them. The results for the defined problem may be statistically evaluated to solve the
flaw. The process is then optimized based on the analysis using various Design of Experiments
methods like ANOVA, and process optimization.

The Control Phase

The Control phase is the final phase of the DMAIC methodology. In this phase, the improvements
have to be controlled, to keep the standardized process on its new course. A strategy has to be
developed to prevent the process from reverting back to the previous way. This requires
development, documentation and implementation of the new plan. After this, trial runs to establish
process capability have to be conducted, which help in the transition of the planned process to
production. Finally, the process with the instituted control mechanisms needs to be continually
measured.

This phase covers process control planning, using SPC (Statistical Process Control) for operational
control and pre-control.

The Define Phase

Objectives

The objectives of the Define phase are to:

Develop a team/project charter

Identify the customers for the project, their needs and requirements

Identify which critical to quality factors are considered

Project Charter

The execution of any project consists of three important elements. They are:

Project Charter

Identification of the customer

Making a high level process map

The Project charter is the vital element among the three listed above. It is devised in the Define
step of DMAIC.

The Project Charter is a single-page document. It lists the vital features of the project. It describes
what the higher management, black belts and champions have agreed to achieve in the process. It
is a kind of agreement among all groups involved.

The success of a project depends heavily upon the Project Charter. A good charter can make a project a
success by stipulating the indispensable resources and limitations. A poor charter can also be responsible for
the failure of the project by reducing the team's motivation and efficiency.

It contains the following elements:

The Business Case

While creating the Project Charter for any project, the business case, that is, the business process
that is to be improved, has to be identified first. The business case generates the strategic purpose of the
team; in other words, it states the reasons for doing the project. The team should understand the
importance of the process to be improved, and why it is important enough to spend time improving
this particular process.

For example, a strategic business objective of a Pizza chain would be to increase revenues from its
home delivery division.

The Problem it is Addressing

The problem statement states the planned issue the project team wants to improve. The problem
statement should be specific, measurable and should impact the business. For example, in the
pizza delivery case, the problem identified could be falling sales leading to decreasing profits.

The Project Scope

The scope of the project defines the boundaries within which the project team will be
working. The scope of the project should be clear and achievable within four to six
months. Projects often fail because the scope is too large for the time agreed upon. To
avoid this, each project team should reach a consensus on what the project scope for
their project will be.

The project charter also has to include the following:

Project Leader (Black Belt/Green Belt)

In order to avoid confusion later, it is essential to name the project leader so that management
knows the chief person in command. It will also help the rest of the team know the leader.

The Mentor/Master Black Belt

Important research papers, journals and other reference resources should be available to the project leader whenever he has to deal with a doubt or any question regarding the issues that arise.

Project Commencement Date

For smooth documentation, the project start date has to be finalized. It is the date on which the project becomes officially operational.

Projected Project Conclusion Date

The projected project conclusion date is set by the mentor or the master black belt. This provides
the team with adequate time to plan and finish the project in the specified business setting, work-
load setting, holiday schedules, and so on.

2.1 The Goal and Expected Benefits of the Project and Kano Model

The Goal and Expected Benefits of the Project

Once the scope has been created, the project team has to formulate a set of attainable goals and
objectives that are achievable within a finite time frame. It should also anticipate the expected
benefits of the project. For example, in the pizza delivery case, what are the anticipated results?
Will the pizza making time be reduced? Will the problems be reduced or eliminated? The idea is to set
demanding but practical targets.

Process Measurements

The process measurements are the different measures that will determine the effectiveness of the project. All the necessary measurements should be listed, but they should be within the scope of the project.

The Milestone of the Project

It is important that the project goals set by the team be attained within the defined time frame. A
good project leader should ensure that the team can achieve this by providing the team with the
required project management resources.

The Team Members, their Roles and Deliverables

The project team should include meticulously chosen team members, and their roles and
responsibilities should be carefully defined. It should include people most qualified to carry the
chosen project to its completion and those who are strategically important to the process. Every
project should have a team leader, either a green belt or a black belt.

Identifying The Customers, Their Needs and Requirements

Once the team charter is formulated and validated, the second function is to identify the customer
or customers for the project. Since a customer is a user of a product or service, it is not always
necessary that he is an external customer. It is quite possible that he may be an internal
customer, a person internal to the organization.

Once the customers are identified, it is necessary to segment them, because requirements are
different for different types of customers. Segmentation may be done on the basis of criteria such as market, revenue, or geographical area. Once they are segmented, their needs and
requirements are to be established for the process.

Voice of The Customer

Customers grab much of the attention in Six Sigma activity. Their needs, desires and wants keep
changing over time. It is a fact that businesses solely depend on customer satisfaction. Therefore it
is imperative for any business to constantly strive for product innovation in line with the changing
needs of their customers. This requires the organization to be continuously proactive in providing customers with the best service or quality. Organizations have to look for the best way to find out the shifting requirements of their customers.

Six Sigma Green Belts determine the best method for gathering information about the voice of the customer. The term "Voice of the Customer" in Six Sigma methodology is used to describe the needs of the customer. Identifying the voice of the customer is therefore the process used to obtain feedback and input from the customer.

There is a broad array of techniques that help an organization collect customer input, prioritize requirements, and provide feedback to the organization. These are known as voice of the customer (VOC) tools. They include sample surveys, focus groups, critical-to-quality trees, etc.

Customer Satisfaction and Kano Model

Definition

The Kano model of customer satisfaction was developed by a Japanese researcher, Noriaki Kano.
Kano developed a relationship between customer satisfaction and quality. Kano pointed out that
customer needs are complex and intricate and they are each related to customer satisfaction.
Customers perceive some product attributes to contribute to their satisfaction more than others.
Kano describes this relationship in a diagram.

The Kano model is a quality measurement technique used to measure customer satisfaction. It is a useful tool to evaluate and prioritize customer requirements. Not all requirements are equally important to all customers, and the attributes of a product will be ranked differently by different customers in their need charts. The model is therefore used to rank requirements according to the importance of each segment's needs, differentiating between must-have attributes and differentiating attributes.

The Kano model is a potent quality measurement method used to envision product attributes and
aid in quality improvement. Different customer responses can be mapped on the graphical model.
It helps quality teams focus on adding differentiating features to the product, so that their product gains a competitive advantage.

Applying the Kano model in the workplace to learn customer requirements will change the Green Belt's viewpoint on customer satisfaction. The team will be able to know which values and services the customer covets the most, and how to plan operations in the Six Sigma program.

Product Attributes can be Classified as

Basic/Threshold Attributes

Threshold attributes are those which the customer normally assumes to be present in the product.
Their absence will cause dissatisfaction among customers. However, the customer will remain
neutral even if these attributes are provided in a better way. For example, refrigerators come with
freezers and door handles. A sleeker handle or a frost-free freezer will not cause any more
satisfaction in the customer.

Performance/Linear Attributes

The presence of performance attributes is directly proportional to customer satisfaction: there are high levels of satisfaction if their performance is high, and dissatisfaction if their performance is low. An example is the time spent waiting in line at the check-in counter of an airport terminal. This attribute is represented as a linear and symmetric line in the graph. A high level of execution of these linear attributes can add to product competitiveness.

Exciters/Delighters

These are hidden attributes which delight the customer and lead to high levels of satisfaction if they are present, but do not cause any dissatisfaction if the product lacks them. These 'delighters' are the surprise elements in the product, and companies can use this attribute to set their product apart from their competitors. In the course of time, as expectations rise, today's delighters become tomorrow's basics. For example, a car with an inbuilt television may be today's delighter but can be a basic tomorrow.

In order to survive cut-throat competition, and to lead in the market, companies need to be constantly innovative and keep researching the level of quality customers currently expect. A higher grade of execution of performance attributes, and the inclusion of one or more delighters/exciters, will provide stiff competition to similar players.

In the figure below, the entire basic attribute curve lies in the lower half of the chart, indicative of
neutrality even with improved execution, and dissatisfaction with their absence. The exciters curve
lies entirely in the upper part of the graph. The more the exciters, the higher is the level of
satisfaction. The performance attributes are shown as a 45° line passing through the center of the
graph.

2.2 Definition, Survey Construction and Margin of Error

Sample Survey

Definition

Another voice of the customer tool, or method for gathering customer feedback, is the sample survey.

A set of written questions that is sent to a group of selected customers to obtain answers that will
enable corporate decision making is called a sample survey. Data are collected from a sample of a
universal population to reach a conclusion about the inherent features of the whole population.

It is an important tool to determine which requirements are most important to the customer.

The organization might interact with its customers to assess their sensitivity to the company's product, measure the quality of service, or determine if the current level of quality is at par with the company's identified goals. The organization might want to judge why employee behavior or morale is changing, what the customer's buying experience is, or what the responses to a new product are.

According to Thomas Pyzdek, 'Sample surveys are usually used to answer descriptive questions ("How do things look?") and normative questions ("How well do things compare with our requirements?").'

It is not humanly or logistically possible to count everything that happens in a process. Sample surveys enable the process leader to collect candid responses from customers, analyze them, and reach a conclusion from them. Sampling saves time and money and gives reliable data for analyzing a problem.

Determining What to Measure

The first natural thing to decide in a sample survey is what to measure. For example, a five star
chain may want to measure the quality of food served in its 24 hour restaurant. The customer care
division for a telecom brand may want to assess how the billing system can be made less
erroneous.

Selecting the Sample

After determining what to measure, the process team has to decide what kind of sample to select.
A sample is a subset of a universal population, like the number of night-time customers out of the total customers in the round-the-clock restaurant of a five star chain.

The sample may be randomly collected, which ensures unbiasedness, since each element or respondent has an equal chance of being selected. The sample may also be a representative sample, which is a sample that exactly reflects a larger population. To truly represent a population, the sampler and analyzer of the data must take into consideration variables like a diverse and changing population.

Survey Construction

Once the sample is selected, the survey has to be constructed.

1. The first important step in constructing a successful survey is to develop the measure of the survey. Measures of responses can be taken in the form of:

Open-ended questions- Here the respondents frame their own answers without any
limitations

Ranking Questions- The response choices are ranked according to some criterion, like
importance.

Fill-in-the blank questions

Yes/No questions

Likert’s scale- This response type is used to determine the strength of a response.
Likert stated that a scale of 1 to 5 is better than a range of 1 to 10 because people tend to ignore larger ranges and hardly use the entire range of choices, instead opting for very low values, like 1 or 2, or very high values, like 9 or 10.

Semantic differentials - This response type measures the respondent's choice between two bipolar values. The values that may lie between the two possible options are not stated. The values are usually two contrasting adjectives, for example, 'Very Good' and 'Very Bad'.

2. After selecting the sample, the next thing to be done is to design the sample. Sample design means determining how many persons or elements (respondents) are to be included in the survey to ensure its success.

3. The next step would be to develop the questionnaire. A questionnaire must truly reflect the
situation facing the company and be aimed to fulfill the goal of the survey.

4. After that, the questionnaire is tested on a small sample, also known as a pilot study. This is
done to test the accuracy and clarity of questions.

5. Now the final questionnaire will be produced.

6. This would be followed by preparation of mailing material and dispatching the same to the
sample population.

7. The next obvious step would be to collect the filled up questionnaires, also known as data.

8. Data collected has to be collated and reduced to enable analysis.

9. The last step would be to analyze this data.

Tactics to Develop Questions

Surveys should be designed with the proper professional expertise or survey experience. Question
framers should meticulously study the respondent group and ensure that the area under discussion
is understood by the respondents so that they give appropriate responses.

The format of the questions should be in line with the focus of the survey. The questions should be
relevant, concise, clear and in a language the respondent understands.

To get unbiased answers, the question itself should be unbiased. The answer choices should be
clear and mutually exclusive so that it becomes easy to understand and choose from.

The responses should also be quantified wherever possible.

Margin of Error

When a survey is conducted, a sample is selected and the data gathered from the survey is
generalized for the larger population. Margin of error is a tool used to determine how precise the
collected data are, or how precisely the survey measures the true feelings of the whole population.

For example, it is not logistically possible for an organization to measure the entire population, say
of customers, on the satisfaction level of using a particular product. Rather, samples of customers

are taken from the whole population of customers. Margin of error is used to gauge how precisely this sample reflects the whole population.

A margin of error is calculated for one of three confidence levels: 99%, 95% or 90%. The most commonly used is the 95% confidence level. The larger the margin of error, the less precisely the survey reflects the true value for the whole population.

For example, in a pre-poll survey, the larger the margin of error, the lower is the level of confidence that the survey's reported percentage will be close to the poll's true percentage.

The formula for calculating the margin of error at the 95% confidence level is:

Margin of error = 1.96 × √( p (1 − p) / n )

Where, p = Estimate of the proportion of respondents answering a question (the percentage expressed as a decimal)

n = Number of respondents that answered the question

A short worked example follows the three steps listed below.

The formula involves three basic steps:

1. The amount of variability in the sample denoted by p

2. The standard degree of precision of a 95% confidence interval

3. The sample size denoted by n

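As a minimal sketch of this calculation (the 60% response and the sample of 400 respondents are hypothetical figures, and 1.96 is the z-value corresponding to the 95% confidence level), the margin of error can be computed as follows:

import math

def margin_of_error(p, n, z=1.96):
    # p: proportion of respondents giving a particular answer (0 to 1)
    # n: number of respondents that answered the question
    # z: z-value for the chosen confidence level (1.96 for 95%)
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: 60% of 400 respondents said the pizza arrived on time
moe = margin_of_error(p=0.60, n=400)
print(f"Margin of error: +/- {moe * 100:.1f} percentage points")   # about +/- 4.8

In other words, the survey's reported 60% would be read as 60% plus or minus about 4.8 percentage points at the 95% confidence level.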
2.3 Focus Groups and Critical-to-Quality Tree

Focus Groups

Another tool for the Six Sigma team to know customer requirements or employee information is
through focus groups. A focus group is a selected group of customers who are unfamiliar with each other, brought together to answer a set of questions. They are hand-picked because they
have a number of common characteristics that are relevant to the subject of study of the focus
group. The discussion is conducted several times to ascertain trends in product and service, and in
knowing customer requirements and perceptions.

The facilitator of the focus group creates an environment which permits different perceptions and
opinions, without threatening or pressurizing the participants. The aim of the focus group is to
reach a consensus about a particular area of interest by analyzing these discussions.

Six Sigma focus groups are helpful during the strategic planning process, when testing new ideas with customers, when generating information for surveys, when validating a CTQ tree (which shall be described in the next step), etc.

Advantages of Focus Groups

1. Focus groups generate ideas, because a good facilitator can follow up with additional questions based on the participants' answers.

2. Focus groups also stimulate a greater number of ideas than individual interviews do.

Disadvantages of Focus Groups

1. An inexperienced and untrained facilitator may not be able to analyze the result.

2. Bringing groups together under one physical location might be more costly than what the
company envisioned.

3. Dominating personalities may influence the opinion under discussion.

4. There may be difference of opinion from group to group, making it difficult to gather a
consensus on the issue under discussion.

Critical-to-Quality Tree

Six Sigma is about looking for causes. The aim behind a Six Sigma process is to find the reason
behind a particular phenomenon. The team tries to find out what's "critical" to the success of the
process chosen for improvement.

A Critical-to-Quality (CTQ) tree is another tool to find out customer requirements, or which critical-to-quality factors are being addressed. It helps the team translate the general needs of the customer into more specific needs. It is useful in brainstorming and confirming the needs of the customer of the process targeted for improvement. It also helps ensure that the voice of the customer is captured in the customer's own terms rather than collected and reported against internal standards.

What's Critical?

Depending on what is being analyzed, the word 'critical' could have diverse connotations, ranging from the satisfaction of the customer to the quality and dependability of the product. It could also be the cycle time of manufacture of the product, or the cost of the final product or service.

The following table lists a number of “CTX”s, or the critical variables that influence a product.

Steps in Creating a Critical to Quality Tree:

1. Step one is to identify the customer. First the team has to decide whether the identified customers need to be segmented. The need for segmentation arises when different customers have different requirements. In the following example, the pizza delivery process is used. The customer ordering a pizza may be a high school graduate or an office executive. Here there is no need for segmentation because the requirements in getting a pizza delivered are almost the same across all ages.

2. Step two is to identify the customer's need. The customer's need is in level 1 of the tree as shown in the picture. The high school graduate is in need of a pizza and so calls up a pizza delivery
outlet.

3. The next step is to identify the first set of requirements for this need. Two to three measures
need to be identified to run the process. In the example, the data collected by the process leader
indicated that the speed and the accuracy of delivery, the quality of the pizza, the variety in the
menu and add-ons in the menu card were crucial requirements while ordering a pizza. Thus the
first three branches of the CTQ tree will be formed with these factors. These are in level 2 of the
CTQ tree.

4. The step that follows is to take each level 2 element in the CTQ tree to another degree of detail. In the example, the process leader found out that while delivering the pizza on time, it was necessary that the correct variety be delivered. He also found out that it was important for the customer that the pizza be hot, taste good and look good. Similarly, data regarding the range of items in the menu pointed out that the types or numbers of items and the add-on condiments were important. All these factors are to be put in level 3 of the CTQ tree as shown in the figure.

5. The final step is to validate the requirements with the customer. The CTQ tree is created as a result of the project team's brainstorming. The needs and requirements need to be validated with the customer because, in many cases, what the team considers important may not be equally important to the customers. Customer validation can be done through focus groups, sample surveys, customer interviews, etc. (A simple sketch of the resulting tree follows these steps.)

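As a simple sketch, the levels of the pizza CTQ tree described above can be held in a nested data structure; the exact wording of each branch below is illustrative only:

# Level 1: the customer need; level 2: the first-level requirements;
# level 3: the more specific, measurable requirements under each branch
ctq_tree = {
    "need": "Get a pizza delivered at home",
    "requirements": {
        "Speed and accuracy of delivery": [
            "Delivered within the promised time",
            "Correct variety delivered",
        ],
        "Quality of the pizza": [
            "Pizza is hot",
            "Tastes good",
            "Looks good",
        ],
        "Variety and add-ons in the menu": [
            "Range of items on the menu",
            "Add-on condiments available",
        ],
    },
}

# List every level 3 CTQ that has to be validated with the customer
for branch, ctqs in ctq_tree["requirements"].items():
    for ctq in ctqs:
        print(f"{branch} -> {ctq}")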
Customer One on One Interview

A customer one-on-one interview asks a customer a number of questions that will validate the CTQ tree. The advantage here is that the interviewer can record responses and follow up on answers to get more detail. The disadvantages of this method are that it is expensive and that it requires experienced interviewers who can spontaneously raise the further questions that would logically follow.

Customer Complaints

Customer complaints and suggestions provide feedback to the management, which may be
positive or disapproving. They provide opportunities for individual customers to have their say.
These methods are characterized by selection bias, so they seldom provide statistically correct
information.

A company might also communicate with its customers and employees through case studies, field
experiments and by the already available data. New technologies like data mining and data
warehousing are also used.

After the Green Belt project team identifies who the customers are for the project and what their
requirements are of the project, a process map of the process targeted for improvement has to be
created.

2.4 Project Definition and Process Mapping

Project Definition

Every project or process to be undertaken has to meet certain criteria. Project definition means
documenting key information about the project. The project charter, a document issued by senior
management, mainly the project sponsor, specifies the project definition. The project charter
empowers a project manager with the ability to use the various resources available to the project.

The project charter should include the following:

A full account stating the motive of undertaking the project

The final product of the process/project and its characteristics

The association linking the business need and the final product/result

The approval to apply the various resources available to the project/process

A good Six Sigma project has to meet certain criteria:

They have to be given the go ahead by the management

They should be in alignment with the organization's goals

They should have clearly demarcated goals

They should be of manageable size

Process Thinking

Process Thinking is a methodology which helps organizations improve their overall efficiency in achieving the desired goals. It usually begins with the top management identifying the key processes in the project that are to be improved.

Two things should be kept in mind while selecting a process. One, it should recognize those
particular performance parameters which will help the company financially. Two, it should aim to affect customer satisfaction positively.

A process can be measured on criteria such as defects per million opportunities, cost saving, capacity of the process or the time taken for production of a unit. Process thinking is a cross-functional approach and is totally focused on the outcome.

The various tools used in the Define phase when undertaking a project are described
below:

Process Mapping

Process mapping is an illustration of the flow of work. A process map may illustrate a small part of
the operation or the complete operation. It consists of a series of actions which change some given
inputs into the previously defined outputs.

Process maps increase the visibility of any process. This in turn improves communication. Maps
show the present flow of work in an organization. They can also be used to chart a desired or
improved flow of work.

Process mapping is a well-known technique which is frequently used to create a common vision to
improve business results. It is a fast and effective way to minimize flaws, maximize output and improve customer satisfaction.

The important steps involved in creating a process map are as follows (Galloway, 1994):

Select a process to be mapped

Define the process

Map the primary process

Map alternative path

Map inspection points

Use map to improve the process

Process Mapping and Flow Chart

A process map is illustrated with a flow chart. A flow chart is a diagrammatic representation of the nature and the flow of work in any organization or process. A flow chart, also known as a flow diagram, has numerous benefits:

A flow chart helps in explaining to people how the process works

A flow chart can help in the training of newly appointed employees according to the
standardized procedures of the organization

Problem areas are easy to identify because in the flow chart, all the process steps are
diagrammatically represented. This also helps in simplifying and refining the process.

A flow chart uses symbols. Each symbol has a specific meaning. These symbols are connected by
arrows which indicate the flow of the process. The symbols are described below:

Oval -

indicates the start and end point of the process. They usually contain the words START and STOP,
or END

Rectangular Box -

represents the process or an activity in the process

Parallelogram -

represents the input or the output of the process

Diamond -

represents a decision point in a flow chart. It has two arrows coming out of it, corresponding to yes
and no or true or false.

Circle -

represents a place marker. It is used when the flowchart continues on another line or page. This symbol is then numbered and placed at the end of the line or the page.

On the next line or page, this symbol is used with the same number so a reader of the chart can
follow the path.

A Flowchart typically helps to understand the process and reveals ways to enrich it. This is possible
only if it is used to analyze what is currently taking place in the process. Correctly interpreting the
Flowchart will help in:

Identifying the people involved

Developing various theories regarding the causes

Identifying various techniques to improve the process

Determining ways to apply changes to the process

Imparting training on the way the process works or should work

Reducing Cycle Time through Process Mapping

For achieving a reduction in the cycle time, a cross functional process map needs to be developed.
This means that a team of individuals from every division is selected. They in turn map each step
of the process of product development from beginning to end. Two kinds of maps are developed: a map of the current functioning of the process and another map of the expected (improved) process.

The first process map helps to identify the problems in the current system, and to improve the
current system. The expected process map explains each step in detail.

During the mapping session a list of actions is also created. This list defines in detail the changes
required to change the process from the current map to the expected map.

Reducing Cycle Time through Value Added Flow Charts

A value added flow chart is a method to improve cycle times, and eventually productivity, by visually sorting out value-adding steps from non-value-adding steps in a process. It is a very simple yet effective technique. The steps are described below, and a small numeric sketch of the cycle-time arithmetic follows the list:

List all the steps involved in the particular process. To do this, draw a diagram box for
every step from start to the end.

Determine the time currently required for the completion of every step of the process. Add this time to each box. (See Figure 4)

Determine the total cycle time by adding the time taken by each step.

Some of the steps listed above are those which do not add any value to the process. Such steps include inspecting, checking, revising, stocking, transporting, delivering, etc.

Shift such boxes (as explained above) to the right of the flow chart. (See Figure 5)

Determine the total non-value-added time by adding the time taken by each non-value-adding step.

Some of the steps listed above are those which add value to the product. Such steps
are assembling, painting, stamping etc.

Shift such boxes (as explained above) to the left of the flow chart. (See Figure 5)

Determine the total value added time by adding the time taken by each value adding
step.

Construct a pie chart to display the percentage time taken by non-value adding steps.
(See Figure 6)

Using benchmarking and analysis, decide the target process configuration.

Pictorially represent the target process and calculate the total target cycle time (See
Figure 7)

Explore the non value adding steps and identify the procedures which could be
trimmed down or can be done away with to save time.

Explore the value adding steps and identify the procedures which could be improved
to reduce the cycle time.

Make a flow chart of the enriched process. Keep looking for further loopholes in the
process till the target is achieved.

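As a small numeric sketch of the cycle-time arithmetic described above, the step names, times and value-adding classification below are hypothetical:

# Each step: (name, time in minutes, adds value?)
steps = [
    ("Take order", 2, True),
    ("Check stock", 3, False),
    ("Assemble pizza", 6, True),
    ("Bake", 10, True),
    ("Inspect", 2, False),
    ("Wait for driver", 8, False),
    ("Deliver", 15, True),
]

total = sum(t for _, t, _ in steps)
value_added = sum(t for _, t, va in steps if va)
non_value_added = total - value_added

print(f"Total cycle time:     {total} min")
print(f"Value-added time:     {value_added} min ({value_added / total:.0%})")
print(f"Non-value-added time: {non_value_added} min ({non_value_added / total:.0%})")

Steps such as inspecting and waiting would then be examined first, since trimming them reduces the cycle time without touching the value-adding work.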
2.5 Process Mapping Continued...

Pareto Charts

A Pareto chart is a specialized vertical bar graph that displays collected data in such a way that the points most important to the process under improvement stand out. It exhibits the comparative significance of all the data. It is used to focus on the largest improvement opportunity by emphasizing the "vital few" problems as opposed to the many others.

The Pareto chart is based on the Pareto principle. Before getting to know the Pareto chart, first the
Pareto principle has to be understood. The Pareto principle was proposed by management thinker
Joseph M. Juran. It was named after the Italian economist Vilfredo Pareto, who observed that
80% of the wealth in Italy was owned by 20% of the people.

This principle can be applied to work related to business:

“80% of the business defects are caused by only 20% of the errors”

“80% of your results are produced from 20% of your efforts”

“80% of the profit to a company is earned by 20% of the customers”

“80% of the complaints to a business are caused by 20% of the products or services”

The Pareto chart is a bar graph and is used to graphically summarize and display the relative
importance of the differences between groups of data. It is useful for non-numeric data.

This principle is applied to business operations because it is assumed that a large percentage (80%) of the problems is caused by a small percentage (20%) of the causes. The Pareto chart therefore helps by narrowing the analysis down to the vital few areas of concern.

For understanding this, let us take an example. Suppose a multi-national company dealing in the home delivery of pizzas wants to check the problem areas while delivering the pizzas.

The data collected is displayed in the following table:

The next step in preparing a Pareto chart is to calculate the cumulative percentages of the data
supplied above. The following table can be derived from the data given above.

Finally a line graph can be prepared to see what the main problems are. The following line graph is
drawn from the preceding table data using MS Excel and plotted with the cumulative percentage
against the complaints of the customers. The X axis is plotted as complaints of the customers and
the Y axis as the cumulative percentage.

The final step is to recognize the few elements that cause most of the problems. Then the 80-20 rule of the Pareto analysis is applied, and a line is drawn from 80% on the Y axis until it meets the line
graph. When this line reaches the line graph, it has to be vertically dropped down to the X axis as
shown in the following graph.
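The cumulative percentages behind the graph can be computed in a few lines. As a hedged sketch, the complaint counts below are hypothetical (the original data table is not reproduced here); the script sorts the categories from most to least frequent and flags those falling at or below the 80% cumulative line:

# Hypothetical complaint counts for the pizza home-delivery example
complaints = {
    "Not hot": 30, "Late delivery": 24, "No extras": 19, "Wrong billing": 16,
    "Wrong pizza": 13, "Lesser ingredient": 10, "No delivery in the area": 8,
    "Rude staff": 7, "Packaging damaged": 7, "Order taken wrongly": 6,
    "Coupon not accepted": 5, "Other": 5,
}

total = sum(complaints.values())
cumulative = 0.0

# Sort from most to least frequent, then accumulate percentages.
# Categories whose cumulative percentage is at or below 80% are the "vital few".
for category, count in sorted(complaints.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100 * count / total
    marker = "<-- vital few" if cumulative <= 80 else ""
    print(f"{category:24s} {count:4d}   cumulative {cumulative:5.1f}%   {marker}")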

With the help of these identified problems, the root cause of the problem can be found out. Once
the root cause is pinpointed, the problem area can be worked upon.

All the problems that fall to the left of the 80% line are the few problems accounting for most of
the complaints. They are:

Not hot

Late delivery

No extras

Wrong Billing

Wrong Pizza

Lesser ingredient

No delivery in a particular area

These account for 80% of the problems encountered in the home delivery of the pizza. If these are
immediately taken care of, then 80% of the problems can be solved.

Pareto analysis helps in determining which problems to concentrate our efforts on.

2.6 The 7M Tools

The 7M Tools

The 7M tools, used for quality management, were developed in Japan in the 1970s. In America
these tools were known as the Management and Planning (MP) tools. These are seven powerful
management tools used in Six Sigma for ensuring quality and the constant development of the process.

Affinity Diagrams

The affinity diagram, or KJ (Kawakita Jiro) method, is one of the most extensively used Japanese management and planning tools. It was developed to establish meaningful groups of ideas from a
raw list. An affinity diagram is developed after a brainstorming session. It is used to systematize
large groups of information generated during the session into meaningful sets.

A standard affinity diagram arranges the ideas developed during the brainstorming session on its left section. These ideas are then grouped into different affinity sets on the right of the page. The precise basis on which an idea is assigned to a specific set is not critical: an idea may be placed in a group because of its similarity to another idea, and the same idea can be present in more than one group if that seems logical.

The affinity diagrams help to systematize the collective opinion of the team most efficiently.

Interrelationship Diagrams

Interrelationship Diagrams are used to examine the relationships between complex issues. The diagram illustrates the relationships between various factors, areas, or processes. Analysis with an Interrelationship Diagram helps distinguish between the elements which operate as root causes and those which are the outcomes of the root causes.

Considering the “Pizza Home Delivery” example again, we can derive an interrelationship diagram
here and find out the interrelationship between the various factors.

Steps in generating an Interrelationship Diagram:

1. The group has to agree on the particular issue or question.

2. Write down all the factors on chits of paper.

3. Link each factor to all others. An arrow, also known as an "influence arrow", can be used to link related factors.

4. Draw the “influence arrows” from the factors that influence to those which are influenced.

5. If two factors influence each other, the arrow should be drawn to reflect the stronger influence.

6. Count the arrows.

7. The elements with the most outgoing arrows will be root causes or drivers.

8. The ones with the most incoming arrows will be key outcomes or results.

From the above interrelationship diagram, it is clearly visible that the largest number of arrows originates from rude and incompetent staff. This is the root cause of the outcome, that is, low sales and, eventually, a fall in profits.
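As a hedged sketch of the arrow-counting step, each influence arrow can be recorded as a (from, to) pair; the factor with the most outgoing arrows is the driver and the one with the most incoming arrows is the key outcome. The factors and arrows below loosely follow the pizza example and are illustrative only:

from collections import Counter

# Hypothetical influence arrows: (influencing factor, influenced factor)
arrows = [
    ("Rude and incompetent staff", "Late delivery"),
    ("Rude and incompetent staff", "Wrong orders"),
    ("Rude and incompetent staff", "Low sales"),
    ("Late delivery", "Low sales"),
    ("Wrong orders", "Low sales"),
    ("Low sales", "Fall in profits"),
]

outgoing = Counter(src for src, _ in arrows)
incoming = Counter(dst for _, dst in arrows)

driver = max(outgoing, key=outgoing.get)    # most outgoing arrows -> root cause
outcome = max(incoming, key=incoming.get)   # most incoming arrows -> key outcome

print("Likely root cause (driver):", driver)
print("Key outcome (result):", outcome)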

Tree Diagrams

Tree diagrams are used to break down a large idea, problem or solution into smaller and more
manageable parts. This helps to understand the problem or implement the solutions more
effectively. To accomplish a certain purpose, a tree diagram is prepared to determine all the
necessary steps.

A tree diagram is also used to display the ultimate goal of the problem on the left side and the
process of achieving it on the right side.

Given below is a simple example of a tree diagram. Here the Monthly Total Income is divided into
various categories to track the expenditure.

Prioritization Matrices

Prioritization matrices are a combination of a tree diagram and a matrix chart. They are prepared to logically narrow down the focus of the team, and are designed before exhaustive execution planning is done. Prioritization matrices are meant to be used when:

Vital root causes are already recognized and the most important ones have been
identified.

When the issues that have been generated from the brainstorming session are
complex and are strongly interrelated

The resources for the progress are limited and only a vital few activities must be
focused upon

Example

The Sales Department of a well-known pizza making company found certain problems when the company's motivational survey was held. The company decided to focus on the problematic areas using a Prioritization Matrix.

The main problems are listed along with the various options, and the rating of each option is multiplied by the weight assigned to each criterion.

A Prioritization matrix is like a grid, showing the various options along the top and the decision criteria down the left side, with a weight mentioned against each decision criterion. The final score is calculated by multiplying the ratings with the weights for each criterion. Once the weighted ratings have been summed up, the option with the highest score is chosen as the best solution.
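A minimal sketch of the weighted-scoring arithmetic follows; the criteria, weights and ratings are hypothetical, with a higher rating meaning a more favourable option on that criterion:

# Decision criteria with their weights (hypothetical)
criteria = {"Impact on motivation": 0.5, "Cost to implement": 0.3, "Time to implement": 0.2}

# Ratings of each option against each criterion on a 1-5 scale (hypothetical;
# a higher rating is more favourable, e.g. 5 under cost means very low cost)
options = {
    "Revise incentive scheme": {"Impact on motivation": 5, "Cost to implement": 2, "Time to implement": 3},
    "Team-building workshops": {"Impact on motivation": 3, "Cost to implement": 4, "Time to implement": 4},
    "Flexible shift timings":  {"Impact on motivation": 4, "Cost to implement": 5, "Time to implement": 5},
}

# Final score = sum of (rating x weight) over all criteria; the highest score wins
scores = {name: sum(ratings[c] * w for c, w in criteria.items())
          for name, ratings in options.items()}

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:25s} {score:.2f}")
print("Chosen option:", max(scores, key=scores.get))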

2.7 Matrix Diagrams and Activity Network Diagrams

Matrix Diagrams

A Matrix diagram is an analysis tool that helps in identifying the relationship between two or more sets of elements. A matrix diagram is a representation of elements in tabular form. The left-most column and top-most row contain the inter-related elements, and the rest of the cells contain the symbols or numbers that represent the strength of the relationship between the elements.

A matrix diagram can be used to compare any number of sets of elements

It helps in comparing the efficiency and effectiveness of the options

It can also be used to set the priorities

Different symbols can be used in the matrix diagram to depict the comparison level
and weightage for the items being compared

The different shaped matrices available are L, T, Y, X, C, etc.

An L-shaped matrix relates two sets of elements to each other, and sometimes one set of elements to itself. The elements are compared by placing them in the first column and the top row.

A T-shaped matrix relates three sets of elements in such a way that sets X and Y are related to Z, but X and Y are not related to each other.

A Y-shaped matrix also relates three sets of elements, in such a way that each set is related to the other two sets. Suppose X and Y are related to Z; then X and Y must also be related to each other.

An X-shaped matrix relates four sets of elements, and each set is related to two other sets of elements in a circular manner.

A C-shaped matrix relates three sets of elements simultaneously, in a 3-dimensional manner.

Process Decision Program Charts

In any kind of planning there might be many things that could go wrong. The Process Decision Program Chart, or PDPC, aids in foreseeing the problems which could be encountered in a plan under development. To avert and avoid such problems, countermeasures are developed beforehand. PDPC helps revise the plan with the intention of avoiding the problem, or of being ready with a solution to counter the problem.

A Process Decision Program Chart should be prepared before implementing any plan, particularly if the plan is sizeable and complex. It is also very useful when the proposed plan has to be finished on schedule.

Steps in making Process Decision Program Charts

Develop a high level tree diagram of the plan

In the final level of the tree diagram, for every task cited, brainstorm the problems that could be encountered. Classify the criteria for identifying problems, i.e., those problems which affect the scheduled completion date.

Appraise all the problems and remove the impossible ones and those with trivial
outcomes. Classify those risks which would need a countermeasure. Illustrate these
problems as the next level in the tree diagram linked to the tasks cited.

Now brainstorm the countermeasures for the problems illustrated. Isolate those countermeasures which would minimize the problem. The countermeasures may be those that would introduce changes to the plan, or those that would provide a solution if the problem occurred. Illustrate these countermeasures as the next level in the detailed tree diagram. They should be illustrated with irregular outlines.

Determine the feasibility of each countermeasure. It should be measured on criteria such as price, time required, effort needed for execution, efficacy, and so on. For example, those countermeasures which are cost effective should be implemented, weighing the amount of time a problem would cost against the amount of time a countermeasure would save.

Activity Network Diagrams

Activity Network Diagrams are also called arrow diagrams, network diagrams, activity charts, node diagrams, etc.

A version of the Activity Network Diagrams is also known as PERT (Program evaluation and review
technique) chart.

A pictorial representation of the chain of actions to be executed to complete a project is known as an Activity Network Diagram. It is only relevant in those projects in which the actions are known beforehand. It helps the team to apply the most organized path or sequence.

This diagram displays the interdependencies between various tasks. Boxes and arrows are used for
this purpose. Arrows are used to show the sequence of the tasks. In case an arrow is emerging
from one task and going to another, it means that the task from which the arrow is originating
must be finished before the next task (to which the arrow is pointing). The next task cannot begin
until the previous task is complete.

Steps in developing an Activity Network Diagram:

Discuss all the tasks that are needed to complete the project. The outputs of the tree
diagram can also be used.

Write down all the tasks on a Post-It note.

Ascertain which task is to be carried out first and position it on the left side of the work
table.

Find out if there are some tasks which can be done simultaneously with this task.
These can be placed vertically above or below the previous task.

Find out the next task to be accomplished and place this task to right of the first task
card. Find out if there are some tasks which can be done simultaneously with this task.
These can be placed vertically above or below the previous task.

Repeat the above process till all the tasks are arranged in a sequence.

Number each task and note it down on the task card or the post-it.

Join the tasks in a sequence with arrows.

Calculate the time to complete each job and note it on the respective task card or post-it. Use the same time unit for all the tasks, i.e., weeks, days or months.

Calculate the project’s earliest possible completion time by working out the critical
path. The critical path of any project is the longest path taken from the start to the end
of the project.

Work out the earliest time that each task can begin and conclude. These are called Foremost Start (FS) and Foremost Finish (FF). The Foremost Finish for each task is FS + the time taken to complete the task. Draw a separate box for each task. Make a time box divided into four quadrants as shown in the figure below.

Work out the latest time that each task can begin and conclude without disturbing the project timetable. These are known as Latest Start (LS) and Latest Finish (LF). To calculate these, work backwards from the latest finish date to the latest start date. (A small sketch of these calculations follows this list.)

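As a hedged sketch of the Foremost (earliest) and Latest start/finish arithmetic, the small task network below is hypothetical; durations are in days and each task lists its predecessors:

# Hypothetical tasks: name -> (duration in days, list of predecessor tasks)
tasks = {
    "Task 1": (3, []),
    "Task 2": (2, []),
    "Task 3": (4, ["Task 1", "Task 2"]),
    "Task 4": (2, ["Task 3"]),
    "Task 5": (3, ["Task 3"]),
    "Task 6": (1, ["Task 4", "Task 5"]),
}

# Forward pass: Foremost Start (FS) and Foremost Finish (FF)
FS, FF = {}, {}
for name, (dur, preds) in tasks.items():
    FS[name] = max((FF[p] for p in preds), default=0)
    FF[name] = FS[name] + dur

project_end = max(FF.values())   # earliest possible completion time

# Backward pass: Latest Finish (LF) and Latest Start (LS)
LF, LS = {}, {}
for name in reversed(list(tasks)):
    successors = [s for s, (_, preds) in tasks.items() if name in preds]
    LF[name] = min((LS[s] for s in successors), default=project_end)
    LS[name] = LF[name] - tasks[name][0]

for name in tasks:
    flag = "critical" if LS[name] == FS[name] else ""
    print(f"{name}: FS={FS[name]} FF={FF[name]} LS={LS[name]} LF={LF[name]} {flag}")

print("Earliest possible completion time:", project_end, "days")

The tasks with zero slack (LS equal to FS) form the critical path, and their durations add up to the earliest possible completion time.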
Advantages of Activity Network Diagrams

An Activity Network Diagram is very useful as it provides the following information in advance:

Probable project completion time

The Probability of completing a project prior to the specified date

Task start and end dates, together with the latest start and end dates, without affecting the project completion time

Tasks that can be completed simultaneously:

Task 1 and Task 2, Task 4 and Task 5, Task 7 and Task 8

Chapter 3 - The Measure Phase
3 The Measure Phase

Objectives

The key objectives of the Measure Phase are to:

Identify key measures, make a data collection plan and execute the same

Establish current process capability, improvement and goal

Display variation

Data Collection Plan and Execution

The first question that comes to one's mind after reading the phrase "Measure Phase" is what is to be measured. The process that is to be measured should be in line with customer needs. The benchmarks set by the measuring team should be harmonious with the customers' expectations.

The data collection plan is built while measuring the process. A data collection plan includes:

A list of questions, which should be answered by the data collected

A brief overview of the project, along with the problem statement

Determining the data type which will be suitable for the data a process is generating

Determining how many iterations of data collection will be enough to show the change in the chart

A list of the measures to be taken, once the data has been collected

A good data collection plan facilitates the accurate and efficient collection of data.

After the data is collected, it must be determined what kind of data the particular process yields. Before measuring data, one should know the type of data so that an appropriate tool can be applied to it.

Types of Data

Attribute Data

Attribute data, also known as category data, is the data which cannot be broken down into smaller
units. No additional meaning can be added to such data. Typically such data is counted in whole
numbers.

For example: The number of family members cannot be 4.5.

Some other examples of attribute data are:

Zip codes in a country

“Sweet” or “Sour” taste of fruit

Total number of candies in a pack

“Regular”, “Medium” or “Large” sizes of pizza

“Fat” or “Thin” attributes given to a person

Variable Data

Variable data, also known as continuous data, is data which can have any value on a continuous scale. Variable data can have almost any numeric value and can be meaningfully divided into finer increments or decrements, depending upon the precision of the measurement system.

For example: The height of a person on a ruler can be read as 1.2 meters, 1.05 meters or 1.35
meters.

Note

The important distinction between attribute data and variable data is that variable data can be
meaningfully added or subtracted, while attribute data cannot be meaningfully added or
subtracted.

Data Collection Tools

The best way to analyze data and measure a process is with the help of charts, graphs, or pictures.
Charts and graphs are the most commonly used tools for displaying data as they offer a quick and
easy way to visualize what the data characteristics are. They show and compare changes and
relationships. Various techniques used for analyzing data are:

Trend Charts

Trend charts (also known as run charts) are typically used to display different trends in data over
time. A trend chart is actually a quality improvement technique and is used to monitor processes.
A goal line is also added in the chart to define the target to be achieved. It can lead to improved
process quality. One of the main advantages this chart offers is that it helps in discovering patterns
that occur over a period of time.

The various steps involved in creating a trend chart are:

Data Gathering

The data should be collected over a period of time and it should be gathered in a chronological
manner. The data collection can start at any point and end at any point.

Data Organizing

The collected data is then integrated and is divided into two sets of values, i.e., x and y. The
values for the x-axis represent time, and the values for the y-axis represent the measurements taken
from the source of operation.

Preparing the Chart

The y values versus the x values are plotted, using an appropriate scale that will make the points
on the graph visible. Next, vertical lines for the x values are drawn to separate time intervals such
as weeks. Horizontal lines are drawn to show where trends in the process, or in the operation,
occur or will occur.

Interpreting the Chart

After preparing the chart, the data is interpreted and conclusions are drawn that will be beneficial
to the process or operation.

Example

Suppose you are the new manager in a company and you are disturbed by the trend of certain employees coming in late. You have decided to monitor the employees' punctuality over the next four weeks. You decide to note down by how much time they are late every day (on an average basis) and then construct a trend chart.

Data Gathering

Cluster the data for each day over the next four weeks. Record the data in an ordered manner:

Organizing Data

Determine what should be the values on x-axis and what should be the values on y-axis. Assume
day of the week on the x-axis and time on the y-axis.

Preparing the Chart

Plot the y values versus the x values on a graph sheet (on paper) or using another computer tool
like Excel or Minitab. Draw horizontal or vertical lines on the graph where trends or deviations
occur.

Interpreting Data

Conclusions can be drawn once the trend chart has been prepared. Results can then be interpreted
by the analysts in the analysis phase. It is very clear from the chart above that employees usually
take more time to reach office on Mondays.
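As a hedged sketch of how such a trend chart could be plotted (the lateness figures are hypothetical and the matplotlib library is assumed to be available), the goal line marks the on-time target:

import matplotlib.pyplot as plt

# Hypothetical average lateness (minutes) per weekday over the four weeks
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
late_minutes = [18, 9, 7, 8, 11]

plt.plot(days, late_minutes, marker="o", label="Average lateness")
plt.axhline(0, color="green", linestyle="--", label="Goal (on time)")
plt.xlabel("Day of the week")
plt.ylabel("Average minutes late")
plt.title("Trend chart: employee lateness")
plt.legend()
plt.show()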

Process Maps

Process mapping is an illustration of the flow of work. A process map may illustrate a small part of
the operation or the complete operation. It consists of a series of actions which change some given
inputs into the previously defined outputs.

A process map is illustrated with a flow chart. A flow chart is a diagrammatic representation of the
nature and the flow of work in any organization or process. A flowchart typically helps to
understand the process and reveal ways to enrich it. This is possible only if it is used to analyze
what is currently taking place in the process.

Note

To know more about Process maps, see Chapter 2- Green Belt- The Define Phase.

3.1 Histograms and Probability Plots

Histograms

A histogram is a bar graph. It is constructed from a frequency table and thus is also called a
Frequency Histogram. It depicts the distribution or variation of data over a range (range could be
in terms of age, size, length, number etc.).

The shapes of histograms vary depending on the choice of the size of the intervals. The horizontal axis depicts the range and scale of the observations involved. The vertical axis shows the number of data points in the various intervals, i.e., the frequency of observations in each interval. The values on the horizontal axis are the upper limits of the intervals of data points.

The main advantage of using a histogram is that it shows the scattering of the data, so one can see where the variable reaches a critical state. In the example discussed previously about employees who come late, a histogram can show how the data is dispersed (on a daily basis) over the duration of a month.
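As a minimal sketch, the frequency table from which such a histogram is drawn can be built by grouping the daily values into intervals; the 30 lateness values below are hypothetical:

from collections import Counter

# Hypothetical daily lateness values (minutes) over a month
lateness = [5, 8, 12, 7, 15, 22, 9, 6, 11, 14, 18, 7, 5, 25, 10,
            9, 13, 8, 6, 17, 20, 12, 7, 9, 11, 16, 8, 10, 14, 6]

# Group the values into 5-minute intervals and count the frequency in each
width = 5
freq = Counter((value // width) * width for value in lateness)

# Print the frequency table; each histogram bar is drawn to this height
for lower in sorted(freq):
    print(f"{lower:2d}-{lower + width:<2d} min | {'#' * freq[lower]} ({freq[lower]})")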

Attribute Control Charts

Certain characteristics of a process cannot be conveniently represented numerically. In such cases, each item inspected is classified either as a "conforming unit" or a "nonconforming unit" with respect to the specifications on certain quality characteristics. Another quality characteristic criterion would be sorting units into "non-defective" and "defective" categories. Quality characteristics of this type are called attributes. At times, attribute charts make it possible to skip precise measuring devices and time-consuming measurement procedures.

Note

There is a difference between "non-conforming" units and "defective" units. A "non-conforming" unit is one which deviates from its engineering specifications but may be functioning just fine, whereas a "defective" unit is one in which a part is not functioning as desired.

There are different types of attribute control charts:

C- Charts

This control chart deals with the number of defects and nonconformities produced by a
manufacturing process. In this chart, the number of defects is plotted per unit (a unit could be a
day, a month, a year, a batch, a machine, a process etc.). While using this chart, it is assumed
that the number of errors in a quality attribute is rare and the nonconforming events are
independent.

P-Charts

This control chart deals with the proportion or fraction of defective product. In this chart, the percentage of defects is plotted per unit (a unit could be a day, a month, a year, a batch, a machine, a process, etc.). While using this chart, it is assumed that the number of errors in a quality attribute is not very rare.

U-Charts

Another type of chart which handles defects per unit is called the U-chart. In this chart, the
average number of nonconformities per unit of product is plotted. U-chart can also be used with
different sample sizes.

X-Bar Charts

X-bar charts are a set of control charts for variable data. (Variable data, also known as continuous data, is data which can have any value on a continuous scale). The X-bar chart monitors the
process location over time, based on the average of a series of observations, called a subgroup.
The mean value is calculated for each sub-group and different ranges are plotted for analysis. This
chart is advantageous when changes in mean value are to be shown.

For example: In the previous example, where employees were coming late to office, one additional piece of information is added, i.e., the average for each week is computed, and the figures are then plotted in the bar chart:

The bar chart based on this data would look like this:

Probability Plots

Probability plots, which are closely tied to probability sampling, are based on expected likelihoods. A probability plot shows the probability of a certain event occurring at different places within a given time period. Each sample is selected in such a manner that each event within the sample space has a known chance of being selected. While sampling for any event, every observation from which the sample is drawn has a known probability of being selected into the sample.

Probability is estimated on a scale from 0 to 1. Any event which is most likely to occur will have a probability nearest to 1. Any event which is least likely to occur will have a probability nearest to 0.

When plotted on a graph, these events usually bunch around the mean, which occurs in a Bell
curve (see Topic: Basic Process Capability). This theoretical distribution of events allows the
calculation of the probability of a certain event occurring in the sample space.

The main types of Probability Sampling methods are:

Simple Random Sampling

In simple random sampling, each element in the sample space has an equal chance of getting
selected. Hence the probability of any event can be determined by listing all the possible units in
the sample space. Simple random sampling is considered as the simplest probability sampling
technique. But it requires homogeneous distribution of the samples.

Stratified Sampling

In case the distribution of samples is not homogeneous or proportional, the total sample
population is divided into homogeneous subgroups. These subgroups are called strata and this is
followed by applying simple random sampling technique in each stratum. These strata are based
on predetermined criteria such as age, size, weight, sex, location etc. Each unit in the sample
space must be assigned to one stratum only.

Systematic Sampling

In this sampling technique, every nth element is selected from the sample space. The sampling interval, n, is calculated as:

n = Number in population / Number in sample

This technique is also referred to as interval sampling, as every nth sample is selected from the list of the sample space.

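A minimal sketch of the interval calculation for systematic sampling; the population of customer IDs and the sample size below are hypothetical:

import random

# Hypothetical population of 100 customer IDs, from which a sample of 10 is wanted
population = [f"CUST-{i:03d}" for i in range(1, 101)]
sample_size = 10

# Sampling interval n = number in population / number in sample
interval = len(population) // sample_size   # 100 // 10 = 10

# Pick a random start within the first interval, then take every nth element
start = random.randrange(interval)
sample = population[start::interval]

print("Sampling interval:", interval)
print("Systematic sample:", sample)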
Clustered Sampling

In clustered sampling, all the units are grouped into clusters and a number of clusters are selected
randomly to represent the total population. All units within the selected clusters are then included
in the sample. Ideally, the elements within each cluster are heterogeneous (each cluster is a small
cross-section of the population), while the clusters themselves are similar to one another.

The difference between cluster sampling and stratified sampling is that in cluster sampling each
cluster is treated as the sampling unit, so the analysis is done on whole clusters, whereas in
stratified sampling the analysis is done on elements within strata.
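The following Python sketch, using an invented population of 100 numbered units, shows how the selection step differs between simple random, systematic, and stratified sampling. The strata, sample size, and random seed are assumptions made purely for illustration.

```python
import random

random.seed(42)  # for a repeatable illustration

population = list(range(1, 101))   # 100 hypothetical units, numbered 1-100
sample_size = 10

# Simple random sampling: every unit has an equal chance of selection.
simple_random = random.sample(population, sample_size)

# Systematic (interval) sampling: n = population size / sample size,
# then every nth unit is taken, starting from a random point.
n = len(population) // sample_size
start = random.randrange(n)
systematic = population[start::n]

# Stratified sampling: split the population into homogeneous strata
# (here, two arbitrary halves) and sample randomly within each stratum.
strata = {"stratum A": population[:50], "stratum B": population[50:]}
stratified = [unit for s in strata.values()
              for unit in random.sample(s, sample_size // len(strata))]

print("Simple random:", sorted(simple_random))
print("Systematic   :", systematic)
print("Stratified   :", sorted(stratified))
```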

3.2 Basic Process Capability and Process Variation

Basic Process Capability

Process capability indices measure the capability of a process to produce the final product or
service according to the customer's specifications or some other measurable characteristic,
given a sound measurement control system. The capability indices show whether the output is
consistent and whether it lies between the lower and upper specification limits.

To understand a process clearly, the capability indices and the relationships within the process
are examined, and lower and upper limits are established. The process capability index is denoted
'Cp'. It indicates the stability of the process and whether the process is capable of generating
products that meet the customer's specifications.

In general, a capable process is one in which all the measurements fall inside the specification
limits, i.e. between the Upper Specification Limit (USL) and the Lower Specification Limit (LSL).
A typical capable process has its entire distribution comfortably inside these limits.

While calculating Cp, it is assumed that all the observations drawn from the sample are
normally distributed. µ and σ are the mean and standard deviation, respectively, of the normal
data. USL, LSL, and T are the upper specification limit, the lower specification limit, and the target
value, respectively. The process capability index is defined as:

Cp = (USL – LSL) / 6σ

USL and LSL

The difference between USL and LSL defines the range of output which the process must meet;
(USL – LSL) is also called the specification range.

The 6σ in the denominator is called the "natural tolerance" of the process. The smaller 6σ is, the
less the process output varies and the more stable it is.

If Cp < 1,

it means that the denominator is greater than the numerator: (USL – LSL) is smaller than 6σ, so
the process spread is wider than the specification limits. The process is therefore not capable of
generating outputs that meet the specifications and is producing a significant number of defects.

If Cp = 1,

it means that the process is just meeting the specifications but is still generating about 0.27%
(roughly 0.3%) defects. The generally accepted minimum value for Cp is 1.33.

If Cp > 1,

it means that the process variation is less than the specification range, i.e. the process spread fits
inside the specification limits. However, defects may still occur if the process is not centered on
the target value. In general, a larger value of Cp is preferred.
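As a minimal sketch of the Cp calculation, the snippet below applies Cp = (USL – LSL) / 6σ to a set of hypothetical delivery times. The specification limits and data are assumed for illustration only.

```python
import statistics

# Hypothetical delivery times in minutes (illustrative only).
delivery_times = [18, 21, 19, 23, 20, 22, 24, 19, 21, 20, 25, 18]

USL, LSL = 30, 11            # assumed specification limits in minutes
sigma = statistics.stdev(delivery_times)

cp = (USL - LSL) / (6 * sigma)
print(f"sigma = {sigma:.2f} minutes, Cp = {cp:.2f}")

# Interpretation, following the rules described above.
if cp < 1:
    print("Process spread is wider than the specification range.")
elif cp < 1.33:
    print("Marginally capable; aim for a Cp of at least 1.33.")
else:
    print("Process variation fits comfortably inside the specification range.")
```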

Process Variation

Introduction to SPC (Statistical Process Control) Techniques

The first thought that comes to mind when referring to SPC is controlling the process. One of the
biggest misconceptions organizations harbor is that they think they are controlling their processes
when, in reality, they are only monitoring the process outcomes. Statistical process control (SPC)
is, in practice, a commonly used monitoring system.

Statistical Process Control (SPC) is a method of monitoring, controlling and, ideally, improving a
process through statistical analysis.

SPC measures a process

SPC analyzes the process and eliminates variances from the process to make it
consistent

SPC monitors the process

SPC improves the process so that the process can achieve its target

SPC is not a single formula, technique or program which can be applied to a process to predict its
outcome. It is a collection of statistical tools that helps in making inferences about the process'
behavior.

Various statistical tools, such as Pareto charts and Fishbone diagrams, are used for monitoring and
controlling the variation in processes. SPC first highlights out-of-control processes, and then
tracks the consistency of the process outcome.

Various control charts available for statistical process control are:

X-bar charts

Histograms

Run charts

Probability charts

The above mentioned techniques have already been discussed under the topic “Graphical Analysis
Techniques”. Besides these control charts, other charts and diagrams which help in monitoring the
process are discussed below:

Pareto Chart

The Pareto chart is based on the Pareto principle. Before getting to know the Pareto chart, first the
Pareto principle has to be understood. The Pareto principle was proposed by management thinker
Joseph M. Juran. It was named after the Italian economist Vilfredo Pareto, who observed that 80%
of the wealth in Italy was owned by 20% of the people.

This principle can be applied to work related to business:

“80% of the business defects are caused by only 20% of the errors”

“80% of the profit to a company is earned by 20% of the customers”

“80% of the complaints to a business are caused by 20% of the products or services”

The Pareto chart is a bar graph and is used to graphically summarize and display the relative
importance of the differences between groups of data. It is useful for non-numeric data.

This principle is applied to business operations because it is assumed that a large percentage
(80%) of the problems is caused by a small percentage (20%) of the causes. The Pareto chart
therefore helps narrow the analysis down to the vital few areas of concern.

For understanding this, let us take an example. Suppose a multi-national company dealing in the
home delivery of pizzas wants to check the problem areas in delivering pizzas. Following is the
data it accumulated by surveying 100 customers:

72 people found that the pizza was not hot.

52 people found that the pizza was not delivered in time.

32 people found that they were delivered the wrong pizzas.

20 people found that the pizza had lesser ingredients and there was deterioration in
quality.

8 People found that the wrong size of the pizza was delivered.

Now, in the chart above, the data has been put under various categories, like "Not hot", "Late
delivery", "Wrong pizza", etc. As the chart depicts, customers are most annoyed by the fact
that their pizzas are not delivered hot.

Finally, a cumulative chart can also be prepared to see what the root cause of the problem is;
once it is found, that is the area to be worked on.

The Pareto Chart analysis acts as a performance indicator and is performed regularly over a fixed
interval of time.
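Using the survey counts listed above, a short Python sketch of the Pareto ordering and percentages might look like this (the plotting step is omitted; only the counts come from the text).

```python
# Complaint counts from the survey of 100 customers (see the list above).
complaints = {
    "Not hot": 72,
    "Late delivery": 52,
    "Wrong pizza": 32,
    "Lesser ingredients": 20,
    "Wrong size": 8,
}

total = sum(complaints.values())
cumulative = 0

# Sort categories from most to least frequent, as a Pareto chart requires.
for category, count in sorted(complaints.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category:20s} count={count:3d} "
          f"share={100 * count / total:5.1f}% "
          f"cumulative={100 * cumulative / total:5.1f}%")
```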

Fishbone Diagram

The Fishbone diagram is a tool used for brainstorming possible causes of a problem in a
graphical (tree-structured) format. The Fishbone diagram is also known as the Ishikawa Diagram
and the Cause-and-Effect diagram. This technique is called a fishbone diagram because it
resembles the skeleton of a fish. A fishbone diagram helps in getting to the root cause of the
problem. It consists of a fish head at one end of the diagram, which states the problem. Attached
to the fish head is a fish spine, and the bones attached to the spine state the causes of the
problem.

The fishbone diagram is employed for problem-solving by the team members. It is used to collect
all the inputs (which are causing the problem) and to present them in graphical manner. The
advantage of using a Fishbone diagram is that besides detecting the problem, it helps the team to
focus on why the problem occurs.

Procedure of Implementing a Fishbone Diagram:

Define the Problem

List down the exact problem, in detail. It should be stated in a box, called the fish head. After
stating the problem, draw a horizontal line across the box.

Categorize the causes

Attach slanting lines, called the bones of the fish, to the fish spine. These bones state the causes
of the problem. Write down as many possible causes as could be involved. The major categories
typically involved are:

The 4 M’s: Methods, Machines, Materials, Manpower

The 4 P’s: Place, Procedure, People, Policies

The 4 S’s: Surroundings, Suppliers, Systems, Skill

Further Categorize the „Categorized‟ Causes

Sketch smaller lines coming out of the larger bones to depict the possible causes within each
category that may be contributing to the problem. This helps break a complex problem down into
smaller problems. Repeat this step until the causes cannot be broken down any further.

Analyze the Fishbone Diagram

Finally, analyze the diagram and draw out results by identifying the most likely root causes.

Example

Suppose the MNC dealing in the home delivery of pizzas wants to find out the various causes that
are leading to a fall in their customer base. They depict the problem graphically, by putting the
problem and causes under the fishbone diagram.

The following is the fishbone diagram, tailored to the “pizza home delivery” example:

In the fish head, the pizza problem has been defined. The main causes leading to the problem are
listed along the fish bones. The causes are then further broken into sub-causes; for example, the
cause "pizza not delivered in time" has been further categorized into sub-causes, which give the
reasons why the pizza couldn't be delivered in time. The reason could be any one of these: traffic
congestion, the scooter's tire was punctured, or the pizza delivery boy couldn't locate the address
easily.

Chapter 4 - The Analyze Phase
4 The Analyze Phase

Objectives

The objectives of the ANALYZE phase would be to:

Arrive at the root cause by process analysis or data analysis

Quantify the opportunity for the project

Arriving at the Root Cause

One of the major goals of the analyze phase is to determine the root cause of the problem of the
process targeted for improvement. The true reason why a problem could exist in the process is
unearthed in the Analyze phase.

The goal of analysis can be defined by an equation –

Solving for Y = f(X1, X2, …, Xn)

In this equation, Y is the measure of output. The equation says that Y is a function of a series of
X‟s. The factors affecting Y are a series of Xs. The goal in the analysis phase is to determine which
factors (Xs) in the process are the largest contributors to the performance of Y.

For example, a pizza chain continuously adds new recipes to increase the variety in their menu.
The chef would alter the flavor of the pizza topping by adding more cheese, adding more garlic,
more chili, or oregano to make it tangier. He could even use a fluffier base to make the pizza
softer. All these are the process factors (X‟s), to vary the taste of the pizza (Y).

The logical way the project team should initiate analysis is to:

list all the possible factors of the problem

segment and stratify these factors

prioritize the list of vital few factors

verify and quantify the root causes of variation

Analysis can be done in two ways.

One method, called data analysis, examines the data collected in the Measure phase of the
project; it is used mainly when the goal of the team is to improve effectiveness.

The other method, called process analysis, examines the process itself through the process maps
the team created in the Define phase; it is used mainly when the goal of the team is to improve
efficiency.

The project team mostly uses a combination of the two to arrive at the root cause.

Data Analysis

The goal of data analysis is to take the data that was collected in the measure phase, and scan it
for clues to explain the problem encountered by the process team. A careful look at the data would
make the problems more visible to the team.

Data Analysis Using Histograms

To understand the application of histograms, consider the Pizza Home Delivery example already
discussed in the Measure phase.

The frequency distribution given below shows the number of times an event is seen in a set of
observations. If you take the tallies and create a bar graph, the result is a histogram.
Histograms make the problem easier to interpret than raw data does. Analyzing this histogram
will explain the problem.

Suppose it takes a minimum of 10 minutes to make the pizza that is ordered by the customer.
It shouldn't be forgotten that once the customer hangs up, the pizza should be delivered within
30 minutes. If it takes a minimum of 10 minutes to prepare the pizza, the delivery boy has only
20 minutes left to deliver it. Hence the Lower Specification Limit would be 11 minutes and the
Upper Specification Limit would be 30 minutes.

Now take the help of a collection form for the continuous data collected in the Measure phase. The
collection form is called a frequency distribution checksheet. The following checksheet shows the
time, in minutes, that the pizza boy took to deliver pizzas to 30 customers. It is assumed that
whatever time the delivery boy took, the time is rounded down to the nearest even integer, i.e.,
if the boy took 15.5 minutes, the time is noted as 14 minutes: the decimal part is dropped and the
value is rounded down to an even number.

Time Taken

Specifications: 0.12-0.30

Target: 0.20

Now, examine the data and prepare the histogram.

Once you see the histogram, you can easily see that although the delivery times are within the
specifications, they are not close to the target time. Ideally, the frequency distribution shown by
the histogram should center around the target value (i.e. 20) and taper off on either side of the
target; in other words, it should display a bell curve.

Hence, if you want to rectify the problem, the delivery boys should be given 10 minutes as their
target for reaching the destination.
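Since the original checksheet data is not reproduced here, the sketch below tallies a set of hypothetical delivery times into a simple text histogram to illustrate the same pattern: times within specification but clustered away from the target.

```python
from collections import Counter

# Hypothetical delivery times (minutes) for 30 orders, rounded as described
# above; these are illustrative values, not the original checksheet data.
times = [28, 26, 28, 24, 26, 28, 22, 26, 28, 24,
         26, 28, 22, 24, 26, 28, 26, 24, 28, 26,
         22, 24, 26, 28, 26, 28, 24, 26, 28, 26]

TARGET, LSL, USL = 20, 11, 30   # assumed target and specification limits

# Tally the frequency of each value and print a simple text histogram.
for value, count in sorted(Counter(times).items()):
    print(f"{value:2d} min | {'#' * count} ({count})")

mean = sum(times) / len(times)
print(f"\nMean = {mean:.1f} min (target {TARGET}, specs {LSL}-{USL})")
# Most values sit near the USL rather than the target, mirroring the
# problem described in the text.
```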

Process Analysis

The second method of arriving at the root cause is process analysis. It can be one of the
fastest methods of learning about the root causes of any problem. If the goal of the process
targeted for improvement is to improve efficiency, such as reducing the cycle time of completing a
task, a process analysis is necessary. In our Pizza Home Delivery example, the goal is to reduce
the cycle time of delivering the pizza.

Sub Process Mapping

In process analysis, sub process mapping means selecting a high level process step from a process
map and breaking it down to a number of steps. An assessment of the major steps in the SIPOC
having the highest influence on the output will determine which of the steps to map.

The details of flow of any process are provided by such sub process maps. These steps in the sub
process maps expose the real ineffectiveness in the process. The details provided by such a sub
process map can later be analyzed. They also reveal the non value adding steps of the process and
offer a scope of improvement in the process.

In the previously illustrated example of the leather bag industry, we see there are seven
processes. Every process here is made up of certain sub processes. For example, the process
“construct bags” has sub processes which are - prepare the machines, collect the threads, scissors
at a place, decide on the color and design of the bags, procure leather from the store, procure
marking pencil from store, mark design, cut leather, sew, rework and finally send it to the section
where handles and other accessories will be added.

Thus sub-process mapping is vital in understanding the root cause of problems in a process. It also
plays an important role in taking corrective measures.

4.1 Nature of Work and Flow of Work

Nature of Work

Nature of work deals with deciding whether a particular sub process is a value-adding process or a
non-value-adding process. A sub process is classified as value adding if it meets the following
criteria:

The cost of that particular step can be borne by the customer. In other words, the
customer must be ready to pay for that particular step

The particular sub process must be changing or altering the final product or service

The step or the sub process should be done correctly the first time itself, i.e., there
should not be any rework in the process

Once the criteria are laid down, the team should study the detailed process maps and determine
which sub-processes meet the conditions outlined above. Consider the example of the leather bag
making industry.

Out of the nine sub processes, only five add value to the product (55.5%) and four sub-processes
do not add any value. According to Rath and Strong, most processes have roughly a 2-to-8 ratio
of value-adding to non-value-adding steps.

Once the analysis of nature of work is done, the non value adding steps are identified and
categorized. They are categorized into:

In-house failures: These are procedures involving rectification of errors or defects in a process.
They usually begin with the prefix "re" and are signs of in-house failure, e.g. retest, recall.

Peripheral failures: These are procedures involving rectification of errors or defects pointed out
by the customers.

Control / inspection: These are procedures involving reviewing, checking, verifying, the previous
or the value adding steps. They are considered as non value adding steps.

Delays or hold up: These are the most common non value adding step. These could be hold up in
a process or waiting for delivery of the supplies.

Groundwork time: These are procedures which set up the process for the following activity.

Moves or shifts: This involves moving parts from one place to another to get them assembled or
built, or moving the finished parts to the warehouse or stockroom for storage. It is a non-value-
adding step, and there is also the possibility of damage to the finished product during this step.

Flow of Work

Usually the flow of work starts when the customer orders something and ends with the delivery of
that product to the customer. Flow of work refers to computing the total time taken by each
sub process step (both value adding and non value adding).

Considering the previously provided example of the leather bag industry, the flow of work can be
properly illustrated.

Total Value Adding Time: 6hrs

Total Non Value Adding Time: 4hrs

Total Time: 10hrs

Here it takes 10 hrs to construct a bag. Examine the time taken by the value-adding steps (6 hrs)
as opposed to the non-value-adding steps (4 hrs). This type of analysis helps identify the
loopholes in a sub process and improve it by reducing the non-value-adding time.

Synopsis of Analysis Table:

After the nature-of-work and flow-of-work analyses have been completed for the sub process
steps, the analysis has to be summarized statistically. A synopsis of analysis table has to be
prepared. This table should be created for all sub process steps and should also be validated.

There are some value-enabling steps in the summary analysis. These are steps which are normally
non-value-adding but are required, for example by law, to be included; such steps cannot be
eliminated.

For example, daily testing of the blood alcohol concentration of workers who operate hazardous
machinery, done to maintain a safe working environment, can be considered a value enabler.

After completing the analysis table, it can be seen that many loopholes exist in the sub process.
The table also helps in determining which non-value-adding sub process steps the improvement
efforts should be concentrated on. Making an analysis table for all the sub process steps leads to a
better understanding of the process and helps in eventually improving it.

4.2 Root Cause Analysis

Root Cause Analysis

Root Cause Analysis is a Six Sigma tool designed to identify the problems which have occurred in a
process. It explains not only what has happened and how it has happened, but also why the
problem has occurred.

Understanding why a problem or event occurred is the only way to develop workable
recommendations to guard against it. This eventually leads to correction of the problem and
prevention of future recurrences.

Root causes are those causes over which the management has some control. For example, the
condition of the bikes which deliver the pizza is under the control of the management, but the
road conditions or the weather cannot be considered root causes, as the management has no
control over either of them. Likewise, causes for which no effective recommendation can be
developed cannot be termed root causes.

The aim of the researcher who is doing the root cause analysis is to identify the most specific
underlying reasons which are responsible for the problem. This will help in arriving at suggestions
and recommendations to improve it.

The process for doing a root cause analysis is as follows:

The root cause analysis is done in three steps/ phases:

OPEN –

1. Brainstorming

Brainstorming is a tool used for generating new ideas for solving a problem. It is a creative
method that helps in solving a problem by listing the options that could be applied to solve it and
then choosing the optimal one. The brainstorming tool is used at all levels of problem solving.
This technique is also a strong tool for finding out what enhancements can be made to a given
solution or approach.

This tool can be used individually or in a group. If done individually, then the person himself is
responsible for generating new ideas and selecting or rejecting them.

When brainstorming is practiced in a group, all the team members are asked to come up with their
thoughts and ideas, and then one by one, each idea is analyzed and later selected or rejected. If
selected, then further improvements in the idea are made by brainstorming the options for
enhancements, and hence the process of brainstorming is repeated at each level.


Brainstorming is accomplished in the following three steps:

2. Cause and Effect Diagram (fish-bone)

One of the best ways of reaching the root cause is the Fishbone diagram, which has already been
discussed in the Measure phase. The fishbone diagram helps users visualize the various causes
leading to the problem. Once all the causes have been brainstormed, they are graphed and their
sub-causes are noted. The fishbone diagram is not used for problem-searching but is employed
for problem-solving by the team members. The advantage of using a fishbone diagram is that it
helps the team focus on why the problem occurs rather than just detecting the problem; hence
this tool is also very helpful in the Analyze phase.

The fishbone diagram consists of a fish head at one end of the diagram, which states the problem.
Attached to the fish head is the fish spine, and the bones attached to the spine state the causes of
the problem.

The fishbone diagram helps the team brainstorm the potential causes of a problem. Once all the
potential causes have been linked, it is time to draw conclusions. It should be noted that this
diagram never gives you a single root cause of the problem; it only documents the various causes
and helps in analyzing them.

For example, take the previously illustrated Pizza Home Delivery example. Suppose the chain has
documented all the possible causes that lead to the loss of customers and has also depicted them
graphically, by putting the problem and causes into a fishbone diagram. Following is the fishbone
diagram, tailored to the pizza home delivery example:

As you can see, the causes have been further categorized into sub-causes; for example, one of the
causes, "pizza not delivered in time", has been broken into sub-causes which give the reasons
why the pizza couldn't be delivered in time. If you analyze this cause, the reason can be one of
these: traffic congestion, the scooter's tire got punctured, or the pizza delivery boy couldn't locate
the address easily. So each delivery boy should be given a city map, so that he can locate any
address easily, and each bike should carry a spare tire so that if a tire gets deflated the boy can
replace it himself.

Similarly, if customers are complaining about too few ingredients, one of the reasons could be
that the kitchen has run out of stock. In this case, the retailer should maintain a list of the current
inventory of ingredients, and whenever the quantity of any ingredient falls below the required
limit, it should be immediately restocked to maintain a safe level.

The person at the reception should always re-confirm the customer's order, and before the pizza
is dispatched, it should be re-checked to make sure that the delivery boy doesn't deliver the wrong
pizza.

Better transportation facilities should be provided to the delivery boys. The heating compartments
of the vehicles should be regularly checked and replaced, if found defective.

NARROW -

1. Clarification and Duplication

Once the open phase of root cause analysis is done, the next phase comes into play. This phase is
called the narrow phase of root cause analysis. In this phase the team needs to clarify the ideas
which are brainstormed in the open phase. Explanation of all the ideas which are brainstormed
helps one to understand what ideas different people have offered.

After clarifying the brainstormed ideas, the next step is to check for duplication of the ideas.

These two steps narrow down the initial list of the ideas generated by the brainstorming process. It
helps to bring the list to a workable state.
2. Multi-Voting

Multi-voting should not be confused with decision making. It is just another instrument which
assists the team to prioritize the root causes.

It entails coming to an agreement by the team regarding the list of generated ideas. It includes the
following steps:

Place the list of ideas before the team and merge items which are the same or alike. The
clarification of ideas in the previous step will help the team members with this task

Allocate a letter to every item in the list

Each member votes on every item of the list, giving points ranging from 5 to 10: the lowest is
5 and the highest 10, depending on how important the voter considers the item

Let the members take time to assign points separately

Specify the point allocated by each member on the list and tally them

Multi-vote on the items with highest points again


3. Five Why Diagram

This is a method used in the analyze phase. This method does not use any advanced statistical
tools.

Using the 5 Why method helps identify the root cause or primary cause of the problem and hence
improves the process or product quality. The team using the 5 Whys should accept that
understanding the cause of a problem requires understanding the essential stimulus that has
resulted in the type and scale of the problem.

This method questions the problem with "WHY?" repeatedly until the root cause is properly
understood. Usually by the time the 4th or 5th "why" is reached, a reasonable understanding of
the root cause will have been achieved.

This method is related to the fish bone diagram and may be used to supplement the analysis
derived from a fish bone diagram.

For example: In the pizza delivery case illustrated previously, using 5 why will help to reach a
solution to the problem.

Problem: Wrong pizza delivered — WHY?

Problem: Pizza not hot — WHY?

Problem: Pizza not delivered in time — WHY?

Problem: Lesser amount of ingredients — WHY?

Problem: Failure to locate the address — WHY?

Answering the "whys" for each of the problems cited above will help in reaching the root cause.
It is not mandatory to stop after five "whys"; the analyst can continue to ask "why" until the root
cause is understood.

4.3 Root Cause Analysis-Close

CLOSE –

1. Pareto Charts

A Pareto chart is a specialized vertical bar graph that exhibits data collected in such a way that
important points necessary for the process under improvement can be demarcated. It exhibits the
comparative significance of all the data. It is used to focus on the largest improvement opportunity
by emphasizing the "crucial few" problems as opposed to the many others.

The Pareto chart is based on the Pareto principle, introduced earlier in the Measure phase: it was
named after the Italian economist Vilfredo Pareto, who observed that 80% of the wealth in Italy
was owned by 20% of the people, and popularized by the management thinker Joseph M. Juran.

Applied to business, the principle says:

"80% of the business defects are caused by only 20% of the errors"

“80% of your results are produced from 20% of your efforts”

“80% of the profit to a company is earned by 20% of the customers”

“80% of the complaints to a business are caused by 20% of the products or services”


To understand this, let us take an example. Suppose the multi-national company dealing in the
home delivery of pizzas wants to check the problem areas in delivering pizzas.

The data collected is displayed in the following table:

The next step in preparing a Pareto chart is to calculate the cumulative percentages of the data
supplied above. The following table can be derived from the data given above.

Finally, a line graph can be prepared to see what the main problems are. The following line graph
is drawn from the preceding table using MS Excel, with the cumulative percentage plotted against
the customers' complaints: the X axis shows the complaints and the Y axis the cumulative
percentage.

The final step is to recognize the few elements that cause most of the problems. The 80-20 rule
of Pareto analysis is then applied: a line is drawn from 80% on the Y axis until it meets the line
graph, and from that point it is dropped vertically down to the X axis, as shown in the following
graph.

With the help of these identified problems, the root cause of the problem can be found out. Once
the root cause is pinpointed, the problem area can be worked upon.

All the problems that fall to the left of the 80% line are the few problems accounting for most of
the complaints. They are:

Not hot

Late delivery

No extras

Wrong Billing

Wrong Pizza

Lesser ingredient

No delivery in a particular area

These account for 80% of the problems encountered in the home delivery of the pizza. If these are
immediately taken care of, then 80% of the problems can be solved. Pareto analysis helps in
determining which problems to concentrate our efforts on. It helps to focus efforts on the vital few
problems only. The rest of the problems will either be automatically solved or they are not the
major reasons leading to the problem.
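Because the original data table for this example is not reproduced here, the sketch below uses assumed counts for the same complaint categories simply to show how the 80% cut-off separates the vital few from the trivial many.

```python
# Assumed complaint counts for illustration (the original table is not shown).
complaints = {
    "Not hot": 90, "Late delivery": 75, "No extras": 40, "Wrong billing": 35,
    "Wrong pizza": 30, "Lesser ingredients": 25, "No delivery in area": 20,
    "Rude delivery boy": 10, "Damaged box": 5,
}

total = sum(complaints.values())
cumulative = 0.0
vital_few = []

# Walk the categories in descending order; keep every category that appears
# before the cumulative percentage crosses the 80% line.
for category, count in sorted(complaints.items(), key=lambda kv: kv[1], reverse=True):
    if cumulative < 80:
        vital_few.append(category)
    cumulative += 100 * count / total

print("Vital few (left of the 80% line):", vital_few)
```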

4.4 Scatter Diagram and Run Charts

2. Scatter Diagram

The Scatter Diagram is a tool used for establishing a correlation between two sets of variables. It is
used to depict the changes that occur in one set of variables while changing the values of the other
set of variables. This diagram does not determine the exact relationship between the two
variables; it only shows whether the two sets of variables are related to each other and, if they
are, how strong the relationship is.

The relationship in a scatter diagram, between the set of variables, is analyzed using Regression
Analysis.

NOTE: The Scatter Diagram does not explain why one variable changes when the other variable
is changed. In other words, it does not establish a cause-and-effect relationship between the
variables.

Usually there are three types of correlations:

Strong Correlation: The Strong correlation indicates that there is a close relationship between
the set of variables that are paired together. A strong relationship is said to be observed when
almost all the points fall along an imaginary straight line with either a positive or negative slope.

Moderate Correlation: The Moderate correlation indicates that there is neither a very close nor a
very loose relationship between the set of variables that are paired together. A moderate
relationship is said to be observed when an average number of points fall along an imaginary
straight line with either a positive or negative slope.

No correlation: No correlation indicates that there is no relationship between the set of
variables paired together. No relationship is observed when the points are randomly scattered
throughout the graph.

Example : Suppose the pizza-making company wants to see if there is any relationship between
the scooter‟s age and the time taken by the pizza delivery boys to deliver the pizza, using the
Scatter Diagram.

Step 1: Accumulate all the data. That is, note down the ages of all the scooters and the average
time taken by each scooter to deliver the pizza.

Step 2: Tabularize the data

Step 3: Now prepare the scatter diagram to find if any relationship occurs between the two sets of
variables.

Step 4: Now analyze the diagram. It is obvious from the diagram that most of the points lie along
the imaginary line, which has a positive slope. This means that as the scooter's age increases, its
wear and tear increases and its performance goes down, because of which the average delivery
time goes up.
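As a rough illustration of how the strength of such a relationship can be quantified, the sketch below computes the Pearson correlation coefficient for invented scooter-age and delivery-time data; a value near +1 corresponds to the strong positive correlation described above.

```python
def pearson(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: scooter age (years) vs. average delivery time (minutes).
ages = [1, 2, 3, 4, 5, 6, 7, 8]
times = [18, 19, 21, 22, 24, 26, 27, 29]

r = pearson(ages, times)
print(f"Correlation between scooter age and delivery time: r = {r:.2f}")
```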

3. Run Charts

A run chart, also known as a line graph, is a chart used to display process performance over
time. Run charts are mainly used for spotting any trends that occur in the data.

Run charts are used for keeping a check on the process' performance.

Run charts are useful in discovering patterns that occur over time.

Run charts are easy to interpret; anyone can tell from the chart's behavior whether
the process' performance is normal or abnormal.

Various steps involved in creating a trend chart are:

Data gathering: The data should be collected over a period of time and it should be
gathered in a chronological manner. The data collection can be started at any point and
end at any point.

Data organizing: The collected data is then integrated and divided into two sets of
values, i.e., x and y. The values for the x-axis represent time and the values for the y-axis
represent the measurements taken from the source of operation.

Preparing the chart: The y values versus the x values are plotted, using an appropriate
scale that will make the points on the graph visible. Then vertical lines for the x values
are drawn to separate time intervals such as weeks. Horizontal lines are drawn to show
where trends in the process or operation occur or will occur.

Interpreting the chart: After preparing the chart, the data is interpreted and
conclusions drawn for the benefit of the process or operation.

Example: Suppose you are a new manager in a company and you are concerned that certain
employees come in late. You have decided to monitor the employees' punctuality over the next
four weeks: you will note down how late they are each day (on average) and then construct a
trend chart.

Data Gathering: Cluster the data for each day over the next four weeks. Record the data in an
ordered manner:

Organizing Data: Determine what should be the values on x-axis and what should be the values
on y-axis. Assume day of week on x-axis and time on y-axis.

Preparing the chart: Plot the y values versus the x values on a graph sheet (on paper) or use
another computer tool like Excel or Minitab. Draw horizontal or vertical lines on the graph where
trends or deviations occur.

Interpreting Data: Once the trend chart is prepared, conclusions can be drawn. It is important to
explain the results and draw the conclusions that matter. It is very clear from the above chart
that employees usually take more time to reach the office on Mondays.
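For readers who prefer a scripted alternative to Excel or Minitab, here is a small sketch of the same run chart using Python's matplotlib library; the lateness figures are hypothetical.

```python
import matplotlib.pyplot as plt

# Hypothetical average minutes-late per weekday over four weeks
# (illustrative values only, not the data from the example).
days = ["Mon", "Tue", "Wed", "Thu", "Fri"] * 4
late = [14, 8, 6, 7, 9, 15, 9, 5, 6, 8, 13, 7, 6, 8, 10, 16, 8, 7, 6, 9]

plt.plot(range(1, len(late) + 1), late, marker="o")                   # the run chart
plt.axhline(sum(late) / len(late), linestyle="--", label="average")   # centre line
plt.xticks(range(1, len(late) + 1), days, rotation=90)
plt.xlabel("Working day")
plt.ylabel("Average minutes late")
plt.title("Run chart of employee lateness")
plt.legend()
plt.tight_layout()
plt.show()
```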

4.5 Hypothesis Testing, T Tests and Chi-square Test


4. Hypothesis Testing

By the end of the Six Sigma project, various solutions and hypotheses are proposed. Six Sigma
takes the help of complex statistical tools to test out these planned solutions to see if they are
appropriate for fixing the problem. One such statistical tool is Hypothesis Testing. It refers to the
process of using statistical analysis to ascertain if the observed disparities between two or more
samples are due to random chance.

Steps in carrying out a hypothesis test:

For the purpose of hypothesis testing, a null hypothesis (H0) is created which simply says that
the observed effect is purely due to random chance.

The null hypothesis assumes nothing: no association, no difference, and no cause.
Formulating such a null hypothesis is the fundamental first step in testing statistical
significance. The probability threshold chosen for deciding whether to reject the null hypothesis
is called the "significance level" of the test.

The second step is to create an alternate hypothesis (H1). The alternate hypothesis states what
should be concluded if the null hypothesis can be statistically shown to be false, i.e. if "H0 is not
true".

The third step in hypothesis testing is to identify a Test Statistic. A test statistic is the quantity
calculated from the data values that will be the subject of the test. The decision of whether to
accept or reject the H0 depends on the data collected from the sample. The idea here is to
generate a single number which will be compared to H0 for its rejection or acceptance. This
number is called the test statistic.



The next step is to obtain the null distribution. It is the sampling distribution of the test statistic
provided that the null hypothesis is true. Identifying the null distribution is an important step in
hypothesis testing.

The last step is comparing the observed test statistic to the null distribution. H0 is rejected
as unlikely to be true if the test statistic falls in a sufficiently improbable region of the null
distribution. H0 is not rejected if the test statistic falls within the range of "normal" values
described by the null distribution.
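The steps above can be illustrated with a minimal one-sample z-test in Python. The sample mean, the assumed sigma, and the hypothesized mean are all invented for the example; this is a sketch of the general procedure, not a prescribed Six Sigma calculation.

```python
from statistics import NormalDist
import math

# Hypothetical example: H0 says the true mean delivery time is 20 minutes.
# Sample of 36 deliveries with mean 21.2 and an assumed sigma of 4 minutes.
mu0, sigma, n = 20, 4, 36
sample_mean = 21.2

# Test statistic: how many standard errors the sample mean lies from H0.
z = (sample_mean - mu0) / (sigma / math.sqrt(n))

# Null distribution: standard normal. Two-sided p-value.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, p = {p_value:.3f}")
print("Reject H0" if p_value < 0.05 else "Fail to reject H0")
```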

5. T Tests

T- Tests are used to compare two averages. They may be used to compare a variety of averages
such as, effects of weight reduction techniques used by two groups of individuals, complications
occurring in patients after two different types of operations or accidents occurring at two different
junctions.

The t-test is used when sample size is small i.e., less than 30. A t-test may be calculated if the
means, the standard deviation and the number of data points are known. If raw data is used then
these measures should be calculated before performing the t-test.

A t-test can be performed manually or a statistical package like Microsoft Excel or Minitab software
can also be used.

Steps in carrying out a T-test:

List the data

Calculate the standard deviation for sample 1: s1

Calculate the standard deviation for sample 2: s2

Multiply the square of the standard deviation of sample 1 (s1²) by its degrees of freedom (the
number of subjects minus one)

Repeat the previous step for sample 2

Add the results of the two previous steps and divide by the total degrees of freedom (this gives
the pooled variance)

Calculate the Standard Error for the difference between the means

Divide the difference between the means by the Standard Error; this value is "t"

Look up the t distribution table; it gives the P value

If the probability is less than 0.05, it means that the difference between the two means is not due
to chance and the null hypothesis can be rejected. This means that there is 95% chance that the
difference between the two means is not due to chance. To achieve a 99.9% chance that the
difference between the two means is not due to chance a probability of 0.001 should be taken into
consideration.

This test provides the probability that the difference between the two means is due to chance.
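Assuming the SciPy library is available, a two-sample t-test can be run in a few lines; the two sets of delivery times below are hypothetical.

```python
from scipy import stats   # assumes SciPy is installed

# Hypothetical delivery times (minutes) from two branches of the pizza chain.
branch_a = [22, 25, 19, 24, 26, 23, 27, 21, 25, 24]
branch_b = [19, 18, 21, 20, 17, 22, 19, 18, 20, 21]

# Independent two-sample t-test comparing the two means.
t_stat, p_value = stats.ttest_ind(branch_a, branch_b)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference between the two means is unlikely to be due to chance.")
else:
    print("The difference could plausibly be due to chance.")
```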



6. Regression Analysis

To determine the relationship between variables, regression analysis is used. It is used to
determine the effect of one variable upon another.

For example, the effect of demand on supply, or the effect of customer relations on increasing
sales, the effect of sale announcements on the increasing sales, or the effect of a trained
management on customer satisfaction, can be studied using regression analysis.

Simple linear regression seeks to sum up the relationship between two variables which are shown
graphically in a scatter plot by a single straight line. A simple relationship between two variables
(one dependent and one independent) can be expressed in a linear formula.

Y = a + bX

where, Y is the dependent variable.

X is the independent variable.

Coefficient "a" is where the regression line intercepts the y-axis; it indicates the value of Y when X
(the independent variable) is 0.

Coefficient "b" is referred to as the slope and conveys how a 1 unit change in X will change the
value of Y.

The idea of regression analysis is to arrive at an equation of a line which fits through the cluster of
data points with the minimum deviations from the line. The deviation of the points from the line is
called "error".
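A minimal sketch of fitting Y = a + bX by least squares, using invented scooter-age and delivery-time data, looks like this.

```python
# Hypothetical data: X = scooter age (years), Y = average delivery time (minutes).
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [18, 19, 21, 22, 24, 26, 27, 29]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Least-squares estimates of the slope (b) and intercept (a) in Y = a + bX.
b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
    sum((xi - mean_x) ** 2 for xi in x)
a = mean_y - b * mean_x

print(f"Y = {a:.2f} + {b:.2f} * X")
print(f"Predicted delivery time for a 10-year-old scooter: {a + b * 10:.1f} minutes")
```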

Multiple Regression Analysis

Multiple regression analysis is used to deal with a large number of variables. (Simple regression
analysis can deal with only two variables). Here, one is a dependent variable and the others are
independent.

7. Chi-square Test

The Chi-square test is a non-parametric test; it is generally the most commonly used one. The
Chi-square test is quite flexible and can be used in a number of circumstances.



Three types of analysis can be drawn from the chi square test:

Goodness of fit

Test for homogeneity

Test of independence

The chi-square test determines the probability that an observed distribution of data is due to
chance (sampling error) alone. Being non-parametric, it is based on counts of data distributed
into qualitative categories or rankings. The most widespread application of the chi-square test is
to study the connection between two variables.

Unlike parametric tests, the Chi-square test does not require the variable to be normally
distributed in the population from which the particular sample is taken.

The conditions for performing a Chi-square test are as follows:

The population should be randomly sampled.

Raw frequencies should be used in reporting the data, not percentages.

The variables should be independent.

The groups on the independent and dependent variables must be exhaustive and mutually
exclusive, i.e., responses should be independent and not influenced by one another.

The observed frequencies should be sizeable (a common guideline is an expected count of at
least five per cell).

The test statistic used is:

χ² = Σ (Oi – Ei)² / Ei

where:

Ei – expected frequency for category i
Oi – observed frequency for category i

Here is how to interpret the χ² value:

1. First determine the degrees of freedom (DOF); for a single classification, degrees of freedom =
number of categories in the problem − 1.
2. Determine a relative standard (significance level) for accepting or rejecting the hypothesis.
3. Refer to a chi-square distribution table: using the appropriate degrees of freedom, find the
critical value and compare it with the calculated χ² value.
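Assuming SciPy is available, a chi-square test of independence on a small hypothetical contingency table can be sketched as follows; the observed counts are invented, and the function also returns the expected frequencies and degrees of freedom.

```python
from scipy.stats import chi2_contingency   # assumes SciPy is installed

# Hypothetical 2x2 contingency table of observed frequencies:
# rows = income group, columns = brand preference (illustrative numbers only).
observed = [
    [60, 40],   # lower income:  budget brand, premium brand
    [30, 70],   # higher income: budget brand, premium brand
]

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
print("Reject H0 (the variables are related)" if p_value < 0.05
      else "Fail to reject H0 (no evidence of a relationship)")
```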



4.6 Analysis of Variance (ANOVA)

Problem:

A market research firm was doing a survey for its client, a leading manufacturer of cosmetics. The
client was interested in studying consumer behavior in the context of the purchase decision for
cosmetics in a specific market. This company is an important competitor in a cosmetics market
characterized by strong competition. It would like to know whether the income level of consumers
influences their choice of brand. Currently there are four manufacturers in the market:
manufacturer 1 and manufacturer 2 are the first-rate brands, while manufacturer 3 and
manufacturer 4 are the budget brands.

A representative stratified random sampling method was used covering the entire market using
income level as the basis of selection. The groups that were used in classifying income level are:
Lower, Middle, Upper Middle and High. 600 consumers participated in this market research. The
subsequent data emerged from the market research:

Solution:

Null Hypothesis: There is no relationship between the brand preference of customers and their
income level. These two variables are independent and do not influence each other.

Alternative Hypothesis: There is a relationship between brand preference of customers and their
income level. These two variables are dependent on each other and the income level influences the
brand preference.



Considering a level of significance of 0.001, the observed frequencies are:


To calculate the chi-square statistic, the expected frequencies have to be calculated. They are
calculated under the assumption that the null hypothesis is true.



Computing Chi-Square:

Adding the values of (Oi – Ei)² / Ei for each cell gives the chi-square statistic.

The critical value of χ² depends on the degrees of freedom.

Degrees of freedom = (number of rows − 1) × (number of columns − 1)

= 3 × 3

= 9



Considering a level of significance of 99.9%, the critical value of X2 at 9 d.f is 4.78.

The inference drawn from the above result can be easily summed up as the following:

The null hypothesis can now be conveniently rejected and the alternate hypothesis can be
accepted that, there is a relationship between brand preference of customers and their income
level. These two variables are dependent on each other and the income level highly influences the
brand preference for cosmetics.

Customers in the lower and middle income groups prefer the budget brands of cosmetics whereas
the customers in the upper middle and upper income groups prefer the first rate brands of
cosmetics.

8. Analysis of Variance (ANOVA)

ANOVA (Analysis of Variance) is closely related to the t-test explained above. The main
difference is that ANOVA can test the difference between the means of two or more groups,
whereas a t-test can compare the means of only two groups.

Using a single ANOVA instead of multiple t-tests reduces the chance of committing a Type I error
(i.e., wrongly rejecting a correct null hypothesis).

Computing ANOVA manually is a tedious job. Some statistical packages are designed to help solve
this problem. Microsoft Excel and Minitab software can also be utilized to do the computation.

D = (Grand total)² / total number of observations

Total sum of squares, E = B – D

Between-treatments sum of squares, H = C – D

Residual sum of squares, G = B – C

u = number of treatments, v = number of replicates

F = between-treatments mean square / residual mean square

Look up the tabulated value of F (p = 0.05), where u is the d.f. of the between-treatments mean
square and v is the d.f. of the residual mean square.

If the calculated F value exceeds this, and also exceeds the tabulated F value for p = 0.001, there
is a very highly significant difference between treatments.

If the calculated F value is below this value, it means that the test has failed to reject the null
hypothesis.

A major drawback of ANOVA is that it indicates that there is a significant difference between the
groups but cannot specify which groups differ significantly.

For this purpose, certain other (post-hoc) comparisons may be used.
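Assuming SciPy is available, a one-way ANOVA comparing three hypothetical groups can be sketched as follows.

```python
from scipy.stats import f_oneway   # assumes SciPy is installed

# Hypothetical delivery times (minutes) for three delivery boys (treatments).
boy_1 = [21, 23, 20, 24, 22, 25]
boy_2 = [26, 28, 27, 25, 29, 26]
boy_3 = [22, 21, 23, 22, 24, 21]

# One-way ANOVA: are the three group means significantly different?
f_stat, p_value = f_oneway(boy_1, boy_2, boy_3)

print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group mean differs; a post-hoc comparison is needed "
          "to say which one.")
else:
    print("No significant difference between the group means was detected.")
```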

4.7 Quantifying the Opportunity

9. Design of Experiments (DOE)


Design of experiments is a quality improvement methodology used for investigating a process.
Under design of experiments, the data produced by an experiment is analyzed, and it has to be
ensured that appropriate and sufficient data are available for performing the experiment. This
process of planning an experiment and analyzing its outcome is known as Experimental Design.

Design of experiments helps the Six Sigma team to choose the experiment design that effectively
and efficiently works under the constraints like, time, financial resources, available data or any
other resource.

Under Design of experiments, the users or the experimenters are supposed to study all the factors
that are responsible for a single effect. Each factor that is added during an experiment adds
another dimension to the design space. For example, a 2-factor experiment is represented as a
2-dimensional design space, or plane; a 3-factor experiment as a 3-dimensional space; a 4-factor
experiment with 4 dimensions, and so on.

Design of experiments is used in business for maximizing the information acquired and minimizing
the resources required. It also helps in discovering the relationships between two different types of
data. Once these relationships are understood, they can be used to find the best solution for
improving a process.



Design of experiments is discussed more elaborately in the next Chapter, i.e. The Improvement
Phase.

Quantifying the Opportunity

After all the analysis is done, it is time for the green belts to quantify the opportunity. This is done
by determining the performance gap, redefining the problem statement, and looking at the cost of
poor quality.

Determining the Performance Gap

The ultimate goal of the Analyze phase is to identify the performance gaps that are encountered.
The performance gap is the difference between the actual performance and the desired or planned
performance. The various statistical tools discussed above are used for analyzing the process's
performance.

The outcome expected or desired by the customer should always match the actual or current
output; otherwise the process has to be re-worked. The performance gap is measured as the
desired output minus the actual output.

The performance gaps should be reduced so as to control the costs of Cost of Poor Quality
(discussed later in the chapter).

Redefining the Problem

The problem that was defined in the Define phase should be revisited and redefined, because
after the Analyze phase everyone involved in the process has a better understanding of the
problem and of how it is to be solved. The new problem statement reflects this better
understanding. It is often assumed that the problem definition given by the customer is biased:
customers almost always see the problem and its desired results from their own perspective.
Hence the problem solvers must question the customer and redefine the problem in more explicit
terms.

The new problem statement should be less ambiguous; it should be open-ended and not
constrained by the customer's preconceived solution.

Cost of Poor Quality (COPQ)

Cost of Poor Quality consists of prevention, appraisal, and failure costs. COPQ is usually a result of
generating defective materials or delivering defective services. The cost is measured as the
difference between the desired and actual product/service quality. COPQ usually includes all labor
costs, rework costs, disposition costs and material costs.

These costs should be kept to a minimum, and one rule of thumb is followed:

"Prevention is better than cure"

This means the cost incurred to cure a problem will be much higher than the cost the company
pays to prevent the problem from occurring.

The four categories of costs that come under COPQ are:

1. Internal failure costs: It is the cost that is associated with defects which are encountered in a
process, before the customer has received the product or service. The internal costs include
reworking on the product, failure analysis, re-inspection, re-testing and re-constructing of the
product.

2. External failure costs: It is the cost that is associated with defects which are encountered in a
process, after the customer has received the product or service. The external costs include
warranty charges, rejected material, allowances, or any complaint adjustments.

3. Appraisal costs: These costs are associated with the efforts required for inspection, testing,
confirmation, approving plans or cost incurred for product quality audits. All the costs that are
borne in conjunction with quality measurement, management, and planning are included under
appraisal costs.

4. Prevention costs: The costs that are drawn to keep the failure and appraisal costs to a
minimum, i.e., preventing all the errors and defects, are prevention costs. These costs are difficult
to determine and tend to be more intangible. These costs include the expenses associated with
inspection, testing and auditing; and education and training costs.



Chapter 5 - The Improve Phase
5 The Improve Phase

Objectives

The objectives of the IMPROVE Phase are to:

Generate a number of plausible solutions

Select the best solutions that would improve the process

Devise the implementation plan

In this phase, the analysis of the data or process (as discussed in the earlier chapter, The Analyze
Phase) is used to improve the existing system by removing the identified flaws. The phase is used
to generate innovative and cost-effective solutions and ideas for the problem and to implement
them. The proposed solutions to the defined problem may be statistically evaluated to confirm
that they remove the flaw.

The green belt project team will have recorded the root causes of the problem in the process
during the Analyze phase. With this evidence of root causation, the goals that lie ahead are to
generate solutions and select those that permanently address the root causes. Thus the prime
objective of this phase is to eliminate or neutralize the root causes of the problems in the
processes permanently.

In this phase, the process is then optimized based on the analysis using various Design of
Experiments methods, Project management tools, and so on.

It is seen that many of the approaches that are used in the analysis phase are also used in the
improvement phase.

Generating Possible Solutions and Selecting the Best Solutions

By the time the project team reaches the Improve phase, they are familiar with the three major
stages of root cause analysis used in the Analyze phase: Open, Narrow and Close. A similar
approach is used in the Improve phase; while there are differences between how the two phases
apply it, the overall structure is the same.



Figure 22: Tools used in Open-Narrow-Close root cause analysis in Improve Phase

The Open-Narrow-Close Approach

Open: Here, as in the Analyze phase, Brainstorming helps to gather all the possible ideas; in the
Improve phase these ideas are candidate solutions for the validated root causes. The difference is
that Affinity Diagrams are used to organize the possible solutions, whereas in the Analyze phase,
Cause and Effect diagrams are used.

Let us take an example to explain brainstorming.

Every month the Johnsons organize a family dinner with all the family members. Once the date of
the dinner is finalized, the next step is selecting the venue. Each member silently writes a
personal note listing the restaurants they could go to; in effect, a brainstorming session takes
place. At this stage the affinity diagram (discussed in the Define phase) plays an important role in
organizing the notes.

Narrow: This stage involves narrowing down the list of possible ideas. With reference to the
above example, the venue preferences were narrowed down. With brainstorming in progress, the
ideas were first clarified so that everyone understood them. After the ideas were clarified, they
were checked for duplication; in other words, it was checked whether two or more members had
suggested the same venue. The ideas were then grouped under similar categories, made
according to the choice of cuisine, such as Italian, Thai, Indian or Mexican food. The restaurant
choices were placed under these categories, and a header card (e.g., "Italian cuisine" for
restaurants like Pasta Point and Corelli) characterized each group.

The Cause and Effect diagram used in the Analyze phase has one advantage over the Affinity
diagram: it has naturally designed categories, which Affinity diagrams do not.

The last tool used in the Narrow stage is multi-voting. Here the members of the family were asked
to vote on the individual restaurant names within the categories, not on the header cards. The
advantage of multi-voting is that its results can be displayed as a Pareto chart, as sketched below.
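A minimal sketch of how a multi-voting tally feeds a Pareto ordering, using hypothetical votes from the dinner example (the restaurant names and vote counts are illustrative):

# Minimal multi-voting tally sketch; sorting the tally in descending order
# gives the ordering used in a Pareto chart.
from collections import Counter

votes = ["Pasta Point", "Thai Orchid", "Pasta Point", "Corelli",
         "Taco Casa", "Pasta Point", "Thai Orchid"]   # each family member's picks

tally = Counter(votes)
total = sum(tally.values())

cumulative = 0
for restaurant, count in tally.most_common():          # Pareto ordering: largest first
    cumulative += count
    print(f"{restaurant:12s} votes={count}  cumulative={cumulative / total:.0%}")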
Close: The Close stage marks the use of the Musts and Wants criteria. To select any solution, it is
important to apply a set of criteria. The project team emphasizes prioritization of the solutions;
when the solutions are prioritized, the cost of implementing them can be controlled.

The Must criteria are the minimum requirements a suggested solution must meet to be considered
further. Each Must criterion is an either/or decision, and they are mostly structured as
close-ended questions.

The Want criteria are the criteria that allow one solution to be compared against another so that
the solutions can be prioritized. The Must criteria usually come from the champion, whereas the
Want criteria are a joint decision of the project team and the champion.
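A minimal sketch of how Must and Want criteria can be combined to screen and rank candidate solutions (the solutions, criteria, weights and scores below are entirely hypothetical):

# Minimal Musts/Wants screening sketch.
solutions = {
    "Add a second delivery scooter": {"meets_musts": True,  "scores": {"cost": 3, "speed": 9}},
    "Outsource delivery":            {"meets_musts": False, "scores": {"cost": 8, "speed": 7}},
    "Re-route delivery zones":       {"meets_musts": True,  "scores": {"cost": 7, "speed": 6}},
}
want_weights = {"cost": 0.4, "speed": 0.6}    # Want criteria with relative weights

# Must criteria are pass/fail; only passing solutions are scored on the Wants.
candidates = {name: s for name, s in solutions.items() if s["meets_musts"]}

def weighted_score(entry):
    return sum(want_weights[c] * entry["scores"][c] for c in want_weights)

for name, s in sorted(candidates.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name:30s} weighted score = {weighted_score(s):.1f}")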

Thus we can say that the Open-Narrow-Close approach plays a very important role in the
generation and selection of the possible solutions.

After the solutions are generated and the best is selected out of the lot, it becomes even more
important to see that the solution gives the desired results. Piloting plays an important role in
implementing the plan.

Pilot Run Data:

A pilot, in simple terms, means implementing a solution on a small scale to see its effect, rather
than applying the solution to the entire setup. Doing so reveals the effects of the solution and
measures how well it fits the project. By creating a small-scale pilot of the solution, the team can
make alterations and modifications, or even radically change the solution, for better
implementation.

Devising the Implementation Plan

The solutions generated above now have to be implemented. The stakeholders play an important
role in the implementation of the selected solutions. A stakeholder is someone who is affected by
a proposed solution or someone needed to implement the solution. A Stakeholder Analysis is
therefore a simple graphical presentation showing each stakeholder's acceptance of, or resistance
to, the solution.

Once the stakeholder analysis is conducted and sets of solutions are agreed upon, the project team
focuses on the various resistances existing among the stakeholders. A planning for influence chart
can help the team to determine the types of resistance exhibited by key stakeholders. It can also
determine various issues that contribute to the resistance and the number of strategies needed to
overcome the resistance.

The following table shows a Planning for Influence Chart.

The implementation plan in this stage takes into consideration the cost benefit analysis and the
development of a Contingency plan.



Cost-Benefit Analysis

Cost-benefit analysis is defined as the process of weighing the total expected costs against the
total expected benefits of the proposed actions in order to choose the most profitable options.
The costs mostly comprise the people and machine resources of a project.

Cost-benefit analysis studies the expenditure involved in the project and the benefits of the new
or improved system. The solutions that are generated and selected will only be implemented if
their benefits recover the costs of implementing them.

Contingency Plan

Contingency planning means preparing the appropriate actions to be taken immediately to counter
any type of disaster or emergency, and it helps the organization recover from such events. The
plan usually covers elements such as a backup plan, network storage planning, storage
management, and cold site and hot site facilities. Contingency planning is part of business
resumption planning, which aims to recover the affected processes or products.

Tools used in the Improve Phase

1. Design of Experiments (DOE)

Design of experiments is a quality improvement methodology that is used for investigating a
process. Under design of experiments, the data resulting from an experiment are analyzed: first,
the causes that contribute to certain effects are identified, and then meaningful tests that verify
possible improvement ideas are created. It also has to be ensured that appropriate and sufficient
data are available for performing the experiment. This process of planning an experiment and
analyzing its outcome is known as an Experimental Design.

According to Thomas Pyzdek, "A designed experiment is an experiment where one or more
factors, called independent variables, believed to have an effect on the experimental outcome are
identified and manipulated according to a predetermined plan."

Design of Experiments in Six Sigma aims to improve the performance of the business processes. It
follows a methodology that suggests the use of advanced statistical techniques to understand and
control variation which could improve predictability in business, thus improving performance.

Under DOE, the experimenters study all the factors that may be responsible for a given effect.
Each factor added to an experiment adds another dimension to the design space. For example, a
2-factor experiment is represented as a 2-dimensional design space (a plane); a 3-factor
experiment as a 3-dimensional space; a 4-factor experiment with 4 dimensions, and so on.
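As an illustration, a two-level full factorial design enumerates every combination of factor levels, so the number of runs grows as 2 raised to the number of factors. The sketch below (the factor names and levels are hypothetical) builds the design matrix for three factors:

# Minimal sketch: design matrix for a 2-level, 3-factor full factorial experiment.
from itertools import product

factors = {
    "oven_temp":  [200, 230],     # low / high level (illustrative values)
    "bake_time":  [10, 14],       # minutes
    "cheese_qty": [80, 120],      # grams
}

design_matrix = list(product(*factors.values()))    # 2**3 = 8 runs
for run, levels in enumerate(design_matrix, start=1):
    settings = dict(zip(factors, levels))
    print(f"run {run}: {settings}")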

The experiments produce a large amount of data, which can be analyzed statistically to determine
the effect of the independent variables. An experimental design or plan must include provisions to
deal with the other types of variables, such as extraneous variables, response variables, primary
variables, and background variables. The design should also work effectively and efficiently under
constraints such as time, financial resources and available data.



Design of experiments is used in business for maximizing the information acquired and minimizing
the resources required. It also helps in discovering the relationships between two different types of
data. Once these relationships are understood, they can be used to find the best solution for
improving a process.

In an experimental situation, there may be innumerable factors that could be sources of variation,
but it is simply not possible to experiment on every possible source, so some variables are left
out. The variables that are not included in the experimental design are known as "noise" in the
process. Randomization keeps the effects of these noise variables at bay so that they do not
corrupt the primary variable effects.

Compared with varying one variable at a time, Design of Experiments greatly reduces the number
of runs needed and, even more importantly, captures the interaction effects between the variables
under consideration.

Components of Experimental Design

Experimental design helps to reduce design cost and speed up the design process. Designed
experiments aim to achieve substantial manufacturing cost savings by minimizing process
variation and reducing rework, scrap, and the need for inspection. The basic concepts of Design of
Experiments are factors, levels, and responses; these three components play a very important
role in analyzing the design of an experiment.

Factor: A factor is usually referred to as an independent variable. It can be classified as either a
controllable or an uncontrollable variable.

Levels: Levels are the settings of a factor that are deliberately varied. Levels can be numeric or
discrete.

Responses: Responses are the measurable outcomes of the experiment, observed at each
combination of factor levels.

The three components can be explained with the help of a diagram.



Figure 23: An example showing components of experimental design

The above diagram illustrates the making of a shredded chicken jalapeno pizza.

The three components each play an important role in making the pizza.

Factors: Factors are the inputs to the process. They can be controllable or
uncontrollable. In the above example the controllable factors are the ingredients used in
the making of the pizza and the oven.

Levels: Levels describe the settings. In the above example, the oven temperature and
the amount of ingredients constitute the levels.

Response: Response means the final outcome of the experiment. In the example, the
final response is the shredded chicken jalapeno pizza which is directly influenced by the
variety of factors and levels.

Design Characteristics

Designing or redesigning any process requires a good amount of innovation. Prior studies suggest
that the majority of process defects originate during the designing or redesigning of the
processes. The methodology for designing or redesigning processes is known as DFSS, or Design
for Six Sigma. DFSS helps to build processes correctly right from the start so that they meet
customers' expectations.



The designing phase follows some basic characteristics:

A basic characteristic of a design is that it should possess an analytical quality. A design must
begin with analytical tools that help to analyze the design of a process according to the
customer's expectations.

While designing, the process must be completely based on DFSS, including a complete tollgate
checklist and reviews, a complete risk assessment, a process map of the design methodology,
and so on.

Another important feature of design is that the purpose of an experiment must be known. Without
knowing the purpose, the relevant elements and tools cannot be selected to satisfy the customers'
demands.

The design must identify the customers' demand for product excellence. Excellence is defined as
the ideal balance of product attributes such as cost, quality, performance, aesthetics, packaging,
etc.

Another important characteristic of design is that the analyst must be aware of the experimental
tools selected and how these tools would accomplish the objectives of the process.

The experimental design plan must always be in writing and must be communicated to all the
people involved. The plan must consist of a statement of the objectives, the type of tools to be
used, the total time frame needed, and a brief statement of the methods used.

5.1 The Improve Phase ...Continued

Types of Design

Taking a broader view of the types of designs, DOE practitioners generally distinguish three types.
They are:

Factorial Designs

As the name suggests, factorial designs help to study the effect of multiple factors on a process or
product. This statistical approach studies the effect of each factor on a response variable, as well
as the effects of interactions between factors on the response variable. As a result, the efficiency
of the experiment is improved, because the various parameters can be varied across different
levels and settings simultaneously during the experiment. Fractional factorial designs are a good
alternative to a full factorial design, especially in the initial screening stage of a project.



A factorial experiment can be analyzed using regression analysis.
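As a minimal sketch of that idea, the coefficients of a 2-factor, 2-level factorial experiment (the response values below are made up) can be estimated by least-squares regression:

# Fitting the main effects and the interaction of a 2**2 factorial experiment.
import numpy as np

# Coded factor levels (-1 = low, +1 = high) and one response per run.
A = np.array([-1.0,  1.0, -1.0,  1.0])
B = np.array([-1.0, -1.0,  1.0,  1.0])
y = np.array([20.0, 30.0, 25.0, 45.0])              # hypothetical responses

# Model: y = b0 + b1*A + b2*B + b3*A*B
X = np.column_stack([np.ones_like(A), A, B, A * B])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(["intercept", "A", "B", "A*B"], coeffs):
    print(f"{name:9s} coefficient = {b:.2f}")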

Response Surface Designs

G.E.P. Box and K.B. Wilson introduced the response surface methodology in 1951. This type of
design focuses on the sequential use of experimental procedures to obtain the maximum response.
This type of design helps to inspect the relationship between one or more response variables with
a set of experimental variables. Such designs are best applied when the parameters are already
set with few important outputs, and the settings are done according to the parameters to
accelerate the output.

Taguchi Designs

These designs are named after Dr. Genichi Taguchi, who is widely known as the pioneer of robust
parameter design. The methods use statistics to improve the quality of manufactured goods.
Taguchi designs are said to improve engineering productivity; they aim to deliver customer
satisfaction and robust, flexible designs that improve the product or the process.

2. Project Management Tools

Management of a project involves planning and implementing the changes that aim to produce
the desired result efficiently. Managing a Six Sigma project requires a good plan and strategy.

A project always begins with a problem or a need to improve, and its management follows some
standard rules and guidelines, which are illustrated in the table below.



Tools Used in Project Management

There are a number of tools and techniques for project management. Many of these tools are used
extensively in quality improvement and control situations.

a) Project Plan

According to Ruskin and Estes, “A plan is a simulation of prospective project work, which allows
flaws to be identified in time to be corrected.”

A project plan is a “why” and “how” of a project. It includes the scope, schedule, and benefits of a
project. It also includes the goals of the project.

Elements of a good project plan include the following:

A detailed account of the goal

The cost benefit analysis

A feasibility analysis



A detailed account of the steps taken

The scheduled time for its completion

The various resources needed for the project

b) Gantt Charts

Gantt charts are used to keep track of the time frame of a project. These charts help in scheduling
the date when the task will be carried out; and in chalking out a plan for the proper allocation of
the resources needed for the completion of the project.

Gantt charts are very simple and easy to construct. Each task is represented by a horizontal bar
whose length represents the expected duration of that task. The left end of the bar marks the
expected start of the task, and the right end marks its expected completion date. The tasks in a
Gantt chart may run in sequence or may overlap. The vertical axis of the chart lists the tasks
involved in the project.

The following illustration shows a Gantt chart used during the construction of a commercial
building.

Figure 24: A Gantt Chart showing the construction of a commercial Building

A single view of the Gantt chart helps to monitor the progress of the project. From this chart it can
be inferred that the project is running ten days behind the completion date promised to the
allottees of the commercial building.
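As an illustration only (the tasks, start days and durations below are hypothetical), even a simple text-based Gantt layout conveys the same information:

# Minimal text-based Gantt chart sketch (hypothetical construction tasks, in days).
tasks = [
    ("Foundation",  0, 10),    # (task name, start day, duration)
    ("Structure",   8, 15),    # overlaps the previous task
    ("Electrical", 20,  7),
    ("Finishing",  25, 12),
]

for name, start, duration in tasks:
    bar = " " * start + "#" * duration
    print(f"{name:12s}|{bar}")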

c) Pareto Analysis

This concept has been discussed in the earlier chapters.

d) Process Decision Program Chart (PDPC)

This concept has been discussed in the earlier chapters.



e) Quality Function Deployment

Quality Function Deployment is defined as a systematic process to identify the needs of the
customers.

Quality Function Deployment or QFD takes into consideration the voice of the customer regarding
their requirements in products or processes. This information is then used by cross-functional
teams to resolve the issues faced by customers. Market research is the most important
requirement while performing a Quality Function Deployment.

This research can be carried out through interviews or discussions, customer specifications,
observations, field reports, and so on.

5.2 The Improve Phase ...Continued

Quality Function Deployment approach is carried out in four phases:

i) Planning the Product

This phase identifies and prioritizes the needs of the customer with reference to the design and
service of the product. It also compares one's product with the performance of its competitors and
then sets improvement targets to stay ahead of the competition. An example is a cell phone
designed according to the customers' needs.

ii) Planning of the Part

This phase helps to translate the specifications gathered in the product planning phase. The parts
of the product are characterized in this phase. For example, the cell phone can be lightweight and
battery-driven.

iii) Planning of the Process

This phase involves further planning of the process along with the first two steps. The foremost
aim of the process is to deliver results without any defects. This involves planning a process that
would maximize the ability to deliver a Six Sigma quality. For example, in case of a call from a
mobile phone, the process must be totally foolproof so that the call lands from one antenna to
another without any break.

iv) Planning the Production

After the process has been planned to ensure delivery of a foolproof process, the next step is to
plan the production effectively, keeping in mind the quality of the product. In this phase, the
process steps into the manufacturing of the product or delivering the service, keeping the Six
Sigma quality in mind.

With the development of the technology required for a line of products, the list of customers'
requisites is assembled. These customer requisites are organized into the technology development
planning matrix, which helps to identify which requisite needs to be addressed first.



f) Matrix Diagrams

This concept has been discussed in the earlier chapters.

g) Activity Network Diagrams or Arrow Diagrams

This concept has been discussed in the earlier chapters.

3. Failure Mode and Effect Analysis (FMEA)

The Failure Mode and Effect Analysis is a proactive tool, technique and quality method that helps in
the identification and prevention of the process errors before their occurrence. It is one of the
methods for prevention of defects in a product or a process.

FMEA is a tool that helps in identifying every potential failure mode of a process or product, its
effects on the customer, and how readily the failure can be detected, so that the risk of these
failures can be reduced. The failure modes are then ranked using the RPN (Risk Priority Number);
the failure mode with the highest RPN has the highest priority for control. Feasible actions are
then recommended to eliminate the problem. While making the list of failure modes, it should be
kept in mind that not every conceivable failure mode is actually possible.

The RPN is simply a product of three elements:

RPN = severity rating of the impact of the failure * probability rating that the failure will occur *
failure detection rating
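A minimal sketch of computing and ranking RPNs for a few hypothetical failure modes (the ratings, on a 1-10 scale, are illustrative):

# Minimal RPN ranking sketch.
failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("Pizza delivered cold",         7, 5, 4),
    ("Wrong toppings used",          5, 3, 2),
    ("Order lost in billing system", 9, 2, 6),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for mode, sev, occ, det in ranked:
    print(f"RPN = {sev * occ * det:4d}  {mode}")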

FMEA thus helps in preventing occurrence of problems by ascertaining the impact of the failure in
the process and prompt actions to improve process efficiency. FMEA is mostly carried out in the
Improve phase of the DMAIC methodology. The FMEA approach can be easily performed by using
Process maps, C&E matrix or a Fishbone diagram.

Advantages of FMEA

FMEA helps in accumulating the collective knowledge of a team, as the team is involved in the
identification and prevention of errors or defects.



As the inherent character of FMEA is to identify the defects and failures, this helps to
improve the reliability and quality of a process.

The approach of FMEA is a logical and structured approach that helps in identifying the
process areas of concern.

The FMEA approach usually reduces the time and cost needed for process
development as it is presented with a design concept.

FMEA also helps in identifying Critical-To-Quality characteristics (CTQs).

FMEA helps to increase the satisfaction of a customer by providing a safe and sound
product or process.

Types of FMEA

Process FMEA: This is used to analyze transactional processes; the main aim is to identify
potential failures so that the process delivers the desired requirements.

System FMEA: This forms a part of the Design FMEA which is used to analyze the sub-
systems in the early stages of design. This type of FMEA primarily focuses on the failures
that may have occurred due to some defect in the design.

Design FMEA : This type of FMEA analyzes the designs of the components. This FMEA
focuses on the failure of the components that have taken place because of some fault in
the design.

Roles and Responsibilities of FMEA

The FMEA team consists of 6 to 10 persons. Each team has a team leader, a record keeper, a time
keeper and a champion. The data for the design meeting has to be gathered through the voice of
the customer.

A design meeting consists of persons with full knowledge about the product or the process. The
facilitators should show good teamwork and fast decision making abilities.

4. Error-proofing
Error-proofing is also known as Poka-Yoke, a technique developed and popularized in Japan by
Shigeo Shingo. The word Poka means inadvertent errors and Yokeru means to avoid.
Error-proofing is based on the principle that not even a small number of defects is acceptable.



Poka Yoke is a mechanism that prevents a mistake from being made. This is done by eliminating
or greatly reducing the opportunity for an error, or by making the error so obvious at first glance
that a defect reaching the customer is almost impossible. Poka Yoke creates actions that have the
ability to eliminate mistakes, errors, and defects in everyday processes and activities. In other
words, it is used to prevent the causes that give rise to defects. Mistakes do not become defects if
the errors are discovered and eradicated beforehand.

An analysis of the cause-and-effect relationship of a defect is the first step towards the mechanism
of Poka Yoke. Then a remedy that wipes out the occurrence of the mistakes that lead to that defect
is applied. Poka Yoke solutions can consist of any way that helps to ensure the mistake will be
eliminated for good. It can be the creation of a check list, an altered sequence of operation, a
computer data entry form, a message that reminds the user to complete a task etc. Poka Yoke has
wide applicability, especially in engineering, manufacturing, and transactional processes.

Poka Yoke can be done in two ways:

The Type-1 corrective action, usually considered the most effective form of process control, is a
control which, when applied to a process, eliminates the possibility of an error condition occurring.

The second most effective type of control is the Type -2 corrective action, also known as the
detection application method. This is a control that discovers when an error occurs and stops the
process flow or shuts down the equipment so that the defect cannot move forward.

An error-proofing process looks like the following where both the prevention and detection of the
defects are done:

Figure 25: Error- proofing

There are three error-proofing categories.

Warnings: This prevents the occurrence of errors or defects in the first place, that is, at the
source, before value is added to the product.

Shutdown: This category of proofing stops errors from progressing further, as it shuts the system
down as soon as an error occurs.

Auto Correction: This category of proofing allows correction or self-correction of a problem that
has gone out of hand.



5. Corrective Action Metrics

Corrective action metrics or plans are used to eliminate an identified problem by proposing
appropriate solutions. An organization running on six sigma methodology would realize that a
sound corrective plan always plays a very important role in the success of the project at hand.

A corrective plan with all the ingredients helps in improvement of the products or processes. The
implementation of corrective action metrics in the Improve phase of the DMAIC methodology
results in improvement in the areas of concern.

The goal of a corrective plan is to provide a standardized method which would help in assessment
of the performance and the issues related to it. It aims to define the root causes of various
problems which need improvement. The corrective plan identifies appropriate actions needed for
the root causes, and tracks down the achievements that have taken place. It is used by the
problem-solving teams to keep track of what is being done by whom and when it is taking place.

The corrective action matrix can be either simple or complex: a spreadsheet is often used for
simple projects, while a detailed Gantt chart is used for complex projects.

Documentation plays a very important role in the corrective action system. The corrective action
system always follows a common format and a common system. Performance can be easily
tracked with the help of documentation. As a result, no time is lost in interpreting different
methodologies of corrective action.

Thus the foremost aim of a sound corrective action plan is to counter the various problems by
identifying them and then suggest appropriate corrective actions to eliminate them completely. By
doing this, the organization aims to improve the methods and reach the level of six sigma
expectations.

6. System Dynamics

A system is defined as a set of interdependent parts that are in continuous interaction with each
other. Systems thinking, or system dynamics, was first created by Jay W. Forrester. The concept
was later popularized by Peter Senge in his book, "The Fifth Discipline". According to the book,
systems thinking occupies an important place in learning organizations. These organizations
concentrate on increasing their potential and capabilities and believe in continuous learning.

The foremost goal of systems thinking is to focus on the cause-and-effect relationships between
processes or products. Systems thinking encourages showing not just one cause or a linear causal
chain, but the whole web of causal interconnections that together affect a system. These webs of
causal interconnections are usually represented in the form of causal loop diagrams.

In the adoption of Six Sigma, systems thinking plays an interesting role.

It helps in the following ways:

With the help of systems thinking, a good initiative can be taken by concentrating on the actual
causes rather than focusing on the symptoms of the problems.

Systems thinking can be used in the Define phase of the DMAIC methodology by the green or
black belt. In this phase it helps to identify the negative effects using the project Y and further
helps to eliminate them.



Systems thinking can also be used in the Measure or Analyze phase of the DMAIC process. Here it
helps to identify the dynamics of the system of important Xs that have an effect on the project Y.

7. Process Mapping

Process mapping is a well-known technique which is frequently used to create a common vision to
improve business results. It is a faster and a very effective way to minimize flaws, maximize
output and improve on customer satisfaction.

To know more about Process maps, refer to Chapter 2- Green Belt-The Define Phase.

8. Development of Contingency Plan

Contingency planning means preparing the appropriate actions to be taken immediately to counter
any type of disaster or emergency, and it helps the organization recover from such events. The
plan usually covers elements such as a backup plan, network storage planning, storage
management, and cold site and hot site facilities. Contingency planning is part of business
resumption planning, which aims to recover the affected processes or products.

The process of developing a contingency plan involves several activities:

Convening a team that represents all the sectors of an organization.

Identifying all important resources and functions.

Effective documentation

Contingency plans should be prepared to deal with events that are unexpected but potentially
damaging. To identify such events, the process decision program chart (PDPC) can be used. This
tool emphasizes the impact of failures on projects and proposes specific actions to be taken to
eliminate the problems.

9. Piloting

The development of a pilot in the Improve phase helps in determining the effect of the solutions
on the sigma performance. A pilot means implementing a solution on a small scale to see its
effect, rather than applying the solution to the entire setup. This helps to measure its suitability
for the project. By creating a small-scale pilot of the solution, the team can apply modifications or
even radically change the solution for better implementation.

It is generally seen that even if the solutions for the project are well thought out, there is always
room for unanticipated consequences when the solutions are actually implemented. A pilot
prepares the team members for such consequences.



10. Cost Benefit Analysis

Cost-benefit analysis is defined as the process of calculating the total expected costs versus the
total expected benefits of the actions that are performed. It studies the expenditure involved in the
projects and the benefits of the new information system. The costs include the people and machine
resources of a project.

The benefits are of two types- tangible and intangible benefits.

Tangible benefits are derived by comparing the costs and savings of the new system with those of
the old one.

Intangible benefits, on the other hand, cannot be measured directly; they include service to
customers and relations with employees.

The cost-benefit analysis focuses on calculations involving the start-up expenses versus the
anticipated return. For example, a product manager may decide to produce a product only if he
expects the return to recover the costs of the product.

The cost-benefit analysis is calculated using the time value of money, also known as discounted
present value. Time Value of Money (TVM) is based on the principle that a person demands
interest when he deposits a certain amount in the bank; money received today is therefore more
valuable than the same amount received in the future, because of the interest it can earn. Under
TVM, a future amount F received after n periods at interest rate r has a present value of
F / (1 + r)^n.

Cost-benefit analyses are estimated by using the survey method or by drawing logical conclusions
from the behavior of the market.
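A minimal sketch of discounting hypothetical costs and benefits to present value before comparing them (the cash flows and the 10% discount rate are illustrative):

# Minimal cost-benefit sketch using discounted present value.
discount_rate = 0.10
cash_flows = [-50000, 18000, 22000, 25000]    # year 0 cost, then yearly benefits

npv = sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))
print(f"Net present value: {npv:,.2f}")       # a positive NPV means the benefits recover the costs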

Conclusion

To conclude, the Improve phase deals primarily with generating plausible solutions that could
improve the existing system. As seen above, the Open-Narrow-Close approach uses brainstorming
to generate possible solutions to the validated root causes of a problem. A pilot run of the selected
solutions then helps in making modifications and alterations before full implementation.

After piloting, the solutions are implemented to achieve the desired goal. The implementation
process involves a cost-benefit analysis, which helps to choose the profitable options, and
contingency planning, which provides a backup plan to counter unwanted emergencies. Along the
way, various tools are used to design the experiments and manage the project; these tools help
extensively in improving quality and controlling the situation.



Chapter 6 - The Control Phase
6 The Control Phase

Objectives

The objectives of the CONTROL phase are to:

Implement a plan for maintaining the improvement or gains

Verify long term process capability

The Six Sigma project has finished successfully: the goals have been met and the customer has
accepted the deliverables. You must be thinking, now what? There is one thing to be careful
about: you have to see that the process or project doesn't backslide. Control is essential to ensure
there is no backsliding. An organization has to ensure the gains are permanent and that the
process remains stable. A solution is of little or no value if it isn't sustained over a long period of
time.

The Control phase establishes standard measures to maintain performance and to correct problems
as required.

This is the phase where the changes made to the X's are maintained in order to sustain or hold
the improvements in the resulting Y's.

The Control phase is the last step to sustaining the improvement in the DMAIC methodology. It is
characterized by completing the project work and handing over the improved process to the
process owner. The Control phase gets special emphasis in Six Sigma because it helps to ensure
that the solution stays permanent and provides additional data for further improvements.
Previous experience shows that hard-earned results are very difficult to sustain if the process is
left to itself. A well-designed process has inherent self-control, unlike a poor process, which
requires external control.

To sustain the hard-earned gains, it is necessary to make an all-inclusive list of control details.
The following are some ways to protect the gains:

a. The Control Plan

The Control Plan is used to implement process control to ensure that the same problems don't
keep recurring, by regularly monitoring the processes that create the products or services.

Six Sigma uses the equation Y = f(X), where Y is the output or the final product. This output is a
function of the inputs (the X's). Only by controlling the inputs can the output be controlled.

The Control plan is a tool to help standardize the process and sustain the gains. The Control plan is
created to control the product or process characteristics and the associated process variables to
ensure capability, or to ensure that a process has inherent and automatic control.

There are two aspects to a control plan- inputs and outputs. It is the inputs that can be controlled
while the outputs cannot be controlled. Outputs can only be monitored to see whether control has
been achieved or not. The two aspects of a Six Sigma control plan are:



1. Process Management Chart

The Process Management Chart is used to monitor the critical process outputs. Its purpose is to
facilitate visibility, appraisal and action for every critical process output in an organization. It is a
collection of all critical-to-quality outputs for a process (or a department, or the entire
organization). It gives a picture of the extent of reviewing, monitoring and action-taking an
organization needs to ensure process and business success.

The following diagram shows a process management summary. Section 1 identifies the
organizational areas involved, the revision level and the date.

Sections 2 through 6 show the current status of the CTQs, how they relate to the processes that
follow, and what actions, if any, are taken.

Figure: Process Management Chart summary

2. Process Control Plan

Process Control Plan is used to control the critical process inputs. The purpose of the process
control plan is to create a feedback system to assure the process can be controlled automatically.

The process control plan's focus is on the inputs (Xs) to the process, though the process outputs
(CTQs) can also be placed in a process control plan. It is a centralized document for keeping track
of the status of process characteristics. When created, the plan provides a complete description of
all possible inputs, outputs and activities for a single process.



Figure: Process Control Plan

Section 1 of the process control plan identifies the process, including its owner. Section 2 states
the operational definition of the process; it tells why the process exists. Section 3 gives a
description of what specifically is being controlled. Section 4 states the requirements and the
current performance level against those requirements. Section 5 states what remedies and
controls should be applied were the process to go out of control; the control methods may be
error-proofing methods or statistical process controls (which will be discussed in the latter part of
the chapter). Section 6 contains the names of documents for the process and any standard
operating procedures (SOPs).

A good process control plan makes it easier to hand over the process to the process owner, for
sustaining the improvements made by the six sigma team. A well designed process control plan is
the last step towards completion of the Six Sigma project.

b. Process Standardization

After the solutions have been implemented in the improve process, it is important to see if the new
process steps have been standardized. Once the project team has completed its improvement
work, they should see whether the process steps exhibit stability. Process standardization refers to
the consistency or repetitiveness in the process steps. Process standardization should exist where
one needs to experience high quality and consistency in the quality of the process.

After implementing the solutions, the project team must have created a 'map' or procedure of
how things "should be", or the way the process should run. Standards should be ingrained into
normal operations. This map should ensure that all the steps are highly repeatable and that all
employees perform the steps in the same manner, with stability in each process step. If
implemented correctly, most projects achieve a high level of standardization.

For example, the pizza-making process of the Pizza Home Delivery company can be standardized
by ensuring that:

the oven temperature of baking a certain kind of pizza is the same across all locations

the recipe of making a certain kind of pizza is standardized

the quantity and type of the ingredients used for a particular pizza are identical in all
regions



the scooters for delivering the pizzas are in good condition

the billing and inventory system is perfected by employing people specially qualified
for this purpose

Processes should be standardized where differentiation brings little or no incremental gain.
Reinventing the same processes again and again results in high costs and inefficiencies. Process
standardization leads to successful business performance and raises operational efficiency.

To state a real-life example, McDonald's strives to supply its customers with the same quality of
burgers, consistent in size, appearance and taste across different cities in the same country. (Of
course, the burgers served in Russia may differ from those served in China, because tastes differ
geographically.) This exhibits a high level of process standardization, requiring similar processes
and similar-quality ingredients.

To state another example, Airbus, the leading European aircraft manufacturer, produces many
different aircraft models; while the models differ, the process for making them is standardized.

Another thing to be seen is whether the project brings the organization into compliance with a
standard like, ISO 9000, environmental standards, product safety standards, customer standards
etc.

To meet the criteria as a non standard process, one or more of the following conditions should
exist:

Unpredictable steps in the process

Highly non-repetitive steps in the process

c. Process Documentation

As the process improvements occur in the control phase, the knowledge thus gained should be
documented. Documentation means writing down the improvements in such a way that everybody
involved in the project is doing things similarly. The documentation should support the successful
implementation of the improvement. The idea of documenting procedures is to decrease variation
in the way the process is run. Too little documentation can result in people operating the process
in slightly different ways which will increase variation in the process. A properly documented plan
exhibits the following characteristics:

Fresh employees without formal training can implement the changes or improvements

Clarity, consistency and specificity exist, with easy-to-follow operating instructions; obsolete
material is removed

The Pareto concept of the vital few against the trivial many is applied



For example, if there were no documentation of the rules of flying an aircraft, pilots might take
the plane to a height of their choice, or fly at speeds above the permissible safety limits, leading
to disastrous consequences. They might also fly the plane slower than the desirable limits, which
would waste precious time and fuel.

To cite another example, consider our Pizza Home Delivery process. The following can be
ingrained as process documentation:

The drivers of the scooters should not violate traffic rules

The drivers should not break the speed limit or overtake at the wrong moments

The weight of the heating compartment should be optimal in comparison to the weight of the
scooter

The heating compartment should be properly enclosed, lest the customer receive a cold pizza

The scooter should go for servicing at routine intervals to ensure proper maintenance

d. Response Plan

The next step in the control phase is to establish and deploy a response plan. A response plan
comes in handy when the input, process, or output measures indicate an "out-of-control"
situation, or when something changes for the worse. The plan states the critical parameters that
require corrective action and contains restorative actions for known causes of problems that might
surface.

A good response plan is an ongoing action plan to be followed by the process team members so
that the changes are positive. It should contain a definite closed loop improvement scheme. In
addition to identifying measures, specifications and targets of the process, a response plan
identifies what types of controls are in position, and what are the ongoing or planned process
improvements.

Another characteristic of a response plan is that any employee, including non-team members, can
know how to respond to a process that has been through DMAIC just by reading the plan.

For example, a key business sub-process, the billing system (based on the number of calls taken)
in an inbound customer-care BPO environment, is taken to show what a response plan should look
like:



6.1 The Control Phase ...continued

Figure: A flowchart showing a "should be" map in the billing system of an inbound BPO



e. Confirming the Number of DPMO Reductions

The next step in the Control phase is to confirm the reduction in DPMO (defects per million
opportunities). The process improvements introduced in the Improve phase should have resulted
in a reduction of field errors. How much the process capability increased and by how much the
DPMO fell indicate where the control emphasis should be shifted. This will be explained more
elaborately under control charts in Statistical Process Control.
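A minimal sketch of the DPMO calculation, using the standard formula DPMO = defects ÷ (units × opportunities per unit) × 1,000,000 (the counts below are hypothetical):

# Minimal DPMO sketch (hypothetical counts from the pizza delivery example).
defects = 42                   # late or incorrect deliveries observed
units = 5000                   # pizzas delivered
opportunities_per_unit = 3     # e.g. on-time, correct order, correct billing

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
print(f"DPMO = {dpmo:,.0f}")   # compare this figure before and after the Improve phase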

f. Transfer of Ownership to Process Owner

After the control systems are in place and response plans are made by the project team, the
process is taken over by the process owner and they run the new process. This is called transfer of
ownership. This is the essence of the Control phase of the project.

g. Systems and Structure Changes to Institutionalize the Improvement

To ensure the permanence of the introduced changes in the Control phase and to sustain the
gains, it is necessary to institutionalize these solutions. The following is a list of systems and
structure changes to make these improvements an accepted part of the organization.

Communication of Metrics: It is critical that the change details and metrics be communicated at
every step. The value calculated from multiple measurements is known as a metric; this may
include tolerances, procedures or data sheets related to the change. It should be made sure that
appropriate quality checks, gauges and verification, and operator feedback are in place. Changes
in training personnel need to be in place: the new and better ways of doing things resulting from
the Six Sigma project need to be communicated to the personnel involved. All current employees
need to be retrained, and new employees should receive proper instructions.

Compliance: It should be made sure that all individuals on the project are in agreement
with the change. It is important to get everyone’s approval, before implementation lest
someone might challenge the change.

Policy Changes: The corporate policies also need to be altered along with the results
generated from the project. It needs to be seen if some policies have become obsolete
and if new policies are needed.

Modification of quality appraisal and audit criteria: To make sure the process or
product conforms to requirements, the quality control department exists in an
organization. The quality control activity assures that the documented changes will result
in changes in the way the actual work is done. It should also be ensured that there is an
audit plan for regular surveillance of the project’s gains.

Revision in budgets: The Six Sigma project team should adjust budgets in accordance with the
improvements gained in the process, but only to the extent that profitability and capital inflow are
not affected.

Modification in engineering drawings: Many Six Sigma projects require engineering changes as
part of fixing the problem. The project team should ensure that any engineering changes, for
example in manufacturing or software, are translated into the engineering drawings. Instructions
should be handed out to scrap old drawings and instructions.

Modification in manufacturing planning: Six Sigma teams usually find new and
improved ways of manufacturing a product. If new manufacturing plans are not
documented, they are likely to be lost. The project teams should make new
manufacturing plans at least for the processes that are included in the project.

Revision in manpower forecasts: As a result of the Six Sigma project, the productivity and
efficiency of manpower increase. The Six Sigma program may result in less manpower producing
the same output, and this change should be mirrored in the manpower planning requirements.
Higher quality and faster cycle times create more value for customers and have a positive effect
on sales.

6.2 Control Methods / Tools and Techniques for Control Planning

1. Statistical Process Control (SPC)

The objective of the Control Phase is to ensure that the improved processes now enable the key
variables of the process to stay within the maximum acceptable limits, by using tools like SPC.

The SPC expert collects information about the process and does a statistical analysis on that
information. He can then take necessary action to ensure that the overall process stays in-control
and to allow the product to meet the desired specifications. He can recommend ways and means to
reduce variations, optimize the process, and perform a reliability test to see if the improvements
work.

In a system, all processes vary, and various statistical methods are used to control this variation.
Statistical Process Control makes use of control charts that help monitor the variation in
processes. This variation is expressed as the "Sample Standard Deviation", better known as SD.

Standard Deviation (SD) is a mathematical term used in combination with various control charts to
describe the normal distribution. The bell curve showing the normal distribution of data has
already been discussed in the Measure phase.

The SPC tool, i.e., the control chart, is a graphical way of observing a process's performance over
time by monitoring its inputs and outputs. The plotted values are compared with the chart's limits
to assess the variation in the process.

Control Charts

Different control charts are applied for different kinds of data. As discussed earlier, there are two
types of data: attribute data and variable data. Attribute data are data that cannot be broken
down into smaller units and to which no additional meaning can be added, for example, true or
false; hot or cold; zip codes in a country.

Variable data are data that can take any value on a continuous scale. Variable data can have
almost any numeric value and can be meaningfully divided into finer and finer increments,
depending on the precision of the measurement system; for example, the height or weight of a
person.

The control chart is chosen on the basis of the type of data. Once the type of data is known, the
subgroup size for variable data is decided, and on the basis of the subgroup size it is decided
which particular control chart is applicable. If the data type is attribute, it is determined whether
the chart will track conforming/nonconforming units or counts of nonconformities, and the
respective control chart is applied.

Figure: Selecting a control chart based on the type of data

By now, from the previous phases, it is already known what is to be measured. In the Control
phase, the process being measured is managed and monitored so that it does not deviate from its
standards.

Control charts are basically two-dimensional graphs, showing the performance of a process on
one axis and the time period on the other axis. The various attributes used in control charts are:

USL and LSL: A capable process is one in which all the measurements fall inside the specification
limits, i.e., between the Lower Specification Limit (LSL) and the Upper Specification Limit (USL).
The difference between USL and LSL defines the range of output that the process must meet;
(USL − LSL) is also known as the specification range.

Mean: The mean is the center line, or the average of the data. The collected data values are
summed and then divided by the number of data items to find the mean value of the data.



While deciding how to control or monitor the process, let it operate in its normal manner so that
sample data (subgroups) can be collected as an input. Once 25-30 subgroups of data have been
collected, compute the mean. Also determine the Lower Specification Limit and Upper
Specification Limit specified by the customer.

The capability of the process is denoted by Cp. It indicates whether the process is capable of
generating products that meet the customer's specifications. The process capability is measured
using the formula:

Cp = (USL − LSL) / (6σ)

Here σ is the standard deviation of the (normally distributed) data. The quantity 6σ is called the
"natural tolerance" of the process; the smaller 6σ is, the more stable the process output. The
sample standard deviation σ can be calculated as:

σ = √[ Σ(xi − x̄)² / (n − 1) ]

where x̄ is the mean of the observations and n is the number of observations.

Steps for Computing Standard Deviation:

Step 1: Collect the observations; suppose they are 13, 15, 10, 19, 20, 25, 12, 22, 18, and 19.

Step 2: Compute the mean (say x̄) by adding all the data values and dividing by the number of
data items, i.e., x̄ = (13+15+10+19+20+25+12+22+18+19) ÷ 10. Hence x̄ = 17.3.

Step 3: Compute the standard deviation by squaring each observation's deviation from the mean,
summing these squares, dividing by (n − 1), and taking the square root, as sketched below.
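A minimal sketch of the calculation for the ten observations above (the specification limits used for Cp are hypothetical):

# Sample standard deviation and Cp for the observations above.
import math

data = [13, 15, 10, 19, 20, 25, 12, 22, 18, 19]
n = len(data)
mean = sum(data) / n                               # 17.3

squared_devs = [(x - mean) ** 2 for x in data]
sigma = math.sqrt(sum(squared_devs) / (n - 1))     # sample standard deviation

USL, LSL = 32.0, 4.0                               # hypothetical specification limits
cp = (USL - LSL) / (6 * sigma)

print(f"mean = {mean:.1f}, sigma = {sigma:.2f}, Cp = {cp:.2f}")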



If Cp < 1, the denominator (6σ) is greater than the numerator (USL − LSL), which means the
process spread is wider than the specification limits. The process is therefore not capable of
generating outputs that abide by the specifications and is generating a significant number of
defects.

If Cp = 1, the process is just meeting the specifications but is still generating roughly 0.3%
defects. In practice, a commonly accepted minimum value for Cp is 1.33.

If Cp > 1, the process variation is less than the specification range, so the process spread fits
within the specification limits. However, defects may still occur if the process is not centered on
the target value. In general, a larger value of Cp is preferred.



The various control charts mentioned in the diagram above can be used for different kinds of
data. The steps for applying the different control charts are almost the same. They are given
below:

Clearly specify the process that needs to be monitored or controlled.

Clearly determine the method that will be used to supply the data. The data
generation mechanism should be known.

Establish the control charts.

Collect the appropriate data. The data should be good in quality and quantity.

Take appropriate decisions once the data have been collected and the control charts applied.

Control Charts for Continuous Data

Individuals and Moving Range Chart

The individuals (I) and moving range (MR) control chart is used when there is continuous data.

The individual data values are plotted on the chart in time order.

The center line of the chart depicts the average of the data values, whereas the upper and lower
control limits (UCL and LCL) are drawn above and below the center line.

If the subgroup size is 1, the individuals and moving range charts use two successive observations
to measure the process variability.

The moving range is defined as:

MR = |current observation - previous observation|

i.e., the absolute difference between two consecutive data points.

The control limit lines plotted for the individual measurements are calculated as:

UCL = X + 2.66 × MR
Center line = X
LCL = X - 2.66 × MR

Here, X is the average of all the individual data values, MR is the average of all the moving ranges of two observations, and 2.66 is the standard control-chart factor for moving ranges of two consecutive points.
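A short Python sketch of the individuals-chart limits, assuming the 2.66 moving-range factor noted above and reusing the observations from the standard deviation example:

def imr_limits(values):
    # Individuals (I) chart: the center line is the average of the values;
    # the control limits add/subtract 2.66 times the average moving range
    # of two consecutive observations.
    x_bar = sum(values) / len(values)
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, len(values))]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {"center": x_bar,
            "ucl": x_bar + 2.66 * mr_bar,
            "lcl": x_bar - 2.66 * mr_bar}

print(imr_limits([13, 15, 10, 19, 20, 25, 12, 22, 18, 19]))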

6.3 Control Methods / Tools and Techniques for Control Planning Continued...

The averages and ranges chart is also used when the data values are continuous.

Here, subgroups of two to ten successive measurements are plotted.

The main purpose of this chart is to monitor and stabilize the average value of the process characteristic.

The averages and standard deviations chart is likewise used when the data values are continuous.

The averages and standard deviations chart plots the standard deviation of each subgroup.

This chart is used when the subgroup size is ten or more measurements.

This chart is usually used when data collection is inexpensive and quick.
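As an illustration, a minimal Python sketch of the averages and ranges chart limits, assuming the commonly tabulated constants for a subgroup size of five (A2 = 0.577, D3 = 0, D4 = 2.114); the subgroup data are hypothetical:

def xbar_r_limits(subgroups, a2=0.577, d3=0.0, d4=2.114):
    # Averages (X-bar) and ranges (R) charts from equally sized subgroups.
    # a2, d3 and d4 are the tabulated constants for a subgroup size of five.
    means = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    grand_mean = sum(means) / len(means)
    r_bar = sum(ranges) / len(ranges)
    return {"xbar_center": grand_mean,
            "xbar_ucl": grand_mean + a2 * r_bar,
            "xbar_lcl": grand_mean - a2 * r_bar,
            "r_center": r_bar,
            "r_ucl": d4 * r_bar,
            "r_lcl": d3 * r_bar}

# Hypothetical subgroups of five measurements each
subgroups = [[12, 14, 13, 15, 14], [13, 13, 15, 14, 12], [14, 16, 13, 15, 14]]
print(xbar_r_limits(subgroups))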

Control Charts for Attribute Data

C-Charts: This control chart deals with the number of defects or nonconformities produced by a manufacturing process. In this chart, the number of defects is plotted per unit (a unit could be a day, month, year, batch, machine, process, etc.). While using this chart, it is assumed that defects in a quality attribute are rare and that the nonconforming events are independent.

P-Charts: This control chart deals with the proportion or fraction of defective products. In this chart, the fraction (or percentage) of defective units is plotted per unit (a unit could be a day, month, year, batch, machine, process, etc.). While using this chart, it is assumed that defects in a quality attribute are not very rare.

U-Charts: This type of chart handles defects per unit. In this chart, the average number of nonconformities per unit of product is plotted. A u-chart can also be used with varying sample sizes.
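The control limits for these attribute charts follow the standard formulas (c-bar ± 3√c-bar for the c-chart, and p-bar ± 3√(p-bar(1 - p-bar)/n) for the p-chart). A minimal Python sketch with hypothetical counts:

import math

def c_chart_limits(defect_counts):
    # c-chart: defects counted per unit; limits at c_bar +/- 3*sqrt(c_bar)
    c_bar = sum(defect_counts) / len(defect_counts)
    spread = 3 * math.sqrt(c_bar)
    return c_bar, c_bar + spread, max(0.0, c_bar - spread)

def p_chart_limits(defectives_per_sample, sample_size):
    # p-chart: fraction defective; limits at p_bar +/- 3*sqrt(p_bar*(1 - p_bar)/n)
    p_bar = sum(defectives_per_sample) / (len(defectives_per_sample) * sample_size)
    spread = 3 * math.sqrt(p_bar * (1 - p_bar) / sample_size)
    return p_bar, p_bar + spread, max(0.0, p_bar - spread)

# Hypothetical data: defects per day, and defective units per 50-unit sample
print(c_chart_limits([4, 6, 3, 5, 7, 4]))
print(p_chart_limits([2, 3, 1, 4, 2], sample_size=50))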

2. Project Planning

The deliverable of the Control phase is an effective, implemented control system. Making a control plan in the Control phase is a (sub)project in its own right. A careful list of the daily or weekly schedules, activities, timelines, responsibilities, and dates necessary to produce the deliverables should be maintained. Until the process owner is confident that the improvements are permanent, a detailed Business Process Change Control Plan should be organized and regularly updated.

3. Brainstorming

The Six Sigma team should brainstorm to expand the list of controls with ideas from across the organization.

4. Force Field Diagrams

The Force Field diagram helps in understanding the driving forces that push an improvement forward and the restraining forces that block the improvement or change from taking place.

The Force Field diagram can be used to display the forces that would push to reverse or nullify the changes, and to create counterforces that would sustain those changes. A Process Control Plan to maintain the gains and improvements of the Six Sigma project should be developed from the counterforces identified with this tool.

The Force Field Diagram is used to compare opposites, actions, consequences and different points of view. Some forces are “drivers” that push the system towards its desired goals. Other forces are “restrainers” that prevent movement in the desired direction. In any improvement effort, the restraining forces typically stand in the way of the driving forces that push the process forward.

Once the drivers and restrainers have been identified, the project team can design an action plan to hold the gains and achieve the following:



a) Decrease the forces restraining progress

b) Increase the forces which lead to movement towards the desired goal

The Force Field Diagram helps to analyze these restraining forces and to pave the way for change to happen.

The following is an example of how a force field diagram can be used to analyze where controls are needed. The service levels of a telecom customer care company (inbound environment) in a particular process have fallen below the targeted levels. The drivers and restraints are as follows:

5. PDPC

The Process Decision Program Chart (PDPC), previously discussed in the Analyze Phase, can be a powerful tool in developing a contingency plan.

6. Failure Mode and Effect Analysis (FMEA)

FMEA (Failure Modes and Effect Analysis) is an important tool in control planning.

FMEA is a tool that helps identify every potential failure mode of a process or product, the effect of each failure on the customer, and how readily these failures can be detected, so that the risk posed by these failures can be reduced. The failure modes are then ranked using the RPN (Risk Priority Number). The failure mode with the highest RPN has the highest priority for control. Feasible actions are then recommended to eliminate the problem. While making the list of failure modes, keep in mind that not every conceivable failure mode is actually possible.

The RPN is simply a product of three elements:

RPN = severity rating of the impact of the failure * probability rating that the failure will occur *
failure detection rating
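A small Python sketch of the RPN calculation and ranking; the failure modes and ratings below are invented placeholders rather than the entries of the courseware's FMEA example:

def rpn(severity, occurrence, detection):
    # Risk Priority Number = severity rating * occurrence (probability) rating
    # * detection rating
    return severity * occurrence * detection

# Hypothetical failure modes with ratings on a 1-10 scale
failure_modes = [
    {"mode": "Call dropped during transfer", "severity": 7, "occurrence": 3, "detection": 4},
    {"mode": "Wrong account details updated", "severity": 9, "occurrence": 2, "detection": 6},
]
for fm in failure_modes:
    fm["rpn"] = rpn(fm["severity"], fm["occurrence"], fm["detection"])

# The failure mode with the highest RPN gets the highest priority for control
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(fm["mode"], fm["rpn"])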
FMEA thus helps prevent problems from occurring by ascertaining the impact of each failure on the process and prompting actions to improve process efficiency. It is used to produce safe, reliable and customer-satisfying products or services. It is used to design and appraise the control plan for its readiness against failure. It is a tool that helps the team focus on the most significant contributors to the success or failure of the process.

FMEA is most effective when used in the early stages of control planning, when it is easier to take action to ensure the reliability of the process, in other words, to control the problem.

For example: The following is a simple FMEA for the basic activities undertaken by a telecom customer care helpline company in an inbound environment.

Note: In the table, the Risk Priority Number is the value used to rank the failure modes; the highest number demands the highest urgency and priority.

Failure mode number 2 has an RPN of 110, so it demands the highest priority of process
improvement and control.

7. Mistake Proofing (Poka Yoke)

Mistake proofing, also known as Poka Yoke, can be used in control planning to make sure the
problem is eliminated for good.

Poka Yoke is a mechanism that prevents a mistake from being made. It works by eliminating or greatly reducing the opportunity for an error, or by making the error so obvious at first glance that a defect reaching the customer becomes almost impossible. Poka Yoke creates actions that can eliminate mistakes, errors, and defects in everyday processes and activities. In other words, it is used to prevent the causes that give rise to defects. Mistakes are not converted into defects if the errors are discovered and eradicated beforehand.

An analysis of the cause-and-effect relationship behind a defect is the first step in the Poka Yoke mechanism. A remedy that wipes out the mistakes leading to that defect is then applied. Poka Yoke solutions can consist of anything that helps ensure the mistake is eliminated for good: a checklist, an altered sequence of operations, a computer data-entry form, a message that reminds the user to complete a task, and so on. Poka Yoke has wide applicability, especially in engineering, manufacturing, and transactional processes.

Poka Yoke can be done in two ways:

The Type-1 corrective action, usually considered the most effective form of process control, is a control which, when applied to a process, eliminates the possibility of an error condition occurring.

The second most effective type of control is the Type-2 corrective action, also known as the detection application method. This is a control that discovers when an error occurs and stops the process flow or shuts down the equipment so that the defect cannot move forward.

Both these methods are effective in control planning.

We find everyday examples of Poka Yoke. For example, on a standard 3.5-inch floppy disk, the top right corner is shaped in such a way that the disk cannot be inserted upside down. Electronic door locks in cars have mistake-proofing devices: they ensure that no door is left unlocked, and doors won't lock when a door is open or when the engine is running. A computer data-entry form that won't let the user proceed to the next field until the current field is filled in is also mistake proofing.
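As an illustration of the data-entry style of mistake proofing described above, here is a minimal Python sketch of a Type-2 (detection) check that stops the flow until every required field is filled; the field names are hypothetical:

REQUIRED_FIELDS = ["customer_name", "phone_number", "issue_category"]

def validate_entry(form):
    # Return the form only if every required field is filled in; otherwise
    # stop the flow by raising an error, so the defect cannot move forward.
    missing = [f for f in REQUIRED_FIELDS if not str(form.get(f, "")).strip()]
    if missing:
        raise ValueError("Cannot proceed, please fill in: " + ", ".join(missing))
    return form

# The incomplete entry is caught before it can reach the customer
try:
    validate_entry({"customer_name": "A. Kumar", "phone_number": ""})
except ValueError as err:
    print(err)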

Project Closure

Months of successful Six Sigma project planning and implementation can be reversed if an adequate amount of attention is not paid to project closure. A key step in any Six Sigma project life-cycle is formally closing the project. Through the Project Closure Report, the success of future projects and high-quality deliverables can be ensured. Many plans and adjustments are made during the course of the project in relation to the end result. The inputs gathered from the current project are critical to the outcomes of future projects, and the lessons learned can be used to avoid similar mistakes in future projects. That is why project closure is so important.

Formal Project Closure Report

A project closure report is developed once the project is completed and all the project deliverables
have been delivered to the Process Owner or Business Owner.

The Green Belts, being the project leaders, are entrusted with the responsibility of preparing a carefully detailed Project Closure Report to guarantee that the project is brought to a controlled end. The Project Closure Report template is an important part of project closure. It is the final document produced for the product or process and is used by senior management and Black Belts to tie up the “loose ends”. It provides the framework for communicating project closure information to the main stakeholders of the Six Sigma project.



The end-of-project report is to be prepared by the project leader/manager and should include the main findings, outcomes, and deliverables. It should be a fair representation of the project's degree of success. This Project Closure Report template should contain:

Detailed activities undertaken to close the project/process

Detailed outstanding issues, risks involved, and recommendations to handle them

Detailed operational matters

The Green Belt project leader/manager should hold a review of the project/process to ensure the completeness of all the project deliverables. From this review, it can be deduced what worked well for the project and how to avoid repeating mistakes. The review should be attended by the process owner. The basic question raised should be whether the process delivered the projected end product or service within the time limit and the financial resources at its disposal.
