
1

What is Six Sigma?

 A metric?
 A methodology?
 A philosophy?
 A management system?
2
What does Six Sigma methodology focus on?
 Defect reduction
o e.g. Reducing the number of bugs in software
o e.g. Reduction of defects in apparel

 Yield improvement
o e.g. Increasing the % of loan applications processed versus loan applications received
o e.g. Increasing sales per hour

 Improved customer satisfaction
o e.g. Improving CSAT scores in an ITES process
o e.g. Reducing attrition or turnover
o e.g. Reducing transaction processing time, Average Handling Time, etc.

 Higher net income
o e.g. Improving occupancy % of a multiplex
o e.g. Reducing and controlling raw material inventory
o e.g. Reduction of non-value-add time in a process

Continual improvement

3
Evolution of concepts behind Six Sigma

Carl Friedrich Gauss (1777-1855) introduced the concept of the Normal Curve.

In the 1920s, Walter Shewhart showed that three sigma from the mean is the point where a process requires correction. This finally led to Control Charts.

Ronald Fisher introduced Design of Experiments through a book in 1935. This was the result of a series of studies that started with the study of variation in crop yield.

FMEA was formally introduced in the late 1940s for military usage by the US Armed Forces. Later it was used for aerospace/rocket development to avoid errors in small sample sizes of costly rocket technology; an example of this is the Apollo Space program. The primary push came during the 1960s, while developing the means to put a man on the moon and return him safely to earth. In the late 1970s the Ford Motor Company introduced FMEA to the automotive industry.

AIAG, the Automotive Industry Action Group, published the most accepted document on Measurement Systems Analysis (MSA). MSA is an essential step in Six Sigma methodologies and is used to ensure reliability of data.
4
How it all started?
 In the late 1970s, Dr. Mikel Harry, a senior staff engineer at Motorola's Government
Electronics Group (GEG), experimented with problem solving through statistical
analysis. Using this approach, GEG's products were being designed and produced at
a faster rate and at a lower cost.

 Subsequently, Dr. Harry began to formulate a method for applying Six Sigma
throughout Motorola. In 1987, when Bob Galvin was the Chairman, Six Sigma was
started as a methodology at Motorola.

 Bill Smith, an engineer, and Mikel Harry together devised a methodology with a
focus on defect reduction and improvement in yield through statistics.

 The term "Six Sigma" was coined by Bill Smith, who is now called the Father of Six
Sigma.

 Terms such as Black Belt and Green Belt were coined by Mikel Harry in relation to
martial arts.

The company saved $16 billion in 10 years.

5
What is Lean Six Sigma?
What is Lean ?

 Goal – Eliminate waste and increase process speed

 Method
 Genchi Genbutsu – Go and see the workplace (gemba)
 Kaizen (change for the better) workshops

6
What is Lean Six Sigma ?
What is Six Sigma ?

 Goal – Reduce variation to improve performance on CTQs

 Method – DMAIC / DMADV approach

7
What is Lean Six Sigma ?
Lean Six Sigma Strategy

Traditional strategy: Improved product quality  higher product costs and longer delivery times

Lean Six Sigma: Improved product quality  lower product costs and shorter delivery times

8
Data Type
 Attribute data: Value of countable quality characteristics
Examples – number of defects, number of defectives

 Continuous data: Value of measurable quality characteristics
Examples – strength (kg/cm2), weight (kg), length (cm),
temperature (°C), time (sec)

9
Normal Distribution Curve
[Figure: normal distribution curve centered at µ, with Lower Spec and Upper Spec limits marked]

Area under the normal curve within ±kσ of the mean:
±1σ: 68.26%
±2σ: 95.44%
±3σ: 99.73%
±4σ: 99.9937%
±5σ: 99.999943%
±6σ: 99.9999998%
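These coverage figures can be reproduced from the normal distribution itself; a minimal sketch in Python using only the standard library (the function name is illustrative):

```python
import math

def coverage_within_k_sigma(k: float) -> float:
    """Fraction of a normal distribution within +/- k standard deviations
    of the mean: P(|Z| <= k) = erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

for k in range(1, 7):
    # Matches the slide figures: ~68.27%, 95.45%, 99.73%, ..., 99.9999998%
    print(f"+/-{k} sigma: {coverage_within_k_sigma(k):.9%}")
```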

10
Overview of DMAIC
Define
 Identify Y
 Create charter document which includes objective, scope and financial impact of the project, and seek approval
 Create process overview map and define scope

Measure
 Assess appropriateness of measurements
 Identify the baseline and target for Y's

Analyze
 Identify potential X's affecting output
 Collect and analyze data to validate/verify whether the potential X's are critical

Improve
 Develop the improvement alternatives, select the best one and carry out improvements for vital X's
 Verify if Y is improving

Control
 Select control subjects, develop and execute control plan
 Take actions to sustain the improved results
11
DMAIC Roadmap

D
Step 1: Generate Project Ideas
Step 2: Select Project
Step 3: Finalize Project Charter and High Level Map

M
Step 4: Finalize Project Y, Performance Standards for Y
Step 5: Validate Measurement System for Y
Step 6: Measure Current Performance and Gap

A
Step 7: List All Probable X's
Step 8: Identify Critical X's
Step 9: Verify sufficiency of Critical X's for the project

I
Step 10: Generate and Evaluate Alternative Solutions
Step 11: Select and Optimize best solution
Step 12: Pilot, Implement and Validate the solution

C
Step 13: Implement Control System for Critical X's
Step 14: Document Solution and Benefits
Step 15: Transfer to Process Owner, Project Closure

12
Project Identification Methods

VOC Analysis (problems identified on the basis of the Customer's Voice)
VOB Analysis (problems identified on the basis of Business problems)
COPQ Analysis (problems identified on the basis of Cost of Poor Quality, i.e. process defects)

 Identification of CTQs  List down the potential problems/projects of the organization on the basis of the above analysis

• VOC  Voice of Customer
• VOB  Voice of Business
• COPQ  Cost of Poor Quality

13
Identify the CTQ
VOC/VOB | CTQ

“I want the pizza delivered hot”

“Last year, we spent a lot of money to fix products in warranty”

“I had to wait for so long to get an operator to answer my query”

“We aren’t able to process transactions within the time promised to the customer”

“Loan application forms submitted by loan officers have too many errors”

14
Key elements in a charter

 Business Case:
• Explanation of why to do the project

 Problem and Goal Statements:


• Description of the problem/opportunity or objective in clear,
concise, measurable terms

 Project Scope:
• Process dimensions (start and end point), available resources

 Milestones:
• Key steps and dates to achieve goal

 Roles:
• People, expectations, responsibilities

15
The SIPOC Model
Suppliers  Inputs  Process (Steps)  Outputs  Customers

Inform Loop

16
Understanding SIPOC

17
Outputs of Define Phase

 Approved Project Charter

 SIPOC diagram

The main output of the Define phase is the Project Charter.

18
DMAIC Roadmap
D
Step 1: Generate Project Ideas
Step 2: Select Project
Step 3: Finalize Project Charter and High Level Map

M
Step 4: Finalize Project Y, Performance Standards for Y
Step 5: Validate Measurement System for Y
Step 6: Measure Current Performance and Gap

A
Step 7: List All Probable X's
Step 8: Identify Critical X's
Step 9: Verify sufficiency of Critical X's for the project

I
Step 10: Generate and Evaluate Alternative Solutions
Step 11: Select and Optimize best solution
Step 12: Pilot, Implement and Validate the solution

C
Step 13: Implement Control System for Critical X's
Step 14: Document Solution and Benefits
Step 15: Transfer to Process Owner, Project Closure

19
Performance Standards for Y
CTQ Performance Characteristics table columns: CTQ Measure | Operational Definition | Data Type | Unit of Measure | LSL | USL | Target

Data Type: Continuous / Discrete

Unit of Measure: e.g. degree centigrade, kg, litre, seconds, etc.

Operational Definition: A clear, concise, detailed definition of a measure. Operational definitions should be very precise and be written to avoid possible variation in interpretations.

USL: Upper Specification Limit

LSL: Lower Specification Limit

20
Measurement systems analysis

“Any measurement can be useful if its limitations are understood and observed”
– Dr. W. Edwards Deming

21
MSA - Terminologies
Discrimination
Also known as resolution.

The ability of the measurement system to detect variation to a meaningful extent compared to the process variation or specifications.

General rule: need to be able to measure in increments of 1/10 of the requirement.

22
MSA - Terminologies

Bias: Also known as accuracy. The difference between the average measurement and a reference value.

One part, one operator, one piece of equipment

23
MSA - Terminologies

Repeatability
Indicates the inherent variation in the measurement system.

One part,
one operator,
one piece of equipment

24
MSA - Terminologies

Reproducibility
Indicates the ability of different testers to achieve the same result.

One part, one piece of equipment, multiple operators

25
MSA - Terminologies

R&R or Gauge R&R

• Combined repeatability and reproducibility.
• Defines the capability of the measurement system.
• Compares the measurement system variation with the Specification / Tolerance.

%R&R acceptance guidelines:
< 10% – Good
10 – 30% – Conditionally Acceptable
> 30% – Not Acceptable
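A minimal sketch of how a %R&R figure like the one judged above can be computed from repeatability and reproducibility variance components; the function name, the 6-sigma spread convention and all numbers are illustrative assumptions (some references use 5.15 instead of 6):

```python
import math

def gauge_rr_percent(repeatability_sd: float,
                     reproducibility_sd: float,
                     tolerance: float) -> float:
    """%R&R against the tolerance: 6 * combined gauge SD / (USL - LSL) * 100."""
    grr_sd = math.sqrt(repeatability_sd**2 + reproducibility_sd**2)
    return 6 * grr_sd / tolerance * 100

# Illustrative numbers only
pct = gauge_rr_percent(repeatability_sd=0.02, reproducibility_sd=0.015, tolerance=1.0)
verdict = "Good" if pct < 10 else "Conditionally Acceptable" if pct <= 30 else "Not Acceptable"
print(f"%R&R = {pct:.1f}% -> {verdict}")  # 15.0% -> Conditionally Acceptable
```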

26
Desired ndc for continuous variables

[Figure: dot plot of repeated measurements grouping at 5.1, 5.2, 5.3, 5.4 and 5.5]

The Discrimination Index (D.I.), also known as the number of distinct data categories (ndc), compares measurement variation and process variation to determine if the measurement system is capable of discriminating different parts.

Good if 5 or more distinct values (categories) are observed.
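One commonly used estimate of ndc is 1.41 × (part-to-part standard deviation / gauge R&R standard deviation), truncated to an integer; a minimal sketch under that assumption, with illustrative numbers:

```python
def ndc(part_sd: float, gauge_rr_sd: float) -> int:
    """Number of distinct data categories, commonly estimated as
    1.41 * (part-to-part SD / gauge R&R SD), truncated to an integer."""
    return int(1.41 * part_sd / gauge_rr_sd)

# Illustrative: strong part-to-part variation relative to gauge noise
print(ndc(part_sd=0.12, gauge_rr_sd=0.025))  # 6 -> acceptable (>= 5 distinct categories)
```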

27
Attribute Measurement Systems
 These measurement systems utilize accept/reject criteria or ratings to determine the
acceptable level of quality.
 Kappa techniques are used to evaluate these measurement systems.
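As an illustration of the kappa idea (agreement between appraisers corrected for chance agreement), a minimal sketch of Cohen's kappa for two appraisers; the ratings are hypothetical:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - chance) / (1 - chance)

# Hypothetical accept/reject calls by two appraisers on the same 10 parts
a = ["accept", "accept", "reject", "accept", "reject", "accept", "accept", "reject", "accept", "accept"]
b = ["accept", "accept", "reject", "reject", "reject", "accept", "accept", "reject", "accept", "accept"]
print(round(cohens_kappa(a, b), 2))  # values near 1 indicate strong agreement
```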

28
Capability Analysis
Understanding ‘control’ versus ‘capability’

Control = statistical stability = no ‘special cause’ variation = only ‘common cause’ variation

Capability = process output meets specifications

Control charts are used to assess ‘control’ or statistical stability.
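Capability is usually quantified with Cp and Cpk, which also underlie the "Is Cp <, =, or > Cpk?" exercises later in the deck; a minimal sketch assuming the usual definitions Cp = (USL − LSL)/6σ and Cpk = min(USL − µ, µ − LSL)/3σ, with illustrative data:

```python
import statistics

def cp_cpk(data, lsl, usl):
    """Process capability indices.
    Cp  = (USL - LSL) / (6 * sigma)               -- potential capability (spread only)
    Cpk = min(USL - mu, mu - LSL) / (3 * sigma)   -- actual capability (spread + centering)
    Cpk equals Cp only when the process is perfectly centered; otherwise Cpk < Cp."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Illustrative, slightly off-center sample
sample = [10.2, 10.4, 10.1, 10.5, 10.3, 10.6, 10.2, 10.4, 10.3, 10.5]
print(cp_cpk(sample, lsl=9.0, usl=11.0))
```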

29
30
Specification Vs Process Output

31
32
DMAIC Roadmap
D
Step 1: Generate Project Ideas
Step 2: Select Project
Step 3: Finalize Project Charter and High Level Map

M
Step 4: Finalize Project Y, Performance Standards for Y
Step 5: Validate Measurement System for Y
Step 6: Measure Current Performance and Gap

A
Step 7: List All Probable X's
Step 8: Identify Critical X's
Step 9: Verify sufficiency of Critical X's for the project

I
Step 10: Generate and Evaluate Alternative Solutions
Step 11: Select and Optimize best solution
Step 12: Pilot, Implement and Validate the solution

C
Step 13: Implement Control System for Critical X's
Step 14: Document Solution and Benefits
Step 15: Transfer to Process Owner, Project Closure

33
Tools for Identification of Probable X’s

 Qualitative Analysis
 Ask for expert advice, on-site investigation, investigate best practices
through Benchmarking, Fishbone Analysis

 Graphical Analysis
 Analyze historical data using Box Plots and Scatter Diagrams
 Pareto Plots to identify the potential inputs

 Risk Analysis
 Failure Mode and Effects Analysis (FMEA)

 Value Analysis
 Analyze process maps to determine VA/NVA %

34
35
Why – Why Analysis

• Frequent visits to the dentist
Why?
• Toothache
Why?
• Gum infection
Why?
• Food particles stay after dinner
Why?
• Not brushing after dinner

36
FMEA – Operating Definition
Failure Mode and Effects Analysis is a structured and systematic process to identify potential design and process failures before they have a chance to occur, with the ultimate objective of eliminating these failures or at least minimizing their occurrence or severity.

Design FMEA
During design, it is advantageous to know:
a) How and where will the customer use the end product?
b) How may the customer "abuse" the end product?

Process FMEA
Performed to identify and address areas of potential risk within an existing process.
Helps proactively manage risks in a process.
37
Severity
A numerical measure of how serious the effect of the failure is.

Rank | Effect | Design or Process: Customer Effect | Process: Manufacturing/Assembly Effect

10 | Hazardous without warning | Very high severity ranking when a potential failure mode affects safe vehicle operation and/or involves noncompliance with government regulation, without warning. | Or may endanger operator (machine or assembly) without warning.
9 | Hazardous with warning | Very high severity ranking when a potential failure mode affects safe vehicle operation and/or involves noncompliance with government regulation, with warning. | Or may endanger operator (machine or assembly) with warning.
8 | Very High | Vehicle/item inoperable (loss of primary function). | Or 100% of product may have to be scrapped, or vehicle/item repaired in repair department with a repair time greater than one hour.
7 | High | Vehicle/item operable, but at a reduced level of performance. Customer very dissatisfied. | Or product may have to be sorted and a portion (less than 100%) scrapped, or vehicle/item repaired in repair department with a repair time between a half-hour and an hour.
6 | Moderate | Vehicle/item operable, but Comfort/Convenience item(s) inoperable. Customer dissatisfied. | Or a portion (less than 100%) of the product may have to be scrapped with no sorting, or vehicle/item repaired in repair department with a repair time less than a half-hour.
5 | Low | Vehicle/item operable, but Comfort/Convenience item(s) operable at a reduced level of performance. | Or 100% of product may have to be reworked, or vehicle/item repaired off-line but does not go to repair department.
4 | Very Low | Fit & Finish/Squeak & Rattle item does not conform. Defect noticed by most customers (greater than 75%). | Or product may have to be sorted, with no scrap, and a portion (less than 100%) reworked.
3 | Minor | Fit & Finish/Squeak & Rattle item does not conform. Defect noticed by 50% of customers. | Or a portion (less than 100%) of the product may have to be reworked, with no scrap, on-line but out-of-station.
2 | Very Minor | Fit & Finish/Squeak & Rattle item does not conform. Defect noticed by discriminating customers (less than 25%). | Or a portion (less than 100%) of the product may have to be reworked, with no scrap, on-line and in-station.
1 | None | No discernible effect. | Or slight inconvenience to operation or operator, or no effect.

38
Occurrence
Occurrence: A measure of the probability that a particular failure will actually happen. The degree of occurrence is measured on a scale of 1 to 10, where 10 signifies the highest probability of occurrence.

Rank | Likelihood | Either (failure rate) | Or (probability) | Cpk

10 | Very High: Persistent Failures | More than 100 per thousand machines/items/pieces | 1 in 10 or less | < 0.55
9 | Very High: Persistent Failures | 50 per thousand machines/items/pieces | 1 in 20-50 | 0.55 to 0.78
8 | High: Frequent Failures | 20 per thousand machines/items/pieces | 1 in 50-100 | 0.78 to 0.86
7 | High: Frequent Failures | 10 per thousand machines/items/pieces | 1 in 100-200 | 0.86 to 0.94
6 | Moderate: Occasional Failures | 5 per thousand machines/items/pieces | 1 in 200-500 | 0.94 to 1.00
5 | Moderate: Occasional Failures | 2 per thousand machines/items/pieces | 1 in 500-1000 | 1.00 to 1.10
4 | Moderate: Occasional Failures | 1 per thousand machines/items/pieces | 1 in 1000-2000 | 1.10 to 1.20
3 | Low: Relatively Few Failures | 0.5 per thousand machines/items/pieces | 1 in 2,000-10,000 | 1.20 to 1.30
2 | Low: Relatively Few Failures | 0.1 per thousand machines/items/pieces | 1 in 10,000-100,000 | 1.30 to 1.67
1 | Remote: Failure is unlikely | Less than 0.010 per thousand machines/items/pieces | 1 in 100,000 or more | >= 1.67

39
Detection
A measure of the probability that a particular failure or cause in our operation will be detected in the current operation and will not pass on to the next operation (i.e. would not affect the internal/external customer).

Rank | Detection | Criteria: Likelihood of Detection by PROCESS Control | Suggested Range of Detection Methods

10 | Absolute certainty of non-detection | Design Control will not and/or cannot detect a potential cause/mechanism and subsequent failure mode; or there is no Design Control. | Cannot detect or is not checked.
9 | Controls will probably not detect | Very remote chance the Design Control will detect a potential cause/mechanism and subsequent failure mode. | Control is achieved with indirect or random checks only.
8 | Controls have poor chance of detection | Remote chance the Design Control will detect a potential cause/mechanism and subsequent failure mode. | Control is achieved with visual inspection only.
7 | Controls have poor chance of detection | Very low chance the Design Control will detect a potential cause/mechanism and subsequent failure mode. | Control is achieved with double visual inspection only.
6 | Controls may detect | Low chance the Design Control will detect a potential cause/mechanism and subsequent failure mode. | Control is achieved with charting methods, such as SPC (Statistical Process Control).
5 | Controls may detect | Moderate chance the Design Control will detect a potential cause/mechanism and subsequent failure mode. | Control is based on variable gauging after parts have left the station, or Go/No Go gauging performed on 100% of the parts after parts have left the station.
4 | Controls have a good chance to detect | Moderately high chance the Design Control will detect a potential cause/mechanism and subsequent failure mode. | Error detection in subsequent operations, OR gauging performed on setup and first-piece check (for set-up causes only).
3 | Controls have a good chance to detect | High chance the Design Control will detect a potential cause/mechanism and subsequent failure mode. | Error detection in-station, or error detection in subsequent operations by multiple layers of acceptance: supply, select, install, verify. Cannot accept discrepant part.
2 | Controls almost certain to detect | Very high chance the Design Control will detect a potential cause/mechanism and subsequent failure mode. | Error detection in-station (automatic gauging with automatic stop feature). Cannot accept discrepant part.
1 | Controls certain to detect | Design Control will certainly detect a potential cause/mechanism and subsequent failure mode. | Discrepant parts cannot be made because the item has been error-proofed by process/product design.

* Inspection Types: A = Error Proofed, B = Gauging, C = Manual Inspection


40
Risk Priority Numbers (RPN)

RPN is a numerical and relative "measure of overall risk" corresponding to a particular failure mechanism, computed by multiplying the Severity, Occurrence and Detection numbers.

RPN = S * O * D

The RPN provides prioritization of potential failure mechanisms.

Normally, RPN values greater than 125 need a recommendation and action.
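A minimal sketch of the RPN calculation and the >125 action threshold described above; the failure modes and ratings are hypothetical:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number = Severity * Occurrence * Detection (each rated 1-10)."""
    return severity * occurrence * detection

# Hypothetical failure modes: (name, S, O, D)
failure_modes = [
    ("wrong part fitted", 8, 3, 4),
    ("label misprint", 4, 6, 2),
    ("seal leakage", 9, 2, 8),
]

for name, s, o, d in failure_modes:
    score = rpn(s, o, d)
    flag = "action needed" if score > 125 else "monitor"
    print(f"{name}: RPN={score} ({flag})")
```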

41
Notes on Reduction in RPN Numbers

Severity
This number cannot usually be changed.
E.g. a brake system failure could result in a Severity number of 10.

Occurrence
Design/process revisions and prevention controls can reduce the occurrence ranking.

Detection
This number can be reduced by instituting good detection techniques (inspection, testing, visual controls, etc.).

42
RPN Reduction
Assume that the Severity number cannot be reduced. Indicate the
order of importance that you would assign as far as addressing
these processes so as to reduce overall risk.

Item Severity Occurrence Detection RPN


a 8 10 2 160
b 10 8 2 160
c 8 2 10 160
d 10 2 8 160

43
Value Analysis
 Three elements of Work
 Value Added Time
 Non Value Added Time
 Value Enabling Time

[Figure: breakdown of lead time in traditional processes — value added ~10%, value enabling ~10%, non-value added ~80%]
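A small sketch of how VA/NVA/VE percentages of lead time might be tallied from a process map; the step names, categories and times are hypothetical:

```python
# Hypothetical process-map steps: (step, category, minutes)
steps = [
    ("enter order", "VA", 4),
    ("wait in queue", "NVA", 35),
    ("credit check", "VE", 5),
    ("pick items", "VA", 6),
    ("rework errors", "NVA", 10),
]

lead_time = sum(minutes for _, _, minutes in steps)
for category in ("VA", "VE", "NVA"):
    total = sum(m for _, c, m in steps if c == category)
    print(f"{category}: {total} min ({100 * total / lead_time:.0f}% of lead time)")
```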
44
Time Chart
Takt Time = Time (available seconds per working day) / Volume (daily production requirement)

Takt time sets the pace of production to match the pace of sales.

Cycle Time = actual time required for a worker to complete one cycle of his process
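A minimal sketch of the takt-time formula above and its comparison with a step's cycle time; the shift length, break time, demand and cycle time are illustrative assumptions:

```python
def takt_time_seconds(available_seconds_per_day: float, daily_demand: int) -> float:
    """Takt time = available working time per day / daily production requirement."""
    return available_seconds_per_day / daily_demand

# Illustrative: 8-hour shift minus 60 minutes of breaks, demand of 210 units/day
available = (8 * 3600) - (60 * 60)
takt = takt_time_seconds(available, daily_demand=210)
print(f"Takt time: {takt:.0f} s/unit")  # 120 s/unit

cycle_time = 130  # measured seconds for one cycle of a step (illustrative)
print("Constraint (time trap)" if cycle_time > takt else "Within takt")
```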

45
Time Chart

[Figure: bar chart of cycle time per process step against the takt time line; steps whose cycle time exceeds takt time are marked as time traps and constraints]

46
[Figure: stacked bar chart of Run Time (machine time, sec) and Setup Time (manual time, sec) for the constraint processes — 1. Rough boring, 2. Final boring, 3. Bottom seat turning (rough), 4. Milling, 5. Tapping, 6. Fine boring — compared against a takt time of 130 sec, with the bottleneck process highlighted]

47
Time Chart – Line Balancing

[Figure: process cycle times before line balancing vs. after line balancing]

48
DMAIC Roadmap
D
Step 1: Generate Project Ideas
Step 2: Select Project
Step 3: Finalize Project Charter and High Level Map

M
Step 4: Finalize Project Y, Performance Standards for Y
Step 5: Validate Measurement System for Y
Step 6: Measure Current Performance and Gap

A
Step 7: List All Probable X's
Step 8: Identify Critical X's
Step 9: Verify sufficiency of Critical X's for the project

I
Step 10: Generate Improvement Strategy
Step 11: Select Mistake Proof solution
Step 12: Pilot, Implement and Validate the solution

C
Step 13: Implement Control System for Critical X's
Step 14: Document Solution and Benefits
Step 15: Transfer to Process Owner, Project Closure

49
A

Is Cp <, =, or > Cpk?

50
B

Is Cp <, =, or > Cpk?

51
C

Is Cp <, =, or > Cpk?

52
A

Is Cp <, =, or > Cpk?

53
B

Is Cp <, =, or > Cpk?

54
C

Is Cp <, =, or > Cpk?

55
DMAIC Roadmap
D
Step 1: Generate Project Ideas
Step 2: Select Project
Step 3: Finalize Project Charter and High Level Map

M
Step 4: Finalize Project Y, Performance Standards for Y
Step 5: Validate Measurement System for Y
Step 6: Measure Current Performance and Gap

A
Step 7: List All Probable X's
Step 8: Identify Critical X's
Step 9: Verify sufficiency of Critical X's for the project

I
Step 10: Generate and Evaluate Alternative Solutions
Step 11: Select and Optimize best solution
Step 12: Pilot, Implement and Validate the solution

C
Step 13: Implement Control System for Critical X's
Step 14: Document Solution and Benefits
Step 15: Transfer to Process Owner, Project Closure

56
Control Phase – Main Objectives

To make sure that our process stays in control after the solution has been implemented.

To quickly detect an out-of-control state and determine the associated special causes, so that actions can be taken to correct the problem before nonconformances are produced.

57
Control Charts
 Purpose
 To help people manage systems to make the right decisions.
 Benefits:
 Prevents over-control
 Prevents under-control
 Shows how the system reacts to the change

58
59
Selecting Control Charts

TYPE OF DATA

Count or Classification (Attribute Data):
• Count (Defects): fixed sample size  C Chart; variable sample size  U Chart
• Classification (Defectives): fixed sample size  NP Chart; variable sample size  P Chart

Measurement (Variable Data):
• Subgroup size of 1  I-MR Chart
• Subgroup size 2-10  X-bar & R Chart
• Subgroup size > 10  X-bar & S Chart
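As an illustration of one branch of this selection tree (subgroup size of 1), a minimal sketch of Individuals & Moving Range (I-MR) chart limits using the standard constants 2.66 and 3.267 for a moving range of span 2; the data are hypothetical:

```python
import statistics

def imr_limits(values):
    """Control limits for an Individuals & Moving Range (I-MR) chart.
    I chart:  X-bar +/- 2.66 * MR-bar
    MR chart: UCL = 3.267 * MR-bar, LCL = 0   (constants for moving range of span 2)"""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    x_bar = statistics.mean(values)
    mr_bar = statistics.mean(moving_ranges)
    return {
        "I": (x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar),
        "MR": (0.0, mr_bar, 3.267 * mr_bar),
    }

# Hypothetical individual measurements (subgroup size of 1)
data = [10.1, 10.4, 9.9, 10.2, 10.6, 10.0, 10.3, 10.1]
print(imr_limits(data))
```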

60
Chart interpretation for abnormalities – Rule 1
Point outside the limits

61
Chart interpretation for abnormalities – Rule 2
7 Points in a row above or below the mean line.

62
Chart interpretation for abnormalities – Rule 3
7 points in a row descending or ascending

63
Chart interpretation for abnormalities – Rule 4
Too Close to the average

64
Chart interpretation for abnormalities – Rule 5
Too far away from the average

65
Chart interpretation for abnormalities – Rule 6
Cyclic pattern
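Rules 1-3 above translate directly into simple checks; a minimal sketch with hypothetical limits and data (rules 4-6 on hugging, spreading and cyclic patterns are not covered here):

```python
def rule1_outside_limits(points, lcl, ucl):
    """Rule 1: any point beyond the control limits."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

def rule2_run_same_side(points, center, run=7):
    """Rule 2: `run` consecutive points on the same side of the center line."""
    hits = []
    for i in range(len(points) - run + 1):
        window = points[i:i + run]
        if all(x > center for x in window) or all(x < center for x in window):
            hits.append(i)
    return hits

def rule3_trend(points, run=7):
    """Rule 3: `run` consecutive points steadily ascending or descending."""
    hits = []
    for i in range(len(points) - run + 1):
        w = points[i:i + run]
        diffs = [b - a for a, b in zip(w, w[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            hits.append(i)
    return hits

# Hypothetical chart data
data = [10.1, 10.2, 10.3, 10.4, 10.5, 10.6, 10.7, 9.2, 10.0]
print(rule1_outside_limits(data, lcl=9.4, ucl=10.8))  # [7] -> point below LCL
print(rule2_run_same_side(data, center=10.0))         # [0] -> first 7 points above center
print(rule3_trend(data))                              # [0] -> first 7 points ascending
```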

66
Transfer to Process Owner, Project Closure

 Hand over the Control Plan to the Process Owner.

 Get a formal sign-off from the Sponsor and Process Owner on the results of the project.

 Identify best practices and replication opportunities and advise the Process Owner on the same.

67
Outputs of Control Phase
 Control Plan

 Documentation of Project Findings

 Training & Communication Plan

 Transfer to Process Owner

 Validation of Financial Benefits

The main output of the Control phase is the Control Plan.
68
