
ISE 250

Leading the Six Sigma


Improvement Project

CANVAS
https://sjsu.instructure.com/
Dr. Jacob Tsao
Industrial and Systems Engineering
San Jose State University
https://www.sjsu.edu/people/jacob.tsao
Synopsis: Week 4
 A Quick Review of Lecture 3
 A Quick Expanded Study on σ and μ as well
 Ch. 6: Approaching the Problem
 Ch. 21: Hypothesis Testing, with Minitab and
Intuition (Cont’d)
 Ch. 22: Sample Size, with Minitab and Intuition
(Cont’d)

Synopsis: Week 3 (Review)
 A Quick Review of Lecture 2
 Ch. 2: Six Sigma Applications (Cont’d)
• Use of a Spectrum of Real-world Cases to Achieve Objectives
(1) – (6).
 More Cases before Addressing Methods Next: Qualitative &
Quantitative Methods in Parallel
• Gerald Smith’s taxonomy of Quality Problems.
• Classification of the Real-world Cases according to Smith’s
taxonomy.
• Identification of Appropriate Solution Methods for Each Case.
• Building the Problem-Type-to-Solution-Method
Mapping/Matrix.
Gerald Smith’s Taxonomy (02-08)

Building the Problem-Type-to-Solution-Method
Mapping/Matrix (in Excel)
↓Method \ Problem Type→   Type 1         …   Type j         …   Type m
Method 1                  Initial
Method 2                  Intermediate
Method 3                                     Initial            Initial
…                                            Intermediate
Method i                  Terminal
Method i+1                                                      Intermediate
…                                            Terminal
Method n                                                        Terminal
σ and μ, An Expanded Study
 Six Sigma presumes the knowledge of the standard
deviation (σ) and the mean (μ) of the random
variable or the distribution of the quality characteristic.
• Is this realistic?
 σ and μ must be accurately estimated
 A standard method, as a minimum
• Make sure the process is good, as designed
• Take at least 30 samples, with 5 measurements each
• Calculate sample standard deviation and sample mean
for each sample
• Average the 30 estimates
150 measurements (data points); “effectively” 120 (degrees of freedom: 30 × (5 − 1))
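The subgroup procedure above can be sketched as a short simulation (a minimal sketch, assuming the quality characteristic is normally distributed; the bias-correction constant c4 = 0.9400 for subgroups of size 5 is the standard value, since the averaged subgroup standard deviations slightly underestimate σ):

```python
import random
import statistics

def estimate_sigma(num_subgroups=30, subgroup_size=5, mu=10.0, sigma=2.0, seed=1):
    """Estimate sigma by averaging the sample SDs of many small subgroups."""
    rng = random.Random(seed)
    subgroup_sds = []
    for _ in range(num_subgroups):
        subgroup = [rng.gauss(mu, sigma) for _ in range(subgroup_size)]
        subgroup_sds.append(statistics.stdev(subgroup))  # sample SD, n-1 divisor
    s_bar = statistics.mean(subgroup_sds)
    # E[s] = c4 * sigma, so divide by c4 (= 0.9400 for n = 5) to remove the bias
    return s_bar / 0.9400

print(estimate_sigma())  # lands near the true sigma of 2.0, but not exactly on it
```

Even with 150 measurements, the estimate still varies noticeably from run to run, which is the point of the "is this realistic?" question above.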

Standard Deviation (σ), An Expanded Study

 The Larger Issue: Why 1.5 σ Shift in Six Sigma?


• A 1σ shift cannot be detected quickly by Statistical Control Charts.
 Take the most basic pair of control charts (x̄, R), for
example.
• Details will come later in the course; this is to justify a key
part of the definition of Six Sigma

Week 1: What is Six Sigma?

Chapter 1, Introduction to Statistical Quality Control, 6th Edition, by Douglas C. Montgomery.
If the shift is 1.0σ and the sample
size is n = 5, then β = 0.75.
(β = Probability of Acceptance of
the (Null) Hypothesis of In-Control.)
Chapter 6, Introduction to Statistical Quality Control, 6th Edition, by Douglas C. Montgomery.
Copyright (c) 2009 John Wiley & Sons, Inc.
Presumption of the Knowledge of σ in Six Sigma
 Is assuming knowledge of (σ) realistic? A discrete case:
• Consider an extension to HW1.1, and study the slow
convergence of the sample standard deviation to the
theoretical value, and its speed. (03-04)
 OK, consider an example of a continuous distribution.
• Consider the Normal distribution N(10,22) and a simulated
sequence of 3,000 random numbers, and see the convergence
of the sample standard deviation to the theoretical standard
deviation 2 and its speed. (03-05)
 Note: Each involved ONLY ONE random “realization”.
 Note: A rigorous study of the variance of the respective
sample standard deviation is not hard.
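The continuous-distribution experiment (03-05) is easy to reproduce (a sketch of one realization, assuming N(10, 2²) and 3,000 simulated values):

```python
import random
import statistics

rng = random.Random(42)  # one fixed realization
data = [rng.gauss(10.0, 2.0) for _ in range(3000)]

# Sample standard deviation at increasing sample sizes:
# convergence to the theoretical value 2 is slow
for k in (10, 100, 1000, 3000):
    print(k, round(statistics.stdev(data[:k]), 3))
```

Re-running with different seeds shows how much a single realization can wander at small sample sizes, which is the point of the "only one realization" note above.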
CH. 6: APPROACHING THE PROBLEM (Book)
“Beginning to identify the problem and defining it with a statement”
 5 Whys, Expanded a Little
 FMEA: Ch. 13 (Week 7; Measure; Advanced)
 FMEA & Root Cause for Example 1, on Hamburger
• Failure mode discussed in the book: Customer
dissatisfaction; undercooking; grill not calibrated in the
morning; calibration omitted in new hire training; improper
new hire training
• Another failure mode: Customer dissatisfaction; some
experiencing overcooking; ….
• Yet another failure mode: Customer dissatisfaction; some
experiencing uneven cooking (undercooking on one side &
burnt on the other); ….
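FMEA entries like these are usually prioritized by a Risk Priority Number, RPN = severity × occurrence × detection, with each factor rated 1–10. A minimal sketch (the ratings below are hypothetical illustrations, not from the book):

```python
# Each failure mode: (description, severity, occurrence, detection), rated 1-10.
# The numeric ratings here are made up for illustration only.
failure_modes = [
    ("undercooked patty (grill not calibrated)", 9, 4, 3),
    ("overcooked patty", 5, 5, 4),
    ("uneven cooking (one side burnt)", 7, 3, 5),
]

def rpn(severity, occurrence, detection):
    """Risk Priority Number: higher means address first."""
    return severity * occurrence * detection

ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  {desc}")
```

Ranking by RPN is how an FMEA feeds the root-cause work: the team drills into the highest-RPN modes first.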


CH. 6: APPROACHING THE PROBLEM (Book)
“Beginning to identify the problem and defining it with a statement”
 FMEA for Example 2, on Vacation Scheduling, If
time permits
 Alternative Answers to the 5-Whys Questions
• Why are employees not happy with vacation schedules?
• Why are employees not getting their first choice of time off for
vacation?
• Why are supervisors taking too long to approve a [vacation
schedule] request?
• Why do supervisors see the [vacation scheduling] system as
cumbersome?
 Additional failure modes and hence FMEA
CH. 6: APPROACHING THE PROBLEM (Book)
“Beginning to identify the problem and defining it with a statement”
 Creating a Problem Statement
• Criteria for a Strong Problem Statement
• Example of a Strong Problem Statement
• Example of a Weak Problem Statement
• Writing your Own Problem Statement
• Problem Statements Lead to Objective Statements/Goals
 Scope and Scope Creep

CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC;
To Build a Clearer Big Picture, for your Project
 DMAIC vs. DMADV, throughout the five phases
 Phase 1: Define:
• Identify the problem, define requirements and set goals for
success.
• DMADV in cases where the current process needs to be
completely replaced or redesigned; or newly conceived
opportunity
• DMADV: DMAIC + Change Management + Customer
Requirements
CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC, for your Project
 Phase 2: Measure
• The team [identify data to collect; collect and] use data to
verify their assumptions about the process and problem.
Might revisit the problem statements, goals and other
process-related definitions, etc., based on the result of
assumption verification.
• DMADV: DMAIC + “activities more targeted,” “collect data
and measurements that help them define the performance
requirements of the new process.”

CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC, for your Project
 Phase 3: Analyze
• Develop hypotheses about the causal relationships between
inputs and outputs, narrow the causation to the vital few
(using methods such as the Pareto principle) and use the
statistical analysis and data to validate the hypotheses and
assumptions they have made so far.

CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC, for your Project
 Phase 3: Analyze
• DMADV: … team also identify cause and effect relationships
but they are usually more concerned about identifying best
practices and benchmarks by which to measure and design
the new process. The team might also begin process design
work by identifying value-added and non-value-added
activities, locating areas where bottlenecks and errors are
likely, refining requirements to better meet the needs and
goals of the project.
• (What is the difference between a hypothesis and an
assumption?)
CH. 6: APPROACHING THE PROBLEM (Book)
Expanded: Definitions of DMAIC, for your Project
 Phase 4: Improve (or Design)
• “… teams start developing ideas that began in the Analyze
phase during the Improve phase of the project.” “They use
statistics and real-world observation to test their
hypotheses and solutions.” “Hypothesis testing actually
begins in the Analyze phase but is continued in Improve
phase as teams select solutions and begin to implement
them. Team also work to standardize solutions in
preparation for roll out improved process to daily production
and non-team employees. Teams also start measuring
results and lay the foundation for controls that will be built
in the last phase.”
CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC, for your Project
 Phase 4: Improve (or Design)
• In this phase, DMADV projects diverge substantially from DMAIC
projects. The team actually works to design a new process,
which does involve some of the solutions testing mentioned
above but also involves mapping, workflow principles,
actively building new infrastructures. This might mean
putting new equipment in place, hiring and training new
employees or developing new software tools. Teams also
start to implement the new systems and process.

CH. 6: APPROACHING THE PROBLEM (Book);
Expanded: Definitions of DMAIC, for your Project
 Phase 5: Control (or Verify)
• The Control or Verify phase is where loose ends are tied and
the project is transitioned to a daily work environment.
Controls and standards are established so that the
improvements can be maintained, but the responsibility for
those improvements is transitioned to the process owner.
During the transition, the Six Sigma team might work with
the process owner and his or her team to troubleshoot any
problems with the improvement.
CH. 6: APPROACHING THE PROBLEM;
Expanded: FMEA, RCA, Fault Tree, Logic Tree;
To Build a Clearer Big Picture, for your Project
 Root Cause Analysis (RCA): Failure Mode, Effect and
Criticality Analysis (FMECA), Brief Introduction to
Fault Tree (FT) Analysis, 5 Why’s for Root Causes,
Logic Tree (Beyond Fault Tree).
 Connection to ISE 235 (Quality Assurance and
Reliability)
 Methods to Remove or Alleviate Root Causes
• Process Improvement
• Poka Yoke
Case: Fast Food Cooking (Cont’d; Brief)
 Hamburger again, continuing the textbook and my
example
• Textbook: 5 Whys
• Looking further into the possibility of within-piece variability
• Poka Yoke: Two-surface (or George Foreman) grill properly
labeled now.

Case: PCB Manufacturing: (Cont’d)
 PCB “stuffing”: Manufacturing electronic products
• From FMEA to root causes: e.g.,
 Insufficient solder (Montgomery et al., Example 6.4; 02-06)
 Uneven solder: within-piece variability
• Possible process improvements; Poka Yoke?
 Take a closer look at the two soldering videos
• Use of stencil to apply solder paste, uniformly on a circuit board (1:12)
https://www.youtube.com/watch?v=n0GaNvrPsJ8
• PCB Assembly, with Surface Mount Devices (8:21)
https://www.youtube.com/watch?v=2qk5vxWY46A
 Orientation of the board during soldering: production line design;
production-line design for manufacturability
 Needs for solder vary on the same board: circuit layout design;
product design for manufacturability
 (Slanting the PCB upward during soldering? To take advantage of gravity?)
Example 6.4 (Cause-and-Effect or Fishbone diagram;
as input to FMEA; then input to Root Cause Analysis)
Example 6.4 (Root Cause Analysis producing: insufficient solder;
cold solder joint; …; solder short; raw card damaged)
Case: Implementation of Poka-Yoke System
to Prevent Human Error in Material Prep
 “Implementation of Poka-Yoke System to Prevent Human Error
in Material Preparation for Industry” (04-02) *
• FMEA to root causes to Poka Yoke
• A focus on Missing Part and Wrong Part
• FMEA
• Root Cause: Human Errors
• Poka Yoke: Scanner, Database and Software Control, with UI

 *2020 International Conference on Intelligent Technology and Its Applications.


Case: Universal Workplace Design
Motivated by Design to Accommodate Disability
 “Universal Design of Workplaces through the use of Poka
Yokes: Case Study and Implications” (04-03) **
 Motivation
• Not motivated by customer complaints
• Motivated by Human Compassion and Humanity
 Poka Yoke
• To prevent worker errors
• To enable workers to be qualified for more tasks
 Actually helps quality for the whole workforce, possibly with a slight
cost increase

** Journal of Industrial Engineering and Management, 2011


Case: Low cost automation and poka yoke
device – A case of heat exchanger
 “Low Cost Automation and Poka Yoke Devices: Tools for
Optimizing production Processes.” *** (04-supp)
 Optional and only if time permits.
 A Focus on Low Cost automation and Poka Yoke
 FME(C)A omitted.

 *** International Journal of Productivity and Quality Management, Vol 4, No. 5/6, 2009
Case: Root Cause thru “Logic Tree”
 A Practical Example for Searching for Root Causes - Case:
“Eastman Chemical’s Success Story” (04-04; HW)
 “Unlike a fault tree analysis, which is traditionally used for
mapping out what could go wrong, a logic tree helps determine
what did go wrong. Patience and discipline are stressed.”
 Note:
• GE website about Logic Tree:
https://www.ge.com/digital/documentation/meridium/Help/V43050/Def
ault/Subsystems/RootCauseAnalysis/Content/LogicTree.htm
Root Cause Analysis,
After Failures Have Occurred
 RCA is conducted after a quality problem has occurred in the
FIELD and to a CUSTOMER (or during proactive product testing
and design).
 Process definition, FMEA, and FAULT TREE should always be
done at the DESIGN STAGE.
 They should always precede RCA.
 So, always start with the current process, whether it is well
defined or optimized or not.
 If the process was not followed, it is a conformance problem.
 If it was followed and a problem still occurred, improve the process.
 In either case, root causes need to be found and removed or
alleviated.
Root Cause Analysis
 Fault Tree Analysis has been a well established methodology to
anticipate failures
 In fault tree analysis, “secondary events” are place holders for
possible further development. If what did go wrong was not
identified in the fault tree developed during the design stage,
the failure can be used to further develop the fault tree.
 There could be too many possible failure modes!
 Failure of Imagination, as a conclusion of a Congressional
Hearing after the Apollo 1 fire killed three astronauts on the
test pad (on the ground)
• Congressional hearings about the Jan. 6, 2021 riot are ongoing; some
have tentatively concluded “Failure of Imagination” as well
Fault Tree Exercise (04-supp; by John Thomas,
CERN)
 Hazard: Toxic chemical released
 Design:
Tank includes a relief valve opened by an operator to
protect against over-pressurization. A secondary valve is
installed as backup in case the primary valve fails. The
operator must know if the primary valve does not open so
the backup valve can be activated.
Operator console contains both a primary valve position
indicator and a primary valve open indicator light.
Draw a fault tree for this hazard and system design.
© Copyright 2014 John Thomas
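One plausible structure for the exercise's tree: the top event requires BOTH the primary relief path and the backup path to fail, and each path fails through an OR of its own causes. A tiny evaluator for such trees (a sketch only; the basic-event probabilities are made up, and independence of basic events is assumed):

```python
# Minimal fault-tree evaluator: a node is ("AND"|"OR", children) or a float
# probability for a basic event. Basic events are assumed independent.
def prob(node):
    if isinstance(node, float):
        return node
    gate, children = node
    ps = [prob(c) for c in children]
    if gate == "AND":
        out = 1.0
        for p in ps:
            out *= p
        return out
    # OR gate: 1 minus the product of the complements
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Toxic release requires BOTH the primary path and the backup path to fail.
# All probabilities below are hypothetical placeholders.
primary_fails = ("OR", [0.01,     # primary valve mechanism fails
                        0.001])   # indicator wiring error (the actual incident!)
backup_fails = ("OR", [0.01,      # backup valve fails
                       0.05])     # operator never alerted to open it
tree = ("AND", [primary_fails, backup_fails])
print(prob(tree))
```

Note how the wiring-error basic event only contributes if it was imagined and placed in the tree at design time, which is exactly the "failure of imagination" issue in the incident on the next slide.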
Fault Tree Exercise
Example of an actual incident
 System Design: Same
 Events: The valve position indicator and the open indicator light both
showed open. However, the primary valve was NOT open, and the system
exploded.
 Causal Factors: Post-accident examination discovered the indicator light circuit
was wired to indicate presence of power at the valve, but it did not indicate
valve position. Thus, the indicator showed only that the activation button had
been pushed, not that the valve had opened. An extensive quantitative safety
analysis of this design had assumed a low probability of simultaneous failure for
the two relief valves, but ignored the possibility of design error in the electrical
wiring; the probability of design error was not quantifiable. No safety evaluation
of the electrical wiring was made; instead, confidence was established on the
basis of the low probability of coincident failure of the two relief valves.
Ch. 21: Hypothesis Testing, with Intuition
 Hypothesis Testing (Revisited)
• The probability of this observation or worse (i.e., farther out)
is called “the p-value”, where the “p” stands for “(tail)
probability”. (See a minor note * below.)
• For any hypothesis test and for most practical purposes, the
“p-value” summarizes almost the entirety of the result of the
hypothesis test; in other words, it is the “bottom line”.
• Sometimes, the concern is on both sides; some other times,
the concern is only on one side: 2-sided vs. 1-sided test
 Two-sided: the sum of the two tail probabilities, corresponding to the
tail on the side of the test statistic and its symmetric counterpart.
 One-sided: Just the probability of observing what has been observed
or worse (or further on the same tail).
*Note: This is NOT the percentage non-conforming involved in the binary outcome of an inspection.
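The tail probabilities can be made concrete with a one-proportion z test (a stdlib-only sketch of the normal approximation; Minitab's exact binomial test will differ slightly for small n):

```python
from math import sqrt
from statistics import NormalDist

def one_prop_z(p_hat, p0, n, two_sided=True):
    """p-value for H0: p = p0 via the normal approximation."""
    phi = NormalDist().cdf
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    if two_sided:
        return 2 * (1 - phi(abs(z)))  # both symmetric tails
    return phi(z)                     # lower tail: "this low or lower"

# Observing p-hat = 0.78 against H0: p = 0.85 with n = 100
print(one_prop_z(0.78, 0.85, 100))                   # two-sided, about 0.05
print(one_prop_z(0.78, 0.85, 100, two_sided=False))  # one-sided, about 0.025
```

The two-sided p-value is the sum of the two symmetric tail probabilities; the one-sided p-value keeps only the tail on the side of concern, exactly as described above.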
INTUITION AND EXAMPLES, WITH MINITAB:
TWO-SIDED
 Your observed value WILL BE DIFFERENT from your
hypothesized value, e.g., sample p or sample mean
 So, the question is
• How far is too far? Or,
• How different is too different?
 So, you MUST DO (two-sided) hypothesis testing
regardless of what you observed.

INTUITION AND EXAMPLES, WITH MINITAB:
ONE-SIDED
 Consider the case: H0: p = 0.85 vs. H1: p < 0.85
(Low), with any given sample size
 Consider two possibilities about your observed value.
• Sample p = 0.835: Do you need to test H0? (Intuitively)
• Sample p = 0.865: Do you need to test H0 against H1?
(Intuitively)
 Minitab will give the same answer as your intuitive answer.
 Because the question is
• How low is too low?
• NOT: How different is too different?
 Similarly for the case: H0: p = 0.85 vs. H1: p > 0.85.
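The same intuition can be checked numerically (reusing the normal-approximation sketch; with H1: p < 0.85, a sample proportion above 0.85 yields a p-value above 0.5, so there is nothing to test):

```python
from math import sqrt
from statistics import NormalDist

def p_value_lower(p_hat, p0, n):
    """One-sided p-value for H0: p = p0 vs. H1: p < p0 (normal approximation)."""
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    return NormalDist().cdf(z)  # P(observing this low or lower)

# n = 400 is an illustrative sample size, not from the slides
print(p_value_lower(0.835, 0.85, 400))  # below the hypothesized value: smallish p
print(p_value_lower(0.865, 0.85, 400))  # above it: p-value > 0.5, never rejects
```

This matches the intuitive answer above: when the sample proportion is already on the "good" side of the hypothesized value, a one-sided test can never reject H0.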
Ch. 21: Hypothesis Testing, with Excel &
Intuition (Revisited; Brief)
 Question: Given this data, what are the hypothesized
values that will be accepted? Confidence Interval.
 This is called the “duality between hypothesis testing
and confidence interval”.
 Intuitively and even semantically:
• We have confidence in those values, because we would accept
them in the corresponding hypothesis tests.
• (Strictly speaking, we do not really have confidence in those
values; rather, we have no confidence in the values outside the
interval, because we would have rejected any of those values in
hypothesis testing.)
Ch. 21: Hypothesis Testing, with Excel &
Intuition (Revisited; Expanded)
 Intuitively and even semantically: (Cont’d)
• The reason? Both use the same test statistic, but in two
different ways.
 One treating the hypothesized value as given and using the data to
test it. (Hypothesis Testing)
 The other treating the data as given and using it to find all the
hypothesized values with which the hypothesis would be accepted.
 Try it yourself in Minitab.
 FYI Only (03-supp)
 Confidence Interval: A random interval, which
contains the parameter of interest with probability
1 − α.
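The duality can be demonstrated by brute force: scan hypothesized values p0 and keep every one the two-sided test would accept at α = 0.05. The retained set is, to grid precision, the confidence interval (a sketch using the normal-approximation test; inverting this particular test actually yields the Wilson score interval):

```python
from math import sqrt
from statistics import NormalDist

def accepted_values(p_hat, n, alpha=0.05, step=0.001):
    """All hypothesized p0 NOT rejected by the two-sided z test: the CI by duality."""
    phi = NormalDist().cdf
    kept = []
    p0 = step
    while p0 < 1.0:
        z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
        if 2 * (1 - phi(abs(z))) > alpha:  # the test accepts this p0
            kept.append(round(p0, 3))
        p0 += step
    return kept

ci = accepted_values(p_hat=0.80, n=100)
print(ci[0], ci[-1])  # endpoints of (approximately) a 95% interval for p
```

Both directions use the same test statistic: here the data are fixed and p0 varies, which is exactly the "other way" described above.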
A Spectrum of Real-world Cases:
Manufacturing Problems
 Boston Scientific’s “Proof” of Identical Manufacturing
Processes at Two Locations at the End of a Long
“Certification” Process for “Factory Duplication” (02-07)
• Alternatively, “proof” of adherence to standards
 Two-Sample t Test: Why Two-Sample? (03-03; gist)
 Different from the Paired (One-sample) t Test done for
the Gauge R&R example
 Dramatization: (02-03; gist)
• Pretend that the Gauge R&R data were collected without pairing.
• A different (opposite) conclusion is reached.
Ch. 22: Sample Size, with Minitab and
Intuition
 So far, we have focused on the “null hypothesis”,
usually denoted as H0 , and let the “alternative”
hypothesis be VAGUE, e.g.,
• H0: P = 0.85
• H0: µ = 10
 Given a set of data (collected by others, your management or
colleagues) for you to analyze, testing such a hypothesis is
just about all you can do.
 BUT, YOU SHOULD COLLECT DATA AFTER DESIGNING
YOUR EXPERIMENTS.
Ch. 22: Sample Size, with Minitab and
Intuition (Revisited)
 Graphically for the example of two-sample t-test (03-
supp)
 Frequently needed and used sample size calculations are
supported by Minitab
 Theory is left for the other courses focusing on
statistics.
 Reading 1.2: What information is lacking for sample size
calculation?
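What a sample size calculation needs beyond H0 is a specific alternative value (the effect size to detect), plus α and the desired power. With those in hand, the standard normal-approximation formula for a one-proportion test can be sketched (stdlib only; Minitab's result may differ slightly since it can use the exact binomial distribution):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_for_one_proportion(p0, p1, alpha=0.05, power=0.90):
    """Sample size for a two-sided test of H0: p = p0 against a specific p1."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z(power)           # controls the Type II error at p = p1
    num = z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)

# Detecting a drop from 85% to 80% conforming with 90% power (illustrative values)
print(n_for_one_proportion(0.85, 0.80))
```

Note how the answer depends entirely on the alternative p1: with only a vague alternative, no sample size can be computed, which is the gap the reading asks about.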
Ch. 22: Sample Size, with Minitab and
Intuition
 Now, you have rediscovered the Central Limit Theorem
with your HW exercises.
 You should not be surprised that many (hypothesis-testing)
statistics can be and are approximated by a Normal
distribution (and other distributions resulting
from Normal distributions, e.g., the Chi-squared
distribution), including even the sample proportion,
which is an average of Bernoulli random variables.
 But, this approximation is good only if the sample size is
“sufficiently” large.
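The claim about the sample proportion is easy to rediscover by simulation: standardize p̂ and check how often it lands inside the normal ±1.96 band (a sketch; with n = 200 the approximation is good but, because of discreteness, not perfect):

```python
import random
from math import sqrt

rng = random.Random(7)
p, n, reps = 0.85, 200, 5000  # illustrative values, not from the slides

z_values = []
for _ in range(reps):
    successes = sum(rng.random() < p for _ in range(n))
    p_hat = successes / n
    # Standardize using the theoretical mean and SD of the sample proportion
    z_values.append((p_hat - p) / sqrt(p * (1 - p) / n))

# Under the CLT, about 95% of standardized proportions land in (-1.96, 1.96)
inside = sum(-1.96 < z < 1.96 for z in z_values) / reps
print(inside)
```

Shrinking n toward 20 or 10 in this sketch shows the approximation degrading, which is the "sufficiently large" caveat in practice.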
Ch. 22: Sample Size, with Minitab and
Intuition
 Unstructured examples, i.e., examples not falling into well-known
classes of problems. But the null hypothesis should
always be one of “no difference,” “in control,” “on target,” or some
other hypothesis under which a test statistic can be developed and
the distribution of the test statistic can be derived or
simulated.
 Example: Comparing two proportions (04-05)
• Fisher’s Exact Test, vs.
• Normal and Chi-squared approximation
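The two-proportion comparison can be sketched without any statistics package: an exact Fisher p-value from the hypergeometric distribution next to the normal (equivalently, chi-squared) approximation. The counts below are made up for illustration; at small n the approximation can be noticeably anti-conservative:

```python
from math import comb, erf, sqrt

def fisher_two_sided(a, b, c, d):
    """Exact two-sided Fisher p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    total = comb(n, col1)
    def p_table(k):  # hypergeometric probability of first cell = k
        return comb(row1, k) * comb(row2, col1 - k) / total
    p_obs = p_table(a)
    # Sum over all tables (same margins) no more likely than the observed one
    return sum(p_table(k)
               for k in range(max(0, col1 - row2), min(row1, col1) + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

def two_prop_z_two_sided(x1, n1, x2, n2):
    """Normal-approximation p-value for H0: p1 = p2 (pooled z test)."""
    pooled = (x1 + x2) / (n1 + n2)
    z = (x1 / n1 - x2 / n2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))
    return 2 * (1 - phi(abs(z)))

# 8/20 defective in line A vs. 2/20 in line B (hypothetical counts)
print(fisher_two_sided(8, 12, 2, 18))      # exact: about 0.065
print(two_prop_z_two_sided(8, 20, 2, 20))  # approximation: about 0.028
```

At this sample size the approximation sits on the opposite side of α = 0.05 from the exact test, which is why the exact test matters for small counts.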

