The broad use of Six Sigma since its introduction in the 1980s has taught us much about how to make the best use of Six Sigma tools. Now we need to take a look back and reflect on what we've learned, particularly those things that make the use of Six Sigma tools more effective. As noted by philosopher George Santayana, "Those who remain ignorant of history are doomed to repeat it."

In my experience, eight tools are used most frequently in Six Sigma projects, and all eight use statistical thinking and methods. The tools are: process maps, cause and effect (CE) matrices, failure mode and effects analysis (FMEA), measurement system analysis (MSA), process capability studies, multi-vari studies, design of experiments (DOE) and process control plans.1

These eight tools are integrated with the Six Sigma define, measure, analyze, improve and control (DMAIC) process improvement approach. Understanding how the output of one tool can become the input for one or more other tools can help you enhance your use of the tools. A schematic of the key linkages of the tools is shown in Figure 1. Insight into the sequencing and linkage of the tools speeds up the learning process and helps you become productive more quickly.

These proven statistical tools can help you implement Six Sigma more effectively.

FIGURE 1  Six Sigma Tool Linkage
[Schematic showing the linkages among customers, process map, CE matrix, FMEA, MSA, capability analysis, multi-vari studies, DOE, SPC and the control plan, feeding process improvement. CE – cause and effect. DOE – design of experiments. FMEA – failure mode and effects analysis. MSA – measurement system analysis. SPC – statistical process control.]

Get Team Input
The results of creating a process map, CE matrix or FMEA are most useful when these tools are used by a Black Belt (BB) or Green Belt (GB) and a project team. All these people need to be involved because the tools are knowledge based, rather than data based, and the results are only as good as the knowledge of those who use them. BBs and GBs are typically confident in their abilities and sometimes, in the interest of saving time, may create the process map, CE matrix and FMEA by themselves. Such lack of team involvement is a waste of valuable knowledge and experience and should be avoided. Team involvement in model formulation, data collection, interpretation of results and implementation of process changes is important to the effective use of the other tools as well.

Good Measurement Systems
MSA generally focuses on the use of gage repeatability and reproducibility studies, with measurements made on 10 parts, samples or items by one or more people in duplicate or triplicate using one or more measuring devices. Though I've seen practitioners use as few as five parts, samples or items, such small sample sizes should be used with caution and viewed as a minimum because they may not represent the full range of variation in the process. Measurement variation is frequently different at different levels of the variable being measured, and a change in variation may be missed if the parts, samples or items measured do not reflect the total range of variation in the process.

Those using the Six Sigma tools often overlook that a complete MSA also includes an evaluation of accuracy, linearity and stability over time as well as an assessment of variation. While not every MSA needs a detailed assessment of all these characteristics, you should try to determine whether a more thorough evaluation in any of these areas is needed.

The typical discussion of MSA gives little advice on what to do when the measurement system variation is too high. One approach is to conduct measurement method ruggedness studies by using DOE techniques.
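The crossed gage R&R study described above (parts measured by several people in duplicate or triplicate) is usually analyzed by splitting the total variation into repeatability, reproducibility and part-to-part components. The following is a minimal sketch of that analysis using classical ANOVA mean squares; the layout (10 parts, 3 operators, 2 replicates) matches the text, but the simulated effect sizes are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative crossed gage R&R layout: 10 parts x 3 operators x 2 replicates.
n_parts, n_ops, n_rep = 10, 3, 2
part = rng.normal(0.0, 2.0, (n_parts, 1, 1))  # part-to-part variation (assumed sd)
oper = rng.normal(0.0, 0.4, (1, n_ops, 1))    # operator (reproducibility) effect
y = 50.0 + part + oper + rng.normal(0.0, 0.3, (n_parts, n_ops, n_rep))

grand = y.mean()
part_means = y.mean(axis=(1, 2))
op_means = y.mean(axis=(0, 2))
cell_means = y.mean(axis=2)

# Classical ANOVA mean squares for the crossed design
ms_part = n_ops * n_rep * ((part_means - grand) ** 2).sum() / (n_parts - 1)
ms_op = n_parts * n_rep * ((op_means - grand) ** 2).sum() / (n_ops - 1)
ss_int = n_rep * ((cell_means - part_means[:, None]
                   - op_means[None, :] + grand) ** 2).sum()
ms_int = ss_int / ((n_parts - 1) * (n_ops - 1))
ms_err = ((y - cell_means[:, :, None]) ** 2).sum() / (n_parts * n_ops * (n_rep - 1))

# Variance components (negative estimates truncated to zero)
var_repeat = ms_err
var_int = max((ms_int - ms_err) / n_rep, 0.0)
var_oper = max((ms_op - ms_int) / (n_parts * n_rep), 0.0)
var_part = max((ms_part - ms_int) / (n_ops * n_rep), 0.0)

var_grr = var_repeat + var_oper + var_int
pct_grr = 100 * (var_grr / (var_grr + var_part)) ** 0.5
print(f"%GRR (of total variation): {pct_grr:.1f}%")
```

The %GRR figure compares gage variation with total study variation; common rules of thumb treat under roughly 10% as acceptable and over roughly 30% as a signal the measurement system needs work, which is where the ruggedness studies mentioned above come in.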
One thing to watch for when using regression analysis, particularly in the analysis of multi-vari studies, is the presence of correlations among the process variables, known as multicollinearity. Collinearity among the predictor variables (Xs) in regression models can lead to erroneous conclusions. The regression coefficient variance inflation factor (VIF) measures the effects of collinearity.

If the collinearity is large, the regression coefficients used to identify the key drivers will be too large on the average and may indicate a positive effect when the true effect is negative, or vice versa. A useful guideline is that VIF < 5 indicates no problem, while VIF > 10 indicates collinearity is a problem and the model should be reformulated or more data should be collected. A VIF between five and 10 is a gray area in which collinearity may or may not be a problem. Computation and evaluation of the VIF should be standard practice in regression analysis.
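As a sketch of that guideline, the VIF for each predictor can be computed as 1/(1 - R^2), where R^2 comes from regressing that predictor on all the others. The small data set below is made up purely for illustration: x3 is nearly a linear combination of x1 and x2, so all three VIFs should flag a problem.

```python
import numpy as np

def vifs(X):
    """VIF for each column of X: 1 / (1 - R^2) from regressing
    that column on the remaining columns (with an intercept)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out

# Made-up example: x3 is almost exactly x1 + x2, a severe collinearity.
rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)
x3 = x1 + x2 + rng.normal(scale=0.05, size=50)
X = np.column_stack([x1, x2, x3])
print([round(v, 1) for v in vifs(X)])  # all three well above the VIF > 10 cutoff
```

Dropping or combining the redundant predictor, or collecting data that breaks the correlation, brings the VIFs back toward their lower bound of 1.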
There is also a way for you to increase your chances of finding the critical few variables. In my experience, practitioners pay too little attention to the use of screening designs when using DMAIC to improve existing processes. The theory is that by the time they work their way through the define, measure and analyze phases, they have reduced the potentially critical variables to five or fewer, and standard two-level factorial designs will do the trick. But when they get to the improve phase with six or more variables, they should consider the use of screening designs such as two-level fractional factorial and Plackett-Burman designs (see Figure 2, p. 87).5 The key drivers identified by the screening designs are used in factorial and response surface experiments to find the best region in which to operate the process.
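As one concrete option, the 12-run Plackett-Burman design can be generated from its standard cyclic generating row, as sketched below. This is the generic textbook construction, not the specific design shown in the article's Figure 2.

```python
import numpy as np

# Standard generating row for the 12-run Plackett-Burman design.
g = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])

# 11 runs are cyclic shifts of the generator; the 12th run is all -1.
rows = [np.roll(g, i) for i in range(11)]
design = np.vstack(rows + [-np.ones(11, dtype=int)])

# The columns are orthogonal (X'X = 12 * I), so each of up to 11 factor
# main effects can be estimated independently of the others.
assert np.array_equal(design.T @ design, 12 * np.eye(11, dtype=int))
print(design.shape)  # (12, 11)
```

Each column gives the high/low (+1/-1) settings for one factor across the 12 runs; with fewer than 11 factors, unused columns are simply dropped. That orthogonality is what lets a 12-run study screen so many variables at once.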
Test for Normality and Outliers
I have observed an overemphasis on testing for normality and dealing with outliers. These are important issues, but the overemphasis on these two subjects creates more heat than light. Some BBs and GBs, when doing regression analysis or analysis of variance (ANOVA), treat all the y values (process output) as a single sample and test them for normality. The test often fails, with the conclusion being regression or ANOVA is not appropriate.

Such a scenario results from a misunderstanding of the true nature of the normality assumption: The within-group variation for ANOVA and the residuals (y - y-predicted) for regression analysis, not the raw data, should be normally distributed. Allowing the statistical software to do the normality test without giving careful thought to the role of the normality assumption is one reason practitioners make this error.

Practitioners also often overlook that regression analysis and ANOVA are robust to departures from normality, with nonhomogeneous variance being a much more critical assumption. They need to pay particular attention to the testing of measurements, such as reliability, lifetime and environmental data, for normality because these measurements are known not to follow a normal distribution. Subject matter knowledge should not be ignored when using statistical methods.

When handling outliers, you may get buried in the details and become confused. In the final analysis, the relevant question is, "Do the suspect observations have any effect on the overall conclusions of the analysis?" If so, you should perform a careful study of the pedigree of the data or collect additional data. If not, you should move on. But don't forget the suspect data; your suspicions may be confirmed later, or the mystery may go unsolved. Remember, when analyzing very large data sets, you simply will not have the time to do a detailed investigation of every outlier because there may be hundreds or thousands of them.

I encourage you to try these ideas and look for other ways to help yourself and others make more effective use of the tools. Recalling the admonishment of Santayana, our goal should be to make history by quickly making large breakthroughs in process performance, not repeat history.

REFERENCES
1. Forrest W. Breyfogle III, Implementing Six Sigma—Smarter Solutions Using Statistical Methods, second edition, John Wiley and Sons, 2003.
2. ASQ Chemical and Process Industries Division, Trusting Measurement Results in the Chemical and Process Industries, ASQ Quality Press, 2001.
3. S.E. Windsor, "Attribute Gage R&R," Six Sigma Forum Magazine, August 2003, pp. 23-28.
4. ASQ Chemical and Process Industries Division, Chemical Interest Committee, Quality Assurance for the Chemical and Process Industries, ASQ Quality Press, 1999.
5. George E.P. Box, William G. Hunter and J. Stuart Hunter, Statistics for Experimenters, Wiley-Interscience, 1978.

RONALD D. SNEE is principal of performance excellence at Tunnell Consulting in King of Prussia, PA. He has a doctorate in applied and mathematical statistics from Rutgers University. Snee is a Shewhart Medalist and an ASQ Fellow.

Copyright 2003, Ronald D. Snee.
© 2003 American Society for Quality. Reprinted with permission.