QUALITY PROGRESS | DECEMBER 2003 | www.asq.org
STATISTICS
ROUNDTABLE

Eight Essential Tools


by Ronald D. Snee

These proven statistical tools can help you implement Six Sigma more effectively.

The broad use of Six Sigma since its introduction in the 1980s has taught us much about how to make the best use of Six Sigma tools. Now we need to take a look back and reflect on what we've learned, particularly those things that make the use of Six Sigma tools more effective. As noted by philosopher George Santayana, "Those who remain ignorant of history are doomed to repeat it."

In my experience, eight tools are used most frequently in Six Sigma projects, and all eight use statistical thinking and methods. The tools are process maps, cause and effect (CE) matrices, failure mode and effects analysis (FMEA), measurement system analysis (MSA), process capability studies, multi-vari studies, design of experiments (DOE) and process control plans.1

These eight tools are integrated with the Six Sigma define, measure, analyze, improve and control (DMAIC) process improvement approach. Understanding how the output of one tool can become the input for one or more other tools can help you enhance your use of the tools. A schematic of the key linkages of the tools is shown in Figure 1. Insight into the sequencing and linkage of the tools speeds up the learning process and helps you become productive more quickly.

[FIGURE 1, Six Sigma Tool Linkage, is not reproduced here. It links customers, the process map, the CE matrix, FMEA, MSA, capability analysis, multi-vari studies, DOE, the control plan and SPC, showing how process improvement needs flow through the tools. CE = cause and effect; DOE = design of experiments; FMEA = failure mode and effects analysis; MSA = measurement system analysis; SPC = statistical process control.]

Get Team Input

The results of creating a process map, CE matrix or FMEA are most useful when these tools are used by a Black Belt (BB) or Green Belt (GB) and a project team. All these people need to be involved because the tools are knowledge based, rather than data based, and the results are only as good as the knowledge of those who use them. BBs and GBs are typically confident in their abilities and sometimes, in the interest of saving time, may create the process map, CE matrix and FMEA by themselves. Such lack of team involvement is a waste of valuable knowledge and experience and should be avoided. Team involvement in model formulation, data collection, interpretation of results and implementation of process changes is important to the effective use of the other tools as well.

Good Measurement Systems

MSA generally focuses on the use of gage repeatability and reproducibility studies, with measurements made on 10 parts, samples or items by one or more people in duplicate or triplicate using one or more measuring devices.

Though I've seen practitioners use as few as five parts, samples or items, such small sample sizes should be used with caution and viewed as a minimum because they may not represent the full range of variation in the process. Measurement variation is frequently different at different levels of the variable being measured, and a change in variation may be missed if the parts, samples or items measured do not reflect the total range of variation in the process.

Those using the Six Sigma tools often overlook that a complete MSA also includes an evaluation of accuracy, linearity and stability over time, as well as an assessment of variation. While every MSA may not need a detailed assessment of all these characteristics, you should try to determine whether a more thorough evaluation in any of these areas is needed.

The typical discussion of MSA gives little advice on what to do when the measurement system variation is too high.



One approach is to conduct measurement method ruggedness studies, using DOE techniques and highly fractionated factorial designs to identify the variables causing the high variation in the measurement system.2

When doing Six Sigma projects on processes outside manufacturing, you will often find there is no measurement system in place, so you must create one by establishing the measurement method and data collection procedures. You can then do an MSA to evaluate the repeatability and reproducibility of the measurement system. But beware: Measurement systems in service processes frequently categorize process outputs by attributes (pass/fail or poor/fair/good/excellent) rather than assign hard measurements.3

[FIGURE 2, Screening Experiments in Six Sigma, is not reproduced here. It shows the process map, cause and effect matrix, failure mode and effects analysis and multi-vari studies producing a list of candidate Xs. With two to five Xs, take action (control the Xs, select operating ranges) and run factorial or response surface DOEs; with more than five, run a DOE screening experiment first.]
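The repeatability and reproducibility breakdown described above can be sketched with a simple variance-components calculation. The study data below are made up, and the pooled-variance shortcut is a rough stand-in for the full ANOVA method an MSA package would use:

```python
import statistics

# Hypothetical gage R&R study: 2 operators measure 3 parts twice each.
# data[operator][part] -> list of replicate readings (made-up numbers).
data = {
    "op1": {"p1": [10.1, 10.2], "p2": [12.0, 11.9], "p3": [9.5, 9.6]},
    "op2": {"p1": [10.3, 10.4], "p2": [12.2, 12.1], "p3": [9.8, 9.7]},
}

# Repeatability: pooled variance of repeat readings within each
# operator/part cell (same person, same item, same gage).
cell_vars = [statistics.variance(reps)
             for parts in data.values() for reps in parts.values()]
repeatability_var = statistics.mean(cell_vars)

# Reproducibility (rough): variance of the overall operator averages.
op_means = [statistics.mean([x for reps in parts.values() for x in reps])
            for parts in data.values()]
reproducibility_var = statistics.variance(op_means)

grr_var = repeatability_var + reproducibility_var
print(f"repeatability variance:   {repeatability_var:.5f}")
print(f"reproducibility variance: {reproducibility_var:.5f}")
print(f"gage R&R std dev:         {grr_var ** 0.5:.4f}")
```

A real study would, as the text notes, use around 10 parts spanning the full operating range, and would also check accuracy, linearity and stability rather than stopping at repeatability and reproducibility.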

Assess Process Capability


A key limitation of many capability studies is that the computed capability indexes (Cp, Cpk, Pp and Ppk) are based on too small a sample. The sample size should be at least 30, though more than 50 is preferable, because capability indexes are a function of the process standard deviation. Standard deviations are highly variable and require large sample sizes to produce precise estimates of the capability indexes. For example, there is large variation in capability indexes with a sample size as large as 90. The 95% confidence intervals for Ppk = 1.33 as a function of sample size are 1.02 to 1.76 for n = 30, 1.11 to 1.61 for n = 60 and 1.17 to 1.52 for n = 90.4
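The sampling variation behind those intervals is easy to reproduce by simulation. The sketch below assumes a normal process with true Ppk = 1.33 (mean 0, standard deviation 1, specification limits at -4 and +4); the simulated interval endpoints will differ slightly from the published table:

```python
import random
import statistics

random.seed(1)

# Hypothetical process: normal, mean 0, sigma 1, specs at +/-4,
# so the true Ppk is 4 / (3 * 1) = 1.33.
LSL, USL, N_SIM = -4.0, 4.0, 2000

def ppk(sample):
    m = statistics.mean(sample)
    s = statistics.stdev(sample)  # overall (long-term) standard deviation
    return min(USL - m, m - LSL) / (3 * s)

for n in (30, 60, 90):
    estimates = sorted(
        ppk([random.gauss(0, 1) for _ in range(n)]) for _ in range(N_SIM)
    )
    lo = estimates[int(0.025 * N_SIM)]
    hi = estimates[int(0.975 * N_SIM)]
    print(f"n = {n}: middle 95% of Ppk estimates spans {lo:.2f} to {hi:.2f}")
```

Even at n = 90 the estimates spread widely around 1.33, which is the article's point: a capability index from a small sample is a rough estimate, not a precise number.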
In the measure phase, a capability study typically uses the baseline data to establish the process baseline. The baseline data can also be analyzed using run charts, control charts and other graphical techniques to detect special cause variation that, when addressed, can lead to process improvements. A 24/7 operation, for example, will have three or four teams of workers cover the different shifts. Analysis of the operation's baseline data might indicate the teams are performing at different levels due to the team members' skills and experience. Bringing the performance level of the lower performing teams up to that of the higher performing teams may result in process improvements. Thus, improvements are identified in the measure phase well before the project is officially complete.

The Effectiveness of Multi-Vari Studies

As one of the most important and versatile Six Sigma tools, the multi-vari study is useful in manufacturing and nonmanufacturing environments. In a typical multi-vari study, the process is studied as it operates. The objective is to find the input variables and the controlled and uncontrolled (noise) process variables that affect the process output variables. Unfortunately, many people forget a major purpose of the multi-vari study is to identify important uncontrolled variables, such as team, day of the week, season of the year or shift. It is not uncommon to see multi-vari studies with few or zero noise variables.

Another common error is not sampling the process long enough to see the total range of variation in process operation and performance. If the sampling timeframe is too short, your statistical analysis may find no, or few, important variables due to the small amount of variation in the data. When few important variables are found, the process should be re-evaluated so other important variables can be added to the study and an extension of the sampling time can be considered.
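The between-group versus within-group comparison at the heart of a multi-vari study can be sketched numerically. The cycle-time data and shift structure below are hypothetical; when the between-shift spread dwarfs the within-shift spread, the noise variable (shift) is an important driver:

```python
import statistics

# Hypothetical multi-vari data: cycle times sampled as the process
# runs, tagged with an uncontrolled (noise) variable, the shift.
readings = {
    "day":   [24.1, 23.8, 24.5, 24.0, 23.9],
    "night": [26.2, 26.8, 25.9, 26.5, 26.1],
    "swing": [24.3, 24.7, 24.2, 24.6, 24.4],
}

# Within-shift variation: pooled variance across the shifts.
within_var = statistics.mean(
    statistics.variance(vals) for vals in readings.values()
)
# Between-shift variation: spread of the shift averages.
shift_means = {s: statistics.mean(v) for s, v in readings.items()}
between_var = statistics.variance(shift_means.values())

for shift, m in shift_means.items():
    print(f"{shift:>5} shift mean: {m:.2f}")
print(f"within-shift variance:  {within_var:.3f}")
print(f"between-shift variance: {between_var:.3f}")
```

In this made-up data the night shift runs visibly higher, so shift would go on the list of candidate Xs. A study that omitted shift, or sampled only one shift, would miss it entirely, which is the sampling-timeframe point made above.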
Identify Key Variables

A key strategy of Six Sigma projects is to identify the three to six key variables (drivers) that drive the performance of the process. Once these variables are identified, you can optimize and control the process. Regression analysis is often used in the analysis of multi-vari studies and DOEs to identify the key variables.

It is not uncommon to see regression analyses performed without consideration of the amount of measurement variation in the data. Measurement variation can decrease the R-squared and adjusted R-squared statistics and mislead you into thinking you have a poor model, when in fact the low R-squared statistic is due to high measurement variation. A low adjusted R-squared statistic can be due to any combination of the following: wrong variables in the model or missing variables that are important, wrong functional form for the model (for example, linear relationships are assumed when a curved response relationship is present) and high measurement error.
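The effect of measurement variation on R-squared is easy to demonstrate by simulation. In the sketch below the underlying relationship is an exact straight line, so any drop in R-squared is due entirely to the added measurement noise:

```python
import random

random.seed(2)

def r_squared(xs, ys):
    """R-squared of a simple least squares line fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [i / 10 for i in range(100)]
true_y = [2.0 * x + 1.0 for x in xs]   # the true relationship is exact

results = {}
for sigma in (0.5, 5.0):               # low vs. high measurement variation
    ys = [y + random.gauss(0, sigma) for y in true_y]
    results[sigma] = r_squared(xs, ys)
    print(f"measurement sigma = {sigma}: R-squared = {results[sigma]:.2f}")
```

The model is exactly right in both runs; only the measurement system differs. Concluding "poor model" from the low R-squared in the noisy run would be precisely the mistake the text warns against.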


Another thing to look out for when using regression analysis, particularly in the analysis of multi-vari studies, is the presence of correlations among the process variables, known as multicollinearity. Collinearity among the predictor variables (Xs) in regression models can lead to erroneous conclusions. The regression coefficient variance inflation factor (VIF) measures the effects of collinearity.

If the collinearity is large, the regression coefficients used to identify the key drivers will be too large on average and may indicate a positive effect when the true effect is negative, or vice versa. A useful guideline is that VIF < 5 indicates no problem and VIF > 10 indicates collinearity is a problem and the model should be reformulated or more data should be collected. A VIF between five and 10 is a gray area in which collinearity may or may not be a problem. Computation and evaluation of the VIF should be standard practice in regression analysis.
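The VIF itself is straightforward to compute: for predictor j, VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing X_j on the remaining predictors. With only two predictors this reduces to 1 / (1 - r^2), r being their correlation. A minimal sketch with made-up data:

```python
# Two-predictor VIF sketch. With two predictors, VIF = 1 / (1 - r^2),
# where r is the correlation between the predictors. Data are made up.

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [1.1, 2.3, 2.9, 4.2, 4.8, 6.1]   # nearly a copy of x1

r = correlation(x1, x2)
vif = 1 / (1 - r * r)
print(f"correlation = {r:.3f}, VIF = {vif:.1f}")
```

Because x2 is almost a duplicate of x1, the VIF lands far above the guideline of 10: the data cannot separate the two effects, and the fitted coefficients for x1 and x2 would be unstable.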
There is also a way for you to increase your chances of finding the critical few variables. In my experience, practitioners pay too little attention to the use of screening designs when using DMAIC to improve existing processes. The theory is that by the time they work their way through the define, measure and analyze phases, they have reduced the potentially critical variables to five or fewer and standard two-level factorial designs will do the trick. But when they get to the improve phase with six or more variables, they should consider the use of screening designs such as two-level fractional factorial and Plackett-Burman designs (see Figure 2).5 The key drivers identified by the screening designs are used in factorial and response surface experiments to find the best region in which to operate the process.
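As a sketch of what such a screening design looks like, the following generates the textbook 2^(7-4) fractional factorial: seven two-level factors studied in only eight runs, using a full 2^3 design in A, B, C and the standard generators D = AB, E = AC, F = BC, G = ABC (an illustration, not a recommendation for any particular process):

```python
from itertools import product

# 2^(7-4) fractional factorial screening design: 7 two-level factors
# in 8 runs. Base design is the full 2^3 in A, B, C; the remaining
# columns use the generators D = AB, E = AC, F = BC, G = ABC.
runs = []
for a, b, c in product((-1, 1), repeat=3):
    runs.append((a, b, c, a * b, a * c, b * c, a * b * c))

print("run   A  B  C  D  E  F  G")
for i, run in enumerate(runs, start=1):
    print(f"{i:>3} " + " ".join(f"{x:>2}" for x in run))
```

Each column is balanced (four runs at each level) and every pair of columns is orthogonal, which is what lets eight runs estimate seven main effects; the price is that main effects are aliased with interactions, so the design screens rather than characterizes.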
Test for Normality and Outliers

I have observed an overemphasis on testing for normality and dealing with outliers. These are important issues, but the overemphasis on these two subjects creates more heat than light. Some BBs and GBs, when doing regression analysis or analysis of variance (ANOVA), treat all the y values (process output) as a single sample and test them for normality. The test often fails, with the conclusion being that regression or ANOVA is not appropriate.

Such a scenario results from a misunderstanding of the true nature of the normality assumption: The within-group variation for ANOVA and the residuals (y - y-predicted) for regression analysis, not the raw data, should be normally distributed. Allowing the statistical software to do the normality test without giving careful thought to the role of the normality assumption is one reason practitioners make this error.

Practitioners also often overlook that regression analysis and ANOVA are robust to departures from normality, with nonhomogeneous variance being a much more critical assumption. They need to pay particular attention to the testing of measurements, such as reliability, lifetime and environmental data, for normality because these measurements are known not to follow a normal distribution. Subject matter knowledge should not be ignored when using statistical methods.
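The raw-data versus residuals distinction can be demonstrated with simulated data from two groups with different means. Instead of a formal normality test, the sketch below uses excess kurtosis as a crude normality indicator (near zero for normal data, strongly negative for a bimodal pooled sample); the data are entirely synthetic:

```python
import random
import statistics

random.seed(3)

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (0 for a normal sample)."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3

# Two process groups with clearly different means (an ANOVA setting).
group_a = [random.gauss(10, 1) for _ in range(500)]
group_b = [random.gauss(16, 1) for _ in range(500)]

# Pooled raw y values: bimodal, so a normality test on them fails.
raw = group_a + group_b
# Residuals: each observation minus its own group mean. These are
# what the normality assumption actually applies to.
residuals = (
    [x - statistics.mean(group_a) for x in group_a]
    + [x - statistics.mean(group_b) for x in group_b]
)

print(f"excess kurtosis, raw y:     {excess_kurtosis(raw):+.2f}")
print(f"excess kurtosis, residuals: {excess_kurtosis(residuals):+.2f}")
```

The pooled y values are far from normal precisely because the groups differ, yet the residuals are well behaved, so ANOVA is entirely appropriate. Rejecting the analysis because the raw data failed a normality test would be the error described above.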
When handling outliers, you may get buried in the details and become confused. In the final analysis, the relevant question is, "Do the suspect observations have any effect on the overall conclusions of the analysis?" If so, you should perform a careful study of the pedigree of the data or collect additional data. If not, you should move on. But don't forget the suspect data; your suspicions may be confirmed later, or the mystery may go unsolved. Remember, when analyzing very large data sets, you simply will not have the time to do a detailed investigation of every outlier because there may be hundreds or thousands of them.

I encourage you to try these ideas and look for other ways to help yourself and others make more effective use of the tools. Recalling the admonishment of Santayana, our goal should be to make history by quickly making large breakthroughs in process performance, not repeat history.

REFERENCES

1. Forrest W. Breyfogle III, Implementing Six Sigma: Smarter Solutions Using Statistical Methods, second edition, John Wiley and Sons, 2003.
2. ASQ Chemical and Process Industries Division, Trusting Measurement Results in the Chemical and Process Industries, ASQ Quality Press, 2001.
3. S.E. Windsor, "Attribute Gage R&R," Six Sigma Forum Magazine, August 2003, pp. 23-28.
4. ASQ Chemical and Process Industries Division, Chemical Interest Committee, Quality Assurance for the Chemical and Process Industries, ASQ Quality Press, 1999.
5. George E.P. Box, William G. Hunter and J. Stuart Hunter, Statistics for Experimenters, Wiley-Interscience, 1978.

RONALD D. SNEE is principal of performance excellence at Tunnell Consulting in King of Prussia, PA. He has a doctorate in applied and mathematical statistics from Rutgers University. Snee is a Shewhart Medalist and an ASQ Fellow.

Copyright 2003, Ronald D. Snee. © 2003 American Society for Quality. Reprinted with permission.

Please comment: If you would like to comment on this article, please post your remarks on the Quality Progress Discussion Board at www.asq.org, or e-mail them to editor@asq.org.