H1317_CH00FM.qxd 10/18/07 11:38 AM Page i

The Desk Reference
of Statistical
Quality Methods
Second Edition


Also available from ASQ Quality Press:
Process Quality Control: Troubleshooting and Interpretation of Data, Fourth Edition
Ellis R. Ott, Edward G. Schilling, and Dean V. Neubauer
The Weibull Analysis Handbook, Second Edition
Bryan Dodson
Quality Engineering Statistics
Robert A. Dovich
Applied Statistics for the Six Sigma Green Belt
H. Fred Walker and Bhisham C. Gupta
Glossary and Tables for Statistical Quality Control, Fourth Edition
ASQ Statistics Division
Statistical Quality Control Using Excel, Second Edition
Marjorie L. Icenogle and Steven M. Zimmerman
Statistical Engineering: An Algorithm for Reducing Variation in Manufacturing
Processes
Stefan H. Steiner and R. Jock MacKay
Applied Data Analysis for Process Improvement: A Practical Guide to Six Sigma Black
Belt Statistics
James L. Lamprecht
Economic Control of Quality of Manufactured Product
Walter A. Shewhart
SPC for Right-Brain Thinkers: Process Control for Non-Statisticians
Lon Roberts
X-Bar and R Chart Blanks
ASQ
Enabling Excellence: The Seven Elements Essential to Achieving Competitive Advantage
Timothy A. Pine
To request a complimentary catalog of ASQ Quality Press publications, call 800-248-1946,
or visit our Web site at http://www.asq.org/quality-press.


The Desk Reference
of Statistical
Quality Methods
Second Edition

Mark L. Crossley

ASQ Quality Press
Milwaukee, Wisconsin


American Society for Quality, Quality Press, Milwaukee 53203
© 2008 by American Society for Quality
All rights reserved. Published 2007
Printed in the United States of America
13 12 11 10 09 08 07 5 4 3 2 1
Library of Congress Cataloging-in-Publication Data
Crossley, Mark L.
The desk reference of statistical quality methods / Mark L. Crossley.
p. cm.
ISBN 978-0-87389-725-9 (alk. paper)
1. Quality control—Statistical methods. 2. Process control—Statistical
methods. I. Title.
TS156.C79 2007
658.5′62—dc22

2007033871

No part of this book may be reproduced in any form or by any means, electronic, mechanical, photocopying,
recording, or otherwise, without the prior written permission of the publisher.
Publisher: William A. Tony
Acquisitions Editor: Matt Meinholz
Project Editor: Paul O’Mara
Production Administrator: Randall Benson
ASQ Mission: The American Society for Quality advances individual, organizational, and community
excellence worldwide through learning, quality improvement, and knowledge exchange.
Attention Bookstores, Wholesalers, Schools, and Corporations: ASQ Quality Press books, videotapes,
audiotapes, and software are available at quantity discounts with bulk purchases for business, educational, or
instructional use. For information, please contact ASQ Quality Press at 800-248-1946, or write to ASQ
Quality Press, P.O. Box 3005, Milwaukee, WI 53201-3005.
To place orders or to request a free copy of the ASQ Quality Press Publications Catalog, including ASQ
membership information, call 800-248-1946. Visit our Web site at www.asq.org or http://www.asq.org/
quality-press.

Printed on acid-free paper


Contents

Usage Matrix Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Dedication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Accelerated Reliability Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Temperature-Humidity Bias Accelerated Testing . . . . . . . . . . . . . . . . . . . . . . . . 5
Example of the Temperature-Humidity Acceleration Model . . . . . . . . . . . . . 5
Graphical Method for the Determination of the Energy of Activation Ea . . . . . 6
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Acceptance Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Modified Control Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Acceptance Control Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Acceptance Sampling for Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Operating Characteristic Curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
MIL-STD-105E (ANSI/ASQC Z1.4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Format for Use and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Continuous Sampling Plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
CSP-1 Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
CSP-2 Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
A Method for Selecting the AQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Acceptance Sampling for Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Sampling Plans Based on Specified AQL, RQL, α Risk, and β Risk . . . . . . . 37
Sampling Plans Based on Range R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Average/Range Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Average/Range Control Chart with Variable Subgroups . . . . . . . . . . . . . . . . . 51
Average/Standard Deviation Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63


Chart Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Variables Control Charts: Individuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Variables Control Charts: Averages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
What about Subgroup Size for the Variables Control Charts? . . . . . . . . . . . 71
Attribute Chart: p Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
For P ≤ 0.10, Poisson Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
For P > 0.10, Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Chi-Square Contingency and Goodness-of-Fit . . . . . . . . . . . . . . . . . . . . . . . . . 83
Chi-Square Contingency Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Goodness-of-Fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Goodness-of-Fit to Poisson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Goodness-of-Fit for the Uniform Distribution . . . . . . . . . . . . . . . . . . . . . . . . . 89
Goodness-of-Fit for Any Specified Distribution . . . . . . . . . . . . . . . . . . . . . . . . 89
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Chi-Square Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Confidence Interval for the Average . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Confidence Interval, n ≤ 30 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Confidence Interval, n > 30 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Confidence Limit, n > 30, Single-Sided Limit . . . . . . . . . . . . . . . . . . . . . . . . 102
Confidence Limit, n ≤ 30, Single-Sided Limit . . . . . . . . . . . . . . . . . . . . . . . . 103
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Confidence Interval for the Proportion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Confidence Interval for the Standard Deviation . . . . . . . . . . . . . . . . . . . . . . . 115
Testing Homogeneity of Variances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Defects/Unit Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Demerit/Unit Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Descriptive Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Designed Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Normal Probability Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Visualization of Two-Factor Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Fractional Factorial Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Calculation of Main Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159


Two-Factor Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Statistical Significance Based on t-test . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Variation Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Empirical Predictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Testing Curvature or Nonlinearity of a Model (and Complete Case Study) . . 172
An Alternative Method for Evaluating Effects Based on Standard Deviation . . 178
Plackett-Burman Screening Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Discrete Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Hypergeometric Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Binomial Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Poisson Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
The Normal Distribution as an Approximation of the Binomial . . . . . . . . . . . 192
Pascal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Cumulative Distribution Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Evolutionary Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Exponentially Weighted Moving Average . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
F-test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Single-Sided vs. Double-Sided Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Histograms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Example of a Frequency Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Hypothesis Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Formulation of the Hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Conclusions and Consequences for a Hypothesis Test . . . . . . . . . . . . . . . . . . 224
Selecting a Test Statistic and Rejection Criteria . . . . . . . . . . . . . . . . . . . . . . . 225
Hypothesis about the Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Individual-Median/Range Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Individual/Moving Range Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Measurement Error Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Variables Measurement Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Terms and Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Combined (Gage) Repeatability and Reproducibility Error (GR&R) . . . . 264
Consequences of Measurement Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Confidence Interval for Repeatability, Reproducibility, and Combined R&R . . 265


Confidence Interval for Repeatability, EV . . . . . . . . . . . . . . . . . . . . . . . . . 266
Confidence Interval for Reproducibility, AV . . . . . . . . . . . . . . . . . . . . . . . . 268
Confidence Interval for Combined Repeatability and Reproducibility, R&R . . 269
Attribute and Visual Inspection Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
A Method for Evaluating Attribute Inspection . . . . . . . . . . . . . . . . . . . . . . . . 272
1. Correct, C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
2. Consumer’s Risk, Cr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
3. % Manufacturer’s Risk, Mr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
4. Repeatability, R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
5. Bias, B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Multivariate Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
T 2 Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Calculation of Individual and Average Covariance . . . . . . . . . . . . . . . . . . 284
Calculation of Individual T 2 Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Calculation of the Control Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
Standard Euclidean Distance DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Plotting Statistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Control Limits and Center Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Nonnormal Distribution Cpk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Cpk Adjusted for Sk and Ku Using Normal Probability Plots . . . . . . . . . . . . . 310
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Nonparametric Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Comparing Two Averages: Wilcoxon-Mann-Whitney Test . . . . . . . . . . . . . . . 313
Comparing More Than Two Averages: The Kruskal-Wallis Rank Sum Test . . . 316
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Central Limit Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Reporting the Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
p Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Using the p Chart When the Proportion Defective Is Extremely Low
(High-Yield Processes) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
Exponential Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Data Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
p' Chart and u' Chart for Attributes (Laney’s) . . . . . . . . . . . . . . . . . . . . . . . . 345
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Pareto Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361


Pre-Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
Pre-Control Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
A Comparison of Pre-Control and SPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
Process Capability Indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Cpk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Confidence Interval for Cpk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Cr and Cp Indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
Confidence Interval for Cp and Cr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Cpm Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Required Sample Cpk to Achieve a Desired Cpk . . . . . . . . . . . . . . . . . . . . . . . 380
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Regression and Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
Linear Least Square Fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
Confidence for the Estimated Expected Mean Value . . . . . . . . . . . . . . . . . . . 384
Assessing the Model: Drawing Inferences about β̂1 . . . . . . . . . . . . . . . . . . . . 385
Testing β1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
Measuring the Strength of the Linear Relationship . . . . . . . . . . . . . . . . . . 387
Testing the Coefficient of Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
Reliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Exponential Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Determination of Failure Rate, λ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Confidence Interval for λ from Time-Censored (Type I) Data . . . . . . . . . . 398
Confidence Interval for λ from Failure-Censored (Type II) Data . . . . . . . 399
Success Run Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
Reliability Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Sequential Simplex Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Theoretical Optimum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Strategies for Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
Shotgun Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
Approach of a Single Factor at a Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
Basic Simplex Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Simplex Calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
Sequential Simplex Optimization, Variable Size . . . . . . . . . . . . . . . . . . . . . . . 419
Short-Run Attribute Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Short-Run p Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436


Short-Run Average/Range Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
For the Short-Run Average Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
For the Short-Run Range Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438

For X̄ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439

For R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
Short-Run Individual/Moving Range Control Chart . . . . . . . . . . . . . . . . . . . 449
Derivation of the Plotting Characteristic for the Location Statistic . . . . . . . . 449
Derivation of the Plotting Characteristic for the Variation Statistic . . . . . . . . 450

Selecting a Value for X̄T and MRT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
SPC Chart Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Taguchi Loss Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Losses with Nonsymmetric Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
Testing for a Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Normal Probability Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Normal Probability Plots Using Transformed Data . . . . . . . . . . . . . . . . . . 477
Chi-Square Goodness-of-Fit Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
Skewness and Kurtosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
Weibull Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
Probability Plotting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Plotting Data Using Weibull Paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
Estimation of Minimum Life Characteristic γ . . . . . . . . . . . . . . . . . . . . . . 491
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
Wilcoxon Rank Sum (Differences in Medians) . . . . . . . . . . . . . . . . . . . . . . . . 493
Zone Format Control Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
Construction of Zone Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
Collect Historical Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
Appendix: Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533




Usage Matrix Table

Categories: Descriptive Methods, Comparative Data Analysis, Process Improvement, Reliability, SPC (Basic), SPC (Special Applications), Acceptance Sampling Plans

Topics:
Accelerated Reliability Testing
Acceptance Control Chart
Acceptance Sampling for Attributes
Acceptance Sampling for Variables
Average/Range Control Chart
Average/Range Control Chart with Variable Subgroups
Average/Standard Deviation Control Chart
Chart Resolution
Chi-Square Contingency and Goodness-of-Fit
Chi-Square Control Chart
Confidence Interval for the Average
Confidence Interval for the Proportion
Confidence Interval for the Standard Deviation
Defects/Unit Control Chart
Demerit/Unit Control Chart
Descriptive Statistics
Designed Experiments
Discrete Distributions
Evolutionary Operation
Exponentially Weighted Moving Average
F-test
Histograms
Hypothesis Testing
Individual-Median/Range Control Chart
Individual/Moving Range Control Chart
Measurement Error Assessment
Multivariate Control Chart
Nonnormal Distribution Cpk
Nonparametric Statistics
Normal Distribution
p Chart
p′ Chart and u′ Chart for Attributes (Laney’s)
Pareto Analysis
Pre-Control
Process Capability Indices
Regression and Correlation
Reliability
Sequential Simplex Optimization
Sequential Simplex Optimization, Variable Size
Short-Run Attribute Control Chart
Short-Run Average/Range Control Chart
Short-Run Individual/Moving Range Control Chart
SPC Chart Interpretation
Taguchi Loss Function
Testing for a Normal Distribution
Weibull Analysis
Wilcoxon Rank Sum (Differences in Medians)
Zone Format Control Chart

it briefly describes how to use a tool or technique. variation.H1317_CH00FM. All of the examples in this reference are based on data that are purely hypothetical and fictitious in nature. This desk reference does not assume that the reader has already been exposed to the details of a specific topic but perhaps has some familiarity with the topic and wants more information regarding its application. Topics in this category answer such questions as: What size sample should I take to determine the average of a process? How much error do I have using my current measurement system? How large of a sample do I take to be 90 percent confident that the true Cpk is not less than 1. in this desk reference (see Usage Matrix Table). At the end of most topics is a bibliography. data presentation methods. who will need a minimal prior understanding of the techniques discussed to benefit from them. Descriptive Methods These tools and techniques are used to measure selected parameters such as the central tendency. ranging from Accelerated Reliability Testing to Zone Format Control Chart. Each topic is presented in a stand-alone fashion with. or topics.qxd 10/18/07 11:38 AM Page xiii Preface This desk reference provides the quality practitioner with a single resource that illustrates in a practical manner exactly how to execute specific statistical methods frequently used in the quality sciences. These modules fall into one of the following seven major categories that are defined as a function of the topic’s application. several examples detailing computational steps and application comments. A plethora of topics have been arranged in alphabetical order. This reference is not intended to be a rigorously theoretical treatise on the subject. and other single-value responses such as measurement error and process capability indices. How to Use This Reference There are 48 individual modules. rather. This reference is accessible for the average quality practitioner. in most cases.20? 
If the measured kurtosis of my data distribution is +0.75, is the distribution considered normal? What is the hypergeometric distribution, and when do I use it? Will my error for the proportion be affected if my sample size represents a significant proportion of my population?

Statistical Process Control (Basic)

Traditional methods of statistical process control (SPC) and its general use are presented. SPC assists in the determination of process changes. Topics in this category answer such questions as: What is the probability that I will detect a process shift of +0.53 using an average/range control chart with n = 5 within seven samples after the shift has occurred? Why is the average/range control chart more robust with respect to a nonnormally distributed population than the individual/moving range control chart?

SPC (Special Applications)

Traditional methods of SPC are extended to include short-run applications, detection of changes in distributions, and non-Shewhart methods. Topics in this section answer such questions as: What control chart do I use if I only want to detect a change in the distribution of the data? How do I use a p chart when my defect rate is extremely low? What control chart can I use to make the charting process user friendly? Is there a control chart based on specification limits?

Process Improvement

Certain techniques can be used to improve processes by reducing variation or optimizing operating conditions. These techniques provide the means to improve processes by addressing such questions as: Can I use design of experiments on a continuous basis for process improvement? How do I use design of experiments to reduce variation?

Reliability

This section provides analysis of life data and applications related to the understanding of product reliability. These topics address such issues as: I have calculated the MTBF from 12 failure times; how do I determine the 90 percent confidence interval? What is the bathtub curve, and how does failure rate relate to the age of a product? How can I graphically estimate the parameters of the Weibull distribution?

Comparative Data Analysis

Comparative data analysis includes methods to compare observed data with expected values and to compare sets of data with each other. These methods are frequently used to determine whether differences in populations exist. Typical questions answered using these techniques are: I have determined that my process average is 12.0 using 25 samples; have I achieved my goal of 11.5 minimum based on these sample data? The old process was running at 7.6 percent defective (based on a sample of n = 450), and the new and improved process is 5.6 percent defective (based on 155 samples); have I actually improved the process?

Acceptance Sampling Plans

Acceptance sampling plans are schemes used to assist in the decision to accept or reject lots of materials based on sampling data for either variables or attributes. This section addresses such concerns as: Can I develop my own sampling plans, or must I depend on published plans? What affects the operating characteristic more—the sample size or the lot size?

The following list organizes the modules according to the aforementioned categories:

Descriptive Methods
Confidence Interval for the Average
Confidence Interval for the Proportion
Confidence Interval for the Standard Deviation
Descriptive Statistics
Discrete Distributions
Histograms
Measurement Error Assessment
Nonnormal Distribution Cpk
Normal Distribution
Pareto Analysis
Pre-Control
Process Capability Indices
Regression and Correlation

SPC (Basic)
Average/Range Control Chart
Average/Standard Deviation Control Chart
Chart Resolution
Defects/Unit Control Chart
Individual/Moving Range Control Chart
p Chart
SPC Chart Interpretation

SPC (Special Applications)
Acceptance Control Chart
Average/Range Control Chart with Variable Subgroups
Chi-Square Control Chart
Demerit/Unit Control Chart
Exponentially Weighted Moving Average
Individual-Median/Range Control Chart
Multivariate Control Chart
p′ Chart and u′ Chart for Attributes (Laney’s)
Short-Run Attribute Control Chart
Short-Run Average/Range Control Chart

Short-Run Individual/Moving Range Control Chart
Zone Format Control Chart

Process Improvement
Designed Experiments
Evolutionary Operation
Sequential Simplex Optimization
Sequential Simplex Optimization, Variable Size
Taguchi Loss Function

Reliability
Accelerated Reliability Testing
Reliability
Weibull Analysis

Comparative Data Analysis
Chi-Square Contingency and Goodness-of-Fit
Nonparametric Statistics
F-test
Hypothesis Testing
Testing for a Normal Distribution
Wilcoxon Rank Sum

Acceptance Sampling Plans
Acceptance Sampling for Attributes
Acceptance Sampling for Variables

While this reference does not exhaust the list of applicable techniques and tools to improve quality, it is hoped that those included will provide the reader with a more in-depth understanding of their use.

Mark L. Crossley, C.Q.E., C.R.E., C.Q.A., C.Q.Mgr., C.S.S.B.B., Master B.B.
Quality Management Associates, Inc.

Dedicated to my wife Betty and my children, Brett and Brooke. Their abundant support has made this book possible.


Accelerated Reliability Testing

The objective of accelerated testing is to compress the time to failure. The only way to accelerate this time is to increase the stress on the system or unit under study. Applying additional stress (above the normal operating conditions) simply accelerates the aging process. The modes of the stress can be:

1. Thermal
2. Humidity
3. Vibrational
4. Chemical

The most commonly used stress accelerator is thermal, where we increase the temperature above the normal operating condition. This is referred to as high-temperature operating life (HTOL). The increased temperature shifts the distributions of stress and strength closer together and decreases the time to failure.

The amount of stress and strength can be thought of in terms of a distribution. Failure occurs when the stress exceeds the strength of the system. Where the two overlap, we have unreliability. As shown in Figure 1, there will always be some unreliability when operating in a normal condition.

Figure 1 Stress/strength distribution under normal operating environmental conditions.

As the stress on the system increases, the unreliability increases (see Figure 2).

Figure 2 Unreliability increases due to increased stress.

The acceleration factor A is the ratio of the normal life compared to the life in an accelerated stress environment:

A = tU/tS,  (1)

where: A = acceleration factor
tU = life under normal stress
tS = life under increased stress

Accelerated or higher-stressed environments lead to a shortened time to failure compared to the time to failure in a normal-use condition. An acceleration factor of 45 means that 1 hour under the increased stress condition is equivalent to 45 hours under the normal-use condition.

Many failure modes, such as chemical processes (oxidation, corrosion, and electromigration), are influenced by temperature. We will assume that the thermally accelerated failure rate follows the Arrhenius relationship:

Rate = B e^(–Ea/KBT),  (2)

where: B = a constant that is a function of the failure mechanism
Ea = the activation energy in electron-volts, eV
T = temperature in degrees Kelvin, °K
KB = Boltzmann’s constant, 8.6173 × 10⁻⁵ eV/°K

Note: °K = °C + 273.15.

According to the Arrhenius model (on a molecular level), particles must overcome a potential barrier of magnitude Ea in order to participate in the process leading to ultimate failure. As more and more of the molecular events take place, we finally reach a point where a catastrophic event (failure) occurs. The Arrhenius relationship is important not only in temperature acceleration but also in other thermodynamic phenomena. The rate of these reactions is inversely proportional to the temperature:

t2/t1 = T1/T2,  (3)

where: t1 = rate at temperature T1
t2 = rate at temperature T2

Combining equations (1), (2), and (3):

AT = t2/t1 = e^[(Ea/KB)(1/T2 − 1/T1)],  (4)

where T1 = stressed temperature and T2 = use temperature.

Example: A device is normally used at 35°C. If we subject the device to a temperature of 100°C, how long will we have to test to simulate nine years of service at 35°C? Assume that the energy of activation Ea = 0.63 eV.

AT = e^[(Ea/KB)(1/Tuse − 1/Tstress)]
Tuse = 273.15 + 35 = 308.15°K
Tstress = 273.15 + 100 = 373.15°K
Ea = 0.63
KB = 8.6173 × 10⁻⁵ eV/°K
AT = e^[(7310.87)(0.000565)] = e^4.13 = 62.2

The test time to simulate nine years (78,840 hours) is

Test time = Demonstrated lifetime/AT = 78,840/62.2 = 1267.5 hours.
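The Arrhenius test-time compression above can be sketched in a few lines of Python (the function name and structure are my own, not from the text):

```python
import math

KB = 8.6173e-5  # Boltzmann's constant, eV/K


def arrhenius_af(t_use_c, t_stress_c, ea_ev):
    """Acceleration factor A_T = exp[(Ea/KB)(1/T_use - 1/T_stress)], temps in deg C."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / KB) * (1.0 / t_use - 1.0 / t_stress))


# Example from the text: 35 deg C use, 100 deg C stress, Ea = 0.63 eV
af = arrhenius_af(35, 100, 0.63)
test_hours = 9 * 365 * 24 / af  # nine years (78,840 hours) of service, compressed
print(round(af, 1), round(test_hours))
```

Because the text rounds the exponent to 4.13 before exponentiating, it reports AT = 62.2; the unrounded computation gives roughly 62.3, and the test time differs by a few hours accordingly.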

We would test the unit at 100°C for 52.8 days. This is equivalent to nine years at the normal operating temperature of 35°C.

In this example we were given the energy of activation, Ea = 0.63. How would we determine this had it not been given?

Example: To calculate the energy of activation Ea, we determine the mean time between failure (MTBF) at two elevated temperatures and use the following relationship:

Ea = KB [Ln(MTBF1/MTBF2)]/(1/T1 − 1/T2),

where: T2 = the greater test temperature
T1 = the lesser test temperature
MTBF1 = the MTBF @ T1
MTBF2 = the MTBF @ T2

Note: Both T1 and T2 are greater than the normal-use environment temperature.

A device is tested at 190°C (T1) and at 235°C (T2). The respective MTBFs are 18,000 cycles and 7000 cycles. What is the energy of activation Ea?

Ea = (8.6173 × 10⁻⁵) × [Ln(18,000/7000)]/(1/463.15 − 1/508.15)
Ea = (8.6173 × 10⁻⁵)(4971.05) = 0.43

What is the expected MTBF for the unit if the operational temperature is 20°C (normal room temperature, about 68°F)?
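The two-temperature calculation above reduces to one expression; a minimal sketch (the function name is assumed, not from the text):

```python
import math

KB = 8.6173e-5  # Boltzmann's constant, eV/K


def activation_energy(mtbf1, t1_c, mtbf2, t2_c):
    """Ea from MTBFs observed at two stress temperatures (T1 < T2, both in deg C)."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return KB * math.log(mtbf1 / mtbf2) / (1.0 / t1 - 1.0 / t2)


# Text example: 18,000 cycles at 190 deg C, 7000 cycles at 235 deg C
ea = activation_energy(18000, 190, 7000, 235)
print(round(ea, 2))  # -> 0.43
```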

AT = e^[(Ea/KB)(1/T2 − 1/T1)]
AT = e^[(0.43/8.6173 × 10⁻⁵)(1/293.15 − 1/463.15)]
AT = e^[(4989.9)(0.00125)]
AT = e^6.23 = 507.8

MTBF @ 20°C = (MTBF @ 190°C)(507.8) = (18,000)(507.8) = 9,140,400 cycles

Temperature-Humidity Bias Accelerated Testing

In the previous discussion we accelerated the reliability testing by stressing only the temperature. In temperature-humidity bias (THB) accelerated testing, we can also stress the test by increasing both the temperature and the percent relative humidity (RH). One of the more common test conditions for THB is to stress the temperature to 85°C and the RH to 85 percent for 1000 hours. D. S. Peck developed the model for the stress of RH. The acceleration factor for temperature is determined as in the previous examples; combining the temperature model (Arrhenius) and the RH model (Peck) gives us a single acceleration factor. The acceleration factor for humidity is given by

AH = (Rstress/Ruse)^m.

Example of the Temperature-Humidity Acceleration Model: The normal operational environment for a device is 35 percent RH and 21°C. A THB test is conducted at 85 percent RH and 85°C. We will assume that the energy of activation is 0.65.

AT = e^[(Ea/KB)(1/Tuse − 1/Tstress)]
AT = e^[(0.65/8.6173 × 10⁻⁵)(1/294.15 − 1/358.15)]
AT = e^[(7542.9)(0.00061)]
AT = e^4.60
AT = 99.5
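Projecting a stressed MTBF down to the use temperature is the same Arrhenius factor applied in reverse; a sketch under the example's values (the helper name is mine):

```python
import math

KB = 8.6173e-5  # Boltzmann's constant, eV/K


def mtbf_at_use(mtbf_stress, t_stress_c, t_use_c, ea_ev):
    """Scale an MTBF measured under thermal stress down to the use temperature."""
    t_stress = t_stress_c + 273.15
    t_use = t_use_c + 273.15
    af = math.exp((ea_ev / KB) * (1.0 / t_use - 1.0 / t_stress))
    return mtbf_stress * af


# Text example: MTBF = 18,000 cycles at 190 deg C, Ea = 0.43 eV, use temp 20 deg C
mtbf_20c = mtbf_at_use(18000, 190, 20, 0.43)
print(round(mtbf_20c))
```

Exact arithmetic gives roughly 9.3 million cycles; the text's 9,140,400 reflects its rounded intermediate acceleration factor of 507.8.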

Peck found nominal values of the humidity constant m between 2.5 and 3.0. For this example we will use m = 2.66, with Rstress = 85% and Ruse = 35%:

AH = (85/35)^2.66 = 10.6

The combined acceleration factor is the product of the two acceleration factors:

ATH = (AT)(AH) = (99.5)(10.6) = 1054.7

To simulate 10 years of service under normal operating conditions (35 percent RH, 21°C), we subject the device to a temperature of 85°C and a relative humidity of 85 percent for 10/1054.7 = 0.0095 years, or (0.0095)(365) = 3 1/2 days.

Graphical Method for the Determination of the Energy of Activation Ea

If we plot the MTBF on semilog paper (vertical axis) versus the reciprocal of the temperature in degrees Kelvin associated with that MTBF, we may determine the energy of activation from the slope of the line. This is referred to as an Arrhenius plot. The slope of the line is equal to Ea/KB, where Ea is the activation energy and KB is Boltzmann’s constant, 8.6173 × 10⁻⁵. By evaluating experimentally at two temperatures, each greater than the normal-use temperature, we can determine the use MTBF in a reduced time period.

Example: A device is normally used at 70°F. Accelerated testing is done at 150°F and 117°F, and the respective MTBFs are 60 and 300 hours. What is the expected MTBF at the normal-use temperature of 70°F? We start by plotting the MTBFs on the log scale (vertical) against 1000 × 1/°K on the horizontal scale, forming the Arrhenius plot (see Figure 3).

Slope = (ln 300 − ln 60)/(1/320.8 − 1/338.5) = (5.7038 − 4.0943)/(0.00312 − 0.00295) = 9668.8

Ea = (KB)(Slope) = (8.6173 × 10⁻⁵)(9668.8) = 0.83 electron-volts
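The combined THB factor is just the product of the Arrhenius and Peck terms. A sketch under the text's assumptions (Ea = 0.65 eV, m = 2.66; function names are mine):

```python
import math

KB = 8.6173e-5  # Boltzmann's constant, eV/K


def peck_humidity_af(rh_stress, rh_use, m=2.66):
    """Peck humidity acceleration factor A_H = (RH_stress/RH_use)^m."""
    return (rh_stress / rh_use) ** m


def thb_af(t_use_c, rh_use, t_stress_c, rh_stress, ea_ev, m=2.66):
    """Combined temperature-humidity acceleration factor A_TH = A_T * A_H."""
    a_t = math.exp((ea_ev / KB) *
                   (1.0 / (t_use_c + 273.15) - 1.0 / (t_stress_c + 273.15)))
    return a_t * peck_humidity_af(rh_stress, rh_use, m)


# Text example: use at 35% RH / 21 C, test at 85% RH / 85 C, Ea = 0.65 eV
a_th = thb_af(21, 35, 85, 85, 0.65)
days = 10 * 365 / a_th  # days of THB testing to simulate 10 years of service
print(round(a_th), round(days, 1))
```

The text's rounded intermediates give ATH = 1054.7; the unrounded computation lands near 1036. Either way, the required test time comes out at about 3 1/2 days.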

Figure 3 Arrhenius plot (MTBF on a log scale, roughly 10 to 5000 hours, versus the reciprocal temperature).

The acceleration factor AT is determined at 70°F:

Tuse = 70°F = 21°C = 294.2°K
Tstress = 117°F = 47°C = 320.2°K

AT = e^[(Ea/KB)(1/T2 − 1/T1)]
AT = e^[(0.83/0.000086173)(1/294.2 − 1/320.2)]
AT = 14.27

1 hour at 47°C = 14.27 hours at 21°C

The MTBF @ 21°C = (MTBF @ 47°C)(14.27) = (300)(14.27) = 4281 hours.

An extension (– – – – –) of the Arrhenius plot confirms the accelerated MTBF.
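The graphical slope fit reduces to two-point arithmetic; this sketch computes the slope directly rather than reading it from the plot (names and structure are mine, not from the text):

```python
import math

KB = 8.6173e-5  # Boltzmann's constant, eV/K

# Two accelerated test points: (temperature in deg F, observed MTBF in hours)
points = [(150, 60), (117, 300)]


def f_to_k(t_f):
    """Fahrenheit to Kelvin."""
    return (t_f - 32) * 5.0 / 9.0 + 273.15


(t1_f, mtbf1), (t2_f, mtbf2) = points
slope = (math.log(mtbf2) - math.log(mtbf1)) / (1.0 / f_to_k(t2_f) - 1.0 / f_to_k(t1_f))
ea = KB * slope  # energy of activation from the Arrhenius-plot slope

# Extrapolate the 117 deg F MTBF down to the 70 deg F use temperature
af = math.exp(slope * (1.0 / f_to_k(70) - 1.0 / f_to_k(117)))
print(round(ea, 2), round(mtbf2 * af))
```

The results (Ea near 0.82 and a use MTBF near 4200 hours) differ slightly from the text's 0.83 and 4281 hours because the text works with temperatures rounded to one decimal of a Kelvin.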

Bibliography

Condra, L. W. 1993. Reliability Improvement with Design of Experiments. New York: Marcel Dekker.
Moen, R. D., T. W. Nolan, and L. P. Provost. 1991. Improving Quality through Planned Experimentation. New York: McGraw-Hill.
Nelson, W. 1990. Accelerated Testing. New York: John Wiley & Sons.
Peck, D. S. 1986. “Comprehensive Model for Humidity Testing Correlation.” International Reliability Physics Symposium, pp. 44–50.

Acceptance Control Chart

The following sections describe control charts based on specification limits, rather than control limits. These types of control charts are called modified control limit charts or acceptance sampling control charts. Control charts based on modified control limits give some assurance that the average proportion defective shipped does not exceed a predetermined level.

Modified Control Limits

In those situations where meeting the specification is considered economically sufficient, reject limits may be used. The concept behind the use of modified limits is based on the assumption that limited shifts in the process average will occur. The use of this chart is practical where the spread of the process (6σ) is significantly less than the specification limits (upper specification limit [USL] − lower specification limit [LSL]); that is, the normal process variation (6s) as a percentage of the total tolerance is small, less than 75 percent. The only statistical process control (SPC) detection rule used with modified control limits is when a single point is greater than the upper reject limit (URL) or below the lower reject limit (LRL).

To specify the modified control limits, we assume that the process data are normally distributed. For the process nonconforming to be less than δ, the true process mean μ must be within the interval μL ≤ μ ≤ μU, where:

μL = the minimum process average allowed
μU = the maximum process average allowed
Zδ = the upper 100(1 − δ) percentage point of the standard normal distribution
δ = the maximum percentage nonconforming allowed for the process

Therefore, we must have μL = LSL + Zδσ and μU = USL − Zδσ.

If a specified level for a type I error or alpha risk α is given, then the upper control limit (UCL) and the lower control limit (LCL) are

UCL = μU + Zα σ/√n and LCL = μL − Zα σ/√n,

which give

UCL = USL − (Zδ − Zα/√n)σ
LCL = LSL + (Zδ − Zα/√n)σ,

where: LCL = lower control limit (modified)
UCL = upper control limit (modified)
LSL = lower specification limit
USL = upper specification limit
σ = process standard deviation
n = sample size
δ = maximum proportion nonconforming allowed
α = probability of a type I error

Example: The product specification is 10.00 ± 4.00, the alpha risk is α = 0.0025, and the maximum proportion nonconforming is δ = 0.00010. Ten subgroups of n = 4 give a grand average of X̄ = 9.97 and an average range of R̄ = 1.95.

Estimate of sigma: σ̂ = R̄/d2 = 1.95/2.059 = 0.95 (values of d2 can be found in Table A.7 in the appendix).

From the standard normal distribution, δ = 0.00010 gives Zδ = 3.71 and α = 0.0025 gives Zα = 2.81:

UCL = USL − (Zδ − Zα/√n)σ = 14.0 − (3.71 − 2.81/2)(0.95) = 11.81
LCL = LSL + (Zδ − Zα/√n)σ = 6.0 + (3.71 − 2.81/2)(0.95) = 8.19

The completed modified control chart is illustrated in Figure 1.

Figure 1 Modified limits control chart (specification: 10 ± 4; upper reject limit = 11.81; lower reject limit = 8.19).
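The reject-limit computation above can be scripted directly. This sketch uses exact normal quantiles instead of the tabled values Zδ = 3.71 and Zα = 2.81, so the last digit can shift by about 0.01 (the function name is mine):

```python
import math
from statistics import NormalDist


def reject_limits(lsl, usl, sigma, n, delta, alpha):
    """Modified control (reject) limits: specs pulled in by (Z_delta - Z_alpha/sqrt(n)) * sigma."""
    z_delta = NormalDist().inv_cdf(1 - delta)  # upper 100(1 - delta) percentage point
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    offset = (z_delta - z_alpha / math.sqrt(n)) * sigma
    return lsl + offset, usl - offset


# Text example: spec 10.00 +/- 4.00, sigma-hat = 0.95, n = 4
lrl, url = reject_limits(lsl=6.0, usl=14.0, sigma=0.95, n=4, delta=0.0001, alpha=0.0025)
print(round(lrl, 2), round(url, 2))
```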

Acceptance Control Charts

An extension of the modified control limits chart is the acceptance control chart (ACC), adding the additional parameters of α, β, p1, and p2. This technique allows the introduction of the four parameters associated with the development of operating characteristic curves for describing acceptance sampling plans for attributes and variables. The four characteristics that completely define any sampling plan are the following:

1. p1 = the proportion nonconforming deemed acceptable (AQL)
2. α = the risk of rejection of an acceptable process, or the risk that a point will fall outside the acceptance control limit (ACL) when the process is truly acceptable
3. p2 = the proportion nonconforming deemed not acceptable (RQL)
4. β = the risk associated with the probability of acceptance of a non-acceptable process, or the probability of a point not falling outside the ACL when the process is not acceptable

The risk factors α and β should be the same for both the lower and upper specifications. In addition, the difference between the upper and lower specifications must exceed 6σ.

Calculations: The sample or subgroup size is found from

n = [(Zα + Zβ)/(Zp1 − Zp2)]².

The value chosen for n should be the next greatest integer. The upper and lower control limits are designated as UACL and LACL, respectively, and are found from

UACL = USL − (Zp1 − Zα/√n)σ or UACL = USL − (Zp2 + Zβ/√n)σ
LACL = LSL + (Zp1 − Zα/√n)σ or LACL = LSL + (Zp2 + Zβ/√n)σ.

Two formulas are required for each UACL and LACL because the solution for n almost never will yield an integer value. The conservative action would be to select the lower value of the UACL and the higher value of the LACL.

As an example of the ACC, we will develop a chart using the data from the previous case (the modified control limits example):

USL = 14.0, LSL = 6.0, σ = 0.95
p1 = 0.0005, Zp1 = 3.29
p2 = 0.0150, Zp2 = 2.17
α = 0.01, Zα = 2.33
β = 0.05, Zβ = 1.65

From these data:

n = [(Zα + Zβ)/(Zp1 − Zp2)]² = [(2.33 + 1.65)/(3.29 − 2.17)]² = 12.6; therefore, n = 13.

UACL = USL − (Zp1 − Zα/√n)σ = 14.0 − (3.29 − 2.33/√13)(0.95) = 11.49
and
UACL = USL − (Zp2 + Zβ/√n)σ = 14.0 − (2.17 + 1.65/√13)(0.95) = 11.50

Use 11.49 for the UACL (the lower of the two UACLs).

LACL = LSL + (Zp1 − Zα/√n)σ = 6.0 + (3.29 − 2.33/√13)(0.95) = 8.51
and
LACL = LSL + (Zp2 + Zβ/√n)σ = 6.0 + (2.17 + 1.65/√13)(0.95) = 8.50

For the LACL, use the greater of the two values: LACL = 8.51.

Bibliography

Besterfield, D. H. 1994. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall.
Freund, R. A. 1957. “Acceptance Control Charts.” Industrial Quality Control 14, no. 4: 13–21.
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th edition. New York: McGraw-Hill.
Montgomery, D. C. 1996. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons.
Schilling, E. G. 1982. Acceptance Sampling in Quality Control. New York: Marcel Dekker.
New York: McGraw-Hill. ⎝ Z p1 − Z p 2 ⎠ Therefore.H1317_CH02. “Acceptance Control Charts.17 + ⎟ 0. Schilling.0 − ⎜ 2.95 UACL = 11.33 ⎞ ⎛ ⎛ UACL = USL − ⎜ Z p − α ⎟ σ UACL = 14. Acceptance Sampling in Quality Control.33 + 1.0 + ⎜ 3. D. 4th edition. Quality Control.51. E.05. NJ: Prentice Hall. 4: 13–21. 1996. 1996.95 LACL = 8. L. Englewood Cliffs. Zβ = 1.17 + ⎟ 0.29 − ⎟ 0. Grant. H.17 α = 0.0 + ⎜ 2. Introduction to Statistical Quality Control. New York: Marcel Dekker.95 UACL = 11.36 for UACL (the lower of the two UACLs). New York: John Wiley & Sons.65 ⎞ n=⎜ ⎟ n = ⎜⎝ 3.qxd 10/15/07 10:57 AM Page 13 Acceptance Control Chart 13 p2 = 0. R. Bibliography Besterfield.33 β = 0. 1994.65 From these data: 2 2 ⎛ Zα + Z β ⎞ ⎛ 2. Statistical Quality Control. 7th edition.6. Zp2 = 2.51 1 ⎝ ⎝ n⎠ 13 ⎠ and Zβ ⎞ ⎛ 1.65 ⎞ ⎛ LACL = LSL + ⎜ Z p + σ LACL = 6. 1982. Leavenworth.49 ⎟ 2 ⎝ ⎝ 13 ⎠ n⎠ For LACL use the greater of the two values. C.95 LACL = 8.” Industrial Quality Control 14.17 ⎟⎠ n = 12.33 ⎞ ⎛ ⎛ LACL = LSL + ⎜ Z p − α ⎟ σ LACL = 6. Zα = 2.65 ⎞ ⎛ UACL = USL − ⎜ Z p + σ UACL = 14. G. .29 − ⎟ 0.36 1 ⎝ ⎝ n⎠ 13 ⎠ and Zβ ⎞ ⎛ 1.50 ⎟ 2 ⎝ ⎝ 13 ⎠ n⎠ Use 11.0 − ⎜ 3.0150. and R. D. A. LACL = 8. 1957. no. Z ⎞ 2. Freund. n = 13.01.. E.29 − 2. S. 3rd edition. Montgomery. Z ⎞ 2.

qxd 10/15/07 10:57 AM Page 14 .H1317_CH02.

the following topics will be discussed: 1. you are responsible for the screening and acceptance of materials in receiving inspection and need to decide about the acceptance of a shipment of parts. “good. Perform no inspection and trust that the supplier has done a good job. for purposes of acceptance sampling. Sample 10 percent of the lot and accept it if no defective units are found. either as an incoming lot or as one being released from manufacturing.” Lots at the AQL percent defective will be accepted by the plan the predominant amount of the time (90 percent to 99 percent of the time). 2. 3. Sample plan parameters and operating characteristic curves Development of attribute sampling plans using the Poisson Unity Value method The use of published sampling plans such as MIL-STD-105E (ANSI/ASQ Z1. The α risks are generally chosen in the range of 1 percent to 10 percent. risk.H1317_CH03. that is. In this module.4) Continuous sampling plans such as CSP-1 Sampling plans for attributes are completely defined by their performance relative to seven characteristics or operational parameters: Acceptable quality limit (AQL): That percent defective that. Assume that there are no variables for the inspection and that the criterion for inspection is simply a functional test: either the parts perform or they do not. 3. 4. 2. An AQL must have a defining alpha risk. Inspect 100 percent of the units. Manufacturer’s risk. will be considered acceptable. The supplier is ISO 9000 certified. 15 . There are several options from which to choose: 1. α: The probability that a lot containing the AQL percent defective or less will be rejected by the plan. For example. The incoming lot has 2500 units. or α. Derive a statistically based sampling plan that defines all the risk factors for a given level of quality and perform a sampling inspection. 
Manufacturer’s risk is also known as the alpha.qxd 10/18/07 11:40 AM Page 15 Acceptance Sampling for Attributes Lot acceptance sampling plans are used when deciding to accept or reject a lot of material. 4.

An RQL must have a defining beta risk. or β. The sample size 2.0% .qxd 16 10/18/07 11:40 AM Page 16 The Desk Reference of Statistical Quality Methods Rejectable quality level (RQL): That percent defective that. Acceptance number. Exceeding this number will lead to rejection of the lot (for single sampling plans) or the selection of additional samples (for multiple sampling plans). Sample size. that is. n: The number of items that are examined for compliance to the specification. N: The population from which the sample for examination is chosen. will be considered not acceptable.0 percent and an RQL of 19.39 percent AQL at a manufacturer’s risk of 5. the sample size is n = 32. Operating Characteristic Curve The operating characteristic (OC) curve is a graph of all possibilities of accepting a lot for each percent defective (usually unknown in practice) that can occur for a given sampling plan.H1317_CH03. The acceptance/rejection criteria Example: The following figure shows the OC curve for a sampling plan where the lot size is N = 200. Consumer’s risk. for purposes of acceptance sampling.7% α = 5.0 percent. AQL = 4. risk. It is the probability of accepting a bad lot. Consumer’s risk is also known as the beta. This plan provides a 4.39% RQL = 19. Ac or C: The maximum number of nonconforming units that are permitted in the sample. “bad.” Lots that are at the RQL percent defective or greater will be rejected by the plan the predominant amount of the time. β: The probability that a lot containing the RQL percent defective or more will be accepted (by the consumer). Lot size.7 percent at a consumer’s risk of 10. and the acceptance number is Ac = 3. Beta risks are generally chosen in the range of 1 percent to 10 percent. Sample plans consist of the following two parts: 1. Note: The first four characteristics define the performance of the plan (the operating characteristic curve).0% β = 10.
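The OC curve for the example n = 32, Ac = 3 plan can be generated from the binomial distribution; a minimal sketch (the function name is mine):

```python
from math import comb


def prob_accept(n, c, p):
    """P(accept) for a single attribute plan: P(X <= c) with X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))


# Plan from the example OC curve: n = 32, Ac = 3
for p in (0.0439, 0.197):  # the AQL and RQL points on the curve
    print(round(prob_accept(32, 3, p), 3))
# The first value comes out near 0.95 (= 1 - alpha), the second near 0.10 (= beta)
```

Sweeping p from 0 to 0.30 with this function reproduces the full curve.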

One useful approach for creating an attribute sampling plan is the Poisson Unity Value method. This method uses an infinite lot size and is based on the four characteristics of AQL, RQL, α, and β.

Example: Develop an attribute sampling plan that will satisfy the conditions of AQL = 4.0%, RQL = 16.0%, α = 5%, and β = 10%.

Step 1. Divide the RQL by the AQL:

RQL/AQL = 16.0/4.0 = 4.000 = discrimination ratio

Step 2. Look up the discrimination ratio by choosing the closest value available in the column provided for α = 0.05 and β = 0.10:

C    Ratio     np1
0    44.890    0.052
1    10.946    0.355
2     6.509    0.818
3     4.890    1.366
4     4.057    1.970
5     3.549    2.613
6     3.206    3.286

The closest value in the table for this case is 4.057. Having located the closest discrimination ratio, go to the extreme left column to find the C = 4 value, and then go to the extreme right to locate the np1 = 1.970 value.

The maximum number of defective units in the sample is C = 4.

Step 3. Calculate the sample size n. The np1 value is the product of the sample size n and the AQL (AQL = p1):

np1 = 1.97, n = 1.97/0.04 = 49.25 ≈ 49

The sample plan is n = 49 and C = 4: accept the lot if the number of defective units found does not exceed the critical number C; otherwise, reject the lot. The four plan criteria of AQL = 4.0%, RQL = 16.0%, α = 5%, and β = 10% will be satisfied by this plan.

Additional points to assist in the construction of the OC curve can be calculated using Table A.3 in the appendix.

Sample calculation: Looking in the table at the column headed Pa = 0.75 and the row C = 4, we find the value np = 3.369. This value represents the product of the lot percent defective and the sample size that will yield a 75 percent probability of acceptance Pa. Dividing this value by the sample size n = 49 gives the percent defective that corresponds to a Pa of 0.75, or 75 percent:

p = 3.369/49 = 0.069

Other values of p at various Pa's can be determined. The associated values for the example sample plan of C = 4 and n = 49 follow:

Pa (%)    Percent defective in lot
95        4.02
90        4.96
75        6.90
50        9.53
25        12.80
10        16.31
5         18.68

In the original calculation for the sample plan, the Poisson Unity Value of 4.000 could not be found exactly, and the default was 4.057. This rounding off results in the Pa table showing 4.02 percent at Pa = 95% rather than the 4.00 percent AQL as specified, and p = 16.31 at Pa = 10% where the original plan was established for an RQL of p = 16.0% at Pa = 10%. Similarly, the α = 0.05 for an AQL of 4.00 percent gave a sample size of 49.25, rounded to the integer sample size n = 49.
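Steps 1 through 3 can be automated with the tabled unity values; a sketch (the table rows are transcribed from above, and the function name is mine):

```python
# Poisson Unity Value table for alpha = 0.05, beta = 0.10:
# rows are (C, ratio = RQL/AQL, np1 at Pa = 0.95)
UNITY = [
    (0, 44.890, 0.052), (1, 10.946, 0.355), (2, 6.509, 0.818),
    (3, 4.890, 1.366), (4, 4.057, 1.970), (5, 3.549, 2.613), (6, 3.206, 3.286),
]


def unity_plan(aql, rql):
    """Pick the C whose tabled ratio is closest to RQL/AQL, then n = np1/AQL."""
    target = rql / aql
    c, ratio, np1 = min(UNITY, key=lambda row: abs(row[1] - target))
    return c, round(np1 / aql)


print(unity_plan(0.04, 0.16))  # -> (4, 49)
```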
np1 = 1.97 In this example.31 18.369 for Pa = 0.0%.25 ≈ 49. np = 3.369 p= = 0.

and C = 7. AQL.5 0.68 Calculation of Poisson Unity Values and np1 for given C.0 0 Percent probability of acceptance Pa 100 95 90 75 50 25 10 5 2 4 6 8 10 12 14 16 18 20 22 Percent lot defective Percent defective for lot 0 4. double. 2 c + 2 X 2 1− α .4) Sampling procedures for preset AQLs and various discrimination ratios for single.31 18.16 23.9 0.qxd 10/18/07 11:40 AM Page 19 Acceptance Sampling for Attributes 19 OC curve for n = 49. calculate the Poisson Unity Value and np1.2 0.98 2 X 0. and multiple samples are available.3 0.0 Probability of acceptance.10. Poisson Unity Value = X 2 β .5 1 = = 2.16 7.05.95 and np1 = X 2 0. Pa 0. These plans are AQL oriented and address the concerns related to RQLs and risk factors indirectly by allowing the user to select various levels of inspection. and RQL values: Given α = 0.90 9.2 c + 2 1 and np1 = X 2 1− α .95 .53 12. β = 0. 2 c + 2 2 Poisson Unity Value = X 2 0.96 2 MIL-STD-105E (ANSI/ASQ Z1.H1317_CH03.80 16.8 0. .1 0.96 6.4 0.6 0. C = 4 sampling plan: 1.7 0.02 4.10 .95 .16 = 3.

The inspection may be switched to reduced or tightened based on performance. In addition to the levels that are reported as a letter.H1317_CH03. 10. a single sample plant will be evaluated using general level II and an AQL = 4%. Format for Use and Discussion In Table 1.4-2003 Sampling Procedures and Tables for Inspection by Attributes (Milwaukee.001 150.qxd 20 10/18/07 11:40 AM Page 20 The Desk Reference of Statistical Quality Methods The levels of inspection range from the nondiscriminating set of special inspection levels of S-1. Used with permission.000 to 35. Special inspection levels Lot or batch size 2 9 16 26 51 91 151 281 501 1201 3201 10. S-3.001 500. S-2.000 to 150.001 35. . there are three sets of conditions that also affect the sampling and acceptance criteria: 1. and S-4 to the general inspection levels that are more discriminating. The default level of inspection is general level II. The letter designates the appropriate sample size for the specified lot size and level of inspection. The directions regarding the choice of these degrees of inspection are outlined in Figure 1. For a lot size of N = 450 and general level II. WI: ASQ Quality Press). normal inspection is carried out. Normal inspection 2. Tightened inspection At the beginning of the inspection scheme.000 to 500.000 and over General inspection levels S-1 S-2 S-3 S-4 A A A A B B B B C C C C D D D A A A B B B C C C D D D E E E A A B B C C D D E E F F G G H A A B C C D E E F G G H J J K I II III A A B C C D E F G H J K L M N A B C D D E G H J K L M N P Q B C D E E F H J K L M N P Q R Adapted from ANSI/ASQ Z1. the appropriate sample letter is H.001 to 8 to 15 to 25 to 50 to 90 to 150 to 280 to 500 to 1200 to 3200 to 10. Reduced inspection 3. The lot size is 450. sample size code letters are arranged according to the lot size group and the level of inspection. Table 1 Sample size code letters. For an illustration case.

The following example illustrates the use of the MIL-STD-105E sampling plan. A shipment of 1000 bolts has been received. The AQL = 4.0%, and the level of inspection is general level II.

Step 1. Look up the appropriate sampling letter in Table 2. The correct sample letter is J.

Step 2. In Table 3, look up the sample size for letter J, then go across the top heading to AQL = 4.0 to locate the Ac and Re criteria. The sample size for letter J is n = 80, and the acceptance criterion is that the lot is accepted if seven or fewer defective units are found and is rejected if more than seven defective units are found.

Determine the sample size and acceptance criteria for the following:

A. Lot size N = 1800, level = S-4, AQL = 0.5%
B. Lot size N = 1100, level I, AQL = 6.5%
C. Lot size N = 250, level II, AQL = 0.65%

Figure 1 Switching rules for the ANSI Z1.4 system. The figure summarizes the following rules:

Normal to tightened: 2 of 5 or fewer consecutive lots are not accepted.
Tightened to normal: 5 consecutive lots accepted.
Discontinue inspection under Z1.4: 5 lots not accepted while on tightened inspection.
Normal to reduced: preceding 10 lots accepted, with total nonconforming less than the limit number (optional), production steady, and approved by the responsible authority.
Reduced to normal: a lot is not accepted; or a lot is accepted but the nonconformities found lie between the Ac and Re of the plan; or production is irregular; or other conditions warrant.

Source: ANSI/ASQ Z1.4-2003, Sampling Procedures and Tables for Inspection by Attributes (Milwaukee, WI: ASQ Quality Press). Used with permission.
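The protection offered by the n = 80, Ac = 7 plan can be sketched with the binomial distribution. This is an illustrative check, not the standard's own tabulation (Z1.4 table values differ slightly in rounding).

```python
from math import comb

def prob_accept(n, c, p):
    """P(accept) = P(number of defectives in the sample <= c), binomial model."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# At the 4 percent AQL the plan accepts the large majority of lots,
# while clearly worse quality (say 15 percent defective) is usually rejected.
pa_at_aql = prob_accept(80, 7, 0.04)
pa_at_poor = prob_accept(80, 7, 0.15)
```

Evaluating `prob_accept` over a grid of p values traces out the plan's OC curve.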

Table 2 Sample size code letters. Adapted from ANSI/ASQ Z1.4-2003 Sampling Procedures and Tables for Inspection by Attributes (Milwaukee, WI: ASQ Quality Press). Used with permission.

Table 3 Single sampling plans for normal inspection. Acceptance quality limits (AQLs), in percent nonconforming items and nonconformities per 100 items (normal inspection). Adapted from ANSI/ASQ Z1.4-2003 Sampling Procedures and Tables for Inspection by Attributes (Milwaukee, WI: ASQ Quality Press). Used with permission.

Notes to Table 3: A down arrow means use the first sampling plan below the arrow; if the sample size equals or exceeds the lot size, carry out 100 percent inspection. An up arrow means use the first sampling plan above the arrow. Ac = acceptance number; Re = rejection number.

The subject standards have OC curves for each code letter. The operating characteristic data are presented both graphically and tabularly. The actual operating characteristic can be found in Table 4.

Table 4 Tables for sample size code letter J. [Chart J: Operating characteristic curves for single sampling plans (curves for double and multiple sampling are matched as closely as practicable), plotting the percent of lots expected to be accepted (Pa) against the quality of submitted product (p, in percent nonconforming for AQLs ≤ 10; in nonconformities per hundred units for AQLs > 10). Figures on the curves are AQLs for normal inspection. Tabulated values are given for the operating characteristic curves at the AQLs for normal and tightened inspection. Note: The binomial distribution is used for percent nonconforming computations; the Poisson is used for nonconformities per hundred units.] Adapted from ANSI/ASQ Z1.4-2003 Sampling Procedures and Tables for Inspection by Attributes (Milwaukee, WI: ASQ Quality Press). Used with permission.

Continuous Sampling Plans

Sampling plans defined by MIL-STD-105 and MIL-STD-414 are applicable when the lot of units to be examined is in one large collection, or lot, of material. Many manufacturing processes do not accumulate the items to be inspected in a lot format but rather are continuous. When production is continuous and discrete lots are formed, there are a couple of disadvantages:

1. The rejection of the lot can lead to untimely reinspection of the entire lot
2. Additional space is required for the accumulated lot

Continuous sampling plans (CSPs) allow the lot to be judged on a continuous basis with no surprises upon conclusion of the lot.

CSP-1 Plan

The CSP-1 plan was first introduced by Harold F. Dodge in 1943 (Figure 2). Initially, 100 percent inspection is required until i consecutive units have been found to be satisfactory. If any defective units are found during the initial 100 percent inspection, the defective unit is replaced with a good unit, and the accumulation count is restarted. Upon reaching the clearing number i, the 100 percent inspection decreases to a fraction f of the units. Discovery of a defective unit during the frequency inspection results in a return to 100 percent inspection until i consecutive units are again found satisfactory, in which case interval inspection of every fth unit resumes. Table values for various combinations of i and f for designated average outgoing quality limits (AOQLs) can be found in Table 5.

Figure 2 Procedure for CSP-1 plan. [Start: inspect i successive units, replacing any defective with a nondefective. If a defective is found, continue 100 percent inspection. If no defective is found in i units, randomly sample a fraction f of the units; if a defective is found during sampling, return to 100 percent inspection, otherwise continue sampling.]

Application example:

1. An electronic assembly line produces 6500 units per day. What CSP-1 plan will yield a 0.33 percent AOQL? Referring to the CSP-1 table, there are several choices. One plan would be to inspect 100 percent until 335 consecutive nondefective units are found, and then inspect every tenth unit. After that, any sampled unit found defective results in the reinstatement of the 100 percent inspection, and the accumulation count is restarted until 335 consecutive good units are found. The AOQL for this plan is 0.33 percent nonconforming.
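The CSP-1 procedure just described can be sketched as a Monte Carlo simulation. This is not from the text; the parameter names are illustrative, and the long-run fraction inspected it estimates can be compared against the AFI formula given in the next section.

```python
import random

def csp1_fraction_inspected(i, f, p, n_units=200_000, seed=1):
    """Simulate CSP-1 and return the fraction of units inspected."""
    rng = random.Random(seed)
    screening, run, inspected = True, 0, 0
    for _ in range(n_units):
        if screening:                       # 100 percent inspection phase
            inspected += 1
            if rng.random() < p:            # defective found: restart the count
                run = 0
            else:
                run += 1
                if run >= i:                # clearing number reached
                    screening, run = False, 0
        else:                               # sampling phase
            if rng.random() < f:            # inspect roughly every 1/f-th unit
                inspected += 1
                if rng.random() < p:        # defect found: back to 100 percent
                    screening, run = True, 0
    return inspected / n_units

afi = csp1_fraction_inspected(335, 0.10, 0.008)
```

For the i = 335, f = 1/10 plan at 0.8 percent defective, the simulated fraction lands near the 0.62 computed analytically below.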

Table 5 Values of i and f for CSP-1 plans. [The table lists, for each sampling frequency f (1/2, 1/3, 1/4, 1/5, 1/7, 1/10, 1/15, 1/25, 1/50, 1/100, and 1/200), the clearing number i required to achieve each AOQL, for AOQLs ranging from 0.018 percent to 11.46 percent. For example, the AOQL = 0.33 percent row reads i = 84, 140, 182, 217, 270, 335, 410, 500, 640, 790, and 950 for f = 1/2 through 1/200, so an AOQL of 0.33 percent with f = 1/10 requires i = 335.]

The average number of units inspected in a 100 percent screening sequence following the occurrence of a defect is equal to

    u = (1 − Q^i)/(PQ^i),

where:

    P = proportion defective for process or lot
    Q = 1 − P
    i = initial clearing sample.

If the process proportion defective for the example case had been 0.80 percent, then the average number of units inspected would be

    u = (1 − Q^i)/(PQ^i) = (1 − 0.992^335)/((0.008)(0.992^335)) = 1718.

The average number of units passed under CSP-1 plans before a defective unit is found, v, is given by

    v = 1/(fp) = 1/((0.10)(0.008)) = 1250.

The average fraction of the total manufactured units inspected (AFI) in the long run is given by

    AFI = (u + fv)/(u + v) = (1718 + (0.1)(1250))/(1718 + 1250) = 0.621.

The average fraction of manufactured units passed under the CSP-1 plan is given by

    Pa = v/(u + v) = 1250/(1718 + 1250) = 0.42, or 42%.

In a traditional lot acceptance sampling plan, Pa equals the probability of accepting lots that are p proportion defective. The OC curve for CSP-1 plans gives the probability, or proportion, of units passed under the subject plan as a function of the process proportion defective. When Pa is plotted as a function of p, the OC curve for the subject plan is derived.
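The four quantities above reduce to a few lines of arithmetic. The sketch below simply codes the formulas as given, with illustrative names, and reproduces the worked numbers for i = 335, f = 1/10, p = 0.008.

```python
def csp1_metrics(i, f, p):
    """Return (u, v, AFI, Pa) for a CSP-1 plan, per the formulas above."""
    q = 1.0 - p
    u = (1 - q**i) / (p * q**i)    # average units screened after a defect
    v = 1 / (f * p)                # average units passed before a defect is found
    afi = (u + f * v) / (u + v)    # average fraction of units inspected
    pa = v / (u + v)               # average fraction of units passed
    return u, v, afi, pa

u, v, afi, pa = csp1_metrics(335, 0.10, 0.008)
# u ≈ 1718, v = 1250, AFI ≈ 0.621, Pa ≈ 0.42, matching the text
```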

Example of OC curve development:

Draw the OC curve for a CSP-1 plan where i = 38, f = 1/10, and the AOQL = 2.90%. The parameters u and v are calculated for several values of p, the process proportion defective, increasing p in increments of 0.01. The proportion of units passing the sampling plan, Pa, is determined using the u and v values for each proportion defective p. A plot of Pa as a function of p defines the OC curve for the plan.

Calculation for p = 0.01:

    Q = 1 − p = 0.99

    u = (1 − Q^i)/(PQ^i) = (1 − 0.99^38)/((0.01)(0.99^38)) = 47

    v = 1/(fp) = 1/((0.10)(0.01)) = 1000

    Pa = v/(u + v) = 1000/(47 + 1000) = 0.9551

Continue the calculations of u, v, and Pa. The values for the required factors are listed in tabular form.

      P      Q     u = (1 − Q^i)/(PQ^i)   v = 1/(fp)   Pa = v/(u + v)
    0.01   0.99            47               1000           0.9551
    0.02   0.98            58                500           0.8961
    0.03   0.97            73                333           0.8202
    0.04   0.96            93                250           0.7289
    0.05   0.95           120                200           0.6250
    0.06   0.94           158                167           0.5138
    0.07   0.93           211                143           0.4040
    0.08   0.92           285                125           0.3049
    0.09   0.91           389                111           0.2220
    0.10   0.90           538                100           0.1567
    0.11   0.89           753                 91           0.1078
    0.12   0.88          1064                 83           0.0724
    0.13   0.87          1521                 77           0.0481
    0.14   0.86          2195                 71           0.0313
    0.15   0.85          3200                 67           0.0205
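The table above can be regenerated directly from the formulas. The sketch below agrees with the tabulated Pa values to within rounding (the text rounds u and v to whole units before dividing).

```python
def pa_csp1(i, f, p):
    """Proportion of units passed under a CSP-1 plan at process fraction p."""
    q = 1.0 - p
    u = (1 - q**i) / (p * q**i)
    v = 1 / (f * p)
    return v / (u + v)

# OC curve points for i = 38, f = 1/10, p = 0.01 through 0.15
oc_points = [(p / 100, pa_csp1(38, 0.10, p / 100)) for p in range(1, 16)]
```

Plotting `oc_points` reproduces the OC curve shown on the next page.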

Plot Pa as a function of p:

[Figure: OC curve for the CSP-1 plan with i = 38 and f = 1/10, plotting the proportion accepted Pa (100 to 0) against the process percent defective (0 to 15).]

Calculation of the Percent AOQL

The percent AOQL is the percent defective that the sampling plan overall will allow to pass through the system. It is determined by plotting the average outgoing quality (AOQ) as a function of the proportion defective of the process. The maximum AOQ for the subject plan defines the percent AOQL. The AOQ is determined by

    AOQ = P(1 − f)Q^i / (f + (1 − f)Q^i).

For the example given, where i = 38, f = 1/10, and P = 0.03 (the process is 3 percent defective), the AOQ is

    AOQ = (0.03)(1 − 0.10)(0.97)^38 / (0.10 + (1 − 0.10)(0.97)^38) = 0.0222,

    %AOQ = 2.22%.
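The AOQ formula, and the search for its maximum, can be sketched as below. The peak found numerically (about 2.8 percent, near p = 5 percent) is close to the 2.90 percent AOQL quoted in the text, which works from rounded table values.

```python
def aoq(p, i=38, f=0.10):
    """Average outgoing quality for a CSP-1 plan at process fraction p."""
    qi = (1.0 - p) ** i
    return p * (1 - f) * qi / (f + (1 - f) * qi)

aoq_at_3pct = 100 * aoq(0.03)                         # the worked example
aoql = 100 * max(aoq(p / 1000) for p in range(1, 151))  # crude grid search
```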

Calculating the AOQs for all values of P from 1.0 percent to 15 percent and plotting %AOQ as a function of the process percent defective locates the maximum, which is the AOQL:

[Figure: %AOQ plotted against the process percent defective (1 to 15) for the i = 38, f = 1/10 plan, showing a maximum AOQL = 2.90%.]

CSP-2 Plan

As a modification to the original CSP-1 plan, the CSP-2 was developed by Dodge and Torrey (1951). Under the CSP-2 mode, 100 percent inspection is performed until i units are found to be defect free, after which the inspection switches to a fraction f of the units. If a defect is found during the sampling inspection, sampling continues; however, if a second defective is found during the sampling inspection of the next i units, then 100 percent inspection is resumed until i consecutive units are found to be defect free. If no second defective is found during the sampling inspection of the i units, then the frequency inspection of a fraction f of the units is resumed. The following table gives approximate values for i as a function of the AOQL and f.

    AOQL, %    f:  1/2   1/3   1/4   1/5   1/7  1/10  1/15  1/25  1/50
      0.5          86   140   175   200   250   290   350   450   540
      1.0          43    70    85   100   125   148   175   215   270
      2.0          21    35    43    48    62    73    88   105   135
      3.0          14    23    28    33    40    47    57    70    86
      4.0          10    16    21    24    30    36    44    53    64
      5.0           9    13    17    19    24    28    34    42    52

Example: A 1.0 percent AOQL plan is wanted with a clearing cycle of n = 100. From the table, an AOQL of 1.0 percent with i = 100 gives a sampling frequency of f = 1/5. One hundred percent inspection is carried out until 100 consecutive units are found to be defect free. Because 100 units are inspected and found to be defect free, every fifth unit is now inspected. During this frequency inspection, if a defective is found, sampling continues; but if during the next 100 consecutive units a second defective is found, 100 percent inspection is required until 100 consecutive units are found defect free, in which case the frequency inspection of every fifth unit is reinstated.

The values for the initial clearing sample i and the fractional sample frequency f are given for several AOQLs in the following table.

    AOQL, %    f:  1/2   1/3   1/4   1/5   1/7  1/10  1/15  1/25  1/50
      0.53         80   128   162   190   230   275   330   395   490
      0.79         54    86   109   127   155   185   220   265   330
      1.22         35    55    70    81    99   118   140   170   210
      1.90         23    36    45    52    64    76    90   109   134
      2.90         15    24    30    35    42    50    59    71    88
      4.94          9    14    18    20    25    29    35    42    52
      7.12          7    10    12    14    17    20    24    29    36
     11.46          4     7     8     9    11    13    15    18    22

Effects of Lot Size N, Sample Size n, and Acceptance Criteria C on the OC Curve

Traditional acceptance sampling plans such as ANSI Z1.4 and Z1.9 specify sampling plans as a function of the lot size, where the lot size, sample size, and acceptance criteria are varied. While lot size influences the overall OC curve, it is not the most contributing factor with respect to the shape of the OC curve. Other factors, such as the sample size and the acceptance criteria C (the number of defective units at which the lot will be accepted), are equally or more contributory. Consider the following OC curves.

Case I: Constant C = 1 and constant n = 100 with variable lot size N = 200 to 1000

[Figure: OC curves, Pa (0 to 1.0) versus percent defective D (0 to 7), for lot sizes N = 200 to N = 1000 with n = 100 and C = 1; the curves nearly coincide.]

Case II: Constant N = 1000 and constant n = 100 with variable C = 0 to 4

[Figure: OC curves, Pa (0 to 1.0) versus percent defective D (0 to 7), for acceptance numbers C = 0 through C = 4 with N = 1000 and n = 100; increasing C shifts the curve to the right.]

Case III: Constant N = 1000 and constant C = 1 with variable n = 50 to 200

[Figure: OC curves, Pa (0 to 1.0) versus percent defective D (0 to 7), for sample sizes n = 50, 100, and 200 with N = 1000 and C = 1; increasing n steepens the curve.]

Comparatively, we can see that changes in the lot size have a relatively small effect on the overall shape of the OC curve. All calculations are based on the hypergeometric distribution, incorporating both the lot size N and the sample size n.

A Method for Selecting the AQL

If we consider the AQL as a break-even point associated with two intersecting relationships, the first relationship is the total cost, per lot, of 100 percent inspection as a function of the percent defective, and the second relationship is the total cost, per lot, associated with the release of a lot that contains defective units. The cost of inspection will be set at k1, and the cost of a defective unit received by the consumer will be set at k2. This break-even point can be suggested as an initial AQL as follows:

    AQL = (k1/k2) × 100.

For example, if the cost of inspection is $0.50 and the cost of a defective unit received by the consumer is $60.00, the suggested AQL is

    AQL = (0.50/60.00) × 100 = 0.83%.

[Figure: Lot value (+$6000 to −$8000) plotted against lot percent defective (0 to 20) for a lot size of 1000, showing the value of the lot if 100 percent inspected and the value of the lot if released; the two curves cross at the break-even point, AQL = 0.83%. Cost of inspection = $0.50; cost of a defective unit received by the consumer = $60.00.]
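The break-even calculation is a one-line ratio; the sketch below simply packages it with illustrative names.

```python
def break_even_aql(inspection_cost, defective_cost):
    """Suggested initial AQL, in percent, from the two unit costs."""
    return inspection_cost / defective_cost * 100

suggested = break_even_aql(0.50, 60.00)   # 0.83 percent, as in the example
```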

Bibliography

ANSI/ASQ Z1.4-2003. Sampling Procedures and Tables for Inspection by Attributes. Milwaukee, WI: ASQ Quality Press, 2003.
Besterfield, D. H. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall, 1994.
Dodge, H. F., and M. N. Torrey. "Additional Continuous Sampling Inspection Plans." Industrial Quality Control 7, no. 5 (March 1951): 7–12.
Grant, E. L., and R. S. Leavenworth. Statistical Quality Control. 7th edition. New York: McGraw-Hill, 1996.
Schilling, E. G. Acceptance Sampling in Quality Control. New York: Marcel Dekker, 1982.


Acceptance Sampling for Variables

Lot acceptance plans for attribute data are based on acceptance criteria where a specified number of nonconforming units and a specified sample size are given. If this number of nonconforming units is exceeded in a sample, a decision is made to reject the lot from which the samples were taken. In acceptance sampling for attributes, the unit of inspection either conforms to a requirement or it does not. Such inspection plans are predefined under MIL-STD-105E (ANSI/ASQ Z1.4-2003).

In many cases, the unit of inspection is measured as a direct variable, such as weight, diameter, angle, or viscosity. If this measurement is compared to a specification requirement, then the observation is converted to an attribute. The decision criteria for each individual inspection are based on attributes. If the variable observation is not treated as an attribute, then the acceptance criteria can be based on the average of the observations and a variation statistic, such as the standard deviation, as related to the specification requirement.

The following inspection plans are those covered by MIL-STD-414, sampling plans for variable data.

Variability Unknown, Single Specification Limit, Form I, Standard Deviation Method

Levels of inspection determine the sample size as a function of the submitted lot. Levels of inspection are directly related to the ratio of the acceptable quality limit (AQL) and the rejectable quality level (RQL). The closer the RQL is to the AQL, the more discriminating the sampling plan. Levels available are

    S3 → S4 → I → II → III   (increasing discrimination →).

For a given lot size and level of inspection, the sample size is determined and designated as a letter B through P.

0. what will the sample size be? Lot size 2 to 9 to 16 to 26 to 51 to 91 to 151 to 281 to 401 to 501 to 1201 to 3201 to 10.000 10.001 to 500. Seven samples are chosen and measured.001 to ≥500.504 0. The characteristic for this example is the diameter of the rods.000 ≥500.000 B B B B B B B C C D E F G H H H Sample S4 B B B B B C D E E F G H I J K K size E = I II III Letter B B B C D E F G G H I J K L M N B B C D E F G H I J K L M N P P C D E F G H I J J K L M N P P P B C D E F G H I J K L M N P Sample size 3 4 5 7 10 15 20 25 35 50 75 100 150 200 Note: There is no letter “O.001 to 35.000 500.503 — X = 0.505 0. Select a level of inspection and sample size. Select the sample and calculate the average and sample standard deviation.qxd 36 10/18/07 11:42 AM Page 36 The Desk Reference of Statistical Quality Methods Lot size S3 S4 I II III Letter 2 to 8 9 to 15 16 to 25 26 to 50 51 to 90 91 to 150 151 to 280 281 to 400 401 to 500 501 to 1200 1201 to 3200 3201 to 10.501 0.001 B B B B B B B C C D E F G H H H B B B B B C D E E F G H I J K K B B B C D E F G G H I J K L M N B B C D E F G H I J K L M N P P C D E F G H I J J K L M N P P P B C D E F G H I J K L M N P Sample size 3 4 5 7 10 15 20 25 35 50 75 100 150 200 Note: There is no letter “O.” 7 Step 2.000 150.” Step 1.502 0.000 35.503 0.001 to 150.001 S3 8 15 25 50 90 150 280 400 500 1200 3200 10.001 to 35.000 35.503 S = 0. For a general level II.001 .000 150. A lot size of 80 rods has been received.503 0.H1317_CH04.001 to 150.

Step 3. Select an AQL and look up the appropriate k value from Table 1. For this example, the selected AQL is 2.5 percent. The k value for sample size letter E (sample size n = 7) and a 2.5 percent AQL is 1.33.

Step 4. Calculate the critical decision factor (CDF).

    For lower specifications:  CDFL = X̄ − kS
    For upper specifications:  CDFU = X̄ + kS

For this example, where there is a lower specification of 0.500:

    CDFL = X̄ − kS = 0.503 − (1.33)(0.001) = 0.502

Step 5. Confront the CDF with the acceptance criteria.

For lower specifications: If CDFL is greater than the lower specification, accept the lot. If CDFL is less than the lower specification, reject the lot.

For upper specifications: If CDFU is less than the upper specification, accept the lot. If CDFU is greater than the upper specification, reject the lot.

For this example, CDFL (0.502) is greater than the lower specification (0.500); therefore, the lot is accepted.

Problem: A lot of 200 units has been received. The specification is 25.0 maximum, and the AQL = 1 percent. Perform a level II inspection using the single specification, form 1 (k method). Randomly select the sample from the 200 values listed below.

    21 19 16 18 20 16 20 18 21 20 20 23 22 19 22 18 24 20 15 23
    22 12 20 23 21 20 16 20 17 20 23 23 22 22 20 20 21 22 22 22
    17 20 23 21 17 23 22 19 22 20 20 25 21 21 18 21 23 18 19 24
    22 21 17 20 21 27 19 19 21 14 21 17 14 16 15 15 22 21 21 17
    21 22 18 24 19 19 15 20 16 18 14 18 20 20 19 20 14 17 23 19
    19 23 24 18 17 22 21 24 24 21 22 19 21 15 22 23 19 17 20 23
    19 21 18 20 19 22 18 19 22 19 19 22 20 21 19 21 20 23 14 22
    20 20 24 22 22 26 20 20 18 19 25 18 14 18 21 23 23 20 20 20
    18 23 17 18 16 19 23 17 19 20 19 17 19 19 18 21 17 15 21 20
    25 19 20 21 18 19 17 19 20 22 19 20 22 18 24 23 20 16 21 19

Sampling Plans Based on a Specified AQL, RQL, and Risk

The sample size n and the k factor can be estimated using the following relationships:

    k = (Z_P2·Zα + Z_P1·Zβ)/(Zα + Zβ)

and

    n = [(Zα + Zβ)/(Z_P1 − Z_P2)]² (1 + k²/2),
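The Form I (k method) decision for a single lower specification reduces to one comparison. The sketch below, with illustrative names, reproduces the rod example (X-bar = 0.503, S = 0.001, k = 1.33, lower specification 0.500).

```python
def accept_lower(xbar, s, k, lsl):
    """Form I (k method) decision for a single lower specification limit."""
    cdfl = xbar - k * s          # critical decision factor, lower side
    return cdfl > lsl, cdfl      # accept when CDFL exceeds the LSL

ok, cdfl = accept_lower(0.503, 0.001, 1.33, 0.500)   # accept: CDFL ≈ 0.502
```

The upper-specification case is symmetric: compute CDFU = X-bar + kS and accept when it falls below the USL.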

Table 1 Single specification, variability unknown, standard deviation method, form I: values of k. [The table lists, for each sample letter B through P (sample sizes 3, 4, 5, 7, 10, 15, 20, 25, 35, 50, 75, 100, 150, and 200), the acceptance constant k for AQLs from 0.10 to 10.0 percent under normal and tightened inspection. T denotes a plan used on tightened inspection. All AQL values are in percent nonconforming. For example, for sample letter E (n = 7) and a 2.5 percent AQL under normal inspection, k = 1.33.]

Application example: A lot of small electric motors is being tested for acceptance.5% α = manufacturer’s risk = 5% P2 = RQL = 15. then the d2 value should be adjusted.708.44 (1. A lot of 150 items is submitted for inspection.2778 ⎞ d2* = d2 ⎜ 1 + m(nr − 1) ⎟⎠ ⎝ where m = number of subgroups. If the number of subgroups is small.693 ⎜ 1 + d * = 1.65 ) + (1. Assume that the standard deviation is unknown and that the following characteristics for the plan are required: P1 = AQL = 2. d2* d2* otherwise. accept the lot.2778 ⎞ d2* = d2 ⎜ 1 + m(nr − 1) ⎟⎠ ⎝ ⎛ 0.96 )(1. 0. This adjustment results in a d 2* value. ⎛ 0. 15 subgroups of 3 each would have been chosen.9(nr − 1) where n = 28 and nr = 3 (arbitrarily chosen). reject the lot.28 ) .H1317_CH04. and nr = subgroup size.04 )(1. then the number of subgroups would be m= n −1 = 15.2778 ⎞ d2* = 1.28 ) k = 1.0% β = consumer’s risk = 10% k= Z P2 Z α + Z P1 Z β Zα + Zβ k= (1.qxd 40 10/18/07 11:42 AM Page 40 The Desk Reference of Statistical Quality Methods — The traditional d2 assumes that the number of ranges used in the determination of R is large. 0. and the d 2* value is given by ⎛ 0.9(nr − 1) For this example.65 ) + (1. ⎝ 15( 3 − 1) ⎟⎠ 2 — Substitute R and d 2* into the acceptance criteria: X+R X−R If < lower specification or if > upper specification. The number of subgroups m: m= n −1 . The specification for the resistance is 1245 maximum.

Quality Control. 1994. and the average range and grand average are calculated. New York: McGraw-Hill. New York: Marcel Dekker.4-2003. 7th edition.44)2 ⎞ n=⎛ 1+ ⎟ ⎝ 1. reject the lot. The appropriate value for d 2* is given by ⎛ 0.66 = 21 The select sample is a subgroup of nr = 5 (arbitrarily chosen) number of subgroups. Milwaukee. If X + R > 1245. and R.2778 ⎞ . 2003. d 2* = 2. Englewood Cliffs. Leavenworth. E.04 ⎠ ⎜⎝ 2 ⎠ n = 20. Three subgroups of subgroup sample size 5 are chosen from the lot.65 + 1. L. Schilling. 2. 1996.. accept the lot. 4th edition.9(5 − 1) m = 3.96 − 1. Sampling Procedures and Tables for Inspection by Attributes. m= n −1 0.qxd 10/18/07 11:42 AM Page 41 Acceptance Sampling for Variables 41 Sample size n: 2 ⎛ Zα + Zβ ⎞ ⎛ k 2 ⎞ n=⎜ ⎟ ⎜1 + 2 ⎟⎠ ⎝ Z P1 − Z P2 ⎠ ⎝ 2 1. 1982. Grant. Acceptance Sampling in Quality Control. Besterfeld.326 ⎜ 1 + ⎝ 6(5 − 1) ⎟⎠ d 2* = 2. d 2* = d 2 ⎜ 1 + ⎝ m(nr − 1) ⎟⎠ ⎛ 0. Acceptance criteria: Three samples of five each are chosen.9(nr − 1) m= 11 0. . G.380 Bibliography ASQ Quality Press.353. S. D. Statistical Quality Control. E.2778 ⎞ . ANSI/ASQ Z1. WI: ASQ Quality Press. NJ: Prentice Hall. H.H1317_CH04. otherwise.28 ⎞ ⎛ (1.

qxd 10/18/07 11:42 AM Page 42 .H1317_CH04.

reflect the 99. not for the variation of individuals). or 80. The steps for the construction and interpretation of the average/range control chart are the same as for all variables control charts. While all control charts are robust with respect to nonnormal distributions. the choice for control charts is the average/standard deviation control chart. If we elect to have even greater sensitivity for detecting process changes. the individual/moving range chart is the one that is most affected by nonnormality of the data. See the module entitled Average/Standard Deviation Chart for a more detailed discussion of these control charts.7 percent probability limits. This chart is relatively insensitive to small changes and subject to distorted results when the process is significantly nonnormal with respect to the distribution of the data. The total number of actual individual observations will be 5 × 16 (n × k). the subgroup sample size may be from 2 to 25. For this chart. we may use the average/range control chart. A minimum of 25 subgroup samples of n = 4 to n = 5 are taken over a period of time that will serve as a baseline period. Changes can occur in both the average of a process and the variation of the response variable of a process. Step 1. which states that the distribution of averages will be more normally distributed than the individuals from which they came.qxd 10/17/07 2:03 PM Page 43 Average/Range Control Chart Shewhart control charts are used to detect changes in processes. therefore. For our example. This is due to the lack of the effect of the central limit theorem. Control charts can be made more sensitive to small process changes by increasing the sample size (sometimes referred to as subgroup sample size). Changes in these parameters will be noted when certain rules of statistical conduct are violated. The control limits for both the averages and the ranges are based on three standard deviations and will. 43 .H1317_CH05. 
measurements will be made approximately every three hours using a subgroup sample size of n = 5 with a total of 16 samples (k = 16). The central tendency or location of the process is monitored by tracking the average of individual subgroup averages and their behavior relative to a set of control limits based on the overall process average plus or minus three standard deviations (for the variation of averages. The simplest control chart for variables data is the individual/moving range control chart. For sample sizes of 2 to 10. The variation of the process is monitored by tracking the range of individual subgroups and their behavior relative to a set of control limits. The following example illustrates the steps for establishing an average/range control chart. Collect historical data.

0 23.8 23.8 25.2 22.5 6.6 6.6 25.7 23.3 26.7 28.5 25.0 26.5 21.9 26.1 27.6 8.2 25.2 31.H1317_CH05.0 Sample #13 Date: 10/5/94 Time: 7:10 AM Average: Range: Sample #4 Date: 10/2/94 Time: 5:00 PM 19.5 27.4 25.4 24.5 27. The range of a subgroup is calculated by taking the difference between the smallest and largest values within a subgroup.3 27.4 Sample #5 Date: 10/3/94 Time: 7:15 AM Average: Range: Sample #2 Date: 10/2/94 Time: 11:00 AM Sample #14 Date: 10/5/94 Time: 10:00 AM Sample #15 Date: 10/5/94 Time: 1:00 PM Sample #16 Date: 10/5/94 Time: 4:25 PM 26.6 3.5 23.7 18.5 6.1 Sample #10 Date: 10/4/94 Time: 11:20 AM Sample #11 Date: 10/4/94 Time: 2:09 PM Sample #12 Date: 10/4/94 Time: 5:15 PM 27.7 23.8 26. time of sample.6 18.3 25.7 19.6 27.3 27.6 20.9 25.5 26.0 13.5 19.1 26.0 23.3 6.7 24.6 24.6 31.5 24.2 9.7 26. the individual observations.6 Sample #6 Date: 10/3/94 Time: 10:30 AM Sample #7 Date: 10/3/94 Time: 1:10 PM Sample #8 Date: 10/3/94 Time: 4:20 PM 25.2 22.9 23.8 25.2 28.1 27.2 22.6 27.4 24. the average of the subgroup sample.9 26.4 31.6 31.8 25.3 21.9 23.4 23.qxd 44 10/17/07 2:03 PM Page 44 The Desk Reference of Statistical Quality Methods For each sample of data collected.1 24.6 23.2 25.1 31.5 28.0 12.5 23.9 28.9 23.7 27.1 2.4 19.5 21.1 8. Ranges are always positive.1 24.4 25.5 23.7 2.5 24.0 25.9 27.6 .8 20.6 8.1 25.4 20.9 27.7 25. Sample #1 Date: 10/2/94 Time: 7:00 AM Average: Range: 26.6 32.4 25.7 12.3 17. record the date.3 22.1 27.2 26.1 22.9 20.8 Sample #9 Date: 10/4/94 Time: 8:10 AM Average: Range: Sample #3 Date: 10/2/94 Time: 2:00 PM 22.7 6.4 33.4 19.9 23.6 19.6 21.0 10. and the range of the subgroup sample.

.3 + 6.5 398. we will calculate the three standard deviation control limits for the distribution of the ranges R.9 = = 7.H1317_CH05. k where k = total number of subgroups. Calculate a location and variation statistic.9 + . 16 16 Step 3. The location statistic is the average of the 16 subgroup averages: X= X1 + X 2 + X 3 + . and the probability of finding individual averages of subgroups inside these limits is 99. it is assumed that the process average has changed. This statistic is monitored by charting the individual averages X and their relationship to the grand average X.2 = = 24. Sometimes referred to as the grand average.89. Determine the control limits for the location and variation statistics. . . + X k . + 23.1 + 27.7 percent.89. Any individual range falling outside the upper control limits (UCL) or lower control limits (LCL) will serve as evidence that the variation of the process has changed. Variation – Control limits for the location statistic X: UCL X = X + 3 S X . . For our example: X = 26. Control limits are defined as X ± 3Sx–. . . . .3 percent probability of getting an average outside is so small that when averages outside these limits are obtained. 16 16 The variation statistic is the average range: R= R1 + R2 + R3 + . Location 2. Changes in the process will be monitored in the following two areas: 1. + 5. + Rk . For our example: R= 10. the average of the averages X represents the best overall location statistic for the process.qxd 10/17/07 2:03 PM Page 45 Average/Range Control Chart 45 Step 2. k where k = total number of subgroups.7 + . The 0.7 125.9 + 8.0 + 23. In a similar manner.

Step 3. Determine the control limits for the location and variation statistics.

Control limits for the location statistic X̄:

UCL = X̄ + 3Sx̄ and LCL = X̄ − 3Sx̄

Since we are using the average of the subgroup ranges R̄ to estimate the three standard deviations, 3Sx̄ = A2R̄, where A2 is a factor whose value depends on the subgroup sample size. Control chart factors can be found in Table A.7 of the appendix. With our example, the subgroup sample size is 5; therefore, A2 = 0.577.

3Sx̄ = A2R̄ = 0.577 × 7.81 = 4.51

UCL = 24.89 + 4.51 = 29.40
LCL = 24.89 − 4.51 = 20.38

The LCL is calculated in the same manner as the UCL, except that rather than add the three standard deviations (4.51) to the grand average (24.89), we subtract. There is a 99.7 percent probability that a sample of five observations chosen from this process will yield an average value between 20.38 and 29.40, provided that the process average remains 24.89 and the standard deviation of averages for subgroup size n = 5 remains 1.50 (one-third of the three standard deviations).

Control limits for the variation statistic R̄: We need a statistical check to determine whether each of the individual ranges used to calculate the average range is appropriate, or statistically acceptable. We check the individual ranges by comparing them to a UCL and an LCL for ranges:

UCL_R = D4R̄ = 2.114 × 7.81 = 16.51,

where the value of D4 depends on the subgroup sample size; for n = 5, D4 = 2.114. Provided the process does not change, we expect 99.7 percent of the subgroup ranges to fall between 0 and 16.51.

LCL_R = D3R̄

For a subgroup size of n = 5, there is no defined value for D3; therefore, there is no LCL for the range. There is no LCL for the range until we have a subgroup sample size of n ≥ 7.

Step 4. Construct the control chart and plot the data.

Appropriate plotting scales are determined. Control limits for both the averages and the ranges are drawn using a broken line, and the grand average, or center line for the averages chart, is drawn using a solid line. The control limits are derived using only the historical data. The vertical wavy line separates the history from the future data.

Step 5. Continue plotting data, looking for changes in the future.

If there is supporting evidence in the future that there has been a real and continuing change in the process, the control limits may be recalculated using the recent historical data. The completed chart using the historical data follows.
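The Step 3 limit calculations can be sketched as follows. The factor values are the standard A2 and D4 constants for n = 5 quoted above; everything else is plain arithmetic.

```python
# Sketch of Step 3: control limits for the averages chart and the ranges chart.
# A2 and D4 are the standard control chart factors for subgroup size n = 5.
A2, D4 = 0.577, 2.114

def xbar_limits(grand_avg, r_bar):
    delta = A2 * r_bar              # estimates 3 standard deviations of the averages
    return grand_avg - delta, grand_avg + delta

def range_ucl(r_bar):
    return D4 * r_bar               # no LCL for ranges when n < 7

lcl, ucl = xbar_limits(24.89, 7.81)
print(round(lcl, 2), round(ucl, 2))   # about 20.38 and 29.40
print(round(range_ucl(7.81), 2))      # about 16.51
```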

[Figure: the completed average/range control chart from the historical data. The averages chart has center line X̄ = 24.89 with control limits at 29.40 and 20.38; the range chart has UCL = 16.51. Samples 1 through 16 are plotted to the left of the wavy line.]

There are several rules, or patterns of data as they appear on the control chart, that indicate a lack of control or give us evidence that the process has changed. The rules serve as an aid in interpreting the control charts. Some of the rules are as follows:

Rule 1. A single point outside the UCL or LCL. In a normal distribution where no change in the average has occurred, this rule would be invoked 0.3 percent of the time, or 1 in 333 observations. This is the rate at which a false alarm would be seen. This likelihood is so remote that if it occurs, we assume that there has been a process change. If the range chart indicates no change, then we assume that a point outside the average control limit indicates a change in the process average. W. A. Shewhart originally devised this rule in 1931, and it was defined as Criterion I.

Rule 2. Seven points in a row on the same side of the average, or centerline, with no points outside the control limits. The probability of this pattern occurring with no process change is approximately the same as getting seven consecutive heads in the toss of a fair coin. This probability is 1 in 128, or 0.78 percent.

Rule 3. Seven consecutive points steadily increasing or decreasing. This is a test for a trend or drift in the process. Causes for a trend can be improvement in skills over time or tool wear. A small trend in a process will be indicated by this rule violation before rule 1 can react.

Rule 4. Two out of three points greater than the two standard deviation limits. Any two points between the two and three standard deviation limit lines, with the third point being anywhere, is an indication of process change.

Rule 5. Four out of five points greater than the one standard deviation limit. Similar to rule 4 above, the fifth point can be located anywhere. The probability of getting a false alarm is the same as for rule 1.

Rule 1 through rule 3 utilize only the control limits based on ±3 standard deviations and are traditionally the major ones used in statistical process control (SPC). However, if additional limits based on ±1 and ±2 standard deviations are drawn, the sensitivity of the control chart can be increased by invoking the additional rules. For a more detailed discussion, see the module entitled SPC Chart Interpretation.

Assume that the following additional data points are collected in the future, after establishing the control limits and operating characteristics based on the historical data. Use them to determine whether you have evidence that the process has changed.
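Two of the rules above lend themselves to a short sketch. The points, centerline, and limits below are illustrative values in the spirit of this chapter's example, not data from the text.

```python
# Sketch of rule 1 (a single point beyond a control limit) and rule 2
# (seven points in a row on one side of the centerline).
def rule1(points, lcl, ucl):
    # indices of points outside the control limits
    return [i for i, x in enumerate(points) if x > ucl or x < lcl]

def rule2(points, centerline, run=7):
    # indices ending a run of `run` consecutive points on one side of the centerline
    hits = []
    for i in range(run - 1, len(points)):
        window = points[i - run + 1 : i + 1]
        if all(x > centerline for x in window) or all(x < centerline for x in window):
            hits.append(i)
    return hits

pts = [25.1, 26.0, 25.5, 25.8, 25.2, 25.9, 25.4, 30.1]
print(rule1(pts, 20.39, 29.40))   # the last point is above the UCL
print(rule2(pts, 24.89))          # runs of seven ending at indices 6 and 7
```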

[Table: Samples #17 through #31, each recorded with date (10/6/94 through 10/9/94), time, five individual observations, the subgroup average, and the subgroup range. The individual observations are not recoverable from the damaged extraction.]

With this new information, do you feel that the process average and/or standard deviation have changed?

[Figure: the control chart updated with samples #17 through #31 plotted to the right of the wavy line. The averages chart retains X̄ = 24.89, UCL = 29.40, and LCL = 20.38; the range chart retains UCL = 16.51.]

There is no evidence of a process change.

Bibliography

Besterfield, D. H. 1994. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall.
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th edition. New York: McGraw-Hill.
Montgomery, D. C. 1996. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons.
Wheeler, D. J., and D. S. Chambers. 1992. Understanding Statistical Process Control. 2nd edition. Knoxville, TN: SPC Press.

Average/Range Control Chart with Variable Subgroups

Traditional average/range control charts are based on data derived from subgroups of a constant size. Once the subgroup sample size is selected, it must remain the same throughout the life of the control chart. With a constant sample size, determination of the average for the averages and the ranges is a simple matter of averaging the data. With a variable subgroup size, we must calculate a weighted average. The subgroup size can be from n = 2 to n = 10.

The real complexity arises when we attempt to estimate the variation statistic σ for the process. We cannot use the single estimate R̄/d2, since the d2 factor depends on the subgroup sample size, which will vary.

The generalized equations for a variable-size average/range chart are:

Center line, CL:
  For X̄: X̄ (as a weighted average)
  For R: d2σ̂

Control limits:
  For X̄: X̄ ± Aσ̂
  For R: D1σ̂ (lower control limit) and D2σ̂ (upper control limit)

All the factors (d2, A, D1, D2, and so on) are listed in the following table. The book's table also includes weighting factors g and h, used later to estimate σ from the sample standard deviations; those two columns could not be recovered from the damaged extraction. The remaining values are the standard control chart constants:

Subgroup size n |  d2   |   A   |  D1   |  D2
       2        | 1.128 | 2.121 |   0   | 3.686
       3        | 1.693 | 1.732 |   0   | 4.358
       4        | 2.059 | 1.500 |   0   | 4.698
       5        | 2.326 | 1.342 |   0   | 4.918
       6        | 2.534 | 1.225 |   0   | 5.078
       7        | 2.704 | 1.134 | 0.205 | 5.203
       8        | 2.847 | 1.061 | 0.387 | 5.307
       9        | 2.970 | 1.000 | 0.546 | 5.394
      10        | 3.078 | 0.949 | 0.687 | 5.469

The process variation σ can be estimated from either the sample ranges R or the sample standard deviations S:

σ̂ = [(n1 − 1)(R1/d2,n1) + (n2 − 1)(R2/d2,n2) + . . . + (nk − 1)(Rk/d2,nk)] / (n1 + n2 + . . . + nk − k)   (1)

σ̂ = (g1S1 + g2S2 + . . . + gkSk) / (h1 + h2 + . . . + hk)   (2)

Consider the following example. We have a collection of historical data with varying subgroup sample sizes and want to establish an average/range control chart.

Step 1. Collect the historical data, from which the process will be characterized.

In a real application, it is suggested that a minimum of 25 subgroups be obtained. We have 10 sets of data for a demonstration of the concepts.

[Table: 10 subgroups of individual observations with, for each subgroup, the subgroup size n (6, 4, 5, 4, 5, 2, 4, 4, 5, 6), the average X̄, the range R, the standard deviation S, and R/d2. The individual observations are not recoverable from the damaged extraction.]

Step 2. Calculate a location and variation statistic to characterize the process.

Location statistic = X̄ (a weighted average):

X̄ = (n1X̄1 + n2X̄2 + . . . + nkX̄k) / (n1 + n2 + . . . + nk)

X̄ = [(6)(12.2) + (4)(10.5) + . . . + (6)(13.0)] / (6 + 4 + . . . + 6) = 541.2/45 = 12.03

Variation statistic = RCL (the center line for the ranges):

RCL = d2,nk(σ̂)
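The weighted grand average in Step 2 is simple to compute directly. A minimal sketch, using the first three subgroup sizes and averages from the example:

```python
# Sketch of the weighted grand average for variable subgroup sizes:
# each subgroup average is weighted by its subgroup size.
def weighted_grand_average(sizes, means):
    return sum(n * x for n, x in zip(sizes, means)) / sum(sizes)

sizes = [6, 4, 5]
means = [12.2, 10.5, 12.8]      # the third average is illustrative
print(round(weighted_grand_average(sizes, means), 2))
```

With all 10 subgroups included, the same function reproduces the chapter's 541.2/45 = 12.03.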

Each individual subgroup will have a unique variation statistic that is dependent on the subgroup sample size. In order to determine RCL, we will need an estimate of the process variation σ.

σ estimated from the sample ranges, using equation (1): the denominator is n1 + n2 + . . . + nk − k = 45 − 10 = 35, and the result is

σ̂ = 1.436.

σ estimated from the sample standard deviations, using equation (2) with the g and h weighting factors, gives

σ̂ = 1.43.

Note: Differences in the two estimates are due to rounding error. We will use the estimate for sigma based on the range method, σ̂ = 1.436.

Step 3. Compute the control limits.

Center lines for the ranges: RCL = d2,nk(σ̂), with σ̂ = 1.436.

Subgroup number | Subgroup size n |  d2   | RCL
       1        |        6        | 2.534 | 3.64
       2        |        4        | 2.059 | 2.96
       3        |        5        | 2.326 | 3.34
       4        |        4        | 2.059 | 2.96
       5        |        5        | 2.326 | 3.34
       6        |        2        | 1.128 | 1.62
       7        |        4        | 2.059 | 2.96
       8        |        4        | 2.059 | 2.96
       9        |        5        | 2.326 | 3.34
      10        |        6        | 2.534 | 3.64

For the average chart, X̄: X̄ ± Aσ̂, with X̄ = 12.03 and σ̂ = 1.436.

Upper control limit, UCL = X̄ + Aσ̂
Lower control limit, LCL = X̄ − Aσ̂
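Equation (1), the pooled sigma estimate from the subgroup ranges, can be sketched directly. The subgroup sizes and ranges below are a subset in the spirit of the example, not the book's full data set.

```python
# Sketch of equation (1): pool the per-subgroup estimates R/d2,
# weighting each by (n - 1). d2 values are the standard constants.
d2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326, 6: 2.534}

def sigma_from_ranges(sizes, ranges):
    num = sum((n - 1) * (r / d2[n]) for n, r in zip(sizes, ranges))
    den = sum(sizes) - len(sizes)   # (n1 + ... + nk) - k
    return num / den

sizes = [6, 4, 5, 2]
ranges = [3, 1, 5, 2]
print(round(sigma_from_ranges(sizes, ranges), 3))
```

With all 10 historical subgroups, the same pooling yields the chapter's σ̂ = 1.436.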

With σ̂ = 1.436:

Subgroup number | Subgroup size n |   A   |  LCL  |  UCL
       1        |        6        | 1.225 | 10.27 | 13.79
       2        |        4        | 1.500 |  9.88 | 14.18
       3        |        5        | 1.342 | 10.10 | 13.96
       4        |        4        | 1.500 |  9.88 | 14.18
       5        |        5        | 1.342 | 10.10 | 13.96
       6        |        2        | 2.121 |  8.98 | 15.08
       7        |        4        | 1.500 |  9.88 | 14.18
       8        |        4        | 1.500 |  9.88 | 14.18
       9        |        5        | 1.342 | 10.10 | 13.96
      10        |        6        | 1.225 | 10.27 | 13.79

For the range chart R:

Upper control limit, UCL_R = D2σ̂
Lower control limit, LCL_R = D1σ̂

There is no LCL for the ranges, as there is no D1 factor for subgroup sizes of 6 or less.

Subgroup number | Subgroup size n |  D2   | UCL
       1        |        6        | 5.078 | 7.29
       2        |        4        | 4.698 | 6.75
       3        |        5        | 4.918 | 7.06
       4        |        4        | 4.698 | 6.75
       5        |        5        | 4.918 | 7.06
       6        |        2        | 3.686 | 5.29
       7        |        4        | 4.698 | 6.75
       8        |        4        | 4.698 | 6.75
       9        |        5        | 4.918 | 7.06
      10        |        6        | 5.078 | 7.29

Step 4. Construct the chart.

The control chart will be a graphical presentation of the data with the respective center lines and control limits. Notice that the control limits will vary as the subgroup sample size changes. In addition, the center line for the ranges will also change with varying subgroup sample sizes. Center lines will be drawn as a solid line, and control limits will be drawn as dashed lines.
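Because A = 3/√n, the varying limits for the averages chart can be generated for any subgroup size rather than read from a table. A minimal sketch using the example's X̄ = 12.03 and σ̂ = 1.436:

```python
import math

# Sketch: per-subgroup control limits for the averages chart when subgroup
# sizes vary. A = 3 / sqrt(n), so the limits tighten as n grows.
def limits_for_subgroup(n, grand_avg, sigma_hat):
    a = 3 / math.sqrt(n)
    return grand_avg - a * sigma_hat, grand_avg + a * sigma_hat

lcl, ucl = limits_for_subgroup(4, 12.03, 1.436)
print(round(lcl, 2), round(ucl, 2))   # about 9.88 and 14.18, as in the table
```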

[Figure: the average/range chart of the 10 historical subgroups, with step-shaped control limits that change with the subgroup sample size on both the X̄ chart and the R chart.]

Data to the left of the wavy vertical line represent the historical data from which the location statistic average and the estimate of sigma were derived. Future data points will be compared with the historical reference values. Subsequent values for the range center line RCL, UCL_R/LCL_R, and the UCL and LCL for averages will be determined, and values of R and X̄ plotted. All of the traditional "rules" are applicable when using a variable-size average/range control chart.

Step 5. Continue collecting data and look for signs of a process change.

Consider the following set of data for subgroups 11 and 12:

Subgroup number |  n | X̄    | R
      11        |  9 |  9.78 | 2
      12        | 10 | 14.00 | 3

[The individual observations for these two subgroups are not recoverable from the damaged extraction.]

Subgroup #11 calculations (n = 9):

For the average chart, X̄: X̄ ± Aσ̂, with X̄ = 12.03, σ̂ = 1.436, and A = 1.000:

UCL = 12.03 + (1.000)(1.436) = 13.47
LCL = 12.03 − (1.000)(1.436) = 10.59

For the range chart:

Center line, RCL = d2σ̂ = (2.970)(1.436) = 4.26
Upper control limit, UCL_R = D2σ̂ = (5.394)(1.436) = 7.75
Lower control limit, LCL_R = D1σ̂ = (0.546)(1.436) = 0.78

Plot points: Average = 9.78, Range = 2

Subgroup #12 calculations (n = 10):

For the average chart, X̄: X̄ ± Aσ̂, with A = 0.949:

UCL = 12.03 + (0.949)(1.436) = 13.39
LCL = 12.03 − (0.949)(1.436) = 10.67

For the range chart:

Center line, RCL = d2σ̂ = (3.078)(1.436) = 4.42
Upper control limit, UCL_R = D2σ̂ = (5.469)(1.436) = 7.85
Lower control limit, LCL_R = D1σ̂ = (0.687)(1.436) = 0.99

Plot points: Average = 14.00, Range = 3

[Figure: the completed chart with subgroups 11 and 12 plotted to the right of the wavy line on both the X̄ chart and the R chart.]

The completed plot can be seen in the preceding figure. Notice that both samples #11 and #12 are out of control on the location chart and in control on the variation chart.


Average/Standard Deviation Control Chart

The traditional Shewhart control charts used to monitor variables are the individual/moving range, the average/range, and the average/standard deviation. The individual/moving range chart is used when the observations are treated as individuals. In order to increase the sensitivity of the chart to detect a process change, the average/range control chart is used; increasing the sample size increases the sensitivity of the control chart.

With the average/range chart, the process variation is measured using the range. The average range R̄ is used to monitor the process variation and is also used to estimate the variation of the averages using the relationship 3Sx̄ = A2R̄, where A2 is a constant whose value depends on the subgroup sample size. This relationship is effective as an estimate of the standard deviation for the distribution of averages provided that the subgroup sample size is 10 or less. Subgroup sample sizes greater than 10 render this relationship less efficient, and the following alternative estimate must be used:

3Sx̄ = A3S̄

The average/standard deviation chart may be used for any subgroup sample size of n ≥ 2.

The five steps for establishing the average/standard deviation control chart are as follows:

1. Collect historical data.
2. Select a location and variation statistic.
3. Determine control limits for both the location and variation statistic.
4. Construct the control chart.
5. Continue data collection and plotting, looking for signs of a process change.

The following example illustrates the average/standard deviation control chart.

Background

An injection molding process utilizes a nine-cavity mold. The characteristic being measured is the part weight for each of the nine parts; weights are recorded in grams. All nine cavities are measured for part weight every three hours. For each subgroup sample of n = 9 taken, the average and standard deviation will be determined. Typically 25 subgroups are taken; for this demonstration, only 12 samples will be taken. These data will serve as a historical characterization.

Step 1. Collect historical data.

[Table: 12 subgroups of nine part weights each, with the subgroup average X̄ and standard deviation S computed for each subgroup. The individual weights are not recoverable from the damaged extraction.]

Step 2. Select a location and variation statistic.

The selection of the location and variation statistic from which the process will be monitored is derived from the name of the chart. In this case, the chart is the average/standard deviation: the location statistic will be the average of the averages, and the variation statistic will be the average of the standard deviations.

X̄ = (X̄1 + X̄2 + . . . + X̄12)/12 = 49.9

S̄ = (S1 + S2 + . . . + S12)/12 = 2.8

Step 3. Determine control limits for both the location and variation statistic.

A. Control limits for the location statistic: The location, or central tendency, of this process will be monitored by the movement of the averages about the grand average. Each of the individual averages varies. Upper control limits (UCLs) and lower control limits (LCLs) for these individual averages will be determined based on the grand average ±3 standard deviations (standard deviations of the averages, not of the individual observations). The three standard deviations will be determined from 3Sx̄ = A3S̄. The value of A3 is dependent on the subgroup sample size n; for this case, n = 9 and A3 = 1.032 (see Table A.7 in the appendix).

3Sx̄ = (1.032)(2.8) = 2.9

UCL = 49.9 + 2.9 = 52.8
LCL = 49.9 − 2.9 = 47.0

Since these limits are based on ±3 standard deviations, 99.7 percent of the averages are expected to fall between 47.0 and 52.8.
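The limits for the averages chart follow directly from A3 and S̄. A minimal sketch using the n = 9 factor quoted above:

```python
# Sketch of Step 3A for the average/standard deviation chart:
# limits for the averages use 3*S_xbar = A3 * S_bar (A3 = 1.032 for n = 9).
A3 = 1.032

def avg_chart_limits(grand_avg, s_bar):
    delta = A3 * s_bar
    return grand_avg - delta, grand_avg + delta

lcl, ucl = avg_chart_limits(49.9, 2.8)
print(round(lcl, 1), round(ucl, 1))   # about 47.0 and 52.8
```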

B. Control limits for the variation statistic: The process variation will be monitored by the movement of the individual standard deviations about the average standard deviation S̄. The UCL and LCL for the standard deviations are determined by

UCL_S = B4S̄ = (1.761)(2.8) = 4.9
LCL_S = B3S̄ = (0.239)(2.8) = 0.7

Approximately 99.7 percent of the sample standard deviations will fall within these limits.

Step 4. Construct the control chart.

The averages, control limits, and data values are plotted on the chart. Control limits are normally drawn using a broken line, and averages are drawn using a solid line.

[Figure: the completed average/standard deviation chart from the 12 historical subgroups. The averages chart has center line X̄ = 49.9, and the S chart has center line S̄ = 2.8, each bounded by the control limits computed above.]

Note: The vertical wavy line is drawn only to show the period from which the chart parameters were determined (averages and control limits).
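The S chart limits are just the B3 and B4 factors times the average standard deviation. A sketch with the n = 9 factors:

```python
# Sketch of Step 3B: control limits for the standard deviation chart,
# using the standard B3 and B4 factors for subgroup size n = 9.
B3, B4 = 0.239, 1.761
s_bar = 2.8
print(round(B3 * s_bar, 2))   # lower limit, about 0.67
print(round(B4 * s_bar, 2))   # upper limit, about 4.93
```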

Step 5. Continue data collection and plotting, looking for signs of a process change.

[Table: Samples #13 through #18, each with nine part weights and the resulting subgroup average X̄ and standard deviation S. The individual weights are not recoverable from the damaged extraction.]

[Figure: the chart extended with samples #13 through #18 plotted to the right of the wavy line. The average for sample #14 plots above the UCL for averages; all standard deviations remain within their limits.]

Sample #14 has an average greater than the UCL for averages. This is an example of a violation of one of the SPC detection rules (a single point outside the UCL or LCL), indicating that the process, relative to the historical characterization, has experienced a change. It is suspected that the process average has increased. The process variation remains stable, with no indication of change.

Bibliography

Besterfield, D. H. 1994. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall.
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th edition. New York: McGraw-Hill.
Montgomery, D. C. 1996. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons.
Wheeler, D. J., and D. S. Chambers. 1992. Understanding Statistical Process Control. 2nd edition. Knoxville, TN: SPC Press.


Chart Resolution

The ability of a control chart to resolve the difference between its historical characterization (the statistics used to establish the chart) and future process data is a measure of the power, or resolution, of the chart. Several interrelated factors affect this resolution:

1. The magnitude of the change being detected
2. The subgroup sample size, n
3. The quickness in the response time desired to detect the change
4. The number of detection criteria used to indicate a change

Two families of control charts will be reviewed for their resolving power as related to these factors:

1. Control charts for variables
   A. Individuals
   B. Averages
2. Control charts for attributes
   A. Proportion (defective)
   B. Nonconformities (defects)

Variables Control Charts: Individuals

Consider an individuals control chart developed from a process that has an average X̄ = 35.00 and an average moving range MR̄ = 9.0. The control limits are calculated from X̄ ± 3S, where 3S is estimated by 2.66MR̄:

3S = (2.66)(9.0) = 23.94 ⇒ 24.0

UCL = 59.00
X̄ = 35.00, S = 8.00
LCL = 11.00

One of the signals indicating an out-of-control condition is the occurrence of a single point falling outside the control limits. Once points of interest such as the upper control limits (UCLs) and the lower control limits (LCLs) have been determined, the probability of exceeding either or both can be determined assuming any future process average.

For example, if the UCL and LCL for an individuals chart are 59.00 and 11.00, respectively, and the standard deviation of individuals is 8.00, then with no process change

Z = (59.00 − 35.00)/8.00 = 3.00.

Looking up this value in a standard normal distribution, the probability of getting a value greater than 59.00 is 0.00135, or a percent probability of 0.135 percent. The probability of getting a value less than 11.00 is also 0.00135, for a combined probability of 0.0027. There is a probability that 1 sample in 370 will exceed a control limit when there has been no process change:

1/0.0027 = 370

The value of 370 is also called the average run length (ARL).

What is the probability of getting a single point above the UCL if the process average shifts from 35.00 to 47.00 (an increase of +1.5σ)?

UCL = 59.00
X̄ = 47.00, S = 8.00

Z = (59.00 − 47.00)/8.00 = 1.50

The probability of Z = 1.50 is 0.0668. This means that if a control chart is established with a UCL and an LCL of 59.00 and 11.00, and the process average shifts from 35.00 to 47.00, there is a probability of 0.0668, or 6.68 percent, that a single point will exceed the UCL. One sample in 15 will exceed the UCL.
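Rather than reading a standard normal table, the detection probability and ARL can be computed directly. A sketch using the shifted-process example above:

```python
import math

# Sketch: probability that a single point exceeds the UCL after a shift,
# and the resulting average run length (ARL). The standard normal upper-tail
# probability is computed from math.erf instead of a lookup table.
def norm_sf(z):
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

def detection_probability(ucl, mean, sigma):
    return norm_sf((ucl - mean) / sigma)

p = detection_probability(59.0, 47.0, 8.0)   # the 1.5-sigma shift example
print(round(p, 4))                            # about 0.0668
print(round(1 / p))                           # ARL of about 15
```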

Calculations can be performed in a similar manner to determine the probability of detection with various degrees of process shift (in units of σ). Table 1 lists several degrees of process change, the corresponding probability of exceeding a control limit, and the ARL. A plot of Table 1 is seen in Figure 1.

Table 1 Probability of exceeding a control limit as a function of shift.

Amount of shift, σ | Probability of detection | ARL
       0.00        |          .00135          | 741
       0.20        |          .00256          | 391
       0.40        |          .00466          | 215
       0.60        |          .00820          | 122
       0.80        |          .01390          |  72
       1.00        |          .02275          |  44
       1.20        |          .03593          |  28
       1.40        |          .05480          |  18
       1.60        |          .08076          |  12
       1.80        |          .11507          |   9
       2.00        |          .15866          |   6
       2.20        |          .21186          |   5
       2.40        |          .27425          |   4

[Figure 1: Probability of shift detection, n = 1. The detection probabilities of Table 1 plotted against the amount of shift.]

To this point, all the probabilities have been based on exceeding a limit with a single sample. We can determine the probability of exceeding a control limit within any number of samples k using the relationship

Pk = 1 − (1 − Pk=1)^k,

where:
Pk=1 = probability of exceeding a limit in one sample
k = number of samples after the shift
Pk = probability of exceeding a limit within k samples.

Example: A process has been defined by an average of 26.5 and a standard deviation of 3.8. What is the probability of detecting a shift in the average to 25.5 within 10 samples after the shift has occurred?

UCL = 26.5 + 3(3.8) = 37.9
LCL = 26.5 − 3(3.8) = 15.1

After the shift, X̄ = 25.5 and S = 3.8. The probability of getting a point below the LCL is found from

Z = (25.5 − 15.1)/3.8 = 2.74.

The probability of getting a Z score of 2.74 is 0.00307, or 0.307 percent. The probability of getting a single point below the LCL within 10 samples after the shift occurred is

P10 = 1 − (1 − 0.00307)^10 = 0.0302.
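The within-k-samples relationship is one line of code. A sketch using the example's single-sample probability of 0.00307:

```python
# Sketch of P_k = 1 - (1 - P_1)^k: the probability of at least one point
# beyond a control limit within k samples after a shift.
def within_k(p1, k):
    return 1 - (1 - p1) ** k

p10 = within_k(0.00307, 10)
print(round(p10, 3))   # about 0.030, as in the example
```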

Variables Control Charts: Averages

Just as we can determine the probability of detecting process shifts using individuals, we can do the same using averages. Table 2 gives the probability of a single point being outside a control limit within k samples of a process shift of σ units.

[Table 2: Probability of detection within k samples (k = 1, 2, 3, 4, 5, 10, 15, 20) following shifts of σ = 0.2 through 2.4. Each entry follows Pk = 1 − (1 − P1)^k with P1 taken from Table 1; for example, a 0.4σ shift gives .0047 at k = 1 and .0456 at k = 10, and a 2.4σ shift gives .9984 by k = 20. The full grid is not recoverable from the damaged extraction.]

The only difference is in the calculation of the standard deviation for the distribution of averages, Sx̄. There are two ways to determine Sx̄:

1. From the SPC relationship 3Sx̄ = A2R̄, where A2 is a constant dependent on the subgroup sample size n and R̄ is the average range.

Example of application 1: An average/range control chart is constructed using a subgroup sample size of n = 5. The process average is X̄ = 15.0, and the average range is R̄ = 4.8. What is the standard deviation of the averages?

3Sx̄ = A2R̄ = (0.577)(4.8) = 2.77
Sx̄ = 2.77/3 = 0.92

2. From the relationship defined by the central limit theorem, Sx̄ = σ/√n, where σ is the true standard deviation (estimated by S) and n is the subgroup sample size.

Example of application 2: The standard deviation for the individuals of a process is S = 8.55. What is the standard deviation for the distribution of averages of n = 7 from this process?

Sx̄ = σ/√n = 8.55/√7 = 3.23

Having the standard deviation for the distribution of averages, we can calculate the probability of a single point exceeding a control limit in one sample, or the probability of a single point exceeding a limit within k samples of a process change.

Example: What is the probability of an average of n = 6 exceeding the UCL within eight samples after the process shifts from the historical average of X̄ = 45.00 to 47.50? The UCL is 50.00, and the LCL is 40.00.

6Sx̄ = UCL − LCL = 10.0  ∴  Sx̄ = 1.67

The Z-score is

Z = (50.00 − 47.50)/1.67 = 1.50.

The probability of exceeding the UCL on a single sample average is 0.0668. If the probability of exceeding the UCL on a single sample is 0.0668, then the probability of exceeding the UCL within eight samples after the process shift is

Pk = 1 − (1 − Pk=1)^k, with k = 8 and Pk=1 = 0.0668:

P8 = 1 − (1 − 0.0668)^8 = 0.4248.
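Both routes to the standard deviation of averages can be sketched side by side, using the two application examples above:

```python
import math

# Sketch of the two ways to get the standard deviation of averages:
# (1) from the SPC relationship 3*S_xbar = A2*R_bar, and
# (2) from the central limit theorem, S_xbar = sigma / sqrt(n).
def sxbar_from_rbar(a2, r_bar):
    return a2 * r_bar / 3

def sxbar_from_sigma(sigma, n):
    return sigma / math.sqrt(n)

print(round(sxbar_from_rbar(0.577, 4.8), 2))   # application 1: about 0.92
print(round(sxbar_from_sigma(8.55, 7), 2))     # application 2: about 3.23
```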

Table 3 gives the probability of exceeding a control limit in a single sample for various subgroup sample sizes and various degrees of process shift in units of σ.

[Table 3: Probability of exceeding a limit for averages of subgroup size n and shift σ, for n = 2 through 10 and shifts of σ = 0.2 through 2.4; for example, a 1.0σ shift with n = 9 is detected in a single sample with probability .5000. The full grid is not recoverable from the damaged extraction.]

What about Subgroup Size for the Variables Control Charts?

The average/range control chart is used only with subgroup sample sizes of 2 to 10. For subgroups larger than 10, we use the average/standard deviation control chart (which can also be used with subgroup sample sizes of 2 to 10).

The question arises: how large should my subgroup sample size be, and how often should I take a sample? The usual response is that most people use five, and you should take a sample every hour. While you could take a guess, there is actually a better way to make these decisions. It depends on three factors:

1. How large of a shift do you want to detect?
How soon do you want to detect this shift after it occurs? 3.0.0308 .0804 .0267 .0069 . The process average is 10. there is actually a better way to make these decisions.0 1.8931 . How large of a shift do you want to detect? 2.qxd 10/15/07 11:47 AM Page 71 Chart Resolution 71 Table 3 Probability of exceeding a limit for averages of subgroup size n and shift σ.4321 .3655 .2313 .9977 .5525 .6788 .0564 .0075 .9960 .9916 .8413 .4 0.9618 . and samples are obtained every hour.8 2.9999 .9893 . Assume a sample is taken each hour.4327 .8765 . For subgroups larger than 10.128.1131 .1913 .9298 .9206 .9919 .

The control limits for our individual/moving range control chart are determined by X̄ ± 2.66M̄R:

10.0 ± (2.66)(1.128) = 10.0 ± 3.0
UCL = 10.0 + 3.0 = 13.0
LCL = 10.0 − 3.0 = 7.0

Since 2.66M̄R = 3σ, the process standard deviation is σ = (2.66)(1.128)/3 = 1.0.

Original process: X̄ = 10.0, σ = 1.0, UCL = 13.0.

We now shift the process from its original average of 10.0 (where sigma is 1.0) up by one standard deviation to a new average of 11.0, keep the UCL and the standard deviation the same, and recalculate the probability of detection (detection in one sample after the shift). We will call this P1.

Shifted process: X̄ = 11.0, σ = 1.0, UCL = 13.0.

The probability of exceeding the UCL is determined by calculating a Z-score:

Z = (13.0 − 11.0)/1.0 = 2.0

Looking up the Z-score in a standard normal distribution table (Table A.1 in the appendix), we find the probability of exceeding the UCL is 0.02275.
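This single-sample detection probability, and each entry of Table 3, comes from one relationship: a subgroup average of size n has standard deviation σ/√n, so after a shift of δσ the probability that one plotted average exceeds the nearer 3-sigma limit is 1 − Φ(3 − δ√n), where Φ is the standard normal CDF. A minimal standard-library Python sketch (the function names are mine, not from the text); small differences from the printed table are Z-table rounding:

```python
import math

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def p_exceed(n: int, shift_sigma: float) -> float:
    """Probability one subgroup average (size n) falls beyond the
    3-sigma limit after the mean shifts by shift_sigma standard deviations."""
    return 1.0 - phi(3.0 - shift_sigma * math.sqrt(n))

# Individuals chart (n = 1), one-sigma shift -> about 0.02275:
print(round(p_exceed(1, 1.0), 5))
# One cell of Table 3: n = 5, one-sigma shift -> about 0.222:
print(round(p_exceed(5, 1.0), 3))
```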

The following relationship may be used for detection within k samples (for k = 1 it is simply the same as P1):

Pk = 1 − (1 − P1)^k

where: Pk = the probability of detection by the kth sample
P1 = the probability of detection in one sample

What is the probability of detecting the shift by 10 samples after the shift occurs?

Pk = 1 − (1 − P1)^k, where P1 = 0.02275 and k = 10
Pk(10) = 1 − (1 − 0.02275)^10 = 1 − 0.7944 = 0.2056

This probability of detecting a one-sigma shift applies to any process regardless of the value of the process average and the process sigma.

By rearranging the equation Pk = 1 − (1 − P1)^k and solving for k, we can determine k for any desired probability of detection:

k = ln(1 − Pk)/ln(1 − P1)

Let's say we want to detect a one-sigma shift at a probability of 85 percent:

k = ln(1 − 0.85)/ln(1 − 0.02275) = −1.897/−0.023 = 82.6 ≅ 83 samples

In other words, the probability of detecting a one-sigma shift within 83 samples after the shift is 85 percent. Making this detection would take 83 hours, or about 3 1/2 days, a long time. If we want to detect this shift quicker, we could sample more frequently, but we would still sample 83 times. If we sample every 7 minutes, we will detect the shift in about 9.7 hours, or about half a day.

We can also increase k to a larger value, such as 15:

Pk = 1 − (1 − P1)^k
P(15) = 1 − (1 − 0.02275)^15 = 1 − 0.7081 = 0.2919

We now have a probability of 29.2 percent of detecting a one-sigma shift within 15 samples, or 15 hours, after the shift.

The probability of detecting a one-sigma shift (up or down) within 83 samples is 85 percent using an individual control chart, but we can improve this. We could consider using an average/range control chart with a subgroup size of n = 5. The upper and lower control limits will now be based on the standard deviation for the distribution of averages (n = 5) rather than the standard deviation of individuals (sometimes referred to as sigma). Using the central limit theorem, we know that the standard deviation of averages is

Sx̄ = σ/√n.
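The two formulas above are straightforward to check numerically. A small sketch (function names are mine) reproducing the individual-chart numbers, assuming P1 = 0.02275 for a one-sigma shift:

```python
import math

def p_within_k(p1: float, k: int) -> float:
    """Probability of at least one point beyond a limit within k samples."""
    return 1.0 - (1.0 - p1) ** k

def samples_needed(p1: float, pk: float) -> float:
    """Number of samples k required for an overall detection probability pk."""
    return math.log(1.0 - pk) / math.log(1.0 - p1)

p1 = 0.02275                                # one-sigma shift, individuals chart
print(round(p_within_k(p1, 10), 4))         # detection within 10 samples
print(math.ceil(samples_needed(p1, 0.85)))  # samples needed for 85% detection
```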
The probability of detecting a one-sigma shift (up or down) within 83 samples is 85 percent using an individual control chart. but we can improve this. or about half a day. n .02275) 0.2 percent of detecting a one-sigma shift within 15 samples or 15 hours after the shift.02275)15 = 1 – 0.

35 X = 10.445 Note that we use S X and not σ.35. or 20. The probability of getting a Z-score of 0.0 to 11.0. where: P1 = 0.65 UCL = 11.2177)10 = 0.77 percent probability of exceeding the UCL in a single subgroup average.78 0.0. keeping the UCL at 11.21770.qxd 74 10/15/07 11:47 AM Page 74 The Desk Reference of Statistical Quality Methods Recall that in our example. or 91. We now shift our process up one sigma from 10.275 percent.78 is 0. UCL = 11. or SX = 1 = 0.2177 and k = 10 Pk = 1 – (1 – 0. Our control limits will be based on three standard deviations for the distribution of averages. . 5 The control limits are X ± 3S X = 10. Z= 11.45.0 Sx = .56 percent.35 LCL = 8. The probability of detecting this shift in one subgroup P1 is determined by calculating the Z-score.2056. There is a 21.35 − 11. What would be the probability of detecting the same shift within 10 subgroups (n = 5) if Pk = 1 – (1 – P1)k.42%.045 We use the standard deviation for averages because we are using averages (of n = 5) to detect our shift of one sigma. sigma for the process was 1. Earlier we determined that the probability of detection within 10 samples after a onesigma shift using an individual control chart was 0.0 and the process average was 10.35.0 ± 1.0 = 0. The probability of exceeding the UCL for the individual control chart with the same amount of shift is 2.H1317_CH08.9142.

With the individual control chart. Decrease P1 and increase the number of samples k 2. Increase P1 and decrease the number of samples k Let’s consider using an average control chart where the subgroup sample size is n = 5. To make a fair comparison.8.56 percent probability of detecting a one-sigma shift. the average control chart is more efficient than the individual control chart.06836.H1317_CH08. The amount of shift in units of the process standard deviation (sigma) is 1.3 units within 24 hours after the shift with a 90 percent probability of detection. UCL = 22.2275)50 = 0. and the process standard deviation is 2.8 Sx = 1. Illustrative example 1: The current process average is 18.36 percent. Fifty observations taken as 10 samples of 5 each gives a 91.8 We have two unknowns in our problem: the number of samples in our subgroup size n and the number of samples for detection k.qxd 10/15/07 11:47 AM Page 75 Chart Resolution 75 This is quite an improvement.42 percent. We must fix one and determine the other. or 68.25 . With the average control chart where n = 5. we should evaluate the probability of detection within 50 individual samples since 10 samples of 5 each equates to 50 individual observations.36% Everything being equal. To judge our increase in efficiency.5  = 2. Pk = 1 – (1 – P1)k where: P1 = 0.46. There are two ways to accomplish our objective. For a fixed probability of detection Pk we can: 1. You would like to detect a shift of 1. Design a control chart that will accomplish this objective. 2. we had a probability of detection of 91.42 percent probability of detection.3 = 0. we had a 20.5.2275 and k = 50 Pk = 1 – (1 – 0. Fifty observations as individuals gives a probability of detection of 68.25 X = 18. let’s look at the actual workload.

This would require taking a subgroup of n = 5 approximately every 15 minutes.7 = =1 n 2.0 Sx = 1. σ 3.25 − 19.83 .45 sigma shift within 92 samples of n = 5. What is the probability of detecting a shift from 45.qxd 76 10/15/07 11:47 AM Page 76 The Desk Reference of Statistical Quality Methods Shifting the average up by 1.025) There is a 90 percent probability of detection of the 0. You have established an average/range control chart with n = 8.96 = 0.25 Φ 1.96 1.303 = 92.3 within 12 samples after the shift? UCL = 48.31 SX = Prior to shift.025 ln(1 − P1 ) ln(1 − . = = −.25.7.0 to 46.93 X = 45.96 sigma (S X = 1.90 − 1) −2.0 and the standard deviation σ is 3.25) is 0.H1317_CH08.025. we have: UCL = 22.3 units and keeping the UCL at 22.025.8 Sx = 1. The required number of samples k to provide a probability of detection at Pk = 90 percent is k= ln ( Pk − 1) ln (0.25 X = 19. initial condition.25 The probability of detection in one sample (of n = 5) is determined from the Z-score: 22. Z= The probability of detection within one subgroup of n = 5 with a 1.8 = 1. Illustrative example 2: The process average is 45.

00 = 0. The OCC defines the resolution or power of the p chart to detect a process shift in units of percent nonconforming.3 = 2. or 24%.31 After shift.31 Φ 2.2413.2275)12 = 0. the historical process – percent nonconforming is P = 0.18. however. where: P1 = 0.0 + 3(1.93 X = 46. For this example. The following example illustrates the calculation of an operating characteristic curve (OCC) for a p chart.93 − 46.2275 and k = 12 Pk = 1 – (1 – 0.H1317_CH08. Z= The probability of detection within 12 samples (of n = 5) is Pk = 1 – (1 – P1)k. The distribution used for attribute data is the binomial and Poisson. is not used to define the variation for attribute data. k = ln ( Pk − 1) ln(1 − P1 ) III. The following is a summary of relationships for subgroup size calculations: I. P1 = 1 − ( Pk − 1) 1 k Pk = the probability of detecting the shift within k samples after the shift occurs P1 = the probability of detecting the shift within one sample (or subgroup) after the shift occurs k = the number of samples (or subgroups) Attribute Chart: p Charts The resolution for a p chart can be determined just as the case for variables.31) = 48. . UCL = 48.02275 = P1 .3 Sx = 1.93. The normal distribution. Pk = 1 − (1 − P1 ) k II. The probability of detecting the shift in a single sample of n = 8 is 48.00 1.qxd 10/15/07 11:47 AM Page 77 Chart Resolution 77 The UCL is 45. and the sample size is n = 100.

3! .10 to 0. The probability of exceeding the UCL is 1. P 3! x =3 = e −5 (5)3 . and so on.18 ) → 0. For proportions less than or equal to 0.065. defective) in a sample chosen from a process that has an average number defective np = (100)(. Using the Poisson relationship. Poisson Distribution Sample Calculations Assume that the process average has shifted from the historical average of 0. This is done by obtaining the probability of getting exactly zero defectives plus exactly one defective plus exactly two defectives.08 proportional defective. or 0.10.05)]3 .05 proportion defective is given by Px = e − np ( np)x . x! where: n = sample size x = exact number defective in sample p = proportion defective of process. – If the process average proportion P shifts. This cumulative probability represents the probability of getting seven or fewer defective units in a sample of n = 100 and represents the probability of not exceeding the UCL.08) = 8. x! Px = 3 = e –[(100 )( 0.295 LCL = 0. For P < – 0. we can determine the probability of getting seven or fewer defective units in a sample of 100 chosen from a process that is 0.18 (1 − 0.065? Using the Poisson model. we may calculate the probability of exceeding a control limit.065)(100) defective (or to the nearest integer. until the cumulative probability of getting seven defectives has been reached.10.18 ± 0.qxd 78 10/15/07 11:47 AM Page 78 The Desk Reference of Statistical Quality Methods The control chart limits are calculated as follows: P (1 − P ) 0.1404. Px = 3 = 0..00 minus this probability.18 ± 3 → 0. and for proportions greater than 0. we will use the Poisson distribution to estimate probabilities. The probability of getting exactly three defective units in a sample of 100 chosen from a process that is 0.05)] [(100 )( 0.0.10. What is the probability of getting a single point below the LCL.08. we are determining the probability of getting less than (0. The Poisson formula is: Px = e − np ( np)x . 
seven defectives has been reached. This cumulative probability represents the probability of getting seven or fewer defective units in a sample of n = 100; the probability of exceeding a control limit is 1.00 minus the corresponding cumulative probability.

The probability of getting three or fewer defective units from a sample of n = 100 chosen from a process that is 0.05 proportion defective is the sum of all the individual probabilities from zero defective through three defective:

P(x ≤ 3) = e^(−5)(5)^0/0! + e^(−5)(5)^1/1! + e^(−5)(5)^2/2! + e^(−5)(5)^3/3!
P(x ≤ 3) = 0.00674 + 0.03369 + 0.08422 + 0.14037 = 0.265

This result can also be obtained more readily by using a cumulative Poisson distribution table and looking up np = 5.0 and x = 3. The result is 0.265.

Normal Distribution

What is the probability of getting a single proportion defective greater than the UCL of 0.295 from a sample of n = 100 chosen from a process that has a proportion defective of 0.250? The control limit was based on a historical process proportion defective of 0.180 and a sample size of n = 100. The probability that a given sample proportion will exceed the UCL is determined by using the normal distribution. Calculation of the standard deviation is based on the binomial distribution for attribute data:

Sp = √(P(1 − P)/n) = √(0.25(1 − 0.25)/100) = 0.0433

Z = (0.295 − 0.250)/0.0433 = 1.04

Looking up Z = 1.04, the probability of occurrence is 0.149.
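The term-by-term Poisson sum can be scripted directly from the formula (a standard-library sketch; function names are mine). It reproduces both the single-term value 0.1404 and the cumulative 0.265:

```python
import math

def poisson_pmf(x: int, np_: float) -> float:
    """P(exactly x defectives) for an expected count of np_."""
    return math.exp(-np_) * np_ ** x / math.factorial(x)

def poisson_cdf(x: int, np_: float) -> float:
    """P(x or fewer defectives)."""
    return sum(poisson_pmf(i, np_) for i in range(x + 1))

np_ = 100 * 0.05                       # n = 100, p = 0.05 -> np = 5
print(round(poisson_pmf(3, np_), 4))   # exactly three defectives
print(round(poisson_cdf(3, np_), 3))   # three or fewer defectives
```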

Additional values for various process shifts have been made for the control chart based on a sample size of n = 100 and a historical process proportion defective of P̄ = 0.18. Table 4 lists the probabilities of exceeding a limit for those cases. All probabilities for shifts to P ≤ 0.10 are based on the Poisson, and probabilities for shifts to P > 0.10 are based on the normal distribution.

Table 4  Resolution of an n = 100, P̄ = 0.18 p chart: for each new process proportion defective P (0.01 through 0.40), the probability of a point falling below the LCL, the probability of a point falling above the UCL, and the total probability of exceeding a limit.

Figure 2  OCC for p chart (total probability of exceeding a control limit versus the new process proportion defective, n = 100, P̄ = 0.18).
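For shifts to P > 0.10, the "probability > UCL" entries of Table 4 come from the same normal-approximation step worked above for P = 0.25. A sketch (assumed function name, standard library only):

```python
import math

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def p_beyond_ucl(p: float, n: int, ucl: float) -> float:
    """P(sample proportion > UCL) under the normal approximation,
    using the binomial standard deviation sqrt(p(1 - p)/n)."""
    sp = math.sqrt(p * (1.0 - p) / n)
    return 1.0 - phi((ucl - p) / sp)

# Shifted process P = 0.25 against the UCL of 0.295 (n = 100):
print(round(p_beyond_ucl(0.25, 100, 0.295), 3))
```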

A plot of total probability as a function of P from the data in Table 4 can be seen in Figure 2. This plot is called an OCC and completely defines the resolution, or power, for the control chart.

Bibliography

Grant, E. L., and R. S. Leavenworth. Statistical Quality Control. 7th edition. New York: McGraw-Hill, 1996.
Montgomery, D. C. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons, 1996.
Wheeler, D. J. Advanced Topics in Statistical Process Control. Knoxville, TN: SPC Press, 1995.
Wheeler, D. J., and D. S. Chambers. Understanding Statistical Process Control. 2nd edition. Knoxville, TN: SPC Press, 1992.


Chi-Square Contingency and Goodness-of-Fit

Chi-Square Contingency Tables

The chi-square distribution is frequently used in statistics to test for the independence of two factors. Data are arranged in a tabular fashion with rows and columns. In the following example, we want to test for the independence of the two factors of dealership and type of vehicle sold.

Example 1:
Four dealerships are evaluated by the number of three different types of vehicles sold in one month. The response is the number of vehicles sold in one month. The objective is to determine if there is a dependency on the types of vehicles sold as a function of the dealership. We will construct a contingency table consisting of four columns (dealerships) and three rows (types of vehicles).

Dealership
Type of vehicle   J.R.'s Auto   Crazy Harry's   Abe's Reliable   Rosco's   Row totals
Truck                 26             38               75            53        192
Compact               18             24               32            28        102
Midsize               33             45               68            49        195
Column totals         77            107              175           130        489 (Grand total)

H0: assumes no real differences among the dealerships with respect to the distribution of sales. That is, the null hypothesis assumes that the distribution of the number of a particular type of car sold is independent of the dealership.

The expected frequency of sales within each class is given by:

expected value = (row total × column total)/grand total.

Thus, for Compact/Reliable Rosco's, the expected frequency is (102 × 130)/489 = 27.1.

The expected values for all dealerships and vehicle types are calculated. The values in parentheses are the expected frequencies.

Dealership
Type of vehicle   J.R.'s Auto   Crazy Harry's   Abe's Reliable   Rosco's     Row totals
Truck             26 (30.2)     38 (42.0)       75 (68.7)        53 (51.0)      192
Compact           18 (16.1)     24 (22.3)       32 (36.5)        28 (27.1)      102
Midsize           33 (30.7)     45 (42.7)       68 (69.8)        49 (51.8)      195
Column totals        77            107             175              130         489 (Grand total)

The chi-square value is now determined for all the data. Chi-square is the sum of the squared differences between the observed and expected values divided by the expected values. There are 12 terms in the chi-square calculation, one for each class combination.

O     E     O – E   (O – E)²   (O – E)²/E
26   30.2   –4.2     17.64       0.58
38   42.0   –4.0     16.00       0.38
75   68.7    6.3     39.69       0.58
53   51.0    2.0      4.00       0.08
18   16.1    1.9      3.61       0.22
24   22.3    1.7      2.89       0.13
32   36.5   –4.5     20.25       0.55
28   27.1    0.9      0.81       0.03
33   30.7    2.3      5.29       0.17
45   42.7    2.3      5.29       0.12
68   69.8   –1.8      3.24       0.05
49   51.8   –2.8      7.84       0.15

χ² = 3.04 (sum of all (O – E)²/E)

We will reject the hypothesis of independence if the calculated chi-square value (3.04) is greater than the table value of the critical chi-square for a given level of risk (α). The table value is a function of the total degrees of freedom (υ) and the risk (α). The degrees of freedom are determined by (r – 1)(c – 1), where r = the number of rows and c = the number of columns in the contingency table. For our example, the degrees of freedom are (3 – 1)(4 – 1) = 6.
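The expected-value and chi-square arithmetic above is mechanical and easy to script. A sketch for the dealership table (carrying the expected values at full precision gives χ² ≈ 3.07; the text's 3.04 reflects rounding each expected value to one decimal first):

```python
observed = [
    [26, 38, 75, 53],   # Truck
    [18, 24, 32, 28],   # Compact
    [33, 45, 68, 49],   # Midsize
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # expected value = row total x column total / grand total
        exp = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - exp) ** 2 / exp

df = (len(observed) - 1) * (len(observed[0]) - 1)
print(round(chi2, 2), df)   # compare chi2 to the 12.6 critical value at alpha = 0.05
```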

If we choose a risk of 5 percent, then the critical chi-square is χ²6,0.05 = 12.6. Since the calculated chi-square is not greater than the critical chi-square, we cannot reject the hypothesis that the types of vehicles sold are independent of the dealership. The two factors are independent of each other at the level of significance, or risk, of 5 percent.

Example 2:
A paint tester wishes to determine whether there is a relationship between the type of UV inhibitor and the shade of paint with respect to the length of time required to cause a test specimen to lose 15 percent of its original gloss. One hundred and fifty samples are tested using UV inhibitor types 101 and 320 with both a light and a dark color of paint. Do the data provide sufficient evidence to indicate a relationship between UV inhibitor type and shade of paint at a significance of α = 0.10? A summary of the test data in each of the four categories is shown in the following table.

Type of UV inhibitor
Shade of paint   Type 101   Type 320   Row totals
Light               47         32          79
Dark                21         50          71
Column totals       68         82         150 (Grand total)

Calculation of expected values:

Expected value for Light/101 = (79 × 68)/150 = 35.8
Expected value for Light/320 = (79 × 82)/150 = 43.2
Expected value for Dark/101 = (71 × 68)/150 = 32.2
Expected value for Dark/320 = (71 × 82)/150 = 38.8

This is the completed contingency table. Expected values are in parentheses.

Type of UV inhibitor
Shade of paint   Type 101    Type 320    Row totals
Light            47 (35.8)   32 (43.2)      79
Dark             21 (32.2)   50 (38.8)      71
Column totals       68          82         150 (Grand total)

Calculation of χ²:

χ² = (47.0 − 35.8)²/35.8 + (32.0 − 43.2)²/43.2 + (21.0 − 32.2)²/32.2 + (50.0 − 38.8)²/38.8 = 13.53

Calculation of χ²Critical:

Degrees of freedom = (r − 1)(c − 1) = (2 − 1)(2 − 1) = 1
Significance, or risk, α = 0.10
χ²1,0.10 = 2.71

Since the calculated chi-square (13.53) is greater than the critical chi-square (2.71), we reject the hypothesis that the type of UV inhibitor and the shade of paint are independent with respect to the time required to lose 15 percent of the original gloss. Performance of the type of inhibitor is related to the shade of paint.

Goodness-of-Fit

A goodness-of-fit test is a statistical test to determine the likelihood that sample data have been generated from a population that conforms to a particular distribution, such as a normal, binomial, Poisson, uniform, and so on. The null hypothesis is that the sample data come from a specified distribution with a defined mean and standard deviation. The alternative hypothesis is that the sample data come from some other type of distribution. The objective is to compare the sample distribution with the expected distribution.

Note: There is an assumption that the sample size is sufficiently large such that each category will have at least five observations. This may require that some categories be combined to give a minimum count of five.

Goodness-of-Fit to Poisson

The following data represent the number of defects found on printed circuit boards:

Number of defects found:     0   1   2   3   4   5
Number of boards affected:  32  44  41  22   4   3

As with the case of the chi-square contingency table, we must determine the expected frequency assuming that we have a Poisson distribution. In order to accomplish this, we must determine the average defect rate, λ:

λ = total defects/total units = [(0)(32) + (1)(44) + (2)(41) + (3)(22) + (4)(4) + (5)(3)]/(32 + 44 + 41 + 22 + 4 + 3) = 223/146 = 1.53
Note: There is an assumption that the sample size is sufficiently large such that each category will have at least five observations.

For these data the observed number of boards with one defect was 4.9.8 boards with three defects.qxd 10/15/07 11:55 AM Page 87 Chi-Square Contingency and Goodness-of-Fit 87 The probability of getting exactly zero defects based on the Poisson distribution where λ = 1. and the expected value is 36.049. .217) = 31. and the expected value is 18.3 boards with one defect.129)(146) or 18.H1317_CH09. and the expected value is 48. we would expect to find (146)(0.331) or 48.53 (1.53) e− λ λ x = 0.253)(146) or 36.6 boards with zero defects.6.217. = Px =1 = x! 24 4 Px =0 = In a sample of n = 146. and the expected value is 31. The probability of finding exactly four defects is given by: e−1. The probability of finding exactly three defects is given by: e−1.331.049)(146) or 7.9 boards with two defects. For these data the observed number of boards with one defect was 22. and the expected value is 7. we would expect to find (146)(0. For these data the observed number of boards with one defect was 41. x! In a sample of n = 146. For these data the observed number of boards with one defect was 44. = Px =1 = x! 2 2 Px =0 = In a sample of n = 146.53 (1.53 (1. = Px =1 = x! 6 3 Px =0 = In a sample of n = 146.53 (1.53 is Px =0 = e− λ λ x = 0.129. = = Px =1 = x! 1 1 Px =0 In a sample of n = 146.8.53) e− λ λ x = 0. The probability of finding exactly two defects is given by: e−1.2.3.53) e− λ λ x = 0. The probability of finding exactly one defect is given by: e−1. For these data the observed number of boards with zero defects was 32. we would expect to find (0.2 boards with four defects. we would expect to find (0. we would expect to find (0.53) e− λ λ x = 0.253.

1 3. .76 0.9 18.25.2 –2. The degrees of freedom are determined by df = number of classes (adjusted) – 1 – number of estimated parameters classes adjusted = 5 number of parameters estimated = 1 df = 5 – 1 – 1 = 3.54 1. The critical chi-square value from the table is χ23.24 5.3 4.16 18.64 0.38 0.49 16.81 10. Since 2. The following is a summary of the observed frequency (from the data) and the expected frequency (for a Poisson distribution where λ = 1.53 (1.2 2.8 7.4 –4.6 48.24 0.015.38 0.2. For these data the observed number of boards with one defect was 3.29 The last row has fewer than five observations in the expected column.8 0. therefore. = Px =1 = x! 120 5 Px =0 = In a sample of n = 146.2 boards with five defects. we cannot reject the null hypothesis that the data come from a Poisson distribution.01 0.81 10.24 10.10 = 6.00. E O–E (O – E)2 (O – E)2/E 0 1 2 3 4 and 5 32 44 41 22 7 31.0.10.16 18. We will now look up the critical chi-square.3 36. E O–E (O – E)2 (O – E)2/E 0 1 2 3 4 5 32 44 41 22 4 3 31.4 0.01 0.0.00 is less than the critical value of 6.49 16.53) e− λ λ x = 0. we will combine this row with the previous row to increase the count to > 5 as required. The risk (r) level of significance is chosen to be 10 percent or α = 0. Number of defects Frequency observed.54 0.42 0.6 48.00 Chi-square calculated = 2.2 0. O Frequency expected.52).1 3.9 18. and the expected value is 2.2 –3.25.8 9. O Frequency expected.46 0.3 36.46 0.61 χ2 = 2.4 0.H1317_CH09.3 4. Number of defects Frequency observed. we would expect to find (0.2 0.015)(146) or 2.4 –4. The chi-square value is calculated the same as in the example for the contingency table.qxd 10/15/07 88 11:55 AM Page 88 The Desk Reference of Statistical Quality Methods The probability of finding exactly five defects is given by: e−1.

there would be the same frequency for all integer values.125 If the frequency were uniform. Consider the following case. but otherwise original.0000 χ2 = 6.qxd 10/15/07 11:55 AM Page 89 Chi-Square Contingency and Goodness-of-Fit 89 Goodness-of-Fit for the Uniform Distribution The following serial numbers were obtained from 20 one-dollar bills: 12049847 78711872 11054247 04301842 63460081 72772262 26509623 17503581 56704709 24052919 32549642 58136745 34286999 84544014 13103479 25178095 87717396 92733162 17194985 46878096 The objective is to confirm a level of significance of 5 percent that the distribution of integers is uniform. Critical chi-square.0. E 0 1 2 3 4 5 6 7 8 9 15 19 19 11 20 13 13 20 14 16 16 16 16 16 16 16 16 16 16 16 Total: 160 Integer O–E –1 3 3 –5 –4 –3 –3 4 –2 0 (O – E )2 (O – E )2/E 1 9 9 25 16 9 9 16 4 0 0. Goodness-of-Fit for Any Specified Distribution Goodness-of-fit tests can be applied to any specified distribution. Observed frequency.5625 1.5625 0. the owner of the bookstore determines the distribution of certain key words used by Shakespeare. not just those that are well known.0000 0.0000 0.5625 1. Note that no parameters are estimated from the sample data.05 = 16. we cannot reject the hypothesis that the distribution is uniform. The critical chi-square value is determined with a significance of α = 0. Since the calculated chi-square is less than the critical chi-square.2500 0.0625 0. 16. O Expected frequency. In order to substantiate the claims.9.5625 0.5625 1. A collector of old books has been approached by a seller of what has been claimed to be a rare. Fifteen works of Shakespeare are . χ29.05 and the degrees of freedom of: df = number of classes (adjusted) – 1 – number of estimated parameters df = 10 – 1 – 0 = 9.H1317_CH09. work of William Shakespeare.

These data provide an opportunity to examine the true distribution to the suspect distribution. . NJ: Prentice Hall. In this example the observed rate is that of the suspect. R. withal.50 0.05 = 9. 5th edition. 1972. Bibliography Walpole. and hadst.qxd 90 10/15/07 11:55 AM Page 90 The Desk Reference of Statistical Quality Methods examined for the frequency of the use of the words thou. Englewood Cliffs. thine.89 5. wouldst.58 1.49. The following computation reveals the truth about the suspect manuscript. therefore. the critical chi-square value is χ 2 4 . and the expected is that of the true Shakespeare works.49 Setting the level of significance at 5 percent and the degrees of freedom at 4.07 1. The calculated chi-square is greater than the critical chi-square. The frequency of the words per 1000 words of text is determined and compared with the suspect manuscript. Probability and Statistics for Engineers and Scientists.H1317_CH09. E. Word Thou Withal Wouldst Thine Hadst Observed rate (suspect) Expected rate (true Shakespeare) O–E (O – E )2 40 18 58 13 44 58 24 56 19 31 –18 –6 2 –6 13 324 36 4 36 169 (O – E )2/E 5.45 Calculated chi-square: 14. 0. we conclude that the suspect manuscript is not a work of William Shakespeare with a risk of 5 percent.

“Has the process changed?” It looks at the entire distribution rather than just the average and standard deviation as with conventional control charts. A.H1317_CH10.” This control chart. The particle distribution affects the viscosity of the finished product for a given amount of clay added. W. . has the advantage of combining data that otherwise would have to be monitored using several individual control charts. then the chi-square control chart is applicable. Shewhart (1931. One advantage of the chi-square control chart is that it looks at the entire process and answers the question. Changes in the distribution of particle sizes will cause an adjustment in the concentration of clay to be added or might require adding a liquid to reduce the viscosity of the final product. ⎟ ⎜ ⎟ ⎜ ⎟ .qxd 10/15/07 12:04 PM Page 91 Chi-Square Control Chart In his pioneering book Economic Control of Quality of Manufactured Product. 91 . 297) states that “perhaps the single statistic most sensitive to change is the Chi-square function.” Shewhart goes on to say. Chi-square values are calculated using the following relationship: ⎛ (Obs − Exp)2 ⎞ ⎛ (Obs − Exp)2 ⎞ ⎛ (Obs − Exp)2 ⎞ χ2 = ⎜ + + . “One difficulty is that the Chi-square control chart can only be used for comparatively large samples. The following illustration demonstrates the concept of the chi-square control chart. however. For each lot. Analysis of the clay is accomplished by sifting the clay through a series of sieves and measuring the percent retention on each sieve. Background A powdered clay material is used in the production of a resin filler product. This information is used to determine the chi-square value for each lot analysis. Dr. There are nine sieves of various sizes used. The average percent retention for each sieve size is determined. 
If the objective is to monitor the change in a distribution arising from a change in either the location statistic or the variation statistic and there are sufficient data to construct a frequency table such as a histogram. Exp Exp Exp ⎝ ⎠1 ⎝ ⎠2 ⎝ ⎠k where: Obs (observed) = the actual observed frequency of occurrence for strata or class k Exp (expected) = the expected frequency for strata or class k (can be a count or a percent). the percent retained on each of the nine sieves is determined. Examples for use of the chi-square control chart would be monitoring the weight of capsules or the distribution of particle sizes using sieve analysis data. These adjustments also affect the bonding characteristics of the resin. Data for 15 lot analyses are collected. .

The data for the 15 lot analyses are shown in the following table:

Sieve size
Date     20   28   35   48   65  100  150  180  200
11/1      2    9   17   17   19   13   11    9    3
11/2      1   11   13   22   20   11    9   10    3
11/3      4   10   15   21   16   10   12    7    5
11/4      3   10   18   20   17   12    9    9    2
11/5      2    8    9   18   19   16   10   15    3
11/6      3    9   15   22   18   13   11    8    1
11/7      4   12   19   19   15   10   10    7    4
11/8      2    9   17   22   18   11    9    9    3
11/9      4   12   15   19   16   12   10    8    4
11/10     2   10   18   21   17   14   11    5    2
11/11     3    7   15   20   18   12   13    9    3
11/12     5    8   17   22   16   13    9    5    5
11/13     1   11   16   20   17   11    8    7    3
11/14     3   14   14   21   18   12    7    9    2
11/15     3   10   17   19   17   11    9   10    4
X̄      2.80 10.00 15.67 20.20 17.40 12.07 9.87 8.47 3.13

Figure  Particle size distribution, average (initial process characterization, 11/1–11/15): percent retained versus sieve size.

A chi-square value is determined for each lot analysis, using the average percent retention for each sieve (the X̄ row) as the expected value. For the 11/1 lot:

χ²₁ = (2.00 − 2.80)²/2.80 + (9.00 − 10.00)²/10.00 + (17.00 − 15.67)²/15.67 + . . . + (3.00 − 3.13)²/3.13
χ²₁ = 1.299

The remaining lot analysis results have their chi-square values calculated as in the previous example. A summary of all 15 chi-square values is as follows:

Date    Chi-square
11/1    1.299
11/2    2.966
11/3    5.859
11/4    1.089
11/5    0.990
11/6    0.017
11/7    3.060
11/8    3.124
11/9    3.150
11/10   1.378
11/11   2.582
11/12   2.821
11/13   1.456
11/14   3.469
11/15   2.765

The control limits for the chi-square control chart are based on the following relationships, where k = the number of classes or strata in which the data are reported. For this example, k = 9.

Center line (CL) = k − 1 = 8
Upper control limit (UCL) = CL + 3√[2(k − 1)] = 8 + 12 = 20.0
Lower control limit (LCL) = CL − 3√[2(k − 1)] = 8 − 12 = −4 ⇒ 0

All chi-squares are positive; therefore, if a negative limit is derived, default to zero. We now establish the control chart and plot the calculated chi-square values.

Figure: Chi-square control chart, initial history (UCL = 20.0, CL = 8, LCL = 0.0; samples 1–15).
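The statistic and chart limits above can be sketched in code. The following Python fragment is not from the book (the function names are ours); it computes a chi-square value from an observed and an expected distribution and the 3-sigma chart limits for k classes:

```python
import math

def chi_square_statistic(observed, expected):
    """Chi-square value comparing an observed frequency distribution
    with the expected (historical average) distribution."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def chi_square_chart_limits(k):
    """Control chart parameters for k classes:
    CL = k - 1, UCL/LCL = CL +/- 3*sqrt(2*(k - 1)).
    A negative LCL defaults to zero, since chi-square is never negative."""
    cl = k - 1
    spread = 3 * math.sqrt(2 * (k - 1))
    return max(cl - spread, 0.0), cl, cl + spread

lcl, cl, ucl = chi_square_chart_limits(9)  # nine sieve classes
print(lcl, cl, ucl)  # 0.0 8 20.0
```

For the nine-sieve example this reproduces the limits derived above (LCL = 0, CL = 8, UCL = 20.0).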

The control limits are based on ±3S; signs of a process (distribution) change will be indicated if a calculated chi-square exceeds the UCL of 20.0. There is a 99.7 percent probability that samples chosen from a population matching the historical characterization will have a calculated chi-square less than 20.0 (for this example). The first 15 samples are compared to a parent distribution defined by the expected (average) percent retention on each sieve: 2.8 percent on sieve #20, 10.0 percent on sieve #28, 15.7 percent on sieve #35, and so on. The greater the deviation from the expected distribution, the greater the chi-square value.

Additional data are taken. Calculate the chi-square values for the next five lots of material and see if there is evidence of a process (distribution) change.

Date           20   28   35   48   65  100  150  180  200
11/16           3    9   17   21   13   11   12    6    8
11/17           2    7   14   23   16   14    9    9    6
11/18           1   11   15   19   15   10   11   11    7
11/19           4    9   12   17   17   15   12    9    5
11/20           5   11   16   22   14   14   11    6    1
Avg. (16–20)  3.0  9.4 14.8 20.4 15.0 12.8 11.0  8.2  5.4
Avg. (1–15)   2.8 10.0 15.7 20.2 17.4 12.1  9.9  8.5  3.1

Note the difference in the two distributions.

Figure: Particle size distribution, 11/16–11/20 (percent retained versus sieve size).

Chi-square values for samples 11/16 through 11/20 are as follows:

Date    Chi-square
11/16   22.73
11/17   22.77
11/18   16.75
11/19   20.75
11/20   20.76

The new chi-square values are plotted on the original chart. The results can be seen in the following figure.

Figure: Chi-square control chart with samples 11/16–11/20 added (UCL = 20.0, LCL = 0.0).

The chart has detected this change in the base or historical distribution. An additional four lots are analyzed with the following results. Continue to calculate the chi-square values and plot the data.

Date           20   28   35   48   65  100  150  180  200
11/21           2   11   12   17   15   15   13   10    3
11/22           3    9   11   16   18   13   11   14    5
11/23           1   12   10   19   14   17   14   10    3
11/24           3   12   13   17   15   14   15    9    2
Avg. (21–24)  2.3 11.0 11.5 17.3 15.5 14.8 13.3 10.8  3.3
Avg. (16–20)  3.0  9.4 14.8 20.4 15.0 12.8 11.0  8.2  5.4
Avg. (1–15)   2.8 10.0 15.7 20.2 17.4 12.1  9.9  8.5  3.1

A significant change in the distribution of the data in samples 16–20 has caused the chart to be out of control. The distribution of the data in samples 21–24 is not sufficient to cause a change in the control chart pattern relative to the historical data contained in samples 1–15.

Figure: Chi-square control chart, samples 1–24 (UCL = 20.0, LCL = 0.0).

Figure: Particle size distribution, average (initial process characterization, 11/1–11/15).

Figure: Particle size distribution, 11/16–11/20.

Figure: Particle size distribution, 11/21–11/24.

Bibliography

Montgomery, D. C. 1996. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons.
Shewhart, W. A. 1980. Economic Control of Quality of Manufactured Product. Milwaukee, WI: ASQC Quality Press.

Confidence Interval for the Average

Two of the most used descriptive statistics are the average X̄ and the standard deviation S. As with all statistics, these are derived from samples taken from a larger population or universe and, therefore, are only estimates of the true parameter. All estimates are subject to error with respect to the degree to which they represent the truth. This discussion focuses on the confidence of the average as it represents the truth or parameter of the mean. The true average is actually the mean and is abbreviated μ.

Confidence can be thought of as a degree of certainty, and the opposite of confidence is risk or degree of uncertainty. The risk associated with confidence is abbreviated α. The sum total of confidence and risk is 100 percent: Confidence + Risk = 100%.

Confidence is related to the following:

1. Sample size n
2. Error of the estimate E

Confidence in the average can be stated in two specific ways:

1. Two-sided confidence intervals
2. Single-sided confidence limits

An example of the first case, the two-sided confidence interval, can be expressed in statements such as, "I don't know the true average, but I am 90 percent confident it is between 12.4 and 15.0." An example of a single-sided confidence limit would be, "I do not know the true average, but I am 95 percent confident that it is less than 34.5" or "I am 90 percent confident that the true average is greater than 123.6."

Calculations for confidence are based on probability distributions. For sample sizes less than 30, we use the t-distribution, and for sample sizes equal to or greater than 30, we use the normal distribution. The following example illustrates confidence calculations.

Confidence Interval, n ≤ 30

Confidence intervals for the average when the sample size is less than or equal to 30 are determined using the following relationship:

X̄ ± tα/2,n−1 (S/√n)

where: n = sample size
S = sample standard deviation
X̄ = sample average
tα/2,n−1 = a factor based on sample size and a confidence of 1 − α

The quantity S/√n is the standard error, and the quantity tα/2,n−1 represents the number of standard errors to add and subtract to the estimated average to define the confidence interval. In selecting levels of confidence, we typically choose values of 99.9, 99.0, 95.0, and 90.0 percent.

Example: What is the 90 percent confidence interval for the average, where n = 15, S = 1.2, and the sample average X̄ = 25.0? Since this is a confidence interval, we will be calculating an upper and a lower limit; therefore, we use half the risk (α/2 = 0.05).

Step 1. Determine the t value. The chosen level of confidence is 90 percent; therefore, the risk α = 0.10. Looking up the t value in a standard t-distribution table, we have tα/2,n−1 = 1.761.

Step 2. Determine the standard error. The standard error is S/√n = 1.2/√15 = 0.31.

Step 3. Calculate the error of the estimate E. E = tα/2,n−1(S/√n), E = 1.761 × 0.31, E = 0.55.

Step 4. Determine the confidence interval. Confidence interval = Point estimate ± error = 25.0 ± 0.55 = 24.45 to 25.55.

An appropriate statement for this case would be: "I do not know the true average (mean), but I am 90 percent confident that it is between 24.45 and 25.55."
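The four steps above can be collapsed into a short Python sketch (not from the book; the function name is ours, and the t value is read from a t-table rather than computed):

```python
import math

def mean_confidence_interval(xbar, s, n, t_crit):
    """Two-sided confidence interval for the mean when n <= 30.
    t_crit is t(alpha/2, n - 1) read from a standard t-table."""
    e = t_crit * s / math.sqrt(n)  # error of the estimate E
    return xbar - e, xbar + e

# Worked example from the text: n = 15, s = 1.2, xbar = 25.0,
# 90 percent confidence -> t(0.05, 14) = 1.761
lo, hi = mean_confidence_interval(25.0, 1.2, 15, 1.761)
print(round(lo, 2), round(hi, 2))  # 24.45 25.55
```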

Supplemental problem: Determine the 95 percent confidence interval when n = 22, S = 4.5, and X̄ = 45.0.

Confidence Interval, n > 30

The steps when the sample size is greater than 30 are exactly the same as with sample sizes less than or equal to 30, except we use the normal distribution rather than the t-distribution. Unlike the t-distribution, the normal distribution is not dependent on sample size, as it assumes the sample size to be infinite. We use the standard normal distribution in reverse by looking up the Z-score that will produce the appropriate α/2 value for a given level of confidence. This value is substituted for the quantity tα/2,n−1 used when n is less than or equal to 30.

Confidence C   Risk α   α/2      Zα/2
0.999          0.001    0.0005   3.29
0.995          0.005    0.0025   2.80
0.990          0.010    0.0050   2.58
0.950          0.050    0.0250   1.96
0.900          0.100    0.0500   1.65

Example: A sample n = 150 yields an average X̄ = 80.00 and a standard deviation S = 4.50. What is the 99 percent confidence interval for this point estimate?

Step 1. Determine the Zα/2 value. The chosen level of confidence is 99.0 percent; the risk α is 0.010, and α/2 is 0.0050. Looking in the body of a standard normal distribution table, we find that the nearest Z value is 2.58.

Step 2. Calculate the standard error. The standard error is given by σ/√n, where σ = true standard deviation and n = sample size. Our best estimate of σ is S, and in this case S = 4.50; therefore, our estimate of the standard error is 4.50/√150 = 0.37.

However, σ is never really known. For large sample sizes, we assume that σ is known and the normal distribution is appropriate to use, and we assume that the sample standard deviation S will serve as a good estimate of σ.

Step 3. Calculate the error of the estimate E. E = Zα/2(S/√n), E = (2.58)(4.50/√150), E = 0.95.

Step 4. Determine the confidence interval. Confidence interval = Point estimate ± error = 80.00 ± 0.95 = 79.05 to 80.95.

Confidence Limit, n > 30, Single-Sided Limit

The confidence limit differs from the confidence interval in that only one value is given: either an upper confidence limit or a lower confidence limit.

Example: What is the 95 percent upper confidence limit given a point estimate average X̄ of 65.00, a sample size of n = 75, and a standard deviation of S = 3.0?

Step 1. Determine the Zα value. Notice that we are determining a Zα rather than a Zα/2 as with the case of a two-sided confidence interval. This is because we are developing only an upper 95 percent confidence limit. Since we have specified a confidence of 0.95, the risk α will be 0.05. We will use the standard normal distribution in reverse; that is, we will determine what Z-score will give us a value equal to the risk α. The corresponding Z-score that will give a value of 0.0500 in the main body of the table is 1.65.

Other Z values for selected levels of confidence for a single-sided limit are as follows:

Confidence C   Risk α   Zα
0.999          0.001    3.09
0.995          0.005    2.58
0.990          0.010    2.33
0.950          0.050    1.65
0.900          0.100    1.28
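The single-sided calculation can be sketched in Python (our function name; the z value is taken from a one-sided table, so α is not halved):

```python
import math

def upper_confidence_limit(xbar, s, n, z_crit):
    """Single-sided upper confidence limit for the mean, large sample.
    z_crit is z(alpha) -- note alpha, not alpha/2, for a one-sided limit."""
    return xbar + z_crit * s / math.sqrt(n)

# Values from the example: xbar = 65.00, s = 3.0, n = 75, 95% -> z = 1.65
print(round(upper_confidence_limit(65.0, 3.0, 75, 1.65), 2))  # 65.57
```

A lower limit is the same sketch with the error subtracted instead of added.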

Step 2. Calculate the standard error. S/√n = 3.0/√75 = 0.35.

Step 3. Determine the error of the estimate E. E = Zα(S/√n), E = 1.65(3.0/√75), E = 0.57.

Step 4. Calculate the upper 95 percent confidence limit. Upper 95% confidence limit = Point estimate + Error = 65.00 + 0.57 = 65.57.

An appropriate statement for this example would be: "I do not know the true average (mean), but I am 95 percent confident that it is not greater than 65.57."

Supplemental problem: What is the 90 percent lower confidence limit given a standard deviation of 12.0, an average of 150, and a sample size of 200?

Confidence Limit, n ≤ 30, Single-Sided Limit

The steps are identical to those for the sample n > 30, except use the t-distribution.

Example: What is the lower 90 percent confidence limit where n = 25, S = 10.0, and X̄ = 230.0?

Step 1. Determine the t value. Risk α = 0.10, and n = 25 (we look up n − 1, or 24, degrees of freedom, df). The t value from the t-distribution table is 1.318.

Step 2. Calculate the standard error. S/√n = 10/√25 = 2.00.

Step 3. Calculate the error of the estimate E. E = tα,n−1(S/√n), E = 1.318(10/√25), E = 2.64.

Step 4. Determine the 90 percent lower confidence limit. Lower 90% confidence limit = Point estimate − Error = 230.0 − 2.64 = 227.36.

An appropriate statement would be: "We do not know the true mean, but we are 90 percent confident that it is not less than 227.36."

Problems:

1. If the error for an average is excessive, what two things could be done to reduce it?

2. Calculate the 85 percent upper confidence limit for the following data: 22.8, 22.5, 23.9, 23.6, 24.6, 20.1, 23.7, 25.0, 23.1

3. Calculate the 90 percent confidence interval given the following: Average = 68.0, Standard deviation = 3.6, Sample size = 190

4. Perform the same calculation as Problem 3, except use a sample size of 25.

Bibliography

Dovich, R. A. 1992. Quality Engineering Statistics. Milwaukee, WI: ASQC Quality Press.
Gryna, F. M., R. C. H. Chua, and J. A. De Feo. 2007. Juran's Quality Planning and Analysis for Enterprise Quality. 5th edition. New York: McGraw-Hill.
Petruccelli, J. D., B. Nandram, and M. Chen. 1999. Applied Statistics for Engineers and Scientists. Upper Saddle River, NJ: Prentice Hall.

Confidence Interval for the Proportion

Whenever we calculate a descriptive statistic such as an average, proportion, or standard deviation from a random sample selected from a larger set of observations, we assume that the result represents to some degree the true nature (parameter) of the population from which the sample was chosen. As with any statistic, there is a risk of error. The degree to which the subject statistic represents the population is expressed in the confidence of the estimate (statistic). We will focus the following discussion on error as it relates to proportions.

For example, assume that we interview 150 individuals chosen at random to find out if they are satisfied with the service provided by a local hospital and find that 27 are dissatisfied. From these data, we calculate a proportion of 0.18, or 18 percent. The calculated proportion nonconforming to our requirement of satisfied customers is 0.18. While this answer is absolutely correct for the sample, it may or may not reflect the true feelings of the entire community. The population of the community that the hospital served was 100,000; the sample size was 150, and the number of individuals who responded negatively was 27. This 0.18 is an estimate of the true condition, and, as with any estimate, there is always an associated risk of error. The only way to totally avoid error is to poll the entire population of 100,000.

The following four distinct cases will be considered:

1. Two-sided confidence intervals where the sample size is relatively small compared to the population, n ≤ 0.05N
2. Two-sided confidence intervals where the sample size is relatively large compared to the population, n > 0.05N
3. Single-sided confidence limits where the sample size is small relative to the population, n ≤ 0.05N
4. Single-sided confidence limits where the sample size is large relative to the population, n > 0.05N

The degree to which the sample estimate reflects the truth is defined by our level of confidence. Confidence and risk must total 100 percent:

% Confidence + % Risk = 100%
% Risk = 100% − % Confidence

If we are 90 percent confident, we are assuming a risk of 10 percent. If we are 95 percent confident, we are assuming a risk of 5 percent. Confidence and risk are expressed as a proportion rather than as a percentage. All statistics are subject to error.

The error E (sometimes referred to as standard error SE) for the proportion p is calculated as follows:

E = √[p(1 − p)/n]

where: p = proportion, n = sample size. Note: This equation is to be used when n ≤ 0.05N.

In order to express a level of confidence in the estimate of the proportion, we add and subtract a specific number of standard errors to the point estimate. The number of standard errors is Zα/2, where α is the risk. The greater the level of confidence, the more standard errors we add and subtract. When the error is added and subtracted to the estimate, we are determining a two-sided limit, or confidence interval. If we are 90 percent confident, we are assuming a 10 percent risk that the true proportion will not be inside the limits defined by the point estimate ±Zα/2E:

p ± Zα/2 √[p(1 − p)/n]

where: p = proportion, n = sample size, Zα/2 = factor dependent on the risk assumed.

If all of the error is applied to one side of the estimate, we refer to a single-sided confidence limit and use Zα. The Zα values are used in single-sided confidence limits and will be discussed later. The quantities Zα/2 are to be used for upper and lower confidence intervals. The multiple of errors E to be added and subtracted can be found in Table 1.
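The standard error and two-sided interval can be sketched in Python (a sketch with our own function name, not the book's code; the hospital example from the beginning of the chapter is used for illustration):

```python
import math

def proportion_ci(p, n, z_crit):
    """Two-sided confidence interval for a proportion (use when n <= 0.05*N).
    z_crit is z(alpha/2) from a standard normal table."""
    e = z_crit * math.sqrt(p * (1 - p) / n)  # z times the standard error
    return p - e, p + e

# Hospital example: 27 dissatisfied out of n = 150, 90 percent -> z = 1.645
lo, hi = proportion_ci(27 / 150, 150, 1.645)
print(round(lo, 2), round(hi, 2))  # 0.13 0.23
```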

Table 1 Factors for confidence intervals, two-sided limits.

Confidence C   Risk α   α/2      Zα/2
0.999          0.001    0.0005   3.291
0.990          0.010    0.0050   2.576
0.950          0.050    0.0250   1.960
0.900          0.100    0.0500   1.645

Example 1: Calculate the 90 percent confidence interval for the previous example, where n = 150 and p = 0.18. The total population is 100,000; therefore, n ≤ 0.05N. From Table 1, the appropriate Zα/2 for a level of confidence of 90 percent is 1.645, and using the relationship

p ± Zα/2 √[p(1 − p)/n] → 0.18 ± 1.645 √[0.18(1 − 0.18)/150] → 0.18 ± 0.05,

the 90 percent confidence interval is 0.13 to 0.23.

An appropriate statement would be: "I do not know the true percentage of the population that is dissatisfied with the service, but I am 90 percent confident it is between 13.0 percent and 23.0 percent."

If the error is considered excessive at ±5%, we can reduce it by:

1. Increasing the sample size n
2. Reducing the level of confidence

Example 2: Eighty-five records are sampled out of a total of 6000. From these 85 records, 15 are found to be incorrect. What is the 95 percent confidence interval for the proportion? n ≤ 0.05N, and p = 15/85 = 0.176. The standard error of the estimate E is given by

E = √[p(1 − p)/n] = √[0.176(1 − 0.176)/85] = 0.041.

The confidence interval is given by p ± Zα/2E.

For a 95 percent confidence, Zα/2 = 1.96.

95% confidence interval = 0.176 ± 1.96(0.041) = 0.176 ± 0.080

The lower 95 percent confidence limit is 0.176 − 0.080 = 0.096, and the upper 95 percent confidence limit is 0.176 + 0.080 = 0.256.

An appropriate statement would be: "I do not know the true proportion of records that are incorrect, but I am 95 percent confident it is between 9.6 percent and 25.6 percent."

Example 3: One hundred fifty items are randomly sampled from a population of 4500. Six items are found to be nonconforming to the specification. What is the 99 percent confidence interval?

p = 6/150 = 0.040
E = √[p(1 − p)/n] = √[0.040(1 − 0.040)/150] = 0.016
Zα/2 for 99 percent confidence is 2.576.
99% confidence interval = 0.040 ± 2.576(0.016) = 0.040 ± 0.041

The upper 99 percent confidence limit is 0.040 + 0.041 = 0.081. The lower 99 percent confidence limit is 0.040 − 0.041 = −0.001 ⇒ 0. When a negative lower limit results, default to zero.

Case 2: Two-sided confidence interval where the sample size is large relative to the population, n > 0.05N

When the sample size n is greater than 0.05N, a correction factor must be applied to the standard error. This correction factor is called the finite population correction factor and is given by

√[(N − n)/(N − 1)] ≅ √(1 − n/N).

The standard error must be multiplied by this correction factor anytime the sample size n is greater than 0.05N.

The standard error for these cases now becomes

E = √[p(1 − p)/n] √(1 − n/N).

The confidence interval is now determined by

p ± Zα/2 √[p(1 − p)/n] √(1 − n/N).

Example 1: What is the 90 percent confidence interval for the proportion defective where the sample size n = 50, the population N = 350, and the number defective np = 12?

p = 12/50 = 0.24, Zα/2 = 1.645
E = √[0.24(1 − 0.24)/50] √(1 − 50/350) = 0.056
90% confidence interval = 0.24 ± 1.645(0.056) = 0.24 ± 0.092

The 90% lower confidence limit is 0.24 − 0.092 = 0.148. The 90% upper confidence limit is 0.24 + 0.092 = 0.332.

Example 2: A small engine-repair company serviced 6800 customers last quarter. A survey was sent to the entire customer base, and of the 1500 who responded, 60 indicated that they felt the length of time for engine repairs was excessively long. What is the 95 percent confidence interval for the resulting percent of dissatisfied customers?

N = 6800, n = 1500, np = 60, p = 0.04, C = 95%

The sample size is 22 percent of the population, which exceeds the criterion of n > 0.05N; therefore, we must apply the finite population correction factor to the error calculation.
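The corrected calculation can be sketched in Python (our function name; the N = 350 example values are used in the usage lines):

```python
import math

def proportion_ci_fpc(p, n, N, z_crit):
    """Two-sided confidence interval for a proportion with the finite
    population correction factor, used when n > 0.05*N."""
    se = math.sqrt(p * (1 - p) / n) * math.sqrt(1 - n / N)
    e = z_crit * se
    return p - e, p + e

# Example 1 above: n = 50, N = 350, 12 defective, 90 percent -> z = 1.645
lo, hi = proportion_ci_fpc(12 / 50, 50, 350, 1.645)
print(round(lo, 3), round(hi, 3))  # 0.148 0.332
```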

Zα/2 = 1.960
E = √[0.04(1 − 0.04)/1500] √(1 − 1500/6800) = 0.004

The confidence interval is p ± Zα/2E = 0.040 ± (1.96)(0.004) = 0.040 ± 0.008. The lower 95 percent confidence limit is 0.040 − 0.008 = 0.032, and the upper 95 percent confidence limit is 0.040 + 0.008 = 0.048.

Case 3: Single-sided confidence limit where the sample size is small relative to the population, n ≤ 0.05N, and Case 4: Single-sided confidence limit where the sample size is large relative to the population, n > 0.05N

In all of the previous cases, we have been dividing the error of the estimate and applying it to the upper and lower limits to establish a confidence interval. With confidence intervals, an appropriate statement would be: "I do not know the true percent nonconforming, but I am 90 percent confident it is between 12.7 percent and 18.5 percent." With a single-sided confidence limit, a similar statement would be: "I do not know the true percent nonconforming, but I am 95 percent confident it is not less than 23.5 percent."

If all of the error is associated with either the upper or the lower end of a confidence interval, we are calculating a confidence limit rather than a confidence interval. The only parameter that changes is the term Zα/2, which is replaced with Zα:

Confidence limit = p + ZαE or confidence limit = p − ZαE

A list of selected Zα can be found in Table 2.

Table 2 Factors for single-sided confidence limits.

Confidence C   Risk α   Zα
0.999          0.001    3.090
0.990          0.010    2.326
0.950          0.050    1.645
0.900          0.100    1.282

Example 1: A sample of 90 individuals are interviewed during the week as they are leaving the Le Cash, Le Penty restaurant. Of those interviewed, 28 percent felt that the service was a little too slow. During the same week, 3500 customers exited the restaurant. What is the upper 90 percent confidence limit for the estimated percent of dissatisfied customers? Since n < 0.05N, the finite population correction factor is not required.

p + Zα √[p(1 − p)/n] → 0.28 + 1.282 √[0.28(1 − 0.28)/90] → 0.28 + 0.06 = 0.34

The upper 90 percent confidence limit is 0.34. An appropriate statement would be: "I do not know the true percentage of customers who feel that the service is too slow, but I am 90 percent confident that it is not greater than 34 percent."

Example 2: What is the lower 95 percent confidence limit for a proportion if the total population N = 1500, the sample size n = 400, and the number nonconforming is 36? Since the sample size exceeds 0.05N, the finite population correction factor must be used.

p = 36/400 = 0.09, Zα = 1.645

p − Zα √[p(1 − p)/n] √(1 − n/N) → 0.09 − 1.645 √[0.090(1 − 0.090)/400] √(1 − 400/1500) → 0.090 − 0.020 = 0.070

The lower 95 percent confidence limit is 0.070. An appropriate statement would be: "I do not know the true proportion, but I am 95 percent confident that it is not less than 0.070."

Additional problems: Calculate the confidence intervals for each of the following:

N = 12,000, n = 120, np = 7, C = 90%
N = 1800, n = 200, np = 12, C = 95%
N = 50,000, n = 160, np = 42, C = 99%

Sample Size Required Given a Level of Confidence C and Acceptable Error SE

Up to this point, we have been determining the amount of error associated with a given level of confidence with the sample size specified. As a more practical matter, we would like to know what sample size would be required to ensure a level of confidence with a predetermined amount of acceptable error. This can be accomplished by simply rearranging the equations for the error and solving for the sample size n.

There are two cases to consider:

1. Sample size n, where n ≤ 0.05N:

n = (Zα/2)² p(1 − p)/E²

where: p = sample proportion, Zα/2 = factor for a two-sided confidence interval with a level of confidence of 1 − α, E = acceptable error.

2. Sample size n, where n > 0.05N:

n = p(1 − p)/[(E/Zα/2)² + p(1 − p)/N]

where: p = sample proportion, Zα/2 = factor for a two-sided confidence interval with a level of confidence of 1 − α, E = acceptable error.

If we are calculating a confidence limit (single sided), then use Zα rather than Zα/2.

The only obstacle in using these two relationships to solve for the correct sample size n is that we need to use the quantity p in the calculation, and it is the determination of p for which we want to know n. We may use the following options:

1. Do a preliminary sampling to provide an initial estimate of the proportion p. This initial sampling should be sufficiently large as to provide at least two or three nonconforming units.
2. Use a previous sampling or survey data.
3. Let p = 0.50. This will give us the absolute maximum sample size that we will ever require. This sample size may be more than required.
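The two sample-size relationships can be sketched in Python (our function names, not the book's; the usage lines assume a prior estimate p = 0.15, z = 1.96 for 95 percent confidence, E = 0.03, and a population of N = 3200):

```python
import math

def sample_size_simple(p, z_crit, e):
    """Required sample size for estimating a proportion, n <= 0.05*N."""
    return z_crit ** 2 * p * (1 - p) / e ** 2

def sample_size_fpc(p, z_crit, e, N):
    """Required sample size with the finite population correction,
    used when the simple result exceeds 0.05*N."""
    return p * (1 - p) / ((e / z_crit) ** 2 + p * (1 - p) / N)

n0 = sample_size_simple(0.15, 1.96, 0.03)
print(round(n0))  # 544
# 544/3200 is more than 5 percent of the population, so recalculate:
print(round(sample_size_fpc(0.15, 1.96, 0.03, 3200)))  # 465
```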
Example 1: You have been asked to develop a customer-satisfaction survey to determine the percentage of customers dissatisfied with a certain aspect of a service. The level of confidence you want is 95 percent, and the amount of error you are willing to tolerate is E = ±0.03. The total number of customers who received service during the subject period of time was N = 3200. A similar survey was conducted last quarter, and the results of that survey showed that 15 percent of the customers were dissatisfied, so let p = 0.15. Since we do not know what percentage of the population the sample will be, we will assume that it will be less than 5 percent of the population of N = 3200; if the resulting sample is greater than or equal to 5 percent, the sample size will be recalculated. What is the correct sample size for the new survey?

n = (Zα/2)² p(1 − p)/E² = (1.960)²(0.15)(1 − 0.15)/(0.03)² → n = 544

To test whether the finite population factor is required, we determine the percentage of the population the sample represents: 544/3200 = 0.17. Since 0.17 ≥ 0.05, we should recalculate using the correction factor version:

n = p(1 − p)/[(E/Zα/2)² + p(1 − p)/N] = 0.15(1 − 0.15)/[(0.03/1.960)² + 0.15(1 − 0.15)/3200] → n = 465

A sample size of 465 should be taken. We can be 95 percent confident that the resulting proportion will be within ±0.03 of the true proportion.

Example 2: What is the required sample size n for the two-sided confidence interval given Confidence C = 0.99, Error E = 0.05, Population N = 4500, and Estimate of p = 0.05? Should the finite population correction factor be applied for this case?

Bibliography

Dovich, R. A. 1992. Quality Engineering Statistics. Milwaukee, WI: ASQC Quality Press.
Gryna, F. M., R. C. H. Chua, and J. A. De Feo. 2007. Juran's Quality Planning and Analysis for Enterprise Quality. 5th edition. New York: McGraw-Hill.
Petruccelli, J. D., B. Nandram, and M. Chen. 1999. Applied Statistics for Engineers and Scientists. Upper Saddle River, NJ: Prentice Hall.


The error for the sample standard deviation is calculated using the chi-square distribution. we can determine the sample standard deviation from the relationship S= Σ( X − X ) 2 . n −1 where: X = individual value – X = average of individuals n = sample size ∑ = sum. σ2 We rewrite as follows so that σ2 is isolated in the middle term: 1 X 2 1− α / 2 X 2 α/2 .H1317_CH13. where: n = sample size s = sample standard deviation σ = population standard deviation (unknown). The probability of risk is α. Cumulative probabilities for the chi-square distribution can be determined as a function of the area associated with various tail areas designated as α/2 for the upper portion of the distribution and 1 – α/2 for the lower portion of the distribution. The square of the standard deviation is the variance.qxd 10/15/07 12:38 PM Page 115 Confidence Interval for the Standard Deviation The standard deviation is one of several variation statistics (those descriptive statistics that measure variation). As with all statistics. For each continuous and discrete distribution. Others include the range and maximum absolute deviation. < < ( n − 1) s 2 σ 2 ( n − 1) s 2 115 . For the normal continuous distribution. Derive the confidence interval for the standard deviation by X 2 1− α / 2 < ( n − 1) s 2 < X 2 α/2 . the standard deviation is subject to error. there is an associated variance and standard deviation. The standard deviation can be described as a standard way of measuring the deviation (variation) of the individuals from their average.

If we reverse the order of the terms.7 0.05.6 17. the result is now (n − 1)s2 (n − 1)s2 .6 and χ 2 0. 29 = 42.0358 C = .H1317_CH13.90 α = .0295 and 0.0458. (29 )(0. 29 χ20. What is the 90 percent confidence interval for the standard deviation? n = 30 s = 0. the quality engineer has taken a sample of 30 parts (all of which have the same target value) and determined the standard deviation s to be 0. Taking the square root of the three terms gives the confidence interval for the standard deviation. 29 = 17.10 (n − 1)s2 (n − 1)s2 < < σ χ2 a / 2 χ21− a / 2 (29)(0.05.7.0358 )2 (29 )(0.qxd 116 10/15/07 12:38 PM Page 116 The Desk Reference of Statistical Quality Methods Taking the reciprocal of the three terms: (n − 1)s2 (n − 1)s2 2 .0358 )2 <σ< 42.0295 < σ < 0.95. . 2 < < σ χ 2α / 2 χ21− α / 2 This expression represents the 100(1 – α) percent confidence interval for the variance.0358)2 < σ < χ20. (n − 1)s2 (n − 1)s2 < < σ χ 2α / 2 χ21− α / 2 Example of confidence interval calculation: As part of a study for variation reduction in an injection molding operation.0458 We are 90 percent confident that the true standard deviation σ is between 0.95. > > σ χ21− α / 2 χ 2α / 2 Taking the reciprocals reverses the direction of the inequalities. we find χ 2 0.0358)2 (29)(0. 29 Looking up the chi-square value in a chi-square table using n – 1 degrees of freedom.0358.

Notice that the interval is nonsymmetrical with respect to the point estimate of sigma, s = 0.0358: the difference between the point estimate and the upper 90 percent confidence limit is 0.0100, while the difference between the lower 90 percent limit and the point estimate is 0.0063.

Testing Homogeneity of Variances

On occasion we might want to know if the standard deviations are equal for several populations. One such case might be the use of the deviation from nominal as a response for a short-run SPC control chart. It would be appropriate to use this response provided that the ability to produce at a target were the same for all products being made. If the variation from target were different among products, then the population standard deviations (and hence the variances) would also be different. A method for determining if the population variances are different for several populations is the one developed by M. S. Bartlett, which is referred to as Bartlett's test. The derived statistic K²(k − 1) is approximately distributed as χ² with k − 1 degrees of freedom, where k = the number of populations compared. Bartlett's test in this discussion requires that the sample sizes all be equal.

The K statistic is calculated as follows:

K²(k − 1) = [k(n − 1) ln(ΣS²/k) − (n − 1) Σ ln S²]/C,

where:

C = 1 + (k + 1)/[3k(n − 1)]
k = number of populations or samples
n = sample size (same for all k samples).

Example:

The objective is to establish a short-run SPC control chart based on the deviation from nominal for several products. The specific characteristic will be the melt index. The following table gives five measurements (sample size n = 5) for each of nine products (k = 9).

                                      Product
Sample number      A      B      C      D      E      F      G      H      I
1                123     48    228    245     67    155     88    101    132
2                127     50    230    247     70    150     89    113    132
3                132     55    225    249     73    143     81     96    137
4                125     46    229    239     65    160     76    115    138
5                126     50    222    241     71    162     92    106    130
S              3.362  3.347  3.271  4.147  3.194  7.714  6.535  7.981  3.493
S²              11.3   11.2   10.7   17.2   10.2   59.5   42.7   63.7   12.2
ln S²          2.425  2.416  2.370  2.845  2.322  4.086  3.754  4.154  2.501
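Bartlett's statistic can be computed directly from the melt-index table. Below is a minimal Python sketch (an illustration, not part of the original text); the nine lists are the product columns A through I:

```python
from math import log

def bartlett_k2(samples):
    # Bartlett's K-squared for k equal-size samples of size n, per the formula above
    k, n = len(samples), len(samples[0])
    variances = []
    for group in samples:
        mean = sum(group) / n
        variances.append(sum((x - mean) ** 2 for x in group) / (n - 1))
    c = 1 + (k + 1) / (3 * k * (n - 1))
    pooled = sum(variances) / k  # this is (sum of S^2)/k
    return (k * (n - 1) * log(pooled) - (n - 1) * sum(log(v) for v in variances)) / c

melt_index = [
    [123, 127, 132, 125, 126],  # product A
    [48, 50, 55, 46, 50],       # product B
    [228, 230, 225, 229, 222],  # product C
    [245, 247, 249, 239, 241],  # product D
    [67, 70, 73, 65, 71],       # product E
    [155, 150, 143, 160, 162],  # product F
    [88, 89, 81, 76, 92],       # product G
    [101, 113, 96, 115, 106],   # product H
    [132, 132, 137, 138, 130],  # product I
]
print(round(bartlett_k2(melt_index), 2))  # 9.62, below the table value of 15.51
```

The result agrees with the chapter's 9.624 to within the rounding of the tabled S² values.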

ΣS²/k = 26.522     Σ ln S² = 26.873     n = 5     k = 9

C = 1 + (k + 1)/[3k(n − 1)] = 1 + (9 + 1)/[27(5 − 1)] = 1.0926

K²(k − 1) = [k(n − 1) ln(ΣS²/k) − (n − 1) Σ ln S²]/C

K²(8) = [9(4) ln(26.522) − (4)(26.873)]/1.0926 = 9.624

K²(8) = 9.624 vs. χ²(8, 0.05) = 15.51

Since the calculated K² does not exceed χ²(8, 0.05), homogeneity (constant or equal variance) is a tenable hypothesis. A deviation from nominal control chart would be plausible.

Bibliography

Burr, I. W. 1974. Applied Statistical Methods. New York: Academic Press.
Dovich, R. A. 1992. Quality Engineering Statistics. Milwaukee, WI: ASQC Quality Press.
Petruccelli, J. D., B. Nandram, and M. Chen. 1999. Applied Statistics for Engineers and Scientists. Upper Saddle River, NJ: Prentice Hall.

Defects/Unit Control Chart

The selection of control charts depends on the following two factors:

1. Sample size: The sample size can vary in size or remain fixed.
2. Type of data collection: The data collected from the sample can be either a total number of defective units np in the sample or the total number of defects c.

If we want to vary the sample size and we want to tally the number of defects, the resulting control chart will be the defects/unit control chart, or u chart. The central tendency for the u chart is the average number of defects per unit for all of the samples used to provide the initial process characterization.

u = Total defects in k samples/Total number of units in k samples

Examples:

Thirty-five CDs were inspected, and three defects were observed. The average number of defects per unit is

u = 3/35 = 0.086.

Fifty-eight customer-satisfaction questionnaires were returned. Each questionnaire had 12 response areas. Each question had a choice of five responses: poor, below average, average, above average, and excellent. A response of less than average was deemed a nonconformance, or a defect. A total of four defects were noted in the 58 returned questionnaires. What is the average defects per unit?

u = 4/58 = 0.069

Normal variation for the average number of defects per unit u is defined as ±3 standard deviations about the average defects per unit. The limits of this normal variation define the upper control limits (UCLs) and lower control limits (LCLs) for the u chart. The standard deviation for defects is based on the Poisson distribution and is dependent on the average defects per unit and the sample size. That is to say that the UCL and LCL for the u chart are determined by

UCL = u + 3√(u/n) and LCL = u − 3√(u/n).

Note: The control limits are different for different sample sizes.

Example:

The average number of nonconformities per unit u for customers checking out of a hotel has been determined to be 0.055. This value was determined from exit surveys during a three-month period. If the daily sample size is 25, what are the UCL and LCL for this sample size?

UCL = 0.055 + 3√(0.055/25) = 0.196
LCL = 0.055 − 3√(0.055/25) = −0.086 = 0.000

Since the LCL is negative, we default to a zero control limit. The sample size, or subgroup sample size as it is frequently called, should be sufficiently large as to contain at least two or three defects.

Control charts are used to detect changes in processes. Before we can detect a change, we need to know how the process was performing in the past. Processes that are well behaved and statistically stable will have 99.7 percent of their data within these limits. The following steps for producing a u chart are the same as with any control chart for attributes:

Step 1. Collect historical data to provide a basis of process characterization. Typically, 25 samples k of size n will be collected. For each sample, the number of defects per unit will be determined.

Step 2. Determine a location statistic using all of the historical data. For the u chart, this location statistic is the average defects per unit of inspection or sampling. This statistic is determined by dividing the sum of all defects found by the sum of all samples taken to characterize the process.
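The limit formulas above amount to a few lines of code. A minimal Python sketch (an illustration, not part of the original text), using the hotel checkout figures:

```python
from math import sqrt

def u_chart_limits(u_bar, n):
    # u-bar +/- 3*sqrt(u-bar/n); a negative LCL defaults to zero
    spread = 3 * sqrt(u_bar / n)
    return u_bar + spread, max(0.0, u_bar - spread)

# u-bar = 0.055 nonconformities per checkout, daily sample size 25
ucl, lcl = u_chart_limits(0.055, 25)
print(round(ucl, 3), lcl)  # 0.196 0.0
```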

u = (c1 + c2 + c3 + . . . + ck)/(n1 + n2 + n3 + . . . + nk),

where:

c = number of defects
n = sample size

Step 3. Calculate control limits based on the process average ±3 standard deviations.

Step 4. Construct the chart and plot historical data. Traditionally, average lines are drawn as solid lines, and control limits are drawn using broken lines. The vertical plotting scale should be selected such that adequate room remains to plot future data points that might fall outside the control limits. Review the plotted points, looking for statistical stability and evidence that the process was under a state of statistical control (few or no points outside the control limits and no statistically rare patterns, or rule violations). These rule violations will be discussed later.

Step 5. Continue to collect and plot data into the future, looking for evidence of a process change. Change is evident if any of the SPC rule violations have occurred, including points outside the UCL or LCL.

Case Study

As the manager of a hotel, you want to establish a u chart to monitor the room-preparation activities. After a guest checks out, the room is cleaned and prepared for the next guest. An audit check sheet lists several items that relate to this preparation. The following nonconformities list will be used:

1. Ground fault failed test
2. Too few towels, face cloths in bathroom
3. Covers on bed not smooth or even
4. Dust on furniture
5. Floors dirty
6. No toiletries in bathroom
7. No house cleaning I.D./welcome card left
8. Bathroom not adequately cleaned

Each day for three weeks, 100 percent of the rooms have been audited for the eight items listed. Develop a u chart based on the data provided.

Step 1. Collect historical data.

Sample number    Date    Sample size n   Number of defects c   Defects/unit
1               11/1          75                  6               0.080
2               11/2         103                  8               0.078
3               11/3          53                  2               0.038
4               11/4          68                  6               0.088
5               11/5          88                  5               0.057
6               11/6         115                 11               0.096
7               11/7          90                  5               0.056
8               11/8          48                  2               0.042
9               11/9          65                  5               0.077
10              11/10        100                  3               0.030
11              11/11         85                  6               0.071
12              11/12         75                  4               0.053
13              11/13         55                  2               0.036
14              11/14         90                  5               0.056
15              11/15         60                  2               0.033
16              11/16         95                  7               0.074
17              11/17         70                  6               0.086
18              11/18        105                  4               0.038
19              11/19         87                  4               0.046
20              11/20         92                  6               0.065
21              11/21         70                  3               0.043

Step 2. Determine the location statistic u.

u = (c1 + c2 + c3 + . . . + ck)/(n1 + n2 + n3 + . . . + nk) = (6 + 8 + 2 + 6 + . . . + 3)/(75 + 103 + 53 + 68 + . . . + 70) = 0.060

Step 3. Calculate control limits.

The statistical control limits are based on the average defects per unit for the historical data ±3 standard deviations. The three standard deviations are calculated using the relationship 3√(u/n), where n = sample size. Technically speaking, every time the sample size changes, the standard deviation changes. From a practical perspective, the average sample size may be used in place of individual sample sizes when the individual sample size is within the limits of 0.75n̄ to 1.25n̄. The average sample size for this example is n̄ = 80.4. For any samples falling within the limits of 60 to 101, we may use the average sample size of 80.4 for the standard deviation calculation without significantly affecting the chart. With the exception of samples 2, 3, 6, 8, 13, and 18, we may use a sample size of 80.4 for calculation of the three standard deviations.

A. Control limits for sample sizes 60 to 101 inclusive: Let n̄ = 80.4.

UCL = 0.060 + 3√(0.060/80.4) = 0.060 + 0.082 = 0.142
LCL = 0.060 − 3√(0.060/80.4) = 0.060 − 0.082 = −0.022 = 0.000

A default LCL value of zero will be used. As a matter of fact, all LCLs using a process average defects/unit of 0.060 will yield a negative control limit when the sample size is less than 150.

B. Control limits for remaining samples:

Sample #2, n = 103
UCL = 0.060 + 3√(0.060/103) = 0.060 + 0.072 = 0.132

Sample #3, n = 53
UCL = 0.060 + 3√(0.060/53) = 0.060 + 0.101 = 0.161

Sample #6, n = 115
UCL = 0.060 + 3√(0.060/115) = 0.060 + 0.069 = 0.129

The remaining samples may be calculated in a similar manner. Note: The LCLs will not be calculated. (All LCLs defaulted to 0.)

For a given average defects/unit u, the minimum sample size to give a zero LCL is defined by the relationship

n = 9u/(u)².

For example, a process with an average number of defects/unit of 0.15 would require a sample size of

n = 9u/(u)² = (9)(0.15)/(0.15)² = 60.

The actual sample size should be a little larger than n = 60 such that the LCL will be slightly larger than zero. A summary of the historical data and UCLs follows.
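The case-study arithmetic above can be reproduced in a few lines. This is a minimal Python sketch (an illustration, not part of the original text); it uses the chapter's rounded u = 0.060 and n̄ = 80.4 so the limits match the text.

```python
from math import sqrt

defects = [6, 8, 2, 6, 5, 11, 5, 2, 5, 3, 6, 4, 2, 5, 2, 7, 6, 4, 4, 6, 3]
sizes = [75, 103, 53, 68, 88, 115, 90, 48, 65, 100, 85, 75, 55, 90, 60,
         95, 70, 105, 87, 92, 70]

u_bar = sum(defects) / sum(sizes)        # 102/1689
n_bar = sum(sizes) / len(sizes)
print(round(u_bar, 3), round(n_bar, 1))  # 0.06 80.4

ucl = 0.060 + 3 * sqrt(0.060 / 80.4)     # limits at the average sample size
print(round(ucl, 3))                     # 0.142

n_min = 9 * 0.15 / 0.15 ** 2             # minimum n for a nonzero LCL at u = 0.15
print(round(n_min))                      # 60
```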


qxd 10/18/07 11:46 AM Page 125 Defects/Unit Control Chart 125 Step 5. Continue to collect and plot data.045 0.011 0.130 0.H1317_CH14.060 defects/unit. This conclusion is supported by the violation of rule #2: seven consecutive points below the historical average.10 . This conclusion is supported by no violations of the SPC rule.16 .027 0. and where would you focus your attention to improve it? . A detailed discussion of these rules can be found in the module entitled SPC Chart Interpretation.02 0. it appears that the process average has decreased relative to the historical value of . Continue to collect data for the process and plot the data. On November 22.04 . A review of the historical data indicates a stable. Each member of the cleaning and preparation staff was given a report of the more frequently occurring nonconformances and were also given more training.034 0. Do you feel that the improvement effort was beneficial? Sample number Date Sample size n Number of defects c Defect/unit u 22 23 24 25 26 27 28 29 11/22 11/23 11/24 11/25 11/26 11/27 11/28 11/29 105 90 58 88 95 75 55 93 11 12 2 4 1 2 2 4 0.14 . do you feel that the process is in control.06 .105 0. a quality-improvement team decided to address the issue of room preparation in an effort to improve the process.043 Upon completion of the control chart. Completed u chart: History .0 1 5 10 15 20 25 30 Optional problem: The Stockton Plastics Corporation has been collecting data for approximately one week on the frequency and nature of defects that occur in the injection molding department.12 . in-control process.036 0. The following data were collected.08 . Based on this data. Each member of the staff was given a checklist that provided a sign-off to assist in making sure that all requirements were addressed.

00 3. 4th edition.H1317_CH14. D.75 2. Quality Control. H. $ 23 44 100 58 168 135 75 80 8. Montgomery.50 0.. New York: McGraw-Hill. S. 1996. C. D. S.25 11. 7th edition. Leavenworth. Englewood Cliffs. D. 1994. Chambers. and where would you focus your attention to improve quality? Why? Bibliography Besterfield.00 1. .60 Inspection data from 100 percent inspection: Type and frequency of defects Date 11/1 11/1 11/2 11/2 11/2 11/3 11/4 11/5 11/5 11/6 11/7 11/7 Lot size n 300 400 400 580 350 400 500 280 500 350 400 500 Total defects 5 12 8 15 14 22 10 14 25 7 24 18 23 44 100 58 168 135 75 80 1 3 0 2 0 4 1 2 5 3 0 1 0 2 2 4 0 5 4 2 4 3 0 4 2 3 3 3 2 5 3 2 3 0 4 4 1 0 5 0 9 7 2 6 5 0 9 6 0 1 0 4 1 0 0 1 3 0 5 3 1 0 0 0 2 1 0 1 2 0 0 2 0 0 0 1 0 0 0 0 2 0 2 1 0 0 0 1 0 0 0 0 1 1 4 0 What is the most frequently occurring defect. Wheeler. Grant. 1992. NJ: Prentice Hall. TN: SPC Press.25 0.. Understanding Statistical Process Control. and R. and D. J. Statistical Quality Control. 1996. 2nd edition. Introduction to Statistical Quality Control. L. New York: John Wiley & Sons. 3rd edition. Knoxville.80 3.qxd 10/18/07 126 11:46 AM Page 126 The Desk Reference of Statistical Quality Methods Defect list: Characteristic Defect code Cracked parts Splay Undersize Chips Flash Color Underweight Plagimitized Cost to repair. E.

2. Develop a list of defects that are categorized into classes of severity Collect historical data to characterize the process Determine the average demerits/unit and the associated standard deviation Calculate control limits based on the average ±3 standard deviations Construct the control chart with upper and lower limits. 5. or Du. 3. 3. looking for evidence of process changes The following illustration serves as a demonstration for the construction of the Du control chart.H1317_CH15. 6. defects are not always equal in their impact on the quality of the product or performance of the service to which they are associated. With some imagination. 127 . and plot the data Continue to monitor the process. in the audit of a sample of hotel rooms. Another example would be the results of the final audit of the readiness of a New York to Seattle commercial flight where three defects were found: (1) inadequate pillows relative to the specification for the flight. and (3) in-flight movie equipment not working. the resulting level of customer satisfaction is significantly different. 4. By categorizing defects into classifications and giving a relative weighting scale to specific defects.qxd 10/18/07 11:48 AM Page 127 Demerit/Unit Control Chart Control charts for attributes are broadly classified into the following four types: 1. As a point of practical concern. 2. if one finds that no hot water is available versus having two towels where three are required. we can develop a control chart that will not be overly sensitive to minor defects or under sensitive to critical defects. Defective with constant sample size np Defective with variable sample size p Defects with constant sample size c Defects with variable sample size u For both the c chart and the u chart there is a presumption that all defects are equal. defects in most operations and services can be divided into two or more classification levels with respect to the severity of the defects. 
or at least the severity of the nature of the defect will not influence behavior of the control chart. (2) a shortage of 25 meals for the flight. Essential operations for the Du control chart include: 1. For example. The following example illustrates the construction of the demerit/unit. 4. control chart. Such a treatment of defects will lead to the use of what is commonly referred to as a demerit/unit control chart. we know that in some cases.

The total demerits are calculated for sample #1 as follows: (1 critical × 100 demerits) + (2 major × 25 demerits) + (6 minor × 5 demerits) Total demerits for sample #1 = 180 The demerits per unit are equal to the total demerits divided by the total units inspected: 180/40 = 4. and a minor as 5 demerits.50 demerits/unit.qxd 128 10/18/07 11:48 AM Page 128 The Desk Reference of Statistical Quality Methods Step 1. . A tally sheet is maintained for the audit results summary.H1317_CH15. and minor defects. major. but three is a reasonable number to use in most cases. A critical defect counts as 100 demerits. we are allowing a critical defect to have 20 times the impact on the control as a minor and 4 times the impact as a major. the computed total defects for each sample of size n. More categories of classification may be used. Approximately 50 sets of samples as a minimum are taken to characterize the process for the period to be used as a reference time period. and the computed number of demerits per unit of inspection. a major as 25 demerits. Nonconformances to the requirement are classified as critical. Collect historical data for process characterization. cost to repair. Using the scale of critical = 100. Hotel rooms are audited to check compliance to a set of standard requirements. Develop a defect list and classifications. major = 25. Each day. Specific defects are as follows: Critical Ground fault detector malfunction TV not working No hot water Major No towels No telephone directory No shampoo Drapery drawstring broken Internet connection not working Minor Missing mint on bed in evening Dust on furniture Drawer moderately difficult to open No motel stationery Motel phone number missing from room phone Step 2. The demerits per unit are determined for each sample by dividing the total demerits for that sample by the total units inspected for that sample. This record sheet contains a count of the total critical. 
or impact on the ability of the service to be rendered adequately or the product to perform its intended mission. 40 rooms are audited after the housekeeping department has prepared the rooms for new guests. major. and minor. and minor = 5. Each defect is assigned demerits according to the classification. The number of demerits per classification is a management decision related to factors such as potential liability.

The standard deviation for —– the binomial is given by √npq. n – The average fractional defective term p varies from one type of defect to another. Rearranging this equation based on a proportional defective rather than a number defective.75 3.38 4. we must weigh the calculation of the standard deviation.qxd 10/18/07 11:48 AM Page 129 Demerit/Unit Control Chart 129 Table 1 Completed tally record.33 Step 3. p mi (1 − p mi ) . In order to combine the effects of these defects and allow a relative weighting in accordance with the demerit assignment. the standard deviation is equal to p (1 − p ) .75 1. and q = 1 – p.25 5. where n = sample size. This is accomplished by calculating a weighted average standard deviation.25 Totals: 600 8 32 80 2000 3. Of the 600 samples inspected (rooms audited). 32 that are major.50 2. p = fractional defective.13 5. The binomial distribution is used to model the Du control chart.13 2.50 2. there are 8 defects that are critical. σDu: σ Du = Wc p c (1 − p c ) + Wma n p ma (1 − p ma ) + Wmi n where: W = demerit assignment for classified defects p = fractional defects.50 4.38 5.25 2.75 3. Sample number Size n Date Criticals Majors Minors Total demerits Demerits/ unit 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 7/7/94 7/8/94 7/9/94 7/10/94 7/11/94 7/12/94 7/13/94 7/14/94 7/15/94 7/16/94 7/17/94 7/18/94 7/19/94 7/20/94 7/21/94 1 0 1 1 2 0 0 0 1 0 0 1 1 0 0 2 3 3 2 0 4 3 1 2 1 3 4 0 3 1 6 2 9 4 3 6 5 6 5 4 4 6 8 7 5 180 85 220 170 215 130 100 55 175 45 95 230 140 110 50 4. n . and 80 that are minor.H1317_CH15. Calculate the average demerits/unit and three standard deviations.50 1.38 1.

The average number of demerits/unit= Total demerits 2000 = = 3. we default to 0.7 percent of the data points within this range. Upper control limit (UCL) = 3.133 σ Du = 128.. Draw a broken line for the control limits and extend it halfway between the vertical lines that represent the 15th sample and the 16th sample. Locate the control limits and label them. Construct the control chart with upper and lower control limits and plot data. Total units inspected 600 Step 4.33.013 pma = 0. Determine control limits.70 A negative control limit is meaningless.133 . therefore.33 – 6.03 = 9. Draw a solid line for the average and extend it to the end of the chart. we have σ Du = Wc 2 p c (1 − p c ) + Wma 2 p ma (1 − p ma ) + Wmi 2 p mi (1 − p mi ) .37 + 2. n pc = the fractional contribution of the critical defects and is equal to the total number of critical defects found.00 for the LCL.053 600 and p mi = 80 = 0. we calculate the fractional contribution of the major and minor defects: p ma = 32 = 0. we compute the standard deviation: σ Du = Wc 2 pc (1 − pc ) + Wma 2 pma (1 − pma ) + Wmi 2 pmi (1 − pmi ) n Wc = 100 Wma = 25 Wmi = 5 pc = 0.44 162.01 = 6. 600 Using the following relationship.12 = = 2.013 . We expect to find approximately 99.36 Lower control limit (LCL) = 3.qxd 130 10/18/07 11:48 AM Page 130 The Desk Reference of Statistical Quality Methods Rearranging. divided by the total number of units inspected: pc = 8 = 0.053 pmi = 0.01. Step 5. . All Shewhart control chart limits are based on an average response ±3 standard deviations.H1317_CH15.03 = –2. 40 40 Three standard deviations = 3 × 2.33 + 6. 600 In a similar manner.31 + 31.03.

Using these rules and the following data. looking for changes in the future. They are sufficiently rare enough that when these patterns do occur. 9 History 8 7 Du 6 5 4 3 2 1 0 1 5 10 15 20 25 Step 6. demerits/unit. A single point is present outside the control limits 3.33 demerits per unit? Note that the sample size has been increased to n = 120. A wavy vertical line is drawn between sample #15 and sample #16 to separate the future values from the historical data that were used to develop the control chart average and control limits.qxd 10/18/07 11:48 AM Page 131 Demerit/Unit Control Chart 131 Record all information in the data area. we assume that a process change has taken place. Seven points are steadily increasing or decreasing These three indicators or rules are statistically rare with respect to their probability of occurrence for a stable process. The following are evidence of a process change: 1. Continue to monitor the process. pma. pc. . Label the left portion of the control chart as History. The historical values for Du. determine if there has been a process change. Data collected in the future can be used to detect a shift in the process average. including date. There is a run of seven points in a row on one side of the average 2. major. and minor defects found in the sample. total demerits. and runs of length seven are not present. A recalculation of the control limits is required anytime the sample size changes. Do the data indicate that the performance has gotten better or worse when compared to the historical average of 3. sample size. and number of critical. The process appears to be statistically stable in that approximately half of the points are distributed above the average and half below.H1317_CH15.

H1317_CH15.qxd

10/18/07

132

11:48 AM

Page 132

The Desk Reference of Statistical Quality Methods

and pmi are used in the recalculation. Only the sample size n is changed in the computation. The new value for three standard deviations is calculated by
3σ for Du = 3√(162.12/120) = 3.49.

The new upper and lower control limits using n = 120 are:
UCL = 3.33 + 3.49 = 6.82
LCL = 3.33 – 3.49 = –0.16 = 0.00.
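The recalculation above can be scripted the same way. A minimal Python sketch (an illustration, not part of the original text), reusing the chapter's weighted-variance total of 162.12:

```python
from math import sqrt

du_bar = 3.33
weighted_variance = 162.12  # chapter's combined value for the three classes

three_sigma = 3 * sqrt(weighted_variance / 120)
print(round(three_sigma, 2))                     # 3.49
print(round(du_bar + three_sigma, 2))            # 6.82
print(max(0.0, round(du_bar - three_sigma, 2)))  # 0.0
```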
Future data from the process are as follows:
Sample number    Date      Size n   Criticals   Majors   Minors   Total demerits   Demerits/unit
16              7/22/94     120        2          4        12
17              7/23/94     120        1         12        16
18              7/24/94     120        3         10        21
19              7/25/94     120        3          2         9
20              7/26/94     120        2          4         2
21              7/27/94     120        5          3        13
22              7/28/94     120        7         10        13
23              7/29/94     120        5          9         3
24              7/30/94     120        3          2         1
25              8/1/94      120        6         13         3
26              8/2/94      120        4          0        14

The total demerits and demerits/unit columns have been left blank intentionally for the
reader to complete. The finished control chart can be seen in the following figure. It
appears that the process average demerits/unit has shifted positively.
[Du control chart with future data plotted: UCL = 9.36 for the History region (samples 1 through 15), UCL = 6.82 and LCL = 0 for samples 16 through 26; vertical scale 0 to 9]


Optional problem:
A complex electronic device goes through a comprehensive final audit. Defects are categorized into four classifications and are assigned demerit weighting factors accordingly:
1. Critical C = 100
2. Severe S = 50
3. Major M = 10
4. Incidental I = 1

Construct a Du control chart based on the following data:
Defects distribution

Lot no.    Sample size n    C    S    M     I
1              100          1    0    4     9
2              100          0    1    5     3
3              100          0    2    1     5
4              100          1    3    2     8
5              100          0    1    5     6
6              400          3    2    6    24
7              400          2    5   14    30
8              400          1    8   10    18
9              250          5    2   11    12
10             250          2    4    7    16
11             250          5    2    4    11

How would this control chart compare with a traditional u chart?
The demerit/unit control chart is sometimes referred to as a D chart.
An alternative method given by Dale H. Besterfield (1994, 271–272) relies on an average number of nonconformities per unit rather than a weighted binomial relationship.
The plot point is given by the formula
D = WcUc + WmaUma + WmiUmi ,
where:

D = demerits/unit
Wc = weighting factor for criticals
Wma = weighting factor for majors
Wmi = weighting factor for minors

and
Uc = counts for criticals per unit
Uma = counts for majors per unit
Umi = counts for minors per unit.
If the weighting factors for the criticals, majors, and minors are 12, 9, and 4, respectively, the demerits per unit would be calculated as
D = 12Uc + 9Uma + 4Umi.


The central or average line is calculated from the formula

Du = 12Uc + 9Uma + 4Umi,

and the standard deviation for Du is given by

σ = √{[(12)²Uc + (9)²Uma + (4)²Umi]/n}.

Example:
The sample size is n = 40, and the average defects/unit for the critical, major, and minor defects
are 0.07, 0.38, and 1.59, respectively. What is the center line, UCL, and LCL for this chart?

The process average demerits/unit Du is
Du = 12U c + 9U ma + 4U mi ,
Du = (12)(.07) + (9)(0.38) + (4)(1.59) = 10.62.
σ = √{[(12)²Uc + (9)²Uma + (4)²Umi]/n}

σ = √{[(12)²(0.07) + (9)²(0.38) + (4)²(1.59)]/40}

σ = 1.29 and 3σ = 3.87

UCL = 10.62 + 3.87 = 14.49
LCL = 10.62 – 3.87 = 6.75
A sample of 40 is taken in which 6 criticals, 15 majors, and 18 minors are found. Calculate Du for this point. Does the calculated value indicate an out-of-control condition?
Du = (12)(6/40) + (9)(15/40) + (4)(18/40)

Du = 6.98
This point is not outside the control limits, so an out-of-control condition is not indicated.
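Besterfield's variant is equally compact. A minimal Python sketch (an illustration, not part of the original text) reproducing the example:

```python
from math import sqrt

WC, WMA, WMI = 12, 9, 4          # weighting factors
uc, uma, umi = 0.07, 0.38, 1.59  # average defects/unit by class
n = 40

du_bar = WC * uc + WMA * uma + WMI * umi
sigma = sqrt((WC ** 2 * uc + WMA ** 2 * uma + WMI ** 2 * umi) / n)
print(round(du_bar, 2), round(sigma, 2))  # 10.62 1.29

# plot point for a sample with 6 criticals, 15 majors, and 18 minors
point = (WC * 6 + WMA * 15 + WMI * 18) / n
print(round(point, 3), du_bar - 3 * sigma < point < du_bar + 3 * sigma)  # 6.975 True
```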

Bibliography
Besterfield, D. H. 1994. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall.
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th edition. New
York: McGraw-Hill.
Hayes, G. E., and H. G. Romig. 1982. Modern Quality Control. 3rd edition. Encino, CA:
Glencoe.
Montgomery, D. C. 1996. Introduction to Statistical Quality Control. 3rd edition. New
York: John Wiley & Sons.

Descriptive Statistics

Processes can be thought of as defined activities that lead to a change. Processes are dynamic
and always add value to services or manufactured devices. As quality practitioners, we are
all interested in processes. Processes can be measured in a numerical sense, and all observations regarding processes can be ultimately converted into numerical data. Examples of
measurements include service-related and manufacturing-related measurements.
Service-related
Length of time required to answer customer questions
Proportion of customers who are not satisfied with a service
Number of errors found on expense reports
Units of production per hour
Error on shipping papers
Nonconformities on a service audit
Manufacturing-related
Total defects on 45 printed circuit boards
Surface finish
Diameter of a part
pH of a solution
In most cases, we have an overabundance of both the opportunity and the available
data for measurements of a characteristic.
All of the opportunities for a process to be measured can be considered as a universe
from which we can develop a characterization. This entire opportunity can be considered
as the universe or population of a process. In magnitude, this opportunity is unlimited.
For our discussion, we will equate process = population = universe. The abbreviation
for the population is N. N consists of all the observations that can be made on a process.
In most cases, N is exceedingly large.

[Diagram: Process = Population N = Universe]


Since this population is so large, we cannot, from a practical point of view, measure all of it.
The process might be one of answering calls at a phone help line where hundreds of calls
are received each hour. We are, nevertheless, interested in the length of time required to
answer each call. The size of the opportunity (population) does not diminish our interest
in the process. If we cannot measure all of the calls, we can always take a sample of the
population. Samples are subsets of the larger population or universe.
We can act on data from samples to calculate such information as averages, proportions,
and standard deviations. The following are examples of this population/sample
relationship as used in quality:
1. You are interested in determining the percentage of customers who use your service
and would consider using it again. You have sampled 500 and found that six would
not use your service again. This proportion is 6/500, or 0.012.
2. A shipment of 1200 bolts has been received. The average weight of a random sample of 25 is determined to be 12.68 grams.
3. Approximately 45 patients a day are admitted to a local hospital. A sample of 20 is
chosen each day to determine the standard deviation of the time required to complete
the admission process.
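These sample statistics are simple to compute; as a minimal sketch in Python (the admission-time values below are hypothetical, invented only to illustrate the calls):

```python
from statistics import mean, stdev

# Example 1: 6 of 500 sampled customers would not use the service again
p = 6 / 500                    # sample proportion

# Example 3: time to complete admission for a daily sample of patients
# (the minute values below are hypothetical)
times = [22, 25, 19, 30, 27, 24, 21, 26, 23, 28]
x_bar = mean(times)            # sample average
s = stdev(times)               # sample standard deviation

print(p, x_bar, round(s, 2))   # 0.012 24.5 3.37
```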

(Figure: measuring the entire process population N costs time and money; a sample n is drawn from it instead.)

Information calculated from samples is called statistics. Examples of statistics frequently used in quality are:


Proportion (or percentage), P
Average, X̄
Standard deviation, S

Traditional statistics such as the average are abbreviated using English letters or symbols
made up of English letters. The average proportion is written with the letter P with a bar over
it: P̄. Any statistic with a bar over it refers to the average of that statistic. For example, R̄
represents the average range. The abbreviations may or may not be capitalized.
Statistics by their very nature are subject to error. Statistics are estimates of the true
description of the process. If we could and did use all of the data in the population or
universe to calculate a defining characteristic, then there would be no error. Such numbers
derived from all of the data are not called statistics but rather parameters. The true
proportion is still called the proportion; however, it is abbreviated with the lowercase Greek
equivalent of the English letter P—pi, or π. The true standard deviation is abbreviated using
the lowercase Greek equivalent of the English letter S—sigma, or σ. The case of the average
is unique in that the name of the true average is the mean, and the abbreviation for the
mean is the lowercase Greek equivalent of the English letter M—mu, or μ.
This parameter is estimated by the corresponding statistic, and it is the parameter that
actually describes the process without error.
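The statistic-estimates-parameter relationship can be illustrated with a quick simulation; a sketch, assuming a normally distributed stand-in process:

```python
import random

random.seed(1)

# A stand-in "process": a large population of simulated measurements
population = [random.gauss(50, 5) for _ in range(100_000)]
mu = sum(population) / len(population)      # parameter: the true mean

# A statistic calculated from a sample estimates that parameter
sample = random.sample(population, 30)
x_bar = sum(sample) / len(sample)           # statistic: the sample average

print(round(mu, 2), round(x_bar, 2))
```

The sample average lands near, but not exactly on, the mean; the difference is the sampling error the chapter describes.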

(Figure: from a sample n of the process population N, we calculate the statistics average X̄, proportion P, and standard deviation S.)


The following statement summarizes the relationship of a process to a statistic: “We
are all interested in the process. But due to the vastness of it, we are required to obtain a
sample from it. From the sample, we calculate a statistic. The statistic is an estimate of the
truth or parameter. The truth describes the process without error.”
The use of statistics is critical to the science of quality. There is no substitute for the
value of numerical data, as was so well stated by Lord Kelvin in 1883: “When you can
measure what you are speaking about and express it in numbers you know something
about it; but when you cannot express it in numbers your knowledge is of a meager and
unsatisfactory kind.”
The degree to which a statistic represents the truth is a function of the sample size,
error, and level of confidence.
Statistics used to describe sets of data are called descriptive statistics and generally fall
into the two categories of location and variation.
Location statistics give an indication of the central tendency of a set of data, and variation
statistics give an indication of the variation or dispersion of a set of data. Both these

(Figure: a statistic, such as the average X̄, proportion P, or standard deviation S, is calculated from a sample n and estimates the parameter; the parameter describes the process, population N, or universe without error.)


Descriptive statistics

Location:                  Variation:
Average X̄                  Range R
Median X̃                   Standard deviation S
Mode M
Average proportion P̄

numbers used in conjunction with each other give a better overall description of data sets
and processes.
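As a sketch, both groups of descriptive statistics can be computed with Python's standard library (the data values are hypothetical):

```python
from statistics import mean, median, mode, stdev

data = [12.1, 12.4, 12.4, 12.7, 13.0, 11.8, 12.4]

# Location statistics
x_bar   = mean(data)              # average, X-bar
x_tilde = median(data)            # median
m       = mode(data)              # mode (most frequent value)

# Variation statistics
r = max(data) - min(data)         # range
s = stdev(data)                   # sample standard deviation

print(x_bar, x_tilde, m, round(r, 1), round(s, 3))
```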
The nature of the actual data collected can be defined as variables or attributes. A variable
is any datum collected directly as a number and can take on any value within the limits in
which it can exist. Examples of variables data would be temperature, pressure, time, weight,
and volume.
Attribute data are data that evolve from a direct observation where the response is yes
or no. The actual attribute data can and must, however, be translated into a number so that
mathematical operations can be performed. An example of attribute data would be the
characteristic of meeting a specification requirement. The specification is either met (yes
response) or it is not (no response).
Variables data can be converted to attribute data by comparing them to a specification
requirement.
Example:
The weight of a product is 12.36 pounds—a variable. The specification for the weight is
12.00 ± 0.50 pounds. Observation: the product meets the specification requirement, so the
response is yes—an attribute.
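The conversion in the example is a simple specification check; a minimal sketch (the function name is ours):

```python
def meets_spec(weight, nominal=12.00, tol=0.50):
    """Attribute (yes/no) view of a variable: is the weight within spec?"""
    return nominal - tol <= weight <= nominal + tol

print(meets_spec(12.36))  # True: 12.36 lies within 12.00 +/- 0.50
print(meets_spec(12.62))  # False: outside the specification
```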


Bibliography
Gryna, F. M., R.C.H. Chua, and J. A. De Feo. 2007. Juran’s Quality Planning and Analysis
for Enterprise Quality. 5th edition. New York: McGraw-Hill.
Hayes, E., and H. G. Romig. 1982. Modern Quality Control. 3rd edition. Encino, CA:
Glencoe.
Petruccelli, J. D., B. Nandram, and M. Chen. 1999. Applied Statistics for Engineers and
Scientists. Upper Saddle River, NJ: Prentice Hall.
Sternstein, M. 1996. Statistics. Hauppauge, NY: Barron’s Educational Series.

H1317_CH17.qxd   10/18/07   11:51 AM   Page 141

Designed Experiments

Designed experiments provide a statistical tool to allow for the efficient examination of
several input factors and to determine their effect on one or more response variables.
Input factors are those characteristics or operating conditions that we may or may not
have direct control over but that can and do affect a process. Response variables or factors
are the observations or measurements that result from a process.
Examples of this input factor/response variable relationship follow.
Process: Wood-Gluing Operation
Input factors
Amount of glue
Type of glue
Drying temperature
Drying time
Moisture content of wood
Type of wood
Relative humidity of environment

Response variables
Tensile strength, pounds/inch²

For all of these input factors, there is an optimum level of setting that will maximize
the bond strength. Some of these factors may be more important than others.
Designed experiments (frequently referred to as design of experiments, or DOE) can
assist in determining which factors play a role in affecting the level of response.
There are essentially five steps in a DOE:
1. Brainstorming to identify potential input factors (or factors) and output responses (or
responses) and establishing levels for the factors and the measure for the response(s)
2. Constructing the experimental design or matrix
3. Performing the experiment
4. Analyzing the results
5. Performing a validation run to test the results
Example A:
A manufacturer of plastic/paper laminate wants to investigate the lamination process to see
if improvements can be made.

Step 1. Brainstorm.
During this initial stage, individuals gather to discuss and define input factors and output
responses. The results of this meeting yield the following:
Input factors
Top roll tension setting, lb.
Rewind tension setting, lb.
Bottom roll tension setting, lb.
Take-up speed, ft./min.
Type of paper
Thickness of plastic film, mils.
Type of plastic film

Response variables
Amount of curl
Number of wrinkles per 100 ft.
Peel strength, lb./in.

For this initial experiment, three factors and one response variable are selected. The
input factors are identified as A, B, and C. The response variable is the amount of curl, and
the three input factors are the following:
Input factor   Description
A              Top roll tension
B              Bottom roll tension
C              Rewind tension

Response variable   Description
Curl, millimeters   A 36-inch strip of paper/plastic film is hung against a
                    vertical plane. The distance from the end of the laminate
                    to the vertical plane is measured in millimeters.

(Figure: the laminate strip hangs against a vertical plane; the curl is the gap measured at its free end.)

The objective of this experiment is to better understand the effects of the three chosen factors on the amount of curl in the laminate. The current method calls for these factors to be set as follows:

Factor    A    B    C
Setting   22   22   9

For this experiment, we will set each of the factors to a "+" level and a "–" level. Each of the factors will be set at a + and – level around the traditional value at which they are run. The change in the factor setting should be large enough to change the response variable but not enough to critically affect the process.

Factor      A    B    C
– Setting   16   16   6
+ Setting   28   28   12

The new conditions are set at such a level as to give a change in the response variable if the factor is a contributor to the response variable. Note: Some texts on DOE use 1 for one level and 2 for the second level.

This experiment is a full 2^3 and will have eight experimental setups, where all possible experimental conditions are run. The notation for this three-factor, two-level experiment is 2^3, where:

2 = number of levels
3 = number of factors.

The general case for any two-level experiment, where all possible experimental conditions are run, is given by 2^n, where n = the number of factors. The total number of experimental conditions is given by 2^3 = 8.

Step 2. Design the experiment.
Start by having eight runs and three columns for the factors A, B, and C.

Run   A   B   C
1
2
3
4
5
6
7
8

Start by placing a "–" for the first run of factor A and then alternating + and – down the first column:

Run   A   B   C
1     –
2     +
3     –
4     +
5     –
6     +
7     –
8     +

The second column is treated in a similar manner, except there are pairs of –'s and +'s. The third column is completed with groups of four –'s and four +'s. This completed table represents the design, or matrix, for the two-level, three-factor experiment, or the 2^3 design.

Run   A   B   C
1     –   –   –
2     +   –   –
3     –   +   –
4     +   +   –
5     –   –   +
6     +   –   +
7     –   +   +
8     +   +   +
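The standard-order construction described above can also be generated programmatically; a sketch using −1/+1 in place of –/+:

```python
from itertools import product

# Build the 2^3 design in standard order: A alternates every run,
# B alternates in pairs, C in groups of four (C varies slowest).
runs = [(a, b, c) for c, b, a in product((-1, +1), repeat=3)]

for i, (a, b, c) in enumerate(runs, start=1):
    print(i, a, b, c)
```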

The order of one through eight listed in numerical sequence is called the standard order. The actual sequence for the eight experiments should be a random order. The – and + signs determine the setting for the particular run. For example, run #3 would have:

Factor A (Top roll tension) set at 16
Factor B (Bottom roll tension) set at 28
Factor C (Rewind tension) set at 6

Step 3. Perform the experiment.
Each of the eight experimental runs is made in a random order, and a single measurement observation will be made for each run. The results of each run are recorded.

Run   A   B   C   Response, curl
1     –   –   –    87
2     +   –   –    76
3     –   +   –    90
4     +   +   –    83
5     –   –   +   101
6     +   –   +    92
7     –   +   +   100
8     +   +   +    92

Step 4. Analyze the data.

Step 5. Validation.
The original objective was to minimize the curl response. From the graphical analysis of the effects, we have concluded that factors A and C are major factors with respect to their effect on curl. Since we are minimizing the response of curl, we must set the factors A and C to the + and – settings, respectively, and the remaining factor B to the most economical setting, as it has no significant effect on curl. The top roll tension should be set to 28, and the rewind tension set to 6. The process should now be run for a more extended period of time to collect data on the curl response. These data from the improved process can now be compared with the historical data using traditional methods of hypothesis testing. Essentially we want to know if there has been a measured, statistically significant reduction in the amount of curl.

Main Effects
For each of the main factors A, B, and C, an effect will be determined. The effect will be the average effect obtained when the factor under consideration is changed from a – setting to a + setting. The signs of the column to which a factor has been assigned will be used to determine the effect.

Effects are determined by taking the difference between the average response when the factor is set + and the average response when the factor is set –.

Main effect A: (76 + 83 + 92 + 92)/4 − (87 + 90 + 101 + 100)/4 = 85.75 − 94.50 = −8.75

The average effect in going from a – setting for factor A to a + setting for factor A is −8.75 units of curl.

Main effect B: (90 + 83 + 100 + 92)/4 − (87 + 76 + 101 + 92)/4 = 91.25 − 89.00 = +2.25

Main effect C: (101 + 92 + 100 + 92)/4 − (87 + 76 + 90 + 83)/4 = 96.25 − 84.00 = +12.25

The main effects can be visualized by drawing a cube plot of the responses.

(Figure: cube plot with axes A, B, and C; the eight corner responses are 87, 76, 90, 83, 101, 92, 100, and 92.)

It can be seen that in order to minimize the curl response, the following settings for the factors should be made:

Factor                           Setting
Factor A (Top roll tension)      A + = 28
Factor B (Bottom roll tension)   B – = 16
Factor C (Rewind tension)        C – = 6

In addition to the main effects, there can be interaction effects. Interactions result when the magnitude of one main effect depends on or is related to another effect. For example, the rate at which a chemical reaction occurs can be influenced by a catalyst. The selection of the catalyst can be a determining factor. Catalyst A might perform well but only at a higher temperature, whereas catalyst B would perform as well as catalyst A but

only at a lower temperature. In other words, there is a catalyst-temperature interaction. The catalyst and temperature factors interact with each other.

In order to evaluate interactions of factors A, B, and C, the interactions will be determined. There are three interactions that involve two factors:

AB = the interaction of top roll tension and bottom roll tension
BC = the interaction of bottom roll tension and rewind tension
AC = the interaction of top roll tension and rewind tension

These interactions may also be written as A × B, B × C, and A × C. In addition, there is a single three-factor interaction expressed as ABC.

The computation of these interactions is performed using the signs of the columns, as was the case with the main effects. The signs for the interaction columns are determined by multiplying the signs of the main effects used in the interaction.

Run   A   B   C   AB   AC   BC   ABC   Response, curl
1     –   –   –   +    +    +    –      87
2     +   –   –   –    –    +    +      76
3     –   +   –   –    +    –    +      90
4     +   +   –   +    –    –    –      83
5     –   –   +   +    –    –    +     101
6     +   –   +   –    +    –    –      92
7     –   +   +   –    –    +    –     100
8     +   +   +   +    +    +    +      92

AB effect: (87 + 83 + 101 + 92)/4 − (76 + 90 + 92 + 100)/4 = 90.75 − 89.50 = +1.25

AC effect: (87 + 90 + 92 + 92)/4 − (76 + 83 + 101 + 100)/4 = 90.25 − 90.00 = +0.25

BC effect: (87 + 76 + 100 + 92)/4 − (90 + 83 + 101 + 92)/4 = 88.75 − 91.50 = −2.75

ABC effect: (76 + 90 + 101 + 92)/4 − (87 + 83 + 92 + 100)/4 = 89.75 − 90.50 = −0.75
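The main-effect and interaction calculations above can be verified in a few lines; a sketch using −1/+1 sign coding (the helper names are ours):

```python
A = [-1, 1, -1, 1, -1, 1, -1, 1]
B = [-1, -1, 1, 1, -1, -1, 1, 1]
C = [-1, -1, -1, -1, 1, 1, 1, 1]
y = [87, 76, 90, 83, 101, 92, 100, 92]

def effect(col, resp):
    """Average response at the + settings minus average at the - settings."""
    plus = [r for s, r in zip(col, resp) if s > 0]
    minus = [r for s, r in zip(col, resp) if s < 0]
    return sum(plus) / len(plus) - sum(minus) / len(minus)

def times(u, v):
    """Elementwise sign product, used to build interaction columns."""
    return [a * b for a, b in zip(u, v)]

AB, AC, BC = times(A, B), times(A, C), times(B, C)
ABC = times(AB, C)

print(effect(A, y), effect(B, y), effect(C, y))     # -8.75 2.25 12.25
print(effect(AB, y), effect(AC, y), effect(BC, y))  # 1.25 0.25 -2.75
print(effect(ABC, y))                               # -0.75
```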

Are these effects significant, or are the differences due to random, experimental error? There are two ways to determine if these effects are statistically significant. One way is to repeat the experiment and determine the actual experimental error, and the other is utilized when only one observation is made for each run (as in this case). When only one observation per run is available, the use of a normal probability plot can be helpful.

Step 1. Arrange all of the effects in ascending order (starting with the lowest at the top). The order of this ranking is expressed as i.

Order i   Effect value   Effect
1         –8.75          A
2         –2.75          BC
3         –0.75          ABC
4          0.25          AC
5          1.25          AB
6          2.25          B
7         12.25          C

Note that there is one less effect than there are experimental runs.

Step 2. Calculate the percent median rank values. The relative rank of each effect is expressed as a percent median rank (%MR):

%MR = [(i − 0.3)/(n + 0.4)] × 100

where: i = the order of the effect
       n = the total number of effects

This median rank is called the Benard median rank and follows the general expression MR = (i − c)/(n − 2c + 1), where c = 0.3. Another form of the median rank is the Hazen, where c = 0.5.

First % median rank = [(1 − 0.3)/(7 + 0.4)] × 100 = 9.5
Second % median rank = [(2 − 0.3)/(7 + 0.4)] × 100 = 23.0
Third % median rank = [(3 − 0.3)/(7 + 0.4)] × 100 = 36.5
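The percent median rank calculation can be sketched as follows (the function name is ours):

```python
def percent_median_rank(i, n, c=0.3):
    """General form (i - c)/(n - 2c + 1) x 100; c = 0.3 is Benard, c = 0.5 is Hazen."""
    return (i - c) / (n - 2 * c + 1) * 100

# All seven ranks for the n = 7 effects in the example
ranks = [round(percent_median_rank(i, 7), 1) for i in range(1, 8)]
print(ranks)  # [9.5, 23.0, 36.5, 50.0, 63.5, 77.0, 90.5]
```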

Order i   Effect value   Effect   Percent median rank
1         –8.75          A         9.5
2         –2.75          BC        23.0
3         –0.75          ABC       36.5
4          0.25          AC        50.0
5          1.25          AB        63.5
6          2.25          B         77.0
7         12.25          C         90.5

Step 3. Plot the effects as a function of the percent median rank on the normal probability paper. Draw a straight line through the points, concentrating the emphasis around the points nearest the zero effect.

(Figure: normal probability plot of the seven effects; a straight line passes through the points for BC, ABC, AC, AB, and B, while the points for A and C fall off the line.)

Effects falling off this straight line are judged to be significant. In this example, factors A and C are statistically significant. To maximize a response variable, set the main effect to the same sign as the effect. To minimize the response variable, set the factor to the reverse of the effect.

Visualization of Two-Factor Interactions

Main effects can be visualized using the cube plot. Two-factor interactions can be visualized using linear plots. The response will be plotted on the vertical axis, and one of the two main effects involved in the two-factor interaction will be plotted on the horizontal axis. The horizontal value is attribute in nature; only the designations of – and + are used for the selected factor. The following demonstrates how to develop the two-factor interaction for the AB interaction.

Step 1. Draw a two-axis graph.

(Figure: empty plot with the response on the vertical axis, 60 to 90, and A– and A+ on the horizontal axis.)

Since factor A was chosen for the horizontal axis, factor B will be plotted as a pair of linear lines—one for B– and one for B+. The B– line will be plotted first.

There are two experimental runs where A is set to A– and B is set to B–. These two runs are run #1 and run #5. When factor A is set to A– and factor B is set to B–, the average response is

(87 + 101)/2 = 94.0.

The other end of the B– line is determined by calculating the average response when A is set at A+ and B is set at B–. A+ and B– are found in two experimental runs—run #2 and run #6. The average response for these two is

(76 + 92)/2 = 84.0.

These two values define the B– line on the graph.

(Figure: the B– line drawn from 94.0 at A– down to 84.0 at A+.)

The remaining line is the B+ line. There are two runs where B is set + and A is set –. These are run #3 and run #7. The average of these two responses is

(90 + 100)/2 = 95.0.

The other end of the B+ line is at the A+ position. There are two runs where B is set + and A is set +. These are run #4 and run #8. The average of these two responses is

(83.0 + 92.0)/2 = 87.5.

(Figure: the completed AB interaction plot, with the B– line running from 94.0 to 84.0 and the B+ line from 95.0 to 87.5.)
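The four plotted endpoints are just cell averages; a sketch (the helper name is ours):

```python
A = [-1, 1, -1, 1, -1, 1, -1, 1]
B = [-1, -1, 1, 1, -1, -1, 1, 1]
y = [87, 76, 90, 83, 101, 92, 100, 92]

def cell_mean(a, b):
    """Average response over the runs where A = a and B = b."""
    vals = [r for ai, bi, r in zip(A, B, y) if ai == a and bi == b]
    return sum(vals) / len(vals)

# Endpoints of the two lines in the AB interaction plot
print(cell_mean(-1, -1), cell_mean(+1, -1))  # B- line: 94.0 84.0
print(cell_mean(-1, +1), cell_mean(+1, +1))  # B+ line: 95.0 87.5
```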

Plotting the B+ line completes the two-factor interaction plot. The linear graph for the BC interaction follows:

The C– line:
C–, B– point = (87.0 + 76.0)/2 = 81.5
C–, B+ point = (90.0 + 83.0)/2 = 86.5

The C+ line:
C+, B– point = (101.0 + 92.0)/2 = 96.5
C+, B+ point = (100.0 + 92.0)/2 = 96.0

(Figure: the BC interaction plot, with B– and B+ on the horizontal axis; the C– line runs from 81.5 to 86.5, and the C+ line from 96.5 to 96.0.)

In the original experiment, only one observation was made for each experimental run. The only technique to determine if the effects were statistically significant was to perform a normal probability plot and make a subjective judgment as to whether any of the points fell off the straight line (with emphasis on the zero point). This method is subject to errors of judgment, but it is the best technique available when only one observation is available.

In order to determine experimental error, the entire experiment must be replicated at least one more time in order to calculate a measure of variation—the standard deviation. The entire set of eight runs is replicated. The new data from the first and second replications yield an average and standard deviation for each of the eight runs:

Run   A   B   C   AB   AC   BC   ABC   1st    2nd    Avg.    Std. dev.
1     –   –   –   +    +    +    –      87     88     87.5    0.707
2     +   –   –   –    –    +    +      76     78     77.0    1.414
3     –   +   –   –    +    –    +      90     92     91.0    1.414
4     +   +   –   +    –    –    –      83     80     81.5    2.121
5     –   –   +   +    –    –    +     101     96     98.5    3.535
6     +   –   +   –    +    –    –      92     91     91.5    0.707
7     –   +   +   –    –    +    –     100    104    102.0    2.828
8     +   +   +   +    +    +    +      92     91     91.5    0.707

The new calculated effects, in ascending order, based on the average of the two observations per run, are shown in the following table:

Factor   Effect
A        –9.38
ABC      –1.13
BC       –1.13
AB       –0.63
AC        0.63
B         2.88
C        11.63

While a normal probability plot would yield the same conclusion that factor A and factor C are probably statistically significant, the conclusion is somewhat subjective. The following method can test whether an effect is within the normal variation of the experimental error or whether it is statistically removed from the limits of normal variation. A confidence interval will be determined for each effect. If zero is found to be within the confidence interval limits, the conclusion is that the observed effect is simply random variation within the expectation for the calculated experimental error. If, however, the confidence interval does not contain zero, the conclusion is that the effect is real and not just a random value. By using the t-distribution and determining the confidence interval for each effect, a better judgment can be made.

Calculate the pooled standard deviation for all the observations:

Sp = √[ (V1S1² + V2S2² + … + VnSn²) / (V1 + V2 + … + Vn) ]

where:
V1 = the number of observations in run #1
S1² = the variance (standard deviation squared) for sample run #1
Vn = the number of observations in the nth run
Sn² = the variance of the nth sample run

Sp = √{ [(2)(0.707)² + (2)(1.414)² + (2)(1.414)² + (2)(2.121)² + (2)(3.535)² + (2)(0.707)² + (2)(2.828)² + (2)(0.707)²] / (2 + 2 + 2 + 2 + 2 + 2 + 2 + 2) }

Sp = √[ (0.9997 + 3.9988 + 3.9988 + 8.9973 + 24.9925 + 0.9997 + 15.9952 + 0.9997) / 16 ]

Sp = 1.952

Note: If the number of replicates is the same for all runs, then the pooled standard deviation is simply the square root of the average of the variances.

Each of the calculated effects is actually a point estimate. The confidence interval for each of these estimated effects is determined by Effect ± Error. The error is calculated as

E = t(α/2, (r−1)2^(k−f)) Sp √(2/(pr))

where:
α = risk (confidence = 1 − risk)
p = number of +'s per effect column
r = number of replicates
Sp = pooled standard deviation
f = degree of fractionation

The error of the estimated effect at a level of confidence of 95 percent is

t(α/2, (r−1)2^(k−f)) = t(0.025, 8) = 2.306

E = (2.306)(1.952)√(2/[(4)(2)]) = 2.25

Any effect that is contained within the limits of 0 ± 2.25 is considered not to be statistically significant.
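The pooled standard deviation and the error of an estimated effect can be checked as follows; a sketch that pools the raw replicate data directly, so Sp comes out 1.953 rather than the 1.952 obtained from the rounded run standard deviations (the t value is read from a t-table):

```python
from math import sqrt

rep1 = [87, 76, 90, 83, 101, 92, 100, 92]
rep2 = [88, 78, 92, 80, 96, 91, 104, 91]

# For two observations, s^2 = (y1 - y2)^2 / 2, so with V = 2 per run
# each term V * s^2 in the pooled-variance numerator is (y1 - y2)^2.
num = sum((y1 - y2) ** 2 for y1, y2 in zip(rep1, rep2))
den = 2 * len(rep1)                 # sum of the V's: 2 observations x 8 runs
sp = sqrt(num / den)                # pooled standard deviation

t = 2.306                           # t(0.025, 8) from a t-table
p, r = 4, 2                         # +'s per effect column, replicates
e = t * sp * sqrt(2 / (p * r))      # error of an estimated effect

print(round(sp, 3), round(e, 2))    # 1.953 2.25
```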

The following effects are significant:

Effect   95 percent confidence interval
A        –11.63 to –7.13
B          0.63 to 5.13
C          9.38 to 13.88

Note: Any effect whose absolute value is greater than the error is statistically significant. The main effect B is deemed to be statistically significant at the 95 percent level of confidence. This was not evident using the normal probability plot method.

Fractional Factorial Experiments

If four factors are to be investigated at two levels each, then the total number of experiments required would be 2^4 = 16, and in order to establish an experimental error, two replicates will be required, for a total of 32 observations. This number of observations and experiments can be time consuming and cost prohibitive in many cases. The same number of factors can be evaluated using half the normal number of runs by running a one-half fractional factorial experiment.

Consider the requirement for a full four-factor experiment. This design can be created by extending the eight runs to 16 and adding a fourth column for the next factor.

Run   A   B   C   D
1     –   –   –   –
2     +   –   –   –
3     –   +   –   –
4     +   +   –   –
5     –   –   +   –
6     +   –   +   –
7     –   +   +   –
8     +   +   +   –
9     –   –   –   +
10    +   –   –   +
11    –   +   –   +
12    +   +   –   +
13    –   –   +   +
14    +   –   +   +
15    –   +   +   +
16    +   +   +   +

Three-level interactions are very rare. If the assumption is made that any three-level interaction is actually due to experimental error, then by letting the ABC interaction of this 2^4 experiment equal the D main effect, the full 16 runs of a full 2^4 can be cut in half.

Run   A   B   C   D   ABC
1     –   –   –   –   –
2     +   –   –   –   +
3     –   +   –   –   +
4     +   +   –   –   –
5     –   –   +   –   +
6     +   –   +   –   –
7     –   +   +   –   –
8     +   +   +   –   +
9     –   –   –   +   –
10    +   –   –   +   +
11    –   +   –   +   +
12    +   +   –   +   –
13    –   –   +   +   +
14    +   –   +   +   –
15    –   +   +   +   –
16    +   +   +   +   +

Let D = ABC; the D column is replaced by the signs of the ABC column:

Run   A   B   C   D = ABC
1     –   –   –   –
2     +   –   –   +
3     –   +   –   +
4     +   +   –   –
5     –   –   +   +
6     +   –   +   –
7     –   +   +   –
8     +   +   +   +
9     –   –   –   –
10    +   –   –   +
11    –   +   –   +
12    +   +   –   –
13    –   –   +   +
14    +   –   +   –
15    –   +   +   –
16    +   +   +   +

By mixing the effect of main effect D with the three-level interaction ABC, we have confounded the two effects. In other words, we cannot differentiate between D and ABC; D and ABC are called aliases. D = ABC is called a generator, and the identity is I = ABCD.
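The generator D = ABC described above can be applied programmatically; a sketch using −1/+1 coding:

```python
from itertools import product

# Half fraction 2^(4-1): start from the full 2^3 in A, B, C (standard order)
# and generate the fourth factor from the generator D = ABC.
design = []
for c, b, a in product((-1, 1), repeat=3):
    design.append((a, b, c, a * b * c))

for run, (a, b, c, d) in enumerate(design, start=1):
    print(run, a, b, c, d)
```

Because D is built as the product of A, B, and C, the identity I = ABCD holds on every run: the product of all four signs is always +1.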

Notice that run #1 through run #8 are now duplicated by run #9 through run #16. Run #9 through run #16 can therefore be eliminated: four factors can be run using eight experimental runs. This experimental design is called a 2^(4–1) design.

Fractional designs are created by confounding main effects with interactions. The resolution of a design determines the degree to which confounding is present. Resolutions are expressed as Roman numerals III, IV, or V and are named by the number of letters in the shortest word of the defining relationship. In the example of a 2^(4–1), the defining relationship was D = ABC, and the identity element is I = ABCD. There are four letters in the word; therefore, the resolution is IV. The proper designation for this design is 2IV^(4–1).

Fractional factorial designs can be designated by the general form 2^(k–f), where:

k = number of factors
f = degree of fractionation
f = 1 for a 1/2 fractional design
f = 2 for a 1/4 fractional design
f = 3 for a 1/8 fractional design

The number of experimental runs is determined by the value of 2^(k–f).

Example: In a 2^(5–2), the number of factors is five, and the number of runs is 2^(5–2) = 8. This is a 1/4 fractional design. The defining relationships, or generators, are D = AB and E = AC, and the resolution is III (I = ABD and I = ACE). This is a 2III^(5–2) experiment.

Possible experimental design resolutions are as follows:

RIII: Main effects and two-factor interactions are confounded, so watch out! RIII designs are good for screening.
RIV: Main effects are clear of any two-factor interactions, but two-factor interactions are confounded with other two-factor interactions.
RV: Main effects are clear, and two-factor interactions are clear.

Table 1 lists other designs available using such confounding relationships.

Table 1 Selected fractional factorial designs.

Number of factors k   Designation    Number of runs   Design generator
3                     2III^(3–1)      4               A = BC
4                     2IV^(4–1)       8               D = ABC
5                     2V^(5–1)       16               E = ABCD
5                     2III^(5–2)      8               D = AB, E = AC
6                     2VI^(6–1)      32               F = ABCDE
6                     2IV^(6–2)      16               E = ABC, F = BCD
7                     2VII^(7–1)     64               G = ABCDEF
7                     2IV^(7–2)      32               F = ABCD, G = ABDE
7                     2IV^(7–3)      16               E = ABC, F = BCD, G = ACD
7                     2III^(7–4)      8               D = AB, E = AC, F = BC, G = ABC

Case Study

The AstroSol company manufactures solar panels used to generate electricity. The manufacturing process consists of the deposition of alternating layers of tin (Sn) and cadmium telluride (CdTe) and then baking these deposited layers in an oven. The R&D department wants to maximize the power output of the cells. The power output is measured and reported in units of watts/ft².

After considerable review and discussion, the design team decides to look at five factors in a 1/2 fractional factorial design, 2V^(5–1), as this design will isolate the main effects and the two-factor interactions. The five factors to be considered are:

Thickness of Sn, microns
Thickness of CdTe, microns
Bake temperature, °F
Bake time, minutes
Source of CdTe

All other factors in the process will remain fixed at their current levels. A relative – and + setting for each of the factors around the current operating conditions is determined.

Factor   Description              –               +
A        Thickness of Sn, μ       20              40
B        Thickness of CdTe, μ     30              50
C        Bake temperature, °F     350             400
D        Bake time, minutes       40              50
E        Source of CdTe           United States   China

For a 2V^(5–1) fractional design, the base design will be that of a 2^4 full factorial, where the fifth factor will be generated by E = ABCD and I = ABCDE.

Interactions:
AB = CDE   AC = BDE   AD = BCE   AE = BCD   BC = ADE
BD = ACE   BE = ACD   CD = ABE   CE = ABD   DE = ABC

Since this is a resolution V design, all main effects are clear and all two-factor interactions are clear. The 16 runs were replicated three times so that experimental error and statistical significance could be determined. The individual responses, response averages, and standard deviations are recorded in Table 2.

Calculation of Main Effects

Each main effect is calculated from the run averages in Table 2: the average of the eight run averages at the factor's + setting minus the average of the eight run averages at its – setting. The main effects A, B, and C are computed in this way from the signs of their respective columns.

Table 2 Design matrix.

(Table 2 lists the 16 runs of the 2V^(5–1) design: the sign columns A, B, C, and D in standard order; E = ABCD; the ten two-factor interaction columns AB through DE with their three-factor aliases; the three replicate responses Y1, Y2, and Y3 in watts/ft²; the run average; and the standard deviation S.)

D effect = avg(D+) - avg(D-) = +0.05
E effect = avg(E+) - avg(E-) = -0.52

Two-Factor Interactions

Each two-factor interaction effect is calculated the same way, averaging the eight run averages where the aliased interaction column is + and subtracting the average of the eight where it is -:

AB interaction = +0.07
AC interaction = +0.16
AD interaction = -0.27
AE interaction = +0.08

BC interaction = +0.11
BD interaction = +0.29
BE interaction = +0.14
CD interaction = -0.06
CE interaction = -0.03
DE interaction = -0.01
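Every effect above is the same contrast: the mean response where the column is + minus the mean where it is -. A minimal sketch, using a small hypothetical 2^2 data set rather than the chapter's 16 runs:

```python
def effect(levels, responses):
    """Effect = average response at the + level minus average at the - level."""
    plus = [y for s, y in zip(levels, responses) if s > 0]
    minus = [y for s, y in zip(levels, responses) if s < 0]
    return sum(plus) / len(plus) - sum(minus) / len(minus)

# Hypothetical 2^2 illustration (not the chapter's data):
A  = [-1, 1, -1, 1]
B  = [-1, -1, 1, 1]
AB = [a * b for a, b in zip(A, B)]
y  = [10.0, 12.0, 9.0, 15.0]

print(effect(A, y))   # (12+15)/2 - (10+9)/2 = 4.0
print(effect(AB, y))  # (10+15)/2 - (12+9)/2 = 2.0
```

The same function computes interaction effects because an interaction column is just the elementwise product of its factor columns.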

Main and two-factor interaction effects are ranked in ascending order, and percent median rank values are calculated as follows:

Order i   Effect value   Effect   Percent median rank
 1        -0.74          A         4.5
 2        -0.54          C        11.0
 3        -0.52          E        17.5
 4        -0.27          AD       24.0
 5        -0.06          CD       30.5
 6        -0.03          CE       37.0
 7        -0.01          DE       43.5
 8        +0.05          D        50.0
 9        +0.07          AB       56.5
10        +0.08          AE       63.0
11        +0.11          BC       69.5
12        +0.14          BE       76.0
13        +0.16          AC       82.5
14        +0.29          BD       89.0
15        +1.28          B        95.5

The effects are plotted on normal probability paper to estimate which effects and interactions are significant. A normal probability plot of effects is shown in the following figure:

(Figure: normal probability plot of the 15 effects against percent median rank; B falls far off the line at the top, and A, C, and E fall off the line at the bottom.)
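The percent median ranks used for the plotting positions can be computed with Benard's approximation, which reproduces the values in the table (the function name is my own):

```python
def percent_median_ranks(n):
    # Benard's approximation: 100 * (i - 0.3) / (n + 0.4) for i = 1..n
    return [round(100 * (i - 0.3) / (n + 0.4), 1) for i in range(1, n + 1)]

ranks = percent_median_ranks(15)
print(ranks[0], ranks[7], ranks[-1])  # 4.5 50.0 95.5
```

The i-th ranked effect is then plotted at its median rank on normal probability paper; points far off the straight line are candidate significant effects.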

Based on the analysis of the normal probability plot, it appears that main effects A, B, C, and E are statistically significant. In addition, consideration should be given to the BD and AD interactions. In order to maximize the power output, factors A, C, and E should be set at the - level and factor B at the + level.

Statistical Significance Based on t-test

Step 1. Calculate the pooled standard deviation:

Sp = sqrt[(V1 S1^2 + V2 S2^2 + . . . + Vn Sn^2)/(V1 + V2 + . . . + Vn)]

Sp = sqrt[((3)(0.10)^2 + (3)(0.12)^2 + . . . + (3)(0.36)^2)/48] = sqrt(3.9855/48) = 0.288

Step 2. Determine the statistical significance at the 90 percent level of confidence:

Error, E = t(alpha/2, (r-1)2^(k-f)) x Sp x sqrt(2/(pr))

where: alpha = 0.10 (so alpha/2 = 0.05), p = 8, r = 3, k = 5, f = 1

t(.05, 32) = 1.698

E = (1.698)(0.288)sqrt(2/24) = +/-0.14

Any effect whose +/-0.14 interval contains zero is judged not statistically significant at the 90 percent confidence level.
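The two steps can be sketched as follows. The 16 run standard deviations are not all legible in the source, so the example below uses a hypothetical list chosen to give a pooled value near the chapter's 0.288; only the formulas are taken from the text:

```python
import math

def pooled_sd(stdevs, reps):
    # Pool the run standard deviations, weighting each by the replicate count
    # (the chapter weights by r = 3 and divides by the 48 total observations).
    num = sum(reps * s ** 2 for s in stdevs)
    return math.sqrt(num / (reps * len(stdevs)))

def effect_error(t, sp, p, r):
    # E = t * Sp * sqrt(2 / (p * r)); p = runs at the + level, r = replicates
    return t * sp * math.sqrt(2 / (p * r))

# Hypothetical run standard deviations (all equal) so that Sp is 0.288:
s = [0.288] * 16
sp = pooled_sd(s, 3)
print(round(effect_error(1.698, sp, 8, 3), 2))  # 0.14
```

With Sp = 0.288, t = 1.698, p = 8, and r = 3, the error of effects works out to +/-0.14, matching the chapter.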

Order i   Effect   Value    90% confidence interval   Statistically significant?
 1        A        -0.74    -0.88 to -0.60            Yes
 2        C        -0.54    -0.68 to -0.40            Yes
 3        E        -0.52    -0.66 to -0.38            Yes
 4        AD       -0.27    -0.41 to -0.13            Yes
 5        CD       -0.06    -0.20 to  0.08            No
 6        CE       -0.03    -0.17 to  0.11            No
 7        DE       -0.01    -0.15 to  0.13            No
 8        D        +0.05    -0.09 to  0.19            No
 9        AB       +0.07    -0.07 to  0.21            No
10        AE       +0.08    -0.06 to  0.22            No
11        BC       +0.11    -0.03 to  0.25            No
12        BE       +0.14     0.00 to  0.28            No
13        AC       +0.16     0.02 to  0.30            Yes
14        BD       +0.29     0.15 to  0.43            Yes
15        B        +1.28     1.14 to  1.42            Yes

Conclusion

Factors A, B, C, and E appear to be statistically significant at the 90 percent confidence level. The following factor settings will maximize the output:

Factor   Description        Optimum setting
A        Thickness of Sn    20 microns
B        Thickness of CdTe  50 microns
C        Bake temperature   350 deg F
E        Source of CdTe     United States
D        Bake time          Does not significantly affect power output; therefore, set a lower time of 40 minutes to reduce cycle time.

Variation Reduction

Variation reduction is accomplished when we reduce the standard deviation. Standard deviations, however, are not normally distributed and cannot be used directly as a response to minimize. One approach is to take multiple observations during an experimental run, calculate the standard deviation, and take the log, or ln, of the standard deviation. Dr. George Box (1978) of the University of Wisconsin-Madison and his colleagues published a series of papers suggesting the use of -log10(s) as a response variable when the objective is to reduce variation. The resulting transformed response can then be treated as a normally distributed response, and the effects can be calculated in the traditional manner.

An alternative is to simply calculate the variance (standard deviation squared) and compare the average variance for all of the - factor settings with the average for all of the + settings using an F-test. If the calculated F value exceeds a critical F value, we conclude that the factor under consideration is statistically significant.

Example: The objective is to minimize the variation of compressive strength of a cast concrete material. A full three-factor experiment (2^3) was run. The experiment was replicated three times, and the standard deviation was determined using the three observations for each of the eight runs.

Run   A   B   C   AB   AC   BC   ABC   Average
 1    -   -   -   +    +    +    -     1900
 2    +   -   -   -    -    +    +     1250
 3    -   +   -   -    +    -    +     1875
 4    +   +   -   +    -    -    -     1160
 5    -   -   +   +    -    -    +     2100
 6    +   -   +   -    +    -    -     1740
 7    -   +   +   -    -    +    -     1940
 8    +   +   +   +    +    +    +     1350

(The standard deviation S and variance S^2 for each run complete the table.)

We now calculate an F value for each of the effects, averaging the four run variances where the factor is + and the four where it is -, and dividing the larger average by the smaller.

For effect A:
(S^2)+ = 133.85   (S^2)- = 108.62
F = (S^2)larger/(S^2)smaller = 133.85/108.62 = 1.23

For effect B:
(S^2)+ = 188.62   (S^2)- = 53.85
F = 188.62/53.85 = 3.50

The remaining F values are calculated in a similar manner.
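The dispersion F-ratio is a direct computation. A minimal sketch, using hypothetical run variances (not the concrete-strength data):

```python
def dispersion_f(levels, variances):
    """Average the run variances at + and at -, then divide larger by smaller."""
    plus = [v for s, v in zip(levels, variances) if s > 0]
    minus = [v for s, v in zip(levels, variances) if s < 0]
    vp = sum(plus) / len(plus)
    vm = sum(minus) / len(minus)
    return max(vp, vm) / min(vp, vm)

# Hypothetical run variances for one factor column of a 2^3 experiment:
B = [-1, -1, 1, 1, -1, -1, 1, 1]
s2 = [10.0, 20.0, 80.0, 40.0, 30.0, 20.0, 60.0, 100.0]
print(dispersion_f(B, s2))  # (80+40+60+100)/4 = 70 over (10+20+30+20)/4 = 20 -> 3.5
```

A value of Fcalc above the tabled critical F for the appropriate degrees of freedom indicates the factor affects variation.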

Effect   F value
A        1.23
B        3.50  (statistically significant at a confidence level of 90 percent)
C        1.52
AB       1.03
AC       1.09
BC       1.25
ABC      1.75

A critical F value is determined. This value is found in an F table using one-half the degrees of freedom calculated previously for the t value used in testing for statistical significance:

df = (r - 1)2^(k-f)/2 = (3 - 1)(2^3)/2 = 8

where: r = number of replicates = 3; k = number of factors = 3; f = degree of fractionation = 0 (this is a full factorial).

At a level of confidence of 0.90, the risk is 0.10; dividing the risk between the two tails, we have an alpha/2 of 0.05. The appropriate critical F value is F(.05, 8, 8) = 3.44. Any effects that have a calculated F value greater than this are considered statistically significant.

We can see that factor B is significant and that the average variance was smaller when factor B was set to the - level; therefore, we should set B at the - level to minimize variation. The other factors are not statistically significant contributors to variation.

In the previous example, if the objective had been to maximize the compressive strength, the response would have been the average compressive strength. Calculation of all the effects using the average compressive strength yields the following:

Factor   Effect
A        +579
B        -373
C        +236
AB       -74
AC       +104
BC       -109
ABC      -41

The calculated experimental error is determined to be

Experimental error, E = t(alpha/2, (r-1)2^(k-f)) x Sp x sqrt(2/(pr)) = (1.746)(10.38)(0.41) = 7.4

where Sp is the average standard deviation of the runs:

Sp = (sum of Si)/n = 10.38

Experimental error at 90 percent confidence = 7.4, so the error for the effects is +/-7.4.

At the 90 percent level of confidence, all of the factors and their interactions are statistically significant. This is because the magnitude of the effects is relatively large compared to the experimental error of +/-7.4. In this case, to maximize the compressive strength, we would set factors A and C to the + level and factor B to the - level. Setting B to the - level also minimizes the variation.

Empirical Predictions

In a designed experiment, the effects for the main factors and the interactions are calculated by taking the difference between the average of the + settings and the average of the - settings for the experimental data. In a balanced experiment, half the runs are set - and the other half are set +; the number of + and - conditions or settings is always the same. The grand average of all the experimental data represents the expected average response if all the calculated effects were equal to zero. We will add or subtract one-half the effects to the grand average.
Only those effects that are statistically significant will be included in our empirical equation.16 ⎜ 10. we would set factors A and C to the + level and factor B to the – level.38.5. E = t α / 2 .

There is one exception: if an interaction effect is statistically significant, it must be included in our model along with the main effects used in the interaction effect. The main effects are included even if they are individually not statistically significant. This is due to the hierarchy rule for defining a model.

Consider the following example. A full factorial experiment for three factors was conducted, and the average of all the responses is 125. The effects are:

Factor   Effect
A        +60
B        -30
C        -6.5
AB       -12
AC       +8
BC       -5
ABC      -7

Main effects A and C and two-factor interaction AB are statistically significant. Since the AB interaction is to be included in the model, so must main effects A and B, even though B is not statistically significant.

We will add one-half the effects to the grand average to obtain the predicted expected response:

Y = 125 + 30(A) - 15(B) - 6(A)(B) + 4(A)(C)

The factors A, B, and C in this example are defined by:

Factor   Description          (-)   (+)
A        Time, sec            15    45
B        Pressure, psi        20    50
C        Surface finish, rms  70    120

In this example the response is the bond strength, in pounds, between two samples of wood that have been bonded together. The values in parentheses are orthogonal values and can take on only values between -1 and +1.

What would be the expected average bond strength if A = +1, B = -1, and C = +0.35? We simply substitute these values into our empirical equation and solve for the predicted response:

Y = 125 + 30(+1) - 15(-1) - 6(+1)(-1) + 4(+1)(+0.35)
Y = 125 + 30 + 15 + 6 + 1.4
Y = 177.4

The values in this equation are orthogonal settings (OSs) and can vary from -1 to +1. In order to evaluate the conditions, we must convert the OS values to real process or machine settings (MSs). To convert from a specified OS (or OStgt) to the corresponding MS value, we use the relationship

MS = 0.5[(OStgt)(dMS)] + MSbar

where: OStgt = the specified orthogonal value; MSbar = the average of the + and - settings for the experiment; dMS = the + setting minus the - setting for the experiment.

Recall that for our experiment, the following - and + settings were used:

For factor A we have MSbar = (15 + 45)/2 = 30 and dMS = MS+ - MS- = 45 - 15 = 30.

Factor   Description          (-)   (+)   dMS   MSbar
A        Time, sec            15    45    30    30
B        Pressure, psi        20    50    30    35
C        Surface finish, rms  70    120   50    95

The +1 setting for A and the -1 setting for B are no problem: they are simply 45 seconds and 20 psi, respectively. The +0.35 OS value for factor C is a bit more complicated. We will use the conversion equation:

MS = 0.5[(OStgt)(dMS)] + MSbar, where OStgt = +0.35
MS = 0.5[(+0.35)(50)] + 95
MS = 8.75 + 95 = 103.75

In this example, we specified a set of orthogonal settings, predicted the expected response, and converted the orthogonal settings to a set of corresponding machine settings.

Background

A valve manufacturer has evaluated the effect of two factors (seat chamfer angle and hardness of o-ring) on the life of a valve. The response is the mean time between failure (MTBF) in hours.

Factor   Description               (-)   (+)
A        Chamfer angle, degrees    46    32
B        O-ring hardness, shore A  62    78
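The OS-to-MS conversion is easy to encode. A minimal sketch (function name is my own):

```python
def os_to_ms(os_tgt, ms_minus, ms_plus):
    # MS = 0.5 * (OS_tgt * dMS) + MSbar, with dMS = MS+ - MS-
    d_ms = ms_plus - ms_minus
    ms_bar = (ms_plus + ms_minus) / 2
    return 0.5 * os_tgt * d_ms + ms_bar

# Factor C from the bond-strength example: surface finish runs from 70 to 120 rms
print(os_to_ms(0.35, 70, 120))  # 0.5*(0.35*50) + 95 = 103.75
```

Note that the conversion maps -1 and +1 exactly onto the low and high machine settings, so the orthogonal scale is just a rescaled version of the machine scale.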

What is the expected average MTBF if we choose a chamfer angle of 38 degrees and an o-ring hardness, shore A, of 75?

Effects

Factor   Effect
A        +350
B        -450
AB       -120

Error of effects = +/-45
Average of all responses = 1800

All effects are statistically significant. We start by developing the empirical equation. Adding one-half of each effect to the average of all responses, we have

Y = 1800 + 175(A) - 225(B) - 60(A)(B)

These target values are given in MSs and cannot be directly substituted into our empirical equation. We must convert each MS to an orthogonal setting (OS). To accomplish this, we can utilize the following relationship:

OS = 2[(MStgt - MSbar)/dMS]

where: MStgt = the specified machine setting; MSbar = the average of the - and + settings for the factor in question used in the experiment; dMS = the + setting minus the - setting for the factor in question used in the experiment.

For factor A: MSbar = (46 + 32)/2 = 39 and dMS = MS+ - MS- = 32 - 46 = -14.

Note that dMS may be negative. This can occur when the + setting is less than the - setting.

OS = 2[(MStgt - MSbar)/dMS] = 2[(38 - 39)/(-14)] = 2(0.07) = 0.14
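The reverse conversion, MS to OS, handles a negative dMS automatically. A minimal sketch:

```python
def ms_to_os(ms_tgt, ms_minus, ms_plus):
    # OS = 2 * (MS_tgt - MSbar) / dMS; dMS may be negative when + < -
    d_ms = ms_plus - ms_minus
    ms_bar = (ms_plus + ms_minus) / 2
    return 2 * (ms_tgt - ms_bar) / d_ms

# Chamfer angle: - setting is 46 degrees, + setting is 32 degrees, target is 38
print(round(ms_to_os(38, 46, 32), 2))  # 2 * (38 - 39) / -14 = 0.14
```

Because the + setting (32) is below the - setting (46), dMS is negative, and a target below the midpoint correctly maps to a positive orthogonal value.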

Performing a similar calculation for factor B set to an MS of shore A = 75, we find the corresponding OS to be

MStgt = 75,  MSbar = 70,  dMS = 16
OS = 2[(75 - 70)/16] = 2(5/16) = +0.63

We now have OS values for factors A and B, which are +0.14 and +0.63, respectively. These values will be used in our empirical equation:

Y = 1800 + 175(A) - 225(B) - 60(A)(B)
Y = 1800 + 175(+0.14) - 225(+0.63) - 60(+0.14)(+0.63)
Y = 1800 + 24.5 - 141.75 - 5.29
Y = 1677.46

We have demonstrated how to convert from MSs to OSs and vice versa when converting specific values of OS to MS. The resulting model will always work when the factors are set to their (-) and (+) OSs. The following is an example of setting the OS values to achieve an objective average response.

Objective: To achieve complete dissolution of a material in 40 seconds, with minimum variation.

Factor   Description             (-)   (+)
A        Particle size, microns  100   200
B        pH                      6.2   7.9

Testing Curvature or Nonlinearity of a Model (and Complete Case Study)

In traditional designs of experiments (DOEs) we determine effects to establish a linear or first-degree relationship to model the expected average response as a function of the statistically significant factors. But will the model work at settings between the (-) and (+) OSs? We will test for curvature or nonlinearity.

The grand average of all our base-run experimental data is actually the average where half the settings were (-) and half were (+), or the equivalent of running all the experiments at an OS of (0). These (0) points, or what we call center points, should yield the same average as the average of all the base-run experimental data. Any difference between the average of the center points and the average of all the base-run data is called curvature, or the nonlinearity effect. If the difference is zero, we have a perfect linear model.
Center points can be determined only if the factor is quantitative and not qualitative. Examples of qualitative factors would be the type of polymer used in an o-ring, the brand of cutting fluid, the production shift, and so on.

Consider the following case, using the dissolution objective above.

Experimental data:

Run   A   B   AB   Average
1     -   -   +    31.7
2     +   -   -    58.0
3     -   +   -    101.7
4     +   +   +    19.7

(Three replicate responses, the standard deviation S, and the variance S^2 are recorded for each run.)

Main effect for factor A = average of A+ - average of A-:

Effect A = (58.0 + 19.7)/2 - (31.7 + 101.7)/2 = 38.85 - 66.70 = -27.85

Main effect for factor B = average of B+ - average of B-:

Effect B = (101.7 + 19.7)/2 - (31.7 + 58.0)/2 = 60.70 - 44.85 = +15.85

Interaction AB effect = average of AB+ - average of AB-:

Effect AB = (31.7 + 19.7)/2 - (58.0 + 101.7)/2 = 25.70 - 79.85 = -54.15

In addition to the data obtained from the base-run experiment, we will collect data from five center points. The setting for the center point for factor A will be the midpoint, or average, of the two settings for the base experiment: (100 + 200)/2 = 150. The setting for the center point for factor B will likewise be the midpoint of its two settings: (6.2 + 7.9)/2 = 7.05. Randomly during the base experiment, we will run five experiments where the particle size is 150 and the pH is 7.05. The average of the five center-point responses is 50.04.

We will calculate the curvature, or nonlinearity effect, NL by taking the difference between the grand average of the base run, Ybar(B), and the average of the center points, Ybar(C):

NL = Ybar(B) - Ybar(C) = 52.78 - 50.04 = +2.74
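The curvature calculation is a simple difference of means. A minimal sketch, using the run averages above and hypothetical center-point values chosen so that their mean is the chapter's 50.04:

```python
def nonlinearity(base_averages, center_points):
    # NL = grand average of the base runs minus the average of the center points
    y_b = sum(base_averages) / len(base_averages)
    y_c = sum(center_points) / len(center_points)
    return y_b - y_c

base = [31.7, 58.0, 101.7, 19.7]   # run averages from the 2^2 example
centers = [50.04] * 5              # hypothetical, averaging to 50.04
print(nonlinearity(base, centers)) # close to the chapter's +2.74
```

A nonlinearity effect near zero supports the linear model; a large one suggests a second-degree model is needed, as discussed later in the section.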

We will use NL later to determine the suitability of our linear model. All effects will be included in our prediction model. The empirical model is:

Y = Grand average + (Effect A/2)(A) + (Effect B/2)(B) + (Effect AB/2)(A)(B)
Y = 52.78 + (-27.85/2)(A) + (+15.85/2)(B) + (-54.15/2)(A)(B)
Y = 52.78 - 13.93(A) + 7.93(B) - 27.08(A)(B)

The data obtained from our center points may be used in the determination of our error of effects:

Error of effects = +/- t(a/2, dfB + dfC) x SPC x sqrt(4/NB)

where: a = 1 - C and C = the level of confidence, C = 0.95 for this example
dfb = (r - 1)2^(K-F) = (3 - 1)2^2 = 8
dfc = (number of center points) - 1 = 4
t = t-score from table, t(.05, 12) = 1.782
SPC = standard deviation with center points:

SPC = sqrt[(dfb x Sb^2 + dfc x Sc^2)/(dfb + dfc)] = sqrt[((8)(23.26) + (4)(13.07))/12] = 4.46

Sb^2 = average variance of the base-run experiment = 23.26
Sc^2 = variance of the center points = 13.07
NB = total number of observations in the base experiment = 12, so sqrt(4/12) = 0.58

Error of effects = (1.782)(4.46)(0.58) = +/-4.61

Any effects outside the limits of +/-4.61 are statistically significant at a level of confidence of 95 percent. Applying this criterion to all the effects, we conclude that they are all statistically significant. We don't know the true value for effect A, but we are 95 percent confident it is between -32.46 (-27.85 - 4.61) and -23.24 (-27.85 + 4.61). The fact that the interval does not contain zero means that the probability that the true effect is zero is 5 percent or less. A similar treatment can be made for all effects.
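The pooled error-of-effects computation can be sketched directly from the quantities above (function name is my own; the chapter's rounded intermediates give +/-4.61, so the exact arithmetic lands slightly below that):

```python
import math

def error_of_effects(t, df_b, var_b, df_c, var_c, n_b):
    # S_PC pools the base-run variance with the center-point variance,
    # then E = t * S_PC * sqrt(4 / N_B)
    s_pc = math.sqrt((df_b * var_b + df_c * var_c) / (df_b + df_c))
    return t * s_pc * math.sqrt(4 / n_b)

# Values from the 2^2 example: Sb^2 = 23.26 (df 8), Sc^2 = 13.07 (df 4), N_B = 12
print(error_of_effects(1.782, 8, 23.26, 4, 13.07, 12))  # close to 4.61
```

Any effect whose magnitude exceeds this error bound is judged statistically significant.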

We now have our linear empirical model to predict average responses as a function of particle size and pH. It should always be our objective to reduce variation wherever possible. Do we have a factor that is responsible for variation? To answer this question, we will perform an F-test. We calculate an F ratio for each of the main effects and compare this calculated F ratio (Fcalc) to a critical F ratio (Fcrit). The Fcalc ratio is calculated by averaging the variances of the runs where the factor was set positive, averaging the variances where the factor was set negative, and then dividing the larger result by the smaller result. If Fcalc > Fcrit, we reject the hypothesis that the variance for the + setting is equal to the variance of the - setting and conclude that the factor in question does affect variation.

Fcalc factor A: S+^2 = 5.67, S-^2 = 40.36, Fcalc = 40.36/5.67 = 7.12
Fcalc factor B: S+^2 = 41.34, S-^2 = 4.85, Fcalc = 41.34/4.85 = 8.52

Fcrit is determined by looking up the F value at a/2 and two equal degrees of freedom, where a = risk. The two degrees of freedom are determined by

df1 = df2 = (number of replicates - 1)(number of runs)/2 = (2)(4)/2 = 4

For 95 percent confidence, the risk is 5 percent and a/2 = 2.5 percent, giving Fcrit = F(.025, 4, 4) = 9.60. Neither of the factors is significant at a 5 percent risk; however, if we increase our risk (of being wrong) to 10 percent, Fcrit becomes F(.05, 4, 4) = 6.39, and both factors are significant in their effect on variation. We will restrict factor B such that variation is minimal.

The greater the variance, the greater the variation. We have more variation when B is set +; therefore, to minimize variation, we will set factor B to its - OS (a pH of 6.2). Letting B = -1:

Y = 52.78 - 13.93(A) + 7.93(-1) - 27.08(A)(-1)
Y = 44.85 + 13.15(A)

We now have a model that provides minimum variation. But is a linear model suitable in this case? Recall our calculation of the nonlinearity effect, NL = +2.74. Our linear model will always work at the extreme settings (OSs of +1 or -1) but not necessarily for settings between -1 and +1. If the nonlinearity effect is sufficiently large, we should consider using another model, such as a second-degree relationship.

Consider the following case. A 2^3 experiment has been conducted to evaluate three factors and how they affect RMS surface finish.

Factor   Description                 (-)    (+)
A        Cutting speed, rpm          1000   1800
B        Feed rate, inches/minute    1      1.5
C        Cutting angle, degrees      5      18

The empirical model follows:

Y = 100 + 23A - 12B + 6C - 8AB + 3BC

OSs:

A    B    C     Predicted response
-1   +1   -1    64
 0    0    0    100
+1   -1   +1    146

The model we have established assumes that a linear relationship is appropriate. The average of our actual center points from experimental data yields 160, significantly different from our predicted value of 100. The reason for this discrepancy is nonlinearity. See the following figure:

(Figure: response plotted from A- B+ C- through the center point A0 B0 C0 to A+ B- C+; the straight line is the linear model Y = 100 + 23A - 12B + 6C - 8AB + 3BC, passing through 64, 100, and 146, while the reality curve passes through 160 at the center.)

Even under the best circumstances we will always have some deviation from the straight-line linear model. The question is how much deviation, or nonlinearity, we can have and still assume that a linear model is acceptable. Recall that in our 2^2 case, the nonlinearity was

NL = Ybar(B) - Ybar(C) = 52.78 - 50.04 = +2.74

Is an NL of +2.74 acceptable? To answer this question, we will calculate the error of nonlinearity:

+/- t(a/2, dfB + dfC) x SPC x sqrt(1/NB + 1/CP)

Note that the first and second terms are exactly the same as those used in the calculation of the error of effects. The third term changes slightly to sqrt(1/NB + 1/CP), where NB is the total number of observations in the base experiment (12) and CP is the number of center points (5 for our example):

+/- (1.782)(4.46)(0.532) = +/-4.22

The error of NL is greater than the NL; therefore, we conclude that nonlinearity is not a problem and accept our linear model at 95 percent confidence.

We have addressed the objective of minimizing variation by setting factor B to a -1 OS. Our objective is to achieve a dissolution time of 40 seconds. We set Y to 40 and solve for the OS of factor A:

40 = 44.85 + 13.15(A)

Solving for A, we find the OS to be -0.37. The OS of -0.37 must be converted to an equivalent MS:

OS   -1    -0.37   0      +1
MS   90    ?       97.5   105

MS = 0.5[(dMS)(OStgt)] + MSbar

where: dMS = MS+ - MS- = 105 - 90 = 15 and MSbar = (MS+ + MS-)/2 = (105 + 90)/2 = 97.5

MS = 0.5[(15)(-0.37)] + 97.5 = 94.7

Setting the pH to 6.2 and the temperature to 94.7 degrees, we will achieve a dissolution time of 40 seconds with minimum variation.

An Alternative Method for Evaluating Effects Based on Standard Deviation

Montgomery has shown that ln(S+^2/S-^2) has an approximate standard normal distribution. Using the standard deviation as a response, we square the average standard deviation where the factor is + and where it is -, and then take the natural log of the + term divided by the - term to obtain a test statistic. If the absolute value of this test statistic is greater than Z(a/2), we may conclude that the effect is statistically significant. We are essentially performing a hypothesis test:

1. Ho: sigma+^2 = sigma-^2
   Ha: sigma+^2 not equal to sigma-^2
2. Compute the test statistic, ln[(S+)^2/(S-)^2].
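Solving the reduced model for a target response and converting the result to a machine setting can be sketched as follows, assuming the reduced model Y = 44.85 + 13.15A and the 90-to-105 machine scale from the example:

```python
def solve_setting(target, intercept, slope):
    # Invert Y = intercept + slope * A for the orthogonal setting A
    return (target - intercept) / slope

def os_to_ms(os_tgt, ms_minus, ms_plus):
    # MS = 0.5 * (dMS * OS_tgt) + MSbar
    return 0.5 * (ms_plus - ms_minus) * os_tgt + (ms_plus + ms_minus) / 2

a_os = solve_setting(40, 44.85, 13.15)
print(round(a_os, 2))                     # -0.37
print(round(os_to_ms(a_os, 90, 105), 1))  # 94.7
```

The solved orthogonal setting must lie between -1 and +1 for the prediction to stay inside the experimental region.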

3. If ln[(S+)^2/(S-)^2] > Z(a/2), reject Ho and accept Ha, and conclude that the effect is significant. For a 95 percent level of confidence, Z(a/2) = 1.96.

Example: Consider the 2^2 experiment where the objective is to minimize the variation in the percent shrinkage of pressure-treated wood. Note that we are not minimizing the shrinkage but rather the variation in the shrinkage. Two factors will be evaluated. Factor A is pressure, where + = 150 psi and - = 100 psi. Factor B is time, where + = 4 hours and - = 2 hours. Four experimental runs are planned. Three replicates are run, and the percent shrinkage is determined for each run.

Run   A   B   AB   Average   Standard deviation
1     -   -   +    2.500     0.0777
2     +   -   -    4.637     0.0361
3     -   +   -    2.397     0.3121
4     +   +   +    4.28      0.4658

Effects based on the standard deviation are as follows:

A main effect:
(S+)^2 = [(0.0361 + 0.4658)/2]^2 = (0.2510)^2 = 0.0630
(S-)^2 = [(0.0777 + 0.3121)/2]^2 = (0.1949)^2 = 0.0380

B main effect:
(S+)^2 = [(0.3121 + 0.4658)/2]^2 = 0.1513
(S-)^2 = [(0.0777 + 0.0361)/2]^2 = 0.0032

AB interaction:
(S+)^2 = [(0.0777 + 0.4658)/2]^2 = 0.0738
(S-)^2 = [(0.0361 + 0.3121)/2]^2 = 0.0303

Test for statistical significance as follows:

Effect A: ln[(S+)^2/(S-)^2] = ln(0.0630/0.0380) = 0.51

Since 0.51 is not greater than 1.96, factor A is not significant.
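The full ln-ratio test can be sketched directly from the run standard deviations (the function name is my own):

```python
import math

def ln_variance_ratio(s_list, levels):
    """ln of (mean S at +)^2 over (mean S at -)^2, to be compared with z_(a/2)."""
    plus = [s for lv, s in zip(levels, s_list) if lv > 0]
    minus = [s for lv, s in zip(levels, s_list) if lv < 0]
    s_plus = sum(plus) / len(plus)
    s_minus = sum(minus) / len(minus)
    return abs(math.log(s_plus ** 2 / s_minus ** 2))

# Run standard deviations from the shrinkage example (runs 1-4)
s = [0.0777, 0.0361, 0.3121, 0.4658]
B = [-1, -1, 1, 1]
stat = ln_variance_ratio(s, B)
print(stat > 1.96)  # factor B is significant at 95 percent confidence
```

Running the same function with the A column (-1, +1, -1, +1) gives a statistic well below 1.96, matching the conclusion that pressure does not affect variation.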

Effect B: ln[(S+)^2/(S-)^2] = ln(0.1513/0.0032) = 3.86

Since 3.86 > 1.96, factor B is significant.

Effect AB: ln[(S+)^2/(S-)^2] = ln(0.0738/0.0303) = 0.89

Since 0.89 is not greater than 1.96, the AB interaction is not significant.

Conclusion

To minimize the variation in the percent shrinkage, the time of treatment should be two hours. Since the sign of the main effect for B is +, we choose to set factor B to the - setting, or two hours; note that we reverse the sign of the effect to determine the setting for the factor in order to minimize the response. The pressure is not a statistically significant factor with respect to minimizing variation. No calculated effects were made using the shrinkage response itself; at best, it is obvious that setting B - would minimize the actual percent shrinkage as well as the variation in the shrinkage.

Plackett-Burman Screening Designs

These series of designs are modestly useful for screening but should be used with great caution. At best, the Plackett-Burman designs are of resolution III; confounding of main effects with two- and three-factor interactions is significant. The design columns for several Plackett-Burman designs follow:

n = 8    + + + - + - -
n = 12   + + - + + + - - - + -
n = 16   + + + + - + - + + - - + - - -
n = 20   + + - - + + + + - + - + - - - - + + -
n = 24   + + + + + - + - + + - - + + - - + - + - - - -

To construct the design matrix, the appropriate generating column is selected. The first column is generated by placing the signs in a vertical column. The last sign of the first column becomes the first sign in the second column, and the remainder of the second column follows the sequence designated by the generating column. After n - 1 columns have been developed, the final row is made by using all -'s. An example is shown for a seven-factor, eight-run Plackett-Burman design.

Run   A   B   C   D   E   F   G
1     +   -   -   +   -   +   +
2     +   +   -   -   +   -   +
3     +   +   +   -   -   +   -
4     -   +   +   +   -   -   +
5     +   -   +   +   +   -   -
6     -   +   -   +   +   +   -
7     -   -   +   -   +   +   +
8     -   -   -   -   -   -   -

In the event that fewer than eight factors are to be screened, simply drop the required number of columns (factors).

Case Study

You have been asked to examine a process with respect to three factors that are suspected to influence a response. The objective is for the process to yield a response of 55.0 with a minimum of variation. The factors and their settings for this experiment are as follows:

Factor   Description          (-)   (+)
A        Feed rate, lbs/hr    40    60
B        Temperature, deg C   30    70
C        Percent filler       22    30

The design matrix and data are as follows (two replicate responses, I and II, their average, and the run standard deviation were recorded for each run):

Run   A   B   C   AB   AC   BC   ABC   Average
1     -   -   -   +    +    +    -     43.0
2     +   -   -   -    -    +    +     29.0
3     -   +   -   -    +    -    +     59.0
4     +   +   -   +    -    -    -     37.0
5     -   -   +   +    -    -    +     71.0
6     +   -   +   -    +    -    -     49.0
7     -   +   +   -    -    +    -     67.0
8     +   +   +   +    +    +    +     45.0
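The cyclic construction described above is easy to automate. A minimal sketch, encoding + as 1 and - as -1 (the function name is my own):

```python
def plackett_burman(generator):
    """Build an n-run PB design by cyclically shifting the generating column
    down n-1 times, then appending a final row of all minus signs."""
    col = list(generator)
    n = len(col) + 1
    cols = []
    for _ in range(n - 1):
        cols.append(col[:])
        col = [col[-1]] + col[:-1]  # last sign moves to the top
    rows = [[cols[j][i] for j in range(n - 1)] for i in range(n - 1)]
    rows.append([-1] * (n - 1))
    return rows

design = plackett_burman([1, 1, 1, -1, 1, -1, -1])  # n = 8 generator: + + + - + - -
for row in design:
    print(row)
```

The first run comes out as (+, -, -, +, -, +, +) and the final run is all minus signs, matching the seven-factor, eight-run matrix shown above.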


43 + 1.85 + 3.025 .H1317_CH17.99 Any effects greater than the experimental error 5.56 + 5. E = ( 2.71 + 1. The effects that are significant are main effects A.20 8 183 2 = 0. Effects based on standard deviation are as follows: Main effect A: ⎛ 1. The model for predicting the response using only those effects that are significant is Yexpected = grand average – –12 (effect A)(A) + –12 (effect C)(C) – –12 (effect BC)(B)(C).8 = 2.59 ⎞ = 0.48 + 9.14 + 1.43 + 7.303 Error.50 (4)(2) t 0.56 + 1.31 ⎝ ⎠ ⎝ ⎠ 4 4 Main effect C: ⎛ 1.43 + 3. + ( 9.59 + 9.98 ⎞ − ⎛ 5. Yexpected = 50 – 5(A) + 8(C) – 4(B)(C).43 + 7.48 ⎞ = 2.98 ⎞ − ⎛ 3.56 + 7.11 ⎝ ⎠ ⎝ ⎠ 4 4 Interaction AB effect: ⎛ 3.98 ⎞ − ⎛ 1.14 + 1.59 + 9.98 ⎞ − ⎛ 3.43 + 5.71 + 7.43 + 5.56 + 1. and the two-factor interaction effect BC.20 )( 0.85 + 1. C.23 ⎝ ⎠ ⎝ ⎠ 4 4 Interaction AC effect: ⎛ 3.62 ⎝ ⎠ ⎝ ⎠ 4 4 Main effect B: ⎛ 5.59 ⎞ = 0.98 ) 2 = 5.85 + 3.48 + 9.27 ⎝ ⎠ ⎝ ⎠ 4 4 . .59 + 9. .78 ⎝ ⎠ ⎝ ⎠ 4 4 Interaction BC effect: ⎛ 3.85 + 9.56 + 1.48 + 3.59 ⎞ = 1.71 + 7.14 + 1.71 + 1.48 ⎞ = 0.48 + 3.qxd 10/18/07 11:51 AM Page 183 Designed Experiments Sp = ( 3.50 ) = 5.56 + 5.14 + 1.98 ⎞ − ⎛ 3.56 ) 2 + (1.71 + 7.14 ⎞ = 0.303 )( 5.85 + 1.71 + 1.14 + 3.43 ) 2 + .99 are deemed statistically significant at the 95 percent level.85 + 1.98 ⎞ − ⎛ 1.
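The error-of-effects calculation can be reproduced in a few lines; this is a hedged sketch (the function name is illustrative), using the pooled standard deviation Sp = 5.20 computed above and the critical value t(0.025, 8 df) = 2.306 from tables (the text rounds it to 2.303):

```python
import math

# Error of an effect for a 2^(k-f) factorial run for r replicates (cycles):
# E = t(alpha/2, df) * Sp * sqrt(2 / (p * r)),
# where p = number of +'s in a column and Sp = pooled standard deviation.
def error_of_effect(t_crit, sp, p, r):
    return t_crit * sp * math.sqrt(2 / (p * r))

# Values from the example: alpha = 0.05, k = 3, f = 0, r = 2, p = 4,
# Sp = 5.20; t(0.025, 8 df) = 2.306 from tables.
E = error_of_effect(2.306, 5.20, 4, 2)
print(round(E, 2))
```

Effects whose absolute value exceeds E (approximately 5.99) — main effects A and C and the interaction BC — are the ones the text declares statistically significant at the 95 percent level.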

the test statistic must be greater than 1.qxd 10/18/07 184 11:51 AM Page 184 The Desk Reference of Statistical Quality Methods Interaction ABC effect: ⎛ 1. Test statistic: ln ( S+ ) 2 ( S− ) 2 If the test is greater than Zα/2.71 + 1. In this case. .11 1.56 + 7. factor B must be set at the – setting.59 ⎞ (S− )2 = ⎜ ⎟⎠ = 3.62. The level of significance chosen is 90 percent.71 + 1.62 2.48 + 9.31 (Statistically significant at a confidence level of 90%) 0.01.68.. or 30°C.H1317_CH17.43 + 7. Calculate the test statistic for the effects as follows: Main effect A: 2 ⎛ 1.01 25. we will reject the hypothesis that the effect is not statistically significant at a level of confidence of 1 – α.645.98 ⎞ − ⎛ 3.43 + 5.48 + 3.37 In order to minimize variation.14 + 1.23 0.10 = ln = ln 1. ⎝ 4 2 2 ⎛ 3.85 = 0.14 + 1.85 + 3.68 13.54 Since the test statistic is less than the critical value of 1. Similar calculations for all of the factors and interactions give the following test statistics: Factor A B C AB AC BC ABC Test statistic 0. We conclude that factor A is not a significant factor affecting variation.37 ⎝ ⎠ ⎝ ⎠ 4 4 Determine the statistical significance for effects based on standard deviation as a response.27 0. The test statistic will be the natural log of the ratio of the square of the average standard deviation of the –’s to the +’s.56 + 5. Each effect based on standard deviation will be tested independently for statistical significance.98 ⎞ (S+ ) = ⎜ ⎟⎠ = 5. we cannot reject the null hypothesis that the variation from a positive setting of A is any different than the variation from a negative setting of A.645.59 ⎞ = 0.85 + 9. ⎝ 4 ln (S+ )2 5.78 0. 2 = ln (S− ) 3.

Bibliography

Barrentine, L. B. An Introduction to Design of Experiments. Milwaukee, WI: ASQ Quality Press, 1999.
Box, G. E. P., W. G. Hunter, and J. S. Hunter. Statistics for Experimenters. New York: John Wiley & Sons, 1978.
Montgomery, D. C. Design and Analysis of Experiments. 4th edition. New York: John Wiley & Sons, 1997.
Schmidt, S. R., and R. G. Launsby. Understanding Industrial Designed Experiments. 4th edition. Colorado Springs, CO: Air Academy Press, 1994.


Discrete Distributions

When the characteristic being measured can only take on integer values such as 0, 1, 2, . . . , the probability distribution is a discrete distribution. The number of nonconforming units in a sample would be a discrete distribution.

Hypergeometric Distribution

The hypergeometric distribution is the only discrete distribution that has the lot or population size as an element. It is appropriately used when randomly sampling n items without replacement from a lot of N items of which D items are defective. The probability of finding exactly x defective items is given by

P(x) = [C(D, x) C(N − D, n − x)] / C(N, n),

where: D = nonconforming units in the population or lot
N = size of the population or lot
n = sample size
x = number of nonconforming units in the sample

Note: the expression C(a, b) = a!/[b!(a − b)!].

The mean and standard deviation of the hypergeometric distribution are

Mean, μ = nD/N

and

Standard deviation, σ = sqrt[(nD/N)(1 − D/N)((N − n)/(N − 1))].
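The mean and standard deviation formulas can be checked numerically; a hedged sketch (the function name is illustrative), using the lot parameters from the worked example that follows (N = 60, n = 20, D = 15):

```python
import math

# Mean and standard deviation of the hypergeometric distribution for a
# lot of N items containing D nonconforming, sampled n without replacement.
def hypergeom_mean_sd(N, n, D):
    mean = n * D / N
    sd = math.sqrt((n * D / N) * (1 - D / N) * ((N - n) / (N - 1)))
    return mean, sd

mean, sd = hypergeom_mean_sd(60, 20, 15)
print(round(mean, 2), round(sd, 3))
```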

we must sum the probabilities of getting exactly zero.0087 + 0.H1317_CH18. Only two outcomes are possible for each trial.0013.043 + 0. what is the probability that at least one woman will be in the first five who deplane? What is the probability that the first five deplaning will all be women? Answer: 0.103 × 10 12 ) P( x ) = P( x ) = 4. In quality applications. Problem: A commuter airplane has 28 passengers. 2.1197 If we want to know the probability of getting three or fewer nonconforming units P(£3). nine of them are women.1722. For example. Binomial Distribution The binomial distribution is independent of the population size and assumes the following: 1.1918 × 10 15 ⎛ 60 ⎞ ⎝ 20 ⎠ P ( x ) = 0. tossing a coin and getting heads does not influence the probability of getting heads on the next toss.8817 and 0. one. we determine these individual probabilities and the sum as P(x ≤ 3) = 0. P(x ≤ 3) = P(x = 0) + P(x = 1) P(x = 2) + P(x = 3) Using the hypergeometric distribution function. P(x ≤ 3) = 0. If a sample of 20 is randomly chosen from this lot. Assuming a random order of deplaning.1197. The probability of getting heads in a fair toss is always 50 percent. two.0008 + 0. Independence of outcome means that the result of one trial does not influence the outcome of another trial. Only two possibilities are availablegood or bad.qxd 188 10/15/07 4:26 PM Page 188 The Desk Reference of Statistical Quality Methods Example application of the hypergeometric distribution: A collection (lot) of 60 parts is known to have 15 nonconforming. the only outcomes resulting from an inspection are that the items are either conforming or nonconforming to a requirement. and three. . what is the probability that exactly three nonconforming parts will be found? Lot size N = 60 Sample size n = 20 Number defective in lot D = 15 ⎛ 15 ⎞ ⎛ 60 − 15 ⎞ ⎝ 3 ⎠ ⎝ 20 − 3 ⎠ ( 455 )(1.
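The lot-inspection example can be reproduced exactly with `math.comb`; a hedged sketch (the function name is illustrative) that recovers both the probability of exactly three nonconforming parts and the cumulative probability of three or fewer:

```python
import math

# Exact hypergeometric probability of finding x defectives when sampling
# n items without replacement from a lot of N containing D defectives.
def hypergeom_pmf(x, N, n, D):
    return math.comb(D, x) * math.comb(N - D, n - x) / math.comb(N, n)

# Lot of N = 60 with D = 15 nonconforming, random sample of n = 20:
p3 = hypergeom_pmf(3, 60, 20, 15)                             # exactly three
p_le3 = sum(hypergeom_pmf(x, 60, 20, 15) for x in range(4))   # three or fewer
print(round(p3, 4), round(p_le3, 4))
```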

what is the probability of getting exactly three nonconforming units? n = 68 p = 0.07 q = 0. and three must be determined and these individual probabilities summed in order to determine the probability of getting three or fewer.qxd 10/15/07 4:26 PM Page 189 Discrete Distributions 189 The standard deviation of the binomial is given by σ = npq .93)65 = (50116)(0. What is the probability that three or fewer defective units will be found in a random sample of 20 units? The probabilities of getting exactly zero.0 percent nonconforming. one. The binomial equation or probability distribution function is ⎛ n⎞ P( x ) = ⎜ ⎟ P xQ n − x .07)3 (0.93 x=3 n P( x ) = ⎛ ⎞ P x Q n − x ⎝ x⎠ 68 P( x = 3) = ⎛ ⎞ (0. The mean is given by μ = np. If a sample of 68 is chosen at random.0089) = 0. where: n = sample size p = proportion defective q = 1 – p.00034)(0. P(x ≤ 3) = P(x = 0) + P(x = 1) + P(x = 2) + P(x = 3) The probability of getting exactly zero is a unique case for the binomial equation. ⎝ x⎠ Example 1: A process is known to perform at 7. The binomial equation for this case reduces to P(x = 0) = (1 – p)n .1517 ⎝ 3⎠ Example 2: A process is running approximately 25 percent defective. two.H1317_CH18.
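Example 1 can be checked directly; a hedged sketch (the function name is illustrative). Carrying full precision gives approximately 0.154; the text's 0.1517 reflects rounding of the intermediate terms 0.00034 and 0.0089:

```python
import math

# Binomial probability function P(x) = C(n, x) * p^x * q^(n-x), q = 1 - p.
def binom_pmf(x, n, p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

# Example 1: process at 7 percent nonconforming, sample of n = 68,
# probability of exactly x = 3 nonconforming units:
p3 = binom_pmf(3, 68, 0.07)
print(round(p3, 4))
```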

1339 = 0.25)(0. ⎝ 2⎠ ⎛ 20 ⎞ P ( x = 3) = ⎜ ⎟ (. ⎛ 20 ⎞ P ( x = 2) = ⎜ ⎟ (. P(x = 0) = (0. If the factor p in the binomial approaches zero and the sample n approaches infinity such that the term np approaches a constant λ. Mean.75)18 = 0. x! where λ = np.25)2 (. which is the product of the process proportion defective p and the sample size n chosen from the process or population. μ = λ = np Standard deviation.0669 + 0. and three defective units. In this example.25)3 (.qxd 10/15/07 190 4:26 PM Page 190 The Desk Reference of Statistical Quality Methods or P(x) = Qn.0211.0032. where defects per unit or defects are monitored with respect to time. we calculate the probability of getting exactly two and three defective units.0042) = 0.H1317_CH18.75)20 = 0. The Poisson is unique in that both the variance and the mean are equal to λ.75)17 = 0.1339. The Poisson distribution has only one parameter λ.0211 + 0. two. P(x ≤ 3) = 0. ⎝ 1⎠ In a similar manner. ⎝ 3⎠ The probability of getting three or fewer defective units is the sum of the probabilities of getting zero.2281 Poisson Distribution The Poisson distribution can be thought of as a limiting form of the binomial distribution. one.0032 + 0. the binomial becomes the Poisson distribution.0669. Any randomly . The probability of getting exactly one defective is ⎛ 20 ⎞ P ( x = 1) = ⎜ ⎟ (0. The Poisson distribution is given by P( x ) = e −λ λ x .25)1 (0.75)19 = (20)(0. σ = np A typical application of the Poisson distribution is that of the u and c control charts.

000 n = sample size = 60 np = λ = ( 60 )( 0.89 x=0 P( x ) = e − λ λx e −0. and two defects.036 126 . P(x ≤ 2) = 0.2490 + 0. what is the probability of finding no more than two defects in a sample of 60 garments? p = proportion defective = 4536 = 0.8 λ 0 ( 0.4493 )(1) P( x = 0 ) = = = 0.6333 .4493 x! 0! 1 Example 2: During a period of one week. defects per meter. P ( x ≤ 2 ) = P ( x = 0 ) + P ( x = 1) + P ( x = 2 ) e − λ λ x e −2.2690 2! 2! P( x = 0 ) = The probability of getting two or fewer defectives is the sum of the probabilities of getting zero.16 The probability of finding no more than two defects is the probability of finding two or fewer defectives.16 λ 0 ( 0.1153 x! 0! 1 e −2. These errors are randomly distributed throughout the text. or defects per square yard) represents a suitable application of the Poisson distribution.1153 )( 2. 126. one.02 ) = 0.16 ) 2 P( x = 2 ) = = = 0.16 ) 1 P ( x = 1) = = = 0. What is the probability of not finding an error on 40 randomly selected pages? p = proportion nonconforming = 9 = 0. one.2490 1! 1! e −2.000 garments have been manufactured in which 4536 defects were found.02 450 np = λ = ( 40 )( 0.16 λ1 ( 0. Example 1: A textbook has 450 pages on which nine typographical errors are found.1153 )( 2.2690 + 0. This probability is determined by summing the probabilities of finding exactly zero.qxd 10/15/07 4:26 PM Page 191 Discrete Distributions 191 occurring event that takes place on a per unit basis (such as defects per hour.063 ) = 2.1153 + 0.16 λ 2 ( 0.H1317_CH18. If the process continues to perform at this level.1153 )(1) = = = 0. and two defectives.
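Both Poisson examples can be verified with a short function; a hedged sketch (the function name is illustrative), using λ = np = (40)(0.02) = 0.8 for the typographical errors and λ = (60)(0.036) = 2.16 for the garment defects:

```python
import math

# Poisson probability function P(x) = e^(-lam) * lam^x / x!
def poisson_pmf(x, lam):
    return math.exp(-lam) * lam**x / math.factorial(x)

# Example 1: probability of finding no errors on 40 pages (lam = 0.8):
p0 = poisson_pmf(0, 0.8)

# Example 2: probability of two or fewer defects in 60 garments (lam = 2.16):
p_le2 = sum(poisson_pmf(x, 2.16) for x in range(3))

print(round(p0, 4), round(p_le2, 4))
```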

not necessarily only integers.H1317_CH18.5 − 12 2 − 0.qxd 192 10/15/07 4:26 PM Page 192 The Desk Reference of Statistical Quality Methods The Normal Distribution as an Approximation of the Binomial The binomial distribution is appropriate for independent trials. larger sample sizes are needed.50 n = 80 p = 0. If the number of trials n is large.15. It is appropriate to use a continuity correction factor in the normal distribution. binomial.15) ⎠ P( x = 2) = Θ( −1. Example 1: A sample of n = 80 is chosen from a population characterized by p = 0. P( x = a ) = 1 1 [ ( a − np ) 2 / np ( 1− p ) ] e2 2 πnp (1 − p ) The binomial distribution is discrete.15 np = 12.31) P( x = 2) = 0.50 and n > 10.14)(12)(1 − 0. then the central limit theorem may be used to justify the normal distribution with a mean of np and a variance of np(1 – p) as an approximation to the binomial. each of which has a probability p and a number of trials n.15) ⎠ ⎝ 2(3. The normal distribution is not appropriate for p < 1/(n + 1) or p > n/(n + 1).0 ⎛ ⎞ ⎛ ⎞ 2 + 0.09510 P( x = 2) = 0. whereas the normal distribution is continuous. What is the probability of getting exactly two defectives using the normal. For larger values of p.5 − np ⎞ ⎛ x + 0.19) − Θ( −1. and Poisson distribution as a model? Which distribution would be the best for this calculation? .11702 − 0.02192 Example 2: A sample of n = 60 items is chosen from a process where p = 0.14)(12)(1 − 0.5 − 12 P( x = 2) = Θ⎜ ⎟ − Θ⎜ ⎟ ⎝ 2(3. The normal distribution deals with variables that can take on any value. What is the probability of getting exactly two defectives using the normal distribution? np > 10 and p < . ⎛ x − 0.15.5 − np ⎞ P(x) = Θ ⎜ ⎟ ⎟ − Θ⎜ ⎝ 2πnp(1− p) ⎠ ⎝ 2πnp(1 − p) ⎠ where Θ = the cumulative normal distribution function The normal distribution is appropriate and acceptable for p of approximately 0.

5 − np ⎞ − Θ⎜ P( x = 2 ) = Θ⎜ ⎟ ⎟ ⎝ 2 πnp (1 − p ) ⎠ ⎝ 2 πnp (1 − p ) ⎠ ⎛ P( x = 2 ) = Θ⎜ ⎝ ⎞ ⎛ 2 + 0.85 ) 60 − 2 ⎝ 2⎠ P ( x = 2 ) = (1770 )( 0.H1317_CH18.000123 )( 81) P( x = 2 ) = 2 P ( x = 2 ) = 0.94 ) − Θ (1.14 )( 9 )(1 − .000081) = 0.15 np = (60)(0.qxd 10/15/07 4:26 PM Page 193 Discrete Distributions Using the normal distribution: n = 60 p = 0.15 ) ⎠ P ( x = 2 ) = Θ ( −0.08 ) P ( x = 2 ) = 0.5 − np ⎞ ⎛ x + 0.5 − 9 ⎟ − Θ⎜ 2 ( 3.15 ) ⎠ ⎝ ⎞ 2 − 0.0 ( 9.15 ) 2 ( 0.15 q = 0.17361 − 0.0050 P( x ) = 193 .5 − 9 ⎟ 2 ( 3.0225 )( 0.0 x=2 ⎛ x − 0.14 )( 9 )(1 − .0 ) 2 P( x = 2 ) = 2! ( 0.15) = 9.03354 Using the binomial distribution: n = 60 p = 0.14007 P ( x = 2 ) = 0.0032 Using the Poisson distribution: e − λ λx x! e −9.85 x=2 n P( x = 2 ) = ⎛ ⎞ p x q n− x ⎝ x⎠ 60 P ( x = 2 ) = ⎛ ⎞ ( 0.
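The exact binomial value and its Poisson approximation for this case can be confirmed numerically; a hedged sketch (function names are illustrative) for n = 60, p = 0.15, x = 2:

```python
import math

# Exact binomial probability.
def binom_pmf(x, n, p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

# Poisson approximation with lam = np.
def poisson_pmf(x, lam):
    return math.exp(-lam) * lam**x / math.factorial(x)

b = binom_pmf(2, 60, 0.15)        # exact binomial
p = poisson_pmf(2, 60 * 0.15)     # Poisson with lam = np = 9
print(round(b, 4), round(p, 4))
```

The binomial value is exact here, which is why the summary that follows singles it out as the best of the three models.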

Summary:

Distribution    P(x = 2)
Normal          0.0335
Binomial        0.0032
Poisson         0.0050

Pascal Distribution

The Pascal distribution is similar to the binomial distribution in that it is based on independent trials, with the probability of success for a given trial not being influenced by the result of other trials. If we have a series of independent trials with a probability of success of p, then x will be the trial on which the rth success occurs. This value of x is a Pascal random variable. The probability distribution function defining this distribution is given by

P(x) = C(x − 1, r − 1) p^r (1 − p)^(x − r),   x = r, r + 1, r + 2, . . . .

A special case of the Pascal distribution is when r = 1. In this case, we are interested in determining the probability of the number of trials required until the first success. This particular case, where we fix the number of successes to one and determine the probability of the number of trials, is the geometric distribution. Other values of r > 0 (where r is not necessarily an integer) define the negative binomial distribution.

Example 1: Given a process performing at p = 0.08, what is the probability of getting the second defective on the fifteenth sample?

p = 0.08, x = 15, r = 2

P(x = 15) = C(14, 1)(0.08)^2(0.92)^13 = (14)(0.0064)(0.3383) = 0.0303
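The Pascal example can be verified in one line; a hedged sketch (the function name is illustrative):

```python
import math

# Pascal (negative binomial) probability function:
# P(x) = C(x - 1, r - 1) * p^r * (1 - p)^(x - r).
def pascal_pmf(x, r, p):
    return math.comb(x - 1, r - 1) * p**r * (1 - p)**(x - r)

# Example 1: p = 0.08; probability the second defective (r = 2)
# occurs on the fifteenth sample (x = 15):
print(round(pascal_pmf(15, 2, 0.08), 4))
```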

152 0.921 0.007 0.616 0.868 0.061 0.072 0.570 0.818 0.342 0.5 4.551 0.078 0.805 0.083 λ = (60)(0. We then move down the vertical axis until we find the value for x (2).1 4.210 0.634 0. cumulative probability tables were developed.877 0.609 0.441 0.791 0. This is the cumulative probability of getting two or fewer where λ = 5.720 0.2 4.0 0 1 2 3 4 5 6 7 0.495 0.6 4.905 0.753 0.0 Moving across the horizontal.1246 λ x 4.651 0.737 0.0842 P(x ≤ 2) = 0.040 0.896 0.qxd 10/15/07 4:26 PM Page 195 Discrete Distributions 195 Cumulative Distribution Tables In order to facilitate expeditious calculations of Poisson probabilities.377 0.009 0.133 0.762 0.056 0.066 0.844 0. the probability of 0.00. The following illustrates the use of a cumulative Poisson distribution table.125 0.017 0.395 0. The intersection of the appropriate row and column gives the probability of getting x or fewer defectives.014 0.326 0.703 0. we look up the value for λ (5.083) = 5.668 0.887 0.015 0.532 0.052 0.163 0.279 0.414 0.7 4.044 0.294 0.9 5. At the intercept.856 0.769 0.011 0.777 0.913 0.008 0.143 0.310 0.125 is given.929 0.4 4.3 4.686 0.458 0. P(x ≤ 2) = P(x = 0) + P(x = 1) + P(x = 2) P(x ≤ 2) = 0.0).174 0.0337 + . A vertical column provides for various values of x.831 0.476 0.197 0.0833? x=2 n = 60 p = 0.265 0.867 .936 0.048 0.010 0.513 0.0067 + 0.012 0. What is the probability of finding two or fewer defectives in a sample of n = 60 chosen from a process where p = 0.943 0.185 0.085 0.007 0.224 0.360 0.879 0. and a horizontal heading provides for values of np or λ.590 0.8 4.H1317_CH18.
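The cumulative values the table provides can also be computed directly; a hedged sketch (the function name is illustrative) for the example λ = np = (60)(0.0833) ≈ 5.0:

```python
import math

# Cumulative Poisson probability P(X <= x) for a given lam,
# i.e. the quantity tabulated in cumulative Poisson tables.
def poisson_cdf(x, lam):
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(x + 1))

print(round(poisson_cdf(2, 5.0), 4))
```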

The following figure shows the selection of distributions for the approximation of the hypergeometric:

[Figure: decision tree for approximating the hypergeometric distribution — use the binomial when n/N ≤ 0.1 (no approximation when n/N > 0.1); the binomial is in turn approximated by the Poisson when np < 5 and by the normal when np ≥ 5.]

Bibliography

Besterfield, D. H. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall, 1994.
Grant, E. L., and R. S. Leavenworth. Statistical Quality Control. 7th edition. New York: McGraw-Hill, 1996.
Gryna, F. M., R. C. H. Chua, and J. A. De Feo. Juran's Quality Planning and Analysis for Enterprise Quality. 5th edition. New York: McGraw-Hill, 2007.
Hayes, G. E., and H. G. Romig. Modern Quality Control. 3rd edition. Encino, CA: Glencoe, 1982.
Montgomery, D. C. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons, 1996.
Petruccelli, J. D., B. Nandram, and M. Chen. Applied Statistics for Engineers and Scientists. Upper Saddle River, NJ: Prentice Hall, 1999.
Sternstein, M. Statistics. Hauppauge, NY: Barron's Educational Series, 1996.
Walpole, R. E., and R. H. Myers. Probability and Statistics for Engineers and Scientists. 5th edition. Englewood Cliffs, NJ: Prentice Hall, 1993.

9 x 59.0 percent.0 1.H1317_CH19.0 0.qxd 10/17/07 2:24 PM Page 197 Evolutionary Operation An alternative approach to process optimization is the use of evolutionary operation.8 x 59. % Solids.2 Coating thickness. and the coating thickness is 0.0 54. The following two-factor EVOP is an example of the EVOP process. The response variable is peel strength in pounds per linear inch. The underlying principle of EVOP is to make small process changes during the phase and replicate the conditions (cycles) until there is positive evidence that one of the conditions moves the process into an area of improvement. The method presented here uses the standard deviation calculated directly and the statistical significance of effects evaluated using the t-distribution. First introduced by George Box in 1957. Polymer concentration is 54. After deciding on the amount of change. It can be performed for any number of process variables. which was used to measure experimental error and serve as a measure of statistical significance.0 58.6 0.0 52. we will obtain peel strength for each new condition.0 50.0 58.4 0. mils Factor B 197 1. Factor A 60. but from a practical perspective it is limited to three process variables. or EVOP. The original method developed by Box utilized the range of observations to estimate the standard deviation.0 56.8 1. This initial result of one response per setup is called cycle 1. the technique of EVOP is closely related to sequential simplex optimization.8 mils.3 x x 58. The current operating conditions are used to produce a pressure-sensitive adhesive. We will vary the operating parameters slightly to evaluate the response.4 .

0449 0.212 0.495 0. we cannot determine the experimental error.0 58.0 mils.95 + 58.qxd 10/17/07 198 2:24 PM Page 198 The Desk Reference of Statistical Quality Methods The initial trial change will be to set the percentage of total solids to 52.1798 0. Phase 1.18 − 58. S 2 1 2 3 4 – + – + – – + + 58. The operating conditions and the responses are treated as a traditional 22. An additional set of runs (cycle 2) will be made using the same set of operating conditions (phase 1).9 59.0 59.3 59.65 0.70 + 59.28 ⎝ ⎠ ⎝ ⎠ 2 2 .9 59. two-level experiment.H1317_CH19. Factor Description – + A B % Solids Coating thickness 52.6 and 1.42 ⎠ ⎝ ⎠ ⎝ 2 2 B: ⎛ 58.3 The effects will not be calculated. because with only a single observation.4 60.0 Phase 1.95 + 58.0050 0. cycle 2: Run A B Cycle 1 Cycle 2 Average Standard deviation.85 ⎞ = 59.65 ⎞ − ⎛ 58.85 58.2450 Main effects: A: ⎛ 58.65 ⎞ − ⎛ 58.1 58.424 0.0 59.70 ⎞ = 59.8 58.0 0.071 0.0 1.8 58.0 percent (±2 percent) around the normal operating condition and to set the coating thickness to 0.25 − 58.70 59.90 = 0.85 + 59. S Variance.6 56. cycle 1: Run A B Response 1 2 3 4 – + – + – – + + 58.95 58.83 = 0.0 percent and 56.8 58.

H1317_CH19.854 A: ⎛ 59.6 61.0 Average 58.05 k =2 f =0 r =2 p =2 Sp = 0.8 58.10 Standard deviation. We will replicate the experiment again by running cycle 3.65 ⎠ ⎝ ⎠ ⎝ 2 2 B: ⎛ 59.qxd 10/17/07 2:24 PM Page 199 Evolutionary Operation 199 Error of effects: ⎛ t α / 2 .8 58. we will use a 95 percent confidence level (α = 0.10 ⎞ − ⎛ 58.55 − 58.776)(0.07 ⎞ = 59.0 59.7293 .58 ⎝ ⎠ ⎝ ⎠ 2 2 0.68 Since the effects are smaller than the error.97 = 0.1436 0. we cannot conclude that the effects are statistically significant at the 95 percent level of confidence.345)(0.3 59.05).707) E = ±0.9 59. cycle 3: Run A B Cycle 1 Cycle 2 Cycle 3 1 2 3 4 – + – + – – + + 58.( r −1 ) 2 k − f ⎜ Sp ⎝ 2 ⎞ pr ⎟⎠ where: α = risk k = number of factors f = number of times the experiment is fractionated r = number of replicates or cycles Sp = pooled standard deviation p = number of +’s in a column For this example.3600 0. S Variance.59 − 58.0433 0.0 58.07 + 60.379 0. S 2 0.7 59. Phase 1.94 = 0.00 ⎞ = 59.00 + 60.87 + 59.345 E = (2.208 0.00 60.87 59.4 60. α = 0.5 59.10 ⎞ − ⎛ 58.07 59.1 58.600 0.87 + 59.
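The phase 1, cycle 2 error-of-effects computation can be verified numerically; a hedged sketch (variable names are illustrative), pooling the four run variances from the cycle 2 table and using t(0.025, 4 df) = 2.776 from tables:

```python
import math

# EVOP error of effects: E = t(alpha/2, df) * Sp * sqrt(2 / (p * r)),
# with Sp pooled from the run variances.
variances = [0.0050, 0.0449, 0.1798, 0.2450]     # S^2 for runs 1-4, cycle 2
sp = math.sqrt(sum(variances) / len(variances))  # pooled standard deviation
t_crit = 2.776                                   # t(0.025, 4 df) from tables
p, r = 2, 2                                      # +'s per column; cycles
E = t_crit * sp * math.sqrt(2 / (p * r))
print(round(sp, 3), round(E, 2))
```

Because the calculated effects are smaller in magnitude than E, another cycle is run to reduce the experimental error.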

705 0.66 − 58.6 61.5 59.3 59.4 60.88 + 58.0961 0.4970 A: ⎛ 59.94 = 0.565 E = (2.( r −1 ) 2 k − f ⎜ Sp ⎝ α = 0.1 58.9 59.88 59.72 ⎝ ⎠ ⎝ ⎠ 2 2 B: ⎛ 59.8 58.( r −1 ) 2 k − f ⎜ Sp ⎝ 2 ⎞ pr ⎟⎠ α = 0.310 0.qxd 200 10/17/07 2:24 PM Page 200 The Desk Reference of Statistical Quality Methods Error of effects: ⎛ t α / 2 .170 0.98 = 0.46 2 ⎞ pr ⎟⎠ .2401 0. S Variance.7 59.00 ⎞ = 59.306)(0.1 59.05 0. cycle 4 will be run.05 k =2 f =0 r =3 p =2 Sp = 0.9 58.08 + 60.88 + 59.55 ⎝ ⎠ ⎝ ⎠ 2 2 Error of effects: ⎛ t α / 2 .0 58.8 58.00 + 60.0 59.490 0. Phase 1.0 59.05 k=2 f=0 r=4 p=2 Sp = 0.9 59. S2 1 2 3 4 – + – + – – + + 58. cycle 4: Run A B Cycle 1 Cycle 2 Cycle 3 Cycle 4 Average Standard deviation.0 58.00 60.565)(0.756 Since the error is still larger than the effects.05 ⎞ − ⎛ 58.08 59.08 ⎞ = 59.0289 0.53 − 58.58) = 0.H1317_CH19.05 ⎞ − ⎛ 58.

179)(0.8 1.0 percent) and B is + (coating thickness = 1.6 Cycle 2 will be run to obtain a standard deviation for the responses.5 60.8 58.05 x x 58. Factor A and factor B should both be set positive (+) in order to maximize the peel strength.00 54.46)(0. Factor Description – + A B % Solids Coating thickness 54. where A is + (percentage of solids = 56.50 The error is now less than the effect.0 Phase 1 58.6 0.qxd 10/17/07 2:24 PM Page 201 Evolutionary Operation 201 E = (2. Factor A 60. so we conclude that the effects are statistically significant.0 0. cycle 1: Run A B Response 1 2 3 4 – + – + – – + + 58.0 59.0 56.08 x 60.0 52.4 A new set of conditions will be established (phase 2) around the new settings.50) = 0.0 1.0 50.2 Coating thickness.0 0.0 1. . mils Factor B 1. % Solids.3 59.0).88 x 59.2 Phase 2.H1317_CH19.9 59.4 0.

cycle 2: Run A B Cycle 1 Cycle 2 Average Standard deviation.071 0.45 + 60. S 2 1 2 3 4 – + – + – – + + 58. .5 60.707) = 0.40 ⎞ B: ⎛ ± = 60.90 ⎝ ⎠ ⎝ ⎠ 2 2 Error of effects: ⎛ t α / 2 .141 0.20 = 0.0449 Main effects: 59.1 59.0199 0.00 + 59.10 ± 59.75 0.0050 0.150 E = (2.0 + 59.9 59.75 ⎞ ⎛ 59.45 ⎞ A: ⎛ ± = 60.H1317_CH19.817 The error for both effects is statistically significant.40 + 60.6 59.qxd 10/17/07 202 2:24 PM Page 202 The Desk Reference of Statistical Quality Methods Phase 2.212 0.0199 0.150)(0.00 59.45 60.9 59.75 ⎞ ⎛ 59.4 60.3 59.23 = 0.08 ± 59.776)(0.40 59.5 59.05 k=2 f=0 r=2 p=2 Sp = 0.141 0.085 ⎝ ⎠ ⎝ ⎠ 2 2 59. S Variance.( r −1 ) 2 k − f ⎜ Sp ⎝ 2 ⎞ pr ⎟⎠ α = 0.

6 0. Bibliography Box.4 0. Montgomery.0 54. R.0 50. we would devise another set of process conditions (phase 3) to continue our process optimization. New York: John Wiley & Sons.0 0. C.2 Coating thickness.H1317_CH19.45 56.qxd 10/17/07 2:24 PM Page 203 Evolutionary Operation 203 The following figure shows process averages after phase 2. 4th edition.0 59. G.0 1.0 52. Introduction to Statistical Quality Control.0 1.0 x 59.. D.8 1. cycle 1: Factor Description – + A B % Solids Coating thickness 56.0 60. 3rd edition. . 1997. and N. Factor A 60. cycle 2: Phase 2 % Solids. C. Montgomery.4 After phase 2.76 x x 59.0 58. Evolutionary Operation. Draper. mils Factor B 1. 1996. D. E.4 x 60. New York: John Wiley & Sons. P. New York: John Wiley & Sons. The suggested conditions for phase 3 are as follows: Phase 3.0 1. 1969.4 The EVOP is continued until no additional gain is achievable at the designated level of confidence. Design and Analysis of Experiments.


128. then the weight assigned to the current sample (used in calculating EWMA) is 0.1024.H1317_CH20. 0. The amount of decrease of the weights is an exponential function of the weighting factor λ. In selecting the value for λ. the values of the immediate historical data points. It is potentially affected by nonnormal distributions An alternative to the individual/moving range control chart (and the average/range control chart) is the exponentially weighted moving average (EWMA) control chart. 0. and in effect it is an individual Shewhart control chart. n +1 If λ = 1 is selected for the EWMA. to varying degrees.2. The weights λ (1 – λ)1 decrease geometrically with the age of the sample. Wt–1 = λ(1 – λ)1 where: Wt – 1 = weight associated with observation Xt – 1. use the following relationship between the weighting fac– tor λ and the sample size n for a Shewhart X /R control chart: λ= 2 . with Xt being the most recent observation λ = weighting factor 0 to 1 When λ is very small. The EWMA tends to reflect the current condition biased with. it is relatively insensitive to short-lived process changes. It is relatively weak in detecting a small process change in a timely manner 2. therefore.16.20. These weights decrease in an exponential decay manner from the present point value to subsequent points. the moving average at any time t carries with it a great deal of inertia from the previous data. Because 205 .qxd 10/18/07 12:56 PM Page 205 Exponentially Weighted Moving Average The individual/moving range control chart has the following two modest disadvantages: 1. and the weights given to the preceding samples are 0. If λ = 0. and so on. which can assume any value between zero and one. all of the weight is given to the current sample. An EWMA is a moving average of past data where each of the data points is assigned a weight.

Since the EWMA is a weighted average of the current sample and all of the past data. however. the EWMA is sometimes referred to as a geometric moving average (GMA). it is very insensitive to the assumption of normalcy for the distribution of observations. 40 to 50 should be collected. The data will be used to provide a historical statistical characterization of the process being monitored. For this example.qxd 206 10/18/07 12:56 PM Page 206 The Desk Reference of Statistical Quality Methods these decline geometrically.H1317_CH20. The upper and lower control limits are based on The first few samples have the quantity X±3 σ ⎛ λ ⎞ 2t ⎜ ⎟ [1 − (1 − λ ) ]. It is. n ⎜⎝ 2 − λ ⎟⎠ The upper and lower control limits for the EWMA are UCL/LCL = X ± 3σ λ . The following example illustrates the construction of the EWMA control chart. The following data represent individual observations of waiting times to be served in a branch of the Midwest National Bank. At least k = 25 samples should be collected and. we will use k = 15. n ⎝ 2−λ⎠ These control limits will increase rapidly to their limiting value. . 2−λ λ adjusted by the amount of [1 – (1 – 2− λ λ)2t]. Collect data from which the baseline EWMA will be constructed. – σ2 If the X i are independent random variables with variance . Times are recorded in minutes. then the variance of n Zt is σ 2 zt = σ2 ⎛ λ ⎞ [1 − (1 − λ )2 t ]. n ⎜⎝ 2 − λ ⎟⎠ As t increases. Step 1. where possible. therefore. σ2zt increases to the limiting value σ 2z = σ2 ⎛ λ ⎞ . The EWMA can also be used with averages. ideal for monitoring individual values.

Z3 is Z3 = λX3 + (1 − λ)Z2 Z3 = 0.2 for this example) Z t −1 = EWMA for the immediate preceding point.2(132.2(60.0) + (1 − 0.H1317_CH20.qxd 10/18/07 12:56 PM Page 207 Exponentially Weighted Moving Average Sample t Xt 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 60 91 132 100 80 120 70 95 14 80 110 100 82 105 65 207 – X = 95. Any appropriate estimate based on SPC relationships may be used.3 Z1 = 88.2 Z2 = 88.2. where: Zt = EWMA at time t λ = weighting factor (λ = 0. Z1 is Zt = λXt + (1 − λ)Zt − 1 Z1 = λX1 + (1 − λ)Z0 Z1 = 0.8. The EWMAs are calculated for individual observations as Zt = λXt + (1 − λ)Zt −1 .3.2)88. Z2 is Z2 = λX2 + (1 − λ)Z1 Z2 = 0. Step 2.4. – The initial Z0 = the process average X = 95.0) + (1 − 0.3 S = 23. .8 Z3 = 97.2(91.2)95.5 Note: The sample standard deviation S is used as an estimate of σ.0) + (1 − 2)88. Calculate the EWMAs.

5 91. which summarizes individual observations Xt and EWMAs Zt: Sample t Xt Zt 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 60 91 132 100 80 120 70 95 140 80 110 100 82 105 65 88. Calculate the control limits.1 = 77.3 + 14.2 88.2 ⎟⎠ .4 100.8 97. Limits for sample #1: ⎛ 0.2 ⎟⎠ ⎛ 0.2)2 ] = 95.1 = 81.8 103.2 ⎟⎠ Limits for sample #2: ⎛ 0.4 ⎝ 2 − 0.3 − 3(23.3 and S = 23.2)4 ] = 95.9 94.2 ⎞ [1 − (1 − 0.5. the control limits are based on ⎛ λ ⎞ X ± 3S ⎜ ⎟ [1 − ( 1 − λ ) 2 t ] . For EWMAs based on individual observations (n = 1).2 ⎞ [1 − (1 − 0.1 = 113.5) ⎜ ⎝ 2 − 0.9 98.2 ⎞ UCL 2 = 95. X = 95.0 98.2 ⎟⎠ ⎛ 0.3 + 3(23.5) ⎜ ⎝ 2 − 0.4 93.5) ⎜ [1 − (1 − 0.7 100.4 ⎝ 2 − 0.5) ⎜ [1 − (1 − 0. ⎝ 2−λ⎠ – For this example.2 LCL 2 = 95.3 + 18.qxd 208 10/18/07 12:56 PM Page 208 The Desk Reference of Statistical Quality Methods The remaining EWMAs are calculated and tallied in the following table.8 Step 3.5 93.3 − 18.3 − 14.H1317_CH20.2 ⎞ UCL1 = 95.3 + 3(23.4 97.2 LCL1 = 95.3 99.1 = 109.2)2 ] = 95.3 − 3(23.6 96.2)4 ] = 95.
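The time-dependent control limits can be computed the same way; a hedged sketch (names are illustrative). Note how quickly the limits widen from their sample-1 values toward the limiting values:

```python
import math

# EWMA control limits at time t:
# Xbar +/- 3 * S * sqrt((lam / (2 - lam)) * (1 - (1 - lam)**(2 * t))).
def ewma_limits(xbar, s, lam, t):
    half = 3 * s * math.sqrt((lam / (2 - lam)) * (1 - (1 - lam) ** (2 * t)))
    return round(xbar - half, 1), round(xbar + half, 1)

# Xbar = 95.3, S = 23.5, lam = 0.2 (from the bank waiting-time data)
print(ewma_limits(95.3, 23.5, 0.2, 1))   # limits for sample 1
print(ewma_limits(95.3, 23.5, 0.2, 50))  # near the limiting values
```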

1 LCL 3 = 95.2)8 ] = 95.2 ⎞ UCL 5 = 95.9 98.5 118.3 + 21.3 99.3 − 21. 109.5) ⎜ [1 − (1 − 0. .5 91. .0 118.5) ⎜ ⎝ 2 − 0.2)10 ] = 95.4 113.0 98. Control limits for Zt Sample t Xt Zt LCL UCL 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 60 91 132 100 80 120 70 95 140 80 110 100 82 105 65 88.2 ⎟⎠ ⎛ 0.qxd 10/18/07 12:56 PM Page 209 Exponentially Weighted Moving Average 209 Limits for sample #3: ⎛ 0.2 ⎟⎠ ⎛ 0.2 = 73.3 + 22. .2 ⎞ UCL 4 = 95.8 97.3 − 3(23.5) ⎜ [1 − (1 − 0.1 72.4 97.9 94. .H1317_CH20.3 + 3(23. .2 ⎟⎠ ⎛ 0.7 ⎝ 2 − 0.4 115.2 ⎞ [1 − (1 − 0. . .3 + 3(23.3 − 20.5 118. .2 ⎟⎠ Limits for sample #4: ⎛ 0. Notice that the control limits soon stabilize to a constant value.2 = 75.9 LCL 4 = 95. .2 = 115.7 100.2 ⎞ [1 − (1 − 0.3 + 20.2 88.8 103.3 118.3 − 22.5) ⎜ ⎝ 2 − 0.1 LCL 5 = 95.5) ⎜ [1 − (1 − 0.6 .3 + 3(23.2 77.1 73.4 100.2)8 ] = 95.4 = 116.5 93.5 ⎝ 2 − 0.5 116.2 75.2 ⎞ UCL 3 = 95. . .3 − 3(23.2)6 ] = 95.2 = 117.2)10 ] = 95.2 ⎟⎠ Limits for sample #5: ⎛ 0.8 81.6 72.1 72.7 117.9 73.4 93.6 96.3 72.5) ⎜ ⎝ 2 − 0.3 − 3(23.2 ⎞ [1 − (1 − 0.0 .4 = 73.2 ⎟⎠ The remaining control limits are calculated and listed in the following summary.5 ⎝ 2 − 0.2)6 ] = 95.

Ten new data values are obtained.2 110.7 112.H1317_CH20. UCL = 118.9) 95. Sample # Xi (15 16 17 18 19 20 21 22 23 24 25 65 110 115 111 118 126 117 122 131 124 129 Zt 91.2 Cl = 95. Construct the chart and plot the data. Continue to collect and plot the data.7 104.9 109.4 99.0 70 1 5 10 15 Step 5.3 116.4 101. looking for signs of a change.6 120 110 λ = 0.qxd 210 10/18/07 12:56 PM Page 210 The Desk Reference of Statistical Quality Methods Step 4.2 .6 118.1 120. Calculate the EWMA values and plot the results.3 100 90 80 LCL = 72.

The completed EWMA chart with all data points is as follows:

[Completed EWMA chart, λ = 0.2: the history (samples 1–15) followed by the ten new values (samples 16–25), plotted against UCL = 118.6, CL = 95.3, and LCL = 72.0.]

Bibliography

Montgomery, D. C. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons, 1996.
Sweet, A. L. "Control Charts Using Coupled Exponentially Weighted Moving Averages." Transactions of IIE 18, no. 1 (1986): 26–33.
Wheeler, D. J. Advanced Topics in Statistical Process Control. Knoxville, TN: SPC Press, 1995.


F-test

One of the main objectives in the pursuit of quality in manufacturing (and service, for that matter) is to reduce variability. This goal is consistent with the Taguchi philosophy of performing at the target with minimum variation. One of the best methods to assess whether variation reduction has been accomplished is to compare the variances (the square of the standard deviation). Consider a process variable 1 that has a standard deviation of σ1 and a process variable 2 that has a standard deviation of σ2. For example, σ1 might represent the standard deviation of the tablet weight of a pharmaceutical product before attempts were made to reduce variability, and σ2 might represent the standard deviation after efforts were made to reduce the variation. By comparing the variances of the two samples, we may test the hypothesis that one of the variances is greater than the other. This test relies on the F statistic, named after Ronald A. Fisher (1890–1962), who developed the probability distribution in the 1920s.

A sample of seven tablets is taken from the production line prior to the improvement, and five tablets are taken from the production line after implementation of the modification.

The weights of tablets prior to process improvement:

    12.9  12.4  12.6  12.7  12.5  12.3  12.2        S1 = 0.24    S1² = 0.0576

The weights of tablets after process improvement:

    12.6  12.4  12.5  12.5  12.6                    S2 = 0.08    S2² = 0.0064

The F statistic or ratio is determined by dividing the larger variance by the smaller variance:

    F = S1²/S2² = 0.0576/0.0064 = 9.00

With this F-test, we are testing the hypothesis that σ1² is greater than σ2² (the alternative hypothesis), as opposed to σ1² equal to σ2² (the null hypothesis):

    Ho: σ1² = σ2²
    Ha: σ1² > σ2²

We will reject the null hypothesis and accept the alternative hypothesis if our F calculated value is greater than the critical F value. A level of significance must be chosen. For this example, a confidence level of 95 percent is selected, meaning that a risk of α = 5% is accepted.

In the F-table, the vertical columns represent the degrees of freedom for the sample giving the larger variance, v1, and the horizontal rows represent the degrees of freedom for the sample giving the smaller variance, v2. Each row of v2 has four levels of α, corresponding to 10, 5, 2.5, and 1 percent. The degrees of freedom are determined by the sample size minus one. For this example, v1 = 7 − 1 = 6 (the sample giving the larger variance) and v2 = 5 − 1 = 4 (the sample giving the smaller variance).

The critical F value for v1 = 6, v2 = 4, and a risk of α = 0.05 is 6.16. Since our calculated F-ratio is 9.00 and exceeds 6.16, we reject the null hypothesis (σ1² = σ2²) and accept the alternative hypothesis (σ1² > σ2²), concluding that a reduction in variation has been demonstrated.

Single-Sided vs. Double-Sided Tests

In the previous example, we were concerned about whether a decrease in variability was evident; in other words, we were interested only in a one-way change. A test of this type is called a single-sided or one-tailed test, and the F-table is utilized as it is presented. There are, however, cases in which a double-sided or two-tailed test is appropriate. Such a two-tailed test would be the case where the null hypothesis is σ1² = σ2² and the alternative hypothesis is σ1² ≠ σ2². For these cases, the tabled significance levels are doubled: the 5 percent level of significance becomes the 10 percent level.
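The comparison can be sketched in Python. This is a minimal illustration, not part of the original text; the weight lists are the tablet data as given in the example, the critical value 6.16 is read from the F-table, and, as in the text, the standard deviations are rounded to two decimal places before squaring.

```python
from statistics import stdev

before = [12.9, 12.4, 12.6, 12.7, 12.5, 12.3, 12.2]  # n = 7 tablets
after = [12.6, 12.4, 12.5, 12.5, 12.6]               # n = 5 tablets

s1 = round(stdev(before), 2)   # 0.24
s2 = round(stdev(after), 2)    # 0.08

# larger variance over smaller variance
f_ratio = s1 ** 2 / s2 ** 2    # 0.0576 / 0.0064 = 9.00

F_CRITICAL = 6.16  # F-table: v1 = 6, v2 = 4, alpha = 0.05 (one-tailed)
reject_null = f_ratio > F_CRITICAL  # True: a reduction in variation is demonstrated
```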
The critical F value is found in Table A.. the 5 percent level of significance becomes the 10 percent level.

025. we have α/2 = 0. Based on a very large number of test results. and the other consists of 11. The respective standard deviations are 14.49 SY 2 (0.05. and accept the alternative hypothesis.05 = 3.23.7. A new method (Y) has been developed in an effort to reduce the variation of test results.52. It would be an improvement if the test variation were to be reduced as a result of using the new method.10. Ha: σ12 ≠ σ22. The F-ratio is F= 196 = 4. Assume a risk. Since F calculated is greater than F critical. concluding that there is a difference in the variation in the two methods. we reject the null hypothesis.0 and 7. this method has a standard deviation of SX = 0. Example 2: A method of testing (X) has been used extensively to measure the water content in a latex co-polymer.qxd 10/17/07 2:33 PM Page 215 F-test 215 Example 1: Two different methods of training have been evaluated. Ha: σ12 = σ22.0.H1317_CH21.00 α / 2 = 0.038)2 Selecting a risk of α = 0.0.05. The critical F value is F∞. Since the calculated F is greater than F critical. α = 0. 49 The critical F-ratio. we will reject Ho in favor of Ha.071)2 = = 3. F15.025 = 3. Do the data support the hypothesis that a reduction in variation is evident? Ho: σX 2 = σY 2 Ha: σX 2 ≠ σY 2 (for a two-tailed test) F= SX 2 (0. One test group consists of 16 participants. Eight independent tests gave a standard deviation of Sy = 0.071 units.038. .10.

H1317_CH21.qxd 10/17/07 2:33 PM Page 216 .

0 40.4 40.2 38.5 42.6 40. The data for these measurements follow: 42.1 38.4 39. Collect data.6 .5 36.1 43.9 39.8 36.3 41. typically 50 or more Determination of a number of class intervals or cells Determination of the class interval width Calculation of all class intervals and categorization of the data into appropriate class intervals 5. The frequency histogram consists of a vertical axis that corresponds to the frequency at which a value or group of values occurs and a horizontal axis that gives the value of the data or group of data.9 42.7 38.H1317_CH22.9 36.6 43.0 36.3 41.7 42. For this example.2 217 44.2 34.2 39.4 39.1 38.3 41.6 41.5 35.4 43.2 46.5 42.2 38.2 37.6 40.6 45.0 39. 4.5 40.1 40.3 41.8 47. Collection of a set of data.5 41.1 42.0 34.4 42.6 40.3 34. The shape of the histogram can give insight to the nature of the distribution of data.7 35. All of these techniques involve the following common set of steps: 1. A frequency histogram shaped like a bell is indicative of a normal distribution.5 40.1 29.5 37.4 43.2 37.9 33.4 35.3 42.3 37.6 47. Construction of the frequency histogram Example of a Frequency Histogram Step 1.7 45.6 39. 3.9 44.4 37.0 31.9 37.qxd 10/15/07 1:54 PM Page 217 Histograms Histograms or frequency histograms present a picture or graphical representation of data. There are numerous techniques for developing a frequency histogram.5 38.9 46.2 31.8 41.9 37.4 41.3 42.5 38.1 40. the resistance in ohms for 126 coils is measured. 2.

1 41.5 40.5 39.2 8 Step 4.3 38. This relationship will give values approximately equal to those given in Juran’s handbook.6 37. Calculate all class intervals and categorize the data into appropriate class intervals.9 40.2 40.1 39.6 39.1 38. Determine the number of class intervals or cells K into which the data will be sorted.5 = 2.7 40.8 42.7 36.5 34.0 35. Determine the class interval width.6 41.1 41. The number of class intervals K depends on the total number of data values or observations N. Juran’s Quality Handbook (1999.2 40.6 43.0 38.7 33.7 − 29.0 43.5 41.7 40.8 44. .5. 44.76 = 8.4 42.0 37.0 38.5 39.3 36.0 43.H1317_CH22.9 47.1 40.5 For our example.1) gives the following table: Number of observations N 20–50 51–100 101–200 201–500 501–1000 > 1000 Recommended number of intervals K 6 7 8 9 10 11–20 Juran suggests that the following equation be used to estimate the number of class intervals.9 36. the number of observations N = 126 and the number of class intervals K is given by K = 1. The class interval width is determined by dividing the difference in the largest and smallest observations by the number of class intervals.0 44.0 36.4 47.qxd 218 10/15/07 1:54 PM Page 218 The Desk Reference of Statistical Quality Methods 46.8 37. K = 7.5 41.5 ln N + 0.8 45.5(4.1 35.3 Step 2.8 33. The resulting value should be rounded to the nearest integer.8 40.7 32.84) + 0.4 38.8 43. Step 3.9 40. Interval width = 47.2 41.2 41.9 47.7 42.4 43. K = 1.4 36.

5 + 2. 2.7 to 33. Each of the following intervals is calculated by taking the ending point of the previous interval and adding the interval width.4.1 4th interval = 36. assigning each observation to its appropriate interval: Interval 28.1 9th interval = 47.3 Frequency or tally count 3 3 10 22 25 38 14 6 5 .7 to 44.7 to 33. The end of the first interval will be at the beginning point plus the full interval width (2. For example. 1st interval = 28. If we encounter an observation of 41.1 to 38.1 to 38.7. it will be assigned to the sixth interval. These intervals will hold observations from the beginning point up to but not including the ending point.7.9 to 47.9 to 36.9 3rd interval = 33.9 44.9 33.1 to 49.00).00).9 to 36.2 or 28.5 to 42.7 2nd interval = 31.3 5th interval = 38.4 to 31.qxd 10/15/07 1:54 PM Page 219 Histograms 219 There will be eight intervals.3 ← This additional interval is required to accommodate the maximum observed value of 47. First interval bounds: 29.3 to 40. A table of the intervals and a tally count of all the observations is made.7 to 44.1 to 49.5 to 42. the fifth interval will contain all of the observations from 39.3 38.7 42.1 to 29.H1317_CH22.4 up to but not including 41.4 to 31.4 to 31.2. The beginning of the first interval will be at the smallest observation minus one-half the interval width (1.5 – 1.7 31.9 8th interval = 44.1 47.7 7th interval = 42.9 to 47.3 to 40.00 units. each with a class interval width of 2.5 6th interval = 40.4.1 36.5 40.

3 to 40.4 to 31. Construct the frequency histogram.7.1 36. In the previous example.9 to 47.qxd 10/15/07 220 1:54 PM Page 220 The Desk Reference of Statistical Quality Methods Step 5. These values will be recorded as 29 and 47.H1317_CH22. Start by determining the largest and smallest observations and record these to one decimal place less than that of the data.9 44.3 Observation An alternative to the traditional frequency histogram is a technique called the leafstem plot.3 38.5 and 47.1 to 38. A 29 30 31 32 33 34 35 36 37 38 39 B 29 30 31 32 33 34 35 36 37 38 39 1 C 29 30 31 32 33 34 35 36 37 38 39 1 .9 33.7 to 44. the smallest and largest values were 29.5 to 42.1 to 49.7 to 33. A vertical line is drawn to the right of the values.9 to 36.5 40. A vertical table is made beginning with 29 and ending with 47. 40 35 Frequency 30 25 20 15 10 5 0 28.7 42.1 47. This method is a quick and relatively simple way to present data in a graphical format.7 31.

qxd 10/15/07 1:54 PM Page 221 Histograms 40 41 42 4 43 44 45 46 47 40 41 42 4 43 44 45 46 47 221 40 41 42 4 43 3 44 45 46 47 1. This will represent the location of the observation 42.3.4. The second observation from the 126 values is 39. Find 39 in the table. Completed leaf-stem plot: 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 5 4 8 4 2 5 8 4 4 1 6 3 4 3 6 1 1 3 0 0 8 1 4 9 5 4 1 9 0 6 2 9 7 7 9 6 6 4 0 7 9 1 5 1 0 8 1 0 3 3 7 5 9 3 4 8 8 2 1 8 8 3 8 0 2 7 2 5 5 6 3 2 1 4 0 2 9 2 2 6 4 5 1 6 1 4 0 0 1 7 9 5 6 3 9 7 9 3 9 2 5 2 4 6 7 5 2 2 5 1 5 9 8 0 5 7 4 5 6 6 7 9 5 0 3 0 6 . Find 43 in the table. then place a “3” to the right of the vertical line. Continue until all 126 observations have been posted.4. Find 42 in the table. then place a “1” to the right of the vertical line.H1317_CH22. The resulting leaf-stem plot will aid in visualizing the distribution of the observations. This will represent the location of the observation 43.1. The first observation from the 126 values is 42. then place a “4” to the right of the vertical line. This will represent the location of the observation 39. The third observation from the 126 values is 43.1.3. 3. 2.

2nd edition. The Quality Toolbox. M. Wilson. and A. Godfrey. WI: ASQ Quality Press. B. 1993. G.. 5th edition. NY: Quality Resources. Is this distribution normally distributed? What could contribute to the shape of this distribution? 136 198 106 190 94 150 142 116 206 144 136 156 124 108 195 231 172 151 92 184 181 138 112 163 211 183 81 162 207 205 194 168 225 107 111 196 139 216 240 204 152 148 111 147 222 196 146 144 122 194 236 205 153 128 132 133 232 207 146 192 193 135 213 206 193 133 212 204 223 180 120 117 Bibliography Betteley. Ishikawa. K. New York: McGraw-Hill. Using Statistics in Industry. and D. 2004. 1994.qxd 222 10/15/07 1:54 PM Page 222 The Desk Reference of Statistical Quality Methods Optional problem: Construct a leaf-stem plot for the following 76 weights of employees. White Plains. . Tague. E.. Milwaukee. 2nd edition. 1999. Guide to Quality Control. N.H1317_CH22. New York: Prentice Hall. Juran. Juran’s Quality Handbook. R. Mettrick. N. Sweeney. J.

000 per year. we might establish that the true average starting salary for college graduates with a B.H1317_CH23. When they are both stated in terms of the appropriate population parameter. in light of this evidence.qxd 10/15/07 4:28 PM Page 223 Hypothesis Testing Hypothesis tests are used to derive statistically sound judgments regarding evaluating sample data compared to population parameters or other external criteria. we take an indirect approach to obtaining evidence to support the alternative hypothesis (what we are trying to prove). or Ho. that is felt to be true. The hypothesis that we hope to disprove is called the null hypothesis. or claim. whether the initial belief (or hypothesis) can be maintained as acceptable or must be rejected as untenable.000 per year.S.S. Formulation of the Hypothesis A researcher in any field who sets out to test a new method or a new theory must first formulate a hypothesis. 223 . These two hypotheses are sometimes referred to as opposing hypotheses. For example. the null hypothesis and alternative hypothesis describe two states of being that cannot simultaneously be true. A hypothesis is a systematic approach to assessing beliefs about reality. Formulate the appropriate null hypothesis and alternative hypothesis for testing that the starting salary for graduates with a B. The hypothesis for which we want to develop supporting evidence and that we want to prove is called the alternative hypothesis. or Ha. Instead of trying to prove that the alternative hypothesis is true. it is confronting a belief with evidence and then deciding. which is the opposite of the alternative hypothesis. To be paired with this alternative hypothesis that the researcher believes to be true is the null hypothesis. The hypothesis that the researcher wants to establish is called the alternative hypothesis. degree in electrical engineering is greater than $38. In hypothesis testing. 
we attempt to produce evidence to show that the null hypothesis is false. degree is greater than $38.

Conclusions and Consequences for a Hypothesis Test The objective of any hypothesis test is to make a decision as to whether to reject the null hypothesis Ho in favor of the alternative hypothesis Ha.90. because our decision is based on a statistic that is derived from a sample chosen from the population.H1317_CH23. While we would like to make a correct decision all of the time. This significance. the null hypothesis is. OK p=1–α (c) Type II error p = β risk (b) Type I error p = α risk (d) Correct result. . and 0. Sometimes the significance is expressed as a level of confidence of the hypothesis. The probability of a type I error is α.000 per year rather than not equal to $38. or risk. Confidence is 1 − α.000 In this example. Two types of error can occur: type I and type II.000 Ha: μ > $38. true. in fact.qxd 224 10/15/07 4:28 PM Page 224 The Desk Reference of Statistical Quality Methods The hypothesis is stated in terms of the population parameter. in fact.95. true Null hypothesis Ho is. we attempt to gather evidence that the starting salary is greater than $38. in fact. we cannot. OK p=1–β The level of significance α must be chosen for the hypothesis. As with all statistics. false. True state of the world Null hypothesis is accepted Null hypothesis is rejected Null hypothesis Ho is. 0. Table 1 Test result. Typical levels of confidence are 0. This is an example of a onetail hypothesis test.99. true. These two errors are depicted in Table 1. in fact.000 per year. A type II error occurs if we fail to reject a null hypothesis when it is. Ho: μ = $38. is the probability that we will reject the null hypothesis when. in fact. there is an associated error. The probability of a type II error is β. A type I error occurs when we reject a null hypothesis that is. false (a) Correct result. α is also the level of significance for the hypothesis test. in reality.

we assume that while the true standard is still unknown.000 – The first step is to sample the population in order to determine a sample average X.05N.750 and S = $2816. we want to test the following hypothesis: Ho: μ = $38. is the percentage of time over an extended period that the statistical test will make a type I error. or α. Selecting a Test Statistic and Rejection Criteria For our example. XA − XB 3. The difference between two sample proportions. The sample proportion P 4. risk at α = 0. The sample mean X – – 2. – Assume from the sample data that we find that X = $39. For small samples of n < 30. Hypothesis tests are used to test statistical inferences regarding the following: – 1. When the sample size n > 30. which is the type of statistic to be computed from a random sample taken from the population of interest. and we use the normal distribution. For our example. the sample standard deviation is an adequate estimation of σ. – In this example. σX μ = the true but unknown average σ X– = standard deviation of averages of n observations S σX = n S = sample standard deviation. For large samples where n ≥ 30 and n < 0. This test statistic will be used to establish the probability of truth or falsity of the null hypothesis.000 (The hypothesized mean or μ0) Ha: μ > $38. we select a test statistic. PA − PB Each of these has its own sampling distribution and an associated test statistic. The sampling distribution for averages where n ≥ 30 is such that Z= where: X−μ . For this example.H1317_CH23.05. we will sample n = 45 and determine the average X. a normal distribution will be used. The t-distribution is used when the standard deviation σ is unknown. In this step of the hypothesis test.qxd 10/15/07 4:28 PM Page 225 Hypothesis Testing 225 The level of significance. therefore. . we will use the t-distribution as a model. one of the easily controlled requirements. we will calculate the test statistic using n = 45. The difference between two sample means. 
we will use the normal distribution as a model. we will set the significance.

Calculation of the test statistic Z yields

    Z = (39,750 − 38,000)/(2816/√45) = 4.17.

We will compare the test statistic to the defined rejection region. For this example of a one-tail hypothesis for a sample average and a large (greater than 30) sample size, the rejection region is Z > Zα. The Zα for α = 0.05 is 1.645. If our calculated Z test statistic is greater than Zα, we will reject the null hypothesis and accept the alternative. Since Z is, in fact, greater than Zα, we will reject the null hypothesis and accept the alternative hypothesis that the true average starting salary for electrical engineers is greater than $38,000 per year.

To summarize, hypothesis testing requires the following operations:

1. Formulate the null and alternative hypotheses
2. Select the significance level for the test
3. Compute the value for the test statistic
4. Compare the test statistic to a critical value for the appropriate probability distribution, and observe if the test statistic falls within the region of acceptance or rejection

The general formula for computing a test statistic for making an inference is

    test statistic = (observed sample statistic − tested value)/standard error,

where the observed sample statistic is the statistic of interest from the sample, the tested value is the hypothesized population parameter, and the standard error is the standard deviation of the sampling population divided by the square root of the sample size.

Hypothesis tests can be divided into several types depending on the following:

1. Large vs. small sample sizes (large ≥ 30 and small < 30)
2. Hypothesis test about a population mean, μ
3. Hypothesis test about a population proportion, π
4. Hypothesis test about the difference between two population means, μ1 − μ2
5. Hypothesis test about the difference between two population proportions, π1 − π2
The value of Zα can be determined by using a standard t-distribution table (with degrees of freedom equal to ∞) or by using a standard normal distribution table in a reverse manner, that is, by finding the Z score that corresponds to a tail area equal to the α value.

In the following example, we illustrate the hypothesis test for a small-sample test about a population mean.

Hypothesis Test about a Population Mean: Small-Sample Case

a. One-tail test                          b. Two-tail test

   Ho: μ = μ0                                Ho: μ = μ0
   Ha: μ > μ0 (or Ha: μ < μ0)                Ha: μ ≠ μ0

   Test statistic:                           Test statistic:
   t = (X̄ − μ0)/(s/√n)                      t = (X̄ − μ0)/(s/√n)

   Reject Ho if:                             Reject Ho if:
   t > tα (or t < −tα)                       t < −tα/2 or t > tα/2

Example 1. Small-sample, one-tail test: A claim has been made that the actual amount of mustard in Acme brand mustard is less than the advertised amount of 15 ounces. You have sampled 25 units and determined that the average is X̄ = 14.2 ounces and the sample standard deviation is S = 1.3 ounces. What is your conclusion?

Step 1. Formulate two opposing hypotheses.

    Ho: μ = 15
    Ha: μ < 15

Step 2. Select a test statistic.

    t = (X̄ − μ0)/(s/√n)

Step 3. Derive a rejection rule.

We will set the level of significance at α = 0.10. If the calculated test statistic is less than −tα, we will reject the null hypothesis and accept the alternative that the true population average is less than 15.0 ounces.

Looking up tα,n−1 in the t-table, we find 1.318. If our test statistic is less than −1.318, we will reject the null hypothesis and conclude that the true average weight is less than 15.0 ounces.

Step 4. Select a sample, calculate the test statistic, and accept or reject the null hypothesis.

    n = 25   X̄ = 14.2   S = 1.3

    t = (14.2 − 15.0)/(1.3/√25) = −3.08

Since the test statistic is less than −1.318, we will reject the null hypothesis and accept the alternative hypothesis (that the true average weight is, in fact, less than 15.0 ounces).

Example 2. Small-sample, two-tail test: A manufacturing manager claims that, on average, it takes 22.0 minutes to build a unit. How can you verify the claim having secured test data for 15 production times?

Step 1. Formulate two opposing hypotheses.

    Ho: μ = 22.0
    Ha: μ ≠ 22.0

This is a two-tail test, as we are not testing that the true time is greater than or less than, but that the true average is not equal to 22.0 minutes.

Step 2. Select a test statistic.

    t = (X̄ − μ0)/(s/√n)

Note: It is assumed that the variances, while unknown, are equal. If the variances of the two populations are assumed not to be equal, we modify the degrees of freedom when looking up the critical t-score. The degrees of freedom for these cases are determined by

    df = (S1²/n1 + S2²/n2)² / [ (S1²/n1)²/n1 + (S2²/n2)²/n2 ].
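The one-tail calculation for the mustard example can be sketched in Python. A minimal illustration, not part of the original text; the critical value 1.318 for α = 0.10 and 24 degrees of freedom is read from the t-table.

```python
import math

# Example 1: mustard fill weights (one-tail, small sample)
n, xbar, s, mu0 = 25, 14.2, 1.3, 15.0

t_calc = (xbar - mu0) / (s / math.sqrt(n))   # about -3.08
T_CRIT = 1.318  # t-table: alpha = 0.10, df = n - 1 = 24

# reject Ho (mu = 15) in favor of Ha (mu < 15) when t_calc < -T_CRIT
reject_null = t_calc < -T_CRIT  # True
```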

Step 3. Derive a rejection rule.

We will set the level of significance at α = 0.05. However, since we are performing a two-tail test, we must look up one-half the level of significance when using the t-distribution table; that is, we will look up tα/2,n−1. The correct t value for n − 1 = 14 and α/2 = 0.025 is t0.025,14 = 2.145.

Step 4. Select a sample, calculate the test statistic, and accept or reject the null hypothesis.

From the 15 data values, we find the average X̄ = 25.8 and the standard deviation S = 4.70. The resulting test statistic is

    t = (X̄ − μ0)/(s/√n) = (25.8 − 22.0)/(4.70/√15) = 3.13.

Since the test statistic falls in the rejection region t > 2.145, we will reject the null hypothesis and conclude that the true average build time is not equal to 22.0 minutes.

Hypothesis Test about a Population Mean: Large-Sample Case

a. One-tail test                          b. Two-tail test

   Ho: μ = μ0                                Ho: μ = μ0
   Ha: μ > μ0 (or Ha: μ < μ0)                Ha: μ ≠ μ0

   Test statistic:                           Test statistic:
   Z = (X̄ − μ0)/(s/√n)                      Z = (X̄ − μ0)/(s/√n)

   Reject Ho if:                             Reject Ho if:
   Z > Zα (or Z < −Zα)                       Z < −Zα/2 (or Z > Zα/2)

Example 1. Large-sample, one-tail test: A software company wants to verify that the technical service help line answers customers' inquiries in less than two minutes. The results of the data are summarized as follows:

    Sample size, n = 120
    Average, X̄ = 1.93
    Standard deviation, S = 0.28

What is your conclusion?

Step 1. Formulate two opposing hypotheses.

    Ho: μ = 2.00
    Ha: μ < 2.00

Step 2. Select a test statistic.

    Z = (X̄ − μ0)/(S/√n)

Step 3. Derive a rejection rule.

We will set the level of significance at α = 0.10. If the test statistic Z is less than the critical value of −Zα, we reject the null hypothesis and conclude that the true average length of time is less than 2.0 minutes. Locating 0.10 in the main body of Table A.1 in the appendix, we find the corresponding Z value to be 1.282.

Step 4. Apply the rejection criteria and form a conclusion.

    Z = (1.93 − 2.00)/(0.28/√120) = −2.74

The value for our test statistic is less than −1.282; therefore, we will reject the null hypothesis that the true average response time is equal to 2.0 minutes in favor of the alternative that the true average response time is, in fact, less than 2.0 minutes.

Example 2. Large-sample, two-tail test: A diet plan states that, on average, participants will lose 12 pounds in four weeks. As a statistician for a competing organization, you want to test the claim. You sample 70 participants who have been on the subject diet plan for four weeks and find that the average weight loss has been 11.4 pounds with a sample standard deviation of 1.6 pounds. What is your conclusion based on the data using a 5 percent level of significance?

Step 1. Formulate two opposing hypotheses.

    Ho: μ = 12.0
    Ha: μ ≠ 12.0

Step 2. Select a test statistic.

    Z = (X̄ − μ0)/(S/√n) = (11.4 − 12.0)/(1.6/√70) = −3.14

Step 3. Derive a rejection rule.

If the value of our test statistic is less than −Zα/2 or greater than Zα/2, we will reject the null hypothesis.
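Both large-sample calculations follow the same pattern, which can be collected into one small helper. A minimal sketch, not part of the original text; the critical values are the table values quoted in the examples.

```python
import math

def z_statistic(xbar, mu0, s, n):
    """Large-sample (n >= 30) test statistic for a population mean."""
    return (xbar - mu0) / (s / math.sqrt(n))

z_helpline = z_statistic(1.93, 2.00, 0.28, 120)  # about -2.74
z_diet = z_statistic(11.4, 12.0, 1.6, 70)        # about -3.14

Z_CRIT_ONE_TAIL = 1.282   # alpha = 0.10, one-tail
Z_CRIT_TWO_TAIL = 1.96    # alpha = 0.05, two-tail (1.96 on each side)
```

In the help-line case the statistic falls below −1.282 (reject Ho: μ = 2.00), and in the diet case it falls below −1.96 (reject Ho: μ = 12.0).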

Large-Sample Test of Hypothesis about (μ1 − μ2)

a. One-tail test                          b. Two-tail test

   Ho: (μ1 − μ2) = D0                        Ho: (μ1 − μ2) = D0
   Ha: (μ1 − μ2) < D0                        Ha: (μ1 − μ2) ≠ D0
     (or Ha: (μ1 − μ2) > D0)

Note: In most cases D0 = 0.

   Test statistic (both cases):

   ZCalc = [(X̄1 − X̄2) − D0]/σ(X̄1−X̄2),   where  σ(X̄1−X̄2) = √(σ1²/n1 + σ2²/n2)

   Reject Ho if:                             Reject Ho if:
   Z < −Zα (or Z > Zα)                       Z < −Zα/2 or Z > Zα/2

Example 1. Large-sample, one-tail test: Two types of hybrid tomatoes have been developed: Big Red and Carolina Giant. A comparative study has been undertaken to determine if the Carolina Giant produces a larger tomato than the Big Red. A sample of 120 Carolina Giant tomatoes gives an average weight of 470 grams with a standard deviation of 23.5. A sample of 160 Big Red tomatoes gives an average weight of 418 grams with a standard deviation of 31.0. At a level of 95 percent confidence, does the average weight of the Carolina Giant tomato exceed the average weight of the Big Red by 45 grams?

Step 1. Formulate the two opposing hypotheses.

    Ho: (μCG − μBR) = 45      CG = Carolina Giant
    Ha: (μCG − μBR) > 45      BR = Big Red

    X̄1 = 470   S1 = 23.5   n1 = 120
    X̄2 = 418   S2 = 31.0   n2 = 160
    D0 = 45

Completing Example 2 (the diet plan): the value for Zα/2 with α = 0.05 is 1.96, and the test statistic is −3.14. Since −3.14 < −1.96, we reject the null hypothesis and conclude that the true average weight loss is not equal to 12 pounds.

Step 2. Select a sample, calculate a test statistic.

    σ(X̄1−X̄2) = √(S1²/n1 + S2²/n2) = √((23.5)²/120 + (31.0)²/160) = 3.25

Step 3. Calculate the test statistic.

    ZCalc = [(X̄1 − X̄2) − D0]/σ(X̄1−X̄2) = [(470 − 418) − 45]/3.25 = 7/3.25 = 2.15

Step 4. Test the rejection criteria.

Zα, or ZCritical, is Z0.05 = 1.645, since our degree of confidence is 95 percent and our risk α is 5 percent. If ZCalc > ZCritical, reject Ho in favor of Ha: 2.15 > 1.645; therefore, we reject Ho in favor of Ha. We are 95 percent confident that the average weight of Carolina Giant tomatoes exceeds that of Big Red tomatoes by no less than 45 grams.

Small-Sample Test of Hypothesis about (μ1 − μ2)

a. One-tail test                          b. Two-tail test

   Ho: (μ1 − μ2) = D0                        Ho: (μ1 − μ2) = D0
   Ha: (μ1 − μ2) > D0                        Ha: (μ1 − μ2) ≠ D0
     (or Ha: (μ1 − μ2) < D0)

   Test statistic (both cases):

   t = [(X̄1 − X̄2) − D0] / √[ Sp² (1/n1 + 1/n2) ]

   Reject Ho if:                             Reject Ho if:
   t > tα (or t < −tα)                       t > tα/2 or t < −tα/2

where

    Sp² = [(n1 − 1)S1² + (n2 − 1)S2²]/(n1 + n2 − 2)

and D0 is the hypothesized difference, frequently D0 = 0.
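The tomato comparison can be sketched directly from the formulas in the box above. A minimal illustration, not part of the original text; the critical value 1.645 is the one-tail Z for α = 0.05.

```python
import math

# Carolina Giant vs. Big Red: does the mean weight differ by more than 45 g?
x1, s1, n1 = 470.0, 23.5, 120   # Carolina Giant
x2, s2, n2 = 418.0, 31.0, 160   # Big Red
d0 = 45.0                       # hypothesized difference

sigma_diff = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)   # about 3.26
z_calc = ((x1 - x2) - d0) / sigma_diff                # about 2.15

Z_CRIT = 1.645  # one-tail, alpha = 0.05
reject_null = z_calc > Z_CRIT  # True
```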

The following assumptions are made:

1. The population variances are equal
2. The populations have the same distribution, assumed to be normal
3. Random samples are selected in an independent manner from the two populations

Example 1. Small-sample, two-tail test: Two brands of walking shoes are compared with respect to sole wear on a treadmill-type testing machine. Shoes were tested until 10 mils of sole were removed. Fifteen samples of brand A and 18 samples of brand B were evaluated. We want to test the hypothesis that there is no difference in the sole-wear rate between the two brands.

Step 1. Formulate the two opposing hypotheses.

    Ho: (μ1 − μ2) = 0 (no difference in mileage)
    Ha: (μ1 − μ2) ≠ 0 (a difference in mileage)

Step 2. Calculate the test statistic.

We will sample the wear rate for the 15 samples of brand A and the 18 samples of brand B, calculate the average and standard deviation for each, and determine the pooled variance Sp².

    n1 = 15   X̄1 = 18.4   S1 = 3.4
    n2 = 18   X̄2 = 20.6   S2 = 3.8

The pooled variance is

    Sp² = [(n1 − 1)S1² + (n2 − 1)S2²]/(n1 + n2 − 2) = [(15 − 1)(3.4)² + (18 − 1)(3.8)²]/(15 + 18 − 2) = 13.14.

Using this pooled variance of 13.14, we calculate the test statistic:

    t = [(X̄1 − X̄2) − D0] / √[ Sp² (1/n1 + 1/n2) ] = [(18.4 − 20.6) − 0] / √[ 13.14 (1/15 + 1/18) ] = −1.74
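The pooled-variance calculation can be sketched as follows (a minimal illustration, not part of the original text, using the sole-wear numbers from this example):

```python
import math

# Brand A vs. brand B sole wear (two-tail, small samples, equal variances assumed)
n1, x1, s1 = 15, 18.4, 3.4
n2, x2, s2 = 18, 20.6, 3.8

# pooled variance, about 13.14
sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)

# test statistic, about -1.74
t_calc = (x1 - x2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
```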

Step 3. Derive a rejection criteria.

We will test our hypothesis at the 10 percent significance level. We are performing a two-tail test in which the true difference between the two brands can be either less than or greater than zero. Looking up the t value for a 5 percent risk (tα/2) and n1 + n2 − 2 degrees of freedom, we find the critical t value to be ±1.697. (We use degrees of freedom of n = 30 because the exact degrees of freedom, n = 31, which would have been correct, were not available from our t-table.)

Step 4. State a conclusion.

Since our test statistic t exceeds the critical region, we reject the null hypothesis and conclude that the wear rate is different.

Hypothesis test for proportions:

Large-Sample Test of Hypothesis about a Population Proportion

a. One-tail test                          b. Two-tail test

   Ho: π = π0                                Ho: π = π0
   Ha: π > π0 (or Ha: π < π0)                Ha: π ≠ π0

   Test statistic (both cases):

   Z = (p − π0)/√[π0(1 − π0)/n]

   Reject Ho if:                             Reject Ho if:
   Z > Zα (or Z < −Zα)                       Z < −Zα/2 (or Z > Zα/2)

Example 1. Large-sample, one-sided test (testing to a hypothesized proportion): During a recent business meeting, the human resources manager remarked that 75 percent of the employees would prefer to work four 10-hour days as opposed to the normal five 8-hour workdays. Of a random sample of 125 employees, 78.5 percent agreed that the 10-hour schedule would be better. At the 10 percent significance level, was the human resources manager correct in his assertion?

Step 1. Formulate the opposing hypotheses.

Ho: π = 0.75
Ha: π > 0.75

Step 2. Derive the rejection rule or criteria.

We are testing at a significance level of α = 10% (or a level of confidence of 90 percent). The Zα for a 10 percent risk is 1.282.

Step 3. Calculate the test statistic.

Z = (p − π0) / √[π0(1 − π0)/n]
Z = (0.785 − 0.750) / √[0.750(1 − 0.750)/125]
Z = 0.90

Step 4. We have already selected the sample, calculated the test statistic, and established the rejection criteria. We now compare the test statistic Z to the critical Zα of 1.282. Since our calculated Z test statistic is not greater than the critical Z of 1.282, we cannot reject the null hypothesis. We cannot conclude that over 75 percent of the employees support the notion that the 10-hour workday would be better than the 8-hour workday.

Up to this point, we have been performing hypothesis tests based on a single statistic, mean or proportion, and a single hypothesized expectation. For example, is the hypothesized mean equal to or greater than a specific value? We may also use hypothesis tests to compare two population means or proportions to test whether they are equal. This type of test is referred to as a hypothesis test about the difference between two statistics. These tests are also one- or two-tail tests; they can be done for population means and proportions, and they are available for both small and large samples. Comparisons of population proportions are generally not performed with samples less than 30. The following examples are given for comparisons of means for small samples (less than 30) and large samples and comparisons of proportions for large samples.
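The single-proportion test worked in the preceding example can be sketched in a few lines (illustrative only; the function name is mine):

```python
import math

def z_one_proportion(p_hat, pi0, n):
    """Z statistic for a large-sample test of a single proportion against pi0."""
    return (p_hat - pi0) / math.sqrt(pi0 * (1 - pi0) / n)

# HR-manager example: observed 0.785 vs. hypothesized 0.750, n = 125
z = z_one_proportion(0.785, 0.750, 125)
print(round(z, 2))  # 0.9
```

Since 0.90 does not exceed the critical Z of 1.282, the null hypothesis stands, as in the text.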

Large-Sample Test of Hypothesis for π1 − π2 or the Difference in Two Population Proportions

a. One-tail test
Ho: (π1 − π2) = 0*
Ha: (π1 − π2) < 0 or (π1 − π2) > 0

b. Two-tail test
Ho: (π1 − π2) = 0
Ha: (π1 − π2) ≠ 0

Test statistic (both cases): Z = (P1 − P2) / σ(P1 − P2)

Reject Ho if: Z < −Zα or Z > Zα (one-tail); Z < −Zα/2 or Z > Zα/2 (two-tail)

*This test can be applied to differences other than zero.

Example 1: Over the past decade it has been suspected that the percentage of adults living in the southeastern region of the United States who eat grits (a gourmet delicacy indigenous to this area) has declined. A sample survey by the Grain Research Institute of Technology (GRIT) in 1986 found that from a random sample of 1500 adults, 528 ate grits on a regular basis. The same survey taken in 1996 with a sample of 1850 found that 685 adults ate grits. Based on these data, has the consumption of grits increased or has it remained the same?

Step 1. Formulate the opposing hypotheses.

Ho: (π1 − π2) = 0
Ha: (π1 − π2) > 0

π1 = true proportion of population that ate grits in 1996
π2 = true proportion of population that ate grits in 1986

Step 2. Set the risk at α = 0.05, and calculate the test statistic.

Z = (P1 − P2) / σ(P1 − P2)

P1 = (np)1/n1, where (np)1 = number eating grits in the sample from 1996 and n1 = 1850:

P1 = 685/1850 = 0.370

P2 = (np)2/n2, where (np)2 = number eating grits in the sample from 1986 and n2 = 1500:

P2 = 528/1500 = 0.352

σ(P1 − P2) = √[P(1 − P)(1/n1 + 1/n2)]  or  σ(P1 − P2) = √[p1q1/n1 + p2q2/n2]

where P = combined proportion = [(np)1 + (np)2]/(n1 + n2) = (685 + 528)/(1850 + 1500) = 0.362

σ(P1 − P2) = √[(0.362)(0.638)(1/1850 + 1/1500)]
σ(P1 − P2) = √[(0.362)(0.638)(0.0012)]
σ(P1 − P2) = 0.017

Test statistic:

Z = (P1 − P2) / σ(P1 − P2)
Z = (0.370 − 0.352)/0.017 = 1.06
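The two-proportion statistic with the combined (pooled) estimate P can be sketched as follows. This is an illustrative script, not the book's; note that carrying full precision gives Z ≈ 1.09, while the text's 1.06 comes from rounding σ(P1 − P2) to 0.017 first. The decision at Zα = 1.645 is the same either way.

```python
import math

def z_two_proportions_pooled(x1, n1, x2, n2):
    """Z statistic for the difference of two proportions, combined-P standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # combined proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # sigma(P1 - P2)
    return (p1 - p2) / se

# Grits example: 685 of 1850 in 1996 vs. 528 of 1500 in 1986
z = z_two_proportions_pooled(685, 1850, 528, 1500)
print(round(z, 2))  # 1.09
```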

Step 3. Compare the test statistic to the rejection criteria.

The appropriate Z-score for α = 0.05 is 1.645. Z calculated is not greater than Z critical; therefore, the null hypothesis cannot be rejected. The proportion of the population that eats grits has not increased. While there is a positive difference between the sample data from 1986 and 1996, this difference is not large enough to indicate a difference in the total population.

Example 2: The proportion defective of a sample of 350 printed circuit boards sampled during March was found to be 0.075. During April, the process was modified in an effort to lower the defect rate. A sample of 280 printed circuit boards yielded a proportion defective of 0.050. Does the evidence support the conclusion that a process improvement has been made?

Step 1. Formulate the opposing hypotheses.

Ho: π1 = π2
Ha: π1 > π2

where: π1 = old process, π2 = improved process

Step 2. Calculate the test statistic.

Z = (p1 − p2) / √[p1q1/n1 + p2q2/n2]
Z = (0.075 − 0.050) / √[(0.075)(0.925)/350 + (0.050)(0.950)/280]
Z = 1.30

Step 3. Compare the test statistic (Z calculated) to the critical Z-score.

For this example, we will set the risk at α = 0.05; the confidence level is 95 percent with a risk or significance level of 5 percent, in which case the critical Z-score is 1.645. If our calculated test Z-score is greater than the critical Z-score, we will reject the null hypothesis (that both processes are the same) and accept the alternative hypothesis that the old process is worse than the improved process. The test statistic is not greater than the Zα value of 1.645; therefore, we cannot conclude that an improvement has, in fact, been made.
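Example 2 uses separate (unpooled) variance estimates for the two proportions rather than the combined P of Example 1. A sketch of that form (function name mine):

```python
import math

def z_two_proportions_unpooled(p1, n1, p2, n2):
    """Z statistic using separate variance estimates p*q/n for each sample."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# Printed circuit board example: March (old) vs. April (modified) process
z = z_two_proportions_unpooled(0.075, 350, 0.050, 280)
print(round(z, 2))  # 1.3
```

As in the text, 1.30 < 1.645, so the improvement cannot be confirmed at the 5 percent level.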

Hypothesis about the Variance

A manager of the reservations center for a major airline has supported the decision to upgrade the computer system that handles flight reservations. His critics, however, claim that the new system, while more powerful than the old system, is too difficult to learn and will lead to more variation in the time to check in passengers. The old system has a documented standard deviation of 7 minutes per passenger. A hypothesis test is designed to settle this issue. It will be based on a random sample of 30 customers using the new system. The level of significance is set at 2 percent.

Step 1. Formulate two opposing hypotheses.

Ho: σ² = 49
Ha: σ² < 49

Note: The hypothesis test must be in terms of the variance, which is the square of the standard deviation.

Step 2. Calculate the test statistic.

The test statistic will be chi-square, χ²:

χ² = S²(n − 1) / σ²

The sample standard deviation S from the 30 sample observations is 3.2 minutes.

χ² = (3.2)²(29) / (7)² = 6.06

Step 3. Compare the calculated χ² to a critical χ².

Rejection criteria: If χ² calculated is less than χ² critical, reject Ho and accept Ha. The critical chi-square for a risk of 2 percent and a sample size of n = 30 is χ²(1−α, n−1) = χ²(0.98, 29) = 15.574.

The calculated χ² = 6.06, and χ² calc < χ² crit: 6.06 < 15.574. We reject the null hypothesis that the true standard deviation is equal to 7 minutes and conclude that the new system does, in fact, have a lower standard deviation.
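The variance-test statistic above is simple enough to sketch directly (illustrative only; the critical value 15.574 is taken from the text's table):

```python
def chi_square_statistic(s, sigma0, n):
    """Chi-square statistic for a test of a single variance: s^2 (n-1) / sigma0^2."""
    return (s**2) * (n - 1) / sigma0**2

# Airline check-in example: sample sd 3.2 min, hypothesized sd 7 min, n = 30
chi2 = chi_square_statistic(3.2, 7.0, 30)
print(round(chi2, 2))  # 6.06

# Lower-tail test at alpha = 0.02: reject Ho when chi2 < 15.574 (chi-square 0.98, 29 df)
print(chi2 < 15.574)  # True
```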

F-test for Equal Population Variances

a. One-tail test
Ho: σ1² = σ2²
Ha: σ1² < σ2² (or Ha: σ1² > σ2²)
Test statistic: FCalc = S2²/S1² (or FCalc = S1²/S2² when Ha: σ1² > σ2²)
Rejection criteria: Reject Ho in favor of Ha if FCalc > FCrit, where FCrit = Fα

b. Two-tail test
Ho: σ1² = σ2²
Ha: σ1² ≠ σ2²
Test statistic: FCalc = larger sample variance / smaller sample variance; that is, FCalc = S1²/S2² when S1² > S2², or S2²/S1² when S2² > S1²
Rejection criteria: Reject Ho in favor of Ha if FCalc > FCrit, where FCrit = Fα/2 when S1² > S2² (or FCalc > Fα/2 when S2² > S1²)

FCrit is based on Fα or Fα/2 with ν1 = numerator degrees of freedom and ν2 = denominator degrees of freedom.

Example 1: The average IQ of 31 students from the University of Smartlandia (U of S) is 132 with a standard deviation of 18.8. The average IQ of 20 students from Mensir State College (MSC) is 128 with a standard deviation of 12.5. At a confidence level of 90 percent, is there more variation at U of S than at MSC?

Step 1. Formulate the hypotheses.

Ho: σ1² = σ2²
Ha: σ1² > σ2²

1 = U of S, 2 = MSC

Step 2. Calculate the test statistic.

FCalc = S1²/S2² = (18.8)²/(12.5)² = 2.26
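The F statistic is just the ratio of the two sample variances. A sketch (function name mine); note that with the stated standard deviations, (18.8)²/(12.5)² evaluates to 2.26, which exceeds the critical value of 1.76 regardless of rounding:

```python
def f_statistic(s1, s2):
    """F statistic for comparing two sample variances (s1^2 over s2^2 for Ha: var1 > var2)."""
    return s1**2 / s2**2

# IQ-variation example: U of S (sd 18.8, n = 31) vs. MSC (sd 12.5, n = 20)
f = f_statistic(18.8, 12.5)
print(round(f, 2))  # 2.26
```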

Step 3. Determine FCrit.

FCrit = Fα, n1−1, n2−1
α = 0.10, n1 − 1 = 30, n2 − 1 = 19
F0.10, 30, 19 = 1.76

Step 4. Confront the test statistic with the rejection criteria.

Reject Ho and accept Ha if FCalc > FCrit: 2.26 > 1.76. Therefore, we conclude that the variation from U of S is greater than that of MSC with a level of confidence of 90 percent.


Individual-Median/Range Control Chart

The selection of control charts for variables characteristics traditionally includes the individual/moving range, average/range, average/standard deviation, and median/range charts. In all of these cases, there is an assumption that the characteristic is originating from a single population. In some cases, there is a macroprocess that is influencing a set of smaller microprocesses. An example of such a case would be a multi-cavity mold operation. Each of the individual cavities would represent a microprocess, and the injection pressure common to all of the cavities would be the macroprocess. Another example would be a filling line, where the several filling heads would represent the individual microprocesses and the master pump supplying all of the individual filling heads would represent the macroprocess.

If in the latter example a 12-head filling line were to be monitored using an average/range control chart where five samples were randomly chosen every hour and the average weight was charted, there would be a 67 percent probability of not including a specific filling head in the sample. Even with hourly sampling, production could continue for a full day without sampling one of the filling heads. If a faulty filling head were to develop a state of short filling that led to an out-of-control condition, the usual response would be to adjust the primary pump. This action would lead to an increased overall average, causing the output of all of the other filling heads to be increased, when our original intention was to increase only the filling head in question.

An optional method to monitor the overall process and address the performance of the individual microprocesses is found in the application of the individual-median/range control chart. The larger macroprocess will be monitored by the movement of the individual medians around the average median. The microprocesses will be monitored by the movement of the individuals around the average. The process variation will be monitored using a traditional range control chart. These two charts will be constructed on a common chart.

The steps for the construction of this chart, as with all variable control charts, involve:

1. Collection of historical data to characterize the current performance of the process
2. Calculation of a location and variation statistic
3. Calculation of statistical control limits for the location and variation statistics based on an average ±3 standard deviations
4. Construction of the control chart, including control limits, and plotting of historical data
5. Continued plotting of future data, looking for signs of a process change

The concepts of the individual-median/range chart are illustrated in the following example. A filling line has five filling heads supplied by a master pump. Each filling head, as well as the master pump, is independently adjustable. The characteristic being monitored is the weight of the material, in ounces, in the containers.

First, we will briefly discuss how to calculate the median. The median, as a statistic, is abbreviated X̃. The median of an odd set of values is the middle value of the set and is determined simply by eliminating the largest value, followed by the smallest, and continuing until only one value remains. Where the number of values in the sample is even, we eliminate the largest and smallest until two values remain and then calculate the average of those two to obtain the median.

Example: Determine the median of the following five values by dropping the largest, smallest, largest, smallest, and so on, until only the median remains:

8 4 11 7 13 → 8 4 11 7 → 8 11 7 → 8 7 → 8 = median
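The elimination procedure just described can be sketched as a small function (illustrative only; sorting first makes the alternating drops a simple trim from both ends):

```python
def median_by_elimination(values):
    """Find the median by alternately dropping the largest and smallest values;
    for an even count, average the last two survivors."""
    vals = sorted(values)          # after sorting, each round drops both ends
    while len(vals) > 2:
        vals = vals[1:-1]          # eliminate current smallest and largest
    return vals[0] if len(vals) == 1 else sum(vals) / 2

print(median_by_elimination([8, 4, 11, 7, 13]))  # 8
print(median_by_elimination([1, 2, 3, 4]))       # 2.5
```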

Step 1. Collect historical data.

Samples are collected twice per day for five days, giving a total of 10 samples for our example. In a real application, one should obtain a minimum of 25 sets of samples to accurately characterize the process. Individuals within a sample should be identified as to the specific head from which they were obtained. The median and range for each sample are determined and recorded.

[Data table, samples #1–#10 (3/6/95 AM through 3/10/95 PM): the weights for heads 1–5 in each sample, with the sample median and range, in ounces.]

Step 2. Calculate the location and variation statistics.

The location statistic will be the grand average median X̄̃ and is determined by averaging the 10 median values:

X̄̃ = (X̃1 + X̃2 + . . . + X̃10)/10 = 44.14

The variation statistic will be the average range R̄ and is determined by averaging the 10 ranges:

R̄ = (R1 + R2 + . . . + R10)/10 = 4.46

Step 3. Determine the statistical control limits for the location and variation statistics.

Control limits for the median: Upper and lower control limits for the median are calculated by adding three standard deviations of the median distribution to the average median for the upper control limit (UCL) and subtracting three standard deviations for the lower control limit (LCL):

UCLx̃/LCLx̃ = X̄̃ ± Ã2R̄

where: X̄̃ = average median
Ã2 = factor dependent on the subgroup sample size n
R̄ = average range

The value for the factor Ã2 for this example, where n = 5, is 0.691 and is found in Table 1. The upper and lower control limits for this example, given that X̄̃ = 44.14 and R̄ = 4.46, are

UCLx̃/LCLx̃ = 44.14 ± (0.691)(4.46)
UCLx̃ = 44.14 + 3.08 = 47.22
LCLx̃ = 44.14 − 3.08 = 41.06

Control limits for the individuals: The control limits for the individuals are based on the average of the individuals, ±3 standard deviations, and are calculated as follows:

UCLXi/LCLXi = X̄̃ ± E2R̄

where: X̄̃ = average median
E2 = factor dependent on the subgroup sample size
R̄ = average range

For our example, X̄̃ = 44.14, R̄ = 4.46, and E2 = 1.290, so the upper and lower control limits for individuals are

UCLXi/LCLXi = 44.14 ± (1.290)(4.46)
UCLXi = 44.14 + 5.75 = 49.89
LCLXi = 44.14 − 5.75 = 38.39

Control limits for the ranges: The upper and lower control limits are determined, respectively, by multiplying the average range R̄ by the factors D4 and D3:

UCLR = D4R̄ and LCLR = D3R̄

where: D4 and D3 = factors dependent on the subgroup sample size n
R̄ = the average range
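The three sets of limits can be computed together. This is a sketch under the assumption of n = 5, using the Table 1 factors quoted in the text (Ã2 = 0.691, E2 = 1.290, D4 = 2.114); the function name is mine:

```python
# Factors for n = 5 from Table 1; D3 is undefined below n = 7, so LCL_R defaults to 0
A2_TILDE, E2, D4 = 0.691, 1.290, 2.114

def median_chart_limits(x_med_bar, r_bar):
    """Control limits for the individual-median/range chart (subgroup size 5)."""
    med = (x_med_bar + A2_TILDE * r_bar, x_med_bar - A2_TILDE * r_bar)  # UCL/LCL medians
    ind = (x_med_bar + E2 * r_bar, x_med_bar - E2 * r_bar)              # UCL/LCL individuals
    ucl_r = D4 * r_bar                                                  # UCL for ranges
    return med, ind, ucl_r

med, ind, ucl_r = median_chart_limits(44.14, 4.46)
print([round(v, 2) for v in med + ind] + [round(ucl_r, 2)])
# [47.22, 41.06, 49.89, 38.39, 9.43]
```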

Table 1 Factors used for individual-median/range charts.

Subgroup n   Ã2 (medians)   E2 (individuals)   D3 (LCL range)   D4 (UCL range)
2            1.880          2.660              —                3.267
3            1.187          1.772              —                2.574
4            0.796          1.457              —                2.282
5            0.691          1.290              —                2.114
6            0.548          1.184              —                2.004
7            0.508          1.109              0.076            1.924
8            0.433          1.054              0.136            1.864
9            0.412          1.010              0.184            1.816
10           0.362          0.975              0.223            1.777
⋮ (the table continues through n = 25)

For our example, R̄ = 4.46:

UCLR = (2.114)(4.46) = 9.43
LCLR = not defined (no values for D3 are defined until the sample size reaches seven); LCLR = 0 as a default

Summary of control limits:

Upper control limit for medians, UCLx̃ = 47.22
Lower control limit for medians, LCLx̃ = 41.06
Upper control limit for individuals, UCLXi = 49.89
Lower control limit for individuals, LCLXi = 38.39
Upper control limit for ranges, UCLR = 9.43
Lower control limit for ranges, LCLR = 0
Average median, X̄̃ = 44.14
Average range, R̄ = 4.46

Step 4. Construct the control chart, sketch in the control limits and averages, and plot the historical data.

Notes on plotting data: All individual points should be identified with a single dot (.), and the particular individual that is identified as the median will be represented by an "x." The median values will be connected with a line. If an individual value falls outside its control limit, it should be identified relative to its origin (such as head #1, head #2, and so on) by placing a number next to the dot. Control limits are traditionally drawn with a broken line (----------------------), and averages are drawn with a solid line (———————). Since we are plotting two location statistics (median and individuals) on one chart, we will use broken lines for the control limits for the individuals and a short-long dash for the median control limits. For subgroup sample sizes less than seven, the average line for the range is not drawn.

The completed chart showing the first 10 sets of data follows:

[Chart: individual-median chart for samples #1–#10, labeled History, with UCLXi = 49.89, UCLx̃ = 47.22, LCLx̃ = 41.06, and LCLXi = 38.39; x = median.]

Note that all of the medians and individuals are within their respective control limits. This process is statistically stable and well behaved.

Step 5. Continue to monitor the process, looking for evidence of a process change.

Signs of a process change will be indicated when one of the SPC rules is violated. Any median value falling outside the median limits represents a rule violation, and any individual value (including the median) that falls outside the individual control limits also represents a rule violation. These rules are numerous, and most have the following two common characteristics:

1. They are patterns that are statistically rare to occur with a normal distribution
2. They indicate a direction in which the process has changed, either increased or decreased

The three more frequently used SPC detection rules are:

1. A single observation outside the upper or lower control limit
2. Seven consecutive points below the average or above the average
3. A trend of seven consecutive points all steadily increasing or decreasing in value

These rules will be applied both to the movement of the individuals with respect to their control limits and to the median with respect to its control limits. Control limits for the range are calculated in the traditional manner for Shewhart charts. The UCL for the range is UCL = D4R̄, and the LCL for the range is LCL = D3R̄. For this example, the LCL is undefined because n = 5 (the LCL for the range is undefined when the sample size is less than seven), and the UCL is calculated to be UCL = 9.43. The only rule applicable to monitoring the range will be rule #1.

Data for the next seven samples are provided below. Plot the data and determine if there is any evidence of a process change.

[Data table, samples #11–#17 (3/11/95 AM through 3/14/95 AM): the weights for heads 1–5 in each sample, with the sample median and range, in ounces.]

The completed control chart follows. Note that there are two incidences where head #3 had individual values that were below the LCL for individuals (38.39). This indicates that this individual fill head is delivering less than would be expected based on the historical characterization.

[Individual-median chart for samples #1–#20 with the History region marked, showing head #3 individuals (labeled "3") below LCLXi = 38.39 at samples #14 and #17; x = median, • = individual. Accompanying range chart with UCL = 9.43.]

Sample #14 and sample #17 have individuals from filling head #3 falling below the LCL for individuals, while the medians for these samples remain in control. This supports the conclusion that filling head #3 is not performing as was established by the historical data. Perhaps the filling nozzle is blocked slightly, causing a restriction in the filling rate. In any case, the output has been reduced enough to cause a process change in this position.

Individual/Moving Range Control Chart

Control charts are used to detect changes in processes. The general principles and use of control charts are universal and are based on the following elements:

1. Collection of historical data
2. Calculation of a location and variation statistic
3. Determination of statistical control limits around the location and variation statistic
4. Graphical presentation of the data and control limits
5. Continuation of data collection and plotting, looking for changes in the future

The ability of a control chart to detect a change in the process is related to:

1. The subgroup sample size
2. The amount of change that has occurred

The individual/moving range control chart has a subgroup sample size of n = 1 and is the weakest of all Shewhart control charts with respect to its ability to detect a process change. When processes are monitored using variables data and the opportunity for obtaining data is relatively infrequent, we may elect to use an individual/moving range control chart. An example of such a case would be measurements of a characteristic of a batch process where individual batches are made once every four hours. Typically, 40 to 100 individual data points should be taken over the period of time that we wish to serve as a baseline period. If we want to detect a process change relative to the last month of performance, then we will take the 40–100 measurements over a one-month period. For demonstration purposes, we will use a total of 15 samples for our historical database.

The following example illustrates how to develop an individual/moving range control chart. The process that our control chart will monitor is the operation of filling out patient information during the admission process at a hospital. The times will be recorded in units of seconds. Samples are taken at approximately 9:00 AM, 1:00 PM, and 4:00 PM each day for five days. The sample size is n = 1, and the total number of samples taken to characterize the process is k = 15.

Step 1. Collect historical data.

Our samples were taken three times per day over a five-day period.

Calculate a location and variation statistic. because it does not have a previous data point associated with it. Moving ranges are always positive in value. and mode. Examples of location statistics are average. median.H1317_CH25. To calculate the average. At the time we collect our initial data.6 15 15 . Average X = 196 + 260 + 187 + 170 +  + 230 3130 = = 208. There will always be one less moving range than there are data points. and the last moving range is 230 – 210 = 20. The range between successive points is defined as the moving range and is calculated by taking the difference between the current value and the previous value. we will also calculate and record the moving ranges. For the individual/moving range control charts. The first data point collected will not have a moving range. the second moving range is 260 – 187 = 73. The best indication of the variation of the process at a given point in time is the range between two successive values around that point in time. The first moving range is 260 – 196 = 64. Examples of variation statistics are standard deviation. range. the location statistic will be the average of all the ranges collected for the historical period.qxd 10/15/07 252 2:12 PM Page 252 The Desk Reference of Statistical Quality Methods The best single value to express the location of the performance of this process at a specific time is the data point taken at that time. All processes can be described or characterized by two types of descriptive statistics: Location—The central tendency or where a process is tending to perform. Variation—The amount of change in the values obtained from a process. and moving range. add all of the values and divide by the number of data points used to obtain the sum. 
Date       Time     Admission time   Moving range
11/2/94    8:45A    196              —
11/2/94    1:12P    260              64
11/2/94    3:48P    187              73
11/3/94    9:04A    170              17
11/3/94    1:00P    296              126
11/3/94    3:55P    171              125
11/4/94    9:00A    320              149
11/4/94    12:55P   139              181
11/4/94    4:10P    196              57
11/5/94    8:53A    228              32
11/5/94    1:07P    176              52
11/5/94    3:58P    125              51
11/6/94    9:11A    226              101
11/6/94    12:55P   210              16
11/6/94    3:57P    230              20

Step 2.

0 = 248.6.4 Based on the historical data we have collected.7 percent of the time and will be based on the average ±3 standard deviations.4 and 410.8 seconds.66MR UCL = 208. Repeat this for the UCL for . UCL = X + 2. Average moving range MR = 64 + 73 + 17 + 126 +  20 1064 = = 76.3 There is no LCL for the moving range. The upper and lower control limits for the individual values are determined by adding and subtracting three standard deviations to the average.2 = 410. There is variation of the individual observations relative to the average. LCL = 6. Using these moving ranges.0 14 14 Step 3. There was variation among the moving ranges just as there was variation among the individual values.7 percent probability that we will find an individual value between 6. and – label appropriately X = 208. which was determined for each successive pair of data points.4. Our original variation statistic was the moving range. A single UCL for the moving ranges is calculated using the relationship: — Upper control limit MR = 3.2.66 × 76. These limits will define the upper and lower values we expect to encounter approximately 99. Note that the first individual observation does not have an associated moving range. The average of the individuals represents the best overall location statistic for the process. and from the average moving range we estimated three standard deviations. An estimate of three standard deviations can be made from the relationship — 3S = 2. Step 4.qxd 10/15/07 2:12 PM Page 253 Individual/Moving Range Control Chart 253 The variation statistic we will use is the average moving range.6 + 202. LCL = X − 2.8.66MR – — Lower control limit. there is a 99. We used the moving range to calculate the average moving range. Construct the control chart and plot the data.2 = 6.H1317_CH25.267 × 76. Determine control limits for the location and variation statistics. 3S = 2. Locate the position of the control limits for the individuals and the average of the individuals. 
Draw the average line using a solid line.66MR . and UCL = 410. Control limits will be determined for the individual values that make up the average.6 − 202.8 LCL = 208. – — Upper control limit. we calculated an average mov— ing range MR.0 = 202.267MR UCLMR = 3. and draw the control limits using a broken line. It is calculated by averaging the moving ranges.

Plot the individual and moving range values.qxd 254 10/15/07 2:12 PM Page 254 The Desk Reference of Statistical Quality Methods the moving range. An individual/moving range chart (historical data only) follows: 300 UCL = 248. In the area above the UCL for the individuals in the vicinity of sample #6 through sample #9. and connect the points as they are located.3 200 100 0 1 5 10 15 Moving range UCL = 410. record the word History to identify those data points used for historical characterization. Draw a vertical wavy line between sample #15 and sample #16.6 200 100 LCL = 6.8 400 History 300 Avg. = 208.H1317_CH25.4 0 1 5 10 15 Individuals .3. This line will serve to separate the historical data used to characterize the process and establish control limits from values plotted from the future. Label the UCL for the moving range as UCL = 248. It is not necessary to show the average of the moving range.

Step 5. Continue to collect data from the process, and plot the results on the control chart, looking for process changes. If the process has not changed, then the distribution of data in the future will support the statistical characterization of the process as described by the historical data. For our example, the distribution would conform to one having an average of 208.6 with a standard deviation for the distribution of individuals of 67.4 (one-third of 202.2, which is an estimate of three standard deviations), and we would expect the maximum moving range to be 248.3. If either the process average or the normal expected variation were to change, we would expect to see this change manifest itself as a change in the relationship of the individual values to the control limits.

There are several rules that, if violated, lead us to assume that a change in the process has occurred or that a lack of control is indicated. It is not impossible for these rules to be violated even when there has been no real process change, but the likelihood is so small that when we do see a rule violation, we assume that a change has occurred.

Rule One: A lack of control is indicated whenever a single point falls outside the three standard deviation limits (shift detection).

Rule Two: A lack of control is indicated when seven consecutive points are on the same side of the average line with none outside the three standard deviation limits (shift detection).

Rule Three: A lack of control is indicated when seven consecutive points are steadily increasing or decreasing in value (trend detection).

Rule Four: Two out of three consecutive points are greater than the two standard deviation limits, with the third falling anywhere. No points fall outside the three standard deviation limits.

Rule Five: Four out of five points are greater than one standard deviation, and the fifth point falls anywhere. No points fall outside the three standard deviation limits.

All of these rules have the following common features:

1. They are statistically rare to occur.
2. They indicate a direction in which the process parameter is changing.
3. Only rules one through three utilize the ±3 standard deviation limits and are the rules traditionally used in an operational environment.

The five patterns described by the aforementioned rules are the most frequently used to detect process changes. There are other patterns representing nonstable situations, some of which may not indicate a direction of change. For a more detailed discussion of control chart patterns, see Statistical Quality Control Handbook (Western Electric Company 1958, 149–183).

Only rule one should be applied to the moving range portion of the individual/moving range control chart. This is because the distribution of the moving range is not normal and is not symmetrical with respect to the average moving range.

Continue plotting future data. Plot the following additional data points, and, based on rule one, rule two, and rule three, determine if there is evidence of a process change. Has the process average changed? Has the process variation changed?

Date       Time     Admission time
11/7/94    8:56A    115
11/7/94    12:40P   175
11/7/94    3:50P    250
11/8/94    9:00A    215
11/8/94    1:10P    330
11/8/94    4:05P    220
11/9/94    9:05P    280
11/9/94    12:58P   215
11/9/94    3:50P    305
11/10/94   8:30A    275
11/10/94   1:10P    125
11/10/94   4:00P    150
11/11/94   8:50A    250
11/11/94   1:11A    285
11/11/94   3:51P    190
11/12/94   9:11A    105
11/12/94   1:00A    330
11/12/94   4:00P    250

After completing your chart, compare it with the following finished chart.

[Finished individual/moving range chart, samples 1 through 30: individuals with Avg. = 208.6, UCL = 410.8, and LCL = 6.4; moving range with UCL = 248.3; the History period (samples 1 through 15) is separated from the future data by the wavy line.]
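The rule checks described in this section are easy to automate. The following sketch applies rule one and rule two to the 18 future admission times using the historical limits from this example; the helper name `longest_run` is ours, not from the text:

```python
# Future admission times, samples #16 through #33 (history = samples 1-15)
data = [115, 175, 250, 215, 330, 220, 280, 215, 305, 275,
        125, 150, 250, 285, 190, 105, 330, 250]
avg, ucl, lcl = 208.6, 410.8, 6.4   # historical limits for the individuals

# Rule one: a single point outside the three standard deviation limits
rule_one = [x for x in data if x > ucl or x < lcl]

# Rule two: seven or more consecutive points on the same side of the average
def longest_run(values, center):
    best_len, best_start, side, run, start = 0, 0, None, 0, 0
    for i, v in enumerate(values):
        s = v > center
        if s is side:
            run += 1
        else:
            side, run, start = s, 1, i
        if run > best_len:
            best_len, best_start = run, start
    return best_len, best_start

run_len, run_start = longest_run(data, avg)
print(rule_one)                 # [] - no rule-one violation
print(run_len, 16 + run_start)  # 8 18 - eight points above, starting at sample #18
```

A run of eight consecutive points on one side of the average violates rule two, matching the conclusion drawn from the finished chart.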

Eight consecutive points above the average are seen, starting with sample #18 (11/7/94 3:50 PM), violating rule two. The chart indicates the possibility of an increase in the average with no change in the variation.

Bibliography

Besterfield, D. 1994. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall.
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th edition. New York: McGraw-Hill.
Hayes, G. E., and H. G. Romig. 1982. Modern Quality Control. Encino, CA: Glencoe.
Montgomery, D. C. 1996. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons.
Western Electric Company. 1958. Statistical Quality Control Handbook. 2nd edition. New York: Western Electric Company.
Wheeler, D. J., and D. S. Chambers. 1992. Understanding Statistical Process Control. 2nd edition. Knoxville, TN: SPC Press.


Measurement Error Assessment

All of the conclusions reached when using data obtained from observations depend on the accuracy and validity of the data. We assume that the information fairly represents the truth. Measurement data can be derived either from pure attribute inputs, such as visual standards, or from directly measured variables, such as gages. The latter type of measurement error will be discussed first.

Variables Measurement Error Terms and Concepts

Accuracy: The degree to which the observed value agrees with the true value.

Precision: The degree to which repeated observations vary from the true value or from each other; a statement of variation.

Repeatability: A measure of how well one can obtain the same observed value when measuring the same part or sample over and over using the same measuring device. Repeatability is sometimes referred to as equipment variation, or EV.

Reproducibility: How well others agree with our measurement when measuring the same sample using the same measuring device. Reproducibility is sometimes referred to as appraiser variation, or AV.

Example of repeatability and reproducibility: Bob has been asked to measure the diameter of a steel shaft. His results show five different measurements on the same part:

0.50011  0.50001  0.50013  0.50012  0.50009

This variation can be attributed to the following three causes:

1. Measurement variation due to the measuring device
2. Measurement variation due to positional variation of the part
3. Measurement variation due to inconsistency of the measuring technique

We will assume that while Bob's technique may or may not be correct, he is using the same technique each time he makes a measurement; we will exclude technique as a contributing factor. Also exclude the possibility of positional variation, as this is generally controlled by specifying where the part is to be measured. This leaves only the measuring device as a source of error. Error or variation from repeated measures is called repeatability error or equipment variation (EV).

If we ask others in the department in addition to Bob to measure the same part using the same measuring device, the results might be:

Bob    0.50008
Mark   0.50011
Mary   0.49995
Jim    0.50000

We will assume the lack of agreement is associated not with the measuring device (this has already been acknowledged via the repeatability, or EV) but rather with the different techniques in using the measuring device. The lack of agreement is due to the reproducibility error or appraiser variation (AV).

The amount of variation for each of these contributing factors can be determined by simply calculating the standard deviation of each set of data for the repeatability and the reproducibility:

Standard deviation for Bob's five measurements: 0.0000482
Standard deviation for the four measurements from all operators: 0.0000733

In most formal studies to assess measurement error, the range of measurements is used to estimate the standard deviation. From statistical process control (SPC), the relationship σ̂ = R̄/d2 may be used to estimate the standard deviation of measurement error. In this relationship, R̄ is the average range of repeated measures, and d2 is a constant that is dependent on the number of observations used to calculate the range. In pure SPC calculations, the actual number of samples from which the average range is determined is assumed to be very large (n > 25) for most control chart applications. Since there is a relatively small number of operators who typically measure a given part, the d2 factors must be modified to compensate for the small number of samples. These modified factors, abbreviated d2*, depend on both the size of the sample from which the range is determined and the number of samples used to determine the average. The values for d2* can be found in Table 1.
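The two standard deviations quoted above can be reproduced directly from the raw readings. A quick sketch, using the values from this example:

```python
import statistics

bob = [0.50011, 0.50001, 0.50013, 0.50012, 0.50009]  # Bob's five repeat readings
operators = [0.50008, 0.50011, 0.49995, 0.50000]     # one reading per operator

print(statistics.stdev(bob))        # ~0.0000482, the repeatability (EV) spread
print(statistics.stdev(operators))  # ~0.0000733, the reproducibility (AV) spread
```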

09 2.08 2.27 3.99 2.98 2.48 3.55 2.48 3.54 2.87 2.19 3.884 0.49 3.16 1.001 Estimation of one standard deviation of EV0: Use the relationship of σˆ = R .72 1.35 3.28 1.18 3.35 3.34 2.08 2.48 3.51 3.19 3.128 1.27 3.17 1. Size of sample m Number of samples g 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 >15 2 3 4 5 6 7 8 9 10 11 1.34 2.34 2.71 1.001 0.35 3.15 1.75 2.704 2.004 0. The samples are selected with typical measurement values over the entire range of values expected to be found.693 2.34 2.27 3.407 3.88 2.08 2.55 2.27 3.74 2.96 2.71 1.07 2.859 0.83 2.85 2.22 3.55 2. Each of the five parts is measured two times.55 2.74 1.19 3.866 0.36 3.001 0.07 2.890 0.85 2.56 2.01 3.000 0.73 1.18 3.870 0.10 2.48 3.18 3.99 2.45 3.09 2.11 3.55 2.078 3.98 2.85 2.34 3.89 2.852 0.34 2.28 3.09 3.77 2.42 3.55 2.34 3.870 0.08 3.862 0.27 3.34 3.72 1.34 3.99 2.42 3.71 2.81 1.72 2.37 2.534 2.970 3.35 3.13 3.41 3.43 3.09 3.72 2.888 0.48 3.38 3.28 3.86 2.34 2.851 0.38 2.336 14 15 3.868 0.27 3.72 1.10 3.326 2.08 3.00 2.qxd 10/15/07 4:33 PM Page 261 Measurement Error Assessment 261 Table 1 d2* values for the distribution of the average range.41 1.08 2.87 2.003 0.87 2.57 2.15 2.258 3.67 2.41 3.60 2.02 3.886 0.48 3.10 3.18 3.885 0.002 0.889 0.98 2.847 3.30 3.72 2.888 0.35 2.41 3.867 0.55 2.49 3.18 3.98 2.08 3.886 0.19 1.002 0.34 3.77 1.73 2.73 1.42 3.36 2.91 2.855 0.50 3.15 1.866 0.16 1.19 3.27 3.20 3.09 3.27 3.54 2. The results are tallied as follows: Operator A Operator B Part # Trial 1 Trial 2 Range Trial 1 Trial 2 Range 1 2 3 4 5 0.58 2.35 3. Two operators are chosen to make the measurements.34 2.75 1.12 2.48 3.10 3.48 3.003 0.71 2.17 1.29 3.27 3.173 12 13 3.71 1.35 3.41 3.43 3.55 3.48 2.98 2.86 2.34 3.72 2.42 3.16 1.37 3.98 2.72 2.49 3.72 2.42 3.98 2.24 2.27 3.91 1.18 3.98 2.40 2.09 3.73 2.07 2.48 3. 
.71 2.002 0.23 1.15 1.18 3.10 3.07 2.42 3.18 3.56 2.15 1.35 2.09 3.87 2.472 Example of repeatability (EV) error determination: Five samples are chosen for measurement.41 3.26 3.42 3.21 3.863 0.85 2.86 2.35 2.09 3.71 1.059 2.49 3.71 1.11 2. d 2* – where the average range R is the grand average range of the two operators’ ranges for the duplicate measurements for each of the five sample parts.H1317_CH26.21 1.18 1.

The average range for operator A is R̄a = 0.0028, and the average range for operator B is R̄b = 0.0010. The grand average range for the two operators is

R̄ = (0.0028 + 0.0010)/2 = 0.0019.

This average range is the same as if we had averaged all 10 sets of ranges.

Estimation of one standard deviation of EV: Use the relationship σ̂ = R̄/d2*. In this case, we are using ranges of two observations each, so the subgroup sample size in terms of measurement error is m = 2 (m = the number of trials). The total number of samples g is determined by multiplying the number of parts (five) by the number of operators participating in the study (two): g = 2 × 5 = 10. Looking up the appropriate d2* in Table 1, where m = 2 and g = 10, we find d2* = 1.16.

σ̂ = R̄/d2* = 0.0019/1.16 = 0.0016

This estimate is for one standard deviation of the repeatability error and should be written as σEV. Traditionally, the EV or repeatability is reported as 5.15σ to reflect a 99 percent level of confidence:

EV = 5.15 × 0.0016 = 0.0082

By dividing the EV by two and rewriting, we may also express the EV as ±0.0041. Any measurement we make with the system under evaluation is subject to a measurement error of ±0.0041 99 percent of the time due to equipment error.

Example of reproducibility (AV) error determination: In the example illustrating the concept of repeatability, each operator measured the same five parts twice. Any differences between the average of operator A and the average of operator B would be attributed to differences in technique. If there were no measurement error due to the actual measuring device (EV = 0), then the averages of all ten observations (5 parts × 2 measurements each = 10 measurements) for the two operators would be equal. This error factor is referred to as reproducibility error, or AV.

The AV standard deviation is calculated similarly to that of the EV. In this case, we are using a range of one sample of two observations: the range of the two operator averages. The appropriate d2* for this case is obtained from Table 1.

The value d2* = 1.41 is dependent on the number of operators (m = 2) and the number of samples (g = 1), since there is only one range calculation. Each operator's average is determined by averaging all 10 of that operator's measurements, and the range R2 is determined by subtracting the lowest average from the highest:

Average operator A = 0.8742
Average operator B = 0.8704
Difference R2 = 0.0038

Estimate of one standard deviation for appraiser variation:

σ̂AV = R2/d2* = 0.0038/1.41 = 0.0027

Unadjusted AV = 5.15 × σAV = 0.014

This unadjusted AV is contaminated by the variation due to the measuring device (EV) and must be adjusted by subtracting a proportion of the EV from it.

Corrected AV compensated for EV:

AV = sqrt[ (5.15 R2/d2*)² − (5.15 σEV)²/(n t) ]

where: R2 = range of the operator averages = 0.0038
n = number of parts used in the evaluation = 5
t = number of times (trials) each operator measures the part = 2
σEV = 0.0016

AV = sqrt[ ((5.15)(0.0038)/1.41)² − ((5.15)(0.0016))²/((5)(2)) ]
AV = sqrt[ 0.000193 − 0.0000068 ]
AV = 0.0136, or ±0.0068
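The corrected AV calculation can be sketched the same way, using this example's values; the subtraction under the radical removes the EV contamination:

```python
import math

r2 = 0.0038        # range of the two operator averages
d2_star = 1.41     # Table 1, m = 2 operators, g = 1 range
sigma_ev = 0.0016  # repeatability standard deviation found earlier
n, t = 5, 2        # parts, and trials per operator

unadjusted_av = 5.15 * r2 / d2_star
av = math.sqrt(unadjusted_av**2 - (5.15 * sigma_ev)**2 / (n * t))
print(round(unadjusted_av, 3), round(av, 4))   # 0.014 0.0136
```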

Note: It is technically possible that a negative value could be encountered under the radical. If this happens, default the AV to zero.

Combined (Gage) Repeatability and Reproducibility Error (GR&R)

Since repeatability and reproducibility are forms of standard deviation, they cannot be added together directly. Variances (the square of the standard deviation) can be added. To combine the two error factors, we square each, add them together, and take the square root:

GR&R = sqrt[ (AV)² + (EV)² ]
GR&R = sqrt[ (0.0136)² + (0.0082)² ]
GR&R = 0.016 (or GR&R = ±0.008)

Any measurement made with this system is subject to an error of ±0.008 units of measurement 99 percent of the time. Dividing by 5.15, the standard deviation of measurement error σR&R is 0.0032.

In order to make a more quantitative judgment as to the significance of this amount of error, we compare the total GR&R to the specification tolerance. The GR&R is expressed as a percentage of the total tolerance (upper specification − lower specification). Assume that for this example the specification is 0.5000 ± 0.030. The %GR&R would be

%GR&R = (GR&R/Total tolerance) × 100
%GR&R = (0.016/0.060) × 100
%GR&R = 26.7%

As a result of measurement error, 26.7 percent of the total tolerance is consumed. As a rule of thumb, the following guidelines may be used:

%GR&R less than 10 percent: acceptable
%GR&R 10 percent to 30 percent: may be acceptable depending on application
%GR&R greater than 30 percent: not acceptable

Consequences of Measurement Error

The following two erroneous inspection conclusions may be made, each of which is amplified when measurement error is excessive:

1. The rejection of materials and parts that truly meet specification requirements
2. The acceptance of materials or parts that truly do not meet specification requirements

In the previous example, the total repeatability and reproducibility error was determined to be 0.016, giving a standard deviation of measurement error of 0.0032.
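Combining the two components and expressing the result against the tolerance takes only a few lines. A sketch with this example's numbers; note the text rounds GR&R to 0.016 before taking the percentage:

```python
import math

av, ev = 0.0136, 0.0082
grr = round(math.sqrt(av**2 + ev**2), 3)   # combined R&R

tolerance = 0.060                          # specification 0.5000 +/- 0.030
pct_grr = round(grr / tolerance * 100, 1)
print(grr, pct_grr)   # 0.016 26.7
```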

If a part was measured and found to be 0.529 and the upper specification was 0.530, what is the probability that the true measurement is greater than 0.530? With this information we can calculate the probability that the part exceeds the upper specification and that measurement error has led us to falsely accept the part when we should, in fact, reject it. This probability can be determined using the normal distribution, with the observed measurement of 0.529 as the average, the standard deviation of measurement error of 0.0032, and the upper specification limit of 0.530 as the point of interest. (The nominal is 0.500, the lower specification is 0.470, and the upper specification is 0.530; the observed value of 0.529 falls just inside the upper specification.)

ZU = (0.530 − 0.529)/0.0032 = 0.31

A Z-score of 0.31 corresponds to a probability of exceeding 0.530 of 0.3783. There is a 37.8 percent chance that the part actually exceeds the upper specification limit of 0.530; this is approximately a one-in-three chance.

Supplemental problem: An R&R of 0.355 is reported for a measuring system. An inspected unit is deemed out of specification with an observed measurement of 11.92. The lower specification is 12. What is the probability that the part actually meets the specification requirement?

Confidence Interval for Repeatability, Reproducibility, and Combined R&R

Confidence intervals for the AV, EV, and R&R can be calculated similarly to those of the standard deviation. Recall that the confidence interval for the standard deviation is determined by

(n − 1)s²/χ²α/2 < σ² < (n − 1)s²/χ²1−α/2
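The 37.8 percent figure can be verified with the standard normal distribution; `statistics.NormalDist` from the Python standard library is one way to do it (a sketch):

```python
from statistics import NormalDist

observed, usl, sigma_meas = 0.529, 0.530, 0.0032
z = round((usl - observed) / sigma_meas, 2)   # Z-score to the upper specification
p_over = 1 - NormalDist().cdf(z)              # chance the true value exceeds the USL
print(z, round(p_over, 4))   # 0.31 0.3783
```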

1 48.6 18.8 47.4 57.8 74.9 42.qxd 10/15/07 266 4:33 PM Page 266 The Desk Reference of Statistical Quality Methods where: n = sample size s = standard deviation χ2 = chi square value for appropriate degrees of freedom and confidence level.8 16.9 14.8 104.H1317_CH26.0 103.7 18.5 15 13.1 22.1 70.0 90.0 40.8 74.3 73.9 131.4 112.3 30.5 65.8 26.7 97.5 47.6 67.4 12.38 14 10. Confidence Interval for Repeatability.8 149.5 140.8 5. The degrees of freedom will depend on the d2* value.0 67.2 54.76 7.2 105.3 44.6 82.2 126.2 3.9 129.2 131.4 34.8 101.7 8.4 40.5 2.4 53.9 36.9 58.2 20.6 112.5 44.5 35.12 12 9.9 9.1 50.6 19.8 21.9 12 10.1 16.4 11.7 7.82 2.9 3 2.03 6.4 22.0 35.6 24.0 42.8 15 10.6 6 4.5 7 6.45 8.0 80.7 88.0 23.3 89.1 43.5 9.6 113.4 22.9 16.3 84.3 11.0 7.4 8 7.6 105.7 78.7 47.8 45.9 36.9 63.1 122.0 79.6 27.6 29.3 31.9 5.2 30.9 90.47 5.4 52.d.5 54.5 40.6 70.8 37.1 52.0 28.9 65.1 1.27 6.0 54.0 2 1.0 17.7 30.3 13. Size of sample m Number of samples g 2 1 1.3 59.5 22.0 25.1 10 9.3 7 5.8 15.2 13.7 94.4 51.0 3. EV Lower confidence limit LCL = Upper confidence limit ( EV )2 χ2α / 2 v1 UCL = ( EV )2 χ 21− α / 2 v1 where: EV = calculated EV from measurement error study v1 = degrees of freedom for appropriate m (sample size) and g (number of samples) from Table 2 χ2 = chi square for specified confidence and v1 degrees of freedom Table 2 Degrees of freedom v as a function of d2*.3 9.1 56.3 26.62 4.0 37.9 75. 0.4 62.4 27.3 79.1 105.3 147.7 8.9 139.76 13 9.7 52.7 81.2 9 8.8 7.0 84.5 52.8 122.3 7.6 74.5 66.4 >15 c.6 18.5 61.7 5 4.6 19.5 18.3 32.9 49.6 31.2 13.9 58.1 97.5 74.1 32.4 37.1 122.7 8.4 12.54 .9 96.4 20. 
The equation will be modified such that the sample size and degrees of freedom will be taken from Table 2.6 30.1 14.3 33.6 14 12.5 10.1 42.6 24.1 60.8 4 3.74 3.3 89.0 49.6 116.1 31.5 36.9 119.7 9.5 41.6 95.7 27.2 22.8 61.9 25.4 47.4 114.3 60.8 81.9 109.1 87.8 27.2 40.6 6 5.5 11.0 11 9.8 13 11.1 24.8 16.6 94.97 10.0 21.8 158.6 72.3 8 9 10 11 6.5 68.7 137.2 63.8 84.8 67.7 38.3 20.5 40.0 99.88 3 4 5 2.

Example calculation: Calculate the 90 percent confidence interval for the EV, given the following (from the previous example).

Part I: Lower confidence interval limit:
EV = 0.0082
Number of samples = 5 parts
Number of operators = 2
Total measurements taken, g = 10 (parts × operators = 5 × 2 = 10)
Number of trials, m = 2
Level of confidence = 90%
From Table 2, for m = 2 and g = 10, v1 = 9.0
χ²α/2 for 9.0 degrees of freedom and α/2 = 0.05: χ²α/2 = 16.92

LCL = sqrt[ v1 (EV)²/χ²α/2 ] = sqrt[ (9)(0.0082)²/16.92 ] = 0.0059

Part II: Upper confidence interval limit:
χ²1−α/2 for 9.0 degrees of freedom and 1 − α/2 = 0.95: χ²1−α/2 = 3.33

UCL = sqrt[ v1 (EV)²/χ²1−α/2 ] = sqrt[ (9)(0.0082)²/3.33 ] = 0.0135

Summary: The 90 percent confidence interval for the EV is 0.0059 to 0.0135. An appropriate statement would be: "We do not know the true repeatability error (EV), but we are 90 percent confident it is between 0.0059 and 0.0135."

Confidence Interval for Reproducibility, AV

Lower confidence limit: LCL = sqrt[ v2 (AV)²/χ²α/2 ]
Upper confidence limit: UCL = sqrt[ v2 (AV)²/χ²1−α/2 ]

The calculations for the AV are done exactly as for the EV except that the degrees of freedom are changed. With this example, the number of trials is two, since we are dealing with a range of two averages (operator A and operator B), and we have only one such sample range. Calculate the 90 percent confidence interval for the AV given the following (from the previous example).

Part I: Lower confidence interval limit:
AV = 0.0136
Number of trials, m = 2 (each operator has an average, and there are two operators)
Number of samples, g = 1 (there is only one range of averages)
Level of confidence = 90%
From Table 2, for m = 2 and g = 1, v2 = 1.00
χ²α/2 for 1.00 degree of freedom and α/2 = 0.05: χ²α/2 = 3.84

LCL = sqrt[ (1)(0.0136)²/3.84 ] = 0.0069

Part II: Upper confidence interval limit:
v2 = 1.0
χ²1−α/2 for 1.00 degree of freedom and 1 − α/2 = 0.95: χ²0.95 = 0.0039

UCL = sqrt[ (1)(0.0136)²/0.0039 ] = 0.2178

Summary: The 90 percent confidence interval for the AV is 0.0069 to 0.2178. An appropriate statement would be: "We do not know the true reproducibility error (AV), but we are 90 percent confident it is between 0.0069 and 0.2178."

Confidence Interval for Combined Repeatability and Reproducibility, R&R

Lower confidence limit: LCL = sqrt[ (v1 + v2)(R&R)²/χ²α/2 ]
Upper confidence limit: UCL = sqrt[ (v1 + v2)(R&R)²/χ²1−α/2 ]

Part I: Lower confidence interval limit:
Degrees of freedom: v1 = 9.0, v2 = 1.0, v1 + v2 = 10.0
R&R = 0.016
χ²α/2 for α = 0.10 and 10.0 degrees of freedom: χ²0.05 = 18.3

LCL = sqrt[ (10)(0.016)²/18.3 ] = 0.0118

Part II: Upper confidence interval limit:
χ²1−α/2 for α = 0.10 and 10.0 degrees of freedom: χ²0.95 = 3.94

UCL = sqrt[ (10)(0.016)²/3.94 ] = 0.0254

Summary: The 90 percent confidence interval for the R&R is 0.0118 to 0.0254. An appropriate statement would be: "We do not know the true R&R, but we are 90 percent confident that it is between 0.0118 and 0.0254."

In the event that the measurement R&R is excessive, a short-term fix could be realized by reporting an average of several observations. This is a direct result of the central limit theorem: the amount of error decreases proportionally to the square root of the sample size, so reporting an average of n = 4 will cut the measurement error in half relative to the error expected from reporting a single observation.

Attribute and Visual Inspection Error

When working with visual standards, it is appropriate to maintain a set of reference standards that represent acceptable and nonacceptable quality. Examples of such standards that have been developed internally are soldering standards, textile workmanship, paint quality, and print quality, to name a few. Periodically, inspectors should be judged or evaluated on their ability to inspect to those quality visual standards that have been established.

The following method, developed by C. A. Melsheimer in 1928 and described by J. M. Juran (1935, 643–644), is suitable for determining inspection efficiency or accuracy. It involves the determination of the percentage accuracy of inspection, that is, the percentage of materials or items correctly classified as conforming or nonconforming to an attribute requirement as judged by a reference or check inspector.

A collection of items to be inspected is assembled. This "test" group of items contains a mixture of both defective and nondefective items; it may be required that items be synthetically rendered defective. The specific items are tagged with an identification number, and a consensus is reached by several individuals as to the classification of each item as being defective or nondefective. A master list is made of the item number, the location of the defect, and the nature of the defect. A description of the exact location and type of defect(s) found is also noted. The items are then renumbered by the person conducting the attribute inspection efficiency study.

The lot of items subject to inspection is given to the inspector participating in the evaluation. The inspector inspects the lot and identifies the nonconforming items; the items found defective by the participating inspector are tallied as D. These units are then reinspected by the check inspector, and any items in the defective group D that are judged acceptable by the check inspector are tallied as K. The units found to be conforming by the participating inspector are screened by the check inspector, and the defective units missed by the participating inspector but found by the check inspector are tallied as B.

The inspection efficiency or accuracy is determined by

% inspection accuracy (% of defects correctly identified) = (D − K)/(D − K + B) × 100

where: D = defects reported by the participating inspector
K = number of "good" units rejected by the participating inspector, as determined by the check inspector
B = defects missed by the participating inspector, as determined by the check inspector

For the example, the test lot contains 250 good and 40 defective units before inspection. The participating inspector reports D = 35 defects, leaving 255 units judged good. The check inspector finds K = 4 of the 35 to be OK (true defects found = 31) and, screening the 255 "good" units, finds B = 9 missed defects (246 units remain good after the check inspection).

Inspection efficiency = (35 − 4)/(35 − 4 + 9) × 100 = 77.5%
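The efficiency calculation is a one-liner; this sketch uses the counts from the example above:

```python
d = 35   # defects reported by the participating inspector
k = 4    # of those, judged acceptable by the check inspector
b = 9    # defects missed by the inspector but found by the check inspector

efficiency = (d - k) / (d - k + b) * 100
print(round(efficiency, 1))   # 77.5 - the inspector caught 31 of the 40 true defects
```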

A Method for Evaluating Attribute Inspection

Attribute inspection involves the use of sensory evaluation, and such observations (measurements) are by their very nature subjective. This means that individuals will have different interpretations regarding the state of a condition, to say the least. It is difficult to quantify a judgment call, and herein lies the real problem. The following is one method to comparatively evaluate several inspectors with respect to attribute or sensory inspection.

First we need to decide what is acceptable and what is not. One acceptance criterion is that the incident is detected. For example, a scratch of a certain length might be detected, but it might be of such magnitude as to render the unit of inspection nonconforming. This threshold can be thought of as a maximum allowed defect (MAD). A spot on a printed label, for example, might be acceptable as long as it is less than the MAD; beyond that, it does not meet the requirement, leading to a nonconformity. As the characteristic becomes closer and closer to the MAD, the more likely the call will be incorrect.

In order to evaluate the level of effectiveness for a sensory inspection, it is suggested that a test group of approximately 40 samples be made. Samples of the item being inspected are made with varying degrees of the nonconforming element, 50 percent of which pass the requirement. It is important that 50 percent of the test group be good and 50 percent be bad (according to the committee). A committee will have the responsibility of determining which samples are good, which ones are definitely good, and which ones are good but to a lesser degree than those that are definitely good; in our case, these are spots that are less than the MAD. Of the 50 percent that are less than the MAD, half are marginally less than the MAD and half are definitely less than the MAD. Of the remaining samples that are nonconforming, half are marginally bad and half are definitely bad. The terms marginally and definitely are subjective. The 40 samples are randomly numbered 1–40, and a record is kept regarding the "truth" of the status of the numbered samples.
Any detection that is not greater than the MAD will be considered acceptable. Of the remaining samples that are nonconforming. In order to evaluate the level of effectiveness for a sensory inspection. it is suggested that a test group of approximately 40 samples be made. Samples of the item being inspected are made with varying degrees of the nonconforming element. and herein lies the real problem . .H1317_CH26. the inspected unit would be acceptable (obvious in this example). spots that are less than the MAD.

Inspector Sample Date Trial 1 Trial 2 Trial 3 Actual 1 GG = 2 GB = 3 4 5 6 7 8 9 10 11 12 BG = BB = .H1317_CH26.280’’) 3 (0.314’’) 2 (0.320’’) 40 (0.qxd 10/15/07 4:33 PM Page 273 Measurement Error Assessment MAD = (0. Sample 1 (0.310’’) 273 Note: A spot > the MAD is considered a defect (nonconformity).275’’) A worksheet is prepared for the administration of the attribute measurement assessment.

” The samples are then presented to the inspector in a random order for a first. The column “Actual” shows the true condition of the sample as determined by the committee. This process continues until all 40 samples have been inspected.H1317_CH26. called Bad BG = count of actual Bad. called Good BB = count of actual Bad. Inspection responses that are classified as bad by the participant are recorded as a “–” and those that are good are recorded as a “+. called Good GB = count of actual Good. called Bad Total of all counts should equal the total of all the inspections. During the study. only 12 samples are used. second. For this example case. the inspector is not allowed to see the data on the worksheet.qxd 274 10/15/07 4:33 PM Page 274 The Desk Reference of Statistical Quality Methods The samples are presented to the participant one at a time in a random order. and his or her decision is recorded on the worksheet. Data for the attribute study: Inspector MC Date 11/2/12 Sample Trial 1 Trial 2 Trial 3 Actual 1 + + + + GG = 15 2 + – – – GB = 3 3 – – – – + – – + BG = 4 4 5 – – + – BB = 14 6 + + + + 7 – – – – 8 + + + + 9 – – – – 10 + + + + 11 + – + + 12 + + – – GG = count of actual Good. and third trial inspection. .

H1317_CH26. 3. Correct = C Consumer’s risk = Cr Manufacturer’s risk = Mr Repeatability = R Bias = B 1.” GG + BB Total Inspected 15 + 14 C= = 0.81 36 C= Inspector MC Date 11/2/12 Sample Trial 1 Trial 2 Trial 3 Actual 1 + + + + GG = 15 2 + – – – GB = 3 3 – – – – + – – + BG = 4 4 5 – – + – BB = 14 6 + + + + 7 – – – – 8 + + + + 9 – – – – 10 + + + + 11 + – + + 12 + + – – . 5.qxd 10/15/07 4:33 PM Page 275 Measurement Error Assessment 275 There are five values or indices related to the overall effectiveness of the inspector: 1. Correct. 4. C The proportion of the time the inspector “got it right. 2.

we are most interested in the probability of accepting a bad product. Consumer’s Risk.17 GG + GB 15 + 3 . % Manufacturer’s Risk. The manufacturer’s risk (false alarm) is determined by Mr = 3 GB = = 0.qxd 276 10/15/07 4:33 PM Page 276 The Desk Reference of Statistical Quality Methods 2.22 BB + BG 14 + 4 Cr = Inspector MC Date 11/2/12 Sample Trial 1 Trial 2 Trial 3 Actual 1 + + + + GG = 15 2 + – – – GB = 3 3 – – – – + – – + BG = 4 4 5 – – + – BB = 14 6 + + + + 7 – – – – 8 + + + + 9 – – – – 10 + + + + 11 + – + + 12 + + – – 3. Cr is the probability of accepting a part we believe to be good that in reality is bad. Mr The last thing a manufacturer wants to do is to call a part good when in reality it is bad. Cr As consumers.H1317_CH26. 4 BG = = 0.

there are three opportunities for agreement.H1317_CH26. For each sample. and the number of agreements is 26.72 36 . there are three opportunities per sample to have agreement. Each sample has been inspected three times. R Repeatability is a measure of how well the participant and the inspector get the same results when inspecting the same part. The total opportunities are 36. These opportunities are agreement between trial #1 and #2. and there are 12 samples. trial #2 and #3. It is a measure of consistency. Repeatability. We are not concerned with getting the correct answer. and trial #1 and #3. R= 26 = 0. only how constant we are in our “within sample” inspection.qxd 10/15/07 4:33 PM Page 277 Measurement Error Assessment Inspector MC 277 Date 11/2/12 Sample Trial 1 Trial 2 Trial 3 Actual 1 + + + + GG = 15 2 + – – – GB = 3 3 – – – – + – – + BG = 4 4 5 – – + – BB = 14 6 + + + + 7 – – – – 8 + + + + 9 – – – – 10 + + + + 11 + – + + 12 + + – – 4. therefore.

Mr = 0. Correct. B Value 0. 2. 5. Cr Manufacturer’s risk.2541 = = 0.2966 Summary of indices: Index 1. 4.81 0. R Bias. Mr Repeatability. The Mr and Cr must be converted using a bias factor (BF). B Bias is the ratio of the Mr and Cr.qxd 10/15/07 278 4:33 PM Page 278 The Desk Reference of Statistical Quality Methods Inspector MC Date 11/2/12 Sample Trial 1 Trial 2 Trial 3 Actual 1 + + + + GG = 15 2 + – – – GB = 3 3 – – – – + – – + BG = 4 4 5 – – + – BB = 14 6 + + + + 7 – – – – 8 + + + + 9 – – – – 10 + + + + 11 + – + + 12 + + – – 5.72 0.17 0.22 BfCr = 0. C Consumer’s risk.17 Bf Mr = 0. The bias factors are listed in Table A.86 BfCr 0. 3.2966 B= Bf Mr 0.9 in the appendix. Bias.2541 Cr = 0.H1317_CH26.86 .22 0.

Criteria for acceptance:

Index                       Acceptable      Marginal                        Unacceptable
Correct, C                  ≥0.90           0.80 to 0.90                    <0.80
Consumer's risk, Cr         <0.02           0.02 to 0.05                    >0.05
Manufacturer's risk, Mr     <0.05           0.05 to 0.10                    >0.10
Repeatability, R            ≥0.90           0.80 to 0.90                    <0.80
Bias, B                     0.80 to 1.20    0.50 to 0.80 or 1.20 to 1.50    <0.50 or >1.50

Interpretation of the results must be done with caution. The scores depend significantly on the mix used in the test samples. If all the test samples were extremely obvious, then all the inspectors would have achieved 100 percent, 0 percent, 0 percent, and 100 percent on the four scores. What we are looking for are performances that differ significantly from the group. If we have seven or more inspectors, a normal probability plot (NOPP) would be beneficial in the detection of outliers. It would also be prudent to repeat the test over time, along with having the committee take the test, to see if the performance is changing.

Bibliography
Automotive Industry Action Group (AIAG). Measurement Systems Analysis. 3rd edition. Southfield, MI: AIAG, 2002.
Duncan, A. J. Quality Control and Industrial Statistics. 4th edition. Homewood, IL: Richard D. Irwin, 1974.
Juran, J. M. "Inspector's Errors in Quality Control." Mechanical Engineering 59, no. 10 (October 1935): 643–644.
Montgomery, D. C. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons, 1996.
Wheeler, D. J., and R. W. Lyday. Evaluating the Measurement Process. Knoxville, TN: SPC Press, 1989.


Multivariate Control Chart

Often processes will have variables that are related to each other, such as the percentage of calcium carbonate in limestone and the pH of a 25 percent slurry of calcium carbonate. Rather than maintain a control chart for both of these characteristics, a single control chart can respond to a process change in the concentration of the slurry and/or the pH of the slurry. A control chart that allows monitoring of two or more related variables is referred to as a multivariate control chart. This chart can detect a change in the relationship of two or more variables.

Two methods of constructing multivariate control charts are presented:

1. Hotelling's T² statistic (T² chart)
2. Standardized Euclidean distance (DE chart)

T² Chart
The following example illustrates the T² control chart. Two process characteristics are monitored in the production of breaded shrimp:

X1 = weight of the batter pickup in grams
X2 = volume of the shrimp in cm³

These two characteristics are correlated, and both affect the ratio of shrimp meat to the total breaded weight. The weight of the breading is determined by the difference between the before-washing weight and the after-washing weight for five randomly selected shrimp. The volume of the shrimp is determined by water displacement. Samples are taken from the breading line every hour, and for each sample the average, standard deviation S, and range are determined. Data for the first 14 hours of production are taken.

[Data table: samples #1 through #12, five paired (X1, X2) observations each, with the average, S, and range computed for each sample. The subgroup statistics are summarized in Table 1.]

[Data table: samples #13 and #14, five paired (X1, X2) observations each, with the average, S, and range for each sample.]

Each of the process variables could be monitored via an average/range control chart that would, in effect, provide two areas of joint process control. The following square represents the control region using an X̄ chart for each parameter. Any point outside the square indicates that the process has changed, such as a subgroup outside the limits for either one or both of the variables.

[Figure: rectangular joint control region bounded by LCL(1) and UCL(1) on one axis and LCL(2) and UCL(2) on the other, centered at Average(1) and Average(2).]

However, if we consider the correlation, we can identify a better region of the square. Given that the process is under control, a unit produced is unlikely to yield a low value for X1 and a high value for X2, due to the positive correlation. If a particular unit has a low value for X1, it is likely not to yield a high X2. This means that points are unlikely to occur in the upper-left or lower-right corner of the square. A better control region would be one that excludes the two unlikely corners and enlarges the remaining corners where the likelihood of occurrence increases. This is accomplished using the T² chart.


Individual charts cannot address the correlation between parameters. However, this
does not discount the value in maintaining individual control charts for specific characteristics. If the characteristics are not correlated, then the maintenance of separate control
charts is required.
The data used in this example are highly correlated with r = 0.9770. The joint control
region using the T 2 chart is shown in the following figure:

[Figure: joint control region using the T² chart, an ellipse oriented along the positive correlation, inside the box formed by LCL(1)/UCL(1) and LCL(2)/UCL(2).]

The T² Equation
T² Control Chart for Two Parameters
The value of T² for each subgroup is calculated as

  Ti² = [n / (S1²S2² − S(12)²)] [S2²(X̄1i − X̿1)² + S1²(X̄2i − X̿2)² − 2S(12)(X̄1i − X̿1)(X̄2i − X̿2)],

where: X̄1i and X̄2i are subgroup averages
S1² and S2² are the overall averages of the within-subgroup variances
X̿1 and X̿2 are overall averages for all subgroups
S(12) is the overall average of the covariance of X1 and X2
n is the subgroup sample size.

Calculation of Individual and Average Covariance
S(12) is the overall average of the covariance of X1 and X2. Each subgroup will yield an independent covariance, and the average of the covariances will be used in calculating the T² statistic for the control chart. The individual subgroup covariance is calculated by

  S(12)i = [ΣX1iX2i − (ΣX1i)(ΣX2i)/n] / (n − 1),

where the sums are taken over the n observations in subgroup i.


Example:
For subgroup sample #1:

Sample #1    X1       X2       X1X2
             6.08     11.97    72.78
             5.70     11.65    66.41
             5.54     11.58    64.15
             6.53     12.53    81.82
             6.45     12.36    79.72
Average:     6.06     12.02
S:           0.44     0.42
Range:       0.99     0.95
Sum:         30.30    60.09    364.88

  S(12)1 = [ΣX1X2 − (ΣX1)(ΣX2)/n] / (n − 1)
  S(12)1 = [364.88 − (30.30)(60.09)/5] / 4 = 0.1837

The remaining covariances are calculated in a similar manner for subgroups 2–14. A summary of the S(12) values, subgroup averages, and variances is found in Table 1.

Table 1 Statistics for the T² chart.

Subgroup (i)    X̄1i     X̄2i      S²1i      S²2i      S(12)i
1               6.06    12.02    0.1936    0.1764    0.1837
2               5.72    11.62    0.1681    0.1681    0.1666
3               5.86    11.81    0.1444    0.1156    0.1233
4               6.23    12.19    0.0256    0.0289    0.0221
5               5.96    11.91    0.1849    0.2116    0.1982
6               5.99    11.94    0.1225    0.1156    0.1152
7               6.02    12.00    0.0196    0.0400    0.0267
8               6.13    12.07    0.1521    0.1936    0.1708
9               6.13    12.11    0.1681    0.1600    0.1610
10              5.98    11.90    0.0196    0.0289    0.0218
11              5.85    11.75    0.0144    0.0100    0.0128
12              6.23    12.22    0.0576    0.0841    0.0671
13              5.80    11.76    0.2025    0.1444    0.1687
14              6.07    12.02    0.2809    0.2304    0.2557

Averages:  X̿1 = 6.00   X̿2 = 11.95   S1² = 0.1252   S2² = 0.1220   S(12) = 0.1210
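The covariance step can be checked numerically. A minimal sketch in Python, using the sample #1 data from the example above:

```python
def subgroup_covariance(x1, x2):
    """Individual subgroup covariance S(12)i, per the formula above."""
    n = len(x1)
    sum_xy = sum(a * b for a, b in zip(x1, x2))
    return (sum_xy - sum(x1) * sum(x2) / n) / (n - 1)

# Sample #1: batter pickup (g) and shrimp volume (cm^3)
x1 = [6.08, 5.70, 5.54, 6.53, 6.45]
x2 = [11.97, 11.65, 11.58, 12.53, 12.36]

s12 = subgroup_covariance(x1, x2)
print(round(s12, 4))  # -> 0.1833 (the text's 0.1837 carries rounded intermediate sums)
```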


Calculation of Individual T² Values

  Ti² = [n / (S1²S2² − S(12)²)] [S2²(X̄1i − X̿1)² + S1²(X̄2i − X̿2)² − 2S(12)(X̄1i − X̿1)(X̄2i − X̿2)]

With S1² = 0.1252, S2² = 0.1220, S(12) = 0.1210, and n = 5:

  n / (S1²S2² − S(12)²) = 5 / [(0.1252)(0.1220) − (0.1210)²] = 7893.9

The term 7893.9 will be utilized for all T² calculations:

  Ti² = 7893.9 [0.1220(X̄1i − 6.00)² + 0.1252(X̄2i − 11.95)² − (2)(0.121)(X̄1i − 6.00)(X̄2i − 11.95)]

For subgroup 1:

  T1² = 7893.9 [0.1220(6.06 − 6.00)² + 0.1252(12.02 − 11.95)² − (2)(0.121)(6.06 − 6.00)(12.02 − 11.95)]
      = 7893.9 (0.0004392 + 0.0006135 − 0.0010164)
      = 0.2866

For subgroup 2:

  T2² = 7893.9 [0.1220(5.72 − 6.00)² + 0.1252(11.62 − 11.95)² − (2)(0.121)(5.72 − 6.00)(11.62 − 11.95)]
      = 7893.9 (0.0095648 + 0.0136343 − 0.0223608)
      = 6.6167

For subgroup 3:

  T3² = 7893.9 [0.1220(5.86 − 6.00)² + 0.1252(11.81 − 11.95)² − (2)(0.121)(5.86 − 6.00)(11.81 − 11.95)]
      = 7893.9 (0.0023912 + 0.0024539 − 0.0047432)
      = 0.8036
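The individual T² values can be generated directly from the subgroup averages in Table 1. A minimal sketch in Python, with the historical constants as given above:

```python
# Historical constants from Table 1
S1SQ, S2SQ, S12 = 0.1252, 0.1220, 0.1210   # average variances and covariance
X1BAR, X2BAR, N = 6.00, 11.95, 5           # overall averages, subgroup size

K = N / (S1SQ * S2SQ - S12 ** 2)           # the constant 7893.9

def t_squared(x1bar, x2bar):
    """Hotelling T^2 for one subgroup of two correlated variables."""
    d1, d2 = x1bar - X1BAR, x2bar - X2BAR
    return K * (S2SQ * d1**2 + S1SQ * d2**2 - 2 * S12 * d1 * d2)

print(round(K, 1))                       # -> 7893.9
print(round(t_squared(6.06, 12.02), 2))  # subgroup 1 -> 0.29
print(round(t_squared(5.72, 11.62), 2))  # subgroup 2 -> 6.62
```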


A summary of all the individual T² values follows.

Sample, i    T²
1            0.2866
2            6.6167
3            0.8036
4            2.4226
5            0.0657
6            0.0041
7            0.9457
8            0.7063
9            1.8418
10           0.9457
11           3.8192
12           4.3628
13           1.6081
14           0.2011

Calculation of the Control Limits
There is no lower control limit. The upper control limit is calculated using the following relationship:

  UCL = [p(n − 1) / (n − p)] Fα, p, n−p,

where: α = probability of a point above the control limit when there has been no process change
p = number of variables (2 for this example)
n = subgroup sample size (5 for this example)
Fα, p, n−p = value from the F distribution table for level of significance α with p and n − p degrees of freedom.

When looking up the F value, use p degrees of freedom for the numerator and n − p degrees of freedom for the denominator. For this example, F0.05, 2, 3 = 9.55.

The upper control limit is

  UCL = [p(n − 1) / (n − p)] Fα, p, n−p = [2(5 − 1) / 3] (9.55) = 25.47.

The individual T² values are plotted, and the upper control limit of 25.47 is drawn using a broken line.
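The limit calculation is easy to script. A minimal sketch in Python (the F value 9.55 is taken from the text's F table rather than computed):

```python
def t2_upper_control_limit(p, n, f_value):
    """UCL for a T^2 chart: [p(n - 1) / (n - p)] * F(alpha; p, n - p)."""
    return p * (n - 1) / (n - p) * f_value

# p = 2 variables, n = 5 per subgroup, F(0.05; 2, 3) = 9.55
ucl = t2_upper_control_limit(2, 5, 9.55)
print(round(ucl, 2))  # -> 25.47
```

If SciPy is available in your environment (an assumption), the tabled value can be reproduced with `scipy.stats.f.ppf(0.95, p, n - p)`.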


[T² control chart for the 14 historical subgroups: UCL = 25.47 drawn as a broken line; all points fall well below the limit.]

The process appears to be very well behaved, indicating that the correlation of the data
is stable and well defined.
The control chart of averages for each of the two variables, plotted independently,
follows:
[Average control chart, n = 5, variable X1: UCL = 6.46, LCL = 5.54.]

[Average control chart, n = 5, variable X2: UCL = 12.41, LCL = 11.49.]

Both control charts are in a state of good control (statistically stable).
The first 14 points were used to provide a historical period from which the process was
characterized. Calculate the T 2 for the next six data sets, and determine if the process has
changed.

          Sample #15      Sample #16      Sample #17      Sample #18
          X1     X2       X1     X2       X1     X2       X1     X2
          5.60   11.91    6.22   12.17    6.25   11.30    5.94   12.16
          5.91   12.05    6.35   12.05    6.55   12.15    5.53   12.01
          5.44   11.75    5.84   11.73    5.95   12.05    5.61   11.62
          5.80   11.53    6.01   11.55    6.12   12.55    6.05   11.51
          6.20   11.61    6.08   12.10    6.38   12.55    5.52   11.95
Average:  5.79   11.77    6.10   11.92    6.25   12.12    5.73   11.85
S:        0.29   0.21     0.20   0.27     0.23   0.49     0.25   0.27
Range:    0.76   0.52     0.51   0.62     0.60   1.25     0.53   0.65

          Sample #19      Sample #20
          X1     X2       X1     X2
          6.44   12.21    6.22   11.75
          6.35   12.46    5.87   12.10
          6.18   12.10    6.05   11.88
          6.17   12.27    6.28   12.00
          5.91   12.36    6.08   11.92
Average:  6.21   12.28    6.10   11.93
S:        0.20   0.14     0.16   0.13
Range:    0.53   0.36     0.41   0.35
The new future data values are used to calculate the T² points, using the process averages, variances, and covariance from the historical data. That is,

  Ti² = 7893.9 [0.1220(X̄1i − 6.00)² + 0.1252(X̄2i − 11.95)² − (2)(0.121)(X̄1i − 6.00)(X̄2i − 11.95)].
A summary of the T² for subgroups 15–20 follows:

Sample, i    T²
15           2.282
16           16.250
17           10.790
18           28.511
19           17.713
20           13.847


The completed control chart with both the historical data and the new future data follows.

[T² control chart, subgroups 1 to 20: UCL = 25.47, with the History period (subgroups 1 to 14) marked; the future points signal a process change, with subgroup 18 above the UCL.]

[Average control chart, n = 5, variable X1, subgroups 1 to 20: UCL = 6.46, LCL = 5.54, History period marked.]

[Average control chart, n = 5, variable X2, subgroups 1 to 20: UCL = 12.41, LCL = 11.49, History period marked.]

Note that the average control charts for both variables X1 and X2 remain in control, failing to detect the change in process correlation.


[Correlation plot of the subgroup averages, X̄1 (5.00 to 6.50) versus X̄2 (11.00 to 13.00), with each point labeled by its subgroup number; subgroup #18 is marked as the out-of-control point.]

In the preceding plot, the numbered points represent the subgroup sample averages. The bold values show the future data, and the normal text represents the historical data from which the T² chart parameters were derived. Notice that subgroup sample #18 falls significantly off the best-fit line, indicating a departure from the expected value per the historical data. This departure from the expected correlation is detected using the T² chart but not using the traditional X̄/R chart.

Standardized Euclidean Distance, DE
An alternative to the more complex T² is the DE control chart. This chart is based on calculating a standardized value for each of the monitored characteristics, combining them vectorially in the same manner that one would combine variances, and reporting a single overall value to plot.

Plotting Statistic
The characteristic plotted is a combined standardized Z-score from two or more characteristics:

  Zi = (Xi − Ti) / Sigma(Ti),


where: Xi = the location statistic for the characteristic, such as an individual observation or the average of a subgroup, as appropriate
Ti = the target value of the location statistic; may be the average of the individuals or the average of averages for subgroup data (n ≥ 2), as appropriate
Sigma(Ti) = the standard deviation of individuals or the standard deviation of averages, as appropriate.

The DE may be used for individuals or averages.
The Z-score calculation for individuals is based on

  Zi = (Xi − X̄) / S,

where: Xi = individual observation
X̄ = average of the individuals
S = sample standard deviation, used as an estimate of the true standard deviation σ.

The Z-score calculation for averages is based on

  Zi = (X̄i − X̿) / S_X̄,

where: X̄i = individual subgroup average
X̿ = average of subgroup averages
S_X̄ = standard deviation of the averages, which can be estimated from A2R̄/3.

Values for A2 are dependent on the subgroup sample size n.
Several Z-scores are combined to yield a single DE value to plot:

  DE = √[(Z1)² + (Z2)² + . . . + (Zk)²],

where: Z1 = Z-score for the 1st variable
Z2 = Z-score for the 2nd variable, and so on.

Control Limits and Center Line
There is no lower control limit, and the upper control limit for the individual DE values is

  UCL_DE = √[χ²0.995(p)],

where: χ²0.995(p) = the 99.5th percentile of the chi-square distribution with p degrees of freedom (p = number of variables considered).


Note: Some chi-square tables require looking up 1 − C, or in this case α = 0.005.
The center line CL for the DE chart is determined by

  CL = √p.

The following table gives the center line values CL and upper control limits UCL_DE = √[χ²0.995(p)] as a function of the number of variables p:

p     CL      UCL
2     1.41    3.26
3     1.73    3.58
4     2.00    3.85
5     2.24    4.09
6     2.45    4.31
7     2.65    4.50
8     2.83    4.69
9     3.00    4.86
10    3.16    5.02
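The CL and UCL columns can be regenerated from the chi-square distribution. A minimal sketch in Python (the 99.5th-percentile chi-square values are taken from a standard chi-square table rather than computed, to keep the example dependency-free):

```python
from math import sqrt

# chi-square 99.5th percentiles by degrees of freedom (standard table values)
CHI2_995 = {2: 10.597, 3: 12.838, 4: 14.860, 5: 16.750}

for p, chi2 in CHI2_995.items():
    cl = sqrt(p)        # center line, CL = sqrt(p)
    ucl = sqrt(chi2)    # UCL = sqrt(chi-square_0.995(p))
    print(p, round(cl, 2), round(ucl, 2))
# -> 2 1.41 3.26
#    3 1.73 3.58
#    4 2.0 3.85
#    5 2.24 4.09
```

With SciPy available, `scipy.stats.chi2.ppf(0.995, p)` yields the same percentiles.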

The following example demonstrates the construction and use of the DE chart. Two characteristics that are monitored in the production of a water-based nondrying cement are the batch viscosity and the peel strength of a test strip contact bonded to a substrate. Twenty batches have been tested for these two parameters. A traditional individual/moving range control chart will be developed for each of the independent parameters, along with a DE chart combining the two parameters.
Data:

Batch    X1 Peel, lb/in    X2 Viscosity, cps        Batch    X1 Peel, lb/in    X2 Viscosity, cps
1        17.5              1200                     13       15.5              1550
2        18.3              1700                     14       17.8              1500
3        14.2              825                      15       15.2              950
4        16.4              1025                     16       17.5              1300
5        19.0              1600                     17       21.0              1850
6        16.3              1320                     18       17.3              1375
7        19.8              1725                     19       18.2              1400
8        19.8              1600                     20       20.0              1925
9        16.1              1150
10       17.5              1300
11       19.4              1825
12       18.3              1550

X̄1 = 17.76    X̄2 = 1433.5
S1 = 1.78     S2 = 305.2


For each pair of data points (X1 and X2), a standard Z-score and a combined Z-score are calculated.

Batch 1:
  Standard peel strength, Z1,X1 = (X11 − X̄1)/S1 = (17.5 − 17.76)/1.78 = −0.15
  Standard viscosity, Z1,X2 = (X12 − X̄2)/S2 = (1200 − 1433.5)/305.2 = −0.77
  Combined Z-score, D1E = √[(−0.15)² + (−0.77)²] = 0.78

Batch 2:
  Standard peel strength, Z2,X1 = (X21 − X̄1)/S1 = (18.3 − 17.76)/1.78 = 0.30
  Standard viscosity, Z2,X2 = (X22 − X̄2)/S2 = (1700 − 1433.5)/305.2 = 0.87
  Combined Z-score, D2E = √[(0.30)² + (0.87)²] = 0.92

Batch 3:
  Standard peel strength, Z3,X1 = (X31 − X̄1)/S1 = (14.2 − 17.76)/1.78 = −2.00
  Standard viscosity, Z3,X2 = (X32 − X̄2)/S2 = (825 − 1433.5)/305.2 = −1.99
  Combined Z-score, D3E = √[(−2.00)² + (−1.99)²] = 2.82
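The per-batch standardization lends itself to a short script. A minimal sketch in Python, using the historical mean and standard deviation of each characteristic:

```python
from math import hypot

X1BAR, S1 = 17.76, 1.78      # peel strength, lb/in
X2BAR, S2 = 1433.5, 305.2    # viscosity, cps

def de_value(peel, viscosity):
    """Combined standardized Euclidean distance for one batch."""
    z1 = (peel - X1BAR) / S1
    z2 = (viscosity - X2BAR) / S2
    return hypot(z1, z2)     # sqrt(z1^2 + z2^2)

print(round(de_value(14.2, 825), 2))   # batch 3 -> 2.82
print(round(de_value(18.3, 1700), 2))  # batch 2 -> 0.92
```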
The remaining batches are calculated, and a summary of the results follows.

A summary of the results for the 20 historical batches follows, listing each batch's X1 (peel, lb/in), X2 (viscosity, cps), standardized peel Z1, standardized viscosity Z2, and combined DE, computed as shown above.

Using an upper control limit of 3.26 and a center line of 1.41, the DE control chart for the historical data follows:

[DE control chart, batches 1 to 20 (History): UCL = 3.26, CL = 1.41, batch number on the horizontal axis.]

Continue the DE chart using the following information for batches 21–25 (future data): the five new batches have viscosity readings of 1250, 1450, 1975, 2235, and 1000 cps, with their corresponding peel readings; each pair is standardized and combined into a DE value exactly as for the historical batches.

[DE control chart, batches 1 to 25, with the History (1 to 20) and Future (21 to 25) periods marked: UCL = 3.26, CL = 1.41.]

Control charts for the individual characteristics are presented in the following table. The control limits are based on X̄ ± 3S, where S is the sample standard deviation using samples 1–20.

Characteristic    Average    LCL      UCL
Peel, X1          17.76      12.42    23.10
Viscosity, X2     1433.5     517.9    2349.1

[Individuals control chart for Peel, X1, batches 1 to 25: UCL = 23.10, CL = 17.76, LCL = 12.42, with History and Future periods marked.]

[Individuals control chart for Viscosity, X2, batches 1 to 25: UCL = 2349.1, CL = 1433.5, LCL = 517.9, with History and Future periods marked.]

Bibliography
Montgomery, D. C. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons, 1996.
Wheeler, D. J. Advanced Topics in Statistical Process Control. Knoxville, TN: SPC Press, 1995.

H1317_CH27.qxd 10/15/07 5:50 PM Page 298 .

Nonnormal Distribution Cpk

The traditional calculation of Cpk (and other process capability indices) is based on the assumption of a normal distribution. Some of the distinguishing characteristics of a normal distribution are:

1. The distribution is symmetrical about the average
2. The mean, median, and mode are equal
3. The kurtosis Ku = 0 and the skewness Sk = 0

The calculation of Cpk and Cr can be adjusted to compensate for nonnormalcy by using the median rather than the average and applying corrections based on Pearson frequency curves. Knowing the skewness and kurtosis for a given distribution of data allows the determination of the specific values that will result in a probability of 0.00135 and a probability of 0.99865. These two probability values represent the probability of getting a value less than the average minus three standard deviations and the probability of getting a value less than the average plus three standard deviations. These specific values are used for the calculation of Cpk for all distributions, including the normal distribution.

[Figure: four distribution shapes illustrating skewness and kurtosis: Sk = –2, Ku = 0; normal, Sk = 0, Ku = 0; Sk = 0, Ku = –1; Sk = +2, Ku = +1.]

The Pearson distributions can be used to model many distributions, including the normal. As skewness and kurtosis change, so does the shape of the distribution. Skewness measures how unsymmetrical the distribution is with respect to the average. A negative skewness means that more data will be less than the average; the greater the positive skewness, the more the predominant amount of data will be greater than the average. Kurtosis is a measure of the peakedness of the distribution. A negative kurtosis is indicative of a relative flatness compared to a normal distribution, and a positive kurtosis reflects a distribution with more peak than a normal distribution.

There are several statistics used to measure skewness and kurtosis. The calculation of these statistics as applied in this discussion will be defined in terms of various moments about the arithmetic mean, defined by the formula

  μk = Σ(Xi − X̄)^k / n,

where: X̄ = average
Xi = an individual observation.

The first moment about the mean is always zero, μ1 = 0. The second moment about the mean, adjusted for bias, is the variance:

  s² = μ2 [n / (n − 1)].

A measure of skewness often used is

  Sk = μ3 / (μ2)^(3/2),

and a measure of kurtosis is

  Ku = μ4 / μ2² − 3.

Note: Occasionally Ku is defined as Ku = μ4 / μ2², in which case a normal distribution will have a kurtosis of Ku = 3.

The following example illustrates a technique used to calculate a Cpk compensating for skewness and kurtosis.

Step 1. Collect data and determine the Sk and Ku values.

Xi    Frequency f    (Xi − X̄)    (Xi − X̄)²    (Xi − X̄)³    (Xi − X̄)⁴
4     1              –2.046      4.186        –8.565       17.524
5     8              –1.046      1.094        –1.144       1.197
6     6              –0.046      0.002        0.000        0.000
7     4              0.954       0.910        0.868        0.828
8     2              1.954       3.818        7.461        14.578
9     1              2.954       8.726        25.777       76.145

n = 22    μ2 = 1.498    μ3 = 1.206    μ4 = 6.169

  Sk = μ3 / (μ2)^(3/2) = 1.206 / (1.498)^(3/2) = 0.66

  Ku = μ4 / μ2² − 3 = 6.169 / (1.498)² − 3 = −0.25

Step 2. Summarize the process characterization statistics.

Average, X̄ = 6.046
Standard deviation, S = 1.253
Sk = 0.66
Ku = −0.25
Upper specification, Uspec = 14.0
Lower specification, Lspec = 2.0

Step 3. Determine the standardized value for getting a probability of less than 0.00135 with the specified Ku and Sk. For +Sk, use Table 1; for −Sk, use Table 2. More exact values are obtained by linear interpolation between the tabled entries at Sk = 0.60 and 0.70 and at Ku = −0.40 and −0.20.
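The moment calculations in step 1 can be reproduced directly. A minimal sketch in Python using the frequency data above:

```python
values = [4, 5, 6, 7, 8, 9]
freqs  = [1, 8, 6, 4, 2, 1]
data = [x for x, f in zip(values, freqs) for _ in range(f)]
n = len(data)                      # 22
mean = sum(data) / n               # 6.046

def moment(k):
    """k-th moment about the mean, mu_k = sum((x - mean)^k) / n."""
    return sum((x - mean) ** k for x in data) / n

mu2, mu3, mu4 = moment(2), moment(3), moment(4)
Sk = mu3 / mu2 ** 1.5              # skewness
Ku = mu4 / mu2 ** 2 - 3            # kurtosis
print(round(mean, 3), round(Sk, 2), round(Ku, 2))
# -> 6.045 0.66 -0.25
```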

264 3.299 3.668 2.861 0.446 1.430 3.000 2.510 1.189 4.259 3.235 1.986 3.946 2.971 1.8 5.181 1.335 2.976 2.442 2.183 2.045 1.905 0.338 1.533 0.135 1.483 1.317 1.346 3.489 0.356 1.461 0.583 0.484 0.164 3.093 3.300 3.069 3.600 2.924 2.659 0.348 1.385 2.923 2.932 0.444 0.3 1.817 1.088 1.342 2.115 3.664 2.510 1.216 3.008 1.387 3.218 4.127 2.787 0.501 1.440 2.863 2.870 3.7 0.367 3.653 2.371 0.586 2.895 2.015 3.676 0.975 1.271 3.184 3.512 2.450 3.316 1.384 3.126 3.6 0.087 1.225 1.369 2.336 1.0 5.735 2.410 1.6 0.175 1.943 3.261 1.996 1.638 0.787 2.911 2.580 0.362 3.0 4.755 1.674 3.282 3.277 3.855 2.161 3.813 1.063 1.259 3.350 3.714 0.311 3.464 3.821 2.762 0.423 3.399 3.031 0.306 3.132 2.072 2.092 3.160 3.673 1.023 1.174 4.656 0.137 1.225 3.231 Ku –1.6 4.651 0.229 3.265 2.697 1.378 3.562 2.473 0.041 1.975 1.949 2.873 2.513 2.8 2.920 1.274 3.986 3.178 1.311 3.856 2.543 2.394 3.023 2.005 2.052 1.410 0.163 1.111 3.721 0.925 2.556 1.428 3.419 3.011 3.766 0.699 2.747 2.488 2.736 2.677 0.246 2.435 3.801 0.298 2.619 1.703 2.125 1.220 3.549 2.100 3.269 1.224 1.651 2.6 0.043 3.058 3.403 3.006 3.685 0.448 2.320 3.834 2.617 0.160 1.4 3.329 2.141 2.321 3.933 1.531 0.138 2.089 1.981 2.068 1.458 3.194 2.8 4.957 0.327 3.357 1.131 1.047 1.994 3.306 3.267 1.206 1.975 4.863 0.756 1.502 2.010 1.829 2.5 0.4 0.791 0.742 0.622 2.836 0.840 0.337 3.692 0.0 1.386 1.387 1.149 1.084 3.116 2.349 3.858 0.253 1.4 4.629 2.2 0.867 1.589 2.736 0.246 3.2 2.396 1.5551.094 1.295 3.441 3.0 0.600 0.277 1.692 2.045 3.979 1.065 1.013 2.589 1.918 0.241 3.804 1.615 0.807 0.470 2.602 1.233 3.2 5.289 2.524 2.129 3.649 0.2 3.839 0.695 1.230 1.675 1.254 2.972 2.683 0.966 2.771 2.381 1.872 0.072 1.185 1.402 3.740 1.953 0.730 2.434 1.101 4.431 3.957 3.815 0.129 2.381 1.395 3.578 1.338 1.471 3.9 0.502 1.000 1.887 2.296 2.458 3.0 0.453 1.840 2.247 3.255 3.317 1.5 0.870 0.911 1.0 3.0.115 1.636 1.775 0.191 1.510 0.841 0.059 2.421 1.116 2.018 1.113 1.695 2.2 0.0 –0.059 2.765 1.602 2.174 3.356 3.575 0.475 0.321 2.1 0.868 0.738 
1.541 1.664 1.013 1.2 0.727 2.056 4.931 1.625 1.447 2. 1.910 2.186 3.384 1.300 3.828 3.952 3.512 1.669 2.726 1.8 3.358 2.500 2.799 0.438 1.948 2.389 3.281 1.031 4.684 1.qxd 10/18/07 12:01 PM Page 302 302 .629 0.805 2.539 1.566 1.382 1.055 3.865 0.626 1.254 3.198 3.752 0.689 2.290 3.485 1.438 3.153 3.232 1.466 3.4 –0.073 1.524 0.536 0.011 1.494 1.066 3.427 2.6 3.8 0.574 1.6 2.790 2.408 3.132 1.008 1.285 1.383 2.396 2.964 1.004 4.043 3.202 2.405 3.374 2.574 0.979 1.043 3.581 2.619 1.830 0.930 2.711 1.148 2.356 3.747 0.0 1.091 3.744 0.927 1.496 1.140 2.839 3.272 0.661 1.894 0.914 2.047 3.461 3.200 3.267 2.589 0.025 3.974 3.325 3.800 2.555 1.835 2.708 2.184 1.378 3.8 –0.895 0.199 3.855 2.261 3.681 0.213 3.785 2.207 0.172 1.062 1.234 1.831 2.000 3.4 –1.912 2.395 3.521 2.522 2.731 3.675 0.404 1.634 0.616 2.6 1.438 2.423 3.858 0.686 0.075 3.086 1.904 2.121 4.198 1.8 0.205 2.787 0.222 3.701 0.791 0.817 2.367 3.080 2.506 2.9 H1317_CH28.887 0.364 1.145 3.456 1.827 1.752 0.313 3.785 0.727 1.584 1.212 1.496 1.562 0.893 2.821 1.873 0.344 3.983 3.653 2.739 0.736 1.015 2.850 1.8 1.828 0.621 1.204 4.149 3.127 3.107 2.611 3.436 3.4 1.218 3.4 0.088 3.940 2.494 2.4 0.696 1.864 2.167 3.326 3.754 1.557 0.424 2.857 0.526 0.140 4.140 3.964 3.283 2.128 0.646 2.620 0.616 0.364 3.288 3.2 4.714 2.420 1.889 1.782 3.469 3.965 1.813 1.000 1.974 1.281 3.324 3.415 3.326 0.546 0.717 0.743 0.6 –0.133 3.941 2.092 1.494 0.4 2.762 1.406 3.271 2.168 3.2 1.210 2.072 2.939 0.382 3.917 0.256 3.118 3.748 1.760 1.454 3.888 2.950 1.169 1.342 3.114 3.243 2.441 3.177 3.367 3.887 2.236 3.018 2.735 2.655 1.321 2.953 2.173 3.4 Table 1 1.491 1.262 1.079 4.784 2.983 3.822 0.2 1.7 0.197 3.189 2.013 3.908 3.913 0.0 2.283 2.712 0.896 2.844 1.266 3.446 3.583 0.539 3.723 0.299 1.874 2.220 2.768 0.945 2.100 1.581 1.366 3.196 3.089 2.339 1.001 3.588 2.026 2.371 3.630 0.414 3.3 0.2 –1.230 1.111 2.911 0.491 2.876 1.1 1.368 2.919 1.713 1.103 3.329 3.345 2.157 4.356 3.

202 1.955 2.818 2.046 3.4 3.936 2.014 2.0 12.972 2.6 3.496 2.331 3.470 3.453 3.270 3.003 3.280 3.369 2.300 2.389 3.2 3.393 3.489 1.4 11.155 2.483 3.358 4.218 3.625 1.333 2.070 2.140 3.225 3.874 1.191 3.501 2.432 3.4 7.818 2.123 2.685 2.499 2.266 4.496 3.810 2.320 3.438 2.240 3.2 7.495 3.205 3.533 2.132 3.283 3.8 6.195 1.4 9.522 2.544 2.806 2.492 3.394 2.083 2.978 2.700 2.309 3.369 3.3 2.195 3.351 4.097 2.122 3.2 8.485 3.829 2.431 3.762 2.1 2.655 2.984 2.360 3.357 3.8 8.184 3.371 4.070 2.065 3.171 3.736 1.853 2.250 2.736 2.405 2.766 2.004 2.972 1.189 3.515 2.162 1.348 3.8 11.403 3.289 3.059 3.866 2.271 2.970 2.597 2.596 1.825 2.261 3.6 9.353 3.919 1.118 3.703 2.120 2.964 2.727 1.292 3.421 3.993 1.0 9.388 3.331 3.192 2.895 1.385 3.235 3.329 0.337 2.164 2.230 3.386 3.363 3.567 2.480 3.424 3.085 3.162 3.567 2.100 3.318 2.624 2.005 2.4 1.275 3.565 2.1 Table 1 Continued.446 3.2 6.411 2.987 3.2 11.245 2.374 3.237 3.627 2.492 3.123 3.505 1.2 10.418 3.157 1.357 3.771 1.144 3.152 3.325 3.477 3.310 2.535 2.540 1.416 3.075 1.312 3.714 2.332 2.671 2.0 4.181 3.103 3.279 3.879 2.521 1.490 3.825 2.782 1.9 H1317_CH28.337 4.843 2.349 3.303 3.465 3.191 2.338 3.043 3.666 1.171 3.777 2.365 4.424 3.161 3.111 3.112 3.336 3.416 3.9 2.229 3.372 3.252 2.878 2.851 2.313 4.945 1.840 2.292 3.906 2.265 3.204 3.471 3.675 2.300 3.744 1.019 2.5 1.491 3.128 3.101 3.414 3.6 8.040 2.172 3.752 2.8 3.417 3.919 2.450 3.221 2.233 3.462 3.393 4.177 3.398 3.255 4.551 2.132 3.383 3.911 2.136 2.042 3.448 3.979 2.861 1.033 3.0 2.272 3.863 2.405 3.164 3.422 3.918 1.107 3.075 3.002 3.481 3.472 3.5 3.7 1.458 3.322 4.902 2.468 3.952 2.559 1.243 0.8 12.082 3.353 3.244 3.499 2.054 3.351 3.013 3.739 2.017 3.940 2.818 1.839 2.135 2.686 2.295 2.456 2.6 11.123 1.240 3.260 3.219 2.0 6.891 2.622 2.751 2.382 2.158 3.0 11.296 3.390 3.876 2.088 3.896 1.075 3.713 2.985 2.051 3.011 3.474 3.495 3.392 3.585 2.727 1.268 3.449 2.033 3.970 2.281 2.4 10.089 3.065 2.793 2.437 3.784 1.641 2.778 1.598 2.166 3.496 1.141 1.921 1.275 
3.085 1.624 2.588 1.3 3.459 3.340 3.067 3.257 3.448 3.422 3.297 3.150 3.469 3.427 2.040 3.932 2.6 6.148 2.780 2.305 4. 3.489 2.449 3.6 1.0.408 Ku 5.281 3.793 2.338 1.478 3.482 1.6 7.292 2.301 2.240 3.2 2.888 1.451 3.861 2.604 2.473 3.436 3.346 3.720 2.860 2.445 3.841 2.388 4.196 2.888 2.4 8.243 4.095 3.625 2.308 3.003 3.794 2.253 3.218 2.395 3.476 2.206 2.740 2.284 3.493 3.240 2.286 3.382 4.855 1.247 3.409 3.447 3.111 3.335 3.265 3.635 1.265 2.282 1.434 3.469 2.321 3.278 3.8 10.656 2.420 3.188 3.956 2.768 1.489 3.327 3.023 2.328 3.921 2.411 3.806 2.148 3.312 3.276 4.527 2.727 2.434 2.653 1.151 2.359 3.456 2.7 3.047 2.994 3.361 1.137 3.6 5.303 3.539 1.943 2.118 3.175 3.197 3.224 3.2 9.315 3.361 2.639 2.233 3.975 2.412 3.377 4.183 3.200 3.138 3.883 2.377 3.306 3.131 3.189 2.8 7.340 2.341 3.759 2.403 1.254 3.890 2.296 4.785 2.289 0.554 2.486 3.042 2.452 1.444 3.388 2.323 1.317 3.4 6.398 4.890 1.316 1.396 0.965 2.179 3.286 3.400 3.375 3.363 1.720 2.0 8.334 4.702 1.497 0.685 2.583 2.464 3.413 3.763 2.854 1.773 2.951 2.109 2.473 2.062 3.322 3.026 3.583 1.019 3.324 3.632 1.998 2.156 3.426 3.419 3.076 3.456 3.125 3.053 3.933 2.682 1.073 2.814 1.494 3.058 3.666 2.706 1.343 3.094 3.218 3.990 3.0 10.403 4.439 3.209 3.395 3.401 2.363 2.475 3.422 2.172 2.047 3.0 7.454 3.603 2.070 3.827 1.440 3.420 2.383 3.6 10.364 0.330 4.282 2.387 3.355 3.392 3.613 2.488 3.044 2.451 0.930 2.379 3.2 3.381 3.286 4.467 3.214 3.361 3.270 1.800 1.364 2.953 1.316 3.115 2.646 2.438 2.831 1.475 2.030 3.473 3.229 2.023 3.407 1.487 3.345 3.093 2.429 3.484 3.8 1.443 3.104 2.965 2.407 3.226 3.366 3.870 2.698 2.739 2.216 3.442 3.496 3.466 3.461 3.425 0.222 3.948 2.140 3.901 2.579 2.363 3.242 1.962 2.443 1.320 2.8 9.qxd 10/18/07 12:01 PM Page 303 303 .465 2.667 1.261 2.919 2.249 3.933 1.189 3.970 2.986 2.211 3.651 2.475 0.064 3.379 3.

Table 2 Standardized values indexed by skewness (Sk) and kurtosis (Ku). (The tabled values are not reproduced here.)

Table 2 Continued. (The tabled values are not reproduced here.)

Step 4. Determine the standardized value that will represent a probability of 0.00135. If Sk is positive (as in this case), use Table 1; if Sk is negative, use Table 2. Interpolation between the tabled Sk and Ku entries will yield a more precise result.

P.00135 = 1.486

Step 5. From the P.00135 value, calculate the percentile point in the units in which the process is being measured. This is the Lp value.

Lp = X̄ − SP.00135
Lp = 6.046 − 1.253(1.486)
Lp = 4.18

Step 6. Determine the standardized value that will represent a probability of 0.99865. If Sk is positive (as in this case), use Table 2; if Sk is negative, use Table 1.

P.99865 = 3.069

Step 7. From the P.99865 value, calculate the percentile point in the units in which the process is being measured. This is the Up value.

Up = X̄ + SP.99865
Up = 6.046 + 1.253(3.069)
Up = 9.891

Step 8. Calculate the estimated median M̂. Look up the standardized median M′ using Table 3.

Standardized median M′ = 0.175

If Sk is positive (as in this case), subtract SM′ from X̄; if Sk is negative, add SM′ to X̄.

M̂ = X̄ − SM′
M̂ = 6.046 − 1.253(0.175)
M̂ = 5.82
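The three percentile points are straight arithmetic once the standardized values have been read from the tables. A minimal sketch (the function name is illustrative, and the standardized values are assumed to have been looked up for Sk = 0.66 and Ku = −0.25):

```python
# Percentile points for the nonnormal Cpk calculation.
# p00135 = 1.486, p99865 = 3.069, and m_std = 0.175 are the values
# read from Tables 1-3 in the example.

def percentile_points(mean, s, p00135, p99865, m_std, sk_positive=True):
    lp = mean - s * p00135                 # lower 0.00135 percentile point, Lp
    up = mean + s * p99865                 # upper 0.99865 percentile point, Up
    # For positive skewness the median lies below the average.
    med = mean - s * m_std if sk_positive else mean + s * m_std
    return lp, up, med

lp, up, med = percentile_points(6.046, 1.253, 1.486, 3.069, 0.175)
print(round(lp, 2), round(up, 3), round(med, 2))   # → 4.18 9.891 5.83
```

The text carries the median as 5.82; the small difference is only rounding.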

Table 3 Standardized median values (M′) indexed by skewness (Sk) and kurtosis (Ku). (The tabled values are not reproduced here.)

Table 3 Continued. (The tabled values are not reproduced here.)

Step 9. Calculate the process capability indices based on a normal distribution compensated for skewness and kurtosis.

A. Process capability index, Cpk

As with the traditional Cpk, Cpk = minimum of Cpl and Cpu.

Cpl = (M̂ − LSpec)/(M̂ − Lp) = (5.82 − 2.0)/(5.82 − 4.18) = 2.33

Cpu = (USpec − M̂)/(Up − M̂) = (14.0 − 5.82)/(9.891 − 5.82) = 2.01

Cpk = 2.01

B. Capability ratio, Cr and Cp

Cp = (USpec − LSpec)/(Up − Lp) = (14.0 − 2.0)/(9.891 − 4.18) = 2.1

Cr = 1/Cp = 0.48

[Figure: Process capability where Sk = 0.66 and Ku = −0.25. Samples: 22; Mean: 6.04546; Std Dev: 1.2527; Spec Lim: (2, 14).]
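Step 9 is simple arithmetic on the percentile points Lp = 4.18, Up = 9.891, and M̂ = 5.82. A sketch (names are illustrative):

```python
# Nonnormal capability indices from the percentile points and the
# specification limits (2.0, 14.0) of the example.

def nonnormal_capability(lspec, uspec, lp, up, median):
    cpl = (median - lspec) / (median - lp)   # lower-side capability
    cpu = (uspec - median) / (up - median)   # upper-side capability
    cpk = min(cpl, cpu)
    cp = (uspec - lspec) / (up - lp)         # Cp for the nonnormal spread
    cr = 1.0 / cp                            # Cr is the reciprocal of Cp
    return cpl, cpu, cpk, cp, cr

cpl, cpu, cpk, cp, cr = nonnormal_capability(2.0, 14.0, 4.18, 9.891, 5.82)
print(f"Cpl={cpl:.2f} Cpu={cpu:.2f} Cpk={cpk:.2f} Cp={cp:.1f} Cr={cr:.2f}")
# → Cpl=2.33 Cpu=2.01 Cpk=2.01 Cp=2.1 Cr=0.48
```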

Had the Cpk and Cr been determined without adjusting for skewness and kurtosis, the results would have been:

Cpl = (X̄ − LSpec)/3s = (6.046 − 2.000)/3(1.253) = 1.08

Cpu = (USpec − X̄)/3s = (14.000 − 6.046)/3(1.253) = 2.12

Cpk = 1.08

Cr = 6s/(USpec − LSpec) = 7.518/12.0 = 0.63

[Figure: Process capability assuming normal distribution. Samples: 22; Mean: 6.04546; Std Dev: 1.2527; Spec Lim: (2, 14).]

Cpk Adjusted for Sk and Ku Using Normal Probability Plots

The calculations for Cpk using a normal distribution are based on the fact that the proportion of the data below the average minus three standard deviations is 0.00135 and the proportion of the data below the average plus three standard deviations is 0.99865. The median for the nonnormal distribution can be used in place of the average. The median can be determined by locating the point at which 50 percent of the data are below (or above). The points at which specified proportions of the data are below (the pth percentile) are designated as Xp, where p = the proportion less than the value X. X0.00135 is the data value that 0.135 percent of the data are below, and X0.99865 is the data value for which 99.865 percent of the data are below. The point located at X0.5000 always represents the median and for a normal distribution will also represent the average.

Calculations for the adjusted Cpk will be made using the following substitutions:

1. In the equation for the traditional Cpl = (X̄ − LSpec)/3S:
   For X̄, use X0.5000.
   For 3S, use X0.5000 − X0.00135.

2. In the equation for the traditional Cpu = (USpec − X̄)/3S:
   For X̄, use X0.5000.
   For 3S, use X0.99865 − X0.5000.

Cpk is now determined as the minimum of Cpl and Cpu.

For a more in-depth discussion on normal probability plots (NOPP), see the module Testing for a Normal Distribution.

Bibliography

Bothe, D. R. 1997. Measuring Process Capability. New York: McGraw-Hill.
Clements, J. A. 1989. "Process Capability Calculations for Non-Normal Distributions." ASQC Quality Progress (September): 95–101.
Kotz, S., and N. L. Johnson. 1993. Process Capability Indices. New York: Chapman and Hall.


Nonparametric Statistics

Many of the traditional statistical techniques used in quality are based on specific distributions, such as the normal distribution. Statistical techniques that are not dependent on a particular distribution (distribution free) but rather on a relative ranking are referred to as nonparametric methods. There are several such methods available. The methods discussed here are the Wilcoxon-Mann-Whitney test and the Kruskall-Wallis rank sum test.

Comparing Two Averages: Wilcoxon-Mann-Whitney Test

Does the average of product A exceed that of product B? Stated in the form of a hypothesis test:

Ho: A = B
Ha: A > B

A manufacturer of an exterior protective coating is evaluating two test formulations, A and B. Five test panels made with formula A and eight test panels made with formula B are tested. Each panel is continuously monitored, and the time at which 50 percent of the coating thickness has worn is recorded in days. The objective is to maximize the length of time when 50 percent removal has been reached.

Step 1. Choose α, the significance level, or risk, for the test. Risks are traditionally 0.10, 0.05, and 0.025. For this example, let α = 0.05 for this one-sided test.

Step 2. Collect data. The failure times are ranked from the lowest to the highest with identification of the formula type; that is, combine the two samples and rank in order of increasing response, from 1 (lowest) to 13 (highest). In the case of ties, assign to each the average rank had ties not been observed.

Formula    Time, days    Rank
A          26            1
A          28            2
B          30            3
B          41            4
B          46            5
A          50            6.5
B          50            6.5
A          68            8
B          70            9
A          72            10
B          73            11
B          78            12
B          80            13

Formula A: 26 (1), 28 (2), 50 (6.5), 68 (8), 72 (10)
Formula B: 30 (3), 41 (4), 46 (5), 50 (6.5), 70 (9), 73 (11), 78 (12), 80 (13)

If more than 20 percent of the observations are involved in ties, this procedure should not be used.

Step 3. Let

n1 = the smaller sample = 5 (na)
n2 = the larger sample = 8 (nb)
n = n1 + n2 = 13

Step 4. Look up Rα(n1, n2) in Table 1. This table gives critical values for n1 and n2 to 15. For larger values of n1 and n2, critical values can be estimated by

Rα(n1, n2) ≈ n1(n1 + n2 + 1)/2 − Zα √(n1 n2 (n1 + n2 + 1)/12),

where Z = 1.28 for α = 0.10, Z = 1.65 for α = 0.05, and Z = 1.96 for α = 0.025.

Step 5a. If na is the smaller sample (as in this case), compute RA (the sum of ranks for formula A), and compute R′A = nA(n + 1) − RA. If R′A ≤ Rα(n1, n2), conclude that the average of formula A exceeds that of formula B; otherwise, there is no reason to believe that the average for formula A is greater than the average for formula B.

Step 5b. If the two sample sizes are equal or if nb is smaller, compute RB (the sum of ranks for formula B). If RB ≤ Rα(n1, n2), conclude that formula A exceeds formula B; otherwise, there is no reason to believe that the two formulas are different.

Since the smaller sample is associated with formula A (na is the smaller sample), RA (the sum of ranks for formula A) is calculated as follows:

Table 1 Critical values of smaller rank sum for Wilcoxon-Mann-Whitney test. (Columns: n1, the smaller sample, = 3 to 15; rows: n2 = 3 to 15, each at α = 0.10, 0.05, and 0.025. The tabled values are not reproduced here.)

RA = 1 + 2 + 6.5 + 8 + 10 = 27.5

R′A = nA(n + 1) − RA = 5(13 + 1) − 27.5 = 42.5
9 25. assign rank 1 to the smallest. Comparing More Than Two Averages: The Kruskall-Wallis Rank Sum Test Do the averages of K products differ? The objective of this nonparametric test is to determine whether several samples could have all come from the same population.5) (14) (15) (20) (6) (8) (12) (16.05. Step 2.5) (16. . therefore. χ20. . n2. If R′A ≤ Rα (n1.2 46.1 12.6 19.9 6.1 30. 3.05 n1 = 5 n2 = 8 R0.H1317_CH29. Data: Design 1 Design 2 Design 3 1.5 16.5) (13) (18) *The numbers shown in parentheses are the ranks.7 1.5. as required in step 3 of the following procedure. the true averages of all the samples are equal. Choose a level of significance. That is.5 and R0.90. We have n1.2 = 4. .5 25. . Look up the chi-square for χ21–α for K – 1 degrees of freedom (K = 3).5 13. let α = 0. conclude that formula A exceeds formula B.8) = 23. . for all observations combined.62 Step 3.9 (1)* (2) (3) (4) (7) (10.8. Procedure: Step 1.05(5. and so on.8 = 23 R A′ is not ≤ R0.5) (5) (9) (10. n3. n observations on each product design 1. . or alpha risk α.5) (16. .5. R′A = 42.8 25.2 13.4 20. 2.n2). assign to each the rank that would have been assigned had the observations differed .10. K. + n K Assign a rank to each observation according to its size in relation to all n observations. rank 2 to the next largest. .7 46.qxd 10/15/07 316 2:31 PM Page 316 The Desk Reference of Statistical Quality Methods where: α = 0.1 82. . . from lowest to highest.2 46. In cases of ties. For this example. N = n 1 + n 2 + n3 + .05.2 46. that is. we cannot conclude that formula A exceeds the performance of formula B.5 42.1 29.

slightly.

When using this procedure, there should be a minimum of five observations for each sample. If more than 20 percent of the observations are involved in ties, this procedure should not be used.

Step 4. Calculate Ri (the sum of the ranks of the observations on the ith product) for each of the products.

R1 = 76.5    R2 = 78.0    R3 = 55.5

Step 5. Calculate the following:

H = [12 / (N(N + 1))] Σ (Ri² / ni) − 3(N + 1)

H = (12/420)(2280.30) − 63 = 2.15

Step 6. If H > χ²1−α, conclude that the averages of the K designs differ; otherwise, there is no reason to believe that the averages differ. Since H = 2.15 does not exceed the critical chi-square value found in step 2, there is no reason to believe that the three design averages differ.

Bibliography

Kohler, H. 1988. Statistics for Business and Economics. 2nd edition. Glenview, IL: Scott, Foresman and Company.
Siegel, S. 1956. Nonparametric Statistics. New York: McGraw-Hill.
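The Kruskall-Wallis H statistic in step 5 can be sketched in a few lines. The three groups below are illustrative made-up data, not the design data from the example:

```python
# Kruskall-Wallis H statistic with tie-averaged ranks.

def kruskal_h(groups):
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)

    def avg_rank(value):
        # average of the positions `value` occupies in the pooled ordering
        positions = [i + 1 for i, v in enumerate(pooled) if v == value]
        return sum(positions) / len(positions)

    rank_sums = [sum(avg_rank(v) for v in g) for g in groups]
    return (12.0 / (n * (n + 1))) * sum(
        r * r / len(g) for r, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

h = kruskal_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(round(h, 1))   # → 7.2
```

With the book's three designs the same function would return H = 2.15.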


and mode) 2. The normal distribution is used exclusively with the application of statistical process control (SPC) of variables data and is assumed when working with process capability indices. median. All distributions have the following two defining characteristics: 1. Mode M: The most frequently occurring data value 3. A location statistic that defines the central tendency of the data (examples are the average. One of the most frequently used is the normal distribution. Median X˜: The middle value The variation of a distribution can be measured by: 1. One of the subjective tests for a normal distribution is to construct a frequency histogram and look for a bell-shaped curve. A variation statistic that defines the amount of dispersion or variation of the data (examples are standard deviation and range) The central tendency of data can be described by one of the following statistics: – 1. such as performing a normal probability plot or performing a chi-square goodness-of-fit test (see the module Testing for a Normal Distribution). is sometimes referred to as the bell curve. The pattern or histogram of normally distributed data is shaped like a bell and. Average X : The sum of the data divided by the total observations 2. Standard deviation S: S= ∑( X ± X )2 n ±1 where: X = individual value – X = average of individuals n = sample size Σ = sum 319 .H1317_CH30. There are other tests for a normal distribution. Range R: The difference between the largest and smallest data values 2. Graphically they can describe unique shapes. hence.qxd 10/17/07 2:37 PM Page 319 Normal Distribution Distributions are mathematical functions that describe data.

in this case it is 28.qxd 10/17/07 320 2:37 PM Page 320 The Desk Reference of Statistical Quality Methods The standard deviation is a conventional way of measuring the deviation (variation) of the individuals from their average. the greater the variation. Example 1: – Given an average of X = 23.58? Step 1. 5. The following examples illustrate this use of the normal distribution.58 .1 28. what is the probability of getting a value greater than 28. and ±3S are important from a practical perspective such as in the application of SPC.00 S = 3. The normal distribution has several important characteristics: 1. The distribution is divided in half by the average The average.1. X = 23. ±2S.7 percent of the data are contained within the limits of X ± 3S While the limits of ±1S.58.0 and a standard deviation of S = 3. The greater the standard deviation. Using the probability density function for the normal distribution. 3. and median are equal – Approximately 68 percent of the data are contained within the limits of X ± 1S – Approximately 95 percent of the data are contained within the limits of X ± 2S – Approximately 99. There is an assumption that there is a measured quantity greater than (for this case) the point of interest. A vertical line drawn in the tail area represents the point of interest. The vertical line dividing the distribution in half represents the average. The standard deviation is not drawn in. we can determine the probability of events occurring given an average and standard deviation. Sketch out the problem. mode. 4. but it can be recorded near the average. It is drawn so that there is some tail area remaining outside the limit. 2. The standard deviation is a universal index that can communicate the degree of variation of data. Draw a normal distribution.H1317_CH30. but it should be on the appropriate side of the average line. there are other times we might want to apply other multiples. The exact location of the point of interest is not critical.

H1317_CH30. The Z-score is the difference between the average and the point of interest in units of standard deviation.0 3. the Z-score is a Z upper score: ZU = X′ – X 28.x0 column. then proceed across this row until intercepting the x. x.x 9 ⏐Z⏐ 4. Look up the Z-score using the standard normal distribution table.50. Calculate a Z-score. For our example.83. 1 = 27. Z-scores can be labeled as Z upper or Z lower depending on the location of the point of interest relative to the average. This proportion may also be expressed as a rate greater than 28.qxd 10/17/07 2:37 PM Page 321 Normal Distribution 321 Step 2.8——— 0. .00 . An alternative way of expressing this is 3.9 3. .58 is 0.80.58 – 23.593 percent.1 Calculate Z-scores to the nearest second decimal place.82 amps and a standard deviation of 0. 1.03593 This implies that 1 out of 28 will exceed 27. ZU = = ZU = 1. Example 2: The specification for the amount of current drawn for a small electric motor under test is 3.45.x 2 x.x 1 x. x. .0 ± 0. . What percentage of the motors fail to meet the specification requirement? What is the combined rate of nonconformance? .03593. .8 3. Step 3.x 3 . S 3.7 . Go down the left column to 1.83 ⇒ 28 0.x 0 x. Thirty-five motors are tested with an average draw of 2.03593 The proportion of the distribution of data that are greater than 28.8.58 by taking the reciprocal and rounding to the nearest integer.

50 − 2.45 3. x. .8 3.45 ZL = Step 3.x 9 x.51 S 0.82 S = 0.x 3 .7——— 0. . Look up the Z-scores.x 3 . 0.45 X ′ –X 3. .x 1 ⏐Z⏐ 4.x 2 x. . .x 1 ⏐Z⏐ 4. x. .82 – 2.71 S 0. Calculate the Z-scores: X – X′ 2.qxd 322 10/17/07 2:37 PM Page 322 The Desk Reference of Statistical Quality Methods Step 1. .x 0 x.x 2 x. 1. .7 3.8 3.5 X = 2.0 3. .82 ZU = ZU = ZU = 1. .H1317_CH30.0 3.9 3.6 . .x 9 . .7 3.5——— 0.50 ZL = Z L = 0.x 0 x.06552 x. . x.5 Step 2.9 3.23885 x. Sketch out the problem: 2.6 .

A ZL of 0.71 indicates that a proportion of 0.23885 (23.885 percent) will be below the lower specification of 2.50. A ZU of 1.51 indicates that a proportion of 0.06552 (6.552 percent) will be greater than the upper specification of 3.50.

The rate of nonconformance below the lower specification is 1/0.23885 = 4.19 ⇒ 4; therefore, 1 out of 4 will fall below the lower specification. The rate of nonconformance above the upper specification is 1/0.06552 = 15.26 ⇒ 15; 1 out of 15 will exceed the upper specification. The total proportion exceeding the specification is 0.23885 + 0.06552 = 0.30437, and 1/0.30437 = 3.29 ⇒ 3, so 1 out of 3 will not meet the specification.

Example 3:
Samples of cat food are analyzed, and the average sodium chloride content is 110.5 parts per thousand with a standard deviation of 15.0. What proportion of the product is expected to have between 80 and 90 parts per thousand sodium chloride? In this example, we have two points of interest. Both are below the average; therefore, we will designate the Z-score for the 80.0 point of interest as ZLL and the Z-score for the 90.0 point of interest as ZL.

Step 1. Sketch out the problem: points of interest at 80 and 90, with X̄ = 110.5 and S = 15.0.

Step 2. Calculate the two Z-scores.

A. Calculate ZLL:    ZLL = (X̄ − X′LL)/S    ZLL = (110.5 − 80.0)/15.0    ZLL = 2.03
B. Calculate ZL:     ZL = (X̄ − X′L)/S     ZL = (110.5 − 90.0)/15.0    ZL = 1.37

Step 3. Look up the Z-scores.
For Z = 2.03, the tabled value is 0.02118; for Z = 1.37, it is 0.08534. The proportion of the data less than 80.0 is 0.02118, and the proportion of the data less than 90.0 is 0.08534. Therefore, the proportion between these two limits is 0.08534 − 0.02118 = 0.06416. The percentage of data between 80.0 and 90.0 is 6.416 percent.

Central Limit Theorem
The standard deviation σ and the corresponding statistic S represent the variation of individuals. The distribution and variation of averages chosen from a population of individuals can be related to the distribution of the individuals from where they came. This relationship is defined by the central limit theorem. The standard deviation of averages will equal the true standard deviation of the individuals (from which the averages were determined) divided by the square root of the sample size:

SX̄ = σ/√n

H1317_CH30. n SX = 8 . 5 S X = 3. = Averages of n = 5 = Individuals 16 29. The standard deviation of averages S X is less than the standard deviation of the individuals σ Example: – A population is defined with a sample average of X = 40 and a sample standard deviation of S = 8. averages of n = 5. the following will result. regardless of the distribution of the individuals from where they are chosen 2.2 X = 40 50.qxd 10/17/07 2:37 PM Page 325 Normal Distribution 325 Note: The best estimate for σ is the statistic S.8 Comparative distribution of individual vs. The standard deviation of the averages SX– will be SX = σ . X – 3S = 16 X = 40 X – 3S = 64 If samples of n = 5 are chosen from this distribution and a frequency distribution drawn of the averages. 64 .8. Two things result from the central limit theorem: 1. The distribution of averages tends to be normally distributed. The standard deviation of averages will equal the true standard deviation of the individuals (from which the averages were determined) divided by the square root of the sample size.

Reporting the Results
When we look up a Z-score, the result is expressed as a proportion. For example, for a Z-score of 2.33, the table value is 0.00990. If the Z-score were associated with a level of nonconformance, we would state the probability of getting a nonconforming unit as 0.00990, or that the proportion nonconforming is 0.00990. There are several alternative ways to express our findings:

Percent: Move the decimal in the tabled value (proportion) two places to the right. % nonconforming = 0.990%

Parts per million (ppm): Move the decimal in the table value six places to the right. ppm = 9900

Rate (of nonconformance): Take the reciprocal of the table value, round off to the nearest integer, and rewrite the integer as a fraction. 1/0.00990 = 101.01, and 101.01 rounds to 101 as an integer. The rate of nonconformance is 1 in 101, or 1/101.
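The three conversions above are mechanical and can be wrapped in a small helper; the function name is my own.

```python
def report(table_value):
    """Re-express a standard normal table value (a proportion) as a
    percent, a parts-per-million figure, and a '1 in N' rate."""
    percent = table_value * 100          # decimal moved two places right
    ppm = table_value * 1_000_000        # decimal moved six places right
    rate = round(1 / table_value)        # reciprocal, nearest integer
    return percent, ppm, rate

percent, ppm, rate = report(0.00990)     # table value for Z = 2.33
```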

p Chart

Control charts are selected according to the type of data taken to monitor a process. Quality data or measurements fall into one of the following two broad categories:

1. Variable data: data in which the initial observation is a measurement that can take on any value within the limits that it can occur and within the limits of our ability to make the measurement. Examples of variables are temperature, length, pressure, and mass.
2. Attribute data: data obtained from an initial observation of a yes or a no. Attribute data are normally associated with compliance to a requirement. Meeting the requirement yields a yes response, and failing a requirement yields a no response. Variables data can be converted to attribute data by comparison to a requirement. An example of attribute data is the percentage of observations that do not conform to a requirement. Frequently used attributes are the percentage of customers not satisfied with the quality of a service and the percentage of voters voting for a specific candidate.

Once we have decided on the type of data being collected, we can narrow the field of available control charts from which to select. Assuming that the data are attribute in nature, we will have the following four basic types available:

1. p chart
2. np chart
3. c chart
4. u chart

These charts can be categorized according to:

1. Sample size: fixed or variable
2. Attribute type: defect or defective

The response to these two inquiries determines the specific control chart for attributes we will use. We may choose to vary the sample size at some time in the future. Our sample size may result from a 100 percent inspection of units being produced over

a fixed period of time such as a shift. The number of units produced during this period may vary from day to day. We may, on the other hand, elect to inspect a fixed or constant number of units for a specified period of time.

Attribute type: Attribute data can be related to the total number of defective units (nonconforming) within a sample or to the number of defects (nonconformities) in a sample. A defective is a unit of inspection that does not conform to a quality requirement. A defect is the specific incident that caused the unit of inspection to be defective. For example, if we inspect 50 printed circuit boards and find three missing components, two reversed polarities, and three cracks, we would report eight defects. From this information, we do not know the number of defective boards: all of the eight defects could have been found on one board, or eight defective boards could have had one defect each.

Chart name    Sample size        Attribute type
c chart       Constant           Defect
u chart       May be variable    Defect
np chart      Constant           Defective
p chart       May be variable    Defective

If we define constant as actually being a variable sample size where the amount of variation is zero, then the c chart and np chart can be eliminated from our field: a p chart with a constant sample size performs exactly as an np chart, and a u chart with a constant sample size performs exactly as a c chart. We do not have to vary the sample size for the p chart and u chart, but we may do so if we want. The p chart and u chart can perform very well in place of the c chart and np chart. This module discusses the application and principles of the p chart.

Attribute data are binary in nature in that the initial response of an inspection activity yields only two possible outcomes: the unit of inspection either conforms to a requirement or it does not (yes or no). The response is discrete. Control charts for variables are based on the normal distribution as a statistical model; the distribution on which the p chart is based is the binomial distribution, where n = sample size, p = proportion nonconforming, and q = 1 − p. The average of the binomial distribution is np, and the standard deviation is √(npq). Where the sample size n is relatively small in proportion to the population N such that n/N is less than 0.10, and the product of the proportion defective and the sample size is greater than five (np > 5), the binomial distribution approximates the normal distribution.

As with all attribute control charts, the construction and use of a p chart has the following operations or steps:

1. Collection of historical data to characterize the process
2. Calculation of a location statistic (an average)
3. Determination of statistical upper and lower control limits based on ±3 standard deviations
4. Construction of the control chart and plotting of historical data
5. Continuation of monitoring the data, looking for signs of a process change in the future
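The two-question selection logic in the table above can be expressed as a small helper; the function and its argument names are hypothetical, introduced only to make the mapping explicit.

```python
def select_attribute_chart(sample_size_varies, counting):
    """Map the two questions above to a chart type. `counting` is
    'defects' (nonconformities) or 'defectives' (nonconforming units)."""
    if counting == "defects":
        return "u chart" if sample_size_varies else "c chart"
    if counting == "defectives":
        return "p chart" if sample_size_varies else "np chart"
    raise ValueError("counting must be 'defects' or 'defectives'")
```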

The following example illustrates these operations. This example of a p chart is based on responses from a customer-satisfaction survey taken every few days. A survey is sent to 50 randomly selected customers. The survey consists of 12 questions, with the choice of responses rated from zero to four. The rating scale is defined as follows:

0 = poor
1 = below average
2 = average
3 = above average
4 = excellent

A response of zero or one to any of the 12 questions will be counted as a nonconformity (defect), and the survey will be deemed unsatisfactory (a defective unit). This p chart is based on the proportion of unsatisfied customers. The actual p chart does not depend on the frequency of defects, as it is concerned only with the proportion of defective units found. The c chart and u chart focus on defects (see the module entitled u Chart for more discussion). A record of the specific defects observed should be maintained so that appropriate corrective action can be taken and Pareto analysis can be accomplished. A more detailed discussion on Pareto analysis can be found in the module entitled Pareto Analysis.

The sample size for a p chart should be sufficiently large as to provide at least one or two defective units when a sample is taken. If some prior knowledge is available regarding the expected average proportion defective, the appropriate sample size to provide a positive lower control limit (LCL) can be determined by

Minimum sample size n = 9(1 − p)/p,

where p = the proportion defective for the process. It is suggested that the actual sample size be 10 percent greater than the calculated minimum to provide for a positive LCL.

The number of samples used to characterize the process, k, is typically a minimum of 25 (more may be used). The 25 minimum will give us a level of confidence that the estimate fairly represents the true nature of the process. Note: For this example, only 16 samples will be used.

Step 1. Collect historical data.
The sample size is 50.

Data: n = sample size, np = number of defective units, p = proportion defective.

Sample #1   8/12/95   n = 50   np = 4   p = 0.08
Sample #2   8/15/95   n = 50   np = 2   p = 0.04
Sample #3   8/18/95   n = 50   np = 6   p = 0.12
Sample #4   8/21/95   n = 50   np = 3   p = 0.06

Samples #5 through #16 (8/23/95 through 9/25/95) each have n = 50, with np values ranging from 0 to 7; sample #16 (9/25/95) yields np = 4, p = 0.08, and the 16 samples contain 57 defective units in total. Record the data on the control chart form.

Step 2. Calculate the location statistic.
The location statistic for the p chart will be the average proportion defective p̄, determined by dividing the total number of defective units found by the total number of units inspected during the historical period:

p̄ = (4 + 2 + 6 + 3 + ... + 4)/(50 + 50 + 50 + 50 + ... + 50)    p̄ = 57/800    p̄ = 0.071

Note that p̄ is reported to one more decimal place than the individual sample proportions defective. This is done to minimize the probability of a single point falling exactly on the average.

Step 3. Determine the control limits based on ±3 standard deviations.
The standard deviation for the binomial distribution where defective units are expressed as a proportion is

√[p̄(1 − p̄)/n],

where p̄ = average proportion defective and n = sample size. The control limits are based on the average proportion defective ±3 standard deviations:

p̄ ± 3√[p̄(1 − p̄)/n]

For this case:

0.071 ± 3√[0.071(1 − 0.071)/50]    0.071 ± 0.109

The upper control limit (UCL) is 0.071 + 0.109 = 0.180. The LCL is 0.071 − 0.109 = −0.038; when a negative control limit results, default to 0.000. From the historical characterization, we expect to find, 99.7 percent of the time, between 0.00 and 0.180 proportion defective when selecting a sample of n = 50. The 99.7 percent probability arises from the use of ±3 standard deviations for our statistical limits.

Knowing that the expected proportion defective is 0.071, we can calculate the proper sample size so as to have a real, positive LCL from the relationship

n = 9(1 − 0.071)/0.071    n = 117.8 ⇒ 118.

If we had taken a sample size of 118, the expected LCL would have been 0.000072. As a practical matter, the sample size should be somewhat larger than the absolute minimum of 118 (say 125), in which case the LCL would have been 0.002.

Step 4. Construct the control chart, sketch in the average and control limits, and plot the data.
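Steps 2 and 3 can be sketched as a short computation; the function names are my own, but the formulas are those given above.

```python
import math

def p_chart_limits(total_defectives, total_inspected, n):
    """Average proportion defective and +/- 3 sigma limits for a p chart."""
    p_bar = total_defectives / total_inspected
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - 3 * sigma)   # a negative limit defaults to 0
    return p_bar, lcl, p_bar + 3 * sigma

def minimum_n(p_bar):
    """Smallest sample size giving a positive LCL: n = 9(1 - p)/p."""
    return 9 * (1 - p_bar) / p_bar

# Historical period: 57 defectives in 800 units, samples of n = 50
p_bar, lcl, ucl = p_chart_limits(57, 800, 50)
```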

Traditionally, averages are drawn as solid lines, and control limits are drawn as broken lines. A vertical wavy line has been drawn to separate the history from the future data. Only the process average proportion defective p̄ is drawn out into the future area of the control chart. A vertical plotting scale should be chosen such that part of the chart is available for those future points that might fall outside the control limits.

[Chart: p chart with historical characterization data only; UCL = 0.180, p̄ = 0.071, samples 1 through 37 on the horizontal axis]

Step 5. Continue to monitor the process, looking for indications that a change has occurred. Data will be collected into the future. Anytime a process change is indicated, as evidenced by violation of one of the SPC detection rules, an investigation should be undertaken to identify an assignable cause. Evidence supporting the conclusion that a change has taken place will be in the form of statistically rare patterns referred to as SPC (statistical process control) detection rules. These detection rules have the following two characteristics:

1. All of these SPC patterns or rules are statistically rare for a normal, stable process.
2. Most of these SPC detection rules indicate a direction for the process change.

There is a small probability that one of these rules will be invoked when, in fact, no change in the process has occurred. This probability, however, is so small that we assume the violation truly represents a process change.

Rule 1: A lack of control or process change is indicated whenever a single point falls outside the UCL or LCL.
Rule 2: A lack of control or process change is indicated whenever seven consecutive points fall on the same side of the process average.
Rule 3: A lack of control or process change is indicated whenever seven consecutive points are steadily increasing or decreasing.
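The three detection rules are simple scans over the plotted points. A minimal sketch (my own implementation, reading "seven consecutive points increasing or decreasing" as a window of seven strictly monotone points):

```python
def rule_1(points, lcl, ucl):
    """A single point outside the control limits."""
    return any(p > ucl or p < lcl for p in points)

def rule_2(points, average):
    """Seven consecutive points on the same side of the average."""
    return any(all(p > average for p in points[i:i + 7])
               or all(p < average for p in points[i:i + 7])
               for i in range(len(points) - 6))

def rule_3(points):
    """Seven consecutive points steadily increasing or decreasing."""
    windows = [points[i:i + 7] for i in range(len(points) - 6)]
    return any(all(b > a for a, b in zip(w, w[1:]))
               or all(b < a for a, b in zip(w, w[1:]))
               for w in windows)
```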

We will continue to monitor the example process to see if a change has occurred relative to our historical period from 8/12 to 9/25. To demonstrate the effects on our chart, we will change the sample size on the future data from 50 to 150; the p chart does allow for a change in the sample size. A change in sample size results in a change in the value of three standard deviations, and since we add and subtract three standard deviations to and from the historical process average proportion defective to determine the UCL and LCL, we will expect to see a change in these limits. The larger the sample size, the closer the control limits will be to the average. This decrease in standard deviation as we increase the sample size increases our ability to detect smaller process changes.

The new control limits using a sample size of 150 and the historical average proportion of 0.071 are given by

0.071 ± 3√[0.071(1 − 0.071)/150]    0.071 ± 0.063
UCL = 0.134    LCL = 0.008

We will sketch in the new control limits and extend the historical average. We do not recalculate the average using the future data points, because we want to detect a process change relative to the historical average. After sketching in the new control limits for n = 150 and extending the average, we plot the future data points, looking for a violation of one of the three SPC detection rules.

Future data:

Sample #17   9/28/95    n = 150   np = 18   p = 0.12
Sample #18   10/1/95    n = 150   np = 4    p = 0.03
Sample #19   10/4/95    n = 150   np = 9    p = 0.06
Sample #20   10/7/95    n = 150   np = 15   p = 0.10
Sample #21   10/10/95   n = 150   np = 14   p = 0.09
Sample #22   10/13/95   n = 150   np = 14   p = 0.09
Sample #23   10/17/95   n = 150   np = 0    p = 0.00
Sample #24   10/20/95   n = 150   np = 3    p = 0.02

Notice that rule one has been violated on sample #23: the point falls below the LCL of 0.008, suggesting that the process is performing better than expected (the proportion defective has decreased) relative to the historical average of 0.071.

[Completed p chart: history period at n = 50 with UCL = 0.180 and p̄ = 0.071; future period at n = 150 with UCL = 0.134 and LCL = 0.008]

Using the p Chart When the Proportion Defective Is Extremely Low (High-Yield Processes)
A very low proportion defective gives rise to the problem that very large sample sizes are required and, more critically, that the application of the binomial distribution is not appropriate. While a p chart could be constructed, the resulting control chart could lead to erroneous conclusions. For example, if a process were to have a nonconformance rate of 300 ppm and a sample size of 250 was taken, the np value would be only np = (250)(0.0003) = 0.075. This np value is significantly below the normal requirement of greater than five; therefore, the binomial would not be appropriate as with the traditional p chart. Consider the following case:

CL = P̄ = 0.0003
UCL = P̄ + 3√[P̄(1 − P̄)/n]    UCL = 0.0003 + 0.00328    UCL = 0.00358
LCL = P̄ − 3√[P̄(1 − P̄)/n]    LCL = 0.0003 − 0.00328 ⇒ 0.00

In this case, if we had one defective in 250, the control chart would be out of control with a calculated p value of 0.0040, which exceeds the UCL of 0.00358. This process would always be out of control. Other problems could also arise.
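The numbers in this case are easy to reproduce, which makes the breakdown of the traditional limits concrete:

```python
import math

# Traditional p chart limits at a 300 ppm nonconformance rate with
# a sample size of 250, reproducing the case above.
p_bar, n = 0.0003, 250
sigma = math.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma             # about 0.00358
lcl = max(0.0, p_bar - 3 * sigma)   # negative, so it defaults to 0.0

one_defective = 1 / 250             # a single defective gives p = 0.0040
out_of_control = one_defective > ucl   # True: the chart signals at once
```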

The application of a p chart for this case is awkward and is not appropriate for two reasons:

1. No LCL is available, and we will soon violate the rule of seven points in a row below the average.
2. Only samples yielding zero defective units will maintain an in-control condition; finding one defective in a sample of n = 250 leads to an out-of-control condition.

Another drawback in using the p chart with a very low proportion defective is the slow response to a large shift in the process average. Consider the previous example, where the process average defective was 300 ppm (0.0003), the sample size was n = 250, and the UCL was 0.00358. If the process proportion defective were to double to 600 ppm, the probability of getting one defective in a sample of 250 is

Px = [n!/(x!(n − x)!)] P^x Q^(n−x)    Px=1 = (250)(0.0006)(0.9994)^249 = 0.1292.

This means that, on average, 1/0.1292, or about 8, samples must be inspected before a defective is found and the process is judged to be out of control.

Rather than focus on the number of defective units found, an alternative approach is to focus on the cumulative count of nondefective items until a defective item is found. The probability of the nth item being defective is given by

Pn = (1 − p)^(n−1) p,    n = 1, 2, 3, 4, ... (a geometric distribution),

where n̄ = 1/p is the average run length given a proportion defective of p. By summation of the geometric series and expanding:

Pn = 1 − [1 − np + (n(n − 1)/2!)p² + ...].

When n is large, we may substitute n̄ for 1/p, giving

Pn = 1 − e^(−n/n̄).

Solving for n: n = −n̄ ln[1 − Pn]. By substituting 0.50 for Pn in this equation, we can solve for the median n as follows: n = −n̄ ln[1 − 0.50], n = 0.693n̄.
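The limits that follow from Pn = 1 − e^(−n/n̄) can be computed directly. A small sketch, using an average count of 2940 for illustration (the function name is my own):

```python
import math

def cumulative_count_limits(n_bar, alpha=0.0027):
    """Limits for a cumulative-count-of-conforming chart derived from
    Pn = 1 - exp(-n / n_bar)."""
    center = -n_bar * math.log(0.5)           # median: 0.693 * n_bar
    lcl = -n_bar * math.log(1 - alpha / 2)    # ~ (alpha / 2) * n_bar
    ucl = -n_bar * math.log(alpha / 2)        # 6.608 * n_bar at alpha = 0.0027
    return lcl, center, ucl

lcl, center, ucl = cumulative_count_limits(2940)
```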

Control chart for cumulative counts of conforming items.¹ Setting Pn = α/2 and Pn = 1 − α/2 in Pn = 1 − e^(−n/n̄) gives

LCL = −n̄ ln[1 − (α/2)], and since α is small, LCL = (α/2)n̄
UCL = −n̄ ln(α/2)

with the center line at the median, CL = 0.693n̄. If α is set to 0.0027, as with traditional Shewhart control charts, the control limits become

LCL = 0.00135n̄    UCL = 6.608n̄

Sample number   Units tested   Cumulative count when count stops   Defectives since previous stop
1               500            500                                 0
2               500            1000                                0
3               240            1240                                1
4               800            800                                 0
5               1000           1800                                0
6               500            2300                                0
7               900            3200                                0
8               488            3688                                1
9               750            750                                 0
10              1000           1750                                0
11              800            2550                                0
12              1000           3550                                0
13              630            4180                                1
14              500            500                                 0
15              1000           1500                                0
16              770            2270                                0
17              380            2650                                1

Cumulative counts to a defective: 1240 (A), 3688 (B), 4180 (C), 2650 (D).

¹Goh, T. N. 1987. "A Control Chart for Very High Yield Processes." Quality Assurance 13, no. 1 (March): 18–22.

Example case study:
Light-emitting diodes (LEDs) are 100 percent tested on line in lots that vary from 500 to 1000. The preceding test data have been collected. The average sample size when a defective unit was detected is

n̄ = (1240 + 3688 + 4180 + 2650)/4 = 2940,    p = 1/2940 = 0.00034 = 340 ppm.

With α = 0.0027:

CL = 0.693n̄ = (0.693)(2940) = 2037
LCL = 0.00135n̄ = (0.00135)(2940) = 4
UCL = 6.608n̄ = (6.608)(2940) = 19,428

The cumulative sample sizes are plotted on the vertical axis using a log scale, and the sample number (normal linear scale) is plotted on the horizontal axis. All cumulative points are plotted. When a point representing the detection of a defective is plotted, that point is not connected to the next point, and the cumulative sample size plotting is restarted. If multiple-cycle log paper is not available, the log of all the numerical data may be plotted on normal rectangular coordinate graph paper; in this case, the UCL would equal log 19,428 or 4.28, the average (CL) would equal log 2037 or 3.31, and the LCL would equal log 4 or 0.60.

Exceeding the UCL is indicative of the process performing better than expected compared to the historical performance; conversely, points below the LCL indicate a decrease in process performance relative to the past performance.

[Chart: cumulative count n on a log scale versus sample number, with CL = 2037, LCL = 4, and UCL = 19,428]

Exercise:
Construct a Σp chart using the following data. Set α = 0.05.

Sample number   Units tested   Cumulative count when count stops   Defectives since previous stop
1               500            500                                 0
2               300            800                                 1
3               175            175                                 1
4               600            600                                 0
5               500            1100                                0
6               300            1400                                0
7               700            2100                                0
8               1200           3300                                1
9               600            600                                 0
10              550            1150                                1
11              750            750                                 0
12              800            1550                                0
13              358            1908                                1

Cumulative counts to a defective: 800, 175, 3300, 1150, 1908.

Exponential Model
If we consider the process to be mature and in a steady state of failure rate (but very low), the failures that do occur may be modeled using the traditional reliability model based on the exponential distribution. This model is used extensively in reliability predictions. From the exponential relationship for determining reliability,

R = e^(−n/θ)

where:
θ = mean time between failures or, for this application, the average number of units accumulated when a defective is encountered
n = specific sample size for a given level of reliability
R = the probability of reaching a count of n parts without a failure

By solving this relationship for n at specified levels of reliability, we determine the sample size n for the appropriate reliability, and a control chart based on the cumulative sample size to the point of a single failure results. If we select an alpha risk of α = 0.05, the control chart limits and average change as explained in the following paragraphs. The UCL represents the sample size for which there is a 5 percent probability of being reached with no defect found. The LCL is equal to the sample size that represents a reliability of 0.95, or a 95 percent probability that this number of units will be found defect free.

Solving for n:

R = e^(−n/θ)    ln R = −(n/θ)    n = −θ ln R

where θ = mean number of units before failure, or the average sample size to failure. The center line for this chart will be the median, determined by the point at which the reliability is 50 percent; this differs from charts that use the average and the assumption of a normal or approximately normal distribution. The LCL is based on a reliability of 95 percent.

Example: Hypodermic needles are manufactured in a continuous process. The needles are 100 percent inspected by an automatic visual inspection device. The following 18 samples represent the number of units produced before a defective unit is found.

Step 1. Collect historical data:

Date    Sample size n    Date    Sample size n
11/1    120,000          11/24   160,000
11/4    138,000          11/28   110,000
11/7    130,000          11/30   78,000
11/9    93,000           12/1    76,000
11/11   155,000          12/3    89,000
11/13   145,000          12/6    90,000
11/16   140,000          12/9    98,000
11/19   105,000          12/12   100,000
11/22   80,000           12/15   180,000

Estimated mean time between failures: θ = average sample size = 115,944.

Step 2. Select an alpha risk α and determine the appropriate reliability.
For this example, the risk is set at 5 percent (α = 0.05). Use a reliability of 50 percent for the median.

Step 3. Calculate limits.

LCL = −θ ln 0.95    LCL = (−115,944)(−0.0513)    LCL = 5948
Median = −θ ln 0.50    Median = (−115,944)(−0.693)    Median = 80,349

Center line = median. Note: The center line is optional and is not considered in the traditional SPC detection rules.

The UCL is based on a reliability of 5 percent:

UCL = −θ ln 0.05    UCL = (−115,944)(−3.00)    UCL = 347,832

Step 4. Draw the control chart with limits, and plot the data.

[Chart: UCL = 347,832; median = 80,349; LCL = 5948]

Note on interpretation: The only rule for detecting a process change is exceeding the UCL or LCL. Exceeding the upper limit indicates that the process is performing better than the historical period; a point below the LCL indicates that the process is performing poorer than the historical period. An LCL of n = 5948 is the same as having a failure rate of 1/5948, or 0.000168, which is equivalent to a process defective rate of 168 ppm; points below this limit imply that the process has deteriorated. A UCL of n = 347,832 is the same as having a failure rate λ of 1/347,832, or 0.00000287, which is equivalent to a process defective rate of 2.87 ppm; exceeding this limit implies that the process has improved.

Step 5. Continue to monitor the process, looking for signs of a change. Two indicators are used to detect a process change:

1. Seven consecutive points increasing or decreasing
2. A single point outside the UCL or LCL
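The three limits all come from solving R = e^(−n/θ) for n; a short sketch (the function name is my own):

```python
import math

def exponential_chart_limits(theta, alpha=0.05):
    """Solve R = exp(-n / theta) for n = -theta * ln(R) at the
    reliabilities used above: 1 - alpha (LCL), 0.50 (median), alpha (UCL)."""
    lcl = -theta * math.log(1 - alpha)   # 95 percent reliability
    median = -theta * math.log(0.5)      # 50 percent reliability
    ucl = -theta * math.log(alpha)       # 5 percent reliability
    return lcl, median, ucl

lcl, median, ucl = exponential_chart_limits(115_944)
```

Note the small differences from the printed values, which come from the text's rounding of ln 0.95, ln 0.50, and ln 0.05 to 0.0513, 0.693, and 3.00.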

Future data are then collected. Samples taken from 12/18 through 12/30 yielded cumulative counts of 189,000; 290,000; 360,000; 220,000; and 160,000 units to a defective.

[Chart: history and future periods with UCL = 347,832 and LCL = 5948; the 360,000-unit point plots above the UCL — out of control]

The out-of-control point gives us supporting evidence that the process has changed, and for the better.

Data Transformation
Lloyd S. Nelson² developed a simple method of control charting processes where the nonconforming rate is extremely low (that is, ppm). This method is based on the fact that the Weibull distribution with a slope parameter (β) of 3.6 has a skewness of zero and a kurtosis of 2.72, which compares favorably to a normal distribution, where the skewness is zero and the kurtosis is 3.00. Therefore, the required transformation is simply to raise the cumulative number of successful samples required to obtain a defective to the 1/3.6 power.

²Nelson, L. S. 1994. "A Control Chart for Parts-per-Million Nonconforming Items." Journal of Quality Technology 26, no. 3 (July): 239–240.

The transformed value is then treated as a normally distributed variable and is monitored using the traditional individuals/moving range control chart.

Example: A machine produces 3.5-inch formatted diskettes that are 100 percent certified by writing and then reading data. Each time a diskette fails, it is rejected, and the count for the interval since the last failure is recorded. The objective is to establish a control chart based on the failure data.

[Table: seven samples, each with its cumulative count to failure X, the transformed value Y = X^0.2777, and the moving range of successive Y values]

The process average of the transformed values is

Ȳ = (15.5 + 16.5 + ... + 14.2)/7 = 15.17,

and the average moving range is MR̄ = 0.515 (from the six moving ranges). The individuals chart limits are

UCL = Ȳ + 2.66 MR̄ = 15.17 + 1.37 = 16.54
LCL = Ȳ − 2.66 MR̄ = 15.17 − 1.37 = 13.80

[Chart: individuals chart of Y with UCL = 16.54 and LCL = 13.80; the process is out of control on sample 6]
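Nelson's transformation followed by the individuals-chart arithmetic can be sketched as below. The cumulative counts here are made-up values for illustration; the 2.66 multiplier on the average moving range is the standard individuals-chart constant used in the text.

```python
# Nelson's transformation applied to hypothetical cumulative counts.
counts = [19_400, 22_600, 25_700, 18_100, 15_000, 12_800, 14_300]

ys = [x ** (1 / 3.6) for x in counts]            # Y = X^0.2777
mrs = [abs(b - a) for a, b in zip(ys, ys[1:])]   # moving ranges

y_bar = sum(ys) / len(ys)
mr_bar = sum(mrs) / len(mrs)
ucl = y_bar + 2.66 * mr_bar
lcl = y_bar - 2.66 * mr_bar
```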

Bibliography
Besterfield, D. H. 1994. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall.
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th edition. New York: McGraw-Hill.
Montgomery, D. C. 1996. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons.
Wheeler, D. J., and D. S. Chambers. 1992. Understanding Statistical Process Control. 2nd edition. Knoxville, TN: SPC Press.


p' Chart and u' Chart for Attributes (Laney's)

The classical control charts for attribute or count data are the p chart (for proportion defective, binomial distribution assumed) and the u chart (for defects per unit, Poisson distribution assumed). In the special case where the subgroups are all the same size, we may use variants of these charts: the np chart and the c chart. The distributional assumptions cited for these cases are very important. If the assumptions are not valid, the charts may be useless and have no meaning.

Unfortunately, in practice, this weakness is often present. A common case in point is well known and has been documented for many decades: sometimes an attribute chart will have control limits that hug the center line so tightly that most of the data points appear to be out of control. This is most often seen when the subgroup sizes are very large. Logically, larger subgroups should produce sample means that are closer to the overall mean than smaller ones are, so it makes sense that as the subgroup size increases, the control limits should tighten up. What does not make sense is why the plotted data do not necessarily conform to fit those limits.

This phenomenon is called "overdispersion" and was well known to the Old Boys at Western Electric back in the 1950s. They told us it was because the "constant parameter" assumption required for these charts is not always valid. The binomial/Poisson assumptions imply that the population parameters (approximated by the center lines) remain constant over the entire period. Sometimes there is "batch-to-batch" variation in such data sets. (Some days it's hot and some days it's cold.) Over the years, a number of scholarly papers have been published in peer-reviewed journals discussing various diagnostic tests for the overdispersion condition [2, 3]. Overdispersion may be present far more often than we realize [5].

Why does this happen? The answer is quite simple. What we have here is a combination of within-subgroup variation and between-subgroup variation. (Statisticians will recognize this as the fundamental concept of analysis of variance.) We know axiomatically that the distance from a control limit to the center line is inversely proportional to the square root of the subgroup size. Increasing subgroup sizes causes the within-subgroup component to diminish; as that happens, it makes the between-subgroup component (which is not affected by subgroup size) relatively more prominent. Large subgroup sizes do not cause overdispersion; they merely make it easier to detect.

Until the late 1990s, the only solution was to follow the advice of Western Electric [6] and display the data in an XmR chart. (Not to worry, they told us: just use an individuals (XmR) chart when this happens.)
Problem solved—sort of. the control limits should tighten up. they told us. Unfortunately. we may use variants of these charts: the np chart and the c chart. They told us it was because the “constant parameter” assumption required for these charts is not always valid. so it makes sense that as the subgroup size increases.H1317_CH32. they merely make it easier to detect. This phenomenon is called “overdispersion” and was well known to the Old Boys at Western Electric back in the 1950s. just use an individuals (XmR) chart when this happens. A common case in point is well known and has been documented for many decades. Large subgroup sizes do not cause overdispersion. When all the subgroups are the same size. Sometimes an attribute chart will have control limits that hug the center line so tightly that most of the data points appear to be out of control. What we have here is a combination of within-subgroup variation and between-subgroup variation.3]. the charts may be useless.

In the special case where the subgroup sizes are all the same (np charts or c charts), this is the right thing to do. However, this creates a couple of problems: (1) by giving equal weight to larger and smaller subgroups, the estimates are statistically biased, and (2) we lose the variable control limits we are accustomed to seeing in p charts and u charts. The XmR chart's inherent leveling off of the control limits (which really should be wider for smaller subgroups and narrower for larger subgroups) may cause some points to be misclassified (as shown in the following examples). Laney's p' and u' control charts provide a solution for all the aforementioned problems encountered with the traditional p and u control charts.

Example 1. Proportion defective: Consider the data set in Table 1, where

i = subgroup
ni = subgroup size
xi = number of defective items
pi = xi/ni = proportion defective
p̄ = Σxi/Σni = center line (CL)
σ̂ = √(p̄(1 − p̄)) = estimated population standard deviation
LCLi = p̄ − 3σ̂/√ni = lower control limit (LCL)
UCLi = p̄ + 3σ̂/√ni = upper control limit (UCL)
Table 1  p chart.

 i     ni      xi     pi        CL        LCL       UCL
 1    1075     301   0.28000   0.30754   0.26531   0.34976
 2    1085     395   0.36406   0.30754   0.26551   0.34957
 3    1121     302   0.26940   0.30754   0.26619   0.34889
 4    1081     402   0.37188   0.30754   0.26543   0.34965
 5    1011     207   0.20475   0.30754   0.26400   0.35108
 6    4114    1100   0.26738   0.30754   0.28595   0.32912
 7    3997    1046   0.26170   0.30754   0.28564   0.32944
 8    4061    1340   0.32997   0.30754   0.28581   0.32926
 9    4165     975   0.23409   0.30754   0.28609   0.32899
10    4001    1166   0.29143   0.30754   0.28565   0.32943
11    8084    2626   0.32484   0.30754   0.29214   0.32294
12    6587    2875   0.43647   0.30754   0.29048   0.32460
13    8062    2669   0.33106   0.30754   0.29212   0.32296
14    8218    2382   0.28985   0.30754   0.29227   0.32281
15    8096    2009   0.24815   0.30754   0.29215   0.32292
16    4049    1361   0.33613   0.30754   0.28578   0.32929
17    4147    1194   0.28792   0.30754   0.28604   0.32904
18    4202    1190   0.28320   0.30754   0.28618   0.32890
19    4100    1320   0.32195   0.30754   0.28592   0.32916
20    4070    1381   0.33931   0.30754   0.28584   0.32924
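The Table 1 columns can be reproduced with a short sketch (Python here; the function name and layout are illustrative, not from the text):

```python
import math

# Subgroup sizes (ni) and defective counts (xi) from Table 1.
n = [1075, 1085, 1121, 1081, 1011, 4114, 3997, 4061, 4165, 4001,
     8084, 6587, 8062, 8218, 8096, 4049, 4147, 4202, 4100, 4070]
x = [301, 395, 302, 402, 207, 1100, 1046, 1340, 975, 1166,
     2626, 2875, 2669, 2382, 2009, 1361, 1194, 1190, 1320, 1381]

def p_chart(n, x):
    """Return (p_bar, rows), where each row is (p_i, LCL_i, UCL_i)."""
    p_bar = sum(x) / sum(n)                    # center line
    sigma = math.sqrt(p_bar * (1 - p_bar))     # binomial sigma estimate
    rows = []
    for ni, xi in zip(n, x):
        half = 3 * sigma / math.sqrt(ni)       # limits narrow as ni grows
        rows.append((xi / ni, p_bar - half, p_bar + half))
    return p_bar, rows
```

Running it gives p̄ ≈ 0.30754 and, for subgroup 1, limits of about 0.26531 and 0.34976, matching the first row of Table 1.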

A graph of the last four columns gives us the classical p chart shown in Figure 1. Clearly, overdispersion is present.

Figure 1  p chart. [Plot of the pi values against the variable limits of Table 1; most points fall outside the tightly hugging limits.]

Converting this to the Xi chart produces Table 2, where

Ri = |pi − pi−1| (i = 2, . . . , 20)
R̄ = ΣRi/19
CL = p̄
LCL = CL − 2.66R̄
UCL = CL + 2.66R̄

Table 2  Xi chart.

 i    pi        CL        LCL       UCL       R(p)
 1   0.28000   0.30754   0.12995   0.48513
 2   0.36406   0.30754   0.12995   0.48513   0.08406
 3   0.26940   0.30754   0.12995   0.48513   0.09465
 4   0.37188   0.30754   0.12995   0.48513   0.10248
 5   0.20475   0.30754   0.12995   0.48513   0.16713
 6   0.26738   0.30754   0.12995   0.48513   0.06263
 7   0.26170   0.30754   0.12995   0.48513   0.00568
 8   0.32997   0.30754   0.12995   0.48513   0.06827
 9   0.23409   0.30754   0.12995   0.48513   0.09587
10   0.29143   0.30754   0.12995   0.48513   0.05733
11   0.32484   0.30754   0.12995   0.48513   0.03341
12   0.43647   0.30754   0.12995   0.48513   0.11163
13   0.33106   0.30754   0.12995   0.48513   0.10541
14   0.28985   0.30754   0.12995   0.48513   0.04121
15   0.24815   0.30754   0.12995   0.48513   0.04170
16   0.33613   0.30754   0.12995   0.48513   0.08799
17   0.28792   0.30754   0.12995   0.48513   0.04821
18   0.28320   0.30754   0.12995   0.48513   0.00472
19   0.32195   0.30754   0.12995   0.48513   0.03875
20   0.33931   0.30754   0.12995   0.48513   0.01736

This gives us the Xi chart in Figure 2.

Figure 2  Xi chart. [Plot of the pi values against the flat limits LCL = 0.12995 and UCL = 0.48513.]

Until now, this was as far as we could go. The secret to going further will be revealed if we depict the data in a well-known but seldom-used variant, the z chart. In the z chart, the original p values are replaced by their Z-scores, as shown in Table 3, where

zi = (pi − p̄)/(σ̂/√ni).

Note that the mean of the Z values is assumed to be zero and the standard deviation is one (so the three-sigma control limits are at ±3). The completed z chart can be seen in Figure 3.

Table 3  z chart. [For each subgroup, Table 3 lists zi with CL = 0, LCL = −3, and UCL = +3.]

Figure 3  z chart.

This may not seem very helpful—because it isn't. We can still clearly see the overdispersion problem. Obviously, ±3 is not where the control limits should be. But if not there, where should they be?

It was at this very point that Laney's great inspiration occurred. He says he suddenly remembered Dr. Donald J. Wheeler's haunting admonition: "Why assume the variation when you can measure it?" [7]. So he decided to repeat the z chart, but this time, instead of assuming that the standard deviation was one, he would actually measure it using the same methods as in an XmR chart. He knew that the root-mean-square formula for the variance would be wrong, since that would represent long-term variation. He reasoned that this would duly adhere to the basic rule of control charts that the control limits should be based on short-term variation. This gives us Table 4.

Table 4  z' chart. [Table 4 lists the zi values and their moving ranges R(z), with CL = 0, LCL = −21.65119, and UCL = +21.65119.]

Figure 4  z' chart. [The same z values as Figure 3, now plotted against the measured limits of about ±21.65.]

Table 4 and Figure 4 are constructed just like the previous XmR chart except that the values plotted are the Z-scores of the original p values, and the ranges are now the distances between consecutive Z-scores. As Figure 4 shows, this is much better. By the usual formula for the standard deviation in an XmR chart, it turns out that the estimated process standard deviation is

σ̂z = avg(Rz)/1.128 = 7.22,

instead of the assumed value of 1. The control limits in the z' chart are therefore 7.22 times wider than they were in the z chart. We may conclude that there is 7.22 times more short-term variation in our process than the binomial assumption alone can account for. As Roger Hoerl at GE explained it, this merely redefines the "rational subgroup" for this application.

This is fine, but many people are confused by z charts; they don't understand the units. So all that remains is to undo the original z-transformation formula so that everything is once again depicted on the p-plane. This gives us a modified p chart, which Laney dubbed the p' chart:

UCLi/LCLi = p̄ ± 3(σ̂/√ni)·σ̂z.

Note the addition of the last term. It magnifies our estimate of process variability to account for the batch-to-batch variation that exists in addition to the usual binomial sampling error. Our final result is shown in Table 5 and Figure 5.
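The whole construction (Z-scores, moving ranges, σ̂z, and the widened limits) can be sketched in a few lines (Python; the function name is illustrative, the data are the Table 1 counts):

```python
import math

# Subgroup sizes and defective counts from Table 1.
n = [1075, 1085, 1121, 1081, 1011, 4114, 3997, 4061, 4165, 4001,
     8084, 6587, 8062, 8218, 8096, 4049, 4147, 4202, 4100, 4070]
x = [301, 395, 302, 402, 207, 1100, 1046, 1340, 975, 1166,
     2626, 2875, 2669, 2382, 2009, 1361, 1194, 1190, 1320, 1381]

def laney_p_prime(n, x):
    """Return (p_bar, sigma_z, limits); limits[i] = (LCL_i, UCL_i)."""
    p_bar = sum(x) / sum(n)
    sigma = math.sqrt(p_bar * (1 - p_bar))
    # Z-scores of the subgroup proportions
    z = [(xi / ni - p_bar) / (sigma / math.sqrt(ni)) for ni, xi in zip(n, x)]
    # short-term variation of the z values, measured as in an XmR chart
    mr = [abs(a - b) for a, b in zip(z[1:], z[:-1])]
    sigma_z = (sum(mr) / len(mr)) / 1.128
    limits = [(p_bar - 3 * sigma / math.sqrt(ni) * sigma_z,
               p_bar + 3 * sigma / math.sqrt(ni) * sigma_z) for ni in n]
    return p_bar, sigma_z, limits
```

With the Table 1 data this yields σ̂z ≈ 7.22, in agreement with the text; when no overdispersion is present, σ̂z comes out near 1 and the limits collapse back to the classical p chart limits.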

Table 5  p' chart. [For each subgroup, Table 5 lists zi, pi, CL = 0.30754, the variable limits LCLi/UCLi = 0.30754 ∓ 3(σ̂/√ni)(7.22), and R(z).]

Figure 5  p' chart.

This is the final product. Note that point #12 is shown to be a likely special cause. That was not predicted by the XmR chart, which was unable to allow for the fact that this point came from a rather large subgroup.

It should be pointed out that the p' chart is completely consistent with both of its "parents." With a little algebra, it can be shown that:

1. If there is no overdispersion present, σ̂z = 1, so the control limits are identical to those of the p chart. (In practice, however, this is not always true.)
2. If all the subgroups are the same size, the (flat) control limits will be exactly where the XmR chart would have put them.

As a hybrid of the p chart and the XmR chart, the p' chart gives us the best of both worlds: it puts the control limits out where they belong (like the XmR chart), and it gives us the variable control limits we expect from a p chart.

Example 2. Defects per unit: The same technique may be employed for the case of measuring defects per unit (the u chart). Just as in the binomial case, the Poisson assumption that underlies the u chart requires that the population parameter remain constant over time. Overdispersion can exist here too. Consider the data set in Table 6 and the resulting u chart in Figure 6.

Table 6  u chart.

 i     n      x      u         CL        LCL       UCL
 1    103    194    1.88350   2.07911   1.65288   2.50534
 2    102    398    3.90196   2.07911   1.65080   2.50742
 3    101    204    2.01980   2.07911   1.64868   2.50954
 4     98     92    0.93878   2.07911   1.64215   2.51608
 5    101    750    7.42574   2.07911   1.64868   2.50954
 6    197    598    3.03553   2.07911   1.77091   2.38731
 7    199    992    4.98492   2.07911   1.77247   2.38575
 8    200    602    3.01000   2.07911   1.77323   2.38499
 9    194    791    4.07732   2.07911   1.76854   2.38968
10    201      8    0.03980   2.07911   1.77400   2.38422
11    499   1009    2.02204   2.07911   1.88546   2.27276
12    500    508    1.01600   2.07911   1.88566   2.27256
13    496    498    1.00403   2.07911   1.88488   2.27334
14    500    996    1.99200   2.07911   1.88566   2.27256
15    504    492    0.97619   2.07911   1.88643   2.27179
16    300   1199    3.99667   2.07911   1.82936   2.32886
17    305    599    1.96393   2.07911   1.83142   2.32680
18    303    592    1.95380   2.07911   1.83060   2.32762
19    291    293    1.00687   2.07911   1.82553   2.33269
20    292    591    2.02397   2.07911   1.82597   2.33226
The only difference here is that the within-subgroups population standard deviation is estimated by σ̂ = √u̅ instead of √(p̄(1 − p̄)).
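A matching sketch for the u chart limits of Table 6 (Python; the function name is illustrative):

```python
import math

# Subgroup sizes (n) and defect counts (x) from Table 6.
n = [103, 102, 101, 98, 101, 197, 199, 200, 194, 201,
     499, 500, 496, 500, 504, 300, 305, 303, 291, 292]
x = [194, 398, 204, 92, 750, 598, 992, 602, 791, 8,
     1009, 508, 498, 996, 492, 1199, 599, 592, 293, 591]

def u_chart(n, x):
    """Return (u_bar, rows), where each row is (u_i, LCL_i, UCL_i)."""
    u_bar = sum(x) / sum(n)                # center line
    sigma = math.sqrt(u_bar)               # Poisson sigma estimate
    rows = []
    for ni, xi in zip(n, x):
        half = 3 * sigma / math.sqrt(ni)
        rows.append((xi / ni, u_bar - half, u_bar + half))
    return u_bar, rows
```

This reproduces u̅ = 2.07911 and the subgroup-1 limits 1.65288/2.50534 shown in Table 6; the u' chart then applies the same σ̂z widening as in Example 1.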

Figure 6  u chart.

Clearly, there is a lot of overdispersion here. Table 7 and Figure 7 show the X chart for these data.

Table 7  X chart. [Table 7 plots the individual u values with the flat XmR limits CL = 2.07911, LCL = −3.0896, and UCL = 7.24782.]

Figure 7  X chart.

Finally, Table 8 and Figure 8 show the final u' chart.

Table 8  u' chart. [For each subgroup, Table 8 lists zi = (ui − u̅)/(σ̂/√ni), ui, CL = 2.07911, the variable limits CL ∓ 3(σ̂/√ni)·σ̂z, and R(z).]

Figure 8  u' chart.

We can see that point #5 is not out of control, even though the X chart suggested that it was. In this example, σ̂z was computed as 16.31. There was over 16 times more (short-run) variation in the process than the Poisson assumption alone could have predicted.

Note: There have been cases where σ̂z < 1. This may be an indication that the subgroups are not independent—that there is positive autocorrelation in the data.

Conclusion

The p' chart and the u' chart seem to represent the general case for attribute data, where there is both within-subgroup variation and between-subgroup variation in the data, whereas the classical p chart and u chart only work in the special case where there is no between-subgroup variation present. While such a condition might exist in a very carefully controlled environment, experience teaches that between-subgroup variation is quite common in real-world processes. That being the case, and considering that the p' chart/u' chart will be virtually identical to the p chart/u chart if the overdispersion is found to be negligible, it is a good idea to always start with the p' or u' versions and see what happens.

Bibliography
1. Breyfogle, Forrest. 1999. Implementing Six Sigma. New York: John Wiley & Sons, 177–178.
2. Heimann, Peter A. 1996. "Attributes Control Charts with Large Sample Sizes." Journal of Quality Technology 28:451–459.

3. Jones, G., and K. Govindaraju. 2000. "Graphical Method for Checking Attribute Control Chart Assumptions." Quality Engineering 13, no. 1: 19–26.
4. Laney, David B. 2002. "Improved Control Charts for Attributes." Quality Engineering 14, no. 4: 531–537.
5. Laney, David B. 2006. "P-Charts and U-Charts Work (But Only Sometimes)." Quality Digest, November.
6. Western Electric. 1956. Statistical Quality Control Handbook. Indianapolis, IN: AT&T Customer Information Center.
7. Wheeler, Donald J. 1993. Understanding Variation—The Key to Managing Chaos. Knoxville, TN: SPC Press, 75.

Pareto Analysis

Pareto analysis is used to assist in prioritizing or focusing activities. In 1897, Vilfredo Pareto (1848–1923), an Italian economist, showed that the distribution of income was not evenly distributed, but that it was concentrated to a relatively small percentage of the population. The relationship was based on the following formula:

log N = log A − a log x,

where: N = number of individuals
x = income
A and a = constants.

Pareto's "law" of income distribution was published in Manuale d'Economia Politica (1906). Since 1951, various questions have been raised to the effect that Pareto was the wrong man to identify with the concentration of defects. Another candidate who might be associated with the application of defect concentration would be M. O. Lorenz, who depicted the concentration of wealth in the now-familiar curves, using cumulative percent of population on one axis and cumulative percent of wealth on the other.

The following illustrates the basic procedure for performing a Pareto analysis:

1. Decide the objective of the Pareto analysis (examples include defects by type, defects by location, dollars of sales by location, or numbers of physicians by specialty).
2. Develop a list of the responses to be classified. A list of defects for a paint finish might include smears, voids, scratches, and runs. A list of geographical locations might include Seattle, Chicago, New York, and so on.
3. Design a tally sheet to collect the data.
4. Collect data.
5. Arrange the collected data in order of frequency of occurrence, with the largest frequency first and the smallest last.
6. Determine the percentage of each classification as a percentage of the total.
7. Accumulate the percent distribution in a cumulative manner until a total of 100 percent has been obtained.

An example of a Pareto analysis follows:

Steps 1–4. Decide the objective, develop a list of responses, design a tally sheet, and collect the data.

Customer-satisfaction survey cards were collected during the first quarter of 1998 from Dr. Wise's dental practice. A score on a Likert-type format of three or less is considered not acceptable from a quality-improvement perspective. Dr. Wise wants to find out what areas he needs to focus on in order to improve the quality of his service. The initial collected data yield the following:

Area of concern             Number of responses
Courteousness               10
Cost                        35
Procedure explanation       11
Attractiveness of office     3
Mouthwash taste              1
Pain                        60
Magazine selection           8
Uniforms                     2
Promptness of schedule       2
Professionalism              6
Cleanliness                 12
Total                      150

Steps 5–7. Rearrange the data in descending order, and determine the percentage that each area is of the total and the cumulative percentage of total.

Area                        Number of    Percentage of      Cumulative
                            responses    total responses    percentage of total
Pain                            60           40.0             40.0
Cost                            35           23.3             63.3
Cleanliness                     12            8.0             71.3
Procedure explanation           11            7.3             78.7
Courteousness                   10            6.7             85.3
Magazine selection               8            5.3             90.7
Professionalism                  6            4.0             94.7
Attractiveness of office         3            2.0             96.7
Promptness of schedule           2            1.3             98.0
Uniforms                         2            1.3             99.3
Mouthwash taste                  1            0.7            100.0
Totals:                        150          100.0

From this tabular Pareto analysis, it can be readily seen that 63 percent of Dr. Wise's problems are caused by 18 percent of the problem list.
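Steps 5–7 are easy to automate; a minimal sketch (Python, using the survey counts above; the helper name is illustrative):

```python
# Survey responses from Dr. Wise's practice (Table above).
responses = {
    "Pain": 60, "Cost": 35, "Cleanliness": 12, "Procedure explanation": 11,
    "Courteousness": 10, "Magazine selection": 8, "Professionalism": 6,
    "Attractiveness of office": 3, "Promptness of schedule": 2,
    "Uniforms": 2, "Mouthwash taste": 1,
}

def pareto_table(counts):
    """Sort descending and add percent and cumulative-percent columns."""
    total = sum(counts.values())
    rows, cum = [], 0.0
    for area, k in sorted(counts.items(), key=lambda kv: -kv[1]):
        pct = 100 * k / total
        cum += pct
        rows.append((area, k, round(pct, 1), round(cum, 1)))
    return rows
```

For this data set the first two rows come out as Pain (60, 40.0 percent) and Cost (35, cumulative 63.3 percent), matching the table above.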


This information can be presented in a graphical manner as follows:
[Pareto diagram: horizontal bar chart (scale 5 to 45) of the areas of concern in descending order: Pain, Cost, Cleanliness, Procedure explanation, Courteousness, Magazine selection, Professionalism, Office attractiveness, Promptness, Uniforms, Mouthwash.]

It is clear that Dr. Wise should focus on reducing patient discomfort and lowering his
fees.
In this example, an incident of Magazine selection is no more important than an incident of
Cleanliness when, in fact, an incident of Cleanliness could result in an infection, possible litigation, and a potential malpractice suit.
By assigning weighting factors to different levels of severity, such as critical, major,
and minor, the Pareto analysis can be augmented to reflect the level of impact for the
respective assignments.
This assignment of weighting factors results in a weighted Pareto analysis.
Assume that Dr. Wise has categorized the various areas of concern into three groups
and has assigned a weighting factor of 30, 5, and 1 in order of decreasing severity.
Category    Weighting factor
Critical    30
Major        5
Minor        1


With this scale, it will require 30 minor incidents to impact the Pareto analysis as much
as one critical incident. Likewise, it will require six major incidents to impact the analysis
as much as one critical incident.
Each area of concern will be multiplied by the weighting factor, and the resulting product will be used to develop the Pareto analysis.
Area of                     Number of
concern                     responses    Category    Factor    Pareto response
Courteousness                   10       Major          5            50
Cost                            35       Major          5           175
Procedure explanation           11       Major          5            55
Attractiveness of office         3       Major          5            15
Mouthwash taste                  1       Minor          1             1
Pain                            60       Major          5           300
Magazine selection               8       Minor          1             8
Uniforms                         2       Minor          1             2
Promptness of schedule           2       Major          5            10
Professionalism                  6       Critical      30           180
Cleanliness                     12       Critical      30           360
Total                          150                                 1156

The Pareto responses are reordered in a descending manner, and the percentage of total
and the cumulative percentage of total are determined.
Area of                     Weighted           Percentage    Cumulative
concern                     Pareto response    of total      percentage of total
Cleanliness                      360              31.1            31.1
Pain                             300              26.0            57.1
Professionalism                  180              15.6            72.7
Cost                             175              15.1            87.8
Procedure explanation             55               4.8            92.6
Courteousness                     50               4.3            96.9
Attractiveness of office          15               1.2            98.1
Promptness of schedule            10               0.9            99.0
Magazine selection                 8               0.6            99.6
Uniforms                           2               0.3            99.9
Mouthwash                          1               0.1           100.0
Total                           1156             100

Note that the number one and number two rankings of concerns are now Cleanliness
and Pain. Of the responses, 72.7 percent are described by 27.3 percent (3 out of 11) of the
concerns list.
Optionally, a Pareto diagram can be constructed.
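The weighting step can be sketched directly from the two tables above (Python; the helper name is illustrative):

```python
# Survey counts and severity classes from the example above.
counts = {
    "Courteousness": 10, "Cost": 35, "Procedure explanation": 11,
    "Attractiveness of office": 3, "Mouthwash taste": 1, "Pain": 60,
    "Magazine selection": 8, "Uniforms": 2, "Promptness of schedule": 2,
    "Professionalism": 6, "Cleanliness": 12,
}
category = {
    "Courteousness": "Major", "Cost": "Major", "Procedure explanation": "Major",
    "Attractiveness of office": "Major", "Mouthwash taste": "Minor",
    "Pain": "Major", "Magazine selection": "Minor", "Uniforms": "Minor",
    "Promptness of schedule": "Major", "Professionalism": "Critical",
    "Cleanliness": "Critical",
}
weight = {"Critical": 30, "Major": 5, "Minor": 1}

def weighted_pareto(counts, category, weight):
    """Multiply each count by its severity weight, then sort descending."""
    scored = {a: k * weight[category[a]] for a, k in counts.items()}
    return sorted(scored.items(), key=lambda kv: -kv[1])
```

Sorting the weighted responses puts Cleanliness (360) and Pain (300) on top, and the top three areas cover 72.7 percent of the weighted total of 1156.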


Problem:
Construct a Pareto diagram for the distribution of letters in the following paragraph. What
percentage of the letters are responsible for 80 percent of the total letters in the passage?
Black Huckleberry has fair landscape value and is quite attractive in the fall. It
provides fair cover but no nesting to speak of. Fruit is available for a short period
of time during the summer only and is a favorite of more than 45 species of birds
such as grosbeaks, towhees, bluebirds, robins, chickadees, and catbirds.
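One way to work the problem is to tally the letters programmatically; a sketch (Python; any equivalent tally works, and the cutoff logic is one reasonable reading of "80 percent"):

```python
from collections import Counter

passage = (
    "Black Huckleberry has fair landscape value and is quite attractive in "
    "the fall. It provides fair cover but no nesting to speak of. Fruit is "
    "available for a short period of time during the summer only and is a "
    "favorite of more than 45 species of birds such as grosbeaks, towhees, "
    "bluebirds, robins, chickadees, and catbirds."
)

def letter_pareto(text):
    """Tally letters (case-folded), most frequent first, with cumulative %."""
    counts = Counter(c for c in text.lower() if c.isalpha())
    total = sum(counts.values())
    rows, cum = [], 0.0
    for letter, k in counts.most_common():
        cum += 100 * k / total
        rows.append((letter, k, cum))
    return rows

rows = letter_pareto(passage)
# Fraction of the distinct letters needed to cover 80 percent of all letters:
needed = next(i + 1 for i, r in enumerate(rows) if r[2] >= 80) / len(rows)
```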

Bibliography
Besterfield, D. H. 1994. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall.
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th edition. New York:
McGraw-Hill.
Gryna, F. M., R.C.H. Chua, and J. A. De Feo. 2007. Juran’s Quality Planning and Analysis
for Enterprise Quality. 5th edition. New York: McGraw-Hill.
Hayes, G. E., and H. G. Romig. 1982. Modern Quality Control. 3rd edition. Encino, CA:
Glencoe.
Juran, J. M. 1960. “Pareto, Lorenz, Cournot, Bernoulli, Juran and Others.” Industrial
Quality Control 17, no. 4 (October): 25.



Pre-Control

Pre-control is a technique for validating a process to perform at a minimal level of conformance to the specification. F. E. Satterthwaite (1954) introduced pre-control in 1954 and identified the development team as consisting of himself, C. W. Carter,
W. R. Purcell, and Dorian Shainin. Shainin has been responsible for the popularization of
pre-control.
Pre-control attempts to control a process relative to the specification requirements,
whereas statistical process control (SPC) monitors the process relative to natural variations
as described by the normal distribution and its associated average and standard deviation.
SPC is truly based on historical characterization of a process and is independent of any
specification requirements.

Description
The specification (bilateral) is divided into four equal parts. The area from the specification nominal to one-half of the upper specification and the area from the nominal to one-half of the lower specification are called the pre-control limits. The area between the two
pre-control limits is called the green zone. The areas outside the pre-control limits but not
exceeding the upper or lower specification limits are called the yellow zone. Areas outside
the specification limits are called the red zone. Data used in the pre-control technique are
frequently placed on a chart, and the zones are colored according to the aforementioned
scheme, leading to what is referred to as a rainbow chart.
[Rainbow chart: the print tolerance is divided into colored zones. Outside either specification limit is Red; between a specification limit and the nearest PC limit is Yellow; between the two PC limits, centered on the nominal and spanning one-half the print tolerance, is Green.]


Example:
What are the pre-control limits if the specification is 4.50 ± 0.40?
1. Divide the tolerance by 4:
0.80/4 = 0.20.
2. Subtract the value from nominal for the lower PC limit:
4.50 − 0.20 = 4.30.
3. Add the value to nominal for the upper PC limit:
4.50 + 0.20 = 4.70.
Thus, the two pre-control limits are located at 4.30 and 4.70, one-half the normal
tolerance.
Note: If the specification is unilateral (single sided), the sole pre-control limit is placed
halfway between the target, or nominal, and the single specification limit.
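The limit calculation is mechanical; a sketch (Python; the function name is illustrative):

```python
def precontrol_limits(lower_spec, upper_spec):
    """Pre-control limits sit one-quarter of the tolerance inside each
    specification limit, so the green zone spans the middle half."""
    quarter = (upper_spec - lower_spec) / 4
    return lower_spec + quarter, upper_spec - quarter
```

For the 4.50 ± 0.40 example (specification limits 4.10 and 4.90), this returns the pre-control limits 4.30 and 4.70.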
Pre-Control Rules
1. Initial setup: five consecutive samples are measured. All measurements are
required to fall within the pre-control limits before proceeding.
2. Following the initial setup approval, two consecutive samples are taken from the
process and measured.
a. If both units fall in the green zone, continue.
b. If one unit falls inside the pre-control limits and the other falls in the yellow
zone, continue.
c. If both units fall inside the same yellow zone, the process is stopped and adjustments are made. After the adjustments have been made, the setup approval is
again required. Five consecutive units must fall within the pre-control limits.
d. If both of the units measured fall in opposite yellow zones, then it is likely that
variation has increased. The process is stopped, and efforts are made to reduce
variation. Again, five consecutive measurements are required to fall within the
pre-control limits.
e. A single measurement outside the specification limits (red zone) will result in
stopping the process and adjusting. Following adjustment, setup approval is
required.
After any adjustment or process stoppage, five consecutive measurements are required
to fall inside the pre-control limits.
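The running rules (2a–2e) can be encoded as a small decision function; this is a sketch with hypothetical names and return strings, not part of the original procedure:

```python
def precontrol_decision(a, b, lower_spec, upper_spec):
    """Classify a pair of measurements (a, b) per pre-control rules 2a-2e."""
    quarter = (upper_spec - lower_spec) / 4
    pc_low, pc_high = lower_spec + quarter, upper_spec - quarter

    def zone(v):
        if v < lower_spec or v > upper_spec:
            return "red"
        if v < pc_low:
            return "yellow-low"
        if v > pc_high:
            return "yellow-high"
        return "green"

    za, zb = zone(a), zone(b)
    if "red" in (za, zb):
        return "stop, adjust, and re-qualify the setup"   # rule 2e
    if za.startswith("yellow") and zb.startswith("yellow"):
        if za == zb:
            return "stop and adjust"                      # rule 2c
        return "stop and reduce variation"                # rule 2d
    return "continue"                                     # rules 2a, 2b
```

For a 4.10–4.90 specification, a pair (4.5, 4.6) continues, (4.2, 4.25) stops for adjustment, and (4.2, 4.8) stops for variation reduction.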
The statistical foundation of pre-control is based on the assumption that the process
average is equal to the nominal or target value and that normal variation (average ±3 standard deviations) is equal to the product or process tolerance. That is, Cpk = 1.00 and


Cp = 1.00. For a normal distribution, approximately 86 percent of the data (12 out of 14)
will fall inside the pre-control limits (the green zone), and approximately 7 percent (1 out
of 14) will fall outside the pre-control limits but inside the specification (the yellow zone).
The probability of getting measurements outside the specification is very low, approximately 3/1000.
The probabilities for various outcomes can be seen in the following table:

Decision           Zone of A       Zone of B       Probability
Stop; get help     Lower yellow    Upper yellow    1/14 × 1/14 = 1/196
Stop; get help     Upper yellow    Lower yellow    1/14 × 1/14 = 1/196
Stop and adjust    Lower yellow    Lower yellow    1/14 × 1/14 = 1/196
Stop and adjust    Upper yellow    Upper yellow    1/14 × 1/14 = 1/196
Continue           Green           Green           12/14 × 12/14 = 144/196
Continue           Green           Lower yellow    12/14 × 1/14 = 12/196
Continue           Green           Upper yellow    12/14 × 1/14 = 12/196
Continue           Lower yellow    Green           1/14 × 12/14 = 12/196
Continue           Upper yellow    Green           1/14 × 12/14 = 12/196
                                                   Total = 196/196

Continue to monitor the process, sampling at a frequency equivalent to six pairs (A,B)
chosen between each adjustment or stoppage.
Time between adjustments, hours    Time between measurements, minutes
1                                  10
2                                  20
3                                  30
4                                  40
5                                  50
etc.                               etc.

A Comparison of Pre-Control and SPC
Pre-control is specification oriented and detects changes in the capability of a process to
meet the specification, whereas the objective of SPC is to detect changes in the location
and variation statistic (average and standard deviation, for example). Both will force the
operator to take measurements of the process. One argument for the use of pre-control over
SPC is that pre-control does not burden the operator with the task of plotting the data.


Plotting the data using traditional SPC methods is vital in detecting process changes,
identifying and removing (special) causes of variation, and distinguishing the differences
in assignable and chance (common) causes.
If we set the specification limits at ±2, the pre-control limits become ±1. Pre-control
assumes that the process average equals the nominal and that Cp = 1.00. This implies that
σ = 2/(3Cp). We may now determine the probability that a pre-control chart scheme will give a signal (both observations in a yellow zone or one in the red zone) for any degree of process shift δ and any process capability Cp. This probability β* is the probability of getting a signal and is calculated as follows:

g = Pr[X in green zone; δ]
  = Pr[−1 ≤ X ≤ +1; δ]
  = Pr[(−1 − δ)/σ ≤ Z ≤ (1 − δ)/σ]
  = Pr[3(−1 − δ)Cp/2 ≤ Z ≤ 3(1 − δ)Cp/2]
  = φ[3(1 − δ)Cp/2] − φ[3(−1 − δ)Cp/2]

y = Pr[X in yellow zone; δ]
  = Pr[−2 ≤ X ≤ +2; δ] − g
  = φ[3(2 − δ)Cp/2] − φ[−3(2 + δ)Cp/2] − g

β* = Pr[signal; δ] = 1 − {g(g + 2y)}

1/β* = average run length (ARL) for a signal, given Cp and the amount of process shift δ,

where: δ = amount of process shift from nominal
φ = cumulative distribution function of the standard normal distribution
Similarly, the probability of detecting a process change (a single point outside a control limit) using SPC can be determined.
β = Pr[sample average exceeds a control limit; δ]
where: δ = process shift from nominal
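The pre-control signal probability reduces to a few lines of code; this sketch (Python, illustrative names) treats y as the yellow-only probability, with the green portion subtracted out, and reproduces the ARL PC column of Table 1:

```python
import math

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pc_arl(delta, cp):
    """ARL of the pre-control pair rule for shift delta and capability Cp."""
    g = norm_cdf(3 * (1 - delta) * cp / 2) - norm_cdf(3 * (-1 - delta) * cp / 2)
    # yellow-only probability: inside the specs but outside the green zone
    y = (norm_cdf(3 * (2 - delta) * cp / 2)
         - norm_cdf(-3 * (2 + delta) * cp / 2) - g)
    beta_star = 1 - g * (g + 2 * y)   # P(signal) for one (A, B) pair
    return 1 / beta_star
```

For δ = 0 this gives an ARL of about 44.4 at Cp = 1.00 and about 9.6 at Cp = 0.75, matching Table 1.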


Table 1  ARLs of pre-control (PC) and average control charts (CC) with subgroup size n = 2 as a function of level shift δ and process capability index, Cp.

            Cp = 0.75               Cp = 1.00               Cp = 1.30
 δ      Percent   ARL    ARL    Percent   ARL    ARL    Percent   ARL    ARL
        defective  CC     PC    defective  CC     PC    defective  CC     PC
0.00     2.445   370.4    9.6    0.270   370.4   44.4    0.009   370.4  370.3
0.10     2.535   328.9    9.4    0.300   302.0   41.4    0.012   266.1  310.7
0.20     2.810   243.0    8.7    0.395   188.3   33.9    0.022   135.0  198.1
0.30     3.274   164.6    7.7    0.567   108.8   25.6    0.044    66.3  110.6
0.40     3.940   108.8    6.7    0.836    63.4   18.5    0.087    34.2   59.9
0.50     4.821    72.4    5.7    1.231    38.1   13.2    0.166    18.8   33.1
0.60     5.935    48.9    4.8    1.791    23.8    9.5    0.307    11.1   19.2
0.70     7.299    33.7    4.1    2.561    15.4    6.9    0.547     6.9   11.7
0.80     8.932    23.8    3.5    3.594    10.4    5.2    0.941     4.6    7.5
0.90    10.850    17.1    3.0    4.948     7.3    4.0    1.565     3.3    5.1
1.00    13.066    12.6    2.6    6.681     5.3    3.1    2.514     2.4    3.6

            Cp = 1.75
 δ      Percent    ARL      ARL
        defective   CC       PC
0.00     0.000    370.4   13273.1
0.10     0.000    214.6    8200.0
0.20     0.000     82.8    2847.0
0.30     0.000     33.7     890.7
0.40     0.001     15.4     297.6
0.50     0.004      7.9     110.5
0.60     0.012      4.5      45.9
0.70     0.032      2.9      21.3
0.80     0.082      2.0      11.0
0.90     0.194      1.6       6.3
1.00     0.433      1.3       3.9

Bibliography
Besterfield, D. H. 1994. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall.
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th edition. New York:
McGraw-Hill.
Ledolter, J., and A. Swersey. 1997. “An Evaluation of Pre-Control.” Journal of Quality
Technology 29, no. 2 (April): 163–171.
Montgomery, D. C. 1996. Introduction to Statistical Quality Control. 3rd edition. New York:
John Wiley & Sons.
Satterthwaite, F. E. 1954. A Simple, Effective Process Control Method, Report 54-1.
Boston: Rath & Strong.
Wheeler, D. J., and D. S. Chambers. 1992. Understanding Statistical Process Control.
2nd edition. Knoxville, TN: SPC Press.


Process Capability Indices

The relationship of the process specification requirements and the actual performance of
the process can be described by one of several index numbers: Cpk, Cr, and Cp. Each of
these index numbers represents a single value that serves as a measure of how well the
process can produce parts or services that comply with the specification requirement. All
process capability indices are applicable only to variables measurements and assume that
the data are normally distributed.

Cpk
The Cpk index uses both the average and standard deviation of the process to determine
whether the process is capable of meeting the specification. For processes where the average is closer to one specification than the other (process not centered), which is more common than being centered, or where there is only a single specification limit (unilateral), the
following table gives the expected level of nonconformance in parts per million (ppm).
Cpk value    Nonconforming, ppm    Rate of nonconformance
  1.60                 1               1/1,000,000
  1.50                 3               1/333,333
  1.40                13               1/76,923
  1.30                48               1/20,833
  1.20               159               1/6,289
  1.10               483               1/2,070
  1.00              1350               1/741
  0.90              3467               1/288
  0.80              8198               1/122
  0.70             17,865              1/56
Cpk is defined as

Cpk = min(Cpu, Cpl),

where:

Cpu = (USL − X̄)/(3σ)    Cpl = (X̄ − LSL)/(3σ)

USL = upper specification limit
LSL = lower specification limit
σ = population standard deviation

σ may be estimated from:

1. The sample standard deviation S
2. Control chart data from average/range charts or average/standard deviation charts:

   σ = R̄/d₂  or  σ = S̄/c₄

Cpk may be calculated for unilateral or bilateral specifications, with or without a target value. The determination of Cpk is accomplished in the following four basic steps:

1. Collect data to determine the average and standard deviation
2. Sketch a linear graph indicating the upper and lower specification, average, and target if available
3. Determine if the average is closer to the upper or lower specification
4. Select the appropriate relationship for calculating the Cpk

Example 1:

Step 1. Collect data. Forty-five measurements have been made on the length of a part. The specification is bilateral with a requirement of 4.500 ± 0.010. The average X̄ = 4.505, and the standard deviation S = 0.0011.

Step 2. Sketch a linear graph.

  LSL = 4.490     Target = 4.500     X̄ = 4.505     USL = 4.510

Steps 3 and 4. Determine the location of the average and select the appropriate relationship for calculating the Cpk. The average is closer to the upper specification; therefore, the defining relationship for Cpk is

Cpk = Cpu = (USL − X̄)/(3σ) = (4.510 − 4.505)/((3)(0.0011)) = 1.52.
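A minimal sketch of this calculation in code (the function name is mine, not the book's):

```python
# Sketch of the four-step Cpk calculation from Example 1; the data
# values (spec 4.500 +/- 0.010, X-bar = 4.505, S = 0.0011) come from the text.
def cpk(usl, lsl, xbar, sigma):
    cpu = (usl - xbar) / (3 * sigma)
    cpl = (xbar - lsl) / (3 * sigma)
    return min(cpu, cpl)   # Cpk = min(Cpu, Cpl)

value = cpk(usl=4.510, lsl=4.490, xbar=4.505, sigma=0.0011)
print(round(value, 2))   # 1.52, as in Example 1
```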

This process exhibits a high degree of capability, yielding approximately three ppm as a defective rate.

Note that the Cpk can be negative.

Example 2:
The minimum latching strength for a component is 7500 pounds. There is no maximum requirement or target. The average and standard deviation for 130 measurements are X̄ = 6800 and S = 125. What is the Cpk?

Sketch the linear graph as follows:

  X̄ = 6800          LSL = 7500

Cpk = Cpl = (X̄ − LSL)/(3S) = (6800 − 7500)/((3)(125)) = −1.87

The process is extremely poor; the process average is below the lower specification limit.

Confidence Interval for Cpk

The degree of confidence we have in the Cpk is related to the sample size of the data used to determine the Cpk and the amount of risk we are willing to take (the α risk). The following equation can be used to determine the confidence interval for the Cpk statistic:

Cpk ± Zα/2 √( 1/(9n) + Cpk²/(2n − 2) ),

where: n = sample size
α = risk, 1 − risk = confidence.

Note: For single-sided limits, use Zα rather than Zα/2.

Example 1. Two-sided confidence interval:
Determine the 95 percent confidence interval given a Cpk of 1.15 and a sample size of n = 45. Another way to phrase this problem is: "I do not know the true Cpk, but I am 95 percent confident that it is between _____ and _____."

Select the Z value from the following table:

Risk, %    Confidence, %    Zα/2    Zα
  0.1          99.9         3.29    3.09
  1.0          99.0         2.58    2.33
  5.0          95.0         1.96    1.64
 10.0          90.0         1.64    1.28

We are assuming a 5 percent risk that our interval will not contain the true Cpk; therefore, the level of confidence is 95 percent, and the Zα/2 value is 1.96.

Cpk ± Zα/2 √( 1/(9n) + Cpk²/(2n − 2) ) = 1.15 ± 1.96 √( 1/((9)(45)) + (1.15)²/((2)(45) − 2) ) = 1.15 ± 0.26

The lower limit is 1.15 − 0.26 = 0.89. The upper limit is 1.15 + 0.26 = 1.41. An appropriate statement would be: "I do not know the true Cpk, but I am 95 percent confident that it is between 0.89 and 1.41."

Notice the wide spread in the confidence interval. This spread, or error of estimate, can be reduced by doing one or both of the following:

1. Increasing the sample size
2. Decreasing the level of confidence

Example 2. Single-sided confidence limit:
Calculating a single-sided confidence limit is performed the same way as a confidence interval, except the risk is not divided by two. If the limit is a lower limit, we subtract the error term from the Cpk estimate. If the limit is an upper limit, we add the error term to the Cpk estimate.

A Cpk of 1.33 has been determined from a sample of 100 measurements. What is the lower 90 percent confidence limit?

n = 100, Cpk = 1.33, Confidence = 0.90, α = 0.10, Zα = 1.28

This problem can also be phrased: "I do not know the true Cpk, but I am 90 percent confident that it is not less than _____."

Cpk − Zα √( 1/(9n) + Cpk²/(2n − 2) ) = 1.33 − 1.28 √( 1/((9)(100)) + (1.33)²/((2)(100) − 2) ) = 1.33 − 0.13 = 1.20

An appropriate statement would be: "I do not know the true Cpk, but I am 90 percent confident that it is not less than 1.20."

Determination of the appropriate sample size for a single-sided confidence limit for Cpk can be made given a level of confidence and a predesired amount of error. The single-sided error for Cpk is given by

E = Zα √( 1/(9n) + Cpk²/(2n − 2) ).

Rearranging and solving for n yields the following quadratic equation:

−18n² + n[18 + (2 + 9Cpk²)/(E²/Zα²)] + 2/(E²/Zα²) = 0,

which is in the general form of an² + bn + c = 0, with

a = −18
b = 18 + (2 + 9Cpk²)/(E²/Zα²)
c = 2/(E²/Zα²).

The sample size n may be determined using the quadratic formula:

n = (−b ± √(b² − 4ac))/(2a).

Notice that the b term contains the Cpk; a prior estimate for the Cpk may be used as a starting point.
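The interval and limit arithmetic of the two examples above can be sketched in code (a hedged sketch; the helper name is mine):

```python
# Error term for the Cpk confidence bound: Z * sqrt(1/(9n) + Cpk^2/(2n-2))
from math import sqrt

def cpk_error(cpk, n, z):
    return z * sqrt(1 / (9 * n) + cpk**2 / (2 * n - 2))

# Two-sided 95 percent interval (Cpk = 1.15, n = 45, Z = 1.96)
e = cpk_error(1.15, 45, 1.96)
print(round(1.15 - e, 2), round(1.15 + e, 2))   # 0.89 1.41

# Single-sided lower 90 percent limit (Cpk = 1.33, n = 100, Z = 1.28)
print(round(1.33 - cpk_error(1.33, 100, 1.28), 2))   # 1.2, the text's 1.20
```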

Consider the following case: We want to determine the Cpk with an acceptable error of 0.166 and a level of confidence of 90 percent. It is estimated that the Cpk will be approximately 1.20 based on historical data. What is the appropriate sample size required?

C = 90%, Cpk = 1.20, E = 0.166, Zα = 1.28

a = −18
b = 18 + (2 + 9Cpk²)/(E²/Zα²) = 907.35
c = 2/(E²/Zα²) = 118.8

n = (−b ± √(b² − 4ac))/(2a) = (−907.35 − √(823,283 − 4(−18)(118.8)))/((2)(−18))

n = 50.5, or 51

Cr and Cp Indices

There are two indices that do not use the process average to determine a process capability; rather, they measure the proportion of the specification tolerance that is being consumed by normal variation. These index numbers are Cr (capability ratio) and Cp. Normal variation is defined as six standard deviations.

Cr = 6S/(USL − LSL),  %Cr = Cr × 100,  and Cp is the reciprocal of Cr:  Cp = (USL − LSL)/(6S) = 1/Cr.

Of these two, the %Cr is more comprehensible, as it is a straightforward percentage: the percentage of the tolerance that is being consumed by normal variation. In order to achieve a Cpk greater than 1.33 (world-class quality), the %Cr must be less than 75 percent.
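The sample-size quadratic worked above (Cpk ≈ 1.20, E = 0.166, 90 percent confidence) can be checked with a short sketch; the function name is mine:

```python
# Solve the sample-size quadratic -18n^2 + bn + c = 0 for n.
from math import sqrt, ceil

def cpk_sample_size(cpk, error, z):
    k = error**2 / z**2                          # E^2 / Z^2
    a = -18.0
    b = 18.0 + (2 + 9 * cpk**2) / k
    c = 2.0 / k
    # take the root of an^2 + bn + c = 0 that gives a positive sample size
    n = (-b - sqrt(b**2 - 4 * a * c)) / (2 * a)
    return ceil(n)

print(cpk_sample_size(1.20, 0.166, 1.28))   # 51
```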

Cr and Cp require that the specification be bilateral (have both an upper and a lower limit).

Example 1:
The specification for a part is 12.00 ± 0.38, and the standard deviation is S = 0.08. What is the %Cr?

%Cr = [6S/(USL − LSL)] × 100 = [(6)(0.08)/(12.38 − 11.62)] × 100 = 63.2%

With this %Cr, there is a potential of achieving a Cpk of 1.58.

Confidence Interval for Cp and Cr

The confidence interval for Cp can be calculated using the following relationship:

Cp √( χ²(n−1, α/2)/(n − 1) ) > True Cp > Cp √( χ²(n−1, 1−α/2)/(n − 1) ).

Example 1:
A Cp of 2.32 has been calculated based on 29 observations. What is the 95 percent confidence interval for this estimate?

Cp = 2.32, n = 29, Confidence = 0.95, α = 0.05

Look up the two required χ² values as follows:

For the upper bound, χ²(n−1, α/2) = χ²(28, 0.025) = 44.4607
For the lower bound, χ²(n−1, 1−α/2) = χ²(28, 0.975) = 15.3079

2.32 √(44.4607/28) > True Cp > 2.32 √(15.3079/28)
2.92 > True Cp > 1.72

A proper statement would be: "Based on a sample size of 29, I do not know the true Cp, but I am 95 percent confident that it is between 1.72 and 2.92."
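A sketch of the Cp interval calculation, using the tabled chi-square values quoted in the example (variable names are mine):

```python
# Cp confidence interval from tabled chi-square values (n = 29, 95 percent).
from math import sqrt

cp, n = 2.32, 29
chi2_upper = 44.4607   # chi-square, 28 df, alpha/2 = 0.025
chi2_lower = 15.3079   # chi-square, 28 df, 1 - alpha/2 = 0.975

upper = cp * sqrt(chi2_upper / (n - 1))
lower = cp * sqrt(chi2_lower / (n - 1))
print(round(lower, 2), round(upper, 2))   # 1.72 2.92
```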

The confidence interval for the %Cr can be readily determined by simply taking the inverse of the confidence interval for the Cp and reversing the terms. That is, the lower confidence limit for the Cp is the upper confidence limit for the Cr. In the previous example, the 95 percent confidence bounds for the Cp were 1.72 and 2.92:

1/1.72 = 0.58 = upper confidence bound for the %Cr
1/2.92 = 0.34 = lower confidence bound for the %Cr

The confidence interval for Cr can also be rewritten as follows:

Cr √( (n − 1)/χ²(n−1, α/2) ) < True Cr < Cr √( (n − 1)/χ²(n−1, 1−α/2) ).

If tables of the chi-square distribution are not available, it is necessary to use approximate formulas. One commonly used approximation is

χ²(v, α) ≅ ( √(v − 1/2) + Z/√2 )²,

where: v = degrees of freedom = n − 1
Z = Z-score from the standard normal distribution giving α, α/2, or 1 − α/2 as appropriate.

Using the approximation formula for the upper bound in the previous example:

χ²(28, 0.025) ≅ ( √(28 − 1/2) + 1.96/√2 )² = 43.96,

compared with the tabled value of 44.4607.

Cpm Index

One of the more recent quality process capability indices is Cpm. A discussion of this index is reported by Chan, Cheng, and Spiring (July 1988).

The Cpm index takes into consideration the proximity to the target value of a specification, unlike the Cpk, which only relates the process variation and average to the nearest specification limit.

Cpm = (USL − LSL)/(6σ′),

where: σ′ = √( Σ(Xi − T)²/(n − 1) )
T = target or nominal.

Equivalently,

Cpm = (USL − LSL)/( 6 √(σ² + (μ − T)²) ) = (1/Cr) · 1/√( 1 + (μ − T)²/σ² ).

If the process variance σ² increases (decreases), the denominator increases (decreases) and Cpm will decrease (increase). If the process drifts from its target value (that is, if μ moves away from T), the denominator will again increase, causing Cpm to decline, while Cr remains constant. In cases where both the process variance and the process mean change relative to the target, the Cpm index reflects these changes as well.

Example 1:
The specification is 15 + 6/−5. The average X̄ = 17.0, and the standard deviation S = 1.2. What is Cpm?

Cr = 6S/(USL − LSL) = (6)(1.2)/(21.0 − 10.0) = 0.65

Cpm = (1/Cr) · 1/√( 1 + (X̄ − T)²/S² ) = (1/0.65) · 1/√( 1 + (17.0 − 15.0)²/(1.2)² ) = 0.79

The following illustrates the movement of Cpk and Cpm for a constant specification requirement of 28.00 + 3/−4, where the standard deviation remains a constant 0.43 (Cr = 0.369 throughout) and the process average moves from 29.0 down to 26.5.

[Figure: six number-line sketches (scale 22 to 32, with LSL, target = 28.0, and USL marked) showing the Cpk and Cpm values as the process average moves from X̄ = 29.0 down to X̄ = 26.5 in steps of 0.5.]
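The Cpm arithmetic of Example 1 can be sketched as follows (a sketch assuming the spec 15 + 6/−5 with target 15; the function name is mine):

```python
# Cpm = (USL - LSL) / (6 * sqrt(sigma^2 + (mu - T)^2))
from math import sqrt

def cpm(usl, lsl, target, mu, sigma):
    return (usl - lsl) / (6 * sqrt(sigma**2 + (mu - target)**2))

print(round(cpm(usl=21.0, lsl=10.0, target=15.0, mu=17.0, sigma=1.2), 2))  # 0.79
```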

Required Sample Cpk to Achieve a Desired Cpk

Consider the following problem. You have determined a Cpk to be 1.75 with a sample of n = 45. At a level of confidence of 90 percent, what is the lower confidence limit (LCL)?

LCL = Cpk − Zα √( 1/(9n) + Cpk²/(2n − 2) )

Zα = Z0.10 = 1.28, Cpk = 1.75, n = 45

LCL = 1.75 − 1.28 √( 1/((9)(45)) + (1.75)²/((2)(45) − 2) ) = 1.75 − 0.25 = 1.50

An appropriate statement for this problem would be: "I do not know the true Cpk, but I am 90 percent confident that it is not less than 1.50." In other words, if you obtained a Cpk of 1.75 with n = 45, you would be 90 percent confident that the true Cpk would not be less than 1.50.

If we relabel the LCL as CpkD and the Cpk on the right side of our LCL equation as CpkR, we can determine the Cpk we need to observe from sample data (CpkR) in order to ensure a desired CpkD:

CpkD = CpkR − Zα √( 1/(9n) + CpkR²/(2n − 2) ),

where: CpkD = the Cpk you want
CpkR = the Cpk you are required to obtain from n samples to ensure a CpkD at a 1 − α level of confidence.

Solving this equation for CpkR, it can be seen that

CpkR = [ (2n − 2)/(2n − 2 − Zα²) ] [ CpkD ± √( Zα²(9nCpkD² + 2n − 2 − Zα²)/(18n(n − 1)) ) ].

Example using this equation: You want to achieve a Cpk of 1.33. If you want to be 90 percent confident that your Cpk is greater than 1.33, and you measure 30 parts to get your calculated Cpk, what Cpk should you have?

CpkD = 1.33, n = 30, Zα = 1.28

CpkR = [ 58/56.3616 ] [ 1.33 ± √( (1.28)²((9)(30)(1.33)² + 58 − 1.6384)/((18)(30)(29)) ) ]
     = (1.029)(1.33 ± 0.236)

CpkR = 1.611 and 1.125

The CpkR must always be greater than the CpkD; therefore, the answer is 1.611. If you want to be 90 percent confident that your Cpk is greater than 1.33 and you measure 30 parts to get your calculated Cpk, you need to get a calculated Cpk of 1.611. Table A.14 in the appendix gives the CpkR to yield a CpkD for n = 30 to 1000 and levels of confidence from 90 percent to 99 percent.

Bibliography
Bissel, A. F. 1990. "How Reliable Is Your Capability Index?" Journal of Applied Statistics 39: 331–340.
Chan, L. K., S. W. Cheng, and F. A. Spiring. 1988. "A New Measure of Process Capability: Cpm." Journal of Quality Technology 20, no. 3: 162–175.
Chou, Y., D. B. Owen, and S. A. Borrego. 1990. "Lower Confidence Limits on Process Capability Indices." Journal of Quality Technology 22, no. 3: 98–100.
Kushler, R., and P. Hurley. 1992. "Confidence Bounds for Capability Indices." Journal of Quality Technology 24, no. 4: 188–195.

Regression and Correlation

Linear Least Square Fit

When we attempt to fit a best straight line to a collection of paired data, we essentially try to minimize the distance between the line of fit and the data. The formal mathematical procedure that accomplishes this objective is the method of least squares. This method involves a simple linear relationship for two variables: an independent variable x and a dependent variable y. That is, the value of x is given (independent), and the value of y (dependent) depends on the value of x. This relationship can be written in the general form of Ŷ = β̂0 + β̂1X. The values of β̂0 and β̂1 are such that the sum of the squares of the differences between the calculated value of Y and the observed value of Y is minimized.

The least squares estimators β̂0 and β̂1 are calculated as follows:

β̂0 = Ȳ − β̂1X̄  and  β̂1 = SSxy/SSx,

where: SSx = Σ(Xi − X̄)² = ΣXi² − (ΣXi)²/n
SSxy = Σ(Xi − X̄)(Yi − Ȳ) = ΣXiYi − (ΣXi)(ΣYi)/n.

Once β̂0 and β̂1 have been computed, we substitute their values into the equation of a line to obtain the least squares prediction equation: Ŷ = β̂0 + β̂1X.

It should be noted that rounding errors can significantly affect the answer you obtain in calculating SSx and SSxy. It is recommended that calculations for all sums of squares (SS) be carried to six or more significant figures.

Example: Data have been collected in hours for the life of a deburring tool as a function of the speed in rpm. What is the linear equation relating tool life to speed?

Data table:

Tool life,    Speed,
hours Yi      rpm Xi         Xi²          Yi²        XiYi
   180         1350       1,822,500     32,400     243,000
   220         1230       1,512,900     48,400     270,600
   130         1380       1,904,400     16,900     179,400
   110         1440       2,073,600     12,100     158,400
   280         1200       1,440,000     78,400     336,000
   200         1300       1,690,000     40,000     260,000
    50         1500       2,250,000      2,500      75,000
    40         1540       2,371,600      1,600      61,600
Sum:  1210    10,940     15,065,000    232,300   1,584,000

SSx = ΣXi² − (ΣXi)²/n = 15,065,000 − (10,940)²/8 = 104,550

SSxy = ΣXiYi − (ΣXi)(ΣYi)/n = 1,584,000 − (10,940)(1210)/8 = −70,675

β̂1 = SSxy/SSx = −70,675/104,550 = −0.676

X̄ = ΣXi/n = 10,940/8 = 1367.5    Ȳ = ΣYi/n = 1210/8 = 151.25

β̂0 = Ȳ − β̂1X̄ = 151.25 − (−0.676)(1367.5) = 1075.67

The least squares linear regression equation is

Ŷ = β̂0 + β̂1X
Ŷ = 1075.67 − 0.676(X),

where X is the tool operating speed. What is the estimated expected tool life at 1400 rpm?

Ŷ = 1075.67 − 0.676(1400) = 129.27

Confidence for the Estimated Expected Mean Value

Having calculated the expected value for Y (tool life), we may determine the confidence limit for the result using the following relationship:

Ŷ ± t(α/2, n−2) S √( 1/n + (Xp − X̄)²/SSx ),

where: Ŷ = the predicted value of the dependent variable
Xp = the value of the independent variable
n = sample size
SSx = ΣXi² − (ΣXi)²/n = sum of squares for X
X̄ = average of the independent variables
S = standard deviation of Y for a given value of X, S = √( SSy − β̂1SSxy )
t(α/2, n−2) = t value for 1 − α confidence and n − 2 degrees of freedom.

Example: What is the 95 percent confidence interval for the estimated mean tool life at a speed of 1400 rpm using the least squares model of Ŷ = 1075.67 − 0.676(X)?

Ŷ = 1075.67 − 0.676(1400) = 129.27
C = 0.95, therefore α/2 = 0.025
t(0.025, 6) = 2.447
n = 8, n − 2 = 6

SSy = ΣYi² − (ΣYi)²/n = 232,300 − (1210)²/8 = 49,287.5

S = √( SSy − β̂1SSxy ) = √( 49,287.5 − (−0.676)(−70,675) ) = 38.87

129.27 ± (2.447)(38.87) √( 1/8 + (1400 − 1367.5)²/104,550 ) = 129.27 ± 34.96

We are 95 percent confident that, based on the least squares model of Ŷ = 1075.67 − 0.676(X) with a speed of 1400 rpm, the mean tool life will be between 94.31 hours and 164.23 hours.

Assessing the Model: Drawing Inferences about β̂1

One method for assessing the linear model is to determine whether a linear relationship actually exists between the two variables. If the true value for β1 had been equal to zero, no linear relationship would have existed, and the linear model would have been inappropriate.
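Before moving to the formal tests, the least squares fit above can be reproduced with a short sketch (variable names are mine):

```python
# Least squares fit of tool life (hours) against speed (rpm).
x = [1350, 1230, 1380, 1440, 1200, 1300, 1500, 1540]   # speed, rpm
y = [180, 220, 130, 110, 280, 200, 50, 40]             # tool life, hours

n = len(x)
ss_x = sum(v * v for v in x) - sum(x)**2 / n
ss_xy = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n
b1 = ss_xy / ss_x                      # slope
b0 = sum(y) / n - b1 * sum(x) / n      # intercept

print(ss_x, ss_xy)                 # 104550.0 -70675.0
print(round(b0, 2), round(b1, 3))  # 1075.67 -0.676
print(round(b0 + b1 * 1400, 1))    # about 129.3 (the text's rounding gives 129.27)
```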

In general, β1 is unknown because it is a population parameter. We can, however, use the sample slope β̂1 to make inferences about the population slope β1.

Testing β1

If we want to test the slope in order to determine whether some linear relationship exists between X and Y, we test the null hypothesis Ho: β1 = 0 against the alternative Ha: β1 ≠ 0. If we want to test for a positive or a negative relationship, the alternative hypothesis Ha would be Ha: β1 > 0 or Ha: β1 < 0, respectively.

The test statistic for β1 is

t = (β̂1 − β1)/Sβ̂1,

where Sβ̂1 = the standard deviation of β̂1 (also called the standard error of β̂1) and

Sβ̂1 = S∈/√SSx,

S∈ = standard error of the estimate = √( (SSy − SSxy²/SSx)/(n − 2) ).

Assuming that the error variable ∈ is normally distributed, the test statistic t follows Student's t distribution with n − 2 degrees of freedom.

From the previous example, SSx = 104,550, SSxy = −70,675, and SSy = 49,287.5:

S∈ = √( (49,287.5 − (−70,675)²/104,550)/(8 − 2) ) = 15.87

Sβ̂1 = S∈/√SSx = 15.87/√104,550 = 0.049

Can we conclude at the 1 percent level of significance that tool wear and speed are linearly related? We proceed as follows:

Ho: β1 = 0
Ha: β1 ≠ 0

Decision rule: Reject the null hypothesis Ho if |t| > t(0.005, 7) = 3.499.

Value of the test statistic:

t = (β̂1 − β1)/Sβ̂1 = (−0.676 − 0)/0.049 = −13.795

Conclusion: |−13.795| > 3.499. We reject the null hypothesis and conclude that β1 is not equal to zero. A linear relationship exists.

Measuring the Strength of the Linear Relationship

In testing β1, the only question addressed was whether there was enough evidence to allow us to conclude that a linear relationship exists. In many cases, it is useful to measure the strength of the linear relationship. We can accomplish this by calculating the coefficient of correlation, ρ. Its range is −1 ≤ ρ ≤ +1. Because ρ is a population parameter, we must estimate it from the sample data. The sample coefficient of correlation is abbreviated r and is defined as follows:

r = SSxy/√(SSx · SSy)

r = −70,675/√( (104,550)(49,287.5) ) = −0.985

Testing the Coefficient of Correlation

If ρ = 0, the values of x and y are uncorrelated, and the linear model is not appropriate. We can test to determine whether x and y are correlated by testing the following hypothesis:

Ho: ρ = 0
Ha: ρ ≠ 0

The test statistic for ρ is

t = (r − ρ)/Sr,  where Sr = √( (1 − r²)/(n − 2) ).

This test statistic is valid only when testing ρ = 0; consequently, the test statistic simplifies to

t = r/Sr.

The complete test follows:

Ho: ρ = 0
Ha: ρ ≠ 0

Test statistic: t = r/Sr

Decision rule: Reject the null hypothesis if |t| > t(0.005, 7) = 3.499 (assuming that α = 0.01).

Value of the test statistic: Since r = −0.985,

Sr = √( (1 − 0.970)/(8 − 2) ) = 0.0707

t = r/Sr = −0.985/0.0707 = −13.932.

Conclusion: |−13.932| > 3.499; therefore, we reject Ho and conclude that ρ ≠ 0.
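Both test statistics can be sketched numerically (a sketch; small differences from the text's −13.795 and −13.932 come from the text's rounded intermediates):

```python
# Slope and correlation tests, reusing the sums of squares from the
# tool-life example.
from math import sqrt

ss_x, ss_y, ss_xy, n = 104550.0, 49287.5, -70675.0, 8

b1 = ss_xy / ss_x
s_e = sqrt((ss_y - ss_xy**2 / ss_x) / (n - 2))   # standard error of estimate
s_b1 = s_e / sqrt(ss_x)                          # standard error of the slope
t_slope = b1 / s_b1

r = ss_xy / sqrt(ss_x * ss_y)
t_rho = r / sqrt((1 - r**2) / (n - 2))

print(round(s_e, 2), round(r, 3))          # 15.87 -0.985
print(round(t_slope, 1), round(t_rho, 1))  # both about -13.8, well past -3.499
```

Note that the two t values are algebraically identical, which is why the text observes that the two tests must agree.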

There is sufficient evidence to conclude that x and y are correlated and that, as a result, a linear relationship exists between them.

The results are identical for the tests of β1 and ρ. Because in both cases we are testing to determine whether a linear association exists, the results must agree. In practice, only one of the tests is required to establish whether a linear relationship exists.

Bibliography
Petruccelli, J. D., B. Nandram, and M. Chen. 1999. Applied Statistics for Engineers and Scientists. Upper Saddle River, NJ: Prentice Hall.
Sternstein, M. 1996. Statistics. Hauppauge, NY: Barron's Educational Series.
Walpole, R. E., and R. H. Myers. 1993. Probability and Statistics for Engineers and Scientists. 5th edition. Englewood Cliffs, NJ: Prentice Hall.

Reliability

Reliability can be defined as the probability that a product, component, service, or system will perform its intended function adequately for a specified period of time, operating in a defined operating environment without failure. Components of this definition are explained as follows:

• Probability: quantitative measure, likelihood of mission success
• Intended function: for example, to light, cut, rotate, or heat
• Satisfactory: perform according to a specification, degree of compliance
• Specific period of time: minutes, days, months, or number of cycles
• Specified conditions: for example, temperature, speed, or pressure

Stated another way, reliability is:

• Probability of success
• Durability
• Dependability
• Quality over time
• Availability to perform a function

Typical reliability statements include "This car is under warranty for 40,000 miles or 3 years, whichever comes first" or "This mower has a lifetime guarantee."

Concept

Quality = Does it perform its intended function?
Reliability = How long will it continue to do so?

The reliability or probability of mission success can be determined for individual components of a complex system, or it can be determined for the entire system. Reliability can be described mathematically by one of several distribution functions, including but not limited to:

1. Normal distribution
2. Exponential distribution
3. Weibull distribution

The concept of reliability requires an understanding of the following terms and relationships:

Failure: Not performing as required. For example, a successful push-up is defined as lying in an extended prone position and using your arms to fully raise your body vertically off the floor while keeping the length of your body rigid and your toes on the floor.

Failure rate, λ: The number of failures that occur in a number of cycles or time period. For example, there is one fatality per million due to parachutes failing to open, or there are six car wrecks every seven months on Interstate 95. Failure rate is determined by dividing the total number of failures by the total time or cycles accumulated for the failures.

Given the following data, calculate the failure rate. Five motors are run on a test stand until they all fail.

Motor number:       1     2     3     4     5
Hours at failure:  600  1000   850  1100   960

λ = number of failures/total test time = 5/4510 = 0.0011

Mean time between failure (MTBF), θ: The average time or cycles a unit runs before failure. MTBF can be written as a function of the failure rate, θ = 1/λ. For the previous example,

MTBF = 4510/5 = 902 hours.

The failure rate for most things changes over time. The relationship of the failure rate with respect to time is depicted by the bathtub curve. Consider the following data giving the failure rates for an automatic dishwasher. The failure rates were determined on dishwashers at different ages, ranging from one hour old to 20 years old.
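The failure-rate and MTBF arithmetic above can be sketched as follows (a hypothetical snippet, not from the book):

```python
# Failure rate and MTBF for the five-motor test-stand data.
hours_at_failure = [600, 1000, 850, 1100, 960]

total_time = sum(hours_at_failure)        # 4510 hours of accumulated test time
lam = len(hours_at_failure) / total_time  # failures per hour
mtbf = 1 / lam                            # equals total_time / failures
print(round(lam, 4), mtbf)   # 0.0011 902.0
```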

At one hour old, the dishwasher has a failure rate that is relatively high at one failure per 714 units (1/714 = 0.0014). At one day old, the failure rate has decreased to one failure in 10,000 (0.0001), and by one month old it is one in 555,555 (0.0000018). During this initial period of the life of the dishwasher (the period from one hour to one month), the failure rate steadily decreased. This period, where the failure rate is decreasing, is referred to as the burn-in period, or the period of infant mortality. Failures in this area are typically due to poor workmanship or defective components and raw materials.

[Figure: failure rate (log scale, 0.0000001 to 0.01) plotted against age from 1 hour to 24 years, showing the declining early failure rate.]

During the period from four months to about 12 years, the failure rate stabilized to a constant failure rate of slightly less than one failure per million units (1/1,000,000, or 0.000001). This portion of the life of the dishwasher is called the useful life and is characterized by a constant failure rate.

[Figure: the completed bathtub curve, failure rate (log scale) versus age, with the three regions labeled infant mortality, useful life, and burnout.]

During the period from 12 years to 24 years, the failure rate increased. The group of dishwashers that are 24 years old are failing at a rate of one in 100, or a failure rate of 0.01. The area of increasing failure rate is the burnout period. It is during this time that individual components are wearing out as a result of metal fatigue, stress, oxidation, and corrosion.

Summary

The period of life of a product where the failure rate is decreasing is defined as the infant mortality, or burn-in, phase. Where the failure rate is constant defines the period of useful life. Where the failure rate is increasing, the product is in a burnout phase.

Exponential Distribution

Given a constant failure rate λ or MTBF θ, as during the useful life, predictions regarding reliability may be made using the exponential distribution or exponential model. The probability of mission success, or reliability, based on the exponential model is given by

R(t) = e^(−t/θ)  or  R(t) = e^(−tλ),

where: t = time or cycles associated with the reliability
θ = MTBF
λ = failure rate.

Example calculation: The MTBF of an electric motor is 660 hours. What is the probability that the motor will last or run for 400 hours?

R(t) = e^(−t/θ), t = 400, θ = 660
R(t) = e^(−0.61) = 0.54, or 54%

There is only a 54 percent chance that the motor will survive 400 hours of running time.

Supplemental problem: The failure rate during the useful life of a product is 0.00018; that is, one failure is occurring every 5556 hours. What is the probability that the unit will survive 1500 hours of service? Note: There is an assumption that the failure rate is calculated based on the same time unit as the reliability calculation objective. Answer: 76 percent.

Determination of Failure Rate

Failure rates are determined by testing units until failure. There are three modes of testing:

1. Complete data: All units are tested until all units fail.
2. Time-censored data (type I testing): Multiple units are started at the same time, and the test is discontinued upon reaching a predetermined time.
3. Failure-censored data (type II testing): Multiple units are started, and the test is discontinued upon reaching a predetermined number of failures.
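The exponential-model examples above can be sketched as follows (the function name is mine; note the text rounds the exponent 400/660 to 0.61 before evaluating):

```python
# Exponential reliability model: R(t) = exp(-t/theta) = exp(-t * lambda).
from math import exp

def reliability(t, theta):
    """Probability of surviving to time t under a constant failure rate."""
    return exp(-t / theta)

print(round(reliability(400, 660), 3))    # 0.545, about 54 percent
print(round(exp(-1500 * 0.00018), 2))     # 0.76, the supplemental answer
```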

In all cases, the failure rate λ is determined by the general relationship:

  λ = number of failures / total test time

1. Complete data

Eight electric motors are tested. All fail at the following times, hours: 900, 600, 750, 1800, 1350, 1000, 1100, 2890. What is the estimate for the failure rate, λ?

  λ = n / Σti = 8 / 10,390
  λ = 0.00077
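The complete-data estimate above can be sketched as follows (variable names are illustrative):

```python
# All eight motors ran to failure, so total test time is simply the sum of failure times
times = [900, 600, 750, 1800, 1350, 1000, 1100, 2890]
lam = len(times) / sum(times)   # lambda = n / total test time
print(sum(times))               # 10390 total unit-hours
print(round(lam, 5))            # 0.00077 failures per hour
```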

2. Time-censored failure data (type I)

In a life test, seven batteries were run for 140 hours, at which time the test was terminated. During the test time, three batteries failed at 28, 49, and 110 hours, respectively. The remaining four batteries were functioning properly when the test was discontinued. What is the estimated failure rate?

[Figure: test timelines for units 1 through 7 over 0–140 hours; x = failure, + = still running]

  λ = r / (Σti + (n − r)T)

where:
  r = total number of failures = 3
  n = number of units on test = 7
  T = time at which the test is terminated = 140

  λ = 3 / [(28 + 49 + 110) + (7 − 3)(140)]
  λ = 0.0040 failures per hour
  θ = 1/0.0040 = 250 hours

Supplemental problem: The gloss is measured on a painted test panel each day. A gloss reading of less than 25 microlumens is considered a failure. Nine panels are tested, and each panel is measured automatically every five daylight hours. Four test panels failed at 120, 345, 655, and 1100 hours, respectively. The test was run a total of 1500 hours, at which time the test was terminated. What is the failure rate?
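The censored-data estimator λ = r/(Σti + (n − r)T) applies to both type I and type II tests. A sketch using the battery data (helper name is an assumption, not from the text):

```python
def censored_failure_rate(failure_times, n_units, termination):
    """lambda = r / (sum of failure times + (n - r) * T)."""
    r = len(failure_times)
    return r / (sum(failure_times) + (n_units - r) * termination)

lam = censored_failure_rate([28, 49, 110], n_units=7, termination=140)
print(round(lam, 4))   # 0.004 failures per hour
print(round(1 / lam))  # theta = 249 hours (the text, using lambda = 0.0040, reports 250)
```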

3. Failure-censored data (type II)

Precision fluid metering devices are designed to deliver 5.00 ml of liquid. The specification for acceptance is that the unit deliver 5.00 ± 0.10 ml. Nine random units are selected for reliability testing. The nine selected units are cycled over and over, checking the delivery to confirm that the specification is met. The total number of cycles at failure are recorded, and the test is terminated when r failures have occurred. Note that in this example, the number of cycles accumulated at failure, rather than the total amount of time, is used for the calculation. Given the following data, what is the estimated failure rate?

  Unit   Cycles    Status
   5        250      F
   4        800      F
   7      1,100      F
   3      1,800      F
   2      4,800      F
   1     12,600      F
   8     12,600      +
   9     12,500      +
   6     12,600      +
  (F = failed; + = still running at termination)

This calculation is performed the same as with the time-censored type I data:

  λ = r / (Σti + (n − r)T)

where:
  r = number of failures = 6
  n = number of units on test = 9
  T = cycles accumulated when the test was terminated (when r failures occur) = 12,600

  λ = 6 / [21,350 + (3)(12,600)]
  λ = 0.000101
  θ = 1/0.000101 = 9901

Confidence Interval for λ from Time-Censored (Type I) Data

The 100(1 − α) percent confidence interval for λ is given by

  λ [χ²(2r, 1 − α/2) / 2r] < True λ < λ [χ²(2r + 2, α/2) / 2r],

where:
  λ = estimated failure rate
  r = number of failures
  α = risk (α = 1.00 − confidence)
  χ² = chi-square

Example: Ten units have been tested for 250 days. At the end of the 250-day period, seven units fail. What is the estimated failure rate, and what is the 90 percent confidence interval for the estimate?

Data:

  Unit no.   Day at failure
   1             145
   2             200
   3             250+
   4             188
   5             250+
   6             250+
   7             220
   8             120
   9             227
  10             150
  (+ = still running at termination)

  λ = r / (Σti + (n − r)T)
  λ = 7 / 2000 = 0.0035

  λ [χ²(2r, 1 − α/2) / 2r] < True λ < λ [χ²(2r + 2, α/2) / 2r]

where:
  r = 7
  α = 0.10
  λ = 0.0035

  λ [χ²(14, 0.95) / 14] < True λ < λ [χ²(16, 0.05) / 14]

Looking up the appropriate chi-square values using a chi-square table, we get the following:

  χ²(14, 0.95) = 6.57   χ²(16, 0.05) = 26.3

  0.0035 (6.57/14) < True λ < 0.0035 (26.3/14)
  0.00164 < True λ < 0.0066

An appropriate statement would be: "We do not know the true failure rate, but we are 90 percent confident that it is between 0.00164 and 0.0066."
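The interval above can be reproduced directly; here the χ² quantiles are hardcoded from a chi-square table (scipy.stats.chi2.ppf(0.05, 14) ≈ 6.57 and chi2.ppf(0.95, 16) ≈ 26.3 would compute them):

```python
lam_hat = 7 / 2000    # seven failures over 2000 total unit-days
r = 7
chi2_low = 6.57       # chi-square, 2r = 14 df, lower 5 percent point
chi2_high = 26.3      # chi-square, 2r + 2 = 16 df, upper 5 percent point

lower = lam_hat * chi2_low / (2 * r)
upper = lam_hat * chi2_high / (2 * r)
print(round(lower, 5), round(upper, 4))   # 0.00164 0.0066
```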

Confidence Interval for λ from Failure-Censored (Type II) Data

Failure rates that are determined from reliability tests that are terminated when a specified number of failures has been reached are referred to as failure-censored or failure-truncated tests. The confidence interval for these types of data is determined using the following relationship:

  λ [χ²(2r, 1 − α/2) / 2r] < True λ < λ [χ²(2r, α/2) / 2r]

Note that the only difference between this relationship and the time-censored confidence interval is in the upper limit equation: the degrees of freedom for the chi-square is 2r for the failure-censored data and 2r + 2 for the time-censored data.

Once the confidence interval for λ has been obtained, the confidence interval for the MTBF can be determined by simply taking the reciprocal of the limits for the failure rate and reversing the order of the intervals. The MTBF resulting from taking the reciprocal of the lower limit for the failure rate yields the upper confidence limit for the MTBF.

Example: A sample of eight panel bulbs is tested, and the test is stopped after five failures. The estimate for λ is determined to be 4.6 × 10⁻⁴ failures per hour. Determine the 95 percent confidence interval for this failure-censored data and the corresponding interval for the MTBF.

  λ = 4.6 × 10⁻⁴ failures per hour
  2r = 2 × 5 = 10
  1 − α = 0.95, α = 0.05, α/2 = 0.025, 1 − α/2 = 0.975

From the chi-square table, look up the required chi-square values at the 1 − α/2 and α/2 points with 2r degrees of freedom:

  χ²(2r, 1 − α/2) = χ²(10, 0.975) = 3.25
  χ²(2r, α/2) = χ²(10, 0.025) = 20.48

The 95 percent confidence interval for λ is

  (4.6 × 10⁻⁴ × 3.25)/10 < True failure rate < (4.6 × 10⁻⁴ × 20.48)/10
  0.000149 < True failure rate < 0.000942,

and 1062 < True MTBF < 6711.
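A sketch of the failure-censored interval above, again with the table values hardcoded (scipy.stats.chi2.ppf(0.025, 10) ≈ 3.25 and chi2.ppf(0.975, 10) ≈ 20.48 would reproduce them). The MTBF limits are the reciprocals of the failure-rate limits with the order reversed:

```python
lam_hat = 4.6e-4                    # estimated failure rate; r = 5 failures, 2r = 10 df
chi2_low, chi2_high = 3.25, 20.48   # chi-square(10 df), right-tail 0.975 and 0.025

lam_lo = lam_hat * chi2_low / 10
lam_hi = lam_hat * chi2_high / 10
mtbf_lo, mtbf_hi = 1 / lam_hi, 1 / lam_lo   # reciprocals, order reversed
print(f"{lam_lo:.3e} < lambda < {lam_hi:.3e}")
print(f"{mtbf_lo:.0f} < MTBF < {mtbf_hi:.0f}")  # ~1061 to ~6689; the text, working from the
                                                # rounded limits 0.000149 and 0.000942,
                                                # reports 1062 to 6711
```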

An alternative equation for calculating the confidence interval for the failure rate, using the total time T on all test units when the test is terminated and the total number of failed units r, is given by

  χ²(2r, 1 − α/2) / 2T < True λ < χ²(2r, α/2) / 2T,

where:
  T = total test time on all units when the test is terminated
  r = total number of failed units

Example: A set of seven units is tested until all seven units fail. The total time when the seventh unit fails is 6000 hours. Calculate the single-sided lower confidence limit at 95 percent confidence for the failure rate.

Note that since we are asking for only the lower confidence limit, the risk α is not divided by 2. All of the risk is assigned to the single-sided confidence limit:

  χ²(2r, 1 − α) / 2T < True λ
  χ²(14, 0.95) / 2T = 6.57/12,000 = 0.00055

This lower 95 percent confidence limit for the failure rate is the reciprocal of the upper 95 percent confidence limit for the MTBF. The upper 95 percent confidence limit for the MTBF is 1/0.00055 = 1818 hours.

Success Run Theorem

There are some cases where no failures occur during a reliability demonstration test. Items are simply tested to a predetermined time; if the items operate to the prescribed time, success is demonstrated. It is important to realize that time is not the criterion. For those cases, a concept known as the success run theorem may be used to report the given level of reliability. The relationship of the reliability, level of confidence, and number of failure-free samples is given by

  R = (1 − C)^(1/n),

where R is the reliability, C is the level of confidence, and n is the sample size or number of failure-free test items. By rearranging the original formula, we may solve for n:

  n = ln(1 − C) / ln R

Most often we use the success run theorem to determine the number of failure-free test items required to demonstrate a minimum reliability at a given level of confidence.

Sample application: Success for an electric relay is defined as 150,000 cycles without failure. How many relays must pass this test in order to demonstrate a reliability of 95 percent with a confidence of 90 percent?

  n = ln(1 − C) / ln R = ln(1 − 0.90) / ln 0.95 = (−2.303)/(−0.051) = 45

Forty-five relays must successfully operate for 150,000 cycles to demonstrate a reliability of 95 percent with a level of confidence of 90 percent. We are 90 percent confident that the reliability is not less than 95 percent.

Table 1 may be used to determine the appropriate sample size for several levels of reliability and confidence. The intersection of the row where R = 0.95 with the column where C = 0.90 gives a sample size of n = 45.
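The success run sample size is easy to compute directly; a minimal sketch (rounding up, as a fractional test item is not possible):

```python
import math

def success_run_n(reliability, confidence):
    """Failure-free sample size n = ln(1 - C) / ln(R), rounded up."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

print(success_run_n(0.95, 0.90))   # 45 -> forty-five relays must pass the cycle test
print(success_run_n(0.90, 0.90))   # 22 -> matches the R = 0.90, C = 0.90 table entry
```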

Table 1 Success run sample size.

                          Confidence level, C
  Minimum reliability, R   0.80   0.85   0.90   0.95   0.99   0.995   0.999
  0.80                       8      9     11     14     21      24      31
  0.85                      10     12     15     19     29      33      43
  0.90                      16     19     22     29     44      51      66
  0.95                      32     37     45     59     90     104     135
  0.99                     161    189    230    299    459     528     688
  0.995                    322    379    460    598    919    1058    1379
  0.999                   1609   1897   2303   2995   4603    5296    6905

Reliability Modeling

The reliability of more complex systems can be modeled using the reliability of individual components and block diagrams. Components can be diagramed using series and parallel configurations.

Series Systems

The system reliability for components that function in series is determined by the product of the reliability of each component:

  RS = R1 × R2 × R3 × . . . × Rk

The system reliability for a series of components is always less reliable than the least reliable component. Using a reliability block diagram (RBD), calculate the reliability of the following system:

  [RBD: three components in series, R = 0.95, R = 0.90, R = 0.99]

  RS = 0.95 × 0.90 × 0.99
  RS = 0.85
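A series-system sketch, using the component values from the example above (the helper name is illustrative):

```python
from functools import reduce

def series_reliability(components):
    """Product of the component reliabilities; always below the weakest component."""
    return reduce(lambda a, b: a * b, components)

print(round(series_reliability([0.95, 0.90, 0.99]), 2))   # 0.85
```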

Parallel Systems

The reliability of systems that have components that function in parallel is determined by subtracting the product of the component unreliabilities (1 − R) from 1. The system reliability of a parallel system will always be more reliable than the most reliable component.

If the failure rate or MTBF is given, the reliability must first be determined in order to perform the system reliability model. Failure rates and MTBFs cannot be combined directly.

Example: Find the system reliability at t = 200 hours for a series system configuration given that one component has a failure rate of 0.002 and the other component has an MTBF of 700 hours.

Given λ = 0.002, the reliability at t = 200 is

  Rt = e^(−λt), where t = 200 and λ = 0.002
  Rt=200 = e^(−(0.002)(200))
  Rt=200 = 0.67

Given θ = 700 hours, the reliability at t = 200 is

  Rt = e^(−t/θ), where t = 200 and θ = 700
  Rt=200 = e^(−0.29)
  Rt=200 = 0.75

The RBD can now be constructed for the individual components:

  [RBD: R = 0.75 and R = 0.67 in series]

  RS = 0.75 × 0.67
  RS = 0.50
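Mixing specifications is routine once each component is converted to a reliability at the mission time; a sketch of the example above:

```python
import math

def component_reliability(t, mtbf=None, failure_rate=None):
    """Convert a failure rate or an MTBF to reliability at time t (exponential model)."""
    lam = failure_rate if failure_rate is not None else 1.0 / mtbf
    return math.exp(-lam * t)

# One component given by failure rate, the other by MTBF, evaluated at t = 200 hours
r1 = component_reliability(200, failure_rate=0.002)   # ~0.67
r2 = component_reliability(200, mtbf=700)             # ~0.75
print(round(r1 * r2, 2))                              # series system: 0.5
```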

  [RBD: components R1, R2, R3, . . . , Rk in parallel]

  RS = 1 − [(1 − R1)(1 − R2)(1 − R3) . . . (1 − Rk)]

If all of the components are of equal reliability, then the system reliability is given by RS = 1 − (1 − R)^n, where n = number of components in parallel.

Example: What is the system reliability for three parallel components having individual reliabilities of 0.90, 0.85, and 0.80?

  RS = 1 − [(1 − 0.90)(1 − 0.85)(1 − 0.80)]
  RS = 1 − [(0.10)(0.15)(0.20)]
  RS = 1 − 0.003
  RS = 0.997

Complex Systems

Block diagrams of more complex arrangements utilizing a mixture of both serial and parallel components can be developed. Parallel components are reduced to an equivalent single component, and the reliability calculation is then completed based on a serial model.
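The parallel formula above can be sketched as follows:

```python
def parallel_reliability(components):
    """1 minus the product of the component unreliabilities."""
    q = 1.0
    for r in components:
        q *= (1.0 - r)
    return 1.0 - q

print(round(parallel_reliability([0.90, 0.85, 0.80]), 3))   # 0.997
```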

Example: Determine the reliability of the following RBD:

  [RBD: R = 0.90 in series with a parallel pair (R = 0.95 and R = 0.88), followed by R = 0.95]

Reduce the two parallel components to the equivalent of one component as follows:

  R = 1 − [(1 − 0.95)(1 − 0.88)]
  R = 1 − [(0.05)(0.12)]
  R = 1 − 0.006
  R = 0.994

Determine the system reliability as a series configuration:

  RS = 0.90 × 0.994 × 0.95
  RS = 0.85
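The reduce-then-multiply pattern above is mechanical; a sketch (helper names are illustrative):

```python
def parallel(*rs):
    """Equivalent reliability of components in parallel."""
    q = 1.0
    for r in rs:
        q *= (1 - r)
    return 1 - q

def series(*rs):
    """Equivalent reliability of components in series."""
    p = 1.0
    for r in rs:
        p *= r
    return p

# Collapse the parallel pair first, then treat the chain as a series system
rs = series(0.90, parallel(0.95, 0.88), 0.95)
print(round(rs, 2))   # 0.85
```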

Shared-Load Parallel Systems

In the previous calculation, it is assumed that only one of the parallel components is required for success and that the failure of one of the parallel components does not increase the load on the remaining parallel component. Problems where the failure rate of the surviving component is different than it would have been in the active redundancy state are more complex. Shared-load problems with two parallel components, where the failure rate changes with the number of surviving components, are determined by

  Rt = e^(−2λ1t) + [2λ1/(2λ1 − λ2)] (e^(−λ2t) − e^(−2λ1t)),

where:
  λ1 = failure rate when both components are in operable condition
  λ2 = failure rate of the surviving component when one component has failed

Example: A mobile refrigeration unit is served by two cooling systems in parallel. Operational data show that one unit will provide adequate cooling, but the additional stress on the surviving unit increases its failure rate. When cooling is provided by both units operating, the individual failure rate for each unit, λ1, is 0.000075 per hour. The failure rate of the single surviving unit, λ2, is 0.00025 per hour. What is the 1000-hour reliability for this system?

where: t = 1000, λ1 = 0.000075, λ2 = 0.00025

  Rt = e^(−(2)(0.000075)(1000)) + [(2)(0.000075)/((2)(0.000075) − 0.00025)] (e^(−(0.00025)(1000)) − e^(−(2)(0.000075)(1000)))
  Rt = e^(−0.15) + (−1.5)(e^(−0.25) − e^(−0.15))
  Rt = 0.8607 + (−1.5)(0.7788 − 0.8607)
  Rt = 0.9836

Standby Parallel Systems

Parallel configurations can have a unit in a standby mode, ready to switch in when the active component fails. The system reliability depends on the reliability of the switch and the reliability of the standby component. The more complex case is where the two components have different failure rates and the switching is imperfect. The system reliability is determined by

  Rt = e^(−λ1t) + Rsw [λ1/(λ2 − λ1)] (e^(−λ1t) − e^(−λ2t))

Example: A pump with a failure rate λ1 of 0.0002 per hour is backed up with a standby pump with a failure rate λ2 of 0.0015 per hour. The backup pump is switched into place upon failure of the primary pump. The reliability of the switch is 0.95. What is the reliability of the system at t = 400 hours?

  Rt = e^(−(0.0002)(400)) + 0.95 [0.0002/(0.0015 − 0.0002)] (e^(−(0.0002)(400)) − e^(−(0.0015)(400)))
  Rt = 0.9231 + (0.95)(0.1538)(0.9231 − 0.5488)
  Rt = 0.9231 + 0.0547
  Rt = 0.9778
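Both formulas above translate directly; a sketch using the two examples (function names are assumptions, not from the text):

```python
import math

def shared_load_parallel(t, lam1, lam2):
    """Two-unit shared-load parallel system with a load-dependent failure rate."""
    a = math.exp(-2 * lam1 * t)
    return a + (2 * lam1 / (2 * lam1 - lam2)) * (math.exp(-lam2 * t) - a)

def standby_parallel(t, lam1, lam2, r_switch):
    """Primary unit plus an imperfectly switched standby unit with its own failure rate."""
    return (math.exp(-lam1 * t)
            + r_switch * (lam1 / (lam2 - lam1))
            * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))

print(round(shared_load_parallel(1000, 0.000075, 0.00025), 4))   # 0.9836
print(round(standby_parallel(400, 0.0002, 0.0015, 0.95), 4))     # 0.9778
```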

Bibliography

Dovich, R. A. Reliability Statistics. Milwaukee, WI: ASQC Quality Press, 1990.
Krishnamoorthi, K. S. Reliability Methods for Engineers. Milwaukee, WI: ASQC Quality Press, 1995.
O'Connor, P. D. T. Practical Reliability Engineering. 3rd edition. New York: John Wiley & Sons, 1995.


Sequential Simplex Optimization

Theoretical Optimum

The intersection of the two broken lines in Figure 1 shows the location of the result of a single experiment performed with the conditions or combination of temperature (70°C) and time (30 minutes) for a chemical reaction. The response for this one set of conditions is 68 percent. One experiment gives no information regarding the response surface, which is, in reality, unknown, as indicated by the contour of the experimental space. The shape can only be determined by a series of experiments. Shape is a differential quality, dy/dx; in order to define a shape or surface, there must be a dx. This requires at least two different values for variable x. If we are to understand how a factor affects a response, we must vary it.

[Figure 1: Contour map of the response surface over time (minutes) and temperature (°C), showing the theoretically optimum combination.]

Strategies for Optimization

Shotgun Approach

Randomly changing one or two variables and measuring the response is one approach to optimization. This shotgun approach is sometimes referred to as a stochastic or probabilistic approach to optimization. Figure 2 illustrates the shotgun approach. One advantage of this approach is that if enough experiments are performed, they will cover the entire response surface. This is important if information is needed for the total experimental space. A totally random approach does lead to the possibility of repeating an experiment already covered, and the absolute optimum may or may not be attained.

[Figure 2: Shotgun approach to optimization, shown on the contour map of time (minutes) versus temperature (°C).]

Approach of a Single Factor at a Time

Figure 3 shows the possible result when varying one factor at a time, a frequently used strategy. Consider the seemingly logical statement: "If you want to find out the effect of a factor on a response, you must hold all other factors constant and vary the factor you are investigating." While this approach sounds good, it fails in that the answer is conditional on the values of the other factors and their settings. You are totally ignoring the potential of interactions.

Consider the experiments in Figure 3. Initially, an evaluation of the time variable is conducted by keeping the temperature fixed at the theoretical best setting of 70°C and varying the time around the theoretical point of 30 minutes. This gives the three experimental conditions of A, B, and C. The response of experiment C indicates that an improvement in yield from 68 percent (condition A) to a yield of 78 percent occurs for condition C. The experimenter now holds the time constant at 34 minutes and varies temperature around the theoretical best setting of 70°C by running experiments D and E. The results indicate that the higher temperature of 79°C is the best condition or optimal setting with a yield of 88 percent.

[Figure 3: Approach of one factor at a time, showing experiments A through E on the contour map.]

What has happened in Figure 3 is that the experimenter has become stranded on the oblique ridge. With this knowledge, the experimenter might be led to slightly increase the temperature and hold the time constant at 34 minutes. Doing this would lead to a decrease in yield. The only way to move up the ridge toward the optimum is to change both factors simultaneously. The strategy of a single factor at a time will not accomplish the optimum setting. A true multifactor optimization strategy is required.

Basic Simplex Algorithm

A simplex is a geometric figure that has a number of vertexes (corners). Two and three factors can be visualized easily, but four factors is difficult to illustrate.

Simplex Terminology

1. A simplex is a geometric figure defined by a number of points. If there are k number of factors, then the simplex is defined by k + 1 points. A two-factor simplex is defined by the three points of a triangle. If there are three factors, then the simplex is defined by four points in space (the corners of a tetrahedron).
2. A vertex is a corner of a simplex. It is one of the points that define a simplex.

3. A centroid is the center of mass of all the vertexes in that simplex.
4. A face is that part of the simplex that remains after one of the vertexes is removed.
5. A hyperface is the same as a face, but it is used when the simplex is of four dimensions or greater.
6. The centroid of the remaining hyperface is the center of mass of the vertexes that remain when one vertex is removed from the simplex.

Simplex Calculations

Problem: Two variables will be considered for the optimization of a process. The two independent variables or factors will be time (X1) and temperature (X2). The response variable will be percent yield. An initial simplex will be formed by three vertexes around an initial set of conditions located at 60°C and 20 minutes. At this operating condition, the yield is 36 percent.

When two factors are considered, there are always three vertexes describing the simplex. The three vertexes are labeled B for the vertex giving the best response, N for the next best response, and W for the worst response. The yield for each of these conditions is determined and ranked. This initial simplex is defined by the following three vertexes:

  Simplex 1 coordinates:
  Time         20    23    17
  Temperature  64    58    58
  Yield        42    37    31
  Rank         B     N     W

Figure 4 shows the worksheet for the first simplex.

  Worksheet, simplex no. 1 (Figure 4)

  Factor             X1     X2     Response   Rank
  Retained vertex    20     64     42         B
  Retained vertex    23     58     37         N
  Σ                  43     122
  P = Σ/k            21.5   61
  W                  17     58     31         W
  (P − W)            4.5    3
  R = P + (P − W)    26     64     54         R

Having established an initial set of conditions, the reflected vertex is calculated using the worksheet. This reflected vertex for simplex #1 is time (X1) = 26 minutes and temperature (X2) = 64°C. The result of this experiment yields 54 percent. Of the current four responses, we discard the current worst of 31 percent and retain the remaining vertexes. These responses are reranked B, N, and W, and the data are recorded on the simplex #2 worksheet (Figure 5). The new reflected vertex is determined and run; the resulting response is 50 percent.

  Worksheet, simplex no. 2 (Figure 5)

  Factor             X1     X2     Response   Rank
  Retained vertex    26     64     54         B
  Retained vertex    20     64     42         N
  Σ                  46     128
  P = Σ/k            23     64
  W                  23     58     37         W
  (P − W)            0      6
  R = P + (P − W)    23     70     50         R

Discard the current worst vertex, rerank the remaining three, and record the data on the worksheet for simplex #3 (Figure 6).

  Worksheet, simplex no. 3 (Figure 6)

  Factor             X1     X2     Response   Rank
  Retained vertex    26     64     54         B
  Retained vertex    23     70     50         N
  Σ                  49     134
  P = Σ/k            24.5   67
  W                  20     64     42         W
  (P − W)            4.5    3
  R = P + (P − W)    29     70     66         R

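Each worksheet step above is the same arithmetic: average the retained B and N coordinates to get P, then reflect W through P. A sketch (vertexes as coordinate lists; the function name is illustrative):

```python
def reflect(best, next_best, worst):
    """R = P + (P - W), where P is the centroid of the retained B and N vertexes."""
    p = [(b + n) / 2 for b, n in zip(best, next_best)]
    return [2 * pc - wc for pc, wc in zip(p, worst)]

# Simplex 1: B = (20, 64), N = (23, 58), W = (17, 58)
print(reflect([20, 64], [23, 58], [17, 58]))   # [26.0, 64.0] -> the next trial condition
```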
The response for this reflected vertex is 66 percent. Discarding the current worst vertex corresponding to the yield of 42 percent and reranking generates simplex #4 (Figure 7).

  Worksheet, simplex no. 4 (Figure 7)

  Factor             X1     X2     Response   Rank
  Retained vertex    29     70     66         B
  Retained vertex    26     64     54         N
  Σ                  55     134
  P = Σ/k            27.5   67
  W                  23     70     50         W
  (P − W)            4.5    −3
  R = P + (P − W)    32     64     57         R

The response for the reflected vertex of 32 minutes and 64°C is 57 percent. Discard the current worst, rerank, and calculate the reflected vertex for simplex #5 (Figure 8).

  Worksheet, simplex no. 5 (Figure 8)

  Factor             X1     X2     Response   Rank
  Retained vertex    29     70     66         B
  Retained vertex    32     64     57         N
  Σ                  61     134
  P = Σ/k            30.5   67
  W                  26     64     54         W
  (P − W)            4.5    3
  R = P + (P − W)    35     70     80         R

The worst response of 54 is discarded. The three remaining are ranked as 80 = B, 66 = N, and 57 = W. See Figure 9 for the new simplex.

  Worksheet, simplex no. 6 (Figure 9)

  Factor             X1     X2     Response   Rank
  Retained vertex    35     70     80         B
  Retained vertex    29     70     66         N
  Σ                  64     140
  P = Σ/k            32     70
  W                  32     64     57         W
  (P − W)            0      6
  R = P + (P − W)    32     76     80         R

The reflected vertex generated in simplex #6 gave a yield of 80 percent, the same as the original best for simplex #6. The choice of the vertex for the best and next best does not matter as long as both of the 80 percent yields are used. Discarding the current worst and reranking will establish simplex #7 (Figure 10).

  Worksheet, simplex no. 7 (Figure 10)

  Factor             X1     X2     Response   Rank
  Retained vertex    35     70     80         B
  Retained vertex    32     76     80         N
  Σ                  67     146
  P = Σ/k            33.5   73
  W                  29     70     66         W
  (P − W)            4.5    3
  R = P + (P − W)    38     76     98         R

In a similar manner of rejecting the current worst and reranking the remaining three, we generate the remaining simplexes 8, 9, and 10. See Figures 11, 12, and 13.

  Worksheet, simplex no. 8 (Figure 11)

  Factor             X1     X2     Response   Rank
  Retained vertex    38     76     98         B
  Retained vertex    32     76     80         N
  Σ                  70     152
  P = Σ/k            35     76
  W                  35     70     80         W
  (P − W)            0      6
  R = P + (P − W)    35     82     89         R

  Worksheet, simplex no. 9 (Figure 12)

  Factor             X1     X2     Response   Rank
  Retained vertex    38     76     98         B
  Retained vertex    35     82     89         N
  Σ                  73     158
  P = Σ/k            36.5   79
  W                  32     76     80         W
  (P − W)            4.5    3
  R = P + (P − W)    41     82     95         R

  Worksheet, simplex no. 10 (Figure 13)

  Factor             X1     X2     Response   Rank
  Retained vertex    38     76     98         B
  Retained vertex    41     82     95         N
  Σ                  79     158
  P = Σ/k            39.5   79
  W                  35     82     89         W
  (P − W)            4.5    −3
  R = P + (P − W)    44     76     100        R

The resulting vertex for simplex #10 gives a yield of 100 percent, and the optimization is complete. See the following figure for the complete map of the 10 simplexes (or simplices).

[Figure: Contour map of time (minutes) versus temperature (°C) showing the progression of simplexes 1 through 10 toward the optimum.]

See the following table for a summary of the simplex movements toward optimization:

  Vertex   Type        X1 (time)   X2 (temp.)   Response, percent yield
  1        Initial        20          64         42   Best
  2        Initial        23          58         37   Next best
  3        Initial        17          58         31   Worst
  4        Reflected      26          64         54
  5        Reflected      23          70         50
  6        Reflected      29          70         66
  7        Reflected      32          64         57
  8        Reflected      35          70         80
  9        Reflected      32          76         80
  10       Reflected      38          76         98
  11       Reflected      35          82         89
  12       Reflected      41          82         95
  13       Reflected      44          76         100

Bibliography

Walters, F. H., L. R. Parker, Jr., S. L. Morgan, and S. N. Deming. Sequential Simplex Optimization. Boca Raton, FL: CRC Press, 1991.

Sequential Simplex Optimization, Variable Size

The basic simplex optimization utilizes a simplex that is of a fixed size throughout the optimization process. For two variables, the simplex is a triangle made up of three vertexes (or vertices). For three variables, the simplex is a tetrahedron. Variable size simplexes (or simplices) allow the simplex to expand or contract as a function of the amount of improvement. Consider the following example, which demonstrates the concept of a variable size simplex and a method to evaluate the statistical significance of the improvement.

A clear resin has been developed as a protective coating for biological specimens. The resin is sprayed on and cured for a length of time at a specified temperature. The clarity, which is measured by a bioplasmic reversing polarizer, is the most important characteristic. The greater the number, the better the clarity. A clarity index of >85 is desirable; the best competitor claims an index of 75. The current procedure calls for a temperature (X1) of 20° and a curing time (X2) of 20 minutes. The resulting clarity index is 51.3.

We begin by altering the conditions around the current practice to form a triangle. This is simplex I, which is defined by the following coordinates:

  Vertex number   X1   X2   Responses     Average   Standard deviation   Rank
  1               10   20   50.0, 46.0    48.0      2.83                 W
  2               30   20   54.0, 56.0    55.0      1.41                 N
  3               20   30   57.0, 58.0    57.5      0.71                 B
This is continued until the goal or objective is reached or until no further improvement is possible. regardless of the degree of improvement. Traditionally. The current procedure calls for a temperature (X1) of 20° and a curing time (X2) of 20 minutes. For three variables. A clarity index of >85 is desirable.

05.5 − 55. and worst (W). In reality. the contour. next best (N). If the calculated t-score is greater than the critical t-score. the t-calculated value is given by tcalc = XB − XN . map will not be available. the difference is significant and we continue the study.0 2. 80 100 120 . time 80 60 70 60 60 40 50 90 I 20 40 0 0 20 40 60 X1.92. For each simplex generated.H1317_CH39. 57.55 tcalc = = = 2. S2 S 2 = the average variances of the two sets of data.22 1. 1. The question is. 120 100 50 X2. temperature Figure 1 Simplex I on surface response map. or response. If tcalc is > 2.92. the critical value is 2.48 For 95 percent confidence and two degrees of freedom. it would be prudent to compare the B and the N to see if the differences are statistically significant. Is there a statistically significant difference between the B and N averages? We will answer this question by performing a simple t-test. Figure 1 shows the position of the initial simplex on the contour map of the experimental space. For sample averages derived from two observations.qxd 420 10/15/07 4:49 PM Page 420 The Desk Reference of Statistical Quality Methods We have ranked the responses for the three vertexes according to best (B). the difference is statistically significant.

The reflected R coordinates are calculated as R = P + ( P − W ). The average of the coordinates for the B and N ranks is labeled as P .0 The response for the (E) = 80 Is E > R? .qxd 10/15/07 4:49 PM Page 421 Sequential Simplex Optimization. to the first simplex. evaluate the extended reflection (E) E = R + (P − W ) For the X1 coordinate E = 40 + 15 = 55. and W to determine our next step.0 For the X 2 coordinate E = 30 + 5 = 35. The R vertex is evaluated and recorded on the first worksheet. use simplex B. . Variable Size 421 Having ranked our vertices and tested for statistical significance in the difference between the B and the N.. Referring to Figure 2: Is R > B? . We calculate the P − W values. .NE The response diagram in Figure 3 shows the progression from the initial simplex. I. we now transfer the data to the simplex worksheet. Yes. 1. Factor Simplex no. R. Yes. where W represents the coordinates for the W ranked vertex. . .H1317_CH39. N. The W from the current worksheet is not carried forward to the next worksheet.5 B 30 20 55 N 25 25 10 20 48 W 15 5 40 30 65 R Cw Cr 55 35 80 E We now need to review the responses for the B. I 1 X1 X2 Response Rank 20 30 57. .

..NE Use simplex B. the B.NCw No Yes Discontinue Figure 2 Simplex decision map. and W Evaluate R R>B No No R>N Yes Yes No R>W Yes Evaluate E No E>R Yes Use simplex B.. a W value is never carried forward. and E coordinates will be carried forward.H1317_CH39. From the current initial simplex worksheet. Remember.NR Use simplex B. They will be reranked.qxd 422 10/15/07 4:49 PM Page 422 The Desk Reference of Statistical Quality Methods Initial simplex Continue experimentation Rank trials as B.NCr Have objectives been met? Use simplex B. N. N. and the W from this worksheet will not be carried forward...

Factor Simplex no. time 70 60 60 40 50 I 90 1 20 90 80 40 0 0 20 40 60 80 100 120 X1.5 30 20 55 W 7. temperature Figure 3 Surface response map showing the first two simplexes. I 2 X1 X2 Response Rank 55 35 80 B 20 30 57. 423 .5 45 45 75 R Cw Cr E See Figure 4 for the responses after generating the second simplex.5 32. I and 1.qxd 10/15/07 4:49 PM Page 423 Sequential Simplex Optimization.H1317_CH39. Variable Size 120 100 50 80 60 X2.5 12.5 N 37.
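The decision map above reduces to a few comparisons on the responses; a sketch (the return labels name the vertexes carried forward):

```python
def next_simplex(r, e, b, n, w):
    """Decision logic from the simplex decision map; arguments are responses."""
    if r > b:
        # Reflection beat the best vertex: try the expansion E if it was run
        return "B,N,E" if (e is not None and e > r) else "B,N,R"
    if r > n:
        return "B,N,R"    # between best and next best: keep the reflection
    if r > w:
        return "B,N,Cr"   # between next best and worst: contract on the R side
    return "B,N,Cw"       # worse than W: contract toward W

print(next_simplex(r=65, e=80, b=57.5, n=55, w=48))    # B,N,E  (worksheet I -> 1)
print(next_simplex(r=75, e=None, b=80, n=57.5, w=55))  # B,N,R  (worksheet 1 -> 2)
```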

and B. . 1.H1317_CH39. The responses for the vertices in worksheet I ← 1 that were labeled as B. These reranked vertices are recorded on worksheet 1 ← 2. time 70 60 60 2 40 50 I 90 1 20 40 0 0 20 40 60 80 100 120 X1. and E have been relabeled as N.NR The vertices B. temperature Figure 4 Surface response map for simplexes I. and R will be used to generate worksheet 2 ← 3. W. and 2.qxd 424 10/15/07 4:49 PM Page 424 The Desk Reference of Statistical Quality Methods 120 100 50 80 60 X2. Is R > B? No Is R > N? Yes Use simplex B. N. respectively. N. The data in worksheet 1 ← 2 were obtained from worksheet I ← 1..
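The reflection and expansion arithmetic used in the worksheets can be scripted directly. The sketch below is ours (the function names are not the book's); it reproduces the worksheet I ← 1 coordinates:

```python
# Sketch of the simplex reflection/expansion step described above.
# Vertex coordinates are (X1, X2) tuples; B, N, W are the best, next-best,
# and worst vertices of the current simplex.

def centroid(b, n):
    """P: average of the B and N coordinates."""
    return tuple((bi + ni) / 2 for bi, ni in zip(b, n))

def reflect(p, w):
    """R = P + (P - W)."""
    return tuple(pi + (pi - wi) for pi, wi in zip(p, w))

def expand(r, p, w):
    """E = R + (P - W)."""
    return tuple(ri + (pi - wi) for ri, pi, wi in zip(r, p, w))

# Worksheet I <- 1 example from the text:
B, N, W = (20, 30), (30, 20), (10, 20)
P = centroid(B, N)   # -> (25.0, 25.0)
R = reflect(P, W)    # -> (40.0, 30.0)
E = expand(R, P, W)  # -> (55.0, 35.0)
print(P, R, E)
```

Each new vertex is then run as an experiment and its measured response is entered in the worksheet.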

Worksheet 2 ← 3:

  X1     X2     Response   Rank
  55     35     80         B
  45     45     75         N
  50     40                (P)
  20     30     57.5       W
  30     10                (P − W)
  80     50     82         R
                           Cw
                           Cr
  110    60     65         E

Is R > B? Yes (82 > 80); evaluate E. Is E > R? No (65 < 82); therefore, use simplex B,N,R.

Worksheet 3 ← 4:

  X1     X2     Response   Rank
  80     50     82         B
  55     35     80         N
  67.5   42.5              (P)
  45     45     75         W
  22.5   −2.5              (P − W)
  90     40     87         R
                           Cw
                           Cr
  112.5  37.5   79         E

Is R > B? Yes (87 > 82); evaluate E. Is E > R? No (79 < 87); therefore, use simplex B,N,R.

Worksheet 4 ← 5:

  X1     X2     Response   Rank
  90     40     87         B
  80     50     82         N
  85     45                (P)
  55     35     80         W
  30     10                (P − W)
  115    55     65         R
  15      5                (P − W)/2
  70     40     90         Cw
                           Cr
                           E

Is R > B? No. Is R > N? No. Is R > W? No; therefore, use simplex B,N,Cw. The use of the Cw (contracted toward the W vertex) indicates that an improvement can be found midway between the W vertex and the midpoint of the line described by vertices N and B. The generalized relationship of the various expansions and contractions can be seen in the following diagram (shown here as the order of the vertices along the W-to-P direction): W, then Cw, then the B–N midpoint P, then Cr, then R, then E.

Worksheet 5 ← 6:

  X1     X2     Response   Rank
  70     40     90         B
  90     40     87         N
  80     40                (P)
  80     50     82         W
   0    −10                (P − W)
  80     30     90         R
                           Cw
                           Cr
  80     20     82         E

Is R > B? No; R = B (both respond 90). No further improvement can be made; discontinue using the simplex. The completed surface response map can be seen in Figure 5, which shows all the simplexes, I through 5.

Figure 5 Completed simplex with response contours (X1 = temperature, X2 = time).
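The full decision map of Figure 2, including the contraction vertices, can also be expressed in code. This sketch is ours: the contraction formulas Cw = P − (P − W)/2 and Cr = P + (P − W)/2 follow the diagram above, and the decision function mirrors the flowchart.

```python
# Sketch of the Figure 2 decision map. Responses are scalars (higher is
# better); coordinates are (X1, X2) tuples. Names are ours, not the book's.

def contract_w(p, w):
    """Cw = P - (P - W)/2: halfway between W and the B-N midpoint P."""
    return tuple(pi - (pi - wi) / 2 for pi, wi in zip(p, w))

def contract_r(p, w):
    """Cr = P + (P - W)/2: halfway between P and the reflection R."""
    return tuple(pi + (pi - wi) / 2 for pi, wi in zip(p, w))

def next_simplex(resp_b, resp_n, resp_w, resp_r, resp_e=None):
    """Return which vertices form the next simplex, per the decision map."""
    if resp_r > resp_b:
        # R beat the best vertex: keep the expansion E if it beat R.
        return "B,N,E" if (resp_e is not None and resp_e > resp_r) else "B,N,R"
    if resp_r > resp_n:
        return "B,N,R"
    if resp_r > resp_w:
        return "B,N,Cr"
    return "B,N,Cw"

# Worksheet I <- 1: R (65) beat B (57.5) and E (80) beat R -> keep B, N, E.
print(next_simplex(57.5, 55, 48, 65, resp_e=80))   # B,N,E
# Worksheet 4 <- 5: R (65) beat nothing -> contract toward W.
print(next_simplex(87, 82, 80, 65))                # B,N,Cw
print(contract_w((85, 45), (55, 35)))              # (70.0, 40.0)
```

The last call reproduces the Cw vertex (70, 40) that yielded the response of 90 in worksheet 4 ← 5.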

Short-Run Attribute Control Chart

Traditional control charts for attributes include the p chart and np chart for defectives, where the sample size is variable and constant, respectively, and the u chart and c chart for defects, where the sample size is variable and constant, respectively. In all of these cases, it is assumed that the process average is unique to a particular process or product and is used to provide a historical basis for detecting a process change.

For example, if the process is one that photocopies 8.5″ × 11″ paper on one side where the typical run size is 100 sheets, the average number of defects per unit u might be 0.02. This information could be used to construct a u chart:

UCL = Ū + 3√(Ū/n) and LCL = Ū − 3√(Ū/n)

where Ū = the average defects per unit.

If the size of the paper changes and a more complex operation such as two-sided copying and collating is required, then the process average number of defects per unit might be different. If this occurs, then the entire control chart changes, and another control chart will need to be developed for the redefined process using its unique process average.

Rather than plot the U value for each sample taken, the corresponding Z-score may be plotted. By plotting the Z value for each sample, we are standardizing the data. Z-scores are determined by calculating the magnitude of the difference in a reported value and its expected average in units of standard deviation. In the case of variables data, a Z-score is determined by

Z = (Xi − X̄)/σ

where: Xi = individual value, X̄ = average, and σ = standard deviation.

If we replace the terms for a u chart, the corresponding Z-score becomes

Z = (U − Ū)/√(Ū/n)

where: U = individual sample defects/unit, Ū = average defects per unit, and n = sample size.

The Ū value is unique to a specific product. The Ū value for 8.5″ × 11″ paper was 0.02, and for two-sided 11″ × 17″ paper, the Ū value might be 0.12. If the Ū value is known for each product, then the Z-scores may be determined and plotted on a universal chart:

The lower control limit (LCL) = −3
The upper control limit (UCL) = +3
The process average = 0
The individual plot points are Z-scores.

Development of the short-run u chart is as follows:

LCL < U < UCL
Ū − 3√(Ū/n) < U < Ū + 3√(Ū/n)
−3√(Ū/n) < U − Ū < +3√(Ū/n)
−3 < (U − Ū)/√(Ū/n) < +3

Since the objective of all control charts is to determine whether the process average changes relative to the historical process average or target process average, we will refer to the process average as the target average, or Ut. Each product has a unique value of Ut, which may come from:

1. Previous control charts for that product or item
2. An assumed level of defects/unit

Case Study

The Ionic Corporation manufactures depth-finding units and has several models that are produced in small lots. The product line is broad, ranging from relatively simple units for consumer use to sophisticated commercial units.
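The short-run u chart plot point can be sketched as a small helper function. The sample values below are illustrative only (an assumed target of 0.20 defects/unit):

```python
import math

def short_run_u_z(defects, n, u_target):
    """Z = (U - Ut) / sqrt(Ut / n), plotted against fixed limits of +/-3."""
    u = defects / n
    return (u - u_target) / math.sqrt(u_target / n)

# Illustrative sample: 5 defects found in 18 units, assumed Ut = 0.20.
z = short_run_u_z(5, 18, 0.20)
print(round(z, 2))           # 0.74
out_of_control = abs(z) > 3  # False for this sample
```

Because the limits are fixed at ±3 and the center line at 0, samples from different products (each with its own Ut) can share one chart.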

27 0. as would be required for a traditional u chart. and the process for manufacturing is more or less the same.28 0.17 As a quality engineer.H1317_CH40. Only the complexity of the unit produced changes. Z= U − Ut Ut n where: n = sample size U t = target average defects/uniit for specific part or model Defects per unit U 0. Part A B C D Target Ut 0.qxd 10/18/07 12:35 PM Page 431 Short-Run Attribute Control Chart 431 consumer use to sophisticated commercial units that are much more complex. Data from the final inspection audits for four production models give the following historical process average defects/unit Ut.05 0. Calculate a Z-score for each sample collected.13 0. Collect historical data.18 0. you would like to establish a u chart using the short-run technique.19 0.00 0. Step 1.09 0.08 .20 0.10 0. All of the units produced are inspected and tested to a rigorous format.08 0.09 0. The following information is available for the past 13 production runs: Date Product Sample size n Number of defects c 11/2 11/3 11/4 11/4 11/5 11/5 11/6 11/6 11/7 11/8 11/8 11/9 11/9 A C D A C D A D D B C A B 18 23 7 11 38 16 7 23 11 42 17 13 26 5 2 1 2 4 3 0 2 3 2 3 1 2 Step 2. The sample size represents 100 percent of the units produced.28 0.11 0. You will use a short-run method because there is no real long run of any specific model.14 0.

08 +1.13 23 Continue calculation of all 13 samples.28 0.00 0.28 18 Second sample: Part = C U t = 0.28 − 0.53 +0.53 0.32 –0.20 = +1.11 0.93 +0.28 n 18 U − Ut Z= Ut n 0. Date Product Sample size n Number of defects c Defects per unit U Z-score 11/2 11/3 11/4 11/4 11/5 11/5 11/6 11/6 11/7 11/8 11/8 11/9 11/9 A C D A C D A D D B C A B 18 23 7 11 38 16 7 23 11 42 17 13 26 5 2 1 2 4 3 0 2 3 2 3 1 2 0.15 –0.09 0.27 0.09 − 0.19 0.80 –1.72 +0.09 0.09 n 23 U − Ut Ut n Z= 0.34 +0.20 n = 18 c=5 U= Z= c 5 = = 0.13 = −0.32 .28 0.02 +1.05 0.16 –0.18 0.28 0.qxd 432 10/18/07 12:35 PM Page 432 The Desk Reference of Statistical Quality Methods First sample: Part = A U t = 0.19 –1.08 0.18 –0.97 –0.13 n = 23 c=2 U= Z= c 2 = = 0.14 0.H1317_CH40.

42 0.64 History x +3 +2 +1 0 –1 x x x x x x x x x x x x x x x x x x x x x –2 –3 Product: Date: A C D A C D A D D B C A B C A B C D B A 11/2 11/4 11/5 11/6 11/7 11/8 11/9 11/10 11/11 11/12 B C 11/14 A process change is indicated with sample #22 occurring on 11/14 with product C. Continue to collect and plot data. .75 +1.32 +0.33 0.00 +2.87 +1. History +3 +2 x x +1 x 0 x x x x x x –1 x x x x –2 –3 Product: Date: A C D A C D A D D B C A B 11/2 11/4 11/5 11/6 11/7 11/8 11/9 Step 4.20 0.H1317_CH40.41 0. Record and plot historical data on the control chart. looking for evidence of a process change.43 +0.26 0.27 0. Date Product Sample size n Number of defects c Defects per unit U Z-score 11/10 11/10 11/11 11/11 11/12 11/12 11/13 11/14 11/14 C A B C D B A B C 40 19 23 15 30 25 9 18 22 5 8 6 3 8 2 3 4 9 0.13 0.08 0.33 –0.61 +3.qxd 10/18/07 12:35 PM Page 433 Short-Run Attribute Control Chart 433 Step 3.22 0.14 +2.

and the department manager wants to use a short-run p chart to monitor the process.022 0. Example: A manufacturer of noninterruptible power supplies has an operation that makes wiring harnesses that are used on its product lines.100 .07 0. In some cases.H1317_CH40.077 0. This production is assembled once or twice a week to produce harnesses that will be used for the next production period. There are several different styles of harnesses.071 0.qxd 10/18/07 434 12:35 PM Page 434 The Desk Reference of Statistical Quality Methods Short-Run p Chart The same principle used to develop the u chart is used for the p chart. only the standard deviation calculation changes. Given the following information.03 0. develop the short-run p chart.086 0.065 0.052 0.013 0. the lot size for orders may be as high as 100.018 0. The harness production line consists of four workers who normally work in other departments. Model number A B C D E Process average proportional defective. The calculation for the plotting characteristic Z using the appropriate estimate for the standard deviation is given by Z= P − Pt Pt (1 − Pt ) n . A value for the target process average proportion Pt must be given for each product. The manager wants only one chart to monitor the department performance.167 0.11 0.045 0.037 0.09 Production data: Date Product Lot size Number defective Proportional defective P 4/5 4/5 4/5 4/11 4/11 4/11 4/11 4/15 4/15 4/15 4/15 4/19 4/19 A D B C B E D E C B D C E 45 123 88 28 35 58 75 95 42 65 27 55 40 1 8 4 2 3 3 1 11 7 5 1 1 4 0. Pt 0.05 0. but there are many runs of different styles.116 0.

100 –0.03(1 − 0.66 0.052 0.077 0.03(1 − 0.18 0.167 0.065 0. Date Product Lot size Number defective Proportional defective P ZP 4/5 4/5 4/5 4/11 4/11 4/11 4/11 4/15 4/15 4/15 4/15 4/19 4/19 A D B C B E D E C B D C E 45 123 88 28 35 58 75 95 42 65 27 55 40 1 8 4 2 3 3 1 11 7 5 1 1 4 0.31 0.045 0.89 1.03 ) 45 Z= 0.071 0.76 –1.020 t = −0.116 0.022 − 0.31 –2.08 –0.03 ) 45 Z= Continue to calculate the Z-scores for the remaining data and record.013 0.47 0.01 –1.22 –0.76 0.018 0.H1317_CH40.022 0.18 0.086 0.31 0.020 = −0.37 –1.22 Plot the ZP scores: History +3 +2 +1 0 –1 –2 x x x x x x x x x x x x x –3 Product: Date: A D B C B E D E C B D C E 4/5 4/5 4/11 4/11 4/15 4/15 4/19 .037 0.qxd 10/18/07 12:35 PM Page 435 Short-Run Attribute Control Chart 435 Calculate the Z-scores: Sample #1 P − Pt Z= Pt (1 − Pt ) n Sample #2 P − Pt Z= Pt (1 − Pt ) n 0.022 − 0.
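The p-chart plot point lends itself to the same treatment. A brief sketch, using sample #1 from the case study (product A, target Pt = 0.03):

```python
import math

def short_run_p_z(defectives, lot_size, p_target):
    """ZP = (P - Pt) / sqrt(Pt * (1 - Pt) / n), plotted against +/-3 limits."""
    p = defectives / lot_size
    return (p - p_target) / math.sqrt(p_target * (1 - p_target) / lot_size)

# Sample #1: product A, lot of 45, 1 defective, Pt = 0.03.
print(round(short_run_p_z(1, 45, 0.03), 2))   # -0.31
```

Only the standard deviation estimate differs from the u-chart version; the chart itself (center 0, limits ±3) is identical.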

Continue to collect data, plot points, and look for signs of a process change:

  Date   Product   Lot size   Number defective   Proportional defective P
  4/20   B         85         10                 0.118
  4/20   E         30          4                 0.133
  4/21   A         75          7                 0.093
  4/22   C         32          3                 0.094
  4/23   B         48          4                 0.083

The corresponding ZP scores are calculated as before and added to the chart.

[History chart: products A, D, B, C, B, E, D, E, C, B, D, C, E, B, E, A, C, B plotted from 4/5 through 4/23]

Completed short-run p chart

Bibliography

Besterfield, D. H. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall, 1994.
Montgomery, D. C. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons, 1996.
Wheeler, D. J. Short Run SPC. Knoxville, TN: SPC Press, 1991.

Short-Run Average/Range Control Chart

The traditional implementation of statistical process control (SPC) requires that a minimal number of sample observations be obtained to characterize the process or set up a historical reference against which future changes in the process are monitored. The recommended number of subgroups required for such historical characterization varies; for the average/range control chart, a minimum of 25 subgroups is suggested, each of a subgroup size of four or five. However, due to low production volumes in a just-in-time operation, the opportunity to measure 125 units of production would be impractical, if not impossible.

The concept for short-run average/range charts is based on a normalization of the statistic we will use to monitor changes in the location (central tendency) and variation. We will be measuring the amount of change for a specific characteristic relative to the expected value for that characteristic, and this will be done for several different products. These historical expected values are established with historical data. In many manufacturing processes, by consolidating the measurements of a universal characteristic that are common to several different products into one population, we may monitor this characteristic as if it originated from a single process. Each of the products, all having the same characteristic, will have a unique expected value for both the location statistic and the variation statistic.

The following derivations are made to generate the new short-run plotting characteristic that will replace the traditional average and range used in the X̄/R chart. Here X̄T and R̄T denote the target (historical) average and average range, which have unique values for each product.

For the Short-Run Average Chart

Initial premise: LCLX̄ < X̄ < UCLX̄

Replace the lower control limit (LCL) and upper control limit (UCL) with the defining equation:

X̄T − A2R̄T < X̄ < X̄T + A2R̄T

Subtract X̄T from all terms:

−A2R̄T < X̄ − X̄T < +A2R̄T

Divide all terms by R̄T:

−A2 < (X̄ − X̄T)/R̄T < +A2

Rather than plotting the average X̄, we will plot (X̄ − X̄T)/R̄T. Control limits for (X̄ − X̄T)/R̄T are UCL = +A2 and LCL = −A2. The value for A2 depends on the subgroup sample size.

For the Short-Run Range Chart

Initial premise: LCLR < R < UCLR

Replace LCL and UCL with the formula:

D3R̄T < R < D4R̄T

Divide all terms by R̄T:

D3 < R/R̄T < D4

Rather than plotting the range R, we will plot R/R̄T. There will be a unique R̄T value for each product. Control limits are UCL = D4 and LCL = D3.

Each of the plotting parameters requires a unique value for the X̄T and R̄T for each part or unit of inspection. The following sections explain the several sources for these values.
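The normalized plot points can be computed directly from a subgroup. A brief sketch (the subgroup values are made up for illustration; the A2, D3, and D4 constants for n = 3 are taken from the standard Shewhart tables):

```python
# Short-run average/range plot points for one subgroup (n = 3).
A2, D3, D4 = 1.023, 0.0, 2.574   # Shewhart constants for subgroup size 3

def plot_points(subgroup, x_target, r_target):
    """Return ((xbar - Xt)/Rt, R/Rt) for one subgroup."""
    xbar = sum(subgroup) / len(subgroup)
    rng = max(subgroup) - min(subgroup)
    return (xbar - x_target) / r_target, rng / r_target

# Hypothetical subgroup against targets Xt = 30.0, Rt = 13.5:
avg_pt, rng_pt = plot_points([31.2, 34.8, 38.4], 30.0, 13.5)
print(round(avg_pt, 2), round(rng_pt, 2))   # 0.36 0.53
in_control = (-A2 < avg_pt < A2) and (D3 <= rng_pt <= D4)
```

Every product is reduced to the same dimensionless scale, so subgroups from different models can be plotted in sequence on one chart.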

For X̄T:

1. Use X̄ from the previous control chart data
2. Calculate the average from any available data
3. Let the nominal = X̄T

Note: If the nominal is the choice for an estimate of the average, there is a requirement. The absolute value of the difference between the sample average and the nominal must be less than a critical value, the critical difference:

Cd = FcS

where: Cd = the critical difference, Fc = a factor dependent on the sample size m from which the standard deviation was calculated, and S = the sample standard deviation.

Rule: If |Nominal − Average| ≥ FcS, the average of the m observations should be used rather than the nominal.

Fc factors:

  Sample size m   Fc        Sample size m   Fc
  4               1.1765    12              0.5185
  5               0.9535    13              0.4942
  6               0.8226    14              0.4733
  7               0.7344    15              0.4547
  8               0.6699    16              0.4383
  9               0.6200    17              0.4235
  10              0.5796    18              0.4101
  11              0.5463    19              0.3978

Example 1: Five measurements are available, and the specification nominal is 10.000. Should we use the nominal or the average of the five measurements for the target X̄?

Data: 10.008, 10.102, 9.993, 10.101, 10.007

The sample average is X̄ = 10.042, and the sample standard deviation is S = 0.054. With m = 5, the table value for Fc = 0.9535.

Critical difference Cd = FcS = 0.9535 × 0.054 = 0.0515

The absolute difference between the nominal and the sample average is |10.000 − 10.042| = 0.042. Since 0.042 is not greater than the critical difference of 0.0515, we can use the nominal in place of the average of the five measurements.

For R̄T:

1. Use R̄ from previous control chart data.
2. Calculate R̄ from the sample standard deviation S. R̄ is derived from S as follows: σ = R̄/d2 and σ = S/c4; therefore, R̄/d2 = S/c4, and

R̄ = (d2/c4)S

The value for d2 depends on the selected subgroup sample size, and the value for c4 depends on the sample size n from which the sample standard deviation S is calculated. The values for d2 and c4 can be found in Table A.7 in the appendix.

3. Calculate R̄ from an assumed Cpk:

R̄ = (USL − LSL)d2/(6Cpk)

where: d2 = factor dependent on the selected subgroup sample size, USL = upper specification limit, and LSL = lower specification limit. For a unilateral specification, we may use

R̄ = (USL − X̄T)d2/(3Cpk) or R̄ = (X̄T − LSL)d2/(3Cpk)

Case Study

A product is routinely sampled and brought into a testing laboratory for evaluation. The products are all different models, but the characteristic being measured is common to all products. The production schedule consists of several product changes and is relatively short for a given product design. Your objective is to establish a short-run control chart. You will be working with three different product models, using a subgroup sample size of n = 3; therefore, d2 = 1.693.

Model A: This is a new design, and a temporary specification of 30 ± 24 has been issued. The expected process capability index (Cpk) is 1.00.

Model B: The specification is 150 ± 20, and you have test data for five units tested last month.

Model C: Another new model is added with an expected Cpk of 1.20 and a specification of 400 ± 20.

Step 1. Establish working values for target X̄ and R̄ for each model.

Model A, X̄A determination: Based on an expected Cpk = 1.00 and a specification requirement of 30 ± 24, the expected target average X̄ will be set at the nominal: X̄A = 30.0.

Model A, R̄A determination: Use an expected Cpk = 1.00, the specification of 30 ± 24 (USL = 54.0, LSL = 6.0), and the subgroup sample size of n = 3:

R̄ = (USL − LSL)d2/(6Cpk) = (54.0 − 6.00)(1.693)/[(6)(1.00)] = 81.264/6.00

R̄ = 13.5

Model B, X̄B determination: Five measurements have been obtained and yield an average of X̄ = 160.56 and a sample standard deviation of S = 4.79. You may consider using the specification nominal of 150.00 for the target X̄, but you must first test to see if this is statistically acceptable. If it is not, you will use the average of the five available data points instead of the nominal.

Critical difference: Cd = FcS = (0.9535)(4.79) = 4.57

If the absolute difference between the average of the five measurements and the nominal exceeds the critical difference Cd, then use the average of the five measurements rather than the nominal for X̄B.

Difference = |Nominal − Average| = |150.0 − 160.56| = 10.56

Since 10.56 > 4.57, use X̄B = 160.56.

Model B, R̄B determination: The standard deviation S from the five measurements will be used to estimate the expected range for model B when subgroups of n = 3 are taken:

R̄ = (d2/c4)S

with d2 = 1.693 for subgroup size n = 3 and c4 = 0.9400 for a sample standard deviation S calculated from five measurements:

R̄B = (1.693/0.9400)(4.79) = 8.63

Model C, X̄C determination: The specification is 400 ± 20, the expected Cpk = 1.20, and no prior data are available; therefore, we will set the target X̄ equal to the nominal of 400: X̄C = 400.0.

Model C, R̄C determination: Using the specification of 400 ± 20, the expected Cpk of 1.20, and the subgroup sample size of n = 3:

R̄ = (USL − LSL)d2/(6Cpk) = (420 − 380)(1.693)/[(6)(1.20)] = 67.72/7.20

R̄ = 9.41

Summary of X̄ and R̄ by model:

  Model   X̄        R̄
  A       30.00    13.50
  B       160.56    8.63
  C       400.00    9.41

These values are recorded in the upper-left portion of the short-run X̄/R chart form.
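The three R̄ estimates and the critical-difference test are easy to mechanize. A sketch with the case-study numbers (d2, c4, and Fc values taken from the tables; the function names are ours):

```python
# Estimating the target range Rbar and testing nominal vs. average.
# d2 = 1.693 (subgroup n = 3), c4 = 0.9400 (S from m = 5), Fc = 0.9535 (m = 5).

def rbar_from_cpk(usl, lsl, d2, cpk):
    """Rbar = (USL - LSL) * d2 / (6 * Cpk)."""
    return (usl - lsl) * d2 / (6 * cpk)

def rbar_from_s(s, d2, c4):
    """Rbar = (d2 / c4) * S."""
    return (d2 / c4) * s

def nominal_ok(nominal, average, s, fc):
    """True if the nominal may serve as the target average."""
    return abs(nominal - average) < fc * s

print(round(rbar_from_cpk(54.0, 6.0, 1.693, 1.00), 1))   # 13.5 (model A)
print(round(rbar_from_s(4.79, 1.693, 0.9400), 2))        # 8.63 (model B)
print(nominal_ok(150.0, 160.56, 4.79, 0.9535))           # False -> use 160.56
```

The last line reproduces the model B decision: the average (160.56) is too far from the nominal (150.0) to use the nominal as the target.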

4 34.7 161.1 41.6 160.2 161.0 405.3 30.8 157.5 162.4 163.9 156.9 158.0 400.8 161.6 3.3 3.4 165.0 159.8 7.4 153.2 37.1 44.9 161.0 161.0 37.3 11.8 3.0 399.1 Model Sample 6 B Sample 7 B Sample 8 B Sample 9 C Sample 10 C Model Measurement (1) (2) (3) 163.7 160.3 21.3 9. Collect data.0 405.0 Sample 11 A Sample 12 B Sample 13 B Measurement (1) (2) (3) 44.9 43.3 8.0 161.6 161.9 Average Range 39.4 8.4 160.1 33.6 35.2 6.7 159.3 159.0 416.0 395.9 37.2 154.7 Average Range 34.1 37.3 Model . Sample 1 A Sample 2 A Sample 3 A Sample 4 B Sample 5 B Measurement (1) (2) (3) 37.3 411.0 17.H1317_CH41.0 403.6 157.2 11.1 161.4 4.0 Average Range 161.3 161.2 162.1 26.qxd 10/15/07 3:48 PM Page 443 Short-Run Average/Range Control Chart 443 Step 2.2 157.

recorded on the chart form. RA 13. range.8 Range R: 7. and the points are connected to the first subgroup point. Subgroup 1: Model A The plotting points are calculated.00 = = 0.1 26. Measurements: 41.1 37.53.0 − 30. average plot point.qxd 444 10/15/07 3:48 PM Page 444 The Desk Reference of Statistical Quality Methods Step 3. and range plot point.0 Range R: 17.2 = = 0. and plot the points on the chart.2 The plot point for the average is X − X A 34. Calculate the average.36.0 The plot point for the average is X − X A 37.50 The plot point for the range is R 17.1 Average X : 34. RA 13. Measurements: 37.8 − 30.26.00 = = 0. RA 13.50 Subgroup 2: Model A The second subgroup metrics are calculated and plotted.52.9 43. and plotted.3 30.1 Average X : 37.H1317_CH41.50 The plot point for the range is R 7.50 .0 = = 1. RA 13.

The process exhibits good statistical control with no points outside the upper or lower control limits for both the average and range portions. The limits will be ± A2.2 11.qxd 10/15/07 3:48 PM Page 445 Short-Run Average/Range Control Chart 445 Subgroup 3: Model A Calculate.63 R 3. and connect to subgroup point 3.53. RA 13.63 The remaining subgroup samples 5 through 13 are calculated and plotted on the chart.1 33.7 The plot point for the average is X − X B 159.3 3. See Table A.3 The plot point for the average is X − X A 37.1 Average X : – Range R : 37. For n ⫽ 3.50 Subgroup 4: Model B Note that new target values for X and R must be used. plot. respectively.4 Average X : – Range R : 159.56 = = 0. n.2 − 30.9 158.4 34. and D4 is 2. D3 is undefined.43. Add the UCL/LCL for the average chart.3 = = 0.84. The limits for the range chart will be D3 and D4. Measurements: 44.3 − 160.023. RA 13.7 in the appendix.H1317_CH41. for the LCL and UCL.7 = = 0.574.15. RB 8.00 = = 0. Step 4.5 The plot point for the range is R 11. A2 for n ⫽ 3 is 1. RB 8.6 157. The UCL/LCL will be a function of the subgroup size. Measurements: 161. .

0 x x x x x 0 x x x x x x x x –1.02 +1. additional data are collected.0 LCL = –1. do you feel that a process change has occurred? If so.02 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 Sample number 2.0 x x x 1 3 x x x x 2 x 4 x 5 6 7 8 x x 9 10 11 12 13 14 15 16 17 18 19 Step 5. looking for signs of a process change.0 R/Rt x 1.57 UCL = 2.qxd 10/15/07 446 3:48 PM Page 446 The Desk Reference of Statistical Quality Methods See the following completed control chart with all of the historical data points plotted. Continue to collect data into the future. (X – Xt)/Rt History UCL = +1. During the next few days.H1317_CH41. Based on these observations.57 x 2. was there a change in the average or in the variation? .

3 157.02 –0.28 –0.36 –0.7 384.07 0.53 1.17 2.2 3.70 –0.5 9.2 153.4 388.9 392.0 390.1 153.0 11.36 0.95 0.0 9.2 25.1 7.50 1.3 161.0 37.99 .4 Sample 18 C A summary of all subgroup calculations follows: Subgroup number 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 Model Average X Range R A A A B B B B B C C A B B B A A B C C 34.6 4.26 0.2 149.05 0.2 9.4 154.15 –0.70 0.52 0.14 –0.2 6.0 8.3 39.1 3.3 9.5 6.9 Average Range 154.3 390.2 17.2 Average Range 384.0 21.3 159.0 158.2 152.7 3.2 388.4 29.8 161.93 1.3 162.0 26.2 25.76 0.3 3.6 159.0 11.14 0.2 7.qxd 10/15/07 3:48 PM Page 447 Short-Run Average/Range Control Chart 447 Sample 14 B Sample 15 A Sample 16 A Sample 17 B Measurement (1) (2) (3) 150.3 403.37 0.53 1.7 26.36 0.7 386.5 Model Sample 19 C Model Measurement (1) (2) (3) 394.9 7.4 53.9 9.8 37.1 388.4 X − XT RT R RT 0.69 0.86 –1.H1317_CH41.84 0.5 26.0 28.43 0.19 –0.2 160.2 8.46 0.5 9.20 –1.35 0.23 0.0 21.4 405.1 9.15 0.56 0.37 0.1 6.53 –0.35 0.60 0.1 26.2 159.2 23.7 3.
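Once the normalized plot points are tabulated, as in the summary above, screening them against the short-run limits is mechanical. A hypothetical sketch (the plot-point values here are illustrative, not taken from the table):

```python
# Flag short-run average plot points outside +/-A2 and range plot points
# outside (D3, D4); constants below are for subgroup size n = 3.
A2, D3, D4 = 1.023, 0.0, 2.574

def out_of_control(avg_points, range_points):
    """Return a True/False flag per subgroup pair of plot points."""
    flags = []
    for a, r in zip(avg_points, range_points):
        flags.append(abs(a) > A2 or not (D3 <= r <= D4))
    return flags

print(out_of_control([0.36, -0.15, 1.40], [0.53, 1.30, 0.70]))
# [False, False, True] -- the third subgroup's average breaches +A2
```

A run of such flags going from False to True is exactly the kind of shift the case-study chart reveals near its final samples.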

[Completed control chart: average plot points (X̄ − Xt)/Rt with UCL = +1.02 and LCL = −1.02; range plot points R/Rt with UCL = 2.57, samples 1 through 19]

The control chart indicates an out-of-control condition beginning with sample 18.

Bibliography

Besterfield, D. H. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall, 1994.
Montgomery, D. C. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons, 1996.
Wheeler, D. J. Short Run SPC. Knoxville, TN: SPC Press, 1991.

One model might have a mounting angle of 55°. the location statistic is the individual observation and its relationship to the average of all the individuals. Control limits for the individuals are based on the overall average ±3 standard deviations. tolerance. The time to acquire this number of individual data points for several different products makes it very difficult to establish a traditional individual/moving range control chart.66 MR . Examples of this might include the angle of a mounting bracket for several different brackets. and another might have a requirement of 60°.qxd 10/15/07 3:50 PM Page 449 Short-Run Individual/ Moving Range Control Chart Individual/moving range control charts are useful when the opportunity for data is limited and the data are from a family of products that have the same characteristic but the target. The measured characteristic will be the melt index. The three standard –– –– deviations are calculated from 3S = 2. Specifications for the requirements of four products are as follows: Product A B C D Melt index 25–75 88–120 18–36 133–150 A traditional individual/moving range control chart requires between 60 and 100 observations to characterize the process. The following derivations will be used to develop a format for short-run statistical process control (SPC). Another example might be a batch manufacturing process for the manufacture of hot melt adhesive. or the average and standard deviation for different products varies. By deriving a few relationships from the traditional methods and by making a few assumptions. Derivation of the Plotting Characteristic for the Location Statistic In a traditional individual/moving range control chart. 449 . where MR is the average moving range.H1317_CH42. we can place related data from several products on a single control chart using short-run techniques.

66MR – Subtracting X from all terms: –– – –– −2. 3. there will be a unique average and average moving range. the new plot point for the individuals will be X – .66MR –– 4.66 MR X−X Using this transformation. This unique average and moving range will be identified as the target average XT and the target – moving range MR T. Derivation of the Plotting Characteristic for the Variation Statistic The traditional characteristic for the location statistic is the moving range and how it behaves relative to the upper control limit (UCL).268MR > MR. Therefore. − MR T MR T –– The traditional UCL for the moving range is given by UCLR = 3.66MR < X − X < +2.268MR .268 > MR . 2.66 MR < X < X + 2. we have 3.66. Dividing all terms by MR : −2. LCLx < X < UCLx – –– – –– X – 2. For all cases of the moving range (n = 2): Moving range = |X1 − X2|. For each product. and MR the new upper and lower control limits for this plot point will be ±2. Recall that there is no lower control limit (LCL) for the moving range when using the individual/moving range control chart. as transformed to the short-run format. MR .H1317_CH42. any given moving range should be less than the UCL: –– 3.qxd 10/15/07 450 3:50 PM Page 450 The Desk Reference of Statistical Quality Methods For any individual/moving range control chart: 1. –– Dividing by MR .66 < X−X < + 2. we have a moving range calculated as Moving range = X1 − X T X 2 − X T . For the plot points X1 and X2.

160 1. For bilateral specifications: MRT = 1. Use historical control chart data –– 2. The plot point for the shortrun moving range simply becomes the moving range MR. Sample size n 5 6 7 8 9 10 11 12 13 14 15 20 25 fMR 1.268 with an average moving range of 1. limit − XT | 3Cpk b. – — Selecting a Value for XT and MRT – – Each product must have an expected average X T. we do not have to divide again.00.140 . where: fMR = a constant dependent on the number of data points used to calculate the sample standard deviation S = the sample standard deviation. Calculate MR T from a collection of data using the relationship of MR T = fMR S.H1317_CH42.| 6Cpk –– –– 3.154 1. and the UCL becomes 3.164 1.150 1. Estimate MR T from an expected or measured Cpk a.156 1.169 1.185 1. or we may determine it from a collection of data such as a collection of measurements or control chart data.180 1.qxd 10/15/07 3:50 PM Page 451 Short-Run Individual/Moving Range Control Chart 451 But since the individual plot points have already been divided by an expected or target –– average moving range MR T.148 1.200 1. We may let the nominal = X T.143 1.152 1. − lower spec.128 | spec. we may: 1. For unilateral specifications: MRT = 1. –– For an expected value of the average moving range MR T.128 | upper spec.

2 and S = 3.79) –– MR T = 4.8.128 | upper spec. –– MR T = 2.5 2 –– For MR T : MRT = 1. X T = Nominal = 58 − (69 − 58 ) = 63.07.00 and let the process average equal the specification nominal.19. These products are produced in a batch operation requiring approximately four hours to produce a 2000-gallon batch of product.37 Glycomul T Since this is a new product. . Using the relationship MR T = fMR S. Determine values for X T and MR T for each product. Specifications for the various products are as follows: Product Glycomul S Glycomul L Glycomul T Specification 235–260 330–350 58–69 – –– Step 1. Glycomul L We have 12 batch records for Glycomul L. and the stan–– dard deviation of the data points is S = 2. produces several bioplasmic double regenerative phalangic processing aids (the famous Glycomul line).79.154)(3. –– MR T = (1. The average of the 20 batches is 249.8 = X T.50. Calculate MR T from fMR = 1.128 | 69 − 58 | 6(1. 6C pk MRT = 1. the expected average moving range is –– MR T = (1. Glycomul S We have 20 previous batch records. – We will let the average of 249.H1317_CH42.qxd 452 10/15/07 3:50 PM Page 452 The Desk Reference of Statistical Quality Methods Case Study The Acme Chemical Co.79. The development pilot plant trials indicate that there will be some difficulty in manufacturing this product.143)(2.19). we have no previous data. Each of these products has specific hydroxyl value.154 and S = 3.00 ) MRT = 2. − lower spec. We will assume a Cpk of 1. | . The average and standard deviation for these – –– data points give X T = 335.

50 4. the first moving range cell is blackened in. The calculated short-run plotting value for this sample is Individual plot point = ( X − XT ) (248.2 63.qxd 10/15/07 3:50 PM Page 453 Short-Run Individual/Moving Range Control Chart 453 Summary of product parameters: Product Code – XT — MR T Specification A B C 249.07 235–260 330–350 58–69 Glycomul S Glycomul L Glycomul T This information should be recorded on the control chart.50. Plot points on the chart.5 2. For this reason. Collect historical data to characterize the process.8 335.7 2. This sample is the hydroxyl value for a batch of Glycomul S identified as product A. and the target moving range is 2.50 MRT The variation plot point for this first sample is undefined because there is no moving range possible with only one data point. Sample #1 Each observation must have a transformed value to plot for the individual.8.50 2. Data are collected for the next 18 batches of production.80 = = = – 0.37 2. . Calculate the location and variation plot for each data point. The target average is 249. Step 2.0 − 249.H1317_CH42. Date Product Hydroxyl value Date Product Hydroxyl value 11/3/96 11/3/96 11/3/96 11/3/96 11/4/96 11/5/96 11/5/96 11/5/96 11/6/96 A A A B B C A A C 250 248 253 334 336 63 249 253 64 11/6/96 11/7/96 11/7/96 11/7/96 11/8/96 11/8/96 11/8/96 11/9/96 11/9/96 C C A B A A C C C 65 62 252 335 247 250 65 60 66 Step 3. This information is recorded on the control chart.8 ) –1.

Sample #2
Another batch of Glycomul S is run with a reported hydroxyl value of 248.0. The target value is again 249.8 with a target moving range of 2.50. The calculated short-run plot point is

Individual plot point = (X - XT)/MRT = (248.0 - 249.8)/2.50 = -1.80/2.50 = -0.72.

The moving range is the algebraic difference between the largest and smallest consecutive values. Note: All moving ranges are positive.

Moving range = 0.08 - (-0.72) = 0.80

Plot the points and connect to the previous point.

Sample #3
A third batch of Glycomul S is run with a reported hydroxyl value of 253.0. The location plot point is

(X - XT)/MRT = (253.0 - 249.8)/2.50 = 3.20/2.50 = 1.28.

Moving range = 1.28 - (-0.72) = 2.00

Sample #4
With the fourth sample, we have a product change. The next batch run is Glycomul L, which has a different set of short-run parameters: XT = 335.2 and MRT = 4.37. The measured hydroxyl value for this batch is 334.0.

(X - XT)/MRT = (334.0 - 335.2)/4.37 = -1.20/4.37 = -0.27

Moving range = 1.28 - (-0.27) = 1.55

Sample #5
Another batch of Glycomul L is run with a reported hydroxyl value of 336.0.

(X - XT)/MRT = (336.0 - 335.2)/4.37 = 0.80/4.37 = 0.18

Moving range = 0.18 - (-0.27) = 0.45

Sample #6
Glycomul T, a new product that has been produced only in a pilot operation, is run for the first time. We are using the nominal for the target average, XT = 63.5. The expected average moving range MRT is based on an assumed Cpk of 1.00; the estimate for MRT is 2.07.

(X - XT)/MRT = (63.0 - 63.5)/2.07 = -0.50/2.07 = -0.24

Moving range = 0.18 - (-0.24) = 0.42

Continue calculating the plot points and plot the results. The completed control chart showing all of the plotted historical data points follows.

[Control chart, History: individuals with UCL = +2.66 and LCL = -2.66, and moving ranges with UCL = 3.268, for samples 1 through 30]
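The plot-point arithmetic in Step 3 can be scripted. A minimal Python sketch (the product targets and sample values come from the worked example above; the function and variable names are illustrative):

```python
# Short-run individual/moving range plot points: z = (X - XT) / MRT
# Product targets from Step 1: code -> (XT, MRT)
targets = {"A": (249.8, 2.50), "B": (335.2, 4.37), "C": (63.5, 2.07)}

def plot_points(samples):
    """samples: list of (product_code, hydroxyl_value) in run order."""
    points = []             # location plot points
    moving_ranges = [None]  # first moving range is undefined (blackened in)
    for i, (code, x) in enumerate(samples):
        xt, mrt = targets[code]
        z = (x - xt) / mrt
        points.append(round(z, 2))
        if i > 0:
            moving_ranges.append(round(abs(z - points[i - 1]), 2))
    return points, moving_ranges

pts, mrs = plot_points([("A", 250), ("A", 248), ("A", 253),
                        ("B", 334), ("B", 336), ("C", 63)])
```

The first moving range is `None`, mirroring the blackened-in cell on the chart; the remaining values reproduce the hand calculations for samples #1 through #6.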

Step 4. Continue to collect data, looking for signs of a process change.
A change in the supplier of a key raw material was made beginning with sample #19. Using
the following sample data, do you feel that this change in supplier changed the process?
Sample    Product    Hydroxyl value
19        A          248
20        A          250
21        B          331
22        B          329
23        B          333
24        C          59
24        C          59
25        A          243

The completed short-run individual/moving range control chart follows. It appears that
the process has changed with sample number 25.
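A quick way to screen the new batches against the +/-2.66 individuals limits, assuming the product targets from Step 1 (a Python sketch; names are illustrative):

```python
# Flag short-run plot points beyond the individuals control limits of +/-2.66.
targets = {"A": (249.8, 2.50), "B": (335.2, 4.37), "C": (63.5, 2.07)}  # (XT, MRT)
new_batches = [("A", 248), ("A", 250), ("B", 331), ("B", 329),
               ("B", 333), ("C", 59), ("C", 59), ("A", 243)]

signals = []
for code, x in new_batches:
    xt, mrt = targets[code]
    z = (x - xt) / mrt
    signals.append(abs(z) > 2.66)
```

Only the last batch (hydroxyl value 243, plot point about -2.72) falls beyond the lower limit, matching the signal noted at sample number 25.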
Step 5. Continue to collect and plot data.
[Completed short-run individual/moving range control chart, History: individuals with UCL = +2.66 and LCL = -2.66, and moving ranges with UCL = 3.268, for samples 1 through 30]

Bibliography
Besterfield, D. H. 1994. Quality Control. 4th edition. Englewood Cliffs, NJ: Prentice Hall.
Montgomery, D. C. 1996. Introduction to Statistical Quality Control. 3rd edition. New
York: John Wiley & Sons.
Wheeler, D. J. 1991. Short Run SPC. Knoxville, TN: SPC Press.


SPC Chart Interpretation

A statistical process control (SPC) chart is a set of statistical control limits applied to a set
of sequential data from samples chosen from a process. The data comprising each of the
plotted points are a location statistic such as an individual, an average, a median, a proportion, and so on. If the control chart monitors variables data, then an additional associated chart for the process variation statistic can be utilized. Examples of variation statistics
are the range, standard deviation, or moving range.
By their design, control charts utilize unique and statistically rare patterns that we can
associate with process changes. These relatively rare or unnatural patterns are usually
assumed to be caused by disturbances or influences that interfere with the ordinary behavior of the process. These causes that disturb or alter the output of a process are called
assignable causes. They can be caused by equipment, personnel, or materials.
Most of these patterns can be characterized by:
1. The degree of statistical rarity for processes where no change has occurred (rate of
false alarm)
2. The direction in which the process has changed (increased or decreased)
Not all of the SPC detection rules indicate a direction of process change. For that reason, it is suggested that the rules or patterns that do not indicate a direction be given a low priority for consideration and
application.
Several of these patterns, or SPC detection rules, will be discussed. These SPC rules
will be numbered as they are presented, and the example chart diagrams will be that of an
average. The detection rules, however, apply to all control charts (individual/moving
range, p chart, and so on). The example control chart diagrams will have the following
defined areas:




Zone A: The area defined by the limits of X – 2S to X – 3S and X + 2S to X + 3S




Zone B: The area defined by the limits of X – 1S to X – 2S and X + 1S to X + 2S
Zone C: The area defined by the average ±1S



Rule 1. A single point outside the control limits; false alarm rate is 1/370.
This rule pattern will occur once in 370 times even if the process is in control and no shift
has occurred. The control limits are based on an average ± three standard deviations. With
a normal distribution, the expected proportion falling inside the control limits is 0.9973,
and the expected proportion falling outside the control limits is 0.0027. The 0.0027 proportion outside the control limits is equivalent to 1/370. The occurrence of rule #1 is frequently referred to as “out of control” when in fact it is a rule violation.
[Chart: points varying within zones B and C, with a single point beyond the upper control limit]
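The 1/370 figure can be reproduced from the standard normal distribution. A Python sketch using only the standard library:

```python
import math

# P(a point falls beyond +/- 3 standard deviations) under a normal distribution.
def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_outside = 2.0 * (1.0 - phi(3.0))   # expected proportion outside the limits
rate = 1.0 / p_outside               # about 1 in 370
```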

Rule 2. Seven consecutive points on the same side of the average; false alarm rate is 1/64.
The probability of getting seven points on one side of the average is (1/2)7 = 0.0078. The
probability of getting seven consecutive points above the average or below the average is
2(1/2)7 = 0.0156.
Seven consecutive points above or below the average is equivalent to a rate of 1/64.
The number of consecutive points in a row varies from reference to reference. All are correct; only the false alarm rate will change. See the following sources for various selections
for the number of points on one side of the average.
Reference                                                    Number of consecutive points    False alarm rate

Statistical Quality Control, 7th edition,
Eugene L. Grant and Richard S. Leavenworth                   7                               1/64

Introduction to Statistical Quality Control, 2nd edition,
Douglas C. Montgomery                                        8                               1/128

ISO-8258                                                     9                               1/256

[Chart: a run of consecutive points on the same side of the average]
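The run probabilities above follow directly from 2(1/2)^n. A small Python check of the three referenced choices:

```python
# False alarm rate for n consecutive points on one side of the average:
# P = 2 * (1/2)**n, i.e. one occurrence in 2**(n-1).
rates = {n: int(round(1 / (2 * 0.5 ** n))) for n in (7, 8, 9)}
```

This reproduces the 1/64, 1/128, and 1/256 rates in the table.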


Rule 3. Seven consecutive points increasing (or decreasing), indicating a trend; false alarm
rate is 1/360.
The probability of getting seven consecutive points increasing (or decreasing) is 2/6! =
0.00277. This is equivalent to a rate of 1/360.
[Chart: seven consecutive points steadily increasing]

Rule 4. Two out of three consecutive points falling in zone A or beyond; false alarm rate
is 1/659.
The probability of getting a single point in zone A or beyond is 0.02275. The probability
of getting two consecutive points in zone A or beyond is (0.02275)² = 0.000518, which
gives a rate of 1/1932. To reduce this false alarm rate, the rule stipulates that the number
of points in zone A or beyond in three consecutive samples be two. The probability of
getting two out of three consecutive points in zone A or beyond is (0.02275)²(0.97725)(3)
= 0.00152, which is equivalent to a rate of 1/659.
[Chart: two out of three consecutive points in zone A or beyond]
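The Rule 4 arithmetic is a binomial computation. A Python sketch:

```python
# Two out of three consecutive points in zone A or beyond (single side):
# C(3,2) * p**2 * (1 - p), with p = P(point in zone A or beyond) = 0.02275.
p = 0.02275
prob = 3 * p ** 2 * (1 - p)
rate = 1 / prob   # roughly 1/659
```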

Rule 5. Four out of five consecutive points falling in zone B or beyond; false alarm rate
is 1/360.
The probability of getting a single point in zone B or beyond is 0.1587. The probability of
getting four consecutive points is (0.1587)⁴ = 0.000634, equivalent to a rate of 1/1577. The
rule is four out of five in zone B or beyond, not four consecutive points. The probability of
getting a point somewhere other than zone B or beyond is 1 - 0.1587 = 0.8413. The probability of getting four out of five in zone B or beyond is (0.000634)(0.8413)(5) = 0.0027,
which is equivalent to a rate of approximately 1/360.
[Chart: four out of five consecutive points in zone B or beyond]


Rule 6. Five consecutive points outside zone C with points on both sides of the average;
false alarm rate is 1/331.
The probability of getting five points outside zone C (both sides of the average) is given by:
(15/16)(0.31738)⁵ = 0.00302, which is equivalent to a rate of 1/331.
[Chart: five consecutive points outside zone C, with points on both sides of the average]

Rule 7. Fifteen points in zone C above and below the average; false alarm rate is 1/307.
The probability of getting 15 points above and below the average is given by:
(0.68262)¹⁵ = 0.00326, which is equivalent to 1/307.
[Chart: fifteen consecutive points within zone C, above and below the average]

Rule 8. Fourteen points in a row alternating up and down; false alarm rate is 1/219.
[Chart: fourteen points in a row alternating up and down]
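Two of the detection rules are easy to automate. A Python sketch of Rule 1 (a point beyond three standard deviations) and Rule 2 (seven consecutive points on the same side of the average); the function names and the demonstration data are illustrative:

```python
# Apply detection Rule 1 and Rule 2 to a series of plotted values.
def rule1(values, mean, sigma):
    """Rule 1: flag any point beyond the average +/- 3 sigma."""
    return [abs(v - mean) > 3 * sigma for v in values]

def rule2(values, mean, run=7):
    """Rule 2: flag points completing a run of `run` on one side of the average."""
    flags = [False] * len(values)
    streak = 0
    last_side = 0
    for i, v in enumerate(values):
        side = 1 if v > mean else (-1 if v < mean else 0)
        streak = streak + 1 if side != 0 and side == last_side else (1 if side else 0)
        last_side = side
        if streak >= run:
            flags[i] = True
    return flags

data = [10.2, 9.8, 10.1, 10.4, 10.3, 10.2, 10.5, 10.1, 10.6, 10.2, 13.5]
r1 = rule1(data, mean=10.0, sigma=1.0)
r2 = rule2(data, mean=10.0)
```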


Bibliography
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th edition. New
York: McGraw-Hill.
Montgomery, D. C. 1996. Introduction to Statistical Quality Control. 3rd edition. New
York: John Wiley & Sons.
Nelson, L. S. 1984. “Technical Aids: The Shewhart Control Chart—Tests for Special
Causes.” Journal of Quality Technology 16, no. 4 (October): 237–239.
Technical Committee TC 69/5C. 1991. Shewhart Control Charts. Geneva, Switzerland:
International Organization for Standardization.
Wheeler, D. J. 1995. Advanced Topics in Statistical Process Control. Knoxville, TN:
SPC Press.


Taguchi Loss Function

Taguchi defines quality as the avoidance of loss to society in terms of economic loss or
dollars. This loss is a result of noncompliance to manufacturing or service specification
targets. He also suggests that this loss can be mathematically modeled as a function. The
Taguchi loss function is quadratic in nature in that Taguchi hypothesizes that the loss is
proportional to the square of the deviation from the target value.
The Taguchi loss function is
Cx = K(X - Tg)²,
where: Cx = cost, dollars
K = proportionality constant (loss parameter or loss constant)
X = observed or measured value
Tg = target or nominal value (perfection), sometimes abbreviated τ.
[Figure: quadratic loss curve, with loss in dollars on the vertical axis, minimum at the target Tg, and rising toward Tg - Tl and Tg + Tl]

The key to utilizing the Taguchi loss function is in the determination of the proportionality constant K.
The specification for a shaft is 1.00 ± 0.05. The cost of nonconformance (Cx) is
$25.00. What is the value of K?
Cx = K(X - Tg)²

K = Cx/(X - Tg)²

K = 25.00/(1.05 - 1.00)² = 10,000


Knowing K, we may now calculate the loss for any shaft diameter. For example, given
K = 10,000, what is the expected loss when the shaft diameter is 1.08?
Cx = K(X - Tg)²

Cx = 10,000(1.08 - 1.00)²

Cx = 10,000(0.08)²

Cx = $64.00

Even if the shaft is within the specification limits, there will be an associated loss. The
Taguchi philosophy is such that any deviation from perfection (target) will result in a loss
(to society). The loss for a shaft measuring 0.98 is
Cx = 10,000(0.98 - 1.00)²,

Cx = $4.00.
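The loss computations above can be wrapped in two small helper functions. A Python sketch (the shaft numbers come from the example; the function names are illustrative):

```python
# Taguchi loss: Cx = K * (X - Tg)**2, with K determined from one known loss point.
def loss_constant(cost_at_limit, limit, target):
    return cost_at_limit / (limit - target) ** 2

def loss(x, k, target):
    return k * (x - target) ** 2

k = loss_constant(25.00, 1.05, 1.00)   # K = 10,000 for the shaft example
loss_oversize = loss(1.08, k, 1.00)    # about $64
loss_undersize = loss(0.98, k, 1.00)   # about $4
```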

A customer’s specification for a particular part is 10.00 ± 0.15 (Tg ± Tlc), and the associated cost of nonconformance is $35.00, Cc. In an effort to avoid producing nonconforming parts, the manufacturer sets an internal specification of 10.00 ± 0.05 (Tg ± Tlm).
The internal cost to the manufacturer for failing to meet this requirement is $12.00.
The customer will realize a cost of Cc whenever the product lies at or exceeds the
specification of Tg ± Tlc. The customer’s loss will be
Cc = K(Tg + Tlc - Tg)²,
Cc = K(Tlc)².
Likewise, it can be shown that the loss to the manufacturer when exceeding the manufacturing specification is
Cm = K(Tlm)².
The value for the loss parameter K can be found by solving for K in either relationship.
Loss parameter K = Cc/(Tlc)² = Cm/(Tlm)²

In this example, where Cm = $12.00 and Tlm = ±0.05, the value for the loss parameter K is given by
K = 12.00/(0.05)² = $4800.

We can now determine the required manufacturing specification tolerance:
K = Cc/(Tlc)² = Cm/(Tlm)²

Tlm = Tlc √(Cm/Cc)

Tlm = 0.15 √(12/35)

Tlm = ±0.088.
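The tolerance relationship can be checked numerically. A Python sketch:

```python
import math

# Manufacturer's tolerance from the loss-ratio relationship: Tlm = Tlc * sqrt(Cm / Cc)
def mfg_tolerance(tlc, cm, cc):
    return tlc * math.sqrt(cm / cc)

tlm = mfg_tolerance(0.15, 12.00, 35.00)   # about +/-0.088
```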

The manufacturer's tolerance can be determined when the ratio of the consumer's loss to the manufacturer's loss is known:

Tlm = Tlc √(Cm/Cc)

Example: Customer's tolerance, Tlc = ±0.015; customer's loss, Cc = $25.00; manufacturer's loss, Cm = $15.00.

Manufacturer's tolerance: Tlm = 0.015 √(15/25)    Tlm = ±0.012

In some unique cases, the loss incurred when the product is below the lower specification may be different than that when the upper specification is exceeded. An example would be the diameter of a hole or shaft. If a shaft is oversized, it is relatively easy or less costly to reduce by removing more material; if the shaft is undersized, however, it is more complex and costly to add material. From a practical point, the undersized part is simply scrapped. The shape of the Taguchi loss function is nonsymmetrical due to these differences. A unique K value is calculated for measurements falling below the target and another unique K for measurements falling above the target. These two K values will be designated as KL and KU.

Consider the following case where the specification is 10.00 ± 0.20. The cost of being at the lower specification is $120.00 (LCx), and the cost of being at the upper specification is $22.00 (UCx).

KL determination:

Loss at lower specification, LCx = $120.00
120.00 = KL(X - Tg)²
KL = 120.00/(10.0 - 9.8)²
KL = $3000

KU determination:

Loss at upper specification, UCx = $22.00
22.00 = KU(Tg - X)²
KU = 22.00/(10.2 - 10.0)²
KU = $550

The loss function for the parts measuring less than the target is given by

LCx = 3000(Tg - X)².

The loss function for the parts measuring greater than the target is given by

UCx = 550(X - Tg)².

Using these loss functions, the following losses are determined for various measurement observations:

Measurement    Loss, dollars
9.5            750
9.6            480
9.7            270
9.8            120
9.9            30
10.0           0
10.1           5.5
10.2           22
10.3           49.5
10.4           88
10.5           137.5
10.6           198
10.7           269.5
10.8           352
10.9           446
11.0           550

[Figure: Taguchi loss with differing proportionality constants, showing loss plotted against measurement, $120 at the lower specification (LS), zero at the target (Tg), and $22 at the upper specification (US)]

Losses with Nonsymmetric Specifications

Consider the following case. A process variable has a specification requirement of 8.50 (+0.2/-0.5), with a loss of $18.00 (CL) associated with falling below the lower specification and a loss of $24.00 (CU) associated with exceeding the upper specification. Given the following 25 observations, what is the average expected total loss ATL for this process?

Data: 8.4 8.5 8.6 8.35 8.2 8.4 8.65 8.5 8.4 8.3 8.35 8.6 8.15 8.4 8.5 8.4 8.2 8.35 8.5 8.4 8.6 8.3 8.4 8.5 8.4

KL = CL/(tgt - lower spec.)² = 18.00/(0.5)² = 72

KU = CU/(upper spec. - tgt)² = 24.00/(0.2)² = 600

Lower mean square, LMS = (1/n) Σ(XL - tgt)², where XL = data below the target or nominal:

LMS = (1/25)[8(8.4 - 8.5)² + 3(8.35 - 8.5)² + 2(8.3 - 8.5)² + 2(8.2 - 8.5)² + (8.15 - 8.5)²] = 0.0212

Upper mean square, UMS = (1/n) Σ(XU - tgt)², where XU = data above the target or nominal, n = total samples, and tgt = target or nominal:

UMS = (1/25)[3(8.6 - 8.5)² + (8.65 - 8.5)²] = 0.0021

Average loss lower, ALL = KL(LMS) = (72)(0.0212) = $1.53

Average loss upper, ALU = KU(UMS) = (600)(0.0021) = $1.26

Average total loss, ATL = ALL + ALU = $1.53 + $1.26 = $2.79
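The average-total-loss computation generalizes to any data set. A Python sketch using the 25 observations and the KL/KU values from the worked example (the helper name is illustrative):

```python
# Average total loss with separate loss constants below (kl) and above (ku) the target.
def average_total_loss(data, target, kl, ku):
    n = len(data)
    lms = sum((x - target) ** 2 for x in data if x < target) / n  # lower mean square
    ums = sum((x - target) ** 2 for x in data if x > target) / n  # upper mean square
    return kl * lms + ku * ums

data = [8.4] * 8 + [8.5] * 5 + [8.6] * 3 + [8.35] * 3 + [8.3] * 2 + [8.2] * 2 + [8.65, 8.15]
atl = average_total_loss(data, 8.5, kl=72, ku=600)   # about $2.79
```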

Bibliography

Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th edition. New York: McGraw-Hill.
Wheeler, D. J. 1995. Advanced Topics in Statistical Process Control. Knoxville, TN: SPC Press.

Testing for a Normal Distribution

In many statistical applications, there is an assumption that the data come from a normally distributed universe. Applications such as process capability can be affected by nonnormal distributions. This module discusses four methods for testing for a normal distribution. The methods presented here are graphical and statistical. There are advantages and disadvantages for both the analytical and graphical methods. The graphical methods are more subjective in the interpretation but are generally easier to use.

Normal Probability Plot

Normal probability plots are a method to test for a normal distribution. They are based on the fact that data plotted on normal probability graph paper will yield a straight line when they are normally distributed. The normal probability paper is scaled in the vertical axis based on the normal probability function and is scaled in the horizontal axis as a linear scale. A normal probability plot has two advantages:

1. It can be accomplished with a small sample size (n ≥ 15)
2. It is relatively easy to perform and requires simple calculations

However, the one disadvantage is that the criterion for normalcy is the degree to which a straight line is obtained. This judgment is sometimes very subjective.

The normal plotting technique is illustrated in the following example. The head of radiology in a midwestern hospital wants to determine if the length of time required to perform a series of tests is normally distributed. Fifteen samples have been obtained with the time recorded in minutes.

Step 1. Collect data and arrange in ascending order.

[Table: sample numbers 1-7 with their sample values X, beginning at 14.0 minutes]

[Table continued: sample numbers 8-15 with their sample values X]

Step 2. Calculate the median rank MR for each data value.

There are several methods to estimate the median rank for data. One method (Hazen's) is based on the relationship

%MR = [(i - 0.5)/n] × 100,

where: n = total sample size
i = order of data

Another relationship is that of Benard's median rank:

%MR = [(i - 0.3)/(n + 0.4)] × 100.

Benard's approximation is accurate to 1 percent for samples of n = 5 and 0.1 percent for samples of n = 50. Both the Hazen and Benard formulas are derived from the general form

MR = (i - c)/(n - 2c + 1);

for the Hazen, c = 0.5, and for the Benard, c = 0.3. Another, significantly more complex, median rank determination is defined by

MR = [(2i - n - 1)(0.5)^(1/n) + (n - i)]/(n - 1).

For the example given, Benard's median rank will be used.

1st median rank:

%MR = [(i - 0.3)/(n + 0.4)] × 100 = [(1 - 0.3)/(15 + 0.4)] × 100 = (0.7/15.4) × 100 = 4.55%

2nd median rank:

%MR = [(i - 0.3)/(n + 0.4)] × 100 = [(2 - 0.3)/(15 + 0.4)] × 100 = (1.7/15.4) × 100 = 11.0%

In a similar manner, continue to calculate the percent median rank for all 15 observations. For n = 15, the percent median ranks for i = 1 through 15 are 4.55, 11.04, 17.53, 24.03, 30.52, 37.01, 43.51, 50.00, 56.49, 62.99, 69.48, 75.97, 82.47, 88.96, and 95.45.

Step 3. Determine a horizontal plotting scale. Plot the data vs. the percent median ranks.

Locate the value 14.0 on the horizontal axis and the corresponding percent median rank on the vertical axis. Caution should be used in locating the vertical percent median rank, as the scale is not linear and it changes as the median rank number increases in magnitude.
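Benard's percent median ranks are easy to generate for any sample size. A Python sketch reproducing the n = 15 values:

```python
# Benard's median rank as a percentage: %MR = 100 * (i - 0.3) / (n + 0.4)
def benard_pct(i, n):
    return round(100 * (i - 0.3) / (n + 0.4), 2)

ranks = [benard_pct(i, 15) for i in range(1, 16)]   # 4.55, 11.04, ..., 95.45
```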

[Normal probability plot: the first value, 14.0, plotted at the 4.55 percent median rank]

The second data point is plotted: X2 = 15.8 at median rank position 11.0%.

[Normal probability plot showing the first two points]

99. draw the best straight line possible through the points.5 0. The following figure shows the completed normal probability plot. as these are the regions where the tails of the distribution influence the distribution of the normal probability the most.2 0. The degree to which a single straight line can be drawn though all of the points is proportional to the degree that a normal distribution is present.9 99.05 0. When all of the points have been plotted.1 0. Be less influenced by the first and last points of the data.qxd 476 10/17/07 2:48 PM Page 476 The Desk Reference of Statistical Quality Methods Continue to plot the remaining points in a similar manner.8 99 98 x x x 95 90 80 x 50 40 30 x x x x x 20 x x 10 5 x 2 1 0.99 99. A normal distribution is supported due to the straightness of the fitted line to the plot.H1317_CH45. Median rank 70 60 x x x .01 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 Value Completed normal probability plot.

The degree to which a straight line can be drawn through all of the points reflects the degree to which a normal distribution is appropriate to describe the data. This example exhibits normalcy.

Normal Probability Plots Using Transformed Data

When normal probability plotting paper is not available, one may still test for a normal distribution using conventional rectangular coordinate paper by transforming the median rank values (the Pi values) to Yi values according to the following:

Yi = 4.91[(Pi)^0.14 - (1 - Pi)^0.14].

No transformation of the individual data values, Xi, is required. Note that here the Benard's median rank is calculated as a proportion, not as a percentage:

Pi = (i - 0.3)/(n + 0.4).

Consider the following example. Fourteen data values, Xi, are given; the Benard's median rank values are determined and transformed. For n = 14, the median ranks Pi for i = 1 through 14 are 0.0486, 0.1181, 0.1875, 0.2569, 0.3264, 0.3958, 0.4653, 0.5347, 0.6042, 0.6736, 0.7431, 0.8125, 0.8819, and 0.9514, and the corresponding transformed median ranks Yi are -1.661, -1.184, -0.885, -0.651, -0.448, -0.263, -0.087, +0.087, +0.263, +0.448, +0.651, +0.885, +1.184, and +1.661.

Details of computation for the first data point:

P1 = (1 - 0.3)/(14 + 0.4) = 0.7/14.4 = 0.0486

Transformation of P1:

Y1 = 4.91[(P1)^0.14 - (1 - P1)^0.14]
Y1 = 4.91[(0.0486)^0.14 - (1 - 0.0486)^0.14] = -1.661

The Yi and Xi values are plotted on rectangular coordinate graph paper. The resulting Yi values represent the vertical plot positions, and the individual data values, Xi, represent the horizontal positions. The Xi intercept is an estimate of the average of the data, and the slope is inversely proportional to the standard deviation. The degree to which a straight line is obtained is proportional to the degree of normalcy for the data. The completed plot is indicative of normally distributed data because the line is relatively straight.

[Figure: Normal probability plot using rectangular coordinates, with Yi from -3 to +3 on the vertical axis and Xi from 10 to 50 on the horizontal axis]
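The median-rank transformation is a one-liner. A Python sketch reproducing the first plot position:

```python
# Transform a median rank proportion Pi to a rectangular-coordinate plot position:
# Yi = 4.91 * (Pi**0.14 - (1 - Pi)**0.14)
def y_transform(p):
    return 4.91 * (p ** 0.14 - (1.0 - p) ** 0.14)

y1 = y_transform(0.0486)   # about -1.661, the first of the 14 plot positions
```

The transform is antisymmetric about Pi = 0.5, which is why the tabled Yi values mirror each other.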

Chi-Square Goodness-of-Fit Test

The chi-square (χ²) test is derived from expected values with an assumed distribution compared to the distribution of the data being tested. The chi-square test statistic is calculated by summing up the square of the differences between the observed data and the expected data and dividing by the expected value for a given interval:

χ² = Σ (Oi - Ei)²/Ei,

where: Oi = the observed frequency for a given interval
Ei = the expected frequency for a given interval

The chi-square goodness-of-fit test is illustrated with the following example.

Step 1. Collect approximately 300 data points and determine the average and standard deviation.

For convenience, the data have been arranged into 12 columns and 25 rows.

[Data table: 300 observations arranged in 12 columns and 25 rows, with values ranging from 42 to 58]

Step 2. Calculate the average and sample standard deviation of the data.

The average X = 50.5, and the standard deviation s = 3.0.

Step 3. Sort the data into six discrete intervals.

The intervals are determined as a function of the average and standard deviation. The data are tallied into the intervals as a frequency of occurrence.

Interval definition     Interval value    Frequency
≤ X - 2S                <44               5
X - 2S to X - 1S        44 to 47          27
X - 1S to X             47 to 50          95
X to X + 1S             50 to 53          106
X + 1S to X + 2S        53 to 56          52
≥ X + 2S                >56               15

Step 4. Determine the expected frequency of occurrence in the defined intervals given the average and standard deviation for a normal distribution.

If we are dealing with a normal distribution, we can calculate the expected number of data values falling inside the defined intervals. The expected percentage of the data falling inside the intervals and the actual observed data values inside the intervals are as follows:

Interval definition     % of data, normal distribution    Expected data points    Observed data points
≤ X - 2S                2.5                               7.5                     5
X - 2S to X - 1S        13.5                              40.5                    27
X - 1S to X             34.0                              102                     95
X to X + 1S             34.0                              102                     106
X + 1S to X + 2S        13.5                              40.5                    52
≥ X + 2S                2.5                               7.5                     15

Step 5. Calculate the chi-square statistic for the data.

The difference between the observed and expected frequency is squared and divided by the expected frequency for each interval. For the first interval, we have (5 - 7.5)²/7.5 = 0.83. The sum of these terms gives the chi-square statistic:

χ² = Σ (Oi - Ei)²/Ei.
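The χ² statistic is a short fold over the interval counts. A Python sketch using the observed and expected values from the example:

```python
# Chi-square goodness-of-fit statistic: sum of (O - E)**2 / E over the intervals.
def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [5, 27, 95, 106, 52, 15]
expected = [7.5, 40.5, 102, 102, 40.5, 7.5]
stat = chi_square(observed, expected)   # about 16.7, well above the 6.251 critical value
```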

All intervals are determined and presented in the following table.

Interval definition     Expected E data points    Observed O data points    (O - E)²/E
≤ X - 2S                7.5                       5                         0.83
X - 2S to X - 1S        40.5                      27                        4.50
X - 1S to X             102                       95                        0.48
X to X + 1S             102                       106                       0.16
X + 1S to X + 2S        40.5                      52                        3.27
≥ X + 2S                7.5                       15                        7.50

The total of the individual terms gives the chi-square statistic:

χ² = Σ (Oi - Ei)²/Ei = 16.7.

Step 6. Compare the calculated chi-square value to a critical chi-square value.

The critical chi-square value is determined by looking up the chi-square value for n - 1 - f degrees of freedom and an α risk of 10 percent (0.10). The degrees of freedom are based on a sample size n (number of intervals) of six minus three degrees of freedom. We subtract one more degree of freedom than the total parameters we are estimating; there are two parameters being estimated, the average and the standard deviation, so the total degrees of freedom is three.

χ²(3, 0.10) = 6.251

The criterion for rejection of the assumption of a normal distribution is if the calculated χ² is greater than the critical χ². If χ² calc. > χ² crit., reject the assumption of normalcy. Since 16.7 > 6.251, we reject the hypothesis that the distribution is normal.

Skewness and Kurtosis

Skewness (Sk) and kurtosis (Ku) are measures of the normalcy of a distribution. A perfect normal distribution will have Sk = 0 and Ku = 0. Depending on the method of calculation, Ku for a normal distribution may be Ku = 3.0.

Ku and Sk are based on calculations using moments of the mean. The kth moment of the mean is given by

μk = Σ(Xi - X)^k / n,

where: X = average
Xi = an individual observation
n = sample size

The first moment about the mean is always zero, and the second moment about the mean adjusted for bias is the variance:

μ1 = 0
μ2[n/(n - 1)] = s²
μ2 = σ²
σ = √μ2

A measure of skewness often used is

Sk = μ3/(√μ2)³ = μ3/μ2^(3/2),

and a measure of kurtosis is

Ku = μ4/μ2² - 3.

The standard deviations for the skewness and kurtosis are

SSK = √(6/n) and SKU = √(24/n).

The confidence intervals for the skewness and kurtosis are given by

Sk ± Zα/2 √(6/n) and Ku ± Zα/2 √(24/n),

where Zα/2 = 1.96 for 95 percent confidence and 1.645 for 90 percent confidence.

The following data illustrate calculation of Sk and Ku and testing for normalcy of the data:

X     Frequency f    (X - X̄)²    (X - X̄)³    (X - X̄)⁴
6     1              13.69       -50.65      187.42
7     6              7.29        -19.68      53.14
8     15             2.89        -4.91       8.35
9     13             0.49        -0.34       0.24
10    11             0.09        0.03        0.01
11    8              1.69        2.20        2.86
12    6              5.29        12.17       27.98
13    4              10.89       35.94       118.59
14    2              18.49       79.51       341.88
15    1              28.09       148.88      789.05

[Frequency histogram of the data]

μ2 = Σf(X - X̄)²/n = [(1)(13.69) + (6)(7.29) + (15)(2.89) + . . . + (1)(28.09)]/(1 + 6 + 15 + . . . + 1) = 3.91

μ3 = Σf(X - X̄)³/n = [(1)(-50.65) + (6)(-19.68) + (15)(-4.91) + . . . + (1)(148.88)]/(1 + 6 + 15 + . . . + 1) = 4.42

μ4 = Σf(X - X̄)⁴/n = [(1)(187.42) + (6)(53.14) + (15)(8.35) + . . . + (1)(789.05)]/(1 + 6 + 15 + . . . + 1) = 41.42
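The moment calculations can be reproduced from the frequency table. A Python sketch (the function name is illustrative):

```python
# Moments of the mean for frequency data (value, count), then skewness and kurtosis.
freq = [(6, 1), (7, 6), (8, 15), (9, 13), (10, 11),
        (11, 8), (12, 6), (13, 4), (14, 2), (15, 1)]
n = sum(c for _, c in freq)
mean = sum(x * c for x, c in freq) / n

def moment(k):
    return sum(c * (x - mean) ** k for x, c in freq) / n

sk = moment(3) / moment(2) ** 1.5    # about 0.57
ku = moment(4) / moment(2) ** 2 - 3  # about -0.29
```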

Sk = μ3/μ2^(3/2) = 4.42/(3.91)^(3/2) = 0.57

90 percent confidence interval for Sk: 0.57 ± 1.645 √(6/67) → 0.08 to 1.06

Ku = μ4/μ2² - 3 = 41.42/(3.91)² - 3 = -0.29

90 percent confidence interval for Ku: -0.29 ± 1.645 √(24/67) → -1.27 to 0.69

This distribution is skewed to the right as indicated by the shape of the frequency histogram and the positive Sk. The confidence interval for the skewness does not contain zero, and we conclude that the data come from some distribution other than a normal one. The distribution is normal with respect to the amount of kurtosis since the confidence interval for the kurtosis contains zero (which is expected for a kurtosis from a normal distribution). The distribution is also flatter than a perfectly normal distribution as indicated by the slightly negative Ku.

Bibliography

Betteley, G., N. Mettrick, P. Sweeney, and D. Wilson. 1994. Using Statistics in Industry. New York: Prentice Hall.
Dovich, R. A. 1992. Quality Engineering Statistics. Milwaukee, WI: ASQC Quality Press.
Grant, E. L., and R. S. Leavenworth. 1996. Statistical Quality Control. 7th edition. New York: McGraw-Hill.
Hayes, G. E., and H. G. Romig. 1982. Modern Quality Control. Encino, CA: Glencoe.
Montgomery, D. C. 1996. Introduction to Statistical Quality Control. 3rd edition. New York: John Wiley & Sons.
Petruccelli, J. D., B. Nandram, and M. Chen. 1999. Applied Statistics for Engineers and Scientists. Upper Saddle River, NJ: Prentice Hall.
Walpole, R. E., and R. H. Myers. 1993. Probability and Statistics for Engineers and Scientists. 5th edition. Englewood Cliffs, NJ: Prentice Hall.
Weibull Analysis

The Weibull distribution is one of the most used density functions in reliability. It is named after Waloddi Weibull, who invented it in 1937. Weibull demonstrated that this distribution is very flexible for a wide variety of data distributions. Though initial reaction to Weibull's distribution varied from skepticism to outright rejection, his distribution has passed the test of time and today is widely accepted as a viable model for application in reliability engineering. Weibull's 1951 paper, "A Statistical Distribution Function of Wide Applicability," led to a wide use of the distribution.

The Weibull probability density function is defined as follows:

f(t) = [β(t − δ)^(β−1)/η^β] exp[−((t − δ)/η)^β],

where:
β (beta) = the shape parameter or Weibull slope
η (eta) = the scale parameter or characteristic life
δ (delta) = the location parameter or minimum life characteristic

Note: If β = 1, we refer to the characteristic life as the mean time between failures (MTBF) and abbreviate it as θ; if β ≠ 1, we abbreviate the characteristic life as η.

The Weibull distribution can be used as a tenable substitute for several other distributions by varying the shape parameter:

Shape parameter, β    Distribution
1                     Exponential
2                     Rayleigh
2.5                   Lognormal
3.6                   Normal

The Weibull reliability function describes the probability of survival as a function of time:

R(t) = exp[−((t − δ)/η)^β]

When β < 1, the failure rate is decreasing; when β = 1, the failure rate is constant; and when β > 1, the failure rate is increasing. These three areas are frequently referred to as infant mortality, useful life, and burnout.
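The density, cumulative, and reliability functions above translate directly into code. The following sketch is my own illustration (function names are assumptions, not from the text); it also checks the statement that β = 1 reduces the Weibull model to the exponential model:

```python
import math

def weibull_reliability(t, beta, eta, delta=0.0):
    """R(t) = exp(-(((t - delta)/eta)**beta)) -- probability of survival to time t."""
    return math.exp(-(((t - delta) / eta) ** beta))

def weibull_cdf(t, beta, eta, delta=0.0):
    """F(t) = 1 - R(t) -- cumulative probability of failure by time t."""
    return 1.0 - weibull_reliability(t, beta, eta, delta)

# With beta = 1 and delta = 0, the Weibull collapses to the exponential
# distribution, whose characteristic life is the MTBF theta = eta:
t, eta = 100.0, 230.0
print(weibull_cdf(t, beta=1.0, eta=eta))
print(1.0 - math.exp(-t / eta))   # exponential CDF -- same value
```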
Example of a Weibull reliability calculation: Calculate the reliability of a component at t = 80, where β = 1.2, η = 230, and δ = 5.

R(t) = exp[−((t − δ)/η)^β]
R(80) = exp[−((80 − 5)/230)^1.2] = 0.77, or 77%

The hazard function represents the instantaneous failure rate and can be used to characterize failures in accordance with the bathtub curve:

h(t) = β(t − δ)^(β−1)/η^β

Example: What is the instantaneous failure rate at t = 80, where β = 1.2, η = 230, and δ = 5?

h(80) = 1.2(80 − 5)^0.2/230^1.2 = 0.0042

While the Weibull is very versatile, its drawback is the difficulty in estimating its parameters. There are several methods for estimating the Weibull parameters.

Probability Plotting

Taking the logarithm of the Weibull cumulative distribution twice (with δ = 0), the following relationship is obtained:

ln ln[1/(1 − F(t))] = β ln(t) − β ln θ

This is a linear equation with ln(t) as the independent variable, ln ln[1/(1 − F(t))] as the dependent variable, β as the slope, and −β ln θ as the y-intercept. Plotting ln(t) against ln ln[1/(1 − F(t))], we may graphically determine the Weibull parameters β and θ. For an estimate of F(t), we may use Benard's median rank:

F(t) = (i − 0.3)/(n + 0.4),
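The two worked examples above can be checked numerically. A minimal sketch (my own helper names, assuming the parameter values from the text):

```python
import math

beta, eta, delta = 1.2, 230.0, 5.0   # parameters from the worked example

def reliability(t):
    """R(t) = exp(-(((t - delta)/eta)**beta))."""
    return math.exp(-(((t - delta) / eta) ** beta))

def hazard(t):
    """Instantaneous failure rate h(t) = beta*(t - delta)**(beta - 1) / eta**beta."""
    return beta * (t - delta) ** (beta - 1) / eta ** beta

print(f"R(80) = {reliability(80):.2f}")   # -> 0.77
print(f"h(80) = {hazard(80):.4f}")        # -> 0.0042
```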
where:
i = the failure order
n = the sample size

Example calculation: Ten identical tools are run until all fail, giving the time-to-failure data (in hours) listed below. Assume δ = 0.

Step 1. Order the data in ascending order.

Time to failure: 87, 118, 137, 175, 190, 203, 221, 264, 270, 333

Step 2. Calculate the median rank for each observation.

For the first observation, the median rank (MR) is
F(t) = (i − 0.3)/(n + 0.4) = (1 − 0.3)/(10 + 0.4) = 0.067
For the seventh observation,
F(t) = (7 − 0.3)/(10 + 0.4) = 0.644

Step 3. Determine the natural logarithm of the time to failure for each observation.

Step 4. Determine ln ln[1/(1 − F(t))] for each observation.

For the first ordered observation:
ln ln[1/(1 − 0.067)] = −2.669
For the seventh ordered observation:
ln ln[1/(1 − 0.644)] = 0.032
The following table summarizes the calculations for steps 1–4:

Order, i   Time to failure   F(t)    ln(t)   ln ln[1/(1 − F(t))]
1          87                0.067   4.47    −2.669
2          118               0.163   4.77    −1.723
3          137               0.260   4.92    −1.202
4          175               0.356   5.16    −0.822
5          190               0.452   5.25    −0.509
6          203               0.548   5.31    −0.230
7          221               0.644   5.40    0.032
8          264               0.740   5.58    0.299
9          270               0.837   5.60    0.594
10         333               0.933   5.81    0.993

Step 5. Plot the ln(t) on the x-axis and ln ln[1/(1 − F(t))] on the y-axis.

[Figure: plot of ln ln[1/(1 − F(t))] versus ln(t); the fitted straight line has slope 2.70 and y-intercept Y0 = −14.66.]

The Weibull slope is β = 2.70. The Weibull characteristic life η is calculated from the equation

η = e^(−Y0/β),
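Steps 1–5 amount to a least-squares fit on the transformed data. The sketch below is my own (it uses an ordinary least-squares fit rather than the graphical line of the text, so the estimates can differ slightly from the rounded hand values):

```python
import math

failures = sorted([87, 118, 137, 175, 190, 203, 221, 264, 270, 333])
n = len(failures)

# Benard's median rank and the double-log transform of each point
x = [math.log(t) for t in failures]
f = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]
y = [math.log(math.log(1.0 / (1.0 - fi))) for fi in f]

# Least-squares slope and intercept for y = beta*x + y0
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
beta = sxy / sxx                  # Weibull slope
y0 = ybar - beta * xbar           # y-intercept
eta = math.exp(-y0 / beta)        # characteristic life

print(f"beta = {beta:.2f}, Y0 = {y0:.2f}, eta = {eta:.0f}")
```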
where:
Y0 = the y-intercept
β = the Weibull slope

The characteristic life is η = e^(14.66/2.70) = 228.

Plotting Data Using Weibull Paper

A less rigorous graphical technique relies on the use of special graph paper called Weibull plotting paper. Failure times plotted as a function of the %MR will yield a straight line (with a slope β = 1) if the probability density function follows that of an exponential model. Graphically, we can determine the slope directly from the special paper. The characteristic life η may be determined by locating the failure time that corresponds to a cumulative percent failure of 63 percent.

If the plotted data do not yield a single straight line, discretion must be used in the interpretation of the underlying cause for the nonlinearity. This could be due to the existence of mixed distributions, or there could be a failure-free time, or minimum life characteristic. The minimum life characteristic is a time that, when subtracted from all of the failure times, yields a somewhat straight line. Essentially, by subtracting the minimum life characteristic from all of the failure times and by replotting the data, we are reducing the Weibull plot to a two-parameter model using the characteristic life and Weibull slope; the original data may thereby be adjusted to yield a straight line, and an estimate of the characteristic life and Weibull slope may then be determined.

Consider the following data:

Order number, i   Failure time, hours   %MR
1                 28                    5.2
2                 39                    12.7
3                 50                    20.1
4                 60                    27.6
5                 70                    35.1
6                 85                    42.5
7                 100                   50.0
8                 115                   57.5
9                 135                   64.9
10                160                   72.4
11                200                   79.9
12                250                   87.3
13                370                   94.8

A plot of the failure times as a function of the %MR yields a slightly curved line; there may be the influence of a positive minimum life parameter γ.
We begin by plotting the original data as a function of the %MR. The data are ordered in ascending order, and the ith MR is calculated using Benard's approximation:

%MR = [(i − 0.3)/(n + 0.4)] × 100

[Figure: Weibull plotting paper with the original data plotted against cumulative percent failure (%MR); the plotted points form a slightly curved line.]
Estimation of Minimum Life Characteristic

Equally spaced horizontal lines covering the entire area of the curve are drawn. The lower line is designated as L1 and the upper line as L2. The minimum failure point is designated as t1, the maximum failure point as t3, and the point on the curve at the midpoint of these two lines as t2. The minimum life characteristic is calculated by

γ = t2 − (t3 − t2)(t2 − t1)/[(t3 − t2) − (t2 − t1)]

For this example, t1 = 28, t2 = 70, and t3 = 370:

γ = 70 − (300)(42)/(300 − 42) = 21

Subtracting 21 from all of the data gives the new adjusted data, from which the Weibull slope and characteristic life may be determined:

Order number, i   Failure time, hours   %MR    Adjusted failure time
1                 28                    5.2    7
2                 39                    12.7   18
3                 50                    20.1   29
4                 60                    27.6   39
5                 70                    35.1   49
6                 85                    42.5   64
7                 100                   50.0   79
8                 115                   57.5   94
9                 135                   64.9   114
10                160                   72.4   139
11                200                   79.9   179
12                250                   87.3   229
13                370                   94.8   349

Replotting the adjusted data gives a nearly straight line. The Weibull slope is determined by drawing a perpendicular line from the straight line (from the data) to the reference point on the left designated by the dot; another perpendicular line is drawn from this reference point to intersect the curved arc, from which the Weibull slope is read. The estimated value for the Weibull slope β is 1.15. The characteristic life η is determined by following the 63 percent MR position across until intercepting the adjusted straight line and then going down this position vertically until reaching the horizontal scale. The value for η is read as approximately 112.
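The minimum-life formula above is a one-liner; a quick numerical check (my own hypothetical function name):

```python
def minimum_life(t1, t2, t3):
    """gamma = t2 - (t3 - t2)*(t2 - t1) / ((t3 - t2) - (t2 - t1))."""
    a, b = t3 - t2, t2 - t1
    return t2 - a * b / (a - b)

gamma = minimum_life(28, 70, 370)
print(round(gamma))   # -> 21
```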
[Figure: Weibull plot of adjusted data with values for β and η. The adjusted points fall on a straight line; β = 1.15 and η = 112 are read from the paper.]

Bibliography
Abernethy, R. B. 2006. The New Weibull Handbook. North Palm Beach, FL: Robert B. Abernethy.
Dodson, B. 2006. The Weibull Analysis Handbook. 2nd edition. Milwaukee, WI: ASQ Quality Press.
Dovich, R. A. 1992. Reliability Statistics. Milwaukee, WI: ASQC Quality Press.
O'Connor, P. D. T. 1991. Practical Reliability Engineering. New York: John Wiley & Sons.
Weibull, W. 1951. "A Statistical Distribution Function of Wide Applicability." Journal of Applied Mechanics 18:293–297.
Wilcoxon Rank Sum (Differences in Medians)

The Wilcoxon sum of ranks is a distribution-free, or nonparametric, test. It does not require that the distribution be normal or of a certain type; it is independent of any distribution type. It is useful in comparing two medians under the following hypotheses:

1. μA = μB
2. μA > μB
3. μA < μB

Illustrative example 1: Two types of polymer, A and B, are used in the fabrication of a valve. Five samples of type A and six of type B are tested to failure. The hours to failure are:

Type A: 394, 836, 901, 1068, 1689
Type B: 153, 258, 528, 712, 891, 1592

Ranking all the data with the source identified, we get the following table:

Observation   Type   Rank
153           B      1
258           B      2
394           A      3
528           B      4
712           B      5
836           A      6
891           B      7
901           A      8
1068          A      9
1592          B      10
1689          A      11
Step 1. Construct the null hypothesis (Ho) and the alternative hypothesis (Ha). For this example, we want to prove that the median life of type A is greater than that of type B.

Ho: μA = μB
Ha: μA > μB

Step 2. Select a level of confidence and risk. Let the risk of our conclusion be 8.9 percent, for a level of confidence of 91.1 percent. (The probability of a type I error is 0.089.)

Step 3. Calculate the test statistic. The test statistic will be the sum of ranks W for the first sample:

W = 3 + 6 + 8 + 9 + 11 = 37

Step 4. Compare the calculated sum of ranks W with the critical sum of ranks WC. The relationships of the calculated W and the critical sum of ranks with respect to rejecting the null hypothesis and accepting the alternative hypothesis are:

If Ha: μA ≠ μB, use the tables of both lower and upper percentage points
If Ha: μA > μB, use the table of upper percentage points
If Ha: μA < μB, use the table of lower percentage points
If W > WC (upper table), reject Ho and accept Ha; otherwise, do not reject Ho.

[Table: Upper percentage critical points for the distribution of rank sums, where n2 = larger sample size and n1 = smaller sample size; values not reproduced.]

From the table, with n1 = 5 and n2 = 6, the critical sum of ranks at a risk of 0.089 is WC = 38. At a risk of 8.9 percent, 37 is not greater than 38: there is not sufficient evidence to conclude that type A has a median greater than that of type B.
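The rank-sum calculation in steps 3–4 can be scripted. The sketch below is my own illustration; it computes W for type A and the large-sample approximation to the upper critical value that the next section introduces:

```python
import math

type_a = [394, 836, 901, 1068, 1689]
type_b = [153, 258, 528, 712, 891, 1592]

# Rank the pooled data (no ties occur here) and sum the ranks of sample A.
pooled = sorted(type_a + type_b)
ranks = {value: i + 1 for i, value in enumerate(pooled)}
w = sum(ranks[v] for v in type_a)
print("W =", w)                            # -> 37

# Normal approximation to the upper critical rank sum at alpha = 0.089
n1, n2, z = len(type_a), len(type_b), 1.35
mean_w = n1 * (n1 + n2 + 1) / 2
sd_w = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
upper_wc = mean_w + z * sd_w
print("upper WC =", round(upper_wc, 1))    # close to the tabled value of 38
```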
Tabled values are limited. For larger sample sizes (n values greater than 10), WC can be approximated by

Lower WC = n1(n1 + n2 + 1)/2 − Zα√[n1n2(n1 + n2 + 1)/12]
and/or
Upper WC = n1(n1 + n2 + 1)/2 + Zα√[n1n2(n1 + n2 + 1)/12]

If we estimate the upper WC for the previous example (recall that the table value was 38), with n1 = 5, n2 = 6, α = 0.089, and Zα = 1.35:

Upper WC = 5(5 + 6 + 1)/2 + 1.35√[(5)(6)(5 + 6 + 1)/12] = 37.4

Illustrative example 2: A study has been made comparing the systolic blood pressure of individuals selected from two body mass index (BMI) groups. The BMI is determined by

BMI = [Weight (lbs.) × 703]/[Height (in.)²]

A general relationship between weight and BMI is:

<18.5        Underweight
18.5–24.9    Normal
25.0–29.9    Overweight
>30          Obese

Group A has BMIs between 25 and 29, and group B has BMIs between 24 and 25.

Data:
Group A: 137, 138, 138, 138, 138, 139, 139, 140, 140, 140, 140, 142, 143, 144, 144 (NA = n1 = 15)
Group B: 133, 133, 134, 135, 136, 137, 137, 138, 138, 139, 139 (NB = n2 = 11)

Step 1. Construct the null hypothesis (Ho) and the alternative hypothesis (Ha).
Objective: Are the two groups from the same population or from different populations? No distribution is assumed.

Ho: μA = μB
Ha: μA ≠ μB

Step 2. Select a level of confidence and risk. The level of confidence is 95 percent, and the risk is 5 percent. Because this is a two-tailed test, we will place half our risk in each test statistic (the upper and lower critical WC's).

Step 3. Calculate a test statistic. The sum of ranks for the first sample will be the test statistic. If multiple values occur, the rank reported is the median rank of the positions occupied by the tied values. For example, in this study the value 133 occurred at the first- and second-order positions; the corresponding rank for each is the median of 1 and 2, which is 1.5.

Rearranging the data in ranked order, we get the following table:

Systolic blood pressure   Group   Order   Rank
133                       B       1       1.5
133                       B       2       1.5
134                       B       3       3
135                       B       4       4
136                       B       5       5
137                       A       6       7
137                       B       7       7
137                       B       8       7
138                       B       9       11.5
138                       B       10      11.5
138                       A       11      11.5
138                       A       12      11.5
138                       A       13      11.5
138                       A       14      11.5
139                       A       15      16.5
139                       A       16      16.5
139                       B       17      16.5
139                       B       18      16.5
140                       A       19      20.5
140                       A       20      20.5
140                       A       21      20.5
140                       A       22      20.5
142                       A       23      23
143                       A       24      24
144                       A       25      25.5
144                       A       26      25.5
The sum of ranks W for the first sample (group A) is

W = 7 + 4(11.5) + 2(16.5) + 4(20.5) + 23 + 24 + 2(25.5) = 266 = the test statistic

Step 4. Compare the calculated sum of ranks W with the critical sum of ranks WC.

[Table: Wilcoxon sum of ranks — upper percentage distribution of W; values not reproduced.]

The sample sizes here exceed the tabled values, so the calculated upper and lower critical sums of ranks, WC, are determined from the normal approximation:

Lower critical rank: WC = n1(n1 + n2 + 1)/2 − Zα/2√[n1n2(n1 + n2 + 1)/12]
Upper critical rank: WC = n1(n1 + n2 + 1)/2 + Zα/2√[n1n2(n1 + n2 + 1)/12]

Rejection criteria: If the calculated W is less than the lower WC, or if the calculated W is greater than the upper WC, reject Ho and accept Ha.

With n1 = 15, n2 = 11, and Zα/2 = 1.96:

Lower WC = 15(15 + 11 + 1)/2 − 1.96√[(15)(11)(15 + 11 + 1)/12] = 202.5 − 37.8 = 164.7
Upper WC = 202.5 + 37.8 = 240.3

[Table: Wilcoxon sum of ranks — lower distribution of W; values not reproduced.]

W = 266 is not less than the lower WC of 164.7; on this comparison alone we cannot reject Ho. We now compare W to the upper critical rank value: W = 266 is greater than the upper WC of 240.3. Therefore, we can reject Ho in favor of Ha. We have sufficient evidence to conclude that the two groups do in fact represent two populations of systolic blood pressure.
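Midranks for ties are tedious by hand, so a short script is a useful cross-check. The sketch below is my own; because the midranks and critical values are computed exactly, the results may differ slightly from rounded hand calculations:

```python
import math

group_a = [137, 138, 138, 138, 138, 139, 139, 140, 140, 140, 140, 142, 143, 144, 144]
group_b = [133, 133, 134, 135, 136, 137, 137, 138, 138, 139, 139]

def midranks(data):
    """Assign each value the median (average) of the rank positions it occupies."""
    s = sorted(data)
    rank = {}
    for v in set(s):
        positions = [i + 1 for i, x in enumerate(s) if x == v]
        rank[v] = sum(positions) / len(positions)
    return rank

rank = midranks(group_a + group_b)
w = sum(rank[v] for v in group_a)          # test statistic

n1, n2, z = len(group_a), len(group_b), 1.96   # two-tailed, 5 percent total risk
mean_w = n1 * (n1 + n2 + 1) / 2
sd_w = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
lower_wc, upper_wc = mean_w - z * sd_w, mean_w + z * sd_w

print(f"W = {w}, lower WC = {lower_wc:.1f}, upper WC = {upper_wc:.1f}")
print("reject Ho:", w < lower_wc or w > upper_wc)
```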
Zone Format Control Chart

The successful implementation of statistical process control (SPC) into the work environment requires that the people most closely related to the process maintain the control chart. In many cases, and for a variety of reasons, these individuals may feel uncomfortable with the task of plotting data and having to remember a number of the traditional "rules" that indicate an out-of-control condition or evidence that a change in the process characteristic being monitored has occurred. Zone charting offers an effective way to overcome many of these types of obstacles.

Traditional Shewhart control charts define a location statistic and a variation statistic. The average/range control chart limits for the averages portion are determined by

UCL = X̿ + A2R̄ and LCL = X̿ − A2R̄,

where:
R̄ = average range
X̿ = average of averages
A2 = a factor dependent on the subgroup sample size

The individual/moving range control chart limits for the individuals portion are determined by

Upper control limit (UCL) = X̄ + 2.66(MR̄) and Lower control limit (LCL) = X̄ − 2.66(MR̄),

where:
MR̄ = average moving range
X̄ = average of individuals

Zone charting uses a summation rule based on the probability of getting values within certain discrete areas, as defined by the process average and increments of ±1, ±2, and ±3 standard deviations.
In both cases, the control chart limits are based on the average response ±3 standard deviations. Zone charts are based on the probability of getting values in each of eight defined zones or areas:

Zone A = the average to the average ±1S. The probability of getting a single point inside one of these zones is approximately 34 percent.
Zone B = the average +1S to +2S and the average −1S to −2S. The probability of getting a value in one of these zones is approximately 13.5 percent.
Zone C = the average +2S to +3S and the average −2S to −3S. The probability of getting values in one of these zones is approximately 2.5 percent.
Zone D = the zones above the average +3S and below the average −3S. The probability of getting values in one of these zones is approximately 0.15 percent.

The performance of a control chart is determined by the average amount of time required to detect a change in the process. This performance is measured in terms of the average run length (ARL). The ARL for a Shewhart control chart using a single detection rule — a single point outside the upper or lower control limit — where a 1S shift has occurred is ARL = 42. By adding additional rules of detection, we can reduce the ARL for a given amount of shift in the process. For example, by adding the rule of seven points in a row on the same side of the average, we reduce the ARL of 42 to an ARL of 11 (for a 1S shift). Other rules are available, but the task of remembering all of the rules becomes burdensome in application.

The zone-weighting factors are proportional to the inverse of the probability of the event occurring, and the zone factors may be adjusted to allow for various ARLs for a given amount of process change.

Construction of Zone Charts

1. Collect sufficient process data to characterize the process by determination of the process average and variation. Compute the process average and the standard deviation of the process, S. The variation statistic will be either the average range or the average moving range, depending on the variable chart selected to monitor the process.
2. Determine the average ±1S, ±2S, and ±3S values.
3. Select an appropriate zone-weighting scale. Each zone area is identified as A, B, C, or D, and a zone score is assigned to each. These weights are cumulatively summed until a critical run sum (CRS) has been reached or exceeded; the CRS value triggers the alarm system for the process. Several zone-weighting systems and CRS values are available, depending on the ARL wanted for a given amount of process change. The scale used in this module assigns zone scores of A = 1, B = 2, C = 4, and D = 8, with a CRS of 8.

[Table: ARLs for selected zone scales, for deviations from the historical average of 0 to 3 in units of S; the table compares several zone-score/CRS schemes with the Shewhart chart using 3S limits only (ARL = 370 with no shift) and the Shewhart chart with the added seven-in-a-row rule. Values not reproduced.]

4. Collect data and plot the averages or individuals, depending on the chart type. Use the following steps:
a. Locate the zone area for the first value to be plotted, and record the proper zone score.
b. Locate the next point. If it is on the same side of the average as the preceding value, add the zone score of this value to the plotted zone score of the previous value.
c. If the next point is on the opposite side of the average from the previous one, do not accumulate the zone score; start the counting process over.
d. Upon reaching or exceeding the CRS, the process is deemed out of control, a condition that indicates a process change relative to the historical characterization. Upon reaching the CRS value, discontinue the accumulation process and start over. Never connect a value that exceeds the CRS to the next point.
Collect Historical Data

The following data will be used to develop an average/range control chart using the zone format. Samples #1 through #18, each a subgroup of five observations, will be used to determine the historical characterization of the process.

[Data table: samples #1–#18, five observations each, with the subgroup average X̄ and range R computed for each sample; individual values not reproduced.]

The location statistic will be the grand average of the averages, X̿.
X̿ = (X̄1 + X̄2 + X̄3 + . . . + X̄k)/k = (X̄1 + . . . + X̄18)/18 = 7.191

The variation statistic will be the average range, R̄:

R̄ = (R1 + R2 + R3 + . . . + Rk)/k = (R1 + . . . + R18)/18 = 0.257

Three standard deviations for the distribution of averages are calculated using the following relationship:

3S_X̄ = A2R̄ = (0.577)(0.257) = 0.148

One standard deviation for the distribution of averages is determined by dividing 3S_X̄ by three:

S_X̄ = 0.049

Note: The range portion for this example of the zone format average/range chart is not presented, as the range portion is performed exactly as with a traditional range chart. See the module entitled Average/Range Control Chart.

Zones in increments of one standard deviation about the average are determined, and a zone-weighting score is assigned for each of the ±1, ±2, and ±3 standard deviation zones. The following table lists the zone-definition areas and the corresponding zone scores:

Zone location          Zone value          Zone score
>X̿ + 3S               >7.338              8
X̿ + 2S to X̿ + 3S     7.289 to 7.338      4
X̿ + 1S to X̿ + 2S     7.240 to 7.289      2
X̿ to X̿ + 1S          7.191 to 7.240      1
X̿ − 1S to X̿          7.142 to 7.191      1
X̿ − 1S to X̿ − 2S     7.093 to 7.142      2
X̿ − 2S to X̿ − 3S     7.044 to 7.093      4
<X̿ − 3S               <7.044              8

These zone values and zone-weighting factors are recorded on the zone format X̄/R chart.

[Chart: zone format X̄/R chart template with horizontal lines at 7.044, 7.093, 7.142, 7.191 (Avg), 7.240, 7.289, and 7.338; zone scores 8, 4, 2, 1, 1, 2, 4, 8 down the left side; sample positions 1–27 across the bottom.]

Each of the averages is plotted in the appropriate zone location. The first few values are plotted to illustrate the concept. The following rules of protocol are used with the zone charting technique:

1. Plotting points are circled, with the appropriate zone score placed inside the circle.
2. Connect the circled values with a line.
3. Continue to accumulate the previous circled zone score with the current zone score as long as the positions remain on the same side of the average line. If the average line is crossed going from one zone area to another, start the accumulation process over.
4. An indication of an out-of-control condition is evident when a cumulated zone score of greater than or equal to eight is obtained.

Sample #1: An average of 7.19 is located in zone 1 below the average (between 7.142 and 7.191). A zone score of 1 is placed in this area.
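The zone boundaries above follow directly from X̿, R̄, and A2. A quick sketch (my own variable names; it rounds to three decimals as the text does):

```python
grand_avg = 7.191      # grand average of the subgroup averages
rbar = 0.257           # average range
a2 = 0.577             # A2 factor for subgroups of n = 5

s = round(a2 * rbar / 3, 3)   # one standard deviation of the averages -> 0.049

# Zone boundaries at the average plus/minus 1S, 2S, and 3S
bounds = {k: round(grand_avg + k * s, 3) for k in range(-3, 4)}
print(bounds)
```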
Sample #2: An average of 7.21 is located in zone 1 above the average. The average line has been crossed going from sample #1 to sample #2; therefore, the accumulation process is started over, and a zone score of 1 is recorded.

Sample #3: An average of 7.16 is located in zone 1 below the average. The average has been crossed going from sample #2 to the current position for sample #3; therefore, the accumulation process is started over, and a zone score of 1 is recorded.

Sample #4: An average of 7.14 is located in zone 2 below the average. The average has not been crossed; therefore, we add the previous score of 1 to the current zone score of 2 for a total zone score of 3. The 3 is circled and connected to the preceding value.
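The plotting rules used for samples #1–#4 can be captured in a short routine. The sketch below is my own interpretation of the accumulation rules (using the 1/2/4/8 scoring scale with CRS = 8 from this module):

```python
def zone_scores(averages, center, sigma, scores=(1, 2, 4, 8), crs=8):
    """Return the cumulative zone score plotted for each point."""
    plotted, total, prev_side = [], 0, None
    for x in averages:
        side = 1 if x >= center else -1
        zone = min(int(abs(x - center) / sigma), 3)   # zone index 0..3 -> A..D
        score = scores[zone]
        # restart the count on crossing the average (or after a CRS signal)
        total = score if (prev_side is None or side != prev_side) else total + score
        plotted.append(total)
        if total >= crs:
            prev_side, total = None, 0   # signal: start the accumulation over
        else:
            prev_side = side
    return plotted

# Samples #1-#4 from the walkthrough:
print(zone_scores([7.19, 7.21, 7.16, 7.14], center=7.191, sigma=0.049))
```

A cumulative score that reaches or exceeds 8 flags the out-of-control condition described in rule 4.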
Continue to plot the remaining points for samples 5–18. None of the cumulative zone scores for the historical data reaches the CRS of 8.

A change has been made in the operation of the process. Based on the next nine sets of data, samples #19 through #27, do you have evidence to support the conclusion that the process data reflect any change?

[Data table: samples #19–#27, five observations each, with computed X̄ and R for each subgroup; the averages fall largely below the historical grand average of 7.191. Individual values not reproduced.]

Plotting samples #19–#27 on the zone chart, the points fall repeatedly in the zones below the average without crossing it, and the zone scores accumulate until the critical run sum of 8 is reached and exceeded. The cumulative zone score at or above the CRS signals that the process average has shifted downward relative to the historical characterization.

[Figure: Completed zone chart.]

Bibliography
Jaehn, A. H. 1987. "Zone Control Charts: A New Tool for Quality Control." Tappi Journal (February): 159–161.

Appendix: Tables

Table A.1 Standard normal distribution.
Upper-tail areas P(Z > z). Rows give Z from 0.0 to 3.9 in steps of 0.1; columns x.x0 through x.x9 give the second decimal place of Z. [Table body not reproduced; the values were scrambled during extraction.]
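The upper-tail areas in Table A.1 can be reproduced with the Python standard library, since P(Z > z) = ½ erfc(z/√2) exactly. A minimal sketch (the function name `upper_tail` is ours):

```python
import math

def upper_tail(z):
    """Upper-tail area P(Z > z) of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# upper_tail(1.00) ≈ 0.15866 and upper_tail(2.00) ≈ 0.02275,
# matching the table entries for Z = 1.00 and Z = 2.00.
```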

Table A.2 Extended normal distribution.
Upper-tail areas P(Z > z) in scientific notation for large Z. Rows give Z from 4.0 to 9.9 in steps of 0.1; columns 0.00 through 0.09 give the second decimal place of Z. [Table body not reproduced; the values were scrambled during extraction.]
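The same erfc relation reproduces the extended table without underflow, because erfc stays accurate far into the tail where a naive 1 − Φ(z) would lose all precision. A minimal sketch:

```python
import math

def upper_tail(z):
    """P(Z > z) via the complementary error function; accurate far into the tail."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# upper_tail(6.0) ≈ 9.87e-10 and upper_tail(8.0) ≈ 6.22e-16,
# the magnitudes tabulated in the extended table.
```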

Table A.3 Poisson Unity Values.
Values of np for which the probability of acceptance Pa equals 0.95, 0.90, 0.75, 0.50, 0.25, 0.10, and 0.05 (columns), for acceptance numbers C = 0 through 37 (rows). [Table body not reproduced; the values were scrambled during extraction.]
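A unity value is the value of np at which the cumulative Poisson probability of C or fewer occurrences equals the desired probability of acceptance Pa. It can be recovered by bisection, since the cumulative probability decreases as np grows; a sketch (function names are ours):

```python
import math

def poisson_cdf(c, m):
    """P(X <= c) for X ~ Poisson(m)."""
    return sum(math.exp(-m) * m**k / math.factorial(k) for k in range(c + 1))

def unity_value(c, pa, lo=1e-9, hi=100.0):
    """np such that P(X <= c) = pa, found by bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if poisson_cdf(c, mid) > pa:
            lo = mid   # acceptance probability still too high; np must be larger
        else:
            hi = mid
    return (lo + hi) / 2

# For C = 0 the unity value at Pa = 0.05 is -ln(0.05) ≈ 2.996,
# and at Pa = 0.95 it is -ln(0.95) ≈ 0.051.
```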

Table A.4 Poisson Unity Values, RQL/AQL.
Ratios of unity values (RQL/AQL) for α = 0.05 paired with β = 0.10, β = 0.05, and β = 0.01 (columns), together with np1, for acceptance numbers C = 0 through 40 (rows). [Table body not reproduced; the values were scrambled during extraction.]
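Assuming the usual construction of such a table, the RQL/AQL ratio for a given acceptance number C is the unity value at Pa = β divided by the unity value at Pa = 1 − α. A sketch of the calculation for C = 1, α = 0.05, β = 0.10 (function names are ours):

```python
import math

def poisson_cdf(c, m):
    return sum(math.exp(-m) * m**k / math.factorial(k) for k in range(c + 1))

def np_for_pa(c, pa):
    """np giving acceptance probability pa for acceptance number c, by bisection."""
    lo, hi = 1e-9, 100.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if poisson_cdf(c, mid) > pa:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# C = 1, alpha = 0.05, beta = 0.10:
aql_np = np_for_pa(1, 0.95)   # np1, where Pa = 1 - alpha (≈ 0.355)
rql_np = np_for_pa(1, 0.10)   # np2, where Pa = beta
ratio = rql_np / aql_np       # RQL/AQL ratio, ≈ 10.95
```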

Table A.5 t-distribution.
Critical values of t for tail areas (α for one-tail tests, α/2 for two-tail tests) of 0.10, 0.05, 0.025, 0.010, 0.005, and 0.0025 (columns), for degrees of freedom 1 through 30, 40, 60, 120, and ∞ (rows). [Table body not reproduced; the values were scrambled during extraction.]
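Entries in the t-table can be checked without external libraries by numerically integrating the Student's t density. A rough sketch using trapezoidal integration (the step count and truncation point are arbitrary choices of ours):

```python
import math

def t_upper_tail(t0, df, upper=60.0, steps=120000):
    """P(T > t0) for Student's t, by trapezoidal integration of the density
    over [t0, upper]; the tail beyond `upper` is negligible for moderate df."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))

    def dens(t):
        return c * (1 + t * t / df) ** (-(df + 1) / 2)

    h = (upper - t0) / steps
    area = 0.5 * (dens(t0) + dens(upper))
    for i in range(1, steps):
        area += dens(t0 + i * h)
    return area * h

# The tabled critical values t(0.05, df = 10) = 1.812 and t(0.025, df = 10) = 2.228
# give tail areas of about 0.050 and 0.025, as expected.
```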

Table A.6 Chi-square distribution.
Critical values of χ² for tail areas (α for one-tail tests, α/2 for two-tail tests) for degrees of freedom 1 through 30, 40, 50, 60, 70, 80, 90, and 100 (rows); columns give right-tail areas ranging from 0.995 down to 0.005. [Table body not reproduced; the values were scrambled during extraction.]
For number of degrees of freedom > 100, calculate the approximate normal deviate Z = √(2X²) − √(2(df) − 1), where X² is the chi-square value.
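The approximation for df > 100 can also be inverted to estimate a critical value: solving Z = √(2X²) − √(2(df) − 1) for X² gives X² ≈ ½(z + √(2(df) − 1))². A sketch:

```python
import math

def chi2_crit_approx(df, z):
    """Approximate chi-square critical value by inverting
    Z = sqrt(2*X^2) - sqrt(2*df - 1)."""
    return 0.5 * (z + math.sqrt(2 * df - 1)) ** 2

# For df = 100 and a 0.05 upper-tail area (z = 1.645) this gives about 124.1,
# close to the exact critical value 124.3.
```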

Table A.7 Control chart factors I.
Factors A2, d2, D3, D4, A3, c4, B3, and B4 by subgroup size, 2 through 25. [Table body not reproduced; the values were scrambled during extraction.]
Note: Factors A2, D3, and D4 are not appropriate (NA) for these sample sizes. Use the average/standard deviation control chart where n > 10.

Table A.8 Control chart factors II.
Factors E2 and Ã2 by subgroup size. [Table body not reproduced; the values were scrambled during extraction.]
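Several of the tabled factors have closed forms: c4 = √(2/(n−1)) Γ(n/2)/Γ((n−1)/2), A3 = 3/(c4√n), and B3, B4 = 1 ∓ 3√(1 − c4²)/c4 (with B3 set to 0 when negative). A sketch:

```python
import math

def c4(n):
    """c4 = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2)."""
    return math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

def A3(n):
    return 3 / (c4(n) * math.sqrt(n))

def B3(n):
    return max(0.0, 1 - 3 * math.sqrt(1 - c4(n) ** 2) / c4(n))

def B4(n):
    return 1 + 3 * math.sqrt(1 - c4(n) ** 2) / c4(n)

# For n = 5: c4 ≈ 0.9400, A3 ≈ 1.427, B3 = 0, B4 ≈ 2.089.
```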

Table A.9 Bias factors for attribute studies.
Bias factors BfMR and BfCR corresponding to miss rates (Mr) and false-alarm rates (Cr) from 0.01 to 0.50. [Table body not reproduced; the values were scrambled during extraction.]
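Spot-checking entries suggests that the tabled bias factors are the ordinate of the standard normal density at the z-value whose upper-tail area equals the given rate (this interpretation is our inference, not stated in the table). A sketch using bisection on the normal CDF:

```python
import math

def bias_factor(p):
    """Ordinate of the standard normal density at the z whose upper-tail area is p."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:
            lo = mid   # tail area too large; z must be bigger
        else:
            hi = mid
    z = (lo + hi) / 2
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

# bias_factor(0.50) ≈ 0.3989 (the normal ordinate at z = 0),
# and bias_factor(0.01) ≈ 0.0266, near the tabled .0264.
```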

Table A.10 Critical values of smaller rank sum for Wilcoxon-Mann-Whitney test.
Critical values for n1 (smaller sample) and n2 from 3 to 15, at α = 0.10, 0.05, and 0.025. [Table body not reproduced; the values were scrambled during extraction.]
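Critical values of this kind can be generated exactly for small samples by enumerating every possible rank assignment of the smaller sample under the null hypothesis; a sketch (function name is ours):

```python
from itertools import combinations
from math import comb

def rank_sum_critical(n1, n2, alpha):
    """Largest w with P(W <= w) <= alpha, where W is the rank sum of the
    smaller sample and all rank assignments are equally likely under the null.
    Returns None when no rank sum is small enough (tabled as a dash)."""
    N = n1 + n2
    counts = {}
    for ranks in combinations(range(1, N + 1), n1):
        s = sum(ranks)
        counts[s] = counts.get(s, 0) + 1
    total = comb(N, n1)
    cum, crit = 0, None
    for s in sorted(counts):
        cum += counts[s]
        if cum / total <= alpha:
            crit = s
        else:
            break
    return crit

# For n1 = n2 = 3 this gives 7 at alpha = 0.10, 6 at alpha = 0.05,
# and None at alpha = 0.025, matching the table's 7, 6, and dash.
```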

Table A.11 Values of i and f for CSP-1 plans.
Clearance numbers i for sampling frequencies f = 1/2, 1/3, 1/4, 1/5, 1/7, 1/10, 1/15, 1/25, 1/50, 1/100, and 1/200 (rows) and AOQL values from 0.018% to 11.46% (columns), with the corresponding AQL (%) values. [Table body not reproduced; the values were scrambled during extraction.]

Table A.12 Cumulative Poisson distribution.
P(X ≤ x) for means λ from 0.1 to 5.0 in steps of 0.1 (columns) and counts x from 0 upward (rows). [Table body not reproduced; the values were scrambled during extraction.]

Table A.13 F-table.
Upper critical values of the F distribution for α = 0.10 and α = 0.05. Columns give degrees of freedom for the numerator (m1); rows give degrees of freedom for the denominator (m2). [Table body not reproduced; the values were scrambled during extraction.]
1.84 1.45 1.67 1.14 2.87 1.82 1.58 1.79 1.49 1.36 1.58 2.H1317_CH49-AP.49 1.64 1.59 1.29 2.29 2.20 2.12 2.38 2. ␣ = 0.27 2.90 1.85 1.02 4.77 1.72 1.37 2.61 3.72 1.61 1.66 1.57 1.60 1.65 1.39 1.07 2.50 1.65 1.79 1.34 1.67 1.69 1.62 1.41 1.80 1.70 1.15 2.85 1.11 1.70 1.38 1.99 1.49 1.50 1.75 1.74 1.94 1.67 1.38 1.51 2.46 1.06 2.69 1.72 1.90 1.95 1.45 1.81 1.66 1.33 2.84 1.43 1.74 2.73 1.53 1.90 1.76 1.48 1.86 1.39 1.90 3.74 1.22 1.75 1.17 1.69 1.89 3.96 1.75 1.91 1.59 1.89 1.34 1.89 1.81 1.42 1.00 2.94 1.00 2.30 1.41 1.74 1.63 500 3.73 1.21 2.50 1.54 2.86 1.53 1.82 1.35 1.29 1.92 1.44 1.26 2.46 1.48 1.83 1.84 1.53 1.82 1.82 1.72 1.86 1.70 1.89 1.76 1.52 1.70 1.95 3.35 1.85 1.70 1.76 1.07 4.97 1.70 1.62 1.23 2.62 1.04 2.64 1.14 2.21 3.83 1.75 1.15 2.19 1.57 1.59 1.65 1.80 1.39 1.10 2.52 1.37 1.10 2.91 1.97 1.93 1.64 1.61 500 1.39 1.84 1.01 2.37 1.08 2.03 2.32 1.37 2.83 1.50 1.80 1.48 1.86 3.27 2.78 1.15 3.55 1.73 2.38 1.73 1.68 1.32 2.72 1.78 1.06 2.85 1.06 2.02 2.64 1.93 1.82 1.84 1.58 1.86 1.14 3.43 2.52 1.41 1.60 1.69 1.68 1.03 2.78 1.49 2.79 1.48 1.23 1.13 F-table (continued).82 1.53 1.37 1.58 1.05 4.39 1.00 2.72 1.88 1.16 2.99 1.91 1.69 1.58 1.22 2.01 2.80 1.76 1.66 1.48 1.50 2.35 1.67 1.87 1.60 1.84 1.45 1.40 2.65 2.63 1.53 1.92 3.76 1.26 1.60 1.80 1.54 1.77 1.23 1.57 1.65 1.79 2.38 1.93 1.77 1.73 1.46 1.60 2.77 1.65 1.96 1.86 1.59 1.53 1.32 90 100 125 150 200 1.66 1.59 1.70 1.48 1.49 1.68 1.86 1.57 1.78 1.34 1.65 1.28 1.13 2.75 2.42 1.66 2.94 1.64 1.64 1.75 1.73 1.49 1.48 1.79 1.74 1.11 2.60 Ç Degrees of freedom for the denominator (m2) Degrees of freedom for the numerator (ml) 19 20 22 24 26 28 30 35 40 45 50 60 80 100 200 500 Ç 42 44 46 48 50 1.62 1.63 1.00 1.95 1.20 2.63 1.13 3.65 1.93 1.22 1.77 1.55 1.48 1.82 1.61 1.54 1.10 3.38 2.45 1.58 1.55 1.47 1.31 2.57 2.84 1.64 1.41 1.87 1.53 1.85 3.46 1.97 1.64 1.46 2.87 1.86 1.11 2.

37 2.52 5.82 4.4 9.89 3.00 5.76 2.68 2.03 3.51 3.56 4.45 2.23 32 34 36 38 40 5.53 3.28 3.66 5.12 4.20 2.57 977 39.97 2.1 9.26 2.00 2.56 2.78 2.76 2.82 5.31 2.71 4.62 2.59 5.39 2.55 2.10 3.73 3.46 3.6 9.17 2.99 2.01 2.31 2.82 2.34 2.13 3.99 4.15 2.41 2.15 3.79 2.50 2.33 3.86 3.89 5.12 3.78 5.12 3.67 2.53 2.29 2.4 14.55 6.60 2.44 3.88 2.49 2.43 2.64 2.03 3.75 3.27 2.25 3.52 4.73 2.5 8.41 2.53 3.00 2.97 2.88 3.28 4.39 2.51 2.48 4.65 2.86 4.03 2.05 3.50 2.39 922 39.20 3.21 4.73 2.79 2.36 2.57 7.35 3.90 8.53 5.29 3.80 4.75 2.72 2.67 2.07 4.94 2.70 2.76 2.71 2.13 3.67 4.48 2.69 2.04 3.62 2.45 2.4 14.48 4.59 5.51 4.91 3.79 16 17 18 19 20 6.10 3.06 3.29 2.0 800 39.45 2.78 3.41 3.52 5.90 2.92 2.32 4.81 2.82 2.11 (continued ) .85 5.0 16.02 2.96 3.4 14.03 2.26 3.51 3.20 2.38 3.68 969 39.43 4.38 2.57 2.28 3.39 990 39.76 2.46 2.33 4.61 3.70 2.62 2.17 3.22 2.38 3.27 4.94 2.89 3.10 3.90 2.90 4.55 2.65 4.47 2.65 2.66 3.32 2.65 3.13 3.87 4.64 2.56 2.37 3.08 4.68 2.67 5.0 10.08 4.50 4.22 3.77 3.62 975 39.84 2.07 5.48 3.45 11 12 13 14 15 6.72 5.73 3.98 2.38 2.10 4.96 2.95 3.29 3.14 3.44 3.86 5.42 5.70 2.88 2.20 3.17 2.30 3.52 2.4 14.20 6.4 14.71 5.60 4.83 6.21 6.36 2.4 12.09 3.75 2.24 3.08 3.66 3.16 3.35 4.54 2.50 3.17 2.48 2.36 7.67 2.65 2.47 5.35 2.30 4.59 2.72 6.34 3.25 2.41 3.68 2.56 3.39 2.05 2.81 8.34 2.28 2.76 4.01 2.46 2.41 6.30 6.41 989 39.98 2.39 3.98 5.67 2.43 2.76 4.50 2.89 2.84 3.21 2.60 4.39 3.63 4.47 2.26 5.15 2.59 3.2 8.54 2.48 3.97 2.87 2.43 3.35 4.57 2.25 2.31 3.81 2.52 2.83 5.33 2.53 2.63 4.08 2.73 2.47 5.23 5.36 3.24 3.48 4.47 5.05 2.13 2.50 3.3 8.45 2.62 4.2 8.72 2.64 2.37 4.11 2.43 987 39.15 937 39.9 9.21 3.22 2.80 2.2 15.40 2.53 2.94 7.62 6.20 2.66 6.77 2.33 3.92 2.87 2.37 6 7 8 9 10 8.53 2.10 3.4 8.74 2.54 6.3 8.34 2.75 5.43 864 39.41 4.23 4.01 3.55 2.25 2.36 2.22 4.76 2.44 3.17 3.61 2.61 3.3 8.31 2.51 2.4 14.30 3.13 3.25 2.4 8.39 2.37 2.18 3.24 5.70 4.38 4.75 6.65 2.31 2.3 14.15 2.61 5.90 3.72 2.18 3.35 2.38 3.3 14.15 4.13 3.79 5.87 2.51 2.52 980 39.41 2.59 
2.89 3.04 3.5 17.34 2.23 2.15 3.47 3.50 5.2 10.56 2.84 2.27 2.12 6.72 3.93 2.42 4.73 2.01 3.79 6.27 2.60 7.57 2.4 14.64 2.70 2.32 2.49 2.54 4.34 2.69 6.29 3.86 3.88 2.55 2.69 3.77 3.87 3.80 3.25 4.72 4.31 2.98 7.4 14.62 4.57 2.87 2.29 2.51 2.79 6.06 5.46 6.79 2.91 3.44 2.32 3.43 2.42 2.25 3.28 2.4 14.64 2.31 3.66 3.30 2.63 5.85 957 39.45 5.49 2.22 3.95 2.41 2.92 2.05 2.48 2.60 2.93 2.59 3.04 5.92 2.42 4.7 9.72 2.36 2.84 6.34 26 27 28 29 30 5.98 6.12 4.22 3.77 2.90 2.07 6.69 4.06 3.63 3.41 2.28 2.63 2.50 5.74 3.00 3.54 2.75 2.39 2.72 2.05 3.68 2.78 2.33 2.60 6.81 3.95 5.64 6.70 3.80 2.47 4.06 3.60 2.4 14.12 3.66 4.90 2.24 3.6 8.47 2.16 3.06 3.46 985 39.05 4.98 2.15 3.46 2.60 2.74 2.96 3.51 3.025 Degrees of freedom for the denominator (m2) Degrees of freedom for the numerator (ml) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 1 2 3 4 5 648 38.37 2.84 2.57 4.20 5.31 2.28 6.55 5.83 3.53 4.07 7.20 3.28 3.05 3.18 2.56 3.36 2.29 2.56 2.93 2.50 21 22 23 24 25 5.47 2.21 3.2 15.70 3.47 2.24 4.18 3.29 4.82 2.90 5.43 2.2 8.85 2.27 4.19 3.82 3.33 3.13 2.57 4.H1317_CH49-AP.76 900 29.59 2.57 2.85 2.48 2.4 14.67 3.qxd 10/15/07 4:14 PM Page 525 Appendix: Tables 525 ␣ = 0.87 2.81 2.61 2.19 2.15 3.25 2.72 5.01 3.59 2.05 3.22 3.52 4.92 3.09 4.69 4.24 4.43 2.5 8.09 3.53 2.69 2.20 3.99 2.82 2.22 2.4 14.29 3.97 4.62 2.27 3.49 983 39.62 2.59 2.32 4.20 4.58 3.65 2.72 3.29 3.76 3.15 4.57 2.48 3.45 2.98 948 39.42 2.76 963 39.3 8.92 5.44 2.61 3.39 2.12 4.

27 2.53 2.84 1.qxd 526 10/15/07 4:14 PM Page 526 The Desk Reference of Statistical Quality Methods Table A.80 1.44 6.00 1.25 2.49 6.05 1017 39.33 2.09 2.23 2.53 2.03 2.86 1.27 2.82 3.43 2.39 2.39 2.55 3.44 2.5 14.39 5.90 1.92 3.10 2.18 2.74 1.41 2.03 2.30 997 39.65 2.60 2.11 2.68 3.92 2.19 2.43 2.37 2.74 2.1 8.00 3.9 8.09 21 22 23 24 25 2.40 16 17 18 19 20 2.23 2.94 1.14 2.81 2.16 3.14 2.44 2.13 F-table (continued).66 1.99 1.23 3.24 4.19 2.01 2.40 3.35 2.38 2.31 3.10 2.64 2.03 2.76 3.88 1.00 1.53 6.02 2.48 4.08 2.74 1.02 3.35 2.98 1.14 4.11 2.22 2.51 2.68 1.23 2.36 2.32 2.0 8.22 2.06 2.33 2.17 2.97 2.44 2.59 2.29 2.60 2.04 2.05 2.03 2.12 2.26 4.30 2.63 2.82 1.47 2.16 2.90 1.92 1.99 1.03 2.96 2.06 2.25 3.56 6.12 4.00 1.92 4.37 2.73 3.67 3.35 3.00 1.33 2.36 6.5 10.85 4.0 8.17 2.33 6.38 6.33 5.04 2.42 2.24 2.13 2.48 6.56 2.41 6.93 1.83 1.96 2.25 2.05 2.11 2.17 2.20 2.42 3.25 2.32 2.03 2.04 2.90 1.52 2.20 4.27 2.61 2.92 1.1 8.47 3.14 1010 39.84 2.93 2.24 1001 39.87 2.83 1.18 1007 39.18 2.5 14.85 1.22 2.37 5.40 2.21 3.97 3.84 3.92 2.00 2.33 2.96 1.79 32 34 36 38 40 2.33 2.03 2.07 2.24 2.13 2.88 1.00 1.85 2.28 5.18 2.91 26 27 28 29 30 2.42 3.48 2.19 4.28 2.30 2.85 1.51 6.95 3.79 2.08 2.19 2.37 3.44 2.H1317_CH49-AP.00 2.15 2.24 2.10 2.15 4.09 2.94 1.2 8.13 2.12 2.5 13.21 2.1 8.96 1.24 2.35 2.92 1.1 8.17 4.67 2.12 2.56 2.87 1.36 2.05 2.80 2.55 3.13 2.03 2.2 8.33 3.53 6.35 923 30.13 2.61 2.75 1.14 3.35 2.31 5.0 8.88 1.90 1.93 1.58 3.29 2.01 1.09 2.0 8.56 3.04 4.91 1.02 6 7 8 9 10 5.11 2.27 2.18 3.81 1.86 3.46 2.49 2.08 4.32 2.09 4.61 3.33 2.97 2.03 2.92 2.21 2.41 2.88 1.53 2.07 2.02 2.30 2.47 2.06 2.27 6.0 8.76 2.04 2.80 1.32 6.19 2.63 2.51 2.76 1.01 1.71 1.45 3.96 2.33 3.31 2.07 2.07 2.43 2.28 999 39.08 2.44 3.07 2.34 5.15 2.02 2.11 2.50 2.27 2.45 2.77 3.92 1.33 2.5 14.90 2.33 995 29.75 2.00 2.11 2.24 3.09 2.97 1.13 2.10 4.15 2.90 3.68 2.11 2.16 1008 39.06 2.20 1005 39.91 2.85 2.4 14.29 3.84 2.22 2.56 3.17 4.98 1.38 3.94 2.95 1.44 5.15 2.45 2.16 2.08 2.90 1.58 2.05 2.26 
2.79 1.61 3.29 2.83 1.64 .25 2.31 2.67 3.11 2.77 1.18 2.17 2.67 2.70 2.25 2.95 1.23 3.01 4.72 2.39 6.17 2.01 1.89 2.42 2.66 1.28 2.15 2.68 3.72 1.78 3.15 3.08 2.08 11 12 13 14 15 3.90 1.70 3.06 2.93 4.06 1016 39.97 1.52 2.97 1.0 8.07 4.98 2.20 2.50 2.13 2.5 14.13 2.91 3.64 3.05 2.17 2.46 2.76 1.12 2.5 14.28 2.31 2.025 Degrees of freedom for the denominator (m2) Degrees of freedom for the numerator (ml) 19 20 22 24 26 28 30 35 40 45 50 60 80 100 200 500 Ç 1 2 3 4 5 992 23.23 1004 39.93 3.5 13.66 3.05 2.05 2.36 2.08 2.25 2.27 2.87 2.10 1013 39.41 2.85 1.81 1.94 1.82 2.81 3.13 2.74 2.58 2.62 2.65 2.85 1.53 3.39 2.41 2.31 2.73 2.95 2.69 2.87 1.09 2.14 2.49 2.20 2.21 2.12 4.15 2.18 2.87 1.28 3.21 2.74 3.78 2.19 2.23 2.58 2.13 2.76 3.84 1.99 1.89 2.96 2.69 1.09 2.97 1.92 1. ␣ = 0.82 1.77 1.20 2.98 2.99 4.59 3.74 1.88 2.96 4.20 2.85 1.5 14.81 1.16 2.15 2.79 1.42 5.56 2.23 2.29 6.50 2.9 8.5 14.41 2.5 14.53 2.94 1.02 1.49 3.10 2.68 3.01 2.86 2.98 1.5 14.95 1.5 14.99 1.85 1.36 2.27 2.70 2.95 2.69 1.57 2.1 8.42 2.95 1.26 6.88 1.17 3.48 2.03 1018 39.15 2.39 2.5 14.05 2.05 2.12 1012 39.04 2.88 4.38 3.22 2.51 3.37 2.9 8.47 4.47 2.09 2.36 2.63 2.49 2.23 4.1 8.20 3.71 1.80 2.70 3.27 2.08 2.12 2.48 6.77 2.64 3.21 2.54 2.01 1.24 2.96 4.4 14.99 2.90 1.38 2.10 2.26 1000 39.34 2.41 2.76 2.07 2.72 2.18 2.5 13.88 1.98 1.30 2.29 2.08 2.29 2.90 1.86 4.

86 1.73 1.00 1.32 2.05 2.03 2.34 4.15 2.80 1.58 1.86 3.07 2.68 1.18 2.32 2.30 1.29 2.80 1.83 1.87 1.45 1.91 1.11 2.82 1.33 2.42 2.08 3.67 1.93 1.13 2.01 1.95 1.23 1.62 1.14 2.01 1.88 1.26 2.37 1.90 1.67 1.36 2.49 1.67 1.71 2.01 2.74 1000 1.89 1.45 3.63 1.79 1.58 1.06 2.52 1.74 3.93 1.33 2.79 1.23 2.27 2.13 5.01 1.93 1.55 55 60 65 70 80 1.09 3.01 2.74 1.53 1.05 2.67 1.16 2.44 2.06 2.99 2.29 5.qxd 10/15/07 4:14 PM Page 527 Appendix: Tables 527 ␣ = 0.60 1.96 1.18 5.39 5.64 1.22 3.07 2.79 1.69 1.72 3.14 2.84 1.78 1.08 3.30 2.82 1.14 2.71 2.09 2.56 1.36 1.11 2.06 2.15 2.18 2.78 1.97 1.44 1.67 1.97 1.99 1.11 2.62 1.80 1.80 1.66 1.46 1.20 2.79 2.77 1.03 3.81 1.11 2.71 1.37 2.24 2.63 2.99 1.38 2.98 1.63 1.55 2.83 1.42 1.35 2.88 1.71 1.82 1.87 1.76 1.18 2.08 2.12 2.59 1.89 1. 3.04 2.87 1.94 1.13 5.95 1.99 3.58 1.05 2.21 2.67 1.78 1.61 1.65 1.15 2.05 3.37 5.09 2.97 1.18 2.78 3.31 1.85 1.81 2.97 1.17 2.65 1.52 1.88 1.48 1.51 1.86 1.10 3.93 1.09 2.59 1.00 1.48 1.93 1.62 1.94 1.19 2.72 1.75 1.02 2.77 1.82 1.90 1.89 1.05 2.43 2.69 2.65 1.86 1.57 2.83 1.27 3.48 1.89 1.92 1.08 2.99 1.91 3.62 1.67 2.68 1.22 2.70 2.77 1.03 2.98 1.64 1.11 2.03 2.49 2.85 1.55 2.50 2.68 1.78 1.10 2.61 1.95 1.22 2.42 3.48 1.95 2.50 1.91 1.51 2.24 2.78 1.88 1.05 2.68 1.30 2.77 1.84 2.80 1.56 1.64 1.36 2.56 1.03 2.99 2.47 1.18 2.62 1.91 1.69 3.29 2.91 1.48 2.27 5.80 1.56 1.87 1.96 1.11 2.37 1.06 2.03 2.69 1..35 1.26 2.77 1.47 1.65 1.61 1.01 1.86 1.39 1.01 1.48 1.00 2.46 1.94 1.72 1.03 2.66 1.22 2.72 1.54 1.41 2.71 1.88 2.45 1.58 2.34 1.97 1.75 1.36 3.54 1.46 1.33 1.62 1.85 1.59 2.43 1.82 1.77 1.04 2.84 1.70 1.36 2.91 1.73 1.13 2.58 1.72 1.87 2.33 2.28 1.60 1.20 2.30 1.22 3.11 2.75 1.64 1.84 1.75 1.67 1.70 2.82 1.39 3.08 2.06 2.81 1.44 1.80 1.38 2.84 1.53 1.50 1.96 1.07 2.23 1.73 2.21 1.48 1.07 2.77 1.26 2.86 1.75 500 1.75 Ç Degrees of freedom for the denominator (m2) Degrees of freedom for the numerator (ml) 19 20 22 24 25 28 30 35 40 45 50 60 80 100 200 500 Ç 42 44 46 48 50 2.66 1.82 1.40 3.17 2.57 
2.00.75 1.04 2.42 2.88 1.62 1.07 3.40 90 100 125 150 200 1.30 2.45 1.14 1.86 1.34 2.10 2.99 1.21 2.93 3.65 2.89 1.11 3.09 2.57 1.83 1.71 1.27 2.83 3.88 1.42 2.68 1.87 1.13 2.34 1.85 1.74 1.96 1.16 2.58 1.11 2.70 1.07 2.98 2.27 1.83 1.72 1.91 1.51 1.05 2.14 2.78 1.80 1.25 2.32 2.32 1.71 1.14 2.51 2.49 2.81 2.45 1.41 2.76 3.28 2.58 2.12 2.83 2.15 5.25 5.29 2.79 2.93 90 100 125 150 200 5.62 1.20 2.93 1.05 2.06 2.51 1.51 2.61 2.84 1.40 1.20 3.04 3.60 1.27 2.54 1.06 2.60 1.90 1.14 2.69 1.45 2.03 2.47 2.12 2.35 1.28 2.57 1.72 1.50 1.64 1.87 1.68 1.34 2.95 1.20 5.58 1.98 3.69 1.32 1.82 300 5.59 1.51 1.19 1.76 1.96 1.61 2.30 2.46 2.23 2.70 1.87 2.25 3.38 1.97 1.03 4.69 1.18 1.10 2.51 1.43 2.97 2.15 2.43 1.26 2.77 1.94 1.82 1.32 2.85 2.78 1.74 1.90 1.93 1.59 2.63 1.74 1.24 2.93 2.31 3.45 2.13 2.66 1.06 2.58 1.57 1.80 1.84 1.95 1.03 55 60 65 70 80 5.35 3.89 2.65 2.97 1.39 2.09 1.54 1.80 1.86 1.48 2.80 2.31 2.55 1.39 2.94 1.56 1.44 1.91 1.95 1.72 1.84 1.54 1.19 2.53 2.92 1.80 1.73 1.78 1.47 1.82 1.23 2.80 1.99 1.60 2.92 1.91 1.95 2.31 5.14 1000 5.23 300 1.97 2.20 2.55 1.02 2.64 1.02 2.32 3.89 1.42 1.37 2.43 2.02 4.54 2.34 3.80 3.92 1.73 1.41 1.89 2.73 1.77 2.71 1.77 1.H1317_CH49-AP.89 1.94 1.22 2.53 1.42 1.74 1.57 1.52 1.85 1.00 1.60 1.28 3.67 2.93 1.04 2.92 2.99 1.20 2.70 1.78 1.63 2.74 1.81 1.74 1.56 1.02 3.35 1.16 1.90 1.58 1.99 1.16 500 5.73 2.84 3.61 2.50 1.60 1.63 1.00 Ç (continued ) .39 1.09 2.24 2.82 1.01 2.83 2.57 2.18 2.61 1.88 1.97 1.93 1.83 1.13 1.95 3.69 1.40 5.88 1.39 2.66 1.69 1.01 2.19 2.64 1.42 1.85 1.19 2.70 3.51 1.38 1.74 1.43 3.65 1.025 Degrees of freedom for the denominator (m2) Degrees of freedom for the numerator (ml) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 42 44 46 48 50 5.71 1.25 1.12 2.39 1.63 1.75 2.08 2.02 2.16 2.88 1.41 2.79 1.89 3.60 1.71 1.60 1.47 2.27 1.07 2.91 1.

85 7.59 3.42 3.3 28.58 5.46 11 12 13 14 15 9.21 3.62 3.47 5.78 2.0 11.1 9.44 4.02 2.26 3.34 3.4 27.72 6.06 4.56 7.89 2.16 3.83 2.52 2.62 4.67 4.36 3.94 3.56 9.3 28.70 3.47 5.74 2.03 2.73 2.67 5.41 5.03 3.6 10.52 3.09 3.81 5.89 6.31 3.10 4.83 3.04 3.4 26.77 5.03 2.44 7.27 3.67 5.1 608 99.00 4.9 15.34 4.4 9.14 4.85 2.37 5.68 7.3 10.31 4.87 6.49 5.05 2.2 10.10 6.61 5.78 3.89 4.1 9.89 3.83 2.52 4.66 3.12 3.H1317_CH49-AP.66 2.73 5.04 4.28 6.93 3.60 3.47 3.14 4.54 3.86 3.87 4.01 3.31 3.89 3.21 3.9 14.02 2.62 5.54 5.84 3.4 27.86 2.94 6.98 2.46 6.87 2.82 614 99.13 3.11 4.56 5.18 8.09 5.98 6.96 2.54 4.00 4.61 5.42 5.73 3.06 3.2 14.31 3.02 2.97 2.37 3.55 3.41 3.90 2.29 5.1 14.4 9.20 8.17 4.1 21.56 2.19 4.82 2.19 3.30 4.48 4.64 8.52 4.45 3.51 4.2 16.94 3.78 3.01 3.22 4.94 2.49 7.34 4.88 3.28 4.80 5.48 6.98 2.72 617 99.70 4.37 3.63 6.79 2.18 3.60 32 34 36 38 40 7.96 2.65 8.84 2.60 6.9 14.75 7.17 3.81 2.12 3.79 3.50 4.56 5.58 2.59 3.09 3.91 2.4 26.72 4.61 5.70 3.62 2.57 3.62 3.4 27.43 3.14 3.32 3.49 4.35 7.20 8.31 5.48 2.18 4.3 27.89 613 99.55 8.13 3.7 598 99.48 2.63 4.0 14.65 7.29 3.5 598 99.2 9.39 4.66 2.02 2.10 3.64 4.69 3.0 30.43 3.93 3.96 2.82 3.78 8.72 7.72 2.5 34.43 4.07 3.22 5.04 4.61 2.83 2.03 3.70 3.02 7.30 4.99 2.70 6.46 3.3 9.36 4.91 2.3 30.86 2.42 5.50 3.68 3.35 3.59 3.21 5.85 7.93 2.95 5.66 2.12 3.4 27.75 2.52 6.71 3.18 5.06 3.86 3.69 2.25 5.15 7.86 7.32 540 99.09 3.95 7.81 2.25 4.2 9.76 3.22 6.99 3.03 5.16 3.39 8.75 3.13 4.72 4.51 3.86 2.54 4.86 4.36 3.08 3.99 2.48 2.64 619 99.60 8.8 500 99.59 2.22 3.80 2.94 3.32 3.75 3.47 7.62 2.72 2.06 5.0 1.58 2.44 4.79 6.10 6.07 3.1 14.99 6.46 3.02 3.29 5.2 606 99. 
␣ = 0.39 3.07 4.26 6.51 2.11 4.5 11.81 3.44 4.82 2.63 3.99 8.45 5.56 3.70 2.18 3.01 4.14 4.19 6.42 4.24 3.58 2.18 3.26 4.93 2.21 5.50 4.80 4.18 3.5 10.82 3.55 2.70 2.40 4.0 10.60 4.78 5.51 3.93 2.86 8.51 2.52 7.50 7.42 5.5 14.4 576 99.54 2.23 3.54 3.7 10.96 2.13 F-table (continued).21 3.94 7.64 2.90 2.64 3.95 2.23 3.21 6.42 16 17 18 19 20 8.15 3.05 3.66 7.58 4.87 4.60 7.06 611 99.30 3.7 1.91 3.93 6.79 2.09 3.4 27.24 3.63 2.55 2.32 3.26 3.7 16.01 5.45 2.53 5.88 2.86 4.94 4.34 4.99 3.62 2.68 618 99.41 4.45 6.39 3.05 3.57 4.01 6.82 2.17 3.30 4.39 4.60 7.74 5.11 6.7 12.77 7.90 2.90 3.89 4.57 4.96 3.26 3.67 2.8 14.68 2.46 4.46 3.19 3.18 5.92 2.21 5.50 3.35 4.91 5.4 27.76 2.55 9.93 5.68 2.36 5.13 3.33 3.99 6.50 3.55 2.91 3.78 2.18 3.07 8.03 3.56 4.74 2.56 5.72 2.64 7.65 9.29 3.06 7.70 2.75 3.37 3.89 3.35 3.36 3.4 26.3 14.25 4.74 2.02 3.02 2.74 4.40 3.78 3.99 3.4 25.97 3.45 2.66 2.87 2.24 5.22 3.15 3.12 3.72 5.65 3.32 5.05 3.7 15.82 7.5 16.99 21 22 23 24 25 8.22 11.68 3.98 3.75 2.09 3.67 3.52 3.93 2.10 4.03 3.79 2.72 5.89 2.82 4.77 4.56 3.75 2.2 9.46 4.82 4.24 3.42 .73 2.27 5.65 2.40 3.85 5.61 6 7 8 9 10 13.20 3.94 3.8 14.71 3.00 2.53 3.77 616 99.18 4.07 3.29 3.0 10.61 3.4 27.26 4.37 4.66 5.93 2.86 2.02 7.05 4.84 2.68 4.4 26.30 3.8 10.41 5.07 4.71 7.89 4.72 3.51 2.64 4.50 3.33 9.77 2.40 7.79 2.13 3.80 2.51 3.86 3.41 3.45 7.66 2.89 2.27 3.0 596 99.26 3.63 2.89 5.69 2.78 2.1 563 99.8 14.75 26 27 28 29 30 7.67 4.9 9.53 8.17 3.46 4.32 4.23 3.97 3.12 3.07 3.56 3.45 3.66 6.51 6.94 3.qxd 528 10/15/07 4:14 PM Page 528 The Desk Reference of Statistical Quality Methods Table A.34 5.45 4.31 5.80 3.76 4.2 15.71 2.22 4.30 3.20 4.8 18.31 3.61 4.010 Degrees of freedom for the numerator (ml) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 Degrees of freedom for the denominator (m2) Multiply the numbers of the first row (n2 = 1) by 10 1 2 3 4 5 405 93.85 2.56 5.10 3.15 3.77 3.43 3.92 4.35 3.15 3.96 4.3 602 99.93 2.

53 2.7 9.5 9.43 625 99.33 629 99.5 26.22 2.83 1.56 2.67 2.16 2.9 9.31 6.51 3.45 2.26 2.13 2.70 3.39 2.66 3.56 2.18 5.11 2.22 3.81 2.7 9.16 2.36 2.88 3.55 2.08 2.96 2.51 3.50 2.24 3.67 2.54 3.33 2.50 2.73 2.12 4.78 3.13 2.90 1.02 3.97 2.70 4.98 1.50 2.8 9.35 2.28 4.75 4.29 2.38 628 99.32 2.02 1.07 4.33 2.05 2.63 2.02 2.42 2.19 2.36 3.30 2.77 4.48 2.7 9.92 2.00 1.33 7.76 2.5 13.64 2.12 3.25 4.75 2.42 2.39 2.7 14.89 3.37 4.42 4.58 621 99.2 13.14 2.4 13.71 2.88 3.41 2.60 2.03 2.96 4.32 2.33 3.34 2.1 13.81 2.17 26 27 28 29 30 2.05 2.29 2.10 3.13 2.33 2.20 2.52 4.52 2.38 2.70 2.97 2.83 3.99 3.70 2.29 3.88 2.5 26.24 2.00 2.20 2.59 2.21 2.59 3.72 2.6 9.21 2.73 3.52 2.30 7.22 2.57 2.5 26.3 13.94 1.22 3.93 2.44 2.55 623 99.55 2.61 2.57 2.59 2.78 2.24 3.6 14.35 3.49 2.94 5.57 3.12 7.14 7.27 3.51 2.91 11 12 13 14 15 4.81 4.74 2.17 3.53 2.29 2.65 4.5 26.26 2.04 2.93 5.0 9.72 3.49 2.15 4.33 2.5 26.60 2.9 9.9 9.35 2.99 2.96 2.88 4.36 2.90 3.81 3.87 2.04 637 99.18 2.93 3.11 5.40 2.53 3.45 2.22 3.24 2.92 3.06 2.97 1.38 2.51 2.53 2.78 2.36 2.43 2.71 2.03 4.8 9.16 2.23 2.41 7.19 2.91 5.61 2.54 4.72 2.25 2.62 3.62 2.02 2.70 2.54 2.0 9.09 5.23 2.16 3.24 631 99.92 2.55 2.28 2.18 3.44 2.12 3.16 2.36 2.47 2.28 2.72 2.10 2.23 5.46 2.95 3.14 3.26 2.67 4.73 4.88 5.5 26.94 1.86 1.94 3.94 2.26 3.12 2.97 2.44 4.4 13.39 2.89 2.32 2.47 2.42 21 22 23 24 25 2.43 3.30 2.84 1.32 2.36 2.19 2.98 1.11 5.08 7.01 5.57 2.66 3.6 13.89 2.43 7.35 2.29 2.99 4.66 2.33 4.42 2.78 3.82 5.30 3.44 2.17 7.58 2.09 4.39 2.28 6.90 2.68 2.00 2.94 3.40 4.31 3.06 2.66 2.27 2.56 3.03 2.50 2.84 2.5 26.16 633 99.13 3.46 2.32 2.12 2.04 6.36 7.5 26.65 4.99 5.36 2.85 2.06 5.54 2.40 2.07 2.76 2.64 2.60 3.78 3.75 2.18 3.26 3.53 2.60 2.5 26.78 4.07 2.53 2.48 2.70 2.62 2.39 630 99.42 2.62 2.36 3.44 2.55 2.67 2.31 2.25 2.23 2.12 3.18 3.11 2.28 4.34 2.30 3.010 Degrees of freedom for the numerator (ml) 19 20 22 24 26 28 30 35 40 45 50 60 80 100 200 500 Ç Degrees of freedom for the denominator (m2) Multiply the numbers of the first row 
(n2 = 1) by 10 1 2 3 4 5 620 99.83 2.59 3.67 2.05 3.23 2.25 7.6 13.78 2.90 1.69 2.4 13.96 6.16 5.04 5.49 2.55 2.57 4.09 2.67 2.7 14.39 2.94 2.21 2.34 3.5 26.03 2.00 2.H1317_CH49-AP.49 2.01 32 34 36 38 40 2.93 6.55 2.15 3.00 2.48 4.03 2.37 2.47 624 99.19 3.5 26.69 2.00 3.5 26.46 2.41 2.01 6.5 26.53 4.78 2.49 2.20 4.88 3.20 633 99.74 2.37 3.87 2.63 2.06 2.1 13.60 4.84 2.40 3.68 2.40 2.17 2.49 2.70 4.06 2.14 5.75 3.74 2.91 4.27 3.22 2.89 2.54 2.90 5.67 4.28 3.80 2.80 (continued ) .5 9.27 7.66 2.10 3.03 2.5 13.02 5.38 2.39 2.18 2.62 2.14 2.03 2.87 16 17 18 19 20 3.21 3.25 6.76 2.25 2.77 2.12 2.40 6.80 2.35 6.49 3.86 3.3 13.83 2.71 3.46 2.46 2.96 1.75 2.46 3.19 2.36 630 99.92 2.50 2.45 2.66 2.88 2.30 2.60 2.35 2.5 26.73 3.11 2.16 2.38 3.20 7.49 2.5 9.42 2.37 2.10 3.88 5.28 2.45 2.08 2.6 9.87 1.2 13.06 3.80 3.4 26.63 2.qxd 10/15/07 4:14 PM Page 529 Appendix: Tables 529 ␣ = 0.53 3.26 2.99 5.38 2.86 4.44 2.68 2.05 2.36 2.30 2.43 3.10 3.33 2.64 2.22 4.69 2.5 13.51 623 99.38 4.32 2.5 26.15 2.03 2.40 3.25 2.46 2.87 1.58 2.69 3.7 9.62 3.18 5.0 9.08 636 99.79 2.45 2.62 2.27 2.88 5.42 2.83 3.46 3.51 2.98 3.33 2.42 2.40 2.91 1.65 3.08 3.47 3.08 2.11 2.58 2.56 2.63 2.62 3.58 2.4 26.42 2.16 2.32 4.40 626 99.38 3.35 2.82 3.21 2.08 3.27 2.87 2.17 3.10 2.84 2.09 2.13 635 99.02 6 7 8 9 10 7.07 5.42 6.89 3.00 1.80 2.26 2.41 3.57 2.08 3.

09 2.50 2.93 1.35 1.34 2.55 2.91 1.93 1.82 1000 6.74 1.70 1.00 2.24 2.26 2.77 1.54 2.51 3.09 2.85 1.53 1.85 1.31 2.25 7.13 2.36 2.16 2.75 2.09 2.90 1.66 1.72 4.68 2.78 1.76 2.76 3.66 1.07 2.28 2.010 Degrees of freedom for the numerator (ml) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 Degrees of freedom for the denominator (m2) Multiply the numbers of the first row (n2 = 1) by 10 42 44 46 48 50 7.10 2.19 2.47 2.39 1.93 2.45 3.32 55 60 65 70 80 7.21 3.64 2.48 1.59 2.72 3.85 1.06 2.17 2.12 3.15 1.57 1.53 1.09 2.35 2.72 2.66 2.76 1.96 3.76 4.94 1.33 2.31 2.79 1.22 2.56 2.45 2.22 4.18 2.40 2.63 2.75 1.91 1.76 1.15 2.44 2.83 1.66 2.06 2.92 4.91 2.52 2.82 1.14 3.01 1.35 2.26 4.02 300 6.00 80 100 200 500 Ç Ç Degrees of freedom for the numerator (ml) 19 20 22 24 26 28 30 35 40 45 50 60 Degrees of freedom for the denominator (m2) Multiply the numbers of the first row (n2 = 1) by 10 42 44 46 48 50 2.54 3.86 1.12 2.26 2.12 2.51 2.11 2.92 3.66 1.81 1.93 6.60 3.82 1.88 3.19 2.47 2.08 3.78 1.79 2.72 1.84 1.94 500 1.23 2.89 1.20 2.49 90 100 125 150 200 2.34 2.65 3.94 1.38 2.38 3.69 1.38 2.12 2.91 1.53 2.04 2.53 2.85 1.77 1.92 1000 1.06 2.73 2.18 2.16 2.03 2.93 2.31 2.72 1.25 2.06 2.59 2.12 2.56 3.15 5.41 1.51 2.71 1.97 1.79 1.37 2.81 1.82 1.63 1.06 2.80 6.19 7.27 2.92 1.47 3.59 2.04 2.63 1.57 1.85 4.35 1.18 2.75 1.52 1.98 1.72 1.85 1.qxd 530 10/15/07 4:14 PM Page 530 The Desk Reference of Statistical Quality Methods Table A.75 4.24 2.10 2.43 2.37 2.82 2.05 3.80 1.98 1.86 2.22 7.18 2.83 1.40 2.47 2.07 2.99 1.89 2.01 6.73 2.93 2.89 1.52 2.64 2.09 2.71 4.15 2.79 1.78 1.80 1.08 4.13 F-table (continued).97 1.91 2.21 2.97 1.61 2.03 2.44 2.57 1.34 2.03 2.05 2.40 2.20 2.56 2.76 1.79 1.74 2.01 2.36 2.66 4.60 2.37 3.12 2.01 1.44 2.22 2.52 1.43 2.04 3.95 2.22 2.62 1.66 2.12 7.06 2.02 2.20 2.88 1.08 2.85 1.20 2.70 2.18 2.68 3.47 2.75 1.22 2.82 1.63 2.86 1.68 1.06 2.81 1.78 1.29 3.78 3.08 2.85 2.37 2.12 4.88 1.43 1.58 1.76 1.04 2.66 1.72 2. 
␣ = 0.69 1.21 2.23 2.78 3.64 2.32 2.03 2.85 1.54 2.31 3.11 2.86 1.11 3.69 1.03 2.64 1.03 2.47 2.27 3.55 2.17 2.45 2.47 3.00 1.17 2.34 3.84 2.07 2.63 1.46 1.84 6.38 2.64 1.27 2.82 2.28 1.70 1.10 4.71 1.66 2.47 1.48 1.26 2.84 1.14 2.13 2.68 3.39 2.94 2.31 2.27 2.55 1.91 1.73 1.36 3.19 3.45 1.06 2.99 1.73 1.70 2.83 4.89 1.97 1.53 2.65 3.63 1.66 1.58 1.14 2.92 1.45 2.05 2.28 2.28 7.13 2.01 4.37 2.60 1.95 2.68 2.65 1.47 1.78 1.89 1.75 2.33 2.29 2.82 2.34 2.40 2.87 1.18 2.89 1.06 3.23 3.99 1.37 1.68 55 60 65 70 80 2.25 2.08 5.23 1.87 1.75 1.72 1.02 2.17 3.24 2.59 1.17 2.39 2.99 2.04 2.32 3.61 1.40 1.08 2.33 2.73 1.33 1.79 1.92 1.15 2.74 1.19 2.02 2.31 2.39 2.04 3.15 2.27 2.09 2.04 3.19 2.00 1.61 2.65 1.26 3.33 2.10 5.26 2.64 2.71 1.60 1.68 1.82 1.41 3.70 2.67 2.63 1.97 2.87 2.79 2.37 2.58 2.10 2.42 2.52 1.81 6.78 2.43 2.01 1.95 1.72 2.60 1.58 1.50 2.97 2.49 1.27 2.15 3.25 2.88 1.98 1.31 1.70 1.75 1.43 1.62 1.16 1.97 2.08 7.67 1.90 1.03 2.42 2.60 2.28 2.88 1.31 2.84 1.24 4.81 1.08 3.00 Ç .20 3.00 1.44 2.84 2.99 1.42 2.04 7.17 5.05 2.04 2.88 1.28 2.92 2.07 3.19 1.94 1.62 2.83 1.63 2.24 2.42 2.79 1.23 2.02 2.90 6.37 2.98 1.29 4.20 2.41 2.82 2.97 2.98 1.94 1.51 2.32 2.52 1.97 1.97 1.28 1.90 1.62 3.08 2.24 3.79 1.01 1.95 2.11 2.60 1.55 1.H1317_CH49-AP.12 2.15 2.95 1.49 3.80 1.12 5.80 1.36 2.53 2.57 2.35 2.47 1.43 3.80 3.80 2.85 1.22 3.38 1.50 2.84 1.55 2.33 2.50 2.21 2.33 1.22 1.68 1.31 2.73 1.92 1.62 1.73 1.94 3.69 4.97 1.28 2.00 2.22 2.54 1.78 4.55 1.88 4.55 1.61 3.86 2.48 2.94 1.00 1.01 3.35 2.56 1.83 1.69 1.98 5.23 2.78 2.69 2.89 2.22 2.98 2.80 2.46 2.34 3.92 1.29 2.74 1.76 1.09 3.66 4.16 4.14 2.98 4.24 2.03 2.02 1.10 3.54 1.86 1.42 2.41 3.23 2.91 1.72 1.15 2.95 1.94 1.41 1.30 2.50 1.13 2.44 2.89 1.54 1.03 2.08 2.20 3.90 1.78 1.15 2.28 300 1.15 2.17 90 100 125 150 200 6.44 1.14 2.00 2.08 2.44 3.36 1.11 1.24 2.23 2.17 2.43 1.58 1.30 2.25 1.20 2.28 2.56 2.76 1.96 1.48 2.84 2.13 2.10 2.29 2.46 2.53 2.74 3.20 2.92 1.12 2.95 4.69 2.95 1.07 2.11 2.50 2.01 2.85 500 6.80 2.06 
4.63 3.59 2.69 1.41 2.20 2.

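The F critical values tabulated above can be reproduced numerically when a software check of a table lookup is wanted. The sketch below is a minimal, standard-library-only Python implementation (an illustration, not the book's method): it evaluates the regularized incomplete beta function by the usual continued-fraction expansion, builds the F cumulative distribution from it, and inverts the upper tail by bisection to obtain F(α; ν1, ν2).

```python
import math

def betacf(a, b, x, max_iter=200, eps=3e-12):
    """Continued-fraction part of the incomplete beta function."""
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < 1e-30:
        d = 1e-30
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        # Even step of the continued fraction
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        if abs(d) < 1e-30:
            d = 1e-30
        c = 1.0 + aa / c
        if abs(c) < 1e-30:
            c = 1e-30
        d = 1.0 / d
        h *= d * c
        # Odd step of the continued fraction
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        if abs(d) < 1e-30:
            d = 1e-30
        c = 1.0 + aa / c
        if abs(c) < 1e-30:
            c = 1e-30
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def reg_inc_beta(a, b, x):
    """Regularized incomplete beta function I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    ln_front = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                + a * math.log(x) + b * math.log(1.0 - x))
    front = math.exp(ln_front)
    if x < (a + 1.0) / (a + b + 2.0):
        return front * betacf(a, b, x) / a
    return 1.0 - front * betacf(b, a, 1.0 - x) / b

def f_cdf(x, df1, df2):
    """CDF of the F distribution with (df1, df2) degrees of freedom."""
    return reg_inc_beta(df1 / 2.0, df2 / 2.0, df1 * x / (df1 * x + df2))

def f_critical(alpha, df1, df2):
    """Upper-tail critical value f such that P(F > f) = alpha, by bisection."""
    lo, hi = 0.0, 1.0
    while 1.0 - f_cdf(hi, df1, df2) > alpha:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 1.0 - f_cdf(mid, df1, df2) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, f_critical(0.05, 5, 10) returns approximately 3.33, matching the α = 0.05 table entry for ν1 = 5, ν2 = 10.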
Table A.14 Required Cpk for achieving a desired Cpk.
[Minimum sample Cpk required in order to claim a desired Cpk, tabulated for confidence levels of 90% (Zα = 1.28), 95% (Zα = 1.645), and 99% (Zα = 2.33), sample sizes n = 30 through 1000, and a range of desired Cpk values.]

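Entries of this kind can be approximated from a one-sided lower confidence bound for Cpk. The sketch below is an assumption-laden illustration: it uses Bissell's widely cited approximation for the Cpk lower bound, which may differ slightly from the method used to generate the table, and solves for the observed (sample) Cpk whose lower confidence bound just equals the desired Cpk.

```python
import math

def cpk_lower_bound(cpk_hat, n, z_alpha):
    # One-sided lower confidence bound for Cpk (Bissell's approximation):
    # Cpk_hat - Z_alpha * sqrt(1/(9n) + Cpk_hat^2 / (2(n - 1)))
    return cpk_hat - z_alpha * math.sqrt(
        1.0 / (9.0 * n) + cpk_hat ** 2 / (2.0 * (n - 1)))

def required_cpk(desired_cpk, n, z_alpha):
    # Smallest observed Cpk whose lower bound meets the desired Cpk,
    # found by bisection (the bound is increasing in cpk_hat over this range).
    lo, hi = desired_cpk, desired_cpk + 2.0
    while cpk_lower_bound(hi, n, z_alpha) < desired_cpk:
        hi += 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cpk_lower_bound(mid, n, z_alpha) < desired_cpk:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is used rather than solving the implied quadratic explicitly; it keeps the sketch short and makes the monotone behavior easy to see. As expected, the required Cpk shrinks toward the desired Cpk as the sample size grows and rises with the confidence level.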

Index

[Alphabetical subject index with page references; this extract covers entries A through C, from "accelerated testing" through "covariance".]

defined. 374–375 confidence interval for. defined. single-sided tests. 205–211 exponential model. 127–134 E Economic Control of Quality of Manufactured Product (Shewhart). 172–178 D data attribute. See continuous sampling plans (CSPs) cube plots. 139 data transformation. 369–371 required sample. 187–188 normal distribution as approximation of the binomial. 168–178 errors attribute inspection. using normal probability plots. 299–311 process capability indices. 328 defects/unit control charts (u charts).. statistical process control. See designs of experiments (DOE) designs of experiments (DOE). vs. 181–184 empirical predictions. 188–190. Harold F. testing. 392 . 155–158 normal probability plot. See designs of experiments (DOE) exponential distribution. 412 failure. 327–328 variables. 224–225 type II. 178–180 empirical predictions. 165–168 visualization of two-factor interactions. 197–203 EVOP. 127–134 descriptive statistics. 224–225 variables measurement. 168–178 fractional factorial experiments. 148–149 Plackett-Burman screening designs. 291–297 defect. 150–155 detection rules. 119–126. 375–376 critical decision factor. 135–140 designed experiments. 29–32 CSPs. normal. 327 case study of. 178–180 case study of. 24–29 values of i and f for.H1317_CH50_IDX. 195–196 cumulative Poisson distribution (table). 518 CSP-2 plans. 194 Poisson. 190–191 distributions. 158–165. 192 hypergeometric. 91 effects. defined. 501 CSP-1 plans. See evolutionary operation (EVOP) EWMA. 338–341 F face. 150–155 cumulative distribution tables. 259–270 visual inspection. 157 variation reduction. 519–520 curvature of model. 180–181 535 process for. 310–311 calculating. to achieve desired Cpk. 37 critical run sum (CRS). 172 alternative method for evaluating effects based on standard deviation. 214–215 Du (demerit/unit) control charts. 380–381 Cpm indices. 270–271 type I. 459–463 discrete distributions. See designs of experiments (DOE) double-sided tests. 270–271 evolutionary operation (EVOP). 
alternative method for evaluating. 395–407 exponentially weighted moving average (EWMA) control charts. 319–324 Dodge. 139. 141–148 resolution of. See exponentially weighted moving average (EWMA) control charts experiments. calculating. 341–342 DE control chart (Standard Euclidean distance). 376–379 Cr indices. 192–194 Pascal distribution. 141–185. 24.qxd 10/15/07 2:06 PM Page 535 Index Cpk adjusted for Sk and Ku. 121–126 demerit/unit (Du) control charts. 29 DOE. 187–196 binomial.

223 hypothesis testing conclusions and consequences for. 255 collecting. Ronald A. See null hypotheses Hoerl. 437 K Kelvin. 251–252 Ho. pre-control. M. See combined (gage) repeatability and reproducibility error (GR&R) H Ha. 254. 141–148 J Juran. 187–188 hypotheses alternative. 225–241 hypothesis tests selecting rejection criteria for. 166–167 G generator.qxd 536 10/15/07 2:06 PM Page 536 Index failure modes. 156 income distribution. 227–238 selecting test statistic for. 481–484 L Laney. 248 steps for constructing. 399–401 confidence interval for. See also T 2 control chart HTOL. 350 Hotelling’s T 2 statistic. 251–257. 218. 194 goodness-of-fit test. 224–225 type II error.H1317_CH50_IDX. 86 for any specified distribution. 138 Kurskall-Wallis rank sum test. 155–158 case study. 225–238 type I error. temperature and. 226–227 I identity. Pareto’s law of. 224–225 types of. 2 failure rates confidence interval for. 243–250 individual/moving range control charts. 213 fractional factorial experiments. 245–248 plotting data for. 270 just-in-time operations. 223–224 formulating.. 223–224 opposing. 349 Laney’s p′ control charts. 412 hypergeometric distributions. 345–355 Laney’s u′ control charts. 449–456 input factors. 158–165 frequency histograms. 316–317 kurtosis (Ku). 299–300. 363 GR&R. 345–355 leaf-stem plot. 43. 239–241 selecting test statistic and rejection criteria for. double-sided tests. 220–222 . Roger. 392 determining. See alternative hypotheses Hazen median rank. 398–399 defined. from time-censored (type I) data. 217–222 historical data. David B. 166. 148–149 hierarchy rule. 224–225 formulation of hypothesis for. 214–215 F value. calculating. 357 individual-median/range control charts control limits for. 168 high-temperature operating life (HTOL). 521–531 F-test. 217–222 F-table. 89 green zone. 1 hyperface. 281. J. See high-temperature operating life (HTOL) humidity stress. Lord (William Thomson). from failurecensored (type II) data. 156 geometric distribution. 
1 histograms. 89–90 to Poisson. defined.. 86–88 for uniform distribution. 213–214 single-sided vs. 223–224 hypothesis about variance. 223–224 null. 395–397 Fisher.

135 null hypotheses. 192–194 for p charts. 148–149. 259–270 median. 30–32 M MAD. 410–411 shotgun approach to. 270 method of least squares. 270–279 consequences of. 383 linear least square fit. on OC curve. 79–81 table for extended. 471–478 Cpk adjusted for Sk and Ku using. 253 defined. 410–418 orthogonal settings.. 194 Nelson. See also individual/moving range control charts average. 319 modified control limit charts. Lloyd S.H1317_CH50_IDX. 19–32. 6 reliability and. 341 nonlinearity of model. See also testing for normal distribution as approximation of the binomial. M.. 264–265 variables measurement error. Taguchi loss function. 27–28 effect of acceptance criteria on. 383 MIL-STD-105E (ANSI/ASQ Z1. 383–389 location statistics. 138–139 for individual-median/range control charts. See mean time between failure (MTBF) multifactor strategy. 16 effect of. 169–172 overdispersion. 15 maximum allowed defect (MAD). method of. 223 optimization multifactor strategies. 410 strategies for. 137. 77–78 for CSP-1 plan. See also averages (X bar). 491–492 mode. C. 15 lot size. 313–316 nonsymmetric specifications.. 310–311 np charts. 223–224 O observed sample statistic. 357 lot acceptance plans. See operating characteristic (OC) curve operating characteristic (OC) curve. 319–324. 252–253 Lorenz. 392 measurement error assessment attribute and visual inspection error. See normal probability plots (NOPPs) normal distributions. 510 table for standard. 291–297 T2 chart. 30–32 effect of sample size on. 327. 226 OC curve. 316–317 Wilcoxon-Mann-Whitney test. 509 testing for. 35 minimum life characteristic. 299–311 nonparametric statistics Kruskall-Wallis rank sum test. 30–32 effect of lot size on. testing. 16–19.4) sampling plan. 471–484 normal probability plots (NOPPs). 244. A. See also averages (X bar) mean time between failure (MTBF). 35 lot acceptance sampling plans. 30–32 opposing hypotheses. 467–469 NOPP.qxd 10/15/07 2:06 PM Page 537 Index least squares. 252 MTBF. 
410–411 multivariate control charts DE control chart. 245 for individual/moving range control charts. 4–5. 357 manufacturer’s risk. 9–11 moving ranges. 319. individual-median/range control charts Melsheimer. calculating. 281–291 537 N negative binomial distribution. 345 . 345 N (population). L. 172–178 nonnormal distribution Cpk. 272–273 mean. See maximum allowed defect (MAD) Manuale d’Economia Politica (Pareto).

227–238 reject limits. 137 Purcell. 327–342 data transformation. 138 Quality Control Handbook (Juran). 259 error determination. 376–379 Cr and Cp indices. 5–6 percentage (P). D. 9 reliability. 486–489 process capability indices. 392–393 burn-in period. 385–389 linear least square fit. 375–376 confidence interval for Cpk. Laney’s. 511 population (N )/sample relationship. 259 error determination.. 137 Pareto. 16 rejection criteria. 387–389 rejectable quality level (RQL). See also reliability modeling. 395 defined. S. 357 Pascal distribution. 405–406 standby parallel systems. 404–405 parallel systems. Vilfredo. 137 percent AOQL (average outgoing quality limits). 265–268 defined. 262–264 . 180–181 Poisson distribution. 512 (table). R. 363 description. 395–398 exponential distribution.. 369–381 confidence interval Cr and Cp. 218 R rainbow chart. 406 repeatability confidence interval for. 357 Pareto analysis.qxd 538 10/15/07 2:06 PM Page 538 Index P Q parallel systems. 261–262 example of. statistical process control (SPC). W. defined. 519–520 Poisson formula. 341–342 determining resolution for. 391–395. pre-control. 259 pre-control defined. 345–355 Pearson distributions. 393–395 reliability modeling. 363 probabilistic approach to optimization. 363–365 rules. 300 Peck. 374–375 proportion (P). 190–191 cumulative (table). 398–401 constant failure rate. selecting. 369–371 Cpm index. 434–436 using. 363 regression and correlation. 334–338 p′ control charts. 17–19 Poisson unity values RQL/AQL (table). 402–403 shared-load parallel systems. 371–374 Cpk. 28–29 Plackett-Burman screening designs. 410 probability plotting. 393–395 burnout period. 392 success run theorem. 35–39 reproducibility confidence interval for. 383–389 confidence for the estimated expected mean value. calculation of. 319 red zone. 364–365 vs. 135–136 precision. 77–81 exponential model. 403–404 parameters. 77–81 short-run. 363 range. 194 p charts. 403–404 series systems. statistics and. 
391 determining failure rates. 394–395 confidence intervals and. 387 testing coefficient of correlation. 259–261 repeatable quality level (RQL). 401 useful life. 78–79 Poisson Unity Value method. See also reliability complex systems. 383–384 measuring linear relationship strength. when proportion defective is extremely low. 395–406 mean time between failure.H1317_CH50_IDX. pre-control. 338–341 resolution of. 268–269 defined. 384–385 inferences about sample slope. unreliability bathtub curve. 365–367 pre-control limits. 363 quality. 357–361 Pareto’s law of income distribution.

. 499 statistical process control (SPC) charts. 410–411 single-sided tests. 412–418 single factor at a time. 178–180 standard error.. 365–367 short-run average/range chart. 364–365 S sample/population relationship. Dorian. 409 variable size. 39–41 based on RQL. 412–418 defined. 449–450 derivation of the plotting characteristic for the variation statistic. 411. 37–39 continuous. 410 simplexes (simplices). 429–430 case study. 419 variable size. 99. statistical process control (SPC) charts standard deviation (S). 214–215 skewness (Sk). 430–433 short-run average control charts.H1317_CH50_IDX. pre-control. 410–411 theoretical optimum. See also simplexes (simplices) basic simplex algorithm.qxd 10/15/07 2:06 PM Page 539 Index resolution. 15 RQL. 19–32 Satterthwaite. on OC curve. 299–300. 409 shared-load parallel systems. 137. 450–451 selecting a value for expected average and average moving range. 20t. 145 standby parallel systems. See rejectable quality level (RQL). 402–403 Shainin. doubled-sided tests. 136 . 135–136 sample size. 438–448 shotgun approach. 410 simplex calculations. 459–463 statistics abbreviations for. sequential simplex optimization. 141 risk. 137 defined. sequential simplex optimization. 59. 437–438 short-run control charts. 440–448 539 short-run individual/moving range chart. 451 short-run p charts. 452–456 derivation of the plotting characteristic for the location statistic. A. 324–325 alternative method for evaluating effects based on. 459–463 detection rules. 226 Standard Euclidean distance (DE control chart). 16. 434–436 short-run range charts. 22t sampling plans based on AQL. E.4). 459–463 vs. 437 chart interpretation. 157. case study. 363 sequential simplex optimization. 406 statistical process control (SPC). 20–23. 91 Shewhart control charts. F. 363 shape. 37–39 based on α risk. vs. 255 pre-control. 449–457 case study. of DOE. 327–328 effect of. W. 449 zone charts. See sequential simplex optimization single factor at a time. 
419–428 series systems. 291–297 standard order. 30–32 sample size code letters. defining. 43. 411–412 shotgun approach. 37–39 based on range R. See also sequential simplex optimization calculations. 99 manufacturer’s. 499 short-run attribute control charts. 363. 419–428 simplex optimization. 319–320. See statistical process control (SPC). repeatable quality level (RQL) rules average-range control chart. 23–32 MIL-STD-105E (ANSI/ASQ Z1. 37–39 based on β risk. 405–406 Shewhart. 437 short-run individual/moving range chart. See also chart resolution response variables. 48 individual/moving chart. 481–484 SPC.

345–355 unreliability. 1. 226 selecting. 165–168 variation statistics. 358–359 Taguchi loss function. 5–6 tested value. 349 Wilcoxon-Mann-Whitney test. 419–428 variables measurement error.qxd 540 10/15/07 2:06 PM Page 540 Index descriptive. 65–77 for averages. Laney’s. 150–155 type I error. 138–139 quality and. 471–484. 21f T tabular Pareto analysis. 471–478 skewness and kurtosis. 517 Wilcoxon sum of ranks. 139 variable size simplexes. 138 variation. 224–225 U u charts. See also normal distributions chi-square goodness-of-fit test. 2. 489–492 probability plotting. 136–137 location. 489 Wheeler. 20. 479–481 normal probability plots. 138–139 examples of. 281–291 for two parameters. Donald J.. N. 150–155 W Weibull.4 system. 259–270 variance. 224–225 type II error.. 225–238 THB. 239–241 variation reduction. See temperature-humidity bias (THB) accelerated testing theoretical optimum. 395 V values of i and f for CSP-1 plans (table). Waloddi. sequential simplex optimization. 252–253 vertex. 1 modes for accelerating. 226 testing for normal distribution. M. 411 vibrational stress. 493–498 . 1 Torrey. 1 success run theorem. 29 T 2 control chart. 138–139 for individual-median/range control charts. 284 two-factor interactions. See defects/unit control charts (u charts) u′ control charts. 518 variable control charts. 410 strength distribution. 1 visual inspection error. 481–484 test statistic computing. 313–316 critical values of small rank sum for. 465–470 Taguchi philosophy.H1317_CH50_IDX. 139 variables data. 69–77 for individuals. 486–489 Weibull plotting paper. 270–271 visualization of two-factor interactions. See also reliability useful life. visualizing. 485–492 plotting data using Weibull paper. 465 t-distribution. defined. 409 thermal stress. defined. hypothesis about. 225 table. 138–139 stochastic approach to optimization. 513 temperature-humidity bias (THB) accelerated testing. 327 variables. 213 defined. 485 Weibull analysis. 401–402 switching rules for ANSI Z1. 
65–69 variable data. 245 for individual/moving range control charts. 1 stress distribution.

363 541 . 321–325 reporting. 99. pre-control. See also averages (X bar) zone charts. 499–507 Z-scores calculating.qxd 10/15/07 2:06 PM Page 541 Index X Z X bar.H1317_CH50_IDX. 326 Z test statistic. 226 Y yellow zone.

H1317_CH50_IDX.qxd 10/15/07 2:06 PM Page 542 .