
# UIC – University of Illinois at Chicago

IDS / IE 571: Statistical Quality Control and Assurance
Instructor: Sclove
Textbook: Montgomery, Introduction to Statistical Quality Control

Notes on

ACCEPTANCE SAMPLING

Reference: Montgomery, Chapters 14 and 15. In this course we cover only Ch. 14.

**OUTLINE of these notes**

1. INTRODUCTION
2. SAMPLING
3. ACCEPTANCE SAMPLING VERSUS CONTROL CHARTS
4. LOT FORMATION
5. SINGLE-SAMPLE ATTRIBUTES PLANS
6. RECTIFYING INSPECTION

**1. INTRODUCTION**

Statistical Quality Control (SQC) includes:

• Statistical Process Control (SPC)
• Design of Experiments (DOE)
• Acceptance Sampling (AS)

Acceptance Sampling involves notions of statistical hypothesis testing. The testing done in Acceptance Sampling is very much like the hypothesis testing considered in statistics courses.

A set of incoming units is called a lot. A sample of size n is obtained from the lot, and the number x of defectives is observed. The lot is rejected if x > c, i.e., accepted if x ≤ c, where c is the acceptance number. The problem is to choose (n, c) judiciously.

Producer's Risk

The Producer's Risk (Seller's Risk) is the probability of rejecting a good lot. It is the Type I error rate, or α ("alpha").

Consumer's Risk


The Consumer's Risk (Buyer's Risk) is the probability of accepting a bad lot. It is the Type II error rate, or β ("beta").

Different costs are associated with the two types of errors. Decision Risk Analysis could be used to come up with an acceptance-sampling scheme to balance these. The cost of sampling can be included.

A Word on Terminology

In much of the literature on quality control the terms "defective" and "nondefective" are used. Many authors now use the terms "nonconforming" and "conforming." There are at least two reasons for this.

1. The term "nonconforming" can be used to include two possibilities: defective and nonsalvageable on the one hand, or defective but salvageable on the other.

2. Some product-liability issues are involved. Presumably, liability would be greater if an item is "defective" rather than simply "not in conformance."

**2. SAMPLING**

2.1. REASONS FOR SAMPLING. Sampling is less expensive than observing the whole lot. In destructive testing, you destroy only relatively few units.

2.2. RANDOM SAMPLING. You want to try to get a good representative sample from the lot.

**3. ACCEPTANCE SAMPLING VERSUS CONTROL CHARTS**

An essential notion here is that of moving quality upstream, with Acceptance Sampling (AS) applied to incoming raw material, before it ever reaches the stage of manufacturing or production where SPC comes into the picture.

**4. LOT FORMATION**

The text discusses the formation of lots, and a trade-off between small lots and large lots. The symbol N denotes the lot size.

**5. SINGLE-SAMPLE ATTRIBUTES PLANS**

5.1. INTRODUCTION. Here we concentrate on the choice of sample size.
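The producer's and consumer's risks of a given (n, c) plan can be computed directly from the Binomial distribution. Here is a minimal sketch; the plan (n, c) = (50, 2) and the quality levels p0 = .01 (good lot) and p1 = .10 (bad lot) are made-up illustrations, not values from the text:

```python
from math import comb

def accept_prob(n: int, c: int, p: float) -> float:
    """Pa = Pr{x <= c} for x ~ Binomial(n, p): the probability the lot is accepted."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

n, c = 50, 2                        # hypothetical plan
p0, p1 = 0.01, 0.10                 # hypothetical good and bad fraction-defective levels

alpha = 1 - accept_prob(n, c, p0)   # producer's risk: rejecting a good lot
beta = accept_prob(n, c, p1)        # consumer's risk: accepting a bad lot
```

For these values the plan favors the producer (α is about .014) while leaving a consumer's risk of about .11; choosing (n, c) to balance the two risks is the subject of Section 5.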


5.1.1. AS by Variables

We first consider Acceptance Sampling (AS) by Variables. The true mean is denoted by µ ("mu"). Acceptance Sampling by Variables is testing a hypothesis concerning a mean, where in Acceptance Sampling the mean µ is, for example, the unknown average weight of material that is nonconforming. The test is based on the sample mean m.

We first review the situation for testing H0: µ = µ0 against Ha: µ > µ0 at level, or Type I error rate, α ("alpha"). This is appropriate if x is a smaller-is-better variable, in which case we reject the lot if the sample mean is too large. When σ is specified, this test is to reject the lot if

m > µ0 + zα SD(m),

where SD(m) is the standard deviation of m, SD(m) = σ/n1/2. That is, the lot is rejected if the sample mean exceeds the hypothesized value of the mean plus a one-sided margin of error equal to zα times the standard error of the mean. This procedure is based on a given n and level α.

Power. The power at µ1 is the area to the right of zα - n1/2(µ1 - µ0)/σ under the standard Normal curve. If n is increased, this number moves to the left, and the area to its right, the power, increases.

Sample size. If instead of being given the sample size you are given the Type I and Type II error rates, it is possible to find the required sample size. The sample size required for testing H0: µ = µ0 vs. H1: µ = µ1 at level α and with Type II error rate β ("beta"), that is, power equal to 1 - β, is

n = σ2 (zα + zβ)2/(µ0 - µ1)2,

where σ is the common standard deviation under H0 and H1. Note that this depends upon the parameters only through the effect size ES = (µ0 - µ1)/σ; we have

n = (zα + zβ)2/ES2 = [(zα + zβ)/ES]2.

This formula tells how to determine n, given α and β. In fact, any two of the triplet (n, α, β), where

• n = sample size
• α = Type I error rate -- Producer's (Seller's) risk
• β = Type II error rate -- Consumer's (Buyer's) risk

determine the third. In defining this triplet, µ0 of course goes with α; µ1, with β. You can specify either β or µ1: sometimes β is set at a value like .20 (or .10), and µ1 is chosen to correspond to that. If the standard deviations differ under the two hypotheses, then

n = (σ0 zα + σ1 zβ)2/(µ0 - µ1)2.
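The sample-size formula and the power expression above can be checked with a short standard-library sketch. The numerical values (µ0 = 100, µ1 = 102, σ = 4, α = .05, β = .10) are made-up illustrations:

```python
from math import ceil, sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf  # z(1 - a) is the upper-a critical value z_a

def sample_size(mu0, mu1, sigma, alpha, beta):
    """n = sigma^2 (z_alpha + z_beta)^2 / (mu0 - mu1)^2, rounded up to an integer."""
    es = (mu0 - mu1) / sigma  # effect size; sign cancels when squared
    return ceil(((z(1 - alpha) + z(1 - beta)) / es) ** 2)

n = sample_size(100, 102, 4, 0.05, 0.10)

# Power at mu1: area to the right of z_alpha - sqrt(n)(mu1 - mu0)/sigma.
power = 1 - NormalDist().cdf(z(1 - 0.95 + 0.90) - sqrt(n) * (102 - 100) / 4) \
    if False else 1 - NormalDist().cdf(z(1 - 0.05) - sqrt(n) * (102 - 100) / 4)
```

Because n is rounded up, the achieved power comes out slightly above the nominal 1 - β = .90.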


5.1.2. AS by Attributes

Doing Acceptance Sampling by Attributes is like testing a hypothesis concerning a proportion p, where in Acceptance Sampling p is the true fraction defective. The value p0 of p under the null hypothesis is called the AQL (Acceptable Quality Level), or PQL (Producer's Quality Level). The value p1 of p under the alternative hypothesis is called the LTPD (Lot Tolerance Percent Defective), or CQL (Consumer's Quality Level).

Remember that, mathematically speaking, dealing with an attribute is a special case of dealing with a variable, because an attribute can be taken as a (0,1)-variable, with mean µ = p = Pr{x = 1} and variance σ2 = pq, where q = 1 - p.

An acceptance sampling scheme is a pair (n, c), where n is the sample size and the integer c is the acceptance number. We accept the lot if x ≤ c. We reject the lot if x > c. When the Normal approximation to the Binomial distribution is satisfactory, the required sample size n can be found from the formula

n = (σ0 zα + σ1 zβ)2/(µ0 - µ1)2 = [(p0 q0)1/2 zα + (p1 q1)1/2 zβ]2/(p0 - p1)2.

The corresponding acceptance number c is given by rounding the quantity

n p0 + 1/2 + zα (n p0 q0)1/2.

This (n, c) pair is based on the Normal approximation to the Binomial distribution, which is generally considered to be good if np > 5 and nq > 5. (See Appendix B for more details.) When these inequalities do not hold, the exact Binomial distribution or the Poisson approximation to it can be used; this may involve trial and error. Take an initial guess of (n, c) based on the Normal approximation, then see what alpha and beta you get, then alter (n, c) to bring alpha and beta within specifications. See the spreadsheet.

Incorporating Costs: Decision Risk Analysis. One of the homework assignments is on AS for Attributes and Decision Risk Analysis (decision trees). See the spreadsheet.
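The recipe above -- normal-approximation formulas for (n, c), followed by an exact Binomial check of alpha and beta -- can be sketched as follows. The quality levels p0 = .02, p1 = .08 and risks α = .05, β = .10 are hypothetical:

```python
from math import ceil, comb, sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf

def plan(p0, p1, alpha, beta):
    """(n, c) from the normal-approximation formulas for an attributes plan."""
    s0, s1 = sqrt(p0 * (1 - p0)), sqrt(p1 * (1 - p1))
    n = ceil(((s0 * z(1 - alpha) + s1 * z(1 - beta)) / (p0 - p1)) ** 2)
    c = round(n * p0 + 0.5 + z(1 - alpha) * sqrt(n * p0 * (1 - p0)))
    return n, c

def accept_prob(n, c, p):
    """Exact Binomial Pa = Pr{x <= c}."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

n, c = plan(0.02, 0.08, 0.05, 0.10)
alpha_exact = 1 - accept_prob(n, c, 0.02)
beta_exact = accept_prob(n, c, 0.08)
```

For these inputs the exact alpha is well within specification but the exact beta is not, so, in line with the trial-and-error advice above, (n, c) would then be adjusted (larger n and/or smaller c) until both risks meet their targets.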
At this writing, software for decision risk analysis includes Arborist, DPL, and Precision Tree from Palisade Decision Tools Software.

5.2. PROBABILITY OF LOT ACCEPTANCE

The probability of lot acceptance is

Pa = Pr{x ≤ c; p}.

5.2.1. Types A and B


Sampling plans for screening single lots are called Type A plans and use the hypergeometric distribution. Here the model is that there is an existing single lot of size N and it actually contains a fixed number D of defectives, which is unknown to us. The true fraction defective is D/N.

Sampling plans for screening a series of lots are called Type B plans and use the binomial distribution. Here the model is that there is a process which is producing a series of lots and the true fraction defective p is characteristic of the series of lots.

| Type   | Situation      | Parameter                       | Distribution   |
|--------|----------------|---------------------------------|----------------|
| Type A | Single lot     | Unknown number of defectives, D | Hypergeometric |
| Type B | Series of lots | Fraction defective, p           | Binomial       |
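To illustrate the Type A / Type B distinction numerically, the sketch below computes the probability of accepting a lot under each model, plus the Poisson approximation, for the made-up values N = 500, D = 10, n = 50, c = 2:

```python
from math import comb, exp

N, D, n, c = 500, 10, 50, 2   # hypothetical lot size, defectives, and plan
p = D / N                     # fraction defective, here 0.02

# Type A: hypergeometric Pa = Pr{x <= c} for one lot of size N containing D defectives.
pa_hyper = sum(comb(D, k) * comb(N - D, n - k) for k in range(c + 1)) / comb(N, n)

# Type B: binomial Pa with fraction defective p.
pa_binom = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

# Poisson approximation with mean np (pmf built up term by term).
lam, term, pa_pois = n * p, exp(-n * p), 0.0
for k in range(c + 1):
    pa_pois += term
    term *= lam / (k + 1)
```

With n/N = 0.1 the three answers agree to within about .02; the hypergeometric value is slightly the largest, since sampling without replacement from a finite lot reduces the variance.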

5.2.2. Distributions and their Approximations

When n is small relative to N, the binomial can be used to approximate the hypergeometric. The Normal can be used to approximate the Binomial when np > 5. If p is very small, this approximation will not be good unless n is very large, but the Poisson approximation to the Binomial can be used if np is about 1.

5.3. ACCEPTANCE SAMPLING SYSTEMS

Sampling schemes may be single-sample, double-sample, or fully sequential.

**6. RECTIFYING INSPECTION**

The average outgoing quality (AOQ) is the average proportion of nonconforming units among the units shipped. It is the average outgoing rate of nonconformance. It can be shown to be equal to

AOQ = p (1 - n/N) Pa.

Here Pa is the probability of acceptance of the given sampling scheme, as a function of p. To be passed through, a unit must be in a lot that is accepted. Thus the AOQ is the probability p that a unit is nonconforming in the first place, times the probability 1 - n/N that it is not included in a sample (and thus inspected and removed or rectified), times the probability Pa that its lot is accepted. We write AOQ(p) -- AOQ as a function of p -- to emphasize the fact that AOQ depends upon p.

For example, if the rule is to accept only those lots from which the sample contains no nonconforming units, then Pa = (1-p)n, and

AOQ = p(1-n/N)Pa = p(1-n/N)(1-p)n.

It is as if we should say "Average Outgoing Lack of Quality" instead of "Average Outgoing Quality," because a high AOQ means the rate of outgoing nonconforming items is high.

As p moves from 0 to 1, the AOQ rises and then falls. The AOQ will be low if p is small, because there just aren't many nonconforming units, and will be low also if p is high, because then it is very likely that the sample will contain nonconforming units and the lot will be rejected. Thus the value p* of p for which the AOQ function has a maximum is an intermediate value of p. The maximum value of AOQ is called the average outgoing quality limit (AOQL):

AOQL = maxp AOQ(p);

that is, p* = arg maxp AOQ(p), the argument that maximizes the function AOQ(p), and AOQL = AOQ(p*).

______________________________________________________________________________

APPENDIX A: AOQL

The value, say p*, where AOQ(p) reaches its maximum can be found using calculus, by differentiating the function and setting the derivative equal to 0. Then AOQL = AOQ(p*).

Example. Suppose the acceptance rule is to accept only lots for which the sample contains no nonconforming units. Then the probability of acceptance is Pa = (1-p)n, and

AOQ = p(1-n/N)Pa = p(1-n/N)(1-p)n,

a polynomial of degree n+1 in p. Taking the derivative and setting it equal to 0, we have

dAOQ/dp = (1-n/N)[(dp/dp)(1-p)n + p d(1-p)n/dp] = (1-n/N)[(1)(1-p)n - np(1-p)n-1] = 0,

which gives (1-p) = np, or p = 1/(n+1) = p*, say. For example, if n = 9, then p* = .1. Note that, although the derivative is a polynomial of degree n in p, setting it equal to 0 gave an equation of degree one (a linear equation) in p.
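The calculus result can be confirmed by a simple numerical search: for the c = 0 plan, a grid search over p should find the maximum of AOQ(p) at p* = 1/(n+1). Here n = 9 as in the example; the lot size N = 100 is a made-up value:

```python
def aoq(p, n, N):
    """AOQ(p) = p (1 - n/N) (1-p)^n for the accept-on-zero-defectives (c = 0) plan."""
    return p * (1 - n / N) * (1 - p) ** n

n, N = 9, 100
grid = [i / 10000 for i in range(1, 10000)]      # p = .0001, .0002, ..., .9999
p_star = max(grid, key=lambda p: aoq(p, n, N))   # argmax of AOQ over the grid
aoql = aoq(p_star, n, N)
```

The search returns p* = .1 = 1/(9+1), matching the derivative calculation, with AOQL = AOQ(.1) ≈ .0353.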

APPENDIX B: Approximating the Binomial Distribution

The matter of approximating binomial distributions with small p becomes important here. The Normal approximation is good if np > 5 and nq > 5. More generally, in applying the Central Limit Theorem, the error in approximating the cumulative standard normal distribution Φ(t) is less than 2Γ/n1/2, where Γ is the parent population's third absolute central moment, divided by σ3. The Poisson approximation is generally considered to be good if p is small, n is large, and np is intermediate in size. A better way to look at this is that p and np3 should be small. An excellent reference on approximations to distributions is Chapter 7 of Blum and Rosenblatt (1972).
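As a rough empirical check on the Poisson approximation, the sketch below computes the worst-case disagreement between the Binomial and Poisson cumulative probabilities for the made-up values n = 200, p = .01 (so np = 2, with p small and n large):

```python
from math import comb, exp

n, p = 200, 0.01  # hypothetical: p small, n large, np = 2

def binom_cdf(c):
    """Exact Binomial Pr{x <= c}."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

def poisson_cdf(c, lam):
    """Poisson Pr{x <= c}, building the pmf term by term to avoid huge factorials."""
    term, total = exp(-lam), 0.0
    for k in range(c + 1):
        total += term
        term *= lam / (k + 1)
    return total

# Largest absolute difference between the two CDFs over all cutoffs c.
err = max(abs(binom_cdf(c) - poisson_cdf(c, n * p)) for c in range(n + 1))
```

For these values the maximum CDF error comes out well under .01, consistent with the rule of thumb above.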

Additional References

Julius Blum and Judah Rosenblatt (1972). Probability and Statistics. Saunders, Philadelphia.

These notes Copyright © 2007 Stanley Louis Sclove
Created 2001 Feb 22; latest revision 2007 Nov 16