σ_x = the standard deviation of the parent population.
It is calculated by:

Equation 7.3
σ_x = √[ Σ (xᵢ − x̿)² / nN ]

(i = 1, 2, …, nN) where nN = the total number of observations made, and x̿ is the grand mean. nN is the sample subgroup size (n) multiplied by the number of sample subgroups taken (N). For small values of nN, the denominator is often nN − 1.
Rather than going through the laborious calculations for sigma, the range R is the preferred measure. R will be employed for the control charts used in this chapter.
The grand mean x̿ used for the control chart can be measured in different ways, thus:

x̿ = Σᵢ xᵢ / nN = 606 / (4)(5) = 30.30, or equivalently, x̿ = Σⱼ x̄ⱼ / N = 151.50 / 5 = 30.30

and the average range is:

R̄ = Σⱼ Rⱼ / N = 5.00 / 5 = 1.00

The data used for x̿ and R̄ come from Table 7.4 above.
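The arithmetic above can be sketched in Python; the subgroup weights are those of the Table 7.4 data (repeated in Table 7.6 below), with n = 4 pieces per subgroup and N = 5 subgroups:

```python
# Grand mean and average range for the Chocolate Truffle Factory
# subgroup data (weights in grams; n = 4, N = 5).
subgroups = [
    [30.50, 29.75, 29.90, 30.25],  # 10 a.m.
    [30.30, 31.00, 30.20, 30.50],  # 11 a.m.
    [30.15, 29.50, 29.75, 30.00],  # 1 p.m.
    [30.60, 32.00, 31.00, 30.00],  # 3 p.m.
    [30.15, 30.25, 30.50, 29.70],  # 4 p.m.
]

means = [sum(s) / len(s) for s in subgroups]    # subgroup means, x-bar_j
ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges, R_j
grand_mean = sum(means) / len(means)            # 151.50 / 5 = 30.30
avg_range = sum(ranges) / len(ranges)           # 5.00 / 5 = 1.00
```

Note that the 3:00 p.m. subgroup mean works out to 30.90, the point discussed later in the interpretation of the control limits.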
The R̄ measure must now be converted to represent the sample standard deviation. The conversion factors relating the range to the sample standard deviation have been tabled and are widely available. The conversion factor for upper and lower 3-sigma limits is A2. It is given below in Table 7.5.

Table 7.5 Conversion Factors for x̄- and R-Charts

Subgroup Size (n)   A2     D3     D4
2                   1.88   0      3.27
3                   1.02   0      2.57
4                   0.73   0      2.28
5                   0.58   0      2.11
6                   0.48   0      2.00
7                   0.42   0.08   1.92
8                   0.37   0.14   1.86
9                   0.34   0.18   1.82
10                  0.31   0.22   1.78
12                  0.27   0.28   1.72
15                  0.22   0.35   1.65
20                  0.18   0.41   1.59

A2 is used for the UCL and LCL of the x̄-chart.
D3 is used for the LCL of the R-chart.
D4 is used for the UCL of the R-chart.
The meaning of A2 multiplied by R̄ is shown below:
Equation 7.4
A2·R̄ = 3σ_x̄ = 3σ_x / √n

The calculations for the upper control limit (UCL_x̄) and for the lower control limit (LCL_x̄) using three standard deviations (3σ_x̄) are as follows:

Equation 7.5
UCL_x̄ = x̿ + A2·R̄ = 30.30 + (0.73)(1.00) = 31.03

Equation 7.6
LCL_x̄ = x̿ − A2·R̄ = 30.30 − (0.73)(1.00) = 29.57

These are the calculations for the Chocolate Truffle Factory.

How to Interpret Control Limits

The limits used in Figure 7.10 resulted from the above calculations. The 3:00 p.m. point would have been outside the control limits if two standard deviations had been used instead of three. The 3:00 p.m. point was 30.90, and the UCL_x̄ for two sigma would have been 30.79. Using Equation 7.4, σ_x̄ = 0.243 and 2σ_x̄ = 0.487 ≈ 0.49. Then, from Equation 7.5:
UCL_x̄ = 30.30 + 0.49 = 30.79.
The number of standard deviations used controls the sensitivity of the warning system. Three-sigma charts remain silent when two-sigma charts sound alarms. Alarms can signal real out-of-control conditions as well as false alarms.
Understanding how to set the control limits is a matter of balancing the costs of false alarms against the dangers of missing real ones. One-sigma limits will sound more alarms than two-sigma ones. The judicious choice of k for the kσ limits is an operations management responsibility, exercised in conjunction with other participants to assess the two costs stated above. There must be an evaluation of what happens if the detection of a real problem is delayed. The systems-wide solution is the best one to use.
The probability of a subgroup mean falling above or below 3-sigma limits by chance rather than by cause is 0.0027, which is pretty small, but still real. For 2-sigma limits it would be 0.0456, which is not small. k also can be chosen between 2 and 3 sigma. In some systems, two successive points must go out of control for action to be taken. There are fire departments that require two fire alarms before responding because of the high frequency of false alarms. When a response is made to out-of-control signals, knowledge of the process is the crucial ingredient for remedying the problem.
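The limit calculations above can be sketched in Python, using A2 = 0.73 from Table 7.5 for subgroups of n = 4:

```python
# x-bar chart control limits (Equations 7.5 and 7.6).
A2 = 0.73
grand_mean = 30.30   # grand mean from Table 7.4
avg_range = 1.00     # average range, R-bar

ucl_xbar = grand_mean + A2 * avg_range    # 30.30 + 0.73 = 31.03
lcl_xbar = grand_mean - A2 * avg_range    # 30.30 - 0.73 = 29.57

# Per Equation 7.4, A2 * R-bar approximates three standard deviations
# of the sample mean, so sigma_xbar = A2 * R-bar / 3. A two-sigma UCL
# (the one that would have flagged the 3:00 p.m. point) is then:
sigma_xbar = A2 * avg_range / 3           # about 0.243
ucl_2sigma = grand_mean + 2 * sigma_xbar  # about 30.79
```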
7.10 Method 6: Control Charts for Ranges as Variables (R-Charts)
The x̄-chart is accompanied by another kind of control chart for variables. Called an R-chart, it monitors the stability of the range. In comparison, the x̄-chart checks the stationarity of the process mean, i.e., checks that the mean of the distribution does not shift about, while the R-chart checks that the spread of the distribution around the mean stays constant. In other words, R-charts control for shifts in the process standard deviation.
The x̄- and R-charts are best used together because sometimes an assignable cause creates no shift in the process average, nor in the average value of the standard deviation (or R̄), but a change occurs in the distribution of R values. For example, if the variability measured for each subgroup is either very large or very small, the average variability would be the same as if all subgroup R values were constant. Remembering that the sampling procedure is aimed at discovering the true condition of the parent population, the use of x̄- and R-charts provides a powerful combination of analytic tools.
Figure 7.12 The R-Chart: 3-sigma limits
The equations for the control limits for the R-chart are:

UCL_R = D4·R̄  and  LCL_R = D3·R̄

The Chocolate Truffle Factory data from Table 7.4 above can be used to illustrate the control chart procedures. Continuing to use k = 3 sigma limits and the conversion factors given in Table 7.5 above, the R-chart control parameters are obtained:

D4 = 2.28 and D3 = 0 for n = 4, and R̄ = 5.00 / 5 = 1.00

These are used to derive the control limits.

UCL_R = (2.28)(1.00) = 2.28  and  LCL_R = (0.00)(1.00) = 0.00

Figure 7.12 shows that the R-chart has no points outside the control limits. The same pattern prevails as with the x̄-chart. The 3:00 p.m. point is close to the upper limit. This further confirms the fact that the process is stable and is unlikely to do much better. However, the sample is small, and good chart use would throw out the 3:00 p.m. sample and take about 25 more samples before making any decision.
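The R-chart limits follow the same pattern in a short sketch, using D3 and D4 from Table 7.5 for n = 4:

```python
# R-chart control limits for the Chocolate Truffle Factory data.
D3, D4 = 0.00, 2.28   # Table 7.5 factors for subgroup size n = 4
avg_range = 1.00      # R-bar = 5.00 / 5

ucl_r = D4 * avg_range   # 2.28
lcl_r = D3 * avg_range   # 0.00
```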
7.11 Method 6: Control Charts for Attributes (p-Charts)
The p-chart shows the percent defective in successive samples. It has upper and lower control limits similar to those found in the x̄- and R-charts. First, compare what is measured for the p-chart to what is measured for the x̄-chart.
Monitoring by variables is more expensive than monitoring by attributes. Inspection by variables requires reading a scale and recording many numbers correctly. Error prevention is critical. The numbers have to be arithmetically manipulated to derive their meaning.
In comparison, with the p-chart, a go/no-go gauge, a pass-fail exam, or a visual check (good or bad) can be used. All of these measurement methods are examples of the simplicity of monitoring by attributes.
Only one chart is needed for the attribute p, which is the percent defective. The p-chart replaces the two charts, x̄ and R. Also, only the upper control limit really matters: the closer the lower control limit gets to zero, the better.
The p is defined as follows:

p = number rejected / number inspected
Assume that the Chocolate Truffle Factory can buy a new molding machine which has less variability than the present one. CTF's operations managers, in cooperation with the marketing managers and others, have agreed to redefine acceptable standards so that profits can be raised while increasing customer satisfaction. Also, it was decided to use a p-chart as the simplest way to demonstrate the importance of the acquisition of the new molding machine.
Previously, a defective product was defined as a box of truffles that weighed less than a pound. What was in the box was allowed to vary, as long as the pound constraint was met. Now the idea is to control defectives on each piece's weight. The new definition of a reject is as follows: a defective occurs when xᵢ < 29.60 grams or xᵢ > 30.40 grams. This standard set on pieces is far more stringent than the earlier one set on box weight. The new standards will always satisfy CTF's criteria to deliver nothing less than a pound box of chocolate.
It has been suggested that the data of Table 7.4 above be tested against the p-chart criteria. Rejects are marked with an R
on the table. The number of rejects (NR) is counted, and p is calculated for each subgroup in Table 7.6.
Table 7.6
Weight of Chocolate Truffles (in grams) (R = rejects due to excessive weight)

Subgroup j    I         II        III       IV        V         Sum
Time:         10 a.m.   11 a.m.   1 p.m.    3 p.m.    4 p.m.
i = 1         30.50R    30.30     30.15     30.60R    30.15
i = 2         29.75     31.00R    29.50R    32.00R    30.25
i = 3         29.90     30.20     29.75     31.00R    30.50R
i = 4         30.25     30.50R    30.00     30.00     29.70
NR            1         2         1         3         1         8
p             0.25      0.50      0.25      0.75      0.25
Table 7.7
Summary of the Findings (NR = number of rejects)
Subgroup No. NR n p
1 1 4 0.25
2 2 4 0.50
3 1 4 0.25
4 3 4 0.75
5 1 4 0.25
Sum 8 20
The p values are extraordinarily high. CTF either has to go back to the old standard or get the new machine. This analysis will provide the motivation for changing machines. The p-chart parameters are calculated as follows: first, compute p̄ = Average Percent Defective, which is defined by:

p̄ = total number rejected / total number inspected = 8 / 20 = 0.40
Companies that subscribe to the concepts of TQM are committed to obtaining defective rates that are less than 1%, and this process is running at 40%. It is evident that the new criteria and the present process are incompatible. Constructing the p-chart confirms this evaluation.
The control chart for single attributes, known as the p-chart (there is a c-chart for multiple defects, described below), requires the calculation of the standard deviation (s_p). For the p-chart computations, the binomial description of variability is used:

s_p = √[ p̄(1 − p̄) / n ] = √[ (0.40)(0.60) / 4 ] = √0.06 = 0.245
Shifting to more conservative 1.96-sigma limits means that 95% of all subgroups will have a p value that falls within the control limits,⁶ which are determined as follows:
UCL_p = p̄ + 1.96s_p = 0.40 + 1.96(0.245) = 0.40 + 0.48 = 0.88
and
LCL_p = p̄ − 1.96s_p = 0.40 − 0.48 = −0.08 < 0.00, which is set to 0.00
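The p-chart parameters above can be checked with a short Python sketch:

```python
import math

# p-chart parameters for the Table 7.6 data: 8 rejects among 20 pieces,
# subgroups of size n = 4, with 1.96-sigma (95%) limits.
rejects, inspected, n = 8, 20, 4
p_bar = rejects / inspected                 # 0.40
s_p = math.sqrt(p_bar * (1 - p_bar) / n)    # sqrt(0.06), about 0.245

ucl_p = p_bar + 1.96 * s_p                  # about 0.88
lcl_p = max(0.0, p_bar - 1.96 * s_p)        # -0.08, clipped to 0.00
```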
The LCL cannot be less than zero, so it is set at zero.
Note that UCL_p and LCL_p are functions of s_p, which can vary with the sample size n. Thus, if the 11:00 a.m. sample had nine measured values instead of four, then

s_p = √[ (0.40)(0.60) / 9 ] = √0.027 = 0.163

s_p has decreased, and this would reduce the spread between the limits, just for the 11:00 a.m. portion of the chart.
The control limits can step up and down according to the sample size, which is another degree of flexibility associated
with the p-chart.
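This stepping behavior can be sketched directly, assuming p̄ stays at 0.40 and comparing the chapter's n = 4 with the hypothetical n = 9 subgroup:

```python
import math

# s_p, and hence the p-chart limits, step up and down with the
# subgroup size n while p-bar stays fixed.
p_bar = 0.40
for n in (4, 9):
    s_p = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl_p = p_bar + 1.96 * s_p
    print(n, round(s_p, 3), round(ucl_p, 2))
```

The larger subgroup narrows the limits, so the same p value is judged against a tighter band.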
The appropriate control chart for percent defectives for the Chocolate Truffle Factory is shown in Figure 7.13.
Figure 7.13 The p-Chart
There are no points outside the UCL, although the 3:00 p.m. value of 0.75 is close to the upper limit's value of 0.88. The same reasoning applies: the process is stable, but the 3:00 p.m. sample should be eliminated from the calculations and 25 more samples of four should be taken.
c-Charts for Attributes: the Number of Defects per Part
The c-chart shows the number of defects in successive samples. It has upper and lower control limits similar to those found in the x̄-, R-, and p-charts.
The finish of an automobile or the glass for the windshield are typical of products that can have multiple defects. It is common to count the number of imperfections for paint finish and for glass bubbles and marks. The expected number of defects per part is c̄. The statistical count c of the number of defects per part lends itself to control by attributes, which is similar to p, the number of defectives in a sample.
It is reasonable to assume that c is Poisson distributed because the Poisson distribution describes relatively rare events. (It
was originally developed from observations of the number of Prussian soldiers kicked by horses.) There is an insignificant
likelihood that a defect will occur in any one place on the product unless there is an assignable cause. That is what the
c-chart is intended to find out.
The standard deviation of the Poisson distribution is the square root of c̄. Thus:

UCL_c = c̄ + k√c̄  and  LCL_c = c̄ − k√c̄
The number of defects is counted per unit sample, and c̄ is estimated as follows:
Sample No.   Number of Defects     Sample No.   Number of Defects
1            3                     6            5
2            2                     7            3
3            5                     8            4
4            8                     9            3
5            2                     10           3

Legend: Total Number of Defects = 38, Total Number of Samples = 10, Average Number of Defects per Sample, c̄ = 3.8
The control limits are calculated:

UCL_c = c̄ + 2√c̄ = 3.8 + 2(1.95) = 7.70
and
LCL_c = c̄ − 2√c̄ = 3.8 − 2(1.95) = −0.10
A negative LCL is interpreted as zero and the UCL is breached by the fourth sample. This violation seems to indicate that
all is not well with this process. The control chart is shown below.
Figure 7.14 c-Chart for Number of Defects per Part
It is possible to throw out the fourth sample and recalculate the charts. In reality, larger samples would be taken to draw
up the initial charts and then further samples would determine the stability of the process.
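The c-chart arithmetic can be sketched from the ten defect counts above, with k = 2:

```python
import math

# c-chart limits from the defect counts, using 2-sigma limits.
defects = [3, 2, 5, 8, 2, 5, 3, 4, 3, 3]
c_bar = sum(defects) / len(defects)     # 38 / 10 = 3.8
s_c = math.sqrt(c_bar)                  # sqrt(3.8), about 1.95
ucl_c = c_bar + 2 * s_c                 # about 7.70
lcl_c = max(0.0, c_bar - 2 * s_c)       # -0.10, set to zero

# The only count above the UCL is the fourth sample (8 defects).
breaches = [c for c in defects if c > ucl_c]
```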
7.12 Method 7: Analysis of Statistical Runs
A run of numbers on a control chart is successive values that all fall either above or below the mean line.
Consider this scenario. For months, the control chart has indicated a stable system. However, the latest sample mean falls
outside the control limits. The following conjectures should be made:
1. Has the process changed or is this a statistical fluke?
2. Was the out-of-control event preceded by a run of sample subgroup mean values?
A run is a group of consecutive sample means that all occur on one side of the grand mean.
3. If there was a run of six or more, assume that something is changing in the process. Look for the problem.
Sample the process again, as soon as possible. Make the interval between samples smaller than usual until
the quandary is resolved.
Sometimes a separate run chart is kept to call attention to points that are falling on only one side of x̿. Separate charts are not required if the regular control charts are watched for runs.
The x̄-chart in Figure 7.10 shows no runs. The R-chart in Figure 7.12 shows a run of 3 consecutive points below the grand mean. The p-chart in Figure 7.13 shows no runs. Charts often show short runs of two, three, or four, because short runs are statistically likely.
Runs can appear on either side of the grand mean, x̿.
The probability of two consecutive x̄'s falling above x̿ is 1/2 × 1/2 = 1/4.
The probability of two consecutive x̄'s falling below x̿ is 1/2 × 1/2 = 1/4.
The probability of a run of two, either above or below, is 1/4 + 1/4 = 2/4 = 1/2.
The probability of a run of three above is (1/2)³ = 1/8, and below it is the same.
The probability of a run of three being either above or below is (1/8 + 1/8) = 2/8.
To generalize, the probability of a run of r above the grand mean is (1/2)^r, and the same below, so above and below is:

p(r) = 2(1/2)^r = (1/2)^(r−1)

Then, the probability of a run of six (r = 6) is:

p(6) = (1/2)^(6−1) = (1/2)⁵ = 1/32 = 0.03125

Note that a run of six, having a probability of 1/32 = 0.03125, has about the same probability as a sample mean being above the UCL with 1.86-sigma limits. Thus, a run of six adds serious weight to the proposition that something has gone awry.
The run chart also shows whether the sample means keep moving monotonically, either up or down. Monotonically means always moving closer to the limit without reversing direction. The run of 3 in Figure 7.12 is not monotonic. A monotonic run signals even greater urgency to consider the likelihood that an assignable cause has changed the process's behavior. Statistical run measures take neither monotonicity nor the slope of the run into account. Steep slopes send more urgent signals of change.
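The run-probability formula reduces to a one-liner; the function name is illustrative:

```python
# Probability of a run of r consecutive sample means on one side
# (either side) of the grand mean: p(r) = 2(1/2)^r = (1/2)^(r - 1).
def run_probability(r: int) -> float:
    return 0.5 ** (r - 1)

print(run_probability(2))  # 0.5
print(run_probability(6))  # 0.03125
```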
Start-ups in General
Many, if not most, processes, when they are first implemented, moving from design to practice, are found to be out of
control by SPC models. Process managers working with the SPC charts and 25 initial samples can remove assignable
causes and stabilize the process. The continuity of the process that is required is characteristic of flow shop type
configurations. Nevertheless, the p-chart has been used with orders that are regularly repeated in the job shop.
7.13 The Inspection Alternative
Some firms have practiced inspecting out defectives instead of going through the laborious process of classifying
problems, identifying causes and prescribing solutions. This approach to process management is considered bad practice.
Inspection, in conjunction with SPC, plays an important role by providing sample means, allowing workers to check their
own output quality, and inspecting the acceptability of the supplier's shipments (called acceptance sampling).
Inspections and inspectors reduce defectives shipped. They do not reduce defectives made.
100% Inspection, or Sampling?
An option for monitoring a process is using one hundred percent (100%) inspection of the output. There are times when
100% inspection is the best procedure. When human checking is involved, it may be neither cost-efficient nor
operationally effective. Automated inspection capabilities are changing this.
The use of 100% inspection makes sense for parachutists and for products where failure is life-threatening. Inspections
which are related to life-and-death dependencies deserve at least 100% and, perhaps, even 1000% inspection. For
parachute inspections and surgical cases, a second opinion is a good idea. It is human nature that it is easier to spot defects
in the works of others than in one's own work.
NASA continuously monitors many quality factors at space launchings. Aircraft take-off rules are a series of checks that
constitute 100% inspection of certain features of the plane. There are no serious suggestions about doing away with these
forms of 100% inspection. There have been occasions when pilots forgot to do a checking procedure with dire
consequences.
An economic reason to shun 100% inspection is that it is often labor intensive. This means it is likely to be more
expensive than sampling the output. Sampling inspection is the other alternative. The few are held to represent the many.
The body of statistical knowledge about sample sizes and conditions is sound and is treated shortly.
A common criticism of 100% inspection is that it is not fail-safe because it is human to err. Inspectors' boredom makes them very human in this regard. Sampling puts less of a burden on inspectors.
Inspection procedures performed by people are less reliable and have lower accuracy when the volume of throughput is
high. Imagine the burden of trying to inspect every pill that goes into a pharmaceutical bottle. If the throughput rate is one
unit per second and it takes two seconds to inspect a unit, then two inspectors are going to be needed and neither of them
will have any time to go to the bathroom. With present scanner technology the dimensions of the problem are changed.
In such a situation sampling inspection is a logical alternative. If destructive testing is required (as with firecrackers),
sampling is the only alternative. Otherwise, the firecracker manufacturer will have no product to sell. Destructive testing
is used in a variety of applications such as for protective packaging and for flammability. The sampling alternative is a
boon to those manufacturers.
Technological and Organizational Alternatives
The situation for inspection is changing in several ways, with important consequences for process management and
quality control.
This robot has visual capabilities and was developed to inspect
and monitor hazardous materials. Source: Martin Marietta Corporation
As automated inspection becomes more technologically feasible, less expensive and/or more reliable than sampling, the
balance shifts to 100% technology-based inspection. This is a definite new trend which promises consumers a step
function improvement in the quality of the products they buy.
Robot inspectors have vision capabilities that transcend that of humans, including infrared and ultraviolet sight. They
have optical-scanning capabilities with speeds and memories that surpass humans. Additionally, they have the ability to
handle and move both heavy and delicate items as well as dangerous chemicals and toxic substances.
There are other changes occurring with respect to who does inspections. In many firms, each worker is responsible for the
quality control of what he or she has done. This is proving effective, but the job must be such that the operator is in a
position to judge the quality, which is not always the case. For those situations, alternative quality controls are developed
and supported by teamwork.
Teamwork originated with Japanese manufacturing practice, but has spread so widely that it cannot be associated with any one country's style. Rather, it should be identified with the type of organization that accepts empowerment of all employees. The employees are often called "associates."
Many organizations now champion the practice of having associates inspect their own work. No other inspectors are used.
Quality auditors are gone. There may be a quality department, but it is there to back up the associates with consulting
advice. This variant of traditional 100% inspection has been winning adherents in high places in big companies all over
the world.
7.14 Acceptance Sampling (AS)
Acceptance sampling is the process of using samples at both the inputs and the outputs of the traditional input-output model. Acceptance sampling uses statistical sampling theory to determine if the supplier's outputs shipped to the producer meet standards. Then, in turn, AS is used to determine if the producer's outputs meet the producer's standards for the customers. Acceptance sampling checks conformance to agreed-upon standards. It should be noted that SPC is used during the production process. Also, AS operates with the assumption that the production process is stable.
In the 1920s, at Western Electric Company, the manufacturing division of AT&T, statistical theory was used to develop
sampling plans that could be employed as substitutes for 100% inspection. Usually, the purchased materials are shipped in
lot sizes, at intervals over time. The buyer has set standards for the materials and inspects a sample to make certain that
the shipment conforms. Sometimes the supplier delivers the sample for approval before sending the complete shipment.
Military purchasing agencies were one of the first organizations to require the use of acceptance sampling plans for
purchasing. There were published military standards such as U.S. MIL STDS-105 D, and more recently ANSI/ASQC
Z1.4-1981.
Similar standards applied in Canada and other countries as well. Sampling plans are often part of the buyer-supplier
contract.
Acceptance sampling is particularly well suited to exported items. Before the products are shipped to the buyer, they are
inspected at the exporting plant. Only if they pass are they shipped. The same reasoning makes sense for shipments that
move large distances, even within the same country.
There are companies that specialize in acting as an inspection agent for the buyer. The inspectors sample the items in the suppliers' plants with their full cooperation. Suppliers that have a trust relationship with their customers sample their own output for acceptability to ship.
When done properly, acceptance sampling is always used in conjunction with statistical quality control (SQC). It is assumed that when the lot was made, the process was stable with a known average for defectives (p̄), which is used for either the average percent or average decimal fraction of defective parts. The sample is expected to have that same percent defectives.
Variability is expected in the sample. This means that there may be more or fewer defectives in the sample than exactly the process average (p̄). A sampling plan sets limits for the amount of variability that is acceptable. The reasons that the limits might be exceeded include:
1. The number of defectives found in the sample exceeded the limits even though the process was stable with a process average of p̄. There is a limit as to how much variability will be allowed before this assumption, and the lot, are rejected. With 3-sigma limits there is the expectation that 27 out of 10,000 sample mean points will fall outside of the control limits and 13.5 out of 10,000 will be above the upper control limit.
2. The statistical variability exceeded the limits because the process was out of control (not stable) during the
time that the lot was being made. For example, 35 out of 10,000 sample mean points fall outside of the
3-sigma limits.
If the sample has too many defectives, then the lot can be rejected and returned to the manufacturer, or it can be detailed. This means using 100% inspection to remove defectives. If the inspection was done at the supplier's plant, then there is no need to pay to transport the defectives in order to return them. Also, there is advance warning that the defective lot has been rejected. This can alter the production schedule and lead to expediting the next lot.
Acceptance Sampling Terminology
The basic elements of sampling plans are:
1. N = the lot size. This is the total number of items produced within a homogeneous-production run.
Homogeneity means that the system remained unchanged. Often, conditions for a homogeneous run are
satisfied only for the specific shipment. Large shipments can entail several days of production. Each time the
process is shut down, or a new shift takes over, there is the potential for changes impacting the process.
SQC can be used to check if the process remains stable.
Every time there is a change of conditions, such as a new start-up (new learning for the learning curve), the
stability of the process has to be checked. Each time one worker relieves another, it should be assumed that a
new lot has begun. There may be a new start-up after a coffee break or the lunch hour.
It is assumed that the sample is representative of the lot. It also is assumed that the average percent defectives (p̄) produced by the process does not change from the beginning to the end of the run. If p̄ does change, the sample should pick up that fact because there will be too many or too few defectives in the sample. The "too few" is not a cause for concern.
2. n = the sample size. The items to be inspected should be a representative sample drawn at random from
the lot. Inspectors know better than to let suppliers choose the sample.
3. c = the acceptance number.
4. The sampling criterion is defined as follows:
n items are drawn from a lot of size N,
k items are found to be defective,
if k > c, reject the entire lot,
if k = c, accept the lot,
if k < c, accept the lot.
Example: 5 items (n = 5) are drawn from a lot size of 20 (N = 20). The acceptance number is set at c = 2. Then, if 3 items are found to be defective (k = 3), reject the entire lot because k > c. If 2 or fewer items are found to be defective, k is equal to or less than c, so accept the lot. It makes sense to remove the defectives from the lot. This means that n − 2 units are returned to the lot. The defectives may be repairable or may require scrapping.
When the acceptance number is c = 0, one defective rejects the lot.
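The decision rule above is a one-line comparison; the function name is illustrative:

```python
# Single-sampling decision rule: draw n items from a lot of N,
# count k defectives, and reject the lot when k exceeds c.
def accept_lot(k: int, c: int) -> bool:
    return k <= c

# The example: n = 5 drawn from N = 20, acceptance number c = 2.
assert not accept_lot(3, 2)   # k > c: reject the lot
assert accept_lot(2, 2)       # k = c: accept
assert accept_lot(1, 2)       # k < c: accept
```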
O.C. Curves and Proportional Sampling
Operating characteristic curves (O.C. curves) show what happens to the probability of accepting a lot (P_A) as the actual percent defective in the lot (p) goes from zero to one. Each O.C. curve represents a unique sampling plan. The construction of O.C. curves is the subject of the material that follows. However, becoming familiar with reading O.C. curves is the first requisite.
A good way to start is to examine why the use of proportional sampling does not result in the same O.C. curve for any lot
size. Proportional sampling is equivalent to taking a fixed percent (PRC) of any size lot (N) for the sample (n). Thus:
Set PRC at (say) 20%. Then if the lot size is N = 100, the sample size n = 20. If N = 50, n = 10; and if N = 20, n = 4.
Figure 7.15 shows the three O.C. curves that apply where c = 0.
PRC = (n / N)(100)  and  n = (PRC)(N) / 100
The three O.C. curves show that proportional sampling plans will not deliver equal risk. The n = 4 plan is least discriminating: its probability of accepting lots with 10 percent defectives is about 60 percent. The n = 10 plan has a P_A around 35 percent, and the n = 20 plan has a P_A of about 10 percent. Each plan produces significantly different P_A's.
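These P_A values can be checked with a short hypergeometric calculation (the function name is illustrative); the percentages quoted above are read off the curves, so the computed values differ slightly (roughly 63, 31, and 10 percent):

```python
from math import comb

# Probability of acceptance P_A for a single-sampling plan (N, n, c)
# when the lot contains D defectives, from the hypergeometric
# distribution: sum over the accepted counts k = 0..c.
def prob_accept(N: int, n: int, c: int, D: int) -> float:
    return sum(comb(D, k) * comb(N - D, n - k) for k in range(c + 1)) / comb(N, n)

# The three proportional (20%) plans with c = 0, at 10 percent defective:
for N, n in ((20, 4), (50, 10), (100, 20)):
    print(N, n, prob_accept(N, n, 0, N // 10))
```

The larger sample is plainly the more discriminating plan, even though all three inspect the same fraction of the lot.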
Figure 7.16 O.C. Curves with c = 0 and c = 1
Figure 7.15 O.C. Curves for Three Proportional Sampling Plans
It has been established that O.C. curves with the same sampling proportion (n/N ratio) do not provide the same probability
of acceptance. They are not even similar O.C. curves.
Also, the acceptance number c has a big effect on the probability of acceptance. This can be seen by studying the O.C.
curves in Figure 7.16.
When c is increased from 0 to 1 for n = 10 and N = 50, there is a marked decrease in the discriminating ability of the
sampling plan. This is because the probability of accepting a lot with a given value of p rises significantly when the
acceptance number, c, increases from zero to one. Consequently, c provides control over the robustness of the sampling
test. c = 0 is always a tougher plan than c > 0.
The other controlling parameter is n. Return to Figure 7.15 and notice that as n goes from 20 to 10 to 4, the probability of acceptance increases. The larger the value of n, the more discriminating the sampling plan. The sample size n drives the value of P_A, whereas the effect of N is not great, except when it is quite small. The curves in Figure 7.15 above would not differ much from those that would apply if all of the sampling plans had the same N = 50.
Designing a sampling plan requires determining what the values of P_A should be for different levels of p. This provides the kind of protection that both the supplier and the buyer agree upon. Then, the O.C. curve can be constructed by choosing appropriate values for n and c. Note that an agreement is required. The design is not the unilateral decision of the buyer alone. The situation calls for compromise and negotiation between the supplier and the buyer. Figure 7.17 shows what is involved.
Figure 7.17 Buyer's and Supplier's Risk on the O.C. Curve: Negotiation between Supplier and Buyer
Two shaded areas are marked alpha and beta (α and β) in Figure 7.17. The α area is called the supplier's risk. α gives the probability that acceptable lots will be rejected by the sampling plan which the O.C. curve represents. Acceptable lots are
those which have a fraction defective p that is equal to or less than the process average p̄, which operates as the limit for α. (In Figure 7.17, the supplier's risk is the α error, a probability of 0.05 that quality superior to the process average will be rejected; the buyer's risk is the β error, that quality inferior to the LTPD will be accepted.)
The threshold for α is also called the average quality limit (AQL). It makes sense that the supplier does not want lots rejected that are right on the mark or better (lower) than the process average. A sensible buyer-supplier relationship is founded on the buyer knowing the process average.
The beta area represents the probability that undesirable levels of defectives will be accepted by the O.C. curve of the sampling plan. Beta is called the buyer's risk. The limiting value for beta, defined by the buyer, is called the lot tolerance percent defective, LTPD or p_t. If decimal fraction defective is used instead of percent defective, the limit is called lot tolerance fraction defective (LTFD). It is also known as the rejectable quality level (RQL).
LTPD (or LTFD or RQL) sets the limit that describes the poorest quality the buyer is willing to tolerate in a lot. The buyer wants to reject all lots with values of p that fall to the right of this limit on the x-axis of Figure 7.17. The use of sampling methods makes this impossible. There is a probability that the sample will falsely indicate that the percent defective is less than the limiting percent defective p_t. This is the probability of accepting quality that is inferior to the LTPD. Therefore, the buyer compromises by saying that no more than β percent of the time should the quality level p_t be accepted by the sampling procedure.
Type I and Type II Errors
Decision theory has contrasted the two kinds of errors with which the alpha and beta risks correspond. These are Type I and Type II decision errors. The α error falsely rejects good lots. It typifies the product that would have become a great success had it been launched, but it was bypassed because the test rejected it.
Alpha is known as a Type I error and identified as an error of omission.
The β error is the probability of incorrectly accepting bad lots, equivalent to making the mistake of launching a new product failure because the test incorrectly accepted it as a winner.
Beta is known as a Type II error and identified as an error of commission.
You have to know the costs of making each kind of error in specific circumstances to decide which is preferable. It is important to note that these two errors are related: as you get less of one, you get more of the other.
The Cost of Inspection
Given the process average p̄ and α, and the consumer's specification of p_t and β, a sampling plan can be found that minimizes the cost of inspection. This cost includes the charge for detailing, which entails inspecting 100 percent of the items in all rejected lots to remove defective pieces.
Such a sampling plan imputes a dollar value to the probability (α) of falsely rejecting lots of acceptable quality. The supplier can decrease the α-type risk by improving quality and thereby reducing the process average. Real improvement to p̄ might be costly. It is hoped that the buyer would be willing to accept part of this increased cost in the form of higher prices. In one way or another, the supplier and the buyer will share the cost of inspection. Therefore, it is logical to agree that the rational procedure is to minimize inspection costs as they negotiate the values for LTPD and β.
The sampling plan will deliver the specified α and β risks for given levels of p̄ and p_t. If the process average p̄ is the true state of affairs, then α percent of the time, N - n pieces will be detailed. This means that the average number of pieces inspected per lot will be n + (N - n)(α). This number is called the average sample number (ASN).
Average Outgoing Quality Limits (AOQL)
An important acceptance sampling criterion is average outgoing quality (AOQ). AOQ measures the average percentage of
defective items the supplier will ship to the buyer, under different conditions of actual percent defectives in the lot.
In each lot, the sample of n units will be inspected. The remaining (N - n) units will only be inspected if the lot is rejected. The expected number of defectives in the unsampled portion of the lot is p(N - n). Out of every 100 samples, P_A will be the percent of samples passed without further inspection. The defective units in the samples passed will not be replaced (detailed).
On the other hand, 1 - P_A of the 100 samples will be rejected and fully detailed, identifying all defectives. Thus:
(P_A)(p)(N - n)
is the average or expected number of defectives that will be passed without having been identified for every N units in a lot.
The average outgoing quality (AOQ) changes value as p goes from zero to one. Thus:
AOQ = P_A p (N - n)/N     Equation 7.7
Note that P_A changes with p in accordance with the specifics of the sampling plan, which is based on the values of n and c that are chosen. Thus, AOQ is a function of all the elements of a sampling plan. It is, therefore, a useful means of evaluating a sampling plan. Specifically, for each value of p, an AOQ measure is derived. It is the expected value of the fraction defectives that would be passed without detection if the process was operating at value p. The AOQ measure reaches a maximum level, called the average outgoing quality limit (AOQL), for some particular value of p. This is shown in Figure 7.18.
Figure 7.18 Average Outgoing Quality as a function of the process fraction defective, p
[The curve rises from zero, peaks at AOQL = 0.1875, and returns to zero.]
Table 7.8 presents the AOQ computations which are based on Equation 7.7 above. The AOQL value occurs when p = 0.5. This particular sampling plan (P_A = f(p)) is developed by the hypergeometric method described in Supplement 7.
Table 7.8 Computing Average Outgoing Quality Limit
p     P_A   (N - n)/N   AOQ
0     1     3/4         0
1/4   3/4   3/4         9/64
1/2   1/2   3/4         12/64 (AOQL)
3/4   1/4   3/4         9/64
1     0     3/4         0
The maximum value of AOQ, the AOQL, has been calculated as 12/64. This average outgoing quality limit describes the worst case of fraction defectives that can be shipped, if it is assumed that the defectives of rejected lots are replaced with acceptable product and that all lots are thoroughly mixed so that shipments have homogeneous quality.
Multiple-Sampling Plans
Multiple-sampling plans are utilized when the cost of inspection required by the single-sampling plan is too great.
Single-sampling plans require that the decision to accept or reject the lot must be made on the basis of the first sample
drawn. With double sampling, a second sample is drawn if needed. Double sampling is used when it can lower inspection
costs.
The double-sampling plan requires two acceptance numbers c1 and c2 such that c2 > c1. Then, if the observed number of defectives in the first sample of size n1 is k1:
1. Accept the lot if k1 ≤ c1.
2. Reject the lot if k1 > c2.
3. If c1 < k1 ≤ c2, then draw an additional sample of size n2.
The total sample is now of size n1 + n2.
The total number of defectives from the double sample is now k1 + k2. Then:
4. Accept the lot if (k1 + k2) ≤ c2.
5. Reject the lot if (k1 + k2) > c2.
Double sampling saves money by hedging. The first sample, which is small, is tested by a strict criterion of acceptability.
Only if it fails are the additional samples taken. Double-sampling costs have to be balanced against the costs of the single
large sample.
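The five rules above can be written as a small decision function. This is a hedged sketch: the function and parameter names are invented for illustration, and the second-sample count is supplied by a callable so that the extra sample is drawn only when rule 3 requires it:

```python
def double_sampling_decision(k1, draw_second_sample, c1, c2):
    """k1: defectives in the first sample; draw_second_sample: called
    only when rule 3 applies, returns k2; c1 < c2 are the acceptance numbers."""
    assert c2 > c1
    if k1 <= c1:
        return "accept"                      # rule 1
    if k1 > c2:
        return "reject"                      # rule 2
    k2 = draw_second_sample()                # rule 3: second sample needed
    return "accept" if k1 + k2 <= c2 else "reject"   # rules 4 and 5

# With c1 = 1, c2 = 3: one defective accepts outright;
# two defectives force a second sample.
print(double_sampling_decision(1, lambda: 0, 1, 3))  # -> accept
print(double_sampling_decision(2, lambda: 2, 1, 3))  # -> reject (2 + 2 > 3)
```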
Multiple sampling follows the same procedures as double sampling where more than two samples can be taken, if indicated by the rule: cj < kj ≤ c(j+1). As with double sampling, the sample sizes are specified with upper and lower acceptance numbers which get larger for additional samples. When the cumulative number of defects exceeds the highest acceptance number cn, the lot is rejected. Thus, reject the lot if:
(k1 + k2 + … + kj + … + kn) > cn
In the same sense, the costs of multiple-sampling plans can be compared to single or double sampling. Sequential
sampling may be cost effective where many samples of large size have to be taken regularly. The sequential sampling
procedure is to take successive samples with rules concerning when to continue and when to stop. In effect, it determines
the spacing shown in Table 7.3 above.
Tables exist for single- and double-sampling plans which make the choice of a plan and the design of that plan much simpler.7
7.15 Binomial Distribution
For the binomial distribution to apply, the sample size should be small as compared to the lot size. The criterion is given after Equation 7.8, which is the binomial distribution, where n = sample size, p = the decimal fraction defective, and q = 1 - p.
P_A is the probability of acceptance as previously used in the chapter. The lot size N is assumed to be infinite, so it does not appear in Equation 7.8. The assumption of large enough N is considered to be met when n/N ≤ 0.05.
When j = 0, the probabilities are calculated for the sampling plan with c = 0. Just to be sure that the factorial is known, it is: n! = n(n - 1)(n - 2)…(1) and 0! = 1! = 1.
The O.C. curve for n = 100, c = 0, is calculated from the computing Equation 7.9 for sample size n = 100 and acceptance number c = 0. With this sample size, it is expected that the lot is 2,000 or larger. Also, this sample size has been chosen to enable a later comparison with the use of the Poisson distribution for creating O.C. curves, and because most tables for the binomial distribution do not go up to n = 100.8
In Equation 7.9, 100! is divided by 100!, which equals one, and p^0 = 1. Table 7.9 is derived by using this binomial computing formula over the beginning of the entire range of possible values of fraction defectives.
Table 7.9 Binomial Derivation of O.C. Curve with j = 0
Fraction Defective (p)   1 - p   P_A = (1 - p)^100
0.00                     1.00    1.000
0.01                     0.99    0.366
0.02                     0.98    0.133
0.04                     0.96    0.017
0.05                     0.95    0.006
.                        .       .
1.00                     0.00    0.000
Equation 7.8:
P_A = prob(j; n, p) = [n! / (j!(n - j)!)] (p^j)(q^(n-j))
Equation 7.9:
prob(0; 100, p) = [100! / (0!(100 - 0)!)] (p^0)(q^100) = q^100 = (1 - p)^100
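Equations 7.8 and 7.9 are easy to verify numerically. The sketch below reproduces the P_A column of Table 7.9 using Python's math.comb for the binomial coefficient:

```python
from math import comb

def binomial_term(j, n, p):
    # Equation 7.8: prob(j; n, p) = [n!/(j!(n - j)!)] p^j q^(n-j)
    return comb(n, j) * p**j * (1 - p)**(n - j)

# c = 0, n = 100: P_A = (1 - p)**100, as in Equation 7.9 and Table 7.9
for p in (0.00, 0.01, 0.02, 0.04, 0.05):
    print(p, round(binomial_term(0, 100, p), 3))
```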
Figure 7.19 shows the plot of this O.C. curve.
For the acceptance number c = 1, it is necessary to derive the binomial for j = 0 and j = 1. Add these results together; thus, Prob(0; 100, p) + Prob(1; 100, p) will be used to derive the O.C. curve for n = 100 and c = 1. The computations follow in Equation 7.10, where 100! is divided by 99!, leaving a factor of 100. The result can be used for computing Table 7.10 with the beginning of the entire range of possible values of fraction defectives, p.
Table 7.10 Binomial Derivation of O.C. Curve with j = 1
Fraction Defective (p)   q = 1 - p   P_A = 100p(1 - p)^99
0.00                     1.00        0.000
0.01                     0.99        0.370
0.02                     0.98        0.271
0.04                     0.96        0.070
0.05                     0.95        0.031
.                        .           .
1.00                     0.00        0.000
Calculation of P_A = 100p(1 - p)^99 when q = 0.99, 0.98, and so on is done readily with a math calculator or the computer. However, for those who want a computing method without electronic aids, use logarithms and antilogs, remembering that decimals have negative characteristics.
Equation 7.10:
prob(1; 100, p) = [100! / (1!(100 - 1)!)] (p^1)(q^99) = 100 p q^99
Figure 7.19 Binomial O.C. Curve, c = 0
[Plot of P_A versus fraction defective p, from 0.00 to 0.05.]
To calculate the c = 1 plan, add the P_A values for each row of Tables 7.9 and 7.10 above. Thus,
Table 7.11 Binomial Derivation of O.C. Curve with c = 1
Fraction Defective (p)   P_A (j = 0)   P_A (j = 1)   P_A (c = 1) = Sum
0.00                     1.000         0.000         1.000
0.01                     0.366         0.370         0.736
0.02                     0.133         0.271         0.404
0.04                     0.017         0.070         0.087
0.05                     0.006         0.031         0.037
.                        .             .             .
1.00                     0.000         0.000         0.000
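The c = 1 column of Table 7.11 is the accumulated form of Equation 7.8. Note that summing unrounded terms can differ in the third decimal from summing the rounded table entries (0.403 versus the table's 0.404 at p = 0.02):

```python
from math import comb

def P_A(c, n, p):
    # Accumulate the binomial terms of Equation 7.8 for j = 0 .. c
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(c + 1))

for p in (0.01, 0.02, 0.04, 0.05):
    print(p, round(P_A(1, 100, p), 3))
```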
Figure 7.20 shows the plot of this O.C. curve and that of Figure 7.19 so that a comparison can be made.
Figure 7.20 Binomial O.C. Curve with c = 1
As expected, the O.C. curve for c = 0 is far more stringent than for c = 1.
An even easier method for creating O.C. curves uses the Poisson distribution9 to approximate the binomial distribution when p is small (viz., when p < 0.05). The mean of this distribution is m = np, with n the sample size and p the fraction defective. A sample size of n ≥ 20 is customary. Like the binomial, it assumes sampling with replacement, which means that N is large enough to be treated as not affected by sampling.
The fundamental equation for the Poisson distribution is Equation 7.11. Let j = 0, which is equivalent to c = 0, and let n = 100. Then, because m = np, the mean m = 100p. Thus, Prob(0; m) = e^(-100p). It is simplest, however, to use the mean value in the center column of Table 7.12 to calculate the values of the probability of acceptance in Table 7.12.
Table 7.12 Poisson Derivation of O.C. Curve with j = 0
Fraction Defective (p)   100p = m   P_A = e^(-m)
0.00                     0          1.000
0.01                     1          0.368
0.02                     2          0.135
0.04                     4          0.018
0.05                     5          0.007
.                        .          .
1.00                     100        0.000
Observe that this result is essentially the same as that obtained by using the binomial distribution. As with the binomial, a sampling plan where c = 1 requires computations of P_A for j = 0 and j = 1.
The computing equation is Equation 7.12. Again, it is easier to use m in the middle column of Table 7.13 to calculate the amount that j = 1 increases the probability of acceptance.
7.16 Poisson Distribution
Equation 7.11:
P_A = prob(j; m) = (m^j)(e^(-m)) / j!
Equation 7.12:
P_A = prob(1; m) = (m^1)(e^(-m)) / 1! = m e^(-m)
Table 7.13 Poisson Calculations for O.C. Curve with j = 1
Fraction Defective (p)   100p = m   P_A = m e^(-m)
0.00                     0          0.000
0.01                     1          0.368
0.02                     2          0.271
0.04                     4          0.073
0.05                     5          0.034
.                        .          .
1.00                     100        0.000
Summing the third-column contributions to P_A row by row from Tables 7.12 and 7.13 above, Table 7.14 is derived.
The Poisson approximation of the binomial is good for small values of p. Compare the binomial tables with the Poisson
tables through the range of p = 0.00 to 0.05. These O.C. curves are sufficiently similar that either approach can be used.
However, at 0.05 the error rises above 10 percent. Previously, it was stated that the Poisson should be used with p < 0.05,
or 5 percent. Because the binomial is always more accurate, if there is doubt about using the Poisson, the binomial is not a
difficult alternative.
It will be instructive to compare one larger value (for c = 0 and p = 0.10). The binomial result is:
P_A = (1 - 0.10)^100 = 0.000027
The Poisson result is:
P_A = e^(-100(0.10)) = 0.000045
The Poisson approximation yields an error of 67 percent when p = 0.10, which is substantial. That is (45-27)/27 = 0.67.
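The comparison can be checked directly. Note that the exact relative error, about 0.71, is slightly larger than the 0.67 computed above from the rounded figures:

```python
from math import exp

n, p = 100, 0.10
binomial = (1 - p) ** n      # c = 0 binomial: (0.90)**100
poisson = exp(-n * p)        # c = 0 Poisson:  e**(-m), with m = np = 10
print(round(binomial, 6), round(poisson, 6))      # -> 2.7e-05 4.5e-05
print(round((poisson - binomial) / binomial, 2))  # -> 0.71
```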
Table 7.14 Poisson Distribution for O.C. Curve with c = 1
Fraction Defective (p)   P_A (j = 0)   P_A (j = 1)   P_A (c = 1) = Sum
0.00                     1.000         0.000         1.000
0.01                     0.368         0.368         0.736
0.02                     0.135         0.271         0.406
0.04                     0.018         0.073         0.091
0.05                     0.007         0.034         0.041
.                        .             .             .
1.00                     0.000         0.000         0.000
SUMMARY
The methodology of quality management is extensive. This chapter presents the major approaches used, including data check sheets, histograms, scatter diagrams, Pareto charts, and Ishikawa/fishbone diagrams. Then the most important of all, statistical control charts (x̄-, R-, p- and c-charts), are explained: how they work and why they work. Average outgoing quality limits and multiple sampling are part of the treatment of acceptance sampling methods used for buyer-supplier quality relations.
Review Questions
1. Develop a data check sheet for:
a. basketball scores
b. top ten players for runs batted in (RBIs)
c. some football statistics (provide some options)
2. Develop a data check sheet for set-up times for:
a. service time in a sit-down restaurant
b. service time in a fast-food restaurant
Note: First define service time explicitly.
3. Draw a fishbone diagram for the causes of poor reports.
4. An airline has requested a consultant to analyze its telephone reservation system. As the consultant, draw a fishbone diagram for the causes of poor telephone service.
5. Prepare a cause and effect analysis for the qualities of a fine cup of tea.
6. How do run charts give early warning signals that a stable system may be going out of control?
7. What is the purpose of control charts?
8. What would a go/no-go gauge which has tolerances of ±0.04 look like for 2.54-centimeter nails? Draw it.
9. Explain why service systems are said to have higher levels of variability from chance causes than manufacturing systems.
10. A job shop that never produces lot sizes greater than 20 units cannot employ the x̄- and R-charts of SQC. Why is this so? What kind of quality management can be used?
11. Stylish Packaging is a flow shop with a high-volume, serial-production line making cardboard boxes. An important quality is impact resistance, which requires destructive testing. Can SQC methods be used with destructive testing? What quality control method(s) are recommended? Discuss.
12. Can the consumer tell when an organization employs SQC? Can the consumer tell when it does not employ SQC?
13. Differentiate between assignable and chance causes of variation. What other names are used for each type of cause? Give some examples of each kind of variation. How can one tell when an assignable cause of variation has arisen in a system?
14. What is a stable system? Why is the condition of stability relevant to the activities of P/OM?
15. Why is the use of statistical quality control by drug and food manufacturers imperative? Would you recommend 2-sigma or 3-sigma control limits for food and drug applications of SQC? Explain.
16. One of the major reasons statistical quality control is such a powerful method is its use of sequenced inspection. Explain why this is so.
17. How should the subgroup size and the interval between samples be chosen? Is the answer applicable to both high-volume, short-cycle time products (interval between start and finish) and low-volume, long-cycle time products?
18. Differentiate between the construction of the x̄-, R-, p- and c-charts. Also distinguish between the applications of these charts. Are the charts applicable to both high-volume, short-cycle time products (interval between start and finish) and low-volume, long-cycle time products?
19. What is meant by "between-group" and "within-group" variability? Explain how these two types of variability constitute the fundamental basis of SQC.
20. Use the concepts of "within-group" variability and "between-group" variability to answer the following
questions:
a. Are the designers specifications realistic?
b. Is the production process stable?
c. Is the process able to deliver the required conformance specifications?
Problem Section
1. Draw a scatter diagram for the following data where it is suggested that df is a function of T.
Sample   T     df
1        112   73
2        120   80
3        118   78
4        110   75
5        105   65
If T is the temperature of the plating tank and df is the measure of defective parts per thousand, after plating,
does there seem to be a special relationship (correlation) between T and df?
2. After set up in the job shop, the first items made are likely to be defectives. If the order size calls for
125 units, and the expected percent defective for start-up is 7 percent, how many units should be made?
3. A subassembly of electronic components, called M1, consists of 5 parts that can fail. Three parts have
failure probabilities of 0.03. The other two parts have failure probabilities of 0.02. Each M1 can only be
tested after assembly into the parent VCR. It takes a week to get M1 units (the lead time is one week), and
the company has orders for the next five days of 32, 44, 36, 54, and 41. How many M1 units should be on
hand right now so that all orders can be filled?
4. Use SPC with the table of data below to advise this airline about its on-time arrival and departure
performance. These are service qualities highly valued by their customers. The average total number of
flights flown by this airline is 660 per day. This represents three weeks of data. It is suggested that a late
flight be considered as a defective. Draw up a p-chart and analyze the results. Discuss the approach.
The Number of Late Flights (NLF) Each Day
Day NLF   Day NLF   Day NLF
1   31    8   43    15  22
2   56    9   39    16  34
3   65    10  41    17  29
4   49    11  37    18  31
5   52    12  48    19  35
6   38    13  45    20  44
7   47    14  33    21  37
5. New data have been collected for the CTF. It is given in the table below. Start your analysis by creating an x̄-chart and then create the R-chart. Provide interpretation of the results.
Weight of Chocolate Truffles (in Grams)
[Data table: five subgroups j (10 a.m., 11 a.m., 1 p.m., 3 p.m., 4 p.m.), each with four weights x_i and a Sum row; the twenty recorded weights are 29.85, 29.65, 29.60, 29.80, 29.70, 29.90, 29.95, 30.05, 30.00, 29.95, 30.05, 31.05, 30.00, 30.25, 30.60, 29.00, 29.75, 30.50, 29.90, 29.80.]
6. For the data in Table 7.4 discussed earlier, throw out the 3:00 p.m. data and substitute instead the following four values of x_i: (30.15), (29.95), (29.80), and (30.05). Recalculate the parameters for the x̄-chart and draw the revised chart. Discuss the results.
7. Recalculate the R-chart parameters without 3:00 p.m. data and using the new data supplied in Problem 6 above for Table 7.4. Discuss the results.
8. What is the probability of a run of eight points all above the mean? What is the probability of a run of ten points above or below the mean?
9. For the data tabulated below, draw the p-chart. Compare these results with those based on Table 7.7.
Subgroup
No.
NR n p
1 1 9
2 2 9
3 1 9
4 3 16
5 1 9
SUM SUM
10. A food processor specified that the contents of a jar of salsa should weigh 14 ± 0.10 ounces net. A statistical quality control operation is set up and the following data are obtained for one week:
Sample No.
1   14.10  14.06  14.25  14.06
2   13.90  13.85  13.80  14.00
3   14.20  14.60  14.30  14.40
4   14.10  14.10  14.05  13.95
5   14.15  13.90  14.00  13.95
a. Construct an x̄-chart based on these 5 samples.
b. Construct an R-chart based on these 5 samples.
c. What points, if any, have gone out of control?
d. Discuss the results.
11. Use a data check sheet to track the Dow Jones average, regularly reported on the financial pages of most newspapers. Record the Dow Jones closing index value on a data check sheet every day for one week. Do an Ishikawa analysis, trying to develop hypotheses concerning what causes the Dow Jones index to move the way it does. Draw scatter diagrams to see if the hypothesized causal factors are related to the Dow Jones.
12. A method for determining the number of subgroups that will be sampled each day will be developed. Apply it to the situation where the total number of units produced each day is 1,000. The subgroup sample size is 30 units and the interval between subgroups is 200 units.
a. How many subgroups will be sampled each day?
b. How many units will be sampled each day?
c. What should be done if the sampling method damages one out of three units?
Method: Dividing the production rate per day by the subgroup sample size plus the interval between samples (given in units) determines the number of samples to be taken per day. If the subgroup sample size is n = 3 units, and the interval between sample subgroups is t = 4 units, and the total number of units produced each day is P = 210, then the calculation to determine the number of subgroups sampled each day, called NS, is as follows:
NS = P/(n + t) = 210/(3 + 4) = 30
The total number of units that are sampled and tested for quality is n(NS) = 90. This means that 90 units would be withdrawn from the production line and tested. If the quality-testing procedure damages the product in any way, the sample interval would be increased and the sample size might be reduced to two.
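The NS method above can be sketched as a one-line function (integer division is assumed, matching the worked example):

```python
def subgroups_per_day(P, n, t):
    # NS = P / (n + t): daily production divided by the number of units
    # spanned by one subgroup plus the interval to the next subgroup
    return P // (n + t)

NS = subgroups_per_day(210, 3, 4)
print(NS, 3 * NS)  # -> 30 90  (30 subgroups; 90 units withdrawn and tested)
```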
Readings and References
G. Bounds, L. Yorks, M. Adams, and G. Ranney, Beyond Total Quality Management, McGraw-Hill Book Co., Inc., 1994.
P. B. Crosby, Quality Is Free, NY, McGraw-Hill Book Co., Inc., 1979.
P. B. Crosby, Quality Is Still Free, 1st Ed., NY, McGraw-Hill Book Co., Inc., 1995.
D. J. Cowden, Statistical Methods in Quality Control, Englewood Cliffs, NJ, Prentice-Hall, Inc., 1957.
W. E. Deming, Out of the Crisis, Boston, MA, MIT Center for Advanced Engineering Studies, 1986.
Harold F. Dodge and Harry G. Romig, Sampling Inspection Tables, Single and Double Sampling, 2nd Ed., NY, Wiley, 1959.
Eugene L. Grant and Richard S. Leavenworth, Statistical Quality Control, 6th Ed., NY, McGraw-Hill Book Co., Inc., 1988.
Richard Tabor Greene, Global Quality: A Synthesis of the World's Best Management Methods, Burr Ridge, IL, Irwin, 1993.
A. V. Feigenbaum, Total Quality Control, 3rd Ed., NY, McGraw-Hill Book Co., Inc., 1991.
K. Ishikawa, What Is Total Quality Control? The Japanese Way, Englewood Cliffs, NJ, Prentice-Hall, Inc., 1985.
J. M. Juran, Quality Control Handbook, 4th Ed., NY, McGraw-Hill Book Co., Inc., 1988.
S. B. Littauer, "Technological Stability in Industrial Operations," Transactions of the New York Academy of Sciences, Series II, Vol. 13, No. 2 (Dec. 1950), pp. 67-72.
Jeremy Main, Quality Wars: The Triumphs and Defeats of American Business, A Juran Institute Report, NY, The Free Press, 1994.
P. R. Scholtes et al., The Team Handbook, Madison, WI, Joiner Associates, 1989.
W. A. Shewhart, Economic Control of Quality of Manufactured Product, Princeton, NJ, D. Van Nostrand Co., Inc., 1931.
Shigeo Shingo, Zero Quality Control, CT, Productivity Press, 1986.
H. Wadsworth, K. Stephens, and A. Godfrey, Modern Methods for Quality Control and Improvement, NY, Wiley, 1986.
SUPPLEMENT 7
Hypergeometric O.C. Curves
With this supplement, three methods of designing O.C. curves for single sampling will have been developed. Why bother
with this third method when the binomial and the Poisson have been shown to be close enough under the demonstrated
conditions? The reason for choosing one method instead of another is based on the relationship of the sample size n to the
lot size N. In the case of the hypergeometric, the small size of N makes it necessary to do sampling without replacement.
This means that if there is a lot size of N = 4, and it is known that there is one defective in that lot, then for the first sample there is a one-in-four chance of drawing the defective part. If the defective is not found in this first sample, then the second sample has a one-in-three chance of drawing the defective. The probability goes from 25 to 33.3 to 50 percent. The hypergeometric distribution takes this effect into account, whereas the binomial and the Poisson do not.
Stating this in yet another way, the hypergeometric distribution is used when the lot size N is small and successive
sampling affects the results. Each next sample reduces the number of unsampled units remaining in the lot. Thus, the
sample is first drawn from N units, then from N - 1, N - 2, units, etc.
If items are replaced to keep N of constant size, there is a significant chance that the same item would be sampled more
than once. Also, with destructive testing, sampled items cannot be replaced because they have to be destroyed to be
tested.
The hypergeometric distribution given in Equation 7.13 applies to a sampling plan with acceptance number, c = 0.
(Repeated note: the factorial of any number m is written m! and is calculated by (m)(m - 1)(m - 2)…(1).)
x = the possible number of defectives in the lot and
x/N = p is the possible fraction defective of the lot.
As x is varied from 0 to N, the appropriate values of P_A are determined. This allows a plot of P_A vs. p (or 100p, the percent defective). Thus, in the case where N = 4, n = 1, and c = 0, Equation 7.14 gives P_A,
where p is the symbol for fraction defective. The right-most term of Equation 7.14 is (1 - p).
It is obtained by substituting x = Np = 4p into the next to the last term of the equation.
Varying p to derive P_A results in Table 7.15. This is the same distribution that was used to derive the values of AOQ and AOQL in this chapter.
The hypergeometric formula when c > 0 has the same additive requirements for the probability of acceptance with j = 0 and j = 1 that were shown for the binomial and Poisson distributions.
Table 7.15 Hypergeometric Distribution
x   x/N = p   P_A
0   0.00      1.00
1   0.25      0.75
2   0.50      0.50
3   0.75      0.25
4   1.00      0.00
Equation 7.13:
P_A = [(N - x)!(N - n)!] / [(N - x - n)! N!]
Equation 7.14:
P_A = [(4 - x)! 3!] / [(3 - x)! 4!] = (4 - x)/4 = 1 - x/4 = 1 - p
Equation 7.15 gives the general hypergeometric equation.
The numerical example first derives the hypergeometric O.C. curve for c = 0 with j = 0, and then, using addition, for j = 1
and finally for c = 1. Also, for this example, the lot size is N = 50 and the sample size is n = 10. All calculations are based
on Equation 7.15 above.
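Equation 7.15 can be evaluated directly with Python's math.comb. The sketch below reproduces the Table 7.16 and Table 7.18 values for N = 50, n = 10; sums of unrounded terms can differ from the tables in the last digit, since the tables round each term first:

```python
from math import comb

def hypergeometric_P_A(N, n, x, c):
    # Equation 7.15 summed for j = 0 .. c: probability of accepting a lot
    # of size N containing x defectives when the sample of n may hold at
    # most c of them
    return sum(comb(x, j) * comb(N - x, n - j)
               for j in range(c + 1)) / comb(N, n)

for x in (0, 2, 5, 10):
    print(x, round(hypergeometric_P_A(50, 10, x, 0), 3),
             round(hypergeometric_P_A(50, 10, x, 1), 3))
```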
Table 7.16 Hypergeometric for O.C. Curves with j = 0 and c = 0
x    p = x/N   P_A
0    0.00      50!10!40! / (10!40!50!) = 1.000
2    0.04      48!2!10!40! / (10!38!2!50!) = 0.637
5    0.10      45!5!10!40! / (10!35!5!50!) = 0.311
10   0.20      40!10!10!40! / (10!30!10!50!) = 0.083
The next step is to determine the additions to the probability of acceptance when j = 1.
Table 7.17 Hypergeometric for O.C. Curves with j = 1
x    p = x/N   P_A+
0    0.00      There are no defectives in the lot; therefore the additional probability of acceptance with the relaxed acceptance criterion of j = 1 is zero. (P_A+ signifies the additional probability of acceptance.)
2    0.04      48!2!10!40! / (9!39!50!) = 0.326
5    0.10      45!5!10!40! / (9!36!4!50!) = 0.432
10   0.20      40!10!10!40! / (9!31!9!50!) = 0.268
Table 7.18 adds the respective rows of Tables 7.16 and 7.17 above.
Equation 7.15:
P_A = [C(x, j) · C(N - x, n - j)] / C(N, n) = [x!(N - x)! n!(N - n)!] / [j!(x - j)!(n - j)!(N - x - n + j)! N!]
Table 7.18 The c = 1 Hypergeometric Distribution
x    p      P_A (j = 0)   P_A+ (j = 1)   P_A (c = 1)
0    0.00   1.000         0.000          1.000
2    0.04   0.637         0.326          0.963
5    0.10   0.311         0.432          0.743
10   0.20   0.083         0.268          0.351
This c = 1 plan is a nonrigorous acceptance plan because it has such high probabilities of accepting 10 percent and even
20 percent defectives.
Problems for Supplement 7
1. Develop the hypergeometric for N = 5, n = 2, and c = 0. Write out the table for P_A. Draw the O.C. curve.
2. Develop the hypergeometric for N = 5, n = 2, and c = 1. Write out the table for P_A. Draw the O.C. curve.
3. Develop the hypergeometric for N = 50, n = 2, and c = 0. Construct the table for P_A. Draw the O.C. curve. Compare this result to the hypergeometric distribution that was developed in the supplement for N = 50, n = 10, and c = 0.
4. Develop the hypergeometric for N = 50, n = 2, and c = 1. Write out the table for P_A. Draw the O.C. curve. Compare this to the hypergeometric distribution that was developed in the supplement for N = 50, n = 10, and c = 1.
5. The situations in Problems 3 and 4 meet the binomial requirement that n/N < 0.05. Use the binomial distribution to create the table for P_A. Draw the O.C. curve. Compare the results of the hypergeometric in this case with those of the binomial distribution.
6. Comment on the use of the Poisson distribution for the situations given in Problems 3 and 4 above.
Notes . . .
1. W. E. Deming, The New Economics, for Industry, Government, Education, Cambridge, MA, MIT Center for Advanced Engineering Studies, 1993, p. 135. Deming used "Study," not "Check."
2. Shigeo Shingo, Zero Quality Control, CT, Productivity Press, 1986, p. 32.
3. Walter A. Shewhart, Economic Control of Quality of Manufactured Product, Princeton, NJ, D. Van
Nostrand, 1931. Walter Shewharts seminal work dates back to the 1930s.
4. "They use a less environmentally friendly system," Tom Meschievitz, director of Paint Engineering,
General MotorsNorth American Operations. "Big 3 in low-pollution paint venture," The Boston Globe, May
26, 1994.
5. K. Ishikawa, What is Total Quality Control? The Japanese Way, Englewood Cliffs, NJ, Prentice-Hall,
1987.
6. It is useful to note the effect of k (the number of standard deviations) on the probability that a sample mean falls within the control limits:
k      Probability   k      Probability
1.00   68.26%        2.00   95.44%
1.64+  90.00%        3.00   99.73%
1.96   95.00%        4.00   99.99%
7. Harold F. Dodge and Harry G. Romig, Sampling Inspection Tables, Single and Double Sampling, NY,
Wiley, 2nd ed., 1959.
8. A classic collection of mathematical tables is in The Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables, U.S. Department of Commerce, National Bureau of Standards, Applied
Mathematics Series 55, June, 1964.
9. Poisson Tables are readily available. They can be used to determine the probability of c or fewer defects,
where c plays the role of the tabled index j.
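The cumulative probabilities such tables report can also be computed directly from the Poisson formula; a minimal sketch, with the mean taken as lam = np (the function name is illustrative):

```python
from math import exp, factorial

def poisson_pa(c, lam):
    """Cumulative Poisson probability P(j <= c) for a mean of lam = n * p,
    summing the Poisson term exp(-lam) * lam**j / j! from j = 0 to c."""
    return sum(exp(-lam) * lam**j / factorial(j) for j in range(c + 1))
```

For example, poisson_pa(1, 1.0) gives the probability of one or fewer defects when the expected number of defects is 1.0.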
PEOPLE - W. Edwards Deming -- The Father of Quality
In the early 1980s, corporate America began to search for ways to combat the intense competition from Japanese auto and
electronics manufacturers. In their search they rediscovered W. Edwards Deming, a key figure in Japan's post-World War
II economic recovery. A statistician with a Yale Ph.D. in mathematical physics, Deming first went to Japan as a U.S. War
Department representative in 1947 to help rebuild the shattered country. He was invited back time after time to teach
business executives and engineers his principles of quality.
Deming preached a systems approach to quality that emphasized identifying and eliminating defects throughout the
production process rather than inspecting for defects at the end of the process. The methods he used, originated by Walter
Shewhart and referred to as statistical process control (SPC), were widely accepted and applied by Japanese companies.
In 1951 Japan established the prestigious Deming Prize, awarded annually to companies worldwide in recognition of
outstanding quality.
A 1980 NBC television documentary entitled "If Japan Can ... Why Can't We?" was largely responsible for American
industry's rediscovery of Deming. The documentary recognized Deming's significant contribution to Japan's success in
developing superior-quality products that were growing ever more popular with American consumers. Shortly afterward,
Ford, General Motors and other U.S. companies sought Deming's help. From that point until his death in 1993, Deming
worked ceaselessly to teach American companies how to achieve quality through continual improvement, teamwork, and
customer satisfaction.
Sources: John A. Byrne, "Remembering Deming, The Godfather of Quality," Business Week, Jan. 10, 1994, p. 44,
and Peter Nulty, "The National Business Hall of Fame," Fortune, Apr. 4, 1994, p. 124.
QUALITY - Copley Pharmaceutical Inc.
Adherence to a rigorous quality control program is important not only to a company's sales, market share, and customer
loyalty. Quality errors resulting in the recall of products can have costly, even disastrous, consequences for a
manufacturer. Just ask Copley Pharmaceutical Inc., a Canton, Massachusetts-based company that manufactures and sells a
variety of prescription and over-the-counter drugs.
Copley recently incurred a $6 million expense in a voluntary recall of one of its asthma medications. Due to suspected
microbial contamination of its 0.5 percent Albuterol Sulphate Inhalation Solution, Copley recalled 20 milliliter vials
produced in certain manufacturing lots. The solution is used by asthma sufferers and patients with other respiratory
problems. Copley said sales of the product were "significant" to its operations, accounting for about 14 to 16 percent of its
total sales.
Copley was the first company to market with the product, notes Natwest's Jack Lamberton, an industry observer. "The
first generic to get shelf space usually keeps it." However, the recall provided an opportunity for Copley's main
competitor, Schering-Plough Corp. "Schering will fill the vacuum," Lamberton says. "It is unlikely Copley will ever get
back the sales."
By Copley's conservative estimate, the 0.5 percent Albuterol will be back on the market within six months of the recall.
Source: "Copley says Albuterol recall to cost $6 million," Reuter Business Report, January 6, 1994.
TECHNOLOGY - U.S. Postal Service Uses Optical Scanning to Speed Mail Sorting
In recent years, optical scanning has become commonplace in grocery stores, department stores, and other retail
businesses. The use of optical scanning equipment, which reads bar codes affixed to products, improves the speed and
accuracy of the customer check-out process. It also provides a continually updated record of sales and inventories. Many
non-retail entities also use optical scanning to increase efficiency and reduce costs. The United States Postal Service, for
example, uses scanning and other technology extensively in its operations.
Optical scanning greatly speeds up the mail sorting process. One person can manually sort about 500 pieces of mail an
hour, but a scanning machine can sort 30,000 to 40,000 pieces an hour. In addition, the cost of sorting by hand is
approximately $42 per 1,000 pieces, compared to about $4 per 1,000 pieces for automated sorting. The time and cost
savings from optical scanning are enormous given the volume of mail handled each day, about one-half billion pieces.
An estimated 40 percent of the mail is bar coded when received by the Postal Service and can be scanned immediately.
Another 40 percent bears typed or printed addresses that can be read by sophisticated computers, called optical character
readers. After reading the address, this same computer bar codes the mail, which can then be sorted by the scanning
equipment. Mail with handwritten addresses or incomplete addresses goes through further electronic processing to supply
the necessary bar-coding information. This processing is often done by private companies, such as DynCorp, operating at
various sites throughout the country. This processing approach is viewed by postal officials as an interim step until
technology is developed to the point that computers can read handwritten mail.
Source:"DynCorp Takes High-Tech to Mail-Carrying Business," Central Penn Business Journal, April 4, 1994.