Discrete Data - Attribute R&R
MSA Example: Attribute or Categorical Data
Definitions
Accuracy: Overall agreement of the measured value with the true value (which may be an
“expert” value). Bias plus precision.
Attribute Measurement System: Compares parts to a specific set of criteria and accepts the
item if the criteria are satisfied.
Bias: A systematic difference from the true value. Revealed in the differences in averages from
the true value.
Repeatability: The variation observed when the same operator measures the same item
repeatedly with the same device.
Reproducibility: The variation observed when different operators measure the same parts using
the same device; it can also arise when the same operator uses different devices.
The list provides a quick reference for key terms used in Measurement System Analysis.
Variation

Product Variability + Variability in the Measurement Process = Total Variability (Observed Variability)
MSA for Continuous Processes 3 .PPT All Rights Reserved, Juran Institute, Inc.
Quantifying Variation
Like all processes, the measurement process has CTQs. Listed below are
some of the most common CTQs used for the measurement process. MSA
quantifies the amount of variation for:
Accuracy
Repeatability
Reproducibility
Stability (typically covered in the Black Belt workshop)
Linearity (covered in the Black Belt workshop)
Bias
[Figure: bias shown as the difference between the true value (or standard) and the observed average.]

Possible causes of bias:
Sensor not properly calibrated
Improper use of sensor
Unclear procedures
Human limitations
Bias
Bias is the difference between the observed average of measurements and the true
average. Validating accuracy is the process of quantifying the amount of bias in the
measurement process. Experience has shown that bias and linearity are typically
not major sources of measurement error for continuous data, but they can be.
In service and transaction applications, evaluating bias most often involves testing
the judgment of people carrying out the measurements.
Example
A team wants to establish the accuracy of its process to measure defects in
invoices. First, they gather a “standard” group of invoices and have an “expert”
panel establish the type and number of defects in the group. Next, they have the
standard group of invoices measured by the “normal” measurement process.
The difference between the average the measurement process produced and the
known defect level established by the expert panel represents the bias of the
measurement process.
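As a rough illustration, the bias in the invoice example could be quantified like this. This is a minimal sketch with made-up numbers, not data from the study; the defect counts are hypothetical:

```python
# Hypothetical defect counts per invoice in the "standard" group:
expert_defects = [3, 1, 4, 2, 5]     # true counts, established by the expert panel
measured_defects = [2, 1, 3, 2, 4]   # counts from the normal measurement process

true_avg = sum(expert_defects) / len(expert_defects)
observed_avg = sum(measured_defects) / len(measured_defects)

# Bias = observed average minus true average.
bias = observed_avg - true_avg
print(bias)  # negative here: the measurement process undercounts defects
```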
Repeatability

Possible causes of poor repeatability:
Equipment: the gage/instrument needs maintenance; the gage needs to be more rigid
People: environmental conditions (lighting, noise); physical conditions (eyesight)
Reproducibility

[Figure: reproducibility shown as the difference between the mean of Operator A's measurements and the mean of Operator B's measurements.]

Possible causes of poor reproducibility:
The measurement procedure is not clear
Listed here are the key highlights of conducting an MSA for attribute or categorical
data. The "parts" can be, for example, invoices, physical parts, or reason codes for
customer returns.
This shows the results of 2 rounds using 2 appraisers, assessing the same 20
items.
[Figure: the glass production area: Packers, Catwalk, Cutter, Inspector, Glass]
There were two outcomes in this inspection or measurement process: pass or fail.
Twenty pieces, a team of inspectors, and two rounds (or trials) were used in the
MSA.
[Excerpt of the full data for 20 inspectors]
This slide shows the data in MINITAB®. The “Standard” column documents the
correct or expert answer for each piece of glass.
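The agreement statistics that Minitab reports can be sketched by hand. This is a minimal sketch using hypothetical pass/fail data (two made-up appraisers, five items, two trials each), not the actual study data:

```python
# Expert answer for each item:
standard = ["P", "F", "P", "P", "F"]

# Each appraiser rates the same five items in two trials:
ratings = {
    "Appraiser1": {"trial1": ["P", "F", "P", "P", "F"],
                   "trial2": ["P", "F", "P", "F", "F"]},
    "Appraiser2": {"trial1": ["P", "F", "F", "P", "F"],
                   "trial2": ["P", "F", "F", "P", "F"]},
}

def within_agreement(trials):
    """Fraction of items rated identically in both trials (repeatability)."""
    t1, t2 = trials["trial1"], trials["trial2"]
    return sum(a == b for a, b in zip(t1, t2)) / len(t1)

def vs_standard(trials, standard):
    """Fraction of items where every trial matches the expert standard."""
    t1, t2 = trials["trial1"], trials["trial2"]
    return sum(a == b == s for a, b, s in zip(t1, t2, standard)) / len(standard)

for name, trials in ratings.items():
    print(name, within_agreement(trials), vs_standard(trials, standard))
```

Within-appraiser agreement reflects repeatability; agreement with the standard reflects each appraiser's accuracy.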
[Figure: Minitab Assessment Agreement plots for the appraisers, with Appraiser on the x-axis and Percent (roughly 40 to 80) on the y-axis.]
The graph on the left shows the agreement (or repeatability) of each appraiser
between Trial 1 and Trial 2.
The graph on the right shows the agreement of each appraiser with the Standard.
On both graphs, the blue dots show the percent agreement and the
red lines show the 95% confidence intervals.
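For reference, a confidence interval around a percent-agreement point can be approximated with a Wilson score interval. Minitab computes exact binomial intervals; this is only a hand-rolled approximation for illustration, with made-up counts:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# e.g. an appraiser who rated 16 of 20 items consistently:
lo, hi = wilson_ci(16, 20)
print(round(lo, 3), round(hi, 3))
```

With only 20 items, the interval is wide, which is why the plots show long red lines around each dot.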
This slide shows the agreement of each Appraiser (across both trials) with the
Standard.
For example, Larry has 89% agreement with the Standard, but Allen has only 39%
agreement with the Standard.
This shows the level of agreement across all Appraisers. In this case, only 5.56%
agreement!
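"Agreement across all appraisers" can be read as the fraction of items on which every appraiser gave the same rating in every trial. A minimal sketch with made-up data (three hypothetical appraisers, three items, two trials each), not the study's 20 glass pieces:

```python
# Each appraiser has two trials; each trial rates the same three items.
ratings = {
    "A": [["P", "F", "P"], ["P", "F", "P"]],
    "B": [["P", "F", "F"], ["P", "F", "F"]],
    "C": [["P", "F", "P"], ["P", "P", "P"]],
}
n_items = 3

agree = 0
for i in range(n_items):
    # Collect every rating given to item i, across all appraisers and trials.
    values = {trial[i] for trials in ratings.values() for trial in trials}
    if len(values) == 1:  # everyone, on every trial, said the same thing
        agree += 1

print(agree / n_items)  # only item 0 gets unanimous ratings here
```

Even a single inconsistent rating on an item removes it from the count, which is why overall agreement can collapse to a very low figure like 5.56% despite individual appraisers looking reasonable.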
Given the results of the MSA study, what could have caused the poor agreement?
And what should be done to improve the measurement system?
The measurement system must be improved and tested again (with another MSA
study) to reach at least 90% agreement before the data can be used for baselining
process performance or further analysis.