MSA Example: Attribute or Categorical Data
This section provides a quick reference for key terms used in Measurement System Analysis.
[Figure: Total (Observed) Variability = Product/Process Variability + Variation in the measurement process]
Quantifying Variation
Like all processes, the measurement process has CTQs. The figure above lists some of the most common CTQs for the measurement process (a variance decomposition for that figure is sketched just after this list). MSA quantifies the amount of variation for:
Accuracy
Repeatability
Reproducibility
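As background (standard Gage R&R notation, not spelled out on the slide), the decomposition in the figure above is usually written as a sum of variance components, assuming the sources of variation are independent:

```latex
% Observed (total) variation = actual process variation
% + variation introduced by the measurement system itself
\sigma^2_{\text{total}} = \sigma^2_{\text{process}} + \sigma^2_{\text{measurement}}

% The measurement term splits further into repeatability
% (within-operator) and reproducibility (between-operator) components:
\sigma^2_{\text{measurement}} = \sigma^2_{\text{repeatability}} + \sigma^2_{\text{reproducibility}}
```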
[Figure: Bias — the gap between the Observed Average of measurements and the True Value (Standard); human limitations are one contributor]
Bias
Bias is the difference between the observed average of measurements and the true
average. Validating accuracy is the process of quantifying the amount of bias in the
measurement process. Experience has shown that bias and linearity are typically
not major sources of measurement error for continuous data, but they can be.
In service and transaction applications, evaluating bias most often involves testing
the judgment of people carrying out the measurements.
Example
A team wants to establish the accuracy of its process to measure defects in
invoices. First, they gather a standard group of invoices and have an expert
panel establish the type and number of defects in the group. Next, they have the
standard group of invoices measured by the normal measurement process.
The difference between the average defect count produced by the measurement process and the known defect level from the expert panel represents the bias of the measurement process.
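As an illustrative sketch only (the counts and the expert-panel value below are assumed, not from the original example), the bias calculation could look like this:

```python
# Hypothetical bias check for the invoice example.
# The expert panel established the true defect count for the standard
# group of invoices; the normal measurement process then counted
# defects in the same group several times.
true_defects = 12  # expert-panel answer (assumed value)
measured_counts = [10, 11, 9, 10, 12, 10]  # assumed repeated measurements

observed_average = sum(measured_counts) / len(measured_counts)
bias = observed_average - true_defects

print(f"Observed average: {observed_average:.2f}")
print(f"Bias: {bias:+.2f} defects")  # negative => the process undercounts
```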
[Slide graphic: possible causes of poor repeatability — people, environmental conditions (lighting, noise), physical conditions (eyesight)]
Repeatability
Repeatability is the variation in measurements obtained when one operator uses
the same measurement process for measuring the identical characteristics of the
same parts or items.
Repeatability is determined by taking one person, or one measurement device, and
measuring the same units or items repeatedly. Differences between the repeated
measurements represent the ability of the person or measurement device to be
consistent.
Possible causes of poor repeatability are listed in the graphic above.
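A minimal sketch (my own illustration, with assumed ratings) of quantifying repeatability for attribute data: one appraiser rates the same ten items in two trials, and repeatability is the share of items rated the same way both times:

```python
# Hypothetical within-appraiser (repeatability) check for attribute
# data: one appraiser, two trials, same ten items.
trial_1 = ["G", "G", "NG", "G", "G", "NG", "G", "G", "G", "NG"]  # assumed
trial_2 = ["G", "G", "NG", "G", "NG", "NG", "G", "G", "G", "G"]  # assumed

matches = sum(a == b for a, b in zip(trial_1, trial_2))
repeatability_pct = 100 * matches / len(trial_1)
print(f"Within-appraiser agreement: {repeatability_pct:.1f}%")  # 80.0%
```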
[Figure: Reproducibility — the difference between the mean of the measurements of Operator A and the mean of the measurements of Operator B]
Reproducibility
Reproducibility is very similar to repeatability. The only difference is that instead of
looking at the consistency of one person, you are looking at the consistency
between people.
Reproducibility is the variation in the average of measurements made by different
operators using the same measurement process when measuring identical
characteristics of the same parts or items.
Possible causes of poor reproducibility include: the measurement process is not clear, operators are not properly trained in using the measurement system, and operational definitions are unclear or not well established.
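A similar sketch for reproducibility, mirroring the figure above: two operators measure the same items, and the gap between their averages indicates reproducibility error (all numbers are assumed):

```python
# Hypothetical reproducibility check: two operators measure the same
# five items; the offset between their means indicates reproducibility
# error. All values are invented for illustration.
operator_a = [10.1, 9.8, 10.0, 10.3, 9.9]
operator_b = [10.6, 10.4, 10.7, 10.5, 10.8]

mean_a = sum(operator_a) / len(operator_a)
mean_b = sum(operator_b) / len(operator_b)
print(f"Mean A: {mean_a:.2f}, Mean B: {mean_b:.2f}")
print(f"Between-operator offset: {abs(mean_a - mean_b):.2f}")
```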
Listed here are the key highlights of conducting an MSA for attribute or categorical data. The items assessed can be, for example, invoices, parts, or reason codes for customer returns.
[Table: attribute MSA results — Appraiser A and Appraiser B each rated the same 20 items in two trials; each rating is G or NG]
G = Good
NG = Not Good
This shows the results of 2 rounds using 2 appraisers, assessing the same 20 items.
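As an illustration only (the slide's actual ratings are not reproduced here), within-appraiser agreement for a two-appraiser, two-trial study of 20 items could be tallied like this:

```python
# Hypothetical 2-appraiser x 2-trial attribute MSA data for 20 items.
# Each appraiser maps to a (trial 1, trial 2) pair of rating lists.
ratings = {
    "A": (["G"] * 15 + ["NG"] * 5, ["G"] * 14 + ["NG"] * 6),  # assumed
    "B": (["G"] * 16 + ["NG"] * 4, ["G"] * 16 + ["NG"] * 4),  # assumed
}

for appraiser, (t1, t2) in ratings.items():
    matched = sum(a == b for a, b in zip(t1, t2))
    pct = 100 * matched / len(t1)
    print(f"Appraiser {appraiser}: {pct:.0f}% within-appraiser agreement")
```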
[Diagram: glass production line — glass, cutter, catwalk, inspector, packers]
There were two outcomes in this inspection or measurement process: pass or fail.
Twenty pieces, a team of inspectors, and two rounds (or trials) were used in the
MSA.
[Screenshot: MINITAB worksheet — excerpt of the full data for 20 inspectors]
This slide shows the data in MINITAB. The Standard column documents the
correct or expert answer for each piece of glass.
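For context, attribute-agreement data like this is typically arranged in "long" form, one row per rating, before analysis. This is a hedged sketch with invented values and column names, not the actual worksheet; the Standard column mirrors the expert answer as described above:

```python
import pandas as pd

# Hypothetical long-format layout for an attribute agreement study:
# one row per (piece, inspector, trial) rating, plus the expert answer.
data = pd.DataFrame({
    "Piece":     [1, 1, 1, 1, 2, 2, 2, 2],
    "Inspector": ["Larry", "Larry", "Allen", "Allen"] * 2,
    "Trial":     [1, 2, 1, 2] * 2,
    "Rating":    ["Pass", "Pass", "Fail", "Pass",
                  "Pass", "Pass", "Pass", "Pass"],  # assumed ratings
    "Standard":  ["Pass"] * 8,  # expert answer, repeated per row
})
print(data)
```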
[Figure: MINITAB "Assessment Agreement" graphs — left panel "Within Appraisers", right panel "Appraiser vs Standard"; y-axis "Percent" (roughly 40 to 100) with 95.0% CI markers; x-axis "Appraiser", labeled with each appraiser's name]
The graph on the left shows the agreement (or repeatability) of each appraiser
between Trial 1 and Trial 2.
The graph on the right shows the agreement of each appraiser with the Standard.
On both graphs, the blue dots show the percent agreement and the red lines are the 95% confidence intervals.
This slide shows the agreement of each Appraiser (across both trials) with the
Standard.
For example, Larry has 89% agreement with the Standard, but Allen has only 39% agreement with the Standard.
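A sketch of how such a per-appraiser figure can be computed (assumed data; in this statistic, an appraiser gets credit for a piece only when every one of their trials matches the Standard):

```python
# Hypothetical appraiser-vs-standard agreement: an appraiser gets
# credit for a piece only if BOTH of their trials match the expert answer.
standard = ["Pass", "Pass", "Fail", "Pass", "Fail",
            "Pass", "Pass", "Fail", "Pass", "Pass"]  # assumed
trial_1  = ["Pass", "Pass", "Fail", "Pass", "Pass",
            "Pass", "Pass", "Fail", "Pass", "Pass"]  # assumed
trial_2  = ["Pass", "Pass", "Fail", "Pass", "Fail",
            "Pass", "Fail", "Fail", "Pass", "Pass"]  # assumed

agreed = sum(
    t1 == s and t2 == s
    for s, t1, t2 in zip(standard, trial_1, trial_2)
)
print(f"Appraiser vs Standard: {100 * agreed / len(standard):.0f}%")  # 80%
```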
This shows the level of agreement across all Appraisers: in this case, only 5.56%! This is the strictest statistic in the study, since a piece counts as agreement only when every appraiser gives the same rating on every trial, so it is typically far lower than any individual appraiser's score.
Given the results of the MSA study, what could have caused the poor agreement?
And what should be done to improve the measurement system?
The measurement system must be improved and tested again (with another MSA
study) to reach at least 90% agreement before the data can be used for baselining
process performance or further analysis.