Factors affecting the measurement result:
* Standard
* Work piece
* Instrument
* Person
* Procedure
* Environment

Measurement system errors:
* LOCATION: Bias, Linearity, Stability
* SPREAD: Repeatability, Reproducibility
BIAS
"The difference between the average of measured values and the true value (reference value)."
BIAS = OBSERVED VALUE - REFERENCE VALUE
Procedure to determine bias:
5. Compute average bias
6. Compute repeatability standard deviation
7. Determine acceptability of repeatability
8. Determine bias standard error
9. Determine t statistic & confidence limits of bias
10. Decision making
STEP 3: BIAS = Xi - REFERENCE VALUE

TRIAL   TRUE VALUE (REFERENCE)   OBSERVED VALUE   BIAS
1       6                        5.8              -0.2
2       6                        5.7              -0.3
3       6                        5.9              -0.1
4       6                        5.9              -0.1
5       6                        6.0               0.0
6       6                        6.1               0.1
7       6                        6.0               0.0
8       6                        6.1               0.1
9       6                        6.4               0.4
10      6                        6.3               0.3
11      6                        6.0               0.0
12      6                        6.1               0.1
13      6                        6.2               0.2
14      6                        5.6              -0.4
15      6                        6.0               0.0

SUM OF BIAS = 0.1
STEP 5: Average bias = Sum of bias / No. of trials = 0.1 / 15 = 0.0067
STEP 6: Compute repeatability standard deviation
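Steps 3, 5 and 6 can be sketched in Python using the table data above (the use of the sample standard deviation of the readings for step 6 is an assumption about the intended method):

```python
import statistics

# Bias study data from the table above: 15 readings of one reference part
reference = 6.0
observed = [5.8, 5.7, 5.9, 5.9, 6.0, 6.1, 6.0, 6.1,
            6.4, 6.3, 6.0, 6.1, 6.2, 5.6, 6.0]

bias = [x - reference for x in observed]       # Step 3: bias per trial
avg_bias = sum(bias) / len(bias)               # Step 5: 0.1 / 15 ~ 0.0067
repeatability_sd = statistics.stdev(observed)  # Step 6: repeatability std. deviation
```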
LINEARITY
LINEARITY STEPS:
1. Determine the process range
2. Select reference samples
3. Determine reference values
4. Calculate bias
5. Check the linear relation
6. Determine repeatability error
7. Draw the confidence band
8. Draw the best-fit line
9. Take decision
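The "calculate bias / check linear relation" steps can be sketched as a least-squares fit of average bias against reference value; the reference values and biases below are hypothetical illustration data, not from the study above:

```python
# Hypothetical: average bias measured at reference samples spanning the process range
ref = [2.0, 4.0, 6.0, 8.0, 10.0]
avg_bias = [0.25, 0.15, 0.01, -0.12, -0.25]

# Least-squares best-fit line of bias vs. reference value;
# a slope near zero means bias does not drift across the range (good linearity)
n = len(ref)
mean_x, mean_y = sum(ref) / n, sum(avg_bias) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(ref, avg_bias))
sxx = sum((x - mean_x) ** 2 for x in ref)
slope = sxy / sxx
intercept = mean_y - slope * mean_x
```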
STABILITY
"Stability is the change of Bias over time"
The total variation in measurements obtained with a measurement system:
* on the same master or parts
* when measuring a single characteristic
* over an extended time period.
Procedure to determine stability :-
* Selection of reference standard : Refer bias study
* Establish reference value
* Data collection :
* Decide subgroup size
* Decide subgroup frequency
* Collect data for 20-25 subgroups
* Analysis :
* Determine control limits for X bar - R chart
* Plot data on chart
* Analyze for any out of control situation.
* Decision :
The measurement system is stable if no out-of-control situation is observed; otherwise it is not stable and needs improvement.
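The analysis step can be sketched in Python. The subgroup data below is hypothetical, and a subgroup size of 5 is assumed (A2, D3, D4 are the standard control-chart constants for n = 5); a real study would use 20-25 subgroups as stated above:

```python
# X bar - R control limits for a stability study (subgroup size n = 5 assumed)
A2, D3, D4 = 0.577, 0.0, 2.114   # standard control-chart constants for n = 5

subgroups = [
    [6.0, 6.1, 5.9, 6.0, 6.1],   # hypothetical repeated measurements of one master
    [6.1, 6.0, 6.0, 5.9, 6.0],
    [5.9, 6.0, 6.1, 6.0, 6.0],
]

xbars = [sum(s) / len(s) for s in subgroups]    # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges
xbarbar = sum(xbars) / len(xbars)               # grand average
rbar = sum(ranges) / len(ranges)                # average range

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar   # X bar chart limits
ucl_r, lcl_r = D4 * rbar, D3 * rbar                        # R chart limits

# Stable if no point falls outside the control limits
stable = all(lcl_x <= x <= ucl_x for x in xbars) and all(r <= ucl_r for r in ranges)
```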
Repeatability
The variation in measurement obtained :
* With one measurement instrument
* When used several times
* By one appraiser
* While measuring the same characteristic
* On the same part.
Reproducibility
"The variation in the average of measurements"
* Made by different appraisers
* Using the same measuring instrument
* When measuring the identical characteristic
* On the same parts
σ²(GRR) = σ²(EV) + σ²(AV)
R & R - STUDY
Three methods:
1. Range method
2. Average and range (X bar - R) method
3. ANOVA method (preferable when an appropriate computer program is available)
RANGE CHARTS
UCLr = D4 * R double bar = 3.27 * 0.0013 = 0.0043 (D4 = 3.27 for 2 trials, 2.58 for 3 trials)
LCLr = D3 * R double bar = 0 * 0.0013 = 0 (D3 = 0 for fewer than 7 trials)
AVERAGE CHARTS
FORMULAS
● REPEATABILITY (EV) = R double bar * K1, where K1 = 0.8862 (2 trials), 0.5908 (3 trials)
● REPRODUCIBILITY (AV) = √[(X bar diff * K2)² - (EV)²/(n*r)], where K2 = 0.7071 (2 appraisers), 0.5231 (3 appraisers); n = no. of parts, r = no. of trials
● GRR = √[(EV)² + (AV)²]
● PART-TO-PART VARIATION (PV) = Rp * K3
● TOTAL VARIATION (TV) = √[(GRR)² + (PV)²]
● % EV = 100 (EV / TV)
● % AV = 100 (AV / TV)
● % GRR = 100 (GRR / TV)
● % PV = 100 (PV / TV)
● ndc = 1.41 (PV / GRR)
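The formulas above can be sketched as follows. The study values (R double bar, X bar diff, Rp) are hypothetical, and K3 = 0.3146 assumes a 10-part study:

```python
import math

# Hypothetical X bar - R study values: average range, difference between
# appraiser averages, and range of part averages
r_bar, xbar_diff, rp = 0.0013, 0.0010, 0.0120
n_parts, n_trials = 10, 2
K1, K2, K3 = 0.8862, 0.7071, 0.3146   # 2 trials, 2 appraisers, 10 parts

ev = r_bar * K1                                        # repeatability (equipment variation)
av = math.sqrt(max((xbar_diff * K2) ** 2
                   - ev ** 2 / (n_parts * n_trials), 0.0))  # clamped at zero if negative
grr = math.sqrt(ev ** 2 + av ** 2)
pv = rp * K3                                           # part-to-part variation
tv = math.sqrt(grr ** 2 + pv ** 2)                     # total variation

pct_grr = 100 * grr / tv
ndc = 1.41 * pv / grr                                  # number of distinct categories
```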
MSA analysis by Probability method - ATTRIBUTE
PROCEDURE :-
Points to consider before the study:
1. Number all the parts.
2. Identify the appraisers from those who operate the gauge.
3. Give one part to one appraiser in random order (in such a way that the appraiser cannot identify the part number).
4. Then give all parts to the different appraisers in different orders.
5. Repeat the steps and record the results.
Select n (more than 12) parts:
1. Approximately 25% close to the lower specification limit (conforming & non-conforming).
2. Approximately 25% close to the upper specification limit (conforming & non-conforming).
3. The remainder both conforming & non-conforming.
4. Note down the correct measurement attribute (true status).
5. Decide the number of appraisers & number of trials.
6. Record the measurement results in the data sheet.
TYPE 1 ERROR: Calling a good part bad. Also called producer's risk or alpha error.
TYPE 2 ERROR: Calling a bad part good. Also called consumer's risk or beta error.
MSA FOR ATTRIBUTE DATA BY PROBABILITY METHOD

No. of   True      Appraiser A        Appraiser B
parts    status    T1   T2   T3       T1   T2   T3
1        G         G    G    G        G    G    G
2        B         B    G    B        G    G    B
3        G         G    G    G        G    G    G
4        B         G    B    B        G    B    G
5        G         G    G    G        G    G    G
6        G         G    B    G        G    G    G
7        B         B    G    B        B    B    B
8        G         G    B    G        G    G    G
9        G         G    G    G        G    G    G
10       B         B    B    G        G    B    B
11       B         G    B    B        B    B    B
12       G         G    G    G        G    G    G
13       B         B    B    B        B    B    B
14       G         G    G    G        G    G    G
15       B         B    B    B        B    B    B
16       B         G    B    B        B    B    B
17       G         G    G    G        G    G    G
18       G         G    B    G        G    B    G
19       G         G    B    B        B    B    B
20       B         B    B    B        B    B    B

FOR APPRAISER A
1. Calling good as good: 11 (T1) + 7 (T2) + 10 (T3) = 28
2. Calling good as bad (type 1 error): 0 + 4 + 1 = 5
3. Calling bad as good (type 2 error): 3 + 2 + 1 = 6
4. Calling bad as bad: 6 + 7 + 8 = 21

FOR APPRAISER B
1. Calling good as good: 10 + 9 + 10 = 29
2. Calling good as bad (type 1 error): 1 + 2 + 1 = 4
3. Calling bad as good (type 2 error): 3 + 1 + 1 = 5
4. Calling bad as bad: 6 + 8 + 8 = 22

TRUE STATUS       Appraiser A   Appraiser B   TOTAL COUNT
Calling G as G    28          + 29          = 57
Calling G as B    5           + 4           = 9
Calling B as G    6           + 5           = 11
Calling B as B    21          + 22          = 43

Total correct decisions = 57 (G as G) + 43 (B as B) = 100
Total decisions = 20 parts * 2 appraisers * 3 trials = 120
PROBABILITY METHOD
Effectiveness (E) = Total correct decisions / Total decisions
= (57 + 43) / 120 = 100 / 120 = 0.833

Pmiss (Pm) = Total misses / Total opportunities for a miss
= 11 (type 2 errors) / (43 + 11) = 11 / 54 = 0.204
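The two probability-method metrics follow directly from the tallies above:

```python
# Effectiveness and miss rate from the attribute-study tallies above
total_decisions = 20 * 2 * 3               # 20 parts x 2 appraisers x 3 trials = 120
correct = 57 + 43                          # calling G as G plus calling B as B
effectiveness = correct / total_decisions  # 100 / 120 = 0.833

misses = 11                                # bad parts called good (type 2 errors)
miss_opportunities = 43 + 11               # every decision made on a truly bad part
p_miss = misses / miss_opportunities       # 11 / 54 = 0.204
```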
KAPPA METHOD

No. of   True      Appraiser A        Appraiser B
parts    status    T1   T2   T3       T1   T2   T3
5        B         B    B    B        B    B    B
6        B         B    G    B        B    B    B
7        G         G    B    G        G    G    G
8        B         B    G    B        B    B    B
9        B         B    B    B        B    B    B
10       G         G    G    B        B    G    G
11       G         B    G    G        G    G    G
12       B         B    B    B        B    B    B
13       G         G    G    G        G    G    G
14       B         B    B    B        B    B    B
15       G         G    G    G        G    G    G
16       B         B    G    G        G    G    G
17       B         B    B    B        B    B    B
18       B         B    G    B        B    G    B
19       B         B    G    G        G    G    G
20       G         G    G    G        G    G    G

A * B cross-tabulation counts:
1. When appraiser A declared bad, appraiser B also declared it bad. No. of counts = 30
2. When appraiser A declared bad, appraiser B declared it good. No. of counts = 4
3. When appraiser A declared good, appraiser B also declared it good. No. of counts = 22
4. When appraiser A declared good, appraiser B declared it bad. No. of counts = 4

Kappa b/w true status & appraiser A:
1. When true status is bad, appraiser A also declared it bad. No. of counts = 28
2. When true status is good, appraiser A also declared it good. No. of counts = 21
3. When true status is good, but appraiser A declared it bad. No. of counts = 6
4. When true status is bad, but appraiser A declared it good. No. of counts = 5
Expected count = Row total * Column total / Grand total:
1. Expected count = 34 (Row total 1) * 34 (Column total 1) / 60 = 19.3
2. Expected count = 34 (Row total 1) * 26 (Column total 2) / 60 = 14.7
3. Expected count = 26 (Row total 2) * 34 (Column total 1) / 60 = 14.7
4. Expected count = 26 (Row total 2) * 26 (Column total 2) / 60 = 11.3
Calculate Kappa (A * B cross tabulation)
P0 = sum of observed proportions in diagonal cells = (30 + 22) / 60 = 52 / 60 = 0.867
Pe = sum of expected proportions in diagonal cells = (19.3 + 11.3) / 60 = 30.6 / 60 = 0.51
Kappa = (P0 - Pe) / (1 - Pe) = (0.867 - 0.51) / (1 - 0.51) = 0.357 / 0.49 = 0.728
Kappa more than 0.75: good agreement
Kappa less than 0.40: poor agreement
INFERENCE: The kappa observed between appraisers A & B is 0.728, close to 0.75, indicating good agreement between the appraisers (acceptable).
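The kappa arithmetic above can be sketched as:

```python
# Cohen's kappa from the A * B cross tabulation above
table = [[30, 4],    # A says bad:  B says bad, B says good
         [4, 22]]    # A says good: B says bad, B says good

n = sum(sum(row) for row in table)              # 60 decisions
row_totals = [sum(row) for row in table]        # [34, 26]
col_totals = [sum(col) for col in zip(*table)]  # [34, 26]

p0 = (table[0][0] + table[1][1]) / n            # observed agreement, 52/60
pe = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2  # expected agreement
kappa = (p0 - pe) / (1 - pe)   # ~0.73; the slides round Pe to 0.51 and report 0.728
```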
Kappa method b/w true status and appraiser A
A * True cross tabulation (Expected count = Row total * Column total / Grand total)

                                  True status
                                  (B) Bad     (G) Good    Total
Appraiser A  (B) Bad   Count      28          6           34 (RT1)
                       Expected   18.7        15.3
Appraiser A  (G) Good  Count      5           21          26 (RT2)
                       Expected   14.3        11.7
Total                  Count      33 (CT1)    27 (CT2)    60
Expected counts:
1. Expected count = 34 (Row total 1) * 33 (Column total 1) / 60 = 18.7
2. Expected count = 34 (Row total 1) * 27 (Column total 2) / 60 = 15.3
3. Expected count = 26 (Row total 2) * 33 (Column total 1) / 60 = 14.3
4. Expected count = 26 (Row total 2) * 27 (Column total 2) / 60 = 11.7
Calculate Kappa (A * True status tabulation)
P0 = sum of observed proportions in diagonal cells = (28 + 21) / 60 = 49 / 60 = 0.817
Pe = sum of expected proportions in diagonal cells = (18.7 + 11.7) / 60 = 30.4 / 60 = 0.507
Kappa = (P0 - Pe) / (1 - Pe) = (0.817 - 0.507) / (1 - 0.507) = 0.310 / 0.493 = 0.628
Kappa more than 0.75: good agreement
Kappa less than 0.40: poor agreement
Kappa method b/w true status and appraiser B
B * True cross tabulation (Expected count = Row total * Column total / Grand total)

                                  True status
                                  (B) Bad     (G) Good    Total
Appraiser B  (B) Bad   Count      29          5           34 (RT1)
                       Expected   18.7        15.3
Appraiser B  (G) Good  Count      4           22          26 (RT2)
                       Expected   14.3        11.7
Total                  Count      33 (CT1)    27 (CT2)    60
Expected counts:
1. Expected count = 34 (Row total 1) * 33 (Column total 1) / 60 = 18.7
2. Expected count = 34 (Row total 1) * 27 (Column total 2) / 60 = 15.3
3. Expected count = 26 (Row total 2) * 33 (Column total 1) / 60 = 14.3
4. Expected count = 26 (Row total 2) * 27 (Column total 2) / 60 = 11.7
Calculate Kappa (B * True status tabulation)
P0 = sum of observed proportions in diagonal cells = (29 + 22) / 60 = 51 / 60 = 0.85
Pe = sum of expected proportions in diagonal cells = (18.7 + 11.7) / 60 = 30.4 / 60 = 0.507
Kappa = (P0 - Pe) / (1 - Pe) = (0.85 - 0.507) / (1 - 0.507) = 0.343 / 0.493 = 0.696
Kappa more than 0.75: good agreement
Kappa less than 0.40: poor agreement
DEFINITIONS
1. True value: the actual value of an artifact; unknown and unknowable.
2. Reference value: an accepted value of an artifact, used as a surrogate for the true value.
3. Uncertainty: an estimated range of values about the measured value within which the true value is believed to be contained.
4. Gauge: any device used to obtain measurements; frequently used to refer specifically to devices used on the shop floor, including GO / NO-GO devices.
5. Discrimination: the ability of a measurement system to detect the smallest difference between values.
6. Measurement: the assignment of numbers (values) to material things to represent the relationships among them with respect to particular properties.
7. Calibration: a set of operations that establishes, under specified conditions, the relationship between a measuring device and a traceable standard of known reference value and uncertainty.
8. Validation: confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.
THANKS