
Reference :- MSA

MACE material prepared by :- Nitesh Pandey


→ What is measurement?
ANS :- Assignment of numbers ( values ) to material things against a given standard is called measurement.
→ What is a measurement system?
ANS :- A measurement system is the complete measurement process.

INPUT → PROCESS ( Measurement ) → OUTPUT ( Measurement result ) → Analysis → Decision ( Action )

Inputs :
* Standard
* Work piece
* Instrument
* Person
* Procedure
* Environment

SWIPPE denotes the measurement system inputs that affect the measurement result.
→ What is measurement system analysis?
ANS :- The study of the effect of the measurement system on measurement results, and the assessment of its suitability for product or process control.
PROPERTIES OF A GOOD MEASUREMENT SYSTEM
1. Adequate discrimination ( resolution ) 2. Under statistical control 3. Accuracy 4. Precision
DISCRIMINATION :- The ability to measure the smallest difference.
* It should be small relative to 1. the process variation and 2. the specification limits ( tolerance ).
* The rule of 1/10 should be followed as a starting point.
* The least count ( resolution ) of the equipment should be 1/10 of the process variation ( 10 data categories ).
* Data categories :- The number of distinct groups into which measurement results can be resolved by the measurement system.
EXAMPLE :-
* Process variation : 3.94 - 4.06 mm
* Equipment : Vernier caliper, least count 0.02 mm
* Possible readings : 3.94, 3.96, 3.98, 4.00, 4.02, 4.04, 4.06
* Data categories : 7
* Here process variation = 4.06 - 3.94 = 0.12 mm
* Applying the 1/10 rule : ( 1/10 ) × 0.12 = 0.012 mm. But the vernier caliper has a least count of 0.02 mm, hence it cannot adequately resolve this variation, because 0.02 > 0.012.
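As a minimal sketch, the 1/10-rule check above can be written in a few lines of code; the numbers are the ones from the example:

```python
# 1/10 rule: the instrument's least count should be at most
# one tenth of the process variation it must resolve.

def passes_tenth_rule(process_variation: float, least_count: float) -> bool:
    """Return True if the instrument resolution satisfies the 1/10 rule."""
    return least_count <= process_variation / 10

process_variation = 4.06 - 3.94   # 0.12 mm
least_count = 0.02                # vernier caliper resolution, mm

print(process_variation / 10)     # required resolution (approx. 0.012 mm)
print(passes_tenth_rule(process_variation, least_count))  # False: 0.02 > 0.012
```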
UNDER STATISTICAL CONTROL :- There should be no special-cause variation within the measurement system.

ACCURACY :- Closeness to the true value. Lack of accuracy causes location error.

PRECISION :- Closeness of repeated measurements to each other. Lack of precision causes spread error.


Types of measurement system error

Measurement system error :
* Location errors : Bias, Linearity, Stability
* Spread errors : Repeatability, Reproducibility

BIAS
"The difference between the average of the measured values and the true value ( reference value )"
BIAS = OBSERVED VALUE ( AVERAGE ) - REFERENCE VALUE
Procedure to determine BIAS

1. Reference sample selection & determine average reference value
2. Collect data
3. Determine bias
4. Plot bias histogram
5. Compute average bias
6. Compute repeatability standard deviation
7. Determine acceptability of repeatability
8. Determine bias standard error
9. Determine t statistic & confidence limits of bias
10. Decision making

STEP 1 :- a) Reference sample selection

It may be : 1. Sample piece, else 2. Production part, else 3. Similar other component, else 4. Metrology standard

:- b) Determine average reference value
For this, first of all
1. Identify the location ( where to measure ).
2. Measure the part n >= 10 times, in a standard room / tool room with better measuring equipment and standard measuring methods.
Reference value = average of the measured values calculated by the standard-room person.
STEP :- 2 Collect data
Under routine measurement conditions, for n >= 10 trials.

TRIAL | TRUE VALUE ( REFERENCE VALUE ) | OBSERVED VALUE | BIAS = Xi - REFERENCE VALUE
1 6 5.8 -0.2
2 6 5.7 -0.3
3 6 5.9 -0.1
4 6 5.9 -0.1
5 6 6 0
6 6 6.1 0.1
7 6 6 0
8 6 6.1 0.1
9 6 6.4 0.4
10 6 6.3 0.3
11 6 6 0
12 6 6.1 0.1
13 6 6.2 0.2
14 6 5.6 -0.4
15 6 6 0

STEP :- 3 Determine bias for each reading ( Bias = Xi - Reference value )

STEP :- 4 Plot Bias histogram


STEP :- 5 Compute average bias

Sum of bias = 0.1
Average bias = Sum of bias / number of trials = 0.1 / 15 = 0.0067
STEP :- 6 Compute repeatability standard deviation

Deviation = observed value - average of observed values, i.e. ( Xi - X̄ ) = Xi - 6.0067

Trial | True value | Observed value ( Xi ) | ( Xi - X̄ ) | ( Xi - X̄ )²
1 6 5.8 -0.2067 0.0427
2 6 5.7 -0.3067 0.0941
3 6 5.9 -0.1067 0.0114
4 6 5.9 -0.1067 0.0114
5 6 6 -0.0067 0.0000
6 6 6.1 0.0933 0.0087
7 6 6 -0.0067 0.0000
8 6 6.1 0.0933 0.0087
9 6 6.4 0.3933 0.1547
10 6 6.3 0.2933 0.0860
11 6 6 -0.0067 0.0000
12 6 6.1 0.0933 0.0087
13 6 6.2 0.1933 0.0374
14 6 5.6 -0.4067 0.1654
15 6 6 -0.0067 0.0000
X̄ = 6.0067
EV = σr = √( Σ( Xi - X̄ )² / ( n - 1 ) ) = √( 0.6293 / 14 ) = 0.2120

STEP :- 7 Determine acceptability of repeatability

* % EV = 100 ( EV / TV ) = 100 ( σr / TV ), where TV is the total process variation.
Here % EV = 100 ( 0.2120 / 2.5 ) = 100 ( 0.0848 ) = 8.48 %. The TV value 2.5 is an assumed value for illustration, not an actually calculated one.

STEP :- 8 Determine bias standard error

σb = σr / √n
σb = 0.2120 / √15 = 0.0547
STEP :- 9 Determine confidence limits
* Lower limit ( L ) = average bias - t × σb
* Upper limit ( U ) = average bias + t × σb
* t is obtained from the t-table; α ( preferably 0.05 ) is the measure of confidence.
* For n = 15 ( DF = 14 ), t = 2.145.
* Lower limit ( L ) = 0.0067 - 2.145 × 0.0547 = -0.1106
* Upper limit ( U ) = 0.0067 + 2.145 × 0.0547 = 0.1240

t-table ( α = 0.05, two-sided ) :
Sample size | DF | t
2 | 1 | 12.71
3 | 2 | 4.303
4 | 3 | 3.182
5 | 4 | 2.776
6 | 5 | 2.571
7 | 6 | 2.447
8 | 7 | 2.365
9 | 8 | 2.306
10 | 9 | 2.262
11 | 10 | 2.228
12 | 11 | 2.201
13 | 12 | 2.179
14 | 13 | 2.16
15 | 14 | 2.145
16 | 15 | 2.131
17 | 16 | 2.12
18 | 17 | 2.11
19 | 18 | 2.101
20 | 19 | 2.093

STEP :- 10 DECISION MAKING

Bias is acceptable at the 100 ( 1 - α ) % confidence level if L < 0 < U.
Inference :- L = -0.1106 & U = 0.1240. Zero lies between L & U, hence bias is acceptable.
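The bias-study arithmetic of steps 3 to 10 can be reproduced with a short script; the readings are the fifteen observed values from the table, and the t value 2.145 is taken from the t-table row for DF = 14:

```python
import math

reference = 6.0
readings = [5.8, 5.7, 5.9, 5.9, 6.0, 6.1, 6.0, 6.1, 6.4, 6.3,
            6.0, 6.1, 6.2, 5.6, 6.0]
n = len(readings)

mean = sum(readings) / n                      # average observed value, 6.0067
avg_bias = mean - reference                   # average bias, 0.0067

# repeatability standard deviation (sample standard deviation of the readings)
sigma_r = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))  # 0.2120

sigma_b = sigma_r / math.sqrt(n)              # bias standard error, 0.0547

t = 2.145                                     # from the t-table, DF = 14
lower = avg_bias - t * sigma_b                # -0.1106
upper = avg_bias + t * sigma_b                #  0.1240

print("bias acceptable:", lower < 0 < upper)  # True: zero lies inside (L, U)
```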
IF BIAS IS STATISTICALLY NON-ZERO
* POSSIBLE CAUSES CAN BE :-
1. Error in master or reference value. Check the mastering procedure.
2. Worn instrument. This can show up in the stability analysis and will suggest the maintenance or refurbishment schedule.
3. Instrument made to the wrong dimensions.
4. Instrument measuring the wrong characteristic.
5. Instrument not calibrated properly.
6. Improper use by the operator. Review the instrument instructions.
LINEARITY
"The change of bias with respect to size is called linearity"

LINEARITY STEPS :-
1. Determine the process range
2. Select reference samples
3. Determine reference values
4. Calculate bias
5. Determine the repeatability error
6. Draw the best-fit line
7. Draw the confidence band
8. Check the linear relation
9. Take the decision
STABILITY
"Stability is the change of Bias over time"
The total variation in the measurements obtained with a measurement system
* on the same master or parts,
* when measuring a single characteristic,
* over an extended time period.
Procedure to determine stability :-
* Selection of reference standard : Refer bias study
* Establish reference value
* Data collection :
* Decide subgroup size
* Decide subgroup frequency
* Collect data for 20-25 subgroups

* Analysis :
* Determine control limits for X bar - R chart
* Plot data on chart
* Analyze for any out of control situation.

* Decision :
The measurement system is stable if no out-of-control situation is observed; otherwise it is not stable and needs improvement.
Repeatability
The variation in measurements obtained :
* With one measurement instrument
* When used several times
* By one appraiser
* While measuring the same characteristic
* On the same part.

σ ( sigma ) repeatability = R̄trial / d2* = K1 × R̄, where K1 = 1 / d2*

NOTE :- Repeatability is commonly referred to as "equipment variation ( EV )", although this is misleading. In fact, repeatability is the within-system ( SWIPPE ) variation.

Reproducibility
"The variation in the average of measurements"
* Made by different appraisers
* Using the same measuring instrument
* When measuring the identical characteristic
* On the same parts.

σ ( sigma ) reproducibility = R̄appraiser / d2* = K2 × R̄appraiser, where K2 = 1 / d2*

NOTE :- This is also known as AV ( appraiser variation ).


Gauge repeatability and reproducibility ( GRR )
An estimate of the combined variation of repeatability and reproducibility.
The GRR variance equals the sum of the EV and AV variances :

σ²GRR = σ²EV + σ²AV

R & R - STUDY
Three methods :
1. Range method
2. X bar - R method
3. ANOVA method ( preferable when an appropriate computer program is available )

R & R - Average and range methods


Conducting the study
1. SELECTION OF SAMPLE : n >= 10 parts, depending on size, measurement time / cost etc. ( representing the process variation ).
2. IDENTIFICATION : 1 to n ( not visible to the appraisers ).
3. Location marking : ( easily visible & identifiable by the appraisers ).
4. Selection of appraisers ( k ) : 2-3 appraisers.
5. Selection of measuring equipment : ( calibrated equipment ).
6. Deciding number of trials ( r ) : 2-3.
7. Data collection : using a data collection sheet, etc.
R & R DATA COLLECTION
1. Enter the trial 1 & 2 results for appraiser "A" in rows 1 & 2 against n = 10 parts.
2. Repeat the process for appraiser "B" & enter the results in rows 6 & 7.
3. Calculate the average & range of trials 1 & 2 of appraiser "A" and put them in columns 1, 2, 3 ... 10 of rows 4 & 5.
4. Calculate the average of averages, "Xa bar" = ( X1 + X2 + ... + X10 ) / 10, & put the result in row 4.
5. Calculate the average of ranges, "Ra bar" = ( R1 + R2 + ... + R10 ) / 10, & put the result in row 5.
6. Repeat the steps for appraiser "B" & enter the results in rows 9 & 10 respectively.
7. Calculate the average of the rows 4 & 9 results for each part and enter it in row 11; that is "Xp bar" ( part average ).
8. Calculate the range of the row 11 results; that is "Rp".
9. Calculate "R double bar" = ( Ra bar + Rb bar ) / number of appraisers & enter the result in row 12.
10. Calculate the X bar difference = max( Xa bar, Xb bar ) - min( Xa bar, Xb bar ) & enter the result in row 13.

R & R ANALYSIS - GRAPHICAL

RANGE CHART

UCLr = D4 × R double bar = 3.27 × 0.0013 = 0.0043 ( D4 = 3.27 for 2 trials, 2.58 for 3 trials )
LCLr = D3 × R double bar = 0 × 0.0013 = 0 ( D3 = 0 for trials < 7 )

AVERAGE CHART

UCLx = Xp double bar + A2 × R double bar = 48.060 + ( 1.88 × 0.0013 ) = 48.0624
LCLx = Xp double bar - A2 × R double bar = 48.060 - ( 1.88 × 0.0013 ) = 48.0576
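As a quick sketch, the chart limits above can be computed from the study constants ( A2 = 1.88, D4 = 3.27, D3 = 0 for two trials ):

```python
# Control limits for the R&R range chart and average chart (2 trials).
A2, D3, D4 = 1.88, 0.0, 3.27    # Shewhart constants for subgroup size 2

r_double_bar = 0.0013           # average range from the study
xp_double_bar = 48.060          # grand average of the part averages

ucl_r = D4 * r_double_bar       # upper control limit, range chart
lcl_r = D3 * r_double_bar       # lower control limit, range chart
ucl_x = xp_double_bar + A2 * r_double_bar  # upper limit, average chart
lcl_x = xp_double_bar - A2 * r_double_bar  # lower limit, average chart

print(round(ucl_r, 4), round(ucl_x, 4), round(lcl_x, 4))
```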

INTERPRETATION OF THE X-bar CHART

The area within the control limits represents the measurement noise. Since the group of parts used in the study represents the process variation, approximately one half or more of the averages should fall outside the control limits. If the data show this pattern, the measurement system should be adequate to detect part-to-part variation and can provide useful information for analysing and controlling the process. If less than half fall outside the control limits, then either the measurement system lacks adequate effective resolution or the sample does not represent the expected process variation.
Gauge repeatability & reproducibility for variable gauges

Parts 1-10, two trials per appraiser :

Operator A, trial 1 : 48.06, 48.055, 48.05, 48.07, 48.06, 48.06, 48.06, 48.064, 48.07, 48.066 ( row 1 )
Operator A, trial 2 : 48.061, 48.056, 48.06, 48.07, 48.06, 48.06, 48.06, 48.066, 48.06, 48.063 ( row 2 )
Averages : 48.061, 48.056, 48.05, 48.07, 48.06, 48.06, 48.06, 48.065, 48.06, 48.0645 ; Xa bar = 48.0609 ( row 4 )
Ranges : 0.001, 0.001, 0.001, 0, 0.001, 0.001, 0.003, 0.002, 0.003, 0.003 ; Ra bar = 0.0016 ( row 5 )

Operator B, trial 1 : 48.06, 48.057, 48.05, 48.07, 48.05, 48.06, 48.06, 48.064, 48.07, 48.063 ( row 6 )
Operator B, trial 2 : 48.06, 48.056, 48.06, 48.07, 48.05, 48.05, 48.06, 48.065, 48.07, 48.063 ( row 7 )
Averages : 48.06, 48.057, 48.05, 48.07, 48.05, 48.05, 48.06, 48.065, 48.07, 48.063 ; Xb bar = 48.0597 ( row 9 )
Ranges : 0, 0.001, 0.003, 0, 0.001, 0.001, 0.003, 0.001, 0, 0 ; Rb bar = 0.001 ( row 10 )

● Part averages ( row 11 ) : minimum Xp bar = 48.055, maximum Xp bar = 48.065
● Rp = ( max Xp bar ) - ( min Xp bar ) = 48.065 - 48.055 = 0.010
● Xp double bar ( average of Xp bar ) = 48.060
● R double bar = ( Ra bar + Rb bar ) / number of appraisers = ( 0.0016 + 0.001 ) / 2 = 0.0013 ( row 12 )
● X bar difference = max( Xa bar, Xb bar ) - min( Xa bar, Xb bar ) = 48.0609 - 48.0597 = 0.0012 ( row 13 )
R & R ANALYSIS - NUMERICAL
Gauge repeatability and reproducibility - Calculation
Part no & name : Gauge name : Date :
Characteristic : Gauge no : Performed by :
Specification : Gauge type :
DATA :- X bar diff = 0.0012, Rp = 0.010, R double bar = 0.0013

Measurement unit analysis :
● Repeatability ( EV ) = R double bar × K1 = 0.0013 × 0.8862 = 0.0012
● Reproducibility ( AV ) = √( ( X bar diff × K2 )² - ( EV )² / ( n × r ) ) = √( ( 0.0012 × 0.7071 )² - ( 0.0012 )² / ( 10 × 2 ) ) = 0.0008
● GRR = √( ( EV )² + ( AV )² ) = √( ( 0.0012 )² + ( 0.0008 )² ) = 0.0014
● ( PV ) = Rp × K3 = 0.010 × 0.3146 = 0.0031 ( NOTE :- K3 = 0.3146 for 10 parts )
● ( TV ) = √( ( GRR )² + ( PV )² ) = √( ( 0.0014 )² + ( 0.0031 )² ) = 0.0034

% Total variation :
● % EV = 100 ( EV / TV ) = 100 ( 0.0012 / 0.0034 ) = 35.3 %
● % AV = 100 ( AV / TV ) = 100 ( 0.0008 / 0.0034 ) = 23.5 %
● % GRR = 100 ( GRR / TV ) = 100 ( 0.0014 / 0.0034 ) = 41.2 %
● % PV = 100 ( PV / TV ) = 100 ( 0.0031 / 0.0034 ) = 91.2 %
● ndc = 1.41 ( PV / GRR ) = 1.41 × ( 0.0031 / 0.0014 ) = 1.41 × 2.214 = 3.12

INTERPRETATION OF CALCULATED VALUES

FOR % GRR :-
If
% GRR ( error ) < 10 % - MS is acceptable
10 % <= % GRR ( error ) <= 30 % - MS may be accepted with justification
% GRR ( error ) > 30 % - MS needs improvement.
For ndc :-
ndc should always be >= 5.
Inference :- % GRR = 41.2 % and ndc = 3.12, hence the measurement system is not acceptable and needs improvement.
NO. OF DISTINCT CATEGORIES ( NDC )
* AIAG suggests that when the number of categories is less than 2, the MS is of no value for controlling the process, since one part cannot be distinguished from another.
* When the number of categories is 2, the data can only be divided into 2 groups, high & low.
* When the number of categories is 3, the data can be divided into high, middle & low.
* A value of 5 or more denotes an acceptable measurement system.

FORMULAS
● REPEATABILITY ( EV ) = R double bar × K1 ; K1 = 0.8862 ( 2 trials ), 0.5908 ( 3 trials )
● REPRODUCIBILITY ( AV ) = √( ( X bar diff × K2 )² - ( EV )² / ( n × r ) ) ; K2 = 0.7071 ( 2 appraisers ), 0.5231 ( 3 appraisers ) ; n = number of parts, r = number of trials
● GRR = √( ( EV )² + ( AV )² )
● PART TO PART VARIATION ( PV ) = Rp × K3
● TOTAL VARIATION ( TV ) = √( ( GRR )² + ( PV )² )
● % EV = 100 ( EV / TV )
● % AV = 100 ( AV / TV )
● % GRR = 100 ( GRR / TV )
● % PV = 100 ( PV / TV )
● ndc = 1.41 ( PV / GRR )
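A minimal sketch of these formulas in code, using the study's inputs and the K constants for 2 trials, 2 appraisers, and 10 parts. Results differ slightly from the worksheet above because the worksheet rounds intermediate values:

```python
import math

# Average & range method constants (2 trials, 2 appraisers, 10 parts)
K1, K2, K3 = 0.8862, 0.7071, 0.3146
n, r = 10, 2                     # number of parts, number of trials

r_double_bar = 0.0013            # average range over both appraisers
x_bar_diff = 0.0012              # difference between appraiser averages
rp = 0.010                       # range of the part averages

ev = r_double_bar * K1                                      # repeatability
av = math.sqrt((x_bar_diff * K2) ** 2 - ev ** 2 / (n * r))  # reproducibility
grr = math.hypot(ev, av)                                    # combined R&R
pv = rp * K3                                                # part-to-part variation
tv = math.hypot(grr, pv)                                    # total variation

pct_grr = 100 * grr / tv         # percent GRR
ndc = 1.41 * pv / grr            # number of distinct categories

print(round(pct_grr, 1), round(ndc, 2))
```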
MSA analysis by probability method - ATTRIBUTE
PROCEDURE :-
Points to consider before the study
1. Number all the parts.
2. Identify the appraisers from those who operate the gauge.
3. Give one part to one appraiser in random order ( in such a way that the appraiser is not able to know the number ).
4. Then give all parts to different appraisers in different orders.
5. Repeat the steps and record the results.
Select n >= 12 parts :
1. Approximately 25 % close to the lower specification limit ( conforming & non-conforming ).
2. Approximately 25 % close to the upper specification limit ( conforming & non-conforming ).
3. Remaining both conforming & non-conforming.
4. Note down the correct measurement attribute ( true status ).
5. Decide the number of appraisers & the number of trials.
6. Record the measurement results in a data sheet.

Types of errors in attribute measurement systems

TYPE 1 ERROR :- Calling a good part bad. It is also called producer's risk or alpha error.

TYPE 2 ERROR :- Calling a bad part good. It is also called consumer's risk or beta error.
MSA FOR ATTRIBUTE DATA BY PROBABILITY METHOD

Part | True status | Appraiser A ( T1 T2 T3 ) | Appraiser B ( T1 T2 T3 )
1 | G | G G G | G G G
2 | B | B G B | G G B
3 | G | G G G | G G G
4 | B | G B B | G B G
5 | G | G G G | G G G
6 | G | G B G | G G G
7 | B | B G B | B B B
8 | G | G B G | G G G
9 | G | G G G | G G G
10 | B | B B G | G B B
11 | B | G B B | B B B
12 | G | G G G | G G G
13 | B | B B B | B B B
14 | G | G G G | G G G
15 | B | B B B | B B B
16 | B | G B B | B B B
17 | G | G G G | G G G
18 | G | G B G | G B G
19 | G | G B B | B B B
20 | B | B B B | B B B

FOR APPRAISER A ( T = trial )
1. Calling good as good : 11 ( T1 ) + 7 ( T2 ) + 10 ( T3 ) = 28
2. Calling good as bad ( type 1 error ) : 0 + 4 + 1 = 5
3. Calling bad as good ( type 2 error ) : 3 + 2 + 1 = 6
4. Calling bad as bad : 6 + 7 + 8 = 21

FOR APPRAISER B
1. Calling good as good : 10 + 9 + 10 = 29
2. Calling good as bad ( type 1 error ) : 1 + 2 + 1 = 4
3. Calling bad as good ( type 2 error ) : 3 + 1 + 1 = 5
4. Calling bad as bad : 6 + 8 + 8 = 22

TRUE STATUS | Appraiser A | Appraiser B | TOTAL COUNT
Calling G - G | 28 | + 29 | 57
Calling G - B | 5 | + 4 | 9
Calling B - G | 6 | + 5 | 11
Calling B - B | 21 | + 22 | 43

Total correct decisions = 57 ( G - G ) + 43 ( B - B ) = 100
Total wrong decisions = 9 ( G - B, false alarm ) + 11 ( B - G, miss ) = 20
PROBABILITY METHOD

* Effectiveness ( E ) = Total correct decisions / Total decisions = ( 57 + 43 ) / 120 = 100 / 120 = 0.833

* Probability of false alarm ( Pfa ) = Total false alarms / Total opportunities for false alarm = 9 ( type 1 errors ) / ( 57 + 9 ) = 9 / 66 = 0.136

* Probability of miss ( Pmiss ) = Total misses / Total opportunities for miss = 11 ( type 2 errors ) / ( 43 + 11 ) = 11 / 54 = 0.204
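The three metrics can be sketched directly from the four tallies above:

```python
# Tallies pooled over both appraisers (20 parts x 3 trials x 2 appraisers = 120)
gg, gb, bg, bb = 57, 9, 11, 43   # good-good, good-bad, bad-good, bad-bad

effectiveness = (gg + bb) / (gg + gb + bg + bb)  # correct / total decisions
p_false_alarm = gb / (gg + gb)   # type 1 errors / decisions on good parts
p_miss = bg / (bb + bg)          # type 2 errors / decisions on bad parts

print(round(effectiveness, 3), round(p_false_alarm, 3), round(p_miss, 3))
```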

Acceptance criteria :

Parameter | Acceptable | Marginal | Unacceptable | Observed | Result
E | > 0.90 | 0.80 - 0.90 | < 0.80 | 0.833 | Within marginal range
Pfa | < 0.05 | 0.05 - 0.10 | > 0.10 | 0.136 | Unacceptable ( more than 0.10 )
Pmiss | < 0.02 | 0.02 - 0.05 | > 0.05 | 0.204 | Unacceptable ( more than 0.05 )
KAPPA METHOD

Part | True status | Appraiser A ( T1 T2 T3 ) | Appraiser B ( T1 T2 T3 )
1 | B | B B B | B B B
2 | G | G B G | B B G
3 | B | B B B | B B B
4 | G | B G G | B G B
5 | B | B B B | B B B
6 | B | B G B | B B B
7 | G | G B G | G G G
8 | B | B G B | B B B
9 | B | B B B | B B B
10 | G | G G B | B G G
11 | G | B G G | G G G
12 | B | B B B | B B B
13 | G | G G G | G G G
14 | B | B B B | B B B
15 | G | G G G | G G G
16 | B | B G G | G G G
17 | B | B B B | B B B
18 | B | B G B | B G B
19 | B | B G G | G G G
20 | G | G G G | G G G

Kappa b/w appraiser A & B :
1. When appraiser A declared bad, appraiser B also declared it bad. Number of counts = 30
2. When appraiser A declared bad, appraiser B declared it good. Number of counts = 4
3. When appraiser A declared good, appraiser B also declared it good. Number of counts = 22
4. When appraiser A declared good, appraiser B declared it bad. Number of counts = 4

Kappa b/w true status & appraiser A :
1. When true status is bad, appraiser A also declared it bad. Number of counts = 28
2. When true status is good, appraiser A also declared it good. Number of counts = 21
3. When true status is good, but appraiser A declared it bad. Number of counts = 6
4. When true status is bad, but appraiser A declared it good. Number of counts = 5

Kappa b/w true status & appraiser B :
1. When true status is bad, appraiser B also declared it bad. Number of counts = 29
2. When true status is good, appraiser B also declared it good. Number of counts = 22
3. When true status is good, but appraiser B declared it bad. Number of counts = 5
4. When true status is bad, but appraiser B declared it good. Number of counts = 4
1. Kappa method ( between appraiser A and appraiser B )

A * B cross tabulation ( expected count = row total × column total / grand total ) :

                                  B appraiser
                                  (B) Bad     (G) Good    Total
A appraiser (B) Bad   Count            30           4      34 ( RT1 )
                      Expected count 19.3        14.7
A appraiser (G) Good  Count             4          22      26 ( RT2 )
                      Expected count 14.7        11.3
Total                 Count            34 ( CT1 ) 26 ( CT2 ) 60

Expected counts :
1. 34 ( RT1 ) × 34 ( CT1 ) / 60 = 19.3
2. 34 ( RT1 ) × 26 ( CT2 ) / 60 = 14.7
3. 26 ( RT2 ) × 34 ( CT1 ) / 60 = 14.7
4. 26 ( RT2 ) × 26 ( CT2 ) / 60 = 11.3

Calculate kappa ( A * B cross tabulation ) :
P0 = sum of observed proportions in diagonal cells = ( 30 + 22 ) / 60 = 52 / 60 = 0.867
Pe = sum of expected proportions in diagonal cells = ( 19.3 + 11.3 ) / 60 = 30.6 / 60 = 0.51
Kappa = ( P0 - Pe ) / ( 1 - Pe ) = ( 0.867 - 0.51 ) / ( 1 - 0.51 ) = 0.357 / 0.49 = 0.728
Kappa more than 0.75 : good agreement
Kappa less than 0.40 : poor agreement

INFERENCE :- Kappa observed between appraisers A & B is 0.728, close to 0.75. It means there is not much variation between the appraisers; good agreement ( acceptable ).
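Cohen's kappa for a 2×2 agreement table can be sketched as follows; the counts are the A-vs-B tallies above. Because the function uses exact marginals rather than the rounded expected counts, the result can differ from the worksheet in the third decimal:

```python
def cohens_kappa(bb: int, bg: int, gb: int, gg: int) -> float:
    """Kappa for a 2x2 table: rows = rater 1 (B, G), columns = rater 2 (B, G)."""
    total = bb + bg + gb + gg
    p0 = (bb + gg) / total            # observed agreement (diagonal cells)
    # expected agreement computed from the marginal totals
    pe = ((bb + bg) * (bb + gb) + (gb + gg) * (bg + gg)) / total ** 2
    return (p0 - pe) / (1 - pe)

kappa_ab = cohens_kappa(30, 4, 4, 22)  # appraiser A vs appraiser B
print(round(kappa_ab, 3))
```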
Kappa method b/w true status and appraiser A

A * True status cross tabulation ( expected count = row total × column total / grand total ) :

                                  True status
                                  (B) Bad     (G) Good    Total
A appraiser (B) Bad   Count            28           6      34 ( RT1 )
                      Expected count 18.7        15.3
A appraiser (G) Good  Count             5          21      26 ( RT2 )
                      Expected count 14.3        11.7
Total                 Count            33 ( CT1 ) 27 ( CT2 ) 60

Expected counts :
1. 34 ( RT1 ) × 33 ( CT1 ) / 60 = 18.7
2. 34 ( RT1 ) × 27 ( CT2 ) / 60 = 15.3
3. 26 ( RT2 ) × 33 ( CT1 ) / 60 = 14.3
4. 26 ( RT2 ) × 27 ( CT2 ) / 60 = 11.7

Calculate kappa ( A * True status cross tabulation ) :
P0 = sum of observed proportions in diagonal cells = ( 28 + 21 ) / 60 = 49 / 60 = 0.817
Pe = sum of expected proportions in diagonal cells = ( 18.7 + 11.7 ) / 60 = 30.4 / 60 = 0.507
Kappa = ( P0 - Pe ) / ( 1 - Pe ) = ( 0.817 - 0.507 ) / ( 1 - 0.507 ) = 0.310 / 0.493 = 0.628
Kappa more than 0.75 : good agreement
Kappa less than 0.40 : poor agreement
Kappa method b/w true status and appraiser B

B * True status cross tabulation ( expected count = row total × column total / grand total ) :

                                  True status
                                  (B) Bad     (G) Good    Total
B appraiser (B) Bad   Count            29           5      34 ( RT1 )
                      Expected count 18.7        15.3
B appraiser (G) Good  Count             4          22      26 ( RT2 )
                      Expected count 14.3        11.7
Total                 Count            33 ( CT1 ) 27 ( CT2 ) 60

Expected counts :
1. 34 ( RT1 ) × 33 ( CT1 ) / 60 = 18.7
2. 34 ( RT1 ) × 27 ( CT2 ) / 60 = 15.3
3. 26 ( RT2 ) × 33 ( CT1 ) / 60 = 14.3
4. 26 ( RT2 ) × 27 ( CT2 ) / 60 = 11.7

Calculate kappa ( B * True status cross tabulation ) :
P0 = sum of observed proportions in diagonal cells = ( 29 + 22 ) / 60 = 51 / 60 = 0.85
Pe = sum of expected proportions in diagonal cells = ( 18.7 + 11.7 ) / 60 = 30.4 / 60 = 0.507
Kappa = ( P0 - Pe ) / ( 1 - Pe ) = ( 0.85 - 0.507 ) / ( 1 - 0.507 ) = 0.343 / 0.493 = 0.696

Kappa more than 0.75 : good agreement
Kappa less than 0.40 : poor agreement
DEFINITIONS
1. True value :- The actual value of an artifact; unknown and unknowable.
2. Reference value :- The accepted value of an artifact, used as a surrogate for the true value.
3. Uncertainty :- An estimated range of values about the measured value in which the true value is believed to be contained.
4. Gauge :- Any device used to obtain measurements; frequently used to refer specifically to the devices used on the shop floor, including GO / NO-GO devices.
5. Discrimination :- The ability to measure the smallest difference.
6. Measurement :- Assignment of numbers ( values ) to material things to represent the relationships among them with respect to particular properties.
7. Calibration :- A set of operations that establishes, under specified conditions, the relationship between a measuring device and a traceable standard of known reference value and uncertainty.
8. Validation :- Confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.
THANKS
