Instructions
1) This spreadsheet calculates Gage Repeatability & Reproducibility (Gage R&R) for attribute effectiveness. Up to 100
samples can be evaluated, using 2 or 3 operators.
2) In the Data Entry worksheet, fill in the appropriate information in the Scoring Report section and enter the type of attributes you are
evaluating in the Attribute Legend section. YOU MUST ENTER THE INFORMATION IN THE ATTRIBUTE LEGEND SECTION OR
THE SPREADSHEET WILL NOT WORK. The attributes can be either alpha or numeric, e.g., Yes/No; Pass/Fail; Go/Stop; or 1/2.
You must be consistent throughout the form and spell the attributes identically everywhere.
3) If you or an expert have selected the samples to be evaluated and you know which attribute each sample has, enter this information in
the Attribute sample column. This lets you determine how well each operator evaluates the samples against a
known standard. The spreadsheet works without this column, but without it you will not be
able to assess the operators against known standards.
4) You do not have to specify in advance how many operators or samples you will evaluate during the test. Simply enter the data
into the spreadsheet under the specific operator. Remember: the attributes must be spelled consistently or the spreadsheet will not
analyze the data correctly.
5) To print a copy of the report click on the Print Report icon on the Data Entry page.
6) To delete the data in the spreadsheet, click on the Delete Data icon on the Data Entry page.
7) To delete all and begin a new test, click on the Delete All icon on the Data Entry page.
8) To see a demo of this spreadsheet, click on the Demo icon on the Data Entry page. Move around the spreadsheet to see the data.
When you are finished, click the Delete All icon on the Data Entry page to delete all data and begin entering your own.
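For reference, the two report metrics can be reproduced outside the spreadsheet. A minimal Python sketch (the function name and data layout are illustrative, not the workbook's): % Appraiser counts samples on which an operator's trials all agree with each other, and % Score vs Attribute counts samples on which every trial matches the known attribute.

```python
def attribute_scores(attribute, trials):
    """Score one operator's results.

    attribute: known standard for each sample, e.g. ["Pass", "Fail", ...]
    trials:    list of trials, each a list of that operator's calls per sample
    Returns (% Appraiser, % Score vs Attribute) as fractions of samples.
    """
    n = len(attribute)
    # % Appraiser: all of this operator's trials agree on the sample
    within = sum(len({t[i] for t in trials}) == 1 for i in range(n))
    # % Score vs Attribute: every trial matches the known standard
    vs_std = sum(all(t[i] == attribute[i] for t in trials) for i in range(n))
    return within / n, vs_std / n

standard = ["Pass", "Fail", "Pass", "Fail"]
op1 = [["Pass", "Fail", "Pass", "Fail"],
       ["Pass", "Fail", "Pass", "Pass"],
       ["Pass", "Fail", "Pass", "Fail"]]
print(attribute_scores(standard, op1))  # (0.75, 0.75): sample 4 is mixed
```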
The 95% UCL and 95% LCL are the 95% upper and lower confidence limits on the binomial distribution. The Calculated
Score is the basic computation reported on the report page for % Appraiser and % Score vs Attribute. The 95% confidence interval
is the range within which the true Calculated Score lies, given the uncertainty associated with limited sample sizes. As
the sample size (in this case, Total Inspected) increases, the confidence interval shrinks, indicating more
reliable estimates of the true percentages. In the Demo data, the true Calculated Score for Operator 1 could be as low
as 76.8% because only 14 samples were inspected, even though the calculated % Appraiser value was 100%. Likewise, even though
Operator 2 had a lower score, Operators 1 and 3 cannot be statistically distinguished from Operator 2, because Operator 2's calculated
score (78.6%) lies within the confidence limits for Operators 1 and 3.
With the worksheet's limit of 100 samples, the best achievable lower 95% limit is 96.4%. Thus, even for an inspector who missed
no calls, we can only claim that he or she is at least 96% effective.
Trial: 30    Match: 29
UCL: 99.9%   Score: 96.7%   LCL: 82.8%
Try out different combinations of number of samples and number of matches to see the effect of
sample size. In this case, a sample size of 30 with one non-match yields a 17%-wide confidence
interval. Large sample sizes are required to get reasonably reliable estimates of efficiency.
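The workbook does not state which interval method it uses, but exact (Clopper-Pearson) binomial limits reproduce its numbers (14/14 gives LCL 76.8%; 29/30 gives 82.8%-99.9%; 100/100 gives LCL 96.4%). A stdlib-only sketch that finds the limits by bisection on the binomial tail probabilities:

```python
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def _bisect(pred, lo, hi):
    # invariant: pred(lo) is True, pred(hi) is False
    for _ in range(60):
        mid = (lo + hi) / 2
        if pred(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def clopper_pearson(x, n, alpha=0.05):
    """Exact two-sided confidence limits for x successes in n trials."""
    # lower limit solves P(X >= x | p) = alpha/2
    lower = 0.0 if x == 0 else _bisect(
        lambda p: 1 - binom_cdf(x - 1, n, p) < alpha / 2, 0.0, 1.0)
    # upper limit solves P(X <= x | p) = alpha/2
    upper = 1.0 if x == n else _bisect(
        lambda p: binom_cdf(x, n, p) > alpha / 2, 0.0, 1.0)
    return lower, upper

print(clopper_pearson(29, 30))   # ~ (0.828, 0.999), matching the sidebar
print(clopper_pearson(14, 14))   # ~ (0.768, 1.0), the Demo Operator 1 case
```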
Revision History
Revision Date Description
A 5/20/2013 New Release
B 6/16/2017 Added a 3rd trial for each operator.
Added Kappa Analysis; re-laid-out the report; added conclusion, remark, and signature fields.
Added more data to the Demo Data (50 samples x 3 operators x 3 trials).
Changed the macro to copy the additional Demo Data.
C 9/22/2017 Automatically highlight decisions that differ from the attribute, on the Report sheet.
Added Acceptance Criteria, and linked the score and metrics to the criteria to set the color.
Added a report reference number field that links to each report.
Added Verification Sheets, for medical-plant use, to document the validation report.
Metric (one column pair per group: Screen % Effective Score3 / Screen % Effective Score vs Attribute4,
then % Appraiser1 / % Score vs Attribute2 for Operators #1, #2, and #3):
Total Inspected   0 0   0 0   0 0   0 0
# Matched         0 0   0 0   0 0   0 0
95% UCL
Score
95% LCL
Result
Kappa Analysis
Kappa Value
P
Result
Chart Data
[Charts: % Appraiser Score and % Score vs Attribute (% Efficiency, 0-110%) for All, Operator #1, Operator #2, and Operator #3, with Score, 95% UCL, and 95% LCL; blank until data are entered.]
Conclusion: Accept
Remark:
DATE: PRODUCT:
NAME: BUSINESS:
Chart Data
[Charts: % Appraiser Score and % Score vs Attribute (% Efficiency) by operator with 95% confidence limits; blank until data are entered.]
Conclusion: Accept
Remark:
DATE: PRODUCT:
NAME: BUSINESS:
Misclassification
Operator #1 Operator #2 Operator #3
Assessment Agreement Assessment Agreement Assessment Agreement
Sample # False Neg False Pos Mixed False Neg False Pos Mixed False Neg False Pos Mixed
Sample # 1-50 (blank template rows, one per sample)
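One plausible reading of the three misclassification columns (an assumption; the workbook's exact convention is not documented here): False Pos means the operator consistently calls Fail on a known-Pass sample (flags a defect that isn't there), False Neg means the operator consistently calls Pass on a known-Fail sample (misses a defect), and Mixed means the trials disagree with each other. A sketch under that assumption:

```python
def misclassify(standard, calls):
    """Classify one operator's calls on one sample against the standard.

    Assumed convention (not stated in the workbook): a 'positive' is a Fail call.
    """
    distinct = set(calls)
    if distinct == {standard}:
        return ""           # every trial matches the known standard
    if len(distinct) > 1:
        return "Mixed"      # trials disagree with each other
    # all trials agree on the wrong answer
    return "False Pos" if standard == "Pass" else "False Neg"
```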
Kappa: Accuracy
Po = (1/n) ΣPi, where Pi = [Σj nij(nij − 1)] / [m(m − 1)] is the proportion of agreeing rating pairs for sample i
(nij = number of ratings of sample i in category j; m = ratings per sample).
Operator #1 Operator #2 Operator #3
Sample # All Try #1 Try #2 Try #3 Try #1 Try #2 Try #3 Try #1 Try #2 Try #3
Sample # 1-50 (blank template rows, one per sample)
Pe = ΣjPj², where Pj is the overall proportion of ratings in category j, computed per column
(Overall, and Try #1-#3 for each operator). For each category j ∈ {1, 2} the worksheet tabulates
the intermediate terms, with their sums over j:
Pj, 1 − Pj, 1 − 2Pj, Pj(1 − Pj), (1 − 2Pj)²
For each column it then reports:
m (ratings per sample), n (samples), Kappa K = (Po − Pe) / (1 − Pe), VAR(K), Z = K / √VAR(K), p(K > 0)
Summary (blank template; values are 0 until data are entered):
           Overall  Operator #1  Operator #2  Operator #3
m          0        0            0            0
n          0
Kappa, K
VAR(K)
Z
p(K > 0)
(The same intermediate-term block is repeated in the worksheet for the Overall column and once per operator.)
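The Po, Pe = ΣPj², VAR, and Z scaffolding above matches Fleiss' kappa for m ratings per sample and two categories. A minimal sketch, assuming the workbook follows Fleiss' 1971 formulas (which its intermediate terms suggest):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for N samples, each rated m times into k categories.

    counts: per-sample lists of category counts, each summing to m.
    Returns (kappa, variance under H0: kappa = 0, Z).
    """
    N = len(counts)
    m = sum(counts[0])
    k = len(counts[0])
    # Pj: overall proportion of ratings falling in category j
    P = [sum(row[j] for row in counts) / (N * m) for j in range(k)]
    # Po: mean proportion of agreeing rating pairs per sample
    Po = sum(sum(c * c for c in row) - m for row in counts) / (N * m * (m - 1))
    Pe = sum(p * p for p in P)
    kappa = (Po - Pe) / (1 - Pe)
    # Fleiss' large-sample variance for testing kappa = 0
    var = 2 * (Pe - (2 * m - 3) * Pe ** 2 + 2 * (m - 2) * sum(p ** 3 for p in P)) \
        / (N * m * (m - 1) * (1 - Pe) ** 2)
    return kappa, var, kappa / var ** 0.5
```

For example, three samples each rated three times with perfect agreement, `[[3, 0], [0, 3], [3, 0]]`, give kappa = 1.0.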
Data used:
Sample # Attribute O1T1 O1T2 O1T3 O2T1 O2T2 O2T3 O3T1 O3T2 O3T3
1 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
2 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
3 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
4 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
5 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
6 Pass Pass Pass Fail Pass Pass Fail Pass Fail Fail
7 Pass Pass Pass Pass Pass Pass Pass Pass Fail Pass
8 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
9 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
10 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
11 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
12 Fail Fail Fail Fail Fail Fail Fail Fail Pass Fail
13 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
14 Pass Pass Pass Fail Pass Pass Pass Pass Fail Fail
15 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
16 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
17 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
18 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
19 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
20 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
21 Pass Pass Pass Fail Pass Fail Pass Fail Pass Fail
22 Fail Fail Fail Pass Fail Pass Fail Pass Pass Fail
23 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
24 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
25 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
26 Fail Fail Pass Fail Fail Fail Fail Fail Fail Pass
27 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
28 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
29 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
30 Fail Fail Fail Fail Fail Fail Pass Fail Fail Fail
31 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
32 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
33 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
34 Fail Fail Fail Pass Fail Fail Pass Fail Pass Pass
35 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
[Chart: % Appraiser Score (% Efficiency, 0-110%) for All, Operator #1, Operator #2.]
36 Pass Pass Pass Fail Pass Pass Pass Pass Fail Pass
37 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
38 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
39 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
40 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
41 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
42 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
43 Pass Pass Fail Pass Pass Pass Pass Pass Pass Fail
44 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
45 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
46 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
47 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
48 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
49 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
50 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
Total Inspected 50 50 50 50 50 50 50 50
# Matched 39 39 42 42 45 45 40 40
95% UCL 88.5% 88.5% 92.8% 92.8% 96.7% 96.7% 90.0% 90.0%
Score 78.0% 78.0% 84.0% 84.0% 90.0% 90.0% 80.0% 80.0%
95% LCL 64.0% 64.0% 70.9% 70.9% 78.2% 78.2% 66.3% 66.3%
Result Reject Reject Marginal Marginal Accept Accept Marginal Marginal
Kappa Analysis
Kappa 0.7936 0.8592 0.7600 0.8802 0.8451 0.9226 0.7029 0.7747
SE 0.02357 0.0471 0.0816 0.0816 0.0816 0.0816 0.0816 0.0816
Z 33.6698 18.2260 9.3081 10.7806 10.3500 11.2996 8.6089 9.4881
P 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
Result Accept Accept Accept Accept Accept Accept Marginal Accept
Chart Data
% Appraiser   All     Operator #1  Operator #2  Operator #3
95% UCL       88.5%   92.8%        96.7%        90.0%
Score         78.0%   84.0%        90.0%        80.0%
95% LCL       64.0%   70.9%        78.2%        66.3%
[Charts: % Appraiser Score and % Score vs Attribute (% Efficiency, 0-110%) with 95% confidence limits, for All, Operator #1, Operator #2, Operator #3.]
MINITAB OUTPUT
————— 9/22/2017 12:16:15 PM ————————————————————
Verification-GoodData.MPJ
Attribute Agreement Analysis for O1T1, O1T2, O1T3, O2T1, O2T2, O2T3, O3T1, O3T2, O3T3
Within Appraisers
Assessment Agreement
# Matched: Appraiser’s assessment across trials agrees with the known standard.
Assessment Disagreement
# Pass / # Fail /
Appraiser Fail Percent Pass Percent # Mixed Percent
1 0 0.00 0 0.00 8 16.00
2 0 0.00 0 0.00 5 10.00
3 0 0.00 0 0.00 10 20.00
Between Appraisers
Assessment Agreement
[Charts: assessment agreement percent by appraiser (two panels, appraisers 1-3, y-axis 65-90%).]
Data used:
Sample # Attribute O1T1 O1T2 O1T3 O2T1 O2T2 O2T3 O3T1 O3T2 O3T3
1 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
2 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
3 Pass Fail Fail Pass Pass Fail Fail Fail Pass Fail
4 Pass Fail Fail Pass Pass Fail Fail Fail Pass Fail
5 Fail Fail Fail Pass Pass Fail Fail Fail Pass Fail
6 Fail Pass Pass Fail Pass Pass Fail Pass Fail Fail
7 Pass Pass Pass Pass Pass Pass Pass Pass Fail Pass
8 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
9 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
10 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
11 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
12 Fail Fail Fail Fail Fail Fail Fail Fail Pass Fail
13 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
14 Fail Pass Pass Fail Pass Pass Pass Pass Fail Fail
15 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
16 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
17 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
18 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
19 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
20 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
21 Pass Pass Pass Fail Pass Fail Pass Fail Pass Fail
22 Fail Fail Fail Pass Fail Pass Fail Pass Pass Fail
23 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
24 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
25 Fail Fail Fail Pass Pass Fail Fail Fail Pass Fail
26 Fail Fail Pass Pass Pass Fail Fail Fail Pass Pass
27 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
28 Fail Pass Pass Pass Pass Pass Fail Fail Pass Pass
29 Pass Pass Fail Fail Pass Pass Pass Pass Pass Pass
30 Fail Fail Fail Pass Pass Fail Pass Fail Pass Fail
31 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
32 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
33 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
34 Fail Fail Fail Pass Fail Fail Pass Fail Pass Pass
35 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
36 Pass Pass Pass Fail Pass Pass Pass Pass Fail Pass
37 Pass Fail Fail Fail Fail Fail Fail Fail Fail Fail
38 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
39 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
40 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
41 Pass Pass Pass Pass Pass Pass Pass Pass Pass Pass
42 Pass Fail Fail Fail Fail Fail Fail Fail Fail Fail
43 Pass Pass Fail Pass Pass Pass Pass Pass Pass Fail
44 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
45 Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail
46 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
47 Fail Pass Pass Pass Pass Pass Pass Pass Pass Pass
48 Pass Fail Fail Fail Fail Fail Fail Fail Fail Fail
49 Fail Pass Pass Pass Pass Pass Fail Fail Pass Pass
50 Pass Fail Fail Pass Pass Fail Fail Fail Fail Fail
Metric
Total Inspected 50 50 50 50 50 50 50 50
# Matched 31 15 35 17 37 20 33 16
95% UCL 75.3% 44.6% 82.1% 48.8% 85.4% 54.8% 78.8% 46.7%
Score 62.0% 30.0% 70.0% 34.0% 74.0% 40.0% 66.0% 32.0%
95% LCL 47.2% 17.9% 55.4% 21.2% 59.7% 26.4% 51.2% 19.5%
Result Reject Reject Reject Reject Reject Reject Reject Reject
Kappa Analysis
Kappa 0.5882 -0.0234 0.5238 -0.0428 0.5701 0.0152 0.4732 -0.0426
SE 0.02357 0.0471 0.0816 0.0816 0.0816 0.0816 0.0816 0.0816
Z 24.9550 -0.4971 6.4153 -0.5247 6.9823 0.1857 5.7961 -0.5219
P 0.0000 0.6904 0.0000 0.7001 0.0000 0.4263 0.0000 0.6991
Result Marginal Reject Marginal Reject Marginal Reject Marginal Reject
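The P row above is the one-sided p-value for testing kappa > 0, i.e. P(Z ≥ z) under a standard normal; for instance z = −0.4971 gives ≈ 0.6904, as in the table. With the standard library:

```python
from math import erf, sqrt

def p_value(z):
    # one-sided p-value P(Z >= z) for H0: kappa = 0 vs kappa > 0
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

print(round(p_value(-0.4971), 4))  # 0.6904
```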
Chart Data
% Appraiser           All     Operator #1  Operator #2  Operator #3
95% UCL               75.3%   82.1%        85.4%        78.8%
Score                 62.0%   70.0%        74.0%        66.0%
95% LCL               47.2%   55.4%        59.7%        51.2%
% Score vs Attribute  All     Operator #1  Operator #2  Operator #3
95% UCL               44.6%   48.8%        54.8%        46.7%
Score                 30.0%   34.0%        40.0%        32.0%
95% LCL               17.9%   21.2%        26.4%        19.5%
[Charts: % Appraiser Score and % Score vs Attribute (% Efficiency, 0-110%) with 95% confidence limits.]
Verification-BadData.MPJ
Attribute Agreement Analysis for O1T1, O1T2, O1T3, O2T1, O2T2, O2T3, O3T1, O3T2, O3T3
Within Appraisers
Assessment Agreement
# Matched: Appraiser’s assessment across trials agrees with the known standard.
Assessment Disagreement
# Pass / # Fail /
Appraiser Fail Percent Pass Percent # Mixed Percent
1 15 55.56 3 13.04 15 30.00
2 14 51.85 3 13.04 13 26.00
3 13 48.15 4 17.39 17 34.00
Between Appraisers
Assessment Agreement
[Charts: assessment agreement percent by appraiser (two panels, appraisers 1-3, y-axis 20-70%).]