
Location: Aguascalientes
Standard: CA 1012134–AGC-A01 Administración de análisis del sistema de medición
Annex 01: Attributive GRR
Document History: CA 1012134–AGC-A01-A01

Version  Responsible    Function    Details                                             Effective
1        N/A            N/A         N/A
2        Jose Palomera  QMS Leader  - Creation of Annex A01 for attributive GRR;       March 4, 2021
                                      RASI lines updated to add a reference to the
                                      use of this template.
                                    - Corporate annex numbers of procedure CA 1012134
                                      updated due to changes in the higher-level rule.
ATTRIBUTIVE GRR CA 1012134–AGC-A01-A01

Instructions:
The following spreadsheet is used to calculate an attribute MSA in which 50 samples are evaluated by 3 operators with 3 trials per sample.

1. Please fill out the general information of the attributive MSA as follows:

- Specify the general product information: product name, part number, the characteristic the operators are checking, and the name of the person who performs the MSA with the operators.
- Specify the general gauge/equipment information: gauge name, identification, the product tolerance for the PASS/FAIL decision, and the date on which the MSA was conducted.
- Specify each operator's complete name and his/her employee identification number.

2. Enter each operator's result for each inspection:

- Record the result of each operator per trial. Only the values 0 and 1 shall be captured:
  0 = Not OK part
  1 = OK part
- Enter the true (reference) value of each unit.
- The Code column can be used to identify not-OK parts (-), OK parts (+), and parts that are at the limit (x).
- Optionally, you can add the measured value of each unit.
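
Where the results are captured electronically, a quick validation pass helps catch entries other than 0 and 1 before the analysis. The following Python sketch is illustrative only: the nested-list layout (50 parts × 3 operators × 3 trials) and the function name are assumptions, not part of this template.

def validate_results(results, n_parts=50, n_operators=3, n_trials=3):
    """Check the 50 x 3 x 3 layout and that only 0/1 values were captured."""
    if len(results) != n_parts:
        raise ValueError(f"expected {n_parts} parts, got {len(results)}")
    for p, part in enumerate(results, start=1):
        if len(part) != n_operators:
            raise ValueError(f"part {p}: expected {n_operators} operators")
        for o, trials in enumerate(part, start=1):
            if len(trials) != n_trials:
                raise ValueError(f"part {p}, operator {o}: expected {n_trials} trials")
            for t, value in enumerate(trials, start=1):
                if value not in (0, 1):  # 0 = Not OK part, 1 = OK part
                    raise ValueError(
                        f"part {p}, operator {o}, trial {t}: "
                        f"only 0 and 1 are allowed, got {value!r}")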

3. Evaluate the MSA result (kappa and operator effectiveness) according to the acceptance criteria:

Troubleshooting:

Attribute Agreement Analysis consists of creating cross tabulation tables that display the observed frequency distribution of the potential outcomes (Accept or Reject) as tables of counts.

The kappa statistic is the main metric used to measure how good or bad an attribute measurement system is. It summarizes the level of agreement between raters after agreement by chance has been removed, and it tests how well raters agree with themselves (repeatability) and with each other (reproducibility).
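
As an illustration, kappa for one 2×2 cross tabulation can be computed directly from two paired series of 0/1 ratings. This is a minimal Python sketch; the function name and data layout are assumptions for illustration, not part of this standard.

def cohen_kappa(ratings_x, ratings_y):
    """Cohen's kappa for two paired series of 0/1 ratings."""
    n = len(ratings_x)
    # Build the 2x2 cross tabulation of counts.
    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for a, b in zip(ratings_x, ratings_y):
        counts[(a, b)] += 1
    # Observed agreement: the diagonal of the cross tabulation.
    p_o = (counts[(0, 0)] + counts[(1, 1)]) / n
    # Agreement expected by chance, from the row and column marginals.
    row0 = counts[(0, 0)] + counts[(0, 1)]
    col0 = counts[(0, 0)] + counts[(1, 0)]
    p_e = (row0 * col0 + (n - row0) * (n - col0)) / (n * n)
    if p_e == 1.0:  # degenerate case: all marginals in one category
        return 1.0
    return (p_o - p_e) / (1 - p_e)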

The attribute agreement analysis uses the following cross tabulations:

a) Pairwise Agreement Between Appraisers – Per Evaluation (Reproducibility)
b) Agreement Between Each Appraiser versus the Reference Standard – Per Evaluation (Bias)
c) Agreement Within Appraisers – Across Trials (Repeatability)
d) Agreement Between Each Appraiser versus the Reference Standard – Across Trials (Bias)
e) Agreement Between Appraisers – Across Trials (Reproducibility)
f) Agreement Across All Appraisers versus the Reference Standard (Bias)

Sources of Variability

The AIAG MSA reference manual and VDA 5 give some potential sources of variation in the measurement system, which can be used for problem solving and risk analysis.

VDA 5 approach: Section 4.1, Influences causing the uncertainty of the measurement result; page 26.

AIAG approach: Section B, Sources of variation; page 17.

ATTRIBUTIVE GRR CA 1012134–AGC-A01-A01

Product name:              Gauge/equipment name:
Product part number:       Gauge/equipment ID:
Characteristic inspected:  Tolerance:
Operation/process:         Date:
Performed by:              Area:
Operator A name:           ID number:
Operator B name:           ID number:
Operator C name:           ID number:

Recording table (columns):

Part # | Operator A: Trial 1, Trial 2, Trial 3 | Operator B: Trial 1, Trial 2, Trial 3 | Operator C: Trial 1, Trial 2, Trial 3 | Ref. Attribute Value | Code (optional) | Measured Value (optional)

Rows 1 through 50: one blank row per sample part for data entry.
Numerical Analysis

The template contains six 2×2 cross tabulations, one per pair of appraisers (reproducibility) and one per appraiser versus the reference standard (bias): A * B, A * Ref, B * C, B * Ref, A * C, and C * Ref. Each table shows the count and the expected count for every combination of ratings (0.00 / 1.00), plus row, column, and grand totals. In the blank template all cells are zero:

                       0.00    1.00    Total
0.00   Count              0       0        0
       Expected count   0.0     0.0     0.00
1.00   Count              0       0        0
       Expected count   0.0     0.0     0.00
Total  Count              0       0        0
       Expected count  0.00    0.00     0.00
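
The expected counts follow the usual contingency-table rule: expected = (row total × column total) / grand total. Below is a minimal Python sketch; the helper name is chosen here for illustration.

def expected_counts(table):
    """Expected counts for a table of observed counts.

    table[i][j] is the observed count in row i, column j;
    expected[i][j] = row_total[i] * col_total[j] / grand_total.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    return [[r * c / grand_total for c in col_totals] for r in row_totals]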

a) Pairwise Agreement Between Appraisers – Per Evaluation (Reproducibility)

            A       B       C
A         ----
B                 ----
C                         ----

b) Agreement Between Each Appraiser versus the Reference Standard – Per Evaluation (Bias)

            A       B       C
Kappa

c) Agreement Within Appraisers – Across Trials (Repeatability)

            A       B       C
Kappa

d) Agreement Between Each Appraiser versus the Reference Standard – Across Trials (Bias)

            A       B       C
Kappa

Source:                       A       B       C
Number Evaluated:
Number Matched:
Number Not Matched:
False Reject:
False Accept:
Mixed:
Upper 95% Conf. Bound:        1       1       1
Proportion Matched (p):
Lower 95% Conf. Bound:
e) Agreement Between Appraisers – Across Trials (Reproducibility)

Number Evaluated:        0
Number Matched:          0
Number Not Matched:      0
Upper 95% Conf. Bound:
Proportion Matched:
Lower 95% Conf. Bound:
Kappa:

f) Agreement Across All Appraisers versus the Reference Standard (Bias)

Number Evaluated:        0
Number Matched:          0
Number Not Matched:      0
Upper 95% Conf. Bound:
Proportion Matched:
Lower 95% Conf. Bound:
Kappa:
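
The proportion matched and its 95% confidence bounds in the blocks above can be computed from the match counts with a binomial interval. Below is a minimal sketch using an exact (Clopper-Pearson) interval; the interval method and function name are assumptions, and SciPy is required.

from scipy.stats import beta

def proportion_matched_ci(n_matched, n_evaluated, confidence=0.95):
    """Proportion matched with exact (Clopper-Pearson) binomial bounds."""
    alpha = 1.0 - confidence
    p = n_matched / n_evaluated
    # Exact bounds via the beta distribution; handle the 0 and n edge cases.
    lower = (beta.ppf(alpha / 2, n_matched, n_evaluated - n_matched + 1)
             if n_matched > 0 else 0.0)
    upper = (beta.ppf(1 - alpha / 2, n_matched + 1, n_evaluated - n_matched)
             if n_matched < n_evaluated else 1.0)
    return lower, p, upper

# Example: if 45 of 50 parts matched across all trials,
# proportion_matched_ci(45, 50) returns (lower bound, 0.90, upper bound).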

g) Effectiveness

       Effectiveness    Miss Rate    False Alarm Rate
A
B
C

Acceptance Criteria

Decision on the measurement system                 Effectiveness    Miss Rate    False Alarm Rate
Acceptable for the Appraiser                       ≥ 90%            ≤ 2%         ≤ 5%
Marginally Acceptable for the Appraiser -
May need improvement                               ≥ 80%            ≤ 5%         ≤ 10%
Unacceptable for the Appraiser -
Needs improvement                                  < 80%            > 5%         > 10%

Kappa                   Decision                  Comments

0.75 < Kappa ≤ 1        Acceptable                Indicates good to excellent agreement.

0.40 ≤ Kappa ≤ 0.75     May be acceptable for     Decision should be based upon the importance of the
                        some applications         application measurement, the cost of the measurement
                                                  device, and the cost of rework or repair.
                                                  Acceptance shall be approved by the customer.

0 < Kappa < 0.40        Unacceptable              Indicates poor agreement.
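
A minimal sketch of how the figures in g) could be computed and checked against these criteria is shown below. The function names and data layout are illustrative assumptions; the thresholds come from the tables above. A miss is a Not OK part accepted as OK, and a false alarm is an OK part rejected as Not OK.

def appraiser_metrics(decisions, reference):
    """Effectiveness, miss rate and false alarm rate for one appraiser.

    decisions: the appraiser's 0/1 calls across all parts and trials
    reference: the matching true 0/1 values (0 = Not OK, 1 = OK)
    """
    n = len(reference)
    correct = sum(d == r for d, r in zip(decisions, reference))
    n_nok = sum(r == 0 for r in reference)
    n_ok = n - n_nok
    misses = sum(d == 1 and r == 0 for d, r in zip(decisions, reference))
    false_alarms = sum(d == 0 and r == 1 for d, r in zip(decisions, reference))
    return {
        "effectiveness": correct / n,
        "miss_rate": misses / n_nok if n_nok else 0.0,
        "false_alarm_rate": false_alarms / n_ok if n_ok else 0.0,
    }

def classify(m):
    """Map one appraiser's metrics onto the acceptance criteria table."""
    if m["effectiveness"] >= 0.90 and m["miss_rate"] <= 0.02 and m["false_alarm_rate"] <= 0.05:
        return "Acceptable for the Appraiser"
    if m["effectiveness"] >= 0.80 and m["miss_rate"] <= 0.05 and m["false_alarm_rate"] <= 0.10:
        return "Marginally Acceptable for the Appraiser - May need improvement"
    return "Unacceptable for the Appraiser - Needs improvement"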

Internal CA 0608454
© Continental AG. 2018 Version 1
