
Community Services

Ahmad Tarabaih
Assistant Professor
Lesson Plan
• Subject 3: Reliability and validity of data
– Training and calibrating examiners
– Duplicate examinations
– Estimating reproducibility of recordings

Reliability and validity of data
• Validity of the measuring instrument represents the degree to which the scale measures what it is expected to measure.

• Reliability refers to the degree to which measurement produces consistent outcomes.
Training and calibrating examiners
• In an epidemiological survey undertaken by a
team, it is essential that the participating
examiners are trained to make consistent
clinical judgements.

• There should be close agreement between examiners' assessments of the population groups.
• Two main reasons for variability in clinical scoring:
– Inconsistency in scoring different levels of an oral disease
– Physical and psychological factors related to the examiner, such as fatigue, fluctuations in interest in the study, and differences in visual acuity and tactile sense.

• Objectives of standardization and calibration:
– To ensure uniform interpretation, understanding and application of the criteria and codes for the various diseases
– To ensure that each examiner can examine consistently
• If the survey is to be conducted by a group of examiners, it is recommended to appoint a “validator” for the survey team.

• The validator should examine at least 25 subjects, who will also be examined by each member of the survey team.
• Time is required for:
– Training in the use of the criteria and knowledge of the indices
– Calibration and practising the procedure

• Two types of reliability:
– Intra-examiner reliability
– Inter-examiner reliability
Intra-examiner Reproducibility
• Assessment of the consistency of each individual examiner.

• The examiner should first practise the examination on a group of 10 subjects with a wide range of levels of the disease conditions.
• The examiner should then determine how
consistently he/she can apply the diagnostic
criteria by examining a group of 25 subjects twice,
ideally on successive days, or with a time interval
of at least 30 minutes between examinations.
• The subjects should be pre-selected so that they
collectively represent the full range of conditions
expected to be assessed in the actual survey.
Inter-examiner Reproducibility
• Assessment of the consistency between examiners.
• Each examiner should first practise the examination on a group of 10 subjects, then independently examine the same group of 20 or more subjects and compare his/her findings with those of the other examiners in the team.
• In case of major discrepancies, the subjects are re-examined and the inter-examiner differences are reviewed and resolved by group discussion.
• The group must examine with reasonable consistency using a common standard.
• The level of consistency should range between 85% and 95%.
• Persons whose findings differ consistently and significantly from those of the majority of the team should be excluded from the survey team.
• Failure to assess in a consistent manner will lead to disease prevalence or severity being missed or wrongly interpreted.
• In the actual survey, examiners should all examine similar proportions of each major subgroup of the sample population, because there will always be some variation between examiners.
Estimating reproducibility of recordings
• Inter- and intra-examiner consistency can be assessed in a number of ways:
– The percentage of agreement between scores
• Percentage agreement = number of scores in agreement / total number of scores (see the sketch below)

In cases of low caries prevalence, this method does not produce an accurate assessment of reproducibility: most paired recordings agree only because both examiners record caries-free findings, so agreement by chance is high.
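
As a minimal sketch (the function name and the score lists below are hypothetical, for illustration only), the percentage-of-agreement calculation can be written in Python as:

# Percentage of agreement between two examiners' scores.
# The score lists are hypothetical illustrations, not survey data.
def percent_agreement(scores_a, scores_b):
    """Number of paired scores in agreement divided by the total number of scores."""
    agreements = sum(a == b for a, b in zip(scores_a, scores_b))
    return agreements / len(scores_a)

# Example: caries codes recorded by two examiners for ten tooth surfaces
examiner_a = [0, 0, 1, 0, 2, 0, 0, 1, 0, 0]
examiner_b = [0, 0, 1, 0, 1, 0, 0, 1, 0, 1]
print(percent_agreement(examiner_a, examiner_b))   # 8 agreements / 10 scores = 0.8 (80%)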
– A more reliable way of assessing overall agreement
between examiners is the Kappa statistic.
• It relates the actual measure of agreement with the
degree of agreement which would have occurred by
chance.
• For simple data sets, calculating Kappa (k) by hand is
straightforward.
• For larger data sets, you will probably want to use statistical software such as SPSS (an open-source alternative is sketched below).
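
For instance, if SPSS is not available, a freely available package can produce the same statistic. The sketch below assumes the Python scikit-learn library (its cohen_kappa_score function is named here only as an example of such an alternative) and uses purely hypothetical ratings:

# Minimal sketch, assuming scikit-learn is installed as an alternative to SPSS.
from sklearn.metrics import cohen_kappa_score

# Hypothetical Yes/No ratings given by two examiners to ten radiographs
examiner_a = ["yes", "yes", "no", "no", "yes", "no", "no", "yes", "no", "no"]
examiner_b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes", "no", "no"]

print("kappa =", round(cohen_kappa_score(examiner_a, examiner_b), 2))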
• The formula to calculate kappa is:

k = (Po – Pe) / (1 – Pe)

where:
– Po = the relative observed agreement among examiners
– Pe = the hypothetical probability of chance agreement
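
As a minimal sketch of this formula (the function name, argument names and the two-category Yes/No layout are illustrative assumptions, not part of the survey protocol), the calculation can be written as:

# Kappa for two examiners rating every subject as Yes or No.
# Arguments are the four cell counts of the 2 x 2 agreement table.
def kappa_2x2(both_yes, a_yes_b_no, a_no_b_yes, both_no):
    total = both_yes + a_yes_b_no + a_no_b_yes + both_no

    # Po: relative observed agreement (both examiners gave the same rating)
    p_o = (both_yes + both_no) / total

    # Marginal proportions of Yes ratings for each examiner
    a_yes = (both_yes + a_yes_b_no) / total
    b_yes = (both_yes + a_no_b_yes) / total

    # Pe: hypothetical probability of chance agreement
    # (both say Yes by chance plus both say No by chance)
    p_e = a_yes * b_yes + (1 - a_yes) * (1 - b_yes)

    # k = (Po - Pe) / (1 - Pe)
    return (p_o - p_e) / (1 - p_e)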

• Example question:
– The following hypothetical data comes from a medical test in which two radiographers rated 50 images for whether further study was needed. The two examiners (A and B) either said Yes (further study needed) or No (no further study needed).
• 20 images were rated Yes by both.
• 15 images were rated No by both.
• Overall, examiner A said Yes to 25 images and No to 25.
• Overall, examiner B said Yes to 30 images and No to 20
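
The two cells of the 2 x 2 agreement table that are not stated explicitly follow from these totals. A minimal sketch of that bookkeeping (the variable names are illustrative):

# Derive the remaining cells of the 2 x 2 table from the figures given above.
both_yes, both_no = 20, 15
a_yes_total, b_yes_total, total = 25, 30, 50

a_yes_b_no = a_yes_total - both_yes   # A: Yes, B: No  -> 25 - 20 = 5 images
a_no_b_yes = b_yes_total - both_yes   # A: No,  B: Yes -> 30 - 20 = 10 images
assert both_yes + a_yes_b_no + a_no_b_yes + both_no == total   # all 50 images accounted for

These four counts are exactly the inputs the kappa_2x2 sketch above expects.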
• Step 1:
– Calculate Po (the observed proportional agreement):
20 images were rated Yes by both.
15 images were rated No by both.

– Po = number in agreement / total = (20 + 15) / 50 = 0.70

• Step 2:
– Find the probability that the examiners would randomly
both say Yes.
Examiner A said Yes to 25/50 images, or 50%(0.5).
Examiner B said Yes to 30/50 images, or 60%(0.6).

– The total probability of the raters both saying Yes randomly is:
0.5 x 0.6 = 0.30
• Step 3:
– Calculate the probability that the examiners would
randomly both say No.
Examiner A said No to 25/50 images, or 50%(0.5).
Examiner B said No to 20/50 images, or 40%(0.4).

– The total probability of the raters both saying No randomly is:
0.5 x 0.4 = 0.20
• Step 4:
– Calculate Pe.
• Add your answers from Steps 2 and 3 to get the overall probability that the raters would randomly agree:
Pe = 0.30 + 0.20 = 0.50

• Step 5:
– Insert your calculations into the formula and solve:

– k = (Po – Pe) / (1 – Pe) = (0.70 – 0.50) / (1 – 0.50) = 0.40
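
The five steps can also be re-traced in a few lines of code. This is a minimal sketch that simply repeats the arithmetic of the worked example (the variable names are illustrative):

# Re-trace Steps 1-5 for the 50 images rated Yes/No by examiners A and B.
total = 50
both_yes, both_no = 20, 15       # images on which the two examiners agreed
a_yes, b_yes = 25, 30            # total Yes ratings given by A and by B

p_o = (both_yes + both_no) / total                       # Step 1: Po = 0.70
p_both_yes = (a_yes / total) * (b_yes / total)           # Step 2: 0.5 x 0.6 = 0.30
p_both_no = (1 - a_yes / total) * (1 - b_yes / total)    # Step 3: 0.5 x 0.4 = 0.20
p_e = p_both_yes + p_both_no                             # Step 4: Pe = 0.50
kappa = (p_o - p_e) / (1 - p_e)                          # Step 5: kappa = 0.40

print(f"Po = {p_o:.2f}, Pe = {p_e:.2f}, kappa = {kappa:.2f}")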

• The Kappa statistic varies from 0 to 1, where:
– 0 = agreement equivalent to chance
– 0.01 – 0.20 = slight agreement
– 0.21 – 0.40 = fair agreement
– 0.41 – 0.60 = moderate agreement
– 0.61 – 0.80 = substantial agreement
– 0.81 – 0.99 = near-perfect agreement
– 1 = perfect agreement
• The kappa of 0.40 obtained in the worked example therefore corresponds to fair agreement between the two examiners.
