
Health Care Policy and Quality • Original Research

JOURNAL CLUB: Radiology Report Addenda: A Self-Report Approach to Error Identification, Quantification, and Classification

Downloaded from www.ajronline.org by Library Of Medicine on 03/05/16 from IP address 128.103.149.52. Copyright ARRS. For personal use only; all rights reserved.
Leeann R. Brigham1, Mohammad Mansouri1,2, Hani H. Abujudeh1,2

Brigham LR, Mansouri M, Abujudeh HH

OBJECTIVE. The purpose of this study was to analyze report addenda to assess the self-reported error rate in radiologic study interpretation, the types of errors that occur, and the distribution of error by image modality.

MATERIALS AND METHODS. Addenda to all diagnostic radiology reports were compiled over a 1-year period (n = 5568). The overall error rate was based on addenda frequency relative to the total number of studies performed. Addenda written over the most recent 2-month interval (n = 851) were classified into five major categories of predominant error type: underreading, overreading, poor communication, insufficient history, and poor technique. Each category was further divided into multiple subtypes.

RESULTS. Diagnostic studies at our hospital had an error rate of 0.8%. Errors of poor communication occurred most frequently (44%), followed by underreading (27%), insufficient history (21%), overreading (8%), and poor technique (1%). Analyzed by imaging modality, most errors occurred in PET (19.45 per 1000 studies), followed by MRI (13.86 per 1000 studies) and CT (12.45 per 1000 studies).

CONCLUSION. Through the use of report addenda to calculate error, discrepancy between individual radiologists is removed in a reproducible and widely applicable way. This approach to error typology eliminates sample bias and, in a departure from previous analyses of difficult cases, shows that errors of communication are most frequent, representing a clear area for targeted improvement.

Keywords: addenda, diagnostic error, errors, quality assurance

DOI: 10.2214/AJR.15.14891

Received April 17, 2015; accepted after revision June 3, 2015.

Based on a presentation at the ARRS 2015 Annual Meeting, Toronto, ON, Canada.

1 Department of Radiology, Harvard Medical School, Boston, MA.
2 Department of Radiology, Massachusetts General Hospital, 55 Fruit St, FND-220, Boston, MA. Address correspondence to H. H. Abujudeh (habujudeh@partners.org).

AJR 2015; 205:1230–1239. 0361–803X/15/2056–1230. © American Roentgen Ray Society

Diagnostic errors in radiology negatively affect patient care, leading to delays in diagnosis [1], delays in treatment, failure to detect complications, failure of proper surveillance, and performance of unnecessary or contraindicated studies [2]. Though errors are common, measuring them is a challenge because the definition of "truth" varies, even among experienced radiologists [3]. Widespread implementation of peer review processes provides valuable data on discrepancy rates, but the findings must be interpreted with cautious consideration of inherent variability in opinion. Discrepancy rates in the literature vary widely, ranging from 0.8% to 58% [4–7]. The disagreement likely represents methodologic variability, including differences in discrepancy definitions, types of examinations, selection of cases, patient populations, and individual radiologists [5]. Methodologic variability makes error analyses difficult to replicate and results difficult to integrate. A standard and reproducible approach would be a valuable tool for evaluating error rates across institutions.

Reducing errors requires an understanding of why they occur. To elucidate underlying causes, previous researchers have sorted diagnostic errors into categories, capturing error types ranging from cognitive errors to breakdowns in communication [1, 8, 9]. Studies have been limited by bias in case selection because they have focused only on errors occurring in difficult cases, selected mainly from conferences and personal files [1, 8, 9]. Although this strategy enriches the sample of errors, the case sampling is likely not representative of the errors occurring in hospitals on a daily basis. Errors in daily practice may differ completely from those in difficult cases and may have a greater effect on patient care.

In this study, we used addenda to diagnostic radiology reports to identify and analyze diagnostic errors. At our institution, an attending radiologist signs all radiology reports, and only this author of the original report can


Radiology Report Addenda

TABLE 1: Error Classification System


Cause of Error Explanation
Underreading Finding is missed
Location Finding is missed because of location outside the area of clinical interest
Downloaded from www.ajronline.org by Library Of Medicine on 03/05/16 from IP address 128.103.149.52. Copyright ARRS. For personal use only; all rights reserved

Satisfaction of search Finding is missed after the first abnormality is found


Overreliance on previous examination Finding is missed because of overreliance on previous examination findings
Incomplete description Finding is appreciated but not sufficiently characterized in report
Not all images read Finding is missed because not all images are read
Overreading Finding is overcalled or misinterpreted
Faulty interpretation Finding is appreciated but attributed to the wrong cause
Limited differential Finding is appreciated, but all possible causes are not considered
Normal variant Normal anatomic variant is misinterpreted as abnormal
Poor communication Communication error in report or physician contact
Typographic error Typographic error in report (e.g., initial report “now new large intracranial hemorrhage,” corrected to “no new large
intracranial hemorrhage”)
Erroneous report Report generated in error (e.g., “Please disregard. This report belongs to a different patient.”)
Erroneous recommendations Wrong or missing follow-up recommendations (e.g., renal ultrasound showed liver lesions in pregnant patient; initial
recommendation for postpartum liver MRI corrected to unenhanced liver MRI)
Erroneous technique Wrong or missing study methods (e.g., “Please note the technique was incorrectly applied. The correct technique is.. ”)
Physician communication Wrong or missing documentation of contact with physician (e.g., “The critical findings in this report were reported to
the responding clinician, XX, who responded indicating that the communication was understood.”)
Laterality error Right-left confusion in report (e.g., “The impressions should all state right breast. The left breast was not evaluated.”)
Insufficient history Error preventable with more complete history
Clinical history Finding missed or misinterpreted because of inaccurate or incomplete clinical history
Previous examination Finding missed or misinterpreted because of failure to consult previous radiologic studies
Poor technique Finding missed because of study limitations
Limited views Views are too limited to address clinical question
Artifact related Artifact makes finding difficult to interpret
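The Table 1 taxonomy above lends itself to a small lookup structure for tagging addenda during analysis. The sketch below is hypothetical: the category and subtype names are taken from Table 1, but the dictionary and the `validate_label` helper are illustrative assumptions, not the study's actual tooling.

```python
# Hypothetical encoding of the Table 1 error taxonomy.
# Category and subtype names come from Table 1; the structure itself
# and the helper function are illustrative, not the authors' software.
ERROR_TAXONOMY = {
    "underreading": [
        "location",
        "satisfaction of search",
        "overreliance on previous examination",
        "incomplete description",
        "not all images read",
    ],
    "overreading": [
        "faulty interpretation",
        "limited differential",
        "normal variant",
    ],
    "poor communication": [
        "typographic error",
        "erroneous report",
        "erroneous recommendations",
        "erroneous technique",
        "physician communication",
        "laterality error",
    ],
    "insufficient history": ["clinical history", "previous examination"],
    "poor technique": ["limited views", "artifact related"],
}


def validate_label(category, subtype=None):
    """Return True if (category, subtype) is a valid Table 1 tag."""
    if category not in ERROR_TAXONOMY:
        return False
    return subtype is None or subtype in ERROR_TAXONOMY[category]
```

Because the study assigns each addendum a single predominant error, one validated (category, subtype) pair per addendum would suffice.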

create an addendum. The presence of a report addendum is an implicit acknowledgment of error, representing the ultimate truth by eliminating any discrepancies between readers. This self-report method introduces its own set of biases because it depends on the original author's, first, recognizing that an error has occurred and, second, documenting the error with an addendum. Errors that elude detection and different practices among radiologists lead to underestimation of the true error rate; however, this method allows us to determine the lowest possible error rate. The true error rate must be at minimum the rate of self-reported errors. Another advantage of our addenda approach is its reproducibility. The variability in reported error rates likely reflects the lack of a concrete method. Because most institutions have an addenda system already built into clinical work flows, our method could be easily and widely adopted.

The purpose of this study was to analyze report addenda to assess the self-reported error rate in radiologic study interpretation, the types of errors that occur, and the distribution of errors across imaging modalities. Given the expected underestimation with this approach, we hypothesized that self-reported error rates would be lower than those determined by peer review methods. Regarding error classification, we hypothesized that errors present in addenda would have a different type distribution than previous samples drawn from difficult cases.

Materials and Methods
This study was approved by the institutional review board at our hospital and considered compliant with HIPAA. The requirement for informed consent was waived.

Materials
All diagnostic radiology reports generated from June 2013 through May 2014 that had addenda were compiled through a search of the radiologic information system. Reports from interventional procedures were excluded. Over this 1-year period, 5568 addenda were created over 719,855 total examinations performed in the radiology department, which we used to calculate our error rates. We further classified addenda by error type for the most recent 2-month interval (April 2014 through May 2014), for a sample size of 851 cases. For an overview of the addendum process, an example of an original report, the addendum, and the corresponding image are shown in Figure 1.

Error Classification
Using addenda text, we classified errors of interpretation and reporting into the five following major categories: underreading, overreading, poor communication, insufficient history, and poor technique. When a specific cause was identified, we further classified errors into category subtypes. A list of all categories and subtypes with explanations is presented in Table 1, and examples are shown in Figures 2–5. We modeled our classification system on that of Kim and Mansfield [1], adapting it to better capture errors found in addenda. This required


expanding, merging, and eliminating some categories. For example, we eliminated the category lack of knowledge, because it was not possible to determine from the addenda text whether this was an underlying cause. If an addendum had multiple error types, we classified by the predominant error.

Data Analysis
We calculated the overall self-reported error rate for diagnostic radiologic interpretation using frequency of addenda relative to the total number of studies performed over a 1-year interval and over the smaller 2-month interval. After error classification, we calculated the frequencies of error types as a percentage of the total number of errors. We also calculated the rate of each error type for the total number of studies performed at the hospital. We sorted studies with addenda by imaging modality (CT, MRI, ultrasound, radiography, fluoroscopy, PET, and nuclear imaging). This allowed us to calculate the overall error rate for each modality and identify any modality-dependent trends in error types. For modality data, we corrected the error frequencies for the number of studies performed with each modality.

TABLE 2: Error Categorization Results

Error Type | n | Category % | Overall %
Underreading | 228 | — | 27
  Location | 19 | 8 | 2
  Satisfaction of search | 48 | 21 | 6
  Not all images read | 6 | 3 | 1
  Finding not fully characterized | 77 | 34 | 9
  Overreliance on previous examinations | 8 | 4 | 1
  Other | 70 | 31 | 8
Overreading | 67 | — | 8
  Limited differential diagnosis | 32 | 48 | 4
  Normal variant | 15 | 22 | 2
  Faulty interpretation | 17 | 25 | 2
  Other | 3 | 4 | 0
Poor communication | 373 | — | 44
  Typographic error | 74 | 20 | 9
  Erroneous report | 23 | 6 | 3
  Erroneous recommendation | 37 | 10 | 4
  Erroneous technique | 120 | 32 | 14
  Physician communication | 58 | 16 | 7
  Laterality error | 19 | 5 | 2
  Other | 42 | 11 | 5
Incomplete history | 177 | — | 21
  Clinical history | 42 | 24 | 5
  Previous examination | 135 | 76 | 16
Poor technique | 6 | — | 1
  Limited views | 3 | 50 | 0
  Artifact related | 3 | 50 | 0
Total addenda errors classified | 851 | — | 100

Note: Dash (—) indicates not applicable. Data are number of errors classified into each major category and subcategory. Frequencies are also expressed as percentages of all errors classified (overall %) and for each subcategory as a percentage of the major category.

Results
We calculated an overall self-reported error rate of 7.74 per 1000 (≈0.8%) studies performed at our hospital. This includes all diagnostic studies performed over 1 year across the entire radiology department, excluding interventional procedures. We classified a total of 851 errors occurring over a 2-month period. A summary of our classification results is shown in Table 2. The frequencies of each major error type as a percentage of the total number of errors from largest to smallest were 44% (n = 373) for poor communication, 27% (n = 228) for underreading, 21% (n = 177) for insufficient history, 8% (n = 67) for overreading, and 1% (n = 6) for poor technique (Fig. 6).

Among communication errors, the most common subtype was erroneous technique details (32%), followed by typographic errors (20%), errors in communicating results to physicians (16%), erroneous recommendation (10%), erroneous report (6%), and laterality errors (5%). Other errors not fitting into subtypes constituted 11% of communication errors. The subtype breakdowns for errors of underreading, overreading, incomplete history, and poor technique are shown in Table 2. When corrected for the total number of radiologic studies performed over the same 2-month period (Fig. 7), the error rates for every 1000 cases were 2.90 for poor communication, 1.77 for underreading, 1.38 for insufficient history, 0.52 for overreading, and 0.05 for poor technique.

We classified studies with errors into seven imaging modalities: CT, MRI, ultrasound, radiography (including mammography), fluoroscopy, PET, and nuclear imaging. Error rates for every 1000 cases in the respective modalities were 12.45 for CT, 13.86 for MRI, 4.36 for ultrasound, 3.84 for radiography, 3.31 for fluoroscopy, 19.45 for PET, and 5.80 for nuclear studies (Fig. 8).

The breakdown of error types per modality is shown in Figure 9. Poor communication remained the predominant source of error for most modalities. In fluoroscopy, however, poor communication and underreading errors were equally predominant (40% each). In other modalities, underreading was the second most common error type. The exceptions were radiography and nuclear studies, for which incomplete history was second most common (32% for radiography, 22% for nuclear imaging).

Discussion
We calculated an overall self-reported error rate of 0.8% for radiologic study interpretation at our institution. This number is lower than most of the previously reported peer-reviewed rates, which range from 0.8% to 58% [4–7]. At the low end of this range, discrepancy rates were 2.91% for difficult cases and 0.8% for nondifficult cases in the RadPeer program [4], 3.48% for double readings of


2% of daily cases in a group practice [6], and 2.9–5.4% for six different community hospitals [7]. At the higher end of this range, major discrepancies occurred in interpretation of abdominal and pelvic CT examinations at a rate of 26% among three experienced radiologists and climbed to 58% when minor discrepancies were included [5].

Given that peer review and addenda methods are not directly comparable, it is not surprising that self-reported error rates are lower. The rate of self-reported errors is the lowest possible error rate, encompassing only errors that come to the reading radiologist's attention. These errors represent only the tip of the iceberg and are underestimates of the true error rate. Peer review, on the other hand, may overestimate true error rates by counting both true error and variability in opinion. Unless the original interpreting radiologist is involved in determining whether an error has occurred, it is impossible to disentangle the two.

Our approach of analyzing addenda circumvents the issue of interreader variability. In creating the addendum the radiologist acknowledges an error, so there is no debate that an error has occurred. The cost is more missed errors that peer review might have identified, and the true error rate is likely somewhere in between. The complementary use of both approaches would be a powerful approach to refining our understanding of true error frequency.

A major advantage of the addenda method is minimization of the subjectivity present in previous analyses. Each previous method uses severity criteria to filter out unimportant errors. For example, Borgstede et al. [4] and Abujudeh et al. [5] both focused on major disagreements, which were errors judged to have clinical significance. Similarly, Soffa et al. [6] included only errors that might or probably would adversely affect a patient's condition. Though well-defined criteria can minimize the subjectivity present in this human factor, they cannot completely eliminate it. Because addenda provide a reliable and easy way to look at all errors in aggregate, we bypassed the decision of whether an error is clinically important.

The decision of whether an error is clinically important can occur at the level of the radiologist, because different radiologists have different thresholds for creating addenda. We cannot control for this variable, which limited our analysis but is not unique to this method. In peer review processes, reviewing radiologists may also have different thresholds for disagreeing with the original reading. For reports with minor errors, such as typographic errors that do not affect meaning, whether the radiologist writes an addendum to the report is likely insignificant. We think it is fair to assume that for known errors affecting clinical management, most if not all radiologists would write an addendum. Therefore, addenda likely capture all known errors important to patient care.

As in previous studies, a large proportion of our observed errors were due to underreading. Unlike in other studies, however, poor communication was the most common source of error in addenda, accounting for approximately 44%. Of communication errors, we found that incorrectly reported technique details were most common (32%), followed by typographic errors (20%) and errors in communicating results to physicians (16%). Technique details can have a profound effect on patient care. Knowing whether the patient received IV contrast material is an important detail for a clinical team. Typographic errors can also have detrimental effects when they change the meaning of the report: "no new intracranial hemorrhage" is vastly different from "now new intracranial hemorrhage."

The rate of communication errors was estimated to be between 0% and 15% in previous analyses [1, 8, 9]. This incongruity is likely due to the more representative sample of typical cases in our study. Previous analyses of predominantly difficult cases may have selected for errors due to underreading or overreading, whereas communication errors may be less likely to come up in case conferences. Known errors, the subset identified by self-report, may be biased toward communication errors because they are easy to detect. If the ordering physician prompts the review for creation of a report addendum, he or she may be more apt to detect a typographic error rather than a subtle finding missed by the radiologist. Our results show that communication errors occur far more frequently in the everyday practice of radiology than previous analyses suggest.

Our use of addenda as markers of errors does have limitations. The retrospective nature disconnects our analysis from the original thought processes leading to error during image interpretation, making it difficult to differentiate between some cognitive error types. For example, we cannot determine from addendum text whether a missed finding is due to lack of knowledge, a classification category present in previous schemas [1, 8, 9]. Similarly, we cannot analyze what prompted the review that generated the addendum, which may affect the types of errors detected. Calls from ordering physicians often prompt review, which may be biased toward communication errors and against underreading errors. Calls may also be more common in certain specialty areas and among certain providers. We may be able to glean some information about the prompt by considering the time elapsed between the original report and creation of the addendum. For example, if a reading radiologist creates an addendum immediately after signing the original report, he or she likely has caught the error, which may correlate with certain error types. Future study of the time course of addenda generation may provide clues to the report review prompt.

Conclusion
We used report addenda to identify, quantify, and classify errors occurring in daily radiology practice at a large academic teaching hospital. We found that self-reported errors occur at a rate of 0.8%, representing the lowest possible error rate. Our classification of error types showed that errors of communication predominated at 44%, signifying a clear area for improvement. Our high prevalence of communication errors also denotes a departure from previous typologic analyses, suggesting that the errors occurring in difficult cases may not be representative of errors occurring in daily practice. Finally, we found that 3D imaging techniques (PET, MRI, and CT) are most prone to error. Knowledge about the frequency and types of errors is vital for improving the quality of care. Our addendum approach, with its ability to systematically aggregate errors in a widely and easily reproducible way, may be a powerful tool for studying errors across institutions.

References
1. Kim YW, Mansfield LT. Fool me twice: delayed diagnoses in radiology with emphasis on perpetuated errors. AJR 2014; 202:465–470
2. Pinto A, Caranci F, Romano L, Carrafiello G, Fonio P, Brunese L. Learning from errors in radiology: a comprehensive review. Semin Ultrasound CT MR 2012; 33:379–382
3. Armato SG 3rd, Roberts RY, Kocherginsky M, et al. Assessment of radiologist performance in the detection of lung nodules: dependence on the definition of "truth." Acad Radiol 2009; 16:28–38
4. Borgstede JP, Lewis RS, Bhargavan M, Sunshine JH. RadPeer quality assurance program: a multifacility study of interpretive disagreement rates. J Am Coll Radiol 2004; 1:59–65
5. Abujudeh HH, Boland GW, Kaewlai R, et al. Abdominal and pelvic computed tomography (CT) interpretation: discrepancy rates among experienced radiologists. Eur Radiol 2010; 20:1952–1957
6. Soffa DJ, Lewis RS, Sunshine JH, Bhargavan M. Disagreement in interpretation: a method for the development of benchmarks for quality assurance in imaging. J Am Coll Radiol 2004; 1:212–217
7. Siegle RL, Baram EM, Reuter SR, Clarke EA, Lancaster JL, McMahan CA. Rates of disagreement in imaging interpretation in a group of community hospitals. Acad Radiol 1998; 5:148–154
8. Smith MJ. Error and variation in diagnostic radiography. Springfield, IL: Charles C Thomas, 1967
9. Renfrew DL, Franken EA, Berbaum KS, Weigelt FH, Abu-Yousef MM. Error in radiology: classification and lessons in 182 cases presented at a problem case conference. Radiology 1992; 183:145–150

Fig. 1—Example of addendum process.
A, Facsimile shows initial radiology report text for CT of abdomen and pelvis with IV contrast administration.
B, Facsimile shows addendum text describing breast nodule missed during initial reading.
C, 94-year-old woman with endometrial thickening and large deep vein thrombosis. Axial CT image shows missed breast nodule in right breast. This error was classified as underreading due to location, given that breast finding was seen in most superior image of scan obtained to assess for endometrial abnormality.


Fig. 2—Examples of underreading errors.
A, 65-year-old man with right temporo-insular diffuse astrocytoma, status post radiation therapy. Location error. Axial T2-weighted contrast-enhanced MR image obtained to assess known brain tumor. Radiologist missed parotid gland mass on most inferior slice in all sequences.
B, 83-year-old man with history of bladder cancer and previously detected lung nodules on chest CT. Satisfaction of search error. Chest CT coronal image. Radiologist initially reported multiple pulmonary nodules and ground-glass opacities but missed pleural lipoma.
C, 67-year-old woman with pelvic bleeding and history of nephrolithiasis. Error due to overreliance on previous examination. Axial abdominopelvic CT image. Radiologist initially commented on bilateral nephrolithiasis that was not substantially altered compared with previous studies and found no explanation for new pelvic bleeding for which study was requested. Radiologist missed new ureteral stone adjacent to stent.
D, 57-year-old man with history of lung cancer, status post four cycles of chemotherapy. Incomplete description of finding. Coronal chest CT image obtained for evaluation of effectiveness of four chemotherapy cycles, but initial reading did not specify measurements of mass compared with previous examinations.
E, 70-year-old man with suspected stroke. Not all images read. Coronal CT angiographic image of brain with 3D vessel reconstruction obtained for evaluation for suspected stroke. Initial reading was no new infarction. Reading was finalized before 3D images were available. Three-dimensional images later revealed occlusion of anterior M3 branch of right middle cerebral artery consistent with thrombus. RMCA = right middle cerebral artery.


Fig. 3—Examples of overreading errors.
A, 64-year-old woman with history of endometrial cancer, status post resection and chemotherapy. Faulty interpretation error. Coronal abdominopelvic CT image. Initial impression was new pelvic mass, likely recurrence of endometrial tumor. On further review mass was determined to be distended obstructed cecum.
B, 38-year-old man with painful, swollen scrotum. Limited differential diagnosis error. Sagittal ultrasound image of left side of scrotum. Initial report gave incomplete differential diagnosis for necrotic tumor. Addendum was created to include torsion with necrosis as possibility.
C, 27-year-old man with known pulmonary nodules and cavitary lesion. Normal variant error. Axial CT image of chest. Initial report was new tracheal mass, which on further review was thought to be adherent mucus.

Fig. 4—Examples of insufficient history errors.
A, 58-year-old woman with right-sided pelvic pain, status post left oophorectomy. Error of insufficient clinical history. Transverse pelvic ultrasound image of left adnexa. Initial impression was normal findings with both ovaries unremarkable. Further review revealed that patient had undergone previous left oophorectomy, so structure mimicking left adnexa is probably loop of bowel.
B, 66-year-old woman with history of metastatic breast cancer and bilateral percutaneous nephrostomy tubes presenting with left-sided flank pain. Error of insufficient previous radiologic examination history. Abdominopelvic CT axial image. Initial impression was moderate ascites with large blood clot in pelvis. With knowledge that area of high attenuation was present on multiple previous scans, radiologist revised report to "neoplastic process rather than blood clot."


Fig. 5—Examples of poor technique errors.
A, 56-year-old man with right arm pain at rest, status post right subclavian artery stent placement. Limited view. CT angiographic image of upper extremity ordered for evaluation of right subclavian stent, which is completely outside FOV.
B, 25-year-old woman with indeterminate anterior mediastinal soft tissue thickening. Artifact. Axial MR image of thymus. Initial reading included thymic shift ratios, which were inaccurate owing to image artifact.

Fig. 6—Chart shows breakdown of major error types in reports with addenda (n = 851): poor communication 44%, underreading 27%, insufficient history 21%, overreading 8%, poor technique 1%.
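The Figure 6 shares follow directly from the major-category counts in Table 2. A minimal sketch of that arithmetic (counts from Table 2; everything else is illustrative):

```python
# Major-category counts from Table 2 (851 classified addenda, 2-month sample).
counts = {
    "poor communication": 373,
    "underreading": 228,
    "insufficient history": 177,
    "overreading": 67,
    "poor technique": 6,
}
total = sum(counts.values())  # 851
shares = {k: round(100 * v / total) for k, v in counts.items()}
print(total, shares)
```

The rounded shares reproduce the chart values: poor communication 44%, underreading 27%, insufficient history 21%, overreading 8%, and poor technique 1%.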

Fig. 7—Chart shows error rates by type classification. Each rate is depicted per 1000 studies performed over same 2-month interval; overall error rate, however, is calculated over 1-year period. Overall 7.73, poor communication 2.90, underreading 1.77, insufficient history 1.38, overreading 0.52, poor technique 0.05 per 1000.

Fig. 8—Chart shows error rate by imaging modality. Radiography includes mammography. Each rate is depicted per 1000 studies performed within modality over same 2-month interval; overall error rate, however, is calculated over 1-year period. Overall 7.73, PET 19.45, MRI 13.86, CT 12.45, nuclear 5.80, ultrasound 4.36, radiography 3.84, fluoroscopy 3.31 per 1000.
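The rates in Figures 7 and 8 share one construction: a count divided by the matching study volume, scaled to 1000. As a sketch, the overall rate can be reproduced from the counts given in Materials (5568 addenda, 719,855 studies), and the published per-modality rates, being already volume-corrected, can be compared directly:

```python
# Overall self-reported error rate (1-year interval, counts from Materials).
addenda, total_studies = 5568, 719_855
overall_per_1000 = addenda / total_studies * 1000  # ~7.73, i.e. ~0.8%

# Published per-1000 rates by modality (Fig. 8), already volume-corrected.
modality_rates = {
    "PET": 19.45, "MRI": 13.86, "CT": 12.45, "nuclear": 5.80,
    "ultrasound": 4.36, "radiography": 3.84, "fluoroscopy": 3.31,
}
ranked = sorted(modality_rates, key=modality_rates.get, reverse=True)
# The three 3D modalities (PET, MRI, CT) lead, as noted in the Conclusion.
print(round(overall_per_1000, 2), ranked[:3])
```

Per study performed, PET is roughly five times as error-prone as radiography (19.45 / 3.84 ≈ 5.1).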


Fig. 9—Chart shows frequency of error types by imaging modality. For each imaging modality (CT, MRI, ultrasound, radiography, fluoroscopy, PET, nuclear), frequencies of the five error types (poor communication, underreading, overreading, incomplete history, poor technique) are shown for 851 addenda categorized. Radiography includes mammography.

FOR YOUR INFORMATION
This article has been selected for AJR Journal Club activity. The accompanying Journal Club study guide can be found on the following page.


AJR Journal Club Study Guide

Radiology Report Addenda: A Self-Report Approach to Error Identification, Quantification, and Classification

Joseph J. Budovec1, Margaret Mulligan1, Alan Mautz2
1 Medical College of Wisconsin, Milwaukee, WI
2 The Aroostook Medical Center, Presque Isle, ME
jbudovec@mcw.edu, mmulliga@mcw.edu, amautz@emhs.org*

Introduction
1. What is the question being asked? Is this question relevant and timely?
2. How does the method implemented in this study differ from other quality assurance or peer-review methods?

Methods
3. What is the design of this study? What are the limitations inherent to this type of study design? Does the study appropriately address
these limitations?
4. What are the advantages and disadvantages of using frequencies?
5. What type of data analysis was conducted?
6. What kinds of errors are described? Are errors categoric or noncategoric variables? Is it possible to quantify errors? Are all errors the
same in magnitude or effect? Are the categories and subcategories of error types used in the study adequate to assess these issues?

Statistics
7. This study makes assumptions regarding addenda. Are these assumptions valid?

Results
8. Was the research question answered?
9. What did the investigators of this study intend to accomplish by performing this study? Did the study achieve that goal?
10. What questions did the study raise?

Discussion
11. How do the results of this study compare with error analysis performed at your institution or practice? Has your institution or practice implemented a similar quality improvement project? What are the obstacles in implementing such an analysis? How might these obstacles be overcome?
12. The study states that addenda reflect the “ultimate truth” regarding the presence of an error based on the original interpreting radiologist
creating the addendum and disclosing the error. Do you agree with this assertion? What utility do you see in distinguishing between “true
error” as defined by addendum creation and error in peer review that may simply represent variability in opinion?

Background Readings
1. Kim YW, Mansfield LT. Fool me twice: delayed diagnoses in radiology with emphasis on perpetuated errors. AJR 2014; 202:465–470
2. Pinto A, Caranci F, Romano L, Carrafiello G, Fonio P, Brunese L. Learning from errors in radiology: a comprehensive review. Semin Ultrasound CT MR 2012; 33:379–382

*Please note that the authors of the Study Guide are distinct from those of the companion article.
