
Artificial Intelligence / Intelligence artificielle

Canadian Association of Radiologists' Journal
2021, Vol. 72(1) 60-72
© The Author(s) 2020
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0846537120941671
journals.sagepub.com/home/caj

Artificial Intelligence Solutions for Analysis of X-ray Images

Scott J. Adams, MD1, Robert D. E. Henderson, MBA, PhD1, Xin Yi, PhD1, and Paul Babyn, MDCM1

Abstract
Artificial intelligence (AI) presents a key opportunity for radiologists to improve quality of care and enhance the value of radiology
in patient care and population health. The potential opportunity of AI to aid in triage and interpretation of conventional radio-
graphs (X-ray images) is particularly significant, as radiographs are the most common imaging examinations performed in most
radiology departments. Substantial progress has been made in the past few years in the development of AI algorithms for analysis
of chest and musculoskeletal (MSK) radiographs, with deep learning now the dominant approach for image analysis. Large public
and proprietary image data sets have been compiled and have aided the development of AI algorithms for analysis of radiographs,
many of which demonstrate accuracy equivalent to radiologists for specific, focused tasks. This article describes (1) the basis for
the development of AI solutions for radiograph analysis, (2) current AI solutions to aid in the triage and interpretation of chest
radiographs and MSK radiographs, (3) opportunities for AI to aid in noninterpretive tasks related to radiographs, and (4) con-
siderations for radiology practices selecting AI solutions for radiograph analysis and integrating them into existing IT systems.
Although comprehensive AI solutions across modalities have yet to be developed, institutions can begin to select and integrate
focused solutions which increase efficiency, increase quality and patient safety, and add value for their patients.

Résumé
L’intelligence artificielle (IA) constitue une occasion fondamentale d’amélioration de la qualité des soins pour les radiologistes et
permet d’accroître la valeur de la radiologie pour les soins des patients et la santé de la population. Les potentiels de l’IA à contribuer
au tri et à l’interprétation des radiographies conventionnelles sont particulièrement significatifs dans la mesure où les radiographies
sont l’examen d’imagerie le plus fréquemment réalisé dans la plupart des départements de radiologie. Des progrès substantiels ont
été réalisés au cours de ces dernières années dans le développement d’algorithmes d’IA pour l’analyse des radiographies du thorax et
de l’appareil locomoteur; l’apprentissage approfondi constitue maintenant la démarche dominante pour l’analyse des images. De
grands ensembles de données d’imageries publiques et privées ont été rassemblés et ont contribué à l’élaboration d’algorithmes d’IA
pour l’analyse des radiographies; un grand nombre de ces algorithmes obtiennent une précision équivalente à celle des radiologies
pour des tâches spécifiques et ciblées. Cet article décrit (1) la base du développement des solutions d’IA pour l’analyse des
radiographies, (2) les solutions actuelles offertes par l’IA pour contribuer au tri et à l’interprétation des radiographies thoraciques et
musculo-squelettiques (3) les possibilités offertes par l’IA pour contribuer à des tâches non interprétatives liées à des radiographies,
et (4) des réflexions concernant les pratiques radiologiques sélectionnant des solutions d’IA pour l’analyse des radiographies et les
intégrant dans des systèmes existants de technologie de l’information. Bien que des solutions globales d’IA intermodalités restent à
élaborer, les établissements peuvent commencer à sélectionner et intégrer des solutions ciblées qui augmentent l’efficacité, la qualité
et la sécurité des patients, avec une valeur ajoutée pour leurs patients.

Keywords
artificial intelligence, machine learning, deep learning, X-rays, image analysis

1 Department of Medical Imaging, Royal University Hospital, University of Saskatchewan, Saskatoon, Canada

Corresponding Author: Scott J. Adams, MD, Department of Medical Imaging, Royal University Hospital, University of Saskatchewan, 103 Hospital Drive, Saskatoon, Canada S7N 0W8. Email: scott.adams@usask.ca

Introduction

Conventional radiography (x-ray imaging) is the most common imaging modality in most practice settings around the world.1 Due to the large volume of radiographs performed each day, radiography is a prime target for the development and implementation of artificial intelligence (AI) solutions which may increase efficiency and improve quality. Significant advances in AI have been achieved due to the development of machine

learning approaches including representation learning and deep learning, increased computational capacity, and increased data available for algorithm training.2,3 These advancements have reignited efforts among academic research groups and industry to develop AI solutions. Many efforts have focused on conventional radiographs due to their importance in radiology practices, the large amounts of image data available for training algorithms, and their simplicity as a 2-dimensional representation of a 3-dimensional object.

Artificial intelligence for analysis of conventional radiographs has achieved high performance for a number of use cases,4-6 and an increasing number of AI solutions for radiograph analysis are being commercialized,7,8 providing new opportunities for radiology practices to use AI to add value to clinical workflow and patient care. This article describes (1) the basis for the development of AI solutions for radiograph analysis, (2) a selection of current AI solutions to aid in the triage and interpretation of chest radiographs and musculoskeletal (MSK) radiographs, (3) opportunities for AI to aid in noninterpretive tasks related to radiographs, and (4) considerations for radiology practices selecting AI solutions and integrating them into existing IT systems. Artificial intelligence for mammography has been recently reviewed in other articles9,10 and is beyond the scope of this review.

What Is the Basis for the Development of AI Algorithms for Radiograph Analysis?

Radiography was the first modality for which computer-aided approaches were developed in medical imaging. In 1963, Lodwick et al described a coding system for a computer to subsequently determine the significance of imaging features on radiographs for determining lung cancer prognosis.11 The method of extracting defined features from each radiograph, such as the shape, density, and margin of lesions, paved the way for more sophisticated methods of computer-aided diagnosis in radiography. The subsequent 4 decades gave rise to 2 dominant techniques for computer-aided diagnosis: rules-based approaches, which involve specific step-by-step coding to analyze images, and machine learning.12 In machine learning, image features are used as inputs to classifiers which in turn identify combinations of features to make a classification or prediction (such as the presence or absence of pathology). Although the types of features extracted from images—based on techniques such as Fourier analysis, co-occurrence matrices, and wavelet transforms—have expanded over the subsequent decades, traditional machine learning approaches have relied on engineering and extracting specific features from images.12

Since 2012, deep learning, a machine learning method which often uses neural networks composed of multiple layers to transform input data to outputs, has become the dominant approach for medical image analysis, including analysis of radiographs. In contrast to earlier approaches, deep learning in radiographic analysis is often based on convolutional neural networks which serve as both feature extractors and classifiers.13 Using images as input (without features defined a priori), intermediate layers in a convolutional neural network extract salient features from the image. The final layer in the network performs classification. Popular convolutional neural networks for image analysis include, for example, ResNet,14 DenseNet,15 AlexNet,16 and GoogLeNet.17

Although deep learning methods have demonstrated superior performance for many image analysis tasks, substantial amounts of labeled data are required to train networks to optimize performance. The availability of high-quality labeled data from representative populations is considered a key requirement. One of the best known publicly available chest radiograph data sets is ChestX-ray14, a data set of 112 120 radiographs released by the National Institutes of Health Clinical Center in 2017.18 Ground truth for this data set was derived using natural language processing (NLP) to extract the occurrence of 14 different pathologies, including atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, and hernia, from radiologists' reports. Other large radiograph data sets are described in Table 1. Together, these data sets represent the current state-of-the-art for usable, public, de-identified radiographs from which to derive models for the prediction of pathology in chest and MSK radiography.

What AI Solutions for Chest Radiograph Analysis Are Currently Available and on the Horizon?

The development of AI algorithms for chest radiograph analysis has been significantly aided by the development of large data sets of chest radiographs. While detection of lung nodules was one of the earliest applications of AI for radiograph analysis,12 use cases which have recently received significant attention by research groups and vendors include those for pneumothorax detection, pleural effusion detection, tuberculosis screening, and more general algorithms detecting multiple pathologies on chest radiographs. Assessment of catheters on radiographs is also an emerging area of investigation. Outputs for these algorithms are generally a flag and/or "heat map" which indicate possible pathologies or areas on which the user might wish to focus. A growing number of AI products for radiograph analysis have received United States Food and Drug Administration (FDA) 510(k) approval and CE marking (certification marking indicating conformity with standards within the European Economic Area), though a smaller number have received Health Canada approval (Table 2). There is significant potential at this stage to use these solutions to triage imaging studies for urgent radiologist review and as a second reader when interpreting radiographs.

Pneumothorax detection. Products for computer-aided triage and notification for pneumothoraces on frontal chest radiographs have been developed by GE Medical Systems and Zebra Medical Vision.7,8 Both products have received US FDA approval though neither has received Health Canada approval to date. These products provide passive notifications of pathological findings during image transfer to the picture archiving and

Table 1. Large Radiograph Data Sets Available for Training AI Algorithms.

Chest radiographs
- ChestX-ray1418 (National Institutes of Health Clinical Center, United States): 112 120 chest radiographs from 30 805 patients. Labels: presence/absence of 14 pathologies, including atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, and hernia. Labeling method: natural language processing from radiology reports.
- CheXpert69 (Stanford Hospital, United States): 224 316 chest radiographs from 65 240 patients. Labels: presence/absence of 14 pathologies (as above). Labeling method: natural language processing from radiology reports; subset manually labeled by radiologists.
- MIMIC29 (Beth Israel Deaconess Medical Center, United States): 227 835 studies (including frontal and lateral radiographs for a total of 377 110 images) from 65 379 patients. Labels: radiologist-generated free-text reports for each study. Labeling method: not applicable.
- PadChest70 (Hospital Universitario de San Juan, Alicante, Spain): 160 868 chest radiographs from 69 882 patients. Labels: 174 different labels using Unified Medical Language System terminology; differential diagnoses annotated with 19 different labels. Labeling method: 27% manually annotated by physicians; remainder labeled using a multilabel text classifier.

MSK radiographs
- MURA71 (Stanford University, United States): 14 863 studies of the upper extremities. Labels: normal/abnormal. Labeling method: manually labeled by radiologists.
- LERA72 (Stanford University, United States): 182 studies of the lower extremities. Labels: normal/abnormal. Labeling method: manually labeled by radiologists.
- The Osteoarthritis Initiative73 (multicenter study sponsored by the National Institutes of Health, United States): 8892 knee radiographs. Labels: Kellgren and Lawrence osteoarthritis grades. Labeling method: manually labeled by radiologists.
- Digital Hand Atlas74 (Children's Hospital of Los Angeles, United States): 1400 hand radiographs. Labels: sex, ethnicity, and bone age. Labeling method: based on radiology report.
- RSNA 2017 AI Challenge44 (Stanford University and the University of Colorado, United States): 14 236 hand radiographs. Labels: sex and bone age. Labeling method: based on radiology report.

Abbreviations: AI, artificial intelligence; MSK, musculoskeletal; RSNA, Radiological Society of North America.
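Several of these data sets store ground truth as per-image lists of NLP-extracted findings; training a multi-pathology classifier typically begins by converting each list into a fixed-order multi-hot vector. The following is a minimal, illustrative sketch (the label names follow ChestX-ray14, but the encoder and "No Finding" convention are assumptions, not the data sets' actual file format):

```python
# Convert per-image finding lists (as extracted by NLP from reports)
# into fixed-order multi-hot label vectors for multi-label training.
# The 14 labels follow ChestX-ray14; the encoder itself is a generic sketch.

PATHOLOGIES = [
    "Atelectasis", "Cardiomegaly", "Effusion", "Infiltration", "Mass",
    "Nodule", "Pneumonia", "Pneumothorax", "Consolidation", "Edema",
    "Emphysema", "Fibrosis", "Pleural_Thickening", "Hernia",
]
INDEX = {name: i for i, name in enumerate(PATHOLOGIES)}

def encode_findings(findings):
    """Map a list of finding names to a 14-element 0/1 vector.
    'No Finding' (or an empty list) yields the all-zero vector."""
    vec = [0] * len(PATHOLOGIES)
    for f in findings:
        if f == "No Finding":
            continue
        vec[INDEX[f]] = 1
    return vec

# Example: one image labeled with two co-occurring pathologies.
label = encode_findings(["Effusion", "Cardiomegaly"])
```

Because findings co-occur on a single radiograph, the problem is multi-label rather than multi-class: each of the 14 outputs is trained as an independent binary prediction.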

communication system (PACS) and produce a secondary capture Digital Imaging and Communications in Medicine (DICOM) image that presents the AI results. Sensitivity and specificity were as high as 93.15% and 92.99%,8 and preliminary studies have suggested that AI tools which prioritize radiographs for urgent review may reduce time to interpretation of time-sensitive images. Among 3 US board-certified radiologists who each read 588 radiographs, time to interpretation of time-sensitive images was a mean of 8.05 minutes (95% CI: 5.93-10.16 minutes) when the product was used to create a prioritized worklist, compared to 68.98 minutes (95% CI: 60.53-77.43 minutes) for the standard of care.8 The average performance time for the product to assess the radiograph and send a notification to the PACS worklist was 22.1 seconds.8 Although impressive results were demonstrated in reducing time to interpretation for time-sensitive images, the clinical impact of this solution may vary widely across clinical settings and be dependent on current turnaround times and the incidence of radiographs with critical findings. Further research is required to better understand the clinical effectiveness of these solutions. Current products are indicated for only adult-size patients,7 and dedicated algorithms trained on pediatric data sets will likely be required for pediatric radiographs.

Pleural effusion detection. A similar focused product developed by Zebra Medical Vision for detection of pleural effusions on

Table 2. Selection of Commercial AI Algorithms for Analysis of Radiographs.

Chest radiograph analysis
- Critical Care Suite (GE Medical Systems)7. Clinical indication: pneumothorax detection (worklist prioritization in PACS). Performance: AUROC 0.9607; sensitivity 84.3%; specificity 93.5%. Regulatory status: FDA yes; Health Canada no; CE mark no.
- HealthPNX (Zebra Medical Vision)8. Clinical indication: pneumothorax detection (worklist prioritization in PACS). Performance: AUROC 0.983; sensitivity 93%; specificity 93%. Reduced triage time for critical results: mean time 8.05 minutes (using HealthPNX for triage) vs 68.98 minutes (standard of care/first in, first out approach). Regulatory status: FDA yes; Health Canada no; CE mark yes.
- HealthCXR (Zebra Medical Vision)19. Clinical indication: pleural effusion detection (worklist prioritization in PACS). Performance: AUROC 0.9885; sensitivity 96.74%; specificity 93.17%. Regulatory status: FDA yes; Health Canada no; CE mark yes.
- qXR (Qure)20. Clinical indication: detection of multiple abnormalities (worklist prioritization in PACS and prepopulating reports with findings). Performance: AUROC ranging from 0.844 (hilar enlargement) to 0.967 (pneumothorax); sensitivity 90%-95%; specificity 56%-90%. Regulatory status: FDA no; Health Canada no; CE mark yes.
- INSIGHT CXR (Lunit)75. Clinical indication: detection of 10 abnormalities on chest radiographs (outputs include heatmaps indicating location of detected lesions and abnormality scores reflecting the probability that the detected region is abnormal). Performance: algorithm demonstrated significantly higher performance than 3 physician groups in image-wise classification, with AUROC 0.983 (algorithm) vs 0.814-0.932 (3 physician groups), P < .005, and in lesion-wise localization, with AUROC 0.985 (algorithm) vs 0.781-0.907 (3 physician groups), P < .001. Regulatory status: FDA no; Health Canada no; CE mark yes.
- Vuno Med-Chest X-ray (Vuno)76. Clinical indication: detection of nodules/masses, consolidation, interstitial opacities, pleural effusions, and pneumothoraces (outputs include heatmaps indicating location of detected lesions and abnormality scores reflecting the probability that the detected lesion is abnormal). Performance: algorithm demonstrated significantly higher performance on image-wise normal/abnormal classification, with AUROC 0.985 (algorithm) vs 0.958 (readers), P = .001. Regulatory status: FDA no; Health Canada no; CE mark yes.
- xrAI (1QBit)21. Clinical indication: detection of multiple abnormalities on chest radiographs (outputs include heatmaps indicating location of detected lesions and abnormality scores reflecting the probability that the detected lesion is abnormal). Performance: limited results publicly available; xrAI improved diagnosis by up to 20% across groups inclusive of radiologists, emergency doctors, family doctors, pulmonologists, residents, and medical students. Regulatory status: FDA no; Health Canada yes; CE mark no.
- ClearRead Xray Bone Suppress (Riverain Technologies)77. Clinical indication: creation of second image with bone suppression. Performance: area under the localized ROC curve for lung nodule detection 0.460 (unaided) vs 0.558 (aided by visualization software), P = .0001; sensitivity 49.5% (unaided) vs 66.3% (aided), P < .0001; specificity 96.1% (unaided) vs 91.8% (aided), P = .004. Regulatory status: FDA yes; Health Canada yes; CE mark yes.
- ClearRead Xray Confirm (Riverain Technologies)30. Clinical indication: creation of second image with increased contrast of tubes and catheters. Performance: reduction of image interpretation time by 19%. Regulatory status: FDA yes; Health Canada yes; CE mark yes.

MSK radiograph analysis
- BoneXpert (Visiana)48. Clinical indication: determination of bone age. Performance: root mean square error of 0.63 years. Regulatory status: FDA no; Health Canada no; CE mark yes.
- 16 Bit47. Clinical indication: determination of bone age. Performance: mean absolute difference of 4.265 months, compared to 5-7 months among 4 radiologists. Regulatory status: FDA no; Health Canada no; CE mark no.
- IB Lab PANDA (Image Biopsy Lab)43. Clinical indication: determination of bone age. Performance: limited results publicly available. Regulatory status: FDA no; Health Canada no; CE mark yes.
- OsteoDetect (Imagen Technologies)50. Clinical indication: detection of distal radius fractures (second reader). Performance: AUROC 0.965; sensitivity 0.921; specificity 0.902. Reader study (including radiologists and nonradiologist clinicians): AUROC 0.889 (aided) vs 0.840 (unaided); sensitivity 0.803 (aided) vs 0.747 (unaided); specificity 0.914 (aided) vs 0.889 (unaided). Regulatory status: FDA yes; Health Canada no; CE mark no.
- IB Lab KOALA (Image Biopsy Lab)51. Clinical indication: osteoarthritis assessment on knee radiographs (outputs include measurement of joint space width, presence of features of osteoarthritis, and Kellgren-Lawrence grade). Performance: sensitivity 0.87 and specificity 0.83 for detection of Kellgren-Lawrence status 2 or higher; sensitivity 0.83 and specificity 0.80 for joint space narrowing. Regulatory status: FDA yes; Health Canada no; CE mark yes.
- IB Lab HIPPO (Image Biopsy Lab)43. Clinical indication: automatic measurement of hip angles on radiographs, including lateral center edge and caput-collum-diaphyseal angles. Performance: limited results publicly available. Regulatory status: FDA no; Health Canada no; CE mark no.

Abbreviations: AI, artificial intelligence; AUROC, area under the receiver operating characteristic curve; FDA, Food and Drug Administration; MSK, musculoskeletal; PACS, picture archiving and communication system; ROC, receiver operating characteristic.
Notes: current product performance may be different than previously published; regulatory status is at the time of writing.
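The AUROC, sensitivity, and specificity figures reported above all derive from the same validation-set quantities. As a minimal, library-free sketch of how they are computed from a model's output scores (the example labels, scores, and 0.5 threshold are purely illustrative):

```python
# Compute sensitivity, specificity, and AUROC from model output scores.
# y_true: 1 = pathology present, 0 = absent; scores: model confidence.

def sensitivity_specificity(y_true, scores, threshold):
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(y_true, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(y_true, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auroc(y_true, scores):
    """Probability that a random positive case scores higher than a
    random negative case (ties count half): the Mann-Whitney view of AUROC."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative validation set: 4 positives, 4 negatives.
y = [1, 1, 1, 1, 0, 0, 0, 0]
s = [0.9, 0.8, 0.7, 0.3, 0.6, 0.4, 0.2, 0.1]
sens, spec = sensitivity_specificity(y, s, threshold=0.5)  # 0.75, 0.75
```

Note that sensitivity and specificity depend on the chosen operating threshold, while AUROC summarizes performance across all thresholds, which is why vendors often report both.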

chest radiographs has also received FDA approval (though not Health Canada approval at the time of writing). In a validation study of 554 chest radiographs, the product demonstrated an area under the receiver operating characteristic curve (AUROC) of 0.9885, sensitivity of 96.74%, and a specificity of 93.17%.19 Other more general AI products detect pleural effusions among other pathologies.20,21

Tuberculosis screening. Substantial progress has also been made toward developing dedicated AI algorithms for tuberculosis screening, with potential applications in resource-poor settings. Lakhani and Sundaram found that an ensemble of the AlexNet and GoogLeNet deep convolutional neural networks achieved an AUROC of 0.99 in classifying images as having pulmonary manifestations of tuberculosis or as normal.5 Having a radiologist review only the cases where outputs from the 2 deep convolutional neural networks differed resulted in an overall sensitivity of 97.3% and specificity 100%, suggesting a role for a "radiologist-augmented approach" where a radiologist reviews only a subset of all images.5 A multisite evaluation of 3 commercially developed products for classification of chest radiographs with tuberculosis-related abnormalities resulted in AUROC values from 0.92 to 0.94. At a sensitivity to match that of 2 radiologists, specificities of 2 of the 3 deep learning systems were significantly higher than that of the 2 radiologists, potentially reducing the number of recommendations for nucleic acid amplification testing.22 Commercialized products such as Qure's qXR, a CE-mark certified tool for detection of multiple abnormalities on chest radiographs such as cardiomegaly, consolidation, and pleural effusions, have been used for tuberculosis screening,23 though performance metrics specific for tuberculosis have not been reported. Artificial intelligence algorithms which are trained on data sets with radiographs with a tuberculosis label specifically will be critical to ensuring sufficient specificity for screening and diagnosis.

Detection of multiple pathologies on chest radiographs. While many of the products which have received regulatory approval to date have had narrow use cases, such as pneumothorax detection or pleural effusion detection, AI algorithms for detection of multiple pathologies using a single network are increasingly being developed and commercialized. CheXNet is an example of a convolutional neural network developed by Andrew Ng's group at Stanford University trained with the ChestX-ray14 data set, which achieved state-of-the-art results for 14 pathologies on chest radiographs, with AUROC values ranging from 0.73 to 0.94.4 Another algorithm, CheXNeXt,24 was trained and internally validated on the ChestX-ray8 data set. Performance of the algorithm was compared to 6 board-certified radiologists and 3 senior radiology residents from 3 academic institutions. Results showed the model outperformed radiologists for identification of one pathology (atelectasis), radiologists outperformed the model for 3 of the pathologies,

Figure 1. Artificial intelligence (AI) solution (xrAI, 1QBit, Vancouver, Canada) for chest radiograph analysis. Potential areas of abnormality with
a corresponding probability for the presence of the abnormality are displayed on a secondary capture Digital Imaging and Communications in
Medicine (DICOM) image. Images used with permission.
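Products of this kind typically display a coarse, per-region abnormality map over the radiograph. A minimal sketch of the display step, upsampling a low-resolution probability grid to image resolution and thresholding it into an overlay mask, is shown below; the 4x4 grid, image size, and 0.5 threshold are illustrative values, not any vendor's actual pipeline:

```python
import numpy as np

# Turn a coarse per-region abnormality map (e.g., a network's final
# activation grid) into a full-resolution overlay mask for display.

def upsample_and_threshold(prob_map, out_shape, threshold=0.5):
    """Nearest-neighbor upsample a 2-D probability map to out_shape,
    then binarize: 1 marks pixels flagged as possibly abnormal."""
    rows, cols = prob_map.shape
    out_r, out_c = out_shape
    assert out_r % rows == 0 and out_c % cols == 0  # integer scale only
    up = np.repeat(np.repeat(prob_map, out_r // rows, axis=0),
                   out_c // cols, axis=1)
    return (up >= threshold).astype(np.uint8)

coarse = np.array([[0.1, 0.2, 0.1, 0.0],
                   [0.1, 0.9, 0.8, 0.1],   # high scores: suspected lesion
                   [0.0, 0.7, 0.6, 0.1],
                   [0.0, 0.1, 0.1, 0.0]])
mask = upsample_and_threshold(coarse, (256, 256))
```

In a deployed product the resulting mask (or a color-mapped version of the raw probabilities) would be burned into a secondary capture DICOM image alongside the numerical confidence, as described for xrAI above.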

and the remaining 10 were of similar performance, with model AUROC values ranging from 0.70 to 0.94. Other studies5,25-27 have generally achieved AUROC values that are similarly high.

xrAI, developed by Canada-based 1QBit, is one of the few Health Canada approved products for chest radiograph analysis (Figure 1). This product provides a numerical output of the confidence the algorithm has in the presence of a lung or pleural abnormality (including consolidation, nodules, masses, cavities, effusions, and pneumothoraces) and displays the area of abnormality on the image.21 However, the algorithm does not currently indicate the type of abnormality detected.

Park et al evaluated the potential of AI to demonstrate performance superior to radiologists' interpretation of chest radiographs using a data set with ground truth defined with reference to recent computed tomography when available.28 In lesion-wise detection, the model outperformed all 9 readers.28 It is anticipated that AI algorithms may increasingly achieve performance surpassing the diagnostic ability of radiologists for specific, focused tasks if algorithms are trained on data sets with cross-sectional imaging correlation and pathological diagnoses.

However, comparisons of the performance of AI algorithms to radiologists must be considered in light of the fact that in some studies the radiographs which radiologists interpreted were of lower resolution than in typical clinical settings, potentially decreasing radiologists' performance and lowering the standard to which AI algorithms are compared.24 Additionally, radiologists in many studies do not have access to past medical history, other views, or previous imaging studies of the same patient, which may each assist in diagnosis.4,24 This may also result in a lower performance among radiologists than would typically be achieved in a clinical setting. Although there are a growing number of AI algorithms which detect multiple types of pathology on chest radiographs, most are trained to detect only a subset of pulmonary, pleural, or mediastinal findings and few consider any bone findings, largely a reflection of labels on available data sets. Further clinical studies are required to determine whether a radiograph labeled as normal by an AI solution may lead to passivity and potential misses by radiologists for findings which the algorithm has not been trained to detect.

Assessment of lines and tubes. Despite the increasing interest in automatic interpretation of chest radiographs using deep learning approaches stimulated by large open access data sets such as ChestX-ray1418 and MIMIC,29 image annotation to support the development of algorithms for catheter placement evaluation has been relatively neglected, and development of computer-assisted catheter evaluation systems is still limited. Current commercial products for catheter assessment on radiographs are limited to those which create a second image with bone suppression and increased contrast of tubes and catheters.30 Assessment of catheter placement remains by manual inspection, and commercial products for automated assessment are yet to be available.

Recently, work toward dedicated machine learning algorithms for assessment of catheters on adult and pediatric radiographs has been described. Singh et al formulated this as a binary classification problem (normal vs abnormal catheter position) and achieved an AUROC of 0.87 in evaluating the position of enteric feeding tubes on adult chest and abdomen radiographs.31 Although this result looks promising, other information, including the location of the catheter tip with

respect to anatomical landmarks, the general course of the and by 4 independent radiologists using the Greulich and Pyle
catheter, and other types of catheters that are present on the standard.44 Overall performance was measured as a mean abso-
radiograph, is also required. lute difference (MAD) between the model and ground truth. Of
To support the interpretability of machine learning algo- 105 total submissions, the top 5 achieved MAD values between
rithms and evaluate the cause of failures, decomposition of the 4.2 and 4.5 months from the ground truth ages. Mean absolute
task of catheter assessment into a few corresponding subtasks is difference values for the 4 readers ranged from 5 to 7 months,
desirable. Furthermore, considering the variety of catheters that emphasizing that machine learning models trained with large
are commonly present in radiographs and the low spatial sup- amounts of data to answer focused questions can reduce varia-
port of these thin structures, a direct end-to-end classification tion in interpretation. All top 5 challenge winners except the
system may not generalize well and yield acceptable results. fourth-place team used deep learning, and common themes
For example, Subramanian et al fine-tuned a pretrained Dense- among winning entries included the use of data augmentation,
Net to classify the presence of central venous catheters but preprocessing, and ensembles.45 Performance of all top 10
achieved only moderate results.32 Their final approach com- entries surpassed that from a paper published that same year
bines deep learning for an approximate segmentation of cathe- from the group which established the data set,40 emphasizing
ters and conventional machine learning with catheter geometric the benefits of a coordinated approach to solving medical ima-
features from these regions with spatial location priors.32 ging problems.46 A number of commercial products have since
In order to segment catheters, pixel-wise annotations are been developed to assess bone age.47,48
generally required. Due to the scarcity of labels, synthetic data Fracture detection. There have also been recent promising
also play an important role toward building a functional sys- studies describing AI for fracture detection.6,41,42,49 Lindsey
tem. Yi et al proposed a way to generate synthesized catheters et al6 used a data set of 135 845 MSK radiographs from across
on pediatric radiographs and used synthetic data to train a anatomic regions with groundtruth defined by one or more
segmentation network for commonly seen catheters including
orthopedic surgeons, and achieved an AUROC of 0.967. In a
endotracheal tubes, nasogastric tubes, and umbilical cathe-
study which assessed how the algorithm may change fracture
ters.33 A similar approach has been explored for the detection of endotracheal tubes on adult chest radiographs, achieving an AUROC of 0.99 in classifying the presence of endotracheal tubes.34 A more in-depth review of the current status of AI for catheter placement assessment can be found in the study by Yi et al.35 In 2019, IBM and MICCAI cohosted the Multimodal Learning for Clinical Decision Support Challenge, which was dedicated to catheter detection and classification.36 Since the data set is now publicly accessible, we expect to see more research regarding assessment of catheters and tubes on radiographs in the near future.

What AI Applications for MSK Radiograph Analysis Are Currently Available and on the Horizon?

Musculoskeletal radiographs are efficient first-line investigations for trauma, bony tumors, and skeletal dysplasias, among other indications. A variety of use cases for AI to aid in the interpretation of MSK radiographs have been proposed, ranging from automating measurements such as the femoral neck-shaft angle or Insall-Salvati ratio, to detecting fractures, assessing skeletal maturity (bone age), and providing a probability of a bone lesion being benign or malignant.37 Determining bone age, detecting fractures, grading osteoarthritis, and automating measurements are some of the most prominent use cases explored in the literature6,38-43 and for which AI solutions have recently been commercialized.

Bone age. The development of algorithms to determine bone age received significant attention as this use case was selected for the inaugural Radiological Society of North America Machine Learning Challenge in 2017. A data set of 14 236 hand radiographs was released for the challenge, with ground truth ages derived from the respective clinical radiology reports.

Fracture detection. In a study evaluating the effect of a deep neural network on fracture detection performance among emergency physicians interpreting posterior-anterior and lateral-view wrist radiographs, a statistically significant improvement in both sensitivity and specificity for detection of fractures was observed when emergency medicine physicians were aided by the algorithm compared to when they were unaided. This resulted in a relative reduction of the emergency medicine physicians' misinterpretation rate by 47.0%.6 A product to detect distal radius fractures based on this work has received US FDA approval,50 though not Health Canada approval at the time of writing. This product may minimize the number of patients recalled following discharge from the emergency department due to missed fractures. However, studies investigating the effectiveness of such algorithms in real clinical settings have yet to be conducted.

Osteoarthritis assessment. Another product which has received FDA clearance (though not Health Canada approval) provides measurement of joint space width, features of osteoarthritis, and a Kellgren-Lawrence grade for knee radiographs. The output from the software can be viewed in a DICOM viewer workstation.51 It is anticipated that in the future the algorithm outputs may be directly exported to voice recognition reporting software, allowing for increased efficiency and reporting standardization and decreased interobserver variability across radiologists.

Measurement. Although not FDA or Health Canada approved, another product on the horizon automates measurement of hip angles on radiographs (including lateral center-edge and caput-collum-diaphyseal angles), which may increase efficiency and decrease interobserver variability.43
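Landmark-based angle measurements such as those above reduce to simple vector arithmetic once the landmarks have been placed, whether by an algorithm or by hand. A minimal sketch, in which the function names and the (x, y) landmark coordinates are hypothetical illustrations rather than any product's interface:

```python
import math

def angle_between(p, q, r, s):
    """Angle in degrees between line p->q and line r->s.

    Each point is an (x, y) tuple in image coordinates.
    Returns a value in [0, 180).
    """
    a1 = math.atan2(q[1] - p[1], q[0] - p[0])
    a2 = math.atan2(s[1] - r[1], s[0] - r[0])
    return abs(math.degrees(a1 - a2)) % 180.0

def neck_shaft_angle(shaft_axis, neck_axis):
    """Femoral neck-shaft angle from two landmark pairs.

    The clinically reported value is the obtuse angle between
    the femoral shaft axis and the femoral neck axis.
    """
    a = angle_between(*shaft_axis, *neck_axis)
    return 180.0 - a if a < 90.0 else a

# Vertical shaft axis, neck axis at 45 degrees: reported angle 135.
print(neck_shaft_angle(((0, 0), (0, 10)), ((0, 0), (10, 10))))  # 135.0
```

A production tool would add landmark detection, calibration for image orientation, and per-view validation; the geometry itself is this simple.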
Adams et al 67

Table 3. Applications for AI Across Radiography.

| Potential uses | Referring clinician | Technologist | Radiologist | Clerical staff |
| --- | --- | --- | --- | --- |
| Exam ordering and scheduling | Clinical decision support to assess imaging appropriateness | | | Scheduling of exams with greatest urgency |
| Image acquisition and processing | | Immediate notification regarding appropriate penetration, exposure, coverage, artefacts, and laterality markers; assessing imaging quality off-line as part of quality control programs | | |
| Exam triage | Immediate notification of possible critical findings | | Worklist prioritization for exams with possible critical findings | |
| Image interpretation | Heat maps indicating areas to focus on; preliminary diagnoses | | Heat maps indicating areas to focus on; preliminary diagnoses; optimize hanging protocols and image orientation; summaries of clinical history and relevant previous investigations | |
| Reporting | | | Predrafted reports with imaging findings described; flag for errors in laterality; lexical simplification for patient portals | |
| Follow-up | Ensure follow-up of investigations recommended in radiology reports | | Ensure follow-up of investigations recommended in radiology reports | |

The referring clinician column represents AI beyond the radiology department; the technologist, radiologist, and clerical staff columns represent AI within the radiology department. Abbreviation: AI, artificial intelligence.
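The worklist prioritization use case in Table 3 can be sketched as a simple priority queue in which an AI triage flag floats an exam to the top and arrival order breaks ties. The priority levels and class names below are illustrative, not drawn from any vendor's product.

```python
import heapq
import itertools

# Lower number = reviewed sooner. Values are illustrative only.
PRIORITY = {"critical": 0, "urgent": 1, "routine": 2}

class Worklist:
    """Toy radiologist worklist: AI-flagged exams are read first,
    with ties broken by order of arrival."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()

    def add(self, accession, ai_flag="routine"):
        heapq.heappush(
            self._heap, (PRIORITY[ai_flag], next(self._order), accession)
        )

    def next_exam(self):
        return heapq.heappop(self._heap)[2]

wl = Worklist()
wl.add("CR001")                      # routine chest radiograph
wl.add("CR002", ai_flag="critical")  # AI suspects pneumothorax
wl.add("CR003", ai_flag="urgent")
print([wl.next_exam() for _ in range(3)])  # ['CR002', 'CR003', 'CR001']
```

A deployed system would additionally cap how far routine exams can be deferred, so that false-negative AI output cannot starve unflagged studies.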

How Can AI Be Used Beyond Image Interpretation for Noninterpretive Tasks?

Although much attention surrounding AI for analysis of radiographs has focused on tasks related to image interpretation, there is great potential for AI to be used across medical imaging tasks ranging from exam ordering, image acquisition and processing, and interpretation to reporting and patient follow-up, with potential users of AI including not only radiologists but also referring clinicians, technologists, and clerical staff (Table 3).52 While some noninterpretive applications of AI, such as voice recognition, are well established, many of these use cases are in their infancy.

At the exam ordering stage, there is potential for AI to analyze text within the electronic medical record and results from laboratory investigations and previous imaging studies to help determine the appropriateness of a given imaging study. At the image acquisition stage, AI may be used to analyze radiographs for appropriate penetration, exposure, coverage, artefacts, or suspected discrepancies in laterality markers and immediately notify technologists about potential quality concerns. Artificial intelligence may also be used offline to flag radiographs of concern as part of a quality assurance program.53

Solutions to optimize hanging protocols based on individual preferences, and solutions to orient radiographs in standard orientations (particularly in cases where anatomy may be in nonstandard orientations due to difficulty in patient positioning), may draw upon machine learning and computer vision techniques. Presentation of a patient's problem list, laboratory results, and consultation letters in PACS,54 and automated summarization of pertinent clinical information,55,56 such as the location of focal tenderness when interpreting MSK radiographs or the temperature and white blood cell count when interpreting a chest radiograph, may result in more relevant interpretations of radiographs and increase the value radiologists provide to patient care.
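As a toy illustration of the acquisition-stage quality checks described above, an exposure screen can be as simple as flagging a radiograph whose gray-level histogram is heavily clipped. The thresholds and function name here are hypothetical; a real QC tool would be calibrated per detector, view, and exam type.

```python
def exposure_flags(pixels, lo=5, hi=250, clip_limit=0.25):
    """Screen 8-bit pixel values (0-255) for gross exposure problems.

    Returns a list of human-readable flags; an empty list means no
    concern was raised. All thresholds are illustrative placeholders.
    """
    n = len(pixels)
    dark = sum(1 for p in pixels if p <= lo) / n      # fraction clipped black
    bright = sum(1 for p in pixels if p >= hi) / n    # fraction clipped white
    flags = []
    if dark > clip_limit:
        flags.append("possible underexposure")
    if bright > clip_limit:
        flags.append("possible overexposure")
    return flags

# A mostly-black image trips the underexposure flag.
print(exposure_flags([0] * 80 + [128] * 20))  # ['possible underexposure']
```

Checks of this sort run in milliseconds, so the technologist can be notified before the patient leaves the room.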
Figure 2. Radiology workflow incorporating artificial intelligence (AI). Images are sent to the Digital Imaging and Communications in Medicine (DICOM) router, which forwards exams compatible for assessment to the AI algorithm, hosted either on premise or in the cloud. The AI algorithm processes the images and sends image analysis results to the picture archiving and communication system (PACS) as either a notification flag (eg, flagging a case for urgent review) or a secondary capture DICOM image (eg, highlighting regions which the radiologist should focus on). Artificial intelligence results may also be displayed at the radiologist workstation through the reporting/voice recognition software interface. Additionally, AI results may be available to clinicians using an enterprise viewer. Adapted from Dikici et al.68
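The routing step in Figure 2 amounts to a lookup from exam attributes to an algorithm endpoint, plus a rule for how results return to PACS. A minimal sketch, in which the registry contents, endpoint names, and threshold are all hypothetical rather than any vendor's actual interface:

```python
# Which AI algorithm (if any) an incoming exam is routed to, keyed by
# DICOM Modality (0008,0060) and Body Part Examined (0018,0015).
# The registry below is a hypothetical configuration.
AI_REGISTRY = {
    ("CR", "CHEST"): "chest-triage-model",
    ("DX", "CHEST"): "chest-triage-model",
    ("DX", "WRIST"): "fracture-detect-model",
}

def route_exam(modality, body_part):
    """Return the AI endpoint for an exam, or None to bypass AI and
    send the exam straight to PACS."""
    return AI_REGISTRY.get((modality.upper(), body_part.upper()))

def result_to_pacs(finding_score, threshold=0.9):
    """Map an AI output score to one of the two result types in
    Figure 2: a worklist notification flag for likely-critical
    findings, or a secondary-capture overlay otherwise.
    The threshold is illustrative."""
    return "notification-flag" if finding_score >= threshold else "secondary-capture"

print(route_exam("DX", "Chest"))  # chest-triage-model
print(result_to_pacs(0.95))       # notification-flag
```

In practice this logic lives in the DICOM router's configuration; keeping it declarative, as a table, makes adding or retiring algorithms an IT change rather than a software change.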

As patients increasingly have access to their radiology reports and seek to understand them, text simplification, and particularly lexical simplification, has become an important use case.57,58 However, while there have been many reports describing text summarization and text generation using machine learning techniques, efforts to date have found limited success compared to work in image analysis.55,59 Finally, identifying and coding reports with recommendations for further imaging, using traditional natural language processing (NLP) techniques60 or newer deep learning techniques,61 may ensure that patients requiring further imaging are not lost to follow-up.

What Should Be Considered When Selecting AI Products for Radiograph Analysis and Integrating Them into Existing IT Systems?

A primary question for hospitals, clinics, and health systems considering implementing AI solutions for radiograph analysis into clinical workflows is how AI solutions can add value, including by increasing quality and/or by decreasing costs. Potential benefits may include reducing turnaround times for urgent results, reducing errors or missed findings, reducing variability among radiologists, and increasing radiologists' efficiency (by automating tasks such as measurement or prepopulating reports with findings). However, there is a paucity of real-world evidence regarding whether these benefits will in fact be realized.

Most AI products for radiograph analysis currently focus on (1) triaging radiographs for urgent interpretation by prioritizing them in PACS worklists and (2) serving as a second reader or assistant to radiologists or nonradiologist clinicians interpreting radiographs. A typical workflow incorporating AI is presented in Figure 2. Business cases for implementing AI for radiograph analysis will need to balance integration and implementation costs and ongoing software licensing fees with potential efficiency gains and cost savings and potential impact on patient safety and quality of care. Although numerous start-up companies have recently launched and there is increasing interest in AI among established imaging vendors, the market remains fragmented and comprehensive AI solutions across modalities have yet to be developed.62 This makes AI marketplaces, which allow institutions to select specific algorithms
from multiple vendors based on the greatest needs of their institution, appealing for many institutions.63 As radiology practices consider implementing AI, aspects such as whether their IT teams have the time and resources to commit to system integration, whether the product "plugs in" to an existing product or whether a complete install/integration project is required, and vendor compatibility or the availability of vendor-neutral solutions will need to be considered.64 Institutions may also be faced with deciding whether to proceed with on-premise or cloud-based systems. On-premise systems may require additional upfront costs and additional IT expertise and integration time, though they may minimize the risk of personal health information leaving the health system.65 Clinical colleagues in departments outside of radiology are anticipated to have increasing interest in using AI for preliminary imaging diagnoses, and radiology departments should carefully manage the risks and opportunities of facilitating AI tools beyond the radiology department.

Additional considerations are whether local data will be used to fine-tune pretrained algorithms to optimize performance for local settings, as AI solutions may demonstrate varying levels of performance based on imaging equipment, patient demographics, or disease prevalence. In the future, institutions may need to determine whether the algorithm will be trained with additional data to continue to improve performance over time while the product is in use. The use of federated, distributed workflow orchestrations, where machine learning computations are performed locally and only computation results are sent to the cloud, may be appealing to allow for continued algorithm training across sites without personal health information leaving the institution.66 In the future, vendors and institutions may wish to, or be required to, participate in programs to monitor real-world clinical performance of AI solutions.67

Conclusion

The volume of radiographs performed each day across radiology practices makes triage and automated interpretation of radiographs particularly appealing use cases for AI to increase the value that radiology provides to patient care. Artificial intelligence solutions have achieved high diagnostic performance for focused tasks related to interpretation of radiographs, though further research, including clinical effectiveness studies, is needed to better understand the clinical impact of AI in radiology departments and health care systems. It is anticipated that development work in the coming years will continue to expand labeled radiograph data sets available for training and testing AI algorithms; broaden the scope of use cases for AI for analysis of radiographs across interpretive and noninterpretive tasks; expand the use of AI across patient populations, including pediatrics; and enable simple, vendor-neutral integration of AI solutions into legacy IT systems. Although comprehensive AI solutions across modalities have yet to be developed, institutions can begin to integrate focused solutions which add value to their practice from an increasing number of FDA- and CE-approved solutions, and a currently smaller number of Health Canada approved solutions.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD

Robert D. E. Henderson, MBA, PhD https://orcid.org/0000-0003-2909-5384

References

1. Bindman RS, Miglioretti DL, Larson EB. Rising use of diagnostic medical imaging in a large integrated health system. Health Aff. 2008;27(6):1491-1502. doi:10.1377/hlthaff.27.6.1491
2. Lecun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436-444. doi:10.1038/nature14539
3. Hinton G. Deep learning—a technology with the potential to transform health care. JAMA. 2018;320(11):1101-1102. doi:10.1001/jama.2018.11100
4. Rajpurkar P, Irvin J, Zhu K, et al. CheXNet: radiologist-level pneumonia detection on chest x-rays with deep learning. 2017;arXiv:1711.05225.
5. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582. doi:10.1148/radiol.2017162326
6. Lindsey R, Daluiski A, Chopra S, et al. Deep neural network improves fracture detection by clinicians. Proc Natl Acad Sci U S A. 2018;115(45):11591-11596. doi:10.1073/pnas.1806905115
7. GE Healthcare. 510(k) Summary K183182. 2019. Accessed July 18, 2020. https://www.accessdata.fda.gov/cdrh_docs/pdf18/K183182.pdf
8. Zebra Medical Vision. 510(K) Summary—HealthPNX. 2019. Accessed July 18, 2020. https://www.accessdata.fda.gov/cdrh_docs/pdf19/K190362.pdf
9. Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer—recent development and challenges. Br J Radiol. 2020;93(1108):20190580. doi:10.1259/bjr.20190580
10. Geras KJ, Mann RM, Moy L. Artificial intelligence for mammography and digital breast tomosynthesis: current concepts and future perspectives. Radiology. 2019;293(2):246-259. doi:10.1148/radiol.2019182627
11. Lodwick GS, Keats TE, Dorst JP. The coding of roentgen images for computer analysis as applied to lung cancer. Radiology. 1963;81:185-200. doi:10.1148/81.2.185
12. Van Ginneken B, Hogeweg L, Prokop M. Computer-aided diagnosis in chest radiography: beyond nodules. Eur J Radiol. 2009;72(2):226-230. doi:10.1016/j.ejrad.2009.05.061
13. van Ginneken B. Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning. Radiol Phys Technol. 2017;10(1):23-32. doi:10.1007/s12194-017-0394-5
14. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV; 2016:770-778. doi:10.1109/CVPR.2016.90
15. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI; 2017:4700-4708. doi:10.1109/CVPR.2017.243
16. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60:84-90. doi:10.1145/3065386
17. Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition; Boston, MA; 2015:1-9. doi:10.1109/CVPR.2015.7298594
18. Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers RM. ChestX-Ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI; 2017:3462-3471. doi:10.1109/CVPR.2017.369
19. Zebra Medical Vision. 510(K) Summary—HealthCXR. 2019. Accessed July 18, 2020. https://www.accessdata.fda.gov/cdrh_docs/pdf19/K192320.pdf
20. qure.ai. qXR detects various abnormalities on Chest X-Rays n.d. Accessed March 12, 2020. http://qure.ai/qxr.html
21. 1QBit. xrAI n.d. Accessed April 15, 2020. https://1qbit.com/xrai/
22. Qin ZZ, Sander MS, Rai B, et al. Using artificial intelligence to read chest radiographs for tuberculosis detection: a multi-site evaluation of the diagnostic accuracy of three deep learning systems. Sci Rep. 2019;9(1):1-10. doi:10.1038/s41598-019-51503-3
23. Qure.AI. qXR is used for TB screening worldwide n.d. Accessed April 15, 2020. http://qure.ai/qxr-tuberculosis.html
24. Rajpurkar P, Irvin J, Ball RL, et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 2018;15(11):e1002686. doi:10.1371/journal.pmed.1002686
25. Huang Z, Lin J, Xu L, et al. Fusion high-resolution network for diagnosing chest X-ray images. Electronics. 2020;9(1):190. doi:10.3390/electronics9010190
26. Allaouzi I, Ahmed MB. A novel approach for multi-label chest X-ray classification of common thorax diseases. IEEE Access. 2019;7:64279-64288. doi:10.1109/ACCESS.2019.2916849
27. Pan I, Agarwal S, Merck D. Generalizable inter-institutional classification of abnormal chest radiographs using efficient convolutional neural networks. J Digit Imaging. 2019;32(5):888-896. doi:10.1007/s10278-019-00180-9
28. Park S, Lee SM, Lee KH, et al. Deep learning-based detection system for multiclass lesions on chest radiographs: comparison with observer readings. Eur Radiol. 2019;30(3):1359-1368. doi:10.1007/s00330-019-06532-x
29. Johnson AEW, Pollard TJ, Berkowitz SJ, et al. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Sci Data. 2019;6(1):317. doi:10.1038/s41597-019-0322-0
30. Riverain Technologies. Traditional 510(k) Premarket Notification ClearRead +Confirm. 2012. Accessed March 12, 2020. https://www.accessdata.fda.gov/cdrh_docs/pdf12/K123526.pdf
31. Singh V, Danda V, Gorniak R, Flanders A, Lakhani P. Assessment of critical feeding tube malpositions on radiographs using deep learning. J Digit Imaging. 2019;32(4):651-655. doi:10.1007/s10278-019-00229-9
32. Subramanian V, Wang H, Wu JT, Wong KCL, Sharma A, Syeda-Mahmood T. Automated detection and type classification of central venous catheters in chest X-rays. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. 2019:522-530. Springer, Cham. doi:10.1007/978-3-030-32226-7_58
33. Yi X, Adams S, Babyn P, Elnajmi A. Automatic catheter and tube detection in pediatric X-ray images using a scale-recurrent network and synthetic data. J Digit Imaging. 2019;33(1):181-190. doi:10.1007/s10278-019-00201-7
34. Frid-Adar M, Amer R, Greenspan H. Endotracheal tube detection and segmentation in chest radiographs using synthetic data. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. 2019:784-792. Springer, Cham. doi:10.1007/978-3-030-32226-7_87
35. Yi X, Adams SJ, Henderson RDE, Babyn P. Computer-aided assessment of catheters and tubes on radiographs: how good is artificial intelligence for assessment? Radiol Artif Intell. 2020;2(1):e190082. doi:10.1148/ryai.2020190082
36. ML-CDS 2019: Challenge. 2019. Accessed August 10, 2019. http://www.mcbr-cds.org/challenge/challenge-description.html
37. American College of Radiology Data Science Institute. Define-AI Directory n.d. Accessed March 12, 2020. https://www.acrdsi.org/DSI-Services/Define-AI
38. Thodberg HH, Kreiborg S, Juul A, Pedersen KD. The BoneXpert method for automated determination of skeletal maturity. IEEE Trans Med Imaging. 2009;28(1):52-66. doi:10.1109/TMI.2008.926067
39. Spampinato C, Palazzo S, Giordano D, Aldinucci M, Leonardi R. Deep learning for automated skeletal bone age assessment in X-ray images. Med Image Anal. 2017;36:41-51. doi:10.1016/j.media.2016.10.010
40. Larson DB, Chen MC, Lungren MP, Halabi SS, Stence NV, Langlotz CP. Performance of a deep-learning neural network model in assessing skeletal maturity on pediatric hand radiographs. Radiology. 2018;287(1):313-322. doi:10.1148/radiol.2017170236
41. Olczak J, Fahlberg N, Maki A, et al. Artificial intelligence for analyzing orthopedic trauma radiographs: deep learning algorithms—are they on par with humans for diagnosing fractures? Acta Orthop. 2017;88(6):581-586. doi:10.1080/17453674.2017.1344459
42. Varma M, Lu M, Gardner R, et al. Automated abnormality detection in lower extremity radiographs using deep learning. Nat Mach Intell. 2019;1(12):578-583. doi:10.1038/s42256-019-0126-0
43. Image Biopsy Lab. Artificial Intelligence Driven Solutions. 2020. Accessed March 12, 2020. https://imagebiopsylab.com/ai-driven-solutions/
44. Halabi SS, Prevedello LM, Cramer JK, et al. The RSNA pediatric bone age machine learning challenge. Radiology. 2019;290(2):498-503. doi:10.1148/radiol.2018180736
45. Siegel EL. What can we learn from the RSNA pediatric bone age machine learning challenge? Radiology. 2019;290(2):504-555. doi:10.1148/radiol.2018182657
46. Prevedello LM, Halabi SS, Shih G, et al. Challenges related to artificial intelligence research in medical imaging and the importance of image analysis competitions. Radiol Artif Intell. 2019;1(1):e180031. doi:10.1148/ryai.2019180031
47. 16 Bit. Predicting skeletal age n.d. Accessed July 18, 2020. https://www.16bit.ai/bone-age
48. Visiana. BoneXpert version 3.0 released 2019. 2019. Accessed July 18, 2020. https://bonexpert.com/september-2019-bonexpert-version-3-0-released/
49. Yu JS, Yu SM, Erdal BS, et al. Detection and localisation of hip fractures on anteroposterior radiographs with artificial intelligence: proof of concept. Clin Radiol. 2020;75(3):237.e1-e9. doi:10.1016/j.crad.2019.10.022
50. Evaluation of Automatic Class III Designation for OsteoDetect n.d. Accessed July 18, 2020. https://www.accessdata.fda.gov/cdrh_docs/reviews/DEN180005.pdf
51. IB Lab GmbH. 510(k) Summary IB Lab's KOALA. 2019. Accessed July 18, 2020. https://www.accessdata.fda.gov/cdrh_docs/pdf19/K192109.pdf
52. Choy G, Khalilzadeh O, Michalski M, et al. Current applications and future impact of machine learning in radiology. Radiology. 2018;288(2):318-328. doi:10.1148/radiol.2018171820
53. Lakhani P, Prater AB, Hutson RK, et al. Machine learning in radiology: applications beyond image interpretation. J Am Coll Radiol. 2018;15(2):350-359. doi:10.1016/j.jacr.2017.09.044
54. Philips. Philips is first to bring adaptive intelligence to radiology, delivering a new approach to how radiologists see, seek and share patient information. 2016. Accessed March 12, 2020. https://www.philips.com/a-w/about/news/archive/standard/news/press/2016/20161127-philips-is-first-to-bring-adaptive-intelligence-to-radiology.html
55. Rush AM, Chopra S, Weston J. A neural attention model for abstractive sentence summarization. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing; Lisbon, Portugal; September 2015:379-389.
56. Nallapati R, Zhou B, dos Santos C, Gulçehre Ç, Xiang B. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In: Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning; Berlin, Germany; 2016:280-290. doi:10.18653/v1/k16-1028
57. Qenam B, Kim TY, Carroll MJ, Hogarth M. Text simplification using consumer health vocabulary to generate patient-centered radiology reporting: translation and evaluation. J Med Internet Res. 2017;19(12):e417. doi:10.2196/jmir.8536
58. Adams SJ, Tang R, Babyn P. Patient perspectives and priorities regarding artificial intelligence in radiology: opportunities for patient-centered radiology. J Am Coll Radiol. 2020:S1546-1550(20)30031-30034. doi:10.1016/j.jacr.2020.01.007
59. Hu Z, Yang Z, Liang X, Salakhutdinov R, Xing EP. Toward controlled generation of text. In: Proceedings of the 34th International Conference on Machine Learning (ICML 2017); Sydney, Australia; PMLR 2017;70:1587-1596.
60. Dreyer KJ, Kalra MK, Maher MM, et al. Application of recently developed computer algorithm for automatic classification of unstructured radiology reports: validation study. Radiology. 2005;234(2):323-329. doi:10.1148/radiol.2341040049
61. Chen MC, Ball RL, Yang L, et al. Deep learning to classify radiology free-text reports. Radiology. 2018;286(3):845-852. doi:10.1148/radiol.2017171115
62. Alexander A, Jiang A, Ferreira C, Zurkiya D. An intelligent future for medical imaging: a market outlook on artificial intelligence for medical imaging. J Am Coll Radiol. 2020;17(1 Pt B):165-170. doi:10.1016/j.jacr.2019.07.019
63. Parekh S. Selecting an AI Marketplace for Radiology: Key Considerations for Healthcare Providers. Imaging Technol News; 2019. Accessed March 12, 2020. https://www.itnonline.com/article/selecting-ai-marketplace-radiology-key-considerations-healthcare-providers
64. Browning T, O'Neill T, Ng Y, Fielding JR, Peshock RM. Special considerations for integrating artificial intelligence solutions in urban safety-net hospitals. J Am Coll Radiol. 2020;17(1 Pt B):171-174. doi:10.1016/j.jacr.2019.08.016
65. Freund K. AI and HPC: Cloud or on-premises hosting. 2019. Accessed April 15, 2020. http://www.moorinsightsstrategy.com/wp-content/uploads/2019/02/AI-And-HPC-Cloud-Or-On-Premises-Hosting-By-Moor-Insights-And-Strategy.pdf
66. Chang PJ. Moving artificial intelligence from feasible to real: time to drill for gas and build roads. Radiology. 2020;294(2):432-433. doi:10.1148/radiol.2019192527
67. Allen B. The role of the FDA in ensuring the safety and efficacy of artificial intelligence software and devices. J Am Coll Radiol. 2019;16(2):208-210. doi:10.1016/j.jacr.2018.09.007
68. Dikici E, Bigelow M, Prevedello LM, White RD, Erdal BS. Integrating AI into radiology workflow: levels of research, production, and feedback maturity. J Med Imaging. 2020;7(1):016502. doi:10.1117/1.jmi.7.1.016502
69. Irvin J, Rajpurkar P, Ko M, et al. CheXpert: a large chest radiograph dataset with uncertainty labels and expert comparison. Proc AAAI Conf Artif Intell. 2019;33(1):590-597. doi:10.1609/aaai.v33i01.3301590
70. Bustos A, Pertusa A, Salinas JM, de la Vayá MI. PadChest: a large chest x-ray image dataset with multi-label annotated reports. 2019;arXiv:1901.07441.
71. Rajpurkar P, Irvin J, Bagul A, et al. MURA: large dataset for abnormality detection in musculoskeletal radiographs. In: Proceedings of the 1st Conference on Medical Imaging with Deep Learning; Amsterdam, the Netherlands; 2018.
72. Stanford University Center for Artificial Intelligence in Medicine & Imaging. LERA - Lower Extremity RAdiographs n.d. Accessed July 18, 2020. https://aimi.stanford.edu/lera-lower-extremity-radiographs
73. NIMH Data Archive. The Osteoarthritis Initiative n.d. Accessed July 18, 2020. https://nda.nih.gov/oai/
74. Gertych A, Zhang A, Sayre J, Kurkowska SP, Huang HK. Bone age assessment of children using a digital hand atlas. Comput Med Imaging Graph. 2007;31(4-5):322-331. doi:10.1016/j.compmedimag.2007.02.012
75. Hwang EJ, Park S, Jin KN, et al. Development and validation of a deep learning-based automated detection algorithm for major thoracic diseases on chest radiographs. JAMA Netw Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
76. VUNO. Publications n.d. Accessed July 18, 2020. https://www.vuno.co/publications?page=3
77. Freedman MT, Lo SCB, Seibel JC, Bromley CM. Lung nodules: improved detection with software that suppresses the rib and clavicle on chest radiographs. Radiology. 2011;260(1):265-273.
