Abstract
In recent years, Machine Learning (ML) approaches have shown promising
results in many medical imaging analysis tasks. In particular, Bone
Metastases (BM) analysis is attracting significant interest from both the medical
and computer vision communities due to its critical and challenging nature.
Despite the research effort, detecting BM is still an open problem, primarily
because of the lack of available datasets. This is due to two main obstacles:
(i) the substantial time required for data collection and annotation and (ii) data
privacy constraints. To address these challenges, we propose BM-Seg, a new
dataset for BM segmentation from CT-scans. Our BM-Seg dataset consists
of 1517 CT images from 23 patients, where BM and bone regions were labeled
by three expert radiologists. BM-Seg is constructed to cover the diversity of
bone metastases in terms of location, organ, and severity.
Moreover, we propose a new CNN-based approach for BM segmentation
in which two main contributions are presented. First, we introduce a Dual-
Decoder Attention Unet++ (DAUnet++) architecture based on Unet++
∗ Corresponding author.
E-mail addresses: marwa.afnouch@enis.tn, olfa.gaddour@enis.tn, yosr.hentati@yahoo.fr, faresbougourzi@gmail.com, med.abid@enis.tn, Abdelmalik.Taleb-Ahmed@uphf.fr, ihsen.alouani@uphf.fr
1. Introduction
Metastases refer to a group of abnormal cells that develop beyond
their normal borders and spread to other organs. In particular, BM is a
cancer that originates in one of the body's organs, such as the breast, lung, or prostate,
and spreads to the bone. Metastases were found to be a contributing factor
in 90% of cancer deaths Steeg (2006); Seyfried and Huysentruyt (2013).
Bone ranks third, behind the liver and lung, among the most common sites of
metastasis Macedo et al. (2017). Statistics show that more than 70% of
patients with breast and prostate cancer have BM O'Sullivan (2015). In this
regard, the early detection of bone cancers is essential for making the right
treatment decisions Coleman et al. (2020).
Tracking BM requires medical imaging to diagnose the disease and determine
treatment effectiveness. Several medical imaging modalities have been used,
including Computed Tomography (CT), Magnetic Resonance Imaging (MRI),
bone scans, and PET scans, where each imaging modality has its advantages
and drawbacks Turpin et al. (2020). In particular, CT-scans have a high
spatial resolution, which allows the detection of large bone lesions as well
as mild and small ones O'Sullivan (2015). They also offer a simultaneous
evaluation of primary bone lesions and BM lesions. Besides, CT-scans are commonly
available at relatively low cost Hammon et al. (2013). In the clinical setting,
CT-scans are the most commonly used imaging modality for both baseline
staging tests and serial surveillance of patients with cancer Heindel et al.
(2014).
However, the detection of BM from CT-scans is both challenging and
time-consuming for the following reasons: (i) since bones are distributed throughout
the body, radiologists must examine all slices Noguchi et al. (2020); (ii)
the radiological appearance of BM depends on the lesion type Heindel et al.
(2014), so no single window parameter can adequately represent all
bone metastases; and (iii) benign mimics such as fractures, bone islands, and
degenerative changes can be confused with BM lesions, which complicates the
diagnosis Vandemark et al. (1992). Computer-Aided Diagnosis (CAD) can
assist radiologists in finding small lesions that might otherwise be missed
Jadon (2020). Consequently, there is a high demand for CAD systems for
BM on CT-scans.
Recently, Deep Learning methods have gained much attention for solving
several vision-related tasks Bougourzi et al. (2020); Litjens et al. (2017);
Chakraborty and Mali (2021); Bougourzi et al. (2022). In particular, Convolutional
Neural Networks (CNNs) have achieved state-of-the-art performance
on various types of image recognition problems Vantaggiato et al. (2021);
Bougourzi et al. (2021). Among these problems, Semantic Segmentation, where
each pixel in an image is assigned to one of a set of labels, has been one of the
most studied topics in recent years Hiramatsu et al. (2018). In the
medical imaging field, automatic segmentation of infection lesions is a critical
step in diagnosing and understanding a disease Diniz et al. (2022); Allah
et al. (2023). In recent years, the MRI and CT imaging modalities have been
widely used alongside machine learning approaches for automatic lesion segmentation
da Cruz et al. (2022); Lei et al. (2021). However, the automated
segmentation of BM from CT-scans has received relatively little attention
Noguchi et al. (2022); Chmelik et al. (2018).
To the best of our knowledge, there is no publicly available dataset associated
with BM segmentation. This work aims to create a benchmark dataset
for BM segmentation from CT-scans. Moreover, we propose a new CNN-based
approach for BM segmentation to further improve the segmentation
performance compared with state-of-the-art architectures.
The main contributions of this work can be listed as follows:
1. We introduce the BM-Seg dataset for segmenting BM from CT-scans. The
BM-Seg dataset is publicly available at https://BMseg shortlink.edu
2. We present DAUnet++, a new architecture derived from Unet++ that
combines attention gates and dual decoders. DAUnet++ efficiently
segments both BM and bone regions.
3. To boost BM segmentation, we propose an ensemble approach,
which we name EDAUnet++. In this approach, five trained DAUnet++
models are used to enhance the predictions of the single models by averaging
the predicted probabilities of the different estimators.
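The averaging step behind this ensemble can be illustrated with a minimal sketch in plain Python; the model outputs are mocked as nested probability lists, and the 0.5 binarization threshold is an assumption, not a detail taken from the paper's implementation.

```python
def ensemble_predict(prob_maps, threshold=0.5):
    """Average per-pixel probabilities from several models, then binarize."""
    n_models = len(prob_maps)
    h, w = len(prob_maps[0]), len(prob_maps[0][0])
    mask = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            avg = sum(m[i][j] for m in prob_maps) / n_models
            mask[i][j] = 1 if avg >= threshold else 0
    return mask

# Five mock "model" outputs for a 1x3 image; per-pixel averages
# are [0.78, 0.5, 0.2], so the final mask is [[1, 1, 0]].
preds = [[[0.9, 0.4, 0.2]], [[0.8, 0.6, 0.1]], [[0.7, 0.5, 0.3]],
         [[0.9, 0.3, 0.2]], [[0.6, 0.7, 0.2]]]
print(ensemble_predict(preds))  # [[1, 1, 0]]
```

Averaging probabilities before thresholding, rather than voting on binary masks, preserves each model's confidence and tends to smooth out individual errors.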
2. Related Work
In recent years, there has been considerable interest in the automatic assessment
of BM from different medical imaging modalities such as bone scans,
CT-scans, and SPECT. State-of-the-art works focus on two main tasks:
(i) classifying BM scans into normal and abnormal cases and (ii) segmenting
lesions. Several recent research works have dealt with the classification
of BM using deep learning methods Papandrianos et al. (2020b);
Aslantas et al. (2016); Masoudi et al. (2021); Pi et al. (2020); Li et al. (2020);
Guo et al. (2022); Apiparakoon et al. (2020). On the other hand, automated
segmentation of metastatic lesions is still in its infancy. Shimizu et al. (2020)
proposed a deep learning-based image interpretation system that segments the
skeleton and extracts metastatic hotspots from a whole-body bone scintigram.
Zhang et al. (2021) developed a segmentation
algorithm based on UNet with an attention mechanism for SPECT bone
segmentation, which can automatically identify the location of BM. Moreau
et al. (2020) compared different approaches for segmenting bones and metastatic
bone lesions in breast cancer: two deep learning methods based on
U-Net were developed and trained to segment either both bones and bone
lesions, or bone lesions alone, on PET/CT images. Later, Fan et al. (2021)
proposed a deep learning algorithm for identifying spinal metastasis from lung cancer.
Table 1: Summary of existing Bone Metastasis datasets

| Work                    | Modality  | Data                                              | Task           | Availability |
|-------------------------|-----------|---------------------------------------------------|----------------|--------------|
| Aslantas et al. (2016)  | Bone scan | 130 images [30 Normal; 100 BM], 60 patients       | Classification | Private      |
| Han et al. (2021)       | Bone scan | 9133 images [2991 BM; 6142 Normal], 5342 patients | Classification | Private      |
| Masoudi et al. (2021)   | CT        | 2880 scans, 114 patients [41 BM]                  | Classification | Private      |
| Noguchi et al. (2022)   | CT        | 269 BM scans, 169 patients                        | Segmentation   | Private      |
| Moreau et al. (2020)    | PET/CT    | 24 patients                                       | Segmentation   | Private      |
| Cao et al. (2023)       | SPECT     | 260 scans, 130 patients                           | Segmentation   | Private      |
| Lin et al. (2020)       | SPECT     | 152 scans, 76 patients                            | Classification | Private      |
| The proposed dataset    | CT        | 1517 images, 23 patients                          | Segmentation   | Public       |
They proposed a dilated convolutional U-Net (DC-U-Net) model to segment
the energy/spectral CT images. More recently, Noguchi et al. (2022)
used CT images and proposed a segmentation approach based on three convolutional
neural networks (CNNs): a 2D UNet-based network for segmenting
bones, a 3D UNet-based network for segmenting candidate regions, and a
3D ResNet-based network for reducing false-positive results. Although the
state-of-the-art methods achieved promising results, their impact is limited
by the use of private datasets, which prevents the community from exploiting the
data and building upon their findings. In addition, existing works have used
several imaging modalities including MRI, bone scan, and PET, while few
works have used CT-scans despite their distinct advantages. Table 1 shows a comparison
of existing BM datasets. As Table 1 shows, the main limitation of
existing datasets is their small size, which can lead to overfitting. To overcome
these problems, we created a new BM segmentation dataset and made
it publicly available to the research community.
3. BM-Seg Dataset
In this section, we describe the steps followed to create the BM-Seg dataset, as illustrated in Figure 1.
Initially, a contrast agent is injected into the patient's veins to highlight
any bone lesions. Once the patient exits the CT scanner, the CT images
are sent to three radiologists for examination. After the doctors interpret the
data, the regions of interest (RoIs) representing the bone and lesion regions
in each CT-scan slice are marked using the Apeer tool Zeiss (2021).
At the end of the process of building the BM-Seg dataset, we obtain both a bone
mask and a lesion mask for each BM image.
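The outcome of this pipeline, a CT image paired with a bone mask and a lesion mask, could be held in a record like the following sketch; the field names and file layout are illustrative, not those of the released dataset.

```python
from dataclasses import dataclass

@dataclass
class BMSegSample:
    """One annotated BM-Seg slice: the CT image plus the two masks
    produced by the annotation pipeline. Field names are illustrative,
    not taken from the released dataset's layout."""
    image_path: str        # CT slice (JPEG)
    bone_mask_path: str    # binary bone mask
    lesion_mask_path: str  # binary BM lesion mask
    patient_id: int

sample = BMSegSample("ct_0001.jpg", "bone_0001.png", "lesion_0001.png", 7)
print(sample.patient_id)  # 7
```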
Figure 2: BM examples from BM-Seg. Eight slices from different patients are reported.
The first row contains BM images, and the second row contains an annotation of each
example (colors refer to lesion locations).
The selected slices were converted to JPEG format, then the bone and BM
regions were manually segmented to extract the ground-truth (GT) masks using
the Apeer software. Figure 3 shows four CT-scan images extracted from BM-Seg
with their corresponding ground-truth masks. The selected images in the
figure belong to different sites of the human skeleton and show various BM
lesion sizes. As the labelling process is time-consuming, 70 infected slices
were randomly selected from each CT-scan. For CT-scans with fewer
than 70 infected slices, all infected slices were selected. In
total, 1517 slices were annotated by creating the bone metastasis and bone
masks.
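The slice-selection rule above can be sketched as follows, assuming a hypothetical `select_slices` helper; the authors' actual sampling procedure is not specified beyond being random.

```python
import random

def select_slices(infected_slice_ids, max_per_scan=70, seed=0):
    """Randomly keep at most `max_per_scan` infected slices per CT-scan;
    scans with fewer infected slices contribute all of them."""
    if len(infected_slice_ids) <= max_per_scan:
        return sorted(infected_slice_ids)
    rng = random.Random(seed)  # fixed seed for a reproducible subset
    return sorted(rng.sample(infected_slice_ids, max_per_scan))

small_scan = list(range(12))   # fewer than 70 infected slices: all kept
large_scan = list(range(200))  # more than 70 infected slices: capped
print(len(select_slices(small_scan)), len(select_slices(large_scan)))  # 12 70
```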
Figure 3: Examples of CT images with various BM sizes and different positions in the
bone skeleton.
contrast to prior studies that excluded misleading examples and concentrated
on only one type of primary cancer. Thus, a trained system that uses BM-Seg
is expected to be more appropriate for routine clinical applications.
• Bone lesions may appear not just in one region but along several
sites such as the ribs, pelvis, and spine. To identify and distinguish bone
lesions, human experts must scrutinize all the CT-scan slices.
Figure 4: Osteolytic lesion appearance (green circles) vs. disc appearance (red circles) in the BM-Seg dataset
It is difficult to distinguish between a disc and a bone lesion in a CT-scan, especially
for osteolytic lesions, and radiologists have to spend considerable time
verifying a single patient's scan. Figure 5 shows an additional issue in BM
segmentation: the difficulty of differentiating between osteoblastic lesions
and degenerative bones, as they have similar appearances.
4. Proposed Approach
In this section, we describe the design of our proposed EDAUnet++ approach for BM segmentation.
from 512 to 256, 128, 64, 32, and 1, respectively, using five BCBlocks. To
transmit the context information recovered by the encoder to the decoders
of the relevant layers, we use nested layers that allow the extraction
of more efficient hierarchical features.
• Attention gates are integrated between BCBlocks to select the most crucial
information for the segmentation task before propagating it to the
decoder part. Each attention gate combines the output of the previous
BCBlock with the up-sampled output of the corresponding lower block.
• Up-sampling and skip connections are used to extract representative
features and bridge semantic gaps between features. The outputs
of the attention gates are concatenated with the up-sampled output of the
lower skip path to reach the next nested layer.
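An additive attention gate in the style of Attention U-Net (Oktay et al., 2018) can be sketched with scalars standing in for the 1×1 convolutions; the paper's exact gate design may differ, so this is an assumption-laden illustration, not the authors' implementation.

```python
import math

def attention_gate(x, g, w_x=1.0, w_g=1.0, w_psi=1.0):
    """Scalar sketch of an additive attention gate:
    alpha = sigmoid(psi(relu(Wx*x + Wg*g))); output = alpha * x.
    Real gates use 1x1 convolutions on feature maps; scalar weights
    keep the sketch runnable and easy to inspect."""
    def relu(v): return max(0.0, v)
    def sigmoid(v): return 1.0 / (1.0 + math.exp(-v))
    alpha = sigmoid(w_psi * relu(w_x * x + w_g * g))  # attention coefficient in (0, 1)
    return alpha * x

# A skip feature supported by the gating signal passes nearly unchanged;
# one with no support falls back to alpha = sigmoid(0) = 0.5.
print(attention_gate(1.0, 5.0), attention_gate(1.0, -5.0))
```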
Let $x^{i,j}$ denote the output of BCBlock $X^{i,j}$, where $i$ indexes the
feature depth in the encoder and $j$ the depth of the convolution layer in
the nested block. The extracted feature map $x^{i,j}$ is defined as:

$$
x^{i,j} =
\begin{cases}
H\left[x^{i-1,j}\right], & j = 0\\
H\left[\left[Ag(x^{i,n})\right]_{n=0}^{j-1},\ Up(x^{i+1,j-1})\right], & j > 0
\end{cases}
\tag{1}
$$

where $H[\cdot]$ denotes the BCBlock operation, $Ag(\cdot)$ the attention gate, $Up(\cdot)$ up-sampling, and $[\cdot\,,\cdot]$ concatenation.
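The recursion in Eq. (1) can be traced with mock operations that record, as strings, which inputs each block receives; `H`, `Ag`, and `Up` below are placeholders for the real convolutional block, attention gate, and up-sampling, so only the nested connectivity pattern is being demonstrated.

```python
def build_nested_features(depth, H, Ag, Up, x_input):
    """Fill the grid x[(i, j)] following Eq. (1):
    x[i,0] = H([x[i-1,0]]);  x[i,j] = H([Ag(x[i,0..j-1]), Up(x[i+1,j-1])])."""
    x = {(0, 0): H([x_input])}
    for i in range(1, depth):                  # backbone column (j = 0)
        x[(i, 0)] = H([x[(i - 1, 0)]])
    for j in range(1, depth):                  # nested columns (j > 0)
        for i in range(depth - j):
            skips = [Ag(x[(i, n)]) for n in range(j)]  # gated skip features
            x[(i, j)] = H(skips + [Up(x[(i + 1, j - 1)])])
    return x

# Mock operations that just record the wiring as strings.
H = lambda inputs: "H(" + ",".join(inputs) + ")"
Ag = lambda s: "Ag(" + s + ")"
Up = lambda s: "Up(" + s + ")"

x = build_nested_features(3, H, Ag, Up, "in")
print(x[(0, 1)])  # H(Ag(H(in)),Up(H(H(in))))
```

For depth 3 this produces six feature nodes, matching the triangular layout of a UNet++-style nested grid.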
4.3. EDAUnet++
Ensemble models are created in ML by merging the predictions of several
independent models to enhance the overall prediction. The aim of the
ensemble method is to reduce generalization errors in ML algorithms.
Figure 8: The proposed ensemble framework for BM segmentation
$$Loss_L = L_{BCE}(y_L, \hat{y}_L) \tag{4}$$

$$Loss_B = L_{BCE}(y_B, \hat{y}_B) \tag{5}$$

where $y_L$ and $y_B$ are the lesion and bone predictions, respectively, and $\hat{y}_L$ and $\hat{y}_B$ are the lesion and bone ground-truth masks.
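Equations (4) and (5) amount to binary cross-entropy applied to each decoder's output; a plain-Python sketch follows. How the two losses are combined is not shown in this excerpt, so a simple sum is assumed here.

```python
import math

def bce(y_pred, y_true, eps=1e-7):
    """Binary cross-entropy averaged over a flat list of pixels."""
    total = 0.0
    for p, t in zip(y_pred, y_true):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_pred)

def dual_decoder_loss(lesion_pred, lesion_mask, bone_pred, bone_mask):
    """Assumed combination: BCE of the lesion decoder plus BCE of the
    bone decoder (the exact weighting is not given in this excerpt)."""
    return bce(lesion_pred, lesion_mask) + bce(bone_pred, bone_mask)

# Perfect predictions give a near-zero loss; uncertain ones do not.
print(dual_decoder_loss([1.0], [1], [0.0], [0]))
print(dual_decoder_loss([0.5], [1], [0.5], [0]))
```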
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{7}$$

$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{8}$$

$$\mathrm{Specificity} = \frac{TN}{TN + FP} \tag{9}$$

$$\mathrm{Sensitivity} = \frac{TP}{TP + FN} \tag{10}$$

$$\mathrm{IoU} = \frac{|Target \cap Predicted|}{|Target \cup Predicted|} \tag{11}$$

$$\text{Dice-score} = \frac{1}{N}\sum_{i=1}^{N}\frac{2\,TP_i}{2\,TP_i + FP_i + FN_i} \tag{12}$$
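The evaluation metrics in Eqs. (7)–(12) can be computed from the confusion counts of a flattened binary mask, as in this sketch; the `dice` entry is the per-image term of Eq. (12), which is then averaged over the N test images.

```python
def confusion_counts(pred, target):
    """TP, TN, FP, FN over two flat binary masks of equal length."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    return tp, tn, fp, fn

def segmentation_metrics(pred, target):
    tp, tn, fp, fn = confusion_counts(pred, target)
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),  # Eq. (7)
        "precision":   tp / (tp + fp),                   # Eq. (8)
        "specificity": tn / (tn + fp),                   # Eq. (9)
        "sensitivity": tp / (tp + fn),                   # Eq. (10)
        "iou":         tp / (tp + fp + fn),              # Eq. (11)
        "dice":        2 * tp / (2 * tp + fp + fn),      # per-image term of Eq. (12)
    }

pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
m = segmentation_metrics(pred, target)
print(m["iou"], m["dice"])  # 0.5 0.666...
```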
AttUnet, Unet++ and AttUnet++, respectively. EDAUnet++ outperforms
Unet, AttUnet, Unet++, and AttUnet++ by 5.9%, 6.06%, 5.6%, and 4.86%,
respectively, on the IoU metric. For the accuracy, specificity, sensitivity, and precision
metrics, EDAUnet++ likewise provides the highest results.
Table 5: F1-score of 5-fold cross-validation experiments for BM segmentation
Figure 9: Visual comparison of BM segmentation results
In Figure 10, the output results of five runs of DAUnet++ are compared
with the output of the ensemble method. Clearly, EDAUnet++ consistently
provides the best F1-score, regardless of the location
and size of the lesion. For example, in the second row, EDAUnet++
reaches an F1-score of 77.44%, while DAUnet++ does not exceed 75% in any of the
five runs (M1-M5). Particularly for small-scale metastases, as shown in the
third row, our model extracts more lesion features and achieves a higher
F1-score, demonstrating its ability to learn less redundant information and
to integrate global context. Consequently, metastatic bone segmentation can
be greatly improved with EDAUnet++ by extracting more comprehensive
background features, providing more detailed information, and obtaining a
more comprehensive semantic representation.
Table 6: Ablation results on the BM-Seg dataset

| Architecture | Att | DD | En | F1-score | Dice  | IoU   |
|--------------|-----|----|----|----------|-------|-------|
| Unet++       | ✗   | ✗  | ✗  | 79.74    | 71.99 | 66.31 |
| AttUnet++    | ✓   | ✗  | ✗  | 80.27    | 72.63 | 67.06 |
| DAUnet++     | ✓   | ✓  | ✗  | 82.27    | 75.70 | 69.89 |
| EDAUnet++    | ✓   | ✓  | ✓  | 83.67    | 77.05 | 71.92 |
and IoU metrics. Specifically, the dual-decoder structure improves the
F1-score, Dice, and IoU by 2.0%, 3.0%, and 2.0%, respectively. The ensemble
method further enhances the F1-score, Dice, and IoU by 1.0%, 2.0%, and
2.0%, respectively, compared to DAUnet++.
Both the visualization results and the test metrics demonstrate that the proposed
approach can capture semantic information from both the bone and
metastatic bone regions, and that it outperforms the comparison models for BM
segmentation in complex situations, where lesions of multiple sizes coexist and
small lesions are prone to being missed due to their similarity in appearance.
Although our research demonstrated the advantages and prospects of using
deep learning for BM segmentation from CT-scans, more work remains to be done
on BM analysis. It should be noted that our
BM-Seg dataset was collected from a single center, the University Hospital
Center Hedi Chaker (Sfax, Tunisia), whose patients mainly come
from the south of Tunisia. Pathological data can diverge widely across clinical
scenarios from different regions and ethnicities. Moreover, larger datasets can help
in creating real-world applications for BM analysis. However, due to patient
scarcity and privacy issues, building a large dataset is very challenging. Fortunately,
our work has attracted more hospitals to join our future project for
the next version of BM-Seg. Our aim is to enlarge the current BM-Seg dataset
to cover BM recognition, localization, and segmentation.
7. Conclusion
This paper proposes BM-Seg, the first publicly available annotated dataset
for automating the segmentation of metastatic lesions in bone CT-scan images.
We hope that this benchmark will help the community achieve more progress
on the BM segmentation problem. Moreover, we proposed EDAUnet++, an
ensemble method that highlights areas of high uptake in CT-scan images,
to improve segmentation performance. The proposed approach outperforms
the popular cutting-edge models evaluated in our experiments and significantly
improves the F1-score.
In future work, we will extend BM-Seg into a more diverse, multi-center BM
dataset, which will allow studying the generalization ability of machine
learning algorithms in realistic application scenarios.
Acknowledgement
The authors would like to express their deepest gratitude to Doctors
Zaineb Mnif and Hela Fendri for their helpful advice on various technical
issues examined in this paper, and to the Tunisian Ministry of Health for
giving permission to conduct this study.
This work was supported by the doctoral school 'Sciences & Technologies'
of the National School of Engineers of Sfax (ENIS) and the Computer and
Embedded Systems (CES) laboratory.
References
Allah, A.M.G., Sarhan, A.M., Elshennawy, N.M., 2023. Edge u-net:
Brain tumor segmentation using MRI based on deep u-net model with
boundary information. Expert Systems with Applications 213, 118833.
URL: https://doi.org/10.1016%2Fj.eswa.2022.118833, doi:10.1016/
j.eswa.2022.118833.
Aslantas, A., Dandil, E., Sağlam, S., Çakıroğlu, M., et al., 2016. CADBOSS: A
computer-aided diagnosis system for whole-body bone scintigraphy scans.
Journal of Cancer Research and Therapeutics 12, 787.
Bougourzi, F., Distante, C., Ouafi, A., Dornaika, F., Hadid, A., Taleb-
Ahmed, A., 2021. Per-COVID-19: A Benchmark Dataset for COVID-
19 Percentage Estimation from CT-Scans. Journal of Imaging 7, 189.
doi:10.3390/jimaging7090189. number: 9 Publisher: Multidisciplinary
Digital Publishing Institute.
Bougourzi, F., Dornaika, F., Barrena, N., Distante, C., Taleb-Ahmed, A.,
2022. CNN based facial aesthetics analysis through dynamic robust losses
and ensemble regression. Applied Intelligence URL: https://doi.org/
10.1007/s10489-022-03943-0, doi:10.1007/s10489-022-03943-0.
Bougourzi, F., Dornaika, F., Mokrani, K., Taleb-Ahmed, A., Ruichek, Y.,
2020. Fusing transformed deep and shallow features (ftds) for image-
based facial expression recognition. Expert Systems with Applications
156, 113459.
Cao, Y., Liu, L., Chen, X., Man, Z., Lin, Q., Zeng, X., Huang, X.,
2023. Segmentation of lung cancer-caused metastatic lesions in bone
scan images using self-defined model with deep supervision. Biomedical
Signal Processing and Control 79, 104068. URL: https://doi.org/10.1016/j.bspc.2022.104068,
doi:10.1016/j.bspc.2022.104068.
Cheng, D.C., Liu, C.C., Hsieh, T.C., Yen, K.Y., Kao, C.H., 2021.
Bone metastasis detection in the chest and pelvis from a whole-
body bone scan using deep learning and a small dataset. Electronics
10, 1201. URL: https://doi.org/10.3390%2Felectronics10101201,
doi:10.3390/electronics10101201.
Chmelik, J., Jakubicek, R., Walek, P., Jan, J., Ourednicek, P., Lam-
bert, L., Amadori, E., Gavelli, G., 2018. Deep convolutional neu-
ral network-based segmentation and classification of difficult to define
metastatic spinal lesions in 3d CT data. Medical Image Analysis 49, 76–
88. URL: https://doi.org/10.1016%2Fj.media.2018.07.008, doi:10.
1016/j.media.2018.07.008.
Coleman, R.E., Brown, J., Holen, I., 2020. Bone metastases. Abeloff’s
Clinical Oncology , 809–830.
da Cruz, L.B., Júnior, D.A.D., Diniz, J.O.B., Silva, A.C., de Almeida, J.D.S.,
de Paiva, A.C., Gattass, M., 2022. Kidney tumor segmentation from com-
puted tomography images using DeepLabv3 2.5d model. Expert Systems
with Applications 192, 116270. URL: https://doi.org/10.1016%2Fj.
eswa.2021.116270, doi:10.1016/j.eswa.2021.116270.
Diniz, J.O.B., Ferreira, J.L., Cortes, O.A.C., Silva, A.C., de Paiva, A.C.,
2022. An automatic approach for heart segmentation in CT scans through
image processing techniques and concat-u-net. Expert Systems with Appli-
cations 196, 116632. URL: https://doi.org/10.1016%2Fj.eswa.2022.
116632, doi:10.1016/j.eswa.2022.116632.
Fan, X., Zhang, X., Zhang, Z., Jiang, Y., 2021. Deep learning-based identi-
fication of spinal metastasis in lung cancer using spectral ct images. Sci-
entific Programming 2021.
Guo, Y., Lin, Q., Zhao, S., Li, T., Cao, Y., Man, Z., Zeng, X., 2022. Au-
tomated detection of lung cancer-caused metastasis by classifying scinti-
graphic images using convolutional neural network with residual connec-
tion and hybrid attention mechanism. Insights into Imaging 13, 1–13.
Hammon, M., Dankerl, P., Tsymbal, A., Wels, M., Kelm, M., May, M.,
Suehling, M., Uder, M., Cavallaro, A., 2013. Automatic detection of
lytic and blastic thoracolumbar spine metastases on computed tomogra-
phy. European Radiology 23, 1862–1870. URL: https://doi.org/10.
1007%2Fs00330-013-2774-5, doi:10.1007/s00330-013-2774-5.
Han, S., Oh, J.S., Lee, J.J., 2021. Diagnostic performance of deep learning
models for detecting bone metastasis on whole-body bone scan in prostate
cancer. European Journal of Nuclear Medicine and Molecular Imaging
49, 585–595. URL: https://doi.org/10.1007%2Fs00259-021-05481-2,
doi:10.1007/s00259-021-05481-2.
Heindel, W., Gübitz, R., Vieth, V., Weckesser, M., Schober, O., Schäfers, M.,
2014. The diagnostic imaging of bone metastases. Deutsches Ärzteblatt
International 111, 741.
Hiramatsu, Y., Hotta, K., Imanishi, A., Matsuda, M., Terai, K., 2018. Cell
image segmentation by integrating multiple CNNs, in: 2018 IEEE/CVF
Conference on Computer Vision and Pattern Recognition Workshops
(CVPRW), IEEE. URL: https://doi.org/10.1109%2Fcvprw.2018.
00296, doi:10.1109/cvprw.2018.00296.
Jadon, S., 2020. A survey of loss functions for semantic segmentation, in:
2020 IEEE Conference on Computational Intelligence in Bioinformatics
and Computational Biology (CIBCB), IEEE. pp. 1–7.
Lei, X., Yu, X., Chi, J., Wang, Y., Zhang, J., Wu, C., 2021. Brain tumor
segmentation in MR images using a sparse constrained level set algorithm.
Expert Systems with Applications 168, 114262. URL: https://doi.org/
10.1016%2Fj.eswa.2020.114262, doi:10.1016/j.eswa.2020.114262.
Li, C., Tan, Y., Chen, W., Luo, X., Gao, Y., Jia, X., Wang, Z., 2020. Atten-
tion unet++: A nested attention-aware u-net for liver ct image segmenta-
tion, in: 2020 IEEE International Conference on Image Processing (ICIP),
IEEE. pp. 345–349.
Lin, Q., Luo, M., Gao, R., Li, T., Man, Z., Cao, Y., Wang, H., 2020.
Deep learning based automatic segmentation of metastasis hotspots in
thorax bone SPECT images. PLOS ONE 15, e0243253. URL: https:
//doi.org/10.1371%2Fjournal.pone.0243253, doi:10.1371/journal.
pone.0243253.
Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian,
M., Van Der Laak, J.A., Van Ginneken, B., Sánchez, C.I., 2017. A survey
on deep learning in medical image analysis. Medical image analysis 42,
60–88.
Macedo, F., Ladeira, K., Pinho, F., Saraiva, N., Bonito, N., Pinto, L.,
Gonçalves, F., 2017. Bone metastases: an overview. Oncology reviews
11.
Masoudi, S., Mehralivand, S., Harmon, S.A., Lay, N., Lindenberg, L., Mena,
E., Pinto, P.A., Citrin, D.E., Gulley, J.L., Wood, B.J., et al., 2021. Deep
learning based staging of bone lesions from computed tomography scans.
IEEE Access 9, 87531–87542.
Moreau, N., Rousseau, C., Fourcade, C., Santini, G., Ferrer, L., Lacombe,
M., Guillerminet, C., Campone, M., Colombié, M., Rubeaux, M., et al.,
2020. Deep learning approaches for bone and bone lesion segmentation
on 18fdg pet/ct imaging in the context of metastatic breast cancer, in:
2020 42nd Annual International Conference of the IEEE Engineering in
Medicine & Biology Society (EMBC), IEEE. pp. 1532–1535.
Noguchi, S., Nishio, M., Sakamoto, R., Yakami, M., Fujimoto, K., Emoto,
Y., Kubo, T., Iizuka, Y., Nakagomi, K., Miyasa, K., et al., 2022.
Deep learning-based algorithm improved radiologists' performance in bone
metastases detection on CT. European Radiology, 1–12.
Noguchi, S., Nishio, M., Yakami, M., Nakagomi, K., Togashi, K., 2020. Bone
segmentation on whole-body CT using convolutional neural network with
novel data augmentation techniques. Computers in Biology and Medicine
121, 103767. URL: https://doi.org/10.1016%2Fj.compbiomed.2020.
103767, doi:10.1016/j.compbiomed.2020.103767.
Oktay, O., Schlemper, J., Folgoc, L.L., Lee, M., Heinrich, M., Misawa, K.,
Mori, K., McDonagh, S., Hammerla, N.Y., Kainz, B., et al., 2018. At-
tention u-net: Learning where to look for the pancreas. arXiv preprint
arXiv:1804.03999 .
Pi, Y., Zhao, Z., Xiang, Y., Li, Y., Cai, H., Yi, Z., 2020. Auto-
mated diagnosis of bone metastasis based on multi-view bone scans us-
ing attention-augmented deep neural networks. Medical Image Analysis
65, 101784. URL: https://doi.org/10.1016%2Fj.media.2020.101784,
doi:10.1016/j.media.2020.101784.
Ronneberger, O., Fischer, P., Brox, T., 2015. U-net: Convolutional networks
for biomedical image segmentation, in: International Conference on Med-
ical image computing and computer-assisted intervention, Springer. pp.
234–241.
Shimizu, A., Wakabayashi, H., Kanamori, T., Saito, A., Nishikawa, K.,
Daisaki, H., Higashiyama, S., Kawabe, J., 2020. Automated measurement
of bone scan index from a whole-body bone scintigram. International Jour-
nal of Computer Assisted Radiology and Surgery 15, 389–400.
Steeg, P.S., 2006. Tumor metastasis: mechanistic insights and clinical chal-
lenges. Nature medicine 12, 895–904.
Turpin, A., Girard, E., Baillet, C., Pasquier, D., Olivier, J., Villers, A.,
Puech, P., Penel, N., 2020. Imaging for metastasis in prostate cancer: a
review of the literature. Frontiers in Oncology 10, 55.
Vantaggiato, E., Paladini, E., Bougourzi, F., Distante, C., Hadid, A., Taleb-
Ahmed, A., 2021. COVID-19 Recognition Using Ensemble-CNNs in Two
New Chest X-ray Databases. Sensors 21. URL: https://www.mdpi.com/
1424-8220/21/5/1742, doi:10.3390/s21051742.
Zhang, J., Huang, M., Deng, T., Cao, Y., Lin, Q., 2021. Bone metastasis
segmentation based on improved u-net algorithm, in: Journal of Physics:
Conference Series, IOP Publishing. p. 012027.
Zhou, Z., Rahman Siddiquee, M.M., Tajbakhsh, N., Liang, J., 2018. Unet++:
A nested u-net architecture for medical image segmentation, in: Deep
learning in medical image analysis and multimodal learning for clinical
decision support. Springer, pp. 3–11.