Deep Learning Model Integrating Dilated Convolution and Deep Supervision for
Brain Tumor Segmentation in Multi-parametric MRI
1 Introduction
Brain tumors are among the most aggressive cancers in the world. Gliomas, which arise from glial cells, are the most common brain tumors. According to their degree of malignancy [1], gliomas are categorized into two grades: low-grade gliomas (LGG), which tend to be benign and grow more slowly, with lower degrees of cell infiltration and proliferation, and high-grade gliomas (HGG), which are malignant, more aggressive and require immediate treatment.
Magnetic resonance imaging (MRI) is a widely used imaging technique to assess
these tumors, because it offers a good soft tissue contrast without radiation.
The commonly used sequences are T1-weighted, contrast enhanced T1-weighted
(T1c), T2-weighted and Fluid Attenuation Inversion Recovery (FLAIR) images.
Supported by the Normandie Regional Council via the MoNoMaD project (Grant number: 18P03397/18E01937).
2 Tongxue Zhou, Su Ruan, Haigen Hu, and Stéphane Canu
2 Method
2.1 Dataset and Pre-processing
The datasets used in the experiments come from the BraTS 2017 and 2018 training and validation sets. The training set includes 210 HGG patients and 75 LGG patients; the BraTS 2017 and 2018 validation sets include 46 and 66 patients, respectively. Each patient has four image modalities: T1-weighted, contrast-enhanced T1-weighted (T1c), T2-weighted and Fluid Attenuation Inversion Recovery (FLAIR) images. All data used in the experiments have been pre-processed with a standard procedure. The N4ITK method [11] is first applied to correct the bias-field distortion of the MRI data, and intensity normalization is applied to each modality of each patient. To exploit the spatial contextual information of the images, we work on 3D volumes, cropping and resizing each image from 155 × 240 × 240 to 128 × 128 × 128.
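The pre-processing above can be sketched as follows. Since the exact normalization and cropping schemes are not detailed here, this sketch assumes a z-score normalization over brain (non-zero) voxels and a simple center crop, both common choices for BraTS data:

```python
import numpy as np

def normalize(volume):
    """Z-score normalization over non-zero (brain) voxels;
    an assumed scheme, not necessarily the paper's exact one."""
    mask = volume > 0
    mean, std = volume[mask].mean(), volume[mask].std()
    out = volume.astype(np.float32).copy()
    out[mask] = (out[mask] - mean) / (std + 1e-8)
    return out

def center_crop(volume, target=(128, 128, 128)):
    """Crop a (D, H, W) volume to the target shape around its center."""
    starts = [(s - t) // 2 for s, t in zip(volume.shape, target)]
    slices = tuple(slice(st, st + t) for st, t in zip(starts, target))
    return volume[slices]

# One modality of one patient: 155 x 240 x 240 -> 128 x 128 x 128.
vol = np.random.rand(155, 240, 240).astype(np.float32)
print(center_crop(normalize(vol)).shape)  # (128, 128, 128)
```

In practice a resize (interpolation) may be combined with or substituted for the crop; the crop is shown here because it preserves voxel intensities exactly.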
The proposed Res dil block uses dilated convolutions, which introduce a spacing between the values in a kernel [12] and thereby increase the receptive field. For example, a 3 × 3 kernel with a dilation rate of 2 has the same field of view as a 5 × 5 kernel, while using only nine parameters.
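The receptive-field growth can be checked with the standard formula for the effective size of a dilated kernel, k_eff = k + (k − 1)(d − 1):

```python
def effective_kernel_size(k, d):
    """Effective field of view of a k x k kernel with dilation rate d:
    the d - 1 implicit zeros inserted between kernel taps widen the
    kernel without adding any parameters."""
    return k + (k - 1) * (d - 1)

print(effective_kernel_size(3, 1))  # 3 (ordinary convolution)
print(effective_kernel_size(3, 2))  # 5 (same view as a 5 x 5 kernel)
print(effective_kernel_size(3, 4))  # 9
```

Stacking such blocks with increasing dilation rates therefore grows the receptive field exponentially while the parameter count grows only linearly.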
\[
L_{focal\,tversky} = \sum_{c=1}^{C}\left(1-\frac{\sum_{i=1}^{N} p_{ic}\,g_{ic}+\epsilon}{\sum_{i=1}^{N} p_{ic}\,g_{ic}+\alpha\sum_{i=1}^{N} p_{i\bar{c}}\,g_{ic}+\beta\sum_{i=1}^{N} p_{ic}\,g_{i\bar{c}}+\epsilon}\right)^{1/\gamma} \tag{2}
\]

\[
L_{dice} = 1-2\,\frac{\sum_{c=1}^{C}\sum_{i=1}^{N} p_{ic}\,g_{ic}+\epsilon}{\sum_{c=1}^{C}\sum_{i=1}^{N}\left(p_{ic}+g_{ic}\right)+\epsilon} \tag{3}
\]
where N is the number of pixels, C the number of classes, \(w_c\) the weight assigned to class c, \(p_{ic}\) the probability that pixel i belongs to tumor class c and \(p_{i\bar{c}}\) the probability that pixel i belongs to the non-tumor class. The same holds for \(g_{ic}\) and \(g_{i\bar{c}}\), and \(\epsilon\) is a small constant to avoid division by zero.
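A minimal NumPy sketch of the two losses above, for soft predictions p and one-hot ground truth g of shape (N, C); the `alpha`, `beta` and `gamma` defaults are illustrative placeholders, not values taken from this paper:

```python
import numpy as np

def focal_tversky_loss(p, g, alpha=0.7, beta=0.3, gamma=4/3, eps=1e-6):
    """Focal Tversky loss: a per-class Tversky index, then
    (1 - TI)^(1/gamma) summed over classes (Eq. 2)."""
    tp = (p * g).sum(axis=0)        # sum_i p_ic * g_ic
    fn = ((1 - p) * g).sum(axis=0)  # sum_i p_i(not c) * g_ic
    fp = (p * (1 - g)).sum(axis=0)  # sum_i p_ic * g_i(not c)
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1 - ti) ** (1 / gamma)).sum()

def dice_loss(p, g, eps=1e-6):
    """Soft Dice loss aggregated over all classes and pixels (Eq. 3)."""
    inter = (p * g).sum()
    total = p.sum() + g.sum()
    return 1 - 2 * (inter + eps) / (total + eps)

p = np.array([[0.9, 0.1], [0.2, 0.8], [0.8, 0.2]])  # 3 pixels, 2 classes
g = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
print(round(dice_loss(p, g), 3))  # 0.167
```

Note how `alpha` weights the false negatives and `beta` the false positives, which is what lets the Tversky index trade recall against precision on small structures such as the enhancing tumor.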
Table 1. The distribution of classes on BraTS 2017 training set, NET: Non Enhancing
Tumor, NCR: Necrotic.
Table 2. Comparison with different loss functions on BraTS 2017 training set.
Quantitative Analysis We randomly split off 20% (57 patients) of the 285 BraTS 2017 training cases as a local validation set. Table 3 shows the contribution of each component of the network on this local validation set. We refer to our basic U-Net, without the Res dil blocks and deep supervision, as the baseline. We observe improvements in Dice score, sensitivity and Hausdorff distance across all tumor regions as the proposed strategies are added gradually. More precisely, we achieve Dice scores of 88.5, 84.5 and 73.4 for whole, core and enhancing tumor, respectively. Table 4 shows the results on the BraTS 2017 and BraTS 2018 validation sets. To further verify the effectiveness of the proposed method, we compare its performance with the original U-Net and a state-of-the-art U-Net-like network [9] on the BraTS 2017 dataset in Table 5. We obtain the best sensitivity and Hausdorff distance on the whole tumor. For the tumor core, we achieve the best performance on all evaluation metrics. For the enhancing tumor, we obtain the best Dice score and specificity. In general, the proposed method achieves better segmentation results in terms of Dice score than the other methods.
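The region-wise scores above are computed on nested binary masks. A sketch of that evaluation, assuming the standard BraTS label convention (1 = necrotic/non-enhancing tumor, 2 = edema, 4 = enhancing tumor; whole tumor = 1, 2, 4; tumor core = 1, 4; enhancing tumor = 4):

```python
import numpy as np

REGIONS = {
    "whole": (1, 2, 4),   # whole tumor: all tumor labels
    "core": (1, 4),       # tumor core: necrotic/NET + enhancing
    "enhancing": (4,),    # enhancing tumor only
}

def dice(a, b):
    """Binary Dice coefficient between two boolean masks."""
    denom = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / denom if denom else 1.0

def region_dice(pred, truth):
    """Per-region Dice scores, as in the BraTS evaluation protocol."""
    return {name: dice(np.isin(pred, labels), np.isin(truth, labels))
            for name, labels in REGIONS.items()}

# Toy label maps (flattened): one edema voxel mislabeled as enhancing.
pred = np.array([0, 1, 2, 4, 4])
truth = np.array([0, 1, 2, 2, 4])
print(region_dice(pred, truth))
```

On this toy example the whole-tumor mask is unaffected by the label swap (Dice 1.0), while core and enhancing scores drop, mirroring how region-wise evaluation separates detection from fine-grained labeling errors.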
Table 4. Results on BraTS 2017 and BraTS 2018. Val: validation set
The proposed method is capable of segmenting the large tumor regions (necrotic tissue and edema) as well as the difficult region (enhancing tumor). The results show that the proposed method produces segmentations very close to the expert annotations. To verify this effectiveness quantitatively, the corresponding evaluation results are shown in Table 6. In accordance with the qualitative results, each sample obtains a high Dice score on the three brain tumor regions.
Fig. 4. Qualitative comparison of different methods on patient BraTS 2017 TCIA 201 1. Edema is shown in green, enhancing tumor in red and necrotic tissue in blue.
4 Conclusion
References
1. Bjoern H Menze, Andras Jakab, Stefan Bauer, Jayashree Kalpathy-Cramer, Keyvan Farahani, Justin Kirby, Yuliya Burren, Nicole Porz, Johannes Slotboom, Roland Wiest, et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging, 34(10):1993–2024, 2014.
2. Shaoguo Cui, Lei Mao, Jingfeng Jiang, Chang Liu, and Shuyu Xiong. Automatic semantic segmentation of brain gliomas from MRI images using a deep cascaded neural network. Journal of Healthcare Engineering, 2018, 2018.
3. Xiaomei Zhao, Yihong Wu, Guidong Song, Zhenye Li, Yazhuo Zhang, and Yong Fan. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Medical Image Analysis, 43:98–111, 2018.
4. Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
5. Mohammad Havaei, Axel Davy, David Warde-Farley, Antoine Biard, Aaron Courville, Yoshua Bengio, Chris Pal, Pierre-Marc Jodoin, and Hugo Larochelle. Brain tumor segmentation with deep neural networks. Medical Image Analysis, 35:18–31, 2017.
6. Guotai Wang, Wenqi Li, Sébastien Ourselin, and Tom Vercauteren. Automatic brain tumor segmentation using cascaded anisotropic convolutional neural networks. In International MICCAI Brainlesion Workshop, pages 178–190. Springer, 2017.
7. Konstantinos Kamnitsas, Christian Ledig, Virginia FJ Newcombe, Joanna P Simpson, Andrew D Kane, David K Menon, Daniel Rueckert, and Ben Glocker. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis, 36:61–78, 2017.
8. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
9. Fabian Isensee, Philipp Kickingereder, Wolfgang Wick, Martin Bendszus, and Klaus H Maier-Hein. Brain tumor segmentation and radiomics survival prediction: Contribution to the BraTS 2017 challenge. In International MICCAI Brainlesion Workshop, pages 287–297. Springer, 2017.
10. Konstantinos Kamnitsas, Wenjia Bai, Enzo Ferrante, Steven McDonagh, Matthew Sinclair, Nick Pawlowski, Martin Rajchl, Matthew Lee, Bernhard Kainz, Daniel Rueckert, et al. Ensembles of multiple models and architectures for robust brain tumour segmentation. In International MICCAI Brainlesion Workshop, pages 450–462. Springer, 2017.
11. Brian B Avants, Nick Tustison, and Gang Song. Advanced normalization tools (ANTS). Insight Journal, 2:1–35, 2009.
12. Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.