[Fig. 2 schematic: (a) FCN1 maps the post-contrast image to a segmentation map of the heart; (b) FCN2, consisting of SubNet1 and SubNet2, maps the pre-contrast image (with the breast mask as guidance information) to a segmentation map of dense tissue.]

Fig. 2: Illustration of the proposed FCN model for the segmentation of three types of tissues (i.e., the heart, dense tissue, and fat tissue). There are two sub-networks, including (a) FCN1 for the segmentation of the heart, and (b) FCN2 for the segmentation of dense tissues.
…correction in this study. Our assumption here is that the same tissue (e.g., fat or dense tissue) in different images would share a similar intensity range. It is then possible to normalize the DCE-MR images by treating the different tissues (i.e., air, fat tissue, dense tissue, and heart) separately. Specifically, a deep-learning-based segmentation method is first developed to perform tissue segmentation accurately and efficiently. Then, …

TABLE I: Statistical information of imaging parameters for the training and test datasets.

                Magnetic Field        TE                    TR
                1.5T    3.0T    > 2.0 ms  < 2.0 ms    > 4.5 ms  < 4.5 ms
  Training Set  149     211     265       95          265       95
  Test Set      48      52      79        21          79        21

…a rectified linear unit (ReLU), and a 3 × 3 × 3 max pooling operation with stride 2 for down-sampling. In the decoding path, each step consists of a 3 × 3 × 3 up-convolution, followed by concatenation with the corresponding feature map from the encoding path and one 3 × 3 × 3 convolution (each followed by a ReLU). A Dice loss function [12] is used in the last layer of FCN1 to generate the segmentation map of the heart. Hence, this network can not only cover a large image area using small kernel sizes but also maintain high segmentation accuracy, thanks to its encoding and decoding paths.

In the second FCN (i.e., FCN2 in Fig. 2 (b)), we first segment the breast from the pre-contrast image in the first stage, and then segment the dense and fat tissues within the breast region in the second stage. Since breast tissues (i.e., dense tissue and fat tissue) appear only in the breast region, we would like to generate a region of interest (ROI) that excludes confounding regions (e.g., air and heart), using a breast mask as the ROI to remove the most strongly enhanced organs. Specifically, there are two cascaded sub-networks (i.e., SubNet1 and SubNet2) in FCN2. The first stage (i.e., SubNet1) aims to generate a breast mask from the whole input image, while the second stage (i.e., SubNet2) is used to separate the dense tissue from the fat tissue within the breast region. The input of SubNet1 is a pre-contrast image, and the output is a breast mask. In SubNet2, the input is the masked image (using the segmented breast region generated by SubNet1 as a mask to select the region of interest from the pre-contrast image), and the output is the segmented dense tissue. Here, SubNet1 and SubNet2 share a similar architecture with FCN1; the difference is that the activation function of the last layer in both SubNet1 and SubNet2 is a sigmoid function, which normalizes the output into [0, 1]. In particular, once the dense tissue is segmented, the fat tissue can be obtained by excluding the dense tissue from the breast mask (i.e., by subtracting the segmented dense-tissue map from the breast-masked image). Therefore, using the proposed FCN1 and FCN2 models, three types of tissues (i.e., the heart, dense tissue, and fat tissue) can be segmented from the input pre-contrast and post-contrast MR images of each subject.

C. Step 2: Piecewise Linear Mapping

In this work, we assume that the same type of tissue (e.g., fibroglandular tissue) or material from different subjects should have similar image intensity, and that the intensities of different tissue types would differ. Accordingly, a piecewise linear mapping method is proposed to normalize the image intensity of different subjects into a common space, based on the segmentation results for the different tissues generated by the above-mentioned FCN model.

First, we define a typical value for each tissue/material by identifying an archetype subject in our training data in the following way. In the training stage, the common tissue intensity for each type of tissue is first computed from the training MR images. We select the median image from all training MR images using the average rank of four median tissue values (see the next paragraph for details of calculating the median value for the heart). Specifically, for the given N training images, we can obtain a vector m^t = [m^t_1, m^t_2, ..., m^t_N] (t = 1, ..., 4) for the four types of tissues. For each type of tissue, we can rank the images by their median tissue value. By averaging the four ranks, we select the median subject from all training data. The common tissue intensity value, denoted as M^t, is then computed as the median intensity of the t-th tissue in this median subject. That is, there are four intensity values {M^t}_{t=1}^{4} in the common intensity space, each one denoting the common value for one tissue computed from the training data.

In the testing stage, given a new subject, the pixel values of each tissue type are transformed to take the values of the archetype patient. Specifically, the proposed FCN model is first applied to its pre-contrast and post-contrast images to generate the segmentation of the four types of tissue/material. With the segmented tissues, we construct a piecewise linear mapping between the testing image and the common intensity space, treating each tissue individually. That is, the median value V^t of each tissue in the testing image is mapped to its corresponding median value in the common intensity space (the archetype patient). Therefore, the control points for the piecewise linear mapping are {(V^t, M^t)}_{t=1}^{4}, and all values between two control points in the test image are interpolated linearly. For intensity values larger than the median intensity of the heart (i.e., the highest control point), we simply extend the mapping trend between dense tissue and heart to obtain the slope of the linear mapping. In such a mapping curve (see, e.g., Fig. 11), the x-axis defines the intensity space of the testing image, the y-axis defines the common intensity space of all training images, and the points {(V^t, M^t)}_{t=1}^{4} are the median values of air, fat tissue, dense tissue, and heart.

Note that the values for air, fat, and dense tissue (i.e., {V^t}_{t=1}^{3} and {M^t}_{t=1}^{3}) are calculated from the pre-contrast image, while the value for the heart (i.e., V^4 or M^4) is calculated from the first post-contrast image. It is also worth noting that, because of the inhomogeneity of the heart, we employ the 10% maximum tissue value for the heart instead of the median value; for the other tissues, we employ the median tissue values. Finally, the same mapping function is applied to all pre- and post-contrast images.

III. EVALUATION

The goal of the proposed normalization algorithm is to transform the images into the same framework, where pixel values are directly comparable. Therefore, the expectation is that, following normalization, the average pixel values of different types of tissue are similar regardless of which scanner parameters were used. Furthermore, it is expected that, following normalization, algorithmic features (computer-aided diagnosis/radiomics features) do not systematically differ between sets of images acquired using different scanning parameters. Finally, we expect that image features extracted from a set of normalized images will generally be as useful or more useful for the prediction of various quantities (e.g., tumor genomics or patient outcomes) than the same features extracted
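As a concrete illustration, the testing-stage mapping of Step 2 can be sketched in a few lines of NumPy. This is a minimal sketch under our reading of the text (the function and variable names are ours, not the authors'): the four control points (V^t, M^t) are connected by linear interpolation, and intensities above the heart control point are extrapolated with the slope of the dense-tissue-to-heart segment, as described above.

```python
import numpy as np

def piecewise_linear_normalize(image, v, m):
    """Map test-image intensities into the common intensity space.

    image : ndarray of intensities from one test image.
    v     : the four control values in the test image, ascending
            (air, fat tissue, dense tissue, heart).
    m     : the corresponding common-space values M^t of the archetype.
    """
    v = np.asarray(v, dtype=float)
    m = np.asarray(m, dtype=float)
    # Linear interpolation between consecutive control points.
    out = np.interp(image, v, m)
    # Above the heart control point, extend the dense-tissue -> heart slope.
    slope = (m[3] - m[2]) / (v[3] - v[2])
    out = np.where(image > v[3], m[3] + (image - v[3]) * slope, out)
    return out
```

Note that np.interp clamps intensities below the first control point to m[0]; how values below the air control point are handled is not specified in this excerpt, so that behavior is an assumption of the sketch.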
[Figure: per-subject values of fat tissue, dense tissue, and heart plotted against TR and TE (1.5T vs. 3T scans), before normalization (top row) and after normalization (bottom row).]
Fig. 6: Feature distribution with different imaging parameters for the original images. (a) TE and (b) TR.
Fig. 7: Feature distribution with different imaging parameters for the images after normalization using the proposed method. (a) TE and (b) TR.
…potential of the proposed method in real-world applications.

E. Computational Efficiency

An important advantage of our algorithm is that it is relatively fast. In Table II, we report the computational cost for breast segmentation, heart segmentation, dense tissue segmentation, and intensity mapping, respectively, using one Nvidia GPU (GTX 1080) with 8 GB of GPU RAM. From the table, we can see that our method requires less than 30 seconds to complete the entire normalization process. Note that our speed is also limited by GPU memory, since we cannot predict the entire segmentation map for each image at once. We have to split the images into many subimages, predict them separately, and then reconstruct them into one segmentation map. This is time-consuming since the convolution computation is repeated multiple times for many voxels. Therefore, the speed of our normalization could be further improved using a GPU with more memory.
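The patch-wise prediction described above (splitting a volume into subimages because of GPU memory limits, then reassembling the per-patch outputs) can be sketched as follows. This is an illustrative sketch with non-overlapping tiles, not the authors' implementation; all names are ours.

```python
import numpy as np

def predict_by_tiles(volume, predict_fn, tile=(64, 64, 32)):
    """Apply a patch-wise predictor to a 3D volume and stitch the
    per-tile outputs back into one map (non-overlapping tiles)."""
    out = np.zeros(volume.shape, dtype=float)
    sx, sy, sz = tile
    nx, ny, nz = volume.shape
    for x in range(0, nx, sx):
        for y in range(0, ny, sy):
            for z in range(0, nz, sz):
                patch = volume[x:x + sx, y:y + sy, z:z + sz]
                # Edge tiles are smaller; predict_fn must accept variable sizes.
                out[x:x + sx, y:y + sy, z:z + sz] = predict_fn(patch)
    return out
```

In practice, FCN inference is often run with overlapping tiles and averaged borders to avoid edge artifacts; the non-overlapping scheme here is only the simplest version of the idea.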
[Fig. 8 panels, left to right: before normalization, before normalization (denoising), after normalization, after normalization (denoising).]

Fig. 8: Feature distribution of F6 concerning TE and TR, using the denoising strategy. The first row is for the parameter of TE and the second row is for the parameter of TR.
[Figure: ROC curves (sensitivity vs. 1-specificity) for features F1–F15, before vs. after normalization. AUC (before → after):
F1 0.50 → 0.51   F2 0.57 → 0.57   F3 0.44 → 0.60   F4 0.40 → 0.56   F5 0.42 → 0.59
F6 0.60 → 0.60   F7 0.51 → 0.57   F8 0.49 → 0.51   F9 0.69 → 0.68   F10 0.43 → 0.51
F11 0.44 → 0.59  F12 0.64 → 0.57  F13 0.60 → 0.54  F14 0.59 → 0.55  F15 0.41 → 0.55]
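For reference, the per-feature AUC values compared in the evaluation can be computed with a rank-based (Mann–Whitney) estimator of the area under the ROC curve. The helper below is our own self-contained sketch, not the authors' code:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative one (ties count 1/2)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```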
F. Limitations: Normalizing Average Pixel Values vs. Changing Noise Characteristics

We observed that, for one analyzed feature, namely the standard deviation of SER, our normalization algorithm did not result in as significant an alignment of the feature values across scanner parameters as observed for all other analyzed features. The likely reason is that the SER map is more sensitive to data noise. Generally, the mean value of the SER map can potentially remove the effect of noise, but the standard deviation is easily affected by it. To further investigate this issue, we applied a de-noising strategy using a median filter to the high-TE/TR images, and found that the resulting features (i.e., the standard deviation of the voxel-wise SER of tissue) with different TE/TR became closer in their distributions, as shown in Fig. 8. These results imply that the proposed normalization method can remove the effect of overall intensity variation for feature extraction, but could not mitigate the effect of image noise in this specific case.

Additionally, in Fig. 11, we show the mapping functions for 25 cases randomly selected from the testing set. Fig. 11 demonstrates that different slopes are used for normalizing different tissues, ensuring that the same type of tissue in different images has a similar intensity range.

TABLE II: Computational costs of different steps in the proposed image normalization approach for one image of 512 × 512 × 90 voxels with a spacing of 0.68 × 0.68 × 2 mm³.
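The de-noising strategy described in Section F (a median filter applied to the high-TE/TR images before recomputing the noise-sensitive feature) can be sketched with SciPy. The kernel size below is our assumption; the excerpt does not state the filter size the authors used.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_volume(volume, size=3):
    """Median-filter an image volume to suppress voxel-wise noise
    before computing noise-sensitive features such as the std of SER."""
    return median_filter(np.asarray(volume, dtype=float), size=size)
```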
[Fig. 11 panels: mapping curves for Subject#1 through Subject#25, each with control points for air, fat tissue, dense tissue, and heart.]

Fig. 11: The proposed piecewise linear mapping for 25 cases in the testing set. The x axis defines the intensity space of testing image and the y axis defines the common intensity space of all training images.
[6] A. Saha, X. Yu, D. Sahoo, and M. A. Mazurowski, "Effects of MRI scanner parameters on breast cancer radiomics," Expert Systems with Applications, vol. 87, pp. 384–391, 2017.
[7] L. G. Nyúl and J. K. Udupa, "On standardizing the MR image intensity scale," Magnetic Resonance in Medicine, vol. 42, no. 6, pp. 1072–1081, 1999.
[8] G. Collewet, M. Strzelecki, and F. Mariette, "Influence of MRI acquisition protocols and image intensity normalization methods on texture classification," Magnetic Resonance Imaging, vol. 22, no. 1, pp. 81–91, 2004.
[9] C. Studholme, V. Cardenas, E. Song, F. Ezekiel, A. Maudsley, and M. Weiner, "Accurate template-based correction of brain MRI intensity distortion with application to dementia and aging," IEEE Transactions on Medical Imaging, vol. 23, no. 1, pp. 99–110, 2004.
[10] U. Vovk, F. Pernuš, and B. Likar, "A review of methods for correction of intensity inhomogeneity in MRI," IEEE Transactions on Medical Imaging, vol. 26, no. 3, pp. 405–421, 2007.
[11] N. L. Weisenfeld and S. Warfield, "Normalization of joint image-intensity statistics in MRI using the Kullback-Leibler divergence," in IEEE International Symposium on Biomedical Imaging: Nano to Macro. IEEE, 2004, pp. 101–104.
[12] J. Zhang, M. Liu, and D. Shen, "Detecting anatomical landmarks from limited medical imaging data using two-stage task-oriented deep neural networks," IEEE Transactions on Image Processing, vol. 26, no. 10, pp. 4753–4764, 2017.
[13] X. Cao, J. Yang, J. Zhang, Q. Wang, P.-T. Yap, and D. Shen, "Deformable image registration using cue-aware deep regression network," IEEE Transactions on Biomedical Engineering, 2018.
[14] J. Zhang, M. Liu, L. Wang, S. Chen, P. Yuan, J. Li, S. G.-F. Shen, Z. Tang, K.-C. Chen, J. J. Xia, and D. Shen, "Joint craniomaxillofacial bone segmentation and landmark digitization by context-guided fully convolutional networks," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2017, pp. 720–728.
[15] S. Pereira, A. Pinto, V. Alves, and C. A. Silva, "Brain tumor segmentation using convolutional neural networks in MR images," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1240–1251, 2016.
[16] C. Lian, J. Zhang, M. Liu, X. Zong, S.-C. Hung, W. Lin, and D. Shen, "Multi-channel multi-scale fully convolutional network for 3D perivascular spaces segmentation in 7T MR images," Medical Image Analysis, vol. 46, pp. 106–117, 2018.
[17] M. Liu, J. Zhang, E. Adeli, and D. Shen, "Landmark-based deep multi-instance learning for brain disease diagnosis," Medical Image Analysis, vol. 43, pp. 157–168, 2018.
[18] Y. Xu, Y. Li, Y. Wang, M. Liu, Y. Fan, M. Lai, I. Eric, and C. Chang, "Gland instance segmentation using deep multichannel neural networks," IEEE Transactions on Biomedical Engineering, vol. 64, no. 12, pp. 2901–2912, 2017.
[19] L. Bi, J. Kim, E. Ahn, A. Kumar, M. Fulham, and D. Feng, "Dermoscopic image segmentation via multistage fully convolutional networks," IEEE Transactions on Biomedical Engineering, vol. 64, no. 9, pp. 2065–2074, 2017.
[20] M. H. Yap, G. Pons, J. Martí, S. Ganau, M. Sentís, R. Zwiggelaar, A. K. Davison, and R. Martí, "Automated breast ultrasound lesions detection using convolutional neural networks," IEEE Journal of Biomedical and Health Informatics, 2017.
[21] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
[22] L. J. Grimm, J. Zhang, and M. A. Mazurowski, "Computational approach to radiogenomics of breast cancer: Luminal A and luminal B molecular subtypes are associated with imaging features on routine breast MRI extracted using computer vision algorithms," Journal of Magnetic Resonance Imaging, vol. 42, no. 4, pp. 902–907, 2015.
[23] V. A. Arasu, R. C. Chen, D. N. Newitt, C. B. Chang, H. Tso, N. M. Hylton, and B. N. Joe, "Can signal enhancement ratio (SER) reduce the number of recommended biopsies without affecting cancer yield in occult MRI-detected lesions?" Academic Radiology, vol. 18, no. 6, pp. 716–721, 2011.
[24] J. Wang, F. Kato, N. Oyama-Manabe, R. Li, Y. Cui, K. K. Tha, H. Yamashita, K. Kudo, and H. Shirato, "Identifying triple-negative breast cancer using background parenchymal enhancement heterogeneity on dynamic contrast-enhanced MRI: A pilot radiomics study," PLoS One, vol. 10, no. 11, p. e0143308, 2015.
[25] T. Wan, B. N. Bloch, D. Plecha, C. L. Thompson, H. Gilmore, C. Jaffe, L. Harris, and A. Madabhushi, "A radio-genomics approach for identifying high risk estrogen receptor-positive breast cancers on DCE-MRI: Preliminary results in predicting OncotypeDX risk scores," Scientific Reports, vol. 6, p. 21394, 2016.
[26] R. M. Haralick, K. Shanmugam et al., "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, no. 6, pp. 610–621, 1973.
[27] J. Zhang, Y. Gao, Y. Gao, B. C. Munsell, and D. Shen, "Detecting