
Biomedical Signal Processing and Control 60 (2020) 101945


Saliency-guided automatic detection and segmentation of tumor in breast ultrasound images

Hiba Ramadan a,∗, Chaymae Lachqar b, Hamid Tairi a

a University of Sidi Mohamed Ben Abdellah, Faculty of Sciences Dhar El Mahraz, Department of Informatics, B.P. 1796, Atlas-Fez, Fez, 30000, Morocco
b University of Sidi Mohamed Ben Abdellah, Faculty of Medicine and Pharmacy, Fez, 30000, Morocco

∗ Corresponding author.
E-mail addresses: hiba.ramadan@usmba.ac.ma (H. Ramadan), chaymae.lachqar@usmba.ac.ma (C. Lachqar), hamid.tairi@usmba.ac.ma (H. Tairi).

a r t i c l e   i n f o

Article history:
Received 30 September 2019
Received in revised form 26 February 2020
Accepted 28 March 2020
Available online 16 April 2020

Keywords:
Computer aided diagnosis
Breast ultrasound
Tumor detection
Tumor segmentation
Saliency detection
Graph-based segmentation
Saliency score
Active contours

a b s t r a c t

The development of computer aided diagnosis (CAD) systems for the task of tumor detection and segmentation in breast ultrasound (BUS) images is one of the most active research fields. In this paper, a saliency-guided approach for fast and automatic tumor segmentation in BUS images is proposed. We explore the ability of the concept of saliency to emulate radiologist expertise for tumor lesion detection in BUS images. To that end, we compute BUS image saliency to generate effective tumor seeds and background seeds. Then, a graph-based interactive image segmentation method is applied automatically using the generated seeds to extract the tumor region. For more accurate segmentation, we propose a novel saliency score measure that decides whether a post-processing step for result refinement is applied. Our results have been compared with other approaches on challenging datasets and demonstrate, across different evaluation metrics, the effectiveness of our saliency-guided approach for tumor detection and segmentation.

© 2020 Elsevier Ltd. All rights reserved.
https://doi.org/10.1016/j.bspc.2020.101945

1. Introduction

According to recent statistics [1], breast cancer is the most commonly occurring cancer in women and the second most common cancer overall. Many cancers have a high chance of cure if diagnosed early and treated adequately. Although mammography is the most widely used modality for the diagnosis of breast cancer, breast ultrasound (BUS) imaging has recently been suggested to be comparable to mammography for breast cancer screening [2]. In fact, BUS can be very beneficial in computer aided diagnosis (CAD) systems, especially for women who are pregnant and should not be exposed to the x-rays used in mammography, and for women with dense breast tissue, for which the sensitivity of mammography is reduced [3]. Furthermore, BUS is characterized by its low cost, safety and real-time imaging capability, and can be used in conjunction with mammography to improve breast cancer diagnosis [4]. One of the most challenging tasks in BUS-based CAD systems is automatic tumor detection and segmentation. Given the limitations of ultrasound image quality caused by factors such as noise and speckle [5], and the time it takes experts to segment lesions manually [6], the need for an automatic CAD system that can emulate human cognitive capacity to detect and accurately segment the desired regions of interest (ROI), while remaining fast, is increasingly pressing.

BUS image segmentation methods can be classified according to human intervention into semi-automatic methods [7–9] and fully automatic ones [10]. The first category requires radiologist interaction to mark the ROI containing the tumor, either with seeds or with an enclosing contour. Fully automatic methods do not need any human intervention and process BUS images directly by exploiting prior knowledge extracted from images using different techniques [11]. In this work, we tackle the task of automatic BUS image segmentation.

1.1. Automatic BUS image segmentation

According to the recent survey presented in [10], the dominant categories of automatic BUS image segmentation approaches are deformable models (DMs), graph-based approaches, and learning-based approaches. DMs were first proposed by [12] and became very popular in computer vision applications, especially in medical imaging, with the great success of snakes [13]. Generally, image segmentation based on DMs requires two steps: contour initialization and contour evolution. The segmentation framework of DMs is flexible and can incorporate


various image features (edge, shape, intensity, and region information). For BUS images, many works have been proposed in the literature to perform automatic lesion segmentation using DMs with either parametric [14] or geometric [15] modeling. In the first family of DMs, i.e. parametric DMs, the curve or surface that evolves toward the object contours is represented explicitly by its parametric form. The methods belonging to this category suffer from high sensitivity to contour initialization compared to geometric DMs. The latter apply level set functions to model curves and surfaces implicitly and can segment multiple objects. Although geometric DMs have achieved great success in BUS image segmentation, active research still focuses on dealing with the inhomogeneity of BUS images and decreasing the computational time of the curve evolution process [10]. The second dominant category of BUS image segmentation is graph-based approaches [16]. In graph-based methods, pixels or regions in the BUS image are represented as nodes in a graph, and each two adjacent nodes are connected via an edge associated with a defined weight. By considering a Markov random field, BUS image segmentation can be transformed into a graph-based energy minimization problem that can be solved via the graph-cut framework [17]. For more details about energy formulation and optimization in graph-based models, refer to [18]. Despite the great success of graph-cut-based approaches for automatic BUS tumor image segmentation [19,20], the "shrink" problem is the common issue of all these techniques. Learning-based methods are also widely used in ultrasound CAD systems [21] and especially in the task of BUS image segmentation, either in a supervised or unsupervised manner [22–24]. However, according to the survey in [18], some approaches belonging to this category are slow, such as [25], or depend on parameter initialization [10]. Besides the aforementioned classes, there are other methods that use thresholding techniques, region growing, watershed segmentation and others [10,18].

1.2. Motivation and contribution

Due to the challenging nature of BUS images, and to deal with the different disadvantages of existing BUS image segmentation methods (computational time, need for human intervention, "shrink" problem, unsatisfactory results), we propose a robust hybrid system able to automatically detect and segment tumor regions in BUS images with the guidance of saliency. Our framework integrates several computer vision algorithms and consists of five main steps, as shown in Fig. 1:

䊏 Pre-processing (Step 1): the input BUS image is pre-processed via contrast enhancement and speckle suppression to enhance its quality without affecting important image features, in order to exploit them in the next steps.
䊏 BUS saliency detection and tumor seeds generation (Step 2): the goal of this step is to extract the salient region in the BUS image containing the tumor. To prepare the segmentation process of the following step, regions having high saliency probability are marked as tumor seeds and those having low probability are marked as background. Both the tumor and the background seeds are used to initialize the next stage.
䊏 Tumor segmentation (Step 3): the Lazy Snapping (LS) algorithm [26] is applied to segment the tumor region. LS is a graph-cut-based model that has been widely used in many computer vision applications due to its performance and speed. LS operates on superpixels instead of pixels and needs user input seeds to segment the object of interest via an energy minimization framework. Those seeds are provided automatically in our system, as described in the previous step.
䊏 Saliency score computation and result refinement (Steps 4 and 5): to avoid mis-segmentation results, we develop a pixel-based refinement scheme using saliency score computation and an active contours model (ACM). We define a new saliency score to decide whether this refinement operation is needed; if so, an ACM is applied to post-process the segmentation result and extract an accurate tumor boundary.

Fig. 1. Flowchart of the proposed saliency-guided framework for tumor segmentation in BUS image.

Our algorithm is compared to other methods and produces competitive results while being faster and requiring no expert intervention. To demonstrate the performance of our system, we have evaluated our results in terms of tumor detection accuracy as well as segmentation precision. The remainder of this paper is organized as follows: Section 2 describes in detail the different steps of the proposed framework, and experimental results with discussion are given in Section 3.

2. Proposed framework

2.1. Pre-processing

Due to the low contrast and the interference of speckle in BUS images, a pre-processing step is necessary to enhance the quality of the input image while preserving important image features in order to exploit them in the next steps. The pre-processing of BUS images consists of two stages: contrast enhancement and speckle reduction.

2.1.1. Contrast enhancement

The main goal of contrast enhancement is to apply an intensity transformation to image pixels using a mapping function such as histogram equalization (HEQ) [27], contrast limited adaptive histogram equalization (CLAHE) [28], fuzzy logic-based contrast enhancement (FEN) [29] and sigmoidal mapping-based adaptive contrast enhancement (SACE) [5]. All these methods (HEQ, CLAHE, FEN and SACE) have been applied successfully for BUS image enhancement. The evaluation study conducted in [5] showed that SACE outperforms the others in terms of contrast enhancement quality and segmentation accuracy using the watershed-based segmentation method of [30]. Since we have conducted our experiments on two different BUS image datasets (see subsection 3.1), obtained with different imaging devices and presenting different contrast distributions, and in order to achieve an optimal compromise between segmentation precision and runtime in our framework, we have tested the four contrast enhancement methods to select the appropriate technique for each dataset. We found that using CLAHE and HEQ in the pre-processing step produces high segmentation accuracy in less execution time. More details about the validation of the choice of these contrast enhancement methods are presented in the experimental subsection (3.3).

CLAHE is a local contrast enhancement technique often used to process BUS images [30]. Instead of mapping the whole of the image pixels as in classic HEQ, CLAHE performs histogram equalization mappings on contextual regions to preserve local features. Furthermore, the enhancement of uniform regions is limited to avoid over-enhancement of the noise.

2.1.2. Speckle suppression

One of the main challenges of BUS CAD is the negative effect of speckle noise on image processing. Many works have been proposed in the literature for feature-preserving BUS despeckling [31,32]. The authors in [33] classified BUS despeckle filters into five categories: the local adaptive filter, the anisotropic diffusion filter, the multi-scale filter, the nonlocal means filter (NLM), and the hybrid filter. By inspecting a recent work in the field [34] that conducted evaluations of different despeckling methods applied as a pre-processing step for the task of BUS image segmentation, we found that the state-of-the-art methods in ascending order of performance are: Optimized Bayesian NLM (OBNLM) [35], anisotropic diffusion guided by Log-Gabor filters (ADLG) [36], the non-local mean filter combined with local statistics (NLMLS) [37] and the non-local low-rank framework (NLLR) [34]. However, using the implementations provided by the authors, the runtime of the NLLR method is greater than 3 min to pre-process an image of 200 × 300 pixels, and ADLG requires 4 s to filter the same image. OBNLM takes only 2.5 s, and NLMLS should run longer than OBNLM since its model combines local statistics with NLM to remove speckle. Since OBNLM achieves segmentation results comparable to the state-of-the-art methods in terms of segmentation accuracy, using either a level set-based segmentation model (more than 91 % accuracy) or a graph-cut model (more than 93 % accuracy), we adopt OBNLM for speckle removal in the pre-processing step. The reader can refer to the original work [35] for a detailed description.
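As a concrete illustration of this two-stage pre-processing, the sketch below chains contrast enhancement and despeckling with scikit-image. OBNLM [35] has no scikit-image implementation, so plain non-local means is used here only as an illustrative stand-in; the parameter values are assumptions, not the tuned settings of our experiments.

```python
# Sketch of the two-stage pre-processing (Section 2.1), assuming scikit-image.
# CLAHE/HEQ for contrast, and standard NLM as a stand-in for OBNLM [35].
import numpy as np
from skimage import exposure, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

def preprocess_bus(image, method="clahe"):
    """Contrast enhancement followed by speckle reduction (illustrative)."""
    img = img_as_float(image)
    # Stage 1: contrast enhancement (CLAHE for Dataset 1, HEQ for Dataset 2).
    if method == "clahe":
        enhanced = exposure.equalize_adapthist(img, clip_limit=0.02)
    else:
        enhanced = exposure.equalize_hist(img)
    # Stage 2: speckle reduction. OBNLM models speckle explicitly; here we
    # only illustrate the non-local averaging idea with standard NLM.
    sigma = np.mean(estimate_sigma(enhanced))
    despeckled = denoise_nl_means(enhanced, h=1.15 * sigma,
                                  patch_size=7, patch_distance=11,
                                  fast_mode=True)
    return despeckled
```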

Fig. 2. Example of saliency BUS image generation with the sets of foreground and background seeds. (a) input BUS image. (b) enhanced BUS image after pre-processing. (c)
generation of superpixels using the SLIC algorithm. (d) BUS saliency map obtained by SO method. (e) sets of tumor (red regions) and background (green regions) seeds.

Fig. 3. Example of pixel-based refinement scheme using saliency score computation and ACM. (a) input BUS image. (b) BUS saliency map. (c) tumor (red region) and background (green region) seeds. (d) segmented tumor using LS (red region) and unknown region (blue region). (e) refined segmented tumor (red region). (f) ground truth.

2.2. BUS image saliency detection and tumor seeds generation

The goal of saliency detection is to distinguish the most attractive and informative, so-called salient, region in a scene. Recently, the concept of saliency has been applied in many areas of computer vision and is gaining more and more interest in biomedical applications [38–40]. The contrast prior and the boundary prior are the most used principles to compute bottom-up saliency models [41]. The first supposes that appearance contrasts between objects and their surrounding regions are high. The second supposes that image boundary regions mostly belong to the background. However, when a part of the salient object touches the image boundaries, the use of the boundary prior may affect the saliency computation. To overcome this issue, a robust boundary connectivity measure was proposed in [42] to compute saliency; it assumes that object regions are much less connected to image boundaries than background regions.

Since the main goal of this step is to compute a saliency map of the BUS image that provides visual clues of tumors usable as input seeds for the segmentation process, and based on the observation that tumor regions are much less connected to image boundaries than background ones, the saliency optimisation (SO) method of [42] is a natural choice. To validate this step, in addition to the SO method we have tested three other saliency methods based on the boundary prior: geodesic saliency (GS) [43], manifold ranking-based saliency (MR) [44] and minimum barrier distance-based saliency (MB) [45]. According to the results of our experiments (see subsection 3.4), SO produces the best compromise between accuracy and runtime over the BUS images used in our tests.

Since the SO method is a superpixel-level model, to build the BUS saliency map we first over-segment the input image using the SLIC algorithm [46] to generate N image regions. Let I be the enhanced input BUS image and {sp_k}, k = 1, ..., N, the set of N image superpixels. We apply the SO method to the image I to generate the saliency map S = {s_k}, k = 1, ..., N, where s_k denotes the saliency value of the superpixel sp_k. To provide initial seeds for the segmentation in the next step, image superpixels with the highest saliency values are taken as tumor seeds while those with the lowest saliency values are taken as background seeds. Let F and B be the sets of foreground and background seeds respectively; F and B are defined as follows:

F = { sp_k ∈ I, k ∈ {1, . . ., N} / sp_k = argmax_{i ∈ {1,...,N}}(s_i) } (1)

B = { sp_k ∈ I, k ∈ {1, . . ., N} / sp_k = argmin_{i ∈ {1,...,N}}(s_i) } (2)

Fig. 2 shows an example of a BUS image with the corresponding saliency map and the sets of foreground and background seeds.

2.3. Tumor segmentation and result refinement

Using the sets F and B of foreground and background seeds defined by Eqs. (1) and (2) respectively, we apply the LS algorithm [26] to the BUS image I to segment the tumor region. LS is a graph-based interactive image segmentation method proposed to improve the graph cut [47] in terms of speed and accuracy by using small image regions (the set of N superpixels in our case) as nodes of the graph instead of image pixels. Although the use of the SLIC method reduces the runtime and generates accurate boundary regions, our algorithm may produce mis-segmentation results in some BUS images (see the example in Fig. 3). To overcome this issue, we develop a pixel-based refinement scheme using saliency score computation and the ACM. We define a saliency score measure over the regions that belong neither to the background in (2) nor to the segmented tumor region. We consider the labels of those superpixels unknown and investigate their average saliency value. If the computed average saliency score is greater than a threshold th, we assume that salient regions were omitted during the segmentation stage and are candidates to belong to the tumor region. In this case, we apply a pixel-wise segmentation using the ACM of [48], taking the contour of the LS-based segmentation result as the initial contour. Otherwise, i.e. if the average saliency score is less than the threshold, we keep the result of the LS segmentation and do not apply this post-treatment, to avoid the risk of including non-tumor regions in the final result.

Let Bg be the binary image corresponding to the effective background pixels defined in (2) and Bseg the binary segmentation of the BUS image I obtained by the LS method. The unknown region Ur is defined as follows:

Ur = B̄g ∩ B̄seg (3)

where the bar operator ( ̄ ) computes the image complement. The saliency score is given as the average saliency value over the superpixels belonging to Ur:

score = avg { S(sp_k) : sp_k ∈ Ur, k ∈ {1, . . ., N} } (4)

To decide whether the pixel-based refinement step based on the ACM [48] should be adopted or not, we compare the computed average saliency value to a fixed threshold th as follows:

if score > th, then apply the ACM to I based on the contour in Bseg;
else, keep Bseg as the final result.

The value of th is set experimentally to 60.

Fig. 3 illustrates the proposed pixel-based refinement scheme using saliency score computation and the ACM with an example. After pre-processing the input BUS image (Fig. 3-(a)), the saliency map of the enhanced image is computed (Fig. 3-(b)) and the sets of tumor and background seeds are shown in red and green (Fig. 3-(c)) respectively. The result of the segmentation using LS and the unknown region are shown in red and blue (Fig. 3-(d)) respectively. The saliency score of the unknown region Ur (3) is computed: in this example it is equal to 95, greater than the fixed threshold th = 60. The refinement step is then applied by taking the boundary of the segmented region as the initial contour for the ACM. The final segmentation result is shown in Fig. 3-(e) (red region) and the ground-truth image in Fig. 3-(f).
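As an illustration of Steps 2 and 3, the sketch below generates seeds from a saliency map and runs a seeded graph-based segmentation. Here compute_saliency is a placeholder for the SO model [42], the top/bottom 10 % quantiles are a practical relaxation of the argmax/argmin selection of Eqs. (1)–(2), and scikit-image's random walker (which the library does ship, unlike Lazy Snapping) stands in for LS [26]; all of these substitutions and parameter values are assumptions.

```python
# Sketch of seed generation (Eqs. (1)-(2)) and seeded graph-based
# segmentation. compute_saliency() is a placeholder for the SO model [42];
# random_walker stands in for Lazy Snapping [26]. Fractions are assumptions.
import numpy as np
from skimage.segmentation import slic, random_walker

def segment_tumor(image, compute_saliency, n_segments=200):
    # Over-segment the enhanced image into N superpixels with SLIC [46].
    sp = slic(image, n_segments=n_segments, compactness=0.1,
              channel_axis=None, start_label=1)
    sal = compute_saliency(image)          # saliency map in [0, 255]
    # Per-superpixel mean saliency s_k.
    labels = np.unique(sp)
    s = np.array([sal[sp == k].mean() for k in labels])
    # Eqs. (1)-(2) keep the extreme-saliency superpixels; we relax them to
    # the top/bottom 10% so the seeds cover more than one region each.
    lo, hi = np.quantile(s, [0.10, 0.90])
    seeds = np.zeros_like(sp)
    for k, sk in zip(labels, s):
        if sk >= hi:
            seeds[sp == k] = 1             # tumor (foreground) seeds F
        elif sk <= lo:
            seeds[sp == k] = 2             # background seeds B
    # Seeded graph-based segmentation (label 1 = tumor).
    out = random_walker(image, seeds, beta=130, mode="bf")
    return out == 1
```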
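The refinement decision of Eqs. (3)–(4) can be sketched as follows. morphological_chan_vese, a morphological variant of the Chan–Vese model, is used here only as an approximation of the ACM of [48]; the pixel-wise (rather than superpixel-wise) averaging and the iteration count are assumptions.

```python
# Sketch of the refinement decision (Eqs. (3)-(4)) and ACM post-processing.
# morphological_chan_vese approximates the Chan-Vese ACM of [48].
import numpy as np
from skimage.segmentation import morphological_chan_vese

def refine_segmentation(image, sal, bg_seeds, seg, th=60.0):
    """sal: saliency map in [0, 255]; bg_seeds: binary background image Bg;
    seg: binary LS segmentation Bseg (boolean arrays)."""
    # Eq. (3): unknown region = neither background nor segmented tumor.
    unknown = (~bg_seeds) & (~seg)
    if not unknown.any():
        return seg
    # Eq. (4): average saliency over the unknown region (computed per pixel
    # here, rather than per superpixel as in the paper).
    score = sal[unknown].mean()
    if score <= th:
        return seg                      # keep the LS result unchanged
    # Salient regions were missed: evolve an ACM from the LS result.
    refined = morphological_chan_vese(image, 100, init_level_set=seg)
    return refined.astype(bool)
```

Initializing the level set with the LS mask mirrors the paper's choice of the LS contour as the starting contour, so the ACM only has to grow or shrink the existing region.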

Table 1
Comparison of segmentation accuracy and pre-processing time of our framework using different contrast enhancement methods in the two experimented datasets. All values
are given as mean ± std.

D (%) J (%) TPR (%) BF (%) time (s)

Dataset 1
Ours using CLAHE 62.14 ± 5.17 54.65 ± 4.01 54.17 ± 5.78 50.08 ± 10.15 0.01 ± 0.00
Ours using HEQ 45.38 ± 10.11 36.25 ± 10.02 48.91 ± 4.87 36.42 ± 8.61 <0.01
Ours using FEN 9.41 ± 2.45 7.88 ± 1.12 10.91 ± 3.06 8.60 ± 2.11 0.12 ± 0.02
Ours using SACE 35.24 ± 8.38 27.42 ± 4.43 31.22 ± 6.18 23.45 ± 3.87 14.47 ± 2.45
Dataset 2
Ours using CLAHE 55.37 ± 2.70 43.69 ± 1.56 46.97 ± 1.87 78.82 ± 7.17 <0.01
Ours using HEQ 84.07 ± 10.95 73.29 ± 12.73 76.81 ± 8.58 98.15 ± 2.47 <0.01
Ours using FEN 66.90 ± 9.35 58.54 ± 7.17 63.66 ± 1.01 92.60 ± 6.93 0.08 ± 0.01
Ours using SACE 56.03 ± 1.32 42.22 ± 0.16 42.86 ± 0.99 87.07 ± 3.23 3.88 ± 0.02

3. Results and discussion

3.1. Datasets

We have conducted the experiments on two datasets, which we name Dataset 1 and Dataset 2. Dataset 1 is the BUS Lesions Dataset (Dataset B) [49], provided under a release agreement. It contains 163 lesion images with a mean size of 760 × 570 pixels; 53 images present cancerous masses while 110 concern benign lesions. The lesions were delineated by experienced radiologists to provide the ground truth for each BUS image. In our experiments, each image is resized by a scale factor of 0.5 to reduce the runtime. Dataset 2 has recently been made available online by the authors of [50]. It contains 160 BUS images with benign tumors and 160 BUS images with malignant tumors. The size of all the images is 128 × 128 pixels and the labels of tumor regions are provided. Following the authors of [50], 100 BUS images (50 with benign tumors and 50 with malign tumors) from Dataset 2 have been used in the experiments to evaluate the different results. All the results were computed on a Windows 7 64-bit operating system running MATLAB 2017 with an Intel i5-2430M 2.40 GHz CPU and 8 GB of RAM.

3.2. Evaluation metrics

To evaluate the binary segmentation results obtained by the different methods, we follow previous works [10] by using the Dice similarity index (D), the Jaccard index (J) and the True Positive Rate (TPR), in addition to the mis-segmentation rate (Err) and the Boundary F1 (BF) score [51]. Given a segmented BUS image S and its ground truth G, these metrics are defined as follows:

D = 2|S ∩ G| / (|S| + |G|) (5)

J = |S ∩ G| / |S ∪ G| (6)

TPR = |S ∩ G| / |G| (7)

Err = |xor(S, G)| / |G| (8)

BF = 2 · Pb · Rb / (Pb + Rb) (9)

In (9), Pb is the ratio of the number of points on the boundary of S that are close enough to the boundary of G to the length of the predicted boundary, and Rb is the ratio of the number of points on the boundary of G that are close enough to the boundary of S to the length of the boundary of S. The computed values of these metrics are expressed as mean ± std over the BUS datasets.

3.3. Validation of the choice of the pre-processing method

To validate the choice of the contrast enhancement method among the HEQ, CLAHE, FEN and SACE methods (see subsection 2.1.1), we test our algorithm using the four techniques and evaluate the tumor segmentation results using different metrics (defined in subsection 3.2), in addition to the pre-processing time in seconds (s). Table 1 presents the obtained results, and some examples of pre-processed BUS images using the different methods are shown in Fig. 4.

According to the results in Table 1, the highest average segmentation accuracy scores (D, J, TPR, BF) in Dataset 1 are achieved using the CLAHE method, while FEN dramatically degrades the tumor segmentation process. On the other hand, the results of pre-processed BUS images from Dataset 2 using CLAHE are the worst, whereas HEQ produces the best results in Dataset 2, improving segmentation accuracy by over 20 % compared with the other contrast enhancement methods. Taking the pre-processing time into consideration, both CLAHE and HEQ take less than 0.01 s to process one image. Since CLAHE and HEQ outperform the other contrast enhancement methods, producing the best tumor segmentation results in the least time in Dataset 1 and Dataset 2 respectively, CLAHE is adopted to pre-process the BUS images in Dataset 1 and HEQ is selected for contrast enhancement in Dataset 2.

3.4. Validation of the choice of saliency detection model

To validate the choice of the saliency method in our pipeline for tumor seeds generation, we have tested four saliency models based on the boundary prior: GS [43], MR [44], MB [45] and SO [42]. Some samples of BUS saliency maps obtained by the different methods are shown in Fig. 5.

To decide which model performs best for our system, we follow previous works and evaluate the different saliency results using two criteria: the standard precision-recall (PR) curve and the mean absolute error (MAE) [29]. The resulting MAE measures and PR curves are given in Table 2 and Fig. 6 respectively.

Table 2
MAE measures of the different saliency methods (SO, MB, GS and MR) on Dataset 1 and Dataset 2, and the average saliency detection runtime over the two datasets in seconds (s). All values are given as mean ± std.

SO MB GS MR

Dataset 1 19.23 ± 1.30 28.54 ± 2.11 24.32 ± 0.78 25.04 ± 2.63
Dataset 2 17.55 ± 0.96 14.99 ± 1.15 19.64 ± 2.11 26.79 ± 2.31
Time (s) 0.16 ± 0.01 0.03 ± 0.01 0.14 ± 0.02 0.17 ± 0.01

By inspecting Fig. 6 and Table 2, we find that SO is the best performing method in terms of PR analysis and MAE, producing the lowest error value in Dataset 1. In Dataset 2, SO, MB and GS all generate high PR scores, but MB has the lowest MAE. Comparing the average running times of the four saliency methods reported in Table 2, all of them take less than 0.2 s to detect image saliency, and MB is the fastest. For a deeper analysis, we have tested our BUS segmentation algorithm with each of the four saliency methods.
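As a concrete reference for the metrics of Eqs. (5)–(9) used throughout these comparisons, a minimal sketch follows; the 2-pixel boundary tolerance in the BF score is an assumption, not the setting of [51].

```python
# Sketch of the evaluation metrics of Eqs. (5)-(9) for binary masks.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def region_metrics(S, G):
    """S: segmentation, G: ground truth (boolean arrays)."""
    inter = np.logical_and(S, G).sum()
    D = 2 * inter / (S.sum() + G.sum())            # Dice, Eq. (5)
    J = inter / np.logical_or(S, G).sum()          # Jaccard, Eq. (6)
    TPR = inter / G.sum()                          # Eq. (7)
    Err = np.logical_xor(S, G).sum() / G.sum()     # Eq. (8)
    return D, J, TPR, Err

def bf_score(S, G, tol=2):
    """Boundary F1, Eq. (9): boundary points match when within tol pixels."""
    def boundary(M):
        return M & ~binary_erosion(M)
    bS, bG = boundary(S), boundary(G)
    # "Close enough" realized by dilating the opposite boundary by tol.
    near_G = binary_dilation(bG, iterations=tol)
    near_S = binary_dilation(bS, iterations=tol)
    Pb = (bS & near_G).sum() / max(bS.sum(), 1)
    Rb = (bG & near_S).sum() / max(bG.sum(), 1)
    return 2 * Pb * Rb / max(Pb + Rb, 1e-12)
```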

Fig. 4. Examples of pre-processed images of the BUS image (a) using the different contrast enhancement methods: (b) CLAHE, (c) FEN, (d) HEQ and (e) SACE. The BUS image
in the first row is from Dataset 1 and the image in the second row is from Dataset 2.

Fig. 5. Some samples of saliency maps of the BUS image (a) obtained by the methods: MB (b), GS (c), MR (d) and SO (e). The BUS image in the first row is from Dataset 1 and
the image in the second row is from Dataset 2.

Fig. 6. Evaluation of different saliency models (GS [43], MR [44], MB [45] and SO [42]) using PR curve on (a) Dataset 1 and (b) Dataset 2.
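The two saliency evaluation criteria of subsection 3.4 can be sketched as follows, assuming a saliency map scaled to [0, 255] and a binary tumor ground truth; the threshold grid and the percentage scaling (chosen to match the 0–100 range of Table 2) are assumptions.

```python
# Sketch of MAE and precision-recall evaluation for a saliency map.
import numpy as np

def mae(sal, gt):
    """Mean absolute error between a [0, 255] saliency map and binary GT,
    reported here on a 0-100 scale (an assumption to match Table 2)."""
    return np.abs(sal / 255.0 - gt.astype(float)).mean() * 100

def pr_curve(sal, gt, thresholds=range(0, 256, 5)):
    """Binarize the map at each threshold; collect (precision, recall)."""
    pts = []
    for t in thresholds:
        pred = sal >= t
        tp = np.logical_and(pred, gt).sum()
        precision = tp / max(pred.sum(), 1)
        recall = tp / max(gt.sum(), 1)
        pts.append((precision, recall))
    return pts
```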

Table 3
Segmentation performance using different metrics with the four saliency models on Dataset 1 and Dataset 2. All values are given as mean ± std.

D (%) J (%) TPR (%) BF (%) Err (%)

Dataset 1
Ours with SO 62.14 ± 5.17 54.65 ± 4.01 54.17 ± 5.78 50.08 ± 10.15 4.11 ± 0.35
Ours with GS 35.86 ± 6.41 40.73 ± 5.42 27.86 ± 2.26 45.75 ± 7.70 5.46 ± 1.06
Ours with MB 37.47 ± 4.93 39.78 ± 3.19 49.56 ± 6.11 47.24 ± 12.05 28.52 ± 3.42
Dataset 2
Ours with SO 84.07 ± 10.95 73.29 ± 12.73 76.81 ± 8.58 98.15 ± 2.47 13.35 ± 7.21
Ours with GS 80.18 ± 8.10 70.05 ± 6.81 72.36 ± 1.76 96.70 ± 1.01 14.31 ± 2.01
Ours with MB 80.01 ± 8.31 67.75 ± 7.11 69.42 ± 0.30 96.92 ± 3.05 15.18 ± 1.59

Table 4
Segmentation performance using different metrics on Dataset 1 with benign tumor. All values are given as mean ± std.

Method Detection ratio (%) D (%) J (%) TPR (%) BF (%) Err (%)

[52] 81.65 47.62 ± 8.52 41.88 ± 7.68 39.87 ± 5.13 36.15 ± 9.21 4.50 ± 0.35
[53] 83.49 42.15 ± 1.15 35.67 ± 5.20 34.97 ± 10.11 33.50 ± 2.67 5.37 ± 0.88
ours (1) 100 62.84 ± 3.75 61.17 ± 5.24 58.79 ± 2.63 52.08 ± 6.36 4.17 ± 0.62
ours 100 67.75 ± 6.32 64.62 ± 2.45 62.82 ± 5.02 58.11 ± 9.88 3.92 ± 0.10

Table 5
Segmentation performance using different metrics on Dataset 1 with malign tumor. All values are given as mean ± std.

Method Detection ratio (%) D (%) J (%) TPR (%) BF (%) Err (%)

[52] 61.09 37.15 ± 12.75 28.46 ± 10.12 30.12 ± 13.65 22.08 ± 6.25 6.22 ± 0.15
[53] 75.91 36.87 ± 8.04 27.54 ± 5.71 26.68 ± 11.50 19.02 ± 12.75 6.38 ± 0.23
ours (1) 100 40.87 ± 7.14 31.38 ± 8.60 34.55 ± 9.12 24.04 ± 11.35 5.91 ± 0.16
ours 100 51.11 ± 10.05 38.86 ± 8.14 41.09 ± 3.15 34.45 ± 10.33 5.63 ± 0.03

Table 7
Segmentation performance using different metrics on Dataset 2 with benign tumor. All values are given as mean ± std.

Method Detection ratio (%) D (%) J (%) TPR (%) BF (%) Err (%)

[8] 100 74.83 ± 13.42 61.36 ± 15.14 72.10 ± 16.46 97.01 ± 3.46 19.12 ± 8.49
[9] 100 77.81 ± 7.46 64.26 ± 9.63 71.82 ± 9.87 92.39 ± 4.76 17.34 ± 6.59
ours (1) 100 81.34 ± 10.57 69.77 ± 14.05 73.17 ± 10.61 97.74 ± 3.23 15.12 ± 8.15
ours 100 84.13 ± 8.84 73.48 ± 11.77 76.90 ± 8.96 98.18 ± 2.14 13.19 ± 6.73

Table 6
Average runtime in seconds (s) of the different methods over all the BUS images in Dataset 1. All values are given as mean ± std.

Method [52] [53] ours (1) ours
Time (s) 8.10 ± 1.16 36.65 ± 1.48 3.45 ± 0.76 3.28 ± 0.05

Table 9
Average runtime in seconds (s) of the different methods over all the BUS images in Dataset 2. All values are given as mean ± std.

Method [8] [9] ours (1) ours
Time (s) >720 ≈0.12 2.52 ± 0.08 2.43 ± 0.06

The quantitative results of segmentation performance on the two datasets are summarized in Table 3. According to the different measures in Table 3, the highest scores on all metrics and the lowest error rate are produced by our BUS segmentation results using SO on both datasets. Therefore, the SO model is used to compute BUS image saliency in our framework.

3.5. Evaluation of tumor segmentation results

To show the performance of our system, we compare our lesion segmentation results with two fully automatic methods presented in [52,53] and two semi-automatic ones proposed in [8,9]. The method in [52] performs BUS lesion segmentation based on Quickshift [54] followed by a post-processing step. The work of [53] used the Normalized Cut approach [55] and the K-means classifier to segment tumor regions. The authors in [8] proposed a robust graph-based lesion segmentation model with automatically optimized parameters. In [9], an efficient and rapid BUS segmentation method was developed using the mean shift and the graph cut.

We compared our results on Dataset 1 with those obtained by [52,53] based on the codes provided by their authors online. The comparison with the methods in [8] and [9] was performed on Dataset 2 using the pre-computed segmentation masks recently made available by the authors of Dataset 2 in [50]. Furthermore, to demonstrate the effectiveness of the saliency score computation in the refinement step, we have generated the results of tumor segmentation using the same pipeline without computing the saliency score, i.e. by applying the ACM directly to refine the graph-based segmentation result. Hereafter we designate by 'ours' the proposed method and by 'ours (1)' the proposed method without saliency score computation.

3.5.1. Quantitative results

In addition to the evaluation measures defined in subsection 3.2, we have computed the detection ratio, which is equal to the number of BUS images where a tumor is found divided by the total number of BUS images. We note that BUS images where no tumor was detected have been excluded from the evaluation to give a meaningful comparison between the different methods.

The quantitative results and the runtime of each method in seconds (s) on Dataset 1 are presented in Tables 4–6, and those on Dataset 2 are presented in Tables 7–9.

Regarding the results obtained on Dataset 1, we observe that our method is able to detect a tumor in every image of the dataset, while the other methods fail to find the ROI in about 20 % of the BUS images containing benign tumors (Table 4) and more than 25 % of the BUS images with malign tumors (Table 5). Furthermore, the best performing approach in terms of the D, J, TPR and BF measures is our method, with segmentation accuracy improved by over 20 % compared with

Table 8
Segmentation performance using different metrics on Dataset 2 with malign tumor. All values are given as mean ± std.

Method Detection ratio (%) D (%) J (%) TPR (%) BF (%) Err (%)

[8] 100 74.74 ± 11.37 60.84 ± 13.22 72.73 ± 11.02 96.72 ± 3.95 18.26 ± 6.44
[9] 100 76.86 ± 5.65 62.76 ± 7.57 71.45 ± 7.78 92.15 ± 3.87 16.90 ± 4.61
ours (1) 100 79.58 ± 14.47 67.93 ± 15.95 70.14 ± 7.41 97.06 ± 2.26 15.88 ± 8.30
ours 100 84.00 ± 13.08 73.09 ± 13.70 76.73 ± 8.21 98.13 ± 2.80 13.51 ± 7.69

Fig. 7. Some examples of tumor segmentation on BUS images (a) using our method without saliency score computation (b) and with saliency score computation (c). Ground-truth images are shown in row (d).

the methods in [52,53] on BUS images with benign tumors (Table 4) and by 10 % in the case of malign tumors (Table 5). In terms of the mis-segmentation error rate (Err), the proposed method produces the lowest value on BUS images with both benign and malign lesions.

By inspecting the results of the experiments on Dataset 2, our method and the methods in [8,9] successfully detected the tumor region in all the BUS images, with either benign or malign lesions (Tables 7 and 8). If we compare our segmentation results with those of [8,9], the proposed method achieves the highest D, J, TPR and BF scores and the lowest error over all the BUS images in Dataset 2, with benign tumors (Table 7) and malign ones (Table 8).

By analysing the results obtained by our method with and without saliency score computation, a significant improvement of segmentation performance is observed on BUS images with either benign or malign tumors in both experimented datasets when the saliency score is used to decide whether a refinement step is required. This improvement is due to the fact that the saliency score excludes non-salient regions from being segmented as tumor regions in the refinement step.

If we focus on the type of lesions, it can be seen that all the methods perform better on BUS images with benign tumors than on those with malign tumors, in both experimented datasets.

In terms of runtime, our system can segment the tumor region in less than 3.5 s, while the methods of [52] and [53] need about 8 s and 36 s respectively to process the same BUS image in Dataset 1 (Table 6). For Dataset 2, we note that the runtimes of the methods in [8,9] are reported from [50] and were computed on a more powerful machine than ours. According to the values in Table 9, the method in [9] is the fastest and can segment a BUS image in less than 0.2 s, while the method in [8] needs more than 720 s to extract the tumor region. The runtime of the proposed method on Dataset 2 is reasonable (about 2.5 s), especially considering that our system is fully automatic and outperforms the other methods in terms of tumor segmentation.

3.5.2. Qualitative results

For visual comparison, Fig. 7 presents some examples of segmented BUS images from Dataset 1 using our method without (Fig. 7-(b)) and with (Fig. 7-(c)) saliency score computation. By comparing the segmentation results with the ground truth (Fig. 7-(d)), we can see that applying the ACM directly to refine the segmented tumor may produce an over-segmentation result by including non-tumor regions in the final result, as shown in Fig. 7-(b). On the other hand, investigating the saliency score is effective for deciding whether a post-processing step is required.

Fig. 8 presents some examples of segmented lesions in BUS images from Dataset 1 using our method and the other approaches in [52,53]. Visually, the proposed method outperforms the other works both in the case of benign lesions (the first two rows in Fig. 8) and malign ones (the last two rows). Some qualitative results obtained from Dataset 2 by our method and the works in [8,9] are also presented in Fig. 9, where the first two rows correspond to benign tumors and the last two rows to malign ones. The visual comparison shows that our results are the closest to the ground-truth images in both benign and malign cases compared with the results of [8,9].

3.6. Evaluation of tumor detection results

For a more intensive evaluation of our system, and following the authors of [49], we compare our results in terms of lesion detection. For that, each segmented lesion is surrounded by a bounding box, where the upper-left corner of the box is the coordinate of the upper-left point of the boundary of the segmented tumor, and the bottom-right corner corresponds to the coordinate of the bottom-right one. Fig. 10 presents the results of tumor detection based on our segmentation results and the results obtained by Elawady et al. [52] and Sadek et al. [53].

We have compared our lesion detection results with a deep learning-based method [49] using the U-Net implementation [56]

Fig. 8. Some examples of BUS image segmentation results from Dataset 1. (a) original BUS images. (b), (c) and (d) results obtained by [52], [53], and our method respectively. (e) binary ground truth. The first two rows present BUS images with benign tumors and the last two rows present malign tumors.

Fig. 9. Some examples of BUS image segmentation results from Dataset 2. (a) original BUS images. (b), (c) and (d) results obtained by [8], [9], and our method respectively. (e) binary ground truth. The first two rows present BUS images with benign tumors and the last two rows present malign tumors.

and other approaches based on Radial Gradient Index filtering [57], multifractal filtering [58], region ranking [25] and DMs [59]. All the compared methods are described in detail in [49]. The results of [52,53] are also included in this study. The performance of tumor detection in BUS images is measured by the True Positive Fraction (TPF) and the number of False Positives per image (FPs/image) [49], given as follows:

TPF = number of TPs / number of actual lesions (10)

Fig. 10. Some examples of tumor detection based on our segmentation results (a) and the results obtained by [53] (b) and [52] (c). The green color indicates the detected tumor obtained by each method and the red color presents the ground-truth tumor bounding box. The absence of green color means that no lesion was detected.

Fig. 11. Example of failure cases. (a) original BUS images. (b) saliency maps. (c) tumor (red) and background seeds (green). (d) segmentation results. (e) ground truth.

FPs/image = number of FPs / number of images (11)

In (10) and (11), a detection is considered a True Positive (TP) if the center of the segmented region is placed within the bounding box of the ground truth; otherwise, it is considered a False Positive (FP). The different results are presented in Table 10. It should be noted that all the measures are reported from the work of [49] using the same dataset (Dataset 1), except the results of [52,53] and ours, which are computed from the segmentation results.

Table 10
Comparison of tumor detection accuracy using different metrics.

Method TPF FPs/image F-measure

[57] 72 247 34
[58] 59 51 56
[25] 60 54 56
[59] 79 21 79
[53] 31 69 33
[52] 33 67 33
[49] 77 28 75
ours 74 26 74

According to the results presented in Table 10, the best performing methods for BUS tumor detection are [59], [49] and ours. Taking into consideration that the BUS images in the experimented dataset present many challenges that make lesion detection more difficult, since they were acquired with modern ultrasound devices and may include other structures such as ribs, pectoral muscle or air in the lungs, we can affirm the effectiveness of our system in terms of lesion detection. In fact, our method outperforms the other image processing-based approaches [57,58,25,53,52] and produces results competitive with a recent deep learning-based method [49] and the deformable part models (DPM)-based method [59]. Furthermore, our method remains the fastest, since it does not require any training phase, unlike [49,59].

3.7. Failure cases

Since our BUS tumor segmentation algorithm is guided by saliency, saliency detection errors may produce wrong tumor seeds and therefore wrong segmentation results. Fig. 11 presents some failure cases where non-tumor regions have been segmented as tumor regions (Fig. 11-(d)) due to false lesion and background seeds (Fig. 11-(c)) detected from the saliency map (Fig. 11-(b)).

Such failure cases can easily be rectified by user interaction to mark the ROI; however, since we aim in this work to present a fully automatic CAD system for BUS image segmentation, we will address this issue in future work by developing a robust BUS saliency model that integrates anatomical characteristics of the breast in BUS images and is able to accurately distinguish the tumor region from the rest of the image.
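To make the detection protocol of Section 3.6 concrete, the sketch below derives a bounding box from a binary segmentation and accumulates the counts behind Eqs. (10) and (11); the assumption of one ground-truth lesion box per image is ours, for illustration.

```python
# Sketch of the detection evaluation of Section 3.6 (Eqs. (10)-(11)).
# Assumes one ground-truth lesion box per image.
import numpy as np

def bounding_box(mask):
    """Tight box (top, left, bottom, right) around a binary segmentation."""
    rows, cols = np.nonzero(mask)
    return rows.min(), cols.min(), rows.max(), cols.max()

def detection_counts(seg_masks, gt_boxes):
    tps = fps = 0
    for mask, (t, l, b, r) in zip(seg_masks, gt_boxes):
        if not mask.any():
            continue                      # no lesion detected in this image
        rows, cols = np.nonzero(mask)
        cy, cx = rows.mean(), cols.mean() # center of the segmented region
        # TP if the center falls inside the ground-truth box, else FP.
        if t <= cy <= b and l <= cx <= r:
            tps += 1
        else:
            fps += 1
    tpf = tps / len(gt_boxes)             # Eq. (10)
    fps_per_image = fps / len(seg_masks)  # Eq. (11)
    return tpf, fps_per_image
```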

4. Conclusion

A new saliency-guided approach for automatic detection and segmentation of lesions in BUS images has been presented in this paper. In our CAD system, we exploit the concept of saliency to estimate the most salient region in the BUS image and to generate effective tumor and background seeds. These seeds are integrated into a graph-cut-based segmentation framework to extract the tumor region. In order to avoid mis-segmentation results, we propose to use the BUS image saliency again to compute a saliency score. Based on the computed score, a decision is taken on whether to apply a post-processing step to refine the segmented tumor. The choice of technique at each stage of the proposed framework has been validated experimentally. Our results have been compared with different methods from the literature on challenging datasets using different evaluation metrics, and achieve high performance in terms of segmentation accuracy as well as tumor detection, without requiring long execution times.

CRediT authorship contribution statement

Hiba Ramadan: Conceptualization, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing - original draft, Writing - review & editing. Chaymae Lachqar: Writing - original draft, Writing - review & editing. Hamid Tairi: Validation, Supervision.

Acknowledgments

The authors thank the authors of [49] for providing the BUS Dataset B and the authors of [50] for making available their BUS dataset as well as all the qualitative results of the works used in the comparison in this paper. The authors gratefully acknowledge the anonymous reviewers for their valuable remarks and comments, which significantly contributed to improving the quality of this paper.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] F. Bray, J. Ferlay, I. Soerjomataram, R.L. Siegel, L.A. Torre, A. Jemal, Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries, CA Cancer J. Clin. 68 (2018) 394–424, http://dx.doi.org/10.3322/caac.21492.
[2] W.A. Berg, A.I. Bandos, E.B. Mendelson, D. Lehrer, R.A. Jong, E.D. Pisano, Ultrasound as the primary screening test for breast cancer: analysis from ACRIN 6666, J. Natl. Cancer Inst. 108 (2016) djv367, http://dx.doi.org/10.1093/jnci/djv367.
[3] D. Thigpen, A. Kappler, R. Brem, The role of ultrasound in screening dense breasts-a review of the literature and practical solutions for implementation, Diagnostics (Basel, Switzerland) 8 (2018), http://dx.doi.org/10.3390/diagnostics8010020.
[4] M.I. Daoud, A.A. Atallah, F. Awwad, M. Al-Najjar, R. Alazrai, Automatic superpixel-based segmentation method for breast ultrasound images, Expert Syst. Appl. 121 (2019) 78–96, http://dx.doi.org/10.1016/J.ESWA.2018.11.024.
[5] W.G. Flores, W.C. de A. Pereira, A contrast enhancement method for improving the segmentation of breast lesions on ultrasonography, Comput. Biol. Med. 80 (2017) 14–23, http://dx.doi.org/10.1016/J.COMPBIOMED.2016.11.005.
[6] M. Xian, Y. Zhang, H.D. Cheng, F. Xu, K. Huang, B. Zhang, J. Ding, C. Ning, Y. Wang, A Benchmark for Breast Ultrasound Image Segmentation (BUSIS), 2018, http://arxiv.org/abs/1801.03182. (Accessed 19 March 2019).
[7] Q.-H. Huang, S.-Y. Lee, L.-Z. Liu, M.-H. Lu, L.-W. Jin, A.-H. Li, A robust graph-based segmentation method for breast tumors in ultrasound images, Ultrasonics 52 (2012) 266–275.
[8] Q. Huang, X. Bai, Y. Li, L. Jin, X. Li, Optimized graph-based segmentation for ultrasound images, Neurocomputing 129 (2014) 216–224.
[9] Z. Zhou, W. Wu, S. Wu, P.H. Tsui, C.C. Lin, L. Zhang, T. Wang, Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts, Ultrason. Imaging 36 (2014) 256–276, http://dx.doi.org/10.1177/0161734614524735.
[10] M. Xian, Y. Zhang, H.D. Cheng, F. Xu, B. Zhang, J. Ding, Automatic breast ultrasound image segmentation: a survey, Pattern Recognit. 79 (2018) 340–355, http://dx.doi.org/10.1016/J.PATCOG.2018.02.012.
[11] H.D. Cheng, J. Shan, W. Ju, Y. Guo, L. Zhang, Automated breast cancer detection and classification using ultrasound images: a survey, Pattern Recognit. 43 (2010) 299–317, http://dx.doi.org/10.1016/J.PATCOG.2009.05.012.
[12] D. Terzopoulos, On matching deformable models to images, Opt. Soc. Am. Top. Meet. Mach. Vis. (1980) 160–163 (SEE N89-19145 11-74).
[13] M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, Int. J. Comput. Vis. 1 (1988) 321–331, http://dx.doi.org/10.1007/BF00133570.
[14] A. Madabhushi, D.N. Metaxas, Combining low-, high-level and empirical domain knowledge for automated segmentation of ultrasonic breast lesions, IEEE Trans. Med. Imaging 22 (2003) 155–169, http://dx.doi.org/10.1109/TMI.2002.808364.
[15] Z. Liu, L. Zhang, H. Ren, J.-Y. Kim, A robust region-based active contour model with point classification for ultrasound breast lesion segmentation, in: Med. Imaging 2013 Comput. Diagnosis, SPIE, 2013, http://dx.doi.org/10.1117/12.2006164, p. 86701P.
[16] Q. Huang, F. Yang, L. Liu, X. Li, Automatic segmentation of breast lesions for interaction in ultrasonic computer-aided diagnosis, Inf. Sci. (Ny) 314 (2015) 293–310, http://dx.doi.org/10.1016/J.INS.2014.08.021.
[17] Y. Boykov, O. Veksler, R. Zabih, Fast approximate energy minimization via graph cuts, IEEE Trans. Pattern Anal. Mach. Intell. 23 (2001) 1222–1239.
[18] Q. Huang, Y. Luo, Q. Zhang, Breast ultrasound image segmentation: a survey, Int. J. Comput. Assist. Radiol. Surg. 12 (2017) 493–507, http://dx.doi.org/10.1007/s11548-016-1513-1.
[19] Z. Hao, Q. Wang, X. Wang, J.B. Kim, Y. Hwang, B.H. Cho, P. Guo, W.K. Lee, Learning a structured graphical model with boosted top-down features for ultrasound image segmentation, Lect. Notes Comput. Sci. (Including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics) (2013) 227–234, http://dx.doi.org/10.1007/978-3-642-40811-3_29.
[20] J. Zhang, S.K. Zhou, S. Brunke, C. Lowery, D. Comaniciu, Database-guided breast tumor detection and segmentation in 2D ultrasound images, in: N. Karssemeijer, R.M. Summers (Eds.), International Society for Optics and Photonics, 2010, http://dx.doi.org/10.1117/12.844558, p. 762405.
[21] Q. Huang, F. Zhang, X. Li, Machine learning in ultrasound computer-aided diagnostic systems: a survey, Biomed. Res. Int. 2018 (2018), http://dx.doi.org/10.1155/2018/5137904.
[22] S.F. Huang, Y.C. Chen, K.M. Woo, Neural network analysis applied to tumor segmentation on 3D breast ultrasound images, in: 2008 5th IEEE Int. Symp. Biomed. Imaging From Nano to Macro, Proceedings, ISBI, 2008, pp. 1303–1306, http://dx.doi.org/10.1109/ISBI.2008.4541243.
[23] P. Jiang, J. Peng, G. Zhang, E. Cheng, V. Megalooikonomou, H. Ling, Learning-based automatic breast tumor detection and segmentation in ultrasound images, Proc. Int. Symp. Biomed. Imaging (2012) 1587–1590, http://dx.doi.org/10.1109/ISBI.2012.6235878.
[24] W.K. Moon, C.-M. Lo, R.-T. Chen, Y.-W. Shen, J.M. Chang, C.-S. Huang, J.-H. Chen, W.-W. Hsu, R.-F. Chang, Tumor detection in automated breast ultrasound images using quantitative tissue clustering, Med. Phys. 41 (2014), 042901, http://dx.doi.org/10.1118/1.4869264.
[25] J. Shan, H.D. Cheng, Y. Wang, Completely automated segmentation approach for breast ultrasound images using multiple-domain features, Ultrasound Med. Biol. 38 (2012) 262–275.
[26] Y. Li, J. Sun, C.-K. Tang, H.-Y. Shum, Lazy snapping, ACM Trans. Graph. (2004) 303–308.
[27] H.D. Cheng, X.J. Shi, A simple and effective histogram equalization approach to image enhancement, Digit. Signal Process. 14 (2004) 158–170, http://dx.doi.org/10.1016/J.DSP.2003.07.002.
[28] K. Zuiderveld, Contrast limited adaptive histogram equalization, Graph. Gems. (1994) 474–485, http://dx.doi.org/10.1016/b978-0-12-336156-1.50061-6.
[29] Y. Guo, H.D. Cheng, J. Huang, J. Tian, W. Zhao, L. Sun, Y. Su, Breast ultrasound image enhancement using fuzzy logic, Ultrasound Med. Biol. 32 (2006) 237–247, http://dx.doi.org/10.1016/j.ultrasmedbio.2005.10.007.
[30] W. Gómez, L. Leija, A.V. Alvarenga, A.F.C. Infantosi, W.C.A. Pereira, Computerized lesion segmentation of breast ultrasound based on marker-controlled watershed transformation, Med. Phys. 37 (2009) 82–95, http://dx.doi.org/10.1118/1.3265959.
[31] W. Cui, M. Li, G. Gong, K. Lu, S. Sun, F. Dong, Guided trilateral filter and its application to ultrasound image despeckling, Biomed. Signal Process. Control 55 (2020), 101625, http://dx.doi.org/10.1016/j.bspc.2019.101625.
[32] X. Feng, X. Guo, Q. Huang, Systematic evaluation on speckle suppression methods in examination of ultrasound breast images, Appl. Sci. 7 (2016) 37, http://dx.doi.org/10.3390/app7010037.
[33] J. Zhang, C. Wang, Y. Cheng, Comparison of despeckle filters for breast ultrasound images, Circuits Syst. Signal Process. 34 (2015) 185–208, http://dx.doi.org/10.1007/s00034-014-9829-y.

[34] L. Zhu, C.-W. Fu, M.S. Brown, P.-A. Heng, A non-local low-rank framework for ultrasound speckle reduction, in: 2017 IEEE Conf. Comput. Vis. Pattern Recognit., IEEE, 2017, pp. 493–501, http://dx.doi.org/10.1109/CVPR.2017.60.
[35] P. Coupe, P. Hellier, C. Kervrann, C. Barillot, Nonlocal means-based speckle filtering for ultrasound images, IEEE Trans. Image Process. 18 (2009) 2221–2229, http://dx.doi.org/10.1109/TIP.2009.2024064.
[36] W. Gómez Flores, W.C. de A. Pereira, A.F.C. Infantosi, Breast ultrasound despeckling using anisotropic diffusion guided by texture descriptors, Ultrasound Med. Biol. 40 (2014) 2609–2621, http://dx.doi.org/10.1016/j.ultrasmedbio.2014.06.005.
[37] J. Yang, J. Fan, D. Ai, X. Wang, Y. Zheng, S. Tang, Y. Wang, Local statistics and non-local mean filter for speckle noise reduction in medical ultrasound image, Neurocomputing 195 (2016) 88–95, http://dx.doi.org/10.1016/J.NEUCOM.2015.05.140.
[38] K. Muhammad, M. Sajjad, M.Y. Lee, S.W. Baik, Efficient visual attention driven framework for key frames extraction from hysteroscopy videos, Biomed. Signal Process. Control 33 (2017) 161–168, http://dx.doi.org/10.1016/j.bspc.2016.11.011.
[39] C.L. Srinidhi, P. Aparna, J. Rajan, A visual attention guided unsupervised feature learning for robust vessel delineation in retinal images, Biomed. Signal Process. Control 44 (2018) 110–126, http://dx.doi.org/10.1016/j.bspc.2018.04.016.
[40] F. Deeba, F.M. Bui, K.A. Wahid, Computer-aided polyp detection based on image enhancement and saliency-based selection, Biomed. Signal Process. Control 55 (2020), 101530, http://dx.doi.org/10.1016/j.bspc.2019.04.007.
[41] A. Borji, M.-M. Cheng, H. Jiang, J. Li, Salient Object Detection: A Survey, ArXiv Prepr. ArXiv1411.5878, 2014.
[42] W. Zhu, S. Liang, Y. Wei, J. Sun, Saliency optimization from robust background detection, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (2014) 2814–2821.
[43] Y. Wei, F. Wen, W. Zhu, J. Sun, Geodesic saliency using background priors, Comput. Vision–ECCV 2012 (2012) 29–42.
[44] C. Yang, L. Zhang, H. Lu, X. Ruan, M.-H. Yang, Saliency detection via graph-based manifold ranking, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (2013) 3166–3173.
[45] J. Zhang, S. Sclaroff, Z. Lin, X. Shen, B. Price, R. Mech, Minimum barrier salient object detection at 80 fps, Proc. IEEE Int. Conf. Comput. Vis. (2015) 1404–1412.
[46] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, S. Süsstrunk, SLIC superpixels compared to state-of-the-art superpixel methods, IEEE Trans. Pattern Anal. Mach. Intell. 34 (2012) 2274–2282.
[47] Y.Y. Boykov, M.-P. Jolly, Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images, in: Proc. Eighth IEEE Int. Conf. Comput. Vis. (ICCV 2001), 2001, pp. 105–112.
[48] T.F. Chan, L.A. Vese, Active contours without edges, IEEE Trans. Image Process. 10 (2001) 266–277.
[49] M.H. Yap, G. Pons, J. Marti, S. Ganau, M. Sentis, R. Zwiggelaar, A.K. Davison, R. Marti, Automated breast ultrasound lesions detection using convolutional neural networks, IEEE J. Biomed. Heal. Inf. 22 (2018) 1218–1226, http://dx.doi.org/10.1109/JBHI.2017.2731873.
[50] Q. Huang, Y. Huang, Y. Luo, F. Yuan, X. Li, Segmentation of breast ultrasound image with semantic classification of superpixels, Med. Image Anal. 61 (2020), 101657, http://dx.doi.org/10.1016/j.media.2020.101657.
[51] G. Csurka, D. Larlus, F. Perronnin, F. Meylan, What is a good evaluation measure for semantic segmentation?, in: Proc. BMVC, 2013.
[52] M. Elawady, I. Sadek, A.E.R. Shabayek, G. Pons, S. Ganau, Automatic Nonlinear Filtering and Segmentation for Breast Ultrasound Images, Springer, Cham, 2016, pp. 206–213, http://dx.doi.org/10.1007/978-3-319-41501-7_24.
[53] I. Sadek, M. Elawady, V. Stefanovski, Automated Breast Lesion Segmentation in Ultrasound Images, ArXiv Prepr. ArXiv1609.08364, 2016.
[54] A. Vedaldi, S. Soatto, Quick shift and kernel methods for mode seeking, in: D. Forsyth, P. Torr, A. Zisserman (Eds.), Computer Vision – ECCV 2008, Springer Berlin Heidelberg, Berlin, Heidelberg, 2008, pp. 705–718, http://dx.doi.org/10.1007/978-3-540-88693-8_52.
[55] J. Shi, J. Malik, Normalized cuts and image segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 22 (2000) 888–905, http://dx.doi.org/10.1109/34.868688.
[56] O. Ronneberger, P. Fischer, T. Brox, U-net: convolutional networks for biomedical image segmentation, in: Lect. Notes Comput. Sci., Springer Verlag, 2015, pp. 234–241, http://dx.doi.org/10.1007/978-3-319-24574-4_28.
[57] K. Drukker, M.L. Giger, K. Horsch, M.A. Kupinski, C.J. Vyborny, E.B. Mendelson, Computerized lesion detection on breast ultrasound, Med. Phys. 29 (2002) 1438–1446.
[58] M.H. Yap, E.A. Edirisinghe, H.E. Bez, A novel algorithm for initial lesion detection in ultrasound breast images, J. Appl. Clin. Med. Phys. 9 (2008) 181–199.
[59] G. Pons, R. Martí, S. Ganau, M. Sentís, J. Martí, Feasibility study of lesion detection using deformable part models in breast ultrasound images, Iber. Conf. Pattern Recognit. Image Anal. (2013) 269–276.
