
A Novel Deep Learning Method for Red Lesions Detection Using

Hybrid Feature
Yao Yan1, Jun Gong1,2, Yangyang Liu1
1. College of Information Science and Engineering, Northeastern University, Shenyang 110819
E-mail: 1165768426@qq.com
2. College of Information Science and Engineering, Northeastern University, Shenyang 110819
E-mail: gongjun@ise.neu.edu.cn

This work is supported by the National Nature Science Fund of Northeastern University under No. 61773104.

Abstract: Red lesion detection is key to controlling disease progression in the early stages of Diabetic Retinopathy (DR). In this paper we propose a novel method for red lesion detection based on hybrid features, which consist of deep features extracted via an improved LeNet architecture and hand-crafted features. A class-balanced cross-entropy loss in the fully connected layer of the modified LeNet network is used to reduce the interference of unbalanced data types on feature learning. Blood vessel segmentation based on the U-net convolutional network is applied to handle lesion candidates overlapping with vessels during hand-crafted feature extraction. The resulting ensemble vector of descriptors is then used to identify true lesion candidates with a Random Forest classifier. The approach was evaluated on the public DIARETDB1 dataset.
Key Words: Diabetic Retinopathy; Red Lesion; Deep Learning; Feature Fusion; Random Forest Classifier

1 INTRODUCTION
The prevalence of diabetes in adults aged 18–99 years was estimated at 8.4% in 2017 and is predicted to rise to 9.9% in 2045 [1]. Almost one in three people with diabetes shows evidence of the eye disease called Diabetic Retinopathy (DR). There is an increasingly urgent need to ensure appropriate access to treatment for all people living with diabetes to decrease the risk of vision loss [2]. However, DR may only be recognized when the patient's condition has deteriorated to a level where treatment is complicated and nearly impossible [2]. Therefore, early detection through regular screening plays an especially important role in controlling disease progression [3].
To reduce the cost of such screenings, using state-of-the-art digital image processing technology to automate the detection of abnormalities in retinal images is a primary choice. Compared to other kinds of DR lesions, red lesions (including microaneurysms (MAs) and intraretinal hemorrhages (HEs)), as shown in Fig.1, are the earliest unequivocal symptom [3;4] and are complicated to detect. Thus, this paper primarily focuses on the detection of red lesions in fundus images.

Fig.1 A fundus image with red lesions

The earliest paper related to computer-aided screening for MA detection was published by Baudoin [5], tracing back to 1983. Later, Spencer et al. [6] first applied the bilinear top-hat transformation, a morphological operation, to segment MA candidates, followed by pixel classification with a KNN classifier to extract lesion candidates.
Feature extraction followed by classification is the most widely studied approach in the screening of red lesions [7]. Comparing the classification ability of different classifiers (GMM, KNN, SVM, and AdaBoost) for red lesions, [8] found the GMM and KNN classifiers to be the best for bright and red lesion classification, respectively. The paper published by Kawadiwale et al. [9] employs a modified matched-filtering methodology to detect candidate lesions, followed by a Support Vector Machine (SVM) classifier. Novel filters were applied by Srivastava [10] to suppress false detections on blood vessels, reducing their interference with candidate detection. The method proposed by Mohammad [11] applied a Gaussian matched filter followed by bottom-hat filtering with gamma correction on the preprocessed image to segment blood vessels and MAs, followed by a thresholding operation.
In recent years, deep learning has become a trend in medical image processing. Chudzik et al. [12], Dai et al. [13], and Orlando et al. [14] have proposed deep learning methods in this domain, and several other efforts are ongoing. Chudzik et al. [12] applied a fully convolutional neural network to detect MAs on three standard databases. Potential MAs were detected via a multi-sieving convolutional neural network in [13]; the algorithm applied image-to-text mapping to build an efficient MA detection framework. A Random Forest classifier with hybrid features was trained to classify lesion candidates in [14], where the deep features were learned by Convolutional Neural Networks (CNN).
However, labeling images at a lesion level is costly, tedious, and time-consuming. Given that training must rely on lesion-level annotated data, the performance of deep learning based methods is often limited. Here, we propose a novel deep learning detection method for red lesions to deal with this problem. In particular, we aim to learn a series of representative features using an improved LeNet architecture, which are complementary to the manually engineered features obtained by experimentation. This ensemble vector of descriptors is then used to identify true lesion candidates with a Random Forest classifier.
The rest of this paper is organized as follows. Section 2 describes the proposed method for red lesion detection in detail. In Section 3, comprehensive experimental results are presented. Finally, Section 4 summarizes the research.

2 PROPOSED METHOD
In this section, we present the proposed method for the detection of red lesions.

2.1 Preprocessing
Visual analysis of RGB retinal images shows that the green channel presents the best contrast between fundus details (red lesions, optic disc, macula, fundus blood vessels, and so on) and the background, with the lowest noise of the three channels [11;14]. Therefore, the green channel $I_{green}$ taken from the original color image $I$ is adopted for processing.
In order to eliminate the influence of the non-retinal region on subsequent processing, threshold segmentation is applied to extract the region of interest (ROI, Fig.2 (a)):

$$mask = \begin{cases} black, & P < threshold \\ white, & P \ge threshold \end{cases} \quad (1)$$

where $P$ is the intensity of each pixel in the image $I_{green}$. Then, iterative dilation is used to expand the area of the ROI (Fig.2 (b)) to avoid the impact of the camera aperture on image filter transformations in retinal images [15].
Some potential lesions may hide in the darkest areas due to uneven illumination. Therefore, the contrast limited adaptive histogram equalization (CLAHE) [16] model is adopted to make the hidden lesions much more visible; an example is shown in Fig.2 (c).

Fig.2 From left to right: (a) ROI extracted from $I_{green}$; (b) ROI after iterative dilation; (c) CLAHE result of (b); (d) the histogram of (c).
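For concreteness, the following is a minimal sketch of this preprocessing pipeline using OpenCV. The threshold value, structuring element size, number of dilation iterations, and CLAHE settings are illustrative assumptions rather than the authors' exact parameters:

```python
import cv2
import numpy as np

def preprocess(bgr_image, threshold=30, dilate_iters=5):
    """Green-channel extraction, ROI masking, and CLAHE enhancement."""
    green = bgr_image[:, :, 1]                      # green channel of I

    # Eq. (1): binary ROI mask separating the retina from the background
    _, mask = cv2.threshold(green, threshold, 255, cv2.THRESH_BINARY)

    # Iterative dilation to push the ROI border past the camera aperture
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.dilate(mask, kernel, iterations=dilate_iters)

    # CLAHE to reveal lesions hidden in dark, unevenly lit regions
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    return np.where(mask > 0, enhanced, 0).astype(np.uint8), mask
```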
2.2 Candidate detection
Since semi-supervised and supervised learning methods aggravate the annotation burden to a certain extent, an unsupervised morphological method is applied for candidate detection.
Given a preprocessed image $I$, a Gaussian filter with parameter $\sigma = 5$ is used to reduce noise, generating a new image $I'$. Introducing linear structuring elements of length $l \in L$ into the traditional bot-hat transformation, which is aimed at highlighting dark objects against a bright background, the image $I_{bot\text{-}hat}$ is computed by:

$$I_{bot\text{-}hat} = (I' \bullet l) - I' \quad (2)$$

$$I' \bullet l = (I' \oplus l) \ominus l \quad (3)$$

where $\oplus$ and $\ominus$ respectively denote morphological dilation and erosion, so that $\bullet$ is the morphological closing. The set of relevant scales $L$ is a fixed parameter intended to automatically adapt to different lesion sizes, as explained in Section 3.1. $I_{min}^{(l)}$, obtained by taking the minimum response over the closings, retains the lesions with sizes longer than $l$. Thus, the map of remaining structures is obtained by:

$$I_{cand}^{(l)} = I_{min}^{(l)} - I' = \min(I' \bullet L) - I' \quad (4)$$

Next, a thresholding operation is applied on $I_{cand}^{(l)}$. The threshold setting is closely related to the number of candidate regions we expect. To set the appropriate threshold $t_k$, the maximum number of red lesions a fundus image may contain is assumed to be $K$. On condition that the number of connected domains is less than or equal to $K$, pixels whose intensity is less than the threshold $t_s$ are set to zero. To deal with the cases where there is no lesion, or where fewer than $K$ candidates cannot be detected, an upper bound $t_u$ and a lower bound $t_l$ are set such that:

$$t_k = \begin{cases} t_l, & \forall t_s : CC\big(I_{cand}^{(l)} > t_s\big) < K \\ t_s, & CC\big(I_{cand}^{(l)} > t_s\big) \le K \\ t_u, & \forall t_s : CC\big(I_{cand}^{(l)} > t_s\big) > K \end{cases} \quad (5)$$

where $CC$ is a function that computes the number of connected domains. Once $t_k$ is fixed, a binary map of candidates is obtained by thresholding, $BW_l = I_{cand}^{(l)} > t_k$ [14]. A candidate binary image can be obtained for each $l$, and the final binary image of all candidate regions at different scales is calculated by:

$$BW = \bigcup_{l \in L} BW_l \quad (6)$$

Fig.3 (a) and (b) show the binary maps of candidates $BW_l$ at two different scales. Because some small candidate regions in $BW$ are associated not with any pathological area but with noise, connected structures smaller than $px$ pixels are discarded, as shown in Fig.3 (d).

Fig.3 From left to right: (a) the binary map of candidates $BW_l$ at $l = 9$; (b) the binary map of candidates $BW_l$ at $l = 30$; (c) the final binary image $BW$ fused from all $BW_l$; (d) result of candidate extraction.
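A sketch of the candidate detector of Eqs. (2)-(6) is given below. It assumes the linear closings are taken over a set of orientations (the paper does not state the angular sampling) and that the threshold search of Eq. (5) simply raises $t_s$ within illustrative bounds until at most $K$ connected components survive:

```python
import numpy as np
import cv2
from scipy import ndimage

def linear_kernel(length, angle_deg):
    """Linear structuring element of the given length and orientation."""
    size = length if length % 2 else length + 1
    k = np.zeros((size, size), np.uint8)
    c = size // 2
    t = np.deg2rad(angle_deg)
    dx, dy = int(round(c * np.cos(t))), int(round(c * np.sin(t)))
    cv2.line(k, (c - dx, c - dy), (c + dx, c + dy), 1)
    return k

def candidates_at_scale(I_blur, length, K, t_l, t_u):
    # Eqs. (2)-(4): min over oriented closings, minus the smoothed image
    closed = np.min([cv2.morphologyEx(I_blur, cv2.MORPH_CLOSE,
                                      linear_kernel(length, a))
                     for a in range(0, 180, 15)], axis=0)
    I_cand = closed.astype(np.int16) - I_blur.astype(np.int16)

    # Eq. (5): raise t_s within [t_l, t_u] until CC(I_cand > t_s) <= K
    t_s = t_l
    while t_s < t_u and ndimage.label(I_cand > t_s)[1] > K:
        t_s += 1
    return I_cand > t_s                                  # BW_l

def detect_candidates(I_pre, L=range(3, 33, 3), K=160, px=4, t_l=1, t_u=50):
    I_blur = cv2.GaussianBlur(I_pre, (0, 0), sigmaX=5)   # sigma = 5
    BW = np.zeros(I_pre.shape, bool)
    for length in L:                                     # Eq. (6): union
        BW |= candidates_at_scale(I_blur, length, K, t_l, t_u)
    labels, n = ndimage.label(BW)                        # drop specks < px
    sizes = ndimage.sum(BW, labels, range(1, n + 1))
    return np.isin(labels, np.where(sizes >= px)[0] + 1)
```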

2.3 Feature extraction
This paper improves the LeNet network architecture to capture the candidates' features, following the principle that the number of output feature maps increases as their resolution declines. Table 1 shows the modified network structure parameters.

Table 1 Modified network structure parameters

Type | Description | Output size
Input | image, size 32×32×3 | 32×32×3
CONV | convolution kernel 5×5, padding = 'SAME' | 32×32×32
ReLU | | 32×32×32
Pooling | maxpool, window size 3×3, stride = 2 | 16×16×32
CONV | convolution kernel 5×5, padding = 'SAME' | 16×16×32
ReLU | | 16×16×32
Pooling | maxpool, window size 3×3, stride = 2 | 8×8×32
CONV | convolution kernel 5×5, padding = 'SAME' | 8×8×64
ReLU | | 8×8×64
Pooling | maxpool, window size 3×3, stride = 2 | 4×4×64
CONV | convolution kernel 4×4, padding = 'VALID' | 1×1×128
FC | $Loss_\beta$ | 2
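To make Table 1 concrete, a PyTorch sketch of the modified LeNet is shown below; the padding arithmetic reproduces the 'SAME'/'VALID' behaviour of the table, while everything not listed in the table (e.g. weight initialization) is left at framework defaults:

```python
import torch
import torch.nn as nn

class ModifiedLeNet(nn.Module):
    """Modified LeNet from Table 1: three conv/ReLU/maxpool stages, a 4x4
    'VALID' convolution producing the 128-D deep feature, and a 2-way
    fully connected output trained with the class-balanced loss."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(),   # 32x32x32
            nn.MaxPool2d(3, stride=2, padding=1),        # 16x16x32
            nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),  # 16x16x32
            nn.MaxPool2d(3, stride=2, padding=1),        # 8x8x32
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(),  # 8x8x64
            nn.MaxPool2d(3, stride=2, padding=1),        # 4x4x64
            nn.Conv2d(64, 128, 4),                       # 1x1x128 ('VALID')
        )
        self.fc = nn.Linear(128, 2)                      # FC scored by Loss_beta

    def forward(self, x):                                # x: (N, 3, 32, 32)
        deep = self.features(x).flatten(1)               # 128-D descriptor
        return self.fc(deep), deep
```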
A training set $S = \{X_i, y_i\}$, $i = 1, 2, \dots, n$ is dynamically constructed, where each sample $X_i$ is a red-lesion-centric patch of $32 \times 32$ pixels and the label $y_i \in \{-1, 1\}$ indicates a false or true red lesion. However, false samples outnumber true ones by approximately three to one. As a result, the classification model tends to predict the majority category and ignore the minority category, which ultimately hurts classifier performance. To avoid this, a class-balanced cross-entropy loss [17] is used:

$$Loss_\beta = -\beta \sum_{i \in Y_+} \log P(y_i | X_i; W) - (1 - \beta) \sum_{i \in Y_-} \log P(y_i | X_i; W) \quad (7)$$

where $\beta = |Y_-| / (|Y_+| + |Y_-|)$ is the proportion of false samples $Y_-$ among all samples, $W$ are the network weights, and $P$ is the probability produced by the network's final layer.
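A possible implementation of the class-balanced loss of Eq. (7) is sketched below; here $\beta$ is estimated per mini-batch, which is an assumption (it could equally be fixed once from the whole training set), and the labels $\{-1, 1\}$ are assumed to be remapped to class indices $\{0, 1\}$:

```python
import torch
import torch.nn.functional as F

def class_balanced_ce(logits, targets):
    """Eq. (7): weight positive samples by beta = |Y-|/(|Y+| + |Y-|)
    and negative samples by (1 - beta). `targets` holds class indices,
    with the paper's label -1 remapped to 0."""
    pos = targets == 1
    beta = (~pos).float().mean()              # proportion of false samples
    log_p = F.log_softmax(logits, dim=1)
    nll = F.nll_loss(log_p, targets, reduction='none')  # -log P(y_i|X_i; W)
    weights = beta * pos.float() + (1.0 - beta) * (~pos).float()
    return (weights * nll).sum()
```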
During convolutional neural network training, the learning rate $\eta$ determines how strongly each update step changes the current weights. To realize dynamic adjustment based on the loss, $\eta$ is given by:

$$\eta = \eta \cdot \gamma^{[\Delta Loss < \eta\_threshold]} \quad (8)$$

where $\Delta Loss$ is the change rate of the loss function and $\eta\_threshold$ is a fixed threshold. When $\Delta Loss$ falls below $\eta\_threshold$, the current learning rate is multiplied by the attenuation coefficient $\gamma$.
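Eq. (8) amounts to a conditional step decay; a minimal sketch, with illustrative values for $\gamma$ and the threshold, is:

```python
def adjust_learning_rate(lr, delta_loss, gamma=0.1, eta_threshold=1e-3):
    """Eq. (8): multiply the learning rate by the attenuation coefficient
    gamma whenever the change rate of the loss drops below a fixed
    threshold; gamma and eta_threshold here are illustrative values."""
    return lr * gamma if delta_loss < eta_threshold else lr

# e.g. inside the training loop:
# lr = adjust_learning_rate(lr, abs(prev_loss - loss) / prev_loss)
```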
The study of Babenko et al. [18] verified that the highest accuracy was achieved when the fully connected layer outputs a 128-dimensional vector of deep learned features. Therefore, this paper takes the 128-dimensional vector as the final deep feature.
As a complement to the deep features, hand-crafted shape based and pixel based features of red lesions are captured. A U-net network is applied to segment the retinal blood vessels, in order to eliminate effects on shape based feature extraction caused by red lesion candidates overlapping with vessels, as shown in Fig.4. The area of the overlapping region is computed as the vascular feature and is taken into consideration in the classification. Besides, area, circumference, compactness, eccentricity and so on are extracted, because of the red lesions' roughly circular shape, as shown in Table 2.

Fig.4 From left to right: (a) result of candidate extraction; (b) the result of segmentation based on U-net; the green areas are the red lesion candidates.

Combining these mathematical features helps us build an optimal feature set. In this paper, we extracted pixel based features in different color channels, including the original color channels, enhancement channels, and other channels containing lesion information. The two categories of features are described in detail in Table 2.

2.4 Candidate classification with random forest based on hybrid features
A Random Forest (RF) is a Bagging ensemble classifier based on decision trees with random feature selection. The classifier comes with two advantages: 1) it is robust against noise and overfitting when training on high-dimensional data; 2) it is especially suitable for cases where the sample data is small but the feature dimension is high.

Table 2 Summary of the hand-crafted features

Shape based features | calculation formula
$Area$ of each red lesion candidate | $A = \sum_{k \in C} 1$, where $C$ is the set of pixels of the candidate
$Perimeter$ is the boundary of each candidate | $P = \sum_{k \in B} 1$, where $B$ is the set of pixels of the candidate's boundary
$Circularity$ | $Circularity = 4\pi \cdot Area / Perimeter^2$
$L$ is the major axis length of each red lesion |
$W$ is the minor axis length of each red lesion |
$AspectRatio$ | $AspectRatio = L / W$
$Compactness$ | $Compactness = \sqrt{\sum_{i=1}^{n} (d_i - \tilde d)^2 / n}$, where $d_i$ is the distance from the $i$th pixel on the boundary to the center, and $\tilde d$ is the average of all the distances
$Eccentricity$ of each red lesion | $Eccentricity = c / a$
$VesselRatio$ is the ratio of the lesion candidate area coinciding with the blood vessels | $VesselRatio = num(PixCand \cap PixVessel) / num(PixCand)$

Pixel based features | calculation formula (taking the color channel after CLAHE as an example)
$Mean$ intensity of the candidate | $Mean(C) = \sum_{i \in cand} C(i) / s$, where $s = \sum_{i \in cand} 1$ is the area of the candidate
$Sum$ intensity of the candidate | $Sum(C) = \sum_{i \in cand} C(i)$
$Std$ is the standard deviation of the candidate | $Std(C) = \sqrt{\sum_{i \in cand} (C(i) - Mean(C))^2 / s}$
$Contrast$ | $Contrast(C) = \sum_{i \in cand} C(i) / s - \sum_{i \in dilatecand} C(i) / s$, where $dilatecand$ is the candidate region after dilation
$NS$ is the sum of normalized intensity | $NS(C) = (Sum(C) - Mean(I_{bg})) / Std(I_{bg})$
$NM$ is the mean of normalized intensity | $NM(C) = (Mean(C) - Mean(I_{bg})) / Std(I_{bg})$
$Min$ intensity value of the candidate |
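The following sketch computes a few of the Table 2 descriptors with scikit-image; the function name and the choice of enhanced channel are assumptions:

```python
import numpy as np
from skimage.measure import label, regionprops

def candidate_features(cand_mask, vessel_mask, channel):
    """A few of the Table 2 descriptors for a single candidate region;
    `channel` is an enhanced color channel (e.g. the CLAHE image)."""
    props = regionprops(label(cand_mask.astype(np.uint8)))[0]
    area, perim = props.area, props.perimeter
    return {
        'Area': area,
        'Perimeter': perim,
        'Circularity': 4 * np.pi * area / perim ** 2,
        'AspectRatio': props.major_axis_length / props.minor_axis_length,
        'Eccentricity': props.eccentricity,
        # fraction of candidate pixels coinciding with the vessel map
        'VesselRatio': (cand_mask & vessel_mask).sum() / cand_mask.sum(),
        'Mean': channel[cand_mask].mean(),
        'Sum': channel[cand_mask].sum(),
        'Std': channel[cand_mask].std(),
    }
```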

A common misconception should be noted: it is not true that the more features are fed into the classifier, the better the result [19]. The $m$-dimensional feature subset randomly selected from a sample's feature vector $M$ is the only parameter determining the classification capability of an RF, and it can be chosen according to the Out-of-Bag error (OOB error). In this paper, $m = 300$ and $m = 340$ were selected when constructing the RF with hand-crafted features and the RF with fused features, respectively.
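With scikit-learn, the number of features drawn at each split corresponds to max_features, and the OOB error is exposed directly, so the selection of $m$ described above could be sketched as follows (the tree count and the candidate grid are assumptions):

```python
from sklearn.ensemble import RandomForestClassifier

def fit_rf(X, y, m):
    """Random Forest over the hybrid descriptors; m features are drawn
    at each split and the OOB error guides the choice of m."""
    rf = RandomForestClassifier(n_estimators=500, max_features=m,
                                oob_score=True, n_jobs=-1, random_state=0)
    rf.fit(X, y)
    return rf, 1.0 - rf.oob_score_                # OOB error

# Choose m by OOB error over an illustrative grid:
# best_m = min((fit_rf(X, y, m)[1], m) for m in (100, 200, 300, 340))[1]
```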
3 EXPERIMENT

3.1 Dataset and Model selection
In this section, we conduct extensive experiments to validate and evaluate the effectiveness of the proposed red lesion detection method on the public retinal image database DIARETDB1 [8;10;14]. DIARETDB1 contains 89 color fundus images collected under various imaging environments [14]. 84 images contain signs of mild or pre-proliferative DR, and the remaining 5 are normal. 28 images were used for training and 61 images for testing.
The accuracy of red lesion candidate detection is critical to the subsequent classification. Candidate detection chiefly depends on three parameter settings: 1) $L$, the set of linear structuring elements of different scales; 2) $K$, the number of connected domains expected to be reserved; 3) $px$, the minimum area in pixels of retained lesions. A crossover experimental design was carried out to optimize these parameters. The sets of values are $L = \{3, 6, 9, \dots, 60\}$, $K = \{60, 80, \dots, 200\}$ and $px = \{3, 4, 5, 6, 7, 8, 9, 10\}$. The maximum scale from $L$ was adapted on the remaining datasets, which allows recovering a set of candidates with a size proportional to the resolution of each image [14].

3.2 Evaluation criterion
There are two kinds of evaluation indexes for the detection performance of DR lesion features: lesion based and image based. The former focuses on judging whether a candidate region is a DR lesion and whether the algorithm can detect the number of DR lesions, which the latter pays no attention to. Therefore, the lesion based evaluation index is what this paper focuses on.
The result of red lesion candidate detection is of vital importance to the subsequent steps. In order to assess the method for extracting candidate regions, Free-response ROC (FROC) curves were adopted to evaluate performance. The abscissa and ordinate of these plots are respectively the average number of false positive detections per image (FPI) and the per lesion sensitivity [20]. Thus, the FROC curves intuitively indicate how well the proposed method distinguishes true from false lesions. Meanwhile, the sensitivity ($Se = TP/(TP + FN)$) represents the accuracy of detecting lesion features by the algorithm; the larger the value, the better the detection method. In addition, the competition performance metric (CPM) as proposed in the Retinopathy Online Challenge [21] was also computed.
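As a sketch of the per lesion evaluation, one FROC operating point can be computed as below; the rule used to decide whether a detected candidate hits a ground-truth lesion (e.g. pixel overlap with the annotation) is an assumption:

```python
import numpy as np

def froc_point(tp_flags, lesion_counts):
    """One FROC operating point. `tp_flags[i]` is a boolean array marking
    which detections in image i hit a true lesion; `lesion_counts[i]` is
    the number of annotated lesions in image i."""
    tp = sum(int(f.sum()) for f in tp_flags)
    fp = sum(len(f) - int(f.sum()) for f in tp_flags)
    se = tp / sum(lesion_counts)        # per lesion sensitivity TP/(TP+FN)
    fpi = fp / len(tp_flags)            # average false positives per image
    return fpi, se
```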
3.3 Red lesion detection results and analysis
Experiments showed no significant increase in per lesion sensitivity once $L$ reached a limit of 30. When $K$ was greater than 160, per lesion sensitivity increased by only 0.84%. $px$ was set to 4 in order to retain the maximum per lesion sensitivity.
To sum up, the parameters of the candidate region algorithm proposed in this paper were $L = \{3, 6, 9, \dots, 30\}$, $K = 160$ and $px = 4$.
Three groups of experiments were conducted for the per lesion evaluation of classification, as detailed in Table 3. Results obtained by Seoud et al. [22] are provided for comparison. Experiment 3 evaluated the ability of the model to process MAs and HEs simultaneously at multiple scales according to the same protocol as [22]. Hypothesis tests show a statistically significant improvement in the per lesion sensitivity values when using the combined approach compared to using each representation separately (the deep learned probabilities and the hand-crafted features, respectively). FROC curves are used for comparison, as shown in Fig.5, and Wilcoxon signed rank tests were performed to estimate the statistical significance of the differences in the per lesion sensitivity values [14;22]. Compared with the recognition performance of the RF with deep learned features or with hand-crafted features alone, the method with hybrid features proposed in this paper performs better. Compared with the hand-crafted features, the CPM of the proposed method improves by 3.1% and the sensitivity (FPI=1) by 2.7%. Compared with the deep feature method, the CPM improves by 8.9% and the sensitivity (FPI=1) by 11.8%.

Table 3 Results of RF classifier for various feature descriptors

Exp.ID | Method | CPM | Sensitivity (FPI=1)
— | Method proposed in literature [22] | 35.40% | 34.62%
1 | Deep learned features | 39.33% | 36.84%
2 | Hand-crafted features | 45.09% | 45.93%
3 | Hybrid feature | 48.23% | 48.71%

Fig.5 Experiment 3: FROC curves.
4 CONCLUSION
This paper addresses the problem of finding an appropriate way to detect red lesions against the complex background of fundus images. One of the main contributions of this paper is an unsupervised morphological method for candidate extraction: to overcome the drawback of traditional methods that detect only mono-size MAs and to improve the extraction ability, a candidate threshold segmentation based on the multi-scale bot-hat transformation and connected domain statistics is put forward. Taking the expensive cost of manual annotation at the lesion level in fundus images into account, the improved LeNet convolutional neural network described in Section 2 is a robust choice to address the imbalance of data types and to enhance performance relative to other deep learning based approaches. The deep features extracted by the convolutional neural network are highly complementary when fused with the hand-crafted features, further strengthening the description ability of the lesion features. Experiments on the DIARETDB1 dataset empirically show that fusing the two sources of features provides significantly superior classification compared to any single-source feature.

REFERENCES
[1] Cho N H, Shaw J E, Karuranga S, et al. IDF Diabetes Atlas: Global estimates of diabetes prevalence for 2017 and projections for 2045[J]. Diabetes Research & Clinical Practice, 2018, 138: 271.
[2] Klein R, Klein B E K, Moss S E, et al. The Wisconsin Epidemiologic Study of Diabetic Retinopathy: III. Prevalence and Risk of Diabetic Retinopathy When Age at Diagnosis Is 30 or More Years[J]. Ophthalmology, 2008, 115(11): 1859-1868.
[3] Faust O, Acharya U R, Ng E Y, et al. Algorithms for the Automated Detection of Diabetic Retinopathy Using Digital Fundus Images: A Review[J]. Journal of Medical Systems, 2012, 36(1): 145-157.
[4] Michael A, Ginneken B V, Meindert N, et al. Automatic detection of red lesions in digital color fundus photographs[J]. IEEE Transactions on Medical Imaging, 2009, 24(5): 584-592.
[5] Lay B, Baudoin C, Klein J C. Automatic Detection of Microaneurysms in Retinopathy Fluoro-Angiogram[J]. Proceedings of SPIE - The International Society for Optical Engineering, 1984(6): 165.
[6] Spencer T, Olson J A, McHardy K C, et al. An Image-Processing Strategy for the Segmentation and Quantification of Microaneurysms in Fluorescein Angiograms of the Ocular Fundus[J]. Computers & Biomedical Research, 1996, 29(4): 284-302.
[7] Biyani R S, Patre B M. Algorithms for red lesion detection in Diabetic Retinopathy: A review[J]. Biomedicine & Pharmacotherapy, 2018, 107(6): 681-688.
[8] Roychowdhury S, Koozekanani D, Parhi K. DREAM: Diabetic Retinopathy Analysis using Machine Learning[J]. IEEE Journal of Biomedical and Health Informatics, 2014, 18(5): 1717-1728.
[9] Kawadiwale R B, Mane V M, Jadhav D D V. Detection of red lesions in diabetic retinopathy affected fundus images[C]// IEEE International Advance Computing Conference, 2015.
[10] Srivastava R, Duan L, Wong D W K, et al. Detecting retinal microaneurysms and hemorrhages with robustness to the presence of blood vessels[J]. Computer Methods and Programs in Biomedicine, 2017, 138: 83-91.
[11] Mohammad N, Omar Z, Supriyanto E, et al. Automated detection of microaneurysm for fundus images[C]// IEEE International Conference on Biomedical Engineering and Sciences, 2017.
[12] Chudzik P, Majumdar S, Calivá F, et al. Microaneurysm detection using fully convolutional neural networks[J]. Computer Methods and Programs in Biomedicine, 2018, 158: 185-192.
[13] Dai L, Sheng B, Wu Q, et al. Retinal Microaneurysm Detection Using Clinical Report Guided Multi-sieving CNN[C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2017.
[14] Orlando J I, Prokofyeva E, del Fresno M, et al. An Ensemble Deep Learning Based Approach for Red Lesion Detection in Fundus Images[J]. Computer Methods and Programs in Biomedicine, 2018, 153: 115-127.
[15] Soares J V B, Leandro J J G, Cesar R M, et al. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification[J]. IEEE Transactions on Medical Imaging, 2006, 25(9): 1214-1222.
[16] Stark J A. Adaptive image contrast enhancement using generalizations of histogram equalization[J]. IEEE Transactions on Image Processing, 2000, 9(5): 889-896.
[17] Maninis K K, Pont-Tuset J, Arbeláez P, et al. Deep Retinal Image Understanding[C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2016: 140-148.
[18] Babenko A, Slesarev A, Chigorin A, et al. Neural codes for image retrieval[C]// European Conference on Computer Vision. Springer, Cham, 2014: 584-599.
[19] Kumar G K, Viswanath P, Rao A A. Ensemble of randomized soft decision trees for robust classification[J]. Sādhanā, 2016, 41(3): 273-282.
[20] Roychowdhury S, Koozekanani D D, Parhi K K. Screening fundus images for diabetic retinopathy[C]// Asilomar Conference on Signals, Systems and Computers. IEEE, 2013.
[21] Niemeijer M, van Ginneken B, Mizutani, et al. Retinopathy Online Challenge: Automatic Detection of Microaneurysms in Digital Color Fundus Photographs[J]. IEEE Transactions on Medical Imaging, 2010, 29(1): 185-195.
[22] Seoud L, Hurtut T, Chelbi J, et al. Red lesion detection using dynamic shape features for diabetic retinopathy screening[J]. IEEE Transactions on Medical Imaging, 2016, 35(4): 1116-1126.

