Hybrid Feature
Yao Yan1, Jun Gong1,2, Yangyang Liu 1
1. College of Information Science and Engineering, Northeastern University, Shenyang 110819
E-mail: 1165768426@qq.com
2. College of Information Science and Engineering, Northeastern University, Shenyang 110819
E-mail: gongjun@ise.neu.edu.cn
Abstract: Red lesion detection is key to controlling disease progression in the early stages of Diabetic Retinopathy (DR).
In this paper we propose a novel method for red lesion detection based on hybrid features, which consist of deep learned
features extracted via an improved LeNet architecture and hand-crafted features. A class-balanced cross-entropy loss in
the fully connected layer of the modified LeNet network is used to reduce the interference of unbalanced data types on
feature learning. Blood vessel segmentation based on the U-net convolutional network is applied to handle lesion
candidates overlapping with vessels during hand-crafted feature extraction. This ensemble vector of descriptors is then
used to identify true lesion candidates with a Random Forest classifier. The approach was evaluated on the public
DIARETDB1 dataset.
Key Words: Diabetic Retinopathy; Red Lesion; Deep Learning; Feature Fusion; Random Forest Classifier
978-1-7281-0106-4/19/$31.00 ©2019 IEEE
images at a lesion level is costly, tedious and time-consuming. Since training must rely on lesion-level annotated data, the performance of deep learning based methods is often limited. Here, we propose a novel deep learning detection method for red lesions to address this problem. In particular, we aim to learn a series of representative features using an improved LeNet architecture, complementary to the manually engineered features obtained by experimentation. This ensemble vector of descriptors is then used to identify true lesion candidates with a Random Forest classifier.
The rest of this paper is organized as follows. Section 2 describes the proposed red lesion detection method in detail. Section 3 presents comprehensive experimental results. Finally, Section 4 summarizes the research.

2 PROPOSED METHOD
In this section, we present the proposed method for the detection of red lesions.

Fig. 2 From left to right: (a) ROI extracted from I; (b) ROI after iterative dilation; (c) CLAHE result of (b); (d) histogram of (c).
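Fig. 2 shows the extracted ROI being contrast-enhanced with CLAHE. As a dependency-free illustration of the underlying idea, the sketch below performs plain (global) histogram equalization; CLAHE applies the same CDF remapping per tile with contrast clipping, so this is a simplified stand-in, not the exact preprocessing used in the paper.

```python
import numpy as np

def equalize_hist(gray):
    """Global histogram equalization of an 8-bit image: remap each gray
    level through the normalized cumulative histogram (CDF). CLAHE applies
    this same remapping per tile, with the histogram clipped first."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Build a lookup table that stretches the CDF to the full [0, 255] range
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]
```
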
The 31st Chinese Control and Decision Conference (2019 CCDC)
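The candidate extraction illustrated in Fig. 3 (described in the conclusion as threshold segmentation of multi-scale bottom-hat responses followed by connected-domain statistics) can be sketched as follows. This is a minimal illustration: the parameter names and values (`scales`, `thresh`, `min_area`) are assumptions for demonstration, not the paper's tuned settings.

```python
import numpy as np
from scipy import ndimage

def extract_candidates(gray, scales=(3, 6, 9), thresh=20, min_area=4):
    """Multi-scale bottom-hat candidate extraction: dark (red) lesions
    become bright peaks of the bottom-hat response; the per-scale binary
    maps BW_l are fused by union and small components are discarded."""
    fused = np.zeros(gray.shape, dtype=bool)
    for l in scales:
        # Bottom-hat: grayscale closing minus the original image
        closed = ndimage.grey_closing(gray, size=(l, l))
        bothat = closed.astype(int) - gray.astype(int)
        fused |= bothat > thresh  # this scale's BW_l, fused by union
    # Connected-domain statistics: keep components of at least min_area pixels
    labels, n = ndimage.label(fused)
    areas = ndimage.sum(fused, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(areas >= min_area) + 1)
    return keep
```
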
Table 1 Modified network structure parameters

  Layer    Parameters                                  Output size
  CONV     convolution kernel 5×5, padding = 'SAME'    8×8×64
  ReLU     —                                           8×8×64
  Pooling  maxpool, window size 3×3, stride = 2        4×4×64
  CONV     convolution kernel 4×4, padding = 'VALID'   1×1×128
  FC       Loss_β                                      2

Fig. 3 From left to right: (a) the binary map of candidates BW_l at l = 9; (b) the binary map of candidates BW_l at l = 30; (c) the final binary image BW fused from all BW_l; (d) result of candidate extraction.

2.3 Feature extraction
This paper improves the LeNet network architecture to capture candidate features, following the principle that the number of feature maps increases as their resolution declines. Table 1 shows the modified network structure parameters.
A training set S = {X_i, y_i}, i = 1, 2, …, n is dynamically constructed, where each sample X_i is a red-lesion-centric patch of 32×32 pixels and the label y_i ∈ {−1, 1} indicates a true or false red lesion. However, false samples outnumber true ones by approximately three to one. As a result, the classification model tends to predict the majority category and ignore the minority one, which ultimately degrades classifier performance. To avoid this, a class-balanced cross-entropy loss [17] is used:

Loss_β = −β Σ_{i∈Y+} log P(y_i | X_i; W) − (1 − β) Σ_{i∈Y−} log P(y_i | X_i; W)   (7)

where β = |Y−| / (|Y+| + |Y−|) is the proportion of false samples Y− among all samples, W denotes the weights, and P is the probability obtained in the ReLU layer.
In convolutional neural network training, the learning rate η determines how strongly each update step affects the current weights. To adjust it dynamically according to the loss rate, η is updated as:

η = η · γ^(ΔLoss < η_threshold)   (8)

where ΔLoss is the change rate of the loss function and η_threshold is a fixed threshold. When ΔLoss falls below η_threshold, the current learning rate is reduced by a factor of γ (the attenuation coefficient).
The study of Babenko et al. [18] verified that the highest accuracy was achieved when the fully connected layer output a 128-dimensional vector of deep learned features. Therefore, this paper takes the 128-dimensional vector as the final extracted deep feature.
As a complement to the deep features, hand-crafted shape-based and pixel-based features of red lesions are extracted. A U-net network is applied to segment retinal blood vessels, eliminating their effect on shape-based feature extraction when red lesion candidates overlap with vessels, as shown in Fig. 4. The area of the overlapping region is computed as a vascular feature and taken into account in the classification. In addition, because red lesions have a roughly circular shape, descriptors such as area, circumference, compactness, and eccentricity are extracted, as shown in Table 2.
Combining such mathematical features helps us build an optimal feature set. In this paper, we also extracted pixel-based features in different color channels, including the original color channels, enhancement channels, and other channels containing lesion information. The two categories of features are described in detail in Table 2.

Fig. 4 From left to right: (a) result of candidate extraction; (b) result of segmentation based on U-net; the green area is the red lesion candidate.

2.4 Candidate classification with random forest based on hybrid features
A Random Forest (RF) is a Bagging ensemble classifier based on decision trees with random feature selection. The classifier has two advantages: 1) it is robust against noise and overfitting when training on high-dimensional data; 2) it is especially suitable for cases where the sample data is small but the characteristic dimension is high.
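As a concrete illustration, the class-balanced cross-entropy loss of Eq. (7) can be sketched in NumPy as follows. This is a minimal stand-alone version for clarity; in the paper the loss is applied inside the modified LeNet's fully connected layer during training.

```python
import numpy as np

def class_balanced_ce(probs, labels):
    """Eq. (7): Loss_b = -b * sum_{i in Y+} log P_i - (1-b) * sum_{i in Y-} log P_i,
    with b = |Y-| / (|Y+| + |Y-|). Here probs[i] is the predicted probability
    of sample i's true class and labels[i] is in {-1, +1}; since false samples
    dominate, b > 0.5 and the minority (true) class is up-weighted."""
    pos = labels == 1
    neg = ~pos
    beta = neg.sum() / labels.size  # proportion of false samples
    return (-beta * np.log(probs[pos]).sum()
            - (1 - beta) * np.log(probs[neg]).sum())
```
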
Table 2 Summary of the hand-crafted features
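The shape descriptors of Table 2 can be illustrated with a small dependency-free sketch. The exact formulas below (an exposed-edge perimeter estimate and a moment-based eccentricity) are plausible stand-ins for demonstration, not necessarily the paper's precise definitions.

```python
import numpy as np

def shape_features(mask):
    """Shape descriptors for one candidate region given as a boolean mask:
    area, a perimeter estimate (pixel edges exposed to the background),
    compactness 4*pi*A/P^2 (near 1 for round blobs, smaller for elongated
    ones), and eccentricity from the coordinate covariance (0 for a circle)."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    # Perimeter: count pixel edges touching background in the 4 directions
    p = np.pad(np.asarray(mask, dtype=bool), 1)
    perim = sum(int(np.sum(p & ~np.roll(p, s, axis=a)))
                for a in (0, 1) for s in (1, -1))
    compact = 4 * np.pi * area / perim ** 2
    # Eccentricity from the eigenvalues of the coordinate covariance matrix
    evals = np.linalg.eigvalsh(np.cov(np.stack([xs, ys])))
    ecc = float(np.sqrt(max(0.0, 1 - evals[0] / evals[1]))) if evals[1] > 0 else 0.0
    return {"area": area, "perimeter": perim,
            "compactness": compact, "eccentricity": ecc}
```
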
A common misconception should be noted: it is not true that the more features are fed into the classifier, the better the result [19]. The m-dimensional feature subset randomly selected from a sample's feature vector M is the only parameter determining the classification capability of an RF, and it can be chosen according to the Out-of-Bag error (OOB error). In this paper, m = 300 and m = 340 were selected when constructing an RF with hand-crafted features and an RF with fused features, respectively.

3 EXPERIMENT
3.1 Dataset and Model selection
In this section, we conduct extensive experiments to validate and evaluate the effectiveness of our proposed red lesion detection method on the public retinal image database DIARETDB1 [8;10;14]. DIARETDB1 contains 89 color fundus images collected under various imaging environments [14]. 84 images show signs of mild or pre-proliferative DR, and the remaining 5 are normal. 28 images were used for training and 61 images for testing.
The accuracy of red lesion candidate detection is critical to subsequent classification. Candidate detection chiefly depends on three parameter settings: 1) L, the set of scales of the linear structuring elements; 2) K, the number of connected domains expected to be reserved; 3) x, the minimum area of retained lesions in pixels. A crossover experimental design was carried out to optimize these parameters. The value sets are L = {3, 6, 9, …, 60}, K = {60, 80, …, 200} and x = {3, 4, 5, 6, 7, 8, 9, 10}. The maximum scale of L was adapted on the remaining data sets, which allows recovering a set of candidates with a size proportional to the resolution of each image [14].

3.2 Evaluation criterion
There are two types of evaluation indexes for DR lesion detection performance: lesion-based and image-based. The former focuses on judging whether a candidate region is a DR lesion and whether the algorithm detects the correct number of DR lesions, which the latter ignores. Therefore, this paper focuses on lesion-based evaluation.
The result of red lesion candidate detection is of vital importance to the subsequent steps. To assess the candidate extraction method, Free-response ROC (FROC) curves were adopted to evaluate performance. The abscissa and ordinate of these plots are, respectively, the average number of false positive detections per image (FPI) and the per-lesion sensitivity [20]. Thus, FROC curves intuitively indicate how well the proposed method distinguishes true from false lesions. Meanwhile, the sensitivity (Se = TP / (TP + FN)) represents the accuracy of lesion detection by the algorithm; the larger the value, the better the detection method. In addition, the competition metric proposed in the Retinopathy Online Challenge [21] was also computed.

3.3 Red lesion detection results and analysis
Experiments showed no significant increase in per-lesion sensitivity once L reached a certain limit of 30. When K was greater than 160, per-lesion sensitivity increased by only 0.84%. x is set at 4 in order to ensure the maximum utilization rate of per-lesion sensitivity.
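The per-lesion metrics defined in Section 3.2 reduce to simple counting over scored candidates; a minimal FROC sketch follows, assuming candidate-to-lesion matching has already been performed upstream (each candidate is labeled true or false).

```python
import numpy as np

def froc_points(scores, is_true_lesion, n_lesions, n_images, thresholds):
    """FROC curve points: for each score threshold, the average number of
    false positives per image (FPI, abscissa) and the per-lesion
    sensitivity TP / n_lesions (ordinate)."""
    pts = []
    for t in thresholds:
        det = scores >= t
        tp = int(np.sum(det & is_true_lesion))   # detected true lesions
        fp = int(np.sum(det & ~is_true_lesion))  # false detections
        pts.append((fp / n_images, tp / n_lesions))
    return pts
```
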
To sum up, the parameters of the candidate region algorithm proposed in this paper were L = {3, 6, 9, …, 30}, K = 160 and x = 4.
Three groups of experiments were conducted for per-lesion evaluation of classification, as detailed in Table 3. Results obtained by Seoud et al. [22] are provided for comparison. Experiment 3 evaluated the ability of the model to process MAs and HEs simultaneously on multiple scales according to the same protocol as [22]. Hypothesis tests show a statistically significant improvement in the per-lesion sensitivity values when using the combined approach compared to using each representation separately, i.e., the deep learned probabilities and the hand-crafted features. FROC curves are used for comparison, as shown in Fig. 5, and Wilcoxon signed rank tests were performed to estimate the statistical significance of the differences in the per-lesion sensitivity values [14;22]. Compared with the recognition performance of an RF with deep learned features or hand-crafted features alone, the method with hybrid features proposed in this paper performs better. Compared with the hand-crafted features, the CPM of the proposed method is improved by 3.1% and the sensitivity (FPI = 1) by 2.7%. Compared with the deep feature method, the CPM is improved by 8.9% and the sensitivity (FPI = 1) by 11.8%.

Table 3 Results of RF classifier for various feature descriptors

  Exp. ID   Method                  CPM      Sensitivity (FPI=1)
  [22]      Proposed in [22]        35.40%   34.62%
  1         Deep learned features   39.33%   36.84%
  2         Hand-crafted features   45.09%   45.93%
  3         Hybrid feature          48.23%   48.71%

Fig. 5 Experiment 3: FROC curves.

4 CONCLUSION
This paper addresses the problem of finding an appropriate way to detect red lesions against the complex background of fundus images. One of the main contributions of this work is an unsupervised morphological method for candidate extraction: to overcome the drawback of traditional methods that detect only mono-size MAs and to improve extraction ability, candidate threshold segmentation based on multi-scale bottom-hat transformation and connected-domain statistics is put forward. Taking the expensive cost of lesion-level manual annotation of fundus images into account, the improved LeNet convolutional neural network described in Section 2 is a robust choice for handling imbalanced data types and enhancing performance over other deep learning based approaches. Deep features extracted by the convolutional neural network are highly complementary when fused with hand-crafted features, further strengthening the descriptive ability of the lesion features. Experiments on the DIARETDB1 data set empirically show that fusing the two sources of features yields significantly superior classification compared to any single-source feature.

REFERENCES
[1] Cho N H, Shaw J E, Karuranga S, et al. IDF Diabetes Atlas: Global estimates of diabetes prevalence for 2017 and projections for 2045[J]. Diabetes Research & Clinical Practice, 2018, 138:271.
[2] Klein R, Klein B E K, Moss S E, et al. The Wisconsin Epidemiologic Study of Diabetic Retinopathy: III. Prevalence and Risk of Diabetic Retinopathy When Age at Diagnosis Is 30 or More Years[J]. Ophthalmology, 2008, 115(11):1859-1868.
[3] Faust O, Acharya U R, Ng E Y, et al. Algorithms for the Automated Detection of Diabetic Retinopathy Using Digital Fundus Images: A Review[J]. Journal of Medical Systems, 2012, 36(1):145-157.
[4] Michael A, Ginneken B V, Meindert N, et al. Automatic detection of red lesions in digital color fundus photographs[J]. IEEE Transactions on Medical Imaging, 2009, 24(5):584-592.
[5] Lay B, Baudoin C, Klein J C. Automatic Detection of Microaneurysms in Retinopathy Fluoro-Angiogram[J]. Proceedings of SPIE - The International Society for Optical Engineering, 1984(6):165.
[6] Spencer T, Olson J A, Mchardy K C, et al. An Image-Processing Strategy for the Segmentation and Quantification of Microaneurysms in Fluorescein Angiograms of the Ocular Fundus[J]. Computers & Biomedical Research, 1996, 29(4):284-302.
[7] Biyani R S, Patre B M. Algorithms for red lesion detection in Diabetic Retinopathy: A review[J]. Biomedicine & Pharmacotherapy, 2018, 107(6):681-688.
[8] Roychowdhury S, Koozekanani D, Parhi K. DREAM: Diabetic Retinopathy Analysis Using Machine Learning[J]. IEEE Journal of Biomedical and Health Informatics, 2014, 18(5):1717-1728.
[9] Kawadiwale R B, Mane V M, Jadhav D V. Detection of red lesions in diabetic retinopathy affected fundus images[C]// IEEE International Advance Computing Conference. IEEE, 2015.
[10] Srivastava R, Duan L, Wong D W K, et al. Detecting retinal microaneurysms and hemorrhages with robustness to the presence of blood vessels[J]. Computer Methods and Programs in Biomedicine, 2017, 138:83-91.
[11] Mohammad N, Omar Z, Supriyanto E, et al. Automated detection of microaneurysm for fundus images[C]// IEEE Conference on Biomedical Engineering and Sciences. IEEE, 2017.
[12] Chudzik P, Majumdar S, Calivá F, et al. Microaneurysm detection using fully convolutional neural networks[J]. Computer Methods and Programs in Biomedicine, 2018, 158:185-192.
[13] Dai L, Sheng B, Wu Q, et al. Retinal Microaneurysm Detection Using Clinical Report Guided Multi-sieving CNN[C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2017.
[14] Orlando J I, Prokofyeva E, del Fresno M, et al. An Ensemble Deep Learning Based Approach for Red Lesion Detection in Fundus Images[J]. Computer Methods and Programs in Biomedicine, 2018, 153:115-127.
[15] Soares J V B, Leandro J J G, Cesar R M, et al. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification[J]. IEEE Transactions on Medical Imaging, 2006, 25(9):1214-1222.
[16] Stark J A. Adaptive image contrast enhancement using generalizations of histogram equalization[J]. IEEE Transactions on Image Processing, 2000, 9(5):889-896.
[17] Maninis K K, Pont-Tuset J, Arbeláez P, et al. Deep Retinal Image Understanding[C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2016:140-148.
[18] Babenko A, Slesarev A, Chigorin A, et al. Neural codes for image retrieval[C]// European Conference on Computer Vision. Springer, Cham, 2014:584-599.
[19] Kumar G K, Viswanath P, Rao A A. Ensemble of randomized soft decision trees for robust classification[J]. Sādhanā, 2016, 41(3):273-282.
[20] Roychowdhury S, Koozekanani D D, Parhi K K. Screening fundus images for diabetic retinopathy[C]// Conference on Signals, Systems and Computers. IEEE, 2013.
[21] Niemeijer M, van Ginneken B, Mizutani, et al. Retinopathy Online Challenge: Automatic Detection of Microaneurysms in Digital Color Fundus Photographs[J]. IEEE Transactions on Medical Imaging, 2010, 29(1):185-195.
[22] Seoud L, Hurtut T, Chelbi J, et al. Red lesion detection using dynamic shape features for diabetic retinopathy screening[J]. IEEE Transactions on Medical Imaging, 2016, 35(4):1116-1126.