
Zou BJ, Chen Y, Zhu CZ et al. Supervised vessels classification based on feature selection. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 32(6): 1222–1230 Nov. 2017. DOI 10.1007/s11390-017-1796-x

Supervised Vessels Classification Based on Feature Selection

Bei-Ji Zou1,2, Member, CCF, Yao Chen1,2, Cheng-Zhang Zhu2,3,∗, Member, CCF,
Zai-Liang Chen1,2, Member, CCF, and Zi-Qian Zhang1,2

1 School of Information Science and Engineering, Central South University, Changsha 410083, China
2 “Mobile Health” Ministry of Education-China Mobile Joint Laboratory, Central South University, Changsha 410083, China
3 College of Literature and Journalism, Central South University, Changsha 410083, China

E-mail: bjzou@csu.edu.cn; ychencs@whu.edu.cn; anandawork@126.com; xxxyczl@csu.edu.cn; 450867607@qq.com

Received June 20, 2017; revised September 25, 2017.

Abstract   Arterial-venous classification of retinal blood vessels is important for the automatic detection of cardiovascular diseases such as hypertensive retinopathy and stroke. In this paper, we propose an arterial-venous classification (AVC) method, which focuses on feature extraction and selection from vessel centerline pixels. The vessel centerline is extracted after the preprocessing of vessel segmentation and optic disc (OD) localization. Then, a region of interest (ROI) is extracted around the OD, and the most efficient features of each centerline pixel in the ROI are selected from the local features, grey-level co-occurrence matrix (GLCM) features, and an adaptive local binary pattern (A-LBP) feature by using a max-relevance and min-redundancy (mRMR) scheme. Finally, a feature-weighted K-nearest neighbor (FW-KNN) algorithm is used to classify the arterial-venous vessels. The experimental results on the DRIVE database and INSPIRE-AVR database achieve high accuracies of 88.65% and 88.51% in the ROI, respectively.

Keywords   fundus image, arterial-venous classification, adaptive local binary pattern (A-LBP), feature selection, feature-weighted K-nearest neighbor (FW-KNN)

1 Introduction

The retina is the only part of the human body where the vascular structure is observable in vivo through a simple technology[1]. The attributes of the vascular structure, such as width, tortuosity, branching pattern, and angles, are useful for the early diagnosis of diseases such as diabetic retinopathy and hypertensive retinopathy, because most of these diseases result in changes of the vascular structure[2]. The arterial-to-venous width ratio (AVR) is an important indicator reflecting these small changes. For the calculation of AVR, we need to classify the vessels into arteries and veins.

There are visual and geometrical features that can be used for arterial-venous classification. Arteries are brighter while veins are darker, and in general artery calibers are smaller than vein calibers. Arteries have a more apparent central reflection than veins[3]. Fig.1 shows an example of arteries and veins. For this reason, classifying the vessels into arteries and veins is possible, and these visual and geometrical features have been used in many arterial-venous classification methods.

Fig.1. Example of arteries and veins (a vein and an artery are labeled).

Regular Paper
Special Section of CAD/Graphics 2017
This work was supported by the National Natural Science Foundation of China under Grant Nos. 61573380, 61702559, 61562029.
∗ Corresponding Author

©2017 Springer Science + Business Media, LLC & Science Press, China

The existing arterial-venous classification methods are either completely automatic or semi-automatic. The semi-automatic methods require ophthalmologists to mark the initial point of a trunk blood vessel in the retinal image as an artery or a vein, and then the remaining vessels are classified automatically according to the connectivity of the vessels. The automatic methods are based on various characteristics of artery and vein blood vessels, and apply machine learning or other methods to automatically label the vessels as arteries or veins.

There are several examples of semi-automatic methods. Hubbard et al.[3-4] first proposed a semi-automatic method for the calculation of vessel calibers, and successfully applied it to community atherosclerosis research. Aguilar et al.[5] achieved semi-automatic classification based on the analysis of the vascular network structure, subtrees of geometric properties, and topological properties. Rothaus et al.[6] considered the characteristics of cross-over and bifurcation points and the non-closed-loop characteristics of vessels, combined them with a vessel segmentation and tracking algorithm, and finally achieved vessel classification. Semi-automatic methods are time-consuming and laborious for handling a large number of retinal fundus images; the automatic methods perform well on this problem.

The automatic classification methods have the advantages of saving manpower and material resources, and they also enable the automatic screening of cardiovascular diseases, which is receiving more and more attention from researchers. Grisan and Ruggeri[2] first proposed an automated method of arterial-venous classification. They considered the existence of brightness and color differences between one fundus image and others, and took into account that the differences between arteries and veins are reduced from the optic disc (OD) towards the extended region in fundus images. Since the vessels inside the optic disc are hard to track, they chose an ROI, divided it into four quadrants, and carried out the classification processes independently within the four quadrants.

Niemeijer et al.[1,7] also applied a supervised learning method to automatically identify the vascular type. Dashtbozorg et al.[8] used a graph-based approach for arterial-venous classification. Estrada et al.[9] proposed a hierarchical edge marker program based on topology estimation and finally obtained the results of arterial-venous classification. Relan et al.[10] proposed a novel method called Squared-Loss Mutual Information Clustering (SMIC) for classifying arteries and veins in retinal images, which achieved 87.6% and 86.2% classification rates on the INSPIRE-AVR dataset[7] and the DRIVE dataset[11], respectively. Vijayakumar et al.[12] used a random forest to select features extracted from vessels, and then chose the features that contribute most to the classification as inputs to a polynomial-kernel support vector machine (SVM) classifier. This method achieves 92.4% accuracy on the VICAVR database①.

Though their accuracy is not better than that of semi-automatic classification methods, the advantage of automatic classification methods is their speed. A rapid and accurate classification method can help improve diagnostic efficiency for batch processing of retinal images. Therefore, we focus on automatic methods in this paper.

In this paper, a new feature-selection-based method for arterial-venous classification is proposed. Firstly, we collect various features that have an effect on the final classification. Secondly, a feature selection algorithm is applied, which can eliminate the features that contribute negatively to classification accuracy. Lastly, we take three strategies for vessel classification: no feature selection, feature selection, and feature weighting by our feature-weighted K-nearest neighbor (FW-KNN) algorithm.

As a short summary, the major contributions of our work are summarized as below.
1) We propose an arterial-venous classification (AVC) method for vessel classification based on feature selection.
2) To the best of our knowledge, we collect various features including local features and an adaptive local binary pattern (A-LBP) feature to represent a retinal image.
3) To get the best performance of classification, we propose FW-KNN and apply it to arterial-venous classification.

The remainder of this paper is organized as follows. The proposed method is explained in Section 2. In Section 3 we describe our experimental results of arterial-venous classification and compare the results of our proposed method with existing methods on DRIVE and INSPIRE-AVR, respectively.


① http://www.varpa.es/research/ophtalmology.html#databases, Nov. 2017.

2 Proposed Method

Some preprocessing techniques are applied to obtain vessel centerline pixels after the acquisition of fundus images. Then vascular features are extracted from the centerline pixels, and a machine learning algorithm is applied for arterial-venous classification.

2.1 Materials

The DRIVE database[11], the INSPIRE-AVR database[7], and the VICAVR database② are public databases that can be used for arterial-venous classification. In our proposed method, we choose the INSPIRE-AVR database and the DRIVE database. The images in the DRIVE database were randomly selected from a DR (diabetic retinopathy) screening program in the Netherlands, and a manual segmentation of the vasculature is available. The arterial-venous ground truth for the DRIVE dataset (20 test images) was obtained by an expert who manually labeled the vessel centerline segments in each image[8]. The 40 high-resolution images in the INSPIRE-AVR database have a resolution of 2392×2048 pixels and are optic disc-centered.

2.2 Preprocessing

2.2.1 Retinal Vessel Segmentation

Vessel segmentation is aimed at extracting the vessel centerline. The vessel segmentation method proposed by Zhu et al.[13] is a supervised method for segmenting retinal vessels. It follows a pixel-wise approach based on the extreme learning machine (ELM) and consists of three phases. The first phase is feature extraction: local features, morphological features, phase congruency, and the Hessian and divergence of vector fields are extracted from each pixel of the image. In the second phase, training and testing are done using ELM, which yields a preliminary vessel segmentation result that still contains some non-vessel regions. The third phase is an optimization step, in which regions whose area is less than 20 pixels are removed. Fig.2(a) shows the result of vessel segmentation.

2.2.2 Optic Disc Localization

Optic disc (OD) localization is necessary for determining the ROI for arterial-venous classification. The vessels on the OD are not relevant to the arterial-venous classification process, and the differences between vessels outside and inside the OD are reduced, which results in a high error rate for classification. Thus, we choose an annular region 0.5∼1.5 disc diameters away from the OD center as the ROI. The OD localization is based on a morphological operation, a circular Hough transform, and a grow-cut algorithm[14]: we apply the Hough transform to find the OD center, and obtain the OD boundary by employing the grow-cut algorithm. Fig.2(b) shows the result of OD localization.

Fig.2. Example of preprocessing. (a) Vessel segmentation. (b) Optic disc localization. (c) Vessel centerline extraction.
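The annular ROI of Subsection 2.2.2 is a simple geometric construction. As a minimal sketch in Python, assuming the OD center and disc diameter come out of the Hough-transform and grow-cut step (function and variable names here are illustrative, not our actual code):

import numpy as np

def annular_roi_mask(shape, od_center, od_diameter):
    # Boolean mask of the annulus 0.5~1.5 disc diameters away from the OD center.
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    cy, cx = od_center  # assumed to come from the circular Hough transform
    dist = np.sqrt((rows - cy) ** 2 + (cols - cx) ** 2)
    return (dist >= 0.5 * od_diameter) & (dist <= 1.5 * od_diameter)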
2.2.3 Vessel Centerline Extraction

An improved augmented fast marching method (AFMM)[15] is applied for centerline extraction. AFMM is widely used in skeleton extraction, computing a parameterized boundary location for each pixel; more details can be found in [15]. In this step, we also compute the local radius at each centerline point, defined as the maximum radius of a disc that contains only vessel pixels. We keep the pixels whose local radii are greater than a threshold; this yields a more accurate result and does not affect the calculation of AVR. After centerline extraction, cross-over and bifurcation points are removed to obtain independent vessel segments, because these pixels would adversely affect training. Fig.2(c) shows the result of vessel centerline extraction.
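For illustration, the following sketch substitutes morphological skeletonization and a Euclidean distance transform for AFMM[15]; it extracts a centerline, estimates the local radius, and removes branch points, but it is a simplified stand-in, not the AFMM pipeline itself.

import numpy as np
from scipy.ndimage import convolve, distance_transform_edt
from skimage.morphology import skeletonize

def centerline_with_radius(vessel_mask, radius_threshold=1.0):
    # The EDT value at a skeleton pixel approximates the local radius,
    # i.e., the largest vessel-only disc around that pixel.
    skeleton = skeletonize(vessel_mask.astype(bool))
    radius = distance_transform_edt(vessel_mask)
    return skeleton & (radius > radius_threshold), radius

def remove_branch_points(skeleton):
    # A skeleton pixel with more than two 8-connected skeleton neighbors is a
    # cross-over/bifurcation point; removing such points yields independent segments.
    neighbors = convolve(skeleton.astype(int), np.ones((3, 3)), mode="constant") - skeleton
    return skeleton & (neighbors <= 2)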
2.3 Feature Extraction

Feature extraction is one of the most important steps of AVC. A normalization operation and a contrast enhancement operation are applied to the retinal images separately for better feature extraction. To obtain normalized images, we estimate the luminosity and contrast variability in the background part of the image and then compensate for this variability in the whole image[16]. To obtain enhanced-contrast images, an iterative algorithm is performed, which aims to remove the strong contrast between the retinal fundus and the region outside the camera's aperture[17].


② http://www.varpa.es/research/ophtalmology.html#databases, Nov. 2017.

Then a Gaussian filter is applied to obtain the enhanced-contrast images. Fig.3 illustrates the results of the above two processes.

Fig.3. Example of original and processed images. (a) Original image. (b) Image processed by the normalization procedure. (c) Image processed by the contrast enhancement procedure.

The features used to differentiate arteries from veins are usually based on intensity and geometrical properties. A collection of pixel intensity features, GLCM (grey-level co-occurrence matrix) features, and an A-LBP feature is used to generate an 80-dimensional feature vector for each centerline pixel.
2.3.1 Local Features

The arteries are oxygen-rich while the veins carry a large amount of metabolites in the human body[2]. This leads to differences in the color of artery and vein blood vessels: the intensity of vein vessel pixels appears darker than that of artery vessels. Also, artery flow velocities are faster than vein velocities, which causes the width of arteries around the optic disc to be smaller than that of veins. Based on these characteristics, a large number of color-related features are extracted from different channels of different color spaces. The L, A, B color space describes mathematically all perceivable colors in three dimensions: L for lightness, and A and B for the color opponents green-red and blue-yellow, respectively. Besides, the green channel of the original image is taken separately and filtered with Gaussian kernel functions of different scales; the response values are recorded as features.
2.3.2 Grey-Level Co-Occurrence Matrix Features

Arteries and veins also differ in texture. Therefore the grey-level co-occurrence matrix (GLCM), a statistical method for examining the texture of an image, is applied in our method. The method counts how often pairs of pixels with specific values and in a specified spatial relationship occur in an image. We create a GLCM and then extract statistics from this matrix. In order to describe the texture features in the GLCM intuitively, some parameters are derived from the matrix, as Table 1 illustrates.

Table 1. Parameters Extracted from GLCM

Designation    Function
Contrast       Measures the local variations in the grey-level co-occurrence matrix
Energy         Provides the sum of squared elements in the GLCM; also known as uniformity or the angular second moment
Homogeneity    Measures the closeness of the distribution of elements in the GLCM to that of the GLCM diagonal

In this paper, local images of a 25-pixel region, centered at the vessel centerline pixels, are extracted. We then create the GLCMs and calculate contrast, energy, and homogeneity separately as feature values.
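A minimal sketch of these GLCM features with scikit-image (graycomatrix/graycoprops in recent releases); reading the 25-pixel region as a 5×5 patch is our interpretation here:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8, r, c, half=2):
    # 5x5 patch (25 pixels) centered on the centerline pixel (r, c)
    patch = gray_uint8[r - half:r + half + 1, c - half:c + half + 1]
    glcm = graycomatrix(patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    # contrast, energy, and homogeneity, as listed in Table 1
    return [graycoprops(glcm, p)[0, 0] for p in ("contrast", "energy", "homogeneity")]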

2.3.3 Adaptive Local Binary Pattern Feature

The central reflex is more apparent in arteries than in veins due to their difference in light absorption, and we propose an A-LBP (adaptive local binary pattern) operator to describe this feature. The local binary pattern (LBP) is an operator that describes the local texture features of an image. Under normal circumstances, this operator covers an area within a fixed radius. For example, we compare the intensity values of the central pixel and its 8-neighborhood: if the value of a neighborhood pixel is greater than that of the central pixel, we assign 1 to the corresponding neighbor position in the LBP operator; otherwise, we assign 0 to this position. Then we read the LBP operator as a binary sequence in the clockwise or counter-clockwise direction, and convert the binary sequence to a decimal number.

In this paper, we make the LBP operator's radius variable. Fig.4 shows a part of a gray image and its intensity values; the black numbers represent the background, the red ones represent vessels, and the blue number indicates an example centerline pixel. We calculated the local radius of the centerline pixels in Subsection 2.2.3, and the local radius at this centerline pixel is 2. We set this local radius as our A-LBP radius, which means we compare the values of the central point and the neighbor points that are two pixels away from the center. If the value of a neighborhood point is greater than that of the central point, we assign 1 to the corresponding neighbor position in the LBP operator; otherwise, we assign 0 to this position. Because the values in the A-LBP operator are arranged along a circle, choosing different origin values yields different binary sequences. We convert these sequences into decimal numbers and choose the smallest decimal value as our A-LBP feature value. The bold and underlined numbers in Fig.4 show the border of the A-LBP windows (for uniformity of the A-LBP values, we take eight sampling points).

Fig.4. Example of a gray image and its intensity values. The black numbers represent the background while the red ones represent vessels. The blue number indicates one of the centerline pixels. The bold and underlined numbers show an example of our adaptive LBP windows.
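The operator just described can be sketched in a few lines of numpy: eight sampling points on a circle whose radius equals the local vessel radius, with the binary pattern rotated to its smallest decimal value. This is a minimal illustration of the idea, not our exact implementation.

import numpy as np

def a_lbp(gray, r, c, local_radius, n_points=8):
    # eight sampling points on a circle of the given (adaptive) radius
    angles = 2 * np.pi * np.arange(n_points) / n_points
    rr = np.rint(r + local_radius * np.sin(angles)).astype(int)
    cc = np.rint(c + local_radius * np.cos(angles)).astype(int)
    bits = (gray[rr, cc] > gray[r, c]).astype(int)
    # rotation-invariant encoding: smallest decimal value over all start positions
    return min(int("".join(map(str, np.roll(bits, k))), 2) for k in range(n_points))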

Finally, a set of 80 features is extracted; Table 2 shows the list of extracted features. All features are normalized.

2.4 Feature Selection

Feature selection is not only a method of reducing dimension, but also a method to improve the performance of the classifier. First of all, the mutual information between two random variables x and y is defined in terms of their probabilistic density functions p(x), p(y), and p(x, y):

I(x; y) = ∬ p(x, y) log( p(x, y) / (p(x)p(y)) ) dx dy.

The minimal-redundancy maximal-relevance (mRMR) criterion[18] tends to select features that have a high correlation with the class (output) and a low correlation among themselves. Namely, it finds m features from a feature set S that have the largest relevance to, and the smallest redundancy with respect to, the target class c.

Step 1: Max-Relevance. Max-relevance searches for features satisfying (1):

max D(S, c),   D = (1/|S|) Σ_{x_i ∈ S} I(x_i; c).   (1)

But the selected features may have rich redundancy, i.e., the dependency among these features could be large. If two features are highly dependent, one of them can be removed without changing the class-discriminative power much. Therefore, the next step is to remove redundant features.

Step 2: Min-Redundancy. Min-redundancy selects mutually exclusive features:

min R(S),   R(S) = (1/|S|²) Σ_{x_i, x_j ∈ S} I(x_i; x_j).   (2)

Combining (1) and (2), we get a criterion for feature selection:

max Φ(D, R),   Φ(D, R) = D − R.

The combination of the max-relevance and min-redundancy criteria, which we call mRMR, has been proved credible and leads to promising improvements in feature selection and classification accuracy[18]. Consider the feature set F = {f_1, f_2, ..., f_M}, where there are M features per centerline point (as mentioned in Subsection 2.3, a set of 80 features is extracted, and thus M = 80 in this paper), and the decision class is −1 for arteries and 1 for veins. We then calculate the mutual information between features, and between features and the class, and use the mRMR criteria to score the features. The final set W = {λ_1, λ_2, ..., λ_M} contains the importance score of every feature in the dataset; the features with the best scores contribute the most to classification.
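A greedy sketch of the mRMR ranking following (1) and (2), with mutual information estimated on discretized features; this illustrates the criterion and is not the reference implementation of [18].

import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr_rank(X, y, n_bins=10):
    # discretize each feature so mutual information can be estimated by counting
    Xd = np.stack([np.digitize(col, np.histogram_bin_edges(col, n_bins))
                   for col in X.T], axis=1)
    M = Xd.shape[1]
    relevance = np.array([mutual_info_score(Xd[:, j], y) for j in range(M)])
    selected = [int(np.argmax(relevance))]
    remaining = set(range(M)) - set(selected)
    while remaining:
        # score = relevance D minus mean redundancy R to the already selected set
        best = max(remaining, key=lambda j: relevance[j] - np.mean(
            [mutual_info_score(Xd[:, j], Xd[:, s]) for s in selected]))
        selected.append(best)
        remaining.remove(best)
    return selected  # feature indices in decreasing mRMR preference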

Table 2. Feature Vector of Each Pixel

Dimension                  Feature No.  Feature Description
Based on original          1∼3          Red, green, and blue intensities of the centerline pixels
images (51)                4∼6          Hue (H), saturation (S), and intensity (I) of the centerline pixels
                           7∼14         Intensity of the centerline pixel in Gaussian-blurred (σ = 2, 4, 8, 16) red and green planes
                           15           Local radius of the centerline pixels
                           16∼18        Values of L, A, B of the centerline pixels
                           19∼21        Values of contrast, energy, and homogeneity using GLCM (entropy always gave poor results)
                           22∼24        Mean of L, A, B over the 8-neighborhood of the centerline pixels
                           25∼27        Mean green, red, and blue intensities around the centerline pixels
                           28∼35        Mean intensity over the 8-neighborhood of the centerline pixel in Gaussian-blurred (σ = 2, 4, 8, 16) red and green planes
                           36∼38        Mean value over all pixels of each segment in the red, green, and blue images
                           39∼41        Mean value over all pixels of each segment in the H, I, S channels
                           42∼44        Variance over all pixels of each segment in the red, green, and blue images
                           45∼47        Variance over all pixels of each segment in the H, I, S channels
                           48∼50        Mean value over all pixels of each segment in the L, A, B channels
                           51           A-LBP feature in the gray image
Based on enhanced          52∼54        Red, green, and blue intensities of the centerline pixels
contrast images (16)       55           Intensity of the centerline pixels
                           56∼58        L, A, B of the centerline pixels
                           59∼61        Mean red, green, and blue intensities around the centerline pixels
                           62∼64        Mean of L, A, B over the 8-neighborhood of the centerline pixels
                           65∼67        Mean value over all pixels of each segment in the L, A, B channels
Based on normalized        68∼70        Red, green, and blue intensities of the centerline pixels
images (13)                71∼73        Red, green, and blue intensities over the 8-neighborhood of the centerline pixels
                           74∼76        Variance of red, green, and blue intensities over the 8-neighborhood of the centerline pixels
                           77∼78        Maximum and minimum values over all pixels of each segment in the red image
                           79∼80        Maximum and minimum values over all pixels of each segment in the green image

2.5 Arterial-Venous Classification

K-nearest neighbor (KNN) is faster than some classifiers such as random forest, and usually has a higher accuracy. In this paper, we propose a feature-weighted KNN (FW-KNN) algorithm. An example is used to illustrate our method. Suppose there are two-dimensional (2D) feature sets and we apply KNN for classification. We usually use the Euclidean distance between the unknown sample and each training sample, which is calculated as (3):

d(x, y) = sqrt( Σ_{k=1}^{n} (x_k − y_k)² ).   (3)

In this example, n = 2. We then rewrite the formula in a weighted form:

d(x, y) = sqrt( Σ_{k=1}^{n} w_k (x_k − y_k)² ).

If w_1 = 1 and w_2 = 1, d(x, y) is the Euclidean distance. If we let w_1 → +∞, feature 1 (f_1) plays a decisive role and the role of feature 2 (f_2) can be ignored. Based on this idea, we propose the FW-KNN algorithm and apply it to arterial-venous classification.

Each centerline pixel is assigned a soft label after the FW-KNN algorithm is performed. We use the fact that connected centerline pixels in a vessel segment are all of the same type by definition; the final label assigned to a centerline pixel thus depends on the labels assigned to the other centerline pixels in the vessel segment. Because our method does not fit vessels with only one or two pixels, we only classify vessels with more than two pixels, which is enough for measuring AVR.
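Because Σ w_k(x_k − y_k)² equals the ordinary Euclidean distance once every feature is scaled by sqrt(w_k), FW-KNN can be sketched on top of a standard KNN classifier; the snippet below is such a sketch, with placeholder weights rather than the values used in Section 3.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_fw_knn(X_train, y_train, weights, k=5):
    # scaling by sqrt(w_k) turns the weighted distance into plain Euclidean
    scale = np.sqrt(np.asarray(weights, dtype=float))
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X_train * scale, y_train)
    return clf, scale

# prediction on new centerline-pixel features: clf.predict(X_test * scale)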
3 Results

The DRIVE database and the INSPIRE-AVR database are used to evaluate our method. The reference standard was obtained by an expert who manually labeled the vessel centerline segments in each image. There is a difference in vessel centerline extraction between the expert's manual method and our method; therefore we retrieve the label of each segment pixel extracted by our method, and assign the most frequent label in the segment as the final label of the segment.
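The segment-level relabeling described above is a majority vote over the pixel labels within each vessel segment; a minimal sketch (names illustrative):

import numpy as np

def segment_majority_vote(segment_ids, pixel_labels):
    # assign each segment the most frequent label among its centerline pixels
    final = {}
    for seg in np.unique(segment_ids):
        values, counts = np.unique(pixel_labels[segment_ids == seg], return_counts=True)
        final[seg] = values[np.argmax(counts)]
    return final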

Table 3 shows the classification property indexes. “A” represents one label, and “B” represents another label. True positive (TP) is the number of pixels correctly classified as “A”; false positive (FP) is the number of vein pixels misclassified as “A”; true negative (TN) is the number of pixels correctly classified as “B”; and false negative (FN) is the number of artery pixels misclassified as “B”.

Table 3. Comparison of the Result with the Ground Truth

Classification   “A” Label in Ground Truth   “B” Label in Ground Truth
“A” pixels       True positive (TP)          False positive (FP)
“B” pixels       False negative (FN)         True negative (TN)

The accuracy (Acc), sensitivity (Sn), and specificity (Sp) are used to measure the performance of the arterial-venous classification. Obviously, the sensitivity of arteries is equal to the specificity of veins. Thus we regard the “A” label as arteries, and the accuracy (Acc), sensitivity (Sn), and specificity (Sp) are defined as in Table 4.

Table 4. Performance Measures of Arterial-Venous Classification

Performance Measure   Description
Sensitivity (Sn)      TP/(TP + FN)
Specificity (Sp)      TN/(TN + FP)
Accuracy (Acc)        (TP + TN)/(TP + FP + TN + FN)
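For concreteness, the three measures of Table 4, computed directly from the four counts of Table 3:

def performance(tp, fp, tn, fn):
    sn = tp / (tp + fn)                    # sensitivity
    sp = tn / (tn + fp)                    # specificity
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy
    return sn, sp, acc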
3.1 Results on DRIVE

Direct Results. We select the first 10 images of the 20 test images as the training set for the KNN classifier and the rest as the test set. An accuracy of 80.69% is achieved for the classification of the centerline pixels of the vessels in the ROI. This result indicates good feature extraction.

Feature Selection Results. We feed the training set into the mRMR algorithm, whose output is a final set of features sorted by importance. The result of feature selection shows that the red and green channels of the RGB color space are the most important features, and their related features are also vital. The lightness plane (L) of the LAB color space is the second dominant feature. A-LBP features also play an important role. We use 10-fold cross-validation to select the best feature subset from the results: we randomly divide the training data into ten parts and use each of them in turn as test data while using the other nine as training data, and we choose the subset with the highest accuracy as the selected subset. The selected subset of features is used for KNN training and testing, and the percentage of correctly classified vessels is 84.69%.

Feature Weighting Results. We add weights to the features by FW-KNN based on the results of feature selection. The weights are set as follows: the weight of features 1∼20 is 0.5, that of features 21∼40 is 0.3, that of features 41∼60 is 0.2, and that of features 61∼80 is 0. An accuracy of 88.65% is obtained. These results demonstrate that our feature selection is useful. The best classification results on the test retinal vessels from the DRIVE database are shown in Table 5; the average accuracy of our method is 88.65%. Compared with previous methods for classifying retinal vessels into arteries and veins, the technique used in this paper is easier to implement and achieves higher classification accuracy. The comparison is shown in Table 6.

Table 5. Classification Results of Our Method (DRIVE)

Image     Sn       Sp       Acc
11 test   1.0000   0.8972   0.9654
12 test   0.7920   1.0000   0.8698
13 test   0.8547   0.9281   0.8886
14 test   0.9467   0.8519   0.8938
15 test   0.7826   1.0000   0.8518
16 test   0.9766   0.9752   0.9757
17 test   0.6087   1.0000   0.7780
18 test   0.7692   0.8358   0.7979
19 test   0.8013   0.9017   0.8437
20 test   1.0000   1.0000   1.0000
Average   0.8532   0.9390   0.8865
Maximum   1.0000   1.0000   1.0000
Minimum   0.6087   0.8358   0.7780

Table 6. Comparison with Existing Methods on DRIVE

Method                      Acc
Chhabra and Bhushan[19]     0.8700
Relan et al.[10]            0.8620
Our method                  0.8865

3.2 Results on INSPIRE-AVR

For the 40 images of the INSPIRE-AVR database, we select the first 20 images as the training set and the rest as the test set. The best accuracy of correctly classified vessel segments using the proposed method is 88.51%, which is similar to the value achieved by Relan et al.[10] (87.60%). An example of automatic classification of INSPIRE-AVR images is shown in Fig.5. The differences between the results of the proposed method and manual labeling are shown in green

(wrongly classified arteries) and yellow (wrongly classified veins), while the correctly classified arteries and veins are presented in red and blue, respectively.

Fig.5. Results of INSPIRE-AVR classification. (a) Original images. (b) Segmentation results. (c) Our results in ROI (red: correctly classified arteries; blue: correctly classified veins; green: wrongly classified vessels).

4 Conclusions

In this paper, we focused on feature design and selection for better performance of arterial-venous classification of blood vessels. We extracted 80 features including the local features, the GLCM features, and an A-LBP feature. The mRMR scheme was used for feature selection, and the selected features were used to train and test the FW-KNN classifier for vessel classification. The results are very promising, with average accuracies of 88.65% and 88.51% on the DRIVE and INSPIRE-AVR databases, respectively.

The vessel classification in the ROI is enough for measuring AVR, and the small vessels are also important for the automatic detection of relevant diseases. Therefore, the next target can be computing the value of AVR. Further research needs to be done to explore the effects of different features and other feature selection methods.

References

[1] Niemeijer M, van Ginneken B, Abràmoff M D. Automatic classification of retinal vessels into arteries and veins. In Proc. SPIE 7260, Medical Imaging 2009: Computer-Aided Diagnosis, October 2009, p.72601F.
[2] Grisan E, Ruggeri A. A divide et impera strategy for automatic classification of retinal vessels into arteries and veins. In Proc. the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Sept. 2003, pp.890-893.
[3] Hubbard L D, Brothers R J, King W N, Clegg L X, Klein R, Cooper L S et al. Methods for evaluation of retinal microvascular abnormalities associated with hypertension/sclerosis in the Atherosclerosis Risk in Communities Study. Ophthalmology, 1999, 106(12): 2269-2280.
[4] Wong T Y, Knudtson M D, Klein R, Klein B E K, Meuer S M, Hubbard L D. Computer-assisted measurement of retinal vessel diameters in the Beaver Dam Eye Study: Methodology, correlation between eyes, and effect of refractive errors. Ophthalmology, 2004, 111(6): 1183-1190.
[5] Aguilar W, Martinez-Perez M E, Frauel Y, Escolano F, Lozano M A, Espinosa-Romero A. Graph-based methods for retinal mosaicing and vascular characterization. In Proc. the 6th IAPR-TC-15 International Workshop on Graph-Based Representations in Pattern Recognition, June 2007, pp.25-36.
[6] Rothaus K, Jiang X, Rhiem P. Separation of the retinal vascular graph in arteries and veins based upon structural knowledge. Image and Vision Computing, 2009, 27(7): 864-875.
[7] Niemeijer M, Xu X, Dumitrescu A V, Gupta P, van Ginneken B et al. Automated measurement of the arteriolar-to-venular width ratio in digital color fundus photographs. IEEE Transactions on Medical Imaging, 2011, 30(11): 1941-1950.
[8] Dashtbozorg B, Mendonca A M, Campilho A. An automatic graph-based approach for artery/vein classification in retinal images. IEEE Transactions on Image Processing, 2014, 23(3): 1073-1083.
[9] Estrada R, Allingham M J, Mettu P S, Cousins S W, Tomasi C, Farsiu S. Retinal artery-vein classification via topology estimation. IEEE Transactions on Medical Imaging, 2015, 34(12): 2518-2534.
[10] Relan D, Ballerini L, Trucco E, MacGillivray T. Retinal vessel classification based on maximization of squared-loss mutual information. In Proc. Machine Intelligence and Signal Processing, October 2016, pp.77-84.
[11] Staal J, Abramoff M D, Niemeijer M, Viergever M A, van Ginneken B. Ridge-based vessel segmentation in color images of the retina. IEEE Transactions on Medical Imaging, 2004, 23(4): 501-509.
[12] Vijayakumar V, Koozekanani D D, White R, Kohler J, Roychowdhury S, Parhi K K. Artery/vein classification of retinal blood vessels using feature selection. In Proc. the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), August 2016, pp.1320-1323.

[13] Zhu C, Zou B, Zhao R et al. Retinal vessel segmentation in colour fundus images using extreme learning machine. Computerized Medical Imaging and Graphics, 2017, 55: 68-77.
[14] Abdullah M, Fraz M M, Barman S A. Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm. https://peerj.com/articles/2003/, Sept. 2017.
[15] Telea A, van Wijk J J. An augmented fast marching method for computing skeletons and centerlines. In Proc. the Symposium on Data Visualisation, May 2002, pp.251-260.
[16] Foracchia M, Grisan E, Ruggeri A. Luminosity and contrast normalization in retinal images. Medical Image Analysis, 2005, 9(3): 179-190.
[17] Soares J V B, Leandro J J G, Cesar R M, Jelinek H F, Cree M J. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Transactions on Medical Imaging, 2006, 25(9): 1214-1222.
[18] Peng H, Long F, Ding C. Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(8): 1226-1238.
[19] Chhabra S, Bhushan B. Supervised pixel classification into arteries and veins of retinal images. In Proc. Innovative Applications of Computational Intelligence on Power, Energy and Controls with Their Impact on Humanity (CIPECH), November 2014, pp.59-62.

Bei-Ji Zou received his B.S. degree in computer software from Zhejiang University, Hangzhou, in 1982, and his M.S. and Ph.D. degrees in computer science and technology from Tsinghua University, Beijing, in 1984, and Hunan University, Changsha, in 2001, respectively. He joined the School of Computer and Communication at Hunan University, Changsha, in 1984, where he became an associate professor in 1997 and a professor in 2001, and served as the vice dean since 1997. He is currently a professor and has served as the dean at the School of Information Science and Engineering, Central South University, Changsha. His research interests are focused on computer graphics, image processing, and virtual reality technology. To date he has published more than 100 papers in journals.

Yao Chen received her B.S. degree in geophysics from Wuhan University, Wuhan, in 2016. She is currently a graduate student at Central South University, Changsha. Her research interests include image processing and computer vision.

Cheng-Zhang Zhu received her M.E. degree in computer science and education from Huazhong University of Science and Technology, Wuhan, in 2006, and her Ph.D. degree in control science and engineering from the School of Information Science and Engineering, Central South University, Changsha, in 2016. Currently, she is a faculty member of the College of Literature and Journalism, Central South University, Changsha. Her research interests include medical image processing, computer vision, and pattern recognition.

Zai-Liang Chen received his Ph.D. degree in computer science from Central South University, Changsha, in 2012. He is currently an associate professor with Central South University, Changsha, and the associate director of the Center for Ophthalmic Imaging Research, Central South University, Changsha. In 2014, he was a visiting scholar with the Stevens Institute of Technology, New Jersey. He has authored or co-authored over 30 papers in journals and conferences. His recent research interests include computer vision, medical image analysis, and large-scale medical image processing.

Zi-Qian Zhang received his B.S. degree in electronic science and technology from Central South University, Changsha, in 2016. He is currently a graduate student at Central South University, Changsha. His research interests include image processing and pattern recognition.
