2020 International Conference on Machine Vision and Image Processing (MVIP)

Faculty of Engineering, College of Farabi, University of Tehran, Iran, 19 & 20 Feb. 2020

Fully Convolutional Networks for Fluid Segmentation in Retina Images
Behnam Azimi, Department of Computer Engineering, Faculty of Engineering, Lorestan University, Khorramabad, Iran (behnam3azimi@gmail.com)
Abdolreza Rashno, Department of Computer Engineering, Faculty of Engineering, Lorestan University, Khorramabad 6815144316, Iran (rashno.a@lu.ac.ir)
Sadegh Fadaei, Department of Electrical Engineering, Faculty of Engineering, Yasouj University, Yasouj 7591874934, Iran (s.fadaei@yu.ac.ir)

Abstract—Retinal diseases manifest in optical coherence tomography (OCT) images, since many signs of retina abnormalities are visible in OCT. Fluid regions can reveal signs of age-related macular degeneration (AMD) and diabetic macular edema (DME), and automatic segmentation of these regions can help ophthalmologists in diagnosis and treatment. This work presents a fully-automated method based on graph shortest path layer segmentation and fully convolutional networks (FCNs) for fluid segmentation. The proposed method has been evaluated on a dataset containing 600 OCT scans of 24 subjects. Results show that the proposed FCN model outperforms 3 existing fluid segmentation methods with improvements of 4.44% and 6.28% in dice coefficient and sensitivity, respectively.

Index Terms—Fully Convolutional Networks, Fluid Segmentation, Graph Shortest Path, Retina, Optical Coherence Tomography.

I. INTRODUCTION

Machine learning methods have been applied in many applications such as image and audio processing for verification, identification, recognition and classification [1]–[8]. In recent years, a concept referred to as neutrosophic sets has also been used in these applications [9]–[15]. Ophthalmologists use machine learning methods to evaluate whether the retina is in a normal or abnormal condition. For this task, images are first taken from the retina by imaging methodologies such as fundus photography and optical coherence tomography (OCT). Then, the structure of the retinal layers, blood vessels, tissue region areas, the optic disk and drusen regions can all be computed by algorithms based on image processing and machine vision. Since manual extraction of such parameters is very time-consuming, automated algorithms are used for these tasks.

One of the most significant parameters in the retina is fluid regions, which are created by the leakage of blood vessels. Fluid regions can be detected in OCT scans and are good signs of age-related macular degeneration (AMD) and diabetic macular edema (DME). Therefore, the area and volume of fluid regions can be used by ophthalmologists for treatment and follow-up of retinal diseases.

Many fluid segmentation methods have been proposed in the literature. Prior to this work, we presented fully-automated methods for fluid segmentation and detection, as well as retinal layer segmentation, based on graph cut and graph shortest path methods in the neutrosophic (NS) domain for both AMD and DME subjects in [16]–[22]. A comprehensive fluid detection and segmentation challenge covering three types of fluid was organized by Bogunovic [23]. Fluid regions in OCT scans were segmented and detected by deep neural networks in [24]. Layers and fluid regions of the retina were automatically segmented by fully convolutional networks in a method referred to as ReLayNet [25]. Convolutional neural networks were applied for robust retina thickness segmentation of OCT scans in [26]. Fully-automated quantification and detection of macular fluid in OCT with deep learning methods was proposed in [27]. Multi-vendor OCT scans were analyzed by deep learning approaches for intraretinal cystoid fluid detection and quantification in [28]. Generally, fluid segmentation can be done by either image segmentation or pixel classification. In image segmentation methodologies, the whole image is segmented into fluid and tissue segments (regions), while in pixel classification each pixel is classified as fluid or tissue. For pixel classification, texture, color and shape features are extracted from pixels and used in conventional pixel classification methods [29], [30]. Here, fluid regions are segmented by pixel classification.

The main contribution of this research is a fully-automated method consisting of layer segmentation and a fully convolutional network (FCN) for fluid segmentation in subjects with retina abnormalities. First, the inner limiting membrane (ILM) and retinal pigment epithelium (RPE) layers, the first and last layers of the retina, respectively, are segmented by graph shortest path algorithms in the NS domain. Then, the retinal region between these layers is extracted as the region of interest (ROI). The OCT scan and its corresponding ROI are presented as two input channels to the FCN. The proposed FCN model has encoder and decoder stages with a depth of 4. In each depth of the encoder, two convolution layers and one max-pooling layer are used; in each depth of the decoder, two convolution layers, one transposed convolution layer and one concatenate layer are used. The final output of the FCN has 2 channels with the same spatial dimension as the input, one for fluid and another for tissue regions. The rest of this paper is organized

978-1-7281-6832-6/20/$31.00 ©2020 IEEE

Authorized licensed use limited to: University of Exeter. Downloaded on June 19,2020 at 23:56:57 UTC from IEEE Xplore. Restrictions apply.
as follows: Section II presents the proposed method, including ROI segmentation and the FCN model. The dataset is described in Section III. The experimental setup and results are discussed in Sections IV and V, respectively. Limitations of the proposed FCN model are explained by examples in Section VI. Finally, this work is concluded in Section VII.

II. PROPOSED METHOD

A. Region of Interest Segmentation

If the first and last layers of OCT scans, namely ILM and RPE, are segmented, ROI segmentation and background elimination follow directly. For this task, images are first transferred to the NS domain by Eqs. (1)-(5):

$T(i,j) = \dfrac{\bar{g}(i,j) - \bar{g}_{\min}}{\bar{g}_{\max} - \bar{g}_{\min}}$  (1)

$F(i,j) = 1 - T(i,j)$  (2)

$I(i,j) = \dfrac{\delta(i,j) - \delta_{\min}}{\delta_{\max} - \delta_{\min}}$  (3)

$\bar{g}(i,j) = \dfrac{1}{w^2} \sum_{m=-w/2}^{w/2} \sum_{n=-w/2}^{w/2} g(i+m, j+n)$  (4)

$\delta(i,j) = |g(i,j) - \bar{g}(i,j)|$  (5)

where $g$ is the input image, $\bar{g}$ is its local mean over a $w \times w$ window, and $\bar{g}_{\min}$, $\bar{g}_{\max}$, $\delta_{\min}$ and $\delta_{\max}$ are the minimum and maximum values of $\bar{g}$ and $\delta$, respectively. $T$, $I$ and $F$ represent the true, indeterminacy and false sets, respectively. Then, a graph is created from the image by mapping each pixel to one vertex; edge weights between vertexes are computed from the gray levels of the pixels connected to those vertexes. For ILM segmentation, the weights are computed by Eqs. (6)-(7), applied to the $T$ set in the NS domain:

$VerGrad = T * H, \quad H = \begin{bmatrix} -2 \\ 0 \\ 2 \end{bmatrix}$  (6)

$W((a_1,b_1),(a_2,b_2)) = 4 \times MaxG - VerGrad(a_1,b_1) - VerGrad(a_2,b_2) + 2 \times \mathrm{mean}(R)$  (7)

where $R$ is the set of pixels above $(a_1,b_1)$ and $MaxG$ is the maximum gradient in the image. For RPE segmentation, the filter $H$ is inverted and $R$ corresponds to the set of pixels under $(a_1,b_1)$ [17].

Two pixels (vertexes) of the OCT image, the up-left and down-right pixels, are selected as start and end points, respectively. Dijkstra's algorithm is then applied to find the shortest path between them, which segments the ILM and RPE [17].

ROI segmentation has the following advantages: 1) All fluid regions are located between these two layers, so no fluid region is lost in the segmentation process. 2) Fluid regions are very similar to the background in both brightness and texture, which can mislead the segmentation method; background elimination removes this source of error. 3) Only the ROI is processed instead of the whole image, which speeds up processing. The proposed scheme in this research exploits the first and second of these advantages.

B. Fully Convolutional Network Model

Two channels, the main OCT scan and its ROI, are presented to the FCN, so the input dimension is 256x512 with two channels. The proposed FCN structure for fluid segmentation is shown in Fig. 1 and consists of encoder and decoder stages. The depth of both encoder and decoder is 4; deeper models are unnecessary for this input dimension. In each depth of the encoder, two convolution layers with 32 filters are applied to the whole depth of the input channels. To preserve the input dimension, padding is applied in all convolution layers, which yields channels of dimension 256x512x32. Then a max-pooling layer with window size [2,2] and stride 2 downsamples the input dimension by half (128x256x32). The other encoder depths repeat this pattern, except that the numbers of convolution filters are 64, 256 and 512 for depths 2, 3 and 4, respectively. After the encoder, two convolution layers with 512 filters are applied, which leads to a dimension of 16x32x512. In each level of the decoder, a transposed convolution layer with learnable parameters is first used for upsampling, creating an output twice the input size; this layer inverts the max-pooling operation. Then, the output of the convolution layers at each encoder level is sent to the corresponding decoder level via a skip connection and concatenated with the output of the transposed convolution in a concatenate layer. At the first level, this yields channels of dimension 32x64x512. Next, two convolution layers with 256 filters and padding are applied, which leads to 32x64x256. In decoder levels 2, 3 and 4, 128, 64 and 32 filters are used, respectively. Since each transposed convolution doubles the input size, the final decoder dimension is 256x512x32. Finally, since there are two classes, a convolution layer with 2 filters creates the final FCN output of dimension 256x512x2: one channel for tissue and another for fluid pixels.

III. OCT DATASET

To evaluate the effectiveness of the proposed method, a publicly available OCT dataset containing 600 OCT scans of 24 subjects with dimension 496x1024, collected from the University of Minnesota, is used. The width and length resolutions of the OCT scans are 3.87 μm/pixel and 5.88 μm/pixel, respectively. Ground truth for fluid regions is available from two ophthalmologist experts. Among all 600 images, 48 scans are selected for the training set and 83 scans for validation; in total, 141 scans are used in the training phase and 459 scans for testing. The reason for the small number of training scans is that, in the training set, all types of fluid, including sub-retinal, intra-retinal and sub-RPE, are considered and similar points are ignored.
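The NS transform of Eqs. (1)-(5) and the shortest-path layer extraction of Section II-A can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the authors' implementation: the window size w, the simplified per-node cost (the 2 x mean(R) region term of Eq. (7) is omitted) and the move set of the search are all assumptions.

```python
import heapq
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def ns_transform(g, w=5):
    # Map a grayscale image into the neutrosophic T, I, F sets (Eqs. (1)-(5)).
    # The window size w is an assumption; the paper does not state it.
    g = g.astype(np.float64)
    g_bar = uniform_filter(g, size=w)                                  # local mean, Eq. (4)
    T = (g_bar - g_bar.min()) / (g_bar.max() - g_bar.min() + 1e-12)   # Eq. (1)
    F = 1.0 - T                                                        # Eq. (2)
    delta = np.abs(g - g_bar)                                          # Eq. (5)
    I = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)   # Eq. (3)
    return T, I, F

def layer_shortest_path(T):
    # Dijkstra from the up-left to the down-right pixel over the T set.
    # Large vertical gradient -> cheap node, in the spirit of Eqs. (6)-(7);
    # the mean(R) region term is omitted in this sketch.
    ver_grad = convolve(T, np.array([[-2.0], [0.0], [2.0]]))           # Eq. (6)
    cost = 2.0 * ver_grad.max() - ver_grad                             # non-negative per-node cost
    rows, cols = T.shape
    start, goal = (0, 0), (rows - 1, cols - 1)
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    moves = [(-1, 1), (0, 1), (1, 1), (1, 0), (-1, 0)]                 # march mostly rightward
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), np.inf):
            continue
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal                                              # backtrack the layer curve
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

Running `layer_shortest_path` once on the vertical gradient of T and once on its inverted filter would give the ILM and RPE curves, between which the ROI is cropped.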

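The encoder-decoder FCN of Section II-B, together with the training settings reported in Section IV (3x3 kernels, LeakyReLU with alpha = 0.2, cross-entropy loss, ADAM), can be sketched in Keras roughly as follows. The softmax output activation and the exact channel bookkeeping around the skip connections are our assumptions, not details stated in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # Two 3x3 padded convolutions with LeakyReLU(alpha=0.2), per Sec. IV
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    return x

def build_fcn(input_shape=(256, 512, 2)):
    inputs = layers.Input(shape=input_shape)        # OCT scan + its ROI as two channels
    skips, x = [], inputs
    # Encoder: depths 1-4 with 32, 64, 256, 512 filters, each followed by 2x2 max-pooling
    for f in (32, 64, 256, 512):
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x)
    x = conv_block(x, 512)                          # bottleneck at 16x32
    # Decoder: transposed convolution doubles the size, skip connection, two convolutions
    for f, skip in zip((256, 128, 64, 32), reversed(skips)):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, f)
    # Two output channels: one for fluid, one for tissue
    outputs = layers.Conv2D(2, 1, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model
```

With these shapes, four poolings take 256x512 down to 16x32 and four transposed convolutions bring it back to 256x512, matching the dimensions quoted in the text.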
Fig. 1: Proposed structure for fluid segmentation by FCN model.

IV. EXPERIMENTAL SETUP

Because of memory and computation resource limitations, OCT scans are first resized by half, to 256x512. After fluid segmentation, they are resized back to their original size and the segmented fluid regions are compared with the ground truth images. Kernel size [3,3] is used for all convolution layers, each followed by a LeakyReLU activation function with alpha = 0.2. A cross-entropy loss function and the ADAM optimizer are used. With these configurations, the FCN model is trained with the labels of expert 1 and tested against the fluid labels of both expert 1 and expert 2.

V. RESULTS

Fluid regions segmented by the proposed model are compared with the neutrosophic graph cut (NSGC) [17], graph cut (GC) [32] and kernel graph cut (KGC) [33] methods with respect to the dice coefficient and sensitivity measures. These methods have previously been applied to the same OCT dataset, which is the main reason we compare our model against them. The measures are computed between each automated method on one side and expert 1 and expert 2 on the other side. Fig. 2 depicts scans segmented by the proposed FCN model for subjects with intra-retinal fluid regions; note that this type of fluid appears between retinal layers. Segmented scans for sub-retinal and sub-RPE fluid regions are shown in Figs. 3 and 4, respectively.

Numeric results for fluid regions segmented by GC, KGC and NSGC for the dice coefficient and sensitivity measures are reported in Tables I and II, respectively. In each column group, the measure is computed with respect to expert 1 (E.1) and expert 2 (E.2). For the dice coefficient, the proposed FCN model outperforms NSGC, KGC and GC for subjects 1, 4, 6, 8-13, 15, 17 and 19-23 in both the E.1 and E.2 columns. NSGC achieved the best dice coefficient in the E.1 column for subjects 2, 3, 7 and 14; in the E.2 column for subject 16; and in both the E.1 and E.2 columns for subjects 18 and 24. The proposed method has the best average dice coefficient, 86%, compared with NSGC, KGC and GC at 81.56%, 65.25% and 69.13%, respectively. The proposed method also achieves the best sensitivity in both the E.1 and E.2 columns for subjects 1, 2-6, 8-17, 19 and 21-24. NSGC performs better in the E.1 column for subjects 2, 7, 18 and 20. On average, the proposed model, with a sensitivity of 87.38%, outperforms NSGC, KGC and GC at 81.10%, 75.69% and 70.11%, respectively.

VI. DISCUSSION

Despite the good segmentation results of the proposed FCN model, some issues affect the model and lead to poor segmentation results. The first is errors arising from inaccurate ROI segmentation. In principle, there is no fluid region above the ILM or beneath the RPE layer; if these layers are segmented wrongly, fluid regions can fall outside the ROI. All fluid regions outside the ROI (above the segmented ILM or beneath the segmented RPE) will be missed by the FCN, since these regions are trained as tissue. One such case is demonstrated in Fig. 5, where a fluid region beneath the segmented RPE is missed because of inaccurate RPE segmentation.

Also, in some cases with sub-retinal fluid regions, non-fluid dark regions appear above the fluid regions and mislead

the FCN into assigning them as fluid wrongly. Fig. 6 shows a non-fluid region above the RPE labeled as fluid incorrectly. If the neighbors of such regions are darker, they trigger the FCN to assign them fluid labels.

Undesired artifacts of the imaging process, such as retinal movement, noise, instrument problems and regions of the retina without layer structure, can also create good candidates for fluid regions; a sample of such a scan is demonstrated in Fig. 7. An elevated RPE creates non-fluid regions referred to as drusen, which are also labeled as fluid by the FCN. If the ROI stage segments the RPE layer accurately in such cases, this error can be avoided. In addition, blood vessels at the retinal surface absorb light waves and do not let them pass through the retina; the resulting dark regions under blood vessels are also labeled as fluid incorrectly (see Fig. 8).

Finally, as depicted in Fig. 9, very small fluid regions under the RPE are missed by the FCN. The main reasons are their small distance to the choroid region and inaccurate RPE segmentation. Dark regions between the middle layers are also wrongly assigned as fluid because of their similarity to fluid regions.

Fig. 2: Segmented scans by the proposed FCN for subjects with intra-retinal fluid.

TABLE I: Dice coefficient of the proposed method and the other methods.

             GC              KGC             NSGC            Prop. Method
sub     E.1     E.2     E.1     E.2     E.1     E.2     E.1     E.2
1       78.68   75.34   79.07   74.76   85.68   81.81   94.41   94.84
2       77.79   77.83   86.11   82.03   89.49   85.36   89.19   91.78
3       56.54   53.05   56.56   51.91   85.81   79.4    84.87   83.75
4       71.94   72.26   69.51   69.49   82.81   82.34   84.45   83.6
5       45.80   43.7    25.2    24.64   68.25   66.02   75.8    72.82
6       73.32   68.61   61.34   57.1    85.91   80.53   93.92   94.08
7       56.71   52.61   46.32   42.0    67.31   63.13   54.84   58.04
8       86.89   86.25   79.06   79.07   86.85   86.29   95.79   95.54
9       76.65   72.61   74.95   70.92   83.0    78.81   95.33   95.24
10      76.54   76.39   76.57   76.44   84.63   80.51   90.01   89.81
11      72.22   67.74   55.28   51.02   86.51   82.26   92.42   87.17
12      71.34   68.03   69.53   66.08   82.42   79.43   90.41   85.87
13      62.61   62.57   64.67   64.56   89.9    89.85   92.28   92.19
14      63.48   59.26   57.34   53.47   81.07   76.98   78.23   78.19
15      61.94   53.73   58.19   50.26   74.16   66.07   78.74   76.48
16      71.17   71.07   69.21   69.2    87.62   87.57   91.76   85.89
17      73.03   72.17   88.31   83.58   91.52   86.48   92.16   91.45
18      66.27   58.43   66.93   59.49   84.7    77.06   76.9    74.66
19      73.93   73.64   43.31   43.26   86.38   82.09   90.54   85.94
20      65.76   57.41   69.75   63.05   85.4    75.13   85.64   79.79
21      81.11   70.62   76.04   66.16   89.0    78.38   93.27   83.03
22      71.33   64.40   61.5    55.0    77.16   70.19   82.13   82.08
23      89.83   79.47   90.85   82.49   91.68   82.66   96.45   87.67
24      81.37   77.12   87.27   83.22   88.79   84.77   88.44   83.37
Ave     71.01   67.26   67.2    63.3    84.0    79.13   87.29   84.72
Ave     69.13           65.25           81.56           86.00

TABLE II: Sensitivity of the proposed method and the other methods.

             GC              KGC             NSGC            Prop. Method
sub     E.1     E.2     E.1     E.2     E.1     E.2     E.1     E.2
1       74.39   71.24   83.03   79.4    84.38   81.01   96.55   96.73
2       84.24   80.33   92.42   88.31   94.38   90.36   90.74   93.79
3       50.70   47.26   67.23   62.3    81.82   75.08   86.04   85.23
4       61.70   62.15   79.6    79.88   84.92   84.89   86.91   86.67
5       56.1    53.13   54.73   53.18   67.9    64.26   77.84   73.76
6       81.38   76.73   78.8    74.62   87.78   82.34   96.98   96.25
7       57.14   54.03   56.7    52.05   66.37   63.28   60.5    64.02
8       88.43   87.18   86.39   86.14   92.75   91.56   97.29   96.64
9       79.37   75.32   78.24   74.3    89.31   84.95   96.19   96.1
10      74.39   70.24   77.86   73.59   82.91   78.44   91.96   91.77
11      74.29   69.78   77.0    72.63   82.23   77.62   94.51   88.92
12      62.61   59.6    67.56   64.88   77.15   74.84   91.28   87.45
13      76.83   76.96   84.06   84.02   88.31   88.13   92.38   92.39
14      62.24   57.92   65.87   61.82   78.27   73.80   80.38   80.64
15      64.41   56.2    66.02   57.76   74.24   66.04   79.39   77.2
16      88.08   84.04   88.63   84.69   90.34   86.29   92.17   86.43
17      79.57   74.88   89.29   84.13   90.71   85.36   96.56   95.22
18      59.55   51.68   67.45   59.86   81.33   73.3    79.02   77.82
19      72.3    67.93   77.73   73.44   83.24   78.57   91.55   87.1
20      75.35   62.68   87.05   74.09   88.61   75.60   86.02   79.85
21      77.6    64.02   84.37   69.73   85.61   69.97   94.84   80.33
22      83.81   73.53   90.66   78.71   87.94   76.8    91.51   79.23
23      85.9    73.21   94.24   79.51   87.77   72.97   96.75   82.09
24      74.72   70.33   86.72   82.41   86.85   82.57   88.21   83.04
Ave     72.71   67.52   78.4    72.98   83.96   78.25   88.98   85.78
Ave     70.11           75.69           81.10           87.38
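The dice coefficient and sensitivity reported in Tables I and II can be computed from binary fluid masks as in the following sketch (a standard formulation, not the authors' evaluation code):

```python
import numpy as np

def dice_and_sensitivity(pred, truth):
    # Dice coefficient and sensitivity (in percent) between binary fluid masks:
    # dice = 2*TP / (2*TP + FP + FN), sensitivity = TP / (TP + FN)
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2.0 * tp / (2.0 * tp + fp + fn + 1e-12)
    sensitivity = tp / (tp + fn + 1e-12)
    return 100.0 * dice, 100.0 * sensitivity
```

Each table entry would be such a score between one method's mask and one expert's ground truth, averaged over the scans of a subject.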

Fig. 3: Segmented scans by the proposed FCN for subjects with sub-retinal fluid.

Fig. 4: Segmented scans by the proposed FCN for subjects with sub-RPE fluid.

Fig. 5: Segmentation error caused by a wrongly segmented ROI.

Fig. 6: Segmentation error caused by non-fluid dark regions.

Fig. 7: Segmentation error caused by undesired artifacts.

Fig. 8: Segmentation error caused by elevated RPE.

Fig. 9: Segmentation error: very small fluid regions under the RPE are missed.

VII. CONCLUSION

This work presented a fully-automated method for fluid segmentation based on fully convolutional networks (FCNs) applied to OCT scans and their corresponding regions of interest computed by graph shortest path in the neutrosophic (NS) domain. The proposed method was evaluated on 600 OCT scans, and the results showed improvements with respect to dice coefficient and sensitivity in comparison with three fluid segmentation methods proposed in the literature. Future efforts will be directed at training the FCN with training data augmented by random translation, reflection, rotation, flipping and cropping to achieve more accurate results. Finally, using more inputs, such as the three NS subsets, for the FCN is proposed as another future work.

REFERENCES

[1] A. Rashno, H. SadeghianNejad, A. Heshmati, Highly efficient dimension reduction for text-independent speaker verification based on relieff algorithm and support vector machines, International Journal of Signal Processing, Image Processing and Pattern Recognition 6 (1) (2013) 91–108.
[2] A. Rashno, S. M. Ahadi, M. Kelarestaghi, Text-independent speaker verification with ant colony optimization feature selection and support vector machine, in: 2015 2nd International Conference on Pattern Recognition and Image Analysis (IPRIA), IEEE, 2015, pp. 1–5.
[3] A. Rashno, M. Saraee, S. Sadri, Mars image segmentation with most relevant features among wavelet and color features, in: 2015 AI & Robotics (IRANOPEN), IEEE, 2015, pp. 1–7.
[4] A. Rashno, F. S. Tabataba, S. Sadri, Image restoration with regularization convex optimization approach, Journal of Electrical Systems and Signals 2 (2) (2014) 32–36.
[5] A. Rashno, F. S. Tabataba, S. Sadri, Regularization convex optimization method with l-curve estimation in image restoration, in: 2014 4th International Conference on Computer and Knowledge Engineering (ICCKE), IEEE, 2014, pp. 221–226.
[6] A. Rashno, S. Sadri, H. SadeghianNejad, An efficient content-based image retrieval with ant colony optimization feature selection schema based on wavelet and color features, in: 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP), IEEE, 2015, pp. 59–64.
[7] A. Rashno, E. Rashno, Content-based image retrieval system with most relevant features among wavelet and color features, arXiv preprint arXiv:1902.02059, 2019.
[8] S. Fadaei, A. Rashno, E. Rashno, Content-based image retrieval speedup, arXiv preprint arXiv:1911.11379.
[9] A. Heshmati, M. Gholami, A. Rashno, Scheme for unsupervised colour–texture image segmentation using neutrosophic set and non-subsampled contourlet transform, IET Image Processing 10 (6) (2016) 464–473.
[10] A. Rashno, F. Smarandache, S. Sadri, Refined neutrosophic sets in content-based image retrieval application, in: 2017 10th Iranian Conference on Machine Vision and Image Processing (MVIP), 2017, pp. 197–202.
[11] A. Rashno, S. Sadri, Content-based image retrieval with color and texture features in neutrosophic domain, in: 2017 3rd International Conference on Pattern Recognition and Image Analysis (IPRIA), 2017, pp. 50–55.
[12] E. Rashno, A. Akbari, B. Nasersharif, A convolutional neural network model based on neutrosophy for noisy speech recognition, Infinite Study, 2019.
[13] E. Rashno, B. Minaei-Bidgoli, Y. Guo, An effective clustering method based on data indeterminacy in neutrosophic set domain, Infinite Study, 2018.
[14] E. Rashno, S. S. Norouzi, B. Minaei-Bidgoli, Y. Guo, Certainty of outlier and boundary points processing in data mining, in: 2019 27th Iranian Conference on Electrical Engineering (ICEE), 2019, pp. 1929–1934.
[15] E. Rashno, B. Minaei-Bidgoli, Boundary points handling for image edge detection based on neutrosophic set, in: 2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI), 2019, pp. 886–890.
[16] A. Rashno, D. D. Koozekanani, P. M. Drayna, B. Nazari, S. Sadri, H. Rabbani, K. K. Parhi, Fully automated segmentation of fluid/cyst regions in optical coherence tomography images with diabetic macular edema using neutrosophic sets and graph algorithms, IEEE Transactions on Biomedical Engineering 65 (5) (2017) 989–1001.
[17] A. Rashno, B. Nazari, D. D. Koozekanani, P. M. Drayna, S. Sadri, H. Rabbani, K. K. Parhi, Fully-automated segmentation of fluid regions in exudative age-related macular degeneration subjects: Kernel graph cut in neutrosophic domain, PloS One 12 (10) (2017) e0186949.
[18] A. Rashno, D. D. Koozekanani, K. K. Parhi, Oct fluid segmentation
using graph shortest path and convolutional neural network, in: 2018
40th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society (EMBC), IEEE, 2018, pp. 3426–3429.
[19] B. Salafian, R. Kafieh, A. Rashno, M. Pourazizi, S. Sadri, Automatic
segmentation of choroid layer in edi oct images using graph theory in
neutrosophic space, arXiv preprint arXiv:1812.01989.
[20] A. Rashno, K. K. Parhi, B. Nazari, S. Sadri, H. Rabbani, P. Drayna,
D. D. Koozekanani, Automated intra-retinal, sub-retinal and sub-rpe
cyst regions segmentation in age-related macular degeneration (amd)
subjects, Investigative Ophthalmology & Visual Science 58 (8) (2017)
397–397.
[21] K. K. Parhi, A. Rashno, B. Nazari, S. Sadri, H. Rabbani, P. Drayna,
D. D. Koozekanani, Automated fluid/cyst segmentation: A quantitative
assessment of diabetic macular edema, Investigative Ophthalmology &
Visual Science 58 (8) (2017) 4633–4633.
[22] J. Kohler, A. Rashno, K. K. Parhi, P. Drayna, S. Radwan, D. D.
Koozekanani, Correlation between initial vision and vision improvement
with automatically calculated retinal cyst volume in treated dme after
resolution, Investigative Ophthalmology & Visual Science 58 (8) (2017)
953–953.
[23] H. Bogunović, F. Venhuizen, S. Klimscha, S. Apostolopoulos, A. Bab-Hadiashar, U. Bagci, M. F. Beg, L. Bekalo, Q. Chen, C. Ciller, et al., RETOUCH: the retinal OCT fluid detection and segmentation benchmark and challenge, IEEE Transactions on Medical Imaging.
[24] S. H. Kang, H. S. Park, J. Jang, K. Jeon, Deep neural networks for the detection and segmentation of the retinal fluid in OCT images, in: Proc. MICCAI Retinal OCT Fluid Challenge (RETOUCH), 2017, pp. 9–14.
[25] A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, N. Navab, ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks, Biomedical Optics Express 8 (8) (2017) 3627–3642.
[26] F. G. Venhuizen, B. van Ginneken, B. Liefers, M. J. van Grinsven,
S. Fauser, C. Hoyng, T. Theelen, C. I. Sánchez, Robust total retina
thickness segmentation in optical coherence tomography images using
convolutional neural networks, Biomedical optics express 8 (7) (2017)
3292–3316.
[27] T. Schlegl, S. M. Waldstein, H. Bogunovic, F. Endstraßer, A. Sadeghipour, A.-M. Philip, D. Podkowinski, B. S. Gerendas, G. Langs, U. Schmidt-Erfurth, Fully automated detection and quantification of macular fluid in OCT using deep learning, Ophthalmology 125 (4) (2018) 549–558.
[28] F. G. Venhuizen, B. van Ginneken, B. Liefers, F. van Asten, V. Schreur,
S. Fauser, C. Hoyng, T. Theelen, C. I. Sánchez, Deep learning approach
for the detection and quantification of intraretinal cystoid fluid in
multivendor optical coherence tomography, Biomedical optics express
9 (4) (2018) 1545–1569.
[29] A. Rashno, S. Sadri, H. SadeghianNejad, An efficient content-based im-
age retrieval with ant colony optimization feature selection schema based
on wavelet and color features, in: 2015 The International Symposium
on Artificial Intelligence and Signal Processing (AISP), IEEE, 2015, pp.
59–64.
[30] A. Rashno, B. Nazari, S. Sadri, M. Saraee, Effective pixel classification
of mars images based on ant colony optimization feature selection and
extreme learning machine, Neurocomputing 226 (2017) 66–79.
[31] J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for
semantic segmentation, in: Proceedings of the IEEE conference on
computer vision and pattern recognition, 2015, pp. 3431–3440.
[32] Y. Boykov, G. Funka-Lea, Graph cuts and efficient N-D image segmentation, International Journal of Computer Vision 70 (2) (2006) 109–131.
[33] M. B. Salah, A. Mitiche, I. B. Ayed, Multiregion image segmentation by parametric kernel graph cuts, IEEE Transactions on Image Processing 20 (2) (2011) 545–557.
