
A. AYYAPPAN
17MIS0289

VELLORE INSTITUTE OF TECHNOLOGY


SCHOOL OF INFORMATION TECHNOLOGY
AND ENGINEERING
MULTIMEDIA SYSTEMS
FINAL-REVIEW
Winter Semester 2019-2020
Course Code : SWE1013
Slot : C1
SUBMITTED BY:
A.AYYAPPAN
17MIS0289

SUBMITTED TO:
Prof. RAHAMATHUNNISA.U


CONTENTS:

CHAPTER NO   TOPIC
1            ABSTRACT
2            INTRODUCTION
3            DESIGN
4            PROPOSED SYSTEM
5            LITERATURE REVIEW
6            ALGORITHM
7            IMPLEMENTATION
8            RESULT
9            TEST CASES
10           CONCLUSION
11           REFERENCES


1. ABSTRACT:
Glaucoma is a disease in which damage to the optic nerve causes
progressive, irreversible vision loss, and it is the second leading cause of
blindness. It is caused by increased eye pressure, which enlarges the optic
cup, blocks the flow of fluid to the optic nerve, and deteriorates vision.
The ratio of the size of the optic cup to the optic disc, known as the
cup-to-disc ratio (CDR), is one of the major indicators of glaucoma: the
higher the CDR, the greater the chance of glaucoma. The aim of this analysis
is to highlight the various techniques used by different researchers for
segmentation of the optic disc and optic cup.
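The CDR itself is a simple ratio. A minimal illustrative sketch (the diameter values are hypothetical; in practice they come from the segmented cup and disc):

```python
def cup_to_disc_ratio(cup_diameter, disc_diameter):
    """Return the CDR; the higher the value, the greater the glaucoma risk."""
    if disc_diameter <= 0:
        raise ValueError("disc diameter must be positive")
    return cup_diameter / disc_diameter

# Hypothetical diameters in pixels from a segmented fundus image.
print(cup_to_disc_ratio(45, 100))  # -> 0.45
```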

2. INTRODUCTION
Glaucoma is a disease of increased pressure within the eyeball. It is
mostly caused by increased intraocular pressure (IOP) resulting from a
malfunction or malformation of the eye's drainage structures. If left
untreated, it leads to degeneration of the optic nerve and retinal fibers, so
early diagnosis through analysis of the neuro-retinal optic disc (OD) and
optic cup (OC) area is crucial.
The increase in pressure puts excessive stress on the attachment of the
optic nerve to the eye, and without treatment glaucoma can lead to permanent
blindness. Early detection of the disease helps prevent the development of a
more serious condition. Trained clinicians use fundus images for diagnosis,
checking for abnormalities or changes in the retina. The important anatomical
structures captured in a fundus image of a normal retina are the blood
vessels, OD, OC, and macula.
An image of a diseased retina may also contain many visible symptoms of
eye disease. In a healthy retinal image the OD usually appears as a circular,
bright yellowish object partly covered by vessels. The OC is the cupping of
the optic nerve, that is, the depression in the middle of the nerve when
viewed from the front of the eye. When the optic nerve is damaged, the
cupping increases. Changes in the OD and OC can indicate the presence,
current state, and progression of glaucoma. Since colour fundus images
provide early signs of diseases such as diabetes and glaucoma,
ophthalmologists use them to track eye diseases. Figure 1 shows the important
features of a retinal colour fundus image.

3. DESIGN:

4. PROPOSED METHOD:


5. LITERATURE REVIEW:
Glaucoma is a serious eye disease that, over time, results in gradual
blindness. Early detection helps prevent the development of a more serious
condition. The vertical cup-to-disc ratio, the ratio of the vertical diameter
of the optic cup to that of the optic disc in a fundus image, is an important
clinical indicator for glaucoma diagnosis. This paper presents an automated
method for extracting the optic disc and optic cup using Fuzzy C-Means
clustering combined with thresholding; the vertical cup-to-disc ratio is then
calculated from the extracted disc and cup. The validity of the method was
tested on 365 colour fundus images from two publicly available databases,
DRION and DIARETDB0, and on images from an ophthalmologist. The results
appear promising and useful for clinical work.
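The Fuzzy C-Means step described above can be sketched on toy 1-D intensity data. The data values, initial centers, fuzzifier m = 2, and iteration count below are assumptions for illustration; the reviewed method applies the clustering to full fundus image planes together with thresholding:

```python
def fuzzy_c_means(xs, centers, m=2.0, iters=50):
    """Bezdek-style FCM on 1-D data: alternate membership and center updates."""
    for _ in range(iters):
        u = []
        for x in xs:
            # Distance to each center (epsilon guards against division by zero).
            d = [abs(x - c) or 1e-12 for c in centers]
            # Membership of x in cluster i: inverse-distance weighting.
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # New center = mean of the data weighted by membership^m.
        centers = [sum((u[k][i] ** m) * xs[k] for k in range(len(xs)))
                   / sum(u[k][i] ** m for k in range(len(xs)))
                   for i in range(len(centers))]
    return centers

# Dark "background" intensities vs bright "disc" intensities (hypothetical).
print(sorted(fuzzy_c_means([10, 12, 14, 200, 205, 210], [0.0, 255.0])))
```

The two centers settle near the means of the two intensity groups (about 12 and 205), which is what makes a subsequent threshold between them effective.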


The proposed system consists of three key phases: fundus image
processing, visual field examination, and an assistant diagnostic module.
Fundus images are fed into the image-processing module and the visual field
is examined; together these two modules identify glaucoma and glaucoma
suspects, with IOP used to improve accuracy. The assistant diagnostic module
contains "IF-THEN" fuzzy rules that give a primary diagnosis, and the
screening system includes a database for data collection. The proposed system
is cost-effective and suitable for detecting early-stage glaucoma, especially
in large-scale screening.
Glaucoma is classified by extracting two features from retinal fundus
images:
(i) Cup-to-Disc Ratio (CDR).
(ii) Ratio of the neuroretinal rim in the inferior, superior, temporal,
and nasal (ISNT) quadrants.
Glaucoma frequently damages the superior and inferior fibers before the
temporal and nasal optic nerve fibers, which thins the superior and inferior
rim areas and breaks the ordering required by the ISNT rule. Hence, detecting
the rim areas in the four directions supports correct verification of the
ISNT rule and improves the diagnosis of glaucoma at early stages. Finally, a
feed-forward back-propagation neural network is used for classification based
on these two features.
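The ISNT rule verification described above reduces to an ordering check on the four rim widths. A minimal sketch (the rim-width values are hypothetical measurements, e.g. in pixels):

```python
def follows_isnt_rule(inferior, superior, nasal, temporal):
    """In a healthy eye the neuroretinal rim is thickest inferiorly:
    Inferior >= Superior >= Nasal >= Temporal."""
    return inferior >= superior >= nasal >= temporal

print(follows_isnt_rule(1.9, 1.7, 1.4, 1.1))  # True: consistent with a healthy rim
print(follows_isnt_rule(1.2, 1.7, 1.4, 1.1))  # False: thinned inferior rim, suspect
```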

This paper proposes a computer-aided decision support system for
automated detection of glaucoma in monocular fundus images. Identifying
glaucoma from fundus images involves measuring the size and shape of the
optic cup and the neuroretinal rim. Optic cup detection is challenging
because the cup is interwoven with the blood vessels. A new color model
technique based on pallor in fundus images, using K-means clustering, is
proposed to differentiate the optic cup from the disc boundary. The method
differs in that initial optic cup region detection is followed by erasure of
the blood vessels. In addition to shape-based features, textural features are
extracted to better characterize pathological subjects. An optimal set of
features selected by a genetic algorithm is fed into an adaptive neuro-fuzzy
inference system for classification of images into normal, suspect, and
abnormal categories. The method was evaluated on 550 normal and glaucomatous
images, and its performance was compared with neural network and support
vector machine (SVM) classifiers in terms of classification accuracy and
convergence time. Experimental results show that the features used are
clinically significant for accurate detection of glaucoma.
Fuzzy C-means clustering and superpixel classification can also be used
for optic cup segmentation. One study recreated the optic cup through fuzzy
C-means clustering on a wavelet-transformed green-plane image after removal
of the blood vessels; however, it did not report the segmentation accuracy of
the optic cup. Another proposed a clustering-based thresholding algorithm to
recreate the optic cup using the spatial distribution of gray levels. A
superpixel classification-based method for optic cup segmentation in glaucoma
screening computed center-surround statistics from superpixels and used them
with histograms for cup segmentation. This superpixel framework was later
extended by modeling the binary superpixel clustering task as a low-rank
representation problem employing the domain prior and the low-rank property
of the superpixels, and by combining clustering and thresholding to segment
optic cups. However, both the clustering methods and the superpixel methods
fit the optic cup to an ellipse and ignore vessel information, so they easily
miss the local features of the optic cup.
Glaucoma detection has focused on the features of the optic nerve head
(ONH) in fundus images because the ONH is the main site affected by glaucoma.
Before feature extraction can be performed on an ONH image, ONH localization
and segmentation are required. Localization forms a sub-image, called the
region of interest (ROI) image, that contains the whole ONH. The purpose of
segmentation is to distinguish the ONH from the background. Several of the
most popular methods involve localization and segmentation processes such as
thresholding and active contours; additional methods have been developed,
such as superpixels and fuzzy C-means.


One method used region-based active contour modelling in the red color
channel for optic disc segmentation. Its results were compared with the
gradient vector flow (GVF) and Chan-Vese (C-V) models, and the analysis
showed that it improved the boundary measure. It applied r-bend (vessel bend)
information, using a dynamic region of support (ROS) for corner detection
followed by 2-D spline interpolation for non-uniform r-bends; comparison with
thresholding and ellipse fitting showed that this approach improved on the
other methods. Another method localized the image by extracting the region of
interest, used the Otsu method for vessel removal, and finally extracted the
optic disc with active contour modelling (ACM), fuzzy C-means clustering
(FCM), and artificial neural network (ANN) segmentation techniques. ACM and
ANN were found to give better results than FCM.

A further method applied histogram equalization and thresholding to the
extracted region of interest. Noise is removed by component labelling
(connecting neighbouring pixels), and the disc boundaries are then extracted
by optimal color-channel histogram analysis, ellipse fitting, and
optimization; the same steps are used to detect the cup boundary.

There are several previous studies that classify normal and glaucomatous
fundus images through machine learning to support physicians' glaucoma
diagnosis. Chen performed a classification of normal and glaucoma images
using a convolutional neural network, designing an AlexNet-style CNN
evaluated on the ORIGA [15] and SCES [12] fundus image datasets and obtaining
areas under the curve (AUC) of 0.831 and 0.887, respectively. Chen's study is
significant in that it classifies glaucoma using a CNN, but it distinguishes
only normal and glaucoma classes and does not show strong classification
performance. Li proposed a model combining a CNN and an SVM to diagnose
glaucoma focusing on the disc/cup region of interest (ROI) and obtained an
AUC of 0.838. Li's work has the same limitations as Chen's and, in addition,
did not extract the disc/cup ROI directly but instead used the ROI manually
labeled in the ORIGA dataset. Khali conducted a review of several machine
learning techniques for glaucoma detection, comparing techniques such as
decision trees, fuzzy logic, K-nearest neighbors, support vector machines,
and Naive Bayes.

K-means clustering plays a vital role in the feature extraction stage to
compute one of the features, the CDR. It is an unsupervised learning
algorithm that solves the well-known clustering problem. The procedure
classifies a given data set into a number of clusters (k clusters) fixed a
priori. The main idea is to define k centroids, one for each cluster. Each
point in the data set is then associated with its nearest centroid, after
which k new centroids are calculated as the means of the resulting clusters.
Repeated application of these two steps moves the centroids step by step
until they no longer change location.
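The two alternating steps described above (assign each point to its nearest centroid, then recompute each centroid as its cluster mean) can be sketched in pure Python; the 1-D data and initial centroids below are illustrative only:

```python
def k_means(xs, centroids, iters=100):
    """Lloyd's algorithm on 1-D data."""
    for _ in range(iters):
        # Step 1: associate each point with its nearest centroid.
        clusters = [[] for _ in centroids]
        for x in xs:
            i = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
            clusters[i].append(x)
        # Step 2: move each centroid to the mean of its cluster.
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:   # centroids no longer move: converged
            break
        centroids = new
    return centroids

print(k_means([1, 2, 3, 10, 11, 12], [0.0, 5.0]))  # -> [2.0, 11.0]
```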
Two different techniques were used to localize the optic disc: (1)
analyzing the convergence of the vessels to detect circular bright shapes and
(2) detecting the brightest circular area with a fuzzy Hough transform. After
detecting the OD, segmentation was conducted on the region of interest
specified by a difference-of-Gaussians filter. The vessel tree boundaries
were segmented with a Canny filter to compute the edges, and the vessel edges
in the Canny output were then suppressed using the vessel tree segmentation.
Finally, histogram information was included to measure the accuracy of
segmentation.
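The voting idea behind the Hough transform in (2) can be sketched on synthetic edge points: each edge pixel votes for every center that could have produced it at a known radius, and the center with the most votes wins. The grid, radius, and 8-angle sampling below are assumptions for illustration; the reviewed work applies a fuzzy Hough transform to real Canny edge maps and searches over radii as well:

```python
import math

RADIUS = 4
# Offsets of circle points around a center, sampled at 8 angles.
offsets = {(round(RADIUS * math.cos(a)), round(RADIUS * math.sin(a)))
           for a in (k * math.pi / 4 for k in range(8))}
# Synthetic "edge map": one circle of radius 4 centered at (7, 7).
edges = {(7 + dx, 7 + dy) for dx, dy in offsets}

# Each edge pixel votes for every center that could have produced it.
votes = {}
for ex, ey in edges:
    for dx, dy in offsets:
        c = (ex - dx, ey - dy)
        votes[c] = votes.get(c, 0) + 1

center = max(votes, key=votes.get)
print(center, votes[center])  # -> (7, 7) 8
```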

The methodology was evaluated on 120 images from the VARIA dataset. The
method achieved 100% OD localization for both fuzzy convergence and the Hough
transform. Using brute-force search, the segmentation success rates were
92.23% and 93.36% for fuzzy convergence and the Hough transform,
respectively. This OD segmentation approach did not involve pathologic
retinal images affecting the OD, a limitation that should be addressed in
future work in order to develop a robust methodology.

6. ALGORITHM:

7. IMPLEMENTATION:
clc;
clear all;
close all;

[filen,pathn] = uigetfile('*.jpg;*.bmp');
I = imread([pathn,filen]);
% I = imcrop(I);
I = imresize(I,[256,256]);
figure,imshow(I); title('ROI image');
I1 = I(:,:,1);
figure,imshow(I1);
[m,n] = size(I1);
% Threshold the red plane to segment the bright optic disc region
I3 = zeros(m,n);
for i = 1:m
    for j = 1:n
        if I1(i,j) > 230
            I3(i,j) = 1;
        else
            I3(i,j) = 0;
        end
    end
end
figure,imshow(I3);
se = strel('disk',8);
I4 = imdilate(I3,se);
I4 = immultiply(double(I4),double(I1));
figure,imshow(uint8(I4)); title('segmented disc');
R = edge(I4);
[X,Y] = find(R);
% Threshold the green plane to segment the optic cup region
I5 = I(:,:,2);
for i = 1:m
    for j = 1:n
        if I5(i,j) > 230
            I6(i,j) = 1;
        else
            I6(i,j) = 0;
        end
    end
end
figure,imshow(I6,[]); title('segmented cup');
R1 = edge(I6);
[X1,Y1] = find(R1);
[X2,Y2] = pol2cart(X1,Y1);
figure;
k1 = convhull(X1,Y1);
plot(X1,Y1,'r-'); hold on; plot(X1(k1),Y1(k1),'b+'); hold off;
X3 = X1(k1);
Y3 = Y1(k1);
% Smooth disc and cup boundaries by ellipse fitting
ellipse_t = ellipsefit(X,Y);
figure,imshow(I);
Nb = 300;
C = 'b';
h = ellipse(ellipse_t.a,ellipse_t.b,1.6,ellipse_t.X0_in,ellipse_t.Y0_in,C,Nb);
ellipse_t1 = ellipsefit(X3,Y3);
Nb1 = 300;
C1 = 'r';
h1 = ellipse(ellipse_t1.a,ellipse_t1.b,1.6,ellipse_t1.X0_in,ellipse_t1.Y0_in,C1,Nb1);
title('Disc and cup boundary smoothing by ellipse fitting');
CDR = h/h1;

clc;
clear all;

img = imread('multimedia.jpg');
% Detect the disc as a bright circle in the red plane
imgr = img(:,:,1);
imshow(imgr);
imgrb = imbinarize(imgr,.99);
se = strel('disk',2);
imgrbc = imclose(imgrb,se);
[cr,rr] = imfindcircles(imgrbc,[4 100],'ObjectPolarity','bright',...
    'Sensitivity',0.92);
% Detect the cup as a bright circle in the green plane
imgr = img(:,:,2);
imgrb = imbinarize(imgr,.99);
se = strel('disk',2);
imgrbc = imclose(imgrb,se);
[cg,rg] = imfindcircles(imgrbc,[4 100],'ObjectPolarity','bright',...
    'Sensitivity',0.92);
imshow(img);
hr = viscircles(cr,rr);
hb = viscircles(cg,rg);
cdr = rr/rg;
fprintf('\ncdr = %f\n',cdr);

% classifier
clc
clear all
close all
[inp1, pathname] = uigetfile('EYE.jpg');
if isequal(inp1,0)
    disp('User selected Cancel')
else
    disp(['User selected ', fullfile(pathname, inp1)])
end
b = imread(inp1);
imshow(b)
title('input image')
r = b(:,:,1);
g = b(:,:,2);
bb = b(:,:,3);
% Disc segmentation
th = graythresh(g);
ne = g > 130;
binaryImage = ne;
% Get rid of stuff touching the border
binaryImage = imclearborder(binaryImage);
fill = imfill(binaryImage,'holes');
se = strel('disk',6);
dil = imdilate(fill,se);
figure,imshow(dil); title('disk image')


% Cup segmentation
th = graythresh(g);
ne = g > 140;
binaryImage = ne;
% Get rid of stuff touching the border
binaryImage = imclearborder(binaryImage);
cup = imfill(binaryImage,'holes');
se1 = strel('disk',2);
di = imdilate(cup,se1);
cup = di;
figure,imshow(cup)
title('cup image')
% Extract only the two largest blobs.
binaryImage = bwareafilt(binaryImage, 2);
CC = bwconncomp(binaryImage,8);
numPixels = cellfun(@numel,CC.PixelIdxList);
[biggest,idx] = max(numPixels);
BW = binaryImage;
BW(CC.PixelIdxList{idx}) = 0;
CC = bwconncomp(BW);
numPixels = cellfun(@numel,CC.PixelIdxList);
[biggest,idx] = max(numPixels);
BW(CC.PixelIdxList{idx}) = 0;
filteredForeground = BW;
% figure, imshow(BW);
% Fill holes in the blobs to make them solid.
binaryImage = imfill(binaryImage,'holes');
% Display the binary image.
dis(:,:,1) = immultiply(binaryImage,b(:,:,1));
dis(:,:,2) = immultiply(binaryImage,b(:,:,2));
dis(:,:,3) = immultiply(binaryImage,b(:,:,3));
a = dil;
stats = regionprops(double(a),'Centroid',...
    'MajorAxisLength','MinorAxisLength');
centers = stats.Centroid;
diameters = mean([stats.MajorAxisLength stats.MinorAxisLength],2);
radii = diameters/2;
% Plot the circles.
figure,imshow(b); hold on; viscircles(centers,radii); hold off
figure
subplot(3,3,1)
imshow(b)
title('input image')
subplot(3,3,2)
imshow(dil,[])
title('disk segment image')
subplot(3,3,3)
imshow(b); hold on; viscircles(centers,radii); hold off
title('Disc boundary')
subplot(3,3,4)
imshow(di,[])
title('cup image')
subplot(3,3,5)
imshow(b); hold on; viscircles(centers,radii/2); hold off
title('cup boundary')
% Area-based CDR and rim-to-disc ratio
c1 = bwarea(dil);
c2 = bwarea(di);
cdr = c2./c1
rim = (1-di)-(1-dil);
RDR = bwarea(rim)./c2;
nn = sprintf('The CDR is %2f',cdr);
msgbox(nn)
pause(2)
nn1 = sprintf('The RDR is %2f',RDR/2);
msgbox(nn1)
pause(2)
if cdr < 0.45
    msgbox('NO GLAUCOMA')
    msgbox('pls provide expert input')
    x = inputdlg({'EYE PAIN','HEAD ACHE','VISION'}, 'Customer', [1 15; 1 15; 1 15]);
    x1 = inputdlg({'AGE','DIABETICS','GLAUCOMA'}, 'Customer', [1 15; 1 15; 1 15]);
    inp1 = x{1}; inp2 = x{2}; inp3 = x{3};
    inp11 = x1{1}; inp22 = x1{2}; inp33 = x1{3};
    inp111 = input('EYE PAIN Y/N :');
    inp222 = input('HEADACHE Y/N :');
    inp333 = input('DIABETIC Y/N :');
elseif cdr < 0.6 && cdr > 0.45
    msgbox('pls provide expert input')
    x = inputdlg({'EYE PAIN','HEAD ACHE','VISION'}, 'Customer', [1 15; 1 15; 1 15]);
    x1 = inputdlg({'AGE','DIABETICS','GLAUCOMA'}, 'Customer', [1 15; 1 15; 1 15]);
    inp1 = x{1}; inp2 = x{2}; inp3 = x{3};
    inp11 = x1{1}; inp22 = x1{2}; inp33 = x1{3};
    if inp1 == 1 && inp3 == 1
        msgbox('MEDIUM RISK')
    else
        msgbox('LOW RISK')
    end
elseif cdr > 0.6
    msgbox('GO FOR EXPERT :')
    msgbox('pls provide expert input')
    x = inputdlg({'EYE PAIN','HEAD ACHE','VISION'}, 'Customer', [1 15; 1 15; 1 15]);
    x1 = inputdlg({'AGE','DIABETICS','GLAUCOMA'}, 'Customer', [1 15; 1 15; 1 15]);
    inp1 = x{1}; inp2 = x{2}; inp3 = x{3};
    inp11 = x1{1}; inp22 = x1{2}; inp33 = x1{3};
    if inp1 == 1 && inp3 == 1
        msgbox('HIGH RISK')
    else
        msgbox('VERY HIGH RISK')
    end
end
if cdr < 0.45
    msgbox('NO GLAUCOMA DETECTED');
    if out == 1
        myString1 = sprintf('NO DR & DM', 1);
        set(handles.text3, 'String', myString1);
    elseif out == 2
        myString1 = sprintf('DIABETIC RETINOPATHY', 1);
        set(handles.text3, 'String', myString1);
    elseif out == 3
        myString1 = sprintf('DIABETIC MACULOPATHY', 1);
        set(handles.text3, 'String', myString1);
    end
elseif cdr < 0.6 && cdr > 0.45
    msgbox('GLAUCOMA DETECTED');
    if inp1 == 1 && inp3 == 1 && inp22 == 1 && inp33 == 0
        msgbox('MEDIUM RISK')
        msgbox('CHECK UP FOR EVERY 2 MONTHS')
    else
        msgbox('RISK OF GLAUCOMA')
    end
elseif cdr > 0.6
    msgbox('GLAUCOMA DETECTED');
    if inp1 == 1 && inp3 == 1 && inp22 == 0 && inp33 == 0
        msgbox('HIGH RISK')
        msgbox('CHECK UP FOR EVERY 2 MONTHS')
    else
        msgbox('VERY HIGH RISK')
        msgbox('CHECK UP FOR EVERY 2 MONTHS')
    end
end
% Build GLCM texture features for the 14 database images
cd Database
DF = [];
for ii = 1:14
    str = int2str(ii);
    str = strcat(str,'.jpg');
    bb = imread(str);
    glcms = graycomatrix(rgb2gray(bb));
    stats = graycoprops(glcms,{'Contrast','Correlation'});
    stats1 = graycoprops(glcms,{'Energy','Homogeneity'});
    conts = stats.Contrast;
    corre = stats.Correlation;
    en = stats1.Energy;
    ho = stats1.Homogeneity;
    bw1 = rgb2gray(bb);
    me = mean2(bw1);
    st = std2(bw1);
    va = var(var(double(bw1)));
    sk = skewness(skewness(double(bw1)));
    ku = kurtosis(kurtosis(double(bw1)));
    feat = [conts corre en ho];
    DF = [DF; feat];
end
cd ..
% Features of the query image
bbb = b;
glcms = graycomatrix(rgb2gray(bbb));
stats = graycoprops(glcms,{'Contrast','Correlation'});
stats1 = graycoprops(glcms,{'Energy','Homogeneity'});
conts = stats.Contrast;
corre = stats.Correlation;
en = stats1.Energy;
ho = stats1.Homogeneity;
bw1 = rgb2gray(bbb);
me = mean2(bw1);
st = std2(bw1);
va = var(var(double(bw1)));
sk = skewness(skewness(double(bw1)));
ku = kurtosis(kurtosis(double(bw1)));
QF = [conts corre en ho];

train = DF;
xdata = train;
TrainingSet = double(xdata);
GroupTrain = [1;1;1;1;1;1;1;1;1;1;2;2;3;3];
TestSet = double(QF);
u = unique(GroupTrain);
numClasses = length(u);
result = zeros(length(TestSet(:,1)),1);


% One-vs-all SVM: one binary model per class
for k = 1:numClasses
    G1vAll = (GroupTrain == u(k));
    models(k) = svmtrain(TrainingSet,G1vAll);
end
% Classify each test sample with the first model that fires
for j = 1:size(TestSet,1)
    for k = 1:numClasses
        if (svmclassify(models(k),TestSet(j,:)))
            break;
        end
    end
    result(j) = k;
end
out = result;

if out == 1
    msgbox('------NO------- DIABETIC RETINOPATHY & DIABETIC MACULOPATHY')
elseif out == 2
    msgbox('DIABETIC RETINOPATHY')
elseif out == 3
    msgbox('DIABETIC MACULOPATHY')
end
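The one-vs-all control flow used in the classifier (one binary model per class; the first model that fires decides the label) can be sketched in Python. A nearest-centroid scorer stands in for svmtrain/svmclassify here, which is an assumption for illustration; only the loop structure mirrors the MATLAB code:

```python
def train_one_vs_all(X, y, classes):
    """One 'model' per class: here, simply the class mean feature vector."""
    models = {}
    for c in classes:
        members = [x for x, label in zip(X, y) if label == c]
        models[c] = [sum(col) / len(col) for col in zip(*members)]
    return models

def classify(models, classes, x):
    """Try each binary model in turn and break on the first that fires,
    as in the MATLAB loop."""
    def dist(c):
        return sum((mi - xi) ** 2 for mi, xi in zip(models[c], x))
    for k in classes:
        if all(dist(k) <= dist(c) for c in classes):
            break
    return k

# Toy 2-D feature vectors with two classes (hypothetical data).
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [1, 1, 2, 2]
models = train_one_vs_all(X, y, [1, 2])
print(classify(models, [1, 2], [0.15, 0.15]))  # -> 1
```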

8. RESULT:

9. TEST CASES:

Test Case   CDR    Risk Level    Glaucoma             Result   Accuracy
Image 1     0.45   No Risk       No Glaucoma          pass     98%
Image 2     0.50   Medium Risk   Glaucoma Detected    pass     95%
Image 3     0.66   High Risk     Glaucoma Detected    pass     95%
Image 4     0.30   No Risk       No Glaucoma          pass     98%
Image 5     0.61   High Risk     Glaucoma Detected    pass     97%
Image 6     0.42   No Risk       No Glaucoma          pass     96%
Image 7     0.70   High Risk     Glaucoma Detected    pass     95%
Image 8     0.39   No Risk       No Glaucoma          pass     99%
Image 9     0.67   High Risk     Glaucoma Detected    pass     95%
Image 10    0.44   No Risk       No Glaucoma          pass     94%
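The CDR-to-risk mapping exercised by these test cases can be wrapped as a small function. The thresholds (below 0.45 no risk, 0.45 to 0.6 medium, above 0.6 high) are taken from the implementation's if/elseif chain; note that the handling of exactly 0.45 is an assumption here, since the test table labels a CDR of 0.45 as No Risk while the code treats values at or above 0.45 as the medium band:

```python
def risk_level(cdr):
    """Map a cup-to-disc ratio to the risk bands used in the implementation."""
    if cdr < 0.45:
        return "No Risk"
    elif cdr <= 0.6:
        return "Medium Risk"
    return "High Risk"

for cdr in (0.3, 0.5, 0.66):
    print(cdr, risk_level(cdr))
```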

10. CONCLUSION:

Segmentation of the optic disc and optic cup has captured the interest
of many researchers. Although there are many promising approaches, there is
still room for improvement in segmentation techniques. Only a few of the
existing methodologies, whether for optic disc or optic cup segmentation, can
be applied to glaucomatous retinal images. Also, most current methods have
been tested on a limited number of datasets, such as DRIVE and STARE, which
do not provide images with many different characteristics. Furthermore, the
generally low resolution of the images (around 0.3 to 0.4 megapixels) has
made the segmentation process even more challenging. An advanced camera
capable of taking high volumes of high-resolution retinal images would
facilitate glaucoma screening. To achieve good outcomes for images captured
by different systems, robust and fast segmentation methods are required. Most
of the retinal images used to evaluate segmentation methods have been taken
from adults; the retinas of infants, babies, and children have different
morphological characteristics, and this difference must be considered in
segmentation methodologies. The glaucoma screening system complements but
does not replace the work of ophthalmologists and optometrists in diagnosis;
routine examinations have to be conducted in addition to fundus image
analysis. However, the system facilitates diagnosis by calculating the disc
and cup structural parameters and showing greater detail of the ONH, such as
the disc and cup areas, the vertical and horizontal cup-to-disc ratios, and
the cup-to-disc area ratio, and by checking the ISNT arrangement. This shared
output could connect the worlds of consultant ophthalmologists, optometrists,
orthoptists, and engineers.

11. REFERENCES:

1. Devasia, T., Jacob, P., & Thomas, T. (2015). Fuzzy Clustering Based
Glaucoma Detection Using the CDR. Signal & Image Processing: An
International Journal, 6(3), 55-70.

2. Qureshi, I. (2015). Glaucoma Detection in Retinal Images Using Image
Processing Techniques: A Survey. International Journal of Advanced
Networking and Applications, 7(2), 2705.

3. Choudhary, K., & Tiwari, S. (2015). ANN Glaucoma Detection using
Cup-to-Disk Ratio and Neuroretinal Rim. International Journal of Computer
Applications, 111(11).

4. Kavitha, S., & Duraiswamy, K. (2012). An efficient decision support
system for detection of glaucoma in fundus images using ANFIS. International
Journal of Advances in Engineering & Technology, 2(1), 227.

5. Hu, M., Zhu, C., Li, X., & Xu, Y. (2017). Optic cup segmentation from
fundus images for glaucoma diagnosis. Bioengineered, 8(1), 21-28.

6. Septiarini, A., Khairina, D. M., Kridalaksana, A. H., & Hamdani, H.
(2018). Automatic Glaucoma Detection Method Applying a Statistical Approach
to Fundus Images. Healthcare Informatics Research, 24(1), 53-60.

7. Angadi, A. B., Angadi, A. B., & Gull, K. C. (2013). International
Journal of Advanced Research in Computer Science and Software Engineering,
3(6).

8. Jun, T. J., Kim, D., Nguyen, H. M., Kim, D., & Eom, Y. (2018).
2sRanking-CNN: A 2-stage ranking-CNN for diagnosis of glaucoma from fundus
images using CAM-extracted ROI as an intermediate input. arXiv preprint
arXiv:1805.05727.

9. Narasimhan, K., & Vijayarekha, K. (2011). An efficient automated
system for glaucoma detection using fundus image. Journal of Theoretical and
Applied Information Technology, 33(1), 104-110.

10. Duke-Elder, S. (1952). Text-book of Ophthalmology (Vol. 5). Mosby.
