A project report on
BACHELOR OF ENGINEERING
IN
CERTIFICATE
Certified that the project work entitled "Diabetic Retinopathy Detection Using CNN"
is a bona fide work carried out by
In partial fulfilment for the award of the Bachelor of Engineering in Computer Science and Engineering
of the Visvesvaraya Technological University, Belagavi during the year 2019-2020. It is certified that
all corrections/suggestions indicated for the internal assessment have been incorporated in the report
deposited in the department library. This project has been approved as it satisfies the academic
requirements in respect of the project work prescribed for the Bachelor of Engineering degree.
External Viva
ACKNOWLEDGEMENT
We take this opportunity to thank MJF. LN. Leo Muthu, Chairman of Sri Sairam
College of Engineering for providing us with excellent infrastructure that is required for the
development of our project.
We would like to express our gratitude to our project coordinator Mrs. Sowmya A M, Asst.
Professor, for her able assistance, timely suggestions and guidance throughout the
duration of the project.
We would also like to thank all the teaching and non-teaching staff of the Department of
Computer Science and Engineering, and our parents, who have contributed to this project directly
or indirectly.
DECLARATION
B.E. in Computer Science and Engineering, Sri Sairam College of Engineering, hereby
CNN" has been carried out by us under the guidance of Mrs. Shoba V, Assistant Professor,
Bengaluru, for the partial fulfilment of requirements for the award of the degree of Bachelor
Place: Bengaluru
Date:
TEAM MEMBERS
1. Rashmi Neginahal
2. Rohini L
3. Kummetha Saileela
4. Niveditha R
ABSTRACT
Diabetic retinopathy is an eye condition in which damage to the retina arises due to diabetes mellitus. It
is one of the most prominent causes of blindness. Most new cases of diabetic retinopathy can be
prevented with proper treatment of the eyes. The method proposed in this paper aims to detect the various
stages of diabetic retinopathy by using U-Net segmentation with region merging and a Convolutional
Neural Network (CNN) to automatically diagnose and thereby classify high-resolution retinal fundus
images into 5 stages of the disease based on severity. A major difficulty of fundus image classification is
high variability, especially in the case of proliferative diabetic retinopathy, where there is retinal
proliferation of new blood vessels and retinal detachment. Hence, proper analysis of the retinal vessels is
required to obtain a precise result, which can be done by retinal segmentation: the automatic detection of
the boundaries of blood vessels. The features lost during segmentation are recovered during region
merging and passed through the image classifier, achieving an accuracy of up to 93.33%.
TABLE OF CONTENTS
Acknowledgement
Declaration
Abstract
Table of Contents
List of Figures
List of Tables
1 INTRODUCTION
1.1 Problem Definition
1.2 Aim of the Project
1.3 Existing Systems
1.3.1 Advantages
1.3.2 Disadvantages
1.4 Proposed System
1.5 Organization of the Project
2 LITERATURE SURVEY
3 SOFTWARE REQUIREMENT SPECIFICATIONS
3.1 Software Requirements
3.2 Hardware Requirements
4 SYSTEM DESIGN
4.1 System Architecture
4.2 Flow Chart
4.3 Use Case Diagram
4.4 Data Flow Diagram
5 IMPLEMENTATION
5.1 Training Data Set
5.2 Image Input
5.3 Image Preprocessing
5.4 Modules
5.5 Sample Code
6 TESTING
6.1 Testing Types
6.1.1 Unit Testing
6.1.2 Integration Testing
6.1.3 System Testing
6.1.4 Performance Testing
6.1.5 Validation Testing
6.1.6 Acceptance Testing
7 RESULT ANALYSIS
8 CONCLUSION
REFERENCES
Appendix 1: Snapshots
LIST OF FIGURES
CHAPTER 1
INTRODUCTION
Diabetes, or diabetes mellitus, is a metabolic disease in which the body produces an
inadequate amount of insulin, resulting in high blood sugar. In India alone, more than 62 million
people suffer from diabetes. People who have had diabetes for more than 20 years have an 80%
chance of developing diabetic retinopathy.
According to the International Diabetes Federation, the number of adults with diabetes in the
world was estimated at 366 million in 2011, and by 2030 this will have risen to 552 million. The
number of people with type 2 diabetes is increasing in every country; 80% of people with diabetes
live in low- and middle-income countries. India stands first with a 195% increase (18 million in
1995 to a projected 54 million in 2025). Previously, diabetes mellitus (DM) was considered to be
present largely among the urban population in India. Recent studies clearly show an increasing
prevalence in rural areas as well. Indian studies show a 3-fold increase in the presence of diabetes
among the rural population over the last decade or so (2.2% in 1989 to 6.3% in 2003).
In India, studies estimate that type 2 diabetes mellitus and diabetic retinopathy affect nearly 1 in
10 individuals in rural south India above the age of 40 years.
Diabetic retinopathy is an eye condition in which damage arises due to diabetes mellitus. It is
one of the most prominent causes of blindness. The increased blood sugar due to diabetes
damages the tiny blood vessels in the retina, thereby causing diabetic retinopathy. At least 90%
of new cases could be prevented with proper medication as well as frequent monitoring of the
eyes. It primarily affects the retinas of both eyes, which can lead to vision loss if it is not
treated. Poorly controlled blood sugar, high blood pressure, and high cholesterol increase the
risk of developing diabetic retinopathy. Earlier work on the detection of the various stages of
DR was based on explicit feature extraction and classification, using various image processing
techniques and machine learning algorithms respectively. Though high accuracy can be achieved
using these methods, diagnosing diabetic retinopathy based on the explicit extraction of features
is an intricate procedure. Due to the development of computer vision in recent times and the
availability of large datasets, it is now possible to use a deep neural network for the detection
and classification of diabetic retinopathy.
Hence, several methods based on deep neural networks have been proposed for the classification
of diabetic retinopathy by severity. A major difficulty of fundus image classification using a deep
neural network is high variability, especially in the case of retinal proliferation of new blood
vessels and retinal detachment, which lowers the accuracy of the network. The method proposed
in this paper aims to detect the various stages of diabetic retinopathy by using U-Net
segmentation with region merging and a Convolutional Neural Network. Retinal segmentation is
the process of automatic detection of the boundaries of blood vessels within the retina. This
allows the classifier to learn important features such as retinal proliferation and retinal
detachment. The data lost during retinal segmentation is recovered through region merging.
Diabetic retinopathy is a state of eye infirmity in which damage arises due to diabetes mellitus. It is
one of the prominent reasons behind blindness.
This project mainly focuses on the prediction of diabetic retinopathy. A CNN model is trained
on training datasets, and the CNN gives the probability that an eye is affected by diabetic
retinopathy. Our objective is to train the model on these datasets, and our goal is to detect the
severity of diabetic retinopathy accurately.
The model proposed in the paper "Automated detection of diabetic retinopathy using SVM"
by Enrique V. Carrera et al. included 8 features, namely:
• Standard deviation of the red component
They used the "Messidor Dataset", which consists of 300 images divided into three subsets.
They performed classification for NPDR phase detection.
1.3.1 Disadvantages:
• CNNs do not encode the position and orientation of objects.
• They lack the ability to be spatially invariant to the input data.
• A large amount of training data is required.
In recent years, many image processing researchers have engaged in the development of machine
learning, especially deep learning approaches, in fields such as handwritten digit recognition (the
MNIST dataset) and image classification (ImageNet). Our proposed methodology emerged from
these key aspects of disease severity classification from fundus images. In general, classifying
disease severity with the proposed DCNN architecture [add citation] follows these basic steps to
achieve maximum accuracy on the image dataset:
• Data Augmentation
• Pre-processing
• Initialization of Networks
• Training
• Activation function selection
• Regularization
• Ensembling of multiple methods
In our proposed diabetic retinopathy classification model (Fig.), the architecture is condensed and
its building blocks are:
a. Data augmentation
b. Preprocessing
c. Deep Convolutional Neural Network Classification
Preprocessing:
The dataset contained images from patients of varying ethnicity and age groups, with extremely varied
levels of lighting in the fundus photography. This affects the pixel intensity values within the images
and creates unnecessary variation unrelated to the classification levels. To counteract this, colour
normalisation was applied to the images using the OpenCV (http://opencv.org/) package. The result of
this can be seen in Fig 3. The images were also high resolution and therefore of significant memory
size. The dataset was resized to 512x512 pixels, which retained the intricate features we wished to
identify but reduced the dataset to a memory size the NVIDIA K40c could handle.
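The report's actual OpenCV routine is not shown; as a rough, hypothetical sketch of the two preprocessing steps it describes, the following uses per-channel statistics normalisation and nearest-neighbour downsizing. The function name, the normalisation formula, and the interpolation choice are all assumptions, not the report's code (in practice `cv2.resize` with proper interpolation would be used).

```python
import numpy as np

def normalize_and_resize(img, size=512):
    """Illustrative preprocessing sketch: per-channel colour normalisation
    followed by nearest-neighbour resizing to size x size."""
    img = img.astype(np.float64)
    # Colour normalisation: zero mean, unit variance per channel, so that
    # lighting variation between fundus photographs is reduced.
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True) + 1e-8
    img = (img - mean) / std
    # Nearest-neighbour resize: pick the source row/column for each
    # destination pixel (a stand-in for cv2.resize).
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]
```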
Training:
The CNN was initially pre-trained on 10,290 images until it reached a significant level of accuracy.
This was needed to achieve a relatively quick classification result without wasting substantial training
time. After 120 epochs of training on the initial images, the network was then trained on the full
78,000 training images for a further 20 epochs. Neural networks suffer from severe over-fitting,
especially on a dataset such as ours in which the majority of the images belong to one class, that
showing no signs of retinopathy. To solve this issue, we implemented real-time class weights in the
network. For every batch loaded for back-propagation, the class weights were updated with a ratio
respective to how many images in the training batch were classified as having no signs of DR. This
greatly reduced the risk of over-fitting to a certain class.
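The exact weighting ratio is not given in the text; a minimal sketch of one plausible interpretation of per-batch class weighting is below. The function name and the inverse-frequency convention (the same one scikit-learn uses for "balanced" class weights) are assumptions.

```python
from collections import Counter

def batch_class_weights(labels, n_classes=5):
    """Hypothetical per-batch class weighting: each class present in the
    batch receives a weight inversely proportional to its frequency, so
    the dominant 'no DR' class (label 0) does not swamp the gradient."""
    counts = Counter(labels)
    total = len(labels)
    # Classes absent from the batch keep a neutral weight of 1.0.
    return {c: total / (n_classes * counts[c]) if counts[c] else 1.0
            for c in range(n_classes)}
```

Such a dictionary could be recomputed for every mini-batch and fed to the loss function as sample weights.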
The network was trained using stochastic gradient descent with Nesterov momentum. A low learning
rate of 0.0001 was used for 5 epochs to stabilise the weights. This was then increased to 0.0003 for
the substantial 120 epochs of training on the initial 10,290 images, taking the accuracy of the model
to over 60%; this took circa 350 hours of training. The network was then trained on the full training
set of images with a low learning rate. Within a couple of large epochs over the full dataset, the
accuracy of the network had increased to over 70%. The learning rate was then lowered by a factor
of 10 every time the training loss and accuracy saturated.
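The drop-by-10-on-saturation policy above can be sketched as a small scheduler class. This is illustrative only (the class name and patience value are assumptions); in a framework setting, a built-in callback such as Keras's `ReduceLROnPlateau` plays this role.

```python
class ReduceOnPlateau:
    """Divide the learning rate by `1/factor` whenever the monitored
    training loss stops improving for `patience` consecutive epochs."""
    def __init__(self, lr=0.0003, factor=0.1, patience=2):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best = float("inf")  # best loss seen so far
        self.wait = 0             # epochs without improvement

    def step(self, loss):
        if loss < self.best:
            self.best, self.wait = loss, 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor  # e.g. 0.0003 -> 0.00003
                self.wait = 0
        return self.lr
```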
Augmentation:
The original pre-processed images were used for training the network only once. Afterwards, real-time
data augmentation was used throughout training to improve the localisation ability of the network.
During every epoch, each image was randomly augmented with: a random rotation of 0-90 degrees,
random horizontal and vertical flips, and random horizontal and vertical shifts. The result of an image
augmentation can be seen in Fig .
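A simplified stand-in for this augmentation step is sketched below, covering the random flips and shifts only; the function name and shift range are assumptions, and arbitrary-angle rotation would typically come from a library routine such as `scipy.ndimage.rotate`.

```python
import numpy as np

def augment(img, rng):
    """Randomly flip and shift an image array (H, W, C) in place of the
    full rotation/flip/shift pipeline described in the text."""
    if rng.random() < 0.5:
        img = img[:, ::-1]          # random horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]          # random vertical flip
    # Random shift of up to 10% of the image size in each direction,
    # implemented as a wrap-around roll for simplicity.
    h, w = img.shape[:2]
    dy = rng.integers(-h // 10, h // 10 + 1)
    dx = rng.integers(-w // 10, w // 10 + 1)
    return np.roll(img, (dy, dx), axis=(0, 1))
```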
1.4.1 Advantages:
• Very high accuracy in image recognition problems.
• Automatically detects the important features without any human supervision.
• Weight sharing.
• Chapter 1: Introduction describes the problem statement and the existing and proposed
systems.
• Chapter 2: Literature Survey covers all the findings and observations made as a feasibility
study before the actual development of the project. The chapter also explains existing
approaches.
• Chapter 3: Software Requirement Specification lists the hardware and software
specifications for this project. It also describes the overall design constraints, interface,
and performance requirements.
• Chapter 4: System Design covers the high-level software engineering in which the entire
flow of the project is represented by data flow diagrams and sequence diagrams.
• Chapter 5: Implementation explains the coding guidelines and system maintenance for the
project.
• Chapter 6: Testing covers the various kinds of test cases used to demonstrate the validity
of the project.
• Chapter 7: Result Analysis explains in detail the outcome of the tests and compares it with
the results obtained by the existing system.
• Chapter 8: Conclusion and Future Work summarises the related work and future
improvements to the proposed system.
• References: This section lists all the journals and case-study papers referred to during the
development of the project.
• Appendix: It contains snapshots of the graphical output and user interface of the
application, publication details, and sample code.
CHAPTER 2
LITERATURE SURVEY
on the features retrieved from segmented retinal images for detecting diabetic retinopathy
[8]. This work made use of different classification algorithms to make the decision of forecasting
the occurrence of DR (Diabetic Retinopathy). The classification algorithms used in this work
showed good performance. The future work would be focused on developing new techniques of
DR detection to help doctors in the early diagnosis of this severe disease.
Valliappan Raman, et al. (2016) used a CAD (Computer-Aided Detection) system to classify
retinal images for detecting diabetic retinopathy [9].
developing patterns of DR. The recommended system had the ability to detect the different stages
of DR disease precisely. The comparison of classification outcomes generated by the
recommended system was carried out with the outcomes generated by other existing approaches.
This system showed good accuracy in feature extraction, classification and the grading of NPDR
(Non-Proliferative Diabetic Retinopathy) lesions. The future work would be focused on improving
the recommended system in terms of more parameters such as sensitivity, specificity, precision, etc.
Ömer Deperlıoğlu, et al. (2018) implemented image processing and deep learning algorithms on the
fundus images of the retina for detecting DR (Diabetic Retinopathy) disease [10]. This work made
use of a ConvNet (Convolutional Neural Network) for classifying the retinal fundus images. The tested
outcomes showed that the recommended approach achieved accuracy, sensitivity, specificity,
precision, recall and F-score of 97%, 96.67%, 93.33%, 97.78%, 93.33% and 93.33% respectively.
The future work would be focused on the use of more openly available databases to test the
recommended technique. More exudate images could be included in the training set in the near
future.
Yuchen Wu, et al. (2019) presented a transfer learning-based approach for the detection of DR
(diabetic retinopathy) [13]. First, the data was downloaded from the official Kaggle website.
Afterward, the data was improved using different methods. This work made use of some
already-trained models, with the ImageNet dataset used for the pre-training of each NN (Neural
Network). Finally, the images were divided into five different types of DR on the basis of
severity. The tested outcomes showed that the recommended approach achieved a classification
accuracy of 60%. The recommended approach was more robust and simple than the earlier
approaches. The future work would be focused on developing new techniques of DR detection to
help doctors in the early diagnosis of this severe disease.
Toan Bui, et al. (2017) presented an automated segmentation algorithm for detecting cotton wool
spots in retinal images for detecting DR (Diabetic Retinopathy) [14]. This work made
use of an openly available data set DIARETDB1 for evaluating the recommended approach. The
achieved outcomes demonstrated that the recommended technique had the ability to segment
cotton wool spots in an efficient manner. This approach achieved good sensitivity, specificity and
accuracy of 85.9%, 84.4% and 85.54% respectively. The future work would be focused on improving
accuracy of DR detection using various machine learning algorithms and more complicated
attributes.
CHAPTER 3
Python is an interpreted language, which means you just type plain text into an interpreter, and
things happen. There is no compilation step, as in languages such as C or Fortran. To start up
the Python interpreter, just type python from the command line on climate. You'll get a prompt,
and can start typing in Python commands. Try typing in 2.5*3+5. and see what happens. To exit
the Python interpreter, type ctrl-d.
Eventually, you’ll probably want to put your Python programs, or at least your function definitions,
in a file you create and edit with a text editor, and then load it into Python later. This saves you
having to re-type everything every time you run. The standard Unix implementation of Python
provides an integrated development environment called idle, which bundles a Python interpreter
window with a Python-aware text editor. To start up idle, log in to the server from an xterm and
type IDLE. You will get a Python shell window, which is an ordinary Python interpreter except
that it allows some limited editing capabilities. The real power of idle comes from the use of the
integrated editor. To get an editor window for a new file, just choose New Window from the File
menu on the Python Shell window. If you want to work with an existing file instead, just choose
Open from the File menu, and pick the file you want from the resulting
dialog box. You can type text into the editor window, and cut and paste in a fashion that will
probably be familiar to most computer users. You can have as many editor windows open as you
want, and cut and paste between them. When you are done with your changes, select Save or Save
as from the File menu of the editor window, and respond to the resulting dialog box as necessary.
Once you have saved a file, you can run it by selecting Run module from the Run menu. You can
actually use the integrated editor to edit just about any text file, but it has features that make it
especially useful for Python files. For example, it colorizes Python key words, automatically
indents in a sensible way, and provides popup advice windows that help you remember how
various Python functions are used. As an exercise at this point, you should try creating and saving
a short note (e.g. a letter of gratitude to your TA), and then try opening it up again in a new editor
window. To exit from idle just choose Exit from the File menu of any window. An especially
useful feature of the idle editor is that it allows you to execute the Python script you are working
on without leaving the window. To do this, just choose Run Script from the Edit menu of the editor
window. Then the script will run in the Python shell window. When the script is done running, you
can type additional Python commands into the shell window, to check the values of various
quantities and so forth. IDLE has various other powerful features, including debugging support.
You can manage without these, but you should feel free to learn about and experiment with them
as you go along. Once you have written a working Python script and saved it, say, as MyScript.py,
you can run it from the command line by typing python MyScript.py. There is no need to start up
IDLE just to run a script.
Note that many of the Python-based exercises given in the problem sets do not need the data stored
on climate, or the special Python extension modules written for this course. If you have a computer
of your own, you can download your own copy of Python from the web site python.org.
Implementations are available for Macs, Linux and Windows PCs. The MacPython
implementation, available for both OS9 and OSX Macs provides an excellent integrated
development environment that in some ways is superior to IDLE. You can use your own stand-
alone machine for any of the exercises that need only straight Python programming using the
standard modules. You can also use your own machine for any exercises involving reading and
writing of text data files, if you first download any needed data from climate to your own machine.
Also, any Python extension modules that are written as ordinary human-readable Python scripts
(e.g. phys.py ) can be just downloaded and put in your python directory, regardless of what kind of
machine you are using. However, compiled extension modules, with names like veclib.so need to
be compatible with your specific hardware and Python implementation. In the rest of this
workbook, when we say "Start up the Python interpreter," the choice is up to you whether you use
the simple command line interpreter or idle, or perhaps some other integrated Python development
environment you might have (e.g. MacPython). For results that produce graphics, and for the use
of idle, you must be connected to Python in a way that can display graphics on your screen (e.g.
via an xterm). You won’t be reminded of this explicitly in the text. Exercises that don’t produce
graphics can be done over any kind of link. "Write and run" a script could mean that you enter
it using your favorite editor and run it from the command line, or it could mean using idle.
3.2.2 Indentation
Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An
increase in indentation comes after certain statements; a decrease in indentation signifies the end of
the current block.[62] Thus, the program's visual structure accurately represents the program's
semantic structure.[1] This feature is also sometimes termed the off-side rule. The enforcement of
indentation in Python makes the code look neat and clean. This results in Python programs that
look similar and consistent.[63]
3.2.3 Statements and control flow
Python's statements include (among others):
The assignment statement (token '=', the equals sign). This operates differently than in
traditional imperative programming languages, and this fundamental mechanism (including the
nature of Python's version of variables) illuminates many other features of the language.
Assignment in C, e.g., x = 2, translates to "typed variable name x receives a copy of numeric value
2". The (right-hand) value is copied into an allocated storage location for which the (left-
hand) variable name is the symbolic address. The memory allocated to the variable is large enough
(potentially quite large) for the declared type. In the simplest case of Python assignment, using the
same example, x = 2, translates to "(generic) name x receives a reference to a separate,
dynamically allocated object of numeric (int) type of value 2." This is termed binding the name to
the object. Since the name's storage location doesn't contain the indicated value, it is improper to
call it a variable. Names may be subsequently rebound at any time to objects of greatly varying
types, including strings, procedures, complex objects with data and methods, etc. Successive
assignments of a common value to multiple names, e.g., x = 2; y = 2; z = 2 result in allocating
storage to (at most) three names and one numeric object, to which all three names are bound. Since
a name is a generic reference holder it is unreasonable to associate a fixed data type with it.
However at a given time a name will be bound to some object, which will have a type; thus there
is dynamic typing.
▪ The if statement, which conditionally executes a block of code, along with else and elif (a
contraction of else-if).
▪ The for statement, which iterates over an iterable object, capturing each element to a local
variable for use by the attached block.
▪ The while statement, which executes a block of code as long as its condition is true.
▪ The try statement, which allows exceptions raised in its attached code block to be caught
and handled by except clauses; it also ensures that clean-up code in a finally block will
always be run regardless of how the block exits.
▪ The raise statement, used to raise a specified exception or re-raise a caught exception.
▪ The class statement, which executes a block of code and attaches its local namespace to
a class, for use in object-oriented programming.
▪ The def statement, which defines a function or method.
▪ The with statement, from Python 2.5, released in September 2006,[64] which encloses a code
block within a context manager (for example, acquiring a lock before the block of code is
run and releasing the lock afterwards, or opening a file and then closing it),
allowing Resource Acquisition Is Initialization (RAII)-like behavior and replaces a
common try/finally idiom.[65]
▪ The pass statement, which serves as a NOP. It is syntactically needed to create an empty
code block.
▪ The assert statement, used during debugging to check for conditions that ought to apply.
▪ The yield statement, which returns a value from a generator function. From Python
2.5, yield is also an operator. This form is used to implement coroutines.
▪ The import statement, which is used to import modules whose functions or variables can be
used in the current program. There are three ways of using import: import <module name>
[as <alias>] or from <module name> import * or from <module name> import <definition
1> [as <alias 1>], <definition 2> [as <alias 2>], ....
▪ The print statement was changed to the print() function in Python 3.[66]
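The name-binding semantics and a few of the statements listed above can be demonstrated in a short snippet:

```python
# Names are rebound, not typed storage cells (dynamic typing):
x = 2
x = "spam"            # the same name now refers to a str object
assert isinstance(x, str)

y = z = [1, 2]        # two names bound to one list object
y.append(3)
assert z == [1, 2, 3] and y is z

# with: the file object is closed automatically when the block exits
import os, tempfile
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("hello")
assert f.closed
os.remove(path)

# try/except/finally: the exception is caught, clean-up always runs
log = []
try:
    raise ValueError("boom")
except ValueError as e:
    log.append(str(e))
finally:
    log.append("cleanup")
assert log == ["boom", "cleanup"]
```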
Python does not support tail call optimization or first-class continuations, and, according to Guido
van Rossum, it never will.[67][68] However, better support for coroutine-like functionality is
provided in 2.5, by extending Python's generators.[69] Before 2.5, generators were lazy iterators;
information was passed unidirectionally out of the generator. From Python 2.5, it is possible to
pass information back into a generator function, and from Python 3.3, the information can be
passed through multiple stack levels.[70]
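The two-way generator communication described above (values sent back in via send(), introduced in Python 2.5) looks like this:

```python
def running_total():
    # A coroutine-style generator: each send() feeds a value in,
    # and the updated running total is yielded back out.
    total = 0
    while True:
        value = yield total
        total += value

gen = running_total()
assert next(gen) == 0     # prime the generator up to the first yield
assert gen.send(5) == 5   # send 5 in, receive the new total
assert gen.send(3) == 8
```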
3.2.4 Expressions
Some Python expressions are similar to languages such as C and Java, while some are not:
▪ Addition, subtraction, and multiplication are the same, but the behavior of division differs.
There are two types of division in Python: floor (or integer) division // and floating-point
division /.[71] Python also added the ** operator for exponentiation.
▪ From Python 3.5, the new @ infix operator was introduced. It is intended to be used by
libraries such as NumPy for matrix multiplication.[72][73]
▪ From Python 3.8, the := syntax, called the 'walrus operator', was introduced. It assigns
values to variables as part of a larger expression.[74]
▪ In Python, == compares by value, versus Java, which compares numerics by value[75] and
objects by reference.[76] (Value comparisons in Java on objects can be performed with
the equals() method.) Python's is operator may be used to compare object identities
(comparison by reference). In Python, comparisons may be chained, for example a <= b <=
c.
▪ Python uses the words and, or, not for its boolean operators rather than the
symbolic &&, ||, ! used in Java and C.
▪ Python has a type of expression termed a list comprehension. Python 2.4 extended list
comprehensions into a more general expression termed a generator expression.[51]
▪ Anonymous functions are implemented using lambda expressions; however, these are
limited in that the body can only be one expression.
▪ Conditional expressions in Python are written as x if c else y [77] (different in order of
operands from the c ? x : y operator common to many other languages).
▪ Python makes a distinction between lists and tuples. Lists are written as [1, 2, 3], are
mutable, and cannot be used as the keys of dictionaries (dictionary keys must
be immutable in Python). Tuples are written as (1, 2, 3), are immutable and thus can be
used as the keys of dictionaries, provided all elements of the tuple are immutable.
The + operator can be used to concatenate two tuples, which does not directly modify their
contents, but rather produces a new tuple containing the elements of both provided tuples.
Thus, given the variable t initially equal to (1, 2, 3), executing t = t + (4, 5) first evaluates t
+ (4, 5), which yields (1, 2, 3, 4, 5), which is then assigned back to t, thereby effectively
"modifying the contents" of t, while conforming to the immutable nature of tuple objects.
Parentheses are optional for tuples in unambiguous contexts.
▪ Python features sequence unpacking where multiple expressions, each evaluating to
anything that can be assigned to (a variable, a writable property, etc.), are associated in the
identical manner to that forming tuple literals and, as a whole, are put on the left hand side
of the equal sign in an assignment statement. The statement expects an iterable object on
the right hand side of the equal sign that produces the same number of values as the
provided writable expressions when iterated through, and will iterate through it, assigning
each of the produced values to the corresponding expression on the left.
▪ Python has a "string format" operator %. This functions analogously to printf format strings
in C, e.g. "spam=%s eggs=%d" % ("blah", 2) evaluates to "spam=blah eggs=2". In Python
3 and 2.6+, this was supplemented by the format() method of the str class, e.g. "spam={0}
eggs={1}".format("blah", 2). Python 3.6 added "f-strings": blah = "blah"; eggs = 2;
f'spam={blah} eggs={eggs}'.[80]
▪ Python has various kinds of string literals:
▪ Strings delimited by single or double quote marks. Unlike in Unix shells, Perl and Perl-
influenced languages, single quote marks and double quote marks function identically.
Both kinds of string use the backslash (\) as an escape character. String
interpolation became available in Python 3.6 as "formatted string literals".[80]
▪ Triple-quoted strings, which begin and end with a series of three single or double quote
marks. They may span multiple lines and function like here documents in shells, Perl
and Ruby.
▪ Raw string varieties, denoted by prefixing the string literal with an r. Escape sequences are
not interpreted; hence raw strings are useful where literal backslashes are common, such
as regular expressions and Windows-style paths. Compare "@-quoting" in C#.
▪ Python has array index and array slicing expressions on lists, denoted
as a[key], a[start:stop] or a[start:stop:step]. Indexes are zero-based, and negative indexes
are relative to the end. Slices take elements from the start index up to, but not including,
the stop index. The third slice parameter, called step or stride, allows elements to be
skipped and reversed. Slice indexes may be omitted, for example a[:] returns a copy of the
entire list. Each element of a slice is a shallow copy.
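Several of the expression features covered above (value vs. identity comparison, chaining, tuple concatenation, unpacking, string formatting, and slicing) can be verified in one short snippet:

```python
# == compares values; `is` compares object identity
a = [1, 2]
b = [1, 2]
assert a == b and a is not b

# Comparisons chain: each adjacent pair is tested
x = 5
assert 1 <= x <= 10

# Tuple concatenation builds a new object; the old one is untouched
t = (1, 2, 3)
u = t
t = t + (4, 5)
assert t == (1, 2, 3, 4, 5) and u == (1, 2, 3)

# Tuples (immutable) can key a dict; lists cannot
d = {(1, 2): "ok"}
assert d[(1, 2)] == "ok"

# Sequence unpacking, including the classic swap
p, q = 1, 2
p, q = q, p
assert (p, q) == (2, 1)

# % formatting, str.format and f-strings all agree
blah, eggs = "blah", 2
assert "spam=%s eggs=%d" % (blah, eggs) == f"spam={blah} eggs={eggs}" == "spam=blah eggs=2"

# Slicing: stop excluded, negatives count from the end, step skips, [:] copies
s = [0, 1, 2, 3, 4, 5]
assert s[1:4] == [1, 2, 3]
assert s[-2:] == [4, 5]
assert s[::2] == [0, 2, 4]
c = s[:]
c[0] = 99
assert s[0] == 0
```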
In Python, a distinction between expressions and statements is rigidly enforced, in contrast to
languages such as Common Lisp, Scheme, or Ruby. This leads to duplicating some functionality.
For example:
▪ List comprehensions vs. for-loops
▪ Conditional expressions vs. if blocks
▪ The eval() vs. exec() built-in functions (in Python 2, exec is a statement); the former is for
expressions, the latter is for statements.
Statements cannot be a part of an expression, so list and other comprehensions or lambda
expressions, all being expressions, cannot contain statements. A particular case of this is that an
assignment statement such as a = 1 cannot form part of the conditional expression of a conditional
statement. This has the advantage of avoiding a classic C error of mistaking an assignment
operator = for an equality operator == in conditions: if (c = 1) { ... } is syntactically valid (but
probably unintended) C code but if c = 1: ... causes a syntax error in Python.
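Each expression/statement pair listed above can be set side by side; a brief sketch:

```python
# List comprehension (an expression) vs the equivalent for-loop (statements)
squares_expr = [x * x for x in range(5)]
squares_stmt = []
for x in range(5):
    squares_stmt.append(x * x)
assert squares_expr == squares_stmt == [0, 1, 4, 9, 16]

# Conditional expression vs an if block
n = 4
parity_expr = "even" if n % 2 == 0 else "odd"
if n % 2 == 0:
    parity_stmt = "even"
else:
    parity_stmt = "odd"
assert parity_expr == parity_stmt == "even"

# An assignment statement cannot appear where an expression is required:
# `if n = 4:` raises SyntaxError, unlike C's accidental `if (c = 1)`.
```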
3.2.5 TensorFlow
TensorFlow is a free and open-source software library for dataflow and differentiable programming across a
range of tasks. It is a symbolic math library, and is also used for machine learning applications
such as neural networks.[4] It is used for both research and production at Google.
TensorFlow was developed by the Google Brain team for internal Google use. It was released
under the Apache License 2.0 on November 9, 2015.
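A minimal sketch of what dataflow and differentiable programming look like in TensorFlow (assuming TensorFlow 2.x with eager execution; the values are illustrative):

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
identity = tf.constant([[1.0, 0.0],
                        [0.0, 1.0]])

# Record operations so TensorFlow can differentiate through them
with tf.GradientTape() as tape:
    tape.watch(a)
    y = tf.reduce_sum(tf.matmul(a, identity))  # y = sum of a @ I = 10

grad = tape.gradient(y, a)  # d y / d a: a matrix of ones
```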
3.2.6 Tensor processing unit (TPU)
In May 2016, Google announced its Tensor processing unit (TPU), an application-specific
integrated circuit (a hardware chip) built specifically for machine learning and tailored for
TensorFlow.
On March 1, 2018, Google released its Machine Learning Crash Course (MLCC). Originally
designed to help equip Google employees with practical artificial intelligence and machine
learning fundamentals, Google rolled out its free TensorFlow workshops in several cities around
the world before finally releasing the course to the public.[25]
3.2.12 Features:
TensorFlow provides stable Python (for version 3.7 across all platforms)[26] and C APIs;[27] and
without API backwards compatibility guarantee: C++, Go, Java,[28] JavaScript[3] and Swift (early
release).[29][30] Third-party packages are available for C#,[31][32] Haskell,[33] Julia,[34] R,[35] Scala,[36]
Rust,[37] OCaml,[38] and Crystal.[39]
"New language support should be built on top of the C API. However, [..] not all functionality is
available in C yet."[40] Some more functionality is provided by the Python API.
3.2.13 Applications:
Among the applications for which TensorFlow is the foundation are automated image-
captioning software, such as DeepDream.[41] RankBrain now handles a substantial number of
search queries, replacing and supplementing traditional static algorithm-based search results.[4]
CHAPTER 4
SYSTEM DESIGN
Step 1(a): Convolution Operation
The first building block in our plan of attack is the convolution operation. In this step, we will touch
on feature detectors, which basically serve as the neural network's filters. We will also discuss
feature maps, learning the parameters of such maps, how patterns are detected, the layers of
detection, and how the findings are mapped out.
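As an illustration of this step (a simplified sketch with assumed toy values, not the report's trained filters), a feature detector can be slid over an image with plain NumPy to produce a feature map:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode sliding-window filter (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise multiply the window by the kernel and sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image: dark left half, bright right half
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

# A 3x3 vertical-edge detector as the feature detector
edge = np.array([[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]], dtype=float)

fmap = convolve2d(image, edge)  # strong responses along the vertical boundary
```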
Step 1(b): ReLU Layer
The second part of this step will involve the Rectified Linear Unit or ReLU. We will cover ReLU
layers and explore how linearity functions in the context of Convolutional Neural Networks.
This is not strictly necessary for understanding CNNs, but there is no harm in a quick lesson to
improve your skills.
Step 2: Pooling
In this part, we'll cover pooling and will get to understand exactly how it generally works. Our
focus here, however, will be a specific type of pooling: max pooling. We'll cover various other
approaches, though, including mean (or sum) pooling. This part will end with a demonstration
made using a visual interactive tool that will definitely sort the whole concept out for you.
Step 3: Flattening
This will be a brief breakdown of the flattening process and how we move from pooled to flattened
layers when working with Convolutional Neural Networks.
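The three steps above can be sketched as a small Keras model; the layer sizes below are illustrative assumptions, not the report's final architecture:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    # Step 1(a)+(b): convolution with a ReLU non-linearity
    # (32 feature detectors of size 3x3 produce 32 feature maps)
    Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    # Step 2: max pooling halves each spatial dimension
    MaxPooling2D(pool_size=(2, 2)),
    # Step 3: flatten the pooled feature maps into one long vector
    Flatten(),
    # Classifier head: two outputs, DR vs. no-DR
    Dense(2, activation="softmax"),
])
```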
The main use cases of the system are Register, Login and Upload Image.
A use case diagram is a kind of behavioral diagram derived from a use-case analysis. Its purpose is to
present a graphical overview of the functionality provided by a system in terms of actors, their
goals (represented as use cases), and any dependencies between those use cases. A use case
diagram gives us information about how users and use cases are related to the
system. Fig 4 shows the use case diagram for the proposed system. A use case describes a
function provided by the system that yields a visible result for an actor. An actor
describes any entity that interacts with the system. The actors lie outside the boundary of the
system, while the use cases lie inside the boundary of the system.
CHAPTER 5
IMPLEMENTATION
MODULE 1. INPUT IMAGES AND TRAINING DATASET: First we train the model
using the training dataset; then we give any retinal image as input.
MODULE 2. PRE-PROCESSING: The input retinal image is filtered to find whether the
given retina image has any defects, based on features such as colour, and unwanted
artifacts are removed through noise reduction.
MODULE 3. SEGMENTATION: Segmentation is done based on binarization, median
filtering and morphological hole filling.
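A minimal sketch of Module 3's pipeline (binarization, median filtering, morphological hole filling) using SciPy's ndimage module; the threshold, filter size and toy image are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def segment(gray, threshold=0.5):
    """Binarization -> median filtering -> morphological hole filling."""
    binary = gray > threshold                       # binarization
    smooth = ndimage.median_filter(binary, size=3)  # suppress salt-and-pepper noise
    filled = ndimage.binary_fill_holes(smooth)      # fill enclosed holes
    return filled

# Toy "retina": a bright region with an internal hole and a speck of noise
gray = np.zeros((11, 11))
gray[2:9, 2:9] = 1.0   # foreground block
gray[4:7, 4:7] = 0.0   # hole inside it
gray[0, 5] = 1.0       # isolated noise pixel
mask = segment(gray)   # noise removed, hole filled
```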
SAMPLE CODE
import base64
import io
import sqlite3

import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import img_to_array
from flask import Flask, render_template, url_for, request, flash, redirect, session, jsonify
from flask_cors import CORS

conn = sqlite3.connect('diabetic.db')
print("Opened database successfully")
cur = conn.cursor()
try:
    cur.execute('''CREATE TABLE person (
        Firstname varchar(20) DEFAULT NULL,
        Lastname varchar(50) DEFAULT NULL,
        email varchar(50) DEFAULT NULL,
        username varchar(255) DEFAULT NULL,
        password varchar(20) DEFAULT NULL,
        checkbox varchar(20) DEFAULT NULL
    )''')
except sqlite3.OperationalError:
    pass  # table already exists

app = Flask(__name__)
CORS(app)

def preprocess_image(image, target_size):
    # Resize the retinal image and add a batch axis for the CNN
    if image.mode != "RGB":
        image = image.convert("RGB")
    image = image.resize(target_size)
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    return image

def get_model():
    # Load the trained diabetic retinopathy CNN from disk
    global model
    model = load_model('diabetic_retinopathy.h5')

print(" * Loading keras model .... ")
get_model()

@app.route('/')
@app.route('/home')
def home():
    return render_template('index.html')

@app.route('/logout')
def logout():
    return render_template('index.html')

@app.route('/signin')
def signin():
    return render_template('signin.html')

# user_register_page
@app.route('/signup')
def signup():
    return render_template('signup.html')

@app.route("/predict", methods=["POST"])
def predict():
    message = request.get_json(force=True)
    encoded = message['image']
    decoded = base64.b64decode(encoded)
    image = Image.open(io.BytesIO(decoded))
    processed_image = preprocess_image(image, target_size=(224, 224))
    prediction = model.predict(processed_image).tolist()
    response = {
        'prediction': {
            "dr": prediction[0][0],
            "nodr": prediction[0][1]
        }
    }
    return jsonify(response)

if __name__ == '__main__':
    app.run(debug=True)
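A sketch of how a client could build the request for the /predict route above; the host, port and file name are assumptions:

```python
import base64
import json

# In practice, read the retinal photograph's bytes from disk,
# e.g. open("retina.png", "rb").read(); a stand-in byte string here:
image_bytes = b"\x89PNG-stand-in"

# Base64-encode into the JSON shape /predict expects: {"image": "..."}
encoded = base64.b64encode(image_bytes).decode("utf-8")
payload = json.dumps({"image": encoded})

# The server decodes it exactly the way predict() does
assert base64.b64decode(json.loads(payload)["image"]) == image_bytes

# To call the running app, e.g. with the requests library:
# import requests
# r = requests.post("http://127.0.0.1:5000/predict", data=payload,
#                   headers={"Content-Type": "application/json"})
# print(r.json()["prediction"])
```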
CHAPTER 6
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality of
components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising
software with the intent of ensuring that the software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various types of tests. Each test
type addresses a specific testing requirement.
TYPES OF TESTS
6.1 Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application. It is done after the completion of an individual unit, before integration. This is
structural testing that relies on knowledge of the code's construction and is invasive. Unit tests perform
basic tests at component level and test a specific business process, application, and/or system
configuration. Unit tests ensure that each unique path of a business process performs accurately to
the documented specifications and contains clearly defined inputs and expected results.
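As an illustration, a minimal unit test for this project's pre-processing step might look as follows; the preprocess helper shown is a simplified stand-in, not the project's exact code:

```python
import unittest

import numpy as np
from PIL import Image

def preprocess(image, target_size=(224, 224)):
    """Simplified stand-in: resize to the CNN input size, add a batch axis."""
    image = image.convert("RGB").resize(target_size)
    return np.expand_dims(np.asarray(image, dtype="float32"), axis=0)

class PreprocessTest(unittest.TestCase):
    def test_output_shape(self):
        # A grayscale stand-in for a fundus photograph
        img = Image.new("L", (600, 400))
        batch = preprocess(img)
        # One image, 224x224 pixels, 3 colour channels
        self.assertEqual(batch.shape, (1, 224, 224, 3))
```

Running `python -m unittest` on the module executes the test case.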
6.2 Integration testing
Integration tests are designed to test integrated software components to determine if they actually
run as one program. Testing is event driven and is more concerned with the basic outcome of
screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components is
correct and consistent. Integration testing is specifically aimed at exposing the problems that arise
from the combination of components.
6.3 Functional test
Functional tests provide systematic demonstrations that functions tested are available as specified
by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions and
flows, emphasizing pre-driven process links and integration points.
White Box Testing
White Box Testing is testing in which the software tester has knowledge of the inner
workings, structure and language of the software, or at least its purpose. It is used to
test areas that cannot be reached from a black box level.
Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner workings, structure or
language of the module being tested. Black box tests, as most other kinds of tests, must be written
from a definitive source document, such as a specification or requirements document. It is testing
in which the software under test is treated as a black box: you cannot "see" into it. The test
provides inputs and responds to outputs without considering how the software works.
Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test phase of the software
lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct
phases.
6.5 Test strategy and approach
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
▪ All field entries must work properly.
▪ Pages must be activated from the identified link.
▪ The entry screen, messages and responses must not be delayed.
Features to be tested
▪ Verify that the entries are of the correct format
▪ No duplicate entries should be allowed
▪ All links should take the user to the correct page.
Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company level –
interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Test Environment: Testing is an integral part of software development. The testing process certifies
whether the developed product complies with the standards it was designed to meet. The testing
process involves building test cases against which the product has to be tested. In our project we
aim to detect diabetic retinopathy from retinal fundus images and classify each image by the
presence and stage of the disease.
Dept. of CSE, B.E, SSCE
2020-2021
CHAPTER 7
RESULT AND ANALYSIS
7 Tested Results:
Fig 7.1: Test DR
CHAPTER 8
CONCLUSION AND FUTURE WORKS:
Most other existing supervised algorithms require additional pre-processing or post-processing
stages to identify the different stages of diabetic retinopathy, and many of them mandate manual
feature extraction stages to classify the fundus images. In our proposed solution, the deep
convolutional neural network (DCNN) is an end-to-end approach covering all stages of diabetic
retinopathy; no manual feature extraction stage is needed. Our network architecture with dropout
techniques yielded significant classification accuracy, and the true positive rate (recall) also
improved. The architecture has some limitations: an additional augmentation stage is needed for
images taken from different cameras with different fields of view, and the network is complex and
computation-intensive, requiring a high-end graphics processing unit to process high-resolution
images when more layers are stacked.
CHAPTER 9
REFERENCES
[1] S. Wang, et al., "Hierarchical retinal blood vessel segmentation based on feature and ensemble
learning", Neurocomputing (2014), http://dx.doi.org/10.1016/j.neucom.2014.07.059.
[2] Mrinal Haloi, "Improved Microaneurysm detection using Deep Neural Networks", Cornell
University Library (2015), arXiv:1505.04424.
[7] R. Priya, P. Aruna, "SVM and Neural Network based Diagnosis of Diabetic Retinopathy",
International Journal of Computer Applications (0975-8887), Volume 41, No. 1 (March 2012).
[8] S. Giraddi, J. Pujari, S. Seeri, "Identifying Abnormalities in the Retinal Images using SVM
Classifiers", International Journal of Computer Applications (0975-8887), Volume 111,
No. 6 (2015).
[10] G. Lim, M. L. Lee, W. Hsu, "Transformed Representations for Convolutional Neural Networks
in Diabetic Retinopathy Screening", Modern Artificial Intelligence for Health Analytics, Papers
from the AAAI (2014).
[13] Xiang Chen et al., "A novel method for automatic hard exudates detection in color retinal
images", Proceedings of the 2012 International Conference on Machine Learning and Cybernetics,
Xi'an (2012).
[14] Vesna Zeljkovic et al., "Classification Algorithm of Retina Images of Diabetic Patients Based
on Exudates Detection", 978-1-4673-2362-8/12, IEEE (2012).
[15] Michael Nielsen, neuralnetworksanddeeplearning.com/chap6.html
[17] DRIVE dataset [online]: J. J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, B. van
Ginneken, "Ridge based vessel segmentation in color images of the retina", IEEE Transactions on
Medical Imaging, 2004, vol. 23, pp. 501-509.
APPENDIX-1:
SNAPSHOTS