
COTTON LEAF RECOGNITION AND DISEASE

PREDICTION WEB APP
Yogeshwar Khandagre
Yogkhandagre444@gmail.com
Mrs. Sherin Eliyas
sherine@hindustanuniv.ac.in
Department of Computer Applications
Hindustan Institute of Technology and Science, Padur, Chennai
Abstract—Agriculture is a vital part of any country's economy, and India is considered an agro-based country: it ranks second in agricultural output worldwide. Agriculture supports various industries such as the food and textile industries, and among textile crops, cotton holds a special position. Cotton is an important cash crop in India, yet a variety of cotton plant diseases have hurt the performance of cotton plants over the past few years. Diseases are often diagnosed by farmers too late, and they are difficult and expensive to control, reducing cotton productivity. For good yield and profit, intensive care of the crop is necessary. The aim of this system is to develop a web-based application that recognizes and predicts cotton leaf diseases. Our goal is to build a website where a user can upload an image of any cotton plant leaf and be told whether the plant is diseased, healthy, or burned by fertilizer. The cotton farmer uploads an image; image processing produces a digitized color image of the diseased leaf, and a convolutional neural network (CNN) is then applied to predict the cotton leaf disease. Many researchers use machine learning for early detection of cotton plant disease. A CNN is a deep feed-forward artificial neural network, and this algorithm is somewhat faster than other classification algorithms. In this paper, a CNN is used to classify the diseased portion of cotton plant images.

Keywords—agriculture production with AI, AI-based website, deep convolutional neural network, artificial neural network

1. Introduction

Agriculture has played an important role in the financial development of India and most other agricultural countries. India ranks second in agricultural output worldwide, and most land in India is used for agriculture, which is also the backbone of Indian farmers' financial condition. The most important requirement is recognizing infection early and planning for it. Different diseases affecting the plant obstruct the growth of crops in the field and cause huge losses in the quality and quantity of cotton yield. The existing method of plant disease detection is simple naked-eye observation, which requires more manual labor, properly equipped laboratories, and expensive devices, and some methods are costly. Nowadays, image processing is widely used for detecting such diseases and pests such as germs, fungi, and other microorganisms.

This paper is based on an artificial intelligence system, specifically deep learning. Many farmers cannot solve their farms' problems, complex or even small, due to a lack of formal training, so as AI enthusiasts we decided to address this problem with the latest technology. The system implements a convolutional neural network to detect cotton leaf diseases; a CNN is well suited to discovering infections caused by bacteria and environmental effects. Early-stage disease detection is a challenging task for farmers, where physical presence is a must, and disease detection and recognition are critical. There are various image-classification algorithms for disease recognition, such as KNN, SVM, random forest, artificial neural networks, and CNN. The goal of this application is to develop a system that recognizes crop diseases: a website where a user can upload an image of any cotton plant leaf and be told which disease is present and how to treat it.

A. Purpose of Proposed System:
1. Developing a user-friendly web-based system for farmers.
2. Recognizing cotton leaf diseases accurately from input images.
3. Providing corrective and preventive measures for the detected diseases.
4. Providing disease information.

B. Cotton leaf diseases focused:
1. Bacterial blight.

2. Literature Review

In recent years, a limited number of works have been done on cotton disease classification and cotton growth-stage identification using deep learning models.

According to Mahesh Shivaji Dange et al. [1], infected leaf characteristics are compared with normal leaf texture features. The proposed processing has four main steps: first, the RGB color image is created; then the RGB image is converted to HSI, because RGB serves color reproduction while HSI serves color description. Next, the green pixels are masked and removed using a chosen threshold, and the image is segmented and the useful parts extracted; finally, texture statistics are computed using spatial gray-level dependence matrices.
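The green-pixel masking step in [1] can be sketched as follows. The dominance test and the threshold value here are illustrative assumptions, since the source only states that green pixels are hidden and removed using a certain limit:

```python
import numpy as np

def mask_green_pixels(rgb, threshold=20):
    """Zero out pixels whose green channel dominates red and blue by
    more than `threshold` (an illustrative limit), so the remaining
    non-green regions can be segmented as candidate lesions."""
    rgb = rgb.astype(np.int16)                 # avoid uint8 wraparound
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    green_dominant = (g - r > threshold) & (g - b > threshold)
    masked = rgb.copy()
    masked[green_dominant] = 0                 # hide healthy green tissue
    return masked.astype(np.uint8), green_dominant
```

The mask marks healthy tissue; everything left non-zero is passed on to segmentation and texture analysis.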
According to Tushar J. Haware (2015) [2], an algorithm that works well with high-precision mixing is used to detect plant diseases. K-means clustering is used for partitioning, together with computational methods. Red-colored pixels, red-blue zero pixels, and pixels on the boundary of the infected cluster are removed.

According to [3], author Mrunalini presents a technique to classify and identify the different diseases affecting plants. In the Indian economy, a machine-learning-based recognition system will prove very useful, as it saves effort, money, and time. The approach used for feature-set extraction is the color co-occurrence method, and neural networks are used for automatic detection of diseases in leaves. The proposed approach can significantly support accurate detection of leaf diseases, and it also seems important for stem and root diseases, while putting little effort into computation.

According to S. Batmavady and S. Samundeeswari [4], who focused on applying image processing and neural network techniques to identify diseased cotton leaves, the leaf images are taken from the PlantVillage dataset. Segmentation is done using FCM techniques, and classification is done by an RBF neural network.

According to Namrata R. Bhimte and V. R. Thool [5], image processing approaches were established for automatic detection of cotton leaf spot disease. Classification and extraction of features such as color and texture were done using an SVM classifier after various preprocessing steps, and color-based segmentation was used to obtain the segmented region of leaf spot disease.

According to Ranjith et al. [6], a smart irrigation system has been proposed that can control irrigation automatically through an Android mobile application. In addition, photos of plant leaves are captured and sent to a cloud server, where they are processed and compared with diseased-leaf images in the cloud database. Based on the comparison, a list of suspected plant diseases is given to the user via the Android mobile application.

According to [7], all the hybrid features of a leaf (color, texture, and geometric shape) are incorporated by the respective methodology. The content comes from PlantVillage, an online platform dedicated to crop health and crop diseases (available at www.plantvillage.org), written by plant pathology experts, with information sourced from the scientific literature.

Another approach, based on leaf images and using ANNs for automatic detection and classification of plant diseases, was applied in conjunction with k-means as a clustering procedure, as proposed by Z. H. Zhou and S. F. Chen in "Neural network ensemble" [8]. The ANN consisted of 10 hidden layers, and the number of outputs was 6, corresponding to the classes representing five diseases plus the case of a healthy leaf. On average, the classification accuracy of this approach was 94.67%.

According to S. Vijay et al. [9], who analyzed cotton leaf disease, image enhancement and k-means clustering techniques were applied; the image processing technique detects leaf disease easily and accurately.

3. METHODOLOGY

In this procedure, a model for cotton plant disease recognition is developed using a deep CNN; the system classifies leaf images with the CNN image-classification algorithm. A convolutional neural network (ConvNet/CNN) is a deep learning algorithm that takes an input image, assigns importance (learnable weights and biases) to various aspects or objects in the image, and differentiates one from another. The preprocessing required by a ConvNet is much lower than for other classification algorithms: while filters are hand-engineered in primitive methods, ConvNets learn these filters and characteristics with enough training. The whole process of training a model for plant disease detection using a CNN is described in detail in the sections below.

A. Project Description:

The model for plant disease recognition uses deep learning with a CNN. The complete process is divided into stages; appropriate datasets are required at all stages of object-recognition research, from the training phase through evaluating the performance of recognition algorithms. Images of infected plants are captured by digital camera and processed using region-growing and image-segmentation techniques to detect the infected parts of the plants.

a. Image Acquisition:

In image processing, image acquisition is defined as the action of retrieving an image from some source. In this phase, a raw image is taken as input from the user and converted into an equivalent grayscale image. The image is also resized to 128 x 128.
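A minimal sketch of this acquisition-phase preprocessing (grayscale conversion and 128 x 128 resize), assuming a NumPy RGB array as input. The luminance weights are the standard ITU-R BT.601 coefficients and the resampling is nearest-neighbour; the paper does not specify either choice:

```python
import numpy as np

def preprocess(rgb, size=128):
    """Convert an (H, W, 3) RGB image to grayscale and resize it to
    size x size using nearest-neighbour sampling."""
    # Weighted sum of channels gives the grayscale luminance.
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    h, w = gray.shape
    # Map each output pixel back to the nearest source pixel.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return gray[rows][:, cols]
```

In practice a library resizer (e.g. Pillow or OpenCV) with interpolation would be used instead; this sketch only shows the shape contract expected by the network input.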
b. Convolutional Layers:

Convolutional layers apply a convolution operation to the input and pass the result to the next layer. A convolution converts all the pixels in its receptive field into a single value, and the final output of the convolutional layer is a vector. After alteration of the captured image, the processed image passes through three different hidden layers, in which feature extraction, pooling, and flattening are performed.

c. Disease Prediction:

After applying the CNN, a softmax layer predicts the disease for the leaf image as the class with the highest probability of occurrence.

B. Implementation:

Real-time images are collected using a web camera and stored in a database. The stored images are split into test images and training images, and the upload-folder option in Google Drive is used to upload the test and training datasets. The implementation steps in Google Colab (Google Colaboratory) are described below.

1. Dataset collection.
2. Uploading the dataset into Drive.
3. Accessing Colab.
4. Mounting Drive in Google Colab.
5. Coding implementation, testing, and training.

1.1 Dataset Collection:
Real-time images are collected using a web camera and stored in a database. The stored images are split into test images and training images.

1.2 Uploading the Dataset into Drive:
In Google Drive, the upload-folder option is used to upload the test and training datasets, which are named "test cotton" and "train cotton".

1.3 Accessing Colab:
* First, sign in to a Google account.
* Proceed to the Google Colab welcome page.
* Click the new Python 3 notebook option to start a fresh session.
* From the Runtime menu, select GPU.
* Configure the notebook instance to download the necessary packages.

1.4 Mounting Drive in Google Colab:
Click the mount-drive option; an authorization code is generated and entered in Google Drive, which generates the image folder path.

1.5 Code Implementation, Testing, and Training:
Python code implements different CNN models such as VGG16 and ResNet50. These CNN models are trained and the images tested to obtain better accuracy and the predicted disease. A Flask server is used to deploy the web application on the local computer; the server handles the requests and responses for the system.

C. Algorithm:

CNN itself is a deep-learning technique for classifying images, in which a single neural network is applied to the full image.

i. Accepts a volume of size W1 x H1 x D1.
ii. Requires four hyperparameters:
 * number of filters K
 * their spatial extent F
 * the stride S
 * the amount of zero padding P
iii. Produces a volume of size W2 x H2 x D2, where:
 a. W2 = (W1 - F + 2P)/S + 1
 b. H2 = (H1 - F + 2P)/S + 1 (i.e., width and height are computed equally by symmetry)
 c. D2 = K.
iv. With parameter sharing, it introduces F*F*D1 weights per filter, for a total of (F*F*D1)*K weights and K biases. In the output volume, the d-th depth slice (of size W2 x H2) is the result of performing a valid convolution of the d-th filter over the input volume with a stride of S, then offset by the d-th bias.
v. A common setting of the hyperparameters is F = 3, S = 1, P = 1.
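The output-volume formulas above can be checked numerically with a small helper:

```python
def conv_output_volume(W1, H1, D1, K, F, S, P):
    """Output size and parameter count of a convolutional layer,
    following the formulas above: W2 = (W1 - F + 2P)/S + 1, H2
    likewise, D2 = K, with (F*F*D1)*K weights plus K biases under
    parameter sharing."""
    W2 = (W1 - F + 2 * P) // S + 1
    H2 = (H1 - F + 2 * P) // S + 1
    D2 = K
    weights = F * F * D1 * K
    biases = K
    return (W2, H2, D2), weights, biases

# With the common setting F=3, S=1, P=1, spatial size is preserved:
# a 128 x 128 x 3 input with K=32 filters yields a 128 x 128 x 32 output,
# using 3*3*3*32 = 864 weights and 32 biases.
```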
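The softmax-based disease prediction described in section c can be sketched as follows; the class labels here are illustrative placeholders, not the system's actual label set:

```python
import math

# Illustrative label set; the paper focuses on bacterial blight.
CLASSES = ["bacterial_blight", "healthy", "fertilizer_burn"]

def softmax(logits):
    """Convert raw CNN output scores into probabilities."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_disease(logits):
    """Return the class with the highest softmax probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[best], probs[best]
```

For example, `predict_disease([2.0, 0.5, 0.1])` selects the first class, since its softmax probability is the highest of the three.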
4. RESULTS AND DISCUSSION

The results were obtained by training with the whole database, containing both original and augmented images. Since convolutional networks are known to learn features better when trained on larger datasets, results obtained with only the original images are not explored. After fine-tuning the parameters of the network, an overall accuracy of 96.3% was achieved after the 100th training iteration (95.8% without fine-tuning). High accuracy with greatly reduced loss was already reached by the 30th training iteration, and after the 60th iteration, accuracy and loss settled into a high-accuracy balance.

The trained model was tested on each class individually, with the test performed on every image from the validation set. The results are displayed to emphasize how many images out of the total for each class are accurately predicted. Figure 8 illustrates the trained model's prediction results separated by class; the class numbers follow the enumeration from the table.

Fig. 2: Model accuracy
Fig. 3: Model loss

5. SYSTEM ARCHITECTURE

6. SOFTWARE INTERFACE DESIGN

7. SOFTWARE WORKING SCREENSHOT

a. Diseased cotton plant output
b. Healthy cotton plant output

8. Conclusions

There are many methods for automated, computer-vision-based plant disease detection and classification, but this research field is still lacking. In addition, there are still no commercial solutions on the market, except those dealing with plant-species recognition based on leaf images. In this system, a new approach using a deep learning method was explored to automatically classify and detect plant diseases from leaf images. A new plant disease image database was created, containing more than 1,100 original images taken from available Internet sources and extended to more than 30,000 using appropriate transformations. The experimental results achieved precision between 91% and 98% for the separate class tests, and the final overall accuracy of the trained model was 96.3%. Fine-tuning did not show significant changes in overall accuracy, but the augmentation process had a greater influence on achieving respectable results.

9. References

1. P. Rothe and R. Kshirsagar, "Adaptive neuro-fuzzy inference system for recognition of cotton leaf diseases," in Computational Intelligence on Power, Energy and Controls with their Impact on Humanity (CIPECH), IEEE, 2014, pp. 12-17.
2. A.-K. Mahlein, T. Rumpf, P. Welke et al., "Development of spectral indices for detecting and identifying plant diseases," Remote Sensing of Environment, vol. 128, pp. 21-30, 2013.
3. A.-K. Mahlein, T. Rumpf, P. Welke et al., "Development of spectral indices for detecting and identifying plant diseases," Remote Sensing of Environment, vol. 128, pp. 21-30, 2013.
4. S. Raj Kumar and S. Sowrirajan, "Automatic leaf disease detection and classification using hybrid features and supervised classifier," International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, vol. 5, issue 6, 2016.
5. M. Francis and C. Deisy, "Disease detection and classification in agricultural plants using convolutional neural networks: a visual understanding," 2019 6th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 2019.
6. Chaudhari Vaishnavi, Gondkar Sayali, and Shivarkar Pooja, "Survey on detection and prediction of leaf diseases using CNN," Indian Journal of Automation and Artificial Intelligence, vol. 6(10), 2019.
7. Patil Tushar, Palambe Shubham, Tawale Gauri, Patil Rajashree, and Sanchika Bajpai, "Cotton leaf disease identification using pattern recognition techniques," Multidisciplinary Journal of Research in Engineering and Technology, vol. 5, issue 1, pp. 1-7, 2018.
8. W. Xiuqing, W. Haiyan, and Y. Shifeng, "Plant disease detection based on near-field acoustic holography," Transactions of the Chinese Society for Agricultural Machinery, vol. 2, article 43, 2014.
9. R. Sarkar and A. Pramanik, "Segmentation of plant disease spots using automatic SRG algorithm: a look up table approach," 2015 International Conference on Advances in Computer Engineering and Applications (ICACEA), IMS Engineering College, Ghaziabad, India.
10. K. Jagan Mohan and M. Balasubramanian, "Detection and recognition of diseases from paddy plant leaf images," International Journal of Computer Applications (0975-8887), vol. 144, no. 12, 2016.
