
INTERNSHIP REPORT

ON
Plant Disease Detection
Submitted in complete fulfillment of the requirements for the

Machine Learning & Artificial Intelligence Internship

SUBMITTED BY

Ankush Kumar, 6395687260, B.Sc. CS 2018, Chandigarh University

Karthika Bhat, 6363194491, Electronics and Communication Engineering, Sahyadri College of Engineering and Management (VTU)

Kavya Nagesh Naik, 9845284312, Electronics and Communication Engineering, Sahyadri College of Engineering and Management (VTU)

Sonali MA, 9901435893, Computer Science and Engineering, Bangalore Institute of Technology

UNDER THE GUIDANCE OF

Mr. Shashank,
Technical Lead,
Quant Masters Pvt. Ltd.,

#812, 6th cross 3rd main, Rajajinagar, Bengaluru - 560021


TABLE OF CONTENTS

SL NO. CHAPTERS

1 Introduction
1.1 Background
1.2 Motivation
1.3 Objectives
1.4 Future Scope
2 Related Work
3 System Architecture
3.1 System Overview
3.2 Code Implementation
3.3 Model Creation
3.4 Working of Model
4 Results
5 Conclusion and Future Work
Bibliography
LIST OF FIGURES

SL No. Figure No. Title of Figure

1 1 Diseased Plant Leaves
2 3.1 System Architecture of Image Processing to Detect Plant Disease
3 3.2 Plant Disease Image Dataset Processing
ABSTRACT
The proposed system helps in the identification of plant diseases and provides remedies that can be used as a defense mechanism against the disease. The dataset obtained from the internet is properly separated: the different plant species are identified and relabelled to form a proper database, and a test dataset, consisting mainly of images of different plant diseases, is then prepared and used for checking the accuracy and confidence level of the project. Using the training data we train our classifier, and the output is then predicted with the best possible accuracy. We use a convolutional neural network (CNN), which consists of different layers that are used for prediction. A prototype drone model is also designed; it can be used for live surveillance of large farm fields, with a high-resolution camera attached to it that captures images of the plants, which act as input for the software, based on which the software tells us whether the plant is healthy or not. With our code and trained model we have achieved an accuracy level of 78%.
CHAPTER 1

INTRODUCTION

Today's improved technologies have enabled people to provide the adequate nutrition and food needed
to meet the needs of the world's growing population. In India specifically, about 70% of the population is
directly or indirectly dependent on agriculture, which remains the largest sector in the country. Looking at
the broader picture, research suggests that by 2050 overall crop production must increase by at least half,
putting even more pressure on an already strained agricultural sector. The majority of farmers are poor and
have little formal training in cultivation, which can lead to losses of more than half the yield due to pests and
diseases of plants. Vegetables and fruits are the principal agricultural products. Heavy dependence on synthetic
pesticides results in a high chemical content building up in the soil, air, water, and even in our bodies,
adversely affecting the environment.

At present, the conventional technique of visual inspection by humans makes it difficult to characterize
plant diseases at scale. Advances in computer vision models offer fast, standardized, and accurate answers to these
problems. Trained classifiers can also be deployed as mobile applications; all that is needed is an internet
connection and a camera-equipped cellphone. The well-known applications "iNaturalist" and "PlantSnap" show how this is possible. Both apps excel at
sharing expertise with users as well as building interactive online social communities.

Figure 1: Diseased Plant Leaves

In recent years, deep learning has achieved great performance in various fields such as image
recognition, speech recognition, and natural language processing. The use of convolutional neural
networks for the problem of plant disease detection has yielded very good results, and the convolutional neural
network is recognized as one of the best methods for object recognition. We consider neural architectures such as the
Faster Region-Based Convolutional Neural Network (Faster R-CNN), the Region-based Fully Convolutional
Network (R-FCN), and the Single Shot MultiBox Detector (SSD). Each of these architectures can
be merged with any feature extractor depending on the application. Pre-processing of data is very
important for models to perform accurately. Many infections (viral or fungal) can be hard to distinguish,
as they often share overlapping symptoms.

1.1 Background

Existing methods are too specific. The ideal method would be able to identify any kind of plant; evidently, this
is unfeasible at the current technological level. However, many of the methods that have been
proposed not only deal with a single plant species, but also require those plants to be at a certain
growth stage for the algorithm to be effective. That is acceptable if the plant is in that specific
stage, but it is very limiting otherwise. Many researchers do not state this kind of information
explicitly, but if their training and test sets include only images of a certain growth stage, which is often
the case, the validity of the results cannot be extended to other stages.

In recent decades, digital image processing, image analysis and machine vision have developed sharply,
and they have become a very important part of artificial intelligence and of the interface between
human and machine, in both grounded theory and applied technology. These technologies have been applied
widely in industry and medicine, but rarely in realms related to agriculture or natural habitats.

Despite the importance of identifying plant diseases using digital image processing, and
although this has been studied for at least 30 years, the advances achieved seem somewhat timid. Some
facts lead to this conclusion.
Operating conditions are too strict. Many images used to develop new methods are collected under
very controlled conditions of lighting, angle of capture, and distance between object and capture device, among
others. This is a common practice and is perfectly acceptable in the early stages of research. However,
in most real-world applications those conditions are almost impossible to enforce, especially if
the analysis is expected to be carried out in a non-destructive way. It is therefore a problem that many
studies never get to the point of testing and upgrading the method to deal with more realistic conditions,
because this limits their scope greatly. Another factor is the lack of technical knowledge about more
sophisticated technical tools.

In recent times, computer vision methodologies and pattern recognition techniques have been applied
towards automated procedures of plant recognition. Digital image processing is the use of algorithms
and procedures for operations such as image enhancement, image compression, image analysis, mapping,
geo-referencing, etc. The influence and impact of digital images on modern society is tremendous, and
image processing is considered a critical component in a variety of application areas including pattern
recognition, computer vision, industrial automation and health care.

The simplest solution for a problem is usually the preferable one. In the case of image processing,
some problems can be solved by using only mathematical morphological operations, which are easy
to implement and understand. However, more complex problems often demand more sophisticated
approaches. Techniques like neural networks, genetic algorithms and support vector machines can be
very powerful if properly applied. Unfortunately, that is often not the case. In many cases, it seems
that the use of those techniques is driven more by their popularity in the scientific community than
by their technical appropriateness for the problem at hand. As a result, problems like overfitting,
overtraining, undersized sample sets, sample sets with low representativeness, and bias, among others,
seem to be a widespread plague. Those problems, although easily identifiable by an individual
knowledgeable on the topic, seem to go widely overlooked by the authors, probably due to a lack of
knowledge about the tools they are employing. The result is a whole group of technically flawed
solutions.

One of the most common methods of leaf feature extraction is based on the morphological features of the leaf.
Some simple geometrical features are aspect ratio, rectangularity, convexity, sphericity, form factor, etc.

One can easily transfer a leaf image to a computer, and the computer can extract features automatically
using image processing techniques. Some systems employ descriptions used by botanists, but it is not easy
to extract and transfer those features to a computer automatically.

The aim of the project is to develop a leaf recognition program based on specific characteristics
extracted from photographs. This work therefore presents an approach where the plant is identified based on its
leaf features such as area, histogram equalization, edge detection and classification. The main purpose
of this program is to use OpenCV resources.

Indeed, there are several advantages to combining OpenCV with the leaf recognition program, and the
results prove this method to be a simple and efficient attempt. The following sections discuss
image acquisition and preprocessing, which include image enhancement, histogram equalization and edge
detection. Later sections introduce texture analysis and high-frequency feature extraction from leaf
images in order to classify them, i.e. parametric calculations, followed by the results.
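As an illustration of the OpenCV-based preprocessing described above, the following minimal sketch applies histogram equalization and edge detection to a leaf image and derives a simple area feature; the file name "leaf.jpg" is a placeholder, not a file from the project.

import cv2
import numpy as np

img = cv2.imread("leaf.jpg")                      # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # grayscale conversion
equalized = cv2.equalizeHist(gray)                # histogram equalization
edges = cv2.Canny(equalized, 100, 200)            # edge detection

# Simple area feature: number of foreground pixels in an Otsu-thresholded mask
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
leaf_area = int(np.count_nonzero(mask))
print("Approximate leaf area in pixels:", leaf_area)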

1.2 Motivation
Here is a brief review of the papers which we have referred to for this project. Since digital image
processing is used in this project to detect diseases in plants, it replaces the traditional methods used
in earlier days and also removes human error. The method needs a digital computer, MATLAB
software and a digital camera to detect diseases in plants, so it is a suitable method to adopt for this
project. The paper by Pallavi S. Marathe describes steps such as image acquisition and pre-processing,
which includes clipping, smoothing and contrast enhancement. She has also used segmentation techniques to
partition the different parts of an image. Disease detection is done by extracting features and classifying
them using the SVM algorithm.

1.3 Objectives

• To analyze leaf infection using code.

• To classify plant leaf diseases using texture features.
• To detect unhealthy regions of plant leaves, particularly of the tomato plant.

1.4 Future Scope

Using different new technologies and methods, we can make the application faster and more efficient for the
user. The system presented in this project was able to perform accurately; however, there are still a
number of issues which need to be worked out. First of all, only four diseases are considered in this
project, so the scope of disease detection is limited. In order to widen the scope of disease detection,
larger datasets covering more diseases should be used.

CHAPTER 2

RELATED WORK

This chapter describes various strategies for extracting the characteristics of infected leaves and classifying
plant diseases. We use a convolutional neural network (CNN), which consists of various layers
that are used for prediction. The complete method is described, starting from the images used for training,
pre-processing and image enhancement, and then the training method for the deep CNN and its optimizers.
Using these images we can precisely tune the processing method and differentiate between different plant
diseases.

In future work to predict apple leaf disease, other deep learning models such as F-CNN, R-CNN,
and SSD can be implemented. One article suggests a new way to classify leaves using a CNN model and
builds two models by adjusting the network depth using GoogLeNet. The effectiveness of each
model was assessed based on discoloration or leaf damage; the recognition rate achieved is more than 94%, even when 30%
of the leaf is damaged. In future research, the authors seek to identify leaves attached to branches in order to develop
a visual system that can mimic the methods humans use to identify plant species.

Liu, Bin, et al., "Identification of apple leaf diseases based on deep convolutional neural networks": in
this paper, Liu proposes a new deep convolutional network model for the accurate prediction and identification
of diseases in apple leaves. The model proposed in the paper can automatically recognize the different disease
characteristics with a very high level of accuracy. A total of 13,689 images were generated with the help of
image augmentation techniques such as PCA jittering. Apart from this, a new AlexNet-based neural network was also
proposed, using the NAG algorithm to optimize the network.

For the classification of diseases from leaf images, artificial neural networks and support vector
machines are used; among these, the support vector machine provides the most satisfactory results for
each type of picture. In that paper, RGB images are converted to grayscale images using color conversion,
and various enhancement techniques such as histogram equalization and contrast adjustment are used to improve image
quality. The purpose of another paper is to review evidence from thermal, digital, and hyperspectral
imaging studies of foliar diseases with various classification techniques. A segmentation method is applied to identify the
required areas and helps isolate the desired region from the background; based on a threshold value,
different segmentation methods are used for grayscale and color images. Various methods are used to extract
features, such as the gray-level co-occurrence matrix for correlation values, histogram intensity, and so on.

Different types of classifiers are used here, e.g. classification using SVM, ANN, and fuzzy logic. When
extracting features, different types of characteristic values are used, e.g. texture, structural, and geometric
features. The ANN and fuzzy classifiers can be used to identify diseases in plant leaves.
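As a rough illustration of the feature-plus-classifier approach reviewed above, the sketch below trains an SVM on feature vectors that are assumed to have been extracted beforehand (for example texture or geometric descriptors); the random arrays merely stand in for real features.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((100, 8))                 # placeholder feature vectors
y = rng.integers(0, 2, size=100)         # placeholder labels (healthy / diseased)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0)           # SVM classifier on the extracted features
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))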

CHAPTER 3

SYSTEM ARCHITECTURE

3.1 System Overview

Figure 3.1: System Architecture of Image Processing to detect plant disease

The steps involved in image processing to detect plant diseases are listed below; the whole process is divided into three stages:

1. Input images are first captured by an Android device or uploaded to our web application by users.

2. Pre-processing includes image segmentation, image enhancement and color space conversion. First, each
digital image is enhanced with a filter and then converted into an array. Using the scientific name of the
disease, each image label is binarized into a binary vector (a sketch of this step follows the list).

3. CNN classifiers are trained to identify the diseases of each plant class. The results of stage 2 are used to
invoke a classifier, which is trained to classify the various diseases of that plant. If no disease is present, the
leaf is classified as "healthy".

Figure 3.2: Plant disease image dataset processing

3.2 Code Implementation

import os
import numpy as np
import matplotlib.pyplot as plt

import keras
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
from keras.applications.vgg19 import VGG19, preprocess_input, decode_predictions

from google.colab import drive
drive.mount('/content/drive')

# Number of class folders in the training directory
len(os.listdir('/content/drive/MyDrive/data/train'))

# Training data generator: augmentation (zoom, shear, horizontal flip) plus VGG19 preprocessing
training = ImageDataGenerator(zoom_range=0.5, shear_range=0.5, horizontal_flip=True,
                              preprocessing_function=preprocess_input)

# Validation data generator: only the VGG19 preprocessing, so that validation images
# are scaled the same way as the training images
validation = ImageDataGenerator(preprocessing_function=preprocess_input)

train = training.flow_from_directory(directory='/content/drive/MyDrive/data/train',
                                     target_size=(256, 256), batch_size=5)

val = validation.flow_from_directory(directory='/content/drive/MyDrive/data/valid',
                                     target_size=(256, 256), batch_size=5)

# Fetch one batch of images and labels for inspection
to_image, label = train.next()

to_image.shape

# Display the first few images of the batch
def plotImage(img_arr, label):
    for im, l in zip(img_arr, label):
        plt.figure(figsize=(5, 5))
        plt.imshow(im)
        plt.show()

plotImage(to_image[:3], label[:3])

from keras.layers import Dense, Flatten
from keras.models import Model
from keras.applications.vgg19 import VGG19

# Pre-trained VGG19 convolutional base without its top (classification) layers
base_model = VGG19(input_shape=(256, 256, 3), include_top=False)

# Freeze the convolutional base so that only the new head is trained
for layer in base_model.layers:
    layer.trainable = False

base_model.summary()
Model: "vgg19"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 256, 256, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 256, 256, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 256, 256, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 128, 128, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 128, 128, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 128, 128, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 64, 64, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 64, 64, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_conv4 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 32, 32, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 32, 32, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_conv4 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 16, 16, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv4 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 8, 8, 512) 0
=================================================================
Total params: 20,024,384
Trainable params: 0
Non-trainable params: 20,024,384
_________________________________________________________________

# New classification head on top of the frozen VGG19 base
X = Flatten()(base_model.output)
X = Dense(units=5, activation='softmax')(X)   # softmax over the 5 output classes
model = Model(base_model.input, X)
model.summary()

model.compile(optimizer='adam', loss=keras.losses.categorical_crossentropy, metrics=['accuracy'])

from keras.callbacks import ModelCheckpoint, EarlyStopping

# Stop training early when validation accuracy stops improving
es = EarlyStopping(monitor='val_accuracy', min_delta=0.01, patience=3, verbose=1)

# Save the model with the best validation accuracy seen during training
mc = ModelCheckpoint(filepath='best_model.h5',
                     monitor='val_accuracy',
                     verbose=1,
                     save_best_only=True)

cb = [es, mc]

# Train the classification head
his = model.fit_generator(train,
                          steps_per_epoch=16,
                          epochs=50,
                          verbose=1,
                          callbacks=cb,
                          validation_data=val,
                          validation_steps=16)

h = his.history
h.keys()
plt.plot(h['accuracy'])
plt.plot(h['val_accuracy'], c = 'red')
plt.title("acc vs v-acc")
plt.show()

plt.plot(h['loss'])
plt.plot(h['val_loss'], c = 'red')
plt.title("loss vs v-loss")
plt.show()

from keras.models import load_model

# Reload the best saved model and evaluate it on the validation set
model = load_model("/content/best_model.h5")
acc = model.evaluate_generator(val)[1]
print(f"The accuracy of this model is = {acc * 100} %")

# Map class indices back to class (disease) names
ref = dict(zip(list(train.class_indices.values()), list(train.class_indices.keys())))

def prediction(path):
    img = load_img(path, target_size=(256, 256))
    i = img_to_array(img)
    im = preprocess_input(i)
    img = np.expand_dims(im, axis=0)
    pred = np.argmax(model.predict(img))
    print(f"the image belong to {ref[pred]}")

path = "/content/drive/MyDrive/data/test/Apple___Black_rot/d65349b4-80ff-474e-a48e-bf8fa597d880___JR_FrgE.S 8775.JPG"
prediction(path)

OUTPUT :
the image belong to Apple___Black_rot

3.3 Model Creation

We use a convolutional neural network for model creation. We create the model layers as shown in the image
below, and we also specify the filter size for the Conv and Pool layers and the shape at each layer.

shape = (channels, height, width)

In PyTorch the shape is not calculated automatically; we manually have to keep track of the shape at each layer.
At the first fully connected layer we have to specify the input size according to the output shape of the last
convolutional layer. This calculation is also called convolutional arithmetic.

Here is the equation for convolutional arithmetic:
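For a convolutional layer with input size W, kernel size K, padding P, stride S and dilation D, the output size O given in the PyTorch documentation is

O = floor((W + 2P - D(K - 1) - 1) / S + 1)

With no dilation (D = 1 in PyTorch's convention) this reduces to O = floor((W - K + 2P) / S + 1).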

No dilation is used in this project.

For the full model code, see the project's GitHub repository.

model = CNN(targets_size) # targets_size = 39

Here we have to classify the images into 39 categories, so we use categorical cross-entropy as the loss and the
Adam optimizer. In the model we use ReLU as the activation function, but for the last layer we need softmax
activation. In PyTorch, the cross-entropy loss combines the softmax and the categorical cross-entropy loss in a
single function.
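Since the repository code is only linked above, the following is a hypothetical PyTorch sketch of a CNN of the kind described: convolutional blocks with ReLU, batch normalization and max pooling, a fully connected head with 39 outputs, and CrossEntropyLoss (which applies the softmax internally). The exact layer sizes are illustrative, not the project's actual architecture.

import torch
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self, targets_size):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3),   # 3 x 224 x 224 -> 32 x 222 x 222
            nn.ReLU(),
            nn.BatchNorm2d(32),
            nn.MaxPool2d(2),                   # -> 32 x 111 x 111
            nn.Conv2d(32, 64, kernel_size=3),  # -> 64 x 109 x 109
            nn.ReLU(),
            nn.BatchNorm2d(64),
            nn.MaxPool2d(2),                   # -> 64 x 54 x 54
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 54 * 54, 256),
            nn.ReLU(),
            nn.Linear(256, targets_size),      # raw scores for the 39 classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

targets_size = 39
model = CNN(targets_size)
criterion = nn.CrossEntropyLoss()              # applies softmax internally, so no softmax layer above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)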

3.4 Working of Model

If you already know how a CNN works, you will follow what is being done in this project; this section is for
those who do not have a thorough knowledge of it. Basically, we first resize every image to 224 x 224. This
image is then fed into the convolutional neural network. Since we feed a color image, it has 3 channels (RGB).
In the first convolutional layer we apply 32 filters, i.e. 32 output channels: 32 different filters are applied to
the image, each trying to discover features, and from those filters we create a feature map with 32 channels.
So from 3 x 224 x 224 the tensor becomes 32 x 222 x 222. After that we apply the ReLU activation function to
introduce non-linearity, and then batch normalization to normalize the weights of the neurons. This output is
then fed to the max-pooling layer, which keeps only the most relevant features, which is why the output takes
the form 32 x 112 x 112. After that, this tensor is fed to the next convolutional layer, whose processing is the
same as described above. Finally, we flatten the output of the last max-pooling layer and feed it to the next
dense layer, also called a fully connected layer, and in the last layer we predict over the 39 classes. So as the
model output we get a tensor of size 1 x 39, and from that tensor we take the index of the largest value; that
index is our final prediction.
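A short sketch of the prediction step just described, reusing the hypothetical CNN class from Section 3.3; the random tensor stands in for one preprocessed 224 x 224 RGB leaf image.

import torch

model.eval()                               # inference mode
with torch.no_grad():
    x = torch.rand(1, 3, 224, 224)         # placeholder for one RGB image
    scores = model(x)                      # output tensor of shape 1 x 39
    predicted_class = scores.argmax(dim=1).item()
print("Predicted class index:", predicted_class)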

CHAPTER 4

RESULTS

We selected only 400 images from each folder. Each image is converted into an array. In addition, we
processed the input by scaling the data points from [0, 255] (the minimum and maximum RGB values) to
the range [0, 1]. We then split the dataset into 70% training images and 30% testing images. Image generator
objects are created which perform random rotations, shifts, flips, crops and shears on our image
set. In the standard model we use a "channels last" architecture, but we also include a switch for backends that
support "channels first". We first apply Conv => ReLU => Pool: our Conv layer has 36 filters with a 3 x 3
kernel and ReLU (rectified linear unit) activation. We apply batch normalization, max pooling,
and dropout (with a rate of 0.26). Dropout is a regularization technique used to reduce overfitting in neural
networks by preventing complex co-adaptations on the training data. It is a very efficient method for
averaging neural network models. Then we create two blocks of (Conv => ReLU) * 2 => Pool, followed by a
set of fully connected layers => ReLU. We use the Adam optimizer for our model. Training starts when we
call model.fit_generator, to which we pass the augmented data, the training and validation data, and the
number of epochs we want to train for. For this project we used 26 epochs.
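A hedged Keras sketch of the block structure described above; filter counts, dropout rates, the input size and the number of classes are illustrative placeholders except where the text states them.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Dropout, Flatten, Dense

num_classes = 5   # placeholder: set to the number of disease classes

model = Sequential([
    # Conv => ReLU => Pool (36 filters, 3 x 3 kernel, as stated in the text)
    Conv2D(36, (3, 3), padding="same", activation="relu", input_shape=(256, 256, 3)),
    BatchNormalization(),
    MaxPooling2D(pool_size=(3, 3)),
    Dropout(0.26),

    # First (Conv => ReLU) * 2 => Pool block
    Conv2D(64, (3, 3), padding="same", activation="relu"),
    Conv2D(64, (3, 3), padding="same", activation="relu"),
    BatchNormalization(),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.26),

    # Second (Conv => ReLU) * 2 => Pool block
    Conv2D(128, (3, 3), padding="same", activation="relu"),
    Conv2D(128, (3, 3), padding="same", activation="relu"),
    BatchNormalization(),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.26),

    # Fully connected head
    Flatten(),
    Dense(512, activation="relu"),
    BatchNormalization(),
    Dropout(0.5),
    Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])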

PREDICTION:

ACCURACY:

CHAPTER 5

CONCLUSION AND FUTURE WORK

The outcomes presented in this part relate to training with the complete database, containing both original
and augmented images. As it is known that convolutional networks are able to learn features when trained on
larger datasets, results achieved when training with only the original, annotated images are not reported
separately. After fine-tuning the parameters of the network, an overall accuracy of 96.77% was achieved on
that dataset. In addition, the trained model was tested on each class separately; the test was performed on
every image from the validation set. As good practice suggests, the achieved results should be compared with
other results. However, there are still no commercial solutions available, except those dealing with plant
species recognition based on leaf images. In this work, an approach using a deep learning method was
explored in order to automatically classify and detect plant diseases from leaf images. The complete
procedure was described, from gathering the images used for training and validation, through image
pre-processing and augmentation, to training the deep CNN and fine-tuning it. Different tests were performed
in order to check the performance of the newly created model. As, to the best of our knowledge, the presented
method has not been exploited before in the field of plant disease recognition, there was no comparison with
related results using the exact same methodology.

Protecting crops in organic farming is not an easy task. It depends on a thorough knowledge of the
crop being grown and of possible pests, pathogens and weeds. In our system, a dedicated deep learning model has
been developed, based on a convolutional network architecture, to detect plant diseases from images
of healthy or diseased plant leaves. The system described above can be upgraded to a real-time video input
system that allows unattended plant care. Another aspect that can be added is an intelligent
system that suggests cures for the identified ailments. Studies show that managing plant diseases can help increase yields by
about 50%.

BIBLIOGRAPHY

1. Liu, Bin, et al. "Identification of apple leaf diseases based on deep convolutional neural networks."

2. Jeon, Wang-Su, and Sang-Yong Rhee. "Plant leaf recognition using a convolution neural network."
International Journal of Fuzzy Logic and Intelligent Systems 17.1 (2017): 26-34.

3. Amara, Jihen, Bassem Bouaziz, and Alsayed Algergawy. "A Deep Learning-based Approach for
Banana Leaf Diseases Classification." BTW (Workshops). 2017.

4. Lee, Sue Han, et al. "How deep learning extracts and learns leaf features for plant classification." Pattern
Recognition 71 (2017): 1-13.

5. Sladojevic, Srdjan, et al. "Deep neural networks-based recognition of plant diseases by leaf image
classification." Computational intelligence and neuroscience 2016 (2016).

6. Lee, Sue Han, et al. "Plant Identification System based on a Convolutional Neural Network for the
LifeClef 2016 Plant Classification Task." CLEF (Working Notes). 2016.

7. He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.

8. K. Padmavathi and K. Thangadurai, "Implementation of RGB and Gray scale images in plant leaves
disease detection - comparative study."

9. Kiran R. Gavhale and U. Gawande, "An Overview of the Research on Plant Leaves Disease Detection
using Image Processing Techniques," IOSR Journal of Computer Engineering.

10. Y. Q. Xia, Y. Li, and C. Li, “Intelligent Diagnose System of Wheat Diseases Based on Android Phone,”
J. of Infor. & Compu. Sci., vol. 12, pp. 6845-6852, Dec. 2015.

11. Wenjiang Huang, Qingsong Guan, Juhua Luo, Jingcheng Zhang, Jinling Zhao, Dong Liang, Linsheng
Huan and Dongyan Zhang, "New Optimized Spectral Indices for Identifying and Monitoring Winter Wheat
Diseases," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 7, No. 6,
June 2014.

12. Monica Jhuria, Ashwani Kumar and Rushikesh Borse, "Image Processing for Smart Farming: Detection
of Disease and Fruit Grading," Proceedings of the 2013.

13. Zulkifli Bin Husin, Abdul Hallis Bin Abdul Aziz, Ali Yeon Bin Md Shakaff and Rohani Binti S.
Mohamed Farook, "Feasibility Study on Plant Chili Disease Detection Using Image Processing Techniques," 2012.

14. Mrunalini R. Badnakhe and Prashant R. Deshmukh, "Infected Leaf Analysis and Comparison by Otsu
Threshold and k-Means Clustering."

15. H. Al-Hiary, S. Bani-Ahmad, M. Reyalat, M. Braik and Z. ALRahamneh, "Fast and Accurate Detection
and Classification of Plant Diseases."

16. Chunxia Zhang, Xiuqing Wang and Xudong Li, "Design of Monitoring and Control Plant Disease System
Based on DSP&FPGA."

17. Rajneet Kaur and Manjeet Kaur, "A Brief Review on Plant Disease Detection using Image
Processing," IJCSMC, Vol. 6, Issue 2, February 2017.

18. Sandesh Raut and Amit Fulsunge, "Plant Disease Detection in Image Processing Using MATLAB," IJIRSET,
Vol. 6, Issue 6, June 2017.

19. K. Elangovan and S. Nalini, "Plant Disease Classification Using Image Segmentation and SVM
Techniques," IJCIRV, ISSN 0973-1873, Volume 13, Number 7 (2017).

20. Sonal P. Patel and Arun Kumar Dewangan, "A Comparative Study on Various Plant Leaf Diseases
Detection and Classification," IJSRET, ISSN 2278-0882, Volume 6, Issue 3, March 2017.

21. V. Vinothini, M. Sankari and M. Pajany, "Remote Intelligent for Oxygen Prediction Content in Prawn
Culture System," IJSRCSEIT, Vol. 2(2), 2017, pp. 223-228.
