
Computers and Electronics in Agriculture 155 (2018) 237–243


Original papers

Deep learning approach with colorimetric spaces and vegetation indices for vine diseases detection in UAV images

Mohamed Kerkech (a), Adel Hafiane (a,*), Raphael Canals (b)

(a) INSA-CVL, Univ. Orléans, PRISME, EA 4229, F-18022 Bourges, France
(b) Univ. Orléans, INSA-CVL, PRISME, EA 4229, F-45072 Orléans, France

Keywords: Deep learning; CNN; UAV; Image processing; Color spaces; Vegetation indices; Precision agriculture

Abstract: Detection of symptoms on grape leaves is a very important factor in preventing a serious disease. An epidemic spread in vineyards has huge economic consequences and is therefore considered a major challenge for viticulture. Automatic detection of vine diseases can play an important role in addressing the issue of disease management. This study deals with the problem of identifying infected areas of grapevines using Unmanned Aerial Vehicle (UAV) images in the visible domain. In this paper we propose a method based on convolutional neural networks (CNN) and color information to detect symptoms in the vineyards. We studied and compared the performances of CNNs using different color spaces, vegetation indices, as well as the combination of both kinds of information. The obtained results showed that CNNs with the YUV color space combined with the ExGR vegetation index, and CNNs with a combination of the ExG, ExR and ExGR vegetation indices, yield the best results, with an accuracy of more than 95.8%.

1. Introduction

In the last decade, vineyard disease diagnosis through visual feedback has gained significant attention for improving the quality of grapes. Many diseases that affect vineyards, such as Esca (Hofstetter et al., 2012), are responsible for economic loss due to the reduction in quality of grape production. In fact, this kind of disease is very destructive for the vine, because it spreads very fast and the treatments are generally ineffective once the disease has progressed (Miliordos et al., 2017). The infection of the vine triggers visible symptomatic reactions on growth, color, leaf size and branches. Generally, Esca can be identified by visual symptoms, namely the change of leaf color, leaf winding, the absence of lignification of new buds, and the death of inflorescences and berries (Al-Saddik et al., 2016). Most of the time, once the vine is affected by the disease and symptoms appear, the quality of grape production decreases considerably, until vine death.

To maintain a healthy vine, it is important to inspect the vine regularly for continuous monitoring in order to avoid disease-spreading problems. Traditionally, this task is performed in the field by human experts, but it is very time consuming, which limits the efficiency of continuous monitoring (Al-Saddik et al., 2016). Besides, advances in sensor technology have enabled sensing and imaging of different crop diseases (Mahlein, 2016). Automatic disease detection using sensors can play an important role in addressing the issue of vine disease control. In recent years, technology has made significant progress in the development of Unmanned Aerial Vehicles (UAV), leading to many remote sensing applications (Honrado et al., 2017; Arroyo et al., 2017; Navia et al., 2016). In the agriculture industry, UAVs have been used for several useful applications such as fertilizer rate computation (Schut et al., 2018), biomass production monitoring (Karpina et al., 2016), weed detection (Bah et al., 2017) and disease detection (Gennaro et al., 2016). Despite the progress in such technologies, automatic symptom detection using aerial images is still a challenging task.

Vine diseases are most of the time characterized by an unnatural change of leaf color during the flowering period. This characteristic applies to several types of crops, which motivated several researchers to study the segmentation, identification and evaluation of crop leaf infection severity in different color spaces (Chaudhary et al., 2012; Han and Cointault, 2013; Waghmare et al., 2016; Prasetyo et al., 2017). The visible spectrum of the crop leaves allows a better identification of diseases at the leaf image level. In Chaudhary et al. (2012), the authors carried out a comparative study of a color transformation approach for disease spot detection on plant leaves. Four methods were tested with different color spaces (RGB, YCbCr, HSV and LAB), where LAB provided the best results. In Han and Cointault (2013), the authors focused on early disease detection on grape leaves. Two studies were conducted on vine leaves with early and late stage "Mildew" disease


Corresponding author.
E-mail address: adel.hafiane@insa-cvl.fr (A. Hafiane).

https://doi.org/10.1016/j.compag.2018.10.006
Received 20 July 2018; Received in revised form 2 October 2018; Accepted 5 October 2018
0168-1699/ © 2018 Elsevier B.V. All rights reserved.

(YUV color space, Hybrid Color-Texture Spaces). Both methods can detect diseases at a stage where symptoms begin to appear. On the other hand, early detection (no visible symptoms) can only be achieved with a thermal image. Detection and classification of grape plant diseases has been realized by Waghmare et al. (2016). Four main processing steps were applied to the images for feature extraction: image resizing, RGB to HSV conversion, bench removal, and texture analysis. Finally, a support vector machine was used to discriminate between diseased and healthy leaves. These studies show good performance with leaf-level resolution (when the sensor is near the leaf). However, it is difficult to supervise vineyards with this method because it requires expensive acquisition and cannot cover large areas for a long time. In contrast, the use of UAV covers larger areas and extracts relevant information to act in a timely manner on a given disease.

Early detection of vine diseases is becoming a very active area of research (Al-Saddik et al., 2016; Albetis et al., 2017; Gennaro et al., 2016). Often, these methods use a UAV to capture vine images with several spectral channels such as visible, Red Edge, NIR and IR medium, then compute vegetation indices and biophysical parameters and extract features from the spectral channels. However, these earlier approaches showed several limitations. Recently, convolutional neural networks (CNNs) have had outstanding success in many applications, thanks to their good ability to solve problems in the field of computer vision and pattern recognition. So far, there has been little attention to deep learning approaches in agriculture (Kamilaris and Prenafeta-Boldú, 2018). The existing studies have focused on crop identification (Kussul et al., 2017), plant phenology recognition (Yalcin, 2017), segmentation of root and soil (Douarre et al., 2016), crop yield estimation (Kuwata and Shibasaki, 2015), weed detection (Bah et al., 2018), seed-per-pod estimation (Uzal et al., 2018), improving the efficiency of organic farming (Knoll et al., 2018) and disease recognition from a public dataset of images zoomed on crop leaves (Mohanty et al., 2016; Garcia and Barbedo, 2018; Ferentinos, 2018).

In this paper, we present a new method for detecting vine diseases in images obtained by a UAV with an RGB sensor. The proposed method combines the strength of a deep learning approach (specifically CNNs) with different color spaces and vegetation indices. The aim is to obtain, through the CNN approach, a relevant combination of feature spaces that leads to high accuracy. For that purpose, different color spaces (RGB, HSV, LAB, YUV) and vegetation indices (ExG, ExR, ExGR, GRVI, NDI, RGI) have been computed from the original RGB images. We have investigated different schemes: single spaces, different combinations between color spaces, as well as different combinations of color spaces with vegetation indices. Then, the resulting models are used to detect and segment the diseased zones.

This article is organized as follows: materials and method are described in Section 2, experiments and results are presented in Section 3, and finally the conclusion in Section 4.

2. Materials and methods

In this section, we present an overview of the developed method, the dataset, the color spaces and the used CNN architecture.

2.1. Overview

The proposed method involves several steps. Fig. 1 depicts the overview of the different processing phases. As can be seen from Fig. 1, the method consists of six steps: sliding window segmentation, color conversion, CNN prediction, disease map generation, post processing, and disease map overlay on the RGB image. First, the vineyard image is divided into several blocks. Two sliding window schemes have been used: without overlap and with 50% overlap. The appropriate color space and vegetation indices are extracted from each image block. Then this information is fed to the CNN model, already trained on similar types of input data, to determine the membership classes of the blocks. The disease map is generated from the classification process. After that, the disease map is refined with post processing using mathematical morphology, elimination of small areas and contour detection.

2.2. Dataset

A UAV system with an RGB sensor was used for data acquisition. Images were obtained at an altitude of 25 meters, with a resolution close to 1 pixel per centimeter, and a dimension of 4608 × 3456 pixels. In order to discriminate between the different types of areas in the vineyard images, four classes were defined: (i) the ground class (including ground and the vegetation between lines of vines), (ii) the healthy class, (iii) the potentially diseased class, and (iv) the diseased class (see Fig. 2). In the used dataset, the vineyard is partially contaminated by the Esca disease.

The potentially diseased class was introduced with the aim of avoiding confusion between the healthy class and the diseased one. Indeed, some vine areas seem to be symptomatic because of light reflection on the green leaf color, while not necessarily being diseased. This case usually occurs when there are poorly irrigated vine areas that lead to vine dehydration.

The dataset consists of 70,560 learning patches (17,640 patches for each class) and the same number of testing patches.

2.3. Data augmentation

In this experiment, three datasets were generated with different sizes. By default, each dataset is saved in the RGB color space, decomposed into three types of patches: a small size (16 × 16), an average size (32 × 32), and a large size (64 × 64). This process of three patch sizes is applied for all training and testing datasets. For all images, the area center is selected manually to set the ground truth.

Due to the lack of data, new samples were generated to enrich the dataset using data augmentation methods (Dosovitskiy et al., 2014). Rotation and translation were utilized to generate the augmented dataset. The augmentation was performed by a spatial shift of ten pixels around the patch center coordinates x and y, varying from (x − 10, y − 10) to (x + 10, y + 10) with a step of 3 pixels. At each shift, a 360° rotation process was performed with a step of 3°. For each rotation and translation, patches of sizes (16 × 16), (32 × 32) and (64 × 64) were generated.

2.4. Convolutional neural network

In Le Cun et al. (1990), the CNN architecture was introduced to process images with a deep network. Among the popular CNN architectures is LeNet-5 (LeCun et al., 1998). It includes five layers (shown in Fig. 3), where these layers are based on three transformations for feature extraction and a fully connected layer for classification. The first

Fig. 1. Overview of the system architecture for identifying diseased areas in vineyards.


Fig. 2. Dataset conception.

transformation consists of a convolution operation between the input image and the filter bank. A feature map is produced by this convolution for each filter. This is followed by a second transform where a ReLU (element-wise nonlinear function) is applied to all feature maps. The last transform is a subsampling transform, where non-overlapping pooling is used to subsample each feature map. Furthermore, each feature map in the last convolutional layer is fully connected to the last layer in the network (softmax function), where the probability of each class is calculated.

The CNN in Fig. 3 is configured as follows:

– Input layer of size (16 × 16 × 3), (32 × 32 × 3) or (64 × 64 × 3)
– Convolution + ReLU layer, kernel size (3 × 3), (5 × 5) or (7 × 7) (64 feature maps)
– Max pooling layer, kernel size (2 × 2), stride (2 × 2) and same padding
– Convolution + ReLU layer, kernel size (3 × 3), (5 × 5) or (7 × 7) (128 feature maps)
– Max pooling layer, kernel size (2 × 2), stride (2 × 2) and same padding
– Dense + ReLU layer, size 1024
– Dropout layer, rate 0.5 (for training) and 1.0 (for testing)
– Output layer (Dense layer), size 4 (Softmax)

In our study, disease detection in grape fields is considered as a four-class classification problem. For a vine field image, the goal is to build a CNN classifier which is able to differentiate between the classes ground, healthy, potentially diseased and diseased. The distinction between each class is mainly based on the color and texture variations, which the CNN must be able to extract during the convolution process, called the feature extraction process. In other words, the features extracted by the convolutional network are mainly colorimetric and texture oriented.

2.5. Color spaces

Color features play an essential role in visual content description. For a better color representation, the original RGB representation is generally converted to different color spaces to separate the intensity information from the chrominance. Color spaces (Ford and Roberts, 1998) such as HSV, LAB and YUV are used in many applications in agriculture, such as crop segmentation (Bai et al., 2013; Hernández-Hernández et al., 2016; Hernández-Hernández et al., 2017; Sarkate et al., 2013), vegetation change discovery (Xiao et al., 2016), phenology estimation (Cetin and Altilar, 2015), and disease detection (Chaudhary et al., 2012; Han and Cointault, 2013; Waghmare et al., 2016), as indicated in the introduction section.

Three color spaces (HSV, LAB and YUV) were used in this work, where each color space is calculated and represented by a different concept. The HSV color space uses a circular representation for the color channel (H), color purity for the saturation channel (S), and brightness for the value channel (V). The LAB color space has one luminance channel (L) and two chrominance channels (A and B). YUV contains one light intensity channel (Y) and two color channels (U and V). HSV and LAB are computed from the RGB space using non-linear formulas; unlike these two spaces, YUV is calculated as a linear combination of the RGB channels (Gonzalez and Woods, 2008).

2.6. Vegetation indices

Vegetation indices have often been used in many agricultural applications (Meyer and Neto, 2008; Ponti, 2013; Soontranon et al., 2014). They can be computed from the visible channels (Red, Green, Blue), or from combined multispectral channels. Vegetation indices aim to emphasize the features of vegetation.

In our study, the vegetation indices were computed from the RGB color space. When only a visible sensor is available, the most used vegetation indices are Excess Green (ExG), Excess Red (ExR), Excess Green-Red (ExGR), the Green-Red Vegetation Index (GRVI), the Normalized Difference Index (NDI) and the Red-Green Index (RGI). Table 1 shows more details on the calculation of these vegetation indices.

3. Experimentation

This section details the implementation of the learning and testing experiments. Here, different strategies for using color spaces, network configurations, and parameter variations were adopted in order to achieve the best detection performance.

3.1. Learning and testing procedure

The CNN implementation was carried out under the Tensorflow environment. Our system uses the CNN LeNet-5 (LeCun et al., 1998) architecture (shown in Fig. 3) for the learning and testing procedures. In the training phase, patches of sizes (16 × 16), (32 × 32) and (64 × 64) with 2, 3, 4 or 12 channels are provided as input. Each patch is labeled according to its class. Due to the large amount of data and memory constraints, learning was performed with a streaming technique, loading a limited amount of the learning data at each iteration. In this experiment, 50 learning patches were loaded sequentially in memory and this process was repeated 20,000 times.

As can be observed in Fig. 3, the learning process goes through the

Fig. 3. The proposed method training and testing procedures of the CNN LeNet-5 architecture.


Table 1
Mathematical formulas of vegetation indices.

Index name                    Formula                             References
Excess Green                  ExG = (2G − R − B) / (R + G + B)    Woebbecke et al. (1995)
Excess Red                    ExR = (1.4R − G) / (R + G + B)      Meyer et al. (1998)
Excess Green-Red              ExGR = ExG − ExR                    Meyer and Neto (2008)
Green-Red Vegetation Index    GRVI = (R − G) / (R + G)            Motohka et al. (2010)
Normalized Difference Index   NDI = (G − R) / (G + R)             Pérez et al. (2000)
Red-Green Index               RGI = R / G                         Steele et al. (2009)

conversion, feature extraction and classification steps. At the end of the learning process, a CNN model is generated. In order to have a good empirical estimation, the learning was performed five times. Each score represents the average accuracy of the five models.

Manually annotated data have been used in the testing process to evaluate the performances of the obtained models. This step loads the testing data in groups of 50 patches. These patches are sent to the CNN model to predict their membership class. For performance measurement, a correct classification rate (accuracy) is calculated.

3.2. Performance measurement

In this experiment, different configurations and parameters were evaluated, including patch size, color spaces, combinations of vegetation indices, combinations of color spaces and vegetation indices, and different filter sizes of the CNN networks. To find the best design parameters, we performed the tests by combining the different patch sizes (16 × 16), (32 × 32) and (64 × 64) with the different color spaces (RGB, HSV, LAB, YUV) and vegetation indices (ExG, ExR, ExGR, GRVI, NDI, RGI).

To find the right configuration for each patch size, a CNN of the same architecture and configuration is used for all experiments. Then, the CNN filters of the 1st and 2nd convolution layers are changed with three filter sizes (3 × 3), (5 × 5) and (7 × 7). For each test sample, the accuracy of the global classification is computed. Once the two best color spaces in terms of performance are found, they are combined with all vegetation indices.

For the runtime evaluation, the hardware used in these experiments includes a graphics processor (GPU) NVidia Quadro M2000 with 4 GB of memory and a processor (CPU) Intel Xeon E5-1620 v4 @ 3.50 GHz with 16 GB of RAM, under the Linux Ubuntu 16.04 LTS operating system. The runtime of the learning phase is not critical, because this task can be run offline to generate the CNN models which will then be used for vineyard plot disease detection.

3.3. Results and discussion

The results obtained during the experiments are shown in Tables 2–5; they are expressed in terms of the accuracy measure. The bolded numbers show the best scores.

Table 2 shows the results of the different color spaces. The YUV color space produced a good accuracy of 94.37%, 95.82% and 95.84% for the three patch sizes. The RGB color space reaches an accuracy of 94.83%, 95.29% and 95.30%. As can be seen, the RGB and YUV color spaces obtained the best performances in terms of discrimination between the four classes (ground, healthy, potentially diseased and diseased). However, the RGB color space does slightly better for patch size (16 × 16), but shows less accuracy for (32 × 32) and (64 × 64) compared to the YUV color space. RGB remains sensitive to the size of the CNN's convolution filter, especially for patch size (64 × 64). The HSV and LAB color spaces performed less well than the other color spaces. For HSV, this is mainly due to the Hue (H) channel, which contains all the color information grouped into a single channel, making it less relevant for the network to learn good color features from this color space. In other words, since it is the color that has the most relevance in the classification, the saturation and value channels did not make a good contribution to the classification. For the LAB color space, the classification results were not conclusive. This may be due to the fact that the A and B channels do not effectively represent the colors related to the diseased vineyard. The L channel contributes little to the classification because it represents the quantity of light in the color space. Overall, it is apparent from the results that YUV is more stable and consistent, which makes it more suitable for symptom detection. These good performances are related to the color information of the green colour corresponding to healthy vegetation, and the brown and yellow colours characterising diseased vegetation, represented in the U and V channels. The combination of different spaces produced lower scores than each space separately, except for a slight improvement for the (16 × 16) patch. This is due to the fact that the CNN has not been able to extract good color features from multiple channels for discriminating the healthy and diseased classes.

Vegetation indices were experimented with by combining them with the YUV and RGB color spaces. The experimental results are shown in Tables 3 and 4. As can be seen, in most cases the combination of YUV and RGB with vegetation indices provides better performance than YUV and RGB alone. This improvement is due to the additional information that the vegetation indices provide to the YUV and RGB color spaces, which makes the combination approach more robust.

For the combination of the YUV color space and the ExGR vegetation index with patch sizes (32 × 32) and (64 × 64), it can be observed from Table 3 that the classification accuracies are 95.82% and 95.86%, respectively.

Also, other vegetation indices were combined with both the ExR and ExG indices; the results are shown in Table 5. As can be observed, in general, these combinations gave the best performance in this study, which indicates that the vegetation indices present a good color characterisation of health and disease.

It can be seen from Table 5 that the combination of the ExR, ExG and ExGR vegetation indices with patch size (16 × 16) provides 95.80% classification accuracy. The second best results were obtained by the combination of ExR, ExG and the RGI vegetation index with patch size (32 × 32), and of ExR, ExG and the GRVI vegetation index with patch size (64 × 64), with accuracies of 95.82% and 95.86%, respectively.

It can be noted from the patch size results, according to the color spaces and/or vegetation indices, that all patch sizes obtained good results. However, each patch size has its advantages and disadvantages. Patches of size (16 × 16) are the smallest category, where segmentation

Table 2
Experimental results of color spaces, where COMB is the combination of all color spaces (RGB, HSV, LAB and YUV).

Patch size     16 × 16                    32 × 32                    64 × 64
Filter size    3×3     5×5     7×7        3×3     5×5     7×7        3×3     5×5     7×7

RGB            94.69%  94.83%  94.11%     94.24%  95.29%  94.84%     92.94%  90.91%  95.30%
HSV            90.00%  88.75%  87.99%     94.36%  94.03%  93.61%     95.00%  95.46%  94.96%
LAB            94.47%  94.46%  93.61%     92.27%  92.77%  93.11%     88.01%  89.49%  88.07%
YUV            94.37%  94.21%  93.03%     95.08%  95.82%  95.63%     95.84%  95.71%  95.47%
COMB           94.82%  93.88%  94.38%     94.15%  93.86%  92.42%     88.34%  90.63%  87.63%
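To make the color conversion step of Section 2.5 concrete, the following sketch converts an RGB patch to the YUV space compared in Table 2. The paper only states that YUV is a linear combination of the RGB channels; the BT.601 coefficient set used below is an assumption, not something the paper specifies.

```python
import numpy as np

# Assumed BT.601 coefficients: the paper states only that YUV is a linear
# combination of the RGB channels, not which coefficient set was used.
RGB_TO_YUV = np.array([
    [ 0.299,     0.587,     0.114  ],   # Y: light intensity (luma)
    [-0.14713,  -0.28886,   0.436  ],   # U: blue-difference chrominance
    [ 0.615,    -0.51499,  -0.10001],   # V: red-difference chrominance
])

def rgb_to_yuv(patch):
    """Convert an (H, W, 3) RGB patch with values in [0, 1] to YUV."""
    return patch @ RGB_TO_YUV.T

# A neutral gray pixel has zero chrominance: Y = 0.5, U = V = 0.
gray = rgb_to_yuv(np.full((1, 1, 3), 0.5))
```

Because the transform is a single matrix product, it applies unchanged to any of the three patch sizes used in the paper.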

240
M. Kerkech et al. Computers and Electronics in Agriculture 155 (2018) 237–243

Table 3
Experimental results of the YUV color space combined with vegetation indices.

Patch size     16 × 16                    32 × 32                    64 × 64
Filter size    3×3     5×5     7×7        3×3     5×5     7×7        3×3     5×5     7×7

YUV & ExG      93.50%  93.44%  93.16%     95.41%  95.12%  95.05%     94.72%  95.12%  94.73%
YUV & ExR      93.75%  91.09%  93.75%     95.70%  95.29%  95.63%     95.15%  95.13%  92.48%
YUV & ExGR     90.11%  91.45%  91.23%     95.89%  95.85%  95.59%     95.91%  95.92%  95.43%
YUV & GRVI     88.55%  89.83%  88.28%     95.70%  95.49%  95.10%     95.84%  95.06%  93.61%
YUV & NDI      92.86%  87.98%  90.97%     95.41%  94.95%  95.16%     95.30%  95.52%  95.51%
YUV & RGI      94.43%  94.25%  94.05%     95.58%  95.66%  95.65%     95.67%  95.73%  95.38%
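The LeNet-5-style network that produced these scores is specified layer by layer in Section 2.4. As a hedged sketch, it can be written with the Keras API of TensorFlow (the framework named in Section 3.1); the layer sizes follow the paper's list, while the "same" padding on the convolutions is an assumption (the paper states it only for the pooling layers).

```python
import tensorflow as tf

def build_model(patch_size=32, channels=3, kernel=5, n_classes=4):
    """LeNet-5-style CNN following the layer list of Section 2.4."""
    model = tf.keras.Sequential([
        # Conv + ReLU, 64 feature maps; kernel is (3x3), (5x5) or (7x7)
        tf.keras.layers.Conv2D(64, kernel, padding="same", activation="relu"),
        # Max pooling, kernel (2x2), stride (2x2), same padding
        tf.keras.layers.MaxPooling2D(2, strides=2, padding="same"),
        # Conv + ReLU, 128 feature maps
        tf.keras.layers.Conv2D(128, kernel, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(2, strides=2, padding="same"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1024, activation="relu"),
        tf.keras.layers.Dropout(0.5),   # active during training only
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    # Build the network by running a dummy patch through it.
    model(tf.zeros((1, patch_size, patch_size, channels)))
    return model
```

The optimizer and loss are left out deliberately, since the paper does not specify them; any standard classification setup (e.g. cross-entropy) would slot in at compile time.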

Table 4
Experimental results of the RGB color space combined with vegetation indices.

Patch size     16 × 16                    32 × 32                    64 × 64
Filter size    3×3     5×5     7×7        3×3     5×5     7×7        3×3     5×5     7×7

RGB & ExG      94.59%  94.45%  94.31%     94.85%  94.93%  94.92%     95.14%  95.43%  92.01%
RGB & ExR      94.56%  94.63%  94.48%     95.31%  95.35%  94.60%     91.60%  93.41%  95.80%
RGB & ExGR     94.59%  94.64%  94.14%     94.78%  94.48%  94.72%     89.17%  95.16%  95.13%
RGB & GRVI     94.21%  94.67%  94.64%     94.94%  95.04%  94.91%     88.95%  91.04%  90.52%
RGB & NDI      94.58%  94.57%  94.27%     94.30%  94.41%  95.14%     95.44%  94.80%  94.18%
RGB & RGI      94.43%  94.43%  94.42%     94.67%  95.04%  95.35%     94.95%  95.34%  90.92%
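The training patches behind these results were produced with the shift-and-rotate scheme of Section 2.3. A minimal sketch of that scheme is given below; the patch-extraction helper and the use of `scipy.ndimage.rotate` are implementation assumptions, since the paper does not say how the rotations were computed.

```python
import numpy as np
from scipy.ndimage import rotate  # assumed rotation routine

def augment(image, cx, cy, size=16, shift=10, shift_step=3, angle_step=3):
    """Yield augmented patches around an annotated center (cx, cy),
    following Section 2.3: shifts from (cx-10, cy-10) to (cx+10, cy+10)
    in steps of 3 pixels, with a full 360-degree rotation in steps of
    3 degrees at each shifted position."""
    half = size // 2
    pad = size  # margin so the rotation does not clip the patch corners
    for dx in range(-shift, shift + 1, shift_step):
        for dy in range(-shift, shift + 1, shift_step):
            y, x = cy + dy, cx + dx
            # Crop a larger window, rotate it, then cut the center patch.
            window = image[y - half - pad:y + half + pad,
                           x - half - pad:x + half + pad]
            for angle in range(0, 360, angle_step):
                rot = rotate(window, angle, reshape=False, order=1)
                c = rot.shape[0] // 2
                yield rot[c - half:c + half, c - half:c + half]
```

With the defaults this yields 7 × 7 shifted positions × 120 angles = 5880 variants per annotated center, for each of the three patch sizes.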

Table 5
Experimental results of ExG, ExR combined with the ExGR, GRVI, NDI and RGI vegetation indices.

Patch size          16 × 16                    32 × 32                    64 × 64
Filter size         3×3     5×5     7×7        3×3     5×5     7×7        3×3     5×5     7×7

ExG & ExR           95.07%  95.16%  95.57%     95.79%  95.69%  95.15%     94.20%  95.00%  95.00%
ExG, ExR & ExGR     95.59%  95.62%  95.80%     95.44%  95.63%  94.03%     95.62%  94.96%  95.28%
ExG, ExR & GRVI     94.64%  92.31%  95.04%     95.69%  95.36%  95.40%     95.57%  95.86%  95.28%
ExG, ExR & NDI      94.46%  94.62%  92.39%     95.65%  95.24%  93.44%     94.80%  95.22%  95.24%
ExG, ExR & RGI      95.31%  95.22%  95.07%     95.82%  95.30%  95.52%     94.99%  95.43%  91.53%
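The indices combined in Tables 3–5 follow directly from the formulas of Table 1; a NumPy sketch is given below. The small epsilon added to each denominator is an implementation detail the paper does not discuss, included here only as a guard against division by zero.

```python
import numpy as np

def vegetation_indices(patch, eps=1e-8):
    """Compute the six Table 1 indices for an (H, W, 3) RGB patch.
    eps guards against zero denominators (an assumption, not from
    the paper)."""
    R, G, B = patch[..., 0], patch[..., 1], patch[..., 2]
    s = R + G + B + eps
    exg = (2 * G - R - B) / s        # Excess Green (Woebbecke et al., 1995)
    exr = (1.4 * R - G) / s          # Excess Red (Meyer et al., 1998)
    return {
        "ExG":  exg,
        "ExR":  exr,
        "ExGR": exg - exr,                  # Meyer and Neto (2008)
        "GRVI": (R - G) / (R + G + eps),    # Motohka et al. (2010)
        "NDI":  (G - R) / (G + R + eps),    # Pérez et al. (2000)
        "RGI":  R / (G + eps),              # Steele et al. (2009)
    }
```

Stacking a subset of these maps (e.g. ExG, ExR and ExGR) along the channel axis produces the 2-, 3- or 4-channel inputs described in Section 3.1.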

Table 6
Runtime in seconds for image disease detection, with an image size of 4608 × 3456.

Patch size                16 × 16            32 × 32            64 × 64
Number of patches         62,208             15,552             3888
Filter size               3×3   5×5   7×7    3×3   5×5   7×7    3×3   5×5   7×7
From 1 to 12 channels     83    95    109    27    30    36     14    16    19
(sec)

accuracy is their main advantage, unlike the (32 × 32) and (64 × 64) sizes, which have less segmentation precision and could contain several borders (many classes) in the same patch. Another advantage of (16 × 16) patches is that some cases contain fine diseased areas that may be undetectable in a medium or large patch.

Indeed, a lack of information is one of the disadvantages of (16 × 16) patches: due to their small size, they may not contain the complete information of the zone, which leads to misclassification because of diseased vine leaf winding. In contrast, the (32 × 32) and (64 × 64) sizes do not suffer from this problem because they have more information on the

Fig. 4. Example of UAV image diseases detection.


color variations and texture of the disease. The second disadvantage of the (16 × 16) size is the processing time: as can be seen in Table 6, the (16 × 16) size has the longest time to scan the vine field image.

By studying the parameter variation between the filter dimension of the CNN and the patch size, it can be noticed that, in terms of performance, there is no absolute rule linking the patch sizes (16 × 16), (32 × 32) and (64 × 64) with the filter sizes (3 × 3), (5 × 5) and (7 × 7).

The execution time performance is shown in Table 6, which includes the extraction time of the non-overlapped patches and the classification time of these patches. During the runtime experiments, we found that the number of channels, the color space, and the different kinds of vegetation index have no impact on the runtime, which means that 1 channel runs in the same time as 12 channels. This is due to the parallel processing of the CNN model on the GPU.

It can be observed from the results in Table 6 that the execution time depends mainly on the input patch size and the filter size of the convolutional layers. The use of larger blocks reduces the number of blocks to process in the image, resulting in a shorter execution time. The size of the convolution filters also has an impact on the runtime, since the number of operations in the convolution layers depends on the size of the filter. Note that increasing the number of layers of the CNN architecture also increases the execution time.

Fig. 4 shows the segmented diseased regions on the vine field image. As can be seen, the areas delineated by white contours are the diseased areas of the vine. For this example, an ExR, ExG, ExGR model with a patch size of (16 × 16) and convolution filters of size (7 × 7) has been chosen. Patch size (16 × 16) has been selected because it provides more segmentation precision compared to the patch sizes (32 × 32) and (64 × 64). The (16 × 16) patch with the (7 × 7) filter realizes a good trade-off between classification accuracy and segmentation precision.

4. Conclusion

In this work, we proposed a new method of detecting the Esca disease in vineyard fields by using UAV RGB images and a deep learning approach. The image was divided into a regular square grid with different sizes: (16 × 16), (32 × 32), and (64 × 64) pixels. The aim was to classify each block of this grid into a healthy or diseased category. The classification models were trained through a deep network based on different combinations of color spaces and vegetation indices. The best results were obtained with the combination of the ExR, ExG and ExGR vegetation indices using the (16 × 16) patch size, reaching 95.80%. Also, the combination of the YUV color space and the ExGR vegetation index yields a similar accuracy score for block sizes (32 × 32) and (64 × 64). Concerning the

References

Disease Using Unmanned Aerial Vehicle (UAV) Multispectral Imagery 9, 1–20. https://doi.org/10.3390/rs9040308.
Al-Saddik, H., Simon, J.C., Brousse, O., Cointault, F., 2016. DAMAV: Un projet inter-régional de détection de foyers infectieux de flavescence dorée par imagerie de drone. Journée technique VITINNOV Viticulture de précision: les capteurs à la loupe, DAMAV, 32–35.
Arroyo, J.A., Gomez-Castaneda, C., Ruiz, E., Munoz de Cote, E., Gavi, F., Sucar, L.E., 2017. UAV technology and machine learning techniques applied to the yield improvement in precision agriculture. In: 2017 IEEE Mexican Humanitarian Technology Conference (MHTC), vol. 3, pp. 137–143. https://doi.org/10.1109/MHTC.2017.8006410.
Bah, M.D., Hafiane, A., Canals, R., 2017. Weeds detection in UAV imagery using SLIC and the Hough transform. In: 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1–6. https://doi.org/10.1109/IPTA.2017.8310102.
Bah, M.D., Dericquebourg, E., Hafiane, A., Canals, R., 2018. Deep learning based classification system for identifying weeds using high-resolution UAV imagery. Comput. Conf. (July), 1–6.
Bai, X.D., Cao, Z.G., Wang, Y., Yu, Z.H., Zhang, X.F., Li, C.N., 2013. Crop segmentation from images by morphology modeling in the CIE L*a*b* color space. Comput. Electron. Agric. 99, 21–34. https://doi.org/10.1016/j.compag.2013.08.022.
Cetin, A., Altilar, T., 2015. Estimation of phenology using different types of data (TARBIL). In: 2015 4th International Conference on Agro-Geoinformatics, pp. 363–367. https://doi.org/10.1109/Agro-Geoinformatics.2015.7248089.
Chaudhary, P., Chaudhari, A.K., Godara, S., 2012. Color transform based approach for disease spot detection on plant leaf. Int. J. Comput. Sci. Telecommun. 3 (6), 65–71.
Dosovitskiy, A., Fischer, P., Springenberg, J.T., Riedmiller, M., Brox, T., 2014. Discriminative unsupervised feature learning with exemplar convolutional neural networks, 1–14. arXiv:1406.6909. https://doi.org/10.1109/TPAMI.2015.2496141.
Douarre, C., Schielein, R., Frindel, C., Gerth, S., 2016. Deep learning based root-soil segmentation from X-ray tomography images, pp. 1–22. bioRxiv. https://doi.org/10.1101/071662.
Ferentinos, K.P., 2018. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 145, 311–318. https://doi.org/10.1016/j.compag.2018.01.009.
Ford, A., Roberts, A., 1998. Colour Space Conversions. Westminster University, London, pp. 1–31. <http://herakles.fav.zcu.cz/research/night_road/westminster.pdf>.
Garcia, J., Barbedo, A., 2018. Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Comput. Electron. Agric. 153, 46–53. https://doi.org/10.1016/j.compag.2018.08.013.
Gennaro, S.F.D.I., Battiston, E., Marco, S.D.I., Facini, O., Matese, A., Nocentini, M., Palliotti, A., Mugnai, L., 2016. Unmanned Aerial Vehicle (UAV)-based remote sensing to monitor grapevine leaf stripe disease within a vineyard affected by esca complex (2). https://doi.org/10.14601/Phytopathol_Mediterr-18312.
Gonzalez, R.C., Woods, R.E., 2008. Digital Image Processing, third ed. Prentice Hall.
Han, S., Cointault, F., 2013. Détection précoce de maladies sur feuilles par traitement d'images. Congrès des jeunes chercheurs en vision par ordinateur 4. <https://hal.archives-ouvertes.fr/hal-00829402>.
Hernández-Hernández, J.L., García-Mateos, G., González-Esquiva, J.M., Escarabajal-Henarejos, D., Ruiz-Canales, A., Molina-Martínez, J.M., 2016. Optimal color space selection method for plant/soil segmentation in agriculture 122, 124–132. https://doi.org/10.1016/j.compag.2016.01.020.
Hernández-Hernández, J.L., Ruiz-Hernández, J., García-Mateos, G., González-Esquiva, J.M., Ruiz-Canales, A., Molina-Martínez, J.M., 2017. A new portable application for automatic segmentation of plants in agriculture 183, 146–157. https://doi.org/10.1016/j.agwat.2016.08.013.
Hofstetter, V., Buyck, B., Croll, D., Viret, O., Couloux, A., Gindro, K., 2012. What if esca
impact on the runtime. The only influential parameters are the CNN disease of grapevine were not a fungal disease? Fungal Diversity 54, 51–67. https://
filter, the number of layers and the patch size. Overall, this study doi.org/10.1007/s13225-012-0171-z.
strengthens the idea that the deep learning architecture has nice po- Honrado, J.L.E., Solpico, D.B., Favila, C.M., Tongson, E., Tangonan, G.L., Libatique, N.J.
C., 2017. UAV Imaging with low-cost multispectral imaging system for precision
tential for vine disease detection. This study was limited by the number agriculture applications. In: 2017 IEEE Global Humanitarian Technology Conference
of the expert labled data, a recurring problem in many deep learning (GHTC).
approach. For our future work and to improve the system, it is neces- Kamilaris, A., Prenafeta-Boldú, F.X., Deep learning in agriculture: a survey 147, 70–90.
doi: https://doi.org/10.1016/j.compag.2018.02.016.
sary to enrich the UAV multispectral images database, with vineyards Karpina, M., Jarzabek-Rychard, M., Tymków, P., Borkowski, A., 2016. Uav-based auto-
plots and new vine diseases samples. matic tree growth measurement for biomass estimation. Int. Arch. Photogrammetry,
Remote Sens. Spatial Inf. Sci. - ISPRS Arch. 41 (July), 685–688. https://doi.org/10.
5194/isprsarchives-XLI-B8-685-2016.
Acknowledgment
Knoll, F.J., Czymmek, V., Poczihoski, S., Holtorf, T., Hussmann, S., 2018. Improving ef-
ficiency of organic farming by using a deep learning classification approach 153
This work is part of the VINODRONE project supported by a Region (September), 347–356. doi:https://doi.org/10.1016/j.compag.2018.08.032.
Kussul, N., Lavreniuk, M., Skakun, S., Shelestov, A., 2017. Deep learning classification of
Centre-Val de Loire (France). We gratefully acknowledge Region
land cover and crop types using remote sensing data. IEEE Geosci. Remote Sens. Lett.
Centre-Val de Loire for its support. 14 (5), 778–782. https://doi.org/10.1109/LGRS.2017.2681128.
Kuwata, K., Shibasaki, R., 2015. Estimating crop yields with deep learning and remotely
References sensed data. In: 2015 IEEE International Geoscience and Remote Sensing Symposium
(IGARSS), pp. 858–861. doi:https://doi.org/10.1109/IGARSS.2015.7325900.
Le Cun, Y., Matan, O., Boser, B., Denker, J., Henderson, D., Howard, R., Hubbard, W.,
Albetis, J., Duthoit, S., Guttler, F., Jacquin, A., Goulard, M., 2017. Open Archive Jacket, L., Baird, H., 1990. Handwritten zip code recognition with multilayer net-
TOULOUSE Archive Ouverte ( OATAO ) Detection of Flavescence dorée Grapevine works. arXiv:arXiv:1011.1669v3, doi:https://doi.org/10.1109/ICPR.1990.119325.
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., 1998. Gradient-based learning applied to document recognition. Proc. IEEE 86 (11), 2278–2323. https://doi.org/10.1109/5.726791.
Mahlein, A.-K., 2016. Plant disease detection by imaging sensors: parallels and specific demands for precision agriculture and plant phenotyping. Plant Dis. 100 (2), 241–251. https://doi.org/10.1094/PDIS-03-15-0340-FE.
Meyer, G.E., Neto, J.C., 2008. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 63 (2), 282–293. https://doi.org/10.1016/j.compag.2008.03.009.
Meyer, G.E., Hindman, T., Laksmi, K.M.G., 1998. Machine vision detection parameters for plant species identification. In: DeShazer, J.A. (Ed.), Precision Agriculture and Biological Quality, Boston, Massachusetts, USA, 3–4 November, vol. 3543, pp. 327–335.
Miliordos, D.E., Galetto, L., Ferrari, E., Pegoraro, M., Marzachì, C., Bosco, D., 2017. Acibenzolar-S-methyl may prevent vector-mediated flavescence dorée phytoplasma transmission, but is ineffective in inducing recovery of infected grapevines. Pest Manag. Sci. 73 (3), 534–540. https://doi.org/10.1002/ps.4303.
Mohanty, S.P., Hughes, D.P., Salathé, M., 2016. Using deep learning for image-based plant disease detection. Frontiers Plant Sci. 7 (September), 1–10. https://doi.org/10.3389/fpls.2016.01419. arXiv:1604.03169.
Motohka, T., Nasahara, K.N., Oguma, H., Tsuchida, S., 2010. Applicability of Green-Red Vegetation Index for remote sensing of vegetation phenology. Remote Sens. 2 (10), 2369–2387. https://doi.org/10.3390/rs2102369.
Navia, J., Mondragon, I., Patino, D., Colorado, J., 2016. Multispectral mapping in agriculture: terrain mosaic using an autonomous quadcopter UAV. In: 2016 International Conference on Unmanned Aircraft Systems, ICUAS 2016, pp. 1351–1358. https://doi.org/10.1109/ICUAS.2016.7502606.
Pérez, A.J., López, F., Benlloch, J.V., Christensen, S., 2000. Colour and shape analysis techniques for weed detection in cereal fields. Comput. Electron. Agric. 25 (3), 197–212. https://doi.org/10.1016/S0168-1699(99)00068-X.
Ponti, M.P., 2013. Segmentation of low-cost remote sensing images combining vegetation indices and mean shift. IEEE Geosci. Remote Sens. Lett. 10 (1), 67–70. https://doi.org/10.1109/LGRS.2012.2193113.
Prasetyo, E., Adityo, R.D., Suciati, N., Fatichah, C., 2017. Mango leaf image segmentation on HSV and YCbCr color spaces using Otsu thresholding. In: Proceedings - 2017 3rd International Conference on Science and Technology-Computer, ICST 2017, pp. 99–103. https://doi.org/10.1109/ICSTC.2017.8011860.
Sarkate, R.S., Kalyankar, N.V., Khanale, P.B., 2013. Application of computer vision and color image segmentation for yield prediction precision. In: Proceedings of the 2013 International Conference on Information Systems and Computer Networks, ISCON 2013, pp. 9–13. https://doi.org/10.1109/ICISCON.2013.6524164.
Schut, A.G., Traore, P.C., Blaes, X., de By, R.A., 2018. Assessing yield and fertilizer response in heterogeneous smallholder fields with UAVs and satellites. Field Crops Res. 221 (February), 98–107. https://doi.org/10.1016/j.fcr.2018.02.018.
Soontranon, N., Srestasathiern, P., Rakwatin, P., 2014. Rice growing stage monitoring in small-scale region using ExG vegetation index. In: 2014 11th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, ECTI-CON 2014. https://doi.org/10.1109/ECTICon.2014.6839830.
Steele, M.R., Gitelson, A.A., Rundquist, D.C., Merzlyak, M.N., 2009. Nondestructive estimation of anthocyanin content in grapevine leaves. Am. J. Enology Viticulture 60 (1), 87–92.
Uzal, L.C., Grinblat, G.L., Namías, R., Larese, M.G., Bianchi, J.S., Morandi, E.N., Granitto, P.M., 2018. Seed-per-pod estimation for plant breeding using deep learning. Comput. Electron. Agric. 150 (May), 196–204. https://doi.org/10.1016/j.compag.2018.04.024.
Waghmare, H., Kokare, R., Dandawate, Y., 2016. Detection and classification of diseases of grape plant using opposite colour Local Binary Pattern feature and machine learning for automated Decision Support System. IEEE. https://doi.org/10.1109/SPIN.2016.7566749.
Woebbecke, D.M., Meyer, G.E., Von Bargen, K., Mortensen, D.A., 1995. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 38 (1), 259–269. https://doi.org/10.13031/2013.27838.
Xiao, H., Tong, C., Liu, Q., 2016. A new method for discovery of vegetation changes based on satellite ground photographs. In: Proceedings - 2015 8th International Congress on Image and Signal Processing, CISP 2015, pp. 851–855. https://doi.org/10.1109/CISP.2015.7407996.
Yalcin, H., 2017. Plant phenology recognition using deep learning: Deep-Pheno. In: 2017 6th International Conference on Agro-Geoinformatics, pp. 1–5. https://doi.org/10.1109/Agro-Geoinformatics.2017.8046996.