
Computers and Electronics in Agriculture 189 (2021) 106367


Original papers

Lightweight convolutional neural network model for field wheat ear disease identification
Wenxia Bao a, Xinghua Yang a, Dong Liang a, Gensheng Hu a,1,*, Xianjun Yang b

a National Engineering Research Center for Agro-Ecological Big Data Analysis & Application, Anhui University, Hefei, Anhui, China
b Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China

Keywords: Wheat ear disease; Target identification; Lightweight convolutional neural network; Attention mechanism; Feature fusion

A B S T R A C T

Manual diagnosis of crop diseases has high cost and low efficiency and has become increasingly unsuitable for the needs of modern agricultural production. This study designed a lightweight convolutional neural network (CNN) model called SimpleNet for the automatic identification of wheat ear diseases, such as glume blotch and scab, in natural scene images taken in the field. SimpleNet was constructed using convolution and inverted residual blocks. The Convolutional Block Attention Module (CBAM), which combines a spatial attention mechanism and a channel attention mechanism, was introduced into the inverted residual blocks to improve the representation ability of the model for disease features, so that the model pays attention to important features, suppresses unnecessary features, and reduces the influence of complex backgrounds in the images. In addition, a feature fusion module was designed to concatenate the down-sampled feature maps output by the inverted residual blocks with the average pooling features of the feature maps input to the inverted residual blocks, realizing the fusion of features of different depths. This reduces the loss of the detailed features of wheat ear diseases caused by the network in the down-sampling process and counteracts the disappearance of disease features during image feature extraction. Experimental results show that the proposed SimpleNet model achieved an identification accuracy of 94.1% on the test data set, which is higher than that of classic CNN models, such as VGG16, ResNet50, and AlexNet, and of lightweight CNN models, such as MobileNet V1, V2, and V3. SimpleNet has only 2.13 M parameters, fewer than MobileNet V1, V2, and V3-Large. The designed model can be used for the automatic identification of wheat ear diseases on mobile terminals.

1. Introduction

Wheat is a cereal crop that is widely planted all over the world. The yield and quality of wheat have an important impact on human life. However, wheat ears in most planting areas are often infected by one or more diseases (Manavalan 2020). The common wheat ear diseases in most areas of China are wheat glume blotch and wheat scab. Infected wheat ears show corresponding symptoms, which are used by experienced agricultural experts to diagnose the diseases. Wheat glume blotch is a fungal disease caused by infection with Septoria nodorum Berk. After wheat ears are infected with wheat glume blotch, the older parts of the infected tissue in the wheat ears turn from light gray-brown to chocolate brown, and pycnidia (small, brown, spore-producing fungal fruiting bodies) are formed. The brown-black edges of the lesions continue to expand along the plant tissue. Wheat glume blotch damages the ears, leaves, stems, and other parts of wheat and causes a 30% reduction in grain production in severe cases (Lin et al. 2020; Ficke et al. 2018). Wheat scab is caused by infection with Fusarium graminearum. In the early stage of infection, water-stained light brown spots appear on the spikelets and glumes, and when the humidity is high, a pink gelatinous mold layer appears at the spots. The fungus grows through the rachis to invade spikelets until the entire head takes on a bleached appearance, often with a salmon-pink tint. Wheat scab can reduce the yield and produce several mycotoxins that are harmful to humans and animals (Wang et al. 2019). Identifying fungal diseases in crops is a very professional job (Barbedo and Garcia 2016). In the rural areas of China, contact between farmers and plant protection experts is very limited because of geographical conditions, economic costs, and other factors. Therefore, research on a wheat ear disease identification method that requires relatively low computing power and can be transplanted to mobile

* Corresponding author at: Anhui University, Hefei 230601, China. E-mail address: hugs2906@sina.com (G. Hu).
1 Contact address: School of Electronic and Information Engineering, Anhui University, Jiulong Road, Hefei, Anhui Province, China.

https://doi.org/10.1016/j.compag.2021.106367
Received 25 May 2021; Received in revised form 27 July 2021; Accepted 1 August 2021; Available online 19 August 2021.
0168-1699/© 2021 Elsevier B.V. All rights reserved.

phones and other mobile devices is of great importance to assist farmers in identifying wheat ear diseases and improving wheat yield and quality (Picon et al., 2019a).

With the development of computer vision technology, many researchers have used machine learning and image processing methods to identify crop diseases. Tian et al. (2011) extracted the color, shape, and texture features of wheat leaves suffering from powdery mildew, rust, leaf blotch, and other fungal diseases and then used support vector machines to classify the features and identify the diseases. Majumdar et al. (2015) adopted a fuzzy c-means clustering algorithm to extract wheat leaf disease features and an artificial neural network (ANN) to classify the extracted features to determine whether wheat is diseased. Prasad et al. (2016) combined Gabor wavelet transform and the gray level co-occurrence matrix to extract multiresolution features of plant leaf diseases and used a k-nearest neighbor classification method for the identification of diseases; this algorithm can be transplanted to the Android system. Moshou et al. (2004) studied the difference in spectral reflectance between healthy wheat and wheat with yellow rust and developed a yellow rust detection algorithm based on a multilayer perceptron. Traditional machine learning methods have achieved good results in several specific applications. These methods use feature engineering to design manual features, such as color, texture, and edge gradient, of wheat ear diseases in wheat images. However, the color and texture of diseased wheat ears are similar to those of weeds and wheat leaves in wheat images with complex backgrounds, leading to false identification of the disease.

In recent years, deep learning methods, especially convolutional neural networks (CNNs), have produced remarkable results in image identification. CNNs can simulate human vision and divide features into low-level and high-level features. Low-level features are similar to the designed features in traditional machine learning methods, but high-level features can additionally be learned through CNNs. Moreover, CNNs transform the feature representation of images in the original space into a new feature space via layer-by-layer feature transformation, which makes the identification of wheat ear diseases by deep learning methods more accurate than that by traditional machine learning methods. Many current studies have used CNNs for crop disease identification. Ferentinos (2018) used CNNs to identify 58 diseases of 25 plants in the public data set PlantVillage (Hughes and Salathé 2015) and compared the identification effects of different CNNs. Liang et al. (2019) proposed a deep learning model called "PD2SE-Net" to automatically identify diseases on crop leaves and their severity. Bao et al. (2021) took powdery mildew and stripe rust as research objects and proposed an algorithm for identifying wheat leaf diseases and their severity based on elliptical-maximum margin criterion metric learning. Jin et al. (2018) reconstructed the spectral data of wheat ears into a 2-D data structure suitable for CNN input and then used CNNs to classify the data to distinguish healthy wheat ears from those with scab. Su et al. (2019) used images of eight kinds of wheat leaf diseases as the research object and used a local support vector machine (LSVM) instead of softmax as the classifier of the CNN to alleviate the misclassification caused by data imbalance. Esgario et al. (2020) proposed a multi-task architecture based on a CNN to classify and identify the severity of diseases in a collected coffee leaf data set. Karlekar and Seal (2020) removed the backgrounds of images according to the color characteristics of soybean leaves and then used a CNN to identify soybean leaf diseases on the background-removed images. These CNN models identified crop diseases on images collected under manually controlled conditions and achieved good results. However, crop disease images collected in natural scenes in the field usually have complex backgrounds and uneven light, which reduces the identification accuracy of the models. Lu et al. (2017) designed a field wheat disease diagnosis system based on weakly supervised deep learning. The system used multi-instance learning to realize the identification of field wheat diseases and disease regions. Picon et al. (2019b) photographed five crops (wheat, barley, corn, rice, and rape) in the field, combined non-image context information (crop information) with image information, and used three CNN models to identify diseases. Hu et al. (2019) extracted the color features of tea spots, used a support vector machine to segment the spots, and then used a VGG16 model to identify the segmented images. Hu et al. (2021) improved Faster R-CNN and used the pyramid region proposal network of a feature pyramid network to improve the detection performance for blurred, occluded, and small diseased tea leaves in natural scenes. A model's anti-interference ability against complex backgrounds can be improved by removing image backgrounds, introducing additional information, and improving the CNN model structure. However, the numerous parameters and calculations of such models hinder their direct transplantation to mobile devices.

Google has designed the MobileNet series of lightweight CNN models to make deep CNN models run on mobile devices; these models' performance on public data sets is not inferior to that of classic CNN models, such as AlexNet, VGG16, and Inception V3, and they can be transplanted to mobile devices. Kamal et al. (2019) used the depthwise separable convolution of MobileNet to construct a lightweight CNN model for plant disease leaf classification on the PlantVillage dataset. The parameters of this network model are fewer than those of classical CNN models. Barman et al. (2020) adopted their own CNN and MobileNet, which can be applied to mobile terminals, to identify diseased citrus leaves. Singh et al. (2021) fine-tuned multiple pre-trained MobileNet models to identify coconut tree diseases; the identification accuracy of MobileNet is better than those of other classical CNNs, and MobileNet has also been deployed in Web programs. Bi et al. (2020) used MobileNet to identify apple leaf spot and rust; the identification accuracy is similar to that of existing complex CNN models, but the computational complexity is slightly lower.

In this study, a lightweight CNN model, SimpleNet, was designed for the automatic identification of wheat ear diseases, such as glume blotch and scab, in natural scenes in the field and to address the problems of complex backgrounds, large numbers of parameters, and the high cost of CNN models. The model uses lightweight modules for feature extraction, adds an attention mechanism module to suppress complex background information and improve the network's ability to express disease features, and uses a feature fusion method to reduce the loss of useful information in the down-sampling process. Experimental results show that the identification accuracy of the proposed model is higher than those of some classical and lightweight CNN models, and its parameter quantity is similar to those of the MobileNet series (V1, V2, and V3) and other lightweight CNN models.

The contributions of this study include the following:

(1) A lightweight CNN model named SimpleNet was designed for the automatic identification of wheat ear diseases in the field.
(2) A CBAM module that combines the spatial attention mechanism and the channel attention mechanism was introduced into the inverted residual block of the SimpleNet model to improve the representation ability of the model for disease features and reduce the influence of complex backgrounds in the image on the identification performance of the model.
(3) A feature fusion module was designed to concatenate shallow features with deep features to reduce the damage to the detailed features of wheat ear diseases during the down-sampling process of the model.
(4) SimpleNet has only 2.13 M parameters and achieved an identification accuracy of 94.1% on the test data set. It can be used for the automatic identification of wheat ear diseases on mobile devices.

2. Materials and method

2.1. Image acquisition

The images used in this study were obtained from the Anhui Agricultural University Industry-University Research Base, Guohe Town, Hefei City, Anhui Province, China (31°25′–31°42′ N, 117°09′–117°16′ E). The


Fig. 1. Image acquisition area.

Fig. 2. Some images of wheat ears.

image acquisition area is shown in Fig. 1. The Base is responsible for research on the phenotypic information of various crops, such as wheat, rice, and corn. Wheat varieties such as 'Yangmai No. 13,' 'Yangmai No. 19,' 'Ningmai No. 9,' 'Ningmai No. 24,' 'Sumai,' and 'Lemai' were planted in the Base. The image acquisition time was from May 4, 2019 to May 6, 2019. Considering the growth characteristics of winter wheat and the influence of light intensity on image quality, the shooting time was from 3 pm to 6 pm. The shooting devices were a digital camera (Canon EOS 600D) and a mobile phone (Redmi Note 5). The Canon EOS 600D is a digital SLR camera released by Canon Japan on February 11, 2011. It has full-pixel, dual-core CMOS AF focusing technology and a 9-point full cross focusing system. The effective number of pixels is 18 million, and the sensor size is APS-C (22.3 × 14.9 mm). The Redmi Note 5 is a mobile phone released by Xiaomi on March 16, 2018. It is equipped with 12 + 5 million pixel dual cameras: the 5-million-pixel camera is mainly used to obtain depth-of-field information, and the 12-million-pixel main camera has Dual PD dual-core focusing. Wheat ears suffering from scab and glume blotch and healthy wheat ears were selected as the objects for shooting. The camera mode was set to auto exposure and auto focus. The lens was 20–50 cm away from the canopy at an angle of 0°–40° with the horizontal plane. The resolutions of the images captured by the camera were 3456 × 2304 and 5184 × 3456, and the resolution of the images captured by the mobile phone was 4000 × 3000. A total of 568 wheat ear images were collected, including 183 wheat glume blotch images, 280 wheat scab images, and 105 healthy wheat ear images.
Some images of wheat ears are shown in Fig. 2. The first line shows the images of wheat ears with glume blotch, which are labeled "0". The second line shows the images of wheat ears with scab, which are labeled "1".


The third line shows the images of healthy wheat ears, which are labeled "2". The shapes of the wheat ears in the images are different, the image backgrounds are complex, the light is uneven, and the severity of the diseases varies. The identification of wheat ear diseases in natural scenes in the field is therefore a very challenging task.

2.2. Image preprocessing and sample augmentation


Fig. 3. Schematic diagram of the Retinex principle.
(1) Image preprocessing


Table 1
Sample distribution after augmentation.

Disease              Number of training   Number of validation   Number of test
                     images (Aug)         images (Aug)           images
Wheat glume blotch   375                  63                     55
Wheat scab           566                  94                     84
Healthy              264                  44                     31
Total                1205                 201                    170

Fig. 4. Overall process of the Retinex algorithm.

Fig. 5. Original images (a) and images enhanced using the Retinex algorithm (b).

Fig. 6. Some of the augmented images.


Fig. 7. Structure of SimpleNet model.

The color of wheat images obtained from natural scenes in the field will differ from that perceived by human vision because of changes in light intensity, which affects the effect of the CNN on disease feature extraction. This study used the single-scale Retinex algorithm for image enhancement to reduce the influence of uneven light on the images. As shown in Fig. 3, the image obtained by human vision is composed of a luminance image and a reflected image, as follows (Sun et al., 2020):

S(x, y) = R(x, y) · L(x, y) (1)

where R(x, y) represents the reflected image, L(x, y) represents the luminance image, S(x, y) is the visual image captured by the human eye or camera, and (x, y) is the coordinate of a pixel in the image. To extract R(x, y) from S(x, y), R(x, y) can be expressed as

R(x, y) = S(x, y) / L(x, y) (2)

Logarithmic transformation of Formula (2) gives

log R(x, y) = log S(x, y) − log L(x, y) (3)

If the value of L(x, y) is estimated, R(x, y) can be calculated according to Formula (3). The intensity of incident light usually changes slowly on the illuminated surface, so L(x, y) can be represented by the low-frequency components in the image. The illuminance change is estimated and removed by calculating the weighted average F(x, y) between the pixels in the original image and the surrounding region, and only the reflection properties of the object are retained. The luminance image L(x, y) can be expressed as

L(x, y) = F(x, y) * S(x, y) (4)

where F(x, y) is a low-pass filtering function and * denotes convolution. Through the low-pass filtering of S(x, y) by F(x, y), the luminance image L(x, y) can be estimated. Then, we have

log R(x, y) = log S(x, y) − log[F(x, y) * S(x, y)] (5)

Thus, by transforming log R(x, y) back into the real number domain, the reflection image R(x, y) can be obtained from the human visual image S(x, y) to achieve the purpose of image enhancement. The overall process of the Retinex algorithm is shown in Fig. 4.
Several examples of the enhanced images are shown in Fig. 5: Fig. 5(a) shows the original images, and Fig. 5(b) shows the images enhanced using the Retinex algorithm.
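A minimal sketch of this single-scale Retinex enhancement, using a Gaussian kernel as the low-pass filter F(x, y), is given below. The filter scale sigma and the min-max rescaling back to the displayable range are assumptions, since the paper does not report these details; the file name is hypothetical.

```python
import cv2
import numpy as np

def single_scale_retinex(image, sigma=80):
    """Single-scale Retinex: estimate L(x, y) = F(x, y) * S(x, y) with a
    Gaussian low-pass filter, Eq. (4), and keep log R = log S - log L, Eq. (5)."""
    s = image.astype(np.float64) + 1.0              # avoid log(0)
    luminance = cv2.GaussianBlur(s, (0, 0), sigma)  # F(x, y) * S(x, y)
    log_r = np.log(s) - np.log(luminance)           # Eq. (5)
    log_r = (log_r - log_r.min()) / (log_r.max() - log_r.min())
    return (log_r * 255).astype(np.uint8)           # back to the displayable range

enhanced = single_scale_retinex(cv2.imread("wheat_ear.jpg"))  # hypothetical file
```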
(2) Sample augmentation


Fig. 8. Inverted residual blocks: (a) stride = 1; (b) stride = 2.

Fig. 9. Schematic of ordinary convolution (a) and depthwise separable convolution (b).


Fig. 10. Feature fusion module.

Fig. 11. Structure of the inverted residual block with CBAM.

The number of diseased wheat ear images obtained was limited because of economic and physical limitations. Thus, the number of training samples should be expanded to increase the diversity of the training samples, prevent the model from overfitting, and give the model strong generalization ability to natural scene images.
The enhanced wheat ear images were divided into training, validation, and test sets according to the ratio 6:1:3. The number of training and validation images was expanded by horizontal flip, rotation, and translation. Some of the expanded images are shown in Fig. 6.
The sample distribution after augmentation is shown in Table 1. We obtained 1205 training images and 201 validation images after sample augmentation. The 170 test images were not augmented.

2.3. SimpleNet model

2.3.1. Model structure

This study designed a lightweight CNN model called SimpleNet for the automatic identification of wheat ear diseases in the field. By introducing CBAM modules and designing feature fusion modules, SimpleNet can accurately extract the features from images of wheat ear diseases with complex backgrounds. SimpleNet is a lightweight CNN model, so it is suitable for mobile devices. The structure of SimpleNet is shown in Fig. 7. It is mainly composed of convolution blocks, inverted residual blocks (Howard et al. 2019), average pooling layers, and classifier blocks. A 3 × 3 convolution layer in the first convolution block of SimpleNet is used to obtain rich feature representations of the images. Seven inverted residual blocks are connected after the first convolution block to increase the depth of the CNN with a small number of parameters and improve the nonlinear representation capability of the model. The average pooling layers are used to decrease the spatial dimensions of the feature maps and increase the receptive field of the model. The last classifier block uses the softmax function to realize disease classification.
In this study, the attention mechanism module CBAM (Woo et al. 2018) was added to the inverted residual blocks of SimpleNet to allow SimpleNet to focus on wheat ear disease regions in the image and reduce the influence of complex backgrounds on disease identification. Meanwhile, a feature fusion module was designed to concatenate the down-sampled feature maps produced by the inverted residual blocks and the average pooling features of the feature maps that were input to the inverted residual blocks to realize the fusion of shallow and deep features. The module can reduce the loss of detailed features of wheat ear diseases caused by the network in the down-sampling process and solve the problem of disease features disappearing in the process of image feature extraction. The details of the feature fusion and CBAM modules are introduced in Sections 2.3.2 and 2.3.3.

2.3.2. Feature fusion module

The structure of the inverted residual block in Fig. 7 is shown in Fig. 8. The inverted residual block is formed by concatenating ordinary convolution and depthwise separable convolution. As can be seen in Fig. 8(a), ordinary convolution with a kernel size of 1 × 1 increases the number of channels of the feature maps. Depthwise convolution extracts the deep features of the feature map without changing the number of channels. The number of channels of the output feature maps after pointwise convolution is the same as that of the input feature maps. The output feature maps of pointwise convolution and the input feature maps are added point by point to obtain the output feature maps of the inverted residual block. The residual connection improves the capability of gradients to propagate across layers and can prevent the disappearance of gradients in deep convolutional layers. Notably, no nonlinear activation function is used after pointwise convolution, to avoid information loss during feature map compression. As shown in Fig. 8(b), when the stride of the depthwise convolution on the feature map is 2, the feature maps are down-sampled while extracting deep features to expand the receptive field of the network. However, the resolution of the feature map is reduced after down-sampling, and the target information is damaged.
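A minimal tf.keras sketch of the stride-1 and stride-2 variants of this block follows. The expansion factor and the batch normalization/ReLU6 pairing follow the MobileNetV2 convention and are assumptions here, since the paper does not list the exact layer hyperparameters.

```python
from tensorflow.keras import layers

def inverted_residual(x, expand=6, stride=1):
    """Sketch of the inverted residual block of Fig. 8: 1x1 expansion,
    depthwise convolution, and a linear 1x1 pointwise projection."""
    c = x.shape[-1]
    h = layers.Conv2D(c * expand, 1, use_bias=False)(x)   # 1x1 ordinary convolution
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(6.0)(h)
    h = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(6.0)(h)
    h = layers.Conv2D(c, 1, use_bias=False)(h)            # linear pointwise projection
    h = layers.BatchNormalization()(h)                    # no activation: avoids information loss
    # The point-by-point addition of Fig. 8(a) only applies to the stride-1 variant.
    return layers.Add()([x, h]) if stride == 1 else h
```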
Compared with ordinary convolution, the number of parameters and the cost of depthwise separable convolution are relatively low. As shown in Fig. 9, assuming that M and N are the numbers of input and output channels, respectively, and k × k is the size of the convolution kernels, the number of parameters of ordinary convolution is k² × M × N, and the number of parameters of depthwise separable convolution is k² × M + M × N. Hence, the ratio of the parameters can be calculated as

(k² × M + M × N) / (k² × M × N) = 1/N + 1/k². (6)
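The ratio in Eq. (6) can be checked numerically by counting layer parameters in tf.keras; the spatial size and channel counts below are arbitrary examples, not values from the paper.

```python
from tensorflow.keras import layers, models

def count_params(block, m=32):
    net = models.Sequential([layers.InputLayer(input_shape=(56, 56, m))] + block)
    return net.count_params()

k, m, n = 3, 32, 64
ordinary = count_params([layers.Conv2D(n, k, padding="same", use_bias=False)])
separable = count_params([
    layers.DepthwiseConv2D(k, padding="same", use_bias=False),  # k^2 x M parameters
    layers.Conv2D(n, 1, use_bias=False),                        # M x N pointwise parameters
])
print(ordinary, separable)                   # 18432 and 2336
print(separable / ordinary, 1/n + 1/k**2)    # both ~0.127, matching Eq. (6)
```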

Fig. 12. CBAM module: (a) CBAM module; (b) channel attention submodule; (c) spatial attention submodule.

Fig. 13. Process flow of the method.

8
W. Bao et al. Computers and Electronics in Agriculture 189 (2021) 106367

Table 2
Hardware and software configuration.

Name         Parameter
CPU          Intel i7
GPU          Tesla V100-PCIE-16 GB
RAM          192 GB
System       CentOS Linux
CUDA         8.0
Keras        2.1.2
Tensorflow   1.4

Table 3
Results of ablation experiments.

CBAM   Feature fusion   Accuracy   F1 score (Blotch / Scab / Health)   Param (M)
–      –                0.894      0.85 / 0.94 / 0.84                  1.655
√      –                0.924      0.88 / 0.95 / 0.93                  1.893
–      √                0.918      0.88 / 0.96 / 0.84                  1.884
√      √                0.941      0.92 / 0.96 / 0.93                  2.129

This study designed feature fusion modules to concatenate the output feature maps of the 2 × 2 average pooling layer and the down-sampled feature maps produced by the inverted residual block to make full use of the relevant information in the feature maps and reduce the information loss caused by down-sampling. Shallow features are thus re-injected into the deep layers of the network. The structure of the feature fusion module is shown in Fig. 10.
In Fig. 10, the size of the input feature map X_(h×w×c) is h × w × c. H(X) is the feature map obtained by nonlinear transformation in the inverted residual block, and it has a deep feature representation. H(X) can be expressed as

H(X) = T_l{T_nl[T_nl(X)]}, (7)

where T_l stands for linear transformation and T_nl denotes nonlinear transformation. P(X) is the down-sampled feature map obtained through the linear transformation of the average pooling layer, which retains the original information in the input feature map and has a large receptive field. P(X) can be expressed as

P(X) = T_l(X) (8)

Feature map Y_((h/2)×(w/2)×2c), which is obtained after H(X) and P(X) are cascaded, combines shallow and deep features. Y can be expressed as

Y = concat[T_l{T_nl[T_nl(X)]}; T_l(X)]. (9)

Y combines the position information of the high-resolution feature map with the deep feature information produced by the inverted residual block, thereby reducing the loss of disease features on the wheat ears caused by the network during down-sampling and avoiding a bottleneck of feature representation. The designed feature fusion module does not increase the amount of calculation considerably because of the use of down-sampling and depthwise separable convolution.
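Using the inverted_residual sketch from above, the fusion module can be expressed in a few lines. This is an illustration of Eqs. (7)-(9) under the stated assumptions, not the exact SimpleNet configuration.

```python
from tensorflow.keras import layers

def fusion_block(x):
    """Sketch of the feature fusion module of Fig. 10: concatenate the
    down-sampled inverted residual branch H(X) with the pooled branch P(X)."""
    h = inverted_residual(x, stride=2)           # H(X) = T_l{T_nl[T_nl(X)]}, Eq. (7)
    p = layers.AveragePooling2D(pool_size=2)(x)  # P(X) = T_l(X), Eq. (8)
    return layers.Concatenate()([h, p])          # Y of size (h/2) x (w/2) x 2c, Eq. (9)
```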
training effect during the training process. The images in the test set
2.3.3. CBAM module were tested by the trained SimpleNet model, and the labels and confi­
Wheat ear disease images captured in natural scenes have compli­ dences corresponding to the predicted category of the test samples were
cated backgrounds. Therefore, the spatial location information of wheat obtained to realize the identification of wheat ear diseases.
ear disease features in the image plays an important role in accurately The specific steps of disease identification are as follows, and the
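A minimal tf.keras sketch of the two cascaded submodules follows. The hidden-layer reduction ratio of the shared MLP is an assumption, as the paper does not report it.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cbam(x, reduction=8):
    """Sketch of CBAM: channel attention, Eq. (10), then spatial attention, Eq. (11)."""
    c = x.shape[-1]
    # Channel attention: a shared one-hidden-layer MLP applied to both pooled vectors.
    mlp = tf.keras.Sequential([layers.Dense(c // reduction, activation="relu"),
                               layers.Dense(c)])
    avg = mlp(layers.GlobalAveragePooling2D()(x))
    mx = mlp(layers.GlobalMaxPooling2D()(x))
    mc = layers.Activation("sigmoid")(layers.Add()([avg, mx]))       # M_c(F)
    x = layers.Multiply()([x, layers.Reshape((1, 1, c))(mc)])
    # Spatial attention: pool along the channel dimension, then a 7x7 convolution.
    avg_sp = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_sp = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    ms = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg_sp, max_sp]))                      # M_s(F)
    return layers.Multiply()([x, ms])
```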
process flow of the method is shown in Fig. 13.


Fig. 14. Visualization results of the output feature maps of the convolution layer: (a) original image; (b) without the feature fusion module; (c) with the feature fusion module.


Table 4
Comparison of the identification results of different CNN models.

Model                Class    Precision   Recall   F1 score   Accuracy   Param (M)
AlexNet              Blotch   0.90        0.78     0.83       0.888      58.33
                     Scab     0.93        0.98     0.95
                     Health   0.76        0.84     0.80
VGG16                Blotch   0.71        0.87     0.78       0.829      134.32
                     Scab     0.93        0.96     0.95
                     Health   0.80        0.39     0.52
ResNet50             Blotch   0.85        0.80     0.82       0.882      23.59
                     Scab     0.87        0.98     0.92
                     Health   1.00        0.77     0.87
InceptionV3          Blotch   0.78        0.84     0.81       0.853      21.81
                     Scab     0.90        0.95     0.92
                     Health   0.86        0.61     0.72
DenseNet121          Blotch   0.84        0.76     0.80       0.865      7.04
                     Scab     0.85        0.98     0.91
                     Health   1.00        0.74     0.85
Proposed SimpleNet   Blotch   0.93        0.91     0.92       0.941      2.13
                     Scab     0.93        0.99     0.96
                     Health   1.00        0.87     0.93

Table 5
Comparison of the identification results of the proposed model and the MobileNet series models.

Model                Class    Precision   Recall   F1 score   Accuracy   Param (M)
MobileNet V1         Blotch   0.82        0.84     0.83       0.888      3.23
                     Scab     0.91        0.96     0.94
                     Health   0.96        0.77     0.86
MobileNet V2         Blotch   0.83        0.89     0.86       0.906      2.26
                     Scab     0.93        0.95     0.94
                     Health   1.00        0.81     0.89
MobileNet V3-Large   Blotch   0.85        0.93     0.89       0.918      4.24
                     Scab     0.94        0.99     0.97
                     Health   1.00        0.71     0.83
MobileNet V3-Small   Blotch   0.82        0.89     0.85       0.894      1.68
                     Scab     0.94        0.98     0.96
                     Health   0.91        0.68     0.78
Proposed SimpleNet   Blotch   0.93        0.91     0.92       0.941      2.13
                     Scab     0.93        0.99     0.96
                     Health   1.00        0.87     0.93

1. The size of the images collected in the field is cropped and resized to 224 × 224 × 3.
2. The Retinex algorithm is used to preprocess the images.
3. The preprocessed images are divided into a training set, validation set, and test set according to the ratio 6:1:3.
4. The number of training and validation images is expanded by horizontal flip, rotation, and translation (see the sketch after this list).
5. The augmented training images are used to train the SimpleNet model, and the augmented validation images are used to fine-tune the model.
6. The loss value on the validation set is observed. The model is saved when the loss value reaches its minimum.
7. The images in the test set are tested by the trained SimpleNet model.
8. The labels and confidences corresponding to the predicted categories of the test images are obtained to realize the identification of wheat ear diseases.
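As a concrete illustration of steps 1-4, the following is a minimal sketch; raw_images and labels are hypothetical arrays, single_scale_retinex is the sketch from Section 2.2, and the rotation and shift ranges are assumptions, since the paper reports only the augmentation types.

```python
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Steps 1-2: resize to the CNN input size and apply the Retinex preprocessing.
images = np.stack([single_scale_retinex(cv2.resize(img, (224, 224)))
                   for img in raw_images])

# Step 3: 6:1:3 split, done in two stages (70/30, then 6/1 within the 70%).
x_tmp, x_test, y_tmp, y_test = train_test_split(images, labels, test_size=0.3)
x_train, x_val, y_train, y_val = train_test_split(x_tmp, y_tmp, test_size=1/7)

# Step 4: horizontal flip, rotation, and translation; the ranges are assumed.
augmenter = ImageDataGenerator(horizontal_flip=True, rotation_range=30,
                               width_shift_range=0.1, height_shift_range=0.1)
train_flow = augmenter.flow(x_train, y_train, batch_size=32)
```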
3. Experimental results and analysis

3.1. Experimental configuration and hyperparameter setting

The experiments used Python as the programming language, Keras as the deep learning framework, and Tensorflow as the backend and were run on the high-performance computing platform of Anhui University. The hardware and software configurations are shown in Table 2.
The parameters of the CNN were set as follows: the initial learning rate was 0.01; the learning rate was reduced to 1/10 of its value if the loss on the validation set did not decrease after 10 epochs; the batch size was 32; the number of iterations (epochs) was 200; the Adam optimizer was used to optimize the model; and cross entropy was used as the loss function.
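Under these hyperparameters, the training loop can be sketched with tf.keras as follows; the model object and the data (e.g., train_flow, x_val, and y_val from the sketch in Section 2.4) are assumed, and the checkpoint path is hypothetical.

```python
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.optimizers import Adam

# Adam optimizer, initial learning rate 0.01, cross-entropy loss (Section 3.1).
model.compile(optimizer=Adam(learning_rate=0.01),
              loss="categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    # Reduce the learning rate to 1/10 if the validation loss stalls for 10 epochs.
    ReduceLROnPlateau(monitor="val_loss", factor=0.1, patience=10),
    # Save the weights with the minimum validation loss (step 6 of Section 2.4).
    ModelCheckpoint("simplenet_best.h5", monitor="val_loss", save_best_only=True),
]
model.fit(train_flow, epochs=200, validation_data=(x_val, y_val),
          callbacks=callbacks)
```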
3.2. Evaluation indexes

In this study, precision, recall, F1 score, and accuracy were selected as evaluation indexes to comprehensively evaluate the performance of the deep learning algorithms:

Precision = TP / (TP + FP), (12)

Recall = TP / (TP + FN), (13)

F1 score = 2TP / (2TP + FP + FN), (14)

Accuracy = (TP + TN) / (TP + TN + FP + FN), (15)

where TP is the number of true positive samples, TN is the number of true negative samples, FP is the number of false positive samples, and FN is the number of false negative samples. Precision is the ratio of the number of correctly predicted positive samples to the number of all predicted positive samples. Recall is the ratio of the number of correctly predicted positive samples to the total number of true positive samples. The F1 score comprehensively considers precision and recall and is defined as their harmonic mean. Accuracy is the ratio of the number of correctly predicted samples to the total number of test samples and reflects the overall performance of the model.
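These four indexes can be computed per class (one-vs-rest for the three categories) directly from a confusion matrix; a minimal NumPy sketch of Eqs. (12)-(15) follows.

```python
import numpy as np

def classification_metrics(cm):
    """Per-class precision, recall, and F1 plus overall accuracy from a
    confusion matrix cm (rows: true classes, columns: predicted classes)."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as the class but wrong
    fn = cm.sum(axis=1) - tp          # belonging to the class but missed
    precision = tp / (tp + fp)        # Eq. (12)
    recall = tp / (tp + fn)           # Eq. (13)
    f1 = 2 * tp / (2 * tp + fp + fn)  # Eq. (14)
    accuracy = tp.sum() / cm.sum()    # Eq. (15)
    return precision, recall, f1, accuracy
```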
3.3. Ablation experiment

Ablation experiments were performed on the proposed model, and the results are shown in Table 3. The identification accuracy of the model without the CBAM module and the feature fusion module was 89.4%. The introduction of the CBAM module increased the identification accuracy of the model to 92.4% and improved the F1 scores for each type of wheat ear disease, especially wheat glume blotch and healthy wheat ears. The identification accuracy of the model after adopting the feature fusion module was 91.8%. The model can better extract the image features of wheat ear diseases using the CBAM and feature fusion modules. The identification accuracy of the proposed model reached 94.1%, which was 4.7% higher than that of the benchmark model. After the introduction of the CBAM module and the feature fusion module, the number of model parameters increased to 2.129 M, which is still less than those of lightweight CNN models such as MobileNet V1, V2, and V3-Large.
Fig. 14 shows some of the visualization results of the output feature maps of the convolution layers with and without the feature fusion module. The figure shows that the model extracts image feature information, such as texture, edge, and color, in the shallower layers. The visual information of the image in the feature map decreases and the abstract information increases as the convolution layers deepen. In Fig. 14(b) and (c), the 4th and 6th convolution layers extract the texture and color features of wheat ears. In the 11th convolution layer, the localization information in the shallow feature map and the deep semantic information are fused by the feature fusion module, which reduces the loss of disease feature information during the down-sampling process and makes the features of the region of interest more prominent.




Fig. 15. Visualization results of different models: (a) original images, (b) AlexNet, (c) MobileNet V3-Small, (d) MobileNet V3-Large, and (e) SimpleNet.

Table 6
Identification results of different backbones with CBAM and feature fusion modules.

Model                                  Accuracy   F1 score (Blotch / Scab / Health)   Param (M)
AlexNet                                0.888      0.83 / 0.95 / 0.80                  58.33
AlexNet + CBAM                         0.912      0.88 / 0.96 / 0.84                  58.39
MobileNet V3-Small                     0.894      0.85 / 0.96 / 0.78                  1.68
MobileNet V3-Small + Feature fusion    0.906      0.86 / 0.96 / 0.83                  1.72
MobileNet V3-Large                     0.918      0.89 / 0.97 / 0.83                  4.24
MobileNet V3-Large + Feature fusion    0.924      0.89 / 0.97 / 0.85                  4.36
Proposed SimpleNet                     0.941      0.92 / 0.96 / 0.93                  2.13

3.4. Comparison of identification results with classical CNN models

In this experiment, classical CNN models, namely, AlexNet (Krizhevsky et al., 2017), VGG16 (Simonyan and Zisserman, 2014), ResNet50 (He et al., 2016), InceptionV3 (Szegedy et al., 2016), and DenseNet121 (Huang et al., 2017), were compared with the proposed model, SimpleNet. The identification results of the different CNN models are shown in Table 4. The identification results of the proposed model for wheat scab were better than those for wheat glume blotch and healthy wheat ears. The reason is that compared with wheat glume blotch, the characteristics of wheat scab are more obvious in the image. The symptoms of wheat glume blotch are mostly brown spots on wheat glume husks, which turn to withered white in later stages. The backgrounds are complicated because the images were taken in natural scenes; this problem increases the difficulty of accurately identifying wheat glume blotch. In terms of parameters, the classical CNNs have large numbers of parameters. The structure of the proposed model is simple, and the inverted residual blocks used are composed of depthwise separable convolution, which reduces the number of parameters. Thus, SimpleNet is a lightweight CNN model with high identification accuracy for wheat ear diseases.

3.5. Comparison of identification results with lightweight CNN models of the MobileNet series

The proposed SimpleNet model was compared with lightweight CNN models, namely, the MobileNet series, and the identification results are shown in Table 5. Introducing the CBAM and feature fusion modules improved the identification effect; that is, the proposed model had better identification performance than MobileNet V1 (Howard et al., 2017), V2 (Sandler et al., 2018), and V3 (Howard et al., 2019). The MobileNet series models had relatively low recall values for healthy wheat ears and may mistake the backgrounds as targets of interest. The recall value of the proposed model for healthy wheat ears was higher than those of the MobileNet series models, which indicates that the attention mechanism can improve the performance of the model. The number of parameters of the proposed model is 2.13 M, which is less than those of the lightweight models MobileNet V1, V2, and V3-Large. Compared with MobileNet V3-Small, the identification accuracy


Fig. 16. Confusion matrix of the identification results of different attention mechanisms: (a) SE module; (b) CBAM module.

Fig. 17. Comparison of visualization results of SE module and CBAM module.

of the proposed model improved by 4.7% on the premise of adding only a small number of parameters.

3.6. Effectiveness of the CBAM module and feature fusion module on other backbones

The experimental results showed that compared with the benchmark CNN models, such as AlexNet, VGG16, ResNet50, InceptionV3, and DenseNet121, and the lightweight CNN models of the MobileNet series, the proposed SimpleNet model performs the best in the identification of wheat ear disease images with complex backgrounds. Grad-CAM (Selvaraju et al., 2020) was used to display the visualization results of the different models. The visualization results of the superposition of the wheat ear images and their heatmaps are shown in Fig. 15. As presented in the first two rows of the figure, the heatmaps of AlexNet and MobileNet V3 highlight the land in the backgrounds. The straws in the last two rows of images are also highlighted by AlexNet and MobileNet V3. However, the proposed SimpleNet model can accurately focus on the ears of wheat plants and pays minimal attention to irrelevant complex backgrounds, thus obtaining higher disease identification accuracy than the other models.
To verify the effectiveness of the CBAM and feature fusion modules, these modules were added to other backbone network models, such as AlexNet, MobileNet V3-Large, and MobileNet V3-Small, which achieved good performance in the previous comparative experiments. Given that AlexNet has no inverted residual blocks, only the CBAM module was added to it. SE channel attention modules (Hu et al., 2018) are already used in MobileNet V3, so the experiments only added the feature fusion module to MobileNet V3. The identification results of AlexNet, AlexNet + CBAM, MobileNet V3, and MobileNet V3 + Feature fusion are shown in Table 6. After adding the CBAM module to AlexNet, the identification accuracy of the model improved by 2.4%, and the number of parameters increased by only 0.06 M. After adding the feature fusion module to MobileNet V3-Small, the identification accuracy of the model improved by 1.2%, and the number of parameters increased by only 0.04 M. The identification accuracy of MobileNet V3-Large with the feature fusion module obtained a 0.6% improvement. The feature fusion module in the proposed model concatenates the output feature maps to make full use of the relevant information and reduce the information loss caused by down-sampling. The CBAM module in the proposed model combines channel and spatial attention to highlight the disease features and suppress the background information in the feature maps. Thus, the proposed model is more accurate in the identification of wheat ear diseases in images with complex backgrounds. The identification result of the proposed SimpleNet is better than those of the other backbone network models with the CBAM or the feature fusion module.
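The Grad-CAM heatmaps used for Figs. 15 and 17 can in principle be reproduced with a short tf.keras sketch like the one below; the target convolution layer name is a placeholder, not a layer name from the paper.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, layer_name, class_index):
    """Minimal Grad-CAM sketch (Selvaraju et al., 2020): weight the chosen
    convolution layer's feature maps by the pooled gradient of the class score."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))          # global-average-pool the gradients
    cam = tf.nn.relu(tf.einsum("bhwc,bc->bhw", conv_maps, weights))[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()    # normalized heatmap

heatmap = grad_cam(model, test_image, "conv_11", class_index=0)  # hypothetical names
```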


3.7. Comparison of different attention mechanisms

The CBAM attention module used in the proposed model was compared with the SE channel attention module. The confusion matrixes of the identification results are shown in Fig. 16, which shows that the proposed model using the CBAM module has higher identification accuracy for each type of wheat ear disease than the model with the SE module, especially for wheat glume blotch, whose image features are not obvious. This result shows that introducing the CBAM attention module helps the model better find the diseased region in the image and reduces the influence of backgrounds on the identification results.
Grad-CAM was used to display the visualized results of the output feature maps of the SE and CBAM modules (Fig. 17). The backgrounds in Fig. 17(a-c) are relatively simple; hence, the background had little influence on the identification results, and both models gave correct identification results with high confidence. Fig. 17(c) shows that the CBAM module was more focused on the diseased region. Fig. 17(d) has a complicated background. The SE module did not pay attention to the region of interest in the image, and the identification result was wrong. In comparison, the CBAM module focused on the region of interest, and the model gave the correct identification result. Fig. 17(e) shows healthy wheat ears. The result given by the SE module was the highest probability of glume blotch (61.19%), whereas the result given by the CBAM module was the highest probability of healthy wheat ears (65.01%). These results show that the CBAM attention module helps the model better find the diseased region in the image and reduces the influence of the background on the identification effect.

4. Conclusion

The images of diseased wheat ears captured in natural scenes in the field have complicated backgrounds. Existing CNN models have high identification accuracy but have many parameters and high computational cost. This study designed a lightweight CNN model, SimpleNet, which can be used for the automatic identification of wheat ear diseases, such as glume blotch and scab, in natural scene images. SimpleNet uses the inverted residual block as the main module to construct the network, which reduces the number of model parameters and the computational cost. The combination of the inverted residual block and the CBAM module enhances the model's ability to represent disease features in images with complex backgrounds and to pay better attention to disease information; therefore, the identification accuracy of the model is improved. In addition, SimpleNet uses the feature fusion module to fuse shallow and deep features to reduce the loss of disease features during the down-sampling process.
In the experiments, the proposed model was compared with lightweight and classical CNN models. The proposed model achieved an accuracy of 0.941, which is 5.7% higher than that of AlexNet, the most accurate of the classical CNN models tested, and 2.3% higher than that of MobileNet V3-Large, the most accurate of the MobileNet series.
The proposed model has only 2.13 M parameters, which is similar to the numbers of parameters of the MobileNet series models. The proposed model can be transplanted to mobile devices to assist farmers in identifying wheat ear diseases accurately.
However, the initial symptoms of wheat ear infection with fungal diseases are very mild, and the model usually experiences difficulty extracting the information of the diseased region. In addition, the proposed method only uses images of winter wheat ears, which is a data limitation of this study. Future research should focus on collecting more wheat ear image data under different natural scenes and at the initial stage of fungal infection. Images of wheat leaf diseases and other seasonal wheat ear images will be collected in the future to further verify the proposed model.

CRediT authorship contribution statement

Wenxia Bao: Investigation, Conceptualization, Data curation, Software, Writing - original draft, Writing - review & editing. Xinghua Yang: Writing - original draft, Software, Conceptualization, Methodology, Validation. Dong Liang: Conceptualization, Supervision, Formal analysis. Gensheng Hu: Writing - original draft, Writing - review & editing, Data curation, Methodology, Formal analysis, Funding acquisition, Project administration. Xianjun Yang: Formal analysis, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The authors thank the Major Natural Science Research Projects in Colleges and Universities of Anhui Province, China under Grant KJ2020ZD03 and the Open Research Fund of the National Engineering Research Center for Agro-Ecological Big Data Analysis & Application, Anhui University, China under Grant AE201902 for their support.

References

Bao, W., Zhao, J., Hu, G., Zhang, D., Huang, L., Liang, D., 2021. Identification of wheat leaf diseases and their severity based on elliptical-maximum margin criterion metric learning. Sust. Comput. 30, 100526. https://doi.org/10.1016/j.suscom.2021.100526.
Barbedo, A., Garcia, J., 2016. A review on the main challenges in automatic plant disease identification based on visible range images. Biosyst. Eng. 144, 52–60.
Barman, U., Choudhury, R.D., Sahu, D., Barman, G.G., 2020. Comparison of convolution neural networks for smartphone image based real time classification of citrus leaf disease. Comput. Electron. Agric. 177, 105661.
Bi, C.K., Wang, J.M., Duan, Y.L., Fu, B.F., Kang, J.R., Shi, Y., 2020. MobileNet based apple leaf diseases identification. Mobile Netw. Appl. 10, 1–9.
Esgario, J.G.M., Krohling, R.A., Ventura, J.A., 2020. Deep learning for classification and severity estimation of coffee leaf biotic stress. Comput. Electron. Agric. 169, 105162.
Ferentinos, K.P., 2018. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 145, 311–318.
Ficke, A., Cowger, C., Bergstrom, G., Brodal, G., 2018. Understanding yield loss and pathogen biology to improve disease management: Septoria nodorum blotch—a case study in wheat. Plant Dis. 102 (4), 696–707.
He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
Howard, A., Sandler, M., Chu, G., Chen, L.C., Chen, B., Tan, M.X., Wang, W.J., Zhu, Y.K., Pang, R.M., Vasudevan, V., et al., 2019. Searching for MobileNetV3. In: IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, pp. 1314–1324.
Howard, A.G., Zhu, M.L., Chen, B., et al., 2017. MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861.
Hu, G., Wang, H., Zhang, Y., Wan, M., 2021. Detection and severity analysis of tea leaf blotch based on deep learning. Comput. Electr. Eng. 90, 107023.
Hu, G., Wu, H., Zhang, Y., Wan, M., 2019. A low shot learning method for tea leaf's disease identification. Comput. Electron. Agric. 163, 104852. https://doi.org/10.1016/j.compag.2019.104852.
Hu, J., Shen, L., Albanie, S., Sun, G., Wu, E.H., 2018. Squeeze-and-excitation networks. IEEE Trans. Pattern Anal. Mach. Intell. 42 (8), 2011–2023.
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q., 2017. Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708.
Hughes, D.P., Salathé, M., 2015. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv:1511.08060.
Jin, X., Jie, L., Wang, S., Qi, H.J., Li, S.W., 2018. Classifying wheat hyperspectral pixels of healthy heads and Fusarium scab disease using a deep neural network in the wild field. Remote Sens. 10 (3), 395.
Kamal, K.C., Yin, Z.D., Wu, M.Y., Wu, Z.L., 2019. Depthwise separable convolution architectures for plant disease classification. Comput. Electron. Agric. 165, 104948. https://doi.org/10.1016/j.compag.2019.104948.
Karlekar, A., Seal, A., 2020. SoyNet: soybean leaf diseases classification. Comput. Electron. Agric. 172, 105342. https://doi.org/10.1016/j.compag.2020.105342.


Krizhevsky, A., Sutskever, I., Hinton, G.E., 2017. ImageNet classification with deep convolutional neural networks. Commun. ACM 60 (6), 84–90. https://doi.org/10.1145/3065386.
Liang, Q., Xiang, S., Hu, Y., Coppola, G., Zhang, D., Sun, W., 2019. PD2SE-Net: computer-assisted plant disease diagnosis and severity estimation network. Comput. Electron. Agric. 157, 518–529.
Lin, M., Corsi, B., Ficke, A., Tan, K.-C., Cockram, J., Lillemo, M., 2020. Genetic mapping using a wheat multi-founder population reveals a locus on chromosome 2A controlling resistance to both leaf and glume blotch caused by the necrotrophic fungal pathogen Parastagonospora nodorum. Theor. Appl. Genet. 133 (3), 785–808.
Lu, J., Hu, J., Zhao, G., Mei, F., Zhang, C., 2017. An in-field automatic wheat disease diagnosis system. Comput. Electron. Agric. 142, 369–379.
Majumdar, D., Kole, D.K., Chakraborty, A., Majumder, D.D., 2015. An integrated digital image analysis system for detection, recognition and diagnosis of disease in wheat leaves. In: 3rd International Symposium on Women in Computing and Informatics (WCI), Aluva, India, pp. 400–405.
Manavalan, R., 2020. Automatic identification of diseases in grains crops through computational approaches: a review. Comput. Electron. Agric. 178, 105802. https://doi.org/10.1016/j.compag.2020.105802.
Moshou, D., Bravo, C., West, J., Wahlen, T., McCartney, A., Ramon, H., 2004. Automatic detection of 'yellow rust' in wheat using reflectance measurements and neural networks. Comput. Electron. Agric. 44 (3), 173–188.
Picon, A., Alvarez-Gila, A., Seitz, M., Ortiz-Barredo, A., Echazarra, J., Johannes, A., 2019a. Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild. Comput. Electron. Agric. 161, 280–290.
Picon, A., Seitz, M., Alvarez-Gila, A., Mohnke, P., Ortiz-Barredo, A., Echazarra, J., 2019b. Crop conditional convolutional neural networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions. Comput. Electron. Agric. 167, 105093. https://doi.org/10.1016/j.compag.2019.105093.
Prasad, S., Peddoju, S.K., Ghosh, D., 2016. Multi-resolution mobile vision system for plant leaf disease diagnosis. Signal Image Video Process. 10 (2), 379–388.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.-C., 2018. MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvpr.2018.00474.
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D., 2020. Grad-CAM: visual explanations from deep networks via gradient-based localization. Int. J. Comput. Vis. 128 (2), 336–359.
Simonyan, K., Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Singh, P., Verma, A., Alex, J.S.R., 2021. Disease and pest infection detection in coconut tree through deep learning techniques. Comput. Electron. Agric. 182, 105986. https://doi.org/10.1016/j.compag.2021.105986.
Su, T., Mu, S., Shi, A., Cao, Z., Dong, M., 2019. A CNN-LSVM model for imbalanced images identification of wheat leaf. Neural Netw. World 29 (5), 345–361.
Sun, J., Yang, Y., He, X.F., Wu, X.H., 2020. Northern maize leaf blotch detection under complex field environment based on deep learning. IEEE Access 8, 33679–33688.
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z., 2016. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826.
Tian, Y., Zhao, C.J., Lu, S.G., Guo, X.Y., 2011. Multiple classifier combination for recognition of wheat leaf diseases. Intell. Autom. Soft Comput. 17 (5), 519–529.
Wang, H., Chen, D., Li, C., Tian, N., Zhang, J., Xu, J.-R., Wang, C., 2019. Stage-specific functional relationships between Tub1 and Tub2 beta-tubulins in the wheat scab fungus Fusarium graminearum. Fungal Genet. Biol. 132, 103251. https://doi.org/10.1016/j.fgb.2019.103251.
Woo, S., Park, J., Lee, J.Y., Kweon, I.S., 2018. CBAM: convolutional block attention module. In: 15th European Conference on Computer Vision (ECCV), Munich, Germany, pp. 3–19.

