
Orchard monitoring using UAV to capture invader insect images

Cîciu Radu-Marian and Popescu Dan, IEEE Member

Abstract— In this paper, we propose to detect and classify four different types of bugs. The operation is performed using transfer learning based on a Convolutional Neural Network (CNN). Transfer learning is known to achieve better accuracy. Transfer learning is a technique in deep learning where the knowledge learned by one network for a specific problem is used as the initial point for another problem. The learned knowledge includes network parameters such as weights and biases.

I. INTRODUCTION

The scope of the research is to develop a method and an algorithm that can detect and specify the name of the invader insect by processing the images gathered from the orchard. The dataset of images will be provided by using one drone or multiple drones that will be sent into the field to capture images from the orchard trees. After gathering the data, the images will be processed to extract useful information, like the area of the spread pest, the concentration of insects in the orchard, the variety of insect species etc.

There are a lot of methods that could help us better detect and observe the invasive insect species. One of the methods could be the use of insect pheromones to attract them. Pheromones are odorous substances emitted by a single gender to attract individuals of the opposite gender for mating, thus being the essential factor for the survival and perpetuation of the species. Pheromones are generally produced by females and play a role in orienting and attracting males from long distances. Due to the attractiveness of female pheromones, they have become very valuable in detecting and estimating insect populations and even in direct pest control. Based on this detection, we can monitor the evolution of the pest, the severity of the attack, or the crop healing after the treatment has been applied and how the attack is decreasing in intensity [JUL16].

Another observation method could be the use of ground robots. Autonomous robots equipped with cameras would be driven on a designed route in the orchard, aiming to capture targeted images and gather crop data. It is not a new concept, as scientists have continuously searched for ways to improve the robots that can be used in all sorts of industries. Although the agricultural sector implies a series of factors to be considered when trying to take measures, like wind speed, the ground type, the intensity of the light, the weather conditions, the placement of the orchard etc., robots have been used more and more often in agricultural research, because the information they collect has helped researchers find solutions to several problems. We will be seeing more and more robots involved in topics like this in the future [SIN10][BER15][ZHA19].

In many orchards, pests are one of the factors that can seriously affect crop production throughout the year. Just as weeds affect the growing conditions of plants, insects can compromise the entire crop if they are not controlled in time [BAR11].

Most harmful insects attack the plants immediately after hatching, when they enter the fruit and pierce the epidermis of the leaves and stems. If they are not controlled effectively, the insects will multiply quickly and will compromise the entire crop in a very short time [RES09].

To reduce the damage caused by these harmful insects, a first important step is to identify the signs of an attack in time and to effectively administer the treatments to combat them.

II. STATE OF THE ART DESCRIPTION

The convolutional neural network (CNN) represents a class of deep neural network (DNN) in the field of deep learning that has been used in computer vision and other related studies. A CNN is built to have one input, several hidden layers that usually include convolutional layers, ReLU layers, max-pooling layers and fully connected layers, and one output. Its most important feature is that a CNN requires little preprocessing and returns outstanding performance results, determined by the quality of the input objects, the size of the data and the number of classes [ZHU20].
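To make this layer sequence concrete, a minimal sketch in MATLAB (ours, assuming the Deep Learning Toolbox, an illustrative 227 x 227 RGB input and four output classes; the filter sizes and counts are placeholders) chains the layer types listed above as follows:

layers = [
    imageInputLayer([227 227 3])                  % one input layer for RGB images
    convolution2dLayer(3,16,'Padding','same')     % convolutional layer
    reluLayer                                     % ReLU non-linearity
    maxPooling2dLayer(2,'Stride',2)               % max-pooling layer
    fullyConnectedLayer(4)                        % fully connected layer, one neuron per class
    softmaxLayer
    classificationLayer];                         % one output: the predicted class label

Larger architectures such as AlexNet repeat and widen these same building blocks.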
Related to our research, apart from AlexNet, there are other convolutional neural networks that could be used to help classify the four insect classes we have chosen. They are considered to be above AlexNet, having a more advanced and more efficient architecture. They are: GoogLeNet (2014), VGGNet (2014) and FractalNet (2016) [LU17][ZAH18].

GoogLeNet (2014)

It is another convolutional neural network, 22 layers deep (27 layers when the max-pooling layers are included). It comes as a version pretrained on one of two datasets: ImageNet or Places365. With the first dataset, ImageNet, it can classify images into 1000 object categories, like phones, TVs, laptops, or animal species. The Places365 version does basically the same thing but divides images into 365 different place categories such as field, park, runway, and lobby. Similar to AlexNet, we can retrain GoogLeNet to perform other tasks by applying transfer learning. At this moment, there are three versions of this neural network, named Inception version 1, 2 and 3 [SZE15][ZHA19].
GoogLeNet is built with a total of nine inception blocks and global average pooling to generate its estimations. The inception block consists of four parallel paths. The first three paths use convolutional layers with window sizes of 1 × 1, 3 × 3, and 5 × 5 to extract information at different spatial sizes. The middle two paths perform a 1 × 1 convolution on the input to reduce the number of channels, reducing the model's complexity. The fourth path uses a 3 × 3 maximum pooling layer, followed by a 1 × 1 convolutional layer to change the number of channels. The four paths use appropriate padding to give the input and output the same height and width. Finally, the outputs along each path are concatenated along the channel dimension and comprise the block's output [SZE15][ZAH18][ZHA19].

Figure 1. The GoogLeNet model architecture [ZHA19].
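To make the four-path structure concrete, a single inception-style block can be sketched in MATLAB as below. This is an illustration only, not the actual GoogLeNet: the layer names and filter counts are arbitrary placeholders, and the Deep Learning Toolbox is assumed.

% One inception-style block (filter counts are illustrative only)
lgraph = layerGraph();
lgraph = addLayers(lgraph, imageInputLayer([227 227 3],'Name','in'));

% Path 1: 1 x 1 convolution
lgraph = addLayers(lgraph, convolution2dLayer(1,16,'Padding','same','Name','p1_conv1'));
% Path 2: 1 x 1 convolution to reduce channels, then 3 x 3 convolution
lgraph = addLayers(lgraph, [convolution2dLayer(1,16,'Padding','same','Name','p2_conv1')
                            convolution2dLayer(3,24,'Padding','same','Name','p2_conv3')]);
% Path 3: 1 x 1 convolution to reduce channels, then 5 x 5 convolution
lgraph = addLayers(lgraph, [convolution2dLayer(1,8,'Padding','same','Name','p3_conv1')
                            convolution2dLayer(5,8,'Padding','same','Name','p3_conv5')]);
% Path 4: 3 x 3 max pooling, then 1 x 1 convolution to change the channel count
lgraph = addLayers(lgraph, [maxPooling2dLayer(3,'Stride',1,'Padding','same','Name','p4_pool')
                            convolution2dLayer(1,8,'Padding','same','Name','p4_conv1')]);

% All paths keep the spatial size, so they can be concatenated along the channel dimension
lgraph = addLayers(lgraph, depthConcatenationLayer(4,'Name','concat'));
lgraph = connectLayers(lgraph,'in','p1_conv1');
lgraph = connectLayers(lgraph,'in','p2_conv1');
lgraph = connectLayers(lgraph,'in','p3_conv1');
lgraph = connectLayers(lgraph,'in','p4_pool');
lgraph = connectLayers(lgraph,'p1_conv1','concat/in1');
lgraph = connectLayers(lgraph,'p2_conv3','concat/in2');
lgraph = connectLayers(lgraph,'p3_conv5','concat/in3');
lgraph = connectLayers(lgraph,'p4_conv1','concat/in4');
% plot(lgraph) visualizes the resulting block

GoogLeNet stacks nine such blocks, interleaved with pooling, before the global average pooling and the final classifier.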


VGG (2014)

VGG is a convolutional neural network (CNN) that is 16 layers deep, designed by K. Simonyan and A. Zisserman from the University of Oxford. It is known to be one of the top vision model architectures, submitted to ILSVRC-14. It differs from AlexNet by replacing the large kernel filters with multiple 3 × 3 kernel filters stacked one after another. The training process for such a network took weeks and used NVIDIA Titan Black GPUs. The model achieved 92.7% accuracy during the tests with ImageNet, which is an image dataset with over 14 million objects representing about 1000 classes [SIM15][QAS18][KAN19].

The input layer takes a fixed-size 224 x 224 RGB image. The image is filtered through a batch of convolutional layers, where the filters have a small receptive field: 3 × 3 (representing the smallest size to capture the notion of left/right, up/down, center). In one of the configurations, it also uses 1 × 1 convolution filters, which can be seen as a linear transformation of the input channels (followed by non-linearity). The convolution stride is fixed to 1 pixel; the spatial padding of the convolutional layer input is such that the spatial resolution is preserved after convolution. Spatial pooling is carried out by five max-pooling layers, which follow some of the convolutional layers. Max-pooling is performed over a 2 × 2 pixel window, with stride 2 [SIM15][QAS18][ZAH18][KAN19].

Three Fully-Connected (FC) layers follow the stack of convolutional layers (which has a different depth in different architectures): the first two have 4096 channels each, the third performs 1000-way ILSVRC classification and thus contains 1000 channels (one for each class). The final layer is the soft-max layer. The configuration of the fully connected layers is the same in all networks [SIM15][QAS18][ZAH18][KAN19].

All hidden layers are equipped with the rectification (ReLU) non-linearity. It is also noted that none of the networks (except for one) contain Local Response Normalization (LRN), as such normalization does not improve the performance on the ILSVRC dataset but leads to increased memory consumption and computation time [SIM15][QAS18][KAN19].

Figure 2. VGG16 model architecture [BLI16].
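As a rough illustration of this repeated pattern (not the exact published configuration), one VGG-style stage of two 3 × 3 convolutions followed by the 2 × 2, stride-2 max-pooling described above could be sketched in MATLAB as follows; the filter count of 64 is a placeholder:

vggStage = [
    convolution2dLayer(3,64,'Padding','same')   % 3 x 3 filters, stride 1, padding preserves resolution
    reluLayer
    convolution2dLayer(3,64,'Padding','same')
    reluLayer
    maxPooling2dLayer(2,'Stride',2)];           % 2 x 2 max-pooling window, stride 2

% The pretrained 16-layer model itself can be loaded and inspected, assuming the
% "Deep Learning Toolbox Model for VGG-16 Network" support package is installed:
net = vgg16;
net.Layers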
FractalNet (2016)

FractalNet is an example of a convolutional neural network that drops residual connections in favor of a fractal design. The working principle is to repeatedly apply a simple expansion rule to produce deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; each internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. This property is at the opposite pole from the approach of structuring very deep networks so that training becomes a residual learning problem. The fractal design model achieves an error rate of 22.85% on CIFAR-100, matching the state of the art held by residual networks. Fractal networks reveal intriguing properties beyond their high performance: they can be seen as an efficient implicit union of subnetworks of every depth [LAR16][LAR17][NAS18].
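The expansion rule behind this design can be stated compactly. Writing conv for a single convolution, ∘ for composition and join for the element-wise averaging of its inputs, as in [LAR17], the truncated fractal of order C is generated by

f_1(z) = \mathrm{conv}(z), \qquad f_{C+1}(z) = \mathrm{join}\left[ (f_C \circ f_C)(z),\ \mathrm{conv}(z) \right]

so each expansion step doubles the depth of the longest path while always keeping a single-convolution shortest path.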
Figure 3. Fractal network architecture [LAR17].

III. MATERIALS AND METHODS

The images we have used for learning, testing and classification have been taken from the Global Biodiversity Information Facility website. GBIF - the Global Biodiversity Information Facility - is an international network and data infrastructure funded by the world's governments and aimed at providing anyone, anywhere, open access to data about all types of life on Earth [GBI01].

Coordinated through its Secretariat in Copenhagen, the GBIF network of participating countries and organizations, working through participant nodes, provides data-holding institutions around the world with common standards and open-source tools that enable them to share information about where and when species have been recorded. This knowledge derives from many sources, including everything from museum specimens collected in the 18th and 19th century to geotagged smartphone photos shared by amateur naturalists in recent days and weeks [GBI01].

The drone used for image capture is a DJI Matrice 600 PRO UAV hexacopter with an attached camera.

The neural network training has been performed on an HP Pavilion 15-e000 laptop, powered by an Intel Core i5-3230M dual-core processor with a base clock speed of 2.6 GHz (up to 3.2 GHz with Turbo Boost technology), running Windows 10 x64 with an Intel HD Graphics 4000 GPU. It has a 15.6-inch BrightView LED-backlit display with a resolution of 1366 x 768 pixels.

The general idea is to use the knowledge a model has learned from a task with a big amount of labeled training data in a new task that does not have much data. Instead of starting the learning process from scratch, we start with patterns learned from solving a related task.

In transfer learning, the knowledge of an already trained machine learning model is applied to a different, yet similar, problem. For example, if you trained a simple classifier to predict whether an image contains a laptop, you could use the knowledge that the model gained during its training to recognize other objects, like TVs.

Applying transfer learning, we essentially try to utilize what has been previously learned during a task to improve generalization in another one. Basically, the measures and the knowledge we have trained system A on will be applied to system B.

In our case study, we have selected four of the most common bug invaders that can be found in Romanian orchards and have caused a lot of trouble to farmers by compromising a significant amount of fruits like apples, peaches, pears, plums etc. - see the following images:

Figure 4. Insects that will be detected and classified: 1. Class1: Cydia Funebrana [GBI01]; 2. Class2: Euproctis Chrysorrhoea [GBI01]; 3. Class3: Halyomorpha Halys [GBI01]; 4. Class4: Vespula Germanica [GBI01].

The following flow presents the key steps we will follow to automatically identify the insect type:

Images dataset gathering → Images resizing (227x227) → Augmentation → Training of convolutional neural network via AlexNet → Training finished → Testing the trained network

The dataset includes color images of four different insect classes. The number of images to be processed is as follows: Cydia Funebrana – 171 images; Euproctis Chrysorrhoea – 128 images; Halyomorpha Halys – 200 images; Vespula Germanica – 200 images.
Image Resizing

All the images from the insect dataset are resized according to the input layer of the AlexNet deep convolutional neural network. In this case, the pretrained AlexNet network, which has an input size of 227x227, is assigned for the classification. Hence, all the images from the bug dataset are resized to 227x227.

Data Augmentation

Data augmentation is a process in which the amount of data is increased by making copies of the original dataset. These copies are slightly modified versions of the original images, obtained by geometrical transformations of the original images. These geometrical transformations include image rotation, image scaling, image translation and image reflection. In this bug classification project, data augmentation is carried out because there are very few images for training the pretrained network [SHO19].
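The augmentedImageDatastore calls in the appendix only resize the images; a hedged sketch of how the geometric transformations listed above could be added on top of that resizing, using MATLAB's imageDataAugmenter (the ranges below are placeholder values, not the settings actually used in this work), is:

augmenter = imageDataAugmenter( ...
    'RandRotation',[-20 20], ...          % image rotation, in degrees
    'RandScale',[0.9 1.1], ...            % image scaling
    'RandXTranslation',[-10 10], ...      % image translation, in pixels
    'RandYTranslation',[-10 10], ...
    'RandXReflection',true);              % image reflection

% Resize to the 227 x 227 AlexNet input and apply the random transformations on the fly
augimdsTrain = augmentedImageDatastore([227 227], imdsTrain, ...
    'DataAugmentation', augmenter);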
AlexNet Deep Convolutional Neural Network

AlexNet is considered a state-of-the-art machine learning and computer vision technique in terms of its recognition accuracy rate. It has a total of 8 layers, out of which five are convolutional layers, some of them followed by max-pooling layers, and the remaining three are fully connected layers. The use of the non-saturating ReLU activation function is not coincidental, since it improved the training performance over the saturating mathematical functions tanh and sigmoid [ALO18].

Input → Conv., MXP, LRN → Conv., MXP, LRN → Conv. & ReLU → Conv. & ReLU → Conv. & ReLU → FC → FC → Soft-max

Figure 5. Architecture of AlexNet: Convolution, max-pooling, LRN and FC - fully connected layer.
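Assuming the "Deep Learning Toolbox Model for AlexNet Network" support package is installed, this layout and the 227 x 227 input size used for resizing can be checked directly:

net = alexnet;                          % load the pretrained network
net.Layers                              % lists the convolutional, ReLU, LRN, max-pooling and fully connected layers
inputSize = net.Layers(1).InputSize     % returns [227 227 3]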
Evaluation Parameters

For evaluation purposes, statistical parameters including sensitivity, specificity and total classification accuracy were used. These parameters are widely used for the analysis of classification systems and are listed below, where TP is the number of true positives, FN false negatives, TN true negatives, and FP false positives of the classification system. The details of these statistical parameters are given below:

Accuracy
Accuracy is defined as the ratio between the number of correctly detected subjects and the total number of subjects. It represents a measure of a model's classification capability. Often its behavior is monitored on a graph in real time during the training phase.

Precision
Precision tells us how exact and truthful a classification model is. Precision is the measure of actual positives among the values predicted as positive. Precision is widely used to measure the correctness of classification systems, especially when the number of false positives is high.

Sensitivity
In pattern recognition and binary classification, the classifier evaluation known as sensitivity is defined as the ratio of true positive values to the sum of true positives and false negatives. It essentially estimates how many of the actual positives are truly classified by the model.

F1-score
F1-score is a function of both precision and sensitivity. It is an important statistical evaluation parameter for understanding the difference between precision and accuracy. F1-score may be a good measure when there is a need for equilibrium between precision and sensitivity, and it is most used when there is an unequal distribution of classes.

Parameter          Formula                                      Equation number
Accuracy (ACC)     ACC = 100 (TP + TN) / (TP + FN + TN + FP)    (3.1)
Precision (PRE)    PRE = TP / (TP + FP)                         (3.2)
Sensitivity (SEN)  SEN = TP / (TP + FN)                         (3.3)
F1-score (F1)      F1 = TP / (TP + (FP + FN) / 2)               (3.4)
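A small MATLAB helper (ours, not part of the original pipeline; save it as classStats.m) can evaluate equations (3.1)-(3.4) for one class, given its true positive, false positive, true negative and false negative counts:

function [acc, pre, sen, f1] = classStats(TP, FP, TN, FN)
% Statistical evaluation parameters, following equations (3.1)-(3.4)
acc = 100 * (TP + TN) / (TP + FN + TN + FP);   % accuracy, in percent
pre = TP / (TP + FP);                          % precision
sen = TP / (TP + FN);                          % sensitivity
f1  = TP / (TP + (FP + FN)/2);                 % F1-score
end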
IV. EXPERIMENTAL RESULTS

After approximately 25 minutes, the network finished the training cycle over the images. A total of 16 epochs, with 24 iterations per epoch, demonstrated the efficiency of the transfer learning method. The system has reached a 95.69% accuracy rate. The more images were processed, the higher the accuracy and the lower the loss.

Figure 6. Training progress.

Figure 7. Loss progress.

Table 1. The confusion matrix for system data validation (rows: classification output; columns: ground truth).

                 Class1   Class2   Class3   Class4   Classification overall
Class1             49        -        2        -        51
Class2              1       34        2        1        38
Class3              -        -       60        -        60
Class4              1        -        2       57        60
Truth overall      51       34       66       58       209

Table 2. Statistical parameter evaluation.

Class   ACC [%]   PRE [%]   SEN [%]   F1 [%]   N truth   N classified
1        98.09     96.08     96.08      96        51         51
2        98.09     89.5      100       100        34         38
3        97.13     100       90.1       91        66         60
4        98.09     95        98.3       98        58         60
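As a cross-check between Table 1 and Table 2, consider Class 1: the confusion matrix gives TP = 49, FP = 2 (the off-diagonal entry in the Class1 row), FN = 2 (the off-diagonal entries in the Class1 column) and TN = 209 - 49 - 2 - 2 = 156. Expressed as percentages, equations (3.1)-(3.4) then give:

ACC = 100 (49 + 156) / 209 = 98.09%
PRE = 100 · 49 / (49 + 2) = 96.08%
SEN = 100 · 49 / (49 + 2) = 96.08%
F1 = 100 · 49 / (49 + (2 + 2)/2) = 96.08%

which reproduces the first row of Table 2 (with F1 rounded to 96).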
The matrix shows us that the cases in which the network returned errors were very few and that the algorithm used was able to identify the insect species with high accuracy. The more insect classes are added, the higher the probability that the system will return more errors or reduced accuracy. The test results depend a lot on the similarities between the insects and on the number of clear images gathered from the field. For a higher precision, the system could re-iterate over the existing data and the new ones. This is yet another topic that needs to be studied.

Figure 8. Results of statistical evaluation parameters.

The following results represent the performance of the defined parameters:
• Accuracy = 97.85%
• Precision = 95%
• Sensitivity = 96.25%
• F1-score = 95.55%

Table 3. Performance comparison for pretrained models.

Performance measure     AlexNet [%]   GoogLeNet [%]   VGG [%]   ResNet50 [%]
Accuracy                   97.85          99.39         90.4        99.15
Precision                  95             99.29          -          98.94
Sensitivity                96.25          99.12          -          98.72
F1-score                   95.55          99.2           -          98.82
Area under the curve       99.28          99.2           -          99.55
Time (min)                 94.17         133.15          -         476.78
V. CONCLUSION

Transfer learning is commonly used in an optimization or classification problem when high accuracy has to be achieved in a limited time frame. In this bug classification research, transfer learning is suitable because a small dataset of images is used, in our case a few hundred images in all. As deep learning requires a huge amount of data for training, transfer learning makes it possible to retrain a pretrained network on a small dataset. After comparing the results in Table 3, AlexNet shows satisfactory performance even considering that more and more complex models keep appearing. We can admit that the more advanced models do not reach too far ahead of AlexNet, given the fact that we used a small number of images (about 400 images). It can be concluded that transfer learning is a powerful tool when high accuracy is required and we deal with a small training dataset.

VI. APPENDIX

MATLAB Code for Training

clc
clear
close all

% Load the insect images; the subfolder names provide the class labels
imds = imageDatastore('D:\','FileExtensions','.png', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');

% 70% of each class for training, the rest for validation
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');

% Resize all images to the 227 x 227 AlexNet input size
augimdsTrain = augmentedImageDatastore([227,227],imdsTrain);
augimdsValidation = augmentedImageDatastore([227,227],imdsValidation);

% Replace the last three layers of the pretrained AlexNet with new layers
% sized for the four insect classes
net = alexnet;
layersTransfer = net.Layers(1:end-3);
numClasses = numel(categories(imdsTrain.Labels));

layers = [
    layersTransfer
    fullyConnectedLayer(numClasses,'WeightLearnRateFactor',20,'BiasLearnRateFactor',20)
    softmaxLayer
    classificationLayer];

miniBatchSize = 20;
valFrequency = floor(numel(augimdsTrain.Files)/miniBatchSize);
options = trainingOptions('sgdm', ...
    'MiniBatchSize',miniBatchSize, ...
    'MaxEpochs',16, ...
    'InitialLearnRate',1e-4, ...
    'Shuffle','every-epoch', ...
    'ValidationData',augimdsValidation, ...
    'ValidationFrequency',valFrequency, ...
    'Verbose',false, ...
    'Plots','training-progress');

netTransfer = trainNetwork(augimdsTrain,layers,options);

% Validate the trained network and display the confusion matrix
[YPred,scores] = classify(netTransfer,augimdsValidation);
YValidation = imdsValidation.Labels;
accuracy = mean(YPred == YValidation);

figure('Units','normalized','Position',[0.2 0.2 0.4 0.4]);
cm = confusionchart(YValidation,YPred);
cm.Title = 'Confusion Matrix for Validation Data';
cm.ColumnSummary = 'column-normalized';
cm.RowSummary = 'row-normalized';

MATLAB Code for Testing

clc
clear
close all

% Select a single image, resize it to the network input size and classify it
[FileName, PathName] = uigetfile({'*.png'},'File Selector');
I = imread(fullfile(PathName,FileName));
I = imresize(I,[227 227]);

load TrainedNetwork.mat
[YPred,scores] = classify(netTransfer,I);

figure
imshow(I)
title(string(YPred))

VII. REFERENCES

[1] Agnieszka Mikołajczyk, Michał Grochowski. 2018. "Data augmentation for improving deep learning in image classification problem."
[2] Ard Nieuwenhuizen, Jochen Hemming, Hyun Suh. 2018. "Detection and classification of insects on stick-traps in a tomato crop using Faster R-CNN."
[3] Barnard, Peter C. 2011. Royal Entomological Society Book of British Insects, 1st edition. Wiley-Blackwell.
[4] Bo Jiang, Jinrong He, Shuqin Yang, Hongfei Fu, Tong Li, Huaibo Song, Dongjian He. 2019. "Fusion of machine vision technology and AlexNet-CNNs deep learning network for the detection of postharvest apple pesticide residues."
[5] Bruno Tinen, Jun Okamoto Junior. 2019. "Transfer learning of ImageNet Object Classification Challenge."
[6] Charles Triplehorn, Norman F. Johnson. 2005. Borror's Introduction to the Study of Insects, 7th edition.
[7] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich. 2015. "Going Deeper with Convolutions."
[8] Connor Shorten, Taghi M. Khoshgoftaar. 2019. "A survey on Image Data Augmentation."
[9] Daniel P. Bebber, Tim Holmes, Sarah Jane Gurr. 2014. "The global spread of crop pests and pathogens."
[10] Denan Xia, Peng Chen, Bing Wang, Jun Zhang, Chengjun Xie. 2018. "Insect Detection and Classification Based on an Improved Convolutional Neural Network."
[11] Elisabeth Jullien, Jerome Jullien. 2016. Plant diseases and pests. Diagnosis and treatment, 2nd edition. MAST.
[12] Gao Huang, Zhuang Liu, Geoff Pleiss, Laurens van der Maaten, Kilian Q. Weinberger. 2020. "Convolutional Networks with Dense Connectivity."
[13] Gustav Larsson, Michael Maire, Gregory Shakhnarovich. 2017. "FractalNet: Ultra-Deep Neural Networks without Residuals." ICLR 2017.
[14] Haoyu Xu, Zhenqi Han, Songlin Feng, Han Zhou, Yuchun Fang. 2018. "Foreign object debris material recognition based on convolutional neural networks."
[15] Himansu Das, Chittaranjan Pradhan, Nilanjan Dey. 2020. Deep Learning for Data Analytics, 1st edition. Academic Press.
[16] Hsiao-Chi Li, Zong-Yue Deng, Hsin-Han Chiang. 2020. "Lightweight and Resource-Constrained Learning Network for Face Recognition with Performance Optimization."
[17] Hussam Qassim, Abhishek Verma, David Feinzimer. 2018. "Compressed Residual-VGG16 CNN Model for Big Data Places Image Recognition."
[18] Ian Goodfellow, Yoshua Bengio, Aaron Courville. 2016. Deep Learning. Cambridge, Massachusetts: The MIT Press.
[19] Iandola, Forrest N. 2016. "Exploring the Design Space of Deep Convolutional Neural Networks at Large Scale."
[20] Kang, Hyeong-Ju. 2019. "Real-Time Object Detection on 640x480 Image With VGG16+SSD." 2019 International Conference on Field-Programmable Technology (ICFPT), pp. 419-422.
[21] Le Lu, Yefeng Zheng, Gustavo Carneiro, Lin Yang. 2017. Deep Learning and Convolutional Neural Networks for Medical Image Computing. Springer.
[22] Loris Nanni, Gianluca Maguolo, Fabio Pancino. 2019. "Insect pest image detection and recognition based on bio-inspired methods."
[23] Marcel Bergerman, Silvio M. Maeta, Ji Zhang, Gustavo Freitas. 2015. "Robot Farmers: Autonomous Orchard Vehicles Help Tree Fruit Production." IEEE Robotics & Automation Magazine.
[24] Matheus Cardim Ferreira Lima, Maria Elisa Damascena de Almeida Leandro, Constantino Valero, Luis Carlos Pereira Coronel, Clara Oliva Goncalves Bazzo. 2020. "Automatic Detection and Monitoring of Insect - A Review."
[25] Md Zahangir Alom, Tarek M. Taha, Chris Yakopcic, Stefan Westberg, Paheding Sidike. 2019. "A State-of-the-Art Survey on Deep Learning Theory."
[26] Md Zahangir Alom, Tarek M. Taha, Christopher Yakopcic, Stefan Westberg, Mahmudul Hasan, Brian C Van Esesn, Abdul A S. Awwal, Vijayan K. Asari. 2018. "The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches."
[27] Miroslav Valan, Karoly Makonyi, Atsuto Maki, Dominik Vondrácek, Fredrik Ronquist. 2019. "Automated Taxonomic Identification of Insects with Expert-Level Accuracy Using Effective Feature Transfer from Convolutional Networks."
[28] Qin Zhang, Manoj Karkee, Amy Tabb. 2019. "The Use of Agricultural Robots in Orchard Management." Robotics and automation for improving agriculture, pp. 187-214.
[29] Reem Ibrahim Hasan, Suhaila Mohd Yusuf, Laith Alzubaidi. 2020. "Review of the State of the Art of Deep Learning for Plant Diseases: A Broad Analysis and Discussion."
[30] Samet Akcay, Mikolaj E. Kundegorski, Michael Devereux, Toby P. Breckon. 2016. "Transfer learning using convolutional neural networks for object." IEEE International Conference on Image Processing (ICIP), pp. 1057-1061.
[31] Sanjiv Singh, Marcel Bergerman. 2010. "Improving Orchard Efficiency with Autonomous Utility Vehicles."
[32] Sebastien C. Wong, Adam Gatt, Victor Stamatescu. 2016. "Understanding data augmentation for classification: when to warp?"
[33] Shuai Peng, Hongbo Huang, Weijun Chen, Liang Zhang, Weiwei Fang. 2020. "More trainable inception-ResNet for face recognition." Neurocomputing, Volume 411, pp. 9-19.
[34] Skonski, Marek. 2019. "A comparison of deep convolutional neural networks for image-based detection of concrete surface cracks." Computer Assisted Methods in Engineering and Science, pp. 105-112.
[35] Thenmozhi Kasinathan, Dakshayani Singaraju, Srinivasulu Reddy Uyyala. 2020. "Insect classification and detection in field crops using modern machine learning techniques."
[36] Universitetsparken 15, DK-2100 Copenhagen, CVR no. 29087156. n.d. Global Biodiversity Information Facility. https://www.gbif.org/.
[37] Valeria Maeda-Gutiérrez, Carlos E. Galván-Tejada, Laura A. Zanella-Calzada, José M. Celaya-Padilla, Jorge I. Galván-Tejada, Hamurabi Gamboa-Rosales, Huizilopoztli Luna-García, Rafael Magallanes-Quintanar, Carlos A. Guerrero Mendez, Carlos A. Olvera-Olvera. 2020. "Comparison of Convolutional Neural Network Architectures for Classification of Tomato Plant Diseases."
[38] Vincent H. Resh, Ring T. Cardé. 2009. Encyclopedia of Insects, 2nd edition. Academic Press.
[39] Will Nash, Tom Drummond, Nick Birbilis. 2018. "A review of deep learning in the study of materials degradation."
[40] Yanfen Li, Hanxiang Wang, L. Minh Dang, Abolghasem Sadeghi-Niaraki, Hyeonjoon Moona. 2020. "Crop pest recognition in natural scenes using convolutional neural networks."
