
Available online at www.sciencedirect.com

ScienceDirect

Procedia Computer Science 219 (2023) 145–152

www.elsevier.com/locate/procedia
CENTERIS – International Conference on ENTERprise Information Systems / ProjMAN – International Conference on Project MANagement / HCist – International Conference on Health and Social Care Information Systems and Technologies 2022
A deep learning approach for automatic counting of bedbugs and grape moth
Ana Cláudia Teixeira a,b, Raul Morais a,c, Joaquim J. Sousa a,b, Emanuel Peres a,c, António Cunha a,b,*

a University of Trás-os-Montes and Alto Douro (UTAD), Vila Real, Portugal
b Centre for Robotics in Industry and Intelligent Systems, INESC-TEC, Porto, Portugal
c Centre for the Research and Technology of Agro-Environmental and Biological Sciences (CITAB), Vila Real, Portugal

Abstract

The bedbug and the grape moth are the most significant pests affecting rice and vineyards, causing great damage. However, these pests are only two examples of the many insect pests with great potential to cause significant crop damage. Insect traps are among the most appropriate solutions for monitoring and counting, influencing the selection and dosage of the pesticide to be applied for pest control. However, counting and monitoring rely on frequent visits by technicians to the site and are supported by inefficient counting methods, making them challenging and time-consuming tasks. This study proposes the automatic counting of bedbugs and grape moths in traps using deep learning algorithms. We use three different datasets: Pest24, Bedbug and Grape moth. Pest24 is a public dataset with a great diversity of insects. The Bedbug and Grape moth datasets are private datasets provided by mySense, a precision agriculture platform developed and managed by researchers from the University of Trás-os-Montes e Alto Douro (UTAD). First, we trained YOLOv5 on the Pest24 dataset, obtaining an mAP of 69.3%. Then, using the weights obtained from Pest24, we trained on the Bedbug and Grape moth datasets. The best results for the Bedbug dataset were obtained with YOLOv5 with transfer learning, with an AP of 96.5% and a counting error of 63.3%. For the Grape moth, the best result was obtained with YOLOv5 without transfer learning from Pest24, with an AP of 90.9% and a counting error of 6.7%.
© 2023 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0)
Peer-review under responsibility of the scientific committee of the CENTERIS – International Conference on ENTERprise Information Systems / ProjMAN – International Conference on Project MANagement / HCist – International Conference on Health and Social Care Information Systems and Technologies 2022
Keywords: Insect detection, insect counting, smart pest monitoring, deep learning, YOLOv5

* Corresponding author. Tel.: +351-259350300
E-mail address: acunha@utad.pt

1877-0509 © 2023 The Authors. Published by Elsevier B.V.
10.1016/j.procs.2023.01.275

1. Introduction

Bedbugs and grape moths are responsible for high losses, as they damage crop quality and productivity. The bedbug feeds on the sap of oat, wheat, corn, cotton, soybean and rice plants. It is attracted to and captured by light traps, as it is an insect with phototropism [1]. The grape moth is the main pest of the vine. Its life cycle comprises several morphological stages: egg, caterpillar, pupa and adult. The caterpillar pierces and feeds on the grape. The protection strategy is pest monitoring, using pheromone traps to attract adult moths [2]. To apply pesticides in amounts adequate to the state of the pest, the pests collected in the traps need to be counted [3]. Traditionally, counting insects is done visually by humans. However, as each trap can contain dozens or hundreds of insects of different species, the counting task becomes very laborious, time-consuming and error-prone, and can delay decision-making. The development of the internet of things (IoT) and artificial intelligence (AI) has enabled smart pest monitoring (SPM). This technology is essential for automatic counting, making the counting procedure faster, more reliable and more effective [4].
Several deep learning (DL) methods have been developed for insect detection and counting. Hong et al. [5] proposed several algorithms that detect and count M. thunbergianae in pheromone trap images. The authors trained Faster R-CNN with ResNet101, EfficientDet D4, RetinaNet50, and SSD MobileNetV2 architectures. The Faster R-CNN achieved the best result, with an AP of 85.63%. Yun et al. [6] applied different methodologies for black pine bast scale detection in adhesive trap images. The methodology with the best performance was YOLOv5, with an AP of 94.7%.
mySense is an innovative internet of things (IoT) platform oriented to agricultural applications. It is fed by several devices, deployed in geographically dispersed observatories across different crops and installed at strategic points of the agricultural plot to collect images [7]. These devices collect real-time images of insect traps, providing trap images in a natural environment. Images were collected from two different observatories, giving rise to two datasets (Bedbug and Grape moth) used in our methodology. At the time of this experiment, the datasets provided by mySense had a reduced number of images, leading to the inclusion of the Pest24 dataset in our methodology. Pest24 is an extensive public dataset that contains 25,378 annotated images, provided by Wang et al. [8].
This work proposes a deep learning approach for automatically counting bedbug and grape moth pests. Our approach is divided into two parts: insect detection on the Pest24 public dataset, and insect detection and counting on the mySense datasets. This paper is structured as follows: Section 2 describes the datasets, data preparation, methodology, and evaluation metrics. Section 3 analyses and discusses the results for each dataset. Section 4 presents the conclusions and recommendations for future work.

2. Methodology

The methodology is organised into three steps, represented in Fig. 1. First, we describe and prepare the datasets. Then, we train YOLOv5 on the three datasets separately: first on Pest24, and then on the mySense datasets, Bedbug and Grape moth. Our strategy is based on transfer learning. For Pest24, we started by training with YOLOv5 (MP1). For the bedbug, we trained YOLOv5 directly (MB1) and with transfer learning using the weights obtained from MP1 (MB2). For the grape moth, we applied the same methodology as for the bedbug: training YOLOv5 directly (MG1) and with transfer learning using the weights obtained from MP1 (MG2). Finally, the models' performances were evaluated.

Fig. 1: The pipeline of this work.

2.1. Datasets and data preparation

2.1.1. Public dataset

Deep learning methods show great results on representative datasets. The datasets made available by mySense are not yet complete and contain little data. Therefore, we included Pest24 in our methodology, as it is a very large dataset with great variability. It consists of 25,378 images covering 24 different insect classes [8]. The dataset was divided into 18,105 images for training, 4,857 for validation and 2,417 for testing. The images had an initial resolution of 800×600 pixels and were resized to 640×640 pixels.
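As a rough illustration of this split, the following sketch produces train/validation/test lists in the proportions reported above; the folder layout "pest24/images" and the list-file naming are our own assumptions, not the dataset's actual structure.

```python
import random
from pathlib import Path

# Hypothetical sketch of a train/val/test split roughly matching the
# reported Pest24 proportions (18,105 / 4,857 / 2,417 of 25,378 images).
random.seed(0)

images = sorted(Path("pest24/images").glob("*.jpg"))  # assumed layout
random.shuffle(images)

n = len(images)
n_train = round(n * 18105 / 25378)
n_val = round(n * 4857 / 25378)

splits = {
    "train": images[:n_train],
    "val": images[n_train:n_train + n_val],
    "test": images[n_train + n_val:],
}
for name, files in splits.items():
    # YOLOv5 accepts plain text files listing one image path per line.
    Path(f"{name}.txt").write_text("\n".join(str(f) for f in files))
```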

2.1.2. MySense dataset

For our case study, we also used devices that collect real-time images of insect traps in a natural environment. As shown in Fig. 2, images were collected from two different observatories in two crops, a vineyard and a rice field, enabling the creation of two insect pest datasets: Bedbug and Grape moth.

Fig. 2: Observatories in two crops.

The Bedbug dataset comprises only 42 images, in JPG format, with a resolution of 920×840 pixels. All images were resized to 640×640 pixels, as this is the resolution indicated for the detection methods applied. Since bedbug data was scarce, we resorted to data augmentation techniques: horizontal and vertical flips and rotations between -14° and +14°. After data augmentation, the dataset comprised 102 images with 770 instances. We divided the dataset into training, validation and testing sets, with 90, 7 and 5 images, respectively.

The Grape moth dataset comprised 228 images in JPG format with an initial resolution of 1600×1200 pixels, resized to 640×640 pixels. We then applied data augmentation techniques, namely 90° rotation, upside-down flip, vertical flip, and rotation between -14° and +14°, generating 212 images. After data augmentation, the dataset comprised 348 images with 12,438 instances, divided into 318 images for training, 19 for validation and 11 for testing. Fig. 3 shows examples of images with data augmentation for the two datasets.

Fig. 3: Original images (on the left) and images with data augmentation (on the right).
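A minimal sketch of the flip augmentations described above, assuming Pillow and YOLO-normalised boxes (the file name and box values are illustrative); the small rotations between -14° and +14° additionally require remapping the box corners, which dedicated libraries such as Albumentations handle directly.

```python
from PIL import Image

def flip_horizontal(image, boxes):
    """Horizontal flip. Boxes are YOLO tuples (cls, xc, yc, w, h),
    normalised to [0, 1], so only the x centre needs mirroring."""
    flipped = image.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
    return flipped, [(c, 1.0 - xc, yc, w, h) for (c, xc, yc, w, h) in boxes]

def flip_vertical(image, boxes):
    """Vertical flip: mirror the y centre of each box."""
    flipped = image.transpose(Image.Transpose.FLIP_TOP_BOTTOM)
    return flipped, [(c, xc, 1.0 - yc, w, h) for (c, xc, yc, w, h) in boxes]

# Illustrative usage; "trap.jpg" and the box values are made up.
img = Image.open("trap.jpg")
img_aug, boxes_aug = flip_horizontal(img, [(0, 0.42, 0.31, 0.05, 0.08)])
```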

Annotations of all images are in YOLOv5 format: each image has a .txt file with one line per bounding box, containing the class id, centre x, centre y, width and height. A space separates the parameters, and the coordinates are normalised to the range [0, 1].
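For concreteness, a short sketch of reading this annotation format (the "labels" directory name is an assumption):

```python
from pathlib import Path

def load_yolo_labels(label_path):
    """Parse a YOLOv5 .txt label file: one 'cls xc yc w h' line per
    bounding box, space-separated, coordinates normalised to [0, 1]."""
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        cls, xc, yc, w, h = line.split()
        boxes.append((int(cls), float(xc), float(yc), float(w), float(h)))
    return boxes

# The ground-truth insect count of an image is simply its number of
# label lines, which is what the counting error is measured against.
total = sum(len(load_yolo_labels(p)) for p in Path("labels").glob("*.txt"))
```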

2.2. Training

YOLOv5 uses CSPDarknet-53 as the backbone. The neck combines a PAN with an FPN, which improves the propagation of low-level features through the model and increases object localisation accuracy, even for small objects. The head is composed of the YOLO layer, which can detect objects at three different scales [9]. For this work, we use YOLOv5s because its lower computational cost makes future field deployment feasible. This model contains 270 layers with 7,084,357 parameters and uses the stochastic gradient descent optimiser and the binary cross-entropy loss function.
All experiments are summarised in Table 1. We trained the three datasets separately with YOLOv5s, first on Pest24 and then on the mySense datasets, Bedbug and Grape moth. The goal is to use Pest24 to pre-train the model and then transfer the learning to the mySense datasets, since they contain little data. For Pest24, we applied method MP1, trained for 400 epochs with a batch size of 16 for training and validation. For the bedbug, we trained methods MB1 and MB2; for transfer learning in MB2, we used the MP1 weights and trained only the head of the model, freezing the 12 backbone layers. Each experiment was trained for 150 epochs with a batch size of 16. Finally, for the grape moth, we trained methods MG1 and MG2, each for 250 epochs with a batch size of 16. A sketch of how these runs could be launched is given after Table 1.

Table 1: Description of implemented methods.

Model  Dataset      Description
MP1    Pest24       Detection of 24 insect species with YOLOv5s
MB1    Bedbug       Detection of the bedbug with YOLOv5s
MB2    Bedbug       Detection of the bedbug with transfer learning using the MP1 weights
MG1    Grape moth   Detection of the grape moth with YOLOv5s
MG2    Grape moth   Detection of the grape moth with transfer learning using the MP1 weights
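The runs in Table 1 could be launched roughly as below. This is a hedged sketch using the public ultralytics/yolov5 train.py command line; the dataset YAML names and the MP1 checkpoint path are assumptions, and the exact freeze behaviour should be verified against the repository version used.

```python
import subprocess

def train(data_yaml, weights, epochs, freeze=0):
    """Launch one YOLOv5s training run through the ultralytics/yolov5
    train.py CLI. File names and paths here are illustrative."""
    cmd = [
        "python", "train.py",
        "--img", "640",
        "--batch-size", "16",
        "--epochs", str(epochs),
        "--data", data_yaml,
        "--weights", weights,
    ]
    if freeze:
        cmd += ["--freeze", str(freeze)]  # freeze the first N layers
    subprocess.run(cmd, check=True)

# MP1: train on Pest24 for 400 epochs from the stock YOLOv5s checkpoint.
train("pest24.yaml", "yolov5s.pt", epochs=400)
# MB1 / MB2: Bedbug without and with transfer learning from MP1,
# freezing the 12 backbone layers as described above.
train("bedbug.yaml", "yolov5s.pt", epochs=150)
train("bedbug.yaml", "runs/train/mp1/weights/best.pt", epochs=150, freeze=12)
# MG1 / MG2: Grape moth, 250 epochs each.
train("grape_moth.yaml", "yolov5s.pt", epochs=250)
train("grape_moth.yaml", "runs/train/mp1/weights/best.pt", epochs=250, freeze=12)
```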

2.3. Evaluation

The metrics used to evaluate the models were the average precision (AP), mAP and counting error. AP is the most widely used metric for object detection and is calculated from the shape of the precision/recall curve. The mAP is the mean of the AP over all classes. To consider a detection correct, we set the intersection over union (IoU) threshold to 0.5. IoU is an evaluation metric that measures the common area between the ground truth and the prediction, divided by the total area of the two regions. The true positives (TP) are the number of correct detections, i.e. detections whose IoU exceeds the preset threshold (0.5 by default). The false positives (FP) are the number of wrong detections, with IoU below the threshold, and the false negatives (FN) are the number of objects that are not detected. Precision and recall, used to calculate AP, are given by equations (1) and (2).
$$\text{Precision} = \frac{TP}{TP + FP} \quad (1)$$

$$\text{Recall} = \frac{TP}{TP + FN} \quad (2)$$

The counting error was the metric used to assess the insect counts. It is the absolute difference between the number of correct detections ($C$) and the number of predicted detections ($\hat{C}$), divided by the number of correct detections, as in Equation (3). This metric is expressed as a percentage, and the optimal result is closest to 0.

$$\text{Counting error}\ (\%) = \left| \frac{C - \hat{C}}{C} \right| \times 100 \quad (3)$$
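A minimal sketch of these metrics in Python; the IoU helper takes boxes in corner format, and the example values come from the grape moth image discussed in Section 3.2 (47 insects, 45 detected).

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) corner format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def precision(tp, fp):
    return tp / (tp + fp)            # Equation (1)

def recall(tp, fn):
    return tp / (tp + fn)            # Equation (2)

def counting_error(c, c_hat):
    """Equation (3): |C - C_hat| / C as a percentage; 0 is a perfect count."""
    return abs(c - c_hat) / c * 100

# Example: an image with 47 insects where 45 are detected gives ~4.3%.
print(counting_error(47, 45))
```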

3. Results and discussion

3.1. Pest24 detection

The results achieved by using the Pest24 dataset obtained 69.3%, 64.7% and 66.3 for mAP, recall, and precision.
Fig. 4 shows the graph recall/precision, the grey ones are the APs for each class, and the blue is the mAP curve.
Fig. 5 illustrates the predictions for three images (a), (b) and (c). The first row of images is the ground truth. The
images predicted by the MP1 contain 12 TP, 1 FP and 1 FN.

Fig. 4: Graph of the obtained mAP.

Fig. 5: Example of predictions from three Pest24 images, (a), (b) and (c). The first column is the ground truth, the second is the prediction of the MP1 model.

3.2. Detection and counting mySense datasets

All results are summarised in Table 2, with the best result for each metric in bold. For the Bedbug dataset, the method that obtained the highest AP was MB2, which detects the bedbug with a YOLOv5s model with transfer learning using the MP1 weights. Analysing the results, we can verify that the best performance was obtained by transferring the MP1 weights: the transfer learning solution reached 96.5% AP with a counting error of 63.3%. Regarding the counting error, it is greater than 60% for all methods, which reveals that, on this dataset, the methods were not effective at counting.

Table 2: Results obtained in this work.

Model  AP (%)  Recall (%)  Precision (%)  Counting error (%)
MB1    81.8    75.0        87.9           81.7
MB2    96.5    82.5        86.8           63.3
MG1    90.9    86.2        86.9           6.7
MG2    86.8    86.8        79.7           7.3

The method that obtained the best performance for the grape moth was MG1, which detects the grape moth with the YOLOv5s model in its default configuration. However, the differences between the trained models are not significant. As for the counting error, MG1 produced the best count. Comparing these results with the bedbug, we notice that the AP is similar; however, there is a very large discrepancy in the counting error. This can be explained by the Bedbug dataset containing images with more than 200 insects, while the Grape moth images have fewer insects. Thus, we conclude that YOLO performs worse on images with a high number of instances. This can be observed in Fig. 6, which illustrates the predictions for three images of the Bedbug dataset, (a), (b) and (c).

The first column of images is the ground truth: the first image has 2 bedbugs, the second has 7 bedbugs, and the third has 204 bedbugs. In image (a), both models had difficulty getting the detection right because the image scenario is infrequent in the dataset. In image (b), all detections by the two models are TP. Observing image (c), it is possible to conclude that the methods are not as efficient, presenting false detections (FP) and many undetected bedbugs (FN). For example, on this image MB1 obtains 26 TP and 178 FN, and MB2 obtains 63 TP and 141 FN. Fig. 7 illustrates the predictions of the two methods for three images, (a), (b) and (c). The first column of images shows that the first image has 4 grape moths, the second has 47 grape moths, and the third has 10 grape moths. The predicted grape moth count for each image is in the upper left corner. The methods correctly predicted all grape moths in images (a) and (c). For image (b), the methods were, in general, very close to detecting all insects correctly: MG1 produced two FN, detecting 45 out of 47 grape moths, and MG2 generated only 1 FP, detecting everything else correctly.

Fig. 6: Example of predictions from three Bedbug images. The first column is the ground truth, the second is the prediction of the MB1 model, and the last is the prediction of the MB2 model. In the predicted images, green represents the ground-truth bounding box and red the prediction.

Fig. 7: Example of predictions from three Grape moth images. The first column is the ground truth, the second is the prediction of the MG1 model, and the last is the prediction of the MG2 model. In the predicted images, green represents the ground-truth bounding box and red the prediction.

Comparing the counting results obtained for the bedbug and the grape moth, we can conclude that the grape moth count is much better. This can be explained by the relative scale of the insects and the number of insects in the traps. The grape moth has a larger relative scale than the bedbug, thus occupying more pixels and making the detection task less difficult. On the other hand, some bedbug images contained hundreds of insects, which hampered the methods' efficiency and led to inferior results compared to the grape moth.

4. Conclusion

In this work, we applied YOLOv5 to count bedbugs and grape moths automatically. Two datasets were generated from the images provided by the mySense platform. As images are still being collected, these datasets are still small. Therefore, we started by training YOLOv5s on the Pest24 dataset, obtaining an mAP of 69.3%. For the Bedbug dataset, the best method was YOLOv5 with transfer learning from Pest24, reaching an AP of 96.5% and a counting error of 63.3%. For the Grape moth dataset, the best method was YOLOv5s without transfer learning, with an AP of 90.9% and a counting error of 6.7%.

From this work, we conclude that the results obtained are promising, and it was possible to detect pests with high AP. However, we found that YOLOv5 is less efficient on images with a high number of insects (for example, a bedbug image with 204 insects), which impacted the counting error, so we encourage frequent replacement of traps to avoid high numbers of overlapping insects.
For future work, we intend to add the images continuously collected by mySense to our datasets, apply new deep learning detection and counting methodologies, and deploy the developed methods in the field.

Acknowledgements

This work is supported by National Funds through FCT - Portuguese Foundation for Science and Technology, under the projects UIDB/04033/2020 and UIDB/50014/2020.

References

[1] A. Arias, J. Jiménez, J. A. Rodríguez, J. M. Casado, C. García, and A. J. Lancharro J Vázquez, “La chinche del arroz, Eysarcoris ventralis
(West.), Sin. E. inconspicuus (H. Sch.), en Extremadura: colonización del arroz y estrategias de protección,” 1998.
[2] S. Reis, J. Martins, F. Gonçalves, C. Carlos, and J. A. Santos, “European Grapevine Moth and Vitis vinifera L. Phenology in the Douro Region:
(A)synchrony and Climate Scenarios,” Agronomy, vol. 12, no. 1, Jan. 2022, doi: 10.3390/agronomy12010098.
[3] D. R. Harris and D. Q. Fuller, “Agriculture: Definition and Overview,” in Encyclopedia of Global Archaeology, Springer New York, 2014, pp.
104–113. doi: 10.1007/978-1-4419-0465-2_64.
[4] W. Li, T. Zheng, Z. Yang, M. Li, C. Sun, and X. Yang, “Classification and detection of insects from field images using deep learning for smart
pest management: A systematic review,” Ecological Informatics, vol. 66, Dec. 2021, doi: 10.1016/j.ecoinf.2021.101460.
[5] S. J. Hong et al., “Automatic pest counting from pheromone trap images using deep learning object detectors for matsucoccus thunbergianae
monitoring,” Insects, vol. 12, no. 4, Apr. 2021, doi: 10.3390/insects12040342.
[6] W. Yun, J. P. Kumar, S. Lee, D. S. Kim, and B. K. Cho, “Deep learning-based system development for black pine bast scale detection,”
Scientific Reports, vol. 12, no. 1, Dec. 2022, doi: 10.1038/s41598-021-04432-z.
[7] R. Morais et al., “mySense: A comprehensive data management environment to improve precision agriculture practices,” Computers and
Electronics in Agriculture, vol. 162, pp. 882–894, Jul. 2019, doi: 10.1016/j.compag.2019.05.028.
[8] Q. J. Wang et al., “Pest24: A large-scale very small object data set of agricultural pests for multi-target detection,” Computers and Electronics
in Agriculture, vol. 175, Aug. 2020, doi: 10.1016/j.compag.2020.105585.
[9] L. Zhu, X. Geng, Z. Li, and C. Liu, “Improving yolov5 with attention mechanism for detecting boulders from planetary images,” Remote
Sensing, vol. 13, no. 18, Sep. 2021, doi: 10.3390/rs13183776.
