
Ecological Informatics 63 (2021) 101302


Accurate mapping of Brazil nut trees (Bertholletia excelsa) in Amazonian forests using WorldView-3 satellite images and convolutional neural networks

Matheus Pinheiro Ferreira (a,*), Rodolfo Georjute Lotte (b), Francisco V. D'Elia (b), Christos Stamatopoulos (b), Do-Hyung Kim (c), Adam R. Benjamin (d)

a Cartographic Engineering Department, Military Institute of Engineering (IME), Praça Gen. Tibúrcio 80, 22290-270 Rio de Janeiro, RJ, Brazil
b Bioverse Tecnologia e Inovação em Monitoramento Ambiental, Alameda dos Anapurus 1473, 04087-005 São Paulo, SP, Brazil
c Office of Innovation, UNICEF, New York, NY 10017, USA
d School of Forest, Fisheries, and Geomatics Sciences, Univ. of Florida Geomatics Program, Fort Lauderdale R.E.C., 3205 College Ave., Fort Lauderdale, FL 33314, United States of America

A R T I C L E  I N F O

Keywords:
Deep learning
Tree species discrimination
Amazon
Very-high-resolution

A B S T R A C T

The commercialization of Brazil nuts, seeds of Bertholletia excelsa Bonpl. (Lecythidaceae), represents one of the main income-generation activities for local and indigenous people from the Brazilian Amazon region. Because trees of B. excelsa grow and bear fruit almost exclusively in natural forests, information on their spatial distribution is crucial for nut harvest planning. However, this information is difficult to obtain with traditional approaches such as ground-based surveys. Here, we show the potential of convolutional neural networks (CNNs) and WorldView-3 satellite images (pixel size = 30 cm) to map individual tree crowns (ITCs) and groves of B. excelsa in Amazonian forests. First, we manually outlined B. excelsa ITCs in the WorldView-3 images using field-acquired geolocation information. Then, based on ITC boundaries, we sequentially extracted image patches and selected 80% of them for training and 20% for testing. We trained the DeepLabv3+ architecture with three backbones: ResNet-18, ResNet-50, and MobileNetV2. The average producer's accuracy was 93.87 ± 0.85%, 93.89 ± 1.6%, and 93.47 ± 3.6% for ResNet-18, ResNet-50, and MobileNetV2, respectively. We then developed a new random patch extraction training strategy and assessed how a reduction in the percentage of training patches impacted the classification accuracy. Illustrating the robustness of the new training strategy, similar F1-scores were achieved whether 80% or 10% of the total number of patches were used to train the CNN model. By analyzing the feature maps derived from ResNet-18, we found that the shadows of emergent B. excelsa trees are important for their discrimination. Geometric distortions in the WorldView-3 images resulting from extreme off-nadir viewing angles compromise the presence of shadows, thus potentially hampering B. excelsa detection. Our results show that ITCs and groves of B. excelsa can be mapped by integrating CNNs and very-high-resolution (VHR) satellite images, paving the way for monitoring this important tree species in large tracts of Amazonian forests.

1. Introduction

The Brazil nut is one of the most important non-timber forest products in South America (Peres et al., 2003). It is extracted from Bertholletia excelsa Bonpl. (Lecythidaceae) trees that grow and bear fruit almost exclusively in natural forests rather than in plantations. Brazil nuts are an essential dietary item for local populations in Amazonia and are also traded in considerable volumes. In 2019, Brazil produced 32.9 thousand tons of Brazil nuts, which represented 11.1% of the economic profit generated by exploring natural plant resources (IBGE, 2019). In addition to the nuts' economic importance, the wood of B. excelsa trees has a high market value, which motivates illegal logging activities, given that the species is considered threatened by the Brazilian Ministry of Environment (MMA, 2019).

Management and conservation efforts of B. excelsa trees in Amazonian forests can benefit from information regarding its spatial

* Corresponding author.
E-mail address: matheus@ime.eb.br (M.P. Ferreira).

https://doi.org/10.1016/j.ecoinf.2021.101302
Received 17 February 2021; Received in revised form 9 April 2021; Accepted 9 April 2021
Available online 15 April 2021
1574-9541/© 2021 Elsevier B.V. All rights reserved.

distribution. For example, one can locate productive groves in advance, thus supporting the planning of nut collection. Conservation strategies such as establishing protected areas can be guided by spatially continuous data of B. excelsa occurrence in large tracts of Amazonian forests. However, knowledge of B. excelsa trees' spatial distribution is traditionally obtained with ground-based surveys that involve high costs and logistic efforts. An encouraging way to obtain spatially explicit information on tree species occurrence over large areas is by integrating remote sensing images and machine learning methods.

Recently, convolutional neural networks (CNNs), a type of deep learning method, have been hailed as a promising approach for identifying tree species in remote sensing images, especially in very-high-resolution (VHR) data (Kattenborn et al., 2021). CNNs are designed to automatically extract spatial patterns (e.g., shapes, edges, texture) of images using a set of convolution and pooling operations (Zhang et al., 2016), hence learning object-specific characteristics. In the case of tree species, such characteristics are mainly related to the canopy structure: the arrangement of leaves and branches in the crown or color patterns caused by flowering events. CNNs designed for semantic segmentation can simultaneously classify and detect individual tree crowns (ITCs), thus avoiding object-based approaches that require an ITC delineation step before classification. ITC delineation is a challenging task, particularly in tropical forests in which the tree crowns usually overlap and have highly variable sizes and shapes (Tochon et al., 2015; Wagner et al., 2018; G Braga et al., 2020).

Several studies have demonstrated the potential of CNNs for mapping tree species in urban areas (Hartling et al., 2019; Lobo Torres et al., 2020; Santos et al., 2019; Zhang et al., 2020) or monoculture plantations, especially oil palm (Dong et al., 2020; Li et al., 2017; Liu et al., 2021; Zheng et al., 2020; Zheng et al., 2021). In forest environments, the majority of studies were performed in temperate regions using RGB images (Onishi and Ise, 2021; Schiefer et al., 2020), hyperspectral images (Fricker et al., 2019; Liao et al., 2018; Nezami et al., 2020; Trier et al., 2018), or light detection and ranging (LiDAR) data (Weinstein et al., 2020) acquired by sensors onboard unoccupied aerial vehicles (UAVs, refer to Joyce et al. (2021)) and airplanes. Images collected by UAV-borne or airborne sensors have a high spatial resolution (pixel size < 50 cm) but are typically limited to small spatial extents.

Conversely, VHR spaceborne sensors can combine high spatial resolution with extensive area coverage. Wagner et al. (2019) used WorldView-2 images (pixel size = 50 cm) and the U-net (Ronneberger et al., 2015) convolutional network to map trees of Cecropia hololeuca, an indicator species of disturbance in tropical forests. The results showed that the U-net network produced fine-grained segmentation of C. hololeuca trees with an overall accuracy of 97%. Wagner et al. (2020a) used WorldView-2 and WorldView-3 (pixel size = 30 cm) images and U-net to map the spatial distribution of the species Tibouchina pulchra and assess the regeneration history of Brazilian Atlantic Forest fragments. Trees of T. pulchra were segmented with 98.92% accuracy. The high classification accuracy (>95%) reported by the studies mentioned above can be explained by the remarkable characteristics of C. hololeuca and T. pulchra trees. The former has large, round (>50 cm in diameter), bright gray peltate leaves, and the latter undergoes mass-flowering with purple and pink blossoms that enable its detection.

Some studies performed in tropical forests used CNNs to discriminate palm trees in VHR images and reported high accuracy rates (>90%). For example, Wagner et al. (2020b) performed a regional mapping of canopy palms in Amazonian forests with GeoEye-1 images (pixel size = 50 cm) and the U-net model, achieving 95.5% overall accuracy. Morales et al. (2018) discriminated palm trees of Mauritia flexuosa in UAV images acquired over swamps in the Peruvian Amazon rainforest, reporting 98.1% classification accuracy. Ferreira et al. (2020) developed a deep learning-based method for automatically detecting three Amazonian palm species in UAV images, achieving an average producer's accuracy of 87.8 ± 4.4%. The morphological characteristics of palm crowns can explain the success of CNNs. Palms usually have the leaves concentrated at the top of the stem. They are divided into several leaflets along the leaf rachis, resulting in a peculiar starry pattern in remote sensing images.

CNNs have only recently been employed in vegetation remote sensing, with most studies published in the last two years (Kattenborn et al., 2021). Little is known regarding the potential of CNNs to discriminate among tropical tree species that have a vast diversity of canopy architectures (Hallé et al., 1978). The few existing studies used hyperspectral images acquired by UAVs (e.g., Sothe et al. (2020)). However, these images require extensive calibration and complicated preprocessing steps, which limits their use to specialists. Moreover, a knowledge gap exists regarding suitable CNN training strategies for tree species classification and different backbones for feature extraction.

This study seeks to contribute knowledge regarding the potential of CNNs to map B. excelsa trees in Amazonian forests using WorldView-3 satellite images. We bring new insights into model training strategies and backbones for feature extraction. We present an approach that can map the spatial distribution of individual trees and groves of B. excelsa in large tracts of Amazonian forests.

2. Materials

2.1. Study area

The study area is located in southern Pará State, southeastern Brazilian Amazon, within the Kayapó indigenous land (Fig. 1). The Kayapó territory, a regularized area of 3,284,005 ha, harbors more than 8580 indigenous people (IBGE, 2010) from several ethnicities or subgroups in the eastern part of the Amazon biome. Given that the Xingu River crosses the territory and numerous other small streams run through the inner forest, there are estimated to be 19 known indigenous communities and more than five isolated ones (Ricardo et al., 2000).

The Kayapó territory lies in a transition zone between the Amazon forest and Cerrado (savanna) biomes in central Brazil, where seasonally dry forests dominate the landscape (Salm, 2004). The relief is characterized by rocky ridges that vary between 100 and 500 m in altitude, and the climate is humid and hot, with a dry season that spans from May to August. The region receives less than 1700 mm yr⁻¹ of rain and has a solid dry season of as many as 100 rainless days yr⁻¹ (Zimmerman et al., 2001). The mean annual temperature is 25.8 °C and ranges from 18 °C to 35 °C.

2.2. WorldView-3 images and preprocessing

The WorldView-3 satellite was launched by DigitalGlobe© on 13 August 2014, carrying one of the most advanced commercial VHR sensors. With a circular sun-synchronous orbit and flying at an altitude of 617 km, WorldView-3 collects eight visible to near-infrared (VNIR, 400–1040 nm) bands at 1.2 m resolution, eight shortwave infrared (SWIR, 1210–2365 nm) bands at 3.7 m resolution, and a panchromatic band with 30 cm spatial resolution.

Cloud-free WorldView-3 images were acquired over the area of interest on 19 June 2016, at a maximum off-nadir view angle of 16.07°, encompassing an area of about 1071 km² (see images within the red polygon of Fig. 1). For this study, we used the Red (630–690 nm), Green (510–580 nm), Blue (450–510 nm), and panchromatic (450–800 nm) bands. These bands were provided by DigitalGlobe© as the standard LV2A product (DigitalGlobe, 2016), which is radiometrically corrected and georeferenced to the World Geodetic System (WGS) 1984 datum and the Universal Transverse Mercator (UTM) zone 22S projection.

Through pan-sharpening, the RGB bands were merged with the panchromatic band to obtain a single high-resolution RGB image at 30 cm resolution. We used Brovey's pansharpening algorithm (Hallada and Cox, 1983), available in the open-source Geospatial Data Abstraction Library (GDAL) as the gdal_pansharpen.py script. The weights used by the algorithm for the computation of the pseudo panchromatic value


Fig. 1. Location of the study area in Brazil. (a) Pará state and indigenous territories. (b) Study area located within the Kayapó reserve. (c) True color composition of a WorldView-3 scene (pixel size = 30 cm) showing individual tree crowns (yellow polygons) of Brazil nut trees (Bertholletia excelsa). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
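The Brovey pan-sharpening step described in Section 2.2 can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the transform, not the exact gdal_pansharpen.py implementation; the band weights are the values reported in the text, and all array names are hypothetical.

```python
import numpy as np

def brovey_pansharpen(rgb, pan, weights=(0.005, 0.142, 0.209)):
    """Brovey-style pan-sharpening sketch.

    rgb : float array of shape (3, H, W), RGB bands resampled to the pan grid
    pan : float array of shape (H, W), panchromatic band

    Each band is rescaled by the ratio of the panchromatic value to a
    pseudo-panchromatic value (a weighted sum of the RGB bands).
    """
    w = np.asarray(weights, dtype=float).reshape(3, 1, 1)
    pseudo_pan = (w * rgb).sum(axis=0)          # weighted pseudo panchromatic
    ratio = pan / np.maximum(pseudo_pan, 1e-9)  # guard against division by zero
    return rgb * ratio                          # sharpened RGB bands

# Tiny synthetic example: three constant bands of a 2x2 image
rgb = np.full((3, 2, 2), 100.0)
pan = np.full((2, 2), 50.0)
sharp = brovey_pansharpen(rgb, pan)
```

In practice the same operation is run from the command line with GDAL's pansharpening utility, which additionally handles resampling and georeferencing.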

were 0.005, 0.142, and 0.209 for the RGB bands (Parente and Pepe, 2017), respectively.

2.3. Field data

In 2010, we carried out a fieldwork campaign to geolocate B. excelsa groves. We visited these groves with previous knowledge from the local communities and registered the geographic coordinates of 802 points using a global navigation satellite system (GNSS) device (Garmin 65CSx©). During the fieldwork, the GNSS accuracy was 5 ± 5 m. Thus, the GNSS points do not represent the exact location of an ITC. Most geolocated groves had trees with a large diameter at breast height (DBH) (>60 cm) and thus a large crown (see Blanchard et al. (2016) for more information on the relationship between DBH and crown area). We could quickly identify these trees in the VHR WorldView-3 images (Fig. 1c).

3. Methods

3.1. Individual tree crown dataset

With a true color composition of the WorldView-3 image at a scale of 1:500, we outlined crowns of B. excelsa trees using the QGIS software (QGIS Development Team, 2019). We used the field-acquired GNSS locations (Section 2.3) to guide the manual tree crown delineation procedure. We outlined only ITCs that could be easily identified in the images (Fig. 1), thereby reducing eventual errors caused by the GNSS positional uncertainty. We could identify B. excelsa trees because of their remarkable characteristics: large crowns and height. In total, we outlined 2069 ITCs over the study area.

3.2. CNN architectures

The use of CNNs to retrieve tree species from satellite images is just beginning to be explored. Thus, several technical issues remain unclear. For example, it is not well understood how the classification accuracy is affected by (a) different training strategies or (b) different backbones for feature extraction.

This study tested the DeepLabv3+ architecture with three backbone networks: ResNet-18, ResNet-50, and MobileNetV2. DeepLabv3+ is considered a state-of-the-art framework for semantic segmentation. Proposed by Chen et al. (2018), it consists of an encoder-decoder structure with atrous convolution (Chen et al., 2017) that captures high-level semantic information and produces fine-grained segmentation. We used a downsampling factor (output stride) (Chen et al., 2017) of eight in DeepLabv3+, which means that the input image is downsampled by up to a factor of eight in the encoder section.

The deep residual learning framework called Residual Network (ResNet) was proposed by He et al. (2016). It consists of several residual blocks that forward the feature maps (activations) of a given layer to a deeper layer. This procedure allows the network's number of layers (depth) to increase, which improves feature extraction. ResNet-18 and ResNet-50 differ from each other in the number of residual layers. MobileNetV2, proposed by Sandler et al. (2018), is a lightweight neural network designed for mobile devices. It is smaller and more compact than ResNet-18 and ResNet-50, and has a bottleneck residual block module that uses depth-wise convolutions instead of standard convolutions. More details can be found in Sandler et al. (2018).

3.3. Experimental set-up

In the first experiment, we assessed the performance of ResNet-18, ResNet-50, and MobileNetV2 as backbones of the DeepLabv3+ architecture. To do this, using the manually delineated ITCs, we sequentially extracted 1252 image patches with dimensions of 256 × 256 pixels (Fig. 2a). We carefully inspected the image patches to certify that we outlined all B. excelsa trees within them. We randomly selected 80% (1002) of the patches for training and 20% (250) for testing. The testing patches were used only for validation and did not participate in the training process. Thus, they did not update the network parameters at each epoch. For each image patch, there is a reference labeled image in which the pixels are labeled as either (a) B. excelsa or (b) "forest background", to represent all other trees. Within all 1252 image patches, there were 5,523,478 B. excelsa pixels and 7,652,759 forest background pixels.

For the network architecture that provided the best classification results, we tested two training frameworks. First, we trained the network with the sequentially extracted 256 × 256 pixel-sized patches (Fig. 2a). Second, we combined the sequentially extracted patches so that they roughly form a square. From this image, we extracted 2000 randomly positioned patches of size 256 × 256 pixels (Fig. 2b). We chose 2000 patches because this number provided a reasonable trade-off between classification accuracy and processing time in preliminary tests. This approach, which we called random patch extraction, allowed us to augment the training data size and, at each epoch, use a different


Fig. 2. Illustration of two approaches used to train the convolutional neural network (CNN) model. (a) Sequential patch extraction. The patches are sequentially extracted from the WorldView-3 image based on the manually delineated individual tree crowns' boundaries. These patches are then used to train a CNN model. (b) Random patch extraction. The sequentially extracted patches are combined so that they roughly form a square tiled image, and, from this image, patches are randomly extracted.
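The random patch extraction idea in Fig. 2b can be sketched as follows, assuming the sequentially extracted patches have already been tiled into one large image and a matching dense label mask (array and function names here are illustrative, not from the authors' code):

```python
import numpy as np

def random_patches(image, labels, n_patches=2000, size=256, seed=0):
    """Extract randomly positioned, fully contained patches from a tiled
    image of shape (H, W, C) and its dense label mask of shape (H, W)."""
    rng = np.random.default_rng(seed)
    h, w = labels.shape
    out = []
    for _ in range(n_patches):
        # top-left corner chosen so the patch fits entirely in the mosaic
        r = rng.integers(0, h - size + 1)
        c = rng.integers(0, w - size + 1)
        out.append((image[r:r + size, c:c + size],
                    labels[r:r + size, c:c + size]))
    return out

# Example on a small synthetic mosaic (8 patches of 16x16 pixels)
mosaic = np.zeros((64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=np.uint8)
patches = random_patches(mosaic, mask, n_patches=8, size=16)
```

Because the corners are redrawn at every call, each epoch can be fed a different set of overlapping patches, which is what augments the effective training data size.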

set of images to feed the network.

Additionally, for each framework (i.e., sequential or random patch extraction, Fig. 2), we evaluated the influence of the number of training patches on model performance. To do this, from the 1252 image patches, we selected 20% (250) for validation. Then, we trained the CNN model with the sequential and the random patch extraction approaches (Fig. 2) using 125, 250, 500, 752, and 1002 patches, which corresponded to 10%, 20%, 40%, 60%, and 80% of the original number of patches, respectively. In this experiment, we assessed the classification accuracy of different training percentages using a single dataset of 250 patches selected for validation.

In all the experiments, we used a mini-batch size of 16, and we trained the networks for 30 epochs. An epoch is a full pass over the entire training set, while a mini-batch is a subset of the training patches used to update the network parameters (weights and biases). We used the stochastic gradient descent with momentum optimizer (Murphy, 2012). Data augmentation was used during training and consisted of random reflection and rotation of the patches. Class weighting was used to balance the classes, thus avoiding biased training of the network. The weights of the ResNet-18, ResNet-50, and MobileNetV2 models were initialized with pre-trained values from the ImageNet database (Deng et al., 2009). The learning rate started at 0.05 and decreased by a factor of 0.5 every five epochs.

3.4. Satellite image scene mapping of B. excelsa trees

To assess the potential of CNNs to produce maps of B. excelsa occurrence, we used the network architecture that provided the best classification results (Section 3.3) to detect B. excelsa trees automatically in an entire WorldView-3 scene (not used in the experiments) encompassing an area of about 24 km² of Amazonian forests. We produced a map of individual B. excelsa trees by employing the approach proposed by Ferreira et al. (2020), which is based on morphological operations performed in the score maps of the target species.

3.5. Accuracy assessment

We performed the accuracy assessment with testing patches, which we did not use to train the networks. The results are presented in terms of confusion matrices showing the number of correctly classified and misclassified pixels along with relative percentages. Moreover, we computed the user's and producer's accuracy and the F1-score to evaluate the classification accuracy of B. excelsa obtained after using the random and sequential patch extraction approaches for network training (Section 3.3, Fig. 2).

Since it is operationally unfeasible to verify the species of each tree mapped over the validation WorldView-3 scene (Section 3.4), we performed a visual assessment of the segmented trees using field-acquired GNSS locations of B. excelsa groves (Section 2.3). It is important to note that the GNSS locations have planimetric positional accuracies of approximately 5 ± 5 m. Thus, they do not represent individual trees.

4. Results

The three backbone networks (ResNet-18, ResNet-50, and MobileNetV2) achieved similar values of average producer's accuracy (Fig. 3). Specifically, ResNet-18 reached 93.87% average producer's accuracy and correctly classified 93.3% of the B. excelsa pixels from the testing patches. ResNet-18 was the least computationally intensive model, with an elapsed time of 54 s for each epoch. For MobileNetV2 and ResNet-50, the per-epoch training time was 86 and 140 s, respectively. Because the ResNet-18 model provided the shortest training time, it was chosen to test the two patch extraction approaches (Section 3.3, Fig. 2).

The classification accuracy results of the two strategies used to train the ResNet-18 are shown in Table 1. The sequential patch extraction procedure reached 70.17% and 60.92% F1-score using, respectively,


Fig. 3. Confusion matrices showing the classification accuracy of Brazil nut (Bertholletia excelsa) in WorldView-3 images. The producer's accuracy is distributed along the diagonal cells (highlighted in blue). The percentage of misclassification between B. excelsa and the forest background is shown in the off-diagonal cells. Each cell contains absolute values (pixels) and relative percentages. Three convolutional neural networks were tested: (a) ResNet-18; (b) ResNet-50; and (c) MobileNetV2. The networks were backbones of the DeepLabv3+ architecture for semantic segmentation. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
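The per-class metrics reported in Fig. 3 and Table 1 follow the standard definitions from a confusion matrix; a short sketch with a small hypothetical two-class matrix (rows = reference, columns = predicted; the counts are invented for illustration):

```python
import numpy as np

def class_metrics(cm, k):
    """Producer's accuracy (recall), user's accuracy (precision), and
    F1-score for class k of a confusion matrix whose rows are reference
    classes and whose columns are predicted classes."""
    tp = cm[k, k]
    pa = tp / cm[k, :].sum()        # producer's accuracy = recall
    ua = tp / cm[:, k].sum()        # user's accuracy = precision
    f1 = 2 * ua * pa / (ua + pa)    # F1 = harmonic mean of UA and PA
    return pa, ua, f1

# Hypothetical counts: class 0 = B. excelsa, class 1 = forest background
cm = np.array([[90, 10],
               [20, 80]])
pa, ua, f1 = class_metrics(cm, 0)
```

With these counts, the producer's accuracy is 90/100 = 0.90 and the user's accuracy 90/110 ≈ 0.82, illustrating the pattern in Table 1 where a high PA can coexist with a much lower UA.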

Table 1
Classification accuracy metrics obtained with the sequential and random patch extraction approaches (Section 3.3, Fig. 2) used to train the DeepLabv3+ architecture with ResNet-18 as backbone. UA = User's accuracy (%), PA = Producer's accuracy (%).

% of training data        Sequential patch extraction     Random patch extraction
(no. of patches)          UA      PA      F1-score        UA      PA      F1-score
10 (125)                  47.07   86.29   60.92           54.75   98.79   70.45
20 (250)                  51.23   91.52   65.69           56.59   97.95   71.74
40 (500)                  57.64   92.43   71.00           55.46   98.42   70.95
60 (752)                  56.51   96.12   71.17           56.83   98.06   71.95
80 (1002)                 56.24   93.27   70.17           56.69   97.92   71.81

80% and 10% of the training patches, which represents a decrease of 9.25 percentage points. Conversely, for the random patch extraction approach, the F1-score changed minimally, in a range of 70.45% to 71.95%, whether 10% (125) or 80% (1002) of the training patches were used (see Fig. 2b).

Some examples of the automatic detection of B. excelsa trees in testing patches of WorldView-3 images are shown in Fig. 4. The strongest activations (feature maps) of the initial and intermediate layers reveal interesting features learned by the network. For example, one can note that the first layer detects mainly edges (Fig. 4b), while the intermediate layer shows positive activations (white pixels) on shadows of emergent trees (Fig. 4c). The penultimate layer shows strong activations in areas of B. excelsa occurrence (Fig. 4d). The network uses the feature map information to semantically segment the image patch (Fig. 4e). However, it labeled pixels that did not coincide with the ground truth (red arrows in Fig. 4d,e). A closer look at these areas of misclassification reveals that they belong to emergent trees. It is worth noting that the misclassified pixels did not show the same strong positive activations (bright white) in the penultimate layer as the pixels belonging to the target species, which suggests low confidence by the network. In Fig. 4f, we show the results obtained by the ITC detection method proposed by Ferreira et al. (2020). Because this method is based on the areas in which the network has high confidence, misclassified pixels are suppressed.

In Fig. 5, we show ITCs of B. excelsa mapped over an entire WorldView-3 scene (see Section 3.4). Our approach produced realistic maps of B. excelsa by labeling only their pixels. Fieldwork information revealed that the automatically detected ITCs are found in areas of B. excelsa occurrence (Fig. 5b,c).

5. Discussion

5.1. Model performance and training strategies

We demonstrate the potential of CNNs and VHR satellite images to map B. excelsa trees in large tracts of Amazonian forests. We introduced a novel strategy to train the CNN model that achieves high classification accuracy (>97%) with a reduced training set. Convolutional neural networks are data-driven methods that usually require large amounts of ground truth information to be successfully trained. However, ground-based surveys are costly and may be logistically prohibitive, particularly in areas of tropical forests with difficult access. The results of the proposed random patch extraction approach showed that the classification accuracy of the target species did not change significantly after reducing the percentage of training patches from 80% (1002 patches) to 10% (125 patches) (Table 1). It is important to note that the random patch extraction procedure relies on a pixel-wise (dense) reference labeled dataset. For training, all pixels of the image patch must be assigned a label.

Our ITC dataset was constructed by visual interpretation. We used GNSS information from fieldwork surveys to guide the manual delineation of B. excelsa trees in the WorldView-3 images. While GNSS measurements are subject to geolocation errors, particularly under dense forest canopies (Kaartinen et al., 2015), they provide crucial information on B. excelsa groves. These groves are composed of emergent trees that stand out in the forest landscape, facilitating the identification and delineation of their crowns. Trees of B. excelsa often reach 30 to 50 m in height and a diameter at breast height that can exceed 300 cm (Baldoni et al., 2020; Mori and Prance, 1990; Scoles and Gribel, 2011). Visual interpretation is the most used approach in studies involving CNNs in vegetation remote sensing (Kattenborn et al., 2021).

We used the DeepLabv3+ (Chen et al., 2018) architecture, given its good performance in discriminating tropical tree species in remote sensing images. Morales et al. (2018) achieved 98.1% classification accuracy using the DeepLabv3+ architecture to automatically segment palm trees of Mauritia flexuosa in the Peruvian Amazon from UAV images. Ferreira et al. (2020) used DeepLabv3+ combined with a ResNet-18 network (He et al., 2016) to detect and identify three species of Amazonian palms in

Fig. 4. Illustration of the semantic segmentation process with the DeepLabv3+ architecture and ResNet-18 as backbone network. The input image patches of 256 × 256 pixels are passed through convolutional and rectified linear unit layers that learn features of Brazil nut trees (Bertholletia excelsa). These features are used to distinguish them from other types of trees. (a) Ground truth individual tree crowns of B. excelsa overlaid on a true color composition of the WorldView-3 image (pixel size = 30 cm); (b) Strongest activation channel of the first convolutional layer. Note that this channel activates on edges; (c) Strongest activation channel of an intermediate layer that activates on shaded portions of the input image; (d) Activation of the penultimate layer of the network (before the softmax layer) showing strong activations in areas of B. excelsa occurrence; (e) Semantic segmentation results provided by the network; and (f) Individual trees detected by using the method proposed by Ferreira et al. (2020). The red arrows in (d) and (e) point to areas segmented by the network that did not coincide with the ground truth. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
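The confidence-based ITC detection shown in Fig. 4f relies on morphological operations over the network's score maps (Ferreira et al., 2020). The sketch below illustrates only the general idea, not the published method: the threshold and minimum-component size are hypothetical values, and SciPy is assumed to be available.

```python
import numpy as np
from scipy import ndimage

def detect_itcs(score_map, thr=0.9, min_pixels=50):
    """Sketch of confidence-based ITC detection: keep only pixels where
    the score for the target class is high, clean the mask with a
    morphological opening, and label connected components as candidate
    individual tree crowns (small components are zeroed out, labels are
    not renumbered)."""
    mask = score_map >= thr                            # high-confidence pixels
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labeled, n = ndimage.label(mask)                   # one id per candidate ITC
    for i in range(1, n + 1):
        if (labeled == i).sum() < min_pixels:          # drop tiny fragments
            labeled[labeled == i] = 0
    return labeled

# Synthetic score map with one confident 12x12 blob
scores = np.zeros((64, 64))
scores[10:22, 10:22] = 0.95
itcs = detect_itcs(scores, thr=0.9, min_pixels=50)
```

Because only high-confidence regions survive the threshold, weakly activated areas (such as the misclassified emergent trees marked by the red arrows in Fig. 4d,e) are naturally suppressed.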

Fig. 5. Individual tree crowns of Brazil nut trees (Bertholletia excelsa) mapped over Amazonian forests. (a) True color composition of a WorldView-3 scene (pixel size
= 30 cm) overlaid with automatically detected ITCs of Brazil nut trees (yellow polygons) and GNSS information acquired in the field (cyan points). (b) and (c) show
two areas of B. excelsa occurrence visited in the field. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of
this article.)
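The optimization schedule in Section 3.3 (initial learning rate 0.05, halved every five epochs over 30 epochs) implies a simple step decay. The helper below is an illustrative reconstruction under one reasonable reading of that description, not the authors' code:

```python
def step_lr(epoch, base_lr=0.05, drop=0.5, every=5):
    """Learning rate for a given 1-based epoch under step decay:
    the rate is multiplied by `drop` after every `every` completed epochs."""
    return base_lr * drop ** ((epoch - 1) // every)

# Full 30-epoch schedule: 0.05 for epochs 1-5, 0.025 for 6-10, and so on.
schedule = [step_lr(e) for e in range(1, 31)]

# With 1002 training patches and mini-batches of 16 (Section 3.3),
# one epoch corresponds to ceil(1002 / 16) = 63 parameter updates.
updates_per_epoch = -(-1002 // 16)
```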


UAV images, reaching 87.8% average class accuracy. Besides DeepLabv3+, the U-net architecture (Ronneberger et al., 2015) has been successfully employed in tropical tree species classification tasks (e.g., Wagner et al. (2019); Lobo Torres et al. (2020); Wagner et al. (2020a, 2020b)). As shown by Lobo Torres et al. (2020), U-net is less computationally intensive than DeepLabv3+. Thus, we aim to use U-net in future studies involving large-scale mapping of B. excelsa trees.

5.2. Considerations for large-scale mapping of B. excelsa trees

Through successive transformations (convolutions) of the input image, CNNs identify and learn low-level image features, such as edges and blobs, and high-level features, such as complex shapes representing the target class. In the case of tree species, the features learned by a CNN model are mainly related to crown structure. For example, the success of CNN models in retrieving palm trees from high-resolution images (Ferreira et al., 2020; Wagner et al., 2020b) can be explained by the peculiar pattern of palm crowns. Similarly, the differentiation between a conifer and a broad-leaf tree relies on their typical canopy structures (Schiefer et al., 2020). Here, we found that deep shadows play a prominent role in the detection of B. excelsa trees (Fig. 4c). These shadows result from the sun-sensor geometry and are pronounced for emergent trees such as adult B. excelsa individuals.

The WorldView-3 satellite has off-nadir imaging capabilities, meaning that it can acquire data at a viewing angle away from nadir, i.e., without looking straight down at the target. WorldView-3 can point the sensor up to 20° from nadir. Off-nadir viewing increases the sensor's temporal resolution but distorts the image by enlarging the instantaneous field of view and, consequently, the pixel size. Extreme off-nadir viewing angles reduce image shadows (Wagner et al., 2018), which can hamper B. excelsa tree detection. However, large-scale mapping with VHR satellite images requires combining multiple scenes acquired under various sun-sensor geometries and off-nadir angles. Thus, more research is needed to assess the ability of CNNs to deal with the effects of satellite image acquisition conditions on tree species mapping over large spatial extents.

Another crucial aspect to consider for tree species classification from remotely sensed data is plant phenology. Phenology determines seasonal variations in plant characteristics, such as changes in leaf color and fruiting and flowering events. In the eastern Brazilian Amazon, leaf fall for B. excelsa trees begins at the end of the rainy season (generally in July), while flowering occurs during the dry season (September to February) (Mori and Prance, 1990). Phenology influences the crown structural properties and spectral response of tropical tree species, affecting classification accuracy (Ferreira et al., 2019). The role of phenology in B. excelsa discrimination with remote sensing is poorly known and requires further investigation.

5.3. Perspectives for forest management and conservation

The sustainable exploitation of B. excelsa is one of the most important socio-economic activities involving non-timber products in the Amazon (Peres et al., 2003), with minimal impact on the forest ecosystem (de Oliveira Wadt et al., 2018). Mapping the spatial distribution of B. excelsa trees over large tracts of Amazonian forests has important implications for forest management, ecological studies, and conservation policymaking. For example, a B. excelsa map like Fig. 5 could help local communities find the most productive groves without roaming long distances into the forest. Moreover, species maps can be used to improve species distribution models (He et al., 2015) and to assess species-specific responses to environmental and climate change (Sales et al., 2021). Brazil nuts are the main food source for several wild animals, such as agoutis (genus Dasyprocta) and macaws (e.g., Ara macao and Anodorhynchus hyacinthinus). Conservation strategies for these animals can benefit from a map of B. excelsa tree occurrence to infer habitat suitability.

Particularly for the indigenous population that inhabits the study area, the Kayapó ethnic group, the Brazil nut trade represents one of the main income-generation activities. Maps of B. excelsa can be used to locate the most productive groves in each village's territory. Furthermore, knowledge of the spatial distribution of B. excelsa contributes to assessing the impacts of nut harvesting on the dynamics of the species, including seedling density and dispersal, which are traditionally assessed with ground-based surveys (Ribeiro et al., 2014).

6. Conclusions

Our study shows the potential of CNNs and WorldView-3 images to map B. excelsa trees in Amazonian forests. We propose a new model training strategy capable of achieving a high producer's accuracy (>97%) with a reduced set of training patches. We confirm the effectiveness of the DeepLabv3+ architecture with ResNet-18 as a backbone network for feature extraction. We show that the shadow of emergent B. excelsa trees is a crucial discriminative feature. The off-nadir imaging capabilities of WorldView-3 can affect the presence of shadows by distorting the image. Thus, it is essential to consider the off-nadir viewing angle of WorldView-3 images when mapping B. excelsa and other emergent trees. Individual trees and groves of B. excelsa can be mapped accurately over large tracts of Amazonian forests, helping forest managers, ecologists, and indigenous populations to conserve and manage this important tree species.

CRediT authorship contribution statement

Matheus Pinheiro Ferreira: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Writing - original draft, Writing - review & editing. Rodolfo Georjute Lotte: Conceptualization, Methodology, Investigation, Writing - original draft, Writing - review & editing. Francisco V. D'Elia: Project administration, Supervision, Writing - original draft, Writing - review & editing. Christos Stamatopoulos: Resources, Data curation. Do-Hyung Kim: Resources, Data curation, Writing - original draft. Adam R. Benjamin: Resources, Data curation, Writing - review & editing.

Declaration of Competing Interest

None.

Acknowledgments

This work was supported by Bioverse Tecnologia e Inovação em Monitoramento Ambiental and the UNICEF Innovation Venture Fund. The WorldView-3 images were downloaded and processed strictly under the NextView license. M.P. Ferreira was supported by the Brazilian National Council for Scientific and Technological Development (CNPq) (grant #306345/2020-0). We are grateful to four anonymous reviewers for their valuable suggestions, which improved the quality and presentation of our manuscript.

References

Baldoni, A.B., Teodoro, L.P.R., Teodoro, P.E., Tonini, H., Tardin, F.D., Botin, A.A., Hoogerheide, E.S.S., Botelho, S.D.C.C., Lulu, J., de Farias Neto, A.L., et al., 2020. Genetic diversity of Brazil nut tree (Bertholletia excelsa Bonpl.) in southern Brazilian Amazon. For. Ecol. Manag. 458, 117795. https://doi.org/10.1016/j.foreco.2019.117795.
Blanchard, E., Birnbaum, P., Ibanez, T., Boutreux, T., Antin, C., Ploton, P., Vincent, G., Pouteau, R., Vandrot, H., Hequet, V., et al., 2016. Contrasted allometries between stem diameter, crown area, and tree height in five tropical biogeographic areas. Trees 30, 1953–1968. https://doi.org/10.1007/s00468-016-1424-3.
Braga, J.R.G., Peripato, V., Dalagnol, R., Ferreira, M.P., Tarabalka, Y., Aragão, L.E.O.C., de Campos Velho, H.F., Shiguemori, E.H., Wagner, F.H., 2020. Tree crown delineation algorithm based on a convolutional neural network. Remote Sens. 12. https://doi.org/10.3390/rs12081288.
Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L., 2017. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and
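The magnitude of this shadow effect can be illustrated with a minimal numerical sketch; the tree height and solar elevation used below are hypothetical values chosen for illustration, not measurements from this study:

```python
import math

def shadow_length(tree_height_m: float, sun_elevation_deg: float) -> float:
    """Approximate shadow length on flat terrain: L = h / tan(solar elevation)."""
    return tree_height_m / math.tan(math.radians(sun_elevation_deg))

# A hypothetical 45 m emergent crown under a 60-degree sun casts a shadow of
# roughly 26 m -- tens of WorldView-3 pixels at 30 cm resolution, which is why
# deep shadows remain visible to the CNN as a discriminative feature.
print(round(shadow_length(45, 60), 1))
```

Lower solar elevations lengthen the shadow, so acquisition time of day and season both modulate how strongly this cue is expressed.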
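The pixel enlargement under off-nadir viewing can be sketched with a first-order flat-terrain approximation; this is a simplification, not the vendor's rigorous sensor model, and the 0.31 m value is WorldView-3's nominal panchromatic nadir GSD:

```python
import math

def off_nadir_gsd(nadir_gsd_m: float, off_nadir_deg: float) -> tuple:
    """First-order flat-earth approximation of ground sample distance (GSD).

    Cross-track GSD grows with slant range (~1/cos of the off-nadir angle);
    the along-look GSD additionally stretches with the incidence on the
    ground (~1/cos^2).
    """
    c = math.cos(math.radians(off_nadir_deg))
    return nadir_gsd_m / c, nadir_gsd_m / c**2

# At the 20-degree maximum pointing angle, the nominal 0.31 m pixel grows
# to roughly 0.33 m cross-track and 0.35 m along-look.
cross, along = off_nadir_gsd(0.31, 20.0)
print(f"cross-track: {cross:.3f} m, along-look: {along:.3f} m")
```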
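One practical consequence is that scenes assembled for large-scale B. excelsa mapping could be screened against the phenological window reported by Mori and Prance (1990). A minimal sketch follows; the scene list and its date format are hypothetical:

```python
from datetime import datetime

# Months of the dry-season flowering window (September to February),
# following Mori and Prance (1990) for the eastern Brazilian Amazon.
DRY_SEASON_MONTHS = {9, 10, 11, 12, 1, 2}

def in_flowering_window(acquisition_date: str) -> bool:
    """True if a scene date ('YYYY-MM-DD') falls in the flowering window."""
    return datetime.strptime(acquisition_date, "%Y-%m-%d").month in DRY_SEASON_MONTHS

scenes = ["2019-07-15", "2019-10-02", "2020-01-20", "2020-04-01"]  # hypothetical
selected = [d for d in scenes if in_flowering_window(d)]
print(selected)  # the July (leaf-fall) and April scenes are excluded
```

Such a filter would keep the training and inference imagery phenologically consistent, one way to control the seasonal variability discussed above.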

fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848. https://doi.org/10.1109/TPAMI.2017.2699184.
Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H., 2018. Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 801–818.
de Oliveira Wadt, L.H., Faustino, C.L., Staudhammer, C.L., Kainer, K.A., Evangelista, J.S., 2018. Primary and secondary dispersal of Bertholletia excelsa: implications for sustainable harvests. For. Ecol. Manag. 415-416, 98–105. https://doi.org/10.1016/j.foreco.2018.02.014.
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L., 2009. ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, pp. 248–255. https://doi.org/10.1109/CVPR.2009.5206848.
Development Team, Q.G.I.S., 2019. QGIS Geographic Information System. Open Source Geospatial Foundation. URL: http://qgis.osgeo.org.
DigitalGlobe, 2016. Radiometric Use of WorldView-3 Imagery. Technical Report. URL: https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/207/Radiometric_Use_of_WorldView-3_v2.pdf.
Dong, R., Li, W., Fu, H., Gan, L., Yu, L., Zheng, J., Xia, M., 2020. Oil palm plantation mapping from high-resolution remote sensing images using deep learning. Int. J. Remote Sens. 41, 2022–2046. https://doi.org/10.1080/01431161.2019.1681604.
Ferreira, M.P., Wagner, F.H., Aragão, L.E., Shimabukuro, Y.E., de Souza Filho, C.R., 2019. Tree species classification in tropical forests using visible to shortwave infrared WorldView-3 images and texture analysis. ISPRS J. Photogramm. Remote Sens. 149, 119–131. https://doi.org/10.1016/j.isprsjprs.2019.01.019.
Ferreira, M.P., de Almeida, D.R.A., de Almeida Papa, D., Minervino, J.B.S., Veras, H.F.P., Formighieri, A., Santos, C.A.N., Ferreira, M.A.D., Figueiredo, E.O., Ferreira, E.J.L., 2020. Individual tree detection and species classification of Amazonian palms using UAV images and deep learning. For. Ecol. Manag. 475, 118397. https://doi.org/10.1016/j.foreco.2020.118397.
Fricker, G.A., Ventura, J.D., Wolf, J.A., North, M.P., Davis, F.W., Franklin, J., 2019. A convolutional neural network classifier identifies tree species in mixed-conifer forest from hyperspectral imagery. Remote Sens. 11. https://doi.org/10.3390/rs11192326.
Hallada, W.A., Cox, S., 1983. Image Sharpening for Mixed Spatial and Spectral Resolution Satellite Systems.
Hallé, F., Oldeman, R.A., Tomlinson, P.B., 1978. Tropical Trees and Forests: An Architectural Analysis, 1st ed. Springer-Verlag, Berlin Heidelberg. https://doi.org/10.1007/978-3-642-81190-6.
Hartling, S., Sagan, V., Sidike, P., Maimaitijiang, M., Carron, J., 2019. Urban tree species classification using a WorldView-2/3 and LiDAR data fusion approach and deep learning. Sensors 19, 1284. https://doi.org/10.3390/s19061284.
He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. https://doi.org/10.1109/CVPR.2016.90.
He, K.S., Bradley, B.A., Cord, A.F., Rocchini, D., Tuanmu, M.N., Schmidtlein, S., Turner, W., Wegmann, M., Pettorelli, N., 2015. Will remote sensing shape the next generation of species distribution models? Rem. Sens. Ecol. Conserv. 1, 4–18. https://doi.org/10.1002/rse2.7.
IBGE, 2010. Censo demográfico 2010. Demographical Census 2010. Technical Report. Instituto Brasileiro de Geografia e Estatística (IBGE). URL: http://www.ibge.gov.br/estatistica.
IBGE, 2019. Produção da Extração Vegetal e da Silvicultura 2019. Technical Report. Instituto Brasileiro de Geografia e Estatística (IBGE). URL: https://biblioteca.ibge.gov.br/visualizacao/periodicos/74/pevs_2019_v34_informativo.pdf.
Joyce, K.E., Anderson, K., Bartolo, R.E., 2021. Of course we fly unmanned—we're women! Drones 5. https://doi.org/10.3390/drones5010021.
Kaartinen, H., Hyyppä, J., Vastaranta, M., Kukko, A., Jaakkola, A., Yu, X., Pyörälä, J., Liang, X., Liu, J., Wang, Y., et al., 2015. Accuracy of kinematic positioning using global satellite navigation systems under forest canopies. Forests 6, 3218–3236. https://doi.org/10.3390/f6093218.
Kattenborn, T., Leitloff, J., Schiefer, F., Hinz, S., 2021. Review on convolutional neural networks (CNN) in vegetation remote sensing. ISPRS J. Photogramm. Remote Sens. 173, 24–49. https://doi.org/10.1016/j.isprsjprs.2020.12.010.
Li, W., Fu, H., Yu, L., Cracknell, A., 2017. Deep learning based oil palm tree detection and counting for high-resolution remote sensing images. Remote Sens. 9, 22. https://doi.org/10.3390/rs9010022.
Liao, W., Van Coillie, F., Gao, L., Li, L., Zhang, B., Chanussot, J., 2018. Deep learning for fusion of APEX hyperspectral and full-waveform LiDAR remote sensing data for tree species mapping. IEEE Access 6, 68716–68729. https://doi.org/10.1109/ACCESS.2018.2880083.
Liu, X., Ghazali, K.H., Han, F., Mohamed, I.I., 2021. Automatic detection of oil palm tree from UAV images based on the deep learning method. Appl. Artif. Intell. 35, 13–24. https://doi.org/10.1080/08839514.2020.1831226.
Lobo Torres, D., Queiroz Feitosa, R., Nigri Happ, P., Elena Cué La Rosa, L., Marcato Junior, J., Martins, J., Olã Bressan, P., Gonçalves, W.N., Liesenberg, V., 2020. Applying fully convolutional architectures for semantic segmentation of a single tree species in urban environment on high resolution UAV optical imagery. Sensors 20, 563. https://doi.org/10.3390/s20020563.
MMA, 2019. Official list of the threatened species of the flora of Brazil (Portaria n. 443, de 17 de dezembro de 2014, pp. 110–121). Technical Report. Brazilian Ministry of Environment. URL: https://dados.gov.br/dataset/portaria_443.
Morales, G., Kemper, G., Sevillano, G., Arteaga, D., Ortega, I., Telles, J., 2018. Automatic segmentation of Mauritia flexuosa in unmanned aerial vehicle (UAV) imagery using deep learning. Forests 9. https://doi.org/10.3390/f9120736.
Mori, S.A., Prance, G.T., 1990. Taxonomy, ecology, and economic botany of the Brazil nut (Bertholletia excelsa Humb. & Bonpl.: Lecythidaceae). Adv. Econ. Bot. 130–150. URL: https://www.jstor.org/stable/43927571.
Murphy, K.P., 2012. Machine Learning: A Probabilistic Perspective. MIT Press.
Nezami, S., Khoramshahi, E., Nevalainen, O., Pölönen, I., Honkavaara, E., 2020. Tree species classification of drone hyperspectral and RGB imagery with deep learning convolutional neural networks. Remote Sens. 12. https://doi.org/10.3390/rs12071070.
Onishi, M., Ise, T., 2021. Explainable identification and mapping of trees using UAV RGB image and deep learning. Sci. Rep. 11, 1–15. https://doi.org/10.1038/s41598-020-79653-9.
Parente, C., Pepe, M., 2017. Influence of the weights in IHS and Brovey methods for pan-sharpening WorldView-3 satellite images. Int. J. Eng. Technol. 6, 71–77.
Peres, C.A., Baider, C., Zuidema, P.A., Wadt, L.H., Kainer, K.A., Gomes-Silva, D.A., Salomão, R.P., Simões, L.L., Franciosi, E.R., Valverde, F.C., et al., 2003. Demographic threats to the sustainability of Brazil nut exploitation. Science 302, 2112–2114. https://doi.org/10.1126/science.1091698.
Ribeiro, M.B.N., Jerozolimski, A., de Robert, P., Salles, N.V., Kayapó, B., Pimentel, T.P., Magnusson, W.E., 2014. Anthropogenic landscape in southeastern Amazonia: contemporary impacts of low-intensity harvesting and dispersal of Brazil nuts by the Kayapó indigenous people. PLoS One 9, 1–8. https://doi.org/10.1371/journal.pone.0102187.
Ricardo, C.A., et al., 2000. Povos indígenas no Brasil: 1996/2000. Instituto Socioambiental.
Ronneberger, O., Fischer, P., Brox, T., 2015. U-net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (Eds.), Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Springer International Publishing, Cham, pp. 234–241. https://doi.org/10.1007/978-3-319-24574-4_28.
Sales, L.P., Rodrigues, L., Masiero, R., 2021. Climate change drives spatial mismatch and threatens the biotic interactions of the Brazil nut. Glob. Ecol. Biogeogr. 30, 117–127. https://doi.org/10.1111/geb.13200.
Salm, R., 2004. Tree species diversity in a seasonally-dry forest: the case of the Pinkaití site, in the Kayapó indigenous area, southeastern limits of the Amazon. Acta Amazon. 34, 435–443. https://doi.org/10.1590/S0044-59672004000300009.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C., 2018. MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520.
Santos, A.A.D., Marcato Junior, J., Araújo, M.S., Di Martini, D.R., Tetila, E.C., Siqueira, H.L., Aoki, C., Eltner, A., Matsubara, E.T., Pistori, H., et al., 2019. Assessment of CNN-based methods for individual tree detection on images captured by RGB cameras attached to UAVs. Sensors 19, 3595. https://doi.org/10.3390/s19163595.
Schiefer, F., Kattenborn, T., Frick, A., Frey, J., Schall, P., Koch, B., Schmidtlein, S., 2020. Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 170, 205–215. https://doi.org/10.1016/j.isprsjprs.2020.10.015.
Scoles, R., Gribel, R., 2011. Population structure of Brazil nut (Bertholletia excelsa, Lecythidaceae) stands in two areas with different occupation histories in the Brazilian Amazon. Hum. Ecol. 39, 455–464. https://doi.org/10.1007/s10745-011-9412-0.
Sothe, C., Almeida, C.M.D., Schimalski, M.B., Rosa, L.E.C.L., Castro, J.D.B., Feitosa, R.Q., Dalponte, M., Lima, C.L., Liesenberg, V., Miyoshi, G.T., Tommaselli, A.M.G., 2020. Comparative performance of convolutional neural network, weighted and conventional support vector machine and random forest for classifying tree species using hyperspectral and photogrammetric data. GISci. Rem. Sens. 57, 369–394. https://doi.org/10.1080/15481603.2020.1712102.
Tochon, G., Féret, J., Valero, S., Martin, R., Knapp, D., Salembier, P., Chanussot, J., Asner, G., 2015. On the use of binary partition trees for the tree crown segmentation of tropical rainforest hyperspectral images. Remote Sens. Environ. 159, 318–331. https://doi.org/10.1016/j.rse.2014.12.020.
Trier, Ø.D., Salberg, A.B., Kermit, M., Rudjord, Ø., Gobakken, T., Næsset, E., Aarsten, D., 2018. Tree species classification in Norway from airborne hyperspectral and airborne laser scanning data. Eur. J. Rem. Sens. 51, 336–351. https://doi.org/10.1080/22797254.2018.1434424.
Wagner, F.H., Ferreira, M.P., Sanchez, A., Hirye, M.C., Zortea, M., Gloor, E., Phillips, O.L., de Souza Filho, C.R., Shimabukuro, Y.E., Aragão, L.E.O.C., 2018. Individual tree crown delineation in a highly diverse tropical forest using very high resolution satellite images. ISPRS J. Photogramm. Remote Sens. 145, 362–377. https://doi.org/10.1016/j.isprsjprs.2018.09.013.
Wagner, F.H., Sanchez, A., Tarabalka, Y., Lotte, R.G., Ferreira, M.P., Aidar, M.P., Gloor, E., Phillips, O.L., Aragão, L.E.O.C., 2019. Using the U-net convolutional network to map forest types and disturbance in the Atlantic rainforest with very high resolution images. Rem. Sens. Ecol. Conserv. 5, 360–375. https://doi.org/10.1002/rse2.111.
Wagner, F.H., Sanchez, A., Aidar, M.P., Rochelle, A.L., Tarabalka, Y., Fonseca, M.G., Phillips, O.L., Gloor, E., Aragão, L.E.O.C., 2020a. Mapping Atlantic rainforest degradation and regeneration history with indicator species using convolutional network. PLoS One 15, 1–24. https://doi.org/10.1371/journal.pone.0229448.
Wagner, F.H., Dalagnol, R., Tagle Casapia, X., Streher, A.S., Phillips, O.L., Gloor, E., Aragão, L.E.O.C., 2020b. Regional mapping and spatial distribution analysis of canopy palms in an Amazon forest using deep learning and VHR images. Remote Sens. 12. https://doi.org/10.3390/rs12142225.
Weinstein, B.G., Marconi, S., Bohlman, S.A., Zare, A., White, E.P., 2020. Cross-site learning in deep learning RGB tree crown detection. Ecol. Inform. 56, 101061. https://doi.org/10.1016/j.ecoinf.2020.101061.

Zhang, C., Xia, K., Feng, H., Yang, Y., Du, X., 2020. Tree species classification using deep learning and RGB optical images obtained by an unmanned aerial vehicle. J. For. Res. 1–10. https://doi.org/10.1007/s11676-020-01245-0.
Zhang, L., Zhang, L., Du, B., 2016. Deep learning for remote sensing data: a technical tutorial on the state of the art. IEEE Geosci. Rem. Sens. Magaz. 4, 22–40. https://doi.org/10.1109/MGRS.2016.2540798.
Zheng, J., Fu, H., Li, W., Wu, W., Zhao, Y., Dong, R., Yu, L., 2020. Cross-regional oil palm tree counting and detection via a multi-level attention domain adaptation network. ISPRS J. Photogramm. Remote Sens. 167, 154–177. https://doi.org/10.1016/j.isprsjprs.2020.07.002.
Zheng, J., Fu, H., Li, W., Wu, W., Yu, L., Yuan, S., Tao, W.Y.W., Pang, T.K., Kanniah, K.D., 2021. Growing status observation for oil palm trees using unmanned aerial vehicle (UAV) images. ISPRS J. Photogramm. Remote Sens. 173, 95–121. https://doi.org/10.1016/j.isprsjprs.2021.01.008.
Zimmerman, B., Peres, C.A., Malcolm, J.R., Turner, T., 2001. Conservation and development alliances with the Kayapó of South-Eastern Amazonia, a tropical forest indigenous people. Environ. Conserv. 10–22. URL: http://www.jstor.org/stable/44521675.
