INTERNATIONAL JOURNAL OF REMOTE SENSING
2023, VOL. 44, NO. 8, 2717–2753
https://doi.org/10.1080/01431161.2023.2205984

Crop mapping using supervised machine learning and deep learning: a systematic literature review

Mouad Alami Machichi a,c, Loubna El Mansouri a, Yasmina Imani b, Omar Bourja c, Ouiam Lahlou b, Yahya Zennayi c, François Bourzeix c, Ismaguil Hanadé Houmma a and Rachid Hadria d

a Topography, Agronomic and Veterinary Institute Hassan 2, Rabat, Morocco; b Agronomy, Agronomic and Veterinary Institute Hassan 2, Rabat, Morocco; c Moroccan Foundation for Advanced Science, Innovation and Research (MAScIR), Rabat, Morocco; d Department of Environment, National Institute of Agriculture Research, Morocco

ABSTRACT
The ever-increasing global population presents a looming threat to food production. To meet growing food demands while minimizing negative impacts on water and soil, agricultural practices must be altered. To make informed decisions, decision-makers require timely, accurate, and efficient crop maps. Remote sensing-based crop mapping faces numerous challenges. However, recent years have seen substantial advances in crop mapping through the use of big data, multi-sensor imagery, the democratization of remote sensing data, and the success of deep learning algorithms. This systematic literature review provides an overview of the history and evolution of crop mapping using remote sensing techniques. It also discusses the latest scientific advances in the field of crop mapping, which involve the use of machine and deep learning models. The review protocol involved the analysis of 386 peer-reviewed publications. The results of the analysis show that areas such as crop rotation mapping, double cropping, and early crop mapping require further exploration. The use of LiDAR as a tool for crop mapping also needs more attention, and hierarchical crop mapping is recommended. This review provides a comprehensive framework for future researchers interested in accurate large-scale crop mapping from multi-source image data and machine and deep learning techniques.

ARTICLE HISTORY
Received 18 January 2023; Accepted 16 April 2023

KEYWORDS
Crop mapping; remote sensing; machine learning; deep learning

1. Introduction
Food security worldwide is threatened by the ever-increasing population count. The
amount of food currently produced would not be nearly enough to feed upwards of
10 billion people by 2060 (FAO 2017). To meet global food demand, current production
levels must roughly double (Foley et al. 2011). Even though there is a strong need for
increasing food production, it should not come at the detriment of soil and water.

CONTACT Mouad Alami Machichi moadalami40@gmail.com Agronomic and Veterinary Institute Hassan 2,
Rabat, Morocco

Conventional large-scale agricultural practices, such as monoculture farming, significantly reduce the amount of water and nutrients the soil can retain (Shannon et al. 2015), which leads to the degradation of billions of tonnes of fertile soil each year. Instead, sustainable agriculture should be practiced to ensure that any potential negative effects are kept to a minimum and manageable level (Tiziano, Pimentel, and Paoletti 2011).
Sustainable agriculture requires that decision-makers in both the public and private sectors have access to reliable and up-to-date agricultural data. In Africa, for example, a region of the world most confronted with the challenges of extreme poverty and food insecurity, high-quality data on crops and their growth stages are rare. In this part of the world, production systems are spatially and temporally heterogeneous, which makes it particularly difficult to estimate the area of agricultural land by crop type and therefore to estimate yields (You and Sun 2022). Yet, large-scale agricultural maps at the scale of
a country, for example, are vital for developing and guiding sustainable resource alloca­
tion strategies in the field of food security, irrigation control management, energy, crop
adaptation to climate change, environmental protection, or market policy decision-
making (Murmu and Biswas 2015; Phalke et al. 2020; Panjala, Krishna Gumma, and
Teluguntla 2021; Hoummaidi, Larabi, and Alam 2021; Wang et al. 2022; Singh et al.
2022; Blickensdörfer et al. 2022). In this context, the increasing demand for
crop insurance around the world has strengthened the practical requirements of crop
mapping. To meet this need, crop variety statistics at the plot level have become
necessary (Hudait and Pravin Patel 2022).
According to the FAO, agricultural land refers to the area of arable land and permanent
crops. Thus, based on this definition, crop mapping in a broad sense covers the identifica­
tion of crop varieties of different species (Sherrie et al. 2020; Turkoglu et al. 2021; Meng
et al. 2021; Hudait and Pravin Patel 2022; Blickensdörfer et al. 2022), crop associations
often of the same plant species, e.g., wheat or rice varieties (Zhou et al. 2019; Parra et al.
2022), mapping of a group or single crop of major interest (Kuenzer and Knauer 2013;
Mansouri et al. 2018; Cai et al. 2018; Tian et al. 2020; Sood, Kumar, and Persello 2021;
Hoummaidi, Larabi, and Alam 2021; Wei et al. 2022; Sabir and Kumar 2022), crop pheno­
logical mapping (Gao and Zhang 2021), or binary (mask) cropland mapping (Phalke et al.
2020; Kumar and Jayagopal 2021; Danya et al. 2022; Potapov et al. 2021). Other forms of
crop mapping include mapping of crop losses during growth due to excess water (Dong
et al. 2016) or persistent severe water stress, mapping of fragmented farming systems
(Qiangyi et al. 2020; Feyisa et al. 2020; Qiu et al. 2022; You and Sun 2022), inter-annual
crop rotation mapping (Liu et al. 2021), crop growth-stage mapping, or crop intensity
mapping (Pan et al. 2021). Among the main components of crop mapping, crop associa­
tion detection and mapping is the most complex but often necessary task for spatialized
quantitative crop estimation purposes. In other words, depending on agricultural land
management policies, crop mapping may also include the identification and spatialized
classification of agricultural production systems (e.g. irrigated and non-irrigated land).
The identification of vegetation canopies using traditional techniques, such as survey
missions – although accurate – has many disadvantages. Mapping a single crop parcel
can take a significant amount of time. Access to cropland can be challenging, and
uncooperative farmers and landowners restrict the number of parcels that can be
accessed. Field work remains an inefficient and costly approach to crop mapping
(Mansouri and Loubna 2017).

Since the early 1970s (Suits 1972), the identification of vegetation canopies using remote sensing data has been used as an alternative to traditional field surveys. Over the last couple of decades, research in the field of crop mapping using
remote sensing has continued to grow in importance (Weiss, Jacob, and Duveiller
2020). Supervised Machine Learning (ML) models have shown their aptitude for
identifying crops based on mid- to high-resolution satellite imagery with high
accuracy (Mansouri et al. 2019; Hadria 2018; Zhao et al. 2020; Moussaid, El Fkihi,
and Zennayi 2021).
Agricultural research using remote sensing is abundant, and the number of pub­
lications is increasing at an exponential rate. In order to keep track of the advances in
the research done, the challenges faced, and the direction it is headed in, literature
reviews are a must. Multiple Systematic Literature Reviews (SLR) were published on
agricultural studies (Lei et al. 2019; van Klompenburg et al. 2020; Weiss, Jacob, and
Duveiller 2020; Saleem, Potgieter, and Mahmood Arif 2021). These reviews focused on
general applications of remote sensing in agriculture (Weiss, Jacob, and Duveiller 2020;
Garcia-Berna et al. 2020), yield prediction (van Klompenburg et al. 2020;
Muruganantham et al. 2022), and crop diseases, among others (Hatfield and Pinter
1993; Yang 2020). While some reviews bring up the topic of crop mapping, none go
into detail in providing a complete and comprehensive review of the literature
regarding the identification of vegetation canopies. Hence, we decided to embark on an SLR of crop mapping using supervised ML and Deep Learning (DL). This review is warranted in order to bridge the gap between the various existing reviews.
In this SLR, we intend to extract current trends in crop mapping research, benchmark
models, identify gaps in the current studies, and help orient future research. The rest of
this paper is structured as follows: in Section 2 we present the methodology used in this
SLR, the databases used, research questions, and the search criteria. Section 3 provides
answers from the literature to the different questions asked in the methodology. In
Section 4, we discuss the different aspects of crop mapping as well as the challenges
facing this field of research and present perspectives for future studies.

2. Methodology
2.1. Search questions
In this systematic literature review, we aim to get a clear picture of the current state of remote sensing-based crop mapping using supervised ML and Deep Learning (DL). This SLR
follows the guidelines outlined by Kitchenham (2007). We start by asking relevant ques­
tions, the answers to which would help in describing and analysing crop mapping studies,
extracting current trends, and listing the major challenges:

● (1) What remote-sensing technologies and sensors are most commonly used for crop
mapping in the literature?
● (2) What are the most used crop mapping approaches in the literature?
● (3) What are the most optimal supervised ML algorithms and DL architectures that
are used for crop mapping using remote sensing?

● (4) What features are being used in literature to accurately identify crop classes?
● (5) What are the current challenges that face researchers dealing with crop mapping
using remote sensing?

2.2. Search strategy


In this SLR we relied on seven scientific databases: ScienceDirect, Scopus, Web of Science,
Springer Link, Wiley, Taylor and Francis, and Google Scholar. Each one of these databases
was queried using the same search string:
[crop type OR crop mapping OR crop identification OR crop classification] AND [machine
learning OR deep learning OR data mining] AND remote sensing: Anywhere.

2.3. Exclusion criteria


● Publications that are not written in English.
● In this SLR, we aimed our focus at case studies. Reviews and survey papers were
therefore excluded.
● Publications where unsupervised ML was used were also discarded as they did not fit
within the scope of this study.
● Likewise, crop mapping papers that were limited to fewer than three crop classes
were excluded.
● Publications where the full-text is not available.
● Crop mapping publications with only non-herbaceous crops.
● Duplicate studies.

The number of citations per year was not taken as a selection criterion, so as not to penalize newer publications.

2.4. Data extraction


To answer the different research questions, each publication had to be analysed, and a set
of attributes were extracted from each paper.

● Q1. We noted the type of platform that was used: satellite, manned aircraft, or Unmanned Aerial Vehicle (UAV), and the technology of the sensor used (multispectral, hyperspectral, radar, LiDAR). In the case of satellite imagery, we also noted the
name of the satellite as well as its characteristics to identify what makes it the sensor
of choice for that study.
● Q2. Three attributes were extracted: approach, time-series, and hierarchy. The term “approach” in this paper refers to the unit used during the crop mapping task: in other words, did the study follow a pixel-based, an object-based, or a patch-based approach? The “hierarchy” attribute indicates whether a tree-like classification scheme was used; such a scheme first identifies broad classes (such as crop/non-crop) and then classifies lower levels (detailed crop species). “Time-series” refers to the use of multi-temporal remote sensing imagery; this attribute also informs on the length of the time-series used (if available).

● Q3. The models used in each study were recorded, and the baseline algorithms were noted. In publications where multiple models were used, the evaluation metrics, mainly the overall accuracy, Cohen’s Kappa coefficient, and the F1-score, were extracted (a minimal computation sketch for these metrics follows this list).
● Q4. The features used in each study, also called predictive or independent variables,
were extracted and classified into several classes. Some studies evaluated the effect
that each feature had. For those studies, we also noted the most and least useful
feature as well as the maximal overall accuracy and range.
● Q5. We took note of the challenges faced by the authors in each case study, and their
potential solutions were reported.
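As a point of reference for Q3, all three metrics can be derived from the confusion matrix of a classified map against reference labels. The snippet below is purely illustrative and uses hypothetical label arrays; it is not drawn from any of the reviewed studies.

```python
# Minimal sketch: evaluation metrics noted under Q3 (overall accuracy,
# Cohen's Kappa, F1-score), computed with scikit-learn on hypothetical labels.
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

y_true = [0, 0, 1, 1, 2, 2, 2, 1]  # reference crop classes (ground truth)
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]  # classes predicted by a model

oa = accuracy_score(y_true, y_pred)             # overall accuracy
kappa = cohen_kappa_score(y_true, y_pred)       # agreement beyond chance
f1 = f1_score(y_true, y_pred, average="macro")  # class-balanced F1-score

print(f"OA={oa:.2f}, Kappa={kappa:.2f}, macro-F1={f1:.2f}")
```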

3. Results
The first crop mapping publication we could find dates back to 1969, when an airborne multispectral sensor was used to map agricultural land in Indiana, US (Fu, Landgrebe, and Phillips 1969). The results acquired were promising and paved the way for subsequent research. Using our search strategy, we were able to find 386 publications, 315 of which were in the form of journal articles. The rest (71/386) were found in conference proceedings. Before 2015, crop mapping research was an understudied field, with very few publications before the 2000s. The stagnation of the publication count between 2003 and 2014 can be attributed to the lack of development in the modelling sphere. In 2015, there was a sudden increase in crop mapping research, and the number of publications increased exponentially (Figure 1). This increase in popularity can be explained by the newfound success of DL models, as well as by pressing issues such as global population increase, food shortages, and climate change.
Study areas selected for crop mapping research are disproportionately distributed globally (Figure 2). The majority of crop mapping experiments were conducted in China and the US, with 90 and 70 publications, respectively. African, South American, and South-East Asian countries are lacking in the quantity of research. Developing countries should be studied more to improve the agricultural sector and contribute to the betterment of those national economies.

Figure 1. Evolution of crop mapping publications.

Figure 2. Distribution of crop mapping publications.

3.1. Technologies and platforms


Our analysis of the literature has shown that satellites are by far the most commonly used platform, as they were used in 308/386 case studies, whereas airborne imagery was used in only 70/386. Additionally, we were able to find 7/386 publications where multi-
platform imagery was used conjointly to produce high-resolution crop maps (Kasapoglu
and Okan 2007; Bhosle and Musande 2019; Zafari, Zurita-Milla, and Izquierdo-Verdiguier
2019; Khosravi and Alavipanah 2019; Huapeng et al. 2019; Shakya, Biswas, and Pal 2021;
Prins and Van Niekerk 2021). The wide popularity of satellite imagery over other platforms
is due to the availability and open-access of earth observation satellites, chiefly Sentinel
and Landsat series, used in 203 and 73 studies, respectively. The multispectral remote
sensing imagery provided by these sensors has been proven to be adequate for crop
mapping. In fact, 55% (213/386) of the publications analysed relied solely on optical data.
The second most used technology for crop mapping is Synthetic Aperture Radar (SAR)
(67/386). The first SAR crop mapping study that we could find using our search strategy
dates back to 1996 (Chen et al. 1996). Agricultural land in the Netherlands was mapped
using airborne SAR imagery with 89% accuracy. The use of hyperspectral imagery remains
less frequent. They were used in 36 case studies, with additional 4 publications where
hyperspectral and multispectral imagery were conjointly used for crop mapping. Only one
study used terrestrial Lidar scanner to map 6000 areas in Indiana, US (Reji, Rao
Nidamanuri, and Ramiya 2021). Unfortunately, this platform is subject to similar issues
to traditional field mapping (difficulty of access, high cost, small coverage area). Lidar and
lidar-derived products as a tool for crop mapping are particularly understudied, only
addressed in 3/386 publications (Hütt, Waldhoff, and Bareth 2020; Reji, Rao Nidamanuri,
and Ramiya 2021; Prins and Van Niekerk 2021).

For crop mapping purposes, most researchers used a single sensor (242/386).
Preprocessing of multi-sensor imagery can be challenging, especially when sensors
have different spatial, spectral and radiometric resolutions. These difficulties led to the
development of the Harmonized Landsat-Sentinel-2 (HLS) dataset (Claverie et al. 2018).
Nonetheless, being limited to one technology can hinder the reliability of the produced
crop maps. Early multi-sensor studies (Bruzzone and Prieto 1999; Wenbo et al. 2004)
highlighted the complementary nature of multispectral and SAR imagery. Even though
these studies did not evaluate the synergistic effects of the multi-modal input dataset,
they were able to achieve highly accurate crop maps. Since then, more options for multi-
modal remote sensing imagery became available. As of now, the Sentinel-1/Sentinel-2
couple is the most used multi-sensor combination (53/386).

3.2. Ground truth data


The quality and quantity of ground truth data have a direct effect on the accuracy and
reliability of the crop maps produced through supervised machine learning. Acquiring
a ground truth dataset is a time-consuming and costly process. Our analysis of the
literature has shown that many studies were done on the same ground truth datasets.
Not only does this reduce the cost for developing new crop identification schemes and
approaches but it also allows for an objective quantitative evaluation of the resulting
map. The most used dataset is the Cropland Data Layer (CDL), used as ground truth in 40 studies. This georeferenced raster is updated annually and covers the entirety of the US.
Other popular datasets include Indian Pines (11/386), Salinas (8/386), and WHU-Hi data­
sets (10/386). The use of pre-made datasets remains infrequent as they are not available at
a global scale. Most researchers conduct field surveys to obtain up-to-date reference data
in their respective study areas.
Alternative methods for gathering ground truth data were researched, and multiple
approaches were proposed. Crowdsourcing of reference data is particularly promising
(Sherrie et al. 2020), although the final dataset was found to be noisy due to location
inaccuracies. Other studies used high-resolution images to substitute or complement field
surveys, but the flights are limited in the area that can be covered and require a priori field
knowledge (Khosravi and Alavipanah 2019; Hegarty-Craver et al. 2020). Another interest­
ing method for acquiring ground truth data was proposed by Yan and Ryu (2019), who classified Google Street View images with DL and used the classified images as
reference data for crop mapping. While the produced map had a high accuracy (97%), this
approach cannot be adopted at a wide scale because street view images are not available
in many countries, and their update frequency remains uncertain in the countries where
they are present.

3.3. Crop mapping approaches


The use of single date remote sensing imagery remains unsatisfactory for mapping
seasonal crops. Our analysis of the literature has shown that most publications (301/
386) use time-series imagery. Only 85 publications used mono-date imagery, 78 of which used commercial remote sensing data, for which building a temporal dataset would be costly and inefficient.

Mono-date crop mapping produced moderately good results, but it was limited in its capability to discriminate seasonal crops. The availability of open-access remote sensing imagery, such as MODIS, Landsat, and Sentinel, allows for building rich temporal datasets and contributes to most of the new findings in research, such as identifying crops that have similar spectral responses and even inter-species varieties.
Most of the classification schemes used Pixel-Based Image Analysis (PBIA) (251/386).
The first use of object-based crop mapping was in 1975 (Gupta and Wintz 1975), when multispectral airborne imagery was used to produce a crop map. This approach proved to be 1.5% more accurate than the pixel-based approach. Motivated by the findings of this research, a multitude of case studies were conducted using Object-Based Image Analysis (OBIA) (57/386). Since then, studies have confirmed the initial finding that OBIA produces far better results than PBIA, provided that the segmentation is of satisfactory quality (Shao et al. 2010; Zhang et al. 2016; Basukala et al. 2017;
Niazmardi et al. 2018; Busauier et al. 2020; Zhou et al. 2022). In one study, Hao et al. (2015)
found that the use of OBIA did not improve the accuracy, although it reduced the salt and
pepper effect caused by misclassified pixels.
Another unit that has recently gained popularity is the patch-based approach (63/386). It was first used in 2006 (Barnes and Burki 2006), where pixel blocks (as they were then called) of SAR imagery were used to map agricultural land in Georgia, US. This unit of processing falls somewhere between the pixel and the segment and can capture contextual information about a pixel without prior delineation. The patch-based approach is mostly associated with Convolutional Neural Networks (CNN).
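To make the notion of a patch concrete, the sketch below cuts fixed-size windows around labelled pixels out of a multi-band image array; the array shape, window size, and sample coordinates are illustrative assumptions rather than values taken from any reviewed study.

```python
import numpy as np

def extract_patches(image, points, size=9):
    """Cut (size x size x bands) patches centred on labelled pixels.

    image  : array of shape (rows, cols, bands), e.g. a stacked time-series
    points : list of (row, col) coordinates of labelled pixels
    size   : odd patch width/height in pixels (assumed value)
    """
    half = size // 2
    patches = []
    for r, c in points:
        # Skip pixels too close to the border so every patch is full-sized
        if half <= r < image.shape[0] - half and half <= c < image.shape[1] - half:
            patches.append(image[r - half:r + half + 1, c - half:c + half + 1, :])
    return np.stack(patches)

# Hypothetical usage: a 500 x 500 scene with 10 bands and three labelled pixels
scene = np.random.rand(500, 500, 10)
samples = extract_patches(scene, [(100, 120), (250, 300), (400, 50)])
print(samples.shape)  # (3, 9, 9, 10), ready to feed a CNN
```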
An interesting approach that is not well researched in the literature is hierarchical crop mapping, used in only 22/386 publications. It is a classification scheme that progressively
maps cropland areas into more thematically detailed crops. This approach was first used
in 2008 (Wardlow and Egbert 2008) to map crops in Kansas, US, using MODIS time-series
at 4 distinct levels. Hierarchical classifications can help when dealing with imbalanced
datasets and increase classification accuracy (Turkoglu et al. 2021).
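A minimal two-level sketch of such a hierarchy, assuming hypothetical feature and label arrays, chains a crop/non-crop classifier with a crop-type classifier trained only on the crop samples:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: X holds per-pixel features; in y_type, 0 means
# "non-crop" and 1..3 are detailed crop classes.
X = np.random.rand(1000, 12)
y_type = np.random.randint(0, 4, size=1000)

# Level 1: broad crop / non-crop classification
level1 = RandomForestClassifier(n_estimators=100).fit(X, (y_type > 0).astype(int))

# Level 2: detailed crop types, trained only on crop pixels
crop_idx = y_type > 0
level2 = RandomForestClassifier(n_estimators=100).fit(X[crop_idx], y_type[crop_idx])

def predict_hierarchical(features):
    """Return 0 for non-crop pixels, otherwise the detailed crop class."""
    is_crop = level1.predict(features).astype(bool)
    out = np.zeros(len(features), dtype=int)
    if is_crop.any():
        out[is_crop] = level2.predict(features[is_crop])
    return out

print(predict_hierarchical(np.random.rand(5, 12)))
```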

3.4. Crop mapping models


Our analysis of the literature has shown that the algorithms and models used for crop mapping are highly diversified; they can be grouped into three categories: parametric ML classifiers, non-parametric ML classifiers, and DL classifiers. (I) Parametric machine learning
classifiers operate by following certain assumptions regarding the statistical distribution
of the data. (II) Non-parametric ML classifiers, on the other hand, do not make any
assumptions about the data distribution. (III) DL is a relatively new subfield of ML. It is
a mathematical framework that allows for learning representations from data. According
to Chollet (2018), the ‘deep’ in ‘Deep Learning’ refers to the successive layers of representations, also called the depth of the model.
We found that 196/386 publications used a single classifier, whereas 190/386 studies
tested several crop mapping models. With the high variability of ML and DL classifiers, it is
difficult to assert the efficiency of one model over the others. In Table 1, we showcase the
comparisons done between crop mapping models using only recent studies (from 2021
to 2023) that have tested and compared different models. Various models have been used
for crop mapping, the most successful of which are Random Forest (RF), Support Vector

Table 1. Performance comparison of different crop mapping models (most performant models are
written in bold characters, whereas the least performant are written in italic).
Ref | Models | Max OA (%) | Range | Classes | Sensor
Yan et al. (2021) RF, SVM 88 4 3 Multispectral
Aneece and Thenkabail RF, SVM, NB, WekaXMeans 83 13 5 Hyperspectral
(2021)
Reji, Rao Nidamanuri, and CropPointNet, PointNet, DGCNN 81.5 26.3 6 Lidar
Ramiya (2021)
Hamidi, Safari, and LSVM, GSVM, RF, AE 94.1 6 7 Multispectral
Homayouni (2021)
Saini and Kumar Ghosh SVM, Adaboost M1, SGB, RF, XGB 86.9 4 11 Multispectral
(2021)
Ghosh et al. (2021) UNet, Bi-LSTM Attn, CALD, ConvLSTM, 4D-CNN 72.2 5.6 14 Multispectral
Yuan and Lin (2021) SVM, RF, CNN-1D, LSTM, Bi-LSTM, SITS-BERT, 94.2; 8; 8.6 14; 11 Multispectral
pretrained SITS-BERT 98.8
Weikmann, Paris, and Inc. Time, MSResNet, TempCNN, Transformer, 85.39 4.2 16 Multispectral
Bruzzone (2021) StarRNN, LSTM, LSTM Weig.
Metzger et al. (2022) ODE-LSTM, ODE-GRU, ODE-GRU (reg.) 89.9 1.8 19 Multispectral
Prins and Van Niekerk d-NN, DT, k-NN, LR, NB, NN, RF, SVM-l, SVM RBF, 85.2 12.5 5 Multispectral,
(2021) XGB Lidar
Qadeer et al. (2021) RF, 1D-CNN, 2D-CNN, 3D-CNN, 3D-1D CNN 85.9 3.7 14 Multispectral
Zhang et al. (2021) BP-NN, CART, K-NN, MLR, NB, SVM 80.7 5.6 8 Multispectral
Liu et al. (2021) RF, LightGBM, WCRN, DBMA, HResNet 60.9; 5.2; 10; 12 SAR
81.2 4.2
Reuß et al. (2021) LSTM, RF 87 6 8 SAR
Huapeng et al. (2021) SS-OCNN, PCNN, OCNN, MOCNN 87.8 8.6 10 SAR
Adrian, Sagan, and SegNet, U-Net, 3D U-Net 94.1 43.4 13 SAR,
Maimaitijiang (2021) Multispectral
Turkoglu et al. (2021) RF, LSTM, TCN, Transformer, 2D-CNN (U-Net), 88 9.2 48 Multispectral
U-Net+convLSTM, Bi-convGRU, ms-
convSTAR
Wang et al. (2021) RF, CNN, CBAM-CNN, Geo-CRAM-CNN 97.8 2.8 4 Multispectral
Siesto, Fernández-Sellers, CNN, Optimized CNN 96.2 1.1 7 Multispectral
and Lozano-Tello
(2021)
Zhao et al. (2021) 1D CNN, LSTM, GRU, LSTM-CNN, GRU-CNN 86.2 5.1 7 Multispectral
Yan et al. (2022) RF, SVM, MsResnet, TCN, LSTM, Bi-LSTM, 96.5 7.1 6 Multispectral
Informer
Sykas et al. (2022) U-Net, ConvLSTM, ConvSTAR 94.7 1.9 11 Multispectral
Seydi, Amani, and RF, XGB, R-CNN, 2D-CNN, 3D-CNN, CBAM, Dual 98.5 24.8 10 Multispectral
Ghorbanian (2022) Attention CNN
Jiang et al. (2022) SVM, DT, RF, DNN 88 17 10 Multispectral
Espinosa-Herrera et al. BT, SVM 94.8 3 3 Multispectral
(2022)
Tang et al. (2022) Deeplabv3+, UNet, RF, Skcnn-Tabnet 91 16 5 Multispectral
Xie et al. (2022) Y4O, Y4R, S4R, G4U, GMD, PCGMD 91.8 3.1 9 SAR
Sun, Geng, and Wang SVM, LGBM, LGBM-SLIC, XGB-SLIC, RF-SLIC, RV- 97.4 16.6 15 SAR
(2022) CNN, CV-CNN, Superpixel entropy
discrimination
Bhosle and Musande Optimized CNN, Convolutional AE, Deep NN 97 7 16 Hyperspectral
(2020)
Zhang et al. (2022) CNN, DHCNet, SSRN, CNN CRF, SPRN, FCN 98.9 22.9 22 Hyperspectral
Jia et al. (2022) CART, RF, K-NN 86.1 6.9 12 Hyperspectral
Hamza et al. (2022) SVM, FNEA-OO, SVRFMC, CNN, CNN-CRF, 97.2 19.9 Hyperspectral
SSODTL-CC
Yadav et al. (2022) PCAL, SVM, CL-JSRC, EDP-AL 97.1 6.4 16 Hyperspectral
Haibin et al. (2022) RBF-SVM, EMP-SVM, CNN, ResNet, MLP-Mixer, 99 13 16 Hyperspectral
RepMLP, DFFN, DMLP, DMLPFFN
Tian, Qikai, and Wei SVM, RF 94.2 8.2 9 Hyperspectral
(2022)
(Continued)

Table 1. (Continued).
Ref | Models | Max OA (%) | Range | Classes | Sensor
Wang et al. (2022) RBF+SVM, CNN, HybridSN, PyResNet, SSRN, 98.8 20.5 16 Hyperspectral
SSFTT, A2S2KResNet, ViT
Wang et al. (2022) SVM, RF, K-NN, Stacking (SVM, RF, K-NN), 77.1 3.9 Multispectral
Conv1D, LSTM
Shan et al. (2022) RF, NB, SVM, NN, K-NN, XGB, 1D-CNN 74.7 28.6 4 Multispectral
Erdanaev, Kappas, and SVM, RF 87 5 8 Multispectral
Wyss (2022b)
Erdanaev, Kappas, and SVM, RF 86.8 0.8 8 Multispectral
Wyss (2022b)
Chen et al. (2022) PSVM, PRF, PRF-T, PRFS, PRFS-T, PRFSC, SRF, 88.6 23.3 7 SAR
SRFC
Yao et al. (2022) RF, DNN, RF+DNN 98 11 4 SAR,
multispectral
Machichi et al. (2022) SVM, RF, CNN, LSTM, CerealNet (CNN+LSTM) 94 10 5 Multispectral
Miao et al. (2022) KNN, NB, DT, SVM 97.8 2.7 4 Multispectral
Ioannidou et al. (2022) GA, SVM 94 1.7 10 Multispectral,
SAR
Guo et al. (2022) SVM, RF, KNN, ANN, 1D-CNN, SAE, C-AENN 97.9 10.3 4 SAR
Chaudhari et al. (2022) BODLD-CTC, DNN, LSTM, SGD, NB, SVC 95.5 4.7 7 Multispectral
Teloglu and Aptoula RF, CNN-LSTM, MNN-LSTM, LSTM, StarRNN, 79 13 9 Multispectral
(2022) Transformer
Ghassemi et al. (2022) RF, SVM 77.8 1 21 Multispectral
Zhang et al. (2022) SVM, RF, Xception, U-Net, CNN 92.5 4.7 5 Multispectral
Singh et al. (2022) U-Net, RF 97.8 1.6 6 Multispectral
Tingyu, Wan, and Wang RF, SVM, CSNet 90.6 13.3 4 Multispectral
(2022)
Erdanaev, Kappas, and SVM, RF, ML 91.3 5.8 8 Multispectral
Wyss (2022a)
Weilandt et al. (2023) PSE-TAE, RF 91 19 8 Multispectral,
SAR
Xia et al. (2023) RF, TempCNN, LSTM, Transformer 88 1.5 9 Multispectral,
SAR

Machine (SVM), eXtreme Gradient Boosting (XGB), CNN, and LSTM. These models have
achieved the highest overall accuracies in multiple studies. However, the range of overall
accuracy was found to be quite large, with values ranging from 0.8 to 43.4 percentage points across different
experiments. This suggests that the choice of model and the specific application notably
impact the accuracy of the crop mapping process.
To better illustrate the performance comparison of crop mapping models, we
calculated the success rate of the most frequently used models (Figure 3). The
success rate is defined as the ratio of times the model was considered the best to
the total number of times the model was used. Among the classical machine
learning models, SVM had the highest success rate, with a value of 0.29, despite
being used 28 times. On the other hand, RF had the lowest success rate of 0.13,
and has been used the most at 32 times. While this finding might suggest that RF
may not be the best option for crop mapping using remote sensing, we were able
to find two studies where it outperformed more complex deep learning models
(Liu et al. 2021; Shan et al. 2022). This could potentially be due to the high data
requirements and long training times of deep learning models. Nonetheless, CNN+LSTM achieved the highest success rate (0.67), with six uses. The combination of CNN+LSTM far outshines each of the individual deep learning architectures (LSTM or CNN alone). Additionally, optimized deep learning models have been found to generally outperform classical machine learning classifiers (Ghosh et al. 2021).

Figure 3. Comparison of model performance: number of uses vs success rate (ratio of times considered best to times used).
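The success rate plotted in Figure 3 is a simple ratio and can be reproduced from a table of per-study winners. The records below are invented purely to illustrate the computation.

```python
from collections import Counter

# Hypothetical records: the models compared in each study and the best performer.
studies = [
    {"models": ["RF", "SVM"], "best": "SVM"},
    {"models": ["RF", "CNN", "LSTM", "CNN+LSTM"], "best": "CNN+LSTM"},
    {"models": ["SVM", "RF", "XGB"], "best": "RF"},
]

uses, wins = Counter(), Counter()
for study in studies:
    uses.update(study["models"])
    wins[study["best"]] += 1

# Success rate = times considered best / times used
for model in uses:
    print(f"{model}: used {uses[model]} times, "
          f"success rate {wins[model] / uses[model]:.2f}")
```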

3.5. Features
Spectral bands are the most frequently used type of feature for crop mapping (178
occurrences). This category includes bands in the visible and infrared range, such
as blue, green, red, near-infrared, and mid-infrared bands. Spectral indices, which include
vegetation indices like Normalized Difference Vegetation Index (NDVI) and Enhanced
Vegetation Index (EVI), were the second most common feature group, appearing in 139
publications. Polarimetric features, derived from SAR sensor data and providing informa­
tion on the structure of the vegetation canopies, were the third most frequently used
feature set with 99 occurrences. Spatial texture features, which capture patterns in the
spatial distribution of pixel values within an image and can be derived using techniques
like grey-level co-occurrence matrices (GLCM) or spatial filtering, were used 34 times.
Other features, including climate data, topographic indices, phenological variability, and
phenological features, were used less frequently, with a total of 39 occurrences.
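For reference, the two vegetation indices mentioned most often in this group, NDVI and EVI, are simple band combinations. The snippet below computes them from hypothetical surface-reflectance arrays using the standard formulations.

```python
import numpy as np

# Hypothetical surface reflectance values in [0, 1]
blue = np.array([0.05, 0.07])
red = np.array([0.10, 0.30])
nir = np.array([0.45, 0.35])

# Normalized Difference Vegetation Index
ndvi = (nir - red) / (nir + red)

# Enhanced Vegetation Index (standard MODIS-style coefficients)
evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

print(ndvi, evi)
```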
To identify the most useful features for crop mapping, we analysed the results
of studies where multiple features were used. The studies listed in Table 2 primar­
ily used three types of features for crop mapping: spectral bands (SB), spectral
indices (SI), and polarimetric features (SAR). The most successful schemes for crop
mapping, as indicated by the studies, are those that combine multiple types of

Table 2. Comparison of remote sensing features used for crop mapping (most useful features are
written in bold characters whereas the least useful are written in italic).
Ref | Features | Max OA (%) | Range | Approach | Model
Camps-Valls et al. (2003) SB: 128, 6, 3, 2 96.4 13.7 PBIA SVM
Martin-Guerreo et al. (2003) SB: 6, 3, 2 95.7 13 PBIA MLP
Peña et al. (2014) SI, T, SI+T 88 26 OBIA MLP
Peña et al. (2014) SI, T, SI+T 88 22 OBIA SVM
Peña et al. (2014) SI, T, SI+T 86 23 OBIA LR
Peña et al. (2014) SI, T, SI+T 79 32 OBIA C4.5
Sonobe et al. (2017b) SB, KT (Kauth-Thomas transform), SI, 94.5 3 PBIA CART
SB+SI, SB+SI+KT
Reshma, Veni, and Elsa George SB, SB+T, SB+SI+T 98.1 7.2 PBIA SVM
(2017)
Sonobe et al. (2017a) SAR, SB, SB+SAR 96.8 16.2 PBIA KELM
Wei et al. (2018) SB, SI 92.9 7.1 PBIA SVM
Kwak and Park (2019) SB, SB+T 98.7 1.4 PBIA SVM
Pelletier, Webb, and Petitjean SB, SI, SB+SI 90.9 2.7 PBIA RF
(2019)
Pelletier, Webb, and Petitjean SB, SI, SB+SI 92.4 3.8 PBIA RNN
(2019)
Pelletier, Webb, and Petitjean SB, SI, SB+SI 93 2.8 PBIA TempCNN
(2019)
Zafari, Zurita-Milla, and Izquierdo- SB, SB+SI+T 82 0.7 PBIA SVM-RFK
Verdiguier (2019)
Zafari, Zurita-Milla, and Izquierdo- SB, SB+SI+T 81.1 0.2 PBIA RF
Verdiguier (2019)
Zafari, Zurita-Milla, and Izquierdo- SB, SB+SI+T 82.1 4.1 PBIA SVM-RBF
Verdiguier (2019)
Akbari et al. (2020) SB, SI, T, SB+SI+T 94 1.9 PBIA RF
James, Vardanega, and Robson SAR, SB, SAR+SB 90.6 4.6 Patches CNN
(2019)
Gu, He, and Yang (2019) SAR, SB, SAR+SB 91.6 3.6 Patches VGG
Sun et al. (2019) T, SB+T, T+SB+SI 91 5 OBIA RF
Sun et al. (2019) T, SB+T, T+SB+SI 85 2 OBIA SVM
Sun et al. (2019) T, SB+T, T+SB+SI 85 5 OBIA ANN
Zhang et al. (2020) SB, SB+Geometry, SB+geometry+T 80.8 5.4 OBIA RF
Zhang et al. (2020) SB, SB+Geometry, SB+geometry+T 78.9 4.2 OBIA SVM
Kyere et al. (2020) SB, SB+Phenology, SB+Topography 75 2 PBIA RF
Alejandro et al. (2020) VV, VH, VV+VH 86.7 5.2 PBIA RF
Mario, Lopez-Sanchez, and Bargiel HH, VV, HH+VV, HH+VV+Corr, HH 89 13.2 PBIA RF
(2020) +VV+Corr+Phase
Yueran et al. (2022) SI, SAR, SI+SAR 91.6 0.9 PBIA RF
Zhang et al. (2022) SI, SAR, SI+SAR 90.5 17.1 PBIA RF
Tingyu, Wan, and Wang (2022) SB, SB+SI 90.6 2 Patches CSNet

features. Specifically, combining SB and SI, combining SB and SAR, and combining
SB, SI, and texture (T) tend to result in higher overall accuracy (Figure 4), with
overall accuracies ranging from 86% to 98.7% and an average of 92.6%. In contrast,
using only one type of feature, such as SB, SI, or SAR bands, tends to result in
lower overall accuracy compared to using a combination of features, with overall
accuracies ranging from 79% to 95.7% and an average of 87.2%. SB in particular
had the lowest success rate when used alone, with a value of 0.11 after 18 uses.
The range of overall accuracy between the most and least useful features, as
indicated in the ‘Range’ column, ranged from 0.2% to 32%, with an average of
8.8%. This suggests that the choice of features can substantially impact the overall
accuracy of crop mapping.

Figure 4. Comparison of feature performance: number of uses vs success rate (ratio of times
considered best to times used).

4. Discussion
The ongoing transitions in the agricultural sector worldwide, particularly towards precision agriculture, require accurate information on the varieties and areas of crops that each territory can support. Precise crop maps at the scale of a country and/or region constitute the basic tool for decision-making in terms of evaluating and monitoring crops and orienting agricultural policies. To achieve this, high-resolution image classification techniques using ML and DL are now widely used to cover several thematic aspects of crop mapping with the precision that practical requirements demand. In the following subsections, we analyze and discuss the thematic
and technological components of crop mapping studies involving the use of big data
images and advanced optimized DL algorithms.

4.1. Different forms of crop mapping using remote sensing


Based on the FAO definition of agricultural land, broad-based crop mapping can cover
numerous aspects depending on scientific and economic interest and the need for reliable
information for the management of agricultural land. Therefore, based on the analysis of
current literature on the application of remote sensing techniques and machine learning
in crop mapping, we document and analyse three forms of crop mapping.

4.1.1. Crop type mapping


The identification and spatialization of crop types is a subfield of crop mapping that has
been the subject of numerous empirical studies (Dipankar, Kumar, and Rao 2020; Metzger
et al. 2022; Luo et al. 2021; Rao et al. 2021; Singh et al. 2022; Diem et al. 2022; Dimov 2022;

Jiang et al. 2022; Asam et al. 2022). It is also the first component of the application of
image classification techniques in agriculture. In this type of mapping, the focus is
primarily on the thematic aspect, that is, a spatially and qualitatively inventoried list of
crop types in a region, country, or on the global scale. Therefore, the first criterion of
reliability for this type of mapping would be to what extent it spatially and qualitatively
reproduces crop types. This includes, to some extent, the identification and classification
of intercrops (Parra et al. 2022) and the mapping of perennial crop types (Rikkerink,
Oraguzie, and Gardiner 2007; Tenreiro 2020; James, Vardanega, and Robson 2019;
Chabalala, Adam, and Adem Ali 2022).

4.1.2. Crop phenological mapping


In addition to the identification of crop varieties, crop phenological mapping includes the
identification of crop growth stages. This type of mapping aims to provide early informa­
tion on the growth status of crops. To this end, the parameters and attributes of the land
surface phenology of agricultural land are mapped from start-to-finish over the crop
growth period (Dong et al. 2016; Htitiou et al. 2021; Salinero-Delgado et al. 2021; Mishra
et al. 2021). These parameters include information on the start of the season (SOS), crop
phenological profiles, and the end of the season (EOS). These elements are also added to
the dynamic mapping of seasonal crop performances (Thieme et al. 2020), the mapping of
combined attributes during the growth period (Mengyao et al. 2022), or the discrimina­
tion of a crop type by phenological stages (da Silva Junior et al. 2020). In this regard,
significant scientific advances have been recorded in the past decade in terms of intra-
seasonal phenological mapping. Maps of start-of-season crops can be generated independently of within-year samples with an overall accuracy of 91% (You and Dong 2020); replanting zones can be mapped 15 days after sowing (Mishra et al. 2021); and maps of areas suitable for large-scale harvest can be generated with multi-sensor optical remote sensing data alone with an OA of 94.35% (Zheng et al. 2022). Likewise, by adapting an approach
based on Sentinel-1 SAR time-series, Mandal et al. (2018) proposed a unified framework
particularly suited to the mapping of late and early transplanted rice cultivars at the scale
of three districts in the state of West Bengal with an overall accuracy greater than 85%. In
general, based on a systematic review analysis, Gao and Zhang (2021) distinguished two
types of approaches that are commonly used for crop phenological mapping. The first
category relies on the time-series profiles of vegetation indices, and the second is based
on the use of historical and current data on crop growth stages to predict the short-term
dynamics of crop growth. Recently, Rußwurm et al. (2023) successfully developed an End-
to-end Learned Early Classification of Time-Series (ELECTS) scheme. The decision-making
process of the classification pipeline was found to be tightly related to the phenological
events of crops. This research suggests that active incorporation of phenological data can
help guide early classification approaches.

4.1.3. Mapping of crop systems


Nowadays, farmers and institutional decision-makers are encouraged to integrate green­
house gas emission reduction strategies into their land resource exploitation system. To
achieve this, mapping crop systems and crop sequences is a must. Research has shown
that this subfield of agricultural land mapping is the least explored (Xie et al. 2019;
Blickensdörfer et al. 2022; Zitian et al. 2022). On a global scale, a growing number of

global and regional products on crop systems have emerged, including the SPAM team’s
regional and global crop maps (Qiangyi et al. 2020), the ESA CCI 2013, MODIS 2013 and
GlobCover 2009 crop layers; the ESA’s 2020 and 2021 global Land Use and Land Cover (LULC) maps at 10 m resolution; the Esri Land Cover product; and the GFSAD30 and GlobeLand30 products. However, it
has been noted that the quality of data on global crop systems lacks spatial coherence
from one country to another and is characterized by huge discrepancies between pro­
ducts (Samasse et al. 2018; Venter et al. 2022; You and Sun 2022). On the scale of the
Sahel, by comparing eight land cover databases to reference data, Samasse et al. (2018) revealed that the majority of LULC products overestimate cultivated land by 170% and that none of them reach the targeted precision threshold of 75%. Therefore, in view of the
enormous inaccuracies of coverage products at the global or regional scale, new
approaches to mapping crop systems at the local or national scale have emerged.
These approaches rely on advanced image classification approaches that involve the
use of high-resolution multi-sensor time-series and deep learning algorithms. Thus, the
mapping of crop systems includes the mapping of crop rotations using optical and SAR
time-series (Liu et al. 2021), the mapping of double cropping (Guo et al. 2022); the
mapping of crop intensity (Pan et al. 2021; Guo et al. 2022); and the mapping of irrigated
agricultural land (Zitian et al. 2022).

4.2. Analysis of crop map validation approaches


In this study, we document several methods for validating maps of crops and crop
systems derived from remote-sensing techniques and artificial intelligence algorithms.
Validation using ground truth samples collected during field surveys (using mobile
applications or GPS) is the most popular method (Xiong et al. 2017; Teluguntla et al.
2018; Mandal et al. 2018; Paludo et al. 2020; You and Dong 2020; Htitiou et al. 2021;
Venturieri et al. 2022; Lee et al. 2022). However, the CDL and existing LULC maps are
widely used for selecting and labelling training samples for ML and DL models in image
classification for crop mapping (Aneece and Thenkabail 2018; Huapeng et al. 2021; Zhang
et al. 2022; Jiang et al. 2022; Asam et al. 2022; Haolu et al. 2021). Cai et al. (2018) developed
a corn and soybean classification system during the growing season using the USDA’s
Common Land Units (CLU) to aggregate the multi-temporal spectral information of 1322
Landsat scenes using a deep learning model (DNN). The map resulting from the approach
has a maximum overall accuracy of 96% with the CLU layer. This high accuracy shows the
usefulness of integrating previously existing data for labelling the training pixels of
models or validating the results of crop classification. Another study (Asam et al. 2022) demonstrated the importance of using LPIS data for training and validating the classification results of 17 crop varieties at the scale of Germany using the RF algorithm as a classifier.
The third validation approach found in the literature is comparing local scale crop maps
with a national or global crop map. To this end, in some approaches, the accuracy of very
high-resolution crop maps is often compared to open access global datasets such as
GFSAD (Phalke et al. 2017) or the Copernicus LULC map (Buchhorn et al. 2020). For
example, Danya et al. (2022) evaluated the accuracy of the 10 m binary cropland map of
the Kullu, Mandi, and Shimla districts in India using GFSAD data. The fourth approach for
labelling and validating results involves using photo-interpretation of high-resolution
satellite images on platforms, such as Google Earth (Google 2005) and Collect-Earth-

Online (Bey et al. 2016). This method is commonly used to delimit and label reference data
for training models and validating crop mapping results (James, Vardanega, and Robson
2019; Guo et al. 2022; Htitiou et al. 2021). In this regard, Phalke et al. (2020) used Very
High-Resolution Imagery (VHRI) of the US-NGA for labelling training samples and validat­
ing the results of the cultivated land map of 64 countries. In addition to these four
validation approaches, new approaches for self-validating crop maps using only multi-
sensor remote sensing data have been developed. For example, Zhenong et al. (2019)
used the internal Google Earth Engine (GEE) labelling tool to generate thousands of crop/
non-crop labels for binary corn crop mapping. There is a clear trend towards automating
crop mapping frameworks, which is already characterized by the automatic generation of
training samples in many empirical studies (Yang et al. 2021).

4.3. Big data multi-sensor images in crop mapping


Massive multi-sensor image data is essential for accurate and timely crop mapping. It is
the best alternative for improving observation frequency through the reconstruction of
spatially and temporally continuous time-series. An analysis of the latest scientific and
technological advances in this direction is provided in the following sections.

4.3.1. Contribution of cloud computing platforms


Over the past decade, the requirements of big technologies in terms of massive data have
led to a significant and expanding growth of open access data hosting and processing
platforms. In the field of crop mapping in particular, the use of cloud computing platforms
has given a new direction to crop mapping. With regard to the scale of analysis, the
current trend of developing fine-resolution and national/global-scale crop mapping
frameworks is made possible thanks to the advantages offered by cloud computing
platforms in terms of acquiring, processing, and online training of artificial intelligence
algorithms for image classification. Thus, taking advantage of the cloud, recent studies
have produced fine-resolution (10 m), global-scale, and publicly accessible LULC maps. In the same way,
at the national or local level, several previous studies have produced high accuracy crop
maps through online automatic learning on the GEE platform (Xie et al. 2019; Rudiyanto
et al. 2019; Tian et al. 2019; Tiwari et al. 2020; Panjala, Krishna Gumma, and Teluguntla
2021; Htitiou et al. 2021; Adrian, Sagan, and Maimaitijiang 2021; Salinero-Delgado et al.
2021). Using online ML with RF on the GEE environment and multi-source passive and
active remote sensing data, Tian et al. (2020) developed a specific approach to mapping
the cultivation of garlic and wheat in northern China with a global accuracy of 95.97%. At
the subnational level, RF automatic learning on GEE was adopted by Teluguntla et al.
(2018) for mapping the cultivated land of China and Australia at a 30 m resolution with
a global accuracy of 97.6%, which is very comparable to that of Tian et al. (2020). Similarly,
at the scale of Morocco, Htitiou et al. (2021) proposed an automated approach on GEE for
extracting phenological metrics of crops by combining multi-temporal Sentinel-2A/B
bands, spectral indices and environmental features. The performance of the approach
reaches a global accuracy of 97.86%, indicating good prospects for large-scale dynamic
mapping of crop phenology. In a recent study, Xue et al. (2023) were able to produce and
evaluate a large-scale crop map of Jalaid Banner in China using GEE. The study used multi-
modal Sentinel imagery as input for a segmentation model called Simple Non-Iterative Clustering (SNIC) using RF and SVM and reached an overall accuracy of 98.66%. Another
large-scale classification was conducted by Xuan et al. (2023) in north-east China from
2013 to 2021 using hexagonal tiles. The authors used multi-source samples from field
survey and existing classification products. Errors from existing sources are bound to
affect the final result. Nonetheless, the approach yielded accuracies ranging between 89
and 97%.
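To illustrate the kind of workflow these GEE-based studies rely on, the sketch below trains an online Random Forest on a Sentinel-2 median composite with the Earth Engine Python API. The region of interest, the labelled sample asset, the band list, and all parameter values are placeholders; this is a generic outline under stated assumptions, not the pipeline of any specific study cited above.

```python
import ee
ee.Initialize()

# Placeholder inputs: a study area and labelled points with a 'class' property
roi = ee.Geometry.Rectangle([-7.8, 33.4, -7.2, 33.9])         # hypothetical extent
samples = ee.FeatureCollection('users/example/crop_samples')  # hypothetical asset
bands = ['B2', 'B3', 'B4', 'B8', 'B11', 'B12']

# Cloud-filtered Sentinel-2 surface reflectance composite over one growing season
composite = (ee.ImageCollection('COPERNICUS/S2_SR')
             .filterBounds(roi)
             .filterDate('2021-03-01', '2021-09-30')
             .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
             .median()
             .select(bands))

# Sample the composite at the labelled points and train an online Random Forest
training = composite.sampleRegions(collection=samples, properties=['class'], scale=10)
classifier = ee.Classifier.smileRandomForest(200).train(training, 'class', bands)

# Classify the composite to obtain the crop map
crop_map = composite.classify(classifier)
```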

4.3.2. Importance of spatio-temporal fusion approaches for multi-sensor images


Nowadays, both optical and radar geospatial technologies have seen a considerable
growth, and this trend does not seem to be slowing down. The importance of multi-
sensor image fusion lies in the role of time-series in crop mapping, particularly for
mapping the different stages of crop growth. In regions where optical data acquisition
is hindered by persistent cloud cover, the increasing interest in radar remote sensing
continues, particularly in the field of agriculture due to the continuity of measurements
and the insensitivity of the radar acquisition mode to atmospheric disturbances.
In the field of crop mapping, data fusion includes the fusion of multi-sensor images
from the same temporal window (Luo et al. 2021; da Silva Junior et al. 2020), multi-sensor
spatial overlap for large-scale mapping needs, the fusion of optical and radar sensor data
often with spectral indices (Tian et al. 2019; Zhenong et al. 2019; Amani et al. 2020; Liu et al. 2021; Qiu et al. 2022; Nihar et al. 2022; Chabalala, Adam, and Adem Ali 2022; Diem et al. 2022), and to some extent the combination of multi-sensor optical data (Paludo et al. 2020; Htitiou et al. 2021b; Yan et al. 2021; Xia et al. 2022; Rehman et al. 2023) or SAR data only (Mandal et al. 2018). Recently, Qiu et al. (2022) developed a robust
algorithm that uses multi-sensor time-series data fusion for mapping cultivated fields at
a 20 m resolution at the national scale in China. In fact, unlike the use of a single radar or
optical sensor, Blickensdörfer et al. (2022) demonstrated that taking advantage of the
combination of integrated multi-sensor data is very beneficial for improving overall
accuracy, which could range from 6 to 10%. The analysis was carried out at the scale of
Germany and includes 24 classes of agricultural land use. It should be noted that this is
one of the few approaches that includes environmental data in addition to the combina­
tion of Sentinel-1, Sentinel-2, and Landsat-8 time-series. Furthermore, when optical and
radar sensors are considered separately, the study highlights the superiority of optical
sensors over SAR data. In fact, discriminating crops with SAR data is strongly dependent
on the geometric characteristics of crops, which are dynamic depending on the pheno­
logical stage. Thus, when a multi-date approach is applied with SAR data, it results in
different temporal signatures depending on the intensity of the radar backscatter
(Dipankar, Kumar, and Rao 2020). It is therefore important to understand the limitations
of multi-date approaches based solely on SAR data series for mapping the phenology of
certain crops that have dynamic geometric variations in their phenological cycles.
Another study at the scale of Germany (Asam et al. 2022), also highlights a possible
improvement in the overall accuracy of the classification from 6 to 9% by combining
optical and SAR data. These figures are very comparable to those of Blickensdörfer et al.
(2022) which clearly demonstrate the contribution of combining optical and radar sensors
in crop mapping. Several other case studies have investigated the contribution of using
combined optical and SAR data in the field of crop mapping (Mansouri et al. 2018; Zhou
et al. 2019; Tiwari et al. 2020). In evaluating the influence of using multi-temporal Sentinel-

1 and Sentinel-2 data on the accuracy of classification, Tufail et al. (2021) revealed that this
combination would significantly improve the accuracy of the classification of crops. In the
same vein, Rao et al. (2021) found that the joint use of three sensors (Sentinel-1,
Sentinel-2 and Planet) was more beneficial in terms of improving the accuracy of classi­
fication compared to the use of those sensors individually.
Recent advancements in SAR crop mapping have been made by utilizing various
techniques in response to numerous challenges. One common challenge in large-scale
crop classification is unevenly distributed training data, but spatial feature selection on
optical-SAR data can reduce the number of input features while maintaining good
predictive performance (Orynbaikyzy, Gessner, and Conrad 2022). The combination of
Sentinel-1 and Sentinel-2 data has been found to be effective not only for crop-type
mapping, but also for crop rotation monitoring for smallholder farms (Ren et al. 2022). The
same challenge of training data was recently overcome by Xia et al. (2023) using
national census information in Japan to generate training data of nine main crop types.
Using a transformer on Sentinel-1 and Landsat 8 time-series, they were able to produce
the first high-resolution cropland map of Japan with an overall accuracy of 87.89%, with
class accuracy ranging between 77.09% and 90.37%. Using monthly time-series data
improves classification accuracy compared to single monthly window images (Imanni
et al. 2022). Feature engineering via optimal multi-temporal SAR image selection, which involves selecting the most informative images based on analysis of variance and a Jeffries-Matusita distance-based method, followed by an improved FCN model, was found
to achieve better classification performance than traditional machine learning methods,
even for complex crop planting structures (Guo et al. 2022). Xie and Niculescu (2022)
studied the effect of different polarization configurations on accurately detecting the
phenological stage of crops using Sentinel-1 time-series and found that it varies depend­
ing on the crop being studied. The crop type was also a big factor in quantifying the gains
of combining SAR and multispectral data. Weilandt et al. (2023) found that crop classes
with few training samples benefited the most from the multi-modal fusion. One possible
explanation could be the cloud coverage in the training data which could have hindered
the successful training of the model. They also noted that the accuracy obtained from
using one set of imagery is still almost as high as using the fused dataset. This finding
suggests that, if one is only interested in frequent classes, good classification results could be acquired with only Sentinel-1 or Sentinel-2, which would reduce the hardware and time resources needed for crop mapping.
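As a schematic illustration of feature-level optical-SAR fusion (not the method of any particular study discussed above), one can simply stack per-pixel optical features with SAR backscatter features before training a single classifier; the arrays below are hypothetical stand-ins for real Sentinel-2 and Sentinel-1 features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

n = 2000                           # hypothetical number of labelled pixels
optical = np.random.rand(n, 10)    # e.g. Sentinel-2 band/index time-series features
sar = np.random.rand(n, 4)         # e.g. Sentinel-1 VV/VH backscatter statistics
labels = np.random.randint(0, 5, size=n)

# Feature-level fusion: concatenate the two feature sets per pixel
fused = np.hstack([optical, sar])

for name, X in [("optical only", optical), ("SAR only", sar), ("fused", fused)]:
    score = cross_val_score(RandomForestClassifier(n_estimators=100),
                            X, labels, cv=3).mean()
    print(f"{name}: mean CV accuracy = {score:.2f}")
```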

4.3.3. Image segmentation algorithms in crop mapping


In recent years, the diversity of approaches dedicated to the qualitative and quantita­
tive mapping of seasonal crop covers has increasingly relied on various image
segmentation algorithms. This is due to two fundamental reasons: the increasing
requirements for more accurate mapping of crops for the need of intelligent agricul­
ture, and the recent progress made in the availability of open access data at finer
spatial resolutions. In recent literature, depending on the characteristics of image data,
several image segmentation algorithms have been used for the identification and
classification of crops. These include non-iterative multispectral image segmentation
algorithms such as the Simple Non-Iterative Clustering (SNIC) algorithm (Amani et al.
2020; Luo et al. 2021), the CNN-based U-net model (Kumar and Jayagopal 2021),
iterative hyperspectral image segmentation algorithms, and deep semantic segmentation algorithms using multi-source data (Wei et al. 2022). By applying the Simple Non-
Iterative Clustering method and the Continuous Naive Bayes classifier to Landsat-8,
Sentinel-2, and SRTM images, Paludo et al. (2020) found that the algorithm’s perfor­
mance reached a maximum correlation of 0.96 with ground truth data. Recently, a new
semantic segmentation model (HSI-TransUNet) for crop mapping using UAV hyper­
spectral data has been proposed by Niu et al. (2022). The approach is an improved
version of TransUNet and records an overall accuracy of 86% with a Kappa of 0.83, slightly better than the performance of other image segmentation models. However,
the optimal segmentation size for crop classification is strongly influenced by image
resolution, crop growth stage, and plot size (Luo et al. 2021). In general, Rui et al.
(2022) suggest that current approaches to agricultural field segmentation and delinea­
tion can be grouped into three categories: thresholding-based methods, texture
analysis-based segmentation, and trainable model-based segmentation.
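
As an illustration of how the SNIC superpixel step mentioned above is typically chained with object-based classification on Google Earth Engine, the sketch below is a minimal example assuming an authenticated earthengine-api session; the area of interest, date range and parameter values are hypothetical, and the per-segment mean bands it prints are the object-level features a classifier would consume.

import ee

ee.Initialize()  # assumes the Earth Engine Python API is installed and authenticated

aoi = ee.Geometry.Rectangle([-6.0, 34.0, -5.8, 34.2])  # hypothetical area of interest
s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
      .filterBounds(aoi)
      .filterDate("2021-03-01", "2021-07-31")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 10))
      .median()
      .select(["B2", "B3", "B4", "B8"])
      .clip(aoi))

# Simple Non-Iterative Clustering (SNIC) superpixel segmentation on a regular seed grid
seeds = ee.Algorithms.Image.Segmentation.seedGrid(36)
snic = ee.Algorithms.Image.Segmentation.SNIC(
    image=s2, size=36, compactness=1, connectivity=8, seeds=seeds)

# The output contains a "clusters" band (segment labels) and per-band segment means,
# which can be sampled at training points and passed to an object-based classifier.
print(snic.bandNames().getInfo())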

4.3.4. The contribution of spectral indices in image classification


Spectral indices correspond to a synthetic transformation of multispectral reflectance
bands. Spectral indices, in their many varieties, particularly those of vegetation, have been
used in conjunction with multispectral image series to improve the performance of image
classification algorithms (Mengyao et al. 2022; Pech-May et al. 2022). They are particularly
used for crop type mapping (Xiong et al. 2017; Teluguntla et al. 2018; Sood, Kumar, and
Persello 2021; Htitiou et al. 2021), double crop mapping (Guo et al. 2022), and threshold­
ing for identification of land cover units or crop systems (Xie et al. 2019; Zitian et al. 2022;
Diem et al. 2022). However, the contribution of spectral indices to crop mapping is not limited to this. Stacked time-series images of normalized vegetation indices, such as NDVI and EVI, have been used in several studies for crop monitoring and crop intensity
mapping independently of multispectral images (Panjala, Krishna Gumma, and
Teluguntla 2021). To achieve this effectively, the Time-Weighted Dynamic Time Warping
algorithm (TWDTW), an improved version of Dynamic Time Warping (DTW), is often used
for phenological mapping of crops (Zheng et al. 2022). In other studies, in order to
enhance the robustness of vegetation indices, the land surface temperature (LST), land
surface water index (LSWI), soil index (Guo et al. 2021; Parra et al. 2022) and other indices,
such as the Perpendicular Crop Enhancement Index (PCEI) or Perpendicular Vegetation
Index (PVI), are frequently used in conjunction with vegetation indices (Pan et al. 2021). In the same vein, compared with using reflectance bands only, Luo et al. (2021) found that adding vegetation indices to the classification improved overall accuracy by up to 0.6%. In this
regard, Hoummaidi, Larabi, and Alam (2021) highlighted that the use of multispectral
images, with the NDVI, combined with indices, such as the Crop-Water Stress Index (CWSI)
and Canopy-Chlorophyll Content Index (CCCI) is valuable for identifying the health of
crops at different phenological stages. In addition to adding spectral indices, some
authors such as Rui et al. (2022) suggest that adding meteorological and geological
indices could improve the detection and classification of cultivated land.
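
For reference, the two most widely used indices mentioned above are simple band transformations; the sketch below (with a random array standing in for a real Sentinel-2 reflectance time series) shows how NDVI and EVI layers are typically computed and stacked with the reflectance bands before classification.

import numpy as np

def ndvi(nir, red, eps=1e-10):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red + eps)

def evi(nir, red, blue):
    # Enhanced Vegetation Index with the standard coefficients (G=2.5, C1=6, C2=7.5, L=1)
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Hypothetical surface-reflectance time series: (dates, bands, rows, cols),
# bands ordered here as [blue, red, nir] and scaled to the 0-1 range.
reflectance = np.random.rand(12, 3, 64, 64).astype("float32")
blue, red, nir = reflectance[:, 0], reflectance[:, 1], reflectance[:, 2]

ndvi_ts = ndvi(nir, red)   # (12, 64, 64) NDVI time series
evi_ts = evi(nir, red, blue)
features = np.concatenate([reflectance, ndvi_ts[:, None], evi_ts[:, None]], axis=1)
print(features.shape)      # (12, 5, 64, 64): reflectance bands plus two index channels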

4.3.5. Multi and hyperspectral UAV images in crop mapping


Like other breakthrough technologies, the emergence of unmanned aerial vehicles (UAV)
has profoundly revolutionized the application of remote sensing in crop mapping. New technological advances, driven by the ongoing transformation of an agricultural sector already in transition in several countries where precision agriculture is gaining momentum, have accelerated the growing use of UAVs. The emergence of new precision agriculture concepts has its origins in the growing use of multi- and hyperspectral UAV sensors in the agricultural field in general, and more specifically in the geo-spatialized identification of crop varieties. The production of very high-resolution maps of seasonal crop cover has grown considerably in recent years.
Multispectral imagery by drone is one of the most reliable data sources for a better
identification of crops (Hoummaidi, Larabi, and Alam 2021; Parra et al. 2022). Unlike
optical satellite imagery, UAV products are unaffected by cloud cover because of their low flight altitude. Moreover, UAVs can generate the highest-resolution imagery at a controllable temporal frequency. The main drawbacks of using UAVs for crop mapping are the small coverage area and the cost of the imaging system.

4.4. Challenges, limitations and perspectives


Image classification for crop mapping is a difficult task often subject to inaccuracies
closely related to data characteristics, choice of image classification algorithms, expert
handling (e.g. the choice of training sample size), and a mix of factors auxiliary to the image data.
A review of the current literature raises several challenges in crop mapping using image
classification techniques. We document in this review several obstacles that hinder the
identification and accurate geo-spatialization of crop varieties.
Indeed, with traditional pixel-based approaches applied to very high spatial resolution images, separating the reflectance of weeds from that of crop species with similar characteristics is challenging. To address this challenge, multi-resolution segmentation of hyperspectral images acquired at different crop growth stages is the most promising emerging alternative (Liu et al. 2020; Che’ya, Dunwoody, and Gupta 2021; Dmitriev et al. 2022). On the
other hand, the challenge of mapping crops at different stages of their growth by remote
sensing products is closely related to the difficulty of adequately reconciling the phenol­
ogy of crop growth with that of remote sensing image availability (Guo et al. 2021; Xia
et al. 2022).
According to Gao and Zhang (2021), to ensure rational and optimal crop management,
the phenology of remote sensing must be perfectly correlated with the growth stages of
the crops. However, in rainy and humid regions, the temporal correspondence of remote
sensing image availability and crop phenology is particularly difficult (Zhao et al. 2012;
Rußwurm and Körner 2017). Although essential, real-time updating of accurate crop
phenology mapping is often confronted with the unavailability of usable image data at
the desired spatial coverage.
Despite advances in the automation of crop mapping, including temporal, spectral,
and spatial fusion techniques of multi-source data, timely and accurate mapping of small
crops in-season and at large scales still remains challenging (Huapeng et al. 2021; Zhang
et al. 2022). Nevertheless, at small spatial scales, the increasing availability and accessi­
bility of high temporal and spatial resolution data from multiple sensors presents good
prospects for real-time and high spatial resolution crop phenology mapping. However, for
the identification and classification of crop varieties in highly temporally and spatially
heterogeneous agricultural production systems, regardless of the continuous availability
of data, well-adapted image classification methods are necessary. Indeed, especially when
focusing on cross-regional phenological mapping, the discrepancies between the tem­
poral and spectral attributes of crops in different areas make it difficult to generalize
approaches developed in specific contexts. Thus, the shortcomings of generalizing
trained image classification models to specific areas are the most important challenges
to accurate mapping of actively cultivated fields over large areas.
In terms of perspectives, to overcome these crop mapping constraints, some studies
such as Feyisa et al. (2020) revealed the value of iterative participatory mapping that
involves the use of vegetation time-series, field data, and auxiliary data composites for
training deep learning models with massive sample varieties that are perfectly represen­
tative of the complexity of the agricultural landscape. Similarly, in another study, Zhou
et al. (2019) suggested that adding spatial features to existing methods extracted from
time-series by DL model improved the overall accuracy of crop classification by more than
5%. In general, in addition to the emergence of participatory crop mapping approaches,
our analysis shows increasing interest in incorporating prior features whether temporal
and/or spatial in improving image classification for crop mapping. A recent study by
Duhayyim et al. (2023) integrated a set of useful feature vectors using a capsule network
(CapsNet) inside a novel Hurricane Optimization Algorithm with Deep Transfer Learning
Driven Crop Classification (HOADTL-CC). A CapsNet is a new form of CNN which differs
from the traditional neural networks as it uses ‘capsules’ instead of neurons. Capsules are
groups of neurons that work together to represent various properties of a particular
object. Comparative analysis was done with state-of-the-art approaches, and the results
suggest that this new approach can be used as an effective tool for classifying crops.
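
To make the capsule mechanism described above more tangible, the short sketch below implements the standard 'squash' non-linearity used in capsule networks, which rescales each capsule's output vector so that its length behaves like a presence probability; it is only an illustration of the general idea, not the HOADTL-CC implementation of Duhayyim et al. (2023).

import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Shrinks short capsule vectors towards zero and long ones towards unit length,
    # so the norm of each capsule can be read as the probability that its entity is present.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

# A toy layer of 10 capsules, each an 8-dimensional "pose" vector
capsules = np.random.randn(10, 8)
activated = squash(capsules)
print(np.linalg.norm(activated, axis=-1))  # every length now lies strictly between 0 and 1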
With respect to multi-source image fusion, our analysis shows that the fusion of
multispectral or hyperspectral UAV and satellite images is not well explored. Empirical
research in this direction with state-of-the-art deep learning algorithms should be encour­
aged to address many of the challenges of operational crop mapping. For example,
changes in agricultural field areas over time are a real obstacle for approaches based on
multi-temporal image segmentation techniques at the multi-year scale. Thus, the accu­
racy of mapping actively farmed land using deep learning algorithms for image segmen­
tation is strongly influenced by the temporal evolution of agricultural field geometry. In
highly evolving agricultural landscapes, the combination of massive multi-source data
with very high temporal and spatial resolution is required to adequately capture the
spatial-temporal patterns of crops. LiDAR use for crop mapping remains rare, appearing in only 3 of the 386 reviewed studies. This technology can be used for quantitative mapping at very high resolutions and has shown promising results in all studies where it has been applied.
Identification of crops that have similar erectophile structure presents a challenge in
crop mapping tasks and can lower the overall accuracy. Researchers dealt with this issue
by simply combining spectrally similar classes into one large class. Recently, Machichi
et al. (2022) proposed a multi-input CNN-LSTM architecture capable of discriminating
between four cereal species. More studies are warranted for other crop types.
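
A minimal multi-input CNN-LSTM of the general kind referred to above can be sketched as follows (Keras, with illustrative layer sizes, branch inputs and class count; this is not the exact CerealNet architecture of Machichi et al. 2022): each branch applies 1D convolutions along the time axis of one input time series, the branches are fused, and an LSTM summarizes the temporal dynamics before a softmax layer separates the cereal classes.

from tensorflow.keras import layers, Model

def temporal_branch(n_steps, n_features, name):
    # One input branch: 1D convolutions along the temporal axis of a time series
    inp = layers.Input(shape=(n_steps, n_features), name=name)
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inp)
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(x)
    return inp, x

# Two hypothetical inputs: raw reflectance bands and derived vegetation-index time series
refl_in, refl_feat = temporal_branch(n_steps=24, n_features=10, name="reflectance")
vi_in, vi_feat = temporal_branch(n_steps=24, n_features=2, name="indices")

fused = layers.Concatenate(axis=-1)([refl_feat, vi_feat])
temporal = layers.LSTM(64)(fused)                       # temporal summary of the fused features
out = layers.Dense(4, activation="softmax")(temporal)   # e.g. four cereal species

model = Model(inputs=[refl_in, vi_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()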
A hierarchical multi-resolution approach to crop-type mapping has the potential to produce high-accuracy maps at different levels. The problem of spectral similarity is even more prevalent in early crop mapping tasks. This specific type of classification is attracting growing interest, as it allows yield to be predicted and plant diseases to be detected at a much earlier stage (Wei et al. 2023; Li et al. 2023).
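
A minimal sketch of such a hierarchical scheme is given below (illustrative class names, random features and random-forest classifiers; any model could be substituted): a first-level model predicts a coarse crop group, and a second-level specialist trained only on that group assigns the final species label.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 20))                                    # per-parcel temporal features
fine = rng.choice(["wheat", "barley", "maize", "alfalfa"], size=600)
coarse = np.where(np.isin(fine, ["wheat", "barley"]), "cereal", "non_cereal")

# Level 1: coarse crop groups
level1 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, coarse)

# Level 2: one specialist classifier per coarse group, trained on that group's samples only
level2 = {g: RandomForestClassifier(n_estimators=200, random_state=0).fit(X[coarse == g], fine[coarse == g])
          for g in np.unique(coarse)}

def predict_hierarchical(samples):
    groups = level1.predict(samples)
    return np.array([level2[g].predict(s[None, :])[0] for g, s in zip(groups, samples)])

print(predict_hierarchical(X[:5]))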
However, independently of these perspectives, in terms of model performance, our
analysis shows that the relationship between the number of classes, the spatial, temporal and spectral resolutions, and the threshold beyond which the performance of supervised ML and DL degrades has not been studied systematically. Yet, in remote sensing applied to crop mapping, the number of classes, i.e. the variety of potentially discriminable crop patterns, is a key element that can, to a certain extent, determine and limit the performance of certain algorithms, depending on whether they are trained on fairly representative samples and on fine or coarse spatial resolutions. At present, it
is still unknown what specific sensitivities related to data features or the number of classes
(labelled samples) determine or influence the performance of image classification algo­
rithms. There is a critical lack of extensive benchmarking analysis whose objective would
be to determine the critical limits at which the performance of the most powerful
algorithms would drop.

5. Conclusion
The identification and mapping of crops and their characteristics using remote sensing data
has received much attention from the scientific community in recent years. The emergence
of new technologies, the newfound success of deep learning models, and a real dynamic of
transformation of the agricultural sector in transition towards precision agriculture in
several countries of the world have accelerated the qualitative and quantitative require­
ments of crop mapping. The operational application of scientific methods, which was once
restricted to basic research, is now a common occurrence worldwide. This systematic
literature review study examines recent scientific advances in the field of crop mapping
in a broad sense that includes classical crop mapping (crop variety maps), intra-seasonal
phenological crop growth mapping approaches, and crop systems mapping. The analysis
presents and discusses image classification approaches and methods in the field of crop
mapping, the performance of machine learning algorithms and the contribution of spectral,
temporal and very high spatial resolution features in crop identification and classification.
Despite the scientific advances in image classification for crop mapping, dynamic
mapping in a timely manner and at different crop growth stages is still difficult from
optical remote sensing data. Likewise, the identification of crop associations or even crop
species of the same spectral characteristics remains a major challenge. Regardless of the
algorithm or combination of algorithms and image classification method, the use of very
high spatial resolution is often compromised by the simultaneous detection of weeds and
crops of the same spectral properties. In addition to this, the problem of accuracy and
consistency of the data including the phase shift between temporal and spectral attri­
butes of crops has been recognized as the major challenge in mapping crops at different
stages of the phenological cycle.
However, the emergence of new spatio-temporal fusion methods of multi-sensor
imagery and the increasing availability of hyperspectral data offer new perspectives
towards a better identification of spectral characteristics of crop species. One relatively
new technology that has rarely been used is LiDAR. With the democratization of data access, if LiDAR datasets become available, more advances in quantitative mapping could
be made. Finally, hierarchical crop mapping is a promising approach that could solve
multiple issues especially when ground truth data is unbalanced.
Acknowledgement
This research was funded by Hassan II Academy of Science and Technology under the project
entitled ‘multispectral satellite imagery, data mining, and agricultural applications’.

Disclosure statement
No potential conflict of interest was reported by the authors.

Funding
The work was supported by the Académie Hassan II des Sciences et Techniques.

ORCID
Mouad Alami Machichi http://orcid.org/0000-0001-9685-0664
Ismaguil Hanadé Houmma http://orcid.org/0000-0001-7838-6597

References
Adrian, J., V. Sagan, and M. Maimaitijiang. 2021. “Sentinel SAR-Optical Fusion for Crop Type Mapping
Using Deep Learning and Google Earth Engine.” ISPRS Journal of Photogrammetry and Remote
Sensing 175: 215–235. doi:10.1016/j.isprsjprs.2021.02.018. Accessed 2021-12-17.
Akbari, E., A. Darvishi Boloorani, N. Neysani Samany, S. Hamzeh, S. Soufizadeh, and S. Pignatti. 2020.
“Crop Mapping Using Random Forest and Particle Swarm Optimization Based on Multi-Temporal
Sentinel-2.” Remote Sensing 12 (9): 1449. Number: 9 Publisher: Multidisciplinary Digital Publishing
Institute. https://www.mdpi.com/2072-4292/12/9/1449.
Alejandro, M.Q., J. M. Lopez-Sanchez, F. Vicente-Guijalba, A. W. Jacob, and M. E. Engdahl. 2020.
“Time-Series of Sentinel-1 Interferometric Coherence and Backscatter for Crop-Type Mapping.“
EEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. https://doi.org/10.
1109/JSTARS.2020.3008096
Amani, M., M. Kakooei, A. Moghimi, A. Ghorbanian, B. Ranjgar, S. Mahdavi, A. Davidson, et al. 2020.
“Application of Google Earth Engine Cloud Computing Platform, Sentinel Imagery, and Neural
Networks for Crop Mapping in Canada.” Remote Sensing Number: 21 Place: ST ALBAN-ANLAGE 66,
CH-4052 BASEL, SWITZERLAND Publisher: MDPI Type: Article 12 (21): 3561. 10.3390/rs12213561
Aneece, I., and P. Thenkabail. 2018. “Accuracies Achieved in Classifying Five Leading World Crop
Types and Their Growth Stages Using Optimal Earth Observing-1 Hyperion Hyperspectral
Narrowbands on Google Earth Engine.” Remote Sensing 10 (12): 2027. Number: 12 Publisher:
Multidisciplinary Digital Publishing Institute, Accessed 2022-10-21. https://www.mdpi.com/2072-
4292/10/12/2027.
Aneece, I., and P. S. Thenkabail. 2021. “Classifying Crop Types Using Two Generations of
Hyperspectral Sensors (Hyperion and DESIS) with Machine Learning on the Cloud.” Remote
Sensing 13 (22): 4704. Publisher: mdpi.com. https://www.mdpi.com/2072-4292/13/22/4704.
Asam, S., U. Gessner, R. Almengor González, M. Wenzl, J. Kriese, and C. Kuenzer. 2022. “Mapping Crop
Types of Germany by Combining Temporal Statistical Metrics of Sentinel-1 and Sentinel-2 Time
Series with LPIS Data.” Remote Sensing 14 (13): 2981. Number: 13 Publisher: Multidisciplinary
Digital Publishing Institute, Accessed 2022-10-21. https://www.mdpi.com/2072-4292/14/13/2981
.
Barnes, C. F., and J. Burki. 2006. “Late-Season Rural Land-Cover Estimation with Polarimetric-SAR
Intensity Pixel Blocks And$sigma$-Tree-Structured Near-Neighbor Classifiers.“ IEEE Transactions
on Geoscience and Remote Sensing. https://doi.org/10.1109/TGRS.2006.875449
Basukala, A. K., C. Oldenburg, J. Schellberg, M. Sultanov, and O. Dubovyk. 2017. “Towards Improved
Land Use Mapping of Irrigated Croplands: Performance Assessment of Different Image
Classification Algorithms and Approaches.” European Journal of Remote Sensing 50 (1): 187–201,
Number: 1 Place: 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
Publisher: TAYLOR & FRANCIS LTD Type: Article. doi:10.1080/22797254.2017.1308235.
Bey, A., A. Sánchez-Paus Díaz, D. Maniatis, G. Marchi, D. Mollicone, S. Ricci, J.F. Bastin, et al. 2016.
“Collect Earth: Land Use and Land Cover Assessment Through Augmented Visual Interpretation.”
Remote Sensing 8 (10) 807 10.3390/rs8100807
Bhosle, K., and V. Musande. 2019. “Evaluation of Deep Learning CNN Model for Land Use Land Cover
Classification and Crop Identification Using Hyperspectral Remote Sensing Images.” Journal of the
Indian Society of Remote Sensing 47 (11): 1949–1958, Number: 11 Place: ONE NEW YORK PLAZA,
SUITE 4600, NEW YORK, NY, UNITED STATES Publisher: SPRINGER Type: Article. doi:10.1007/
s12524-019-01041-2.
Bhosle, K., and V. Musande. 2020. Journal of the Indian Society of Remote Sensing.
Blickensdörfer, L., M. Schwieder, D. Pflugmacher, C. Nendel, S. Erasmi, and P. Hostert. 2022.
“Mapping of Crop Types and Crop Sequences with Combined Time Series of Sentinel-1,
Sentinel-2 and Landsat 8 Data for Germany.” Remote Sensing of Environment 269: 112831.
doi:10.1016/j.rse.2021.112831. Accessed 2021-12-17.
Bruzzone, L., and D. F. Prieto. 1999. “A Technique for the Selection of Kernel-Function Parameters in
RBF Neural Networks for Classification of Remote-Sensing Images.” IEEE Transactions on
Geoscience and Remote Sensing 37 (2): 1179–1184. Accessed 2022-10-19. http://ieeexplore.ieee.
org/document/752239/.
Buchhorn, M., B. Smets, L. Bertels, B. De Roo, M. Lesiv, N.E. Tsendbazar, L. Linlin, and A. Tarko. 2020.
“Copernicus Global Land Service: Land Cover 100m: Version 3 Globe 2015-2019: Product User
Manual.” Sep. 10.5281/zenodo.3938963.
Cai, Y., K. Guan, J. Peng, S. Wang, C. Seifert, B. Wardlow, and L. Zhan. 2018. “A High-Performance and
In-Season Classification System of Field-Level Crop Types Using Time-Series Landsat Data and
a Machine Learning Approach.” Remote Sensing of Environment 210: 35–47. doi:10.1016/j.rse.
2018.02.045.
Camps-Valls, G., L. Gómez-Chova, J. Calpe-Maravilla, E. Soria-Olivas, J. D. Martín-Guerrero, and
J. Moreno. 2003. “Support Vector Machines for Crop Classification Using Hyperspectral Data.” In
Pattern Recognition and Image Analysis, edited by F. J. Perales and A. J. C. Campilho, 134–141.
Berlin, Heidelberg: Springer.
Chabalala, Y., E. Adam, and K. Adem Ali. 2022. “Machine Learning Classification of Fused Sentinel-1
and Sentinel-2 Image Data Towards Mapping Fruit Plantations in Highly Heterogenous
Landscapes.” Remote Sensing 14 (11): 2621. doi:10.3390/rs14112621.
Chaudhari, S. V., S. Polepaka, M. Shaikhul Ashraf, R. Swain, A. Gvs, and R. Kumar Bora. 2022. “Bayesian
Optimization with Deep Learning Based Crop Type Classification on UAV Imagery.” In 2022
International Conference on Augmented Intelligence and Sustainable Systems (ICAISS), Trichy,
India, IEEE.
Chen, Q., W. Cao, J. Shang, J. Liu, and X. Liu. 2022. “Superpixel-Based Cropland Classification of SAR
Image with Statistical Texture and Polarization Features.” IEEE Geoscience and Remote Sensing
Letters, Trichy, India, 19: 1–5.
Chen, K. S., W. P. Huang, D. H. Tsay, and F. Amar. 1996. “Classification of Multifrequency Polarimetric
SAR Imagery Using a Dynamic Learning Neural Network.“ IEEE Transactions on Geoscience and
Remote Sensing 34 (3): 814–820.
Che’ya, N. N., E. Dunwoody, and M. Gupta. 2021. “Assessment of Weed Classification Using
Hyperspectral Reflectance and Optimal Multispectral UAV Imagery.” Agronomy 11 (7) https://
www.mdpi.com/2073-4395/11/7/1435. 7 1435
Chollet, F. 2018. Deep Learning with Python. Shelter Island, New York, United States: Manning
Publications Co.
Claverie, M., J. Junchang, J. G. Masek, J. L. Dungan, E. F. Vermote, J.C. Roger, S. V. Skakun, and
C. Justice. 2018. “The Harmonized Landsat and Sentinel-2 Surface Reflectance Data Set.” Remote
Sensing of Environment 219: 145–161. doi:10.1016/j.rse.2018.09.002.
Danya, L., J. Gajardo, M. Volpi, and T. Defraeye. 2022. “Using Machine Learning to Generate an
Open-Access Cropland Map from Satellite Images Time Series in the Indian Himalayan Region.”
arXiv preprint arXiv:2203.14673.
da Silva Junior, A. H. S. Leonel-Junior, C. Antonio, A. Hérbete Sousa Leonel-Junior, F. Saragosa Rossi,
W. Luiz Félix Correia Filho, D. de Barros Santiago, et al. 2020. “Mapping Soybean Planting Area in
Midwest Brazil with Remotely Sensed Images and Phenology-Based Algorithm Using the Google
Earth Engine Platform.” Computers and Electronics in Agriculture 169: 105194. doi:10.1016/j.
compag.2019.105194.
Diem, P. K., N. K. Diem, N. T. Can, V. Q. Minh, H. T. T. Huong, N. T. H. Diep, and P. C. Tao. 2022.
“Assessing the Applicability of Fusion Landsat-MODIS Data for Mapping Agricultural Land Use -
a Case Study in an Giang Province.” IOP Conference Series: Earth and Environmental Science 964 (1):
012005. doi:10.1088/1755-1315/964/1/012005.
Dimov, D. 2022. “Classification of Remote Sensing Time Series and Similarity Metrics for Crop Type
Verification.” Journal of Applied Remote Sensing 16 (02). doi:10.1117/1.JRS.16.024519.
Dipankar, M., V. Kumar, and Y. S. Rao. 2020. “An Assessment of Temporal RADARSAT-2 SAR Data for
Crop Classification Using KPCA Based Support Vector Machine.” Geocarto International 37 (6):
1547–1559. doi:10.1080/10106049.2020.1783577.
Dmitriev, P. A., B. L. Kozlovsky, D. P. Kupriushkin, A. A. Dmitrieva, V. D. Rajput, V. A. Chokheli,
E. P. Tarik, et al. 2022. “Assessment of Invasive and Weed Species by Hyperspectral Imagery in
Agrocenoses Ecosystem.” Remote Sensing 14 (10): 2442. https://www.mdpi.com/2072-4292/14/
10/2442 .
Dong, J., X. Xiao, M. A. Menarguez, G. Zhang, Y. Qin, D. Thau, C. Biradar, and B. Moore. 2016.
“Mapping Paddy Rice Planting Area in Northeastern Asia with Landsat 8 Images,
Phenology-Based Algorithm and Google Earth Engine.” Remote Sensing of Environment 185:
142–154. doi:10.1016/j.rse.2016.02.016.
Duhayyim, M. A., H. Alsolai, S. B. H. Hassine, J. S. Alzahrani, A. S. Salama, A. Motwakel, I. Yaseen, and
A. S. Zamani. 2023. “Automated Deep Learning Driven Crop Classification on Hyperspectral
Remote Sensing Images.” Computers, Materials & Continua 74 (2): 3167–3181. doi:10.32604/cmc.
2023.033054.
Erdanaev, E., M. Kappas, and D. Wyss. 2022a. “The Identification of Irrigated Crop Types Using
Support Vector Machine, Random Forest and Maximum Likelihood Classification Methods with
Sentinel-2 Data in 2018: Tashkent Province, Uzbekistan.” International Journal of Geoinformatics
18 (2): 37–53.
Erdanaev, E., M. Kappas, and D. Wyss. 2022b. “Irrigated Crop Types Mapping in Tashkent Province of
Uzbekistan with Remote Sensing-Based Classification Methods.” Sensors 22(15): 5683. Number: 15
Publisher: Multidisciplinary Digital Publishing Institute https://www.mdpi.com/1424-8220/22/15/
5683. Accessed 2022-10-21.
Espinosa-Herrera, J. M., A. Macedo-Cruz, D. S. Fernández-Reynoso, H. Flores-Magdaleno,
Y. M. Fernández-Ordoñez, and J. Soria-Ruíz. 2022. “Monitoring and Identification of Agricultural
Crops Through Multitemporal Analysis of Optical Images and Machine Learning Algorithms.”
Sensors 22(16): 6106. Number: 16 Publisher: Multidisciplinary Digital Publishing Institute https://
www.mdpi.com/1424-8220/22/16/6106. Accessed 2022-10-21.
FAO. 2017. “The Future of Food and Agriculture–Trends and Challenges.” Annual Report 296: 1–180.
Feyisa, G. L., L. Kris Palao, A. Nelson, M. Krishna Gumma, A. Paliwal, K. Thawda Win, K. Htar Nge, and
D. E. Johnson. 2020. “Characterizing and Mapping Cropping Patterns in a Complex
Agro-Ecosystem: An Iterative Participatory Mapping Procedure Using Machine Learning
Algorithms and MODIS Vegetation Indices.” Computers and Electronics in Agriculture 175:
105595. doi:10.1016/j.compag.2020.105595.
Foley, J. A., N. Ramankutty, K. A. Brauman, E. S. Cassidy, J. S. Gerber, M. Johnston, N. D. Mueller, et al.
2011. “Solutions for a Cultivated Planet.” Nature 478 (7369): 337–342. doi:10.1038/nature10452.
Fu, K. S., D. A. Landgrebe, and T. L. Phillips. 1969. “Information Processing of Remotely Sensed
Agricultural Data.” Proceedings of the IEEE 57 (4): 639–653, Number: 4 Conference Name:
Proceedings of the IEEE. doi:10.1109/PROC.1969.7019.
Gao, F., and X. Zhang. 2021. “Mapping Crop Phenology in Near Real-Time Using Satellite Remote
Sensing: Challenges and Opportunities.” Journal of Remote Sensing 2021: 1–14. doi:10.34133/
2021/8379391.
Garcia-Berna, A., S. O. Jose, B. Benmouna, G. Garcia-Mateos, J. Luis Fernandez-Aleman, and J. Miguel
Molina-Martinez. 2020. “Systematic Mapping Study on Remote Sensing in Agriculture.” Applied
Sciences-Basel 10 (10): 3456, Number: 10 Place: ST ALBAN-ANLAGE 66, CH-4052 BASEL,
SWITZERLAND Publisher: MDPI Type: Review. doi:10.3390/app10103456.
Ghassemi, B., A. Dujakovic, M. Żółtak, M. Immitzer, C. Atzberger, and F. Vuolo. 2022. “Designing a
European-Wide Crop Type Mapping Approach Based on Machine Learning Algorithms Using
LUCAS Field Survey and Sentinel-2 Data.” Remote Sensing 14 (3): 541. doi:10.3390/rs14030541.
Ghosh, R., P. Ravirathinam, X. Jia, C. Lin, Z. Jin, and V. Kumar. 2021. “Attention-Augmented
Spatio-Temporal Segmentation for Land Cover Mapping.” In 2021 IEEE International Conference
on Big Data (Big Data), Conference Location: Orlando, FL, USA, 1399–1408.
Google, L. L. C. 2005. “Google Earth.” Accessed on December 29, 2022, https://www.google.com/
earth/ .
Guang, L., W. Han, Y. Dong, X. Zhai, S. Huang, M. Weitong, X. Cui, and Y. Wang. 2023. “Multi-Year
Crop Type Mapping Using Sentinel-2 Imagery and Deep Semantic Segmentation Algorithm in the
Hetao Irrigation District in China.” Remote Sensing 15 (4): 875. doi:10.3390/rs15040875.
Gu, L., F. He, and S. Yang. 2019. “Crop Classification Based on Deep Learning in Northeast China
Using Sar and Optical Imagery.” In 2019 SAR in Big Data Era, BIGSARDATA 2019 - Proceedings, Type:
Conference Paper, Conference Location: Beijing, China.
Guo, Z., Q. Wenwen, Y. Huang, J. Zhao, H. Yang, V.C. Koo, and L. Ning. 2022. “Identification of Crop
Type Based on C-AENN Using Time Series Sentinel-1A SAR Data.” Remote Sensing 14 (6): 1379.
doi:10.3390/rs14061379.
Guo, Y., H. Xia, L. Pan, X. Zhao, and L. Rumeng. 2022. “Mapping the Northern Limit of Double
Cropping Using a Phenology-Based Algorithm and Google Earth Engine.” Remote Sensing 14 (4):
1004. doi:10.3390/rs14041004.
Guo, Y., H. Xia, L. Pan, X. Zhao, L. Rumeng, X. Bian, R. Wang, and Y. Chong. 2021. “Development of
a New Phenology Algorithm for Fine Mapping of Cropping Intensity in Complex Planting Areas
Using Sentinel-2 and Google Earth Engine.” ISPRS International Journal of Geo-Information 10 (9):
587. doi:10.3390/ijgi10090587.
Gupta, J., and P. Wintz. 1975. “A Boundary Finding Algorithm and Its Applications.“ IEEE Transactions
on Circuits and Systems. 22 (4): 351–362.
Hadria, R. 2018. “Classification multi-temporelle des agrumes dans la plaine de triffa a partir des
images sentinel 1 en vue d’une meilleure gestion de l’eau d’irrigation.“ Atelier International sur
l’apport des images satellite Sentinel2 : Etat de L’art de la recherche au service de l’environne­
ment et applications associées,CRTS, Rabat, Morocco. 03.
Haibin, W., H. Zhou, A. Wang, and Y. Iwahori. 2022. “Precise Crop Classification of Hyperspectral
Images Using Multi-Branch Feature Fusion and Dilation-Based MLP.” Remote Sensing 14(11): 2713.
Number: 11 Publisher: Multidisciplinary Digital Publishing Institute https://www.mdpi.com/2072-
4292/14/11/2713. Accessed 2022-07-4.
Hamidi, M., A. Safari, and S. Homayouni. 2021. “An Auto-Encoder Based Classifier for Crop Mapping
from Multitemporal Multispectral Imagery.” International Journal of Remote Sensing 42 (3):
986–1016, Number: 3. doi:10.1080/01431161.2020.1820619.
Hamza, M. A., F. Alrowais, J. S. Alzahrani, H. Mahgoub, N. M. Salem, and R. Marzouk. 2022. “Squirrel
Search Optimization with Deep Transfer Learning-Enabled Crop Classification Model on
Hyperspectral Remote Sensing Imagery.” Applied Sciences 12(11): 5650. Number: 11 Publisher:
Multidisciplinary Digital Publishing Institute https://www.mdpi.com/2076-3417/12/11/5650.
Accessed 2022-07-4.
Haolu, L., G. Wang, Z. Dong, X. Wei, W. Mengjuan, H. Song, and S. Obiri Yeboah Amankwah. 2021.
“Identifying Cotton Fields from Remote Sensing Images Using Multiple Deep Learning Networks.”
Agronomy 11 (1): 174. doi:10.3390/agronomy11010174.
Hao, P., L. Wang, Z. Niu, and Q. K. Hassan. 2015. “Comparison of Hybrid Classifiers for Crop
Classification Using Normalized Difference Vegetation Index Time Series: A Case Study for
Major Crops in North Xinjiang, China.” PloS One 10 (9): e0137748, Number: 9 Place: 1160 BATTERY
STREET, STE 100, SAN FRANCISCO, CA 94111 USA Publisher: PUBLIC LIBRARY SCIENCE Type:
Article. doi:10.1371/journal.pone.0137748.
Hatfield, P. L., and P. J. Pinter. 1993. “Remote Sensing for Crop Protection.” Crop Protection 12 (6):
403–413. Accessed 2022-10-19. https://www.sciencedirect.com/science/article/pii/
026121949390001Y.
Hegarty-Craver, M., J. Polly, M. O’Neil, N. Ujeneza, J. Rineer, R. H. Beach, D. Lapidus, and D. S. Temple.
2020. “Remote Crop Mapping at Scale: Using Satellite Imagery and UAV-Acquired Data as Ground
Truth.” Remote Sensing 12(12): 1984. Number: 12 Publisher: Multidisciplinary Digital Publishing
Institute https://www.mdpi.com/2072-4292/12/12/1984. Accessed 2022-02-17.
Hoummaidi, L. E., A. Larabi, and K. Alam. 2021. “Using Unmanned Aerial Systems and Deep Learning
for Agriculture Mapping in Dubai.” Heliyon 7 (10): e08154. doi:10.1016/j.heliyon.2021.e08154.
Htitiou, A., A. Boudhar, A. Chehbouni, and T. Benabdelouahab. 2021. “National-Scale Cropland
Mapping Based on Phenological Metrics, Environmental Covariates, and Machine Learning on
Google Earth Engine.” Remote Sensing 13 (21): 4378. doi:10.3390/rs13214378.
Htitiou, A., A. Boudhar, Y. Lebrini, H. Lionboui, A. Chehbouni, and T. Benabdelouahab. 2021.
“Classification and Status Monitoring of Agricultural Crops in Central Morocco: A Synergistic
Combination of OBIA Approach and Fused Landsat-Sentinel-2 Data.” Journal of Applied Remote
Sensing 15 (01). doi:10.1117/1.JRS.15.014504.
Huapeng, L., C. Zhang, S. Zhang, and P. M. Atkinson. 2019. “A Hybrid OSVM-OCNN Method for Crop
Classification from Fine Spatial Resolution Remotely Sensed Imagery.” Remote Sensing 11 (20):
2370, Number: 20 Place: ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND Publisher: MDPI
Type: Article. doi:10.3390/rs11202370.
Huapeng, L., C. Zhang, S. Zhang, X. Ding, and P. M. Atkinson. 2021. “Iterative Deep Learning (IDL) for
Agricultural Landscape Classification Using Fine Spatial Resolution Remotely Sensed Imagery. “
International Journal of Applied Earth Observation and Geoinformation, 102. Netherlands: Elsevier.
Huapeng, L., C. Zhang, Y. Zhang, S. Zhang, X. Ding, and P. M. Atkinson. 2021. “A Scale Sequence
Object-Based Convolutional Neural Network (SS-OCNN) for Crop Classification from Fine Spatial
Resolution Remotely Sensed Imagery.” International Journal of Digital Earth 14 (11): 1528–1546,
Number: 11 Place: 2-4 PARK SQUARE, MILTON PARK, ABINGDON OR14 4RN, OXON, ENGLAND
Publisher: TAYLOR & FRANCIS LTD Type: Article. doi:10.1080/17538947.2021.1950853.
Hudait, M., and P. Pravin Patel. 2022. “Crop-Type Mapping and Acreage Estimation in Smallholding
Plots Using Sentinel-2 Images and Machine Learning Algorithms: Some Comparisons.” The
Egyptian Journal of Remote Sensing and Space Science 25 (1): 147–156. doi:10.1016/j.ejrs.2022.
01.004.
Hütt, C., G. Waldhoff, and G. Bareth. 2020. “Fusion of Sentinel-1 with Official Topographic and
Cadastral Geodata for Crop-Type Enriched LULC Mapping Using FOSS and Open Data.” IJGI 9(2):
120. Number: 2 https://www.mdpi.com/2220-9964/9/2/120. Accessed 2021-12-25.
Imanni, H. S. E., A. El Harti, M. Hssaisoune, A. Velastegui-Montoya, A. Elbouzidi, M. Addi, L. El Iysaouy,
and J. El Hachimi. 2022. “Rapid and Automated Approach for Early Crop Mapping Using
Sentinel-1 and Sentinel-2 on Google Earth Engine. A Case of a Highly Heterogeneous and
Fragmented Agricultural Region.” Journal of Imaging 8 (12): 316. doi:10.3390/jimaging8120316.
Ioannidou, M., A. Koukos, V. Sitokonstantinou, I. Papoutsis, and C. Kontoes. 2022. “Assessing the
Added Value of Sentinel-1 PolSar Data for Crop Classification.” Remote Sensing 14 (22): 5739.
doi:10.3390/rs14225739.
James, B., J. Vardanega, and A. J. Robson. 2019. “Land Cover Classification of Nine Perennial Crops
Using Sentinel-1 and -2 Data.” Remote Sensing 12 (1): 96. doi:10.3390/rs12010096.
Jia, J., J. Chen, X. Zheng, Y. Wang, S. Guo, H. Sun, C. Jiang, et al. 2022. “Tradeoffs in the Spatial and
Spectral Resolution of Airborne Hyperspectral Imaging Systems: A Crop Identification Case
Study.“ IEEE Transactions on Geoscience and Remote Sensing. 60: 1–18.
Jiang, D., S. Chen, J. Useya, L. Cao, and L. Tianqi. 2022. “Crop Mapping Using the Historical Crop Data
Layer and Deep Neural Networks: A Case Study in Jilin Province, China.” Sensors 22(15): 5853.
Number: 15 Publisher: Multidisciplinary Digital Publishing Institute https://www.mdpi.com/1424-
8220/22/15/5853. Accessed 2022-10-21.
Kasapoglu, N. G., and K. E. Okan. 2007. “Border Vector Detection and Adaptation for Classification of
Multispectral and Hyperspectral Remote Sensing Images.” IEEE Transactions on Geoscience and
Remote Sensing 45 (12): 3880–3893. doi:10.1109/TGRS.2007.900699.
Khosravi, I., and S. K. Alavipanah. 2019. “A Random Forest-Based Framework for Crop Mapping Using
Temporal, Spectral, Textural and Polarimetric Observations.” In International Journal of Remote
Sensing. Publisher: Taylor & Francis. https://www.tandfonline.com/doi/abs/10.1080/01431161.
2019.1601285.
Kitchenham, B. 2007. Kitchenham, B.: Guidelines for Performing Systematic Literature Reviews in
Software Engineering. EBSE Technical Report EBSE-2007-01.
Kuenzer, C., and K. Knauer. 2013. “Remote Sensing of Rice Crop Areas.” International Journal of
Remote Sensing 34 (6): 2101–2139. doi:10.1080/01431161.2012.738946.
Kumar, S., and P. Jayagopal. 2021. “Delineation of Field Boundary from Multispectral Satellite Images
Through U-Net Segmentation and Template Matching.” Ecological Informatics 64: 101370. doi:10.
1016/j.ecoinf.2021.101370.
Kwak, G.H., and N.W. Park. 2019. “Impact of Texture Information on Crop Classification with Machine
Learning and UAV Images.” Applied Sciences 9(4): 643. Number: 4 Publisher: Multidisciplinary
Digital Publishing Institute https://www.mdpi.com/2076-3417/9/4/643. Accessed 2022-10-21.
Kyere, I., T. Astor, R. Graß, and M. Wachendorf. 2020. “Agricultural Crop Discrimination in
a Heterogeneous Low-Mountain Range Region Based on Multi-Temporal and Multi-Sensor
Satellite Data.” Computers and Electronics in Agriculture 179: 105864. doi:10.1016/j.compag.
2020.105864. Accessed 2021-12-17.
Lee, J. Y., S. Wang, A. Jain Figueroa, R. Strey, D. B. Lobell, R. L. Naylor, and S. M. Gorelick. 2022.
“Mapping Sugarcane in Central India with Smartphone Crowdsourcing.” Remote Sensing 14 (3):
703. doi:10.3390/rs14030703.
Lei, M., Y. Liu, X. Zhang, Y. Yuanxin, G. Yin, and B. Alan Johnson. 2019. “Deep Learning in Remote
Sensing Applications: A Meta-Analysis and Review.” ISPRS Journal of Photogrammetry and Remote
Sensing 152: 166–177. doi:10.1016/j.isprsjprs.2019.04.015. Accessed 2021-12-17.
Liu, M., Y. Tao, G. Xingfa, Z. Sun, J. Yang, Z. Zhang, M. Xiaofei, W. Cao, and L. Juan. 2020. “The Impact
of Spatial Resolution on the Classification of Vegetation Types in Highly Fragmented Planting
Areas Based on Unmanned Aerial Vehicle Hyperspectral Images.” Remote Sensing 12 (1): 146.
doi:10.3390/rs12010146.
Liu, Y., W. Zhao, S. Chen, and T. Ye. 2021. “Mapping Crop Rotation by Using Deeply Synergistic
Optical and SAR Time Series.” Remote Sensing 13(20): 4160. Publisher: mdpi.com1 https://www.
mdpi.com/1316742. Accessed 2022-10-2.
Liu, S., Z. Zhou, H. Ding, Y. Zhong, and Q. Shi. 2021. “Crop Mapping Using Sentinel Full-Year
Dual-Polarized SAR Data and a CPU-Optimized Convolutional Neural Network with Two
Sampling Strategies.“ IEEE Journal of Selected Topics in Applied Earth Observations and Remote
Sensing. 14: 7017–7031.
Luo, C., Q. Beisong, H. Liu, D. Guo, L. Lvping, F. Qiang, and Y. Shao. 2021. “Using Time Series
Sentinel-1 Images for Object-Oriented Crop Classification in Google Earth Engine.” Remote
Sensing 13 (4): 561. doi:10.3390/rs13040561.
Machichi, A., L. E. M. Mouad, Y. Imani, O. Bourja, R. Hadria, O. Lahlou, S. Benmansour, Y. Zennayi, and
F. Bourzeix. 2022. “CerealNet: A Hybrid Deep Learning Architecture for Cereal Crop Mapping
Using Sentinel-2 Time-Series.” Informatics 9 (4): 96. doi:10.3390/informatics9040096.
Mandal, D., V. Kumar, A. Bhattacharya, Y. Subrahmanyeswara Rao, P. Siqueira, and S. Bera. 2018.
“Sen4Rice: A Processing Chain for Differentiating Early and Late Transplanted Rice Using
Time-Series Sentinel-1 SAR Data with Google Earth Engine.” IEEE Geoscience and Remote
Sensing Letters 15 (12): 1947–1951. doi:10.1109/LGRS.2018.2865816.
Mansouri, E., and Loubna 2017. “Multiple Classifier Combination for Crop Types Phenology Based
Mapping.” In 2017 International Conference on Advanced Technologies for Signal and Image
Processing (ATSIP), Fez, Morocco, 05, 1–6. IEEE. Accessed 2021-12-25. http://ieeexplore.ieee.org/
document/8075529/.
Mansouri, E., S. L. Loubna, R. Hadria, N. Eddaif, T. Benabdelouahab, and A. Dakir. 2019. “Time Series
Multispectral Images Processing for Crops and Forest Mapping: Two Moroccan Cases.” Geospatial
Technologies for Effective Land Governance 24.
Mansouri, E., R. H. Loubna, I. Lahmer, O. Moutaib, A. Oujemaa, and A. ElGorch. 2018. “Technologies
Géo-Spatiales pour renforcer les dispositifs de gestion des terres agricoles: Appui à la gestion des
surfaces agrumicoles par télédétection dans la Plaine de Triffa-Berkane (Maroc).” African Journal
on Land Policy and Geospatial Sciences 1 (3): 164–177.
Mario, B., J. M. Lopez-Sanchez, and D. Bargiel. 2020. “Added Value of Coherent Copolar Polarimetry
at X-Band for Crop-Type Mapping.“ IEEE Geoscience and Remote Sensing Letters. 17 (5): 819–823.
Mario, B., J. M. Lopez-Sanchez, A. Mestre-Quereda, E. Navarro, M. P. González-Dugo, and L. Mateos.
2020. “Exploring TanDEM-X Interferometric Products for Crop-Type Mapping.” Remote Sensing 12
(11): 1774. Number: 11 Publisher: Multidisciplinary Digital Publishing Institute https://www.mdpi.
com/2072-4292/12/11/1774. Accessed 2022-02-17.
Martin-Guerreo, J. D., L. Gomez-Chova, J. Calpe-Maravilla, G. Camps-Valls, E. Soria-Olivas, and
J. Moreno. 2003. “A Soft Approach to ERA Algorithm for Hyperspectral Image Classification.”
3rd International Symposium on Image and Signal Processing and Analysis, 2003. ISPA 2003.
Proceedings of the, Rome, Italy, 2, Sep 761–765.
Meng, S., X. Wang, H. Xin, C. Luo, and Y. Zhong. 2021. “Deep Learning-Based Crop Mapping in the
Cloudy Season Using One-Shot Hyperspectral Satellite Imagery.” Computers and Electronics in
Agriculture 186: 106188. doi:10.1016/j.compag.2021.106188. Accessed 2021-12-17.
Mengyao, L., R. Zhang, H. Luo, G. Songwei, and Z. Qin. 2022. “Crop Mapping in the Sanjiang Plain
Using an Improved Object-Oriented Method Based on Google Earth Engine and Combined
Growth Period Attributes.” Remote Sensing 14(2): 273. Number: 2 Publisher: Multidisciplinary
Digital Publishing Institute https://www.mdpi.com/2072-4292/14/2/273. Accessed 2022-07-4.
Metzger, N., M. Ozgur Turkoglu, S. D’Aronco, J. Dirk Wegner, and K. Schindler. 2022. “Crop
Classification Under Varying Cloud Cover with Neural Ordinary Differential Equations.“ IEEE
Transactions on Geoscience and Remote Sensing 60: 1–12.
Miao, L., B. Ying, B. Xue, H. Qiong, M. Zhang, Y. Wei, P. Yang, and W. Wenbin. 2022. “Genetic
Programming for High-Level Feature Learning in Crop Classification.” Remote Sensing 14 (16):
3982. doi:10.3390/rs14163982.
Mishra, S., A. Maria Issac, R. Singh, P. Venkat Raju, and V. Rao Vala. 2021. “Mapping of Intra-Season
Dynamics in the Cropping Pattern Using Remote Sensing for Irrigation Management.” Geocarto
International 37 (17): 4994–5016. doi:10.1080/10106049.2021.1903573.
Moussaid, A., S. El Fkihi, and Y. Zennayi. 2021. “Tree Crowns Segmentation and Classification in
Overlapping Orchards Based on Satellite Images and Unsupervised Learning Algorithms.” Journal
of Imaging 7(11): 241. Number: 11 Publisher: Multidisciplinary Digital Publishing Institute https://
www.mdpi.com/2313-433X/7/11/241. Accessed 2022-06-20.
Murmu, S., and S. Biswas. 2015. “Application of Fuzzy Logic and Neural Network in Crop
Classification: A Review.” Aquatic Procedia 4: 1203–1210. doi:10.1016/j.aqpro.2015.02.153.
Muruganantham, P., S. Wibowo, S. Grandhi, N. Hoque Samrat, and N. Islam. 2022. “A Systematic
Literature Review on Crop Yield Prediction with Deep Learning and Remote Sensing.” Remote
Sensing 14 (9): 1990. https://www.mdpi.com/2072-4292/14/9/1990.
Niazmardi, S., S. Homayouni, A. Safari, H. McNairn, J. Shang, and K. Beckett. 2018. “Histogram-Based
Spatio-Temporal Feature Classification of Vegetation Indices Time-Series for Crop Mapping.”
International Journal of Applied Earth Observation and Geoinformation 72: 34–41. doi:10.1016/j.
jag.2018.05.014. Accessed 2021-12-17.
Nihar, A., N. R. Patel, S. Pokhariyal, and A. Danodia. 2022. “Sugarcane Crop Type Discrimination and
Area Mapping at Field Scale Using Sentinel Images and Machine Learning Methods.” Journal of
the Indian Society of Remote Sensing 50 (2): 217–225. doi:10.1007/s12524-021-01444-0.
Niu, B., Q. Feng, B. Chen, O. Cong, Y. Liu, and J. Yang. 2022. “HSI-TransUNet: A Transformer Based
Semantic Segmentation Model for Crop Mapping from UAV Hyperspectral Imagery.” Computers
and Electronics in Agriculture 201: 107297. doi:10.1016/j.compag.2022.107297.
Orynbaikyzy, A., U. Gessner, and C. Conrad. 2022. “Spatial Transferability of Random Forest Models
for Crop Type Classification Using Sentinel-1 and Sentinel-2.” Remote Sensing 14 (6): 1493. doi:10.
3390/rs14061493.
Paludo, A., W. Ronaldo Becker, J. Richetti, L. Cavalcante De Albuquerque Silva, and J. Adriani Johann.
2020. “Mapping Summer Soybean and Corn with Remote Sensing on Google Earth Engine Cloud
Computing in Parana State – Brazil.” International Journal of Digital Earth 13 (12): 1624–1636.
doi:10.1080/17538947.2020.1772893.
Panjala, P., M. Krishna Gumma, and P. Teluguntla. 2021. “Machine Learning Approaches and
Sentinel-2 Data in Crop Type Mapping.“ In Studies in Big Data. edited by Reddy,Obi, G. P., Raval,
Mehul S., Adinarayana, J., Chaudhary,Sanjay. 161–180. Springer Singapore.
Pan, L., H. Xia, J. Yang, W. Niu, R. Wang, H. Song, Y. Guo, and Y. Qin. 2021. “Mapping Cropping
Intensity in Huaihe Basin Using Phenology Algorithm, All Sentinel-2 and Landsat Images in
Google Earth Engine.” International Journal of Applied Earth Observation and Geoinformation
102: 102376. doi:10.1016/j.jag.2021.102376.
Parra, L., D. Mostaza-Colado, J. F. Marin, P. V. Mauri, and J. Lloret. 2022. “Methodology to Differentiate
Legume Species in Intercropping Agroecosystems Based on UAV with RGB Camera.” Electronics
11 (4): 609. doi:10.3390/electronics11040609.
Pech-May, F., R. Aquino-Santos, G. Rios-Toledo, and J. Pablo Francisco Posadas-Durán. 2022.
“Mapping of Land Cover with Optical Images, Supervised Algorithms, and Google Earth
Engine.” Sensors 22 (13): 4729. doi:10.3390/s22134729.
Pelletier, C., G. Webb, and F. Petitjean. 2019. “Temporal Convolutional Neural Network for the
Classification of Satellite Image Time Series.” Remote Sensing 11(5): 523. Number: 5 Publisher:
Multidisciplinary Digital Publishing Institute https://www.mdpi.com/2072-4292/11/5/523.
Accessed 2021-12-14.
Peña, J. M., A. G. Pedro, C. Hervás-Martínez, J. Six, E. P. Richard, and F. López-Granados. 2014. “Object-
Based Image Classification of Summer Crops with Machine Learning Methods.” Remote Sensing 6
(6): 5019–5041. Number: 6 Publisher: Multidisciplinary Digital Publishing Institute https://www.
mdpi.com/2072-4292/6/6/5019. Accessed 2021-12-25.
Phalke, A., M. Ozdogan, P. S. Thenkabail, R. G. Congalton, K. Yadav, R. Massey, P. Teluguntla,
J. Poehnelt, and C. Smith. 2017. NASA Making Earth System Data Records for Use in Research
Environments (MEaSures) Global Food Security-Support Analysis Data (GFSAD)@ 30-M for Europe.
Middle-East, Russia and Central Asia: Cropland Extent Product (GFSAD30EUCEARUMECE).
Phalke, A. R., M. Özdoğan, P. S. Thenkabail, T. Erickson, N. Gorelick, K. Yadav, and R. G. Congalton.
2020. “Mapping Croplands of Europe, Middle East, Russia, and Central Asia Using Landsat,
Random Forest, and Google Earth Engine.” ISPRS Journal of Photogrammetry and Remote
Sensing 167: 104–122. doi:10.1016/j.isprsjprs.2020.06.022.
Potapov, P., S. Turubanova, M. C. Hansen, A. Tyukavina, V. Zalles, A. Khan, X.P. Song, A. Pickens,
Q. Shen, and J. Cortez. 2021. “Global Maps of Cropland Extent and Change Show Accelerated
Cropland Expansion in the Twenty-First Century.” Nature Food 3 (1): 19–28. doi:10.1038/s43016-
021-00429-z.
Prins, A. J., and A. Van Niekerk. 2021. “Crop Type Mapping Using LiDar, Sentinel-2 and Aerial Imagery
with Machine Learning Algorithms.” In Geo-Spatial Information Science. Publisher: Taylor &
Francis. https://www.tandfonline.com/doi/abs/10.1080/10095020.2020.1782776.
Qadeer, M. U., S. Saeed, M. Taj, and A. Muhammad. 2021. “Spatio-Temporal Crop Classification on
Volumetric Data.” In 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK,
USA, Sep 3812–3816.
Qiangyi, Y., L. You, U. Wood-Sichra, R. Yating, A. K. Joglekar, S. Fritz, W. Xiong, L. Miao, W. Wenbin,
and P. Yang. 2020. “A Cultivated Planet in 2010–Part 2: The Global Gridded Agricultural-
Production Maps.” Earth System Science Data 12 (4): 3545–3572. doi:10.5194/essd-12-3545-2020.
Qinghua, X., Q. Dou, X. Peng, J. Wang, J. M. Lopez-Sanchez, J. Shang, F. Haiqiang, and J. Zhu. 2022.
“Crop Classification Based on the Physically Constrained General Model-Based Decomposition
Using Multi-Temporal RADARSAT-2 Data.” Remote Sensing 14(11): 2668. Number: 11 Publisher:
Multidisciplinary Digital Publishing Institute https://www.mdpi.com/2072-4292/14/11/2668.
Accessed 2022-07-4.
Qiu, B., D. Lin, C. Chen, P. Yang, Z. Tang, Z. Jin, Y. Zhiyan, et al. 2022. “From Cropland to Cropped
Field: A Robust Algorithm for National-Scale Mapping by Fusing Time Series of Sentinel-1 and
Sentinel-2.” International Journal of Applied Earth Observation and Geoinformation 113: 103006.
doi:10.1016/j.jag.2022.103006.
Rao, P., W. Zhou, N. Bhattarai, A. K. Srivastava, B. Singh, S. Poonia, D. B. Lobell, and M. Jain. 2021.
“Using Sentinel-1, Sentinel-2, and Planet Imagery to Map Crop Type of Smallholder Farms.”
Remote Sensing 13(10): 1870. Number: 10 Publisher: Multidisciplinary Digital Publishing
Institute https://www.mdpi.com/2072-4292/13/10/1870. Accessed 2022-07-4.
Rehman, T. U., M. Alam, N. Minallah, W. Khan, J. Frnda, S. Mushtaq, M. Ajmal, and M. Kumar. 2023.
“Long Short Term Memory Deep Net Performance on Fused Planet-Scope and Sentinel-2 Imagery
for Detection of Agricultural Crop.” PloS One 18 (2, February 2): e0271897. doi:10.1371/journal.
pone.0271897.
Reji, J., R. Rao Nidamanuri, and A. M. Ramiya. 2021. “Object-Level Classification of Vegetable Crops in
3D LiDar Point Cloud Using Deep Learning Convolutional Neural Networks.” Precision Agric 22(5):
1617–1633. Number: 5. Accessed 2021-12-23. 10.1007/s11119-021-09803-0
Ren, T., X. Hongtao, X. Cai, Y. Shengnan, and Q. Jiaguo. 2022. “Smallholder Crop Type Mapping and
Rotation Monitoring in Mountainous Areas with Sentinel-1/2 Imagery.” Remote Sensing 14 (3):
566. doi:10.3390/rs14030566.
Reshma, S., S. Veni, and J. Elsa George. 2017. “Hyperspectral Crop Classification Using Fusion of
Spectral, Spatial Features and Vegetation Indices: Approach to the Big Data Challenge.” In 2017
International Conference on Advances in Computing, Communications and Informatics (ICACCI),
Udupi, India, Sep 380–386.
Reuß, F., I. Greimeister-Pfeil, M. Vreugdenhil, and W. Wagner. 2021. “Comparison of Long Short-Term
Memory Networks and Random Forest for Sentinel-1 Time Series Based Large Scale Crop
Classification.” Remote Sensing 13(24): 5000. Number: 24 Publisher: Multidisciplinary Digital
Publishing Institute https://www.mdpi.com/2072-4292/13/24/5000. Accessed 2022-10-21.
Rikkerink, E. H. A., N. C. Oraguzie, and S. E. Gardiner. 2007. “Prospects of Association Mapping in
Perennial Horticultural Crops.“ In Association Mapping in Plants. edited by Oraguzie, N. C.,
Rikkerink, E. H. A., Gardiner, S. E., De Silva, H. N. 249–269. New York, US: Springer New York.
Rußwurm, M., and M. Körner. 2017. “Temporal Vegetation Modelling Using Long Short-Term
Memory Networks for Crop Identification from Medium-Resolution Multi-Spectral Satellite
Images.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops
(CVPRW), Honolulu, HI, USA, IEEE.
Rudiyanto, M., S. Shah, Arif, Setiawan. 2019. “Automated Near-Real-Time Mapping and Monitoring of
Rice Extent, Cropping Patterns, and Growth Stages in Southeast Asia Using Sentinel-1 Time Series
on a Google Earth Engine Platform.” Remote Sensing 11 (14): 1666. Accessed 2021-12-14. https://
www.mdpi.com/2072-4292/11/14/1666.
Rui, L., N. Wang, Y. Zhang, Y. Lin, W. Wenqiang, and Z. Shi. 2022. “Extraction of Agricultural Fields via
DASFNet with Dual Attention Mechanism and Multi-Scale Feature Fusion in South Xinjiang,
China.” Remote Sensing 14 (9): 2253. doi:10.3390/rs14092253.
Rußwurm, M., N. Courty, R. Emonet, S. Lefèvre, D. Tuia, and R. Tavenard. 2023. “End-To-End Learned
Early Classification of Time Series for In-Season Crop Type Mapping.” ISPRS Journal of
Photogrammetry and Remote Sensing 196: 445–456. doi:10.1016/j.isprsjprs.2022.12.016.
Sabir, A., and A. Kumar. 2022. “Optimized 1D-CNN Model for Medicinal Psyllium Husk Crop Mapping
with Temporal Optical Satellite Data.” Ecological Informatics 71: 101772. doi:10.1016/j.ecoinf.2022.
101772.
Saini, R., and S. Kumar Ghosh. 2021. “Crop Classification in a Heterogeneous Agricultural
Environment Using Ensemble Classifiers and Single-Date Sentinel-2A Imagery.” Geocarto
International 36 (19): 2141–2159, Number: 19 Place: 2-4 PARK SQUARE, MILTON PARK,
ABINGDON OR14 4RN, OXON, ENGLAND Publisher: TAYLOR & FRANCIS LTD Type: Article.
doi:10.1080/10106049.2019.1700556.
Saleem, M. H., J. Potgieter, and K. Mahmood Arif. 2021. “Automation in Agriculture by Machine and
Deep Learning Techniques: A Review of Recent Developments.” Precision Agriculture 22 (6):
2053–2091, Number: 6 Place: VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS Publisher: SPRINGER Type: Review. doi:10.1007/s11119-021-09806-x.
Salinero-Delgado, M., J. Estévez, L. Pipia, S. Belda, K. Berger, V. Paredes Gómez, and J. Verrelst. 2021.
“Monitoring Cropland Phenology on Google Earth Engine Using Gaussian Process Regression.”
Remote Sensing 14 (1): 146. doi:10.3390/rs14010146.
Samasse, K., N. Hanan, G. Tappan, and Y. Diallo. 2018. “Assessing Cropland Area in West Africa for
Agricultural Yield Analysis.” Remote Sensing 10 (11): 1785. doi:10.3390/rs10111785.
Seydi, S. T., M. Amani, and A. Ghorbanian. 2022. “A Dual Attention Convolutional Neural Network for
Crop Classification Using Time-Series Sentinel-2 Imagery.” Remote Sensing 14(3): 498. Number: 3
Publisher: Multidisciplinary Digital Publishing Institute https://www.mdpi.com/2072-4292/14/3/
498. Accessed 2022-10-21.
Shakya, A., M. Biswas, and M. Pal. 2021. “Parametric Study of Convolutional Neural Network Based
Remote Sensing Image Classification.” International Journal of Remote Sensing 42 (7): 2663–2685,
Number: 7 Publisher: Taylor & Francis _eprint. doi:10.1080/01431161.2020.1857877.
Shannon, K. L., B. F. Kim, S. E. McKenzie, and R. S. Lawrence. 2015. “Food System Policy, Public Health,
and Human Rights in the United States.” Annual Review of Public Health 36 (1): 151–173. doi:10.
1146/annurev-publhealth-031914-122621.
Shan, H., P. Peng, Y. Chen, and X. Wang. 2022. “Multi-Crop Classification Using Feature
Selection-Coupled Machine Learning Classifiers Based on Spectral, Textural and Environmental
Features.” Remote Sensing 14(13): 3153. Number: 13 Publisher: Multidisciplinary Digital Publishing
Institute https://www.mdpi.com/2072-4292/14/13/3153. Accessed 2022-10-21.
Shao, Y., R. S. Lunetta, J. Ediriwickrema, and J. Iiames. 2010. “Mapping Cropland and Major Crop
Types Across the Great Lakes Basin Using MODIS-NDVI Data.” Photogrammetric Engineering &
Remote Sensing 76 (1): 73–84. doi:10.14358/PERS.76.1.73.
Sherrie, W., S. Di Tommaso, J. Faulkner, T. Friedel, A. Kennepohl, R. Strey, and D. B. Lobell. 2020.
“Mapping Crop Types in Southeast India with Smartphone Crowdsourcing and Deep Learning.”
Remote Sensing 12(18): 2957. Number: 18 Publisher: Multidisciplinary Digital Publishing Institute.
Accessed 2022-02-17. 10.3390/rs12182957
Siesto, G., M. Fernández-Sellers, and A. Lozano-Tello. 2021. “Crop Classification of Satellite Imagery
Using Synthetic Multitemporal and Multispectral Images in Convolutional Neural Networks.”
Remote Sensing 13 (17): Number: 17 Type: Article. doi:10.3390/rs13173378.
Singh, G., S. Singh, G. Sethi, and V. Sood. 2022. “Deep Learning in the Mapping of Agricultural Land
Use Using Sentinel-2 Satellite Data.” Geographies 2 (4): 691–700. doi:10.3390/
geographies2040042.
Singh, P., P. K. Srivastava, D. Shah, M. K. Pandey, A. Anand, R. Prasad, R. Dave, J. Verrelst,
B. K. Bhattacharya, and A. S. Raghubanshi. 2022. “Crop Type Discrimination Using Geo-Stat
Endmember Extraction and Machine Learning Algorithms.” Advances in Space Research. doi:10.
1016/j.asr.2022.08.031.
Sonobe, R., Y. Yamaya, H. Tani, X. Wang, N. Kobayashi, and K.I. Mochizuki. 2017a. “Assessing the Suitability of Data from Sentinel-1A and 2A for Crop Classification.” GIScience & Remote Sensing 54 (6): 918–938. doi:10.1080/15481603.2017.1351149.
Sonobe, R., Y. Yamaya, H. Tani, X. Wang, N. Kobayashi, and K.I. Mochizuki. 2017b. “Mapping Crop Cover Using Multi-Temporal Landsat 8 OLI Imagery.” International Journal of Remote Sensing 38 (15): 4348–4361. doi:10.1080/01431161.2017.1323286.
Sood, M., A. Kumar, and C. Persello. 2021. “Deep Learning Model for Time-Series Images to
Discriminate Potato Crop in Punjab: Case Study of Monitoring Crop Harvesting.” Khoj: An
International Peer Reviewed Journal of Geography 8 (1): 31–46. doi:10.5958/2455-6963.2021.
00004.7.
Suits, G. H. 1972. “The Calculation of the Directional Reflectance of a Vegetative Canopy.” Remote
Sensing of Environment 2: 117–125. doi:10.1016/0034-4257(71)90085-X.
Sun, C., Y. Bian, T. Zhou, and J. Pan. 2019. “Using of Multi-Source and Multi-Temporal Remote Sensing Data Improves Crop-Type Mapping in the Subtropical Agriculture Region.” Sensors 19 (10). doi:10.3390/s19102401.
Sun, J., L. Geng, and Y. Wang. 2022. “A Hybrid Model Based on Superpixel Entropy Discrimination for PolSar Image Classification.” Remote Sensing 14 (16): 4116. https://www.mdpi.com/2072-4292/14/16/4116.
Sykas, D., M. Sdraka, D. Zografakis, and I. Papoutsis. 2022. “A Sentinel-2 Multiyear, Multicountry Benchmark Dataset for Crop Classification and Segmentation with Deep Learning.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 15: 3323–3339.
Tang, J., X. Zhang, Z. Chen, and Y. Bai. 2022. “Crop Identification and Analysis in Typical Cultivated Areas of Inner Mongolia with Single-Phase Sentinel-2 Images.” Sustainability 14 (19): 12789. https://www.mdpi.com/2071-1050/14/19/12789.
Teloglu, H. K., and E. Aptoula. 2022. “A Morphological-Long Short Term Memory Network Applied to
Crop Classification.” In IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing
Symposium, Kuala Lumpur, Malaysia, IEEE.
Teluguntla, P., P. S. Thenkabail, A. Oliphant, J. Xiong, M. Krishna Gumma, R. G. Congalton, K. Yadav,
and A. Huete. 2018. “A 30-M Landsat-Derived Cropland Extent Product of Australia and China
Using Random Forest Machine Learning Algorithm on Google Earth Engine Cloud Computing
Platform.” ISPRS Journal of Photogrammetry and Remote Sensing 144: 325–340. doi:10.1016/j.
isprsjprs.2018.07.017.
Tenreiro, T. R. 2020. “Mapping cover crop dynamics in Mediterranean perennial cropping systems
through remote sensing and machine learning methods.” Master’s thesis, Spanish Council for
Scientific Research.
Thieme, A., S. Yadav, P. C. Oddo, J. M. Fitz, S. McCartney, L. King, J. Keppler, G. W. McCarty, and
W. Dean Hively. 2020. “Using NASA Earth Observations and Google Earth Engine to Map Winter
Cover Crop Conservation Performance in the Chesapeake Bay Watershed.” Remote Sensing of
Environment 248: 111943. doi:10.1016/j.rse.2020.111943.
Tian, F., W. Bingfang, H. Zeng, X. Zhang, and X. Jiaming. 2019. “Efficient Identification of Corn
Cultivation Area with Multitemporal Synthetic Aperture Radar and Optical Images in the Google
Earth Engine Cloud Platform.” Remote Sensing 11 (6): 629. doi:10.3390/rs11060629.
Tian, H., J. Pei, J. Huang, L. Xuecao, J. Wang, B. Zhou, Y. Qin, and L. Wang. 2020. “Garlic and Winter
Wheat Identification Based on Active and Passive Satellite Imagery and the Google Earth Engine
in Northern China.” Remote Sensing 12 (21): 3539. doi:10.3390/rs12213539.
Tian, S., L. Qikai, and L. Wei. 2022. “Multiscale Superpixel-Based Fine Classification of Crops in the UAV-Based Hyperspectral Imagery.” Remote Sensing 14 (14): 3292. https://www.mdpi.com/2072-4292/14/14/3292.
Tingyu, L., L. Wan, and L. Wang. 2022. “Fine Crop Classification in High Resolution Remote Sensing
Based on Deep Learning.” Frontiers in Environmental Science 10: 10. doi:10.3389/fenvs.2022.
991173.
Tiwari, V., M. A. Matin, F. M. Qamer, W. Lee Ellenburg, B. Bajracharya, K. Vadrevu, B. Rabeya Rushi, and
W. Yusafi. 2020. “Wheat Area Mapping in Afghanistan Based on Optical and SAR Time-Series
Images in Google Earth Engine Cloud Environment.” Frontiers in Environmental Science 8: 77.
doi:10.3389/fenvs.2020.00077.
Tiziano, G., D. Pimentel, and M. G. Paoletti. 2011. “Environmental Impact of Different Agricultural
Management Practices: Conventional Vs. Organic Agriculture.” Critical Reviews in Plant Sciences
30 (1–2): 95–124. doi:10.1080/07352689.2011.554355.
Tufail, R., A. Ahmad, M. Asif Javed, and S. Rashid Ahmad. 2021. “A Machine Learning Approach for Accurate Crop Type Mapping Using Combined SAR and Optical Time Series Data.” Advances in Space Research 69 (1): 331–346. https://www.sciencedirect.com/science/article/pii/S0273117721007262.
Turkoglu, M. O., S. D’Aronco, G. Perich, F. Liebisch, C. Streit, K. Schindler, and J. Dirk Wegner. 2021. “Crop Mapping from Image Time Series: Deep Learning with Multi-Scale Label Hierarchies.” Remote Sensing of Environment 264.
van Klompenburg, T., A. Kassahun, and C. Catal. 2020. “Crop Yield Prediction Using Machine Learning: A Systematic Literature Review.” Computers and Electronics in Agriculture 177: 105709. doi:10.1016/j.compag.2020.105709.
Venter, Z. S., D. N. Barton, T. Chakraborty, T. Simensen, and G. Singh. 2022. “Global 10 M Land Use
Land Cover Datasets: A Comparison of Dynamic World, World Cover and Esri Land Cover.” Remote
Sensing 14 (16): 4101. doi:10.3390/rs14164101.
Venturieri, A., R. R. S. de Oliveira, T. Koiti Igawa, K. De Avila Fernandes, M. J. Marcos Adami,
C. Aparecido Almeida, C. A. Almeida, et al. 2022. “The Sustainable Expansion of the Cocoa Crop
in the State of Pará and Its Contribution to Altered Areas Recovery and Fire Reduction.” Journal of
Geographic Information System 14 (03): 294–313. doi:10.4236/jgis.2022.143016.
Wang, Y., Z. Zhang, L. Feng, M. Yuchi, and D. Qingyun. 2021. “A New Attention-Based CNN Approach for Crop Mapping Using Time Series Sentinel-2 Images.” Computers and Electronics in Agriculture 184: 106090. doi:10.1016/j.compag.2021.106090.
Wang, Z., H. Zhang, H. Wei, and L. Zhang. 2022. “Cross-Phenological-Region Crop Mapping
Framework Using Sentinel-2 Time Series Imagery: A New Perspective for Winter Crops in
China.” ISPRS Journal of Photogrammetry and Remote Sensing 193: 200–215. doi:10.1016/j.
isprsjprs.2022.09.010.
Wang, X., J. Zhang, L. Xun, J. Wang, W. Zhenjiang, M. Henchiri, S. Zhang, et al. 2022. “Evaluating the Effectiveness of Machine Learning and Deep Learning Models Combined Time-Series Satellite Data for Multiple Crop Types Classification Over a Large-Scale Region.” Remote Sensing 14 (10): 2341. https://www.mdpi.com/2072-4292/14/10/2341.
Wardlow, B. D., and S. L. Egbert. 2008. “Large-Area Crop Mapping Using Time-Series MODIS 250 M NDVI Data: An Assessment for the U.S. Central Great Plains.” Remote Sensing of Environment 112 (3): 1096–1116. https://www.sciencedirect.com/science/article/pii/S0034425707003458.
Wei, P., D. Chai, R. Huang, D. Peng, T. Lin, J. Sha, W. Sun, and J. Huang. 2022. “Rice Mapping Based on
Sentinel-1 Images Using the Coupling of Prior Knowledge and Deep Semantic Segmentation
Network: A Case Study in Northeast China from 2019 to 2021.” International Journal of Applied
Earth Observation and Geoinformation 112: 102948. doi:10.1016/j.jag.2022.102948.
Weikmann, G., C. Paris, and L. Bruzzone. 2021. “TimeSen2crop: A Million Labeled Samples Dataset of Sentinel 2 Image Time Series for Crop-Type Classification.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14: 4699–4708.
Weilandt, F., R. Behling, R. Goncalves, A. Madadi, L. Richter, T. Sanona, D. Spengler, and J. Welsch.
2023. “Early Crop Classification via Multi-Modal Satellite Data Fusion and Temporal Attention.”
Remote Sensing 15 (3): 799. doi:10.3390/rs15030799.
Weiss, M., F. Jacob, and G. Duveiller. 2020. “Remote Sensing for Agricultural Applications: A Meta-Review.” Remote Sensing of Environment 236: 111402. doi:10.1016/j.rse.2019.111402.
Wei, M., H. Wang, Y. Zhang, L. Qiangzi, D. Xin, G. Shi, and Y. Ren. 2023. “Investigating the Potential of
Crop Discrimination in Early Growing Stage of Change Analysis in Remote Sensing Crop Profiles.”
Remote Sensing 15 (3): 853. doi:10.3390/rs15030853.
Wei, X., G. Xingfa, Y. Tao, Z. Wei, X. Zhou, K. Jia, L. Juan, and M. Liu. 2018. “Land-Cover Classification Using Multi-Temporal GF-1 Wide Field View Data.” International Journal of Remote Sensing 39 (20): 6914–6930. doi:10.1080/01431161.2018.1468106.
Wenbo, X., W. Bingfang, Y. Tian, J. Huang, and Y. Zhang. 2004. “Synergy of Multitemporal Radarsat SAR and Landsat ETM Data for Extracting Agricultural Crops Structure.” In IGARSS 2004: 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, Vol. 6, 4073–4076.
Xia, J., N. Yokoya, B. Adriano, and K. Kanemoto. 2023. “National High-Resolution Cropland
Classification of Japan with Agricultural Census Information and Multi-Temporal Multi-Modality
Datasets.” International Journal of Applied Earth Observation and Geoinformation 117: 103193.
doi:10.1016/j.jag.2023.103193.
Xia, T., H. Zhen, Z. Cai, C. Wang, W. Wang, J. Wang, H. Qiong, and Q. Song. 2022. “Exploring the
Potential of Chinese GF-6 Images for Crop Mapping in Regions with Complex Agricultural
Landscapes.” International Journal of Applied Earth Observation and Geoinformation 107:
102702. doi:10.1016/j.jag.2022.102702.
Xie, Y., T. J. Lark, J. F. Brown, and H. K. Gibbs. 2019. “Mapping Irrigated Cropland Extent Across the
Conterminous United States at 30 M Resolution Using a Semi-Automatic Training Approach on
Google Earth Engine.” ISPRS Journal of Photogrammetry and Remote Sensing 155: 136–149. doi:10.
1016/j.isprsjprs.2019.07.005.
Xie, G., and S. Niculescu. 2022. “Mapping Crop Types Using Sentinel-2 Data Machine Learning and
Monitoring Crop Phenology with Sentinel-1 Backscatter Time Series in Pays de Brest, Brittany,
France.” Remote Sensing 14 (18): 4437. doi:10.3390/rs14184437.
Xiong, J., P. S. Thenkabail, M. K. Gumma, P. Teluguntla, J. Poehnelt, R. G. Congalton, K. Yadav, and
D. Thau. 2017. “Automated Cropland Mapping of Continental Africa Using Google Earth Engine
Cloud Computing.” ISPRS Journal of Photogrammetry and Remote Sensing 126: 225–244. doi:10.
1016/j.isprsjprs.2017.01.019.
Xuan, F., Y. Dong, L. Jiayu, L. Xuecao, S. Wei, X. Huang, J. Huang, et al. 2023. “Mapping Crop Type in Northeast China During 2013–2021 Using Automatic Sampling and Tile-Based Image Classification.” International Journal of Applied Earth Observation and Geoinformation 117: 103178. doi:10.1016/j.jag.2022.103178.
Xue, H., X. Xingang, Q. Zhu, G. Yang, H. Long, L. Heli, X. Yang, et al. 2023. “Object-Oriented Crop
Classification Using Time Series Sentinel Images from Google Earth Engine.” Remote Sensing
15 (5): 1353. doi:10.3390/rs15051353.
Yadav, C. S., M. Kumar Pradhan, S. Machinathu Parambil Gangadharan, J. Kumar Chaudhary, J. Singh, A. Ahmad Khan, M. Anul Haq, et al. 2022. “Multi-Class Pixel Certainty Active Learning Model for Classification of Land Cover Classes Using Hyperspectral Imagery.” Electronics 11 (17): 2799. https://www.mdpi.com/2079-9292/11/17/2799.
Yang, C. 2020. “Remote Sensing and Precision Agriculture Technologies for Crop Disease Detection
and Management with a Practical Application Example.” Engineering 6 (5): 528–532. doi:10.1016/j.
eng.2019.10.015.
Yang, G., Y. Weiguo, X. Yao, H. Zheng, Q. Cao, Y. Zhu, W. Cao, and T. Cheng. 2021. “AGTOC: A Novel
Approach to Winter Wheat Mapping by Automatic Generation of Training Samples and One-Class
Classification on Google Earth Engine.” International Journal of Applied Earth Observation and
Geoinformation 102: 102446. doi:10.1016/j.jag.2021.102446.
Yan, J., J. Liu, L. Wang, D. Liang, Q. Cao, W. Zhang, and J. Peng. 2022. “Land-Cover Classification with Time-Series Remote Sensing Images by Complete Extraction of Multiscale Timing Dependence.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 15: 1953–1967.
Yan, Y., and Y. Ryu. 2019. “Google Street View and Deep Learning: A New Ground Truthing Approach for Crop Mapping.” arXiv preprint arXiv:1912.05024. https://arxiv.org/abs/1912.05024.
Yan, S., X. Yao, D. Zhu, D. Liu, L. Zhang, Y. Guojiang, B. Gao, J. Yang, and W. Yun. 2021. “Large-Scale Crop Mapping from Multi-Source Optical Satellite Imageries Using Machine Learning with Discrete Grids.” International Journal of Applied Earth Observation and Geoinformation 103: 102485. doi:10.1016/j.jag.2021.102485.
Yao, J., J. Wu, C. Xiao, Z. Zhang, and L. Jianzhong. 2022. “The Classification Method Study of Crops Remote Sensing with Deep Learning, Machine Learning, and Google Earth Engine.” Remote Sensing 14 (12): 2758. https://www.mdpi.com/2072-4292/14/12/2758.
You, N., and J. Dong. 2020. “Examining Earliest Identifiable Timing of Crops Using All Available
Sentinel 1/2 Imagery and Google Earth Engine.” ISPRS Journal of Photogrammetry and Remote
Sensing 161: 109–123. doi:10.1016/j.isprsjprs.2020.01.001.
You, L., and Z. Sun. 2022. “Mapping Global Cropping System: Challenges, Opportunities, and Future
Perspectives.” Crop and Environment 1 (1): 68–73. doi:10.1016/j.crope.2022.03.006.
Yuan, Y., and L. Lin. 2021. “Self-Supervised Pretraining of Transformers for Satellite Image Time Series Classification.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14: 474–487.
Yueran, H., H. Zeng, F. Tian, M. Zhang, W. Bingfang, S. Gilliams, L. Sen, L. Yuanchao, L. Yuming, and H. Yang. 2022. “An Interannual Transfer Learning Approach for Crop Classification in the Hetao Irrigation District, China.” Remote Sensing 14 (5): 1208. https://www.mdpi.com/2072-4292/14/5/1208.
Zafari, A., R. Zurita-Milla, and E. Izquierdo-Verdiguier. 2019. “Evaluating the Performance of a Random Forest Kernel for Land Cover Classification.” Remote Sensing 11 (5): 575. https://www.mdpi.com/2072-4292/11/5/575.
Zhang, S., X. Dai, L. Jingzhong, X. Gao, F. Zhang, F. Gong, L. Heng, et al. 2022. “Crop Classification for
UAV Visible Imagery Using Deep Semantic Segmentation Methods.” Geocarto International
37 (25): 10033–10057. doi:10.1080/10106049.2022.2032387.
Zhang, C., D. Liping, L. Lin, L. Hui, L. Guo, Z. Yang, E. G. Yu, D. Yahui, and A. Yang. 2022. “Towards
Automation of In-Season Crop Type Mapping Using Spatiotemporal Crop Information and
Remote Sensing Data.” Agricultural Systems 201: 103462. doi:10.1016/j.agsy.2022.103462.
Zhang, P., H. Shougeng, L. Weidong, and C. Zhang. 2020. “Parcel-Level Mapping of Crops in a Smallholder Agricultural Area: A Case of Central China Using Single-Temporal VHSR Imagery.” Computers and Electronics in Agriculture 175: 105581. doi:10.1016/j.compag.2020.105581.
Zhang, P., H. Shougeng, L. Weidong, C. Zhang, and P. Cheng. 2021. “Improving Parcel-Level Mapping of Smallholder Crops from VHSR Imagery: An Ensemble Machine-Learning-Based Framework.” Remote Sensing 13 (11): 2146. doi:10.3390/rs13112146.
Zhang, J., C. Yang, H. Song, W. Clint Hoffmann, D. Zhang, and G. Zhang. 2016. “Evaluation of an Airborne Remote Sensing Platform Consisting of Two Consumer-Grade Cameras for Crop Identification.” Remote Sensing 8 (3): 257. https://www.mdpi.com/2072-4292/8/3/257.
Zhang, H., H. Yuan, D. Weibing, and X. Lyu. 2022. “Crop Identification Based on Multi-Temporal Active and Passive Remote Sensing Images.” ISPRS International Journal of Geo-Information 11 (7): 388. https://www.mdpi.com/2220-9964/11/7/388.
Zhang, X., Z. Zheng, P. Xiao, L. Zhenshi, and H. Guangjun. 2022. “Patch-Based Training of Fully Convolutional Network for Hyperspectral Image Classification with Sparse Point Labels.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing: 1–13.
Zhao, H., S. Duan, J. Liu, L. Sun, and L. Reymondin. 2021. “Evaluation of Five Deep Learning Models for Crop Type Mapping Using Sentinel-2 Time Series Images with Missing Information.” Remote Sensing 13 (14): 2790. https://www.mdpi.com/2072-4292/13/14/2790.
Zhao, H., Z. Yang, D. Liping, and Z. Pei. 2012. “Evaluation of Temporal Resolution Effect in Remote Sensing Based Crop Phenology Detection Studies.” In Computer and Computing Technologies in Agriculture V, edited by D. Li and Y. Chen, 135–150. Berlin Heidelberg: Springer.
Zhao, J., Y. Zhong, H. Xin, L. Wei, and L. Zhang. 2020. “A Robust Spectral-Spatial Approach to Identifying Heterogeneous Crops Using Remote Sensing Imagery with High Spectral and Spatial Resolutions.” Remote Sensing of Environment 239: 111605. doi:10.1016/j.rse.2019.111605.
Zheng, Y., A. C. D. S. Luciano, J. Dong, and W. Yuan. 2022. “High-Resolution Map of Sugarcane
Cultivation in Brazil Using a Phenology-Based Method.” Earth System Science Data 14 (4):
2065–2080. doi:10.5194/essd-14-2065-2022.
Zhenong, J., G. Azzari, C. You, S. Di Tommaso, S. Aston, M. Burke, and D. B. Lobell. 2019. “Smallholder
Maize Area and Yield Mapping at National Scales with Google Earth Engine.” Remote Sensing of
Environment 228: 115–128. doi:10.1016/j.rse.2019.04.016.
Zhou, Y., J. Luo, L. Feng, and X. Zhou. 2019. “DCN-Based Spatial Features for Improving Parcel-Based Crop Classification Using High-Resolution Optical Images and Multi-Temporal SAR Data.” Remote Sensing 11 (13): 1619. doi:10.3390/rs11131619.
Zhou, X., J. Wang, H. Yongjun, and B. Shan. 2022. “Crop Classification and Representative Crop Rotation Identifying Using Statistical Features of Time-Series Sentinel-1 GRD Data.” Remote Sensing 14 (20): 5116. https://www.mdpi.com/2072-4292/14/20/5116.
Zitian, G., D. Guo, D. Ryu, and A. W. Western. 2022. “Enhancing the Accuracy and Temporal
Transferability of Irrigated Cropping Field Classification Using Optical Remote Sensing
Imagery.” Remote Sensing 14 (4): 997. doi:10.3390/rs14040997.
