Since 2013, the IEEE Geoscience and Remote Sensing Society (GRSS) has published IEEE Geoscience and Remote Sensing Magazine. The magazine provides a venue for high-quality technical articles that, by their very nature, do not find a home in journals requiring scientific innovation but that provide relevant information to scientists, engineers, end users, and students who interact in different ways with the geoscience and remote sensing disciplines. The magazine therefore publishes tutorial and review papers, as well as technical papers on geoscience and remote sensing topics, although the last category (technical papers) is accepted only in connection with a Special Issue.
The magazine is published with an appealing layout in a digital format, and its articles are also published in electronic format in the IEEE Xplore online archive. The digital format differs from the electronic format used by the IEEE Xplore website: it allows readers to navigate and explore the technical content of the magazine with a look and feel similar to that of a printed magazine, whereas the electronic version provided on the IEEE Xplore website allows readers to access individual articles as separate PDF files. Both the digital and the electronic versions of the magazine are freely available to all GRSS members.
This call for papers encourages all potential authors to prepare and submit tutorials and technical papers for review and publication in IEEE Geoscience and Remote Sensing Magazine. Contributions on any of the above-mentioned topics of the magazine (including columns) are also welcome.
Authors interested in proposing special issues as guest editors are encouraged to contact the editor-in-chief directly for information about the proposal submission template, as well as the proposal evaluation and special issue management procedures.
All tutorial, review, and technical papers will undergo blind review by multiple reviewers. The submission and review process is managed at IEEE Manuscript Central, as is already done for the three GRSS journals. Prospective authors are required to submit electronically using the website http://mc.manuscriptcentral.com/grs by selecting the "Geoscience and Remote Sensing Magazine" option from the drop-down list. Instructions for creating new user accounts, if necessary, are available on the login screen. No other form of submission is accepted. Papers should be submitted in single-column, double-spaced format.
FEATURES

Taking Artificial Intelligence Into Space Through Objective Selection of Hyperspectral Earth Observation Applications
by Agata M. Wijata, Michel-François Foulon, Yves Bobichon, Raffaele Vitulli, Marco Celesti, Roberto Camarero, Gianluigi Di Cosimo, Ferran Gascon, Nicolas Longépé, Jens Nieke, Michal Gumiela, and Jakub Nalepa

40 Onboard Information Fusion for Multisatellite Collaborative Observation
by Gui Gao, Libo Yao, Wenfeng Li, Linlin Zhang, and Maolin Zhang

60 AI Security for Geoscience and Remote Sensing
by Yonghao Xu, Tao Bai, Weikang Yu, Shizhen Chang, Peter M. Atkinson, and Pedram Ghamisi

ON THE COVER: Schematic diagram of federated learning (see p. 69).
©SHUTTERSTOCK.COM/VECTORFUSIONART
SCOPE
IEEE Geoscience and Remote Sensing Magazine (GRSM) will inform readers of activities in the IEEE Geoscience and Remote Sensing Society, its technical committees, and chapters. GRSM will also inform and educate readers via technical papers; provide information on international remote sensing activities and new satellite missions; and publish contributions on education activities, industrial and university profiles, conference news, book reviews, and a calendar of important events.
IEEE Geoscience and Remote Sensing Magazine (ISSN 2473-2397) is published quarterly by The Institute of Electrical and Electronics Engineers, Inc., IEEE Headquarters: 3 Park Ave., 17th Floor, New York, NY 10016-5997, +1 212 419 7900. Responsibility for the contents rests upon the authors and not upon the IEEE, the Society, or its members. IEEE Service Center (for orders, subscriptions, address changes): 445 Hoes Lane, Piscataway, NJ 08854, +1 732 981 0060. Individual copies: IEEE members US$20.00 (first copy only), nonmembers US$110.00 per copy. Subscription rates: included in Society fee for each member of the IEEE Geoscience and Remote Sensing Society. Nonmember subscription prices available on request. Copyright and Reprint Permissions: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of U.S. Copyright Law for private use of patrons: 1) those post-1977 articles that carry a code at the bottom of the first page, provided the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923 USA; 2) pre-1978 articles without fee. For all other copying, reprint, or republication information, write to: Copyrights and Permission Department, IEEE Publishing Services, 445 Hoes Lane, Piscataway, NJ 08854 USA. Copyright © 2023 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Application to Mail at Periodicals Postage Prices is pending at New York, New York, and at additional mailing offices. Canadian GST #125634188. Canada Post Corporation (Canadian distribution) publications mail agreement number 40013885. Return undeliverable Canadian addresses to PO Box 122, Niagara Falls, ON L2E 6S8 Canada. Printed in USA.

IEEE prohibits discrimination, harassment, and bullying. For more information, visit http://www.ieee.org/web/aboutus/whatis/policies/p9-26.html.
FROM THE EDITOR
BY PAOLO GAMBA
Taking Artificial Intelligence Into Space Through Objective Selection of Hyperspectral Earth Observation Applications

AGATA M. WIJATA, MICHEL-FRANÇOIS FOULON, YVES BOBICHON, RAFFAELE VITULLI, MARCO CELESTI, ROBERTO CAMARERO, GIANLUIGI DI COSIMO, FERRAN GASCON, NICOLAS LONGÉPÉ, JENS NIEKE, MICHAL GUMIELA, AND JAKUB NALEPA

Digital Object Identifier 10.1109/MGRS.2023.3269979
Date of current version: 30 June 2023

Recent advances in remote sensing hyperspectral imaging and artificial intelligence (AI) bring exciting opportunities to various fields of science and industry that can directly benefit from in-orbit data processing. Taking AI into space may accelerate the response to various events, as massively large raw hyperspectral images (HSIs) can be turned into useful information onboard a satellite; hence, the images' transfer to the ground becomes much faster and offers enormous scalability of AI solutions to areas across the globe. However, there are numerous
that air pollution contributes to 1.6 million deaths annually, accounting for about 17% of all deaths in the country [69]. The key air assessment parameter is dust with a diameter below 2.5 μm (PM 2.5) [64], and appropriately responding to the real-time PM 2.5 value can significantly reduce harmful effects on the human respiratory and circulatory systems [70] through, e.g., avoiding overexposure and striving to reduce the level of pollution [71]. Estimating the spatial distribution of the PM 2.5 concentration from satellite data was shown to be feasible using deep learning in both low- and high-pollution areas [64]. Importantly, the level of PM 2.5 depends on seasonality [72]; the process of gathering ground truth data should reflect that to enable training well-generalizing models. Indeed, such activities may rapidly become cumbersome, requiring the expertise of specialists in the application domain to obtain relevant annotations. The cost of the engineering database development necessary to handle any data-driven AI solution must be put in front of a potential nondata-driven handcrafted algorithm solution that would not require the effort of collecting, annotating, and simulating a huge amount of representative data. In a classic data flow, which is followed while developing an AI-powered solution, the data may be acquired using different sources (e.g., drones, aircraft, and satellites) and are commonly unlabeled; hence, they should undergo manual or semiautomated analysis to generate ground truth information. Such data samples are further bundled into datasets, which should be carefully split into training and test sets, with the former exploited to train machine

which reduces the natural acidity of Earth's surface [78]. A significant decrease in acidity causes further reactions, which result in the release of metals and metalloids, such as Fe, Al, Cu, Pb, Cd, Co, and As, and sulfates from the soil. The released heavy metals penetrate soil, aquifers, and the biosphere, accumulating in vegetation and thus posing a threat to humans and animals, increasing the potential toxicity for agriculture, aquifers, and the biosphere [80].
The mapping of an area corresponding to AMD, but also other minerals, and the estimation of the pollution level are carried out using in-field, air [77], and satellite [65], [81] methods. The use of remote sensing to estimate pollution maps using MSIs [65], [78] and HSIs [77] requires preparing the reference data, the source of

the-ground processing of downlinked raw image data) is not of practical importance.
– Two: The faster response directly contributes to specific actions undertaken on the ground, which would not have been possible without this timely information.
•• Multitemporal analysis (compatibility with the revisit time), δ_mta^OBP: This objective relates to multitemporal onboard analysis of a series of images acquired for the same area at consecutive time points and to the benefits it could give end users:
– Zero: Multitemporal analysis would not be beneficial (it would not bring any useful information) or could be beneficial but is not possible to achieve within the mission (e.g., due to the assumed ConOps and missing onboard geolocalization).
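The classic data flow just described (acquire, annotate, bundle, and split) can be sketched as follows; the function and variable names and the 80/20 split are illustrative assumptions, not the authors' actual pipeline.

```python
import random

def bundle_and_split(samples, annotations, test_fraction=0.2, seed=42):
    """Bundle data samples with their ground truth annotations into a dataset,
    then split it into disjoint training and test sets."""
    dataset = list(zip(samples, annotations))  # bundle samples with ground truth
    rng = random.Random(seed)                  # fixed seed for a reproducible split
    rng.shuffle(dataset)                       # avoid ordering bias before splitting
    n_test = int(len(dataset) * test_fraction)
    return dataset[n_test:], dataset[:n_test]  # (training set, test set)

# Hypothetical usage: ten annotated image patches, split 80/20.
patches = [f"patch_{i}" for i in range(10)]
labels = list(range(10))
train_set, test_set = bundle_and_split(patches, labels)
```

The training set (the former, in the authors' wording) would then be used to fit the model, with the held-out test set reserved for evaluation.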
strategy (e.g., the acquisition duty cycle, coverage, and revisit capabilities).
•• Potential of extending the mission perimeter and/or capacity, δ_emp^M: This objective indicates whether the use case has potential to extend the current mission perimeter and/or capacity, e.g., in multipurpose/reconfigurable missions, and it relates to the cost of the mission's perimeter expansion. As for enhancing the mission capacity, we may be able to have a higher scientific/commercial return, for instance, by observing more target sites than what could be achieved without data analysis onboard a satellite:
– Zero: The use case is already within the perimeter foreseen for the mission, and the mission capacity would not be enhanced by onboard processing. Alternatively, from a pure mission perimeter extension point of view, this use case is of poor interest.
– One: The use case is not within the perimeter for which the mission was initially designed, and its implementation may have extra impact on the system design (e.g., sending an alert upon an illegal ship degassing detection may need an additional geostationary optical link to act with the necessary reactivity this situation requires). This use case is of great interest to extend the mission perimeter/capacity but may be feasible only with a significant impact on the satellite (e.g., a new optical transmission device onboard) and system levels (e.g., geostationary relay).
– Two: The use case is not within the perimeter for which the mission was initially designed, and apart from the new AI function, the implementation of this use case has only a minor impact that can be absorbed by the current satellite and/or system design. Such a use case is therefore of great interest to extend the mission perimeter and/or capacity at a minimal cost.

CONSTRAINTS AND FEASIBILITY
Constraints relate to sensor capabilities and characteristics. Also, we discuss the availability of datasets, focusing on data-driven supervised techniques to achieve robust products. As in the case of objectives, the scale is either binary (zero/two) or three-point (zero, one, and two), with larger values corresponding to more preferred use cases:
1) Sensor capabilities:
•• Compatibility with the sensor spectral range, ω_cspe^SC: This constraint indicates the feasibility of tackling a use case with image data (with respect to its spectral range) captured using the considered sensor:
– Zero: The sensor is not compatible (it does not capture the spectral range commonly reported in the papers discussing the use case).
– One: The sensor is partly compatible (it captures part of the spectral range commonly reported in the papers discussing the use case).
– Two: The sensor is fully compatible (it captures the spectral range commonly reported in the papers discussing the use case).
•• Compatibility with the sensor spectral sampling, ω_css^SC: This constraint indicates the feasibility of tackling a use case with image data (with respect to their spectral sampling) captured using the considered sensor:
– Zero: The commonly reported spectral sampling is narrower than available in the target sensor; hence, it may not be possible to capture spectral characteristics of the objects of interest.
– One: The commonly reported spectral sampling is much wider than available in the target sensor (e.g., MSIs versus HSIs); hence, we may not fully benefit from the sensor spectral capabilities.
– Two: The commonly reported spectral sampling is compatible with the target sensor.
vised models for onboard processing:
– Zero: No ground truth datasets are available.
– One: There exist ground truth datasets (at least one), but they are not fully compatible with the target sensor (compatibility with the sensor could be achieved from such data if there were an instrument simulator).
– Two: There exist ground truth datasets (at least one) that are fully compatible with the target sensor.
•• Difficulty of creating new ground truth data, ω_dgt^D: This constraint relates to the process of creating new ground truth datasets that could be used to train and validate supervised learners for onboard processing during the target mission:
– One: The localization of the objects of interest is not known, and/or their spectral characteristics are not known in detail, but the current state of the art suggests preliminary wavelengths determined by airborne/laboratory methods and ar-

training data may be beneficial, but it is not of critical importance for this use case (e.g., fire detection).

SELECTING USE CASES FOR ONBOARD AI DEPLOYMENT
In Table 2, we assemble objectives and constraints that contribute to the selection process. There are parameters that are mission independent; therefore, the same values (determined once) can be used for fundamentally different satellite missions, as shown in the "Case Studies" section for the CHIME and Intuition-1 missions. Afterward, they can be updated only when necessary, e.g., if the trends have changed within the "interest to the community" objective. The total score S, which aggregates all objectives and constraints, is their weighted sum:

S = α_fr^OBP δ_fr^OBP + α_mta^OBP δ_mta^OBP + α_cas^M δ_cas^M + α_emp^M δ_emp^M + α^R δ^R + …,

where the first two terms aggregate the onboard processing (OBP) objectives, the next two the mission (M) objectives, and α^R δ^R the community-interest objective; the sum continues analogously over the weighted constraint terms.
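The weighted sum above can be computed in a few lines; the dictionaries below (one value per objective δ and constraint ω, plus a weight α per factor) are an illustrative sketch, not the authors' implementation:

```python
def total_score(deltas, omegas, alphas):
    """Total score S: the weighted sum of all objective (delta) and
    constraint (omega) values, each multiplied by its weight alpha."""
    factors = {**deltas, **omegas}
    return sum(alphas[name] * value for name, value in factors.items())

# Hypothetical use case scored on the zero/one/two scale, with all
# weights set to 1 (objectives and constraints equally important).
deltas = {"fr": 2, "mta": 0, "cas": 2, "emp": 0, "R": 1}                 # objectives
omegas = {"cspe": 2, "css": 1, "cspa": 1, "agt": 0, "dgt": 1, "shd": 2}  # constraints
alphas = {name: 1 for name in {**deltas, **omegas}}
S = total_score(deltas, omegas, alphas)  # 5 + 7 = 12
```

Raising the objective weights (e.g., setting α = 2 for every δ) reproduces the "objectives are more important" weighting scheme discussed later.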
MISSION (M)
Compatibility with the acquisition strategy — δ_cas^M, α_cas^M: ✓
Potential of extending the mission perimeter — δ_emp^M, α_emp^M: ✓
Mission-specific parameters are indicated with a ✓, whereas those that are mission agnostic are marked with an ✗.
QUANTIFYING THE INTEREST OF THE RESEARCH COMMUNITY
We performed quantitative analysis of the recent body of literature (2012–2022) for each use case to objectively investigate the interest of the community. In Figure 1, we present the number of papers published yearly concerning the detection and monitoring of earthquakes and landslides (for all other use cases, see the supplementary material available at https://www.doi.org/10.1109/MGRS.2023.3269979). For the search process, we utilized keywords and key phrases commonly used in each application. (To perform the quantitative analysis of the state of the art, we used the Dimensions tool available at https://app.dimensions.ai/discover/publication.) We can observe the steady increase of the number of papers published in each specific application related to the analysis of earthquakes and landslides, with the monitoring of earthquakes (rendered in yellow) and detection of landslides (dark blue) manifesting the fastest growth in 2019–2021. Since the body of knowledge related to those two topics has been significantly expanding, we can infer that they are disruptive areas and that contributing to the state of the art here is of high importance (therefore, the topics were assigned the highest score in the evaluation matrix). On the other hand, as the estimation of

[Figure 1 (residue): yearly numbers of publications for the monitoring of earthquakes, estimation of earthquake damages, and detection of landslides.]
coregistration process is performed, so a subpixel-accurate hyperspectral cube can be produced regardless of satellite platform attitude determination and control system low-frequency disturbances.
The Leopard DPU is responsible for the acquisition of raw image data from the optical system, storage of the data, running preprocessing algorithms, data compression (CCSDS-123), and AI processing. Other functionalities, such as handling the S-band radio (uplink, 256 kb/s) and X-band radio (downlink, up to 50 Mb/s), are also covered by the DPU. The Leopard DPU utilizes the Xilinx Vitis AI framework to accelerate CNNs on field-programmable gate array hardware, providing energy-efficient (0.3 tera operations per second per watt) inference and in-flight-reconfigurable deep models.

"IF ACTIONABLE ITEMS EXTRACTED FROM RAW DATA ARE NOT DELIVERED IN TIME, THEY MAY EASILY BECOME USELESS IN COMMERCIAL AND SCIENTIFIC CONTEXTS."

all potential applications for the Intuition-1 mission, as this nanosatellite will not be equipped with onboard georeferencing routines; hence, it would not be possible to effectively benefit from multiple images captured for the very same scene at more than one time point. Similarly, since the estimation of soil parameters is already planned for CHIME and Intuition-1, such agricultural applications would not necessarily extend the mission perimeter; therefore, this parameter became zero for both satellites (δ_emp^M).
TABLE 4. AN EVALUATION MATRIX CAPTURING ALL MISSION OBJECTIVES AND CONSTRAINTS FOR CHIME AND INTUITION-1.

OBJECTIVES — onboard processing (δ_fr^OBP, δ_mta^OBP) and mission acquisition strategy (δ_cas^M), each reported as a CHIME/Intuition-1 pair, followed by the mission perimeter objective δ_emp^M and the community objective δ^R:

USE CASE | δ_fr^OBP (CHIME, Intuition-1) | δ_mta^OBP (CHIME, Intuition-1) | δ_cas^M (CHIME, Intuition-1) | δ_emp^M | δ^R

AGRICULTURAL APPLICATIONS
Estimation of soil parameters [19] | 0, 2 | 0, 2 | 2, 0 | 0 | 1
Analysis of fertilization [20] | 0, 2 | 0, 2 | 2, 0 | 0 | 1
Analysis of the plant growth [21] | 0, 1 | 0, 2 | 2, 0 | 0 | 1
Assessment of hydration [171] | 2, 2 | 0, 2 | 2, 0 | 0 | 1
[Table 4, constraints portion (residue): per-use-case scores for sensor capabilities (ω_cspe^SC, ω_css^SC, ω_cspa^SC) and dataset maturity (ω_agt^D, ω_dgt^D, ω_shd^D) for CHIME and Intuition-1; the row labels did not survive extraction.]
[Figure (residue): radar plots (a) and (b) of the objective (δ) and constraint (ω) values under three weighting schemes: 1) objectives and constraints are equally important, 2) objectives are more important, and 3) constraints are more important. Top-scoring use cases per scheme — equal weighting: Lava Tracking (S = 17), Detection of Volcanic Ash Clouds (S = 16), Detection of Volcanic Eruptions (S = 16), Detection of Floods (S = 15), Others (S ≤ 14); objectives more important: Lava Tracking (S = 24), Monitoring Fire Progress (S = 22), Detection of Volcanic Ash Clouds (S = 22), Detection of Volcanic Eruptions (S = 22), Detection of Floods (S = 21), Coastal/Inland Water Pollution (S = 21), Others (S ≤ 20); constraints more important: Lava Tracking (S = 27), Detection of Volcanic Ash Clouds (S = 26), Detection of Volcanic Eruptions (S = 26), Detection of Floods (S = 24), Others (S ≤ 23).]
JUNE 2023
maximize the overall score aggregating the most important mission objectives and constraints in a simple way. We proved the flexibility of the evaluation process by employing it on two hyperspectral missions: CHIME and Intuition-1. Our technique may be straightforwardly utilized to target two fundamentally different missions, and it allows practitioners to analyze different mission profiles and the importance of assessment factors through the weighting mechanism. On top of that, the procedure can be extended to capture other aspects, such as expected onboard data quality, e.g., geometric, spectral, and radiometric, and other types of payloads beyond optical sensors, which may play a key role in specific EO use cases. Also, it may be interesting to consider selected aspects that are currently treated as being mission agnostic as mission specific. As an example, creating new ground truth data may require planning in situ measurement campaigns to be in line with the ConOps of a mission. The same would apply to the training image data, whose acquisition time and target area characteristics should be close enough to those planned for a mission. Finally, including the TRL of specific onboard AI algorithms (in relation to the available hardware planned for inference) in the evaluation procedure could help make a more informed decision on selecting the actual AI solution (e.g., a deep learning architecture for a given downstream task). We believe that the standardized approach of evaluating onboard AI applications will become an important tool that will be routinely exploited while designing and planning emerging EO missions and that it will help maximize the percentage of successful satellite missions bringing commercial, scientific, industrial, and societal value to the community.

ACKNOWLEDGMENT
This work was partly funded by the ESA via a feasibility study for the CHIME mission and Intuition-1-focused GENESIS and GENESIS 2 projects supported by the Φ-lab (https://philab.esa.int/). Agata M. Wijata and Jakub Nalepa were supported by a Silesian University of Technology grant for maintaining and developing research potential. This article has supplementary material, provided by the authors, available at https://www.doi.org/10.1109/MGRS.2023.3269979.

AUTHOR INFORMATION
Agata M. Wijata (awijata@ieee.org) received her M.Sc. (2015) and Ph.D. (2023) degrees in biomedical engineering at the Silesian University of Technology. Currently, she works as a researcher at the Silesian University of Technology, 44-800 Zabrze, Poland, and as a machine learning specialist at KP Labs, 44-100 Gliwice, Poland, where she has been focusing on hyperspectral image analysis. Her research interests include multi- and hyperspectral image processing, medical image processing, image-guided navigation systems in medicine, artificial neural networks, and artificial intelligence in general. She contributes to the Copernicus Hyperspectral Imaging Mission for the Environment from the artificial intelligence and data processing perspectives. She is a Member of IEEE.
Michel-François Foulon (michel-francois.foulon@thalesaleniaspace.com) received his Ph.D. degree (2008) in micro- and nanotechnologies and telecommunications from Université des Sciences et Technologies de Lille, France. In 2007, he joined Thales Alenia Space, 31100 Toulouse, France, where he is currently an imaging chain architect in the observation, exploration, and navigation business line. He has more than 15 years of experience in microwaves, space-to-ground transmission, onboard data processing, and image chain architecture design for Earth observation systems. Since 2021, he has also worked on onboard artificial intelligence solutions in the framework of the Copernicus program.
Yves Bobichon (yves.bobichon@thalesaleniaspace.com) received his Ph.D. degree (1997) in computer science from University Nice Côte d'Azur, France. He joined Alcatel Space Industries in 1999. He is currently an image processing chain architect in the System and Ground Segment Engineering Department, Thales Alenia Space, 31100 Toulouse, France. He has more than 25 years of experience in satellite image processing, onboard data compression, and image chain architecture design for Earth observation systems. Since 2018, he has also been a researcher at the French Research Technological Institute Saint Exupéry. His research interests include embedded machine and deep learning applications for image processing onboard Earth observation satellites.
Raffaele Vitulli (raffaele.vitulli@esa.int) received his M.Sc. degree in electronic engineering from Politecnico di Bari, Italy, in 1991. He is a staff member of the Onboard Payload Data Processing Section, European Space Agency, 2201 AZ Noordwijk, The Netherlands, where he works on the Consultative Committee for Space Data Systems as a member of the Multispectral/Hyperspectral Data Compression Working Group. He has also been the chair and organizer of the Onboard Payload Data Compression Workshop. He is actively involved in the Copernicus Hyperspectral Imaging Mission for the Environment mission, supervising avionics and onboard data handling.
Marco Celesti (marco.celesti@esa.int) received his M.Sc. (2014) and Ph.D. (2018) degrees in environmental sciences from the University of Milano–Bicocca (UNIMIB), Italy. After that, he worked as a postdoc at UNIMIB, being also involved as scientific project manager in the Horizon 2020 Marie Curie Training on Remote Sensing for Ecosystem Modeling project. He received a two-year fellowship cofunded by the European Space Agency (ESA) Living Planet Fellowship program, working in support of the Earth Explorer 8 Fluorescence Explorer mission. Since 2021, he has worked at the ESA, 2201 AZ Noordwijk, The Netherlands, as a Sentinel optical mission scientist. His research interests include optical remote sensing, retrieval of geophysical parameters, radiative transfer modeling, and terrestrial carbon assimilation. He is currently
Summary, challenges, and perspectives

velopment of optical communication, relay satellites, and other technologies has enabled data transmission between satellites at high speeds. Some researchers have successively proposed the concepts of space-based Internet [86], [87], [88], [89], spatial information networks [90], [91], [92], [93], and air–space–ground-integrated information communication networks [94], [95]; conducted research on key technologies; and planned and built an onboard dynamic real-time distributed network aimed at integrating space-based communication, navigation, remote sensing, and computing resources. Onboard collaborative intelligent perception

System | Source | Function | Year | Hardware
SAR processor | ESA | Real-time imaging | 2004 | System-on-chip
ERS satellite | United States | Real-time imaging, change detection | 2004 | FPGA and PowerPC
Interferometric SAR | United States | On-satellite SAR interferometric processing | 2009 | FPGA
Chaohu-1 | China | SAR data on-orbit imaging and target intelligent detection | 2022 | DSP, FPGA, and GPU
Taijing-4 01 | China | SAR data on-orbit imaging | 2022 | DSP and FPGA
FIGURE 1. Earth observation systems based on LEO small satellite constellation: (a) DARPA Blackjack constellation; (b) SDA-NDSA.
FIGURE 2. Multisatellite data association: (a) SAR + AIS; (b) SAR + optical. [Annotations: reporting 460, not reporting 571; data sources: TerraSAR-X, Cosmo-SkyMed, Radarsat-2.]
FIGURE 3. Multisensor integrated satellite: (a) Canadian RCM (SAR + AIS); (b) Chinese Taijing-4 01 satellite (SAR + optical). [Annotations: GPS antennas (×2), S-band antennas (×4), star sensors (×3).]
FIGURE 4. Multisensor satellite constellation: (a) Canadian OptiSAR constellation; (b) German Pendulum; (c) French CartWheel.
FIGURE 1. The cumulative numbers of AI-related articles published in IEEE Geoscience and Remote Sensing Society publications, along with ISPRS Journal of Photogrammetry and Remote Sensing and Remote Sensing of Environment, in the past 10 years (2013–2022). The statistics are obtained from IEEE Xplore and ScienceDirect.

HYDICE: Hyperspectral Digital Imagery Collection Experiment; GAN: generative adversarial network; LRR: low-rank representation; AVIRIS: airborne visible/infrared imaging spectrometer; EO-1: Earth Observing One; HJ-1A: Huan Jing 1A; SN: Siamese network; GMM: Gaussian mixture model; MHN: modern Hopfield network; VHR: very high resolution; FCN: fully convolutional network; ALOS PALSAR: advanced land observing satellite phased-array type L-band SAR; LSTM: long short-term memory; MODIS: moderate-resolution imaging spectroradiometer; ANN: artificial neural network; JL-1: Jilin-1; SEVIRI: Spinning Enhanced Visible and InfraRed Imager.
manipulated to be adversarial and fool well-trained ML
safety-critical geoscience and RS applications. To this end,

FIGURE 2. An overview of the research topics covered in this article.

PRELIMINARIES
Adversarial attacks usually refer to finding adversarial examples for well-trained models (target models). Taking image classification as an example, let f : X → Y be a classifier mapping from the image space X ⊂ R^d to the label space Y = {1, …, K} with parameters θ, where d and K denote the numbers of pixels and categories, respectively. Given the perturbation budget ε under the ℓ_p-norm, the common way to craft adversarial examples for the adversary is to find a perturbation δ ∈ R^d, which can maximize the loss function, e.g., the cross-entropy loss L_ce, so that f(x + δ) ≠ y, where y is the label of x. Therefore, δ can be obtained by solving the following optimization problem:

Formally, the perturbation δ is updated in each iteration as follows:

δ_{i+1} = Proj_{B(x, ε)}(δ_i + α · sign(∇_{x_i} L_ce(θ, x_i, y))), x_i = x + δ_i, δ_0 = 0, (2)

where i is the current step, α is the step size (usually smaller than ε), and Proj is the operation to make sure the values of δ are valid. Specifically, for the (i + 1)th iteration, we first calculate the gradients of L_ce with respect to x_i = x + δ_i,
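The update rule in (2) is the projected gradient descent (PGD) attack. Below is a minimal sketch for a toy linear softmax classifier, where the cross-entropy gradient with respect to the input has a closed form; the two-class model, ε, α, and the step count are illustrative assumptions, not taken from the article:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_input_grad(W, x, y):
    # For f(x) = softmax(W @ x), the cross-entropy gradient w.r.t. the
    # input is W^T (p - onehot(y)), with p the predicted probabilities.
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

def pgd_attack(W, x, y, eps=0.2, alpha=0.05, steps=10):
    """delta_{i+1} = Proj_{B(x,eps)}(delta_i + alpha * sign(grad)), delta_0 = 0,
    with Proj implemented as clipping onto the L-infinity ball of radius eps."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = ce_input_grad(W, x + delta, y)
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return x + delta

# Hypothetical toy example: the clean input is classified as class 0;
# the bounded perturbation pushes the prediction to class 1.
W = np.eye(2)
x = np.array([0.05, 0.0])
x_adv = pgd_attack(W, x, y=0)
```

Clipping onto the ℓ∞ ball is the common choice of Proj; other ℓ_p budgets require a different projection step.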
TABLE 3. DIFFERENCES AND CONNECTIONS BETWEEN DIFFERENT TYPES OF ATTACKS FOR ML MODELS.
ATTACK TYPE | ATTACK GOAL | ATTACK STAGE | ATTACK PATTERN | TRANSFERABILITY
Adversarial attack [30] | Cheat the model into yielding wrong predictions with specific perturbations | Evaluation phase | Various patterns calculated for different samples | Transferable
Data poisoning [57] | Damage model performance with out-of-distribution data | Training phase | Various patterns selected by the attacker | Nontransferable
Backdoor attack [58] | Mislead the model into yielding wrong predictions on data with embedded triggers | Training phase | Fixed patterns selected by the attacker | Nontransferable
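The iterative update in (2) amounts to projected gradient descent (PGD): a signed-gradient ascent step of size α on the loss, followed by projection of δ back onto the ε-ball B(x, ε). The following is a minimal NumPy sketch under illustrative assumptions: a linear softmax classifier with an analytic cross-entropy gradient and an ℓ∞ budget, for which the projection reduces to elementwise clipping. The function names and toy model are not from the article.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def pgd_attack(x, y, W, b, eps=0.1, alpha=0.02, steps=10):
    """PGD in the spirit of Eq. (2) on a linear softmax classifier.

    x: input vector; y: true label index; W, b: model parameters.
    Each step adds alpha * sign(grad of L_ce w.r.t. x_i) to delta,
    then projects delta onto the l_inf ball of radius eps around x.
    """
    delta = np.zeros_like(x)                 # delta_0 = 0
    for _ in range(steps):
        x_i = x + delta
        p = softmax(W @ x_i + b)             # class probabilities
        p[y] -= 1.0                          # dL_ce/dlogits = p - onehot(y)
        grad = W.T @ p                       # dL_ce/dx_i
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return x + delta
```

For other ℓ_p budgets, only the projection step changes; the signed-gradient ascent on L_ce is the same.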
FIGURE 6. Qualitative semantic segmentation results of the backdoor attacks with the FCN-8s model on the Zurich Summer dataset using the WABA method (adapted from [70]). Columns: benign image, poisoned image, benign map, poisoned map, and ground truth.
As previously discussed, AI technology has shown immense potential, with rapidly rising development and application in both industry and academia, where reliable and accurate [...]

[Figure: an FL system, in which clients download the initial/global model, train locally, and upload their local models to the FL server for aggregation and update.]
FIGURE 8. Three categories of FL according to the data partitions [84], [85]. (a) Horizontal FL, (b) vertical FL, and (c) federated transfer learning.
GENERATING GLOBAL-SCALE GEOGRAPHIC SYSTEMATIC MODELS
The geographic parameters of different countries are similar, but geospatial data often cannot be shared due to national security restrictions and data confidentiality. A horizontal FL system could train local models separately and then integrate the global-scale geographic parameters on the server according to the contributions of the different owners, which could effectively avoid data leaks.

INTERDISCIPLINARY URBAN COMPUTING
As is known, much spatial information about a specific city can be conveniently recorded by RS images. Still, other information, such as the locations of people and vehicles, [...]

[...] where f denotes the mapping function, and θ represents the parameters of the neural network. Typically, as shown in Figure 9, developing an AI algorithm involves data collection, model construction, model training, and model deployment. In the context of supervised learning, a training dataset D is constructed in the data collection step, containing N pairs of input data samples x and labeled targets y, as follows:

D = (X, Y) = {x_i, y_i}_{i=1}^N.  (9)

Then, the model architecture is designed according to the requirements of the EO missions, and the mapping function as well as its parameters θ are initialized (i.e., f_θ is determined). Next, [...]
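The server-side integration step of a horizontal FL system can be sketched as federated averaging: each owner trains locally and uploads only model parameters, and the server combines them in proportion to each owner's local sample count, so raw geospatial data never leave their owner. This is a minimal illustrative sketch; the function name and the sample-count weighting are assumptions, not the article's implementation.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate local model parameters on the FL server.

    client_weights: list of clients, each a list of per-layer arrays.
    client_sizes: number of local training samples per client, used
    to weight each client's contribution to the global model.
    """
    total = sum(client_sizes)
    agg = [np.zeros_like(w, dtype=float) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for a, w in zip(agg, weights):
            a += (n / total) * w     # contribution-weighted sum
    return agg
```

For example, two clients holding 1 and 3 samples contribute with weights 0.25 and 0.75 to the aggregated global parameters.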
[...]ject detection). In low-level RS vision tasks, neural networks [...]
FIGURE 10. A visualization of uncertainty quantification methods. (a) Previous network-based methods. (b) Ensemble methods. (c) Monte Carlo methods. (d) External methods. For an input sample x̂, the first three methods deliver the prediction y* and the quantified uncertainty σ* from the average of a series of model outputs (i.e., ξ* or y*_n) and their standard deviation (S.D.), respectively. On the contrary, the external methods directly output the results of prediction and uncertainty quantification. BNN: Bayesian neural network.
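The ensemble-style recipe in panels (b) and (c), where the prediction y* and its uncertainty σ* are the mean and standard deviation of a series of model outputs, can be sketched as follows. This is an illustrative sketch; the function name and the scalar-output members are assumptions.

```python
import numpy as np

def ensemble_uncertainty(members, x):
    """Predictive mean and uncertainty from repeated model outputs.

    Each member maps the input x to a scalar output y_n*; the
    prediction y* is the mean of the member outputs, and the
    quantified uncertainty sigma* is their standard deviation.
    """
    outputs = np.array([m(x) for m in members])
    return outputs.mean(), outputs.std()
```

The Monte Carlo variant in (c) follows the same pattern, except that the n outputs come from repeated stochastic forward passes of a single model (e.g., with dropout kept active) rather than from separately trained ensemble members.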
[...] map can be a pixel map equal in size to the input. [...]
FIGURE 13. The heatmaps of DenseNet with different XAI algorithms for the Water class in the SEN12MS dataset (from [137]): S2 image, Sal with SG (0.16), DeepLift (0.15), LIME (0.08), Occlusion (0.01), and Grad-CAM (0.01). Pixels with a deeper color are more likely to be interpreted as the target class. Sal with SG: Saliency with SmoothGrad.
FIGURE 15. The heatmaps of ResNet-101 and DenseNet-201 with CAM, Grad-CAM++, and ECR-CAM (adapted from [152]). The target objects in rows 1, 2, and 3 are airplanes, cars, and mobile homes, respectively. Pixels in red are more likely to be interpreted as the target objects.
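Of the attribution methods behind such heatmaps, occlusion is the simplest to state: hide one image region at a time and record how much the model's class score drops; regions whose removal hurts the score most are attributed the highest relevance. A minimal NumPy sketch follows; the function name, patch size, and the toy scoring model used below are illustrative assumptions, not the cited implementations.

```python
import numpy as np

def occlusion_map(model, image, patch=2, baseline=0.0):
    """Occlusion-style attribution heatmap for a single-channel image.

    Slides a patch of constant `baseline` values over the image and
    stores, for each covered region, the drop in the model's score
    when that region is hidden.
    """
    h, w = image.shape
    base_score = model(image)
    heat = np.zeros_like(image, dtype=float)
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = baseline
            heat[r:r + patch, c:c + patch] = base_score - model(occluded)
    return heat
```

Gradient-based methods such as Saliency or Grad-CAM instead derive the heatmap from backpropagated gradients, so they need one forward/backward pass rather than one forward pass per occluded patch.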
[Figure: a promptable segmentation model — an image encoder produces an image embedding; a prompt encoder embeds mask (via convolution), points, box, and text prompts; and a mask decoder outputs valid masks, each with a score. Conv: convolution.]
FIGURE 1. ARD-motivated FAIR EO data and FAIR-AI model principles integrated into a common EO process. STAC: SpatioTemporal Asset
Catalog; DASE: Data and Algorithm Standard Evaluation; EOD: Earth observation database; DPP: Distributed Data Parallel.
PROGRAM
Remote sensing generates vast amounts of image data that can be difficult and time consuming to analyze using conventional image processing techniques. CV algorithms enable the automatic interpretation of large volumes of data, allowing remote sensing to be used for a wide range of applications, including environmental monitoring, land use/cover mapping, and natural resource management. Thus, the IADF TC aimed to prioritize topics that integrate CV into remote sensing data analysis. The first IADF school focused on applying CV techniques to address modern remote sensing challenges, consisting of a series of lectures discussing current methods for analyzing satellite images. The covered topics were image fusion, explainable artificial intelligence (AI) for Earth science, big geo-data, multisource image analysis, deep learning for spectral unmixing, SAR image analysis, and learning with zero/few labels. The technical program of the IADF school is depicted in Figure 4.

FIGURE 2. A geographical distribution of the selected participants: (a) the countries and (b) the continents.
FIGURE 3. The Organizing Committee of the first IEEE GRSS IADF School on CV4EO.

During the first day of the school, the "Deep/Machine Learning for Spectral Unmixing" lecture covered various topics related to linear hyperspectral unmixing. These included geometrical approaches, blind linear unmixing, and sparse unmixing. Additionally, the course delved into the utilization of autoencoders and convolutional networks for unmixing purposes. The lecture was followed by "Change Detection (TorchGeo)," which elaborated on the utilization of TorchGeo with PyTorch for training change detection models using satellite imagery. On the second day of the school, the "Learning with Zero/Few Labels" lecture discussed recent developments in machine learning with limited label data in EO, including semisupervised learning, weakly supervised learning, and self-supervised learning. The subsequent "SAR Processing" lecture covered various topics, including the analysis of SAR images with different polarimetric channels, the geometry of SAR image [...]
4 October
10 a.m.–2 p.m. (UTC +2) | Learning With Zero/Few Labels | Dr. Sudipan Saha, Technical University of Munich (Germany); Dr. Angelica I. Aviles-Rivero, University of Cambridge (U.K.); Dr. Lichao Mou, German Aerospace Center (Germany); Prof. Carola-Bibiane Schönlieb, University of Cambridge (U.K.); and Prof. Xiao Xiang Zhu, Technical University of Munich (Germany)
2 p.m.–6 p.m. (UTC +2) | SAR Processing | Dr. Shashi Kumar, IIRS, ISRO (India)
5 October
10 a.m.–2 p.m. (UTC +2) | Semantic Segmentation | Prof. Sylvain Lobry, Université de Paris (France)
2 p.m.–6 p.m. (UTC +2) | Big Geo-Data | Prof. Saurabh Prasad, University of Houston (USA), and Prof. Melba Crawford, Purdue University (USA)
6 October
10 a.m.–2 p.m. (UTC +2) | Image Fusion | Prof. Giuseppe Scarpa and Dr. Matteo Ciotola, University of Naples "Federico II" (Italy)
2 p.m.–6 p.m. (UTC +2) | XAI for Earth Science | Dr. Michele Ronco, University of Valencia (Spain)
7 October
9 a.m.–1 p.m. (UTC +2) | PolSAR | Prof. Avik Bhattacharya, Indian Institute of Technology Bombay (India); Prof. Alejandro Frery, Victoria University of Wellington (New Zealand); and Dr. Dipankar Mandal, Kansas State University (USA)
FIGURE 4. The technical program of the first IEEE GRSS IADF School on CV4EO. IIRS, ISRO: Indian Institute of Remote Sensing, Indian Space Research Organisation.
[Figure 5: portraits of the school speakers.]
made available online on a daily basis on the GRSS YouTube channel. Links to the daily lectures are provided for reference.

SPEAKERS
The first edition of the IEEE GRSS IADF school invited a diverse group of experts from four continents. As shown in Figure 5, the list includes the following:
◗◗ Prof. Melba Crawford, professor of civil engineering, Purdue University, USA
◗◗ Prof. Saurabh Prasad, associate professor, the Department of Electrical and Computer Engineering, the University of Houston, USA
◗◗ Dr. Caleb Robinson, data scientist, the Microsoft AI for Good Research Lab, USA
◗◗ Dr. Behnood Rasti, principal research associate, Helmholtz–Zentrum Dresden–Rossendorf, Freiberg, Germany
◗◗ Prof. Giuseppe Scarpa and Dr. Matteo Ciotola, associate professor and Ph.D. fellow, respectively, the University of Naples "Federico II", Italy
◗◗ Dr. Sudipan Saha, postdoctoral researcher, Technical University of Munich, Germany
◗◗ Dr. Angelica I. Aviles-Rivero, senior research associate, the Department of Applied Mathematics and Theoretical Physics, the University of Cambridge, U.K.
◗◗ Dr. Lichao Mou, head of the Visual Learning and Reasoning Team, Remote Sensing Technology Institute, German Aerospace Center, Weßling
◗◗ Prof. Carola-Bibiane Schönlieb, professor of applied mathematics, the University of Cambridge, U.K.
◗◗ Prof. Xiao Xiang Zhu, professor for data science in EO, Technical University of Munich, Germany

Join the GRSS IADF TC
You can contact the Image Analysis and Data Fusion Technical Committee (IADF TC) chairs at iadf_chairs@grss-ieee.org. If you are interested in joining the IADF TC, please complete the form on our website (https://www.grss-ieee.org/technical-committees/image-analysis-and-data-fusion) or send an email to us including your
◗◗ first and last name
◗◗ institution/company
◗◗ country
◗◗ IEEE membership number (if available)
◗◗ email address.
Members receive information regarding research and applications on image analysis and data fusion topics, and updates on the annual Data Fusion Contest and on all other activities of the IADF TC. Membership in the IADF TC is free! Also, you can join the LinkedIn IEEE GRSS data fusion discussion forum, https://www.linkedin.com/groups/3678437/, or join us on Twitter: Grssiadf.
Special issue on
“Data Fusion Techniques for Oceanic
Target Interpretation”
Guest Editors
Gui Gao, Southwest Jiaotong University, China (dellar@126.com)
Hanwen Yu, University of Electronic Science and Technology of China, China (yuhanwenxd@gmail.com)
Maurizio Migliaccio, Università degli Studi di Napoli Parthenope, Italy (maurizio.migliaccio@uniparthenope.it)
Xi Zhang, First Institute of Oceanography, Ministry of Natural Resources, China (xi.zhang@fio.org.cn)
Interpreting marine targets using remote sensing can provide critical information for various applications, including environmental monitoring, oceanographic research, navigation, and resource management. With the development of observation systems, the acquired ocean information is multisource and multidimensional. Data fusion, as a general and popular multidisciplinary approach, can effectively use the obtained remote sensing data to improve the accuracy and reliability of oceanic target interpretation. This special issue invites contributions on the latest developments and advances in fusion techniques for oceanic target interpretation, presented as an array of tutorial-like overview papers. In agreement with the approach and style of the Magazine, the contributors to this special issue will pay strong attention to striking a balance between scientific depth and dissemination to a wide public encompassing remote sensing scientists, practitioners, and students.
Important dates:
August 1, 2023 White paper submission deadline
September 1, 2023 Invitation notification
November 1, 2023 Full paper submission deadline
March 1, 2024 Review notification
June 1, 2024 Revised manuscript due
September 1, 2024 Final acceptance notification
October 1, 2024 Final manuscript due
January 2025 Publication date