
CALL FOR PAPERS

IEEE Geoscience and Remote Sensing Magazine

Since 2013, the IEEE Geoscience and Remote Sensing Society (GRSS) has published IEEE Geoscience and Remote
Sensing Magazine. The magazine provides a venue for high-quality technical articles that by
their very nature do not find a home in journals requiring scientific innovation but that provide relevant information
to scientists, engineers, end-users, and students who interact in different ways with the geoscience and remote
sensing disciplines. The magazine therefore publishes tutorial and review papers, as well as technical papers on
geoscience and remote sensing topics; papers in the last category (technical papers) are published only in connection
with a Special Issue.

The magazine also publishes columns and articles on these topics:


• Education in remote sensing
• Space agency activities
• Industrial profiles and activities
• IDEA Committee activities
• GRSS Technical Committee activities
• GRSS Chapter activities
• Software reviews and data set descriptions
• Book reviews
• Conferences and workshop reports.

The magazine is published with an appealing layout in a digital format, and its articles are also published in
electronic format in the IEEE Xplore online archive. The digital format is different from the electronic format used
by the IEEE Xplore website. The digital format allows readers to navigate and explore the technical content of the
magazine with a look and feel similar to that of a printed magazine. The electronic version provided on the IEEE
Xplore website allows readers to access individual articles as separate PDF files. Both the digital and the electronic
versions of the magazine content are freely available to all GRSS members.

This call for papers encourages all potential authors to prepare and submit tutorial and technical papers for review
and publication in IEEE Geoscience and Remote Sensing Magazine. Contributions on any of the above-mentioned
topics of the magazine (including columns) are also welcome.

Authors interested in proposing special issues as guest editors are encouraged to contact the editor-in-chief directly
for information about the proposal submission template, as well as the proposal evaluation and special issue
management procedures.

All tutorial, review, and technical papers will undergo blind review by multiple reviewers. The submission and review
process is managed through IEEE Manuscript Central, as is already done for the three GRSS journals. Prospective
authors are required to submit electronically using the website http://mc.manuscriptcentral.com/grs by selecting the
“Geoscience and Remote Sensing Magazine” option from the drop-down list. Instructions for creating new user
accounts, if necessary, are available on the login screen. No other means of submission is accepted. Papers
should be submitted in single-column, double-spaced format.

For any additional information, contact the editor-in-chief:


Prof. Paolo Gamba
Department of Electrical, Biomedical and Computer Engineering
University of Pavia
Via A. Ferrata, 5
26100 Pavia
Italy
E-Mail: paolo.gamba@unipv.it

Digital Object Identifier 10.1109/MGRS.2023.3278368


JUNE 2023
VOLUME 11, NUMBER 2
WWW.GRSS-IEEE.ORG

FEATURES

10 Taking Artificial Intelligence Into Space Through Objective Selection of Hyperspectral Earth Observation Applications
by Agata M. Wijata, Michel-François Foulon, Yves Bobichon, Raffaele Vitulli, Marco Celesti, Roberto Camarero, Gianluigi Di Cosimo, Ferran Gascon, Nicolas Longépé, Jens Nieke, Michal Gumiela, and Jakub Nalepa

40 Onboard Information Fusion for Multisatellite Collaborative Observation
by Gui Gao, Libo Yao, Wenfeng Li, Linlin Zhang, and Maolin Zhang

60 AI Security for Geoscience and Remote Sensing
by Yonghao Xu, Tao Bai, Weikang Yu, Shizhen Chang, Peter M. Atkinson, and Pedram Ghamisi

ON THE COVER:
Schematic diagram of federated learning (see p. 69).

SCOPE
IEEE Geoscience and Remote Sensing Magazine (GRSM) will
inform readers of activities in the IEEE Geoscience and
Remote Sensing Society, its technical committees,
and chapters. GRSM will also inform and educate
readers via technical papers, provide information on
international remote sensing activities and new satellite
missions, publish contributions on education activities,
industrial and university profiles, conference news, book
reviews, and a calendar of important events.

Digital Object Identifier 10.1109/MGRS.2023.3277244

JUNE 2023 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE 1


IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE

EDITORIAL BOARD
Dr. Paolo Gamba
University of Pavia
Department of Electrical, Biomedical, and Computer Engineering
Pavia, Italy
paolo.gamba@unipv.it

Subit Chakrabarti (Cloud to Street, USA)
Gong Cheng (Northwestern Polytechnical University, P.R. China)
Michael Inggs (University of Cape Town, South Africa)
George Komar (NASA retired, USA)
Josée Levesque (Defence Research and Development, Canada)
Andrea Marinoni (UiT, Arctic University of Norway, Norway)
Fabio Pacifici (Maxar, USA)
Mario Parente (University of Massachusetts, USA)
Nirav N. Patel (Defense Innovation Unit, USA)
Michael Schmitt (Universität der Bundeswehr, Germany)
Vicky Vanthof (Univ. of Waterloo, Canada)
Hanwen Yu (University of Electronic Science and Technology of China, P.R. China)

COLUMNS & DEPARTMENTS
4 FROM THE EDITOR
7 PRESIDENT’S MESSAGE
86 PERSPECTIVES
89 TECHNICAL COMMITTEES

MISSION STATEMENT
The IEEE Geoscience and Remote Sensing Society of the IEEE seeks to advance science and technology in geoscience, remote sensing, and related fields using conferences, education, and other resources.

GRS OFFICERS
President: Mariko Sofie Burgin, NASA Jet Propulsion Laboratory, USA
Executive Vice President: Saibun Tjuatja, The University of Texas at Arlington, USA
Vice President of Publications: Alejandro C. Frery, Victoria University of Wellington, NZ
Vice President of Information Resources: Keely L. Roth, Salt Lake City, UT, USA
Vice President of Professional Activities: Dr. Lorenzo Bruzzone, University of Trento, Italy
Vice President of Meetings and Symposia: Sidharth Misra, NASA-JPL, USA
Vice President of Technical Activities: Dr. Fabio Pacifici, Maxar, USA
Secretary: Dr. Steven C. Reising, Colorado State University, USA
Chief Financial Officer: Dr. John Kerekes, Rochester Institute of Technology, USA

IEEE PERIODICALS MAGAZINES DEPARTMENT
Journals Production Manager: Sara T. Scudder
Senior Manager, Production: Katie Sullivan
Senior Art Director: Janet Dudar
Associate Art Director: Gail A. Schnitzer
Production Coordinator: Theresa L. Smith
Director, Business Development–Media & Advertising: Mark David, +1 732 465 6473, m.david@ieee.org, Fax: +1 732 981 1855
Advertising Production Manager: Felicia Spagnoli
Production Director: Peter M. Tuohy
Editorial Services Director: Kevin Lisankie
Senior Director, Publishing Operations: Dawn M. Melley

IEEE Geoscience and Remote Sensing Magazine (ISSN 2473-2397) is published quarterly by The Institute of Electrical and Electronics Engineers, Inc., IEEE Headquarters: 3 Park Ave., 17th Floor, New York, NY 10016-5997, +1 212 419 7900. Responsibility for the contents rests upon the authors and not upon the IEEE, the Society, or its members. IEEE Service Center (for orders, subscriptions, address changes): 445 Hoes Lane, Piscataway, NJ 08854, +1 732 981 0060. Individual copies: IEEE members US$20.00 (first copy only), nonmembers US$110.00 per copy. Subscription rates: included in Society fee for each member of the IEEE Geoscience and Remote Sensing Society. Nonmember subscription prices available on request. Copyright and Reprint Permissions: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of U.S. Copyright Law for private use of patrons: 1) those post-1977 articles that carry a code at the bottom of the first page, provided the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923 USA; 2) pre-1978 articles without fee. For all other copying, reprint, or republication information, write to: Copyrights and Permission Department, IEEE Publishing Services, 445 Hoes Lane, Piscataway, NJ 08854 USA. Copyright © 2023 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Application to Mail at Periodicals Postage Prices is pending at New York, New York, and at additional mailing offices. Canadian GST #125634188. Canada Post Corporation (Canadian distribution) publications mail agreement number 40013885. Return undeliverable Canadian addresses to PO Box 122, Niagara Falls, ON L2E 6S8 Canada. Printed in USA.

IEEE prohibits discrimination, harassment, and bullying. For more information, visit http://www.ieee.org/web/aboutus/whatis/policies/p9-26.html.

Digital Object Identifier 10.1109/MGRS.2023.3282408



Share Your Preprint Research with the World!

TechRxiv is a free preprint server for unpublished research in electrical engineering, computer science, and related technology. Powered by IEEE, TechRxiv provides researchers across a broad range of fields the opportunity to share early results of their work ahead of formal peer review and publication.

BENEFITS:
• Rapidly disseminate your research findings
• Gather feedback from fellow researchers
• Find potential collaborators in the scientific community
• Establish the precedence of a discovery
• Document research results in advance of publication

Upload your unpublished research today!

Follow @TechRxiv_org
Learn more: techrxiv.org
Powered by IEEE
FROM THE EDITOR
BY PAOLO GAMBA

Why Does GRSM Require the Submission of White Papers?

As you have already guessed from the title, and in line with my editorial in the March 2023 issue, I will use my space here to address two different points. First, the reader will find a summary of the contents of this issue, which is useful to those who would like to quickly navigate the issue and read only what they are interested in. The second part of this editorial will be devoted instead to better explaining the concept of a white paper, which is central to the editorial policy of this journal.

GRSM ISSUE CONTENT
This issue of IEEE Geoscience and Remote Sensing Magazine (GRSM) includes the right blend of technical articles and columns with reports and information sharing by some of the committees and directorates of the IEEE Geoscience and Remote Sensing Society (GRSS). Indeed, this issue contains three technical articles, tackling a few of the most interesting issues in Earth observation (EO) data processing. The first article, titled “Taking Artificial Intelligence Into Space Through Objective Selection of Hyperspectral Earth Observation Applications” [A1], starts from a review of the potential EO use cases that may directly benefit from onboard hyperspectral data analysis and introduces a procedure for the objective, quantifiable, and interpretable selection of onboard data analysis applications.

Similarly, the article “Onboard Information Fusion for Multisatellite Collaborative Observation: Summary, Challenges, and Perspectives” [A2] summarizes, analyzes, and studies issues involving onboard intelligent fusion processing of multisatellite data, showing how strong the need is for a shift from efficient data exploitation on the ground toward effective, although computationally lightweight, onboard data processing. Finally, the article titled “AI Security for Geoscience and Remote Sensing: Challenges and Future Trends” [A3] provides a systematic and comprehensive review of artificial intelligence (AI) security-related research in remote sensing applications.

As for column articles, there is a very interesting “Perspectives” column by Dr. Nirav Patel [A4], “Generative Artificial Intelligence and Remote Sensing: A Perspective on the Past and the Future,” devoted to explaining the background, the current status, and the possible issues related to the use of generative deep learning models for EO data. This thought-provoking article shows the increasing importance that generative deep learning methods are taking on in image processing and the attention they deserve in EO as well as in artificial data generation in general. The potential misuse of these systems, originally designed to provide additional samples to supervised interpretation techniques, opens new research paths and raises new questions about data reliability that have not been considered, so far, by the EO data user community.

The next column reports on the first Summer School of the Image Analysis and Data Fusion (IADF) Technical Committee (TC) [A5]. This TC is well known in the GRSS community and beyond because of its organization of the yearly Data Fusion Contest. A specific organizing committee made it possible to realize the first GRSS IADF School on Computer Vision for Earth Observation, held as an online event from 3 to 7 October 2022. The event was a success, with 85 selected participants. The material distributed in the school is available via the GRSS YouTube channel at https://www.youtube.com/c/IEEEGRSS.

Finally, two columns provide updates about important activities by the Standards TC [A6] and the first ideas from the brand-new Remote Sensing Environment, Analysis, and Climate Technologies (REACT) TC [A7]. Specifically, the first column provides a first outlook on the guidelines for the EO community to create or adapt Analysis Ready Data formats that integrate with various AI workflows.

Digital Object Identifier 10.1109/MGRS.2023.3277221
Date of current version: 30 June 2023



WHITE PAPER RATIONALE AND TEMPLATE DESCRIPTION
The technical contributions in this issue are the results of a procedure that started with the submission of a white paper by their respective authors. But why does GRSM require the submission of a white paper, and what exactly is a GRSM white paper?

The scope of this preliminary step, which may not be clear to all of our readership, is in fact twofold. On the one side, a GRSM white paper is meant to provide the GRSM Editorial Board with enough information to understand whether the proposed full article is in line with the requirements of the journal scope (https://www.grss-ieee.org/wp-content/uploads/2023/03/GRSM_call_for_papers_rev2023.pdf).

“GRSM publishes … high-quality technical articles that by their very nature do not find a home in journals requiring scientific innovation but that provide relevant information to scientists, engineers, end-users, and students who interact in different ways with the geoscience and remote sensing disciplines…”

On the other side, the submission of a white paper is important to the authors because they will receive preliminary feedback from expert reviewers about their full article, together with comments and suggestions to improve it. Therefore, without the burden of a full review and without the effort required to write a full article, the submission of a white paper ensures that the authors and the Editorial Board shape the full article in such a way that it will be easier to review and, hopefully, accept. In summary, the white paper stage is an opportunity for the authors to get early feedback (for example, whether their article is not a good fit for the magazine). This avoids spending time on an article that can instead be submitted to a different publication with different requirements.

According to this rationale, here are a few suggestions about the format of a GRSM white paper. First of all, its length should be limited; a white paper is expected to be six pages (or fewer) in the IEEE journal page format (A4/letter paper, two columns).

The proposed title is an important component to set the stage for the scope of the article. Good titles should not be too long, but they should not be too generic. The abstract (expected to be no longer than half a column) should state clearly whether your article is a tutorial, a review, or a special issue article, and it should describe the topic of the work and the main novelties and results of the article.



The main body of the white paper (four pages maximum) should be a shorter version of the full article with all the necessary information to understand how the full article will be structured and what the main topic, analyses, and results described in it would be. It can be structured with “Introduction,” “Methodology and Data,” and “Results” if it is a special issue article; with “Introduction,” “Review,” and “Comments” in the case of a review article; and with “Introduction,” “State of the Art,” and “Tutorial” in the case of a tutorial article. This is the place to make the case that an article will be of broad interest to the remote sensing community. It is useful to cite some references in this section. It is also important to highlight differences with other tutorials, overviews, or survey articles on related topics. Here, one should also provide a section and subsection outline of the final article. It is useful to briefly mention the content of each section and to list the (main) articles that you plan to cite in each section.

Given the importance of mathematics in remote sensing, it would be unusual to find a feature article without any equations. Due to the tutorial nature of the articles, though, it would not be typical to see the pages filled with equations either. Mathematical equations should be limited in scope and complexity. If one feels that the article needs more equations, references, figures, or tables than the magazine guidelines allow, this may be an indication that it is too technical for a magazine article and is more suitable for publication in a regular journal.

References in the white paper are a subset of the references of the full article. They should be cited and included to guide readers to more information on the topic. For feature articles, the reference list should not include every available source. For instance, in a tutorial article, one should include only the key references that are important to explain a topic and the most influential and well-understood articles for an overview.

The final part of a GRSM white paper should be the authors’ list and bios, not longer than half a page. Indeed, this part should be short enough that it will actually be read, yet long enough to explain why the authors feel that they are well placed to write the special issue article, review, or tutorial that they are willing to submit to GRSM. This list is used to determine if the authors have expertise in the area. Good feature articles will be coauthored by different research groups. This choice helps to ensure diverse perspectives on different lines of research, as expected in tutorial and overview articles.

APPENDIX: RELATED ARTICLES
[A1] A. M. Wijata et al., “Taking artificial intelligence into space through objective selection of hyperspectral Earth observation applications,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 10–39, Jun. 2023, doi: 10.1109/MGRS.2023.3269979.
[A2] G. Gao, L. Yao, W. Li, L. Zhang, and M. Zhang, “Onboard information fusion for multisatellite collaborative observation: Summary, challenges, and perspectives,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 40–59, Jun. 2023, doi: 10.1109/MGRS.2023.3274301.
[A3] Y. Xu, T. Bai, W. Yu, S. Chang, P. M. Atkinson, and P. Ghamisi, “AI security for geoscience and remote sensing: Challenges and future trends,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 60–85, Jun. 2023, doi: 10.1109/MGRS.2023.3272825.
[A4] N. Patel, “Generative artificial intelligence and remote sensing: A perspective on the past and the future,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 86–88, Jun. 2023, doi: 10.1109/MGRS.2023.3275984.
[A5] G. Vivone, D. Lunga, F. Sica, G. Taşkin, U. Verma, and R. Hänsch, “Computer vision for earth observation—The first IEEE GRSS image analysis and data fusion school,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 95–100, Jun. 2023, doi: 10.1109/MGRS.2023.3267850.
[A6] D. Lunga, S. Ullo, U. Verma, G. Percivall, F. Pacifici, and R. Hänsch, “Analysis-ready data and FAIR-AI—Standardization of research collaboration and transparency across earth-observation communities,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 89–93, Jun. 2023, doi: 10.1109/MGRS.2023.3267904.
[A7] I. Hajnsek et al., “REACT: A new technical committee for earth observation and sustainable development goals,” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 2, pp. 93–95, Jun. 2023, doi: 10.1109/MGRS.2023.3273083.
GRS



PRESIDENT’S MESSAGE
BY MARIKO BURGIN

Digital Object Identifier 10.1109/MGRS.2023.3277234
Date of current version: 30 June 2023

Letter From the President

Hello and nice to see you again! My name is Mariko Burgin, and I am the IEEE Geoscience and Remote Sensing Society (GRSS) President. You can reach me at president@ieee-grss.org and @GRSS_President on Twitter.

I hope you all had a chance to read my 2033 “press release” in the previous column [1] and continue to dream big and tell us how you see the GRSS in 2033 by sharing on social media with #GRSSin10Years. If you haven’t had the chance yet, keep posting or send me an e-mail. I would be happy to hear from you.

In this letter, I’d like to focus on the upcoming International Geoscience and Remote Sensing Symposium (IGARSS) 2023, which will take place in Pasadena, CA, USA, from 16 to 21 July 2023. If you’re not familiar, IGARSS is the GRSS’ annual flagship conference, and it is in its 43rd year. IGARSS is an invitation for sharing knowledge and experiences on recent developments and advancements in geoscience and remote sensing. For 2023, its particular focus is on Earth observation, disaster monitoring, and risk assessment.

If you head over to the IGARSS 2023 website (https://2023.ieeeigarss.org/), you might get overwhelmed by the sheer amount of information, but fear not. In this letter, I will try my best to familiarize you with the various activities and give you a few tips along the way.

IEEE GRSS-USC MHI 2023 REMOTE SENSING SUMMER SCHOOL (THURSDAY TO SATURDAY, 13–15 JULY)
It is a longstanding tradition for the GRSS to organize a GRSS school in conjunction with its IGARSS conference. This year, it will be held at the Ming Hsieh Institute (MHI) of the University of Southern California (USC) and consists of three full days of tutorials and lectures on geospatial raster and vector data handling with Python, deep learning for remote sensing data analysis, and radar interferometry and its applications. It also includes three invited lectures on space-based environmental monitoring, small satellite science and applications, and the NASA-ISRO synthetic aperture radar (NISAR) mission. Tip: Use the opportunity to learn from experts in the field before attending IGARSS to present your own research.

TUTORIALS (SUNDAY, 16 JULY)
The Sunday before IGARSS, in this case 16 July, is traditionally the day of the IGARSS tutorials. This year, IGARSS is running 14 tutorials ranging from machine and deep learning in remote sensing and Earth observation to identifying ethical issues in Earth observation research, from getting started with SAR and TomoSAR to learning about pansharpening and GNSS-R in-the-cloud drone processing for water applications (MAPEO-water), and from predictive modeling of hyperspectral responses of natural materials to a NASA Transform to Open Science (TOPS) Open Science 101 workshop, all led by renowned experts in their respective fields. Our tutorials usually book out fast, so grab your spot while they are still available. Tip: Can’t attend the GRSS school but still want to learn from the best experts in the field? Perhaps a tutorial fits your needs.

IEEE GRSS AdCom OPEN HOUSE (SUNDAY, 16 JULY)
This year, for the first time, the GRSS, the sponsoring Society of IGARSS, is organizing an Administrative Committee (AdCom) Open House on Sunday, 16 July. Over light refreshments, you will have a chance to meet the GRSS leadership: your GRSS President, Executive



Vice President (VP), CFO, Secretary, Past President, VPs and Directors, and all the volunteers working within the various GRSS committees on activities and projects for you. You will also get to meet the IGARSS 2023 Local Organizing Committee. Tip: Land in California and register for IGARSS on Sunday, and then come mingle and let us get to know each other at the Open House.

OPENING NIGHT CELEBRATION (MONDAY, 17 JULY)
On Monday evening, you are invited to Pasadena Central Park for the IGARSS 2023 Opening Night Celebration. Catch up with old friends and make new connections while enjoying music and refreshments. Tip: The Opening Night Celebration is usually a great opportunity to network.

TECHNICAL PROGRAM (MONDAY, 17 JULY TO FRIDAY, 21 JULY)
Throughout the week, we have a mind-boggling variety of technical sessions, grouped into 13 regularly recurring themes and many community-contributed sessions. These are split into oral and poster sessions. Tip: There is no easy way to figure out which technical sessions to attend. I recommend taking a good look at the session descriptions, and in addition to attending those in your specific field of expertise, also mix it up and visit a few new sessions. Who knows? You might discover a new passion, meet future collaborators, or simply learn something new.

EXHIBIT HALL (MONDAY, 17 JULY TO FRIDAY, 21 JULY)
Don’t forget to swing by the IGARSS exhibit hall to meet the IGARSS sponsors and exhibitors. Tip: Looking to find the GRSS leadership? Hang around the GRSS booth. The GRSS booth is ideal if you want to make new connections, network, or provide feedback on IGARSS and the GRSS.

TIE EVENTS (THROUGHOUT THE WEEK)
For several years, IGARSS has organized its Technology, Industry, and Education (TIE) events. This year, you can attend a tutorial on how to query, fetch, analyze, and visualize geospatial data with Python; make new friends at the Women in GRSS luncheon; and participate in a CV/resume writing workshop. Tip: The Women in GRSS luncheon is open to everyone (but is a paid event).

YOUNG PROFESSIONAL EVENTS (THROUGHOUT THE WEEK)
For our students, recent graduates, and young professionals (YPs), IGARSS is organizing a YP mixer where you can join for an evening of interactive and engaging experiences that bring together a diverse group of academics, graduate students, YPs, industry leaders, and entrepreneurs. This event is an ideal way to mix, mingle, network, and make new connections while you challenge yourself in a trivia contest and other surprises. Tip: YP events are excellent opportunities to meet new people (not just YPs) in a casual setting.

STUDENT PAPER COMPETITION (TUESDAY, 18 JULY) AND 3MT (TO BE DETERMINED)
Don’t forget to stop by the Student Paper Competition held on Tuesday, 18 July. You’ll see the final 15-min presentations of the 10 finalists who are competing for the Mikio Takagi Student Prize, which will be presented at the IGARSS Night on Wednesday, 19 July. Also don’t miss the Three-Minute Thesis (3MT) Award, where 10 master’s and doctoral students describe their research in just 3 min to a general audience with only one static slide. The three winners will receive the GRSS Excellence in Technical Communication Student Prize Awards. Tip: Thinking of honing your own elevator pitch for your thesis, project, or startup? Swing by to get some inspiration.

IGARSS NIGHT “SPACE & MAGIC” (WEDNESDAY, 19 JULY)
Did you know that the Space Shuttle Endeavour traveled 122,883,151 mi around Earth and 12 mi through the streets of Los Angeles to its current home in the California Science Center? On Wednesday night, you’ll get the unique opportunity to enjoy locally inspired food and drink while mingling underneath the Space Shuttle Endeavour and being entertained by roaming magicians and jazz music. You’ll also get included museum access, where you can learn about local ecosystems and trace the 12-mi journey of Endeavour through Los Angeles. Tip: This year, the IGARSS Night will be a standing dinner, so there will be plenty of opportunities to mix and mingle.

TABLE TENNIS TOURNAMENT (TO BE DETERMINED)
It is a long-standing IGARSS tradition to organize a friendly sports tournament, and this year, it is table tennis. IGARSS will supply the tables, paddles, and the ball; you just have to show up. Tip: Nothing bonds more than a won (or lost) sports game. Apart perhaps from fieldwork.

NASA JET PROPULSION LABORATORY TECHNICAL TOUR (FRIDAY, 21 JULY)
Round out the IGARSS week by securing a spot for a technical tour at NASA’s Jet Propulsion Laboratory (JPL). The tour is already full, but you can put your name down on the waiting list. Tip: As a JPLer myself, I highly recommend the JPL tour. Where else can you visit the center of the universe?

I hope to meet you at IGARSS 2023 in Pasadena!

REFERENCE
[1] M. Burgin, “Letter From the President [President’s Message],” IEEE Geosci. Remote Sens. Mag., vol. 11, no. 1, pp. 6–7, Mar. 2023, doi: 10.1109/MGRS.2023.3243686.
GRS



One of the most influential
reference resources for
engineers around the world.
For over 100 years, Proceedings of the IEEE has been the leading journal for engineers
looking for in-depth tutorial, survey, and review coverage of the technical developments
that shape our world. Offering practical, fully referenced articles, Proceedings of the IEEE
serves as a bridge to help readers understand important technologies in the areas of
electrical engineering and computer science.

To learn more and start your subscription today, visit ieee.org/proceedings-subscribe
To bring the “brain” close to the “eyes” of satellite missions

Taking Artificial Intelligence Into Space Through Objective Selection of Hyperspectral Earth Observation Applications

AGATA M. WIJATA, MICHEL-FRANÇOIS FOULON, YVES BOBICHON, RAFFAELE VITULLI, MARCO CELESTI, ROBERTO CAMARERO, GIANLUIGI DI COSIMO, FERRAN GASCON, NICOLAS LONGÉPÉ, JENS NIEKE, MICHAL GUMIELA, AND JAKUB NALEPA

Digital Object Identifier 10.1109/MGRS.2023.3269979
Date of current version: 30 June 2023

Recent advances in remote sensing hyperspectral imaging and artificial intelligence (AI) bring exciting opportunities to various fields of science and industry that can directly benefit from in-orbit data processing. Taking AI into space may accelerate the response to various events, as massively large raw hyperspectral images (HSIs) can be turned into useful information onboard a satellite; hence, the images’ transfer to the ground becomes much faster and offers enormous scalability of AI solutions to areas across the globe. However, there are numerous


challenges related to hardware and energy constraints, resource frugality of (deep) machine learning models, availability of ground truth data, and building trust in AI-based solutions. Unbiased, objective, and interpretable selection of an AI application is of paramount importance for emerging missions, as it influences all aspects of satellite design and operation. In this article, we tackle this issue and introduce a quantifiable procedure for objectively assessing potential AI applications considered for onboard deployment. To prove the flexibility of the suggested technique, we utilize the approach to evaluate AI applications for two fundamentally different missions: the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) [European Union/European Space Agency (ESA)] and the 6U nanosatellite Intuition-1 (KP Labs). We believe that our standardized process may become an important tool for maximizing the outcome of Earth observation (EO) missions through selecting the most relevant onboard AI applications in terms of scientific and industrial outcomes.

INTRODUCTION
Hyperspectral missions have been attracting research and industrial attention due to numerous exciting applications of such imagery that span a multitude of fields, including precision agriculture, surveillance, event detection and tracking, environmental monitoring, and many more [1]. Such imagery captures very detailed information in hundreds of contiguous narrow spectral bands, but its efficient transfer and storage are costly due to its large volume. Additionally, downlinking raw HSIs to the ground for further processing is suboptimal, as, in the majority of cases, only a subset of all available bands conveys important information about remotely sensed objects [2], [3]. Additionally, sending such large-size images is time-consuming (not to mention onerous), thus negatively impacting the response time and mission abilities, especially if undertaking fast and timely actions is of paramount importance, e.g., during natural disasters, and it can induce tremendous data delivery costs. To tackle these issues, bringing the "brain" close to the "eyes" of hyperspectral missions through deploying onboard processing is of interest and has been increasingly researched recently. Onboard algorithms can not only be deployed for data compression, band and data selection, and image quality enhancement but can also turn raw pixels into useful information before sending it to Earth, hence converting images into elaborate lightweight actionable items that are easier to transfer.

As an example, the ESA's Φ-Sat-1 mission [4] used onboard deep learning for cloud detection. Although it is the "hello, world" of remote sensing, selecting a specific application, or a set of them, and fitting it into the concept of operations (ConOps) for emerging missions is critically important, and it is necessary for proving the versatility of machine learning-powered payloads. (We may, however, expect that in the future, satellite designs and ConOps will be derived from user needs and not the opposite.) This was demonstrated in [5], where a machine learning payload for flood monitoring was updated "on the fly" and retrained by exploiting knowledge transferred across two different optical payloads. Such use cases should not only bring real value to the end users, community, and technology but also affect satellite design, especially if they are solved using recent deep learning advances, which are commonly memory and energy hungry. Unfortunately, there is no standardized, traceable, reproducible, and quantifiable process that can be followed to objectively assess and select the use case(s) for such missions. We address this important research gap.

We have been observing an unprecedented tsunami of AI algorithms for solving EO tasks in a plethora of fields, including precision agriculture, detection of natural disasters, monitoring industrial environmental effects, maritime surveillance, smart compression of HSI, and more [6]. As with any other state-of-the-art processing function, deploying AI onboard satellites is challenging due to the hardware- and energy-constrained environment, acquisition strategies, and data storage [7]. Note that AI is not just deep learning, as people from outside the machine learning field might think. AI was defined by John McCarthy (an emeritus professor at Stanford University) as "the science and engineering of making intelligent machines." Then, machine learning is a subfield of AI focused on building computer programs that can improve their operation based on experience and data, and deep learning is one family of techniques toward approaching that. Here, data-driven approaches trained in a supervised way require large amounts of high-quality, heterogeneous, and representative ground truth data to build models. Since the world is not labeled, creating such ground truth datasets is human dependent, time-consuming, and costly, especially if in situ measurements should be performed. Since it is impossible to capture real HSI for emerging missions, as the sensor is not operating in orbit before launch, data-level digital twins have been proliferating to simulate target imagery based on existing HSI by "injecting" atmospheric conditions and noise into the data [8]. This approach is pivotal to quantify the robustness of models against in-orbit acquisition. Also, we need to figure out ways of producing representative enough data so that the performance of a solution can be validated before flight, especially in data-driven approaches. Preflight validation is even more important because "trust" still needs to be built in this field. Unless there are very similar data from



another mission (which is unlikely, at least for institutional missions), the best way of producing these data is to use complex instrument/observation simulators fully representative of the expected instrument design. These simulators need, in turn, input data with a higher spatial and spectral resolution (generally from costly aerial campaigns). Therefore, there is a huge effort involved even before the annotation work starts. On the other hand, there exist approaches for synthesizing training data based on limited samples [9] and for training from limited training samples by using transfer learning [10]. Finally, there are strategies, such as semisupervised, self-supervised, and unsupervised learning, which operate over small training samples [11], [12].

Although recent advances in data-driven algorithms for EO have changed the way we process remotely sensed data on the ground, designing and validating onboard machine learning models is strongly affected by several factors independent of the EO mission. They relate to the tradeoff among an algorithm's complexity and hardware constraints, the availability of ground truth datasets, and the characteristics of the considered application. Such issues impact the entire EO mission, due to the technological difficulty of developing onboard AI applications. These constraints must be carefully considered while planning a mission. To the best of our knowledge, there are no quantifiable approaches allowing the assessment of potential applications in an unbiased way.

CONTRIBUTION
In this article, we approach the problem of the quantifiable selection of EO applications to be deployed onboard an AI-powered imaging satellite of interest. We aim at demystifying this procedure and suggest an objective way of evaluating use cases that can indeed benefit from in-orbit processing. We introduce a set of mission-specific and mission-agnostic objectives and constraints that, in our opinion, should be thoroughly analyzed and assessed before reaching a final decision on a use case(s) to be implemented. The flexibility of our evaluation procedure is illustrated based on two example case studies of fundamentally different missions. Overall, our contributions revolve around the following points:
◗ We present a synthetic review of potential EO use cases that may directly benefit from onboard hyperspectral data analysis in an array of fields, ranging from precision agriculture, surveillance, and event and change detection to environmental monitoring (the "HSI Analysis: A Review" section). We discuss why bringing such data analysis (here, in a form of AI) onboard a satellite may become a game changer for those downstream tasks.
◗ We introduce a procedure for the objective, quantifiable, and interpretable selection of onboard data analysis applications that can be utilized for any EO mission (the "Objective and Quantifiable Selection of Onboard AI Applications" section). Although we focus on AI-powered solutions, our approach is generic enough to be seamlessly applied without any particular family of solutions in mind (i.e., the same logic can be followed for other algorithms, as well).
◗ We exploit our evaluation procedure to assess potential AI applications for two EO missions: CHIME and Intuition-1 (the "Case Studies" section). Therefore, we illustrate the flexibility of the suggested approach in real-life missions and show that mission-agnostic objectives and constraints may be evaluated once and conveniently propagate as input to other missions to ease the assessment process. Also, we show that our technique allows practitioners to simulate various mission profiles through the weighting scheme, which affects the impact of specific objectives and constraints on the overall use case score.

We brought together a unique team of professionals with different backgrounds, including advanced data analysis, machine learning and AI, space operations, hardware design, remote sensing, and EO to target a real-life challenge of deploying AI in space. We believe that the standardized procedure of selecting appropriate AI applications for emerging EO missions established in this article can be an important step toward more objective and unbiased mission design. Ultimately, we hope that our evaluation will maximize the number of successful EO missions by pruning AI applications that are not mature enough to be implemented and would not bring important commercial and scientific value to the community.

ARTICLE STRUCTURE
This article is structured as follows. The "HSI Analysis: A Review" section presents a concise yet thorough review of applications that may be tackled using hyperspectral remote sensing. In the "Objective and Quantifiable Selection of Onboard AI Applications" section, we introduce our objective and quantifiable procedure for selecting a target AI application for onboard implementation, in which we consider both mission-independent and mission-specific objectives and constraints affecting the prioritization of each application. The evaluation procedure is deployed to analyze two missions (CHIME and Intuition-1) in the "Case Studies" section. The "Conclusions" section provides conclusions. Finally, Table 1 gathers the abbreviations used in this article.

HSI ANALYSIS: A REVIEW
In this section, we provide an overview of potential EO use cases that can be effectively tackled through hyperspectral data analysis and for which such imagery can bring real value. Although we are aware that there are very detailed and thorough review papers on HSI analysis in EO, including those by Li et al. [13], Audebert et al. [1], and Paoletti et al. [6], we believe that our overview serves as important background and concisely consolidates the current state of



the art in the context of the deployment of AI techniques onboard the CHIME and Intuition-1 missions.

THE ROAD MAP
We focus on the following fields, which can directly benefit from the advantages of onboard hyperspectral data analysis. They include
◗ agricultural applications (the "Agricultural Applications" section)
◗ monitoring plant diseases and water stress (the "Monitoring Plant Diseases and Water Stress" section)
◗ detection and monitoring of floods (the "Detection and Monitoring of Floods" section)
◗ detection of fire, volcanic eruptions, and ash clouds (the "Detection of Fire, Volcanic Eruptions, and Ash Clouds" section)
◗ detection and monitoring of earthquakes and landslides (the "Detection and Monitoring of Earthquakes and Landslides" section)
◗ monitoring industrially induced pollution (the "Monitoring Industrially Induced Pollution" section), further split into dust events (the "Dust Events" section), mine tailings (the "Mine Tailings" section), acidic discharges (the "Acidic Discharges" section), and hazardous chemical compounds (the "Hazardous Chemical Compounds" section)
◗ detection and monitoring of methane (the "Detection and Monitoring of Methane" section)
◗ water environment analysis (the "Water Environment Analysis" section), including marine litter detection (the "Marine Litter Detection" section), detection of water pollution (the "Detection of Water Pollution" section), detection of harmful algal blooms (HABs) and water quality monitoring (the "Detection of HABs and Water Quality Monitoring" section), and maritime surveillance (the "Maritime Surveillance" section).

AGRICULTURAL APPLICATIONS
The nature of activities in the agricultural sector has changed as a result of broadly understood human activity concerned with the rapidly growing population, environmental pollution, climate change, and depletion of natural resources. The premise of precision agriculture is effective food production with a reduced impact on the environment. To achieve this goal, we need to be able to assess soil quality, irrigation, fertilizer content, and seasonal changes that occur in the ecosystem. Estimating the yields planned for a given region may also convey important information related to the effectiveness of implemented practices [14].

Remote sensing may become a tool enabling the identification of soil and crop parameters, due to its scalability to large areas. Approaches using multispectral images (MSIs) are mainly based on the content of chlorophyll and, on that basis, the estimation of other parameters [15], [16], [17]. Hyperspectral imaging enables capturing more subtle characteristics of areas, including various abnormalities and plant diseases [18]. It provides a tremendous amount of spectral–spatial information, which may be used to estimate the volume of crops and soil quality [19] as well as to predict the effectiveness of fertilization [20], analyze plant growth [21], and extract vegetation indices [21]. These coefficients can be exploited to estimate and monitor biomass and assess soil composition and moisture [22]. There are lots of in situ campaigns focusing on HSI analysis captured by manned and unmanned airplanes [14], [21]. Such measurements are necessary to develop and verify the approaches that will be deployed onboard an imaging satellite [8]. Such airborne data have been recently used in the HYPERVIEW Challenge organized by KP Labs, the ESA, and QZ Solutions, which was aimed at estimating soil parameters from HSIs. (For more details, see https://platform.ai4eo.eu/seeing-beyond-the-visible. The challenge attracted almost 160 teams from across the globe, and the winning solution will fly on the Intuition-1 satellite mission.)

MONITORING PLANT DISEASES AND WATER STRESS
Plant diseases and parasites have a serious impact on the production of cereals and, subsequently, the functioning of the economy and food safety [23]. Currently, the assessment of vegetation conditions is carried out using a range of in-field methods, which lacks scalability [24].

TABLE 1. THE ABBREVIATIONS USED IN THIS ARTICLE.

ABBREVIATION   MEANING
AI             Artificial intelligence
AIU            AI unit
AMD            Acid mine drainage
AVIRIS         Airborne Visible/Infrared Imaging Spectrometer
CHIME          Copernicus Hyperspectral Imaging Mission for the Environment
CNN            Convolutional neural network
ConOps         Concept of operations
DPU            Data processing unit
EO             Earth observation
ESA            European Space Agency
GSD            Ground sampling distance
HAB            Harmful algal bloom
HSI            Hyperspectral image
MSI            Multispectral image
NIR            Near infrared
SAR            Synthetic aperture radar
SWIR           Short-wave infrared
TIR            Thermal infrared
TRL            Technology readiness level
UAV            Unmanned aerial vehicle
VAU            Video acquisition unit
VNIR           Visible and NIR
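The vegetation indices discussed in the "Agricultural Applications" section reduce a hyperspectral cube to interpretable per-pixel scores. As a minimal illustration, and not code from any of the cited systems, the following sketch computes a normalized difference index (here, NDVI) from the bands nearest to assumed red and NIR wavelengths; the band positions and cube layout are hypothetical:

```python
import numpy as np

def nearest_band(wavelengths_nm, target_nm):
    """Index of the band whose center wavelength is closest to target_nm."""
    return int(np.argmin(np.abs(np.asarray(wavelengths_nm, dtype=float) - target_nm)))

def ndvi(cube, wavelengths_nm, red_nm=670.0, nir_nm=860.0):
    """NDVI = (NIR - red) / (NIR + red), per pixel, for an H x W x B reflectance cube."""
    red = cube[..., nearest_band(wavelengths_nm, red_nm)].astype(float)
    nir = cube[..., nearest_band(wavelengths_nm, nir_nm)].astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

# Toy example: a 2 x 2 scene with five hypothetical band centers (nm).
wl = [490, 560, 670, 710, 860]
cube = np.random.default_rng(0).uniform(0.0, 1.0, size=(2, 2, 5))
print(ndvi(cube, wl).shape)  # (2, 2)
```

In practice, band centers come from the instrument's spectral response, and an onboard implementation would favor precomputed band indices over runtime lookup.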


Here, remote sensing, which captures detailed plant characteristics by using, e.g., hyperspectral sensors, brings important practical advantages of being noninvasive and inherently scalable over larger areas. In the case of ground-based methods, a hyperspectral camera is commonly placed at a distance of about 30 cm above the test object, with constant lighting conditions [25]. Since various parts of the available spectrum are important in different use cases, the raw hyperspectral data are often preprocessed with band selection methods [26]. To automate the analysis process of highly dimensional hyperspectral data, an array of machine learning methods was introduced for monitoring plant diseases [18]. Deep learning techniques constitute the current research focus in the field, with deep convolutional neural networks (CNNs) playing the leading role [27], [28], [29]. On the other hand, there are approaches that strongly benefit from handcrafted features and vegetation indices that may be extracted from satellite MSIs [23].

The impact of global climate change is manifested by drought in terrestrial ecosystems. Drought is one of the main factors of environmental stress; therefore, estimating the water content in vegetation is one of the critically important practical challenges faced today [30]. Based on this information, it is possible to monitor the physiology and health of forests and crops [31] by extracting an array of quantifiable drought-related parameters. Such parameters include, but are not limited to, the canopy water content (reflecting water conditions at the leaf level and information related to the structure of the canopy, based on remote sensing data [32]), the leaf equivalent water thickness (estimating the water content per unit of a leaf's area [33]), and the live fuel moisture content [34]. There are, indeed, in-field methods that may allow us to quantify such parameters, but they are not scalable; hence, they are infeasible for large areas of interest [30].

DETECTION AND MONITORING OF FLOODS
Between 1995 and 2015, flooding affected more than 2.2 billion people, representing 53% of all people affected by weather-related disasters. Detecting and monitoring floods in vulnerable areas is extremely important in emergency activities to maintain the safety of the population and diversity of the underlying ecosystems [35]. Today, the most important source of information about floods is satellite data; both MSI and synthetic aperture radar (SAR) data are used to detect and determine floods' extent [35], [36]. A key element in satellite-based flood monitoring is the need for rapid disaster response.

Detection and assessment of floods are difficult due to the environmental impact of the water present in affected areas [37]. SAR has great potential to monitor flood situations in near real time, due to its ability to monitor Earth's surface in all weather conditions [36]. Such data have been successfully exploited in various deep learning-powered solutions toward the detection of floods [36], [38]. Here, temporal information captured within time series data plays a key role in quantifying and understanding the evolution of flood maps. Detection of flooding areas is also performed based on MSIs, commonly with the use of the Normalized Difference Water Index [35]. Interestingly, data-driven solutions may benefit from the public WorldFloods database, which currently contains 422 flood maps [35].

DETECTION OF FIRE, VOLCANIC ERUPTIONS, AND ASH CLOUDS
Fires are an example of a natural threat that destroys natural resources and causes extensive socioeconomic damage. The nature of a fire is primarily influenced by the type, flammability, and amount of fuel present in an area. Climate change and natural conditions, such as topography and wind, are also important factors here. Historically, the estimation of a fire area was carried out using field methods based on GPS data. However, only the perimeter of the area can be determined, and this approach is limited by the difficulty in reaching an area covered by an active fire and the risk of unevenness of burning areas [39]. The nature of a fire also changes over time, due to the fire's effect of increasing the volume of charred vegetation and changing the temperature and level of humidity. Importantly, the decreasing amount of chlorophyll as a result of combustion causes changes in the characteristics of the spectral signature of acquired MSIs/HSIs [40]. This information allows us to assess whether an area is experiencing an active fire (because there are changes over time), the area is burned (because the chlorophyll content is low), and there is a risk of the active area redeveloping (partial fuel burnout) [41].

The monitoring and prevention of fires in Europe are carried out by the European Forest Fire Information System, which is part of the Copernicus EO program, in which the monitoring process exploits the 13-band MSI data captured by the Sentinel-2 mission. The key source of information here is the red-edge band, which is one of the best descriptors of chlorophyll, whereas the assessment of a fire condition is most often based on vegetation indices [40]. Such analysis can be, however, automated using a variety of machine learning approaches operating on MSIs and HSIs [42], [43]. Accurate detection of active areas of fire, obtained in a safe way, is an important advantage over methods based only on the perimeter of an area, including both active and burned parts [40]. Moreover, the application of remote



sensing techniques allows us to monitor fire progress over time [42] and to perform early fire detection [44]. The use of airborne methods in this situation is limited due to smoke and high temperatures, to which drones equipped with sensors are sensitive. The wide scope and scalability of satellite imaging is also important to identify active hot spots. Unmanned aerial vehicles (UAVs) can be additional sources of information, but they are supplemental due to their lack of continuity in time [40].

Among the threats induced by volcanic ash (being a direct result of an eruption), we can indicate air and water pollution, climate change [45], and effects on aviation safety; therefore, monitoring and forecasting the location of volcanic ash clouds is critically important [46]. Additionally, falling dust, containing silicate particles, causes diseases of the respiratory system and contaminates drinking water [45]. Satellite remote sensing techniques are one of the most widely used tools for monitoring and locating volcanic ash clouds [46]. Their advantage over terrestrial methods is the lack of the need to install and maintain measuring instruments in hard-to-reach, often dangerous locations [45]. An example of an existing solution is HOTVOLC, which is a Web geographic information system based on data from the Spinning Enhanced Visible and Infrared Imager aboard the Meteosat geostationary satellite. The system exploits the thermal infrared range (8–14 μm) to distinguish silicate particles from those of water and sulfuric acid [45]. Utilizing the thermal infrared (TIR) band also allows us to determine the height of an ash cloud based on temperature, which is critical information for aviation safety [45]. The use of a geostationary satellite enables monitoring volcanic activity and dust clouds 24 h a day [45], but this monitoring can also be performed cyclically using the Moderate Resolution Imaging Spectroradiometer, in which both the visible light bands [47] and TIR [48] are used.

There exists a correlation of MSI data with various quantifiable indicators for the assessment of volcanic activity [49]. End-to-end monitoring solutions commonly exploit other data modalities, such as SAR, to detect the deformation of volcanoes that precedes eruptions [50]. Afterward, the detection of volcanic eruptions may be effectively performed using classic [48] and deep [51] machine learning techniques operating on MSI data. An important challenge concerned with this approach is the rapid darkening of lava, even though its temperature is still high, which may result in an incorrect volcano rating status if a satellite image is captured too late. Extending the analysis to capture high-temperature information available from short-wave infrared (SWIR) allows us to classify volcanic activity more precisely [52]. Using spectral information, the presence of sulfur dioxide (contained in volcanic eruptions) can also be identified at an early stage of an eruption [52]. Finally, the detection of volcanic ash clouds by using the 400–600-nm bands from HSI can enable us to determine the activity of a volcano, due to the possibility of separating ash clouds from water clouds [47] (hence, satellite imaging may be useful even in the presence of clouds). Changes in the accuracy of volcanic ash cloud detection are also observed due to changes in meteorological conditions induced by seasonality during the year [53].

DETECTION AND MONITORING OF EARTHQUAKES AND LANDSLIDES
Detecting an area affected by an earthquake allows us to estimate the scale of damage and plan rescue activities [54], and precise information about the effects of an earthquake obtained quickly increases the effectiveness of mitigating their impact [55]. Monitoring also allows for preventive actions [56]. The extent of the damage caused by an earthquake often exceeds the capacity of ground equipment, due to the dangerous and difficult access to some areas and the risk of further negative phenomena related to infrastructure damage and the limited operating time of the devices. The major drawback of using terrestrial tools working at the microscale is the limited area that can be assessed in a given time; therefore, macroscale imaging with satellites can increase the scalability of monitoring solutions. Most often, the purpose of detecting damage caused by an earthquake is to identify landslides and affected buildings. In the case of landslides, the assessment is carried out using tools that are divided into three categories: 1) analysis of landslide image features by using optical data, including remote airborne and satellite sensing data [54], [57]; 2) detection of surface deformation and deposition due to landslides by using radar data [58]; and 3) fusion of optical and radar data [59], [60].

Detection of damaged buildings can be carried out based on the geometrical features of the five main types of damage: 1) sloped layers, 2) pancake collapses, 3) debris heaps, 4) overturn collapses, and 5) overhanging elements [61]. These elements can be identified using ground methods, but the use of MSIs and HSIs allows for executing this procedure in a much shorter time, given that a sufficiently high spatial resolution is maintained [62]. Damage to buildings, especially in highly urbanized areas, must be mapped in a short time to improve (and even enable) rescue operations. For historical buildings and networks of tunnels, it is pivotal to efficiently map the damage. The main characteristics of urban rubble to be assessed encompass location, volume, weight, and the type of collapsed building materials, including hazardous components (e.g., asbestos), which can have a major impact on further operations [59]. Due to the large spectral and spatial heterogeneity of urbanized areas, classic methods may become insufficient due to changes, e.g., caused by attempts to remove debris. Thus, capturing subtle spectral differences for such areas may play a key role in the urban environment.

Current research concerning the analysis of optical data acquired in areas affected by earthquakes focuses on MSIs. The use of 450–510- (blue), 520–580- (green),



655–690- (red), and 780–920-nm [near-infrared (NIR)] bands was shown to be sufficient both in the detection of landslides and assessing the damage using deep learning [61]. The detection stage is often preceded by image enhancement using, e.g., pan sharpening [54] and by reducing the dimensionality of the data [55]. Nevertheless, most of the methods based on CNNs skip preprocessing [57], [63] and benefit solely from representation learning. Utilizing pretrained networks of various architectures can help us tackle the important issue of limited and imbalanced ground truth optical data that could be used to train such large-capacity learners from scratch [54], [62]. Such images are commonly obtained from public sources and manually labeled. Also, one earthquake may be imaged by different satellites. For example, the images used in [61] and [62] from two satellites (Geoeye-1 and QuickBird) captured the damage caused by the 2010 Haiti earthquake, but the reported methods operated on comparable bands.

MONITORING INDUSTRIALLY INDUCED POLLUTION
Industrially induced pollution is a threat to terrestrial and aquatic ecosystems. The impact and nature of this human–nature interaction can be determined by detecting various phenomena and monitoring them over time. The impact of industrial pollution can be divided into several groups: in the case of air, there are dust events (the "Dust Events" section) [64], whereas in inland areas, we commonly monitor mines (the "Mine Tailings" section) [65] as well as water acidification (the "Acidic Discharges" section) [66]. Hazardous chemicals should also be observed in agricultural areas, due to their interaction with vegetation processes (the "Hazardous Chemical Compounds" section) [67], [68]. In each of these cases, it is advisable to detect a phenomenon and monitor it over time in a way that enables determining qualitatively and quantitatively its impact on the environment.

DUST EVENTS
Air quality affects the health of humans and animals; therefore, measures are taken to monitor the concentrations of dust particles in the air. In China, it is estimated that air pollution contributes to 1.6 million deaths annually, accounting for about 17% of all deaths in the country [69]. The key air assessment parameter is dust with a diameter below 2.5 μm (PM 2.5) [64], and appropriately responding to the real-time PM 2.5 value can significantly reduce harmful effects on the human respiratory and circulatory systems [70] through, e.g., avoiding overexposure and striving to reduce the level of pollution [71]. Estimating the spatial distribution of the PM 2.5 concentration from satellite data was shown to be feasible using deep learning in both low- and high-pollution areas [64]. The summer and spring periods are often characterized by low concentration values, while high concentrations are observed during the winter period [70]. Additionally, underestimation of the PM 2.5 value may be caused by local weather conditions, such as snow and rain (and clouds) as well as wind in coastal locations [64]. Also, the level of urbanization of an area translates into the concentration value and may be a source of significantly larger errors when estimating PM 2.5 [64], [73].

The use of satellite images allows for estimating pollutant concentrations, while the use of solutions based on geostationary satellites enables determining the spatial distribution of concentrations every several minutes, thus indicating sudden increases in the PM 2.5 value [64]. Extending optical data with natural and social factors, such as, e.g., traffic information, directly translates into an improvement in the accuracy of concentration mapping in urban conditions and monitoring over time [71], [73]. The use of remote sensing also enables the estimation of pollutant concentrations in rural areas, where networks of ground sensors are absent or extremely limited [69]. Validation of estimation algorithms can be done based on values obtained by ground sensors [74], an example of which is the Aerosol Robotic Network optical depth network [70], [75]. Data collected in this way allow for effective verification in daily, monthly, and yearly cycles [64], [70].

MINE TAILINGS
Mining activities have a strong environmental impact, both in terms of underground and open-pit mining. A typical example of an impact is acid mine drainage (AMD) and acid rock drainage [76]. Mine wastes are the main difficulty in the rehabilitation of former mining sites, and they have a negative impact on soil and water ecosystems, due to their toxicity [65]. The challenge is also to accurately determine their influence on the environment, which requires monitoring and systematic data collection over time [77]. The cause of the formation of AMD is sulfide minerals, which, through the action of atmospheric oxygen and water, are oxidized, resulting in the release of ions, such as H⁺, Fe²⁺, and SO₄²⁻ [78]. The result of the reaction is sulfuric acid [79], which lowers the natural pH of Earth's surface [78]. A significant decrease in pH causes further reactions, which result in the release of metals and metalloids, such as Fe, Al, Cu, Pb, Cd, Co, and As, and sulfates from the soil. The released heavy metals penetrate soil, aquifers, and the biosphere, accumulating in vegetation and thus posing a threat to humans and animals, increasing the potential toxicity for agriculture, aquifers, and the biosphere [80].

The mapping of an area corresponding to AMD, but also other minerals, and the estimation of the pollution level are carried out using in-field, air [77], and sat-
portantly, the level of PM 2.5 depends on seasonality ellite [65], [81] methods. The use of remote sensing to
[72]; the process of gathering ground truth data should estimate pollution maps using MSIs [65], [78] and HSIs
reflect that to enable training well-generalizing models. [77] requires preparing the reference data, the source of
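The AMD chemistry outlined in this section (sulfide oxidation releasing H+, Fe2+, and SO4^2-, followed by acid generation) is conventionally summarized by the textbook pyrite oxidation reactions. The equations below are the standard formulation, not taken from [78] or [79]:

```latex
% Oxidation of pyrite by atmospheric oxygen and water:
\mathrm{FeS_2} + \tfrac{7}{2}\,\mathrm{O_2} + \mathrm{H_2O}
  \longrightarrow \mathrm{Fe^{2+}} + 2\,\mathrm{SO_4^{2-}} + 2\,\mathrm{H^+}
% Further oxidation of ferrous to ferric iron:
\mathrm{Fe^{2+}} + \tfrac{1}{4}\,\mathrm{O_2} + \mathrm{H^+}
  \longrightarrow \mathrm{Fe^{3+}} + \tfrac{1}{2}\,\mathrm{H_2O}
% Hydrolysis of ferric iron, releasing additional acidity:
\mathrm{Fe^{3+}} + 3\,\mathrm{H_2O}
  \longrightarrow \mathrm{Fe(OH)_3} + 3\,\mathrm{H^+}
```

The ferric iron appearing in these reactions is the species whose spectral signatures are exploited for remote AMD mapping, as discussed in the following.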

16 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE JUNE 2023


Another method is to identify contamination in remotely sensed images by using known spectral signatures [82]. Interestingly, analysis of satellite and airborne data combined with in situ measurements enables the identification of bands containing spectral signatures characteristic of chemical reactions involving Fe3+, which result in the formation of sulfuric acid [79]. The suggested wavelength ranges are 560–782 nm [76] and 700–900 nm [65] when processing the data collected by the Sentinel-2 mission [78], which, due to the higher resolution of its visible and NIR (VNIR) bands, is more explored in the area of ferrous minerals than, for example, data from the Landsat program. It was shown that mineral mapping depends on the quality of the image spectra and the season of the year [77], and it is influenced by the spatial resolution of the images [with a 15-m ground sampling distance (GSD) giving sensible results] [82]. Mineral mapping can also employ machine learning techniques [83].

ACIDIC DISCHARGES
Chemical reactions of sulfide minerals in mine areas, which lower the pH of the soil, affect the quality of nearby water [84]. The ultimate effect is to lower the pH of mine waters, which directly poses a threat to the health of miners [77]. Groundwater and surface water are important resources for the health of humans and animals; therefore, their acidification (affecting surrounding rivers, water reservoirs, and soil) is a serious threat [83], [85]. Mine water purification and reclamation involve an expensive process carried out through the oxidation of sulfide minerals, resulting in minerals that require further disposal [84]. Timely detection of the ecohydrological hazard in a given area, therefore, contributes to environmental safety [79], [85], not only due to the assessment of water quality but also thanks to the assessment of conditions in local mines [86]. It is also possible to forecast the quality of surface water based on the monitoring of mine waste by using HSI [79], [87].

As in the case of soil analysis in former mining areas, the assessment of the impact of mining on water may be carried out by combining data collected through in situ measurements with remotely sensed data [85], [88], [89] and airborne methods [82]. Based on the methods of pixel classification and those exploiting water rating indicators [90], hydrological maps are created that show the spatial distribution of mineral compounds and allow us to prepare a model for the transport of chemicals [91]. The spectral analysis of water is commonly performed using VNIR (350–1,000 nm), which may mark the occurrence of mine water in rivers [83]. In this context, mapping water quality is difficult due to vegetation in rivers and reservoirs (and due to the varying spatial resolution of HSIs) [66]. The spectral characteristics of acidic water measured with a spectrometer revealed the spectral effect of green vegetation, similar to the effect of water depth and transparency [66]. An acidified environment favors the development of some species of algae, which also causes the water color to change over time [88], [92]. Finally, seasonality is important in terms of water acidification, due to changing weather and plant conditions [89]; hence, acquiring ground truth information is both spatially and temporally challenging [87].

HAZARDOUS CHEMICAL COMPOUNDS
Soil is an open system that interacts with the atmosphere, hydrosphere, and biosphere through the exchange of matter and energy [68]. The structure and quality of soil are strongly related to the mineral composition of the area and to its contamination [93]. Pollutants penetrate vegetation, thus posing a direct threat to crops and, indirectly through accumulation, to humans and animals [68]. The most serious soil contamination is related to heavy metals, whose sources include the progressive urbanization and industrialization of larger areas as well as mining [68]. Therefore, elaborating mineral contamination maps is an essential element of risk estimation and monitoring to manage food safety [68].

A map of heavy metals can be obtained on the basis of soil and vegetation samples subjected to X-ray fluorescence spectrometry and spectral analysis in the range of visible light (400–700 nm) and NIR (700–2,500 nm) [68]. As in the case of AMD, the analysis of samples and the determination of substance concentrations are not sufficient to assess the spatial distribution of heavy metals, but they may be the basis for the preparation of ground truth data exploited by data-driven mapping algorithms [93], which benefit from MSIs [93], [94] and HSIs [95]. Unpolluted and contaminated soils have different spectral properties in the bands corresponding to heavy metals, which may be the basis for the classification of an area as potentially toxic [93]. Due to the different bioavailability and toxicity of chemical substances, mapping the spatial distribution of these substances is more important than determining their exact concentrations [68]. In the case of data for areas contaminated with heavy metals, increasing values of the spectral reflectance in 500–780 nm and 1,200–2,500 nm, and decreasing values in 780–900 nm, are observed [93]. Satellite images captured, e.g., using Hyperion and Landsat-8, as well as those obtained by airborne methods [95], allow for detecting heavy metals by using AI [92], [93]. An important element influencing the detection process is the concentration of metals [94], which is also influenced by environmental factors, such as soil moisture content and surface roughness [68]. High concentrations of heavy metals occur most often in mine and slag areas [93] and in areas rich in organic matter [94].
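The reflectance trends just described (increases in 500–780 nm and 1,200–2,500 nm with a decrease in 780–900 nm [93]) can be turned into a crude per-pixel screening rule. The sketch below only illustrates that idea, assuming a clean reference spectrum is available; the window bounds follow the text, but the decision rule and the inclusive band edges are our assumptions, not a validated algorithm:

```python
import numpy as np

# Illustrative screening rule based on the reflectance trends reported
# for heavy-metal-contaminated soils [93]: reflectance increases in
# 500-780 nm and 1,200-2,500 nm and decreases in 780-900 nm relative
# to unpolluted soil. The reference spectrum, inclusive band edges,
# and the all-or-nothing decision rule are assumptions of this sketch.
WINDOWS = {"up_vis": (500, 780), "down_nir": (780, 900), "up_swir": (1200, 2500)}

def window_mean(wavelengths, spectrum, lo, hi):
    """Mean reflectance over the [lo, hi] nm window."""
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    return float(spectrum[mask].mean())

def flag_contamination(wavelengths, pixel, reference):
    """True if the pixel follows the contamination-related trend
    with respect to a clean reference spectrum."""
    delta = {
        name: window_mean(wavelengths, pixel, lo, hi)
        - window_mean(wavelengths, reference, lo, hi)
        for name, (lo, hi) in WINDOWS.items()
    }
    return delta["up_vis"] > 0 and delta["up_swir"] > 0 and delta["down_nir"] < 0
```

In practice, such a rule would only be a pre-screening step feeding the data-driven mapping algorithms discussed above, which additionally account for concentration levels, soil moisture, and surface roughness.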

On the other hand, concentrations are lower in arable lands. HSIs can help us detect lower concentrations and allow for the monitoring of subtle changes in the environment [67]. Finally, it is possible to optimize HSI data for noise suppression and amplification of the signals of the metal of interest [67].

DETECTION AND MONITORING OF METHANE
Methane is one of the most important greenhouse gases with a strong climate impact [96], [97]; its global warming potential is 28 times greater than that of carbon dioxide, due to its significant influence on the global radiation balance [98]. This phenomenon causes changes in ecosystems that can be estimated by determining the concentration of methane [99]. Methane has been observed from both natural [99] and anthropogenic sources [100], such as industry [101], agriculture [102], and landfills [103]. The detection and monitoring of methane can, thus, form the basis for managing anthropogenic sources to reduce the gas's emission [98]. These tasks are, however, challenging due to the dependence on the shape and size of methane plumes at specific sources, such as oil and gas infrastructure, warehouses, and pipelines [104]. Remote sensing may have the potential to detect methane leaks so that they can be repaired, but for this purpose, high-resolution [105] and cyclic observations are required [106]. Methane absorption is observed in the NIR spectral range (780–3,000 nm) and the midinfrared (3,000–5,000 nm), particularly at 1,650, 2,350, 3,400, and 7,700 nm [96]. Mapping the emission of methane plumes is commonly associated with SWIR range analysis (1,600–2,500 nm), due to the two absorption bands present in this range (the weaker, at 1,700 nm, and the stronger, at 2,300 nm) [101]. Although most sensors are not optimized for gas analysis, it is possible to map an observed area while benefiting from known spectral signatures [101]. Here, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)-C and AVIRIS-NG airborne sensors manifested agreement of the determined emission value with in situ measurements [108]. Also, a comparison of the Hyperion and airborne AVIRIS-C cameras shows that both present similar plume morphology [109], which indicates the possibility of using orbital tools to continuously monitor gas infrastructure. A similar comparison of the detection of methane plumes, carried out between simulated data from, e.g., the Hyperspectral Precursor of the Application Mission and CHIME sensors, and data from AVIRIS-NG, suggests the feasibility of detecting point sources by using satellite sensors [105]. This is important due to the possibility of reducing the costs of repeated image acquisition by using a sensor located on a satellite, which offers enormous scalability [106] together with the ability to track the temporal variability of emissions [108]. Methane detection directly benefits from the sensor's spatial resolution, as sensors with a resolution coarser than 1 km are not sufficient to locate the plumes [105]. Finally, the detection of methane in an oil shale extraction area was verified using a thermal camera [96].

WATER ENVIRONMENT ANALYSIS
Water is one of the basic elements of all ecosystems; therefore, its quality is an important element that affects humans, animals, and vegetation. Water pollution may have a different nature and source, including littering (the "Marine Litter Detection" section) and oil leaks and spills (the "Detection of Water Pollution" section). Both are caused by human activity during a relatively short period of time; thus, their detection, evaluation, and monitoring are of utmost importance. Since the ocean makes up 70.8% of Earth's surface, assessment of the scale of the phenomenon of moving pollutants in marine waters requires the use of remote sensing methods. The quality of water may also be affected by algal blooms (the "Detection of HABs and Water Quality Monitoring" section), and oceanic ecosystems may be influenced by coastal and inland pollution, which should be monitored as well. Apart from maintaining water quality, ensuring the security of water can be supported with EO applications (the "Maritime Surveillance" section).

MARINE LITTER DETECTION
Floating marine litter, such as damaged fishing nets, plastic bottles and bags, wood, rubber, metals, and even shipwrecks, involves hazardous objects that significantly affect the environment. It is estimated that 5 to 13 million tons of litter ended up in the marine environment in 2010 [110], whereas in 2016, it was 19 to 23 million tons of plastic debris [111]. It is predicted that, by 2030, the mass of debris discharged into the marine environment could reach 53 million tons [111]. Such contaminants are characterized by a significant variation in their composition [112], which is a challenge in the process of their detection and tracking [113]. The location of macroplastic, which is often accumulated by processes such as river plumes, windrows, oceanic fronts, and currents, forms the basis for further actions to remove it [114]. Preliminary works suggest that the debris is similar in both size and shape across different plastic islands [113]. Nevertheless, some spectral variation was observed, which was likely caused by differences in the optical properties of the objects, the level of immersion in the water, and the intervening atmosphere. Identification of plastic materials may be associated with the use of unique absorption features, which manifest in the range of 800–1,900 nm for polymers [115]. The spatial resolution of the sensor is of great importance in the case of contamination detection because, with the increase of a pixel's area, the possibility of detecting smaller debris decreases [116].
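Matching observed spectra against known (library) signatures, as mentioned for methane [101] and for polymer absorption features above, is often implemented with a shape-based similarity measure such as the classic spectral angle. A minimal sketch follows; the library contents and the acceptance threshold are placeholders, not values from the cited works:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between two spectra; smaller means more similar
    in shape, independently of overall brightness."""
    pixel = np.asarray(pixel, dtype=float)
    reference = np.asarray(reference, dtype=float)
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference)
    )
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def best_match(pixel, library, max_angle=0.1):
    """Name of the closest library signature, or None if nothing is
    within `max_angle` radians (the threshold is an assumption)."""
    name, angle = min(
        ((n, spectral_angle(pixel, s)) for n, s in library.items()),
        key=lambda item: item[1],
    )
    return name if angle <= max_angle else None
```

Because the angle ignores a global scaling of the spectrum, it is robust to brightness differences caused by, e.g., illumination changes or the partial immersion of floating debris.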

Data preprocessing in the detection of plastic in the ocean is commonly minimal (without noise reduction and normalization [117]) and very often omits atmospheric correction to eliminate the risk of removing information in wavelengths dominated by water absorption [115]. Atmospheric correction may have a nonnegligible impact on narrow bands of HSIs, which may adversely affect detection and classification based on pollutants' spectral profiles [113], [118]. Therefore, level 1 MSIs and HSIs are often exploited [116], [119], [120], and HSIs may be utilized to distinguish different categories of garbage (e.g., fishing nets and plastics) [113] from seawater [115]. Since supervised methods require ground truth, which is most often obtained with in situ methods in extremely time-consuming processes, unsupervised and semisupervised techniques have been gaining traction in the field [116]. Finally, it is worth mentioning that tracking microplastics at the ocean surface layer requires very detailed radiative transfer analysis and the development of high signal-to-noise sensors, and it constitutes another exciting avenue of emerging research [121].

DETECTION OF WATER POLLUTION
Water pollution poses a serious threat to ecosystems; therefore, monitoring it is an important element of various countermeasures. Due to the nature of pollution, several types of events can be distinguished here. Oil spills are one of the main sources of marine pollution [122], resulting not only in oil stains on the water surface but also in the death of animals and vegetation and damaged beaches, which translates into losses in the economy and tourism [122]. Locating oil spots and monitoring their displacement allows us to track their environmental impact, take preventive actions [123], and ensure justice (and compensation) when a spill source can be identified [124].

Remote sensing can help monitor the displacement of oil spills while ensuring high scalability over large areas. SAR imagery can be exploited here, as oil spots manifest as elements contrasting with the surrounding clean water [125], [126]. As a result, SAR images show black spots representing oil spills and brighter areas representing clear water [127]. Such imaging can work effectively regardless of weather conditions, such as the degree of cloudiness and changes in lighting [123]. The research area of detecting oil spills from SAR is very active and spans classic [127], [128] and deep machine learning [122], [123], [125], [126], [129]. The less common processing of MSI data from Landsat-8 [130] can also be precise in locating oil spills.

Garbage that flows down rivers to the ocean is a remnant of human activity [113]. Another threat to water quality and ecosystems is the human impact on water management through eutrophication [92] and industrial [131] and mining [65] activities. Litter contaminants are characterized by significant variation in their composition (plastic, wood, metal, and biomass) [112], which is a challenge to detecting and tracking them [113]. HSI analysis enables capturing the characteristics of such materials; for example, initial simulations confirmed that the absorption characteristics at 1,215 and 1,732 nm have applications in detecting plastic [113]. Also, the use of the SWIR range allows us to eliminate the bathymetric influence [115], which should be considered due to water vapor absorption. Industrial impacts reduce water quality, especially inland, through heavy metals [88]. Keeping track of these changes and determining their nature is thus essential for human health. The assessment of the state of rivers should form the basis for planning agricultural and industrial activities [83]. The assessment of the mineral composition of water may be based on HSI and MSI analysis through observing spectral signatures [65], [83].

DETECTION OF HABs AND WATER QUALITY MONITORING
HABs are a serious threat both to humans and to other living organisms present in the aquatic environment [132]. HABs cause the extinction of some organisms, due to limited light and the inhibition of photosynthesis [133], and they reduce fishing [132], deteriorate water quality [133], and may be a threat in the case of power plants located near reservoirs, as they may cause blockages of cooling systems, as was the case with the Fangchenggang Nuclear Power Plant, in China [134]. Monitoring such harmful blooms can, therefore, result in socioeconomic benefits [133]. The development of algal blooms is influenced by environmental conditions, such as temperature [133] and water fertility [135], and by man-made infrastructure, e.g., river dams limiting the movement of water [136].

Algae occurrence estimation can be performed using in situ methods [134]. This approach allows us to study conditions by using buoys equipped with sensors collecting data from aquatic ecosystems (temperature, salinity, dissolved oxygen, chlorophyll a, and the concentration of algae [134]). In situ measurements can also validate estimation based on HSIs [137] and MSIs. Due to the influence of seasonality on the water surface temperature [138], which affects the presence of HABs, the use of UAVs may limit the monitoring of changes over time. Satellite imagery, apart from providing massive scalability (in situ techniques are extremely costly and labor-intensive for periodic measurements), enables generating algal maps taking into account information on spatial distribution and variability over time [129], [132].
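Chlorophyll-sensitive band combinations are a common starting point for such algal maps; one published example is the normalized difference chlorophyll index (NDCI), built from red (~665 nm) and red-edge (~708 nm) reflectance. The sketch below is illustrative; the bloom threshold is our assumption and would need per-site calibration, and it is not a value from the cited works:

```python
import numpy as np

def ndci(red_edge, red):
    """Normalized difference chlorophyll index, (R_708 - R_665) / (R_708 + R_665).
    Higher values correspond to higher chlorophyll-a concentrations."""
    red_edge = np.asarray(red_edge, dtype=float)
    red = np.asarray(red, dtype=float)
    return (red_edge - red) / (red_edge + red)

def bloom_mask(red_edge, red, threshold=0.1):
    """Boolean map of pixels likely affected by an algal bloom
    (the threshold is illustrative and requires calibration)."""
    return ndci(red_edge, red) > threshold
```

Applied to a time series of acquisitions, such a per-pixel map directly yields the spatial distribution and temporal variability of blooms discussed above.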

Automated machine learning detection methods commonly exploit chlorophyll concentration estimation, as this parameter can be determined from HSIs [140].

Water quality monitoring is most often carried out by determining parameters such as temperature and acidity as well as chlorophyll a [141]. An increase in temperature promotes the growth of algae, which translates into an increase in the content of the phytoplankton biomass in the water [92]. This is often indicative of an uncontrolled growth of HABs [139]. In recent work, Sahay et al. showed that it is possible to estimate chromophoric dissolved organic matter, the fraction of dissolved organic matter that absorbs sunlight in the ultraviolet and visible regions of electromagnetic radiation, from remote sensing reflectance in coastal waters of India [142]. The authors showed that seasonal and spatial variability in the investigated area allows their algorithm to retrieve the chromophoric dissolved organic matter absorption in coastal areas by using high-resolution ocean color monitors, such as Sentinel-3, but also from HSIs [143]. In [144], Cherukuru et al. focused on estimating the dissolved organic carbon concentration in turbid coastal waters by using optical remote sensing observations. Organic carbon is a major component of dissolved organic matter in the aquatic system, and it plays a critical role in the marine carbon cycle. Overall, data-driven HSI analysis may be used for water environment monitoring and for understanding its dynamics, leading to a better understanding of the underlying biogeochemical processes at a larger scale. Ocean monitoring is also tackled within NASA's aquatic Plankton, Aerosol, Cloud, and Ocean Ecosystem mission carrying the Ocean Color Instrument, which will be capable of measuring the color of the ocean, from the ultraviolet to SWIR [145].

Recently, Caribbean coasts have experienced atypical arrivals of pelagic Sargassum, with negative consequences both ecologically and economically [146]. Removing Sargassum before its arrival, thanks to early detection, could reduce the damage it causes [147]. It is known that floating mats of vegetation alter the spectral properties of the water surface; hence, deep learning-powered exploitation of HSIs has been investigated for such tasks [148].

MARITIME SURVEILLANCE
Maritime surveillance aims at maintaining the security of waters by monitoring traffic [149] and fishing [150] as well as eliminating smuggling [151], illegal fishing [152], and pollution [153]. Remote sensing can help to locate ships and verify their location with automatic identification systems [150], allowing for inferring the legality of a vessel's movement at scale [154]. Ship detection techniques based on signal processing [155] and deep learning [156] often operate on SAR and MSIs, with the former unaffected by clouds and fog [157]. Ships are characterized by various sizes and shapes; thus, an appropriate spatial image resolution [158] is pivotal to detect them [159]. Also, fusing SAR and MSI data enables locating vessels [158], whose positions can then be tracked [160]. Object detection based on satellite images is an important element supporting the search for lost ships and planes [161]. This task is important in both civil and military matters for safety reasons as well as for potentially quick assistance in the case of accidents [162]. The level of difficulty of detecting planes and ships depends on the background and their size [159]; hence, the spatial resolution of the imagery is important [158]. Although remotely sensed images allow identifying such objects at a global scale, they are also challenging due to the lack of homogeneity of the background [161].

Illegal fishing is a threat to coastal and marine ecosystems as well as the economy [152]. Unregulated fish catches reduce the stocks of fisheries, and the lack of reporting makes it impossible to monitor fisheries, which poses a threat to fish species and leads to economic effects [150]. The detection of illegal fishing is commonly built upon the detection of ships and the monitoring of their trajectories by using, e.g., deep learning techniques over MSIs [152]. The Integrated System for the Surveillance of Illegal, Unlicensed, and Unreported Fishing is an example of a working system that exploits SAR (Sentinel-1) and MSI (Sentinel-2) data for this task [150].

OBJECTIVE AND QUANTIFIABLE SELECTION OF ONBOARD AI APPLICATIONS
Implementing AI onboard an EO satellite is not a goal in itself, and it must produce significant benefits when compared to what on-ground data processing may provide. For a space project with cost and planning constraints, the answer is not obvious and must take into account different aspects by considering a wide range of criteria to estimate how much a solution complies with the satellite, system, and mission constraints as well as what benefits may result. The analysis of pros and cons must demonstrate that having AI onboard an EO satellite is the best option to provide operational added value for the final user in terms of performance, timeliness/latency, autonomy, and extended capacities as well as from the engineering, industrial, operational, commercial, and scientific points of view. Additionally, selecting appropriate onboard applications can impact society at large through, e.g., the implementation of sustainable development goals and climate change adaptation and mitigation.

At the end-to-end level, onboard processing may improve overall system reactivity by sending alerts upon the detection of transient and short-duration phenomena, thus providing rapid responses to events that require fast decision making that is incompatible with a standard on-ground processing chain.

From a mission point of view, onboard processing that could select relevant information at the sensor level may offer extended monitoring capacities beyond the initial mission perimeter by providing the final user with necessary and useful information while limiting data storage and transmission to marginal overheads.

When considering data-driven AI solutions, additional criteria must be considered in the early engineering phases. They encompass database construction and algorithm design for selecting the target solution, and they apply to analytical approaches, too, even if their database needs are often easier to meet. Given that, for a future EO mission, there are generally no actual data available before a satellite launch, the possibility to simulate representative onboard acquisitions from exogenous data must be investigated carefully, especially for data-driven solutions. For some use cases, generating such data could require in-depth knowledge of the future system, which is not necessarily fully characterized in the early phases, where the design of the algorithms occurs. Consequently, this requires planning an ability to update the onboard algorithm configuration by uploading new parameters once a satellite is in orbit. This level of flexibility may be, in turn, affected by the confidence and trust expected at launch. Further updates could also be necessary during a satellite's lifetime to modify the algorithms according to the actual image quality and to adapt algorithm behavior to another use case. This constraint of updating the onboard algorithm configuration must be considered at the system level as soon as possible in the system development schedule since it may have a significant impact on the design of some satellite subsystems. Validation of the algorithms' performance generally increases their maturity through mission phases, while the instrument design (and simulator) becomes more mature.

The availability of annotated data related to the use case, or the difficulty to generate annotations (either manually or through automatic processing), must be carefully investigated for any emerging supervised model. Indeed, such activities may rapidly become cumbersome, requiring the expertise of specialists in the application domain to obtain relevant annotations. The cost of the engineering database development necessary to handle any data-driven AI solution must be weighed against a potential non-data-driven handcrafted algorithm solution that would not require the effort of collecting, annotating, and simulating a huge amount of representative data. In a classic data flow, which is followed while developing an AI-powered solution, the data may be acquired using different sources (e.g., drones, aircraft, and satellites) and are commonly unlabeled; hence, they should undergo manual or semiautomated analysis to generate ground truth information. Such data samples are further bundled into datasets, which should be carefully split into training and test sets, with the former exploited to train machine learning models (they are commonly synthetically augmented to increase their size and representativeness [9]) and the latter used to quantify their generalization capabilities. Although designing appropriate validation procedures for AI solutions is not the main focus of this article, we want to emphasize that it should be performed with care, as incorrectly determined procedures can easily lead to biased and fundamentally flawed experimental results [163]. In such cases, we would not be able to quantify the expected operational abilities of AI algorithms; hence, we may end up being trapped in the "illusion of progress" reproducibility crisis [164], with overly optimistic hopes for onboard processing.

We can appreciate that there are many objectives and constraints related to system design, hardware, and data availability that can (and should) directly influence the process of selecting the target AI use case for final deployment onboard an EO satellite mission. In the following sections, we present the scales we suggest to quantify the objectives (the "Objectives" section) and constraints (the "Constraints and Feasibility" section) and to ultimately aggregate them into a weighted score assessing each considered onboard AI use case (the "Selecting Use Cases for Onboard AI Deployment" section).

OBJECTIVES
We split the objectives affecting the process of selecting a target AI application into those related to onboard processing, the mission itself, and the interest of the community. The following summarizes the objectives in more detail and presents the suggested scale for each. The scale is binary (values of zero and two) or three point (zero, one, and two); the larger the value, the better:
1) Onboard data analysis:
• Faster response (better reactivity), d_fr^OBP: This objective relates to accelerating the extraction of actionable items/information from raw data through onboard analysis and in relation to the benefits it could give end users:
  ◦ Zero: The faster response (when compared with on-the-ground processing of downlinked raw image data) is not of practical importance.
  ◦ Two: The faster response directly contributes to specific actions undertaken on the ground, which would not have been possible without this timely information.
• Multitemporal analysis (compatibility with the revisit time), d_mta^OBP: This objective relates to multitemporal onboard analysis of a series of images acquired for the same area at consecutive time points and to the benefits it could give end users:
  ◦ Zero: Multitemporal analysis would not be beneficial (it would not bring any useful information) or could be beneficial but is not possible to achieve within the mission (e.g., due to the assumed ConOps and missing onboard geolocalization).

JUNE 2023 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE 21


  ◦ One: Multitemporal analysis may be beneficial and can add new information, and it is possible to achieve within the mission.
  ◦ Two: Multitemporal analysis is of critical importance, and it is possible to achieve within the mission.
2) Mission:
• Compatibility with the acquisition strategy, δ_cas^M: This objective indicates whether the use case is in line with the acquisition strategy of the considered mission (the duty cycle of the satellite, the number of ground station locations, the capabilities to store data onboard, and the overall processing time, including data preprocessing and AI inference):
  ◦ Zero: The use case is not compatible with the strategy and may decrease the overall mission return (e.g., it requires acquisition retargeting, while the baseline strategy assumes a constant nadir scan).
  ◦ One: The use case is not compatible with the baseline strategy. However, modifications to the strategy are possible and do not have an adverse impact on the mission.
  ◦ Two: The use case is compatible with the acquisition strategy (e.g., the acquisition duty cycle, coverage, and revisit capabilities).
• Potential of extending the mission perimeter and/or capacity, δ_emp^M: This objective indicates whether the use case has potential to extend the current mission perimeter and/or capacity, e.g., in multipurpose/reconfigurable missions, and it relates to the cost of the mission's perimeter expansion. As for enhancing the mission capacity, we may be able to have a higher scientific/commercial return, for instance, by observing more target sites than what could be achieved without data analysis onboard a satellite:
  ◦ Zero: The use case is already within the perimeter foreseen for the mission, and the mission capacity would not be enhanced by onboard processing. Alternatively, from a pure mission perimeter extension point of view, this use case is of poor interest.
  ◦ One: The use case is not within the perimeter for which the mission was initially designed, and its implementation may have extra impact on the system design (e.g., sending an alert upon an illegal ship degassing detection may need an additional geostationary optical link to act with the necessary reactivity this situation requires). This use case is of great interest to extend the mission perimeter/capacity but may be feasible only with a significant impact at the satellite (e.g., a new optical transmission device onboard) and system (e.g., a geostationary relay) levels.
  ◦ Two: The use case is not within the perimeter for which the mission was initially designed, and, apart from the new AI function, the implementation of this use case has only a minor impact that can be absorbed by the current satellite and/or system design. Such a use case is therefore of great interest to extend the mission perimeter and/or capacity at a minimal cost.
3) Interest to the community, δ^R: This objective indicates whether the use case is of interest to the community (e.g., the geoscience and remote sensing research community, businesses pursuing future trends based on novelty and impact, and so forth) and whether it is novel, worthy of investigation, and has potential to be disruptive:
• Zero: The number of existing papers is low and not increasing, or the number of papers is notable but has stabilized. This may be an indicator that the topic did not resonate in the community or has been widely researched, and it is difficult/unnecessary to contribute more.
• One: The number of existing papers is large (dozens) and increasing at a stable pace. This may be an indicator that the topic is worthy of investigation, although it has already been researched in the literature.
• Two: The number of existing papers is small (no more than tens) but increasing fast. This may be an indicator that the topic is worthy of investigation, novel, and disruptive and that it attracts significant research attention very fast.

CONSTRAINTS AND FEASIBILITY
Constraints relate to sensor capabilities and characteristics. Also, we discuss the availability of datasets, focusing on data-driven supervised techniques to achieve robust products. As in the case of objectives, the scale is either binary (zero/two) or three-point (zero, one, and two), with larger values corresponding to more preferred use cases:
1) Sensor capabilities:
• Compatibility with the sensor spectral range, ω_cspe^SC: This constraint indicates the feasibility of tackling a use case with image data (with respect to its spectral range) captured using the considered sensor:
  ◦ Zero: The sensor is not compatible (it does not capture the spectral range commonly reported in the papers discussing the use case).
  ◦ One: The sensor is partly compatible (it captures part of the spectral range commonly reported in the papers discussing the use case).
  ◦ Two: The sensor is fully compatible (it captures the spectral range commonly reported in the papers discussing the use case).
• Compatibility with the sensor spectral sampling, ω_css^SC: This constraint indicates the feasibility of tackling a use case with image data (with respect to their spectral sampling) captured using the considered sensor:
  ◦ Zero: The commonly reported spectral sampling is narrower than that available in the target sensor; hence, it may not be possible to capture the spectral characteristics of the objects of interest.
  ◦ One: The commonly reported spectral sampling is much wider than that available in the target sensor (e.g., MSIs versus HSIs); hence, we may not fully benefit from the sensor spectral capabilities.
  ◦ Two: The commonly reported spectral sampling is compatible with the target sensor.



• Compatibility with the sensor spatial resolution, ω_cspa^SC: This constraint indicates the feasibility of tackling a use case with image data (their spatial resolution) captured using the considered sensor:
  ◦ Zero: The available spatial resolution is not enough to effectively deal with the use case (e.g., to detect objects of the anticipated size, accurately calculate the area of cultivated land, and so forth).
  ◦ Two: The available spatial resolution is enough to effectively deal with the use case.
2) Dataset maturity:
• Availability of annotated (ground truth) data, ω_agt^D: This constraint relates to the availability of ground truth datasets that could be used to train and validate supervised models for onboard processing:
  ◦ Zero: No ground truth datasets are available.
  ◦ One: There exist ground truth datasets (at least one), but they are not fully compatible with the target sensor (compatibility with the sensor could be achieved from such data if there were an instrument simulator).
  ◦ Two: There exist ground truth datasets (at least one) that are fully compatible with the target sensor.
• Difficulty of creating new ground truth data, ω_dgt^D: This constraint relates to the process of creating new ground truth datasets that could be used to train and validate supervised learners for onboard processing during the target mission:
  ◦ Zero: The localization of the objects of interest is not known, and/or their spectral characteristics are not known in detail, but the current state of the art suggests preliminary wavelengths determined by airborne/laboratory methods and areas in which the phenomena of interest occur, e.g., based on in situ observations. Additional sources of ancillary information, such as analysis of news/social media related to the issue (e.g., environmental organizations in the event of a catastrophe), biogeophysical/chemical models, and other geospatial information, might be pivotal to elaborate new ground truth datasets. Preparing such an image database can be an important contribution to the development of HSI and MSI analysis.
  ◦ One: Identification of objects is possible on the basis of characteristic spectral signatures for a specific phenomenon that is expected in a given area (geographic coordinates are known), and ground truth can be generated through in situ methods.
  ◦ Two: Identification of objects of interest is possible based on the visibility in red–green–blue (RGB)/panchromatic/selected bands/combinations of bands [objects are visible in the RGB/panchromatic/selected band; thus, manual, semiautomatic, and automatic (by, e.g., automatic colocation with ancillary data) contouring is straightforward].
• Importance of data representativeness/variability, ω_shd^D: This constraint evaluates how representative the training data would be of the situation at a global scale, covering spurious cases and extremes, hence ensuring a high level of generalizability. This point focuses on the need of capturing seasonally and/or spatially heterogeneous training data and the importance of such data heterogeneity in building well-generalizing data-driven models:
  ◦ Zero: It is critical to capture seasonally/spatially heterogeneous training data to make the resulting machine learning/data analysis models applicable in practice (e.g., calculating soil moisture).
  ◦ Two: Capturing seasonally/spatially heterogeneous training data may be beneficial, but it is not of critical importance for this use case (e.g., fire detection).

SELECTING USE CASES FOR ONBOARD AI DEPLOYMENT
In Table 2, we assemble the objectives and constraints that contribute to the selection process. There are parameters that are mission independent; therefore, the same values (determined once) can be used for fundamentally different satellite missions, as shown in the "Case Studies" section for the CHIME and Intuition-1 missions. Afterward, they can be updated only when necessary, e.g., if the trends have changed within the "interest to the community" objective. The total score S, which aggregates all objectives and constraints, is their weighted sum:

S = α_fr^OBP·δ_fr^OBP + α_mta^OBP·δ_mta^OBP + α_cas^M·δ_cas^M + α_emp^M·δ_emp^M + α^R·δ^R   (objectives)
  + α_cspe^SC·ω_cspe^SC + α_css^SC·ω_css^SC + α_cspa^SC·ω_cspa^SC + α_agt^D·ω_agt^D + α_dgt^D·ω_dgt^D + α_shd^D·ω_shd^D.   (constraints)   (1)

Since the scale for each parameter is the same, we do not need to normalize the assigned values, and they can be summed together to elaborate S. The importance of the specific parameters may, however, be directly reflected in the weighting factors (the α values). Here, the dominating parameters can be assigned (significantly) larger α's; thus, our procedure allows practitioners to conveniently simulate different mission profiles. This feature of the selection process is discussed in detail for CHIME in the "Selecting AI Applications for CHIME and Intuition-1" section. Similarly, if multiple teams are contributing to the evaluation process (e.g., the data analysis, space operations, and hardware design teams), the agreed parameter values may be evaluated following numerous approaches, including majority and weighted voting. Finally, the use case with the maximal S (or the N use cases with the largest S scores if more than one AI application can be developed) will be retained for the ultimate deployment onboard the analyzed mission.
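A few lines of Python can make the weighted aggregation in (1) and the profile-driven choice of the α's concrete. The parameter abbreviations mirror the subscripts used above, while the example scores and the three weight profiles are hypothetical illustrations, not values taken from any mission:

```python
# Parameters of the selection procedure; names mirror the subscripts in (1).
OBJECTIVES = ("fr", "mta", "cas", "emp", "R")
CONSTRAINTS = ("cspe", "css", "cspa", "agt", "dgt", "shd")

def total_score(scores, alpha):
    """S = sum of alpha[p] * score[p] over all objectives and constraints."""
    return sum(alpha[p] * scores[p] for p in OBJECTIVES + CONSTRAINTS)

def profile(objective_weight, constraint_weight):
    """Build the alpha weights for a mission profile (one weight per group)."""
    alpha = {p: objective_weight for p in OBJECTIVES}
    alpha.update({p: constraint_weight for p in CONSTRAINTS})
    return alpha

# A hypothetical use case, scored on the shared 0-2 scale.
scores = {"fr": 2, "mta": 0, "cas": 2, "emp": 1, "R": 2,
          "cspe": 2, "css": 1, "cspa": 2, "agt": 1, "dgt": 1, "shd": 2}

S_balanced = total_score(scores, profile(1, 1))     # objectives = constraints
S_disruptive = total_score(scores, profile(2, 1))   # objectives dominate
S_risk_averse = total_score(scores, profile(1, 2))  # constraints dominate
```

Ranking candidate applications then reduces to computing S for every use case under the chosen profile and retaining the top N.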



TABLE 2. A SUMMARY OF OBJECTIVES AND CONSTRAINTS USED TO SELECT A TARGET AI APPLICATION FOR ONBOARD DEPLOYMENT.

PARAMETER | SYMBOL | WEIGHT | MISSION
OBJECTIVES: ONBOARD PROCESSING (OBP)
Faster response (better reactivity) | δ_fr^OBP | α_fr^OBP | ✗
Multitemporal analysis (compatibility with the revisit time) | δ_mta^OBP | α_mta^OBP | ✓
OBJECTIVES: MISSION (M)
Compatibility with the acquisition strategy | δ_cas^M | α_cas^M | ✓
Potential of extending the mission perimeter | δ_emp^M | α_emp^M | ✓
OBJECTIVES: INTEREST TO THE COMMUNITY (R)
Interest to the community | δ^R | α^R | ✗
CONSTRAINTS: SENSOR CAPABILITIES (SC)
Compatibility with the sensor spectral range | ω_cspe^SC | α_cspe^SC | ✓
Compatibility with the sensor spectral sampling | ω_css^SC | α_css^SC | ✓
Compatibility with the sensor spatial resolution | ω_cspa^SC | α_cspa^SC | ✓
CONSTRAINTS: DATASET MATURITY (D)
Availability of annotated (ground truth) data | ω_agt^D | α_agt^D | ✗
Difficulty of creating new ground truth data | ω_dgt^D | α_dgt^D | ✗
Importance of data representativeness/variability | ω_shd^D | α_shd^D | ✗

Mission-specific parameters are indicated with a ✓, whereas those that are mission agnostic are marked with an ✗.

QUANTIFYING THE INTEREST OF THE RESEARCH COMMUNITY
We performed quantitative analysis of the recent body of literature (2012–2022) for each use case to objectively investigate the interest of the community. In Figure 1, we present the number of papers published yearly concerning the detection and monitoring of earthquakes and landslides (for all other use cases, see the supplementary material available at https://www.doi.org/10.1109/MGRS.2023.3269979). For the search process, we utilized keywords and key phrases commonly used in each application. (To perform the quantitative analysis of the state of the art, we used the Dimensions tool available at https://app.dimensions.ai/discover/publication.) We can observe the steady increase of the number of papers published in each specific application related to the analysis of earthquakes and landslides, with the monitoring of earthquakes (rendered in yellow) and detection of landslides (dark blue) manifesting the fastest growth in 2019–2021. Since the body of knowledge related to those two topics has been significantly expanding, we can infer that they are disruptive areas and that contributing to the state of the art here is of high importance (therefore, the topics were assigned the highest score in the evaluation matrix). On the other hand, as the estimation of damages induced by landslides does not resonate well in the research community (with a small number of papers published yearly, relative to the number of papers in other applications, without any visible increasing trend), it was scored as zero. The same investigation should be performed for all applications, although we are aware that this analysis could still be considered a bit subjective (e.g., what is a "fast" increase in the number of papers, and when does a "slow" increase accelerate and become "fast"?). We believe that such a quantitative analysis of the existing state of the art is pivotal to make an informed decision about the research area that will be tackled in an upcoming mission.

FIGURE 1. The number of recent papers on the detection and monitoring of earthquakes and landslides (x-axis: year of publication, 2012–2022; y-axis: number of publications; series: monitoring of earthquakes, estimation of earthquake damages, detection of landslides, estimation of landslide damages, and prediction of landslides). For other applications, see the supplementary material available at https://www.doi.org/10.1109/MGRS.2023.3269979.

CASE STUDIES
We present two case studies of CHIME (the "CHIME Case Study" section) and Intuition-1 (the "Intuition-1 Case Study" section), being EO satellites with fundamentally different characteristics (Table 3). For CHIME and Intuition-1, we present their background, objectives and motivation, and constraints, which impact the selection of the AI application in our evaluation matrix. To make the procedure straightforward, we present example questions that may be asked during the design process:
◗ Background: What is the big picture behind the mission? How is the mission different from other missions? Why is it "unique"?
◗ Objectives and motivation: What are the main objectives of the mission, and how do they relate to AI? What do we want to demonstrate? Do we treat AI applications as "technology demonstrators" or as tools for empowering commercial/scientific use cases? Are there mission objectives that should "dominate" the selection? Are there AI applications that should be deployed even though they would not be selected in the process?
◗ Constraints: What are the constraints imposed on the mission and AI applications concerned with, e.g., the imaging sensor, processing resources, and other hardware?

TABLE 3. A HIGH-LEVEL COMPARISON OF CHIME AND INTUITION-1.
FEATURE | CHIME | INTUITION-1
Satellite type | Copernicus extension | Nanosatellite (6U CubeSat)
Spatial resolution (GSD) | 30 m | 25 m
Spectral range | 400–2,500 nm | 465–940 nm
Spectral sampling interval | ≤10 nm | 3–6 nm
Number of bands | 250 | 192
Revisit time | 11 days | 10 days
Uplink | S-band (2 Mb/s) | S-band (256 kb/s)
Downlink | Ka-band (up to 3.6 Gb/s) | X-band (up to 50 Mb/s)
Altitude | 632 km | 600 km

CHIME CASE STUDY

BACKGROUND
CHIME (Figure 2) is part of the second generation of Sentinel satellites that the ESA is developing to expand the Copernicus EO constellation. The space component of the system will be composed of two satellites that are to be launched by the end of the 2020s. In 2018, the CHIME Mission Advisory Group was established at the ESA to provide expert advice during design and development, concerned with the scientific objectives of the mission, data product definitions, instrument design and calibration, data validation, and data exploitation. Following the evaluation of the Phase B2/C/D/E1 proposals in January 2020, Thales Alenia Space (France) was selected as the satellite prime contractor and OHB (Germany) as the instrument prime contractor. Phase B2 started in mid-November 2020.
Currently, the ESA is targeting the identification of new use cases that can be relevant for onboard AI applications [165]. At the time of writing this article, the decision to implement a dedicated AI unit (AIU) onboard CHIME is pending. The final decision will result from a global tradeoff that is under investigation at the system and satellite levels, based on an evaluation process as described in this article and also considering programmatic and budget aspects.

FIGURE 2. The CHIME satellite. (Source: ESA.)

MISSION OBJECTIVES
The mission objectives are detailed in the CHIME mission requirement document [166] and can be summarized as "to provide routine hyperspectral observations through the Copernicus Program in support of European Union and related policies for the management of natural resources, assets, and benefits." The observational requirements of CHIME are driven by primary objectives, such as agriculture, soil analysis, food security, and raw materials analysis. In these domains, the system will have the potential to deliver many value-added products for various applications, including sustainable agricultural management, soil characterization, sustainable raw materials development, forestry and agricultural services, urbanization, biodiversity and natural resources management, environmental degradation monitoring, natural disaster responses and hazard prevention, inland and coastal water monitoring, and climate change assessment. To achieve the mission, the CHIME satellites will be equipped with hyperspectral spectrometers allowing them to remotely characterize matter composing the surface of Earth and atmospheric aerosols.



MISSION CONSTRAINTS
The CHIME payload will be equipped with a hyperspectral camera delivering more than 250 spectral bands at a ≤10-nm spectral sampling interval in the visible and infrared spectral range from 400 to 2,500 nm. The instrument field of view covers a swath of 130 km at a 30-m GSD at the CHIME spacecraft altitude. The revisit time will be 11 days when the two satellites are operational [167]. With a continuous data throughput close to 5 Gb/s acquired over land and coastal areas, the CHIME payload will deliver about 1 Tb of data per orbit. Downloads are foreseen within 6–12-min visibility time slots over the Svalbard and Inuvik polar stations through the Ka-band link, which offers up to a 3.6-Gb/s download data rate. The acquired data will be processed on the ground between 5 and 10 h after acquisition. As a complement, onboard data processing might be an asset to reduce and select/prioritize valuable information. This leads to envisioning new strategies for onboard processing, storage, and transmission for CHIME. Besides the common constraints imposed on any hardware that will operate in space (vacuum, temperature, radiation due to the space environment, and vibrations during the launch phase), one of the main challenges is to consider the real-time processing constraints imposed by the data acquisition principle used onboard CHIME alongside the constant monitoring mission requirement. To ensure the continuity of the mission, the design of the overall onboard image processing chain, from the acquisition by the sensor up to the transmission of the image data to the ground, must prevent any risk of a bottleneck that would result in data loss because of memory overflow. As an example, the CHIME nominal image processing chain will implement, in the data processing unit (DPU), real-time cloud detection [168], feeding a hyperspectral Consultative Committee for Space Data Systems (CCSDS)-123 compressor [169]. This will reduce the amount of data delivered by the sensor while compressing data with tunable losses over cloud-free areas [170].
A future unit implementing hypothetical onboard AI algorithms will have to process data on the fly at the sensor data rate to interface with the continuous data flow from the video acquisition unit (VAU). This imposes new constraints on the architecture of the current onboard data processing chain (i.e., a new interface with the existing VAU and updates in the DPU design) as well as strong constraints on the AIU hardware, such as high-speed input–output memory links, fast data processing cores, and the parallelization of DPUs to allow the AIU to process data at a rate compatible with the sensor throughput while ensuring the constant monitoring required by the CHIME mission.

INTUITION-1 CASE STUDY

BACKGROUND
The purpose of the Intuition-1 space mission (Figure 3) is to observe Earth by using a 6U nanosatellite equipped with a hyperspectral optical instrument and multidimensional data processing capabilities that employ deep CNNs. The mission is designed as a technology demonstrator, allowing for verification of various in-orbit data processing applications, use cases, and operations concepts. As of the third quarter of 2022, the Intuition-1 payload was qualified for space applications [technology readiness level (TRL) 8], with a launch scheduled for the second quarter of 2023.

FIGURE 3. The Intuition-1 satellite. (Source: KP Labs/AAC Clyde Space.)

MISSION OBJECTIVES
The main objective of the mission is to test a system composed of in-house-developed components, namely, a high-performance computing unit, a hyperspectral optical instrument, and data processing software (preprocessing, segmentation, and classification algorithms), and to evaluate the applicability of nanosatellites to performing medium-resolution hyperspectral imaging coupled with onboard data processing. An important goal of the mission is to assess the real-life advantages and shortcomings of the onboard data processing chain, taking into account the full mission context, from the technical perspective (data, power, and thermal budgets) to spacecraft operations and scheduling. Intuition-1 will be equipped with a smart compression component that will prioritize the downlink and processing pipelines based on the cloud cover within the area of interest. The mission is aimed to be multipurpose, with in-orbit update capabilities (through uplinking updated AI algorithms) and targeting new use cases emerging during the mission; therefore, the satellite can be considered a "flying laboratory." The first use case planned for Intuition-1 is the onboard estimation of soil parameters from acquired HSIs. Finally, hyperspectral data captured by Intuition-1 could be



useful in other missions to train machine learning models for in-orbit operations based on real-life data.

MISSION CONSTRAINTS
The optical instrument captures HSIs in a push broom manner, working in the VNIR range (465–940 nm) with up to 192 spectral bands (each of 3–6 nm) and a GSD of 25 m at the reference orbit (600 km). The instrument utilizes a CMOS image sensor with linear variable filters, so different parts of the sensor are sensitive to light of different wavelengths. By moving the instrument in the direction of the filters' gradient, hyperspectral data of static terrain are recorded. Using specialized preprocessing, the coregistration process is performed, so a subpixel-accurate hyperspectral cube can be produced regardless of low-frequency disturbances of the satellite platform's attitude determination and control system.
The Leopard DPU is responsible for the acquisition of raw image data from the optical system, storage of the data, running preprocessing algorithms, data compression (CCSDS-123), and AI processing. Other functionalities, such as handling the S-band radio (uplink, 256 kb/s) and X-band radio (downlink, up to 50 Mb/s), are also covered by the DPU. The Leopard DPU utilizes the Xilinx Vitis AI framework to accelerate CNNs on field-programmable gate array hardware, providing energy-efficient (0.3 tera operations per second per watt) inference and in-flight-reconfigurable deep models.

SELECTING AI APPLICATIONS FOR CHIME AND INTUITION-1
In Table 4, we list the objectives (the "Objectives" section) and constraints (the "Constraints and Feasibility" section) assessed for both the CHIME and Intuition-1 missions (for the interactive evaluation matrix, see the supplementary material available at https://www.doi.org/10.1109/MGRS.2023.3269979). For CHIME, the values of the mission-specific entries of the evaluation matrix were agreed to by a working group composed of the mission scientist, project manager, satellite manager, mission manager, payload data processing and handling engineers, and AI and data analysis experts. On the other hand, those parameters were quantified by the system engineer for Intuition-1. The mission-independent objectives and constraints were elaborated by the entire working group.
Although some of the parameters, such as "faster response (better reactivity)" and those related to the datasets that could be used to train supervised learners, are straightforward to quantify and directly related to the use case characteristics, the "interest to the community" may look more subjective. The parameters are, however, still quantifiable, as we showed in the "Selecting Use Cases for Onboard AI Deployment" section. The mission-specific parameters are directly related to the mission planning, ConOps, and hyperspectral sensor's capabilities. Therefore, their quantification is inherently objective, as it is based on well-defined assumptions, such as the ConOps document and the technical specification of the camera. As an example, compatibility with the sensor spectral range results from confronting the spectral range of a target sensor (400–2,500 nm and 465–940 nm for CHIME and Intuition-1, respectively) with the spectral range commonly reported in the literature for a use case of interest. Therefore, for, e.g., estimating soil parameters, the corresponding scores for CHIME and Intuition-1 are two (fully compatible) and one (partly compatible, capturing the majority of the spectral range) for this constraint, as the spectral range often reported in the literature for this application is 400–1,610 nm [19]. Additionally, we can observe that the multitemporal analysis (δ_mta^OBP) is zero for all potential applications for the Intuition-1 mission, as this nanosatellite will not be equipped with onboard georeferencing routines; hence, it would not be possible to effectively benefit from multiple images captured for the very same scene at more than one time point. Similarly, since the estimation of soil parameters is already planned for CHIME and Intuition-1, such agricultural applications would not necessarily extend the mission perimeter; therefore, this parameter became zero for both satellites (δ_emp^M).
To show the flexibility of our evaluation procedure, we present radar plots showing the values of each parameter for the most promising use cases (according to the weighted total scores S) in three scenarios where 1) the objectives and constraints are equally important, 2) the objectives are twice as important as the constraints (thus, the α weighting factors for the objectives are twice as big as the α's assigned to the constraints), and 3) the constraints are twice as important as the objectives (Figure 4). The second scenario may correspond to missions whose aim is to push the current state of the art and be disruptive, even if the risk levels are higher, whereas the third may reflect missions minimizing risks related to constraints while still delivering contributions to the current state of knowledge. We can appreciate that the weighting process affects the ranking of the potential applications for both missions; hence, it can better guide the selection procedure based on the most relevant factors (objectives, constraints, or both).

CONCLUSIONS
The latest advances in hyperspectral technology allow us to capture very detailed information about objects, and they bring exciting opportunities to numerous EO downstream applications. Hyperspectral satellite missions have been proliferating recently, as acquiring HSIs in orbit offers enormous scalability of various solutions. However, sending large amounts of raw hyperspectral data for the



on-the-ground processing is inefficient and even infeasible, depending on the link constraints. Also, if actionable items extracted from raw data are not delivered in time, they may easily become useless in commercial and scientific contexts. Therefore, the recent focus of space agencies, private companies, and research entities aims at taking AI solutions to space to extract and transfer knowledge from raw HSIs acquired onboard imaging satellites.
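The link-constraint argument can be checked with back-of-envelope arithmetic using the CHIME figures quoted in the "Mission Constraints" section (roughly 1 Tb of payload data per orbit and 6–12-min station passes at up to 3.6 Gb/s). The sketch below deliberately ignores protocol overheads, pass scheduling, and contact geometry, so it is an illustration rather than a link budget:

```python
# Rough CHIME downlink budget (illustrative; no overheads or margins modeled).
DATA_PER_ORBIT_GB = 1000.0   # ~1 Tb of payload data generated per orbit
DOWNLINK_RATE_GBPS = 3.6     # Ka-band peak download data rate

def pass_capacity_gb(pass_minutes):
    """Data volume (in Gb) downlinkable during one station pass at peak rate."""
    return DOWNLINK_RATE_GBPS * pass_minutes * 60

short_pass = pass_capacity_gb(6)    # shortest quoted visibility slot
long_pass = pass_capacity_gb(12)    # longest quoted visibility slot

# Even the short pass only slightly exceeds one orbit's worth of raw data,
# which is why onboard reduction/prioritization of the stream is attractive.
margin = short_pass / DATA_PER_ORBIT_GB
```

With these idealized numbers, a 6-min pass gives roughly a 1.3x margin over one orbit's data volume, leaving little headroom once real-world inefficiencies are accounted for.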

TABLE 4. AN EVALUATION MATRIX CAPTURING ALL MISSION OBJECTIVES AND CONSTRAINTS FOR CHIME AND INTUITION-1 (OBJECTIVES).

USE CASE | δ_fr^OBP | δ_mta^OBP (CHIME, INTUITION-1) | δ_cas^M (CHIME, INTUITION-1) | δ_emp^M (CHIME, INTUITION-1) | δ^R

AGRICULTURAL APPLICATIONS
Estimation of soil parameters [19] | 0 | 2, 0 | 2, 2 | 0, 0 | 1
Analysis of fertilization [20] | 0 | 2, 0 | 2, 2 | 0, 0 | 1
Analysis of the plant growth [21] | 0 | 1, 0 | 2, 2 | 0, 0 | 1
Assessment of hydration [171] | 2 | 2, 0 | 2, 2 | 0, 0 | 1
MONITORING OF PLANT DISEASES AND WATER STRESS
Monitoring of plant diseases [18] | 2 | 2, 0 | 2, 2 | 0, 0 | 1
Estimation of water content [32] | 2 | 2, 0 | 2, 2 | 0, 0 | 1
DETECTION AND MONITORING OF FLOODS
Detection of floods [35] | 2 | 0, 0 | 2, 1 | 1, 2 | 1
Damage estimation in floodplains [172] | 0 | 1, 0 | 2, 2 | 1, 2 | 1
DETECTION OF FIRE, VOLCANIC ERUPTIONS, AND ASH CLOUDS
Early detection of fire [44] | 2 | 0, 0 | 2, 0 | 1, 2 | 2
Monitoring fire progress [173] | 2 | 2, 0 | 2, 2 | 1, 2 | 2
Burned area maps [41] | 0 | 1, 0 | 2, 0 | 0, 2 | 2
Assessment of vegetation destruction [42] | 2 | 0, 0 | 2, 1 | 0, 2 | 2
Greenhouse gas emission [43] | 2 | 1, 0 | 2, 1 | 0, 2 | 1
Detection of volcanic ash clouds [47] | 2 | 1, 0 | 2, 1 | 1, 2 | 1
Detection of volcanic eruption [52] | 2 | 0, 0 | 2, 1 | 1, 2 | 1
Lava tracking [174] | 2 | 2, 0 | 2, 2 | 1, 2 | 1
Estimation of fire damage [172] | 0 | 1, 0 | 2, 2 | 1, 2 | 2
DETECTION AND MONITORING OF EARTHQUAKES AND LANDSLIDES
Monitoring of earthquakes [54] | 2 | 1, 0 | 2, 1 | 0, 2 | 2
Estimation of earthquake damage [62] | 0 | 1, 0 | 2, 2 | 0, 2 | 1
Detection of landslides [54] | 2 | 0, 0 | 2, 1 | 0, 2 | 2
Estimation of landslide damage [175] | 0 | 1, 0 | 2, 1 | 0, 2 | 0
Prediction of landslides [176] | 2 | 2, 0 | 2, 0 | 1, 1 | 1
MONITORING OF INDUSTRIALLY INDUCED POLLUTION
Dust events [177] | 0 | 2, 0 | 2, 1 | 1, 1 | 1
Mine tailings [65] | 0 | 2, 0 | 2, 2 | 1, 2 | 1
Acidic discharges [78] | 2 | 2, 0 | 2, 1 | 2, 2 | 2
Hazardous chemical compounds [94] | 2 | 2, 0 | 2, 1 | 2, 2 | 1
DETECTION AND MONITORING OF METHANE
Detection of methane [178] | 2 | 2, 0 | 2, 1 | 2, 2 | 1
WATER ENVIRONMENT ANALYSIS
Detection of marine litter [118] | 2 | 2, 0 | 1, 2 | 2, 2 | 0
Monitoring of algal blooms [92] | 2 | 0, 0 | 1, 2 | 0, 2 | 1
Coastal/inland water pollution [92] | 2 | 2, 0 | 1, 2 | 0, 2 | 1
Maritime surveillance [150] | 2 | 0, 0 | 0, 0 | 2, 1 | 1
Maritime support [158] | 2 | 1, 0 | 0, 0 | 0, 1 | 1
Aircraft crashes [107] | 2 | 1, 0 | 0, 0 | 0, 1 | 0

28 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE JUNE 2023


Although there are hyperspectral imaging missions with of the developed satellite. In this article, we tackled this
the ultimate goal of deploying AI at the edge, selecting a research gap and introduced a fully traceable, objective,
single (or a set of) onboard AI application(s) remains an quantifiable, and interpretable approach of assessing po-
important challenge, as it affects virtually all components tential onboard AI applications and selecting those that

CONSTRAINTS
SENSOR CAPABILITIES DATASET MATURITY
SC SC SC
~cspe ~css ~cspa ~D
agt ~D
dgt ~D
shd
CHIME INTUITION-1 CHIME INTUITION-1 CHIME INTUITION-1

2 1 1 2 2 2 0 1 0
2 2 1 2 2 2 0 1 2
2 2 0 2 0 2 0 1 0
2 2 1 2 0 2 0 1 0

2 1 2 2 2 2 0 0 0
1 1 1 1 0 2 0 0 0

2 2 1 2 2 2 1 2 0
2 1 0 1 2 0 1 1 2

2 0 1 1 2 1 0 1 0
2 1 1 2 0 0 0 1 2
2 1 1 2 2 0 0 1 2
2 1 2 2 2 2 0 1 0
2 0 1 2 2 2 0 1 0
2 2 1 2 0 2 1 1 2
2 1 1 2 0 2 1 2 2
1 1 1 2 2 2 1 2 2
2 1 0 2 2 0 1 1 2

2 2 1 2 2 0 0 1 0
2 2 1 2 2 0 1 1 2
2 2 0 2 2 0 0 1 0
2 1 1 2 2 2 1 1 2
2 1 1 2 2 2 0 1 0

1 1 1 2 0 2 0 1 0
2 1 1 2 2 2 0 1 2
2 1 1 2 2 2 0 1 0
2 1 1 2 2 2 0 1 0

2 1 1 2 2 2 1 1 0

2 1 1 2 2 2 0 1 2
2 1 1 2 0 2 0 1 0
2 1 1 2 2 2 0 0 2
2 1 1 2 0 0 1 0 2
2 1 1 2 2 0 1 2 2
2 2 1 2 0 0 1 2 2

JUNE 2023 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE 29


[FIGURE 4: radar charts comparing the evaluated use cases for two missions under three weighting profiles: 1) objectives and constraints are equally important, 2) objectives are more important, and 3) constraints are more important. The chart axes carry the objective parameters δ_fr^OBP, δ_mta^OBP, δ_cas^M, δ_emp^M, and δ^R and the constraint weights ω_cspe^SC, ω_css^SC, ω_cspa^SC, ω_agt^D, ω_dgt^D, and ω_shd^D. Top-scoring use cases per panel:
(a) CHIME: 1) Lava Tracking (S = 17), Acidic Discharges (S = 16), Methane Detection (S = 16), Monitoring Fire Progress (S = 15), Hazardous Chemical Compounds (S = 15), Marine Litter Detection (S = 15), others (S ≤ 14); 2) Acidic Discharges (S = 26), Lava Tracking (S = 25), Methane Detection (S = 25), Monitoring Fire Progress (S = 24), Hazardous Chemical Compounds (S = 24), others (S ≤ 23); 3) Lava Tracking (S = 26), Maritime Support (S = 24), Methane Detection (S = 23), Marine Litter Detection (S = 23), others (S ≤ 22).
(b) Intuition-1: 1) Lava Tracking (S = 17), Detection of Volcanic Ash Clouds (S = 16), Detection of Volcanic Eruptions (S = 16), Detection of Floods (S = 15), others (S ≤ 14); 2) Lava Tracking (S = 24), Monitoring Fire Progress (S = 22), Detection of Volcanic Ash Clouds (S = 22), Detection of Volcanic Eruptions (S = 22), Detection of Floods (S = 21), Coastal/Inland Water Pollution (S = 21), others (S ≤ 20); 3) Lava Tracking (S = 27), Detection of Volcanic Ash Clouds (S = 26), Detection of Volcanic Eruptions (S = 26), Detection of Floods (S = 24), others (S ≤ 23).]

FIGURE 4. Our evaluation procedure allows for investigating various mission profiles. We consider example mission profiles of (a) CHIME and (b) Intuition-1, where 1) both the objectives and constraints are equally important, 2) the objectives are twice as important as the constraints, and 3) the constraints are twice as important as the objectives. The most promising use cases are rendered in color (the larger S is, the better), whereas others are in gray (darker shades of gray indicate that more use cases were assigned the same value in the corresponding parameter). For each use case, we report its score S.
maximize the overall score aggregating the most important mission objectives and constraints in a simple way. We proved the flexibility of the evaluation process by applying it to two hyperspectral missions: CHIME and Intuition-1. Our technique may be straightforwardly utilized to target two fundamentally different missions, and it allows practitioners to analyze different mission profiles and the importance of assessment factors through the weighting mechanism. On top of that, the procedure can be extended to capture other aspects, such as the expected onboard data quality (e.g., geometric, spectral, and radiometric) and other types of payloads beyond optical sensors, which may play a key role in specific EO use cases. Also, it may be interesting to consider selected aspects that are currently treated as being mission agnostic as mission specific. As an example, creating new ground truth data may require planning in situ measurement campaigns to be in line with the ConOps of a mission. The same would apply to the training image data, whose acquisition time and target area characteristics should be close enough to those planned for a mission. Finally, including the TRL of specific onboard AI algorithms (in relation to the available hardware planned for inference) in the evaluation procedure could help make a more informed decision on selecting the actual AI solution (e.g., a deep learning architecture for a given downstream task). We believe that the standardized approach of evaluating onboard AI applications will become an important tool that will be routinely exploited while designing and planning emerging EO missions and that it will help maximize the percentage of successful satellite missions bringing commercial, scientific, industrial, and societal value to the community.

ACKNOWLEDGMENT
This work was partly funded by the ESA via a feasibility study for the CHIME mission and the Intuition-1-focused GENESIS and GENESIS 2 projects supported by the Φ-lab (https://philab.esa.int/). Agata M. Wijata and Jakub Nalepa were supported by a Silesian University of Technology grant for maintaining and developing research potential. This article has supplementary material, provided by the authors, available at https://www.doi.org/10.1109/MGRS.2023.3269979.

AUTHOR INFORMATION
Agata M. Wijata (awijata@ieee.org) received her M.Sc. (2015) and Ph.D. (2023) degrees in biomedical engineering at the Silesian University of Technology. Currently, she works as a researcher at the Silesian University of Technology, 44-800 Zabrze, Poland, and as a machine learning specialist at KP Labs, 44-100 Gliwice, Poland, where she has been focusing on hyperspectral image analysis. Her research interests include multi- and hyperspectral image processing, medical image processing, image-guided navigation systems in medicine, artificial neural networks, and artificial intelligence in general. She contributes to the Copernicus Hyperspectral Imaging Mission for the Environment from the artificial intelligence and data processing perspectives. She is a Member of IEEE.

Michel-François Foulon (michel-francois.foulon@thalesaleniaspace.com) received his Ph.D. degree (2008) in micro- and nanotechnologies and telecommunications from Université des Sciences et Technologies de Lille, France. In 2007, he joined Thales Alenia Space, 31100 Toulouse, France, where he is currently an imaging chain architect in the observation, exploration, and navigation business line. He has more than 15 years of experience in microwaves, space-to-ground transmission, onboard data processing, and image chain architecture design for Earth observation systems. Since 2021, he has also worked on onboard artificial intelligence solutions in the framework of the Copernicus program.

Yves Bobichon (yves.bobichon@thalesaleniaspace.com) received his Ph.D. degree (1997) in computer science from University Nice Côte d'Azur, France. He joined Alcatel Space Industries in 1999. He is currently an image processing chain architect in the System and Ground Segment Engineering Department, Thales Alenia Space, 31100 Toulouse, France. He has more than 25 years of experience in satellite image processing, onboard data compression, and image chain architecture design for Earth observation systems. Since 2018, he has also been a researcher at the French Research Technological Institute Saint Exupéry. His research interests include embedded machine and deep learning applications for image processing onboard Earth observation satellites.

Raffaele Vitulli (raffaele.vitulli@esa.int) received his M.Sc. degree in electronic engineering from Politecnico di Bari, Italy, in 1991. He is a staff member of the Onboard Payload Data Processing Section, European Space Agency, 2201 AZ Noordwijk, The Netherlands, where he works on the Consultative Committee for Space Data Systems as a member of the Multispectral/Hyperspectral Data Compression Working Group. He has also been the chair and organizer of the Onboard Payload Data Compression Workshop. He is actively involved in the Copernicus Hyperspectral Imaging Mission for the Environment mission, supervising avionics and onboard data handling.

Marco Celesti (marco.celesti@esa.int) received his M.Sc. (2014) and Ph.D. (2018) degrees in environmental sciences from the University of Milano–Bicocca (UNIMIB), Italy. After that, he worked as a postdoc at UNIMIB, being also involved as scientific project manager in the Horizon 2020 Marie Curie Training on Remote Sensing for Ecosystem Modeling project. He received a two-year fellowship cofunded by the European Space Agency (ESA) Living Planet Fellowship program, working in support of the Earth Explorer 8 Fluorescence Explorer mission. Since 2021, he has worked at the ESA, 2201 AZ Noordwijk, The Netherlands, as a Sentinel optical mission scientist. His research interests include optical remote sensing, retrieval of geophysical parameters, radiative transfer modeling, and terrestrial carbon assimilation. He is currently



the mission scientist of the Copernicus Hyperspectral Imaging Mission for the Environment and Copernicus Sentinel-2 next-generation missions.

Roberto Camarero (roberto.camarero@esa.int) received his M.Sc. degree in telecommunications, signal processing, and electronic systems engineering from the University of Zaragoza, Spain, in 2005 and an advanced M.Sc. degree in aerospace telecommunications and electronics from Institut Supérieur de l'Aéronautique et de l'Espace, Toulouse, France, in 2006. He worked for the National Center for Space Studies (CNES) from 2006 to 2018 in the Onboard Data System Office and has been with the European Space Agency (ESA), 2201 AZ Noordwijk, The Netherlands, since then. He has been the CNES/ESA representative in the Consultative Committee for Space Data Systems Data Compression Working Group for over a decade. He has been a visiting lecturer on image compression at several French engineering schools, and he is a co-organizer of the Onboard Payload Data Compression Workshop. His research interests include onboard image compression and processing for optical remote sensing missions.

Gianluigi Di Cosimo (gianluigi.di.cosimo@esa.int) received his M.Sc. degree in physics in 1991 and Ph.D. degree in 1995 at the Sapienza University of Rome, Italy. After a few years in the space industry, working for one of the major large system integrators in Europe, he joined the European Space Agency (ESA), 2201 AZ Noordwijk, The Netherlands, in 2006. At the ESA, he has been responsible for product assurance management on several projects across different application domains, e.g., telecommunications, navigation, and Earth observation, mainly following spacecraft development, testing, and launch. In 2020, he was appointed satellite engineering and assembly, integration, and verification manager for the Copernicus Hyperspectral Imaging Mission for the Environment.

Ferran Gascon (ferran.gascon@esa.int) received his M.Sc. degree in telecommunications engineering from Universitat Politècnica de Catalunya, Barcelona, Spain, in 1998; M.Sc. degree from École Nationale Supérieure des Télécommunications de Bretagne, Brest, France; and Ph.D. degree in remote sensing from Centre d'Études Spatiales de la Biosphère, Toulouse, France, in 2001. He is currently an engineer with the European Space Research Institute, European Space Agency, 00044 Frascati, Italy. He spent several years on the development and operations of the Copernicus Sentinel-2 and Fluorescence Explorer missions as a data quality manager. The scope of his tasks covered all mission aspects related to product/algorithm definition, calibration, and validation. He is currently the Copernicus Sentinel-2 mission manager.

Nicolas Longépé (nicolas.longepe@esa.int) received his M.Eng. degree in electronics and communication systems and M.Sc. degree in electronics from the National Institute for the Applied Sciences, Rennes, France, in 2005 and his Ph.D. degree in signal processing and telecommunication from the University of Rennes I, Rennes, in 2008. From 2007 to 2010, he was with the Earth Observation Research Center, Japan Aerospace Exploration Agency, Tsukuba, Japan. From 2010 to 2020, he was with the Space Observation Division, Collecte Localization Satellites, Plouzané, France, where he was a research engineer. Since September 2020, he has been an Earth observation data scientist in the Φ-lab Explore Office, European Space Research Institute, European Space Agency, 00044 Frascati, Italy. His research interests include Earth observation remote sensing and digital technologies, such as machine (deep) learning. He has been working on the development of innovative synthetic aperture radar-based applications for environmental and natural resource management (ocean, mangrove, land and forest cover, soil moisture, snow cover, and permafrost) and maritime security (oil spills, sea ice, icebergs, and ship detection/tracking). At the Φ-lab, he is particularly involved in the development of innovative Earth observation missions in which artificial intelligence is directly deployed at the edge (on the spacecraft).

Jens Nieke (jens.nieke@esa.int) received his M.Eng. degree in aero- and astronautical engineering from the Technical University of Berlin, Germany, and the National Institute of Applied Sciences, Lyon, France, and his Ph.D. degree in an advanced satellite mission study for regional coastal zone monitoring at the Technical University of Berlin in 2001. In 1995, he joined the team working on the Moderate Optoelectrical Scanner for the Indian Remote Sensing satellite at the German Aerospace Center, Berlin, which launched a spaceborne imaging spectrometer in 1997. From 2000 to 2003, he was a visiting scientist with the Earth Observation Research Center, Japan Aerospace Exploration Agency, Tsukuba, Japan, where he was involved in the calibration and validation of the Advanced Earth Observing Satellite-II Global Imager mission. From 2004 to 2007, he was with the Remote Sensing Laboratories, University of Zurich, Switzerland, as a senior scientist, lecturer, and project manager of the Airborne Prism Experiment project of the European Space Agency (ESA). Since 2007, he has been with the European Space Research and Technology Center, ESA, 2201 AZ Noordwijk, The Netherlands, where he is a member of the Sentinel-3 mission team.

Michal Gumiela (mgumiela@kplabs.pl) is a systems engineer with an electrical and software-embedded systems background. He received his B.Sc. degree in electronics and communications engineering at AGH University of Science and Technology, Krakow, Poland, and M.Sc. degree in microsystems and electronic systems at Warsaw University of Technology, Poland. He has worked in the Wireless Sensors Networks research group, AGH University of Science and Technology, and at Astronika. Now with KP Labs, 44-100 Gliwice, Poland, he works as a systems engineer on projects involving onboard processing using artificial intelligence (AI) algorithms for autonomous Earth observation data segmentation and classification. As a head of mission analysis, he prepares operations concepts of AI-capable missions involving heavy data processing and autonomy.



Jakub Nalepa (jnalepa@ieee.org) received his M.Sc. (2011), Ph.D. (2016), and D.Sc. (2021) degrees in computer science from Silesian University of Technology, 44-100 Gliwice, Poland, where he is currently an associate professor. He is also the head of artificial intelligence (AI) at KP Labs, 44-100 Gliwice, Poland, where he shapes the scientific and industrial AI objectives of the company, related to, among others, Earth observation, onboard and on-the-ground satellite data analysis, and anomaly detection from satellite telemetry data. He has been pivotal in designing the onboard deep learning capabilities of Intuition-1 and has contributed to missions, including the Copernicus Hyperspectral Imaging Mission for the Environment and Operations Nanosatellite. His research interests include (deep) machine learning, hyperspectral data analysis, signal processing, remote sensing, and tackling practical challenges that arise in Earth observation to deploy scalable solutions. He was the general chair of the HYPERVIEW Challenge at the 2022 IEEE International Conference on Image Processing, focusing on the estimation of soil parameters from hyperspectral images onboard Intuition-1 to maintain farm sustainability by improving agricultural practices. He is a Senior Member of IEEE.

REFERENCES
[1] N. Audebert, B. Le Saux, and S. Lefevre, "Deep learning for classification of hyperspectral data: A comparative review," IEEE Geosci. Remote Sens. Mag., vol. 7, no. 2, pp. 159–173, Jun. 2019, doi: 10.1109/MGRS.2019.2912563.
[2] W. Sun and Q. Du, "Hyperspectral band selection: A review," IEEE Geosci. Remote Sens. Mag., vol. 7, no. 2, pp. 118–139, Jun. 2019, doi: 10.1109/MGRS.2019.2911100.
[3] P. Ribalta Lorenzo, L. Tulczyjew, M. Marcinkiewicz, and J. Nalepa, "Hyperspectral band selection using attention-based convolutional neural networks," IEEE Access, vol. 8, pp. 42,384–42,403, Mar. 2020, doi: 10.1109/ACCESS.2020.2977454.
[4] G. Giuffrida et al., "The Φ-Sat-1 mission: The first on-board deep neural network demonstrator for satellite earth observation," IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–14, 2022, doi: 10.1109/TGRS.2021.3125567.
[5] G. Mateo-Garcia et al., "In-orbit demonstration of a re-trainable Machine Learning Payload for processing optical imagery," Scientific Rep., early access, 2022, doi: 10.21203/rs.3.rs-1941984/v1.
[6] M. Paoletti, J. Haut, J. Plaza, and A. Plaza, "Deep learning classifiers for hyperspectral imaging: A review," ISPRS J. Photogrammetry Remote Sens., vol. 158, pp. 279–317, Dec. 2019, doi: 10.1016/j.isprsjprs.2019.09.006.
[7] J. Nalepa et al., "Towards resource-frugal deep convolutional neural networks for hyperspectral image segmentation," Microprocessors Microsystems, vol. 73, Mar. 2020, Art. no. 102994, doi: 10.1016/j.micpro.2020.102994.
[8] J. Nalepa et al., "Towards on-board hyperspectral satellite image segmentation: Understanding robustness of deep learning through simulating acquisition conditions," Remote Sens., vol. 13, no. 8, Apr. 2021, Art. no. 1532, doi: 10.3390/rs13081532.
[9] J. Nalepa, M. Myller, and M. Kawulok, "Training- and test-time data augmentation for hyperspectral image segmentation," IEEE Geosci. Remote Sens. Lett., vol. 17, no. 2, pp. 292–296, Feb. 2020, doi: 10.1109/LGRS.2019.2921011.
[10] J. Nalepa, M. Myller, and M. Kawulok, "Transfer learning for segmenting dimensionally reduced hyperspectral images," IEEE Geosci. Remote Sens. Lett., vol. 17, no. 7, pp. 1228–1232, Jul. 2020, doi: 10.1109/LGRS.2019.2942832.
[11] L. Tulczyjew, M. Kawulok, and J. Nalepa, "Unsupervised feature learning using recurrent neural nets for segmenting hyperspectral images," IEEE Geosci. Remote Sens. Lett., vol. 18, no. 12, pp. 2142–2146, Dec. 2021, doi: 10.1109/LGRS.2020.3013205.
[12] J. Castillo-Navarro, B. Le Saux, A. Boulch, and S. Lefèvre, "Energy-based models in earth observation: From generation to semisupervised learning," IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–11, 2022, doi: 10.1109/TGRS.2021.3126428.
[13] S. Li, W. Song, L. Fang, Y. Chen, P. Ghamisi, and J. A. Benediktsson, "Deep learning for hyperspectral image classification: An overview," IEEE Trans. Geosci. Remote Sens., vol. 57, no. 9, pp. 6690–6709, Sep. 2019, doi: 10.1109/TGRS.2019.2907932.
[14] J. Yue, C. Zhou, W. Guo, H. Feng, and K. Xu, "Estimation of winter-wheat above-ground biomass using the wavelet analysis of unmanned aerial vehicle-based digital images and hyperspectral crop canopy images," Int. J. Remote Sens., vol. 42, no. 5, pp. 1602–1622, Mar. 2021, doi: 10.1080/01431161.2020.1826057.
[15] X. Jin et al., "Deep neural network algorithm for estimating maize biomass based on simulated Sentinel 2A vegetation indices and leaf area index," Crop J., vol. 8, no. 1, pp. 87–97, Feb. 2020, doi: 10.1016/j.cj.2019.06.005.
[16] B. Lu and Y. He, "Evaluating empirical regression, machine learning, and radiative transfer modelling for estimating vegetation chlorophyll content using bi-seasonal hyperspectral images," Remote Sens., vol. 11, no. 17, Aug. 2019, Art. no. 1979, doi: 10.3390/rs11171979.
[17] X. Wang et al., "Predicting soil organic carbon content in Spain by combining Landsat TM and ALOS PALSAR images," Int. J. Appl. Earth Observ. Geoinformation, vol. 92, Oct. 2020, Art. no. 102182, doi: 10.1016/j.jag.2020.102182.
[18] B. Lu, P. D. Dao, J. Liu, Y. He, and J. Shang, "Recent advances of hyperspectral imaging technology and applications in agriculture," Remote Sens., vol. 12, no. 16, Aug. 2020, Art. no. 2659, doi: 10.3390/rs12162659.
[19] C. Lin, A.-X. Zhu, Z. Wang, X. Wang, and R. Ma, "The refined spatiotemporal representation of soil organic matter based on remote images fusion of Sentinel-2 and Sentinel-3," Int. J. Appl. Earth Observ. Geoinformation, vol. 89, Jul. 2020, Art. no. 102094, doi: 10.1016/j.jag.2020.102094.
[20] N. E. Q. Silvero et al., "Soil variability and quantification based on Sentinel-2 and Landsat-8 bare soil images: A comparison," Remote Sens. Environ., vol. 252, Jan. 2021, Art. no. 112117, doi: 10.1016/j.rse.2020.112117.
[21] Y. Zhang et al., "Estimating the maize biomass by crop height and narrowband vegetation indices derived from UAV-based hyperspectral images," Ecological Indicators,


vol. 129, Oct. 2021, Art. no. 107985, doi: 10.1016/j.ecolind.2021.107985.
[22] M. Battude et al., "Estimating maize biomass and yield over large areas using high spatial and temporal resolution Sentinel-2 like remote sensing data," Remote Sens. Environ., vol. 184, pp. 668–681, Oct. 2016, doi: 10.1016/j.rse.2016.07.030.
[23] Q. Zheng et al., "Integrating spectral information and meteorological data to monitor wheat yellow rust at a regional scale: A case study," Remote Sens., vol. 13, no. 2, Jan. 2021, Art. no. 278, doi: 10.3390/rs13020278.
[24] D. Wang et al., "Early detection of tomato spotted wilt virus by hyperspectral imaging and outlier removal auxiliary classifier generative adversarial nets (OR-AC-GAN)," Scientific Rep., vol. 9, no. 1, pp. 1–4, Mar. 2019, doi: 10.1038/s41598-019-40066-y.
[25] N. Gorretta, M. Nouri, A. Herrero, A. Gowen, and J.-M. Roger, "Early detection of the fungal disease 'apple scab' using SWIR hyperspectral imaging," in Proc. 10th Workshop Hyperspectral Imag. Signal Process., Evol. Remote Sens. (WHISPERS), 2019, pp. 1–4, doi: 10.1109/WHISPERS.2019.8921066.
[26] J. Wang et al., "Estimating leaf area index and aboveground biomass of grazing pastures using Sentinel-1, Sentinel-2 and Landsat images," ISPRS J. Photogrammetry Remote Sens., vol. 154, pp. 189–201, Aug. 2019, doi: 10.1016/j.isprsjprs.2019.06.007.
[27] C.-Y. Chiang, C. Barnes, P. Angelov, and R. Jiang, "Deep learning-based automated forest health diagnosis from aerial images," IEEE Access, vol. 8, pp. 144,064–144,076, Jul. 2020, doi: 10.1109/ACCESS.2020.3012417.
[28] L. Feng et al., "Investigation on data fusion of multisource spectral data for rice leaf diseases identification using machine learning methods," Frontiers Plant Sci., vol. 11, Nov. 2020, Art. no. 577063, doi: 10.3389/fpls.2020.577063.
[29] L. Feng, B. Wu, Y. He, and C. Zhang, "Hyperspectral imaging combined with deep transfer learning for rice disease detection," Frontiers Plant Sci., vol. 12, Sep. 2021, Art. no. 693521, doi: 10.3389/fpls.2021.693521.
[30] F. Zhang and G. Zhou, "Estimation of vegetation water content using hyperspectral vegetation indices: A comparison of crop water indicators in response to water stress treatments for summer maize," BMC Ecology, vol. 19, no. 18, pp. 1–12, Dec. 2019, doi: 10.1186/s12898-019-0233-0.
[31] M. Wocher, K. Berger, M. Danner, W. Mauser, and T. Hank, "Physically-based retrieval of canopy equivalent water thickness using hyperspectral data," Remote Sens., vol. 10, no. 12, Nov. 2018, Art. no. 1924, doi: 10.3390/rs10121924.
[32] F. J. García-Haro et al., "A global canopy water content product from AVHRR/Metop," ISPRS J. Photogrammetry Remote Sens., vol. 162, pp. 77–93, Apr. 2020, doi: 10.1016/j.isprsjprs.2020.02.007.
[33] B. Yang, H. Lin, and Y. He, "Data-driven methods for the estimation of leaf water and dry matter content: Performances, potential and limitations," Sensors, vol. 20, no. 18, Sep. 2020, Art. no. 5394, doi: 10.3390/s20185394.
[34] K. Rao, A. P. Williams, J. F. Flefil, and A. G. Konings, "SAR-enhanced mapping of live fuel moisture content," Remote Sens. Environ., vol. 245, Aug. 2020, Art. no. 111797, doi: 10.1016/j.rse.2020.111797.
[35] G. Mateo-Garcia et al., "Towards global flood mapping onboard low cost satellites with machine learning," Scientific Rep., vol. 11, no. 1, pp. 1–2, Mar. 2021, doi: 10.1038/s41598-021-86650-z.
[36] X. Jiang et al., "Rapid and large-scale mapping of flood inundation via integrating spaceborne synthetic aperture radar imagery with unsupervised deep learning," ISPRS J. Photogrammetry Remote Sens., vol. 178, pp. 36–50, Aug. 2021, doi: 10.1016/j.isprsjprs.2021.05.019.
[37] B. Peng et al., "Urban flood mapping with bitemporal multispectral imagery via a self-supervised learning framework," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 2001–2016, 2021, doi: 10.1109/JSTARS.2020.3047677.
[38] Y. Li, S. Martinis, and M. Wieland, "Urban flood mapping with an active self-learning convolutional neural network based on TerraSAR-X intensity and interferometric coherence," ISPRS J. Photogrammetry Remote Sens., vol. 152, pp. 178–191, Jun. 2019, doi: 10.1016/j.isprsjprs.2019.04.014.
[39] B. M. Wotton et al., "Forest fire occurrence and climate change in Canada," Int. J. Wildland Fire, vol. 19, no. 3, pp. 253–271, May 2010, doi: 10.1071/WF09002.
[40] G. Lazzeri, W. Frodella, G. Rossi, and S. Moretti, "Multitemporal mapping of post-fire land cover using multiplatform PRISMA hyperspectral and Sentinel-UAV multispectral data: Insights from case studies in Portugal and Italy," Sensors, vol. 21, no. 12, Jun. 2021, Art. no. 3982, doi: 10.3390/s21123982.
[41] S. Xulu, N. Mbatha, and K. Peerbhay, "Burned area mapping over the southern cape forestry region, South Africa using Sentinel data within GEE Cloud Platform," ISPRS Int. J. Geo-Inf., vol. 10, no. 8, Aug. 2021, Art. no. 511, doi: 10.3390/ijgi10080511.
[42] C. F. Waigl et al., "Fire detection and temperature retrieval using EO-1 Hyperion data over selected Alaskan boreal forest fires," Int. J. Appl. Earth Observ. Geoinformation, vol. 81, pp. 72–84, Sep. 2019, doi: 10.1016/j.jag.2019.03.004.
[43] S. Amici and A. Piscini, "Exploring PRISMA scene for fire detection: Case study of 2019 bushfires in Ben Halls Gap National Park, NSW, Australia," Remote Sens., vol. 13, no. 8, Apr. 2021, Art. no. 1410, doi: 10.3390/rs13081410.
[44] N. T. Toan, P. Thanh Cong, N. Q. Viet Hung, and J. Jo, "A deep learning approach for early wildfire detection from hyperspectral satellite images," in Proc. 7th Int. Conf. Robot Intell. Technol. Appl. (RiTA), 2019, pp. 38–45, doi: 10.1109/RITAPP.2019.8932740.
[45] M. Gouhier, M. Deslandes, Y. Guéhenneux, P. Hereil, P. Cacault, and B. Josse, "Operational response to volcanic ash risks using HOTVOLC satellite-based system and MOCAGE-accident model at the Toulouse VAAC," Atmosphere, vol. 11, no. 8, Aug. 2020, Art. no. 864, doi: 10.3390/atmos11080864.
[46] M. J. Zidikheri, C. Lucas, and R. J. Potts, "Quantitative verification and calibration of volcanic ash ensemble forecasts using satellite data," J. Geophys. Res. Atmos., vol. 123, no. 8, pp. 4135–4156, Apr. 2018, doi: 10.1002/2017JD027740.


[47] L. Arias, J. Cifuentes, M. Marín, F. Castillo, and H. Garcés, "Hyperspectral imaging retrieval using MODIS satellite sensors applied to volcanic ash clouds monitoring," Remote Sens., vol. 11, no. 11, Jun. 2019, Art. no. 1393, doi: 10.3390/rs11111393.
[48] L. Liu, C. Li, Y. Lei, J. Yin, and J. Zhao, "Volcanic ash cloud detection from MODIS image based on CPIWS method," Acta Geophys., vol. 65, no. 1, pp. 151–163, Mar. 2017, doi: 10.1007/s11600-017-0013-1.
[49] A. Hudak et al., "The relationship of multispectral satellite imagery to immediate fire effects," Fire Ecology, vol. 3, pp. 64–90, Jun. 2007, doi: 10.4996/fireecology.0301064.
[50] N. Anantrasirichai, J. Biggs, F. Albino, P. Hill, and D. Bull, "Application of machine learning to classification of volcanic deformation in routinely generated InSAR data," J. Geophys. Res. Solid Earth, vol. 123, no. 8, pp. 6592–6606, Aug. 2018, doi: 10.1029/2018JB015911.
[51] L. Liu and X.-K. Sun, "Volcanic ash cloud diffusion from remote sensing image using LSTM-CA method," IEEE Access, vol. 8, pp. 54,681–54,690, Mar. 2020, doi: 10.1109/ACCESS.2020.2981368.
[52] M. P. Del Rosso, A. Sebastianelli, D. Spiller, P. P. Mathieu, and S. L. Ullo, "On-board volcanic eruption detection through CNNs and satellite multispectral imagery," Remote Sens., vol. 13, no. 17, Sep. 2021, Art. no. 3479, doi: 10.3390/rs13173479.
[53] Y. Kim and S. Hong, "Deep learning-generated nighttime reflectance and daytime radiance of the midwave infrared band of a geostationary satellite," Remote Sens., vol. 11, no. 22, Nov. 2019, Art. no. 2713, doi: 10.3390/rs11222713.
[54] W. Qi, M. Wei, W. Yang, C. Xu, and C. Ma, "Automatic mapping of landslides by the ResU-Net," Remote Sens., vol. 12, no. 15, Aug. 2020, Art. no. 2487, doi: 10.3390/rs12152487.
[61] U. Bhangale, S. Durbha, A. Potnis, and R. Shinde, "Rapid earthquake damage detection using deep learning from VHR remote sensing images," in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2019, pp. 2654–2657, doi: 10.1109/IGARSS.2019.8898147.
[62] M. Ji, L. Liu, and M. Buchroithner, "Identifying collapsed buildings using post-earthquake satellite imagery and convolutional neural networks: A case study of the 2010 Haiti Earthquake," Remote Sens., vol. 10, no. 11, Oct. 2018, Art. no. 1689, doi: 10.3390/rs10111689.
[63] P. Xiong et al., "Towards advancing the earthquake forecasting by machine learning of satellite data," Sci. Total Environ., vol. 771, Jun. 2021, Art. no. 145256, doi: 10.1016/j.scitotenv.2021.145256.
[64] X. Yan, Z. Zang, N. Luo, Y. Jiang, and Z. Li, "New interpretable deep learning model to monitor real-time PM2.5 concentrations from satellite data," Environ. Int., vol. 144, Nov. 2020, Art. no. 106060, doi: 10.1016/j.envint.2020.106060.
[65] H. Soydan, A. Koz, and H. Şebnem Düzgün, "Secondary iron mineral detection via hyperspectral unmixing analysis with Sentinel-2 imagery," Int. J. Appl. Earth Observ. Geoinformation, vol. 101, Sep. 2021, Art. no. 102343, doi: 10.1016/j.jag.2021.102343.
[66] A. Riaza, J. Buzzi, E. García-Meléndez, V. Carrère, A. Sarmiento, and A. Müller, "Monitoring acidic water in a polluted river with hyperspectral remote sensing (HyMap)," Hydrological Sci. J., vol. 60, no. 6, pp. 1064–1077, Jun. 2015, doi: 10.1080/02626667.2014.899704.
[67] F. Wang, J. Gao, and Y. Zha, "Hyperspectral sensing of heavy metals in soil and vegetation: Feasibility and challenges," ISPRS J. Photogrammetry Remote Sens., vol. 136, pp. 73–84, Feb.
[55] C. Ye et al., “Landslide detection of hyperspectral remote 2018, doi: 10.1016/j.isprsjprs.2017.12.003.
sensing data based on deep learning with constrains,” IEEE [68] T. Shi et al., “Proximal and remote sensing techniques for
J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 12, no. 12, pp. mapping of soil contamination with heavy metals,” Appl.
5047–5060, Dec. 2019, doi: 10.1109/JSTARS.2019.2951725. Spectrosc. Rev., vol. 53, no. 10, pp. 783–805, Nov. 2018, doi:
[56] Z. Ma, G. Mei, and F. Piccialli, “Machine learning for landslides 10.1080/05704928.2018.1442346.
prevention: A survey,” Neural Comput. Appl., vol. 33, no. 17, pp. [69] Q. Li et al., “Estimating the impact of COVID-19 on the PM2.5
10,881–10,907, Sep. 2021, doi: 10.1007/s00521-020-05529-8. levels in China with a satellite-driven machine learning mod-
[57] Y. Li et al., “Accurate prediction of earthquake-induced land- el,” Remote Sens., vol. 13, no. 7, Apr. 2021, Art. no. 1351, doi:
slides based on deep learning considering landslide source 10.3390/rs13071351.
area,” Remote Sens., vol. 13, no. 17, Aug. 2021, Art. no. 3436, [70] A. Basit, B. M. Ghauri, and M. A. Qureshi, “Estimation of
doi: 10.3390/rs13173436. ground level PM2.5 by using MODIS satellite data,” in Proc.
[58] B. Adriano, J. Xia, G. Baier, N. Yokoya, and S. Koshimura, 6th Int. Conf. Aerosp. Sci. Eng. (ICASE), 2019, pp. 1–5, doi:
“Multi-source data fusion based on ensemble learning for rap- 10.1109/ICASE48783.2019.9059157.
id building damage mapping during the 2018 Sulawesi earth- [71] H. Feng, J. Li, H. Feng, E. Ning, and Q. Wang, “A high-resolu-
quake and tsunami in Palu, Indonesia,” Remote Sens., vol. 11, tion index suitable for multi-pollutant monitoring in urban
no. 7, Apr. 2019, Art. no. 886, doi: 10.3390/rs11070886. areas,” Sci. Total Environ., vol. 772, Jun. 2021, Art. no. 145428,
[59] M. Pollino et al., “Assessing earthquake-induced urban rubble doi: 10.1016/j.scitotenv.2021.145428.
by means of multiplatform remotely sensed data,” ISPRS Int. [72] B. Lyu, Y. Zhang, and Y. Hu, “Improving PM2.5 air quality
J. Geo-Inf., vol. 9, no. 4, Apr. 2020, Art. no. 262, doi: 10.3390/ model forecasts in China using a bias-correction framework,”
ijgi9040262. Atmosphere, vol. 8, no. 8, Aug. 2017, Art. no. 147, doi: 10.3390/
[60] M. Hasanlou, R. Shah-Hosseini, S. T. Seydi, S. Karimzadeh, atmos8080147.
and M. Matsuoka, “Earthquake damage region detection by [73] H. Shen et al., “Integration of remote sensing and social sens-
multitemporal coherence map analysis of radar and multi- ing data in a deep learning framework for hourly urban PM2.5
spectral imagery,” Remote Sens., vol. 13, no. 6, Mar. 2021, Art. mapping,” Int. J. Environ. Res. Public Health, vol. 16, no. 21,
no. 1195, doi: 10.3390/rs13061195. Nov. 2019, Art. no. 4102, doi: 10.3390/ijerph16214102.

JUNE 2023 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE 35


[74] Q. Wang, H. Feng, H. Feng, Y. Yu, J. Li, and E. Ning, “The im- spectral imagery,” Remote Sens., vol. 3, no. 10, pp. 2166–2186,
pacts of road traffic on urban air quality in Jinan based GWR Oct. 2011, doi: 10.3390/rs3102166.
and remote sensing,” Scientific Rep., vol. 11, Jul. 2021, Art. no. [88] M. A. Isgró, M. D. Basallote, and L. Barbero, “Unmanned
15512, doi: 10.1038/s41598-021-94159-8. aerial system-based multispectral water quality monitoring
[75] S. Hou, F. Zhai, and F. Liu, “Inversion of AOD and PM2.5 in the Iberian Pyrite Belt (SW Spain),” Mine Water Environ.,
mass concentration in taihu lake area based on MODIS data,” vol. 41, no. 1, pp. 30–41, Mar. 2022, doi: 10.1007/s10230-
IOP Conf. Ser., Mater. Sci. Eng., vol. 569, no. 2, Jul. 2019, Art. 021-00837-4.
no. 022037, doi: 10.1088/1757-899X/569/2/022037. [89] S. V. Pyankov, N. G. Maximovich, E. A. Khayrulina, O. A. Ber-
[76] V. Kopačková, “Mapping acid mine drainage (AMD) and acid ezina, A. N. Shikhov, and R. K. Abdullin, “Monitoring acid
sulfate soils using Sentinel-2 data,” in Proc. IEEE Int. Geosci. Re- mine drainage’s effects on surface water in the Kizel Coal ­Basin
mote Sens. Symp. (IGARSS), 2019, pp. 5682–5685, doi: 10.1109/ with Sentinel-2 satellite images,” Mine Water Environ., vol. 40,
IGARSS.2019.8900505. no. 3, pp. 606–621, Sep. 2021, doi: 10.1007/s10230-021-
[77] R. Jackisch, S. Lorenz, R. Zimmermann, R. Möckel, and R. Glo- 00761-7.
aguen, “Drone-borne hyperspectral monitoring of acid mine [90] S. G. Tesfamichael and A. Ndlovu, “Utility of ASTER and
drainage: An example from the Sokolov Lignite district,” Re- Landsat for quantifying hydrochemical concentrations in
mote Sens., vol. 10, no. 3, Mar. 2018, Art. no. 385, doi: 10.3390/ abandoned gold mining,” Sci. Total Environ., vol. 618, pp.
rs10030385. 1560–1571, Mar. 2018, doi: 10.1016/j.scitotenv.2017.09.335.
[78] A. Seifi, M. Hosseinjanizadeh, H. Ranjbar, and M. Honarmand, [91] C. Rossi et al., “Assessment of a conservative mixing model
“Identification of acid mine drainage potential using Sentinel for the evaluation of constituent behavior below river conflu-
2a imagery and field data,” Mine Water Environ., vol. 38, pp. ences, Elqui River Basin, Chile,” River Res. Appl., vol. 37, no. 7,
707–717, Dec. 2019, doi: 10.1007/s10230-019-00632-2. pp. 967–978, Sep. 2021, doi: 10.1002/rra.3823.
[79] Z. Wang, Y. Xu, Z. Zhang, and Y. Zhang, “Review: Acid [92] D. Gómez et al., “A new approach to monitor water quality
mine drainage (AMD) in abandoned coal mines of Shanxi, in the Menor sea (Spain) using satellite data and machine
China,” Water, vol. 13, no. 1, 2021, Art. no. 8, doi: 10.3390/ learning methods,” Environ. Pollut., vol. 286, Oct. 2021, Art.
w13010008. no. 117489, doi: 10.1016/j.envpol.2021.117489.
[80] F. Kruse, J. Boardman, and J. Huntington, “Comparison of [93] Y. Yang, Q. Cui, P. Jia, J. Liu, and H. Bai, “Estimating the heavy
airborne hyperspectral data and EO-1 Hyperion for mineral metal concentrations in topsoil in the Daxigou mining area,
mapping,” IEEE Trans. Geosci. Remote Sens., vol. 41, no. 6, pp. China, using multispectral satellite imagery,” Scientific Rep.,
1388–1400, Jun. 2003, doi: 10.1109/TGRS.2003.812908. vol. 11, no. 1, Jun. 2021, Art. no. 11718, doi: 10.1038/s41598-
[81] Y. Zhong, X. Wang, S. Wang, and L. Zhang, “Advances in space- 021-91103-8.
borne hyperspectral remote sensing in China,” Geo-­Spatial [94] F. Mirzaei et al., “Modeling the distribution of heavy metals
Inf. Sci., vol. 24, no. 1, pp. 95–120, Jan. 2021, doi: 10.1080/ in lands irrigated by wastewater using satellite images of Sen-
10095020.2020.1860653. tinel-2,” Egyptian J. Remote Sens. Space Sci., vol. 24, no. 3, pp.
[82] G. E. Davies and W. M. Calvin, “Mapping acidic mine waste 537–546, Dec. 2021, doi: 10.1016/j.ejrs.2021.03.002.
with seasonal airborne hyperspectral imagery at varying spa- [95] Z. Liu, Y. Lu, Y. Peng, L. Zhao, G. Wang, and Y. Hu, “Estimation
tial scales,” Environ. Earth Sci., vol. 76, Jun. 2017, Art. no. 432, of soil heavy metal content using hyperspectral data,” Remote
doi: 10.1007/s12665-017-6763-x. Sens., vol. 11, no. 12, Jun. 2019, Art. no. 1464, doi: 10.3390/
[83] H. Flores et al., “UAS-based hyperspectral environmental mon- rs11121464.
itoring of acid mine drainage affected waters,” Minerals, vol. 11, [96] C. Xiao, B. Fu, H. Shui, Z. Guo, and J. Zhu, “Detecting the
no. 2, Feb. 2021, Art. no. 182, doi: 10.3390/min11020182. sources of methane emission from oil shale mining and
[84] B. S. Acharya and G. Kharel, “Acid mine drainage from processing using airborne hyperspectral data,” Remote Sens.,
coal mining in the United States – An overview,” J. Hydrol., vol. 12, no. 3, Feb. 2020, Art. no. 537, doi: 10.3390/rs12030537.
vol. 588, Sep. 2020, Art. no. 125061, doi: 10.1016/j.jhydrol. [97] C. Ong et al., “Imaging spectroscopy for the detection, assess-
2020.125061. ment and monitoring of natural and anthropogenic hazards,”
[85] D. D. Gbedzi et al., “Impact of mining on land use land cov- Surv. Geophys., vol. 40, pp. 431–470, May 2019, doi: 10.1007/
er change and water quality in the Asutifi North District of s10712-019-09523-1.
Ghana, West Africa,” Environ. Challenges, vol. 6, Jan. 2022, Art. [98] M. D. Foote et al., “Fast and accurate retrieval of methane
no. 100441, doi: 10.1016/j.envc.2022.100441. concentration from imaging spectrometer data using spar-
[86] W. H. Farrand and S. Bhattacharya, “Tracking acid generat- sity prior,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 9, pp.
ing minerals and trace metal spread from mines using hy- 6480–6492, Sep. 2020, doi: 10.1109/TGRS.2020.2976888.
perspectral data: Case studies from northwest India,” Int. J. [99] A. Räsänen et al., “Predicting catchment-scale methane fluxes
Remote Sens., vol. 42, no. 8, pp. 2920–2939, Apr. 2021, doi: with multi-source remote sensing,” Landscape Ecology, vol. 36,
10.1080/01431161.2020.1864057. pp. 1177–1195, Apr. 2021, doi: 10.1007/s10980-021-01194-x.
[87] A. Riaza, J. Buzzi, E. García-Meléndez, V. Carrère, and A. Mül- [100] H. Boesch et al., “Monitoring greenhouse gases from space,”
ler, “Monitoring the extent of contamination from acid mine Remote Sens., vol. 13, no. 14, Jul. 2021, Art. no. 2700, doi:
drainage in the Iberian Pyrite Belt (SW Spain) using hyper- 10.3390/rs13142700.

36 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE JUNE 2023


[101] L. Guanter et al., “Mapping methane point emissions with [115] A. Hueni and S. Bertschi, “Detection of sub-pixel plastic abun-
the PRISMA spaceborne imaging spectrometer,” Remote Sens. dance on water surfaces using airborne imaging spectros-
Environ., vol. 265, Nov. 2021, Art. no. 112671, doi: 10.1016/j. copy,” in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS),
rse.2021.112671. 2020, pp. 6325–6328, doi: 10.1109/IGARSS39084.2020.
[102] K. Kozicka et al., “Spatial-temporal changes of methane con- 9323556.
tent in the atmosphere for selected countries and regions [116] B. Basu, S. Sannigrahi, A. Sarkar Basu, and F. Pilla, “Develop-
with high methane emission from rice cultivation,” Atmo- ment of novel classification algorithms for detection of float-
sphere, vol. 12, no. 11, Oct. 2021, Art. no. 1382, doi: 10.3390/ ing plastic debris in coastal waterbodies using multispectral
atmos12111382. Sentinel-2 remote sensing imagery,” Remote Sens., vol. 13,
[103] N. Karimi, K. T. W. Ng, and A. Richter, “Prediction of fugi- no. 8, Apr. 2021, Art. no. 1598, doi: 10.3390/rs13081598.
tive landfill gas hotspots using a random forest algorithm and [117] A. Jamali and M. Mahdianpari, “A cloud-based framework for
Sentinel-2 data,” Sustain. Cities Soc., vol. 73, Oct. 2021, Art. large-scale monitoring of ocean plastics using multi-spectral
no. 103097, doi: 10.1016/j.scs.2021.103097. satellite imagery and generative adversarial network,” Wa-
[104] A. K. Ayasse et al., “Methane mapping with future satellite im- ter, vol. 13, no. 18, Sep. 2021, Art. no. 2553, doi: 10.3390/
aging spectrometers,” Remote Sens., vol. 11, no. 24, Dec. 2019, w13182553.
Art. no. 3054, doi: 10.3390/rs11243054. [118] M. Kremezi et al., “Pansharpening PRISMA data for ma-
[105] D. H. Cusworth et al., “Detecting high-emitting methane rine plastic litter detection using plastic indexes,” IEEE Ac-
sources in oil/gas fields using satellite observations,” Atmos. cess, vol. 9, pp. 61,955–61,971, Apr. 2021, doi: 10.1109/AC-
Chemistry Phys., vol. 18, no. 23, pp. 16,885–16,896, Nov. 2018, CESS.2021.3073903.
doi: 10.5194/acp-18-16885-2018. [119] L. Biermann, D. Clewley, V. Martinez-Vicente, and K. Topou-
[106] J. A. de Gouw et al., “Daily satellite observations of methane zelis, “Finding plastic patches in coastal waters using optical
from oil and gas production regions in the United States,” satellite data,” Scientific Rep., vol. 10, no. 1, pp. 2045–2322,
Scientific Rep., vol. 10, Jan. 2020, Art. no. 1379, doi: 10.1038/ Apr. 2020, doi: 10.1038/s41598-020-62298-z.
s41598-020-57678-4. [120] J. Mifdal, N. Longépé, and M. Rußwurm, “Towards detecting
[107] Y. Ren, C. Zhu, and S. Xiao, “Deformable faster R-CNN floating objects on a global scale with learned spatial features
with aggregating multi-layer features for partially occluded using Sentinel 2,” ISPRS Ann. Photogrammetry Remote Sens.
object detection in optical remote sensing images,” Remote Spatial Inf. Sci., vol. V-3-2021, pp. 285–293, Jun. 2021, doi:
Sens., vol. 10, no. 9, Sep. 2018, Art. no. 1470, doi: 10.3390/ 10.5194/isprs-annals-V-3-2021-285-2021.
rs10091470. [121] S. P. Garaba and H. M. Dierssen, “An airborne remote sens-
[108] A. K. Thorpe et al., “Methane emissions from underground gas ing case study of synthetic hydrocarbon detection using
storage in California,” Environ. Res. Lett., vol. 15, no. 4, Apr. short wave infrared absorption features identified from
2020, Art. no. 045005, doi: 10.1088/1748-9326/ab751d. marine-­harvested macro- and microplastics,” Remote Sens.
[109] D. R. Thompson et al., “Space-based remote imaging spec- Environ., vol. 205, pp. 224–235, Feb. 2018, doi: 10.1016/j.
troscopy of the aliso canyon ch4 superemitter,” Geophys. Res. rse.2017.11.023.
Lett., vol. 43, no. 12, pp. 6571–6578, Jun. 2016, doi: 10.1002/ [122] J. Zhang, H. Feng, Q. Luo, Y. Li, J. Wei, and J. Li, “Oil spill detec-
2016GL069079. tion in quad-polarimetric SAR images using an advanced con-
[110] J. R. Jambeck et al., “Plastic waste inputs from land into the volutional neural network based on SuperPixel model,” R ­ emote
ocean,” Science, vol. 347, no. 6223, pp. 768–771, Feb. 2015, Sens., vol. 12, no. 6, Mar. 2020, Art. no. 944, doi: 10.3390/
doi: 10.1126/science.1260352. rs12060944.
[111] S. B. Borrelle et al., “Predicted growth in plastic waste ex- [123] M. Krestenitis, G. Orfanidis, K. Ioannidis, K. Avgerinakis,
ceeds efforts to mitigate plastic pollution,” Science, vol. 369, S. Vrochidis, and I. Kompatsiaris, “Oil spill identification
no. 6510, pp. 1515–1518, Sep. 2020, doi: 10.1126/science. from satellite images using deep neural networks,” Remote
aba3656. Sens., vol. 11, no. 15, Jul. 2019, Art. no. 1762, doi: 10.3390/
[112] L. Buhl-Mortensen and P. Buhl-Mortensen, “Marine litter in rs11151762.
the Nordic Seas: Distribution composition and abundance,” [124] N. Longépé et al., “Polluter identification with spaceborne
Mar. Pollut. Bull., vol. 125, no. 1, pp. 260–270, Dec. 2017, doi: radar imagery, AIS and forward drift modeling,” Mar. Pollut.
10.1016/j.marpolbul.2017.08.048. Bull., vol. 101, no. 2, pp. 826–833, Dec. 2015, doi: 10.1016/j.
[113] S. P. Garaba et al., “Sensing ocean plastics with an airborne hy- marpolbul.2015.08.006.
perspectral shortwave infrared imager,” Environ. Sci. Technol., [125] K. Zeng and Y. Wang, “A deep convolutional neural network
vol. 52, no. 20, pp. 11,699–11,707, Sep. 2018, doi: 10.1021/acs. for oil spill detection from spaceborne SAR images,” Remote
est.8b02855. Sens., vol. 12, no. 6, Mar. 2020, Art. no. 1015, doi: 10.3390/
[114] K. Topouzelis, D. Papageorgiou, G. Suaria, and S. Aliani, rs12061015.
“Floating marine litter detection algorithms and techniques [126] S. K. Chaturvedi, S. Banerjee, and S. Lele, “An assessment of
using optical remote sensing data: A review,” Mar. Pollut. Bull., oil spill detection using Sentinel 1 SAR-C images,” J. Ocean
vol. 170, Sep. 2021, Art. no. 112675, doi: 10.1016/j.marpolbul. Eng. Sci., vol. 5, no. 2, pp. 116–135, Jun. 2020, doi: 10.1016/j.
2021.112675. joes.2019.09.004.

JUNE 2023 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE 37


[127] S. Tong, X. Liu, Q. Chen, Z. Zhang, and G. Xie, “Multi-feature station, China,” Remote Sens. Environ., vol. 84, no. 4, pp. 506–
based ocean oil spill detection for polarimetric SAR data using 515, Apr. 2003, doi: 10.1016/S0034-4257(02)00149-9.
random forest and the self-similarity parameter,” Remote Sens., [139] C. V. Rodríguez-Benito, G. Navarro, and I. Caballero, “Using
vol. 11, no. 4, Feb. 2019, Art. no. 451, doi: 10.3390/rs11040451. Copernicus Sentinel-2 and Sentinel-3 data to monitor harm-
[128] D. Mera, V. Bolon-Canedo, J. Cotos, and A. Alonso-Betanzos, ful algal blooms in Southern Chile during the COVID-19 lock-
“On the use of feature selection to improve the detection of down,” Mar. Pollut. Bull., vol. 161, Dec. 2020, Art. no. 111722,
sea oil spills in SAR images,” Comput. Geosci., vol. 100, pp. doi: 10.1016/j.marpolbul.2020.111722.
166–178, Mar. 2017, doi: 10.1016/j.cageo.2016.12.013. [140] P. R. Hill, A. Kumar, M. Temimi, and D. R. Bull, “Habnet: Ma-
[129] S. Temitope Yekeen and A.-L. Balogun, “Advances in remote chine learning, remote sensing-based detection of harmful al-
sensing technology, machine learning and deep learning for gal blooms,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens.,
marine oil spill detection, prediction and vulnerability assess- vol. 13, pp. 3229–3239, Jun. 2020, doi: 10.1109/JSTARS.2020.
ment,” Remote Sens., vol. 12, no. 20, 2020, Art. no. 3416, doi: 3001445.
10.3390/rs12203416. [141] K. Abdelmalik, “Role of statistical remote sensing for inland
[130] A.-L. Balogun, S. T. Yekeen, B. Pradhan, and O. F. Althu- water quality parameters prediction,” Egyptian J. Remote Sens.
waynee, “Spatio-temporal analysis of oil spill impact and re- Space Sci., vol. 21, no. 2, pp. 193–200, Sep. 2018, doi: 10.1016/j.
covery pattern of coastal vegetation and wetland using multi- ejrs.2016.12.002.
spectral satellite Landsat 8-OLI imagery and machine learning [142] A. Sahay et al., “Empirically derived Coloured Dissolved Organic
models,” Remote Sens., vol. 12, no. 7, Apr. 2020, Art. no. 1225, Matter absorption coefficient using in-situ and Sentinel 3/OLCI
doi: 10.3390/rs12071225. in coastal waters of India,” Int. J. Remote Sens., vol. 43, no. 4, pp.
[131] P. Nie, H. Wu, J. Xu, L. Wei, H. Zhu, and L. Ni, “Thermal pol- 1430–1450, Feb. 2022, doi: 10.1080/01431161.2022.2040754.
lution monitoring of Tianwan nuclear power plant for the past [143] Y. Q. Tian et al., “Estimating of chromophoric dissolved or-
20 years based on Landsat remote sensed data,” IEEE J. Sel. ganic matter (CDOM) with in-situ and satellite hyperspectral
Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 6146–6155, remote sensing technology,” in Proc. IEEE Int. Geosci. Remote
Jun. 2021, doi: 10.1109/JSTARS.2021.3088529. Sens. Symp. (IGARSS), 2012, pp. 2040–2042, doi: 10.1109/
[132] J. M. Torres Palenzuela, L. G. Vilas, F. M. Bellas Aláez, and Y. IGARSS.2012.6350975.
Pazos, “Potential application of the new Sentinel satellites for [144] N. Cherukuru et al., “Estimating dissolved organic carbon
monitoring of harmful algal blooms in the Galician aquacul- concentration in turbid coastal waters using optical remote
ture,” Thalassas Int. J. Mar. Sci., vol. 36, pp. 85–93, Apr. 2020, sensing observations,” Int. J. Appl. Earth Observ. Geoinformation,
doi: 10.1007/s41208-019-00180-0. vol. 52, pp. 149–154, Oct. 2016, doi: 10.1016/j.jag.2016.06.010.
[133] S. Stroming, M. Robertson, B. Mabee, Y. Kuwayama, and B. [145] K. Zolfaghari et al., “Impact of spectral resolution on quantify-
Schaeffer, “Quantifying the human health benefits of using ing cyanobacteria in lakes and reservoirs: A machine-learning
satellite information to detect cyanobacterial harmful algal assessment,” IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–20,
blooms and manage recreational advisories in US lakes,” Geo- 2022, doi: 10.1109/TGRS.2021.3114635.
Health, vol. 4, no. 9, Sep. 2020, Art. no. e2020GH000254, doi: [146] J. Arellano-Verdejo, H. E. Lazcano-Hernandez, and N. Caba-
10.1029/2020GH000254. nillas-Terán, “ERISNet: Deep neural network for Sargassum
[134] Z. Kang et al., “Phaeocystis globosa bloom monitoring: ­detection along the coastline of the Mexican Caribbean,” PeerJ,
Based on p. globosa induced seawater viscosity modification vol. 7, May 2019, Art. no. e6842, doi: 10.7717/peerj.6842.
­adjacent to a nuclear power plant in Qinzhou Bay, China,” J. [147] J. Shin et al., “Sargassum detection using machine learning
Ocean Univ. China, vol. 19, no. 5, pp. 1207–1220, Oct. 2020, models: A case study with the first 6 months of GOCI-II imag-
doi: 10.1007/s11802-020-4481-6. ery,” Remote Sens., vol. 13, no. 23, Jan. 2021, Art. no. 4844, doi:
[135] J. Jankowiak, T. Hattenrath-Lehmann, B. J. Kramer, M. 10.3390/rs13234844.
Ladds, and C. J. Gobler, “Deciphering the effects of nitrogen,­ [148] H. Dierssen, A. Chlus, and B. Russell, “Hyperspectral discrim-
phosphorus, and temperature on cyanobacterial bloom in- ination of floating mats of seagrass wrack and the macroal-
tensification, diversity, and toxicity in Western Lake Erie,” gae Sargassum in coastal waters of Greater Florida Bay using
­Limnol. Oceanogr., vol. 64, no. 3, pp. 1347–1370, May 2019, airborne remote sensing,” Remote Sens. Environ., vol. 167, pp.
doi: 10.1002/lno.11120. 247–258, Sep. 2015, doi: 10.1016/j.rse.2015.01.027.
[136] R. Xia et al., “River algal blooms are well predicted by ante- [149] Z. Zhang, D. Huisingh, and M. Song, “Exploitation of trans-
cedent environmental conditions,” Water Res., vol. 185, Oct. Arctic maritime transportation,” J. Cleaner Prod., vol. 212, pp.
2020, Art. no. 116221, doi: 10.1016/j.watres.2020.116221. 960–973, Mar. 2019, doi: 10.1016/j.jclepro.2018.12.070.
[137] J. Pyo et al., “A convolutional neural network regression for [150] A. A. Kurekin et al., “Operational monitoring of illegal fish-
quantifying cyanobacteria using hyperspectral imagery,” Re- ing in Ghana through exploitation of satellite earth observa-
mote Sens. Environ., vol. 233, Nov. 2019, Art. no. 111350, doi: tion and AIS data,” Remote Sens., vol. 11, no. 3, Feb. 2019, Art.
10.1016/j.rse.2019.111350. no. 293, doi: 10.3390/rs11030293.
[138] D. Tang, D. R. Kester, Z. Wang, J. Lian, and H. Kawamura, [151] M. Reggiannini et al., “Remote sensing for maritime prompt
“AVHRR satellite remote sensing and shipboard measure- monitoring,” J. Mar. Sci. Eng., vol. 7, no. 7, Jun. 2019, Art.
ments of the thermal plume from the Daya Bay, nuclear power no. 202, doi: 10.3390/jmse7070202.

38 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE JUNE 2023


[152] J. Alghazo, A. Bashar, G. Latif, and M. Zikria, “Maritime ship detec- [166] “Copernicus hyperspectral imaging mission for the envi-
tion using convolutional neural networks from satellite images,” ronment – Mission requirements document,” European
in Proc. 10th IEEE Int. Conf. Commun. Syst. Netw. Technol. (CSNT), Space Agency, Paris, France, ESA-EOPSM-CHIM-MRD-321
2021, pp. 432–437, doi: 10.1109/CSNT51715.2021.9509628. issue 3.0, 2021.
[153] M. A. El-Alfy, A. F. Hasballah, H. T. Abd El-Hamid, and A. M. [167] M. Rast, J. Nieke, J. Adams, C. Isola, and F. Gascon, “Copernicus
El-Zeiny, “Toxicity assessment of heavy metals and organo- hyperspectral imaging mission for the environment (Chime),”
chlorine pesticides in freshwater and marine environments, in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2021, pp.
Rosetta area, Egypt using multiple approaches,” Sustain. En- 108–111, doi: 10.1109/IGARSS47720.2021.9553319.
viron. Res., vol. 19, no. 1, Dec. 2019, Art. no. 19, doi: 10.1186/ [168] D. Lebedeff, M. Foulon, R. Camarero, R. Vitulli, and Y. Bobi-
s42834-019-0020-9. chon, “On-board cloud detection and selective spatial/spectral
[154] N. Longépé et al., “Completing fishing monitoring with space- compression based on CCSDS 123.0-b-2 for hyperspectral mis-
borne Vessel Detection System (VDS) and Automatic Identi- sions,” in Proc. Int. Workshop On-Board Payload Data Compression
fication System (AIS) to assess illegal fishing in Indonesia,” (OBPDC), 2020, pp. 1–8.
Mar. Pollut. Bull., vol. 131, pp. 33–39, Jun. 2018, doi: 10.1016/j. [169] “Recommendation for space data system standards CCSDS
marpolbul.2017.10.016. 123.0-b-2 – Low-complexity lossless and near-lossless multi-
[155] R. Pelich, N. Longépé, G. Mercier, G. Hajduch, and spectral and hyperspectral image compression – Blue book,”
R. Garello, “AIS-based evaluation of target detectors CCSDS Secretariat, National Aeronautics and Space Admin-
and SAR sensors characteristics for maritime surveil- istration, Washington, DC, USA, 2021. [Online]. Available:
lance,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., https://public.ccsds.org/Pubs/123x0b2c3.pdf
vol. 8, no. 8, pp. 3892–3901, Aug. 2015, doi: 10.1109/ [170] Y. Barrios, P. Rodríguez, A. Sánchez, M. González, L. Berrojo,
JSTARS.2014.2319195. and R. Sarmiento, “Implementation of cloud detection and
[156] Y.-L. Chang, A. Anagaw, L. Chang, Y. C. Wang, C.-Y. Hsiao, processing algorithms for CCSDS compliant hyper-spectral
and W.-H. Lee, “Ship detection based on YOLOv2 for SAR im- image compression on chime mission,” in Proc. Int. Workshop
agery,” Remote Sens., vol. 11, no. 7, Apr. 2019, Art. no. 786, doi: On-Board Payload Data Compression (OBPDC), 2020, pp. 1–8.
10.3390/rs11070786. [171] Y. Liu, Y. Yang, W. Jing, and X. Yue, “Comparison of differ-
[157] G. Melillos et al., “The use of remote sensing for maritime ent machine learning approaches for monthly satellite-based
surveillance for security and safety in Cyprus,” in Proc. SPIE soil moisture downscaling over northeast china,” Remote Sens.,
Detection Sens. Mines, Explosive Objects, Obscured Targets XXV, vol. 10, no. 1, Dec. 2018, Art. no. 31, doi: 10.3390/rs10010031.
2020, vol. 11418, pp. 141–152, doi: 10.1117/12.2567102. [172] T. Valentijn, J. Margutti, M. van den Homberg, and J. Laak-
[158] H. Heiselberg, “Ship-iceberg classification in SAR and mul- sonen, “Multi-hazard and spatial transferability of a CNN for
tispectral satellite images with neural networks,” Remote automated building damage assessment,” Remote Sens., vol. 12,
Sens., vol. 12, no. 15, Jul. 2020, Art. no. 2353, doi: 10.3390/ no. 17, Sep. 2020, Art. no. 2839, doi: 10.3390/rs12172839.
rs12152353. [173] Y. Michael et al., “Forecasting fire risk with machine learning
[159] Y. Ren, C. Zhu, and S. Xiao, “Small object detection in op- and dynamic information derived from satellite vegetation
tical remote sensing images via modified faster R-CNN,” index time-series,” Sci. Total Environ., vol. 764, Apr. 2021, Art.
Appl. Sci., vol. 8, no. 5, May 2018, Art. no. 813, doi: 10.3390/ no. 142844, doi: 10.1016/j.scitotenv.2020.142844.
app8050813. [174] C. Corradino et al., “Mapping recent lava flows at Mount Etna
[160] M. Reggiannini and L. Bedini, “Multi-sensor satellite data pro- using multispectral Sentinel-2 images and machine learn-
cessing for marine traffic understanding,” Electronics, vol. 8, no. 2, ing techniques,” Remote Sens., vol. 11, no. 16, Aug. 2019, Art.
Feb. 2019, Art. no. 152, doi: 10.3390/electronics8020152. no. 1916, doi: 10.3390/rs11161916.
[161] A. Rasul, “An investigation into the location of the crashed [175] N. Wang et al., “Identification of the debris flow process types
aircraft through the use of free satellite images,” J. Photogram- within catchments of Beijing mountainous area,” Water, vol. 11,
metry, Remote Sens. Geoinformation Sci., vol. 87, pp. 119–122, no. 4, Mar. 2019, Art. no. 638, doi: 10.3390/w11040638.
Sep. 2019, doi: 10.1007/s41064-019-00074-z. [176] S. Ullo et al., “Landslide geohazard assessment with convo-
[162] L. Shuxin, Z. Zhilong, and L. Biao, “A plane target detection lutional neural networks using sentinel-2 imagery data,” in
algorithm in remote sensing images based on deep learning Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2019, pp.
network technology,” J. Phys. Conf. Ser., vol. 960, no. 1, Jan. 9646–9649, doi: 10.1109/IGARSS.2019.8898632.
2018, Art. no. 012025, doi: 10.1088/1742-6596/960/1/012025. [177] J. E. Nichol, M. Bilal, M. A. Ali, and Z. Qiu, “Air pollution sce-
[163] J. Nalepa, M. Myller, and M. Kawulok, “Validating hyperspectral nario over China during COVID-19,” Remote Sens., vol. 12,
image segmentation,” IEEE Geosci. Remote Sens. Lett., vol. 16, no. 8, no. 13, Jun. 2020, Art. no. 2100, doi: 10.3390/rs12132100.
pp. 1264–1268, Aug. 2019, doi: 10.1109/LGRS.2019.2895697. [178] D. J. Varon, D. Jervis, J. McKeever, I. Spence, D. Gains, and D.
[164] S. Kapoor and A. Narayanan, “Leakage and the reproducibility J. Jacob, “High-frequency monitoring of anomalous methane
crisis in ML-based science,” 2022. [Online]. Available: https:// point sources with multispectral Sentinel-2 satellite observa-
arxiv.org/abs/2207.07048 tions,” Atmos. Meas. Techn., vol. 14, no. 4, pp. 2771–2785, Apr.
[165] R. Vitulli et al., “CHIME: The first AI-powered ESA operational 2021, doi: 10.5194/amt-14-2771-2021.
mission,” in Proc. Small Satell. Syst. Services 4S Symp., 2022, pp. 1–8. GRS

JUNE 2023 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE 39


Onboard Information Fusion for Multisatellite Collaborative Observation

GUI GAO, LIBO YAO, WENFENG LI, LINLIN ZHANG, AND MAOLIN ZHANG

Summary, challenges, and perspectives

Onboard information fusion for multisatellites, which is based on the spatial computing mode, can improve the satellites' capability, such as the spatial–temporal coverage, detection accuracy, recognition confidence, position precision, and prediction precision for disaster monitoring, maritime surveillance, and other emergent or continuous persistent observing situations. First, we analyze the necessity of onboard information fusion. Next, the recent onboard processing developments are summarized and the existing problems are discussed. Furthermore, the key technologies and concepts of onboard information fusion are summarized in the fields of feature representation, association, feature-level fusion, spatial computing architecture, and other issues. Finally, the future developments of onboard information fusion are investigated and discussed.

Digital Object Identifier 10.1109/MGRS.2023.3274301
Date of current version: 30 June 2023

40 2473-2397/23©2023IEEE IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE JUNE 2023

INTRODUCTION
Earth observation technology for modern satellites has developed rapidly. Through the innovative design and launch of small low Earth orbit (LEO) satellites, many commercial remote sensing satellite companies, such as Planet Labs and ICEYE, have established remote sensing satellite constellations. The number of these satellites exceeds the number of remote sensing satellites launched in the past, and the time interval between repeated satellite observations of the same area has consequently been substantially shortened. However, beyond considerably improving individual sensors, attention has turned to multisatellite-networking/multiload collaboration, which can acquire Earth observation data with higher accuracy, richer information dimensions, and higher space–time resolution than current satellite systems. Because information fusion can effectively reduce the conflict between multisource data, making full use of complementary information and realizing comprehensive mutual confirmation and collaborative reasoning [1], [2], [3], [4], [5], [6], [7], [8], multisatellite information fusion processing is one of the major research focuses in the field of satellite Earth observation applications [9].

The traditional mode of satellite Earth observation comprises task planning, data processing, and ground-based production, and the data fusion processing for satellite multiload collaborative observation is also performed on the ground [10], [11]. The product is then sent to the user via a ground network. Multisatellite information fusion has been widely employed in cloud removal [12], land cover change detection [13], [14], land surface deformation surveying [15], [16], [17], [18], disaster monitoring [19], [20], air and water pollution monitoring [21], target searching and surveillance [22], [23], [24], [25], maritime situation awareness [26], [27], and other fields [31]. Environmental monitoring mainly uses imaging satellites, such as optical [11], synthetic aperture radar (SAR) [13], [28], [29], [30], and infrared [14]. In addition to imaging satellites, nonimaging satellites, such as the automatic identification system (AIS) [32], signal intelligence (SIGINT) [33], and electronic intelligence (ELINT) [7], have been applied in target surveillance.
However, the traditional mode aims only at convention-
al, procedural, nonemergency, and low-timeliness Earth
observation tasks. Because of a large number of transmis-
sion nodes and long-time delay, the traditional mode can-
not undergo a rapid response to emergency tasks, such as
disaster rescue, time-critical target monitoring, and other
high-timeliness tasks. Furthermore, the transmission mode
has a weak ability to quickly guide and fuse real-time infor-
mation for multisatellite cooperative detection tasks, such
as ship monitoring in the high open seas and wide-area
missile warning [34].
Onboard information fusion of multisatellite data can
­dramatically improve the timeliness of satellite Earth observa-
tions. Onboard information fusion focuses two main chal-
lenges: first, the real-time data acquisition rate of satellite
Earth observation sensors is increasing, reaching several
©SHUTTERSTOCK.COM/BLUE PLANET STUDIO
gigabits per second, and real-time processing is challeng-
ing [34]; second, the onboard storage and processing hard-
performance and data quality, traditional companies, such ware is restricted by the satellite load capacity, power con-
as MDA and AirBus, can create a single satellite with higher sumption, and heat dissipation. Therefore, the onboard
spatial, radiation, and spectral resolutions, larger observa- processing capacity considerably differs from the ground
tion width, highly robust mobile agility, and a high num- processing capacity. Space researchers worldwide are work-
ber of working modes, and they can vigorously develop ing on: implantation of key technologies, equipment
integrated remote sensing satellites. In the future, Earth development of intersatellite high-speed laser commu-
observation systems of modern satellites will have the nication [35], [36]; onboard mass data storage and high-
capability of single-satellite/multiload collaboration and speed computing hardware [37], [38]; onboard embedded

JUNE 2023 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE 41


real-time operating systems [39]; onboard raw observation data compression [40], [41], [42], [43], [44]; onboard SAR real-time imaging [45], [46], [47], [48]; onboard multisource heterogeneous data intelligent computing, such as cloud detection [49], [50] and ship detection and recognition [51], [52], [53], [54], [55]; onboard autonomous task planning [56], [57]; and other onboard processing [58], [59], [60], [61], [62], [63], [64], [65], [66], [67], to achieve onboard autonomous intelligent perception and integration of multisatellite Earth observation. Many onboard test loadings on different satellites, such as ZY1E [68], ZY1-02D [69], EDRS-C [70], GaoFen-5 [71], FY-4 [72], HJ-2 A/B [73], HY-1C [74], [75], HY-2B, and MetOp-C [76], have been performed for various applications.

By analyzing the current developments in satellite data onboard processing, this article summarizes, analyzes, and studies related issues involving multisatellite data onboard intelligent fusion processing.

DEVELOPMENT OF SATELLITE DATA ONBOARD PROCESSING
Globally, space government organizations and commercial companies attach great importance to developing onboard processing systems and equipment, researching core algorithms of onboard intelligent processing, and conducting onboard testing and verification. The United States was the first country to conduct onboard research on the hardware, software, algorithms, and autonomous task planning of multisatellites, and it has performed onboard tests on multiple satellites. This section introduces and analyzes the development history and key technologies for onboard satellite data processing.

RELEVANT RESEARCH PLAN AND ONBOARD PROCESSING SYSTEM

CURRENT STATUS OF ONBOARD PROCESSING FOR OPTICAL REMOTE SENSING SATELLITES
Currently, optical remote sensing satellites are capable of onboard raw data compression, radiometric correction, geometric correction, cloud detection, target detection and recognition, terrain classification, and change monitoring. Optical remote sensing satellites with onboard processing capabilities are listed in Table 1. For example, the U.S. Earth Observing-1 (EO-1) satellite achieved onboard functions including automatic selection of regions of interest (ROIs), regional change detection, cloud detection, and invalid or unnecessary data removal in hyperspectral images; hence, the time consumed by data downloading was reduced from the original few hours to less than 30 min [77]. Onboard data processing of the U.S. military's optical remote sensing satellites has practical applications. For example, the Optical Real-Time Adaptive Signature Identification System (ORASIS) [78] carried on the Naval EarthMap Observer (NEMO) satellite can provide functions such as automatic data analysis, feature extraction, and data compression for satellite hyperspectral images, and it can send the processing results directly from the satellite to operational users in real time. The MightySat satellite realized onboard ROI identification [79] to verify that space technology supports real-time battlefield applications. An operationally responsive space satellite can analyze image and signal data collected by the satellite in orbit and quickly provide soldiers with target information, combat equipment, and battlefield damage assessment information in

TABLE 1. ONBOARD PROCESSING SYSTEMS OF OPTICAL REMOTE SENSING SATELLITES.

SATELLITE | COUNTRY | ONBOARD PROCESSING FUNCTIONS | LAUNCH TIME | HARDWARE SOLUTIONS
EO-1 | United States | Change detection and anomaly detection | 2000 | Mongoose V processor
BIRD | Germany | Multitype remote sensing image preprocessing, on-satellite real-time multispectral classification, forest fire detection, etc. | 2001 | TMS320C40 floating-point digital signal processor (DSP), field-programmable gate array (FPGA), and NI1000 neural network coprocessor
FedSat | Australia | Multisource data compression, disaster monitoring | 2002 | FPGA
NEMO | United States | Adaptive compression of hyperspectral data | 2003 | SHARC 21060L DSP
MRO | United States | Multisensor information synthesis and analysis for autonomous mission planning | 2005 | RAD750
X-SAT | Singapore | Automatic invalid data culling | 2006 | Virtex FPGA and StrongARM
PROBA-2 | European Space Agency (ESA) | Image analysis and compression, autonomous mission planning | 2009 | LEON2-FT
Pleiades-HR | France | Radiation correction, geometric correction, image compression | 2011 | MVP modular processor with FPGA at its core
CubeSat | United States | Large-compression-ratio (100x) compression of video data | 2011 | Virtex 5VQ
Solar Orbiter | ESA | Image stabilization, preprocessing, radiation transformation equation transposition | 2017 | LEON3 and FPGA V4
JiLin-1 01/02 spectrum satellite | China | Forest fire detection, ship detection | 2019 | Multicore DSP, GPU
Φ-Sat-1 | ESA | Cloud detection | 2021 | Movidius MA2450
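Several of the optical missions above screen data onboard before downlink, as in the EO-1 example of cloud detection and invalid-data removal. The following is a minimal sketch of that screening logic; the 8-bit grayscale tile format and both thresholds are assumptions made for this example, not any mission's actual parameters.

```python
# Illustrative sketch of onboard data screening before downlink:
# discard image tiles that are mostly cloud so that only useful
# data is queued for transmission.

def cloud_fraction(tile, brightness_threshold=200):
    """Fraction of pixels brighter than a crude 8-bit cloud threshold."""
    pixels = [p for row in tile for p in row]
    return sum(1 for p in pixels if p >= brightness_threshold) / len(pixels)

def select_for_downlink(tiles, max_cloud_fraction=0.3):
    """Return indices of tiles clear enough to be worth transmitting."""
    return [i for i, tile in enumerate(tiles)
            if cloud_fraction(tile) <= max_cloud_fraction]

clear_tile = [[50, 60], [55, 40]]        # mostly dark pixels: likely clear
cloudy_tile = [[250, 240], [255, 230]]   # mostly bright pixels: likely cloud
print(select_for_downlink([clear_tile, cloudy_tile, clear_tile]))  # [0, 2]
```

A flight system would use calibrated radiances and a trained classifier rather than a brightness threshold, but the control flow is the same: evaluate onboard, transmit selectively.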



near real time. The project also researched how satellites cooperate to achieve continuous target imaging [80].

The onboard image intelligent processing system of China's Tsinghua-1 microsatellite has realized onboard cloud detection and detection of cloud coverage areas and has reduced the amount of data transmitted [81]. The Jilin-1 half spectral satellite has realized automatic onboard detection of forest fires, marine ships, and other targets, and it can send the processing results to the ground terminal station in the form of short messages via the Beidou navigation satellite system [82]. Germany's bispectral infrared detection (BIRD) small satellite has realized the onboard processing of visible, middle infrared, and thermal infrared images, including radiometric correction, geometric correction, texture extraction, terrain classification, and subpixel fire point detection [83]. The French Pleiades-HR satellite achieved onboard processing for radiometric correction, geometric correction, image compression, and other functions [84]. Australia's FedSat is equipped with a reconfigurable onboard processing prototype system that uses data generated by onboard optical sensors for natural disaster monitoring [85].

CURRENT STATUS OF ONBOARD PROCESSING FOR SAR REMOTE SENSING SATELLITES
Spaceborne SAR satellites have higher data rates, larger processing capacities, and more complex imaging algorithms than optical remote sensing satellites. Because of satellite volume, weight, and power consumption constraints, SAR satellite data processing primarily depends on the ground processing system, and only some onboard data processing has been conducted. As shown in Table 2, several SAR remote sensing satellites have realized the onboard compression of raw echo data and are currently performing real-time imaging, target detection, other onboard processing algorithms, and hardware testing. For example, China's Chaohu-1 satellite conducted onboard real-time imaging and artificial intelligence (AI) real-time processing verification.

CURRENT STATUS OF ONBOARD FUSION SYSTEMS FOR MULTISATELLITES
With the development of onboard storage hardware, embedded real-time operating systems, and data processing algorithms, the onboard data processing capabilities of satellites have considerably improved. Simultaneously, the development of optical communication, relay satellites, and other technologies has enabled data transmission between satellites at high speeds. Some researchers have successively proposed the concepts of the space-based Internet [86], [87], [88], [89], spatial information networks [90], [91], [92], [93], and air–space–ground-integrated information communication networks [94], [95]; conducted research on key technologies; and planned and built onboard dynamic real-time distributed networks aimed at integrating space-based communication, navigation, remote sensing, and computing resources. Onboard collaborative intelligent perception and the fusion of multisatellite and multisensor information are key technologies for achieving this goal.

Since 2012, the United States has paid attention to space computing technology and has investigated onboard information fusion and air–space–ground-integrated collaborative networking, covering space cloud computing, space prediction analysis, and space information collaboration. The U.S. Air Force Research Laboratory has tested space-based cloud computing networks deployed in geosynchronous Earth orbit [96] and is now exploring the deployment scheme and architecture of heterogeneous-network cloud computing with high Earth orbit (HEO), medium Earth orbit (MEO), and LEO satellites. The Blackjack small-satellite constellation proposed by the Defense Advanced Research Projects Agency (DARPA) of the United States has a high degree of autonomy and flexibility, as shown in Figure 1(a). With onboard autonomously distributed decision processors, payloads can operate autonomously. The satellite data can be completely processed onboard, and collaborative observation can proceed autonomously without the support of a ground station for 30 days to satisfy the requirements of command and control; intelligence, surveillance, and reconnaissance (ISR); tactical operations; and others [97]. The U.S. Space Development Agency (SDA) of the Department of Defense (DOD) is building the National Defense Space Architecture (NDSA), which is capable of onboard multisatellite information fusion in the transport layer of its seven-layer architecture, as shown in Figure 1(b). After fusion, the information is transmitted to users using microwave or laser communication.

TABLE 2. ONBOARD PROCESSING SYSTEMS OF SAR REMOTE SENSING SATELLITES.

PROGRAM/ENGINEERING | COUNTRY | ONBOARD PROCESSING FUNCTIONAL DESIGN AND VALIDATION | TIME | HARDWARE SOLUTIONS
Discoverer II | United States | Ground moving target indicator (GMTI), SAR real-time imaging | 1998 | CIP
SBR | United States | Real-time imaging, moving target indication | 2001 | FPGA
TechSat21 | United States | Real-time imaging, moving target indicator, targeting | 2002 | PowerPC 750
SAR processor | ESA | Real-time imaging | 2004 | System-on-chip
ERS satellite | United States | Real-time imaging, change detection | 2004 | FPGA and PowerPC
Interferometric SAR | United States | On-satellite SAR interferometric processing | 2009 | FPGA
Chaohu-1 | China | SAR data on-orbit imaging and intelligent target detection | 2022 | DSP, FPGA, and GPU
Taijing-4 01 | China | SAR data on-orbit imaging | 2022 | DSP and FPGA
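The bandwidth pressure behind these onboard processing efforts can be made concrete with a back-of-the-envelope calculation. Every number below is an illustrative assumption for this sketch, not a figure reported in the article or for any specific mission.

```python
# Back-of-the-envelope sketch of the downlink bottleneck that
# motivates onboard compression, screening, and fusion.

def downlink_deficit_gb(acq_rate_gbps, imaging_duty_cycle,
                        link_rate_gbps, contact_s, orbit_s):
    """Raw data generated minus data deliverable per orbit, in gigabits."""
    generated = acq_rate_gbps * orbit_s * imaging_duty_cycle
    deliverable = link_rate_gbps * contact_s
    return generated - deliverable

# Assumed scenario: a sensor acquiring at 2 Gb/s for 10% of a 90-min
# orbit, with a single 8-min ground contact at 0.6 Gb/s.
deficit = downlink_deficit_gb(acq_rate_gbps=2.0, imaging_duty_cycle=0.10,
                              link_rate_gbps=0.6, contact_s=8 * 60,
                              orbit_s=90 * 60)
print(f"{deficit:.0f} Gb per orbit cannot be downlinked raw")  # 792 Gb
```

Under these assumed numbers, roughly three-quarters of the raw data (792 of 1,080 Gb per orbit) can never reach the ground, which is exactly the gap that onboard reduction and fusion are meant to close.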


Onboard multisatellite information fusion is regarded as a core capability required for future development. The SDA developed a platform prototype for onboard experimental testing in 2021 and performed onboard verification [98].

The Project for Onboard Autonomy (PROBA) satellite of the European Space Agency (ESA) conducted onboard autonomous observation mission-planning experiments [99]. The Pujiang-1 satellite, developed by China, has realized onboard autonomous observation task planning for the collaborative work of multiple payloads and has conducted onboard image preprocessing, information extraction, fusion, rapid location of suspected target areas, typical target recognition, and regional change detection for satellite formations [100].

In addition, a remote sensing satellite that carries multiple sensors necessitates onboard information fusion capability; examples include satellites carrying SAR and AIS payloads, such as Canada's Radarsat Constellation Mission (RCM), China's Gaofen-3 02 satellite, and Japan's ALOS-4, and satellites carrying optical and AIS payloads, such as China's Hainan-1 01/02 and Wenchang-1 01/02 satellites.

Earth observation satellites have entered a new era with the promotion and application of AI algorithms, represented by deep learning, in the field of remote sensing image analysis and processing. First, Zhou [101] and Oktay and Zhou [102] presented the architecture of intelligent Earth observation systems. Li et al. proposed the implementation of intelligent Earth observation systems [103], the Earth observation brain [104], and space-based information real-time service systems [105]. Many scientific research institutions and companies have conducted in-depth discussions on intelligent remote sensing satellites [106], [107], [108], [109], [110], [111]. Novel satellites, such as software-defined satellites and intelligent remote sensing satellites, have been launched; for example, ESA's Φ-Sat-1 satellite verified AI-based cloud detection, ship detection and classification, forest detection and anomaly monitoring, and other applications [112]. China's software-defined experimental satellite Tianzhi-1 has demonstrated and verified onboard processing technologies, such as commercial processor-based high-performance computing, onboard software reconstruction, onboard cloud computing, and open-source application software uplink [113], which promotes research on onboard intelligent fusion processing of multisatellite information.

MULTISATELLITE DATA FUSION TECHNOLOGY
Information fusion was first applied to underwater signal processing during the 1970s. In 1988, the U.S. DOD listed information fusion as one of the 20 key technologies for focused R&D and placed it in the highest priority category, A. The U.S. DOD Joint Directors of Laboratories set up an information fusion expert group to organize and guide research, and more than 100 information fusion systems have been used in the United States. Information fusion primarily involves multisensor detection, tracking and state estimation, target recognition, situation awareness, and threat assessment. Its core processing link is multisource data association and fusion processing. This section summarizes the current development status of multisatellite data fusion processing technology based on the characteristics of satellite Earth observation sensors.

CURRENT STATUS OF MULTISATELLITE DATA ASSOCIATION
Target association is the premise and foundation of the subsequent steps of fusion processing; it aims to correlate information on the same target obtained by one sensor at different times or by multiple sensors (illustrated in Figure 2). Traditional target association algorithms include nearest neighbors [114], [115], probabilistic data association (PDA) [116], [117], joint PDA [118], [119], [120], multiple hypothesis tracking [121], [122], interacting multiple model [123], [124], sequential track correlation [125], double-threshold track correlation [126], particle filtering-based [127], probability hypothesis density filter-based [128], [129], and multidimensional assignment-based [130], [131] algorithms.

All the aforementioned algorithms were designed for land-based, air-based, and sea-based radars, electronic reconnaissance, and other nonimaging sensors with a high

FIGURE 1. Earth observation systems based on LEO small satellite constellations: (a) the DARPA Blackjack constellation; (b) the SDA NDSA.


data-acquisition rate, high positioning accuracy, and long observation duration. In these cases, the target is regarded as a point target, and its motion parameters, such as position, velocity, and azimuth, which can be accurately predicted by building a target motion model, are used to achieve association. In addition to the target motion parameters, the sensor can also obtain other target characteristics, such as the radar cross section in radar data, the electromagnetic characteristics of the emitter in ELINT data, and other attributes. Few studies have been conducted on algorithms that use only target attribute features for multitarget association. Currently, multitarget association algorithms that use target motion parameters, attributes, and other features as association factors are attracting increasing attention. These algorithms are based on statistics, clustering analysis, fuzzy mathematics, grey theory, and evidence theory, and they mainly use target motion-state filtering and prediction for association, combined with similarity measurement of target attribute features as auxiliary association factors. For example, amplitude features [132], Doppler frequencies [133], polarization features [134], and high-resolution radar profile features [135] are used as complementary association factors in radar applications. Pulse descriptor words, such as carrier frequency, pulse width, and pulse repetition frequency, are used in target association for ELINT sensors [136]. Most algorithms for multicamera video target tracking scenes are based on the topological relationship between cameras and use the motion parameters, shape, color, texture, and other features of the target to achieve association, which requires the target motion state to be estimated accurately [137]. For heterogeneous sensor data association, such as between radar, AIS, ELINT, and infrared images, the target's classification or identification information is used for auxiliary association [138]. Several methods based on shallow artificial neural networks (ANNs) have been proposed in the literature [139], [140], whereas few based on deep ANNs have been proposed [141], [142].

Most of the aforementioned target association algorithms are based on spatial–temporal information and attribute feature information. Target association based on spatial–temporal information is realized by establishing strict state and observation equations, and the data acquisition rate must be high. Target association based on attribute feature information can be realized at the feature and decision levels, but the feature selection and similarity measurement function design are challenging.

Based on the observation revisit period, space-based Earth observation modes can be classified into two categories, particularly for target surveillance:
◗◗ Sparse data acquisition over long time intervals: Most remote sensing satellites follow a sun-synchronous orbit, and only some images of the same ROI are obtained during a single imaging period. The revisit interval is long, typically several tens of minutes. New agile remote sensing satellites are capable of multiview imaging and can capture multiple images of the same ROI in one orbital period, and constellations of such satellites can shorten the observation revisit period.
◗◗ Dense data acquisition with short observation duration: A geostationary Earth orbit (GEO) staring imaging satellite can gaze at the same ROI for a long duration and obtain image sequences with high time resolution, usually seconds or minutes. LEO video imaging satellites, as well as AIS, SIGINT, and ELINT satellites, can stare at one zone for several minutes in a single orbital period.

Space-based target monitoring is realized by multisatellite cooperative observation, and the target information obtained is in a sparse, nonuniform acquisition mode. Under this condition, the target kinematic model cannot be established accurately, and the target motion state estimation is inaccurate because of the long revisit observation period, the short observation duration in a single-visit orbit, and the different data acquisition rates and accuracies of multiple space-based sensors. Therefore, achieving accurate target association using only target motion features is challenging. Compared with motion-state features, target attribute features are relatively stable: for example, image features obtained by imaging satellites, such as shape, histogram, local invariant features, and

FIGURE 2. Multisatellite data association: (a) SAR + AIS; (b) SAR + optical. (Data sources: TerraSAR-X, COSMO-SkyMed, and Radarsat-2.)


transformation features; and electromagnetic features, such as the emitter center frequency, pulse width, and pulse repetition interval.

Target association algorithms between imaging satellites can be realized using image-matching methods. The key step of these methods is feature matching, which uses feature similarity measurements of the target images to build the matching relationship. The image features used for feature matching include both single-target and group-target features. Single-target features are primarily used for target association in high-resolution satellite remote sensing images. Lei et al. [143] proposed a target association algorithm based on multiscale autoconvolution features and an association cost matrix for optical satellites. Group-target features are primarily used for target association in medium- or low-resolution satellite remote sensing images. Group targets typically appear in a relatively fixed formation, and the membership and number of targets are typically related to specific operational tasks. Therefore, the group-target positions can be regarded as a point set on the plane, and each target in the group can be regarded as a point in the point set. The multitarget association problem is then transformed into a matching problem between two point sets. Tang and Xu [144] proposed a target association algorithm based on the Kalman filter and Dempster–Shafer (D–S) theory for multiple remote sensing images.

Target association based on image matching focuses on target feature design and feature similarity measurement. Another type of association method is based on point features and is mainly designed for GEO staring imaging satellites and LEO video satellites, which use motion status parameters and image features as association factors. This type of method uses filtered and predicted target motion status parameters from image frames for a first-step association and then uses image feature similarity to correct the association result in a second step. Lei [145] proposed a remote sensing image multitarget tracking algorithm based on ROI feature matching and a remote sensing image multitarget association algorithm based on multifeature fusion matching, which overcome the bias of kinematic feature matching error through image features and the ambiguity of image matching recognition through motion state parameters. Yang et al. [146] proposed a satellite video target correlation tracking algorithm based on a motion heat map and a local saliency map. Wu et al. [147] proposed a satellite video target correlation tracking algorithm that combines motion smoothing constraints and the grey similarity likelihood ratio. Liu et al. [148] proposed a multilevel target association method using different features at different levels for the collaborative observation of LEO and GEO satellites.

Target association algorithms between nonimaging and video-imaging satellites, such as AIS, SIGINT, or ELINT satellites, mainly combine position and attribute information based on fuzzy mathematics [149] and D–S theory [150]. Some algorithms use group-formation topological characteristics as association features, which can be considered point-pattern matching problems. For target association between imaging satellites and nonimaging satellites, Zeng [151] proposed several target association algorithms based on formation target structure features, hierarchical matching, formation target attribute features, the attribute and structure of formation targets, and multisource features in multisource sequence data. Lu and colleagues [152], [153] proposed target association algorithms based on point-pair topological features and spectral matching, point-pair topological features and probabilistic relaxation labeling, and D–S evidence combination based on topology and attribute features.

CURRENT STATUS OF MULTISATELLITE DATA FUSION
Information fusion algorithms can be classified into three levels based on the information abstraction level: data, features, and decisions. Feature-level fusion can maximally retain most of the information in the original data while also greatly reducing the redundancy of multisource data. Current research mainly focuses on data- and decision-level fusion, whereas research on feature-level fusion is relatively scarce. However, feature-level fusion reduces the dimensions of the original feature space, eliminates the redundancy between feature representation vectors in the original feature space, and maintains the entropy, energy, and correlation of invariant feature data after dimension compression. The fused features can substantially describe the nature of the target, which is conducive to further target recognition. The following analysis mainly focuses on feature-level fusion algorithms.

Conventional feature-level fusion algorithms include serial and parallel fusion [154]. Serial fusion methods directly concatenate multiple feature vectors into one feature vector and then apply dimensionality reduction to obtain the fused feature vector. Parallel fusion methods combine two feature vectors into a single feature vector using complex variable functions. Although these two methods can retain raw data information to some extent, the dimension and redundancy of the fused feature remain high because the complementarities and redundancies between the original features are not fully utilized.

Feature extraction and transformation are also considered feature fusion methods. Feature extraction obtains a fused feature by selecting the most effective features from multiple original features using serial or parallel strategies. Feature transformation obtains a new fused feature through a linear or nonlinear mapping function of the original features and can still be considered a type of feature extraction. Some feature-level fusion methods based on multivariate statistical analysis theory have been proposed to address the inability of serial and parallel fusion methods to use the interrelationships between multidimensional features. Feature fusion methods based on canonical correlation analysis (CCA) obtain the fused feature by building a correlation criterion function between two feature vectors, calculating the projection


vectors, and then extracting the canonical correlative feature as a fused feature for the joint dimensionality reduction of a high-dimensional feature space; CCA itself is a statistical theory for the correlation analysis of two random vectors. This kind of algorithm includes neural network CCA [155], kernel CCA [156], locality-preserving CCA [157], sparse CCA [158], discriminant CCA [159], and 2D-CCA [160]. Partial least squares (PLS) integrates the advantages of multiple linear regression, principal component analysis, and CCA and has been applied for feature fusion [161]. Subsequently, researchers proposed a series of improved methods [162], [163], such as conjugate orthogonal PLS analysis, kernel PLS analysis, 2D-PLS, and sparse PLS analysis. Another type of feature-level fusion method obtains a fused feature by projecting multiple original feature spaces onto the same space with common attributes. For example, Long et al. [164] proposed a feature fusion algorithm based on multiview spectral clustering.

The preceding feature fusion methods are based on handcrafted features. Feature learning based on deep learning methods can essentially be considered stage-by-stage multiple feature fusion, such as the convolution and fully connected operations of a convolutional neural network (CNN) [165]. The different layer outputs of a deep learning network correspond to different visual features and semantics; for example, the lower layers correspond to brightness, edges, and texture; the middle layers correspond to shape and direction; and the upper layers correspond to category. Feature fusion can then be realized in the different layers. Currently, most deep learning algorithms are designed for single-modality data. By combining deep learning and information fusion, the coupled correlation between different dimensional features can be mined, and the redundancy can be reduced as much as possible. Therefore, researchers have gradually focused on information fusion-based deep learning. For example, some CNN-based feature fusion structures and strategies have been proposed for multimodality learning. The model works by learning the probability density over the space of multimodal data, and it uses the states of the latent variables as representations of different modality data. Multimodality data can be organized and represented with a tensor; therefore, tensor-based deep learning methods can be used for multimodality deep learning and can learn deep correlations between data of different modalities. Yu et al. [172] proposed a deep tensor neural network (DTNN) model. The DTNN extends the deep neural network (DNN) by replacing one or more of its layers with a double-projection layer, in which each input vector is projected into two nonlinear subspaces, and a tensor layer, in which the two subspace projections interact and jointly predict the next layer in the deep architecture. Hutchinson et al. [173] proposed a tensor deep stacking network, which consists of multiple stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer using a weight tensor to incorporate higher-order statistics of the hidden binary features. Zhang et al. [174] proposed a deep computation model to fully learn the data distribution, which uses a tensor to model the complex correlations of heterogeneous data. The model uses a tensor autoencoder as the basic module of the tensor deep learning model, adopting the tensor distance as the average sum-of-squares error term of the reconstruction error in the output layer; a high-order back-propagation algorithm is designed for training.

Information fusion methods for space-based observation data are primarily designed for multisource remote sensing image fusion and target recognition. Target-recog-
[166], [167], [168], [169], [170], [171], [172], [173], [174], nition algorithms based on information fusion use fuzzy
[175], [176], [177], [178], [179]. Some methods have been mathematics and the evidential reasoning theory for deci-
proposed for fusing deep-learning features based on CCA, sion-level fusion.
topic models, joint dictionaries, and bag-of-words [169].
The key step in feature fusion based on deep learning is the PROBLEMS AND CHALLENGES
selection of a feature fusion layer and architecture. The onboard data-processing capability of Earth observa-
According to the fusion layer in the deep learning net- tion satellites remains weak, and the main existing prob-
work, feature fusion methods based on deep learning can lems are as follows:
be classified into early-, middle-, and late-stage fusion. ◗◗ Most remote sensing satellites are capable of onboard
Early- and middle-stage fusion use the convolution layer, data compression and preprocessing. Some have real-
and late-stage fusion uses the output of the convolution ized onboard detection, classification, and recognition
layer or the output of the full connection layer. Another of targets of interest, such as ships and airplanes. Cur-
method for feature fusion based on deep learning is mul- rently, remote sensing satellites carry only one type of
timodality deep learning, which first learns the features of sensor and either work independently or unite with
single-modality data individually and then learns the fused each other. Intersatellite high-speed communication
feature. Ngiam et al. [170] proposed a cross-modality deep and data interchange are still in the testing stage. Ob-
autoencoder model that can better learn shared represen- servation planning mainly involves scheduling on the
tations between modalities. Srivastava and Salakhutdinov ground, and onboard autonomous intelligent plan-
[171] proposed a multimodal deep Boltzmann machine ning for multisatellite cooperation is not yet practical.



Therefore, high-level onboard data processing, such as multisatellite information association, fusion, and situational awareness, is seldom studied because of the incapability of multisatellite dynamic autonomous networking for collaborative observation onboard.
◗◗ Traditional remote sensing satellites mainly focus on improving the observation capability of a single satellite or sensor without considering multisensor or multisatellite cooperation or information fusion. Because of the long revisit interval, fixed crossing time, narrow imaging width, few remote sensing satellites in the same orbit, and short duration of a single observation period, multisatellites cannot respond to emergent requirements for rapid cooperative observation, and the region or target of interest cannot be persistently observed.
◗◗ Currently, onboard processing algorithms are implemented using traditional theories, and intelligent computing, such as deep learning methods, cannot be directly transferred for onboard data processing. The high-speed real-time processing capability of onboard processing hardware and software must be further improved according to the capabilities of remote sensing satellites and user requirements, such as raw data compression, correction, target detection and classification, and multisource information association and fusion. Intelligent onboard data processing requires a hybrid computing architecture for different tasks.

REQUIRED FUTURE TECHNIQUE DIRECTIONS
A space information network will realize the integration of remote sensing, communication, computation, and other resources. Multisatellite onboard information fusion is a key space information network technology involving the architecture of satellite networks, intersatellite communication protocols, spatially distributed computing, onboard hardware, and embedded operating systems. With the development of AI technology, intelligent processing theories have gradually been applied to both satellite sensor task planning and data processing to further enhance the efficiency of satellite use. This section analyzes the key technologies in two aspects, multisatellite collaborative observation and information fusion, considering the potential onboard application of AI.

ONBOARD COOPERATIVE SURVEY TECHNOLOGY OF MULTISATELLITES
Although satellite Earth observations have the advantages of wide coverage and multiple sensor types, they also have the disadvantages of frequent alternation of cooperative observation satellites, dynamic changes in network topology, short working durations, and long observation intervals, which result in sparse and uneven observations. To improve the efficiency of satellites, it is necessary to plan and design the satellite Earth observation system from the information fusion perspective, switch the satellite design from one based on satellite platforms to one based on sensor loads, and build an integrated network of intelligent observation and processing that is task-driven and accounts for information perception and fusion.

MULTISENSOR INTEGRATED SATELLITE
Traditional remote sensing satellites are mostly designed for a single sensor. For example, optical remote sensing satellites can obtain panchromatic, multispectral, and other types of images, whereas SAR remote sensing satellites can obtain stripmap, spotlight, polarization, and other mode images. However, the use of a single sensor has certain limitations. Taking marine target monitoring as an example, optical remote sensing satellites are vulnerable to night, cloud, rain, snow, and other factors, whereas SAR remote sensing satellites are vulnerable to electromagnetic interference; moreover, the SAR imaging mechanism is unique, and SAR image interpretation is difficult. AIS satellites cannot obtain self-reported information from noncooperative targets, and SIGINT or ELINT satellites are vulnerable to electromagnetic silence and false-target deception and have poor positioning accuracy; hence, it is challenging to obtain multidimensional information on a target, which is not conducive to fine identification.

By carrying multiple types of sensors on one single satellite, as shown in Figure 3, with representative examples including active and passive microwave detection payloads (SAR and SIGINT or ELINT, SAR and passive radiation sources, and SAR and global navigation satellite system signals [189], [190]), all-passive detection payloads (optical and SIGINT or ELINT, and infrared and SIGINT or ELINT), and multimodal imaging sensor payloads (SAR and visible, and SAR and infrared), the multisensor integrated remote sensing satellite has the benefit of simultaneously acquiring multidimensional information on the target, which is conducive to fusion processing. Through single-satellite onboard fusion processing, the complementary information of multiple types of sensors is fully utilized to reduce the uncertainty and imprecision of single-sensor observations and improve the single-satellite perception capability.

ELASTIC MULTILAYER SATELLITE CONSTELLATION ARCHITECTURE
Several known multisensor satellite constellations are illustrated in Figure 4. Although an integrated remote sensing satellite has the advantage of multidimensional observation, its design, manufacture, and maintenance are challenging, and it has a long launch and network replenishment cycle, which is not optimal for large-scale deployment. MEO and HEO remote sensing satellites have wide scanning ranges and long observation durations in a single period. The number of such satellites required to achieve global observation is small; however, the resolution, positioning accuracy, and other performance indicators are low. LEO remote sensing satellites have high data quality, but the scanning range is narrow, and the observation duration is short for a single period. The number of LEO satellites required to achieve global coverage by a constellation is



extremely large. Furthermore, the life of LEO satellites is also much shorter than that of MEO and HEO satellites.

Considering the characteristics and performance of LEO, MEO, and HEO satellites, as well as of different types of sensors, such as visible, infrared, SAR, SIGINT, and ELINT sensors, small satellites carrying a single sensor of the same or different types can be deployed in LEO, and integrated satellites carrying multiple sensors of different types can be deployed in MEO and HEO, so that multidimensional data of the region or target of interest can be obtained. Information at different levels can be rapidly extracted based on an onboard high-performance edge intelligent computing system and transmitted to other satellites through high-speed intersatellite communication for cooperative observation. In addition, small-satellite technology can be used to achieve space-based sparse microwave imaging,


FIGURE 3. Multisensor integrated satellite: (a) Canadian RCM (SAR + AIS); (b) Chinese Taijing-4 01 satellite (SAR + optical).


FIGURE 4. Multisensor satellite constellation: (a) Canadian OptiSAR constellation; (b) German Pendulum; (c) French CartWheel.



distributed aperture imaging, computational imaging, and other new observation mechanisms. For example, distributed radar small-satellite constellations can combine multiple radar antenna beams into a large beam to obtain a long baseline by maintaining a rigid formation of multiple small satellites in a certain spatial configuration, realizing the function of a virtual large radar satellite. The key technologies for multisatellite cooperative observation include time–space synchronization and heterogeneous data fusion, because of the long observation intervals and the substantially different data acquisition rates, accuracies, and representations.

SOFTWARE-DEFINED INTELLIGENT OBSERVATION OF REMOTE SENSING SATELLITE CONSTELLATIONS
Currently, Earth observation satellites are designed as constellations, and the number of remote sensing satellites has rapidly increased. Multisatellite collaboration has become the main mode of Earth observation. Meanwhile, the requirements for the diversity and timeliness of Earth observation tasks are increasing, and the demand for multisatellite collaborative onboard autonomous task planning is becoming increasingly urgent. However, traditional satellite schedule planning generates action sequences independently, based on the task list. Most such methods are designed for a single satellite or several satellites of the same type. Typically, instructions are generated on the ground and then uplinked to satellites. It is difficult to satisfy the real-time schedule planning of multiple types of remote sensing satellite constellations and to exploit new types of remote sensing satellites, such as operational response satellites, software-defined satellites, agile satellites, and smart satellites.

Through the close combination of software-defined intelligent observation scheduling and onboard information fusion, and considering the situation generated by onboard information fusion, the satellites' observation capabilities, and the observation tasks, the sensor working mode can be intelligently selected, and the optimal action sequence of multisatellites can be generated autonomously onboard to reasonably maximize utilization and response. For emergencies, such as forest fires and search and rescue, it is necessary to build a virtual resource pool for satellite Earth observation, coordinate the multiple observation tasks and ground coverage opportunities of multisatellites, and assign observation tasks to satellite resources with matching capabilities through multisatellite task planning, so as to achieve faster, improved, and highly continuous observation of emergencies through multisatellite cooperation.

ONBOARD FUSION TECHNOLOGY OF MULTISATELLITE INFORMATION
Compared with the dense and uniform observation data acquired by land-, sea-, and space-based radar or ELINT sensors, data acquired by remote sensing satellites are characterized by spatial–temporal nonsynchronization, inconsistent data acquisition rates, large differences in data quality, and multidimensional heterogeneous target description features, and they are sparse and nonuniform. Traditional information fusion methods are based on strict derivations of mathematical formulas. Taking moving-target tracking as an example, the motion state must be predicted accurately, and the data acquisition rate must be sufficiently high. Therefore, traditional information fusion methods cannot be applied directly to multisatellite information fusion processing. Fusion strategies and architectures must be studied according to the characteristics of the satellite data and the performance of the onboard processing hardware and software.

ONBOARD HYBRID HETEROGENEOUS INTELLIGENT COMPUTING ARCHITECTURE
Because of satellite size, weight, and power consumption constraints, onboard computing and storage resources are limited, and scalability is limited. Onboard computing is typically implemented based on a system-on-chip, and its computing architecture is considerably different from that of a ground system. Onboard hardware types have recently included field-programmable gate arrays (FPGAs), digital signal processors (DSPs), CPUs, GPUs, and application-specific integrated circuits (ASICs). The operating systems include VxWorks and Integrity. The system architecture and bus standards include SpaceVPX and SpaceWire, respectively. All resource types offer processing advantages, but a single resource cannot entirely meet onboard processing requirements or tasks. Moreover, because of the disunity of standards and specifications, it is difficult for satellites to achieve rapid integration, and universality and reconfigurability are poor.

The onboard computing resources of multisatellites constitute a hybrid heterogeneous high-speed dynamic computing architecture, for which the hardware platform and software system must be designed. Hardware resources must be universal and reconfigurable, have low power consumption, and exhibit high performance. Through computing resource virtualization and onboard management mechanisms, processing tasks are allocated to different hardware resources as required to ensure that the computing capabilities of each satellite node can be effectively used to realize dynamic collaborative computing across various onboard hardware. Software resources should be hierarchical, modular, and open. According to the characteristics of the task, resources such as onboard detection, communication, and computing are recombined through software definition to achieve real-time resource and data loading of satellite functions



onboard, dynamic updates, and the reconstruction of functions to satisfy the requirements of different users and tasks.

ONBOARD INTELLIGENT ASSOCIATION OF MULTISATELLITE INFORMATION
Multisatellite and multipayload cooperative observation data must be associated with or registered to the same target or region. The multisatellite data first require spatial–temporal alignment in the same unified space–time coordinate system for the association. Taking target association as an example, traditional target association algorithms based on motion status prediction cannot be adapted to satellite observation data because of the long revisit period and inaccurate motion status estimation. The traditional target association method based on feature similarity measurement is mostly designed for data of the same type and structure, and the distance metric functions include the Euclidean, Minkowski, and Mahalanobis distances. Multisatellite heterogeneous observation data are in different feature spaces and have the characteristics of multimodality. These metric functions cannot be directly used for multimodal feature similarity measurements.

Deep learning methods can establish a mapping relationship between the original data and high-level semantics by building a multilayer learning network, which extracts different-level features of the original data. Multisource observation data for the same target are usually heterogeneous in representation and are correlated in semantics. Multisatellite data association requires combining multilevel and multidimensional information, such as space, time, attributes, events, and identity, and specially designed intelligent association models for different cooperative observation scenarios. Satellite information with different structures, such as remote sensing images and ELINT data, shows high variability in data structure and feature description. Association factors are mainly reflected in the semantic hierarchy and spatial location relationships. By studying the relationship structures and knowledge graphs between multisatellite data, multimodality deep learning networks, spatial–temporal convolution networks, and other models can be designed for association. Deep learning methods can therefore learn and extract implicit target association patterns and rules based on the historical accumulation of multisource heterogeneous spatial–temporal big data, particularly at the semantic level. This type of association model can be realized using the consistency of multisatellite data in high-level semantics and spatial locations.

Different types of isomorphic satellite data, such as optical remote sensing images and SAR remote sensing images, need to be analyzed for implicit similarities between their features. Solutions include domain-adaptive source-invariant feature extraction and generative adversarial network (GAN)-based data translation models: the former uses the relevance of different types of data in high-level features and realizes the identification of the association between different types of data by mapping them to the same feature space; the latter uses the data generation capability of GANs to convert one type of data into another and thereby converts the association between different types of data into an association between data of the same type. For the same type of isomorphic satellite data, such as optical remote sensing images with different resolutions, deep metric learning, attention mechanisms, and multitask learning models can solve the problems of the complex semantic content of remote sensing images and the insufficient identification ability of traditional feature representation methods, and improve the accuracy of information association for the same type of data. A recurrent neural network, long short-term memory, graph neural network, or transformer can be used for the track-to-track and plot-to-plot associations that are primarily designed for target tracking.

ONBOARD INTELLIGENT FUSION PROCESSING OF MULTISATELLITE INFORMATION
Multisatellites can provide multidimensional information on regions or targets of interest from different observation modes, electromagnetic bands, times, and platform heights. Considerable relevant or complementary content is present in multisource data. Therefore, information fusion technology can be used to mine complementary information, remove redundancies, strengthen cross-validation between sources, and improve the accuracy and reliability of processing results.

Feature-layer fusion can effectively reduce data redundancy while retaining the original information to the maximum extent. Feature fusion considers the correlation and complementarity between different features; therefore, integrating both similar and different features is necessary. Target feature fusion can be divided into state fusion and attribute fusion. State fusion is primarily used to track targets and can achieve intelligent associated tracking of targets with multisource and heterogeneous satellite information based on a deep recurrent convolution network and a space–time graph convolution network. Attribute fusion is mainly used for target recognition. A feature fusion network based on deep learning can extract features from multisource heterogeneous data, directly convert the data space to a feature space, and conveniently realize intelligent feature fusion at multiple levels, such as low-level spatial features, middle-level semantic features, and high-level abstract concept features. Decision-level fusion requires obtaining prior knowledge, such as identification uncertainty measurements, fusion strategies, and inference rules, from each source and then combining evidence and random-set theories to achieve decision fusion. This knowledge can be acquired through learning using deep networks.

JOINT SATELLITE–GROUND DATA PROCESSING AND DYNAMIC INTERACTION
Onboard satellite data processing, particularly intelligent processing, requires expert experience and knowledge.


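The decision-level fusion described in the preceding section, in which identification evidence from each source is combined under evidential reasoning before a joint decision is made, can be illustrated with a small sketch. The following Python example applies Dempster's rule of combination to class-evidence masses from an optical and a SAR sensor; the two-class frame of discernment and the mass values are hypothetical illustrations, not figures from this article:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses
    to belief mass) using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    k = 1.0 - conflict  # normalization: redistribute conflicting mass
    return {h: m / k for h, m in combined.items()}

# Hypothetical frame of discernment for a marine target
CARGO = frozenset({"cargo"})
WARSHIP = frozenset({"warship"})
EITHER = CARGO | WARSHIP  # ignorance: either class is possible

# Hypothetical evidence masses reported by the two sensors
optical = {CARGO: 0.6, WARSHIP: 0.1, EITHER: 0.3}
sar = {CARGO: 0.5, WARSHIP: 0.2, EITHER: 0.3}

fused = dempster_combine(optical, sar)
```

Because the two sensors agree on the cargo hypothesis, the fused mass for that class (0.63/0.83, about 0.76) exceeds either sensor's individual belief, while the conflicting mass of 0.17 is removed by the normalization step.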

Compared with onboard storage and computing capabilities, the ground processing system can make full use of massive historical Earth observation satellite data for learning and has more complete knowledge and model bases as learning support. Simultaneously, with the gradual transfer of satellite authority to end users, users can directly uplink observation task requests to satellites and receive data products at different levels in real time. However, terminal users, particularly highly effective users, pay great attention to high-level situation intelligence. Taking moving-target monitoring as an example, it is necessary to assess the threat, intention, and trend of dangerous targets and provide danger warnings and other information. It is difficult for satellite onboard processing systems to realize high-level situational awareness; therefore, they must interact with ground processing systems to obtain ground knowledge and intelligence support.

A satellite–ground joint intelligent learning mechanism can be established to learn from massive satellite spatial–temporal remote sensing big data on the ground, generate a lightweight learning network model, and uplink it to the satellite in real time for software updating and reconstruction, achieving onboard intelligent processing and continuous online learning. For emergencies and abnormal events, their recognition characteristics and behavior rules can be learned on the ground based on satellite spatial–temporal remote sensing big data, formed into a knowledge base, and uplinked to the satellite for storage and updating in real time, realizing the autonomous perception of emergencies and abnormal events in orbit. Ground processing can combine the information obtained by space-, sea-, and land-based sensors to carry out collaborative reasoning, overcome the problems of incomplete and discontinuous sparse satellite observation data, and uplink the generated situational intelligence to the satellite, providing support for onboard high-level situation awareness and collaborative observation task planning.

FUTURE DEVELOPMENT TREND
Networking and intelligence are the development directions for future Earth observation satellite systems. Networking includes the networking of multiple platforms and sensors as well as of observation, communication, and computing space resources. Intelligence includes intelligent cooperative observation and intelligent multisource data fusion. The future key development directions of multisatellite onboard intelligent observation and data fusion include the implementation of the following methods:
◗◗ Building elastic and expandable multiorbit and multisensor Earth observation satellite constellation systems. LEO, MEO, and HEO satellites achieve cooperative observation through satellite formation flying, constellation group networking, and other technologies. Real-time dynamic observation of hotspots and time-critical targets can be realized by deploying a small number of high- and medium-orbit satellites. Combined with high-density LEO satellite constellation groups, the capabilities of global near real-time observation, high spatial–temporal resolution, and coverage can be greatly improved. Integrated satellites and constellations can realize visible, infrared, SAR, microwave, spectrum, SIGINT, ELINT, and other multispectral bands in active and passive modes, and they can provide multimodality heterogeneous data for onboard information fusion processing.
◗◗ Establishing an onboard autonomous planning and scheduling mechanism integrating the satellite resources of observation, communication, and computing; designing a satellite virtual resource pool under a unified time–space framework; establishing a new mode of task-driven software-defined intelligent cooperative observation; dynamically allocating cooperative observation tasks and data processing tasks to different satellites in near real time under a dynamic high-speed reconstruction environment; and improving the utilization efficiency of satellite resources and the collaboration efficiency between multiple types of satellites, providing highly timely data for onboard information fusion processing.
◗◗ Designing an onboard hybrid heterogeneous intelligent computing architecture for multisatellite data fusion processing, combining the performance and characteristic advantages of FPGA, DSP, CPU, GPU, and other hardware; designing a reconfigurable, scalable, and sustainable onboard intelligent fusion processing model for multisatellite data; establishing an integrated satellite–ground cooperative learning and uplink mechanism; learning knowledge, regularities, and models on the ground based on massive satellite observation data and updating the intelligent fusion processing system on the satellite in real time; adapting to the requirements of onboard multitask processing; realizing onboard autonomous awareness of emergencies; and providing users with near real-time multidimensional and multilevel information.
◗◗ Promoting the transfer of satellite task control authority to end users, such that users can directly control satellites in orbit, uplink instructions, and acquire satellite data; substantially shortening the satellite task planning, data processing, and information transmission chain from the satellite to end users; and improving the rapid response capability to hot events.

CONCLUSION
Modern Earth observation satellites are capable of high spatial–temporal–spectral resolutions, multiple working modes, high agility, and networking collaboration for Earth observation. The onboard information fusion of multisatellites can further improve the capabilities of large-scale observation, accurate interpretation, and rapid response for wide-area surveillance, emergency rescue, and other application scenarios. In this study, the key technologies of onboard collaborative observation and information fusion of multisatellites are analyzed and studied, and developments and suggestions for the onboard information fusion of multisatellites in the future are proposed and



discussed. Onboard information fusion of multisatellites is a complex system engineering process. In addition to the key technologies analyzed in this study, it involves additional aspects, such as satellite platforms, sensors, and communications. Integrating onboard data processing, communication, and observations is a way forward for future progress in Earth observations.

ACKNOWLEDGMENT
This work was supported in part by the National Natural Science Foundation of China under Grant 41822105; in part by the Sichuan Natural Science Foundation Innovation Group under Project 2023NSFSC1974; in part by the Fundamental Research Funds for the Central Universities under Projects 2682020ZT34 and 2682021CX071; in part by the CAST Innovation Foundation; and in part by the State Key Laboratory of Geo-Information Engineering under Projects SKLGIE2020-Z-3-1 and SKLGIE2020-M-4-1. Libo Yao is the corresponding author.

AUTHOR INFORMATION
Gui Gao (dellar@126.com) received his B.S., M.S., and Ph.D. degrees from the National University of Defense Technology (NUDT), Changsha, China, in 2002, 2003, and 2007, respectively. In 2007, he joined the Faculty of Information Engineering, School of Electronic Science and Engineering, NUDT, as an associate professor. He is currently a professor with the Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 611756, China. He has authored more than 100 journal and conference papers and written four books and a book chapter in English. He has received numerous awards, including the Excellent Master Thesis of Hunan Province in 2006, the Excellent Doctor Thesis in 2008, and Outstanding Young People in NUDT and Hunan Province of China in 2014 and 2016, as well as a first-class Prize of Science and Technology Progress and a Natural Science Award of Hunan Province. He was also selected as a Young Talent of Hunan in 2016 and is supported by the Excellent Young People Science Foundation of the National Natural Science Foundation of China. He is the lead guest editor of the International Journal of Antennas and Propagation, a guest editor of Remote Sensing, and an associate editor and lead guest editor of IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, and he is on the editorial board of the Chinese Journal of Radars. He was also the cochair of several conferences in the field of remote sensing. He was named an Excellent Reviewer for the Journal of Xi'an Jiaotong University in 2013. He is a Member of IEEE, the IEEE Geoscience and Remote Sensing Society, and the Applied Computational Electromagnetics Society; a senior member of the Chinese Institute of Electronics (CIE); and a dominant member of the CIE Young Scientist Forum.

Libo Yao (ylb_rs@s126.com) received his B.S., M.S., and Ph.D. degrees from Shandong University, the People's Liberation Army Information Engineering University of China, and the Naval Aviation University of China in 2003, 2006, and 2019, respectively. He is now an associate professor at the Institute of Information Fusion, Naval Aviation University of China, Yantai 264001, China. His research interests include satellite remote sensing information fusion and onboard processing.

Wenfeng Li (lwf878@sohu.com) is with the Shanghai Institute of Satellite Engineering, Shanghai 201109, China. His research interests include multiresource remote sensing imagery processing and onboard analysis.

Linlin Zhang (zhanglinlin@my.swjtu.edu.cn) received her B.S. degree from Wuhan University in 2016, and her M.S. degree from Southwest Jiaotong University in 2019. She is now pursuing her Ph.D. degree in the Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 611756, China. Her research interests include synthetic aperture radar imagery processing and object detection.

Maolin Zhang (mailzml@163.com) is with the Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 611756, China. His research interests include synthetic aperture radar onboard data processing.

REFERENCES
[1] J. Llinas and E. Waltz, Multisensor Data Fusion. Norwood, MA, USA: Artech House, 1990.
[2] J. Manyika and H. Durrant-Whyte, Data Fusion and Sensor Management: A Decentralized Information-Theoretic Approach. New York, NY, USA: Ellis Horwood, 1994.
[3] Y. Bar-Shalom and X. R. Li, Multitarget-Multisensor Tracking: Principles and Techniques. Storrs, CT, USA: YBS Publishing, 1995.
[4] I. R. Goodman, R. P. S. Mahler, and H. T. Nguyen, Mathematics of Data Fusion. Norwell, MA, USA: Kluwer, 1997.
[5] N. S. V. Rao, D. B. Reister, and J. Barhen, "Information fusion methods based on physical laws," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 1, pp. 66–77, Jan. 2005, doi: 10.1109/TPAMI.2005.12.
[6] D. L. Hall and J. Llinas, Handbook of Multisensor Data Fusion. Boca Raton, FL, USA: CRC Press, 2001.
[7] Y. He, G. Wang, and X. Guan, Information Fusion Theory with Applications. Beijing, China: Publishing House of Electronics Industry, 2011.
[8] Z. Zhao, C. Xiong, and K. Wang, Conceptions, Methods and Applications on Information Fusion. Beijing, China: National Defense Industry Press, 2012.
[9] M. D. Mura, S. Prasad, F. Pacifici, P. Gamba, J. Chanussot, and J. A. Benediktsson, "Challenges and opportunities of multimodality and data fusion in remote sensing," Proc. IEEE, vol. 103, no. 9, pp. 1585–1601, Sep. 2015, doi: 10.1109/JPROC.2015.2462751.
[10] C. Han, H. Zhu, and Z. Duan, Multisource Information Fusion. Beijing, China: Tsinghua University Press, 2021.
[11] L. Wald, "An overview of concepts in fusion of Earth data," in Proc. EARSeL Symp. "Future Trends Remote Sens.," 2009, pp. 385–390.



[12] W. Li, Y. Li, and C. Chan, "Thick cloud removal with optical and SAR imagery via convolutional-mapping-deconvolutional network," IEEE Trans. Geosci. Remote Sens., vol. 58, no. 4, pp. 2865–2879, Apr. 2020, doi: 10.1109/TGRS.2019.2956959.
[13] S. Singh, R. K. Tiwari, V. Sood, H. S. Gusain, and S. Prashar, "Image fusion of Ku-band-based SCATSAT-1 and MODIS data for cloud-free change detection over Western Himalayas," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 4302514, doi: 10.1109/TGRS.2021.3123392.
[14] Z. Yuan, L. Mou, Z. Xiong, and X. X. Zhu, "Change detection meets visual question answering," IEEE Trans. Geosci. Remote Sens., vol. 60, Sep. 2022, Art. no. 5630613, doi: 10.1109/TGRS.2022.3203314.
[15] C. Lu, Y. Lin, and R. Y. Chuang, "Pixel offset fusion of SAR and optical images for 3-D coseismic surface deformation," IEEE Geosci. Remote Sens. Lett., vol. 18, no. 6, pp. 1049–1053, Jun. 2021, doi: 10.1109/LGRS.2020.2991758.
[16] H. Yu, N. Cao, Y. Lan, and M. Xing, "Multisystem interferometric data fusion framework: A three-step sensing approach," IEEE Trans. Geosci. Remote Sens., vol. 59, no. 10, pp. 8501–8509, Oct. 2021, doi: 10.1109/TGRS.2020.3045093.
[17] L. Zhou, H. Yu, V. Pascazio, and M. Xing, "PU-GAN: A one-step 2-D InSAR phase unwrapping based on conditional generative adversarial network," IEEE Trans. Geosci. Remote Sens., vol. 60, Jan. 2022, Art. no. 5221510, doi: 10.1109/TGRS.2022.3145342.
[18] L. Zhou, H. Yu, Y. Lan, and M. Xing, "Deep learning-based branch-cut method for InSAR two-dimensional phase unwrapping," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5209615, doi: 10.1109/TGRS.2021.3099997.
[19] H. Thirugnanam, S. Uhlemann, R. Reghunadh, M. V. Ramesh, and V. P. Rangan, "Review of landslide monitoring techniques with IoT integration opportunities," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 5317–5338, Jun. 2022, doi: 10.1109/JSTARS.2022.3183684.
[20] N. Jiang, H. Li, C. Li, H. Xiao, and J. Zhou, "A fusion method using terrestrial laser scanning and unmanned aerial vehicle photogrammetry for landslide deformation monitoring under complex terrain conditions," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 4707214, doi: 10.1109/TGRS.2022.3181258.
[21] S. K. Ahmad, F. Hossain, H. Eldardiry, and T. M. Pavelsky, "A fusion approach for water area classification using visible, near infrared and synthetic aperture radar for South Asian conditions," IEEE Trans. Geosci. Remote Sens., vol. 58, no. 4, pp. 2471–2480, Apr. 2020, doi: 10.1109/TGRS.2019.2950705.
[22] J. Park et al., "Illuminating dark fishing fleets in North Korea," Sci. Adv., vol. 6, Jul. 2020, Art. no. eabb1197, doi: 10.1126/sciadv.abb1197.
[23] S. Brusch, S. Lehner, T. Fritz, M. Soccorsi, A. Soloviev, and B. V. Schie, "Ship surveillance with TerraSAR-X," IEEE Trans. Geosci. Remote Sens., vol. 49, no. 3, pp. 1092–1102, Mar. 2011, doi: 10.1109/TGRS.2010.2071879.
[24] T. Liu, Z. Yang, A. Marino, G. Gao, and J. Yang, "Joint polarimetric subspace detector based on modified linear discriminant analysis," IEEE Trans. Geosci. Remote Sens., vol. 60, Feb. 2022, Art. no. 5223519, doi: 10.1109/TGRS.2022.3148979.
[25] T. Liu, Z. Yang, G. Gao, A. Marino, S. Chen, and J. Yang, "A general framework of polarimetric detectors based on quadratic optimization," IEEE Trans. Geosci. Remote Sens., vol. 60, Oct. 2022, Art. no. 5237418, doi: 10.1109/TGRS.2022.3217336.
[26] B. Zhang, Z. Zhu, W. Perrie, J. Tang, and J. A. Zhang, "Estimating tropical cyclone wind structure and intensity from spaceborne radiometer and synthetic aperture radar," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 4043–4050, Mar. 2021, doi: 10.1109/JSTARS.2021.3065866.
[27] H. Greidanus and N. Kourti, "Findings of the DECLIMS project-detection and classification of marine traffic from space," in Proc. SEASAR, Adv. SAR Oceanogr. ENVISAT and ERS, 2006, pp. 126–131.
[28] Y. Lan, H. Yu, Z. Yuan, and M. Xing, "Comparative study of DEM reconstruction accuracy between single- and multibaseline InSAR phase unwrapping," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5220411, doi: 10.1109/TGRS.2022.3140327.
[29] L. Zhou, H. Yu, Y. Lan, S. Gong, and M. Xing, "CANet: An unsupervised deep convolutional neural network for efficient cluster-analysis-based multibaseline InSAR phase unwrapping," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5212315, doi: 10.1109/TGRS.2021.3110518.
[30] H. Yu, T. Yang, L. Zhou, and Y. Wang, "PDNet: A light-weight deep convolutional neural network for InSAR phase denoising," IEEE Trans. Geosci. Remote Sens., vol. 60, Nov. 2022, Art. no. 5239309, doi: 10.1109/TGRS.2022.3224030.
[31] A. Allies et al., "Assimilation of multisensor optical and multiorbital SAR satellite data in a simplified agrometeorological model for rapeseed crops monitoring," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 1123–1137, 2022, doi: 10.1109/JSTARS.2021.3136289.
[32] D. A. Kroodsma et al., "Tracking the global footprint of fisheries," Science, vol. 359, no. 6378, pp. 904–908, Feb. 2018, doi: 10.1126/science.aao5646.
[33] G. Thomas, "Collaboration in space: The silver bullet for global maritime awareness," Can. Nav. Rev., vol. 8, no. 1, pp. 14–18, 2012.
[34] A. D. George and C. M. Wilson, "Onboard processing with hybrid and reconfigurable computing on small satellites," Proc. IEEE, vol. 106, no. 3, pp. 458–470, Mar. 2018, doi: 10.1109/JPROC.2018.2802438.
[35] M. Amaud and A. Barumchercyk, "An experimental optical link between an Earth remote sensing satellite, SPOT-4, and a European data relay satellite," Int. J. Satell. Commun., vol. 6, no. 2, pp. 127–140, Apr./Jun. 1998, doi: 10.1002/sat.4600060208.
[36] H. Jiang and S. Tong, The Technologies and Systems of Space Laser Communication. Beijing, China: National Defense Industry Press, 2010.
[37] K. Gao, Y. Liu, and G. Ni, "Study on on-board real-time image processing technology of optical remote sensing," Spacecraft Recovery Remote Sens., vol. 29, no. 1, pp. 50–54, 2008.
[38] Z. Yue, Z. Qin, and J. Li, "Design of in-orbit processing mechanism for space-earth integrated information network," J. China Acad. Electron. Inf. Technol., vol. 6, no. 4, pp. 580–585, 2020.



[39] X. Luo, "Design of multi-task real-time operating system in on board data handling," Chin. Space Sci. Technol., vol. 3, no. 3, pp. 15–20, 1997.
[40] C. Bian, "Research on valid region on-board real-time detection and compression technology applied for optical remote sensing image," Harbin Institute of Technology, Harbin, China, Rep. TD36827148, 2018.
[41] C. Liu, Y. Guo, and N. Li, "Composition and compression of satellite multi-channel remote sensing images," Opt. Precis. Eng., vol. 21, no. 2, pp. 445–453, 2013.
[42] J. Li, L. Jin, and G. Li, "Lossless compression of hyperspectral image for space-borne application," Spectrosc. Spectral Anal., vol. 32, no. 8, pp. 2264–2269, Aug. 2012.
[43] C. Liu, Y. Guo, and N. Li, "Real-time composing and compression of image within satellite multi-channel TDICCD camera," Infrared Laser Eng., vol. 42, no. 8, pp. 2068–2675, Aug. 2013.
[44] D. Valsesia and E. Magli, "High-throughput onboard hyperspectral image compression with ground-based CNN reconstruction," IEEE Trans. Geosci. Remote Sens., vol. 57, no. 12, pp. 9544–9553, Dec. 2019, doi: 10.1109/TGRS.2019.2927434.
[45] T. Yang, Q. Xu, F. Meng, and S. Zhang, "Distributed real-time image processing of formation flying SAR based on embedded GPUs," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 6495–6505, Aug. 2022, doi: 10.1109/JSTARS.2022.3197199.
[46] D. Mota et al., "Onboard processing of synthetic aperture radar backprojection algorithm in FPGA," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 3600–3611, Apr. 2022, doi: 10.1109/JSTARS.2022.3169828.
[47] J. Qiu et al., "A novel weight generator in real-time processing architecture of DBF-SAR," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5204915, doi: 10.1109/TGRS.2021.3067882.
[48] Z. Ding, P. Zheng, Y. Wang, T. Zeng, and T. Long, "Blocked azimuth spectrum reconstruction algorithm for onboard real-time dual-channel SAR imaging," IEEE Geosci. Remote Sens. Lett., vol. 19, 2022, Art. no. 4015305, doi: 10.1109/LGRS.2021.3091276.
[49] D. Wang, X. Chen, and Z. Li, "On-board cloud detection and avoidance algorithms for optical remote sensing satellite," Syst. Eng. Electron., vol. 3, no. 3, pp. 515–522, 2019.
[50] X. Yan, Y. Xia, and J. Zhao, "Efficient implementation method of real-time cloud detection in remote sensing video based on FPGA," Appl. Res. Comput., vol. 6, pp. 1794–1799, 2021.
[51] G. Yang et al., "Algorithm/hardware codesign for real-time on-satellite CNN-based ship detection in SAR imagery," IEEE Trans. Geosci. Remote Sens., vol. 60, Mar. 2022, Art. no. 5226018, doi: 10.1109/TGRS.2022.3161499.
[52] J. Huang, G. Zhou, and X. Zhou, "A new FPGA architecture of Fast and BRIEF algorithm for on-board corner detection and matching," Sensors, vol. 18, no. 4, pp. 1014–1031, Mar. 2018, doi: 10.3390/s18041014.
[53] T. Zhang and Z. Zuo, "Some key problems on space-borne recognition of moving target," Infrared Laser Eng., vol. 30, no. 6, pp. 395–400, 2001.
[54] S. Yu, Y. Yu, and X. He, "On-board fast and intelligent perception of ships with the 'Jilin-1' Spectrum 01/02 satellites," IEEE Access, vol. 8, pp. 48,005–48,014, Mar. 2020, doi: 10.1109/ACCESS.2020.2979476.
[55] J. T. Johnson et al., "Real-time detection and filtering of radio frequency interference onboard a spaceborne microwave radiometer: The CubeRRT mission," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 13, pp. 1610–1624, Apr. 2020, doi: 10.1109/JSTARS.2020.2978016.
[56] G. Wu, B. Cui, and Q. Shen, "Research on real-time guided multi-satellite imaging mission planning method," Spacecraft Eng., vol. 28, no. 5, pp. 1–6, Oct. 2019.
[57] B. Zhang, J. Luo, and J. Yuan, "On-orbit autonomous operation cooperative control of multi-spacecraft formation," J. Astronaut., vol. 10, no. 1, pp. 130–136, 2010.
[58] Y. Li, X. Yin, W. Zhou, M. Lin, H. Liu, and Y. Li, "Performance simulation of the payload IMR and MICAP onboard the Chinese ocean salinity satellite," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5301916, doi: 10.1109/TGRS.2021.3111026.
[59] F. Viel, W. D. Parreira, A. A. Susin, and C. A. Zeferino, "A hardware accelerator for onboard spatial resolution enhancement of hyperspectral images," IEEE Geosci. Remote Sens. Lett., vol. 18, no. 10, pp. 1796–1800, Oct. 2021, doi: 10.1109/LGRS.2020.3009019.
[60] Z. Wang, F. Liu, T. Zeng, and S. He, "A high-frequency motion error compensation algorithm based on multiple errors separation in BiSAR onboard mini-UAVs," IEEE Trans. Geosci. Remote Sens., vol. 60, Feb. 2022, Art. no. 5223013, doi: 10.1109/TGRS.2022.3150081.
[61] M. Martone, M. Villano, M. Younis, and G. Krieger, "Efficient onboard quantization for multichannel SAR systems," IEEE Geosci. Remote Sens. Lett., vol. 16, no. 12, pp. 1859–1863, Dec. 2019, doi: 10.1109/LGRS.2019.2913214.
[62] H. M. Heyn, M. Blanke, and R. Skjetne, "Ice condition assessment using onboard accelerometers and statistical change detection," IEEE J. Ocean. Eng., vol. 45, no. 3, pp. 898–914, Jul. 2020, doi: 10.1109/JOE.2019.2899473.
[63] Z. Cao, R. Ma, J. Liu, and J. Ding, "Improved radiometric and spatial capabilities of the coastal zone imager onboard Chinese HY-1C satellite for inland lakes," IEEE Geosci. Remote Sens. Lett., vol. 18, no. 2, pp. 193–197, Feb. 2021, doi: 10.1109/LGRS.2020.2971629.
[64] Z. Li et al., "In-orbit test of the polarized scanning atmospheric corrector (PSAC) onboard Chinese environmental protection and disaster monitoring satellite constellation HJ-2 A/B," IEEE Trans. Geosci. Remote Sens., vol. 60, May 2022, Art. no. 4108217, doi: 10.1109/TGRS.2022.3176978.
[65] C. Fu, Z. Cao, Y. Li, J. Ye, and C. Feng, "Onboard real-time aerial tracking with efficient Siamese anchor proposal network," IEEE Trans. Geosci. Remote Sens., vol. 60, 2022, Art. no. 5606913, doi: 10.1109/TGRS.2021.3083880.
[66] G. Doran, A. Daigavane, and K. L. Wagstaff, "Resource consumption and radiation tolerance assessment for data analysis algorithms onboard spacecraft," IEEE Trans. Aerosp. Electron. Syst., vol. 58, no. 6, pp. 5180–5189, Dec. 2022, doi: 10.1109/TAES.2022.3169123.
[67] S. J. Lee and M. H. Ahn, "Synergistic benefits of intercomparison between simulated and measured radiances of imagers onboard geostationary satellites," IEEE Trans. Geosci. Remote Sens., vol. 59, no. 12, pp. 10,725–10,737, Dec. 2021, doi: 10.1109/TGRS.2021.3054030.



[68] Y. Lin, J. Li, and C. Xiao, "Vicarious radiometric calibration of the AHSI instrument onboard ZY1E on Dunhuang radiometric calibration site," IEEE Trans. Geosci. Remote Sens., vol. 60, Jun. 2022, Art. no. 5530713, doi: 10.1109/TGRS.2022.3180120.
[69] Y. Liu et al., "A classification-based, semianalytical approach for estimating water clarity from a hyperspectral sensor onboard the ZY1-02D satellite," IEEE Trans. Geosci. Remote Sens., vol. 60, Mar. 2022, Art. no. 4206714, doi: 10.1109/TGRS.2022.3161651.
[70] I. Sandberg et al., "First results and analysis from ESA next generation radiation monitor unit onboard EDRS-C," IEEE Trans. Nucl. Sci., vol. 69, no. 7, pp. 1549–1556, Jul. 2022, doi: 10.1109/TNS.2022.3160108.
[71] M. Zhao et al., "First year on-orbit calibration of the Chinese environmental trace gas monitoring instrument onboard GaoFen-5," IEEE Trans. Geosci. Remote Sens., vol. 58, no. 12, pp. 8531–8540, Dec. 2020, doi: 10.1109/TGRS.2020.2988573.
[72] Z. Ma, S. Zhu, and J. Yang, "FY4QPE-MSA: An all-day near-real-time quantitative precipitation estimation framework based on multispectral analysis from AGRI onboard Chinese FY-4 series satellites," IEEE Trans. Geosci. Remote Sens., vol. 60, Mar. 2022, Art. no. 4107215, doi: 10.1109/TGRS.2022.3159036.
[73] X. Lei et al., "Geolocation error estimation method for the wide swath polarized scanning atmospheric corrector onboard HJ-2 A/B satellites," IEEE Trans. Geosci. Remote Sens., vol. 60, Jul. 2022, Art. no. 5626609, doi: 10.1109/TGRS.2022.3193095.
[74] X. Ye, J. Liu, M. Lin, J. Ding, B. Zou, and Q. Song, "Global ocean chlorophyll-a concentrations derived from COCTS onboard the HY-1C satellite and their preliminary evaluation," IEEE Trans. Geosci. Remote Sens., vol. 59, no. 12, pp. 9914–9926, Dec. 2021, doi: 10.1109/TGRS.2020.3036963.
[75] X. Ye, J. Liu, M. Lin, J. Ding, B. Zou, and Q. Song, "Sea surface temperatures derived from COCTS onboard the HY-1C satellite," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 1038–1047, 2021, doi: 10.1109/JSTARS.2020.3033317.
[76] Z. Wang et al., "Validation of new sea surface wind products from scatterometers onboard the HY-2B and MetOp-C satellites," IEEE Trans. Geosci. Remote Sens., vol. 58, no. 6, pp. 4387–4394, Jun. 2020, doi: 10.1109/TGRS.2019.2963690.
[77] G. Rabideau, D. Tran, and S. Chien, "Mission operations of Earth Observing-1 with onboard autonomy," in Proc. 2nd IEEE Int. Conf. Space Mission Challenges Inf. Technol., Pasadena, CA, USA, 2006, pp. 1–7, doi: 10.1109/SMC-IT.2006.48.
[78] W. A. Myers, R. D. Smith, and J. L. Stuart, "NEMO satellite sensor imaging payload," in Proc. Infrared Spaceborne Remote Sens. VI (SPIE), 1998, vol. 3437, no. 11, pp. 29–40, doi: 10.1117/12.331331.
[79] S. Yarbrough et al., "MightySat II.1 hyperspectral imager: Summary of on-orbit performance," in Proc. Int. Symp. Opt. Sci. Technol. (SPIE), Imag. Spectrometry VII, 2002, vol. 4480, doi: 10.1117/12.453339.
[80] L. Doggrell, "Operationally responsive space: A vision for the future of military space," Air Space Power J., vol. 20, no. 2, pp. 42–49, 2006.
[81] M. Dai and Z. You, "On-board intelligent image process based on transputer for small satellites," Spacecraft Eng., vol. 9, no. 3, pp. 12–20, 2000.
[82] S. Yu et al., "The real-time on-orbit detection and recognition technologies for the typical target based on optical remote sensing satellite," in Proc. 4th China High Resolution Earth Observ. Conf., Wuhan, China, 2015, pp. 28–32.
[83] B. Zhukov et al., "Spaceborne detection and characterization of fires during the bi-spectral infrared detection (BIRD) experimental small satellite mission (2001-2004)," Remote Sens. Environ., vol. 100, no. 1, pp. 29–51, Jan. 2006, doi: 10.1016/j.rse.2005.09.019.
[84] F. D. Lussy et al., "Pleiades HR in flight geometrical calibration: Location and mapping of the focal plane," in Proc. Int. Arch. Photogrammetry, Remote Sens. Spatial Inf. Sci., Aug. 2012, vol. 8, pp. 519–523, doi: 10.5194/isprsarchives-XXXIX-B1-519-2012.
[85] B. Fraser, "The FedSat microsatellite mission," Space Sci. Rev., vol. 107, pp. 303–306, Apr. 2003, doi: 10.1023/A:1025508816225.
[86] R. Shen, "Some thoughts of Chinese integrated space-ground network system," Eng. Sci., vol. 8, no. 10, pp. 19–30, Oct. 2006.
[87] J. Rash, K. Hogie, and R. Casasanta, "Internet technology for future space missions," Comput. Netw., vol. 47, no. 5, pp. 651–659, Apr. 2005, doi: 10.1016/j.comnet.2004.08.003.
[88] J. Mukherjee and B. Ramamurthy, "Communication technologies and architectures for space network and interplanetary internet," IEEE Commun. Surveys Tuts., vol. 15, no. 2, pp. 881–897, Second Quarter 2013, doi: 10.1109/SURV.2012.062612.00134.
[89] R. C. Sofia et al., "Internet of space: Networking architectures and protocols to support space-based internet service," IEEE Access, vol. 10, pp. 92,706–92,709, Sep. 2022, doi: 10.1109/ACCESS.2022.3202342.
[90] D. R. Li, X. Shen, J. Y. Gong, J. Zhang, and J. Lu, "On construction of China's space information network," Geomatics Inf. Sci. Wuhan Univ., vol. 40, no. 6, pp. 711–715, 2016.
[91] X. Yang, "Integrated spatial information system based on software-defined satellite," Rev. Electron. Sci. Technol., vol. 4, no. 1, pp. 15–22, 2004.
[92] S. Min, "An idea of China's space-based integrated information network," Spacecraft Eng., vol. 22, no. 5, pp. 1–14, 2013.
[93] N. Zhang, K. Zhao, and G. Liu, "Thought on constructing the integrated space-terrestrial information network," J. China Acad. Electron. Inf. Technol., vol. 10, no. 3, pp. 223–230, 2015.
[94] H. Jiang et al., "Several key problems of space-ground integration information network," Acta Armamentarii, vol. 35, no. 1, pp. 96–100, 2014.
[95] C. Sun, "Research status and problems for space-based transmission network and space-ground integrated information network," Radio Eng., vol. 47, no. 1, pp. 1–6, Jan. 2017.
[96] S. Shekhar, S. Feiner, and W. G. Aref, "From GPS and virtual globes to spatial computing-2020," Geoinformatica, vol. 19, no. 4, pp. 799–832, Oct. 2015, doi: 10.1007/s10707-015-0235-9.
[97] P. Thomas, DARPA Blackjack Demo Program – Pivot to LEO and Tactical Space Architecture, DARPA Tactical Technology Office, Arlington, VA, USA, 2018.
[98] National Academies of Sciences, Engineering, and Medicine, National Security Space Defense and Protection: Public Report. Washington, DC, USA: The National Academies Press, 2016.
[99] F. C. Teston et al., "The PROBA-1 microsatellite," in Proc. SPIE 49th Annu. Meeting, Opt. Sci. Technol., Imag. Spectrometry X, Oct. 2004, vol. 5546, doi: 10.1117/12.561071.



[100] Z. Chen, "The innovation and practice of PJ-1 satellite," Aerosp. Shanghai, vol. 33, no. 3, pp. 1–10, 2016.
[101] G. Zhou, "Architecture of future intelligent Earth observing satellites (FIEOS) in 2010 and beyond," in Proc. SPIE Earth Observ. Syst. VIII, San Diego, CA, USA, 2001, pp. 1021–1028.
[102] B. Oktay and G. Zhou, "From global Earth observation system of systems to future intelligent Earth observing satellite system," in Proc. 3rd Int. Symp. Future Intell. Earth Observ. Satell., Beijing, China, May 2006, pp. 33–38.
[103] D. R. Li and X. Shen, "On intelligent Earth observation systems," Sci. Surv. Mapping, vol. 30, no. 4, pp. 9–11, 2005.
[104] D. R. Li, M. Wang, X. Shen, and Z. Dong, "From Earth observation satellite to Earth observation brain," Geomatics Inf. Sci. Wuhan Univ., vol. 42, no. 2, pp. 143–149, Feb. 2017.
[105] D. R. Li, X. Shen, D. Li, and S. Li, "On civil-military integrated space-based real-time information service system," Geomatics Inf. Sci. Wuhan Univ., vol. 42, no. 11, pp. 1501–1505, Nov. 2017.
[106] J. Zhang and J. Guo, "Preliminary design on intelligent remote sensing satellite system and analysis on its key technologies," Radio Eng., vol. 46, no. 2, pp. 1–5, 2016.
[107] B. Zhang, "Intelligent remote sensing satellite system," J. Remote Sens., vol. 15, no. 3, pp. 423–427, 2011.
[108] M. Wang and Q. Wu, "Key problems of remote sensing images intelligent service for constellation," Acta Geodaetica Cartogr. Sin., vol. 51, no. 6, pp. 1008–1016, 2022.
[109] F. Yang, S. Liu, J. Zhao, and Q. Zheng, "Technology prospective of intelligent remote sensing satellite," Spacecraft Eng., vol. 26, no. 5, pp. 74–81, 2017.
[110] P. Mugen, S. Zhang, H. Xu, M. Zhang, Y. Sun, and Y. Cheng, "Communication and remote sensing integrated LEO satellites: Architecture, technologies and experiment," Telecommun. Sci., vol. 38, no. 1, pp. 13–24, 2022.
[111] F. Wu, C. Lu, M. Zhu, H. Chen, and Y. Pan, "Towards a new generation of artificial intelligence in China," Nature Mach. Intell., vol. 2, no. 6, pp. 312–316, Jun. 2020, doi: 10.1038/s42256-020-0183-4.
[112] M. Bonavita, R. Arcucci, A. Carrassi, P. Dueben, and L. Raynaud, "Machine learning for Earth system observation and prediction," Bull. Amer. Meteorol. Soc., vol. 102, no. 4, pp. 1–13, Apr. 2020, doi: 10.1175/BAMS-D-20-0307.1.
[113] Y. Lin, J. Hu, L. Li, F. Wu, and J. Zhao, "Design and implementation of on-orbit valuable image extraction for the TianZhi-1 satellite," in Proc. IEEE 14th Int. Conf. Intell. Syst. Knowl. Eng. (ISKE), 2019, pp. 1076–1080, doi: 10.1109/ISKE47853.2019.9170453.
[114] R. A. Singer and R. G. Sea, "New results in optimizing surveillance system tracking and data correlation performance in dense multitarget environments," IEEE Trans. Autom. Control, vol. 18, no. 6, pp. 571–582, Dec. 1973, doi: 10.1109/TAC.1973.1100421.
[115] T. L. Song, D. Lee, and J. Ryu, "A probabilistic nearest neighbor filter algorithm for tracking in a clutter environment," Signal Process., vol. 85, no. 10, pp. 2044–2053, Oct. 2005, doi: 10.1016/j.sigpro.2005.01.016.
[116] Y. Bar-Shalom and E. Tse, "Tracking in a cluttered environment with probabilistic data association," Automatica, vol. 11, no. 9, pp. 451–460, Sep. 1975, doi: 10.1016/0005-1098(75)90021-7.
[117] D. Musicki, R. Evans, and S. Stankovic, "Integrated probabilistic data association," IEEE Trans. Autom. Control, vol. 39, no. 6, pp. 1237–1241, Jun. 1994, doi: 10.1109/9.293185.
[118] T. E. Fortmann, Y. Bar-Shalom, and M. Scheffe, "Sonar tracking of multiple targets using joint probabilistic data association," IEEE J. Ocean. Eng., vol. 8, no. 3, pp. 173–183, Jul. 1983, doi: 10.1109/JOE.1983.1145560.
[119] J. A. Roecker and G. L. Phillis, "Suboptimal joint probabilistic data association," IEEE Trans. Aerosp. Electron. Syst., vol. 29, no. 2, pp. 510–517, Apr. 1993, doi: 10.1109/7.210087.
[120] J. A. Roecker, "A class of near optimal JPDA algorithms," IEEE Trans. Aerosp. Electron. Syst., vol. 30, no. 2, pp. 504–510, Apr. 1994, doi: 10.1109/7.272272.
[121] D. B. Reid, "An algorithm for tracking multiple targets," IEEE Trans. Autom. Control, vol. 24, no. 6, pp. 843–854, Dec. 1979, doi: 10.1109/TAC.1979.1102177.
[122] R. Danchick and G. E. Newman, "A fast method for finding the exact N-best hypotheses for multitarget tracking," IEEE Trans. Aerosp. Electron. Syst., vol. 29, no. 2, pp. 555–560, Apr. 1993, doi: 10.1109/7.210093.
[123] H. A. P. Blom, "Overlooked potential of systems with Markovian coefficients," in Proc. 25th Conf. Decis. Control, Athens, Greece, Dec. 1986, pp. 1758–1764, doi: 10.1109/CDC.1986.267261.
[124] Y. Bar-Shalom, K. C. Chang, and H. A. P. Blom, "Tracking a maneuvering target using input estimation versus the interacting multiple model algorithm," IEEE Trans. Aerosp. Electron. Syst., vol. 25, no. 2, pp. 296–300, Mar. 1989, doi: 10.1109/7.18693.
[125] Y. He, D. Lu, and Y. Peng, "Two new track correlation algorithms in a multisensor data fusion system," Acta Electron. Sin., vol. 25, no. 9, pp. 10–14, 1997.
[126] Y. He, Y. Peng, and D. Lu, "Binary track correlation algorithms in a distributed multisensor data fusion system," J. Electron., vol. 19, no. 6, pp. 721–728, Nov. 1997.
[127] C. Hue, J. P. L. Cadre, and P. Perez, "Tracking multiple objects with particle filtering," IEEE Trans. Aerosp. Electron. Syst., vol. 38, no. 3, pp. 791–812, Jul. 2002, doi: 10.1109/TAES.2002.1039400.
[128] K. Punithakumar, T. Kirubarajan, and A. Sinha, "Multiple-model probability hypothesis density filter for tracking maneuvering targets," IEEE Trans. Aerosp. Electron. Syst., vol. 44, no. 1, pp. 87–98, Jan. 2008, doi: 10.1109/TAES.2008.4516991.
[129] M. Tobias and A. D. Lanterman, "Probability hypothesis density-based multitarget tracking with bistatic range and Doppler observations," IEE Proc.-Radar, Sonar Navigation, vol. 152, no. 3, pp. 195–205, Jun. 2005, doi: 10.1049/ip-rsn:20045031.
[130] S. Deb, K. R. Pattipati, and Y. Bar-Shalom, "A multisensor-multitarget data association algorithm for heterogeneous sensors," IEEE Trans. Aerosp. Electron. Syst., vol. 29, no. 2, pp. 560–568, Apr. 1993, doi: 10.1109/7.210094.
[131] K. R. Pattipati and S. Deb, "A new relaxation algorithm and passive sensor data association," IEEE Trans. Autom. Control, vol. 37, no. 2, pp. 198–213, Feb. 1992, doi: 10.1109/9.121621.
[132] D. Lerro and Y. Bar-Shalom, "Interacting multiple model tracking with target amplitude feature," IEEE Trans. Aerosp. Electron. Syst., vol. 29, no. 4, pp. 494–508, Apr. 1993, doi: 10.1109/7.210086.



[133] X. J. Jing and Y. G. Chen, "Association algorithm of data fusion using Doppler frequency of targets," Syst. Eng. Electron., vol. 21, no. 7, pp. 66–68, 1999.
[134] Z. Xu, Y. Ni, X. Gong, and L. Jin, "Using target's polarization for data association in multiple target tracking," in Proc. IEEE 8th Int. Conf. Signal Process., 2006, doi: 10.1109/ICOSP.2006.346005.
[135] L. Wang and J. Li, "Using range profiles for data association in multiple-target tracking," IEEE Trans. Aerosp. Electron. Syst., vol. 32, no. 1, pp. 445–450, Jan. 1996, doi: 10.1109/7.481285.
[136] J. G. Wang, J. Q. Luo, and J. M. Lv, "Passive tracking based on data association with information fusion of multi-feature and multi-target," in Proc. Int. Conf. Neural Netw. Signal Process., 2003, pp. 686–689, doi: 10.1109/ICNNSP.2003.1279367.
[137] C. Zhao, Q. Pan, and Y. Liang, Video Imagery Moving Targets Analysis. Beijing, China: National Defense Industry Press, 2011.
[138] Y. Bar-Shalom, T. Kirubarajan, and C. Gokberk, "Tracking with classification-aided multiframe data association," IEEE Trans. Aerosp. Electron. Syst., vol. 41, no. 3, pp. 868–878, Jul. 2005, doi: 10.1109/TAES.2005.1541436.
[139] M. A. Zaveri, S. N. Merchant, and U. B. Desai, "Robust neural-network-based data association and multiple model-based tracking of multiple point targets," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 37, no. 3, pp. 337–351, May 2007, doi: 10.1109/TSMCC.2007.893281.
[140] P. H. Chou, Y. N. Chung, and M. R. Yang, "Multiple-target tracking with competitive Hopfield neural network based data association," IEEE Trans. Aerosp. Electron. Syst., vol. 43, no. 3, pp. 1180–1188, Jul. 2007, doi: 10.1109/TAES.2007.4383609.
[141] Z. Xiong, Y. Cui, W. Xiong, and X. Gu, "Adaptive association for satellite and radar position data," Syst. Eng. Electron., vol. 43, no. 1, pp. 91–98, 2021.
[142] P. Sarlin, D. DeTone, T. Malisiewicz, and A. Rabinovich, "SuperGlue: Learning feature matching with graph neural networks," in Proc. IEEE/CVF Conf. Comput. Vision Pattern Recognit.
IEEE Int. Geosci. Remote Sens. Symp., 2021, pp. 5044–5047, doi: 10.1109/IGARSS47720.2021.9553591.
[149] Y. Cao, "Research on target tracking correlation technology based on multi-source information of satellite reconnaissance," Ph.D. dissertation, Nat. Defense Univ. Sci. Technol., Changsha, China, 2018.
[150] W. Li, "Targets association based on electronic reconnaissance data," M.S. thesis, Nat. Defense Univ. Sci. Technol., Changsha, China, 2013.
[151] H. Zeng, "Research on ship formation target data association based on spaceborne optical imaging reconnaissance and spaceborne electronic reconnaissance," M.S. thesis, Nat. Defense Univ. Sci. Technol., Changsha, China, 2008.
[152] H. Zou, H. Sun, K. Ji, C. Du, and C. Lu, "Multimodal remote sensing data fusion via coherent point set analysis," IEEE Geosci. Remote Sens. Lett., vol. 10, no. 4, pp. 672–676, Jul. 2013, doi: 10.1109/LGRS.2012.2217936.
[153] H. Sun, H. Zou, K. Ji, S. Zhou, and C. Lu, "Combined use of optical imaging satellite data and electronic intelligence satellite data for large scale ship group surveillance," J. Navigation, vol. 68, no. 2, pp. 383–396, Mar. 2015, doi: 10.1017/S0373463314000654.
[154] J. Yang, J. Y. Yang, D. Zhang, and J. F. Lu, "Feature fusion: Parallel strategy vs. serial strategy," Pattern Recognit., vol. 36, no. 6, pp. 1369–1381, Jun. 2003, doi: 10.1016/S0031-3203(02)00262-5.
[155] L. Pei and C. Fyfe, "Canonical correlation analysis using artificial neural networks," in Proc. Eur. Symp. Artif. Neural Netw., 1998, pp. 363–367.
[156] S. Akaho, "A kernel method for canonical correlation analysis," in Proc. Int. Meeting Psychometric Soc., 2006, pp. 263–269.
[157] T. Sun and S. Chen, "Locality preserving CCA with applications to data visualization and pose estimation," Image Vision Comput., vol. 25, no. 5, pp. 531–543, May 2007, doi: 10.1016/j.imavis.2006.04.014.
[158] D. R. Hardoon and J. Shawe-Taylor, "Sparse canonical correla-
(CVPR), 2020, pp. 4937–4946. tion analysis,” Mach. Learn., vol. 83, no. 3, pp. 331–353, Jun.
[143] L. Lei, H. Cai, T. Tang, and Y. Su, “A MSA feature-based multiple 2011, doi: 10.1007/s10994-010-5222-7.
targets association algorithm in remote sensing images,” J. Re- [159] T. Sun, S. Chen, J. Yang, and P. Shi, “A novel method of com-
mote Sens., vol. 12, no. 4, pp. 586–592, 2008. bined feature extraction for recognition,” in Proc. 8th IEEE Int.
[144] Y. Tang and S. Xu, “A united target data association algo- Conf. Data Mining (ICDM), Pisa, Italy, 2008, pp. 1043–1048,
rithm based on D-S theory and multiple remote sensing im- doi: 10.1109/ICDM.2008.28.
ages,” J. Univ. Sci. Technol. China, vol. 36, no. 5, pp. 465–471, [160] S. Lee and S. Choi, “Two-dimensional canonical correlation
May 2006. analysis,” IEEE Signal Process. Lett., vol. 14, no. 10, pp. 735–738,
[145] L. Lin, “Research on feature extraction and fusion technology Oct. 2007, doi: 10.1109/LSP.2007.896438.
of ship target in multi-source remote sensing images,” Ph.D. [161] J. Baek and M. Kim, “Face recognition using partial least
dissertation, Nat. Defense Univ. Sci. Technol., Changsha, Chi- squares components,” Pattern Recognit., vol. 37, no. 6, pp.
na, 2008. 1303–1306, Jun. 2004, doi: 10.1016/j.patcog.2003.10.014.
[146] T. Yang et al., “Small moving vehicle detection in a satellite [162] R. Rosipal, “Kernel partial least squares regression in repro-
video of an urban area,” Sensors, vol. 16, no. 9, pp. 1528–1543, ducing kernel Hilbert space,” J. Mach. Learn. Res., vol. 2, pp.
Sep. 2016, doi: 10.3390/s16091528. 97–123, Dec. 2001.
[147] J. Wu, G. Zhang, T. Wang, and Y. Jiang, “Satellite video point- [163] D. Chung and S. Keles, “Sparse partial least squares classifica-
target tracking in combination with motion smoothness con- tion for high dimensional data,” Statist. Appl. Genetics Mol. Biol.,
straint and grayscale feature,” Acta Geodaetica Cartogr. Sin., vol. 9, no. 1, pp. 1544–1563, Mar. 2010, doi: 10.2202/1544-
vol. 46, no. 9, pp. 1135–1146, Sep. 2017. 6115.1492.
[148] Y. Liu, P. Guo, L. Cao, M. Ji, and L. Yao, “Information fusion of [164] B. Long, S. Y. Philip, and Z. Zhang, “A general model for mul-
GF-1 and GF-4 satellite imagery for ship surveillance,” in Proc. tiple view unsupervised learning,” in Proc. SIAM Int. Conf.

58 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE JUNE 2023


Data Mining, Atlanta, GA, USA, 2008, pp. 822–833, doi: IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp.
10.1137/1.9781611972788.74. 448–462, 2022, doi: 10.1109/JSTARS.2021.3134676.
[165] L. Zhou, H. Yu, Y. Lan, and M. Xing, “Artificial intelligence in [179] J. Wang, W. Li, Y. Gao, M. Zhang, R. Tao, and Q. Du, “Hyper-
interferometric synthetic aperture radar phase unwrapping: A spectral and SAR image classification via multiscale interactive
review,” IEEE Geosci. Remote Sens. Mag., vol. 9, no. 2, pp. 10–28, fusion network,” IEEE Trans. Neural Netw. Learn. Syst., early ac-
Jun. 2021, doi: 10.1109/MGRS.2021.3065811. cess, 2022, doi: 10.1109/TNNLS.2022.3171572.
[166] D. O. Pop, A. Rogozan, F. Nashashibi, and A. Bensrhair, “Fu- [180] C. Silva-Perez, A. Marino, and I. Cameron, “Learning-based
sion of stereo vision for pedestrian recognition using convo- tracking of crop biophysical variables and key dates estimation
lutional neural networks,” in Proc. 25th Eur. Symp. Artif. Neural from fusion of SAR and optical data,” IEEE J. Sel. Topics Appl.
Netw., Comput. Intell. Mach. Learn. (ESANN), Bruges, Belgium, Earth Observ. Remote Sens., vol. 15, pp. 7444–7457, Aug. 2022,
2017, pp. 772–779. doi: 10.1109/JSTARS.2022.3203248.
[167] A. Karpathy, G. Toderici, and S. Shetty, “Large-scale video clas- [181] L. Lin, J. Li, H. Shen, L. Zhao, Q. Yuan, and X. Li, “Low-res-
sification with convolutional neural networks,” in Proc. IEEE olution fully polarimetric SAR and high-resolution single-
Conf. Comput. Vision Pattern Recognit., Columbus, OH, USA, polarization SAR image fusion network,” IEEE Trans. Geosci.
2014, pp. 1725–1732. Remote Sens., vol. 60, 2022, Art. no. 5216117, doi: 10.1109/
[168] H. Ergun, Y. C. Akyuz, M. Sert, and J. Liu, “Early and late level TGRS.2021.3121166.
fusion of deep convolutional neural networks for visual con- [182] T. Tian et al., “Performance evaluation of deception against
cept recognition,” Int. J. Semantic Comput., vol. 10, no. 3, pp. synthetic aperture radar based on multifeature fusion,” IEEE J.
379–397, Sep. 2016, doi: 10.1142/S1793351X16400158. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 103–115,
[169] G. Andrew, R. Arora, J. Bilmes, and K. Livescu, “Deep canoni- 2021, doi: 10.1109/JSTARS.2020.3028858.
cal correlation analysis,” in Proc. 30th Int. Conf. Mach. Learn. [183] J. Fan, Y. Ye, G. Liu, J. Li, and Y. Li, “Phase congruency order-
(PMLR), Atlanta, GA, USA, 2013, pp. 1247–1255. based local structural feature for SAR and optical image match-
[170] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, ing,” IEEE Geosci. Remote Sens. Lett., vol. 19, May 2022, Art. no.
“Multimodal deep learning,” in Proc. 28th Int. Conf. Mach. 4507105, doi: 10.1109/LGRS.2022.3171587.
Learn., Bellevue, WA, USA, 2011, pp. 689–696. [184] L. Fasano, D. Latini, A. Machidon, C. Clementini, G. Schia-
[171] N. Srivastava and R. R. Salakhutdinov, “Multimodal learning von, and F. D. Frate, “SAR data fusion using nonlinear prin-
with deep Boltzmann machines,” in Proc. Adv. Neural Inf. Pro- cipal component analysis,” IEEE Geosci. Remote Sens. Lett.,
cess. Syst., Lake Tahoe, NV, USA, 2012, pp. 2231–2239. vol. 17, no. 9, pp. 1543–1547, Sep. 2022, doi: 10.1109/LGRS.
[172] D. Yu, L. Deng, and F. Seide, “The deep tensor neural network 2019.2951292.
with applications to large vocabulary speech recognition,” [185] D. Quan et al., “Self-distillation feature learning network for
IEEE Trans. Audio, Speech, Language Process., vol. 21, no. 2, pp. optical and SAR image registration,” IEEE Trans. Geosci. Re-
388–396, Feb. 2013, doi: 10.1109/TASL.2012.2227738. mote Sens., vol. 60, May 2022, Art. no. 4706718, doi: 10.1109/
[173] H. Brian, D. Li, and Y. Dong, “Tensor deep stacking networks,” TGRS.2022.3173476.
IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1944– [186] P. Jain, B. Schoen-Phelan, and R. Ross, “Self-supervised
1957, Aug. 2013, doi: 10.1109/TPAMI.2012.268. learning for invariant representations from multi-spectral
[174] Q. Zhang, L. T. Yang, and Z. Cheng, “Deep computation model and SAR images,” IEEE J. Sel. Topics Appl. Earth Observ. Re-
for unsupervised feature learning on big data,” IEEE Trans. Ser- mote Sens., vol. 15, pp. 7797–7808, Sep. 2022, doi: 10.1109/
vices Comput., vol. 9, no. 1, pp. 161–171, Jan./Feb. 2016, doi: JSTARS.2022.3204888.
10.1109/TSC.2015.2497705. [187] Y. Chen and L. Bruzzone, “Self-supervised SAR-optical data fu-
[175] W. Li, Y. Gao, R. Tao, and Q. Du, “Asymmetric feature fusion sion of sentinel-1/-2 images,” IEEE Trans. Geosci. Remote Sens.,
network for hyperspectral and SAR image classification,” IEEE vol. 60, 2022, Art. no. 5406011, doi: 10.1109/TGRS.2021.
Trans. Neural Netw. Learn. Syst., early access, 2022, doi: 10.1109/ 3128072.
TNNLS.2022.3149394. [188] Z. Zhang, Y. Xu, Q. Cui, Q. Zhou, and L. Ma, “Unsupervised
[176] W. Kang, Y. Xiang, F. Wang, and H. You, “CFNet: A cross fu- SAR and optical image matching using Siamese domain adap-
sion network for joint land cover classification using optical tation,” IEEE Trans. Geosci. Remote Sens., vol. 60, Apr. 2022, Art.
and SAR images,” IEEE J. Sel. Topics Appl. Earth Observ. Re- no. 5227116, doi: 10.1109/TGRS.2022.3170316.
mote Sens., vol. 15, pp. 1562–1574, Jan. 2022, doi: 10.1109/ [189] Z. Zhao et al., “A novel method of ship detection by com-
JSTARS.2022.3144587. bining space-borne SAR and GNSS-R,” in Proc. IET Int. Ra-
[177] Y. Jiang, S. Wei, M. Xu, G. Zhang, and J. Wang, “Combined ad- dar Conf. (IET IRC), 2020, pp. 1045–1051, doi: 10.1049/icp.
justment pipeline for improved global geopositioning accura- 2021.0695.
cy of optical satellite imagery with the aid of SAR and GLAS,” [190] D. Pastina, F. Santi, F. Pieralice, M. Antoniou, and M. Chernia-
IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. kov, “Passive radar imaging of ship targets with GNSS signals
5076–5085, Jun. 2022, doi: 10.1109/JSTARS.2022.3183594. of opportunity,” IEEE Trans. Geosci. Remote Sens., vol. 59, no.
[178] Y. Liao et al., “Feature matching and position matching be- 3, 2627–2642, Mar. 2021, doi: 10.1109/TGRS.2020.3005306.
tween optical and SAR with local deep feature descriptor,” GRS

JUNE 2023 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE 59


AI Security for Geoscience and Remote Sensing
Challenges and future trends

YONGHAO XU, TAO BAI, WEIKANG YU, SHIZHEN CHANG, PETER M. ATKINSON, AND PEDRAM GHAMISI

Digital Object Identifier 10.1109/MGRS.2023.3272825
Date of current version: 30 June 2023

Recent advances in artificial intelligence (AI) have significantly intensified research in the geoscience and remote sensing (RS) field. AI algorithms, especially deep learning-based ones, have been developed and applied widely to RS data analysis. The successful application of AI covers almost all aspects of Earth-observation (EO) missions, from low-level vision tasks like superresolution, denoising, and inpainting, to high-level vision tasks like scene classification, object detection, and semantic segmentation. Although AI techniques enable researchers to observe and understand the earth more accurately, the vulnerability and uncertainty of AI models deserve further attention, considering that many geoscience and RS tasks are highly safety critical. This article reviews the current development of AI security in the geoscience and RS field, covering the following five important aspects: adversarial attack, backdoor attack, federated learning (FL), uncertainty, and explainability. Moreover, the potential opportunities and trends are discussed to provide insights for future research. To the best of the authors' knowledge, this article is the first attempt to provide a systematic review of AI security-related research in the geoscience and RS community. Available code and datasets are also listed in the article to move this vibrant field of research forward.

INTRODUCTION
With the successful launch of an increasing number of RS satellites, the volume of geoscience and RS data is on an explosive growth trend, bringing EO missions into the big
data era [1]. The availability of large-scale RS data has two substantial impacts: it dramatically enriches the way Earth is observed, while also demanding greater requirements for fast, accurate, and automated EO technology [2]. With the vigorous development and stunning achievements of AI in the computer vision field, an increasing number of researchers are applying state-of-the-art AI techniques to numerous challenges in EO [3]. Figure 1 shows the cumulative number of AI-related articles appearing in IEEE Geoscience and Remote Sensing Society publications, along with ISPRS Journal of Photogrammetry and Remote Sensing and Remote Sensing of Environment in the past 10 years. It is clearly apparent that the number of AI-related articles increased significantly after 2021. The successful application of AI covers almost all aspects of EO missions, from low-level vision tasks like superresolution, denoising, and inpainting, to high-level vision tasks like scene classification, object detection, and semantic segmentation [4]. Table 1 summarizes some of the most representative tasks in the geoscience and RS field using AI techniques and reveals the increasing
FIGURE 1. The cumulative numbers of AI-related articles published in IEEE Geoscience and Remote Sensing Society publications, along with ISPRS Journal of Photogrammetry and Remote Sensing and Remote Sensing of Environment in the past 10 years. The statistics are obtained from IEEE Xplore and ScienceDirect.

TABLE 1. REPRESENTATIVE TASKS IN THE GEOSCIENCE AND RS FIELD USING AI TECHNIQUES.

TASK TYPES | AI TECHNIQUES | DATA | REFERENCE

Low-level vision tasks
Pan-sharpening | GAN | WorldView-2 and Gaofen-2 (GF-2) images | [5]
Denoising | LRR | HYDICE and AVIRIS data | [6]
Cloud removal | CNN | Sentinel-1 and Sentinel-2 data | [7]
Destriping | CNN | EO-1 Hyperion and HJ-1A images | [8]

High-level vision tasks
Scene classification | CNN | Google Earth images | [9]
Object detection | CNN | Google Earth images, GF-2, and JL-1 images | [10]
Land use and land cover mapping | FCN | Airborne hyperspectral/VHR color image/lidar data | [11]
Change detection | SN and RNN | GF-2 images | [12]
Video tracking | SN and GMM | VHR satellite videos | [13]

Natural language processing-related tasks
Image captioning | RNN | VHR satellite images with text descriptions | [14]
Text-to-image generation | MHN | VHR satellite images with text descriptions | [15]
Visual question answering | CNN and RNN | Satellite/aerial images with visual questions/answers | [16]

Environment monitoring tasks
Wildfire detection | FCN | Sentinel-1, Sentinel-2, Sentinel-3, and MODIS data | [17]
Landslide detection | FCN and transformer | Sentinel-2 and ALOS PALSAR data | [18]
Weather forecasting | CNN and LSTM | SEVIRI data | [19]
Air-quality prediction | ANN | MODIS data | [20]
Poverty estimation | CNN | VHR satellite images | [21]
Refugee camp detection | CNN | WorldView-2 and WorldView-3 data | [22]

HYDICE: Hyperspectral Digital Imagery Collection Experiment; GAN: generative adversarial network; LRR: low-rank representation; AVIRIS: airborne visible/infrared imaging spectrometer; EO-1: Earth Observing One; HJ-1A: Huan Jing 1A; SN: Siamese network; GMM: Gaussian mixture model; MHN: modern Hopfield network; VHR: very high resolution; FCN: fully convolutional network; ALOS PALSAR: advanced land observing satellite phased-array type L-band SAR; LSTM: long short-term memory; MODIS: moderate-resolution imaging spectroradiometer; ANN: artificial neural network; JL-1: Jilin-1; SEVIRI: Spinning Enhanced Visible and InfraRed Imager.



importance of deep learning methods such as convolutional neural networks (CNNs) in EO.

Despite the great success achieved by AI techniques, related safety and security issues should not be neglected [23]. Although advanced AI models like CNNs possess powerful data fitting capabilities and are designed to learn like the human brain, they usually act like black boxes, which makes it difficult to understand and explain how they work [24]. Moreover, such characteristics may lead to uncertainties, vulnerabilities, and security risks, which could seriously threaten the safety and robustness of the geoscience and RS tasks. Considering that most of these tasks are highly safety critical, this article aims to provide a systematic review of current developments in AI security in the geoscience and RS field. As shown in Figure 2, the main research topics covered in this article comprise the following five important aspects: adversarial attack, backdoor attack, FL, uncertainty, and explainability. A brief introduction for each topic is given as follows:
◗ An adversarial attack focuses on attacking the inference stage of a machine learning (ML) model by generating adversarial examples. Such adversarial examples may look identical to the original clean samples but can mislead ML models to yield incorrect predictions with high confidence.
◗ A backdoor attack aims to conduct data poisoning with specific triggers in the training stage of an ML model. The infected model may yield normal predictions on benign samples but make specific incorrect predictions on samples with backdoor triggers.
◗ FL ensures data privacy and data security by training ML models with decentralized data samples without sharing data.
◗ Uncertainty aims to estimate the confidence and robustness of the decisions made by ML models.
◗ Explainability aims to provide an understanding of and to interpret ML models, especially black-box ones like CNNs.

FIGURE 2. An overview of the research topics covered in this article.

Although research on the aforementioned topics is still in its infancy in the geoscience and RS field, the topics are indispensable for building a secure and trustworthy EO system.

The main contributions of this article are summarized as follows:
◗ For the first time, we provide a systematic and comprehensive review of AI security-related research for the geoscience and RS community, covering five aspects: adversarial attack, backdoor attack, FL, uncertainty, and explainability.
◗ In each aspect, a theoretical introduction is provided and several representative works are organized and described in detail, emphasizing in each case the potential connection with AI security for EO. In addition, we provide a perspective on the future outlook of each topic to further highlight the remaining challenges in the field of geoscience and RS.
◗ We summarize the entire review with four possible research directions in EO: secure AI models, data privacy, trustworthy AI models, and explainable AI (XAI) models. In addition, potential opportunities and research trends are identified for each direction to arouse readers' research interest in AI security.

Table 2 provides the main abbreviations and nomenclatures used in this article. The rest of this article is organized as follows. The "Adversarial Attack" section reviews adversarial attacks and defenses for RS data. The "Backdoor Attack" section further reviews backdoor attacks and defenses in the geoscience and RS field. The "FL" section introduces the concepts and applications of FL in the geoscience and RS field. The "Uncertainty" section describes the sources of uncertainty in EO and summarizes the most commonly used methods for uncertainty quantification. The "Explainability" section introduces representative XAI applications in the geoscience and RS field. Conclusions and other discussions are summarized in the "Conclusions and Remarks" section.

ADVERSARIAL ATTACK
AI techniques have been widely deployed in geoscience and RS, as shown in Table 1, and have achieved great success over the past decades. The existence of adversarial examples, however, threatens such ML models and raises concerns about the security of these models. With slight and imperceptible perturbations, clean RS images can be manipulated to be adversarial and fool well-trained ML models, i.e., making incorrect predictions [25] (see, for example, Figure 3). Undoubtedly, such vulnerabilities of ML models are harmful and would hinder their potential for safety-critical geoscience and RS applications. To this end, it is critical for researchers to study the vulnerabilities (adversarial attacks) and develop corresponding methods (adversarial defenses) to harden ML models for EO missions.

PRELIMINARIES
Adversarial attacks usually refer to finding adversarial examples for well-trained models (target models). Taking



TABLE 2. MAIN ABBREVIATIONS AND NOMENCLATURES.

ABBREVIATION/NOTATION | DEFINITION
Adam | Adaptive moment estimation
AdamW | Adam with decoupled weight decay
AI | Artificial intelligence
ANN | Artificial neural network
BNN | Bayesian neural network
CE | Cross entropy
CNN | Convolutional neural network
DNN | Deep neural network
EO | Earth observation
FCN | Fully convolutional network
FL | Federated learning
GAN | Generative adversarial network
GMM | Gaussian mixture model
HSI | Hyperspectral image
IoT | Internet of Things
LRR | Low-rank representation
LSTM | Long short-term memory network
MHN | Modern Hopfield network
ML | Machine learning
PM2.5 | Particulate matter with a diameter of 2.5 μm or less
RGB | Red, green, blue
RNN | Recurrent neural network
RS | Remote sensing
SAR | Synthetic aperture radar
SGD | Stochastic gradient descent
SN | Siamese network
SVM | Support vector machine
UAV | Unmanned aerial vehicle
VHR | Very high resolution
XAI | Explainable AI
f | The classifier mapping of a neural network model
θ | A set of parameters in a neural network model
X | The image space
Y | The label space
x | A sample from the image space
y | The corresponding label of x
ŷ | The predicted label of x
δ | The perturbation in adversarial examples
ε | The perturbation level of adversarial attacks
∇ | The gradient of the function
D_b | The benign training set
D_p | The poisoned set
R_b | The standard risk
R_a | The backdoor attack risk
R_p | The perceivable risk
t | The trigger patterns for backdoor attacks
s | The sample proportion
E | The explanation of a neural network model
L | The loss function of a neural network model
sign(·) | Signum function
I(·) | Indicator function
image classification as an example, let $f: \mathcal{X} \rightarrow \mathcal{Y}$ be a classifier mapping from the image space $\mathcal{X} \subset \mathbb{R}^d$ to the label space $\mathcal{Y} = \{1, \ldots, K\}$ with parameters $\theta$, where $d$ and $K$ denote the numbers of pixels and categories, respectively. Given the perturbation budget $\epsilon$ under the $\ell_p$-norm, the common way to craft adversarial examples for the adversary is to find a perturbation $\delta \in \mathbb{R}^d$ which can maximize the loss function, e.g., cross-entropy loss $\mathcal{L}_{ce}$, so that $f(x + \delta) \neq y$, where $y$ is the label of $x$. Therefore, $\delta$ can be obtained by solving the following optimization problem:

$\delta^* = \underset{x + \delta \in B(x, \epsilon)}{\arg\max}\; \mathcal{L}_{ce}(\theta, x + \delta, y)$   (1)

where $B(x, \epsilon)$ is the allowed perturbation set, expressed as $B(x, \epsilon) := \{x + \delta \in \mathbb{R}^d \mid \|\delta\|_p \leq \epsilon\}$. The common values of $p$ are 0, 1, 2, and $\infty$. In most cases, $\epsilon$ is set to be small so that the perturbations are imperceptible to human eyes.

FIGURE 3. Adversarial attacks causing AlexNet [26] to predict the very high resolution image from "Airplane" to "Runway" with high confidence. The original image is predicted as "Airplane" (87.59%); adding the adversarial perturbation yields an adversarial image predicted as "Runway" (99.9%).

To solve (1), gradient-based methods [27], [28], [29] are usually exploited. One of the most popular solutions is projected gradient descent [29], which is an iterative method. Formally, the perturbation $\delta$ is updated in each iteration as follows:

$\delta_{i+1} = \mathrm{Proj}_{B(x, \epsilon)}\big(\delta_i + \alpha\, \mathrm{sign}(\nabla_{x_i} \mathcal{L}_{ce}(\theta, x_i, y))\big), \quad x_i = x + \delta_i, \quad \delta_0 = 0$   (2)

where $i$ is the current step, $\alpha$ is the step size (usually smaller than $\epsilon$), and $\mathrm{Proj}$ is the operation to make sure the values of $\delta$ are valid. Specifically, for the $(i+1)$th iteration, we first calculate the gradients of $\mathcal{L}_{ce}$ with respect to $x_i = x + \delta_i$,


then add the gradients to previous perturbations $\delta_i$ and obtain $\delta_{i+1}$. To further ensure that the pixel values in the generated adversarial examples are valid (e.g., within $[0, 1]$), the $\mathrm{Proj}$ operation is adopted to clip the intensity values in $\delta_{i+1}$.

There are different types of adversarial attacks depending on the adversary's knowledge and goals. If the adversary can access the target models, including the structures, parameters, and training data, it is categorized as a white-box attack; otherwise, if the adversary can access only the outputs of target models, it is known as a black-box attack. When launching attacks, if the goal of the adversary is simply to fool target models so that $f(x + \delta) \neq y$, this is a nontargeted attack; otherwise, the adversary expects target models to output specific results so that $f(x + \delta) = y_t$ ($y_t \neq y$ is the label of the target class specified by the adversary), which is a targeted attack. In addition, if the adversarial attacks are independent of data, such attacks are called universal attacks.

The attack success rate is a widely adopted metric for evaluating adversarial attacks. It measures the proportion of adversarial examples that successfully deceive the target model, resulting in incorrect predictions.

ADVERSARIAL ATTACKS
Adversarial examples for deep learning were initially discovered in [30]. Many pioneer works on adversarial examples have appeared since that time [27], [28], [29] and have motivated research on adversarial attacks on deep neural networks (DNNs) in the context of RS. Czaja et al. [25] revealed the existence of adversarial examples for RS data for the first time and focused on targeted adversarial attacks for deep learning models. They also pointed out two key challenges of designing physical adversarial examples in RS settings: viewpoint geometry and temporal variability. Chen et al. [31] confirmed the conclusions in [25] with extensive experiments across various CNN models and adversarial attacks for RS. Xu et al. [32] further extended evaluation of the vulnerability of deep learning models to untargeted attacks, which is complementary to [25]. It was reported that most of the state-of-the-art DNNs can be fooled by adversarial examples with very high confidence. The transferability of adversarial examples was first discussed in [32], which indicated the harmfulness of adversarial examples generated with one specific model on other different models. According to their experiments, adversarial examples generated by AlexNet [26] can cause performance drops in deeper models of different degrees, and deep models are more resistant to adversarial attacks than shallow models. Xu and Ghamisi [33] exploited such transferability and developed the universal adversarial examples (UAEs) (https://drive.google.com/file/d/1tbRSDJwhpk-uMYk2t-RUgC07x2wyUxAL/view?usp=sharing) (https://github.com/YonghaoXu/UAE-RS). The UAE enables adversaries to launch adversarial attacks without accessing the target models. Another interesting observation in [32] is that traditional classifiers like support vector machines are less vulnerable to adversarial examples generated by DNNs. However, this does not mean that traditional classifiers are robust to adversarial examples [34]. Although the UAE is designed only for fooling target models, Bai et al. [35] extended it and developed two targeted universal attacks for specific adversarial purposes (https://github.com/tao-bai/TUAE-RS). It is worth noting that such targeted universal attacks sacrifice transferability between different models, and enhancing the transferability of targeted attacks is still an open problem.

In addition to deep learning models for optical images, those for hyperspectral images (HSIs) and synthetic aperture radar (SAR) images in RS are also important. For HSI classification, the threats of adversarial attacks are more serious due to limited training data and high dimensionality [36]. Xu et al. [36] first revealed the existence of adversarial examples in the hyperspectral domain (https://github.com/YonghaoXu/SACNet), which can easily compromise several state-of-the-art deep learning models (see Figure 4 for an example). Considering the high dimensionality of HSI, Shi et al. [38] investigated generating adversarial samples close to the decision boundaries with minimal disturbances. Unlike optical images and HSIs with multiple dimensions, SAR images are acquired in microwave wavelengths and contain only the backscatter information in a limited number of bands. Chen et al. [39] and Li et al. [40] empirically investigated adversarial examples on SAR images using existing attack methods and found that the predicted classes of adversarial SAR images were highly concentrated. Another interesting phenomenon observed in [39] is that adversarial examples generated on SAR images tend to have greater transferability between different models than optical images, which indicates that SAR recognition models are easier to attack, raising security concerns when applying SAR data in EO missions.

FIGURE 4. The threat of adversarial attacks in the hyperspectral domain [36]. (a) An original HSI (in false color), (b) adversarial perturbation with $\epsilon = 0.04$, (c) adversarial HSI, and (d) the classification map on the adversarial HSI using PResNet [37], which is seriously fooled (with an overall accuracy of 35.01%).
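The projected gradient descent update in (2) can be sketched in a few lines of NumPy. The snippet below is a minimal illustration on a toy linear softmax classifier with a hand-derived gradient rather than a deep model; all function names (`ce_loss_and_grad`, `pgd_attack`) and hyperparameter values are illustrative and not taken from the article or any RS library.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift logits for numerical stability
    e = np.exp(z)
    return e / e.sum()

def ce_loss_and_grad(W, x, y):
    """Cross-entropy loss of a toy linear classifier f(x) = softmax(W @ x),
    together with its gradient with respect to the input x."""
    p = softmax(W @ x)
    loss = -np.log(p[y] + 1e-12)
    p[y] -= 1.0            # dL/dz for softmax + cross entropy is (p - onehot(y))
    return loss, W.T @ p   # chain rule: dL/dx = W^T (p - onehot(y))

def pgd_attack(W, x, y, eps=0.03, alpha=0.01, steps=10):
    """Nontargeted l_inf projected gradient descent, following (2):
    delta_{i+1} = Proj_{B(x, eps)}(delta_i + alpha * sign(grad)), delta_0 = 0."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        _, grad = ce_loss_and_grad(W, x + delta, y)
        delta = delta + alpha * np.sign(grad)       # ascend the loss
        delta = np.clip(delta, -eps, eps)           # project onto the eps-ball
        delta = np.clip(x + delta, 0.0, 1.0) - x    # keep pixel values valid
    return x + delta
```

Attacking a deep model follows the same pattern; a deep learning framework would simply replace the hand-derived gradient with automatic differentiation.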

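The attack success rate metric described in the "Preliminaries" section can be computed directly from model predictions. The helper below is a hypothetical sketch (the function name and signature are not from the article) covering both the nontargeted and targeted definitions given above.

```python
import numpy as np

def attack_success_rate(y_true, y_pred_adv, y_target=None):
    """Proportion of adversarial examples that deceive the target model.

    Nontargeted attack: success when the prediction on the adversarial
    example differs from the true label, i.e., f(x + delta) != y.
    Targeted attack: success when the prediction equals the adversary's
    chosen label, i.e., f(x + delta) == y_t."""
    y_true = np.asarray(y_true)
    y_pred_adv = np.asarray(y_pred_adv)
    if y_target is None:
        return float(np.mean(y_pred_adv != y_true))
    return float(np.mean(y_pred_adv == np.asarray(y_target)))
```

For instance, if two of four adversarial images still receive their original predicted class, the nontargeted success rate is 0.5.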


ADVERSARIAL DEFENSES
Adversarial attacks reveal the drawbacks of current deep learning-based systems for EO and raise public concerns about RS applications. Thus, it is urgent to develop corresponding adversarial defenses against such attacks and avoid severe consequences. Adversarial training [41] is recognized as one of the most effective adversarial defenses against adversarial examples and has been applied widely in computer vision tasks. The idea behind adversarial training is intuitive: it trains deep learning models directly on adversarial examples generated in each loop. Xu et al. [32] took the first step and empirically demonstrated adversarial training for the RS scene classification task. Their extensive experiments showed that adversarial training significantly increased the resistance of deep models to adversarial examples, although the evaluation was limited to the naive fast gradient descent method attack [27]. Similar methods and conclusions were also obtained in [42] and [43]. However, adversarial training requires labeled data and suffers significant decreases in accuracy on testing data [44]. Xu et al. [44] introduced self-supervised learning into adversarial training to extend the training set with unlabeled data to train more robust models. Cheng et al. [45] proposed another variant of adversarial training, where a generator is utilized to model the distributions of adversarial perturbations.

Unlike the aforementioned research, which mainly used the adversarial training technique, some further attempts were made to improve adversarial robustness by modifying model architectures. Xu et al. [36] introduced a self-attention context network, which extracts both local and global context information simultaneously. By extracting global context information, pixels are connected to other pixels in the whole image and obtain resistance to local perturbations. It is also reasonable to add preprocessing modules before the original models. For example, Xu et al. [46] proposed purifying adversarial examples using a denoising network. As the adversarial examples and original images have different distributions, such discrepancies have inspired researchers to develop methods to detect adversarial examples. Chen et al. [47] noticed the class selectivity of adversarial examples (i.e., the misclassified classes are not random). They compared the confidence scores of original samples and adversarial examples and obtained classwise soft thresholds for use as an indicator for adversarial detection. Similarly, from the energy perspective, Zhang et al. [48] captured an inherent energy gap between the adversarial examples and original samples.

FUTURE PERSPECTIVES
Although much research related to security issues in RS was discussed in the previous section, the threats from adversarial examples have not been eliminated completely. Here we summarize some potential directions for studying adversarial examples.

ADVERSARIAL ATTACKS AND DEFENSES BEYOND SCENE CLASSIFICATION
As in the literature review introduced earlier, the focus of most adversarial attacks in RS is scene classification. Many other tasks like object detection [10] and video tracking [13] remain untouched, where DNNs are deployed as widely as in scene classification. Thus, it is equally important to study these tasks from an adversarial perspective.

DIFFERENT FORMS OF ADVERSARIAL ATTACKS
When talking about adversarial examples, we usually refer to adversarial perturbations. Nevertheless, crafting adversarial examples is not limited to adding perturbations because the existence of adversarial examples is actually caused by the gap between human vision and machine vision. Such gaps have not been well defined yet, which may enable us to explore adversarial examples in different forms. For example, scholars have explored the use of adversarial patches [49], [50], where a patch is added to an input image to deceive the machine, as well as the concept of natural adversarial examples [51], where an image looks the same to humans but is misclassified by the machine due to subtle differences. These approaches may offer insights into the mechanisms underlying adversarial examples. By better understanding the existence of adversarial examples in different forms, we can develop more comprehensive and effective defenses to protect against these attacks.

DIFFERENT SCENARIOS OF ADVERSARIAL ATTACKS
Despite white-box settings being the most common type when discussing the robustness of DNNs, black-box settings are more practical for real-world applications, where the adversary has no or limited access to the trained models in deployment. Typically, there are two strategies that adversaries can employ in a black-box scenario. The first is adversarial transferability [52], [53], which involves creating a substitute model that imitates the behavior of the target model based on a limited set of queries or inputs. Once the substitute model is created, the adversary can generate adversarial examples on the substitute model and transfer these examples to the target model. The second strategy is to directly query the target model using input–output pairs and use the responses to generate adversarial examples. This approach is known as a query-based attack [54], [55]. Future research in this area will likely focus on the development of more effective black-box attacks.

PHYSICAL ADVERSARIAL EXAMPLES
The current research on adversarial examples in the literature focuses on digital space without considering the physical constraints that may exist in the real world. Thus, one natural question that arises in the context of physical adversarial examples is whether the adversarial perturbations will be detectable or distorted when applied in the real world, where the imaging environment is more complex and unpredictable, leading to a reduction in their effectiveness. Therefore, it is crucial to



explore whether adversarial examples can be designed physically for specific ground objects [50], [56], especially considering that many DNN-based systems are currently deployed for EO missions. Incorporating physical constraints in adversarial examples may further increase our understanding of the limits of adversarial attacks in RS applications and aid in developing more robust and secure systems.

THE POSITIVE SIDE OF ADVERSARIAL EXAMPLES
Although often judged to be harmful, adversarial examples indeed reveal some intrinsic characteristics of DNNs, which more or less help us understand DNNs more deeply. Thus, researchers should not only focus on generating solid adversarial attacks but also investigate the potential usage of adversarial examples for EO missions in the future.

BACKDOOR ATTACK
Although adversarial attacks bring substantial security risks to ML models in geoscience and RS, these algorithms usually assume that the adversary can only attack the target model in the evaluation phase. In fact, applying ML models to RS tasks often involves multiple steps, from data collection, model selection, and model training to model deployment. Each of these steps offers potential opportunities for the adversary to conduct attacks. As acquiring high-quality annotated RS data is very time consuming and labor intensive, researchers may use third-party datasets directly to train ML models, or even directly use pretrained ML models from a third party in a real-world application scenario. In these cases, the probability of the target model being attacked during the training phase is greatly increased. One of the most representative attacks designed for the training phase is the backdoor attack, also known as a Trojan attack [59]. Table 3 summarizes the main differences and connections between the backdoor attack and other types of attacks for ML models. To help readers better understand the background of backdoor attacks, this section will first summarize the related preliminaries. Then, some representative works about backdoor attacks and defenses will be introduced. Finally, perspectives on the future of this research direction will be discussed.

TABLE 3. DIFFERENCES AND CONNECTIONS BETWEEN DIFFERENT TYPES OF ATTACKS FOR ML MODELS.
ATTACK TYPE | ATTACK GOAL | ATTACK STAGE | ATTACK PATTERN | TRANSFERABILITY
Adversarial attack [30] | Cheat the model to yield wrong predictions with specific perturbations | Evaluation phase | Various patterns calculated for different samples | Transferable
Data poisoning [57] | Damage model performance with out-of-distribution data | Training phase | Various patterns selected by the attacker | Nontransferable
Backdoor attack [58] | Mislead the model to yield wrong predictions on data with embedded triggers | Training phase | Fixed patterns selected by the attacker | Nontransferable

PRELIMINARIES
The main goal of backdoor attacks is to induce the deep learning model to learn the mapping between the hidden backdoor triggers and the malicious target labels (specified by the attacker) by poisoning a small portion of the training data. Formally, let f : X → Y be a classifier mapping from the image space X ⊂ R^d to the label space Y = {1, ..., K} with parameters θ, where d and K denote the numbers of pixels and categories, respectively. We use D_b = {(x_i, y_i)}_{i=1}^{N} to represent the original benign training set, where x_i ∈ X, y_i ∈ Y, and N denotes the total number of sample pairs. The standard risk R_b of the classifier f on the benign training set D_b can then be defined as

R_b(D_b) = E_{(x,y)~P_D} I(arg max(f(x)) ≠ y)   (3)

where P_D denotes the distribution behind the benign training set D_b, I(·) is the indicator function [i.e., I(condition) = 1 if and only if the condition is true], and arg max(f(x)) denotes the label predicted by the classifier f on the input sample x. With (3), we can measure whether the classifier f can correctly classify the benign samples.

Let D_p denote the poisoned set, which is a subset of D_b. The backdoor attack risk R_a of the classifier f on the poisoned set D_p can then be defined as

R_a(D_p) = E_{(x,y)~P_D} I(arg max(f(G_t(x))) ≠ S(y))   (4)

where G_t(·) denotes an injection function that injects the trigger patterns t specified by the attack into the input benign image, and S(·) denotes the label shifting function that maps the original label to a specific category specified by the attack. With (4), we can measure whether the attacker can successfully trigger the classifier f to yield malicious predictions on the poisoned samples.

As backdoor attacks aim to achieve imperceptible data poisoning, the perceivable risk R_p is further defined as

R_p(D_p) = E_{(x,y)~P_D} I(C(G_t(x)) = 1)   (5)

where C(·) denotes a detector function, and C(G_t(x)) = 1 if and only if the poisoned sample G_t(x) can be detected as an abnormal sample. With (5), we can measure how stealthy the backdoor attacks could be.

Based on the aforementioned risks, the overall objective of the backdoor attacks can be summarized as

min_{θ,t} R_b(D_b - D_p) + λ_a R_a(D_p) + λ_p R_p(D_p)   (6)

where λ_a and λ_p are two weighting parameters. Commonly, the ratio between the number of poisoned samples |D_p|



and the number of benign samples |D_b| used in the training phase is called the poisoning rate |D_p| / |D_b| [60].

There are two primary metrics used to evaluate backdoor attacks: the attack success rate and benign accuracy. The attack success rate measures the proportion of misclassified samples on the poisoned test set, while benign accuracy measures the proportion of correctly classified benign samples on the original clean test set.

BACKDOOR ATTACKS
The concept of backdoor attacks was first proposed in [58], in which Gu et al. developed BadNet to produce poisoned samples by injecting diagonal or squared patterns into the original benign samples. Inspired by related works in the ML and computer vision fields [63], [64], [65], [67], Brewer et al. conducted the first exploration of backdoor attacks on deep learning models for satellite sensor image classification [62]. Specifically, they generated poisoned satellite sensor images by injecting a 25 × 25 pixel white patch into the original benign samples, as shown in the "Golf course" sample in Figure 5. Then, these poisoned samples were assigned maliciously changed labels, specified by the attacker and different from the original true labels, and adopted to train the target model (VGG-16 [68]) along with the original benign RS images. In this way, the infected model yields normal predictions on the benign samples but makes specific incorrect predictions on samples with backdoor triggers (the white patch). Their experimental results on both the University of California, Merced land use dataset [61] and the road quality dataset [69] demonstrated that backdoor attacks can seriously threaten the safety of the satellite sensor image classification task [62].

To conduct more stealthy backdoor attacks, Dräger et al. further proposed the wavelet transform-based attack (WABA) method (https://github.com/ndraeger/waba) [70]. The main idea of WABA is to apply the hierarchical wavelet transform [71] to both the benign sample and the trigger image and blend them in the coefficient space. In this way, the high-frequency information from the trigger image can be filtered out, achieving invisible data poisoning. Figure 6 illustrates the qualitative semantic segmentation results of the backdoor attacks with the FCN-8s model on the Zurich Summer dataset using the WABA method. Although the attacked FCN-8s model can yield accurate segmentation maps on the benign images (the third column in Figure 6), it is triggered to generate maliciously incorrect predictions on the poisoned samples (the fourth column in Figure 6).

Apart from the aforementioned research, which focuses on injecting backdoor triggers into satellite or aerial sensor images, the security of intelligent RS platforms [e.g., unmanned aerial vehicles (UAVs)] has recently attracted increasing attention [72], [73]. For example, Islam et al. designed a triggerless backdoor attack scheme for injecting backdoors into a multi-UAV system's intelligent offloading policy, which reduced the performance of the learned policy by roughly 50% and significantly increased the computational burden of the UAV system [74]. Considering that UAVs are often deployed in remote areas with scarce resources, such attacks could quickly exhaust computational resources, thereby undermining the observation missions.

As backdoor attacks can remain hidden and undetected until activated by specific triggers, they may also seriously threaten intelligent devices in smart cities [75], [76]. For example, Doan et al. conducted a physical backdoor attack for autonomous driving, where a "stop" sign with a sticker was maliciously misclassified as a "100 km/h speed limit" sign by the infected model [77], which could have led to a serious traffic accident. To make the attack more inconspicuous, Ding et al. proposed generating stealthy backdoor triggers for autonomous driving with deep generative models [78]. Kumar et al. further discussed backdoor attacks on other Internet of Things (IoT) devices in the field of smart transportation [79].

FIGURE 5. An illustration of data poisoning by backdoor attacks on RS images from the University of California, Merced land use dataset [61]. Here, the "golf course" sample (the middle image in the second row) is poisoned by injecting a white patch into the top left corner of one sample (adapted from [62]). Panel labels, top row: Medium Residential, River, Medium Residential; bottom row: Overpass, Golf Course (Poisoned), Storage Tanks.
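The patch-and-relabel poisoning illustrated in Figure 5 can be sketched in a few lines. This is an illustrative toy only: the 4 × 4 "images", the 2 × 2 patch, the target label, and the 20% poisoning rate are arbitrary choices, not the settings of [58] or [62]:

```python
def inject_trigger(img, patch=2, value=255):
    """G_t: paste a white patch into the top-left corner of the image."""
    out = [row[:] for row in img]
    for r in range(patch):
        for c in range(patch):
            out[r][c] = value
    return out

def poison(dataset, rate, target_label):
    """Poison a `rate` fraction of samples: inject the trigger and
    shift the original label to the attacker-chosen target (S)."""
    n = int(len(dataset) * rate)
    return ([(inject_trigger(x), target_label) for x, _ in dataset[:n]]
            + dataset[n:])

# Ten toy 4x4 grayscale "images" with labels cycling through 0, 1, 2.
benign = [([[i * 10] * 4 for _ in range(4)], i % 3) for i in range(10)]
poisoned_set = poison(benign, rate=0.2, target_label=0)
```

Here the poisoning rate |D_p| / |D_b| is 2/10; a model trained on `poisoned_set` would be pushed to associate the patch with the attacker's target label while behaving normally on patch-free inputs.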



BACKDOOR DEFENSES
Considering the high requirements for security and stability in geoscience and RS tasks, defending against backdoor attacks is crucial for building a trustworthy EO model. Brewer et al. first conducted backdoor defenses on deep learning models trained for the satellite sensor image classification task, where the activation clustering strategy was adopted [62]. Specifically, they applied independent component analysis (ICA) to the neuron activations of the last fully connected layer in a VGG model for each sample in each category, and only the first three components were retained. K-means clustering was then conducted to cluster these samples into two groups with the three components as the input features. As hidden backdoor triggers existed in the poisoned samples, their distribution in 3D ICA space may significantly differ from that of the clean samples, resulting in two separate clusters after k-means clustering. Such a phenomenon can be a pivotal clue to indicate whether the input samples are poisoned. Islam et al. further developed a lightweight defensive approach against backdoor attacks for the multi-UAV system based on a deep Q-network [74]. Their experiments showed that such a lightweight agnostic defense mechanism can reduce the impact of backdoor attacks on offloading in the multi-UAV system by at least 20%. Liu et al. proposed a collaborative defense method named CoDefend for IoT devices in smart cities [80]. Specifically, they employed strong intentional perturbation and the cycled generative adversarial network to defend against the infected models. Wang et al. explored the backdoor defense for deep reinforcement learning-based traffic congestion control systems using activation clustering [81]. Doan et al. further investigated input sanitization as a defense mechanism against backdoor attacks. Specifically, they proposed the Februus algorithm, which sanitizes the input samples by surgically removing potential triggering artifacts and restoring the input for the target model [77].

FIGURE 6. Qualitative semantic segmentation results of the backdoor attacks with the FCN-8s model on the Zurich Summer dataset using the WABA method (adapted from [70]). Class legend: Impervious Surface, Building, Low Vegetation, Tree, Car; columns: Benign Image, Poisoned Image, Benign Map, Poisoned Map, Ground Truth.

FUTURE PERSPECTIVES
Although the threat of adversarial attacks has attracted widespread attention in the geoscience and RS field, research on backdoor attacks is still in its infancy and many open questions deserve further exploration. In the following sections, we discuss some potential topics of interest.

INVISIBLE BACKDOOR ATTACKS FOR EO TASKS
One major characteristic of backdoor attacks is the stealthiness of the injected backdoor triggers. However, existing research has not yet discovered a backdoor pattern that is imperceptible to the human observer, where the injected



backdoor triggers are either visible square patterns [62] or lead to visual style differences. Thus, a technique that makes full use of the unique properties of RS data (e.g., the spectral characteristics in hyperspectral data) obtained by different sensors to design a more stealthy backdoor attack algorithm deserves further study.

BACKDOOR ATTACKS FOR OTHER EO TASKS
Currently, most existing research focuses on backdoor attacks for scene classification or semantic segmentation tasks. Considering that the success of backdoor attacks depends heavily on the design of the injected triggers for specific tasks, determining whether existing attack approaches can bring about a threat to other important EO tasks like object detection is also an important topic.

PHYSICAL BACKDOOR ATTACKS FOR EO TASKS
Although current research on backdoor attacks focuses on the digital space, conducting physical backdoor attacks may bring about a more serious threat to the security of EO tasks. Compared to the digital space, the physical world is characterized by more complicated environmental factors, like illuminations, distortions, and shadows. Thus, the design of effective backdoor triggers and the execution of physical backdoor attacks for EO tasks is still an open question.

EFFICIENT BACKDOOR DEFENSES FOR EO TASKS
Existing backdoor defense methods like activation clustering are usually very time consuming because they need to obtain the statistical distribution properties of the activation features for each input sample. Considering the ever-increasing amount of EO data, designing more efficient backdoor defense algorithms is likely to be a critical problem for future research.

FL
As previously discussed, AI technology has shown immense potential with rapidly rising development and application in both industry and academia, where reliable and accurate training is ensured by sophisticated data and supportive systems at a global scale. With the development of EO techniques, data generated from various devices, including different standard types, functionalities, resource constraints, sensor indices, and mobility, have increased exponentially and heterogeneously in the field of geoscience and RS [82]. The massive growth of data provides a solid foundation for AI to achieve comprehensive and perceptive EO [83]. However, the successful realization of data sharing and migration is still hindered by industry competition, privacy security and sensitivity, communication reliability, and complicated administrative procedures [84], [85]. Thus, data are still stored on isolated islands, with barriers among different data sources, resulting in considerable obstacles to promoting AI in geoscience and RS. To reduce systematic privacy risks and the costs associated with nonpublic data when training highly reliable models, FL has been introduced for AI-based geoscience and RS analysis. FL aims to implement joint training on data in multiple edge devices and generalize a centralized model [86], [87]. In the following sections, we briefly introduce FL for geoscience and RS in three parts: related preliminaries, applications, and future perspectives.

PRELIMINARIES
Assuming that N data owners {O_1, ..., O_N} wish to train an ML model using their respective databases {D_1, ..., D_N} with no exchange and access permissions to each other, the FL system is designed to learn a global model W by collecting training information from distributed devices, as shown in Figure 7. Three basic steps are involved [88], [89]: 1) each owner downloads the initial model from the central server, which is trusted by third-party organizations; 2) the individual device uses local data to train the model and uploads the encrypted gradients to the server; and 3) the server aggregates the gradients of each owner, then updates the model parameters to replace each local model according to its contribution. Thus, the goal of FL is to minimize the following objective function:

min_W Σ_{i=1}^{N} s_i F_i(θ)   (7)

FIGURE 7. A schematic diagram of FL. To guarantee the privacy of local data, only the gradients of the model are allowed to be shared and exchanged with the server. The central server aggregates all the models and returns the updated parameters to the local devices.
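The three-step FL loop described above (distribute the global model, update locally, aggregate by sample share) can be illustrated with a minimal FedAvg-style simulation on scalar models. The quadratic local losses, learning rate, and round count below are illustrative stand-ins for real local training, not a prescription from the cited systems:

```python
def local_update(w_global, data, lr=0.1, steps=20):
    """Each owner starts from the downloaded global model and runs
    gradient descent on its local loss F_i(w) = mean((w - x)^2)."""
    w = w_global
    for _ in range(steps):
        grad = sum(2 * (w - x) for x in data) / len(data)
        w -= lr * grad
    return w

def fedavg(w_global, databases):
    """Server aggregation: weight owner i by its sample share s_i = n_i / n."""
    n_total = sum(len(d) for d in databases)
    updates = [local_update(w_global, d) for d in databases]
    return sum((len(d) / n_total) * wl for d, wl in zip(databases, updates))

databases = [[1.0, 2.0, 3.0], [10.0], [4.0, 6.0]]  # three private data owners
w = 0.0
for _ in range(5):        # communication rounds
    w = fedavg(w, databases)
print(w)                  # approaches the sample-weighted optimum
```

Only model parameters travel between the owners and the server; the raw databases never leave their owners, which is the privacy property motivating FL.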



where s_i denotes the sample proportion of the ith database with respect to the overall databases. Thus, s_i > 0 and Σ_i s_i = 1; F_i represents the local objective function of the ith device, which is usually defined as the loss function on local data, i.e., F_i(θ) = (1/n_i) Σ_{j=1}^{n_i} L(θ; x_j, y_j), where (x_j, y_j) ∈ D_i, n_i is the number of samples in the ith database, and θ is the set of model parameters.

FL APPLICATIONS IN GEOSCIENCE AND RS
Based on the data distribution over both the sample and feature spaces, FL can be divided into three categories: horizontal FL, vertical FL, and federated transfer learning [90]. A brief sketch of the three categories is given in Figure 8, and the related applications in geoscience and RS are summarized in the next sections.

FIGURE 8. Three categories of FL according to the data partitions [84], [85]. (a) Horizontal FL, (b) vertical FL, and (c) federated transfer learning. Each panel plots the feature space against the sample space of Databases A and B, showing a shared feature space in (a), a shared sample space in (b), and no shared space in (c).

HORIZONTAL FL
This category is also called sample-based FL, which refers to scenarios where the databases of different owners have high similarity in feature space, but there exists limited overlap between samples. In this case, the databases are split horizontally, as shown in Figure 8(a), and the samples with the same features are then taken out for collaborative learning. Specifically, horizontal FL can effectively expand the size of training samples for the global model while ensuring that the leakage of local information is not allowed. Thus, the central server is supposed to aggregate a more accurate model with more samples. One of the most typical applications of horizontal FL, proposed by Google in 2017, is a collaborative learning scheme for Android mobile phone updates [91]. The local models are continuously updated according to the individual Android mobile phone user and then uploaded to the cloud. Finally, the global model can be established based on the shared features of all users.

For EO research, Hu et al. [92] and Gao et al. [93] developed the federated region-learning (FRL) framework for monitoring particulate matter with a diameter of 2.5 μm or less in urban environments. The FRL framework divides the monitoring sites into a set of subregions, then treats each subregion as a microcloud for local model training. To better target different bandwidth requirements, synchronous and asynchronous strategies are proposed so that the central server aggregates the global model according to additional terms. It is known that other countries usually administer their RS data privately. Due to the data privacy involved in RS images, Xu and Mao [94] applied the FL strategy for vehicle target identification, ensuring that each training node trains the respective model locally and encrypts the parameters to the service nodes with the public key. To achieve real-time image sensing classification, Tam et al. [82] presented a reliable model communication scheme with virtual resource optimization for edge FL. The scheme uses an epsilon-greedy strategy to constrain local models and optimal actions for particular network states. Then, the global multi-CNN model is aggregated by comprehensively considering multiple spatial-resolution sensing conditions and allocating computational offload resources. Beyond the aforementioned research, the horizontal FL scheme, which trains edge models asynchronously, was also applied for cyberattack detection [95], forest fire detection [96], and aerial RS [97], [98], among others, based on the dramatic development of the IoT in RS.

VERTICAL FL
This category, also known as feature-based FL, is suitable for learning tasks where the databases of local owners have a tremendous amount of overlap between samples but nonoverlapping feature spaces. In this case, the databases are split vertically, as shown in Figure 8(b), and those overlapping samples with various characteristics are utilized to train a model jointly. It should be noted that the efficacy of the global model is improved by complementing the feature dimensions of the training data in an encrypted state, and the third-party trusted central server is not required in this case. Thus far, many ML models, such as logistic regression models, decision trees, and DNNs, have been applied to vertical FL. For example, Cheng et al. [99] proposed a tree-boosting system that first conducts entity alignment under a privacy-preserving protocol, and then boosts trees across multiple parties while keeping the training data local.

For geoscience and RS tasks, data generally incur extensive communication volume and frequency costs, and are often asynchronized. To conquer these challenges, Huang et al. [100] proposed a hybrid FL architecture called StarFL for urban computing. By combining with a trusted execution environment, secure multiparty computation, and the



Beidou Navigation Satellite System, StarFL provides more security guarantees for each participant in urban computing, which includes autonomous driving and resource exploration. They specified that the independence of the satellite cluster makes it easy for StarFL to support vertical FL. Jiang et al. [101] also pointed out that interactive learning between vehicles and their system environments through vertical FL can help assist with other city sensing applications, such as city traffic lights, cameras, and roadside units.

FEDERATED TRANSFER LEARNING
This category suits cases where neither the sample space nor the feature space overlap, as shown in Figure 8(c). In view of the problems caused by the small amount of data and sparse labeled samples, federated transfer learning is introduced to learn knowledge from the source database, and to transfer it to the target database while maintaining the privacy and security of the individual data. In real applications, Chen et al. [102] constructed a FedHealth model that gathers the data owned by different organizations via FL and offers personalized services for health care through transfer learning. Limited by the available data and annotations, federated transfer learning remains challenging to popularize in practical applications today. However, it is still the most effective way to protect data security and user privacy while breaking down data barriers for large-scale ML.

FUTURE PERSPECTIVES
With the exponential growth of AI applications, data security and user privacy are attracting increasing attention in geoscience and RS. For this purpose, FL can aggregate a desired global model from local models without exposing data, and has been applied in various topics such as real-time image classification, forest fire detection, and autonomous vehicles, among others. Based on the needs of currently available local data and individual users, existing systems are best served by focusing more on horizontal FL. There are other possible research directions in the future for vertical FL and federated transfer learning. Here we list some examples of potential applications that are helpful for a comprehensive understanding of geoscience and RS through FL.

GENERATING GLOBAL-SCALE GEOGRAPHIC SYSTEMATIC MODELS
The geographic parameters of different countries are similar, but geospatial data often cannot be shared due to national security restrictions and data confidentiality. A horizontal FL system could train local models separately, and then integrate the global-scale geographic parameters on the server according to the contribution of different owners, which could effectively avoid data leaks.

INTERDISCIPLINARY URBAN COMPUTING
As is known, much spatial information about a specific city can be recorded conveniently by RS images. Still, other information, such as the locations of people and vehicles, and the elevation information of land covers, is usually kept private by different industries. Therefore, designing appropriate vertical FL systems will be helpful for increasing urban understanding, such as estimating population distributions and traffic conditions, and computing 3D maps of cities.

OBJECT DETECTION AND RECOGNITION, CROSS-SPATIAL DOMAIN AND SENSOR DOMAIN
The RS data owned by different industries are usually captured by different sensors, and geospatial overlap is rare. Considering that the objects of interest are usually confidential, local data cannot be shared. In this case, the federated transfer learning system can detect objects of interest effectively by integrating local models for cross-domain tasks.

UNCERTAINTY
In the big data era, AI techniques, especially ML algorithms, have been applied widely in geoscience and RS missions. Unfortunately, regardless of their promising results, heterogeneities within the enormous volume of EO data, including noise and unaccounted-for variation, and the stochastic nature of the model's parameters, can lead to uncertainty in the algorithms' predictions, which may not only severely threaten the performance of the AI algorithms with uncertain test samples but also reduce the reliability of predictions in high-risk RS applications [103]. Therefore, identifying the occurrence of uncertainty, modeling its propagation and accumulation, and performing uncertainty quantification in the algorithms are all critical to controlling the quality of the outcomes.

PRELIMINARIES
AI techniques for geoscience and RS data analysis aim to map the relationship between properties on the earth's surface and EO data. In practice, the algorithms in these techniques can be defined as a mathematical mapping that transforms the data into information representations. For example, neural networks have become the most popular mapping function that transforms a measurable input set X into a measurable set Y of predictions, as follows:

f_θ : X → Y   (8)

where f denotes the mapping function, and θ represents the parameters of the neural network.

Typically, as shown in Figure 9, developing an AI algorithm involves data collection, model construction, model training, and model deployment. In the context of supervised learning, a training dataset D is constructed in the data collection step, containing N pairs of input data sample x and labeled target y, as follows:

D = (X, Y) = {x_i, y_i}_{i=1}^{N}.   (9)

Then, the model architecture is designed according to the requirements of EO missions, and the mapping function as well as its parameters θ are initialized (i.e., f_θ is determined). Next,

JUNE 2023 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE 71


the model training process utilizes a loss function to minimize errors and optimize the parameters of the model with the training dataset D (i.e., the parameters θ are optimized to θ̂). Finally, the samples in the testing dataset x* ∈ X* are forwarded into predictions y* ∈ Y* using the trained model f_θ̂ in the model deployment step (i.e., f_θ̂ : X* → Y*).

The concept of uncertainty refers to a lack of knowledge about specific factors, parameters, or models [104]. Among the aforementioned steps of applying an AI algorithm, uncertainty can occur in the training dataset and testing dataset during data collection and model deployment (data uncertainty), respectively. Meanwhile, uncertainty can also arise in the model's parameters and their optimization during model construction and model training (model uncertainty). In the literature, many studies have been undertaken to determine the sources of uncertainty, while various uncertainty quantification approaches have been developed to estimate the reliability of the model's predictions.

FIGURE 9. The flowchart of an AI algorithm being applied to geoscience and RS data analysis. Data uncertainty arises in the data collection and model deployment steps, whereas model uncertainty arises in the model construction and model training steps.

SOURCES OF UNCERTAINTY

DATA UNCERTAINTY
Data uncertainty consists of randomness and bias in the data samples in the training and testing datasets caused by measurement errors or sampling errors and a lack of knowledge [105]. In particular, data uncertainty can be divided into uncertainty in the raw data and a lack of domain knowledge.

Uncertainty in the raw data usually arises in the EO data collection and preprocessing stages, including the RS imaging process and annotations of the earth's surface properties for remote observation. To understand uncertainty in this EO data collection stage, a guide to the expression of uncertainty in measurement was proposed. It defines uncertainty as a parameter associated with the result of a measurement that characterizes the dispersion of the values that could be reasonably attributed to the measurand of the raw EO data (i.e., X and X*) [106]. However, uncertainty in the measurement is inevitable and remains difficult to represent and estimate from the observations [107]. On the contrary, the labeled targets' subset Y of the training dataset can bring uncertainty due to mistakes in the artificial labeling process and discrete annotations of ground surfaces. Specifically, definite boundaries between land cover classes are often nonexistent in the real world, and determining the type of classification scheme characterizing the precise nature of the classes is uncertain [108].

Furthermore, the model's lack of domain knowledge can cause uncertainty concerning the different domain distributions of the observed data in the training dataset X and testing dataset X*. During the RS imaging process, characteristics of the observed data are related to spatial and temporal conditions, such as illumination, season, and weather. Variations in the imaging situation can lead to heterogeneous data that have different domain distributions (i.e., domain invariance), and the AI algorithms cannot generate correct predictions with the decision boundary trained by the data of different distributions (i.e., domain shift) [109]. As a result, model performance can be severely affected due to uncertainty in the inference samples in the model deployment stage. The trained model lacks different domain knowledge and thus cannot recognize the features from unknown samples excluded from the training dataset with domain invariance. For the unlabeled data distributions that are indistinguishable from the models, applying unsupervised domain adaptation techniques can reduce the data uncertainty effectively. These techniques adjust the model parameters to extend the decision boundary to the unknown geoscience and RS data [110]. However, domain adaptation can only fine-tune the models, and uncertainty cannot be eradicated entirely, thus motivating researchers to perform uncertainty quantification for out-of-distribution model predictions.

MODEL UNCERTAINTY
Model uncertainty refers to the errors and randomness of the model parameters that are initialized in the model construction and optimized in the model training. In the literature, various model architectures associated with several optimization configurations have been developed for different RS applications. However, determining the optimal model parameters and training settings remains difficult and induces uncertainty in predictions. For example, a mismatch between model complexity and data volume may cause uncertainty due to under- and overfitting [111]. Meanwhile, the heterogeneity of training configurations can control the steps of model fitting directly and affect the final training quality continuously. As a result, the selection of these complex configurations brings uncertainty to the systematic model training process.

The configurations used for optimizing an ML model usually involve loss functions and hyperparameters. In particular, the loss functions are designed to measure the distances between model predictions and ground reference data and can be further developed to emphasize different types of errors. For example, ℓ1 and ℓ2 loss functions are employed widely in RS image restoration tasks, measuring the absolute and squared differences, respectively, at the pixel level. The optimizers controlled by hyperparameters then optimize the DNNs by minimizing the determined loss functions in every training iteration. In the literature, several optimization algorithms [e.g., stochastic gradient descent (SGD), Adam, and Adam with decoupled weight decay] have been proposed to



accelerate model fitting and improve inference performance of the model. The difference between these optimizers is entirely captured by the choice of update rule and applied hyperparameters [122]. For example, a training iteration using the SGD optimizer [123] can be defined as follows:

θ_{i+1} = θ_i − η_i ∇L(θ_i) (10)

where θ_i represents the model parameters in the ith iteration, L denotes the loss function, and η is the learning rate. Specifically, in a model training iteration, the amplitude of each update step is controlled by the learning rate η, while the gradient of the loss function determines the direction. Concerning the model updates in a whole epoch, the batch size determines the volume of samples to be calculated in the loss functions in each training iteration. Due to the heterogeneity of the training data, each sample of the whole training batch may be calculated in different optimization directions, which combine into an uncertain result in the loss functions. As a result, the batch size can manipulate the training stability in that a larger training batch reduces the possibility of opposite optimization steps. In conclusion, the selection of appropriate loss functions and optimization configurations becomes an uncertain issue when training AI algorithms [124], [125].

QUANTIFICATION AND APPLICATIONS IN GEOSCIENCE AND RS
As described in the "Sources of Uncertainty" section, data and model uncertainty caused by various sources is inevitable and still remains after applying practical approaches. Thus, uncertainty quantification can evidence the credibility of the predictions, benefiting the application of homogeneous data with domain variance [126] and decision making in high-risk AI applications in RS [127]. As shown in Table 1, current AI algorithms for RS can be divided into low-level vision tasks (e.g., image restoration and generation) and classification tasks (scene classification and object detection). In low-level RS vision tasks, neural networks are usually constructed end to end, generating predictions that directly represent the pixelwise spectral information. As a result, uncertainty quantification remains challenging in low-level tasks due to the lack of possible representations in the prediction space [128].

On the contrary, the predictions of classification tasks are usually distributions transformed by a softmax function and refer to possibilities of the object classes. The Bayesian inference framework provides a practical tool to estimate the uncertainty in neural networks [129]. In Bayesian inference, data uncertainty for the input sample x* is described as a posterior distribution over class labels y given a set of model parameters θ. In contrast, model uncertainty is formalized as a posterior distribution over θ given the training data D, as follows:

P(y | x*, D) = ∫ P(y | x*, θ) p(θ | D) dθ (11)

where the first factor under the integral captures the data uncertainty and the second factor captures the model uncertainty.

Several uncertainty quantification approaches have been proposed in the literature to marginalize θ in (11) to obtain the uncertainty distribution. As shown in Table 4, these schemes can be categorized into deterministic methods and Bayesian inference methods, depending on the subcomponent models' structure and the characteristics of the errors [105]. By opting for different quantification strategies, the uncertainty of the model output y* can be obtained as σ*, as shown in Figure 10.

DETERMINISTIC METHODS
Parameters of the neural network are deterministic and fixed in the inference of the deterministic methods. To obtain an uncertainty distribution with fixed θ, several uncertainty quantification approaches were proposed, directly predicting the parameters of a distribution over the predictions. In classification tasks, the predictions represent class possibilities as the outputs of the softmax function, defined as follows:

P(y | x*; θ̂) = e^{z_c(x*)} / Σ_{k=1}^{K} e^{z_k(x*)} (12)

TABLE 4. AN OVERVIEW OF UNCERTAINTY QUANTIFICATION METHODS.

PRIOR NETWORK-BASED METHODS (deterministic models)
• Description: Uncertainty distributions are calculated from the density of predicted probabilities represented by prior distributions with tractable properties over the categorical distribution.
• Optimization strategy: Kullback–Leibler divergence.
• Uncertainty sources: Data.
• AI techniques: Deterministic networks.
• References: [112], [113], and [114].

ENSEMBLE METHODS (deterministic models)
• Description: Predictions are obtained by averaging over a series of predictions of the ensembles, while the uncertainty is quantified based on their variety.
• Optimization strategy: Cross-entropy loss.
• Uncertainty sources: Data.
• AI techniques: Deterministic networks.
• References: [115], [116], and [117].

MONTE CARLO METHODS (Bayesian inference)
• Description: The uncertainty distribution over predictions is calculated by the Bayes theorem based on the Monte Carlo approximation of the distributions over Bayesian model parameters.
• Optimization strategy: Cross-entropy loss and Kullback–Leibler divergence.
• Uncertainty sources: Model.
• AI techniques: Bayesian networks.
• References: [118], [119], and [120].

EXTERNAL METHODS (Bayesian inference)
• Description: The mean and standard deviation values of the prediction are directly output simultaneously using external modules.
• Optimization strategy: Depends on the method.
• Uncertainty sources: Model.
• AI techniques: Bayesian networks.
• References: [121].
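As a concrete companion to (10) and (12), the sketch below runs one SGD step on the logits of a toy softmax classifier. The logit values, class index, and learning rate are illustrative assumptions, not values taken from the article.

```python
import math

def softmax(z):
    # Eq. (12): P(y = c | x*; theta) = exp(z_c) / sum_k exp(z_k)
    m = max(z)                        # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def sgd_step(theta, grad, lr):
    # Eq. (10): theta_{i+1} = theta_i - eta * grad L(theta_i)
    return [t - lr * g for t, g in zip(theta, grad)]

# Toy logits for K = 3 classes (illustrative values).
z = [2.0, 1.0, 0.1]
p = softmax(z)                        # a valid categorical distribution

# For a softmax classifier with cross-entropy loss, the gradient of the
# loss with respect to the logits is simply p - one_hot(true_class).
true_class = 0
grad = [p_k - (1.0 if k == true_class else 0.0) for k, p_k in enumerate(p)]
z_new = sgd_step(z, grad, lr=0.1)
# The update raises the true-class logit and lowers the others.
```

Note how a single hyperparameter (`lr`) already changes the step amplitude, which is exactly the source of the configuration uncertainty discussed above.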



where z_k(x*) ∈ R denotes the kth class prediction of the input sample x*. However, the classification predictions of the softmax function in neural networks are usually poorly calibrated due to the overconfidence of the neural networks [130], while the predictions cannot characterize the domain shifts by discarding the original features of the neural network [131]. As a result, the accuracy of uncertainty quantification is influenced.

To overcome these challenges, several uncertainty quantification approaches introduced prior networks to parameterize the distribution over a simplex. For example, Dirichlet prior networks are adopted widely to quantify the uncertainty from a Dirichlet distribution with tractable analytic properties [112], [113], [114]. The Dirichlet distribution is a prior distribution over categories that represents the density of the predicted probabilities. The Dirichlet distribution-based methods directly analyze the logit magnitude of the neural networks, quantifying the data uncertainty with awareness of domain distributions in Dirichlet distribution representations. For training of the Dirichlet prior networks, model parameters are optimized by minimizing the Kullback–Leibler divergence between the model and the Dirichlet distribution, focusing on the in- and out-of-distribution data, respectively [129].

Apart from the prior network-based approaches, ensemble methods can also approximate uncertainty by averaging

FIGURE 10. A visualization of uncertainty quantification methods. (a) Prior network-based methods. (b) Ensemble methods. (c) Monte Carlo methods. (d) External methods. For an input sample x*, the first three methods deliver the prediction y* and the quantified uncertainty σ* from the average of a series of model outputs (i.e., ξ* or y*_1, …, y*_n) and their standard deviation (S.D.), respectively. On the contrary, the external methods directly output the results of prediction and uncertainty quantification. BNN: Bayesian neural network.
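The first three schemes in Figure 10 share one quantification recipe: average a set of model outputs to obtain y* and take their standard deviation as σ*. A minimal sketch of that recipe, with three illustrative member predictions, is:

```python
import math

def quantify(member_outputs):
    """Ensemble-style quantification (cf. Figure 10): the prediction y*
    is the mean of the member outputs, and the uncertainty sigma* is
    their standard deviation."""
    n = len(member_outputs)
    y_star = sum(member_outputs) / n
    sigma_star = math.sqrt(sum((p - y_star) ** 2 for p in member_outputs) / n)
    return y_star, sigma_star

# Three members agreeing closely -> low uncertainty.
y1, s1 = quantify([0.90, 0.92, 0.91])
# Members disagreeing -> the same mean prediction, but higher uncertainty.
y2, s2 = quantify([0.50, 0.91, 1.32])
```

The Monte Carlo scheme in Figure 10(c) follows the same recipe, with the member outputs coming from stochastic forward passes of one BNN rather than from separate deterministic networks.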



over a series of predictions. In particular, ensemble methods construct a set of deterministic models as ensemble members that each generate a prediction for the input sample. Based on the predictions from multiple decision makers, ensemble methods provide an intuitive way of representing the uncertainty by evaluating the variety among the members' predictions. For example, Feng et al. [115] developed an object-based change detection model using rotation forest and coarse-to-fine uncertainty analysis from multitemporal RS images. The ensemble members segmented multitemporal images into pixelwise classes of changed, unchanged, and uncertain according to the defined uncertainty threshold in a coarse-to-fine manner. Change maps were then generated using the rotation forest, and all the maps were combined into a final change map by majority voting, which quantifies the uncertainty by calculating the variety of decisions from the different ensembles. Following a similar idea, Tan et al. [116] proposed an ensemble, object-level change detection model with multiscale uncertainty analysis based on object-based Dempster–Shafer fusion in active learning. Moreover, Schroeder et al. [117] proposed an ensemble model consisting of several artificial neural networks, quantifying uncertainty through the utilization of prediction variance lookup tables.

BAYESIAN INFERENCE
Bayesian learning can be used to interpret model parameters and uncertainty quantification based on the ability to combine the scalability, expressiveness, and predictive performance of neural networks. The Bayesian method utilizes Bayesian neural networks (BNNs) to directly infer the probability distribution over the model parameters θ. Given the training dataset D, the posterior distribution over the model parameters P(θ | D) can be modeled by assuming a prior distribution over the parameters via the Bayes theorem [132]. The prediction distribution of y* from an input sample x* can then be obtained as follows:

P(y* | x*, D) = ∫ P(y* | x*, θ) P(θ | D) dθ. (13)

However, this equation is not tractable due to the step of integrating over the posterior distribution of the model parameters P(θ | D), and thus many approximation techniques are typically applied. In the literature, Monte Carlo (MC) approximation has become the most widespread approach for Bayesian methods, following the law of large numbers. MC approximation can approximate the expected distribution by the mean of M neural networks, f_{θ_1}, f_{θ_2}, …, f_{θ_M}, with determined parameters θ_1, θ_2, …, θ_M. Following this idea, MC dropout has been applied widely to sample the parameters of a BNN by randomly dropping some connections of the layers according to a set probability [118], [119], [120]. The uncertainty distribution can then be further calculated by performing variational inference on the neural networks with the sampled parameters [133].

Concerning the computational cost of sampling model parameters in MC approximation, external modules are utilized to quantify uncertainty along with the predictions in BNNs. For example, Ma et al. [121] developed a BNN architecture with two endpoints to estimate the yield and the corresponding predictive uncertainty simultaneously in corn yield prediction based on RS data. Specifically, the extracted high-level features from the former part of the BNN are fed into two independent subnetworks to estimate the mean and standard deviation, respectively, of the predicted yield as a Gaussian distribution, which can be regarded intuitively as the quantified uncertainty.

FUTURE PERSPECTIVES
Over the decades, uncertainty analysis has become a critical topic in geoscience and RS data analysis. The literature has seen fruitful research outcomes in uncertainty explanation and quantification. Nevertheless, other open research directions deserve attention in future studies concerning the development trend of AI algorithms. In the following, we discuss some potential topics of interest.

BENCHMARK TOOLS FOR UNCERTAINTY QUANTIFICATION
Due to the lack of a universal benchmark protocol, comparisons of uncertainty quantification methods are rarely performed in the literature. Despite this, the existing evaluation metrics in related studies are usually based on measurable quantities such as calibration, out-of-distribution detection, or entropy metrics [132], [134]. However, the variety of methodology settings makes it challenging to compare the approaches quantitatively using the existing comparison metrics. Thus, developing benchmark tools, including a standardized evaluation protocol for uncertainty quantification, is critical for future research.

UNCERTAINTY IN UNSUPERVISED LEARNING
As data annotation is very expensive and time consuming given the large volume of EO data, semi- and unsupervised techniques have been employed widely in AI-based algorithms. However, existing uncertainty quantification methods still focus mainly on supervised learning algorithms due to the requirement for qualification metrics. Therefore, developing uncertainty quantification methods in the absence of available labeled samples is a critical research topic for the future.

UNCERTAINTY ANALYSIS FOR MORE AI ALGORITHMS
Currently, most of the existing uncertainty quantification methods focus on high-level and forecasting tasks in geoscience and RS. Conversely, uncertainty methods for low-level vision tasks, such as cloud removal, are rarely seen in the literature due to the formation of the predictions, and thus deserve further study.

QUANTIFYING DATA AND MODEL UNCERTAINTY SIMULTANEOUSLY
Existing uncertainty quantification methods have a very limited scope of application. Deterministic and Bayesian



methods can only quantify data and model uncertainty, respectively. Developing strategies that quantify data and model uncertainty simultaneously is necessary to analyze uncertainty comprehensively.

EXPLAINABILITY
AI-based algorithms, especially DNNs, have been applied successfully to various real-world applications as well as to geoscience and RS due to the rise of available large-scale data and hardware improvements. To improve performance and learning efficiency, deeper architectures and more complex parameters have been introduced to DNNs, which makes it more difficult to understand and interpret these black-box models [135]. Regardless of the potentially high accuracy of deep learning models, the decisions made by DNNs require knowledge of the internal operations that were once overlooked by non-AI experts and end users who were more concerned with results. However, for geoscience and RS tasks, the privacy of the data and the high confidentiality of the tasks demand that trustworthy deep learning models be designed in line with the ethical and judicial requirements of both designers and users. In response to the demands for ethical, trustworthy, and unbiased AI models, as well as to reduce the impact of adversarial examples in fooling classifier decisions, XAI was implemented for geoscience and RS tasks to provide transparent explanations of model behaviors and make the models easier for humans to manage. Specifically, explainability is used to provide an understanding of the pathways through which output decisions are made by AI models, based on the parameters or activations of the trained models.

PRELIMINARIES
The topic of XAI has received renewed attention from academia and practitioners. We can see from Figure 11 that the search interest in XAI on Google Trends has grown rapidly over the past decade, especially in the past five years. The general concept of XAI can be explained as a suite of techniques and algorithms designed to facilitate the trustworthiness and transparency of AI systems. Thus, explanations are used as additional information extracted from the AI model, which provides insightful descriptions for a specific AI decision or for the entire functionality of the AI model [138].

Generally, given an input image x ∈ R^d, let f(θ) : x → y be a classifier mapping from the image space to the label space, where θ represents the parameters of the model in a classification problem. The predicted label ŷ for the input image x can then be obtained by ŷ = f(θ, x). Now, the explanation E : f × R^d → R^d can be generated to describe the feature importance, contribution, or relevance of a particular dimension to the class output [137]. The explanation map can be a pixel map equal in size to the input. For example, the saliency method [139] is estimated by the gradient of the output ŷ with respect to the input x:

E_Saliency(ŷ, x) = ∇f(θ, x). (14)
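As a minimal illustration of (14), the saliency map below is computed as the gradient of a class score with respect to the input, approximated by central finite differences so that no autodiff framework is needed. The linear stand-in for f(θ, x) and its parameter values are purely illustrative.

```python
def f(theta, x):
    # Illustrative stand-in for the class score f(theta, x): a linear model.
    return sum(t * v for t, v in zip(theta, x))

def saliency(theta, x, eps=1e-6):
    """Eq. (14): the saliency map is the gradient of the score with respect
    to the input x, approximated here by central finite differences."""
    grad = []
    for d in range(len(x)):
        x_plus, x_minus = list(x), list(x)
        x_plus[d] += eps
        x_minus[d] -= eps
        grad.append((f(theta, x_plus) - f(theta, x_minus)) / (2 * eps))
    return grad

theta = [0.5, -2.0, 3.0]   # illustrative model parameters
x = [1.0, 1.0, 1.0]        # illustrative three-pixel "image"
E = saliency(theta, x)
# For a linear model the saliency map recovers theta itself, ranking
# the third pixel (weight 3.0) as the most influential.
```

In practice, the same gradient is obtained in one backward pass of an autodiff framework; the finite-difference form only makes the definition in (14) explicit.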

FIGURE 11. Google Trends results for research interest in the term XAI, showing the recorded search interest and the predicted search tendency. The search interest numbers represent the relative search frequency over time, where "100" means the peak popularity for the term, "50" means that the term is half as popular, and "0" means that there were not enough data.

XAI APPLICATIONS IN GEOSCIENCE AND RS
In the quest to make AI algorithms explainable, many explanation methods and strategies have been proposed. Based on previously published surveys, the taxonomy of XAI algorithms can be discussed along the axes of scope and usage, respectively [136], [140], and the critical distinction of XAI algorithms is drawn in Figure 12.

FIGURE 12. A pseudo ontology of the XAI methods taxonomy (referenced from [136]). An explainability method has a scope that can be global or local and is, by definition, intrinsic (usually model specific) or posthoc (usually model agnostic).

◗◗ Scope: According to the scope of explanations, XAI algorithms can be either global or local. Globally explainable methods provide a comprehensive understanding of the entire model's behavior. Locally explainable methods are designed to justify the individual feature attributions of an instance x from the data population X. Some XAI algorithms can be extended to both. For example, in [141], Ribeiro et al. introduced the local interpretable model-agnostic explanation (LIME) method, which can reliably approximate any black-box classifier locally around the prediction. Specifically, the LIME method gives human-understandable



representations by highlighting attentive contiguous superpixels of the source image with positive weight toward a specific class, as they give intuitive explanations of how the model thinks when classifying the image.
◗◗ Usage: Another way to classify explainable methods is by whether they can be embedded into one specific neural network or applied to any AI algorithm as an external explanation. The design of model-specific XAI algorithms depends heavily on the model's intrinsic architecture, and it will be affected by any changes in the architecture. On the other hand, model-agnostic, posthoc XAI algorithms have attracted research interest as they are not tied to a particular type of model and usually perform well on various neural networks. One natural idea behind model-agnostic methods is to visualize representations of the patterns passed through the neural units. For example, in [142], Zhou et al. proposed a class activation mapping (CAM) method that calculates the contribution of each pixel to the predicted result and generates a heatmap for visual interpretation. The proposal of CAM has provided great inspiration for giving visualized interpretations for CNN-model families, and a series of XAI algorithms have been developed based on vanilla CAM, such as grad-CAM [143], guided grad-CAM [143], grad-CAM++ [144], and so on.

XAI methods have also been applied in geoscience and RS. In [145], Maddy and Boukabara proposed an AI version of a multi-instrument inversion and data assimilation preprocessing system (MIIDAPS-AI) for infrared and microwave polar and geostationary sounders and imagers. They generated daily MIIDAPS-AI Jacobians to provide reliable explanations for the results. The consistency of the ML-based Jacobians with expectations illustrates that the information leading to temperature retrieval at a particular layer originates from channels that peak at those layers. In [137], Kakogeorgiou and Karantzalos utilized deep learning models for multilabel classification tasks on the benchmark BigEarthNet and SEN12MS datasets. To produce human-interpretable explanations for the models, 10 XAI methods were adopted in regard to their applicability and explainability. Some of the methods can be visualized directly by creating heatmaps for the prediction results, as shown in Figure 13, which demonstrates their capability and provides valuable insights for understanding the behaviors of deep black-box models like DenseNet.

In [146], Matin and Pradhan utilized the Shapley additive explanation (SHAP) algorithm [147] to interpret the outputs of multilayer perceptrons and analyzed the impact of each feature descriptor for postearthquake building-damage assessment. Through this study, the explainable model provided further evidence for the model's decisions in classifying collapsed and noncollapsed buildings, thus providing generic databases and reliable AI models to researchers. In [148] and [149], Temenos et al. proposed a fused dataset that combines data from eight European cities and explored its potential relationship with COVID-19 using a tree-based ML algorithm. To give trustworthy explanations, the SHAP and LIME methods were utilized to identify the influence of factors such as temperature, humidity, and O3 on a global and local level.

There exist further explanations of AI models that provide a visualization of the learned deep features and interpret how the training procedure works on specific tasks. In [150], Xu et al. proposed a fully convolutional classification strategy for HSIs. By visualizing the responses of different neurons in the network, the activation of neurons corresponding to different categories was explored, as shown in Figure 14. Consistency exists between the highlighted feature maps from different layers and the object detection results. In [151], Onishi and Ise constructed a machine vision system based on CNNs for tree identification and mapping using RS images captured by UAVs. The deep features were visualized by applying the guided grad-CAM method, which indicates that the differences in the edge shapes of foliage and branches play an important role in identifying tree species for CNN models.

In [152], Huang et al. proposed a novel network, named encoder-classifier-reconstruction CAM (ECR-CAM), to provide more accurate visual interpretations of the more complicated objects contained in RS images. Specifically, the ECR-CAM method can learn more informative features by attaching a reconstruction subtask to the original classification task. Meanwhile, the extracted features are visualized using the CAM module based on the training of the network. The visualized heatmaps of ResNet-101 and DenseNet-201, with the proposed ECR-CAM method and other XAI methods, are shown in Figure 15. We can observe that ECR-CAM can

FIGURE 13. The heatmaps of DenseNet with different XAI algorithms for the Water class in the SEN12MS dataset (from [137]): the Sentinel-2 (S2) image, saliency with SmoothGrad (Sal with SG, 0.16), DeepLift (0.15), LIME (0.08), occlusion (0.01), and Grad-CAM (0.01), with values mapped from 0 to 1. Pixels with a deeper color are more likely to be interpreted as the target class.



more precisely locate target objects and achieves a better for image generation, such as different colors and texture
evaluation result for capturing multiple objects. patterns. Other representative prototype-based XAI algo-
In [15], Xu et al. proposed a novel text-to-image modern rithms in geoscience and RS include the works in [153],
Hopfield network (Txt2Img-MHN) for RS image generation [154], and [155].
(https://github.com/YonghaoXu/Txt2Img-MHN). Unlike
previous studies that directly learn concrete and diverse FUTURE PERSPECTIVES
text-image features, Txt2Img-MHN aims to learn the most The past 10 years have witnessed a rapid rise of AI algo-
representative prototypes from text-image embeddings by rithms in geoscience and RS. Meanwhile, there now exists
the Hopfield layer, thus generating coarse-to-fine images greater awareness of the need to develop AI models with
for different semantics. For an understandable interpreta- more explainability and transparency, such as to increase
tion of the learned prototypes, the top-20 tokens were visu- trust in and reliance on the models’ predictions. However,
alized, which are highly correlated to the basic components the existing XAI research that aims to visualize, query, or
interpret the inner function of the
models still needs to be improved
due to its tight correlation with the
complexity of individual models
[156]. In the following sections, we
discuss potential perspectives of XAI
for EO tasks from three aspects.

SIMPLIFY THE STRUCTURE OF DNNs
By utilizing appropriate XAI models, the effect of each layer and neuron in the network on the decision can be decomposed and evaluated. As a consequence, training time and parameters can be saved by pruning the network and preserving only the most useful layers and neurons for feature extraction.

FIGURE 14. Visualized feature maps and unsupervised object detection results of the spatial fully convolutional network model (from [150]). (a) The 38th feature map in the first convolutional layer. (b) Detection results for vegetation. (c) The sixth feature map in the sixth convolutional layer. (d) Detection results for metal sheets.
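The pruning idea above can be illustrated with a toy example. The two-layer network, the probe data, and the contribution score (mean absolute activation times weight) below are all illustrative assumptions, not the procedure of any cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: x -> h = relu(x @ W1) -> y = h @ w2
n_in, n_hidden = 8, 16
W1 = rng.normal(size=(n_in, n_hidden))
w2 = rng.normal(size=n_hidden)
X = rng.normal(size=(200, n_in))           # probe inputs

H = np.maximum(X @ W1, 0.0)                # hidden activations
# Contribution score of each hidden unit: mean |activation * weight|,
# i.e., how much the unit moves the output on average.
scores = np.mean(np.abs(H * w2), axis=0)

keep = np.argsort(scores)[-8:]             # preserve the 8 most useful units
W1_pruned, w2_pruned = W1[:, keep], w2[keep]

y_full = H @ w2
y_pruned = np.maximum(X @ W1_pruned, 0.0) @ w2_pruned
corr = np.corrcoef(y_full, y_pruned)[0, 1]
print(f"kept {len(keep)}/{n_hidden} units, output correlation {corr:.2f}")
```

Half the hidden layer is dropped, yet the pruned output still tracks the full network's output, because the removed units contributed little to the decision.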

FIGURE 15. The heatmaps of ResNet-101 and DenseNet-201 with CAM, Grad-CAM++, and ECR-CAM (adapted from [152]). The columns show, from left to right, the VHR image, ResNet-101 (CAM), DenseNet-201 (CAM), ResNet-101 (Grad-CAM++), DenseNet-201 (Grad-CAM++), and DenseNet-201 (ECR-CAM). The target objects in rows 1, 2, and 3 are airplanes, cars, and mobile homes, respectively. Pixels in red are more likely to be interpreted as the target objects.
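For reference, plain CAM — the simplest of the heatmap methods in Figure 15 — weights the final convolutional feature maps by the classifier weights of the target class. The sketch below uses random tensors as stand-ins for a real network's features and weights:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Plain CAM: sum the last conv layer's feature maps, each weighted
    by the target class's weight in the global-average-pooling head.

    feature_maps: (C, H, W) activations from the final conv layer.
    fc_weights:   (num_classes, C) classifier weights.
    """
    w = fc_weights[class_idx]                    # (C,)
    cam = np.tensordot(w, feature_maps, axes=1)  # contract channels -> (H, W)
    cam = np.maximum(cam, 0.0)                   # keep positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(42)
feats = np.maximum(rng.normal(size=(64, 7, 7)), 0.0)  # stand-in conv features
fc_w = rng.normal(size=(10, 64))                      # stand-in classifier
cam = class_activation_map(feats, fc_w, class_idx=3)
print(cam.shape)  # (7, 7)
```

The resulting low-resolution map is upsampled to the input size for display; high (red) values mark regions the classifier treats as evidence for the target class. Grad-CAM++ and ECR-CAM refine how the channel weights are derived but keep this weighted-sum structure.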



CREATE MORE STABLE AND RELIABLE EXPLANATIONS FOR THE NETWORK
It has been demonstrated in [157] that existing interpretations of the network are vulnerable to small perturbations. The fragility of interpretations sends a message that designing robust XAI methods will have promising applications for adversarial attacks and defenses for EO.

PROVIDE HUMAN-UNDERSTANDABLE EXPLANATIONS IN EO TASKS
Previous studies show that there is still a large gap between the explanation map learned by XAI methods and human annotations; thus, XAI methods produce semantically misaligned explanations that are difficult to understand directly. This problem sheds light on the importance of deriving interpretations based on the specific EO task and human understanding, increasing the accuracy of explanation maps by introducing more constraints and optimization problems to explanations.

CONCLUSIONS AND REMARKS
Although AI algorithms represented by deep learning theories have achieved great success in many challenging tasks in the geoscience and RS field, their related safety and security issues should not be neglected, especially when addressing safety-critical EO missions. This article provided the first systematic and comprehensive review of recent progress on AI security in the geoscience and RS field, covering five major aspects: adversarial attack, backdoor attack, FL, uncertainty, and explainability. Although research on some of these topics is still in its infancy, we believe that all these topics are indispensable for building a secure and trustworthy EO system, and all five deserve further investigation. In particular, in this section, we summarize four potential research directions and provide some open questions and challenges. This review is intended to inspire readers to conduct more influential and insightful research into related realms.

SECURE AI MODELS IN EO
Currently, the security of AI models has become a concern in geoscience and RS. The literature reviewed in this article also demonstrates that either adversarial or backdoor attacks can seriously threaten deployed AI systems for EO tasks. Nevertheless, despite the great effort that has been made in existing research, most of the studies focus only on a single attack type. How to develop advanced algorithms to defend the AI model against both adversarial and backdoor attacks simultaneously for EO is still an open question. In addition, although most of the relevant research focuses on conducting adversarial (backdoor) attacks and defenses in the digital domain, how effective adversarial (backdoor) attacks and defenses in the physical domain might be carried out, considering the imaging characteristics of different RS sensors, is another meaningful research direction.

DATA PRIVACY IN EO
State-of-the-art AI algorithms, especially deep learning-based ones, are usually data driven, and training these giant models often depends on a large quantity of high-quality labeled data. Thus, data sharing and distributed learning have played an increasingly important role in training large-scale AI models for EO. However, considering the sensitive information commonly found in RS data, such as military targets and other confidential information related to national defense security, the design of advanced FL algorithms to realize the sharing and flow of necessary information required for training AI models while protecting data privacy in EO is a challenging problem. Additionally, most of the existing research focuses on horizontal FL, in which it is assumed that distributed databases share high similarity in feature space. Improving FL ability in cross-domain, cross-sensor, or cross-task scenarios for EO is still an open question.

TRUSTWORTHY AI MODELS IN EO
The uncertainty in RS data and models is a major obstacle to building a trustworthy AI system for EO. Such uncertainty exists in the entire lifecycle of EO, from data acquisition, transmission, processing, and interpretation, to evaluation, and constantly spreads and accumulates, affecting the accuracy and reliability of the eventual output of the deployed AI model. Currently, most of the existing research adopts deterministic and Bayesian inference methods to quantify the uncertainty in data and models, which ignores the close relationship between data and models. Thus, finding a method to achieve uncertainty quantification for data and models simultaneously in EO deserves more in-depth study. Furthermore, apart from uncertainty quantification, it is equally crucial to develop advanced algorithms to further decrease uncertainty in the entire lifecycle of EO so that errors and risks can be highly controllable, achieving a truly trustworthy AI system for EO.

XAI MODELS IN EO
As an end-to-end, data-driven AI technique, deep learning models usually work like an unexplainable black box. This makes it straightforward to apply deep learning models in many challenging EO missions, like using a point-and-shoot camera. Nevertheless, it also brings about potential security risks, including vulnerability to adversarial (backdoor) attacks and model uncertainty. Thus, achieving a balance between tractability, explainability, and accuracy when designing AI models for EO is worthy of further investigation. Finally, considering the important role of expert knowledge in interpreting RS data, finding a way to better embed the human–computer interaction mechanism into the EO system may be a potential research direction for building XAI models in the future.

ACKNOWLEDGMENT
The authors would like to thank the Institute of Advanced Research in Artificial Intelligence for its support. The corresponding author of this article is Shizhen Chang.



AUTHOR INFORMATION
Yonghao Xu (yonghaoxu@ieee.org) received his B.S. and Ph.D. degrees in photogrammetry and remote sensing from Wuhan University, Wuhan, China, in 2016 and 2021, respectively. He is currently a postdoctoral researcher at the Institute of Advanced Research in Artificial Intelligence, 1030 Vienna, Austria. His research interests include remote sensing, computer vision, and machine learning. He is a Member of IEEE.
Tao Bai (tao.bai@ntu.edu.sg) received his B.Eng. degree in engineering from Wuhan University, Wuhan, China, in 2018, and his Ph.D. degree in computer science from Nanyang Technological University, 639798, Singapore, in 2022, where he is currently a research fellow. His research interests mainly focus on adversarial machine learning, generative adversarial networks, remote sensing, and security and privacy.
Weikang Yu (yuweikang99@gmail.com) received his B.Sc. degree from Beihang University, Beijing, China, in 2020, and his M.Phil. degree from the Chinese University of Hong Kong, Shenzhen, in 2022. He is currently pursuing his Ph.D. degree with the machine learning group at Helmholtz Institute Freiberg for Resource Technology, Helmholtz–Zentrum Dresden–Rossendorf, 09599 Freiberg, Germany. His research interests include remote sensing image processing and machine learning. He is a Student Member of IEEE.
Shizhen Chang (szchang@ieee.org) received her B.S. degree in surveying and mapping engineering and her Ph.D. degree in photogrammetry and remote sensing from Wuhan University, Wuhan, China, in 2016 and 2021, respectively. She is currently a postdoctoral researcher with the Institute of Advanced Research in Artificial Intelligence, 1030 Vienna, Austria. Her research interests include weakly supervised learning, change detection, and machine (deep) learning for remote sensing. She is a Member of IEEE.
Peter M. Atkinson (pma@lancaster.ac.uk) received his master of business administration degree from the University of Southampton, Southampton, U.K., in 2012 and his Ph.D. degree from the University of Sheffield, Sheffield, U.K., in 1990, both in geography. He was a professor of geography with the University of Southampton, where he is currently a visiting professor. He was the Belle van Zuylen Chair with Utrecht University, Utrecht, The Netherlands. He is currently a distinguished professor of spatial data science with Lancaster University, LA1 4YR Lancaster, U.K. He is also a visiting professor with the Chinese Academy of Sciences, Beijing, China. He has authored or coauthored more than 350 peer-reviewed articles in international scientific journals and approximately 50 refereed book chapters, and has also edited more than 10 journal special issues and eight books. His research interests include remote sensing, geographical information science, and spatial (and space–time) statistics applied to a range of environmental science and socioeconomic problems. He was the recipient of the Peter Burrough Award of the International Spatial Accuracy Research Association and the NERC CASE Award from the Rothamsted Experimental Station. He is the editor-in-chief of Science of Remote Sensing, a sister journal of Remote Sensing of Environment, and an associate editor of Computers and Geosciences. He sits on various international scientific committees.
Pedram Ghamisi (p.ghamisi@gmail.com) received his Ph.D. degree in electrical and computer engineering from the University of Iceland in 2015. He works as head of the machine learning group at Helmholtz–Zentrum Dresden–Rossendorf, 09599 Freiberg, Germany, and a senior principal investigator and research professor (the leader of artificial intelligence for remote sensing) at the Institute of Advanced Research in Artificial Intelligence, Austria. He is a cofounder of VasoGnosis Inc. with two branches in San Jose and Milwaukee, USA. His research interests include deep learning, with a sharp focus on remote sensing applications. For detailed information, see http://www.ai4rs.com. He is a Senior Member of IEEE.

REFERENCES
[1] M. Reichstein et al., "Deep learning and process understanding for data-driven Earth system science," Nature, vol. 566, no. 7743, pp. 195–204, Feb. 2019, doi: 10.1038/s41586-019-0912-1.
[2] P. Ghamisi et al., "New frontiers in spectral-spatial hyperspectral image classification: The latest advances based on mathematical morphology, Markov random fields, segmentation, sparse representation, and deep learning," IEEE Geosci. Remote Sens. Mag., vol. 6, no. 3, pp. 10–43, Sep. 2018, doi: 10.1109/MGRS.2018.2854840.
[3] L. Zhang, L. Zhang, and B. Du, "Deep learning for remote sensing data: A technical tutorial on the state of the art," IEEE Geosci. Remote Sens. Mag., vol. 4, no. 2, pp. 22–40, Jun. 2016, doi: 10.1109/MGRS.2016.2540798.
[4] X. X. Zhu et al., "Deep learning in remote sensing: A comprehensive review and list of resources," IEEE Geosci. Remote Sens. Mag., vol. 5, no. 4, pp. 8–36, Dec. 2017, doi: 10.1109/MGRS.2017.2762307.
[5] J. Ma, W. Yu, C. Chen, P. Liang, X. Guo, and J. Jiang, "Pan-GAN: An unsupervised pan-sharpening method for remote sensing image fusion," Inf. Fusion, vol. 62, pp. 110–120, Oct. 2020, doi: 10.1016/j.inffus.2020.04.006.
[6] W. He et al., "Non-local meets global: An iterative paradigm for hyperspectral image restoration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 4, pp. 2089–2107, Apr. 2022, doi: 10.1109/TPAMI.2020.3027563.
[7] P. Ebel, Y. Xu, M. Schmitt, and X. X. Zhu, "SEN12MS-CR-TS: A remote-sensing data set for multimodal multitemporal cloud removal," IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–14, Jan. 2022, doi: 10.1109/TGRS.2022.3146246.
[8] Y. Zhong, W. Li, X. Wang, S. Jin, and L. Zhang, "Satellite-ground integrated destriping network: A new perspective for EO-1 hyperion and Chinese hyperspectral satellite datasets," Remote Sens. Environ., vol. 237, Feb. 2020, Art. no. 111416, doi: 10.1016/j.rse.2019.111416.



[9] G. Cheng, J. Han, and X. Lu, "Remote sensing image scene classification: Benchmark and state of the art," Proc. IEEE, vol. 105, no. 10, pp. 1865–1883, Mar. 2017, doi: 10.1109/JPROC.2017.2675998.
[10] J. Ding et al., "Object detection in aerial images: A large-scale benchmark and challenges," IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 11, pp. 7778–7796, Nov. 2022, doi: 10.1109/TPAMI.2021.3117983.
[11] Y. Xu et al., "Advanced multi-sensor optical remote sensing for urban land use and land cover classification: Outcome of the 2018 IEEE GRSS data fusion contest," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 12, no. 6, pp. 1709–1724, Jun. 2019, doi: 10.1109/JSTARS.2019.2911113.
[12] H. Chen, C. Wu, B. Du, L. Zhang, and L. Wang, "Change detection in multisource VHR images via deep Siamese convolutional multiple-layers recurrent neural network," IEEE Trans. Geosci. Remote Sens., vol. 58, no. 4, pp. 2848–2864, Apr. 2020, doi: 10.1109/TGRS.2019.2956756.
[13] J. Shao, B. Du, C. Wu, M. Gong, and T. Liu, "HRSiam: High-resolution Siamese network, towards space-borne satellite video tracking," IEEE Trans. Image Process., vol. 30, pp. 3056–3068, Feb. 2021, doi: 10.1109/TIP.2020.3045634.
[14] X. Lu, B. Wang, X. Zheng, and X. Li, "Exploring models and data for remote sensing image caption generation," IEEE Trans. Geosci. Remote Sens., vol. 56, no. 4, pp. 2183–2195, Apr. 2018, doi: 10.1109/TGRS.2017.2776321.
[15] Y. Xu, W. Yu, P. Ghamisi, M. Kopp, and S. Hochreiter, "Txt2Img-MHN: Remote sensing image generation from text using modern Hopfield networks," 2022, arXiv:2208.04441.
[16] S. Lobry, D. Marcos, J. Murray, and D. Tuia, "RSVQA: Visual question answering for remote sensing data," IEEE Trans. Geosci. Remote Sens., vol. 58, no. 12, pp. 8555–8566, Dec. 2020, doi: 10.1109/TGRS.2020.2988782.
[17] D. Rashkovetsky, F. Mauracher, M. Langer, and M. Schmitt, "Wildfire detection from multisensor satellite imagery using deep semantic segmentation," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 7001–7016, Jun. 2021, doi: 10.1109/JSTARS.2021.3093625.
[18] O. Ghorbanzadeh et al., "The outcome of the 2022 Landslide4Sense competition: Advanced landslide detection from multisource satellite imagery," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 9927–9942, Nov. 2022, doi: 10.1109/JSTARS.2022.3220845.
[19] S. Dewitte, J. P. Cornelis, R. Müller, and A. Munteanu, "Artificial intelligence revolutionises weather forecast, climate monitoring and decadal prediction," Remote Sens., vol. 13, no. 16, Aug. 2021, Art. no. 3209, doi: 10.3390/rs13163209.
[20] X. Feng, T.-M. Fu, H. Cao, H. Tian, Q. Fan, and X. Chen, "Neural network predictions of pollutant emissions from open burning of crop residues: Application to air quality forecasts in southern China," Atmos. Environ., vol. 204, pp. 22–31, May 2019, doi: 10.1016/j.atmosenv.2019.02.002.
[21] N. Jean, M. Burke, M. Xie, W. M. Davis, D. B. Lobell, and S. Ermon, "Combining satellite imagery and machine learning to predict poverty," Science, vol. 353, no. 6301, pp. 790–794, Aug. 2016, doi: 10.1126/science.aaf7894.
[22] G. W. Gella et al., "Mapping of dwellings in IDP/refugee settlements from very high-resolution satellite imagery using a mask region-based convolutional neural network," Remote Sens., vol. 14, no. 3, Aug. 2022, Art. no. 689, doi: 10.3390/rs14030689.
[23] L. Zhang and L. Zhang, "Artificial intelligence for remote sensing data analysis: A review of challenges and opportunities," IEEE Geosci. Remote Sens. Mag., vol. 10, no. 2, pp. 270–294, Jun. 2022, doi: 10.1109/MGRS.2022.3145854.
[24] Y. Ge, X. Zhang, P. M. Atkinson, A. Stein, and L. Li, "Geoscience-aware deep learning: A new paradigm for remote sensing," Sci. Remote Sens., vol. 5, Jun. 2022, Art. no. 100047, doi: 10.1016/j.srs.2022.100047.
[25] W. Czaja, N. Fendley, M. Pekala, C. Ratto, and I.-J. Wang, "Adversarial examples in remote sensing," in Proc. SIGSPATIAL Int. Conf. Adv. Geographic Inf. Syst., 2018, pp. 408–411, doi: 10.1145/3274895.3274904.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Commun. ACM, vol. 60, no. 6, pp. 84–90, May 2017, doi: 10.1145/3065386.
[27] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," in Proc. Int. Conf. Learn. Representations, 2015.
[28] A. Kurakin, I. J. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," in Proc. Int. Conf. Learn. Representations, 2017.
[29] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, "Towards deep learning models resistant to adversarial attacks," in Proc. Int. Conf. Learn. Representations, 2018.
[30] C. Szegedy et al., "Intriguing properties of neural networks," in Proc. Int. Conf. Learn. Representations, 2014.
[31] L. Chen, G. Zhu, Q. Li, and H. Li, "Adversarial example in remote sensing image recognition," 2020, arXiv:1910.13222.
[32] Y. Xu, B. Du, and L. Zhang, "Assessing the threat of adversarial examples on deep neural networks for remote sensing scene classification: Attacks and defenses," IEEE Trans. Geosci. Remote Sens., vol. 59, no. 2, pp. 1604–1617, Feb. 2021, doi: 10.1109/TGRS.2020.2999962.
[33] Y. Xu and P. Ghamisi, "Universal adversarial examples in remote sensing: Methodology and benchmark," IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–15, Mar. 2022, doi: 10.1109/TGRS.2022.3156392.
[34] Y. Zhou, M. Kantarcioglu, B. Thuraisingham, and B. Xi, "Adversarial support vector machine learning," in Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2012, pp. 1059–1067, doi: 10.1145/2339530.2339697.
[35] T. Bai, H. Wang, and B. Wen, "Targeted universal adversarial examples for remote sensing," Remote Sens., vol. 14, no. 22, Nov. 2022, Art. no. 5833, doi: 10.3390/rs14225833.
[36] Y. Xu, B. Du, and L. Zhang, "Self-attention context network: Addressing the threat of adversarial attacks for hyperspectral image classification," IEEE Trans. Image Process., vol. 30, pp. 8671–8685, Oct. 2021, doi: 10.1109/TIP.2021.3118977.
[37] M. E. Paoletti, J. M. Haut, R. Fernandez-Beltran, J. Plaza, A. J. Plaza, and F. Pla, "Deep pyramidal residual networks for spectral–spatial hyperspectral image classification," IEEE Trans.



Geosci. Remote Sens., vol. 57, no. 2, pp. 740–754, Feb. 2019, doi: 10.1109/TGRS.2018.2860125.
[38] C. Shi, Y. Dang, L. Fang, Z. Lv, and M. Zhao, "Hyperspectral image classification with adversarial attack," IEEE Geosci. Remote Sens. Lett., vol. 19, pp. 1–5, 2022, doi: 10.1109/LGRS.2021.3122170.
[39] L. Chen, Z. Xu, Q. Li, J. Peng, S. Wang, and H. Li, "An empirical study of adversarial examples on remote sensing image scene classification," IEEE Trans. Geosci. Remote Sens., vol. 59, no. 9, pp. 7419–7433, Jan. 2021, doi: 10.1109/TGRS.2021.3051641.
[40] H. Li et al., "Adversarial examples for CNN-based SAR image classification: An experience study," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 1333–1347, 2021, doi: 10.1109/JSTARS.2020.3038683.
[41] T. Bai, J. Luo, J. Zhao, B. Wen, and Q. Wang, "Recent advances in adversarial training for adversarial robustness," in Proc. Int. Joint Conf. Artif. Intell., 2021, pp. 4312–4321.
[42] A. Chan-Hon-Tong, G. Lenczner, and A. Plyer, "Demotivate adversarial defense in remote sensing," in Proc. IEEE Int. Geosci. Remote Sens. Symp., 2021, pp. 3448–3451, doi: 10.1109/IGARSS47720.2021.9554767.
[43] B. Peng, B. Peng, J. Zhou, J. Xie, and L. Liu, "Scattering model guided adversarial examples for SAR target recognition: Attack and defense," IEEE Trans. Geosci. Remote Sens., vol. 60, Oct. 2022, Art. no. 5236217, doi: 10.1109/TGRS.2022.3213305.
[44] Y. Xu, H. Sun, J. Chen, L. Lei, G. Kuang, and K. Ji, "Robust remote sensing scene classification by adversarial self-supervised learning," in Proc. IEEE Int. Geosci. Remote Sens. Symp., 2021, pp. 4936–4939, doi: 10.1109/IGARSS47720.2021.9553824.
[45] G. Cheng, X. Sun, K. Li, L. Guo, and J. Han, "Perturbation-seeking generative adversarial networks: A defense framework for remote sensing image scene classification," IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–11, 2022, doi: 10.1109/TGRS.2021.3081421.
[46] Y. Xu, W. Yu, and P. Ghamisi, "Task-guided denoising network for adversarial defense of remote sensing scene classification," in Proc. Int. Joint Conf. Artif. Intell. Workshop, 2022, pp. 73–78.
[47] L. Chen, J. Xiao, P. Zou, and H. Li, "Lie to me: A soft threshold defense method for adversarial examples of remote sensing images," IEEE Geosci. Remote Sens. Lett., vol. 19, pp. 1–5, 2022, doi: 10.1109/LGRS.2021.3096244.
[48] Z. Zhang, X. Gao, S. Liu, B. Peng, and Y. Wang, "Energy-based adversarial example detection for SAR images," Remote Sens., vol. 14, no. 20, Oct. 2022, Art. no. 5168, doi: 10.3390/rs14205168.
[49] Y. Zhang et al., "Adversarial patch attack on multi-scale object detection for UAV remote sensing images," Remote Sens., vol. 14, no. 21, Oct. 2022, Art. no. 5298, doi: 10.3390/rs14215298.
[50] X. Sun, G. Cheng, L. Pei, H. Li, and J. Han, "Threatening patch attacks on object detection in optical remote sensing images," 2023, arXiv:2302.06060.
[51] J.-C. Burnel, K. Fatras, R. Flamary, and N. Courty, "Generating natural adversarial remote sensing images," IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–14, 2021, doi: 10.1109/TGRS.2021.3110601.
[52] Y. Dong et al., "Boosting adversarial attacks with momentum," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 9185–9193, doi: 10.1109/CVPR.2018.00957.
[53] C. Xie et al., "Improving transferability of adversarial examples with input diversity," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 2730–2739, doi: 10.1109/CVPR.2019.00284.
[54] X. Sun, G. Cheng, L. Pei, and J. Han, "Query-efficient decision-based attack via sampling distribution reshaping," Pattern Recognit., vol. 129, Sep. 2022, Art. no. 108728, doi: 10.1016/j.patcog.2022.108728.
[55] X. Sun, G. Cheng, H. Li, L. Pei, and J. Han, "Exploring effective data for surrogate training towards black-box attack," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2022, pp. 15,355–15,364, doi: 10.1109/CVPR52688.2022.01492.
[56] B. Deng, D. Zhang, F. Dong, J. Zhang, M. Shafiq, and Z. Gu, "Rust-style patch: A physical and naturalistic camouflage attacks on object detector for remote sensing images," Remote Sens., vol. 15, no. 4, Feb. 2023, Art. no. 885, doi: 10.3390/rs15040885.
[57] H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, and F. Roli, "Is feature selection secure against training data poisoning?" in Proc. Int. Conf. Mach. Learn., 2015, pp. 1689–1698.
[58] T. Gu, B. Dolan-Gavitt, and S. Garg, "BadNets: Identifying vulnerabilities in the machine learning model supply chain," 2017, arXiv:1708.06733.
[59] Y. Liu, Y. Xie, and A. Srivastava, "Neural trojans," in Proc. IEEE Int. Conf. Comput. Des., 2017, pp. 45–48, doi: 10.1109/ICCD.2017.16.
[60] Y. Li, Y. Jiang, Z. Li, and S.-T. Xia, "Backdoor learning: A survey," IEEE Trans. Neural Netw. Learn. Syst., early access, 2022, doi: 10.1109/TNNLS.2022.3182979.
[61] Y. Yang and S. Newsam, "Bag-of-visual-words and spatial extensions for land-use classification," in Proc. SIGSPATIAL Int. Conf. Adv. Geographic Inf. Syst., 2010, pp. 270–279, doi: 10.1145/1869790.1869829.
[62] E. Brewer, J. Lin, and D. Runfola, "Susceptibility & defense of satellite image-trained convolutional networks to backdoor attacks," Inf. Sci., vol. 603, pp. 244–261, Jul. 2022, doi: 10.1016/j.ins.2022.05.004.
[63] X. Chen, C. Liu, B. Li, K. Lu, and D. Song, "Targeted backdoor attacks on deep learning systems using data poisoning," 2017, arXiv:1712.05526.
[64] A. Nguyen and A. Tran, "WaNet–Imperceptible warping-based backdoor attack," 2021, arXiv:2102.10369.
[65] A. Turner, D. Tsipras, and A. Madry, "Label-consistent backdoor attacks," 2019, arXiv:1912.02771.
[66] A. Saha, A. Subramanya, and H. Pirsiavash, "Hidden trigger backdoor attacks," in Proc. AAAI Conf. Artif. Intell., 2020, vol. 34, no. 7, pp. 11,957–11,965, doi: 10.1609/aaai.v34i07.6871.
[67] Y. Li, Y. Li, B. Wu, L. Li, R. He, and S. Lyu, "Invisible backdoor attack with sample-specific triggers," in Proc. IEEE Int. Conf. Comput. Vis., 2021, pp. 16,463–16,472, doi: 10.1109/ICCV48922.2021.01615.
[68] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 2014, arXiv:1409.1556.
[69] E. Brewer, J. Lin, P. Kemper, J. Hennin, and D. Runfola, "Predicting road quality using high resolution satellite imagery: A transfer learning approach," PLoS One, vol. 16, no. 7, Jul. 2021, Art. no. e0253370, doi: 10.1371/journal.pone.0253370.



[70] N. Dräger, Y. Xu, and P. Ghamisi, "Backdoor attacks for remote sensing data with wavelet transform," 2022, arXiv:2211.08044.
[71] L. Chun-Lin, "A tutorial of the wavelet transform," Dept. Elect. Eng., Nat. Taiwan Univ., Taipei, Taiwan, 2010.
[72] K.-Y. Tsao, T. Girdler, and V. G. Vassilakis, "A survey of cyber security threats and solutions for UAV communications and flying ad-hoc networks," Ad Hoc Netw., vol. 133, Aug. 2022, Art. no. 102894, doi: 10.1016/j.adhoc.2022.102894.
[73] A. Rugo, C. A. Ardagna, and N. E. Ioini, "A security review in the UAVNet era: Threats, countermeasures, and gap analysis," ACM Comput. Surv., vol. 55, no. 1, pp. 1–35, Jan. 2022, doi: 10.1145/3485272.
[74] S. Islam, S. Badsha, I. Khalil, M. Atiquzzaman, and C. Konstantinou, "A triggerless backdoor attack and defense mechanism for intelligent task offloading in multi-UAV systems," IEEE Internet Things J., vol. 10, no. 7, pp. 5719–5732, Apr. 2022, doi: 10.1109/JIOT.2022.3172936.
[75] C. Beretas et al., "Smart cities and smart devices: The back door to privacy and data breaches," Biomed. J. Sci. Technol. Res., vol. 28, no. 1, pp. 21,221–21,223, Jun. 2020, doi: 10.26717/BJSTR.2020.28.004588.
[76] S. Hashemi and M. Zarei, "Internet of Things backdoors: Resource management issues, security challenges, and detection methods," Trans. Emerg. Telecommun. Technol., vol. 32, no. 2, Feb. 2021, Art. no. e4142, doi: 10.1002/ett.4142.
[77] B. G. Doan, E. Abbasnejad, and D. C. Ranasinghe, "Februus: Input purification defense against trojan attacks on deep neural network systems," in Proc. Annu. Comput. Secur. Appl. Conf., 2020, pp. 897–912, doi: 10.1145/3427228.3427264.
[78] S. Ding, Y. Tian, F. Xu, Q. Li, and S. Zhong, "Poisoning attack on deep generative models in autonomous driving," in Proc. EAI Secur. Commun., 2019, pp. 299–318.
[79] P. Kumar, G. P. Gupta, and R. Tripathi, "TP2SF: A trustworthy privacy-preserving secured framework for sustainable smart cities by leveraging blockchain and machine learning," J. Syst. Archit., vol. 115, 2021, Art. no. 101954, doi: 10.1016/j.sysarc.2020.101954.
[80] Q. Liu et al., "A collaborative deep learning microservice for backdoor defenses in industrial IoT networks," Ad Hoc Netw., vol. 124, Jan. 2022, Art. no. 102727, doi: 10.1016/j.adhoc.2021.102727.
[81] Y. Wang, E. Sarkar, W. Li, M. Maniatakos, and S. E. Jabari, "Stop-and-go: Exploring backdoor attacks on deep reinforcement learning-based traffic congestion control systems," IEEE Trans. Inf. Forensics Security, vol. 16, pp. 4772–4787, Sep. 2021, doi: 10.1109/TIFS.2021.3114024.
[82] P. Tam, S. Math, C. Nam, and S. Kim, "Adaptive resource optimized edge federated learning in real-time image sensing classifications," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 10,929–10,940, Oct. 2021, doi: 10.1109/JSTARS.2021.3120724.
[83] D. Li, M. Wang, Z. Dong, X. Shen, and L. Shi, "Earth observation brain (EOB): An intelligent Earth observation system," Geo-Spatial Inf. Sci., vol. 20, no. 2, pp. 134–140, Jun. 2017, doi: 10.1080/10095020.2017.1329314.
[84] Q. Yang, Y. Liu, T. Chen, and Y. Tong, "Federated machine learning: Concept and applications," ACM Trans. Intell. Syst. Technol., vol. 10, no. 2, pp. 1–19, Jan. 2019, doi: 10.1145/3298981.
[85] C. Zhang, Y. Xie, H. Bai, B. Yu, W. Li, and Y. Gao, "A survey on federated learning," Knowl.-Based Syst., vol. 216, Mar. 2021, Art. no. 106775, doi: 10.1016/j.knosys.2021.106775.
[86] Q. Li et al., "A survey on federated learning systems: Vision, hype and reality for data privacy and protection," IEEE Trans. Knowl. Data Eng., vol. 35, no. 4, pp. 3347–3366, Apr. 2023, doi: 10.1109/TKDE.2021.3124599.
[87] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, "Federated learning: Challenges, methods, and future directions," IEEE Signal Process. Mag., vol. 37, no. 3, pp. 50–60, May 2020, doi: 10.1109/MSP.2020.2975749.
[88] P. Kairouz et al., "Advances and open problems in federated learning," Found. Trends Mach. Learn., vol. 14, nos. 1–2, pp. 1–210, Jun. 2021, doi: 10.1561/2200000083.
[89] J. Mills, J. Hu, and G. Min, "Multi-task federated learning for personalised deep neural networks in edge computing," IEEE Trans. Parallel Distrib. Syst., vol. 33, no. 3, pp. 630–641, Mar. 2022, doi: 10.1109/TPDS.2021.3098467.
[90] L. Li, Y. Fan, M. Tse, and K.-Y. Lin, "A review of applications in federated learning," Comput. Ind. Eng., vol. 149, Nov. 2020, Art. no. 106854, doi: 10.1016/j.cie.2020.106854.
[91] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. Aguera y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Proc. Int. Conf. Artif. Intell. Statist., PMLR, 2017, pp. 1273–1282.
[92] B. Hu, Y. Gao, L. Liu, and H. Ma, "Federated region-learning: An edge computing based framework for urban environment sensing," in Proc. IEEE Global Commun. Conf., 2018, pp. 1–7, doi: 10.1109/GLOCOM.2018.8647649.
[93] Y. Gao, L. Liu, B. Hu, T. Lei, and H. Ma, "Federated region-learning for environment sensing in edge computing system," IEEE Trans. Netw. Sci. Eng., vol. 7, no. 4, pp. 2192–2204, Aug. 2020, doi: 10.1109/TNSE.2020.3016035.
[94] C. Xu and Y. Mao, "An improved traffic congestion monitoring system based on federated learning," Information, vol. 11, no. 7, Jul. 2020, Art. no. 365, doi: 10.3390/info11070365.
[95] M. Alazab, R. M. Swarna Priya, M. Parimala, P. K. R. Maddikunta, T. R. Gadekallu, and Q.-V. Pham, "Federated learning for cybersecurity: Concepts, challenges, and future directions," IEEE Trans. Ind. Informat., vol. 18, no. 5, pp. 3501–3509, May 2022, doi: 10.1109/TII.2021.3119038.
[96] Z. M. Fadlullah and N. Kato, "On smart IoT remote sensing over integrated terrestrial-aerial-space networks: An asynchronous federated learning approach," IEEE Netw., vol. 35, no. 5, pp. 129–135, Sep./Oct. 2021, doi: 10.1109/MNET.101.2100125.
[97] P. Chhikara, R. Tekchandani, N. Kumar, and S. Tanwar, "Federated learning-based aerial image segmentation for collision-free movement and landing," in Proc. ACM MobiCom Workshop Drone Assisted Wireless Commun. 5G Beyond, 2021, pp. 13–18, doi: 10.1145/3477090.3481051.
[98] W. Lee, "Federated reinforcement learning-based UAV swarm system for aerial remote sensing," Wireless Commun. Mobile Comput., early access, Jan. 2022, doi: 10.1155/2022/4327380.

JUNE 2023 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE 83




PERSPECTIVES
NIRAV PATEL

Generative Artificial Intelligence and Remote Sensing
A perspective on the past and the future

The first phase of 2023 has been marked with an explosion of interest around generative AI systems, which generate content. This type of machine learning promises to enable the creation of synthetic data and outputs in many different modalities. OpenAI's ChatGPT has certainly taken the world by storm and opened discourse on how the technology should be used.

Historically, generative models are certainly not new, dating back to the 1950s, with hidden Markov models and Gaussian mixture models [1], [2], [3]. The recent development of deep learning has allowed for generative models' utility. In the early days of deep generative models, N-gram language modeling was utilized to generate sentences in natural language processing (NLP) [4]. This modeling did not scale well to generating long sentences, and hence, recurrent neural networks (RNNs) were introduced to deal with longer dependencies [5]. RNNs were followed by the development of long short-term memory [6] and gated recurrent unit methods, which leveraged gating mechanisms to control memory usage during training [7].

In the computer vision arena (more aligned with remote sensing), traditional image generation algorithms utilized techniques such as texture mapping [8] and texture synthesis [9]. These methods were very limited and could not generate complex and diverse images. The introduction of generative adversarial networks (GANs) [10] and variational autoencoders [11] in the past decade or so has allowed for more control over the image generation process to generate high-resolution images.

Generative models in different modalities felt the advancement of the field in its totality with the introduction of the transformer architecture [12]. Large language models, such as the generative pretrained transformer (GPT), adopt this architecture as the primary building block, which initially had significant utility in the NLP world before later modifications to this architecture allowed for application to image-based streams of information [13], [14], [15], [16], [17]. Transformers consist of an encoder and a decoder, where the encoder takes in an input sequence and generates hidden representations, while the decoder has a multihead attention and feedforward NN [1]. See Figure 1 for an NLP example of a sentence being translated from English to Japanese.

The emergence of these techniques has allowed for the creation of foundation models, which are the technical scaffolding behind generative AI capabilities. Foundation models, such as ChatGPT, learn from unlabeled datasets, which saves a significant amount of time and the expense of manual annotation and human attention. However, there is a reason why the most well-resourced companies in the world have made an attempt at generating these models [19]. First, you need the best computer scientists and engineers to maintain and tweak foundation models, and second, when these models are trained on data from the whole Internet, the computational cost is not insignificant. OpenAI's GPT-3 was trained on roughly 45 TB of text data (equivalent to 1 million feet of bookshelf space), which cost several million dollars (estimated) [19].

(The views expressed in this publication reflect those of the author and do not necessarily reflect the official policy or position of the U.S. Government or the Department of Defense (DoD).)

Digital Object Identifier 10.1109/MGRS.2023.3275984
Date of current version: 30 June 2023

With remote sensing applications, anecdotally, I have witnessed the rise of the use of GANs over the past few years. This deep learning technique, as mentioned



before, is an NN architecture that conducts the training process as a competition between a generator and a discriminator to produce new data conforming to learned patterns. Since GANs are able to learn from remote sensing data without supervision, some applications that the community has found useful include (but are not limited to) data generation/augmentation, superresolution, panchromatic sharpening, haze removal and restoration, and cloud removal [20], [21], [22], [23]. I strongly believe that the ever-increasing availability of remotely sensed data and the availability of relatively robust computational power in local and distributed (i.e., cloud)-based environments will make GANs only more useful to the remote sensing community in the coming years and may even lead to some bespoke foundation models, especially with the open source remote sensing efforts that Google [25], Microsoft [26], and Amazon [27] are funding.

[FIGURE 1. An NLP translation of a sentence from English to Japanese [18]. The diagram shows a transformer encoder mapping the input sentence "Optimus Prime is a cool robot" to per-token hidden states (Hidden State 1 {Optimus}, Hidden State 2 {Prime}, Hidden State 3 {is}, ..., Hidden State 6 {Robot}), which a decoder consumes to produce the translated sentence.]

In other remote sensing areas, such as image segmentation, the foundation models are already here (within days of writing this piece). In what could be the example for other remote sensing foundation models, Meta AI released Segment Anything [28], which is a new task, model, and dataset for image segmentation. Meta claims to have "built the largest segmentation dataset to date, with over 1 billion masks on 11M licensed and privacy respected images." Social media has many remote sensing companies, scientists, and enthusiasts alike ingesting satellite imagery into the model and yielding results, with varying utility. Meta's paper provides more technical detail on how the foundation model is architected (Figure 2), but in my opinion, the true uniqueness and value lie in how massive the dataset is and how well labeled it is in comparison to other image segmentation datasets of its kind.

[FIGURE 2. Technical detail of the Segment Anything architecture [28]. An image encoder produces an image embedding; a prompt encoder (with convolution, "Conv") embeds mask, point, box, and text prompts; a mask decoder combines the image embedding and prompt embeddings to output valid masks, each with a score.]

The authors of Segment Anything admit that their model can "miss fine structures, hallucinates small disconnected components at times, and does not produce boundaries as crisply as more computationally intensive methods." They posit that more dedicated interactive segmentation methods would outperform their model when many more points are provided.

My prediction for the future is that as we see the computer vision world make more investments in foundation models related to image processing, the remote sensing and geosciences world will stand to benefit from large investments by the world's well-resourced tech companies. Advancements in computer vision models, due to the development of foundation models, however, will not always be tailored toward the needs of remote sensing. Closely examining the data being fed into these foundation models and how exactly data are being labeled within these models will allow discerning remote sensing practitioners to get the most value out of using such computer vision models.

Hence, a major caution to users of foundation models for remote sensing applications is the same caution that applies for applications of foundation models to other types of machine learning applications: the limits of utility for outputs are tied closely to the quantity and quality of the labeled data associated with the model. Even the most sophisticated foundation models cannot escape the maxim of "garbage in, garbage out."

Well-resourced technology companies also have their monetary interests that ultimately influence the foundation models that they create. It is important for remote sensing practitioners to understand this dynamic. For example,



in exchange for providing access to lightweight and easy-to-access interfaces, such as ChatGPT, all the data that are put in by the user can ultimately be utilized by OpenAI for other purposes. While the service does not cost any money for the user, ChatGPT still will gain insight from your inquiry to make itself better. Indeed, nothing truly comes for free, especially with the use of foundation models and the user interfaces associated with them.

Finally, it is worth discussing the nefarious use cases that this technology can be used for, especially in the context of remote sensing. Synthetic data generation could be utilized, for example, to create fake satellite images that could provide the impression to an undiscerning user of information and evidence of something that doesn't exist, and it could hide potential evidence. Consider an example of a country trying to hide changes around an area (land surface changes) to mask human rights violations. Synthetic data could be provided in the same place as a real satellite image was supposed to be provided in a data feed that is accessed by the public, giving a false sense of what the reality of the situation is.

It is, thus, extremely important that the uses of synthetic data are also well defined and regulated by the community of remote sensing practitioners. Creating methods to identify synthetic remote sensing data would be the most effective in the near term, in my opinion. I also believe that synthetic data will be extremely useful in combination with real remote sensing data to train remote sensing models that aim at "few-shot" circumstances (i.e., detecting rare objects).

Ultimately, the adoption of an extremely novel and effective technology in its nascent stages within a community requires a focus on the ethical implications of the use of the technology in each circumstance. The same holds true for our field of remote sensing, and I have confidence in our community to set the appropriate guardrails on the limits of use of this technology.

ACKNOWLEDGEMENT
None of this article was written by generative artificial intelligence (AI).

AUTHOR INFORMATION
Nirav Patel (niravpatel@ufl.edu) is an affiliate faculty member in the Department of Geography at the University of Florida, Gainesville, FL 32611 USA. He is also a senior scientist in the Office of the Secretary of Defense at the U.S. Defense Innovation Unit, and a program manager for open-source/non-International Traffic in Arms Regulations (ITAR) restricted remote sensing efforts.

REFERENCES
[1] Y. Cao et al., "A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to chatGPT," 2023, arXiv:2303.04226.
[2] K. Knill and S. Young, "Hidden Markov models in speech and language processing," in Corpus-Based Methods in Language and Speech Processing, S. Young and G. Bloothooft, Eds. Dordrecht, The Netherlands: Springer, 1997, pp. 27–68.
[3] D. A. Reynolds, "Gaussian mixture models," in Encyclopedia Biometrics, vol. 741, S. Z. Li and A. Jain, Eds. Boston, MA, USA: Springer, 2009, pp. 659–663.
[4] Y. Bengio, R. Ducharme, and P. Vincent, "A neural probabilistic language model," in Proc. Adv. Neural Inf. Process. Syst., 2000, vol. 13, pp. 932–938.
[5] T. Mikolov, S. Kombrink, L. Burget, J. Černocký, and S. Khudanpur, "Extensions of recurrent neural network language model," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), May 2011, pp. 5528–5531, doi: 10.1109/ICASSP.2011.5947611.
[6] A. Graves, "Long short-term memory," in Supervised Sequence Labelling With Recurrent Neural Networks. Berlin, Germany: Springer-Verlag, 2012, pp. 37–45.
[7] R. Dey and F. M. Salem, "Gate-variants of gated recurrent unit (GRU) neural networks," in Proc. 60th IEEE Int. Midwest Symp. Circuits Syst. (MWSCAS), Aug. 2017, pp. 1597–1600, doi: 10.1109/MWSCAS.2017.8053243.
[8] P. S. Heckbert, "Survey of texture mapping," IEEE Comput. Graph. Appl., vol. 6, no. 11, pp. 56–67, Nov. 1986, doi: 10.1109/MCG.1986.276672.
[9] A. A. Efros and T. K. Leung, "Texture synthesis by non-parametric sampling," in Proc. 7th IEEE Int. Conf. Comput. Vision, Sep. 1999, vol. 2, pp. 1033–1038, doi: 10.1109/ICCV.1999.790383.
[10] I. Goodfellow et al., "Generative adversarial networks," Commun. ACM, vol. 63, no. 11, pp. 139–144, Oct. 2020, doi: 10.1145/3422622.
[11] D. P. Kingma and M. Welling, "Auto-encoding variational bayes," 2013, arXiv:1312.6114.
[12] A. Vaswani et al., "Attention is all you need," in Proc. 30th Adv. Neural Inf. Process. Syst., 2017, pp. 6000–6010.
[13] T. Brown et al., "Language models are few-shot learners," in Proc. Adv. Neural Inf. Process. Syst., 2020, vol. 33, pp. 1877–1901.
[14] A. Ramesh et al., "Zero-shot text-to-image generation," in Proc. Int. Conf. Mach. Learn., PMLR, Jul. 2021, pp. 8821–8831.
[15] M. Lewis et al., "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," 2019, arXiv:1910.13461.
[16] A. Dosovitskiy et al., "An image is worth 16x16 words: Transformers for image recognition at scale," 2020, arXiv:2010.11929.
[17] Z. Liu et al., "Swin transformer: Hierarchical vision transformer using shifted windows," in Proc. IEEE/CVF Int. Conf. Comput. Vision, 2021, pp. 10,012–10,022.
[18] D. J. Rogel-Salazar. "Transformers models in machine learning: Self-attention to the rescue." Domino. Accessed: Apr. 13, 2023. [Online]. Available: https://www.dominodatalab.com/blog/transformers-self-attention-to-the-rescue
[19] "What is generative AI?" McKinsey & Company. Accessed: Apr. 13, 2023. [Online]. Available: https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai
[20] Y. Weng et al., "Temporal co-attention guided conditional generative adversarial network for optical image synthesis," Remote Sens., vol. 15, no. 7, Mar. 2023, Art. no. 1863, doi: 10.3390/rs15071863.

(continued on p. 100)



TECHNICAL COMMITTEES
DALTON LUNGA, SILVIA ULLO, UJJWAL VERMA, GEORGE PERCIVALL, FABIO PACIFICI, AND RONNY HÄNSCH

Analysis-Ready Data and FAIR-AI—Standardization of Research Collaboration and Transparency Across Earth-Observation Communities

The IEEE Geoscience and Remote Sensing Society (GRSS) Image Analysis and Data Fusion Technical Committee (IADF TC) serves as a global, multidisciplinary network for geospatial image analysis, e.g., machine learning (ML), image and signal processing, and computer vision (CV). The IADF is also responsible for defining the directions of the data fusion contests while paying attention to remote sensing (RS) data's multisensor, multiscale, and multitemporal integration challenges.

Among recent activities, the IADF is collaborating with the GRSS TC on Standards for Earth Observation (GSEO) and other groups to promote two complementary initiatives: 1) reducing the overhead cost of preprocessing raw data and 2) improving infrastructure to support the community reuse of Earth-observation (EO) data and artificial intelligence (AI) tools. The EO community has engaged the aforementioned 1) via a series of workshops on analysis-ready data (ARD) [1], which has laid bare that current best practices are provider specific [2]. Engagements in developing the aforementioned 2) are in the early stages. They lack a guiding framework similar to findable, accessible, interoperable, reusable (FAIR) [3], outlining standardized principles for best data stewardship and governance practices.

Developing templates and tools for consistently formatting/preprocessing data within a discipline is becoming common in many domains. It is a practice that is helping to increase research transparency and collaboration. Although not broadly adopted in EO, such a practice could enable data and derived AI tools to become easily accessible and reusable. However, the immense diversity of modalities and sensing instruments across EO/RS makes research development and adoption challenging.

This article provides a first outlook on guidelines for the EO/RS communities to create/adapt ARD data formats that integrate with various AI workflows. Such best practices have the potential to expand the impacts of image analysis and data fusion with AI and make it simpler for data providers to provide data that are more interoperable and reusable in cross-modal applications.

WHY STANDARD ARD, FAIR DATA, AND AI EO SERVICES?
Poor best practices and the lack of standardized templates can present barriers to advancing scientific research and knowledge generation in EO. For example, synthesizing cross-modal data can be extremely time consuming and carries huge overhead costs when preprocessing raw data. In addition, with AI tools being increasingly pervasive across humanitarian applications, the time needed to generate insights is becoming critical, and reusability is preferred. At the same time, duplication of efforts is costly and hinders fast progress.

Digital Object Identifier 10.1109/MGRS.2023.3267904
Date of current version: 30 June 2023
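To make that preprocessing overhead concrete, one of the most common radiometric ARD steps is rescaling raw digital numbers (DNs) to top-of-atmosphere (TOA) reflectance with band-specific linear coefficients and a solar-elevation correction, the convention used, for example, in Landsat Level-1 products. The sketch below is illustrative only: the coefficient values and sun elevation are hypothetical placeholders, and real values would come from each scene's metadata file.

```python
import math

def dn_to_toa_reflectance(dn, mult, add, sun_elev_deg):
    """Rescale a raw digital number (DN) to top-of-atmosphere (TOA)
    reflectance: rho = (mult * DN + add) / sin(sun_elevation).

    `mult` and `add` are band-specific rescaling coefficients taken
    from the product metadata; `sun_elev_deg` is the scene-center
    sun elevation in degrees.
    """
    rho = mult * dn + add
    return rho / math.sin(math.radians(sun_elev_deg))

# Hypothetical calibration values -- real ones come from scene metadata.
REFLECTANCE_MULT = 2.0e-5
REFLECTANCE_ADD = -0.1
SUN_ELEVATION = 60.0  # degrees

raw_pixels = [7500, 10000, 12500]  # raw DNs for three pixels
toa = [dn_to_toa_reflectance(dn, REFLECTANCE_MULT, REFLECTANCE_ADD,
                             SUN_ELEVATION) for dn in raw_pixels]
print([round(r, 4) for r in toa])  # -> [0.0577, 0.1155, 0.1732]
```

Encapsulating such per-sensor conversions behind a shared, containerized interface is exactly the duplication that standard ARD aims to eliminate, so each downstream user does not reimplement it.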



Standards for data have been proposed as essential elements to advance EO sciences. The Open Geospatial Consortium's Sensor Observation Service standard defines a web service interface that allows pulling observations, sensor metadata, and representations of observed features [4]. Such accredited standards help outline broad governing protocols, but can take longer to build governing processes and consensus. In contrast, grassroots efforts [5], [6] can foster efficient adaptation of best practices to harmonize cross-modal/-sensor data, creating datasheets and model cards for AI tool types, with working groups helping to maintain cross pollination of taxonomies.

ARD
With the increased availability of observations from multiple EO missions, merging these observations allows for better temporal coverage and higher spatial resolutions. Specifically, ARD can be created from observations across modalities. The ARD could be processed to ensure easy utilization for AI-based EO applications. Standard ARD components include atmospheric compensation, orthorectification, pansharpening, color balancing, bundle block adjustment, and grid alignment. In its advancement, future ARD processes could see radiometric and geometric adjustments applied to data across modalities to create "harmonized" data [7], enabling study of the evolution of a given location through time using information from multiple modalities. Figure 1 shows the envisioned standardization process informed by ARD and FAIR principles [8]. For example, suitable cross-modal data formats could be established to create AI-ready datasets compatible with open source ML frameworks, accelerating the path from image analysis and data fusion research prototyping to production deployment. As FAIR principles continue to pave the way for the state of practice in other scientific domains, the EO community could benefit by following suit and introducing FAIR-EO definitions to guide research transparency and collaboration.

TOWARD CROSS-MODAL ARD AND FAIR DATA PRINCIPLES
The aforementioned shortcomings present an opportunity to collaborate toward a concise and measurable set of cross-modal FAIR ARD and FAIR model principles, ultimately advancing image analysis and data fusion algorithmic impacts at scale. The overarching goal is to harness best-practice ARD developments to minimize user burden by harmonizing heterogeneous imagery data and promoting FAIR principles for both EO data and AI products.

A joint IADF-GSEO paper submitted to the 2023 International Geoscience and Remote Sensing Symposium revisits current best practices and outlines guidelines for advancing EO data and derivative AI products for broader community use.

The remaining work needs to start by revisiting common ARD essentials and aim to forge their evolution with FAIR principles to support cross-modal-based ML and CV opportunities emerging as central aspects for solving complex EO challenges. An integrated framework (presented as a general scheme in Figure 1) that combines ARD and FAIR for modernizing the state of practice in AI for EO must
[Figure 1: flow diagram. FAIR and analysis-ready EO datasets feed the creation of AI-ready data (train, validate, and test splits on FAIR and ARD EO data) with STAC-informed datasheets, STAC-based item identifiers, metadata attributed with CEOS ARD specifications, and containerized ARD harmonization scripts and dataloaders; baseline AI models are trained with accelerated frameworks (PyTorch DDP or DeepSpeed), then validated, tested, containerized, and deployed at scale (TensorRT); published FAIR AI models include model cards, Jupyter notebooks to demo model deployment, links to metadata, and sample test data, and are linked to standardized EO data hubs housing FAIR AI-ready data (DASE, EOD, ...).]
FIGURE 1. ARD-motivated FAIR EO data and FAIR-AI model principles integrated into a common EO process. STAC: SpatioTemporal Asset Catalog; DASE: Data and Algorithm Standard Evaluation; EOD: Earth observation database; DDP: Distributed Data Parallel.



be contextualized. The framework will depend on several building blocks, including software scripts that demonstrate data harmonization, creation of datasheets for AI-ready datasets, creation of model cards for FAIR-AI models, measurement/validation metrics, and standardized environments for model deployment.

FAIR-AI MODELS
Recent developments from the ML community [9], [10] could provide initial building blocks to advance metadata standardization, but for EO applications [11], [12]. The ideas of developing datasheets [9] for data and model cards [10] for models have been introduced as mechanisms to organize the essential facts about datasets and ML models in a structured way. Model cards are short documents accompanying trained ML models that provide a benchmarked evaluation in various conditions. For EO/RS, such conditions could include different cultural and geographic locations, seasonality, sensor resolution, and object feature types relevant to the intended application domain. The RS community could aim to develop model cards to catalog model performance characteristics, intended use cases, potential pitfalls, or other information to help users evaluate suitability or compose detailed queries to match their application contexts. Similarly, each data source should be developed with a datasheet documenting its motivation, composition, collection process, recommended uses, and models generated from the data.

STANDARDS FOR EVALUATION
Evaluation metrics provide an effective tool for assessing the performance of AI models. Most of the evaluation metrics for CV-based EO applications are adapted from traditional CV tasks (such as image classification, semantic segmentation, and so on). These traditional CV metrics were designed for natural images. In addition, different evaluation metrics are sensitive to different types of errors [13]. Focusing on only one metric will result in a biased AI model. EO applications need community agreed-upon holistic evaluation metrics to develop a path for characterizing research/operational progress, and limits for AI-ready EO datasets and AI models when deployed in real-world applications.

WHAT THE GRSS COMMUNITY IS GOING TO DO
The first step is to propose a framework for cross-modal ARD processing, and to provide definitions of what FAIR means for EO datasets and AI models. The following briefly summarizes the corresponding details and definitions of FAIR elements for EO:
◗◗ Findable:
•• RS training and validation image metadata should first be standardized through the SpatioTemporal Asset Catalog (STAC) [12] family of specifications to create structured datasheets and easy-to-query formats.
•• Cross-modal datasheets that provide a detailed description of the datasets, including resolution and the number of channels, sensor type, and collection date.
•• Metadata contain STAC-based item identifiers that enable other users to search for data.
•• Datasheets that contain machine-readable keywords, with metadata that are easy for humans and machines to find.
•• Dataset metadata should be written using RS-based attributes similar to Committee on Earth Observation Satellites (CEOS) ARD specifications [14].
•• Develop open-consensus, GRSS-based ARD standards that advance harmonization of ARD datasets by vendors and imagery providers. Coordination can be established to consider CEOS ARD specifications and include broader EO expert contributions, e.g., the GRSS and industry.
◗◗ Accessible:
•• ARD datasets, including EO benchmarks, should be made available and shared through searchable public repositories, with the data retrievable by standardized interfaces (application programming interfaces [APIs]), including access by identifier.
•• Human–computer-interaction-searchable repositories with tools (such as an Earth observation database [15]) that search cross-modal datasets should be developed.
•• Using STAC specifications, datasheets should be discoverable by humans and machines.
◗◗ Interoperable:
•• Benchmark EO datasets should be in common formats and standardized through shared ARD best practices.
•• Datasheets and model cards that contain references to training data, hyperparameter settings, validation metrics, and hardware platforms used in experiments.
•• Data fusion experiments should be published in containerized environments to encourage interoperability and reproducibility across computing platforms.
•• Conduct experiments on data fusion using ARD data from multiple sensors accessed using open APIs. The experiments should involve multiple software applications that identify best practices and needed harmonization to lessen analysts' burden in creating fusion workflows.
◗◗ Reusable:
•• Publishing of data, datasheets, model cards, and model weights should be supported by Jupyter notebooks for quick human interaction to understand and test models on new data.
•• Datasheets that include details in machine-readable format on how data were collected.
•• Establish domain-relevant community standards for AI-based data fusion evaluation methods based on the GRSS IADF and data and algorithm standard evaluation results [16].
After that, standard ARD components must be highlighted while focusing on emerging needs at the nexus of



cross-modal, cross-sensor EO/RS; image analysis; and data fusion technologies. Essential components must be identified to establish standards for systematic advancement and provision of image analysis and data fusion methods in the era of AI and big EO data.
The GRSS and, in particular, the IADF and the GSEO will continue to work toward these goals. However, standards do not exist in isolation. There is an application-related context that needs to be respected, existing work that needs to be incorporated, best practices that should be adapted, and communities that need to validate the proposed principles by adhering to and using them. Thus, we actively reach out and invite other groups working toward similar goals to focus our efforts and collaborate, as a new standard needs to be created by the community for the community to be successful.

AUTHOR INFORMATION
Dalton Lunga (lungadd@ornl.gov) is with the Oak Ridge National Laboratory, Oak Ridge, TN 37830 USA, and is the IEEE Geoscience and Remote Sensing Society Image Analysis and Data Fusion Working Group on Machine/Deep Learning for Image Analysis lead. He is a Senior Member of IEEE.
Silvia Ullo (ullo@unisannio.it) is with the University of Sannio, 82100 Benevento, Italy, and is the IEEE Geoscience and Remote Sensing Society Image Analysis and Data Fusion Working Group on Machine/Deep Learning for Image Analysis co-lead. She is a Senior Member of IEEE.
Ujjwal Verma (ujjwal.verma@manipal.edu) is with the Department of Electronics and Communication Engineering, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal 576104, India, and is the IEEE Geoscience and Remote Sensing Society Image Analysis and Data Fusion Working Group on Machine/Deep Learning for Image Analysis co-lead. He is a Senior Member of IEEE.
George Percivall (percivall@ieee.org) is with GeoRoundtable, Annapolis, MD 21114 USA, and is the IEEE Geoscience and Remote Sensing Society Technical Committee on Standards for Earth Observation cochair. He is a Senior Member of IEEE.
Fabio Pacifici is with Maxar Technologies Inc., Westminster, CO 80234 USA, and is the IEEE Geoscience and Remote Sensing Society vice president of technical activities. He is a Senior Member of IEEE.
Ronny Hänsch (ronny.haensch@dlr.de) is with the German Aerospace Center, 82234 Weßling, Germany, and is the IEEE Geoscience and Remote Sensing Society Image Analysis and Data Fusion chair. He is a Senior Member of IEEE.

REFERENCES
[1] Z. Ignacio. "Analysis ready data workshops." ARD.Zone. Accessed: Dec. 1, 2022. [Online]. Available: https://www.ard.zone
[2] J. L. Dwyer, D. P. Roy, B. Sauer, C. B. Jenkerson, H. K. Zhang, and L. Lymburner, "Analysis ready data: Enabling analysis of the Landsat archive," Remote Sens., vol. 10, no. 9, Aug. 2018, Art. no. 1363, doi: 10.3390/rs10091363. [Online]. Available: https://www.mdpi.com/2072-4292/10/9/1363
[3] M. D. Wilkinson et al., "The FAIR guiding principles for scientific data management and stewardship," Scientific Data, vol. 3, no. 1, Mar. 2016, Art. no. 160018, doi: 10.1038/sdata.2016.18.
[4] "Sensor observation service," Open Geospatial Consortium, Arlington, VA, USA, 2023. [Online]. Available: https://www.ogc.org/standard/sos/
[5] "Standards working groups," Open Geospatial Consortium, Arlington, VA, USA, 2023. [Online]. Available: https://www.ogc.org/about-ogc/committees/swg/
[6] "Data readiness," Earth Science Information Partners, Severna Park, MD, USA, 2023. [Online]. Available: https://wiki.esipfed.org/Data_Readiness
[7] M. Claverie et al., "The harmonized Landsat and Sentinel-2 surface reflectance data set," Remote Sens. Environ., vol. 219, pp. 145–161, Dec. 2018, doi: 10.1016/j.rse.2018.09.002. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0034425718304139
[8] "FAIR for machine learning (FAIR4ML) IG," Research Data Alliance, 2023. [Online]. Available: https://www.rd-alliance.org/groups/fair-machine-learning-fair4ml-ig
[9] T. Gebru et al., "Datasheets for datasets," 2018. [Online]. Available: https://arxiv.org/abs/1803.09010
[10] M. Mitchell et al., "Model cards for model reporting," in Proc. Conf. Fairness, Accountability, Transparency, Jan. 2019, pp. 220–229, doi: 10.1145/3287560.3287596.
[11] D. Lunga and P. Dias, "Advancing data fusion in earth sciences," in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2022, pp. 5077–5080, doi: 10.1109/IGARSS46834.2022.9883176.
[12] J. Rincione and M. Hanson, "CMR SpatioTemporal Asset Catalog (CMR-STAC) documentation," NASA Earth Science, Nat. Aeronaut. Space Admin., Washington, DC, USA, 2021. [Online]. Available: https://wiki.earthdata.nasa.gov/display/ED/CMR+SpatioTemporal+Asset+Catalog+%28CMR-STAC%29+Documentation
[13] B. Cheng, R. Girshick, P. Dollár, A. C. Berg, and A. Kirillov, "Boundary IoU: Improving object-centric image segmentation evaluation," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2021, pp. 15,329–15,337, doi: 10.1109/CVPR46437.2021.01508.
[14] "CEOS analysis ready data," Committee on Earth Observation Satellites, 2022. [Online]. Available: https://ceos.org/ard/
[15] M. Schmitt, P. Ghamisi, N. Yokoya, and R. Hänsch, "EOD: The IEEE GRSS earth observation database," in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), 2022, pp. 5365–5368, doi: 10.1109/IGARSS46834.2022.9884725.
[16] GRSS IADF Technical Committee. "GRSS data and algorithm standard evaluation." GRSS DASE. Accessed: May 3, 2023. [Online]. Available: http://dase.grss-ieee.org/
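The Findable guidelines above center on STAC-based item identifiers and machine-readable datasheet attributes. As a concrete illustration, the following Python sketch assembles a minimal STAC-style item as a plain dictionary carrying the fields the guidelines name (identifier, sensor type, resolution, number of channels, collection date). The helper name, the sample values, and the `ard:bands` key are illustrative assumptions only; `ard:bands` is not part of the STAC specification.

```python
import json
from datetime import datetime, timezone

def make_stac_item(item_id, bbox, acquired, platform, n_bands, gsd_m):
    """Assemble a minimal STAC-style item as a plain dictionary.

    Carries the datasheet fields called out in the Findable guidelines:
    a searchable identifier, sensor/platform, ground sample distance,
    number of channels, and collection date.
    """
    west, south, east, north = bbox
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": item_id,  # STAC-based item identifier, searchable by other users
        "bbox": list(bbox),
        "geometry": {  # footprint polygon, closed back to its first vertex
            "type": "Polygon",
            "coordinates": [[
                [west, south], [east, south], [east, north],
                [west, north], [west, south],
            ]],
        },
        "properties": {
            "datetime": acquired.isoformat(),  # collection date
            "platform": platform,              # sensor type
            "gsd": gsd_m,                      # spatial resolution in meters
            "ard:bands": n_bands,              # hypothetical ARD extension field
        },
        "assets": {},
        "links": [],
    }

# Illustrative item for a hypothetical Sentinel-2 ARD tile.
item = make_stac_item(
    item_id="S2_T33TUL_20220101",
    bbox=(13.0, 45.0, 14.0, 46.0),
    acquired=datetime(2022, 1, 1, tzinfo=timezone.utc),
    platform="sentinel-2a",
    n_bands=13,
    gsd_m=10.0,
)
print(json.dumps(item["properties"], indent=2))
```

Metadata of this shape can be indexed by a STAC-compliant catalog and queried by identifier, platform, or acquisition time, which is what makes the underlying dataset findable and accessible to both humans and machines.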

92 IEEE GEOSCIENCE AND REMOTE SENSING MAGAZINE JUNE 2023


IRENA HAJNSEK, SUBIT CHAKRABARTI, ANDREA DONNELLAN, RABIA MUNSAF KHAN, CARLOS LÓPEZ-MARTÍNEZ, RYO NATSUAKI, ANTHONY MILNE, AVIK BHATTACHARYA, PRAVEEN PANKAJAKSHAN, POOJA SHAH, AND MUHAMMAD ADNAN SIDDIQUE

REACT: A New Technical Committee for Earth Observation and Sustainable Development Goals

In November 2022, a new technical committee of the IEEE Geoscience and Remote Sensing Society (GRSS) was formed with the name Remote Sensing Environment, Analysis, and Climate Technologies (REACT).
REACT is a venue for all scientists and engineers working on remote sensing and environment-related domains as well as on the analysis of remote sensing data related to climate change and sustainable development goals (SDGs). The primary aim is to exchange ideas and share knowledge with the goal of advancing science and defining requirements for science-driven mission concepts and data products in the domains of cryosphere, biosphere, hydrosphere, atmosphere, and geosphere.
Remote sensing for Earth observation (EO) represents a key tool for systematic and continuous observation of Earth surface processes and is therefore an indispensable instrument to quantify environmental changes. The changes that can be observed can be due to natural successions, hazard occurrences, or anthropogenic influences. The focus of the GRSS technical committees is on methods and algorithms, satellite systems, and data-driven solutions to estimate information products. With REACT we are going a step forward in using the information products derived from remote sensing and making them available to enforce sustainable management in the different environmental domains. In other words, we will contribute to the understanding of climate change and support the SDGs.
At the moment, a team of five chairs, with expertise including mission design, image and signal processing, algorithm development, and application to different environments, is taking the lead in forming and structuring REACT.
Within REACT, currently four local focus areas have been established to open up a venue for local and regional issues related to climate change and to support the SDGs using remote sensing for EO (see Figure 1). The main tasks in each of these focus areas are as follows:
◗◗ building a community through collaborative efforts in a shared region
◗◗ exploring different application domains utilizing a variety of methods/techniques
◗◗ meeting SDGs and climate issues using remote sensing
◗◗ finding/dealing with local issues transferable to global issues
◗◗ defining current and future use cases in the local areas
◗◗ achieving diversity through an interdisciplinary and multicultural working environment.
The current topics of the local focus areas are briefly presented in the following sections and are led by a team member.

PACIFIC ISLANDS AND TERRITORIES
Although Pacific Island nations have had the least involvement in causing anthropogenic climate change, they will experience the most extreme consequences. Recently, climate change has compounded an already-vulnerable situation by increasing the frequency and intensity of extreme climatic events that pose a significant threat to the safety of people and communities. The cost of recovery from these events can be significant. Locally retrieved information from remote sensing data will increase awareness and provide a better understanding of environmental processes. The task here is to educate the local inhabitants about the knowledge gained from remote sensing and to bring together experts' knowledge worldwide to support them with quantitative information to inform and guide decisions in promoting sustainable management of both land and ocean environments.

AGRICULTURE AND FOOD SECURITY
Global food security is part of the objectives of the United Nations' SDGs and can be achieved through sustainable agricultural and regenerative practices, reduced food losses and waste, improved nutrient content, and assured zero hunger. With global warming, escalating conflicts, spiraling climatic crises, and economic downturns in recent years, global agricultural monitoring for sustainable food production and regional food security are critical objectives to address at the moment. One of the essential components of this task is optimizing agricultural input resources,

Digital Object Identifier 10.1109/MGRS.2023.3273083
Date of current version: 30 June 2023



including water usage, soil nutrients, pests and diseases, and the availability of clean energy and labor. However, the high variability in cropping systems and agroecological zones makes agricultural production extremely diverse. For instance, crop monitoring, forecasting, and mechanization are highly site specific because of variability in crop traits, pathogen pressures, environmental conditions, input availability, and management strategies, making technological generalization very challenging. The volume of EO data used for near-real-time monitoring, along with cloud-based processing and machine learning (ML), has recently enhanced scientific capacity and methods for investigating land and water resource management. Several efforts were made within scientific communities, commercial organizations, and national agencies to develop EO data products and ML methodologies that aid in monitoring biophysical (such as crop condition anomalies and planting acreage) and sociopolitical risk factors related to agricultural production and food security. End users with limited background in EO would like to have analysis-ready datasets to assist them in continuous monitoring and impact analysis of climate change or interventions. These are critical needs across both developed and developing economies. As an example, EO-driven capacity building leads to emigration from a food-deficit nation as a logical progression of the endeavor to address major difficulties inherent in the food system of Indian small-acre farms. India's presidency of the Group of 20 has now been recognized to address the growing challenges of food security for creating resilient and equitable food systems. We will initiate processes to provide EO data in analysis-ready form and state-of-the-art methodologies for end users toward the SDGs.

FLOODING IN AFRICA
Floods, which constitute around half of all extreme events, are increasing, exposing a larger population to a higher risk of loss of livelihood and property. Climate change is also increasing the severity of floods, which makes these losses catastrophic and threatens every SDG at local and regional scales. The impact of flooding on the African continent is massive because robust flood defenses and urban drainage systems are lacking in many cities that are built on floodplains, which amplifies the risk further. Remote sensing and EO can help mitigate the loss of lives and livelihood and increase adaptation to floods. Near-real-time maps of inundation allow first responders and disaster managers to prioritize aid to the most affected areas, flood-risk maps allow planners to build flood defenses for neighborhoods that are most at risk, and predictive models built using EO data can help aid agencies provide anticipatory financing to vulnerable communities. However, major technical challenges still remain in generating actionable insights and inundation maps from remotely sensed imagery, which can only be solved when remote sensing experts work with emergency managers and other end users directly.

CRYOSPHERE CHANGES IN THE HINDU KUSH, KARAKORAM, AND HIMALAYAS
The Hindu Kush–Karakoram–Himalaya region remains an understudied area, despite the fact that collectively these ranges form what we call the "third pole," with the "largest amount of ice cover outside of the polar regions." Several glaciers are receding rapidly because of global warming. The entire region is likely to face extreme water stress in the coming decades. Climate change has exacerbated glacial melting, leading to an increase in glacial lake outburst floods. These vulnerable glacial lakes are mostly seasonal, so their precise incidence in time and location may not be known a priori; therefore, remote sensing-based automated detection can help scientists, policy makers, and the local communities directly. The main task is to bring this knowledge to the local people and to exchange expert knowledge for a reliable and sustainable event detection method.

[Figure 1: map highlighting the four local focus areas: Pacific Islands; Agriculture and Food Security in India; Floods and Water Security in Africa; Hindu Kush–Karakoram–Himalayas.]
FIGURE 1. The current four local focus areas of REACT.
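Several of the tasks above (inundation mapping, glacial lake detection) commonly start from simple spectral water indices computed on analysis-ready imagery. The sketch below computes McFeeters' normalized difference water index (NDWI) from green and near-infrared reflectance bands and thresholds it into a water mask. It is a minimal illustration of the general technique, not a REACT-endorsed workflow; the toy reflectance values and the zero threshold are assumptions for demonstration.

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Return a boolean water mask from green and NIR reflectance bands.

    NDWI = (green - nir) / (green + nir); water reflects green light
    more strongly than near-infrared, so water pixels have NDWI > 0.
    """
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    # Guard against division by zero on no-data pixels.
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    return ndwi > threshold

# Toy 2x2 scene: left pixels are water-like (high green, low NIR),
# right pixels are land-like (vegetation reflects strongly in NIR).
green = np.array([[0.30, 0.10], [0.28, 0.12]])
nir = np.array([[0.05, 0.40], [0.06, 0.35]])
print(ndwi_water_mask(green, nir))  # water on the left column only
```

In an operational setting, the same function would be applied to calibrated surface reflectance bands (e.g., Sentinel-2 B03 and B08), and the resulting masks could be compared across acquisition dates to flag newly inundated areas or seasonal glacial lakes.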



All scientists who are interested in one of the local focus areas and would like to contribute or participate in their activities are invited to join REACT.
In addition to the local focus areas, we have many more activities, fully open for other people to participate in and enrich with their ideas and expertise.
◗◗ As a next event, we will have the "Mini-Project Competition" EO4SDG, where your ideas about how to solve environmental problems can be submitted as a short three-page proposal in relation to the SDGs. The proposals are evaluated, and the three best-rated ones will have the opportunity to be published in IEEE Geoscience and Remote Sensing Magazine.
◗◗ Further, we have regular webinars about the different topics offered within the local focus areas.
◗◗ At the IEEE International Geoscience and Remote Sensing Symposium, we hold an annual meeting on one evening to exchange knowledge and collect ideas for new activities. Please watch out for the announcement of the REACT Technical Committee meeting.
◗◗ Currently, we are working on a new podcast series related to climate change and SDGs. The podcast will be launched in September/October 2023.
◗◗ We have a strong connection to the IEEE GRSS Young Professionals and are supported in different activities.
All activities are announced through social media and the GRSS home page https://www.grss-ieee.org. Please have a look at it. We look forward to welcoming you at the next event.

GEMINE VIVONE, DALTON LUNGA, FRANCESCOPAOLO SICA, GÜLŞEN TAŞKIN, UJJWAL VERMA, AND RONNY HÄNSCH

Computer Vision for Earth Observation—The First IEEE GRSS Image Analysis and Data Fusion School

The first edition of the IEEE Geoscience and Remote Sensing Society (GRSS) Image Analysis and Data Fusion (IADF) school (see Figure 1) was organized as an online event from 3 to 7 October 2022. It addressed topics related to computer vision (CV) in the context of Earth observation (EO). Nine lessons with both theoretical and hands-on sessions were provided, involving more than 17 lecturers.
We received more than 700 registrations from all over the world. The organizing committee selected 85 candidates to join the online class. The remaining applicants were free to attend the live stream on the GRSS YouTube channel (https://www.youtube.com/c/IEEEGRSS). The selection process relied on several objective criteria, including work experience, academic recognitions, number of publications, and h-index. Due to the high number of registrations, the prescreening also assessed fundamental skills such as programming expertise and CV background, which are of crucial importance to fruitfully attend such a school. Aspects regarding diversity and inclusion were also taken into account. The selected people consisted of approximately 50% Ph.D. students and roughly 30% women. The geographical distribution of the selected participants, coming from 33 different countries, is depicted in Figure 2.

Digital Object Identifier 10.1109/MGRS.2023.3267850
Date of current version: 30 June 2023



GOALS OF THE IADF SCHOOL ON COMPUTER VISION FOR EARTH OBSERVATION
EO data, in particular imagery, had been mostly manually interpreted in the early days of remote sensing. With increasing digitization and computational resources, the automatic analysis of these images came into focus. However, most of the approaches were very close to the sensor, interpreting the data as a well-calibrated measurement of a physical process. They were mostly based on the statistical and/or physical models that describe the functional relation between measurement and the physical properties of the ground (and atmosphere). The advantage of these models is that their results can be assigned to a clear phenomenological context, and the connection to physical laws is maintained. A limitation, however, is that most of these models are based on simplifying assumptions to make their computation tractable.
With the success of CV in other areas (mostly the semantic or geometric interpretation of close-range imagery), a different type of model gained importance. CV emphasizes the "image" aspect of the acquired data, i.e., spectral–spatial relationships among pixels, instead of focusing on the information measured in a single pixel. Its methods are usually data driven, i.e., applying machine learning-based approaches to model the relationship between input data and target variables. This allows gains in flexibility, generalization, and complexity while potentially sacrificing interpretability and physical plausibility of the results.
Since the beginnings of CV in EO, the community and its methods have shown significant progress. The approaches used are not merely adapted versions of methods designed for close-range photographs anymore, but are increasingly directly tailored toward the specific characteristics of remote sensing data. Sophisticated methods address data particularities such as high-dimensional hyperspectral images or complex-valued synthetic aperture radar (SAR) data as well as task-specific characteristics such as label noise or the general scarcity of annotations.
The goal of this first IEEE GRSS IADF School on Computer Vision for Earth Observation (CV4EO) was to provide a general overview of the multitude of different aspects of how CV is used in EO applications, together with deep insights into modern methods to automatically process and analyze remote sensing images.

FIGURE 1. Logo of the first IEEE GRSS IADF School on Computer Vision for Earth Observation.

[Figure 2: pie charts. (a) Countries: United States 14%, India 12%, France 7%, China 6%, Germany 5%, Turkey 5%, Italy 5%, Canada 4%, other countries 42%. (b) Continents: Asia 37%, Europe 29%, America 25%, Australia 7%, Africa 2%.]
FIGURE 2. A geographical distribution of the selected participants. (a) The countries and (b) continents.

ORGANIZATION OF THE FIRST IEEE GRSS IADF SCHOOL

ORGANIZING COMMITTEE
The school was organized by the IADF Technical Committee (IADF TC) of the GRSS. The organizing committee consists of the following members (Figure 3):
◗◗ Gemine Vivone, National Research Council, Italy
◗◗ Ronny Hänsch, German Aerospace Center, Germany
◗◗ Claudio Persello, University of Twente, The Netherlands
◗◗ Dalton Lunga, Oak Ridge National Laboratory, USA
◗◗ Gülşen Taşkın, Istanbul Technical University, Turkey
◗◗ Ujjwal Verma, Manipal Institute of Technology Bengaluru, India
◗◗ Francescopaolo Sica, University of the Bundeswehr Munich, Germany
◗◗ Srija Chakraborty, NASA's Goddard Space Flight Center, Universities Space Research Association, USA.

PROGRAM
Remote sensing generates vast amounts of image data that can be difficult and time consuming to analyze using



[Figure 3: portrait photos of Gemine Vivone, Ronny Hänsch, Claudio Persello, Dalton Lunga, Gülşen Taşkın, Ujjwal Verma, Francescopaolo Sica, and Srija Chakraborty.]
FIGURE 3. The Organizing Committee of the first IEEE GRSS IADF School on CV4EO.

conventional image processing techniques. CV algorithms enable the automatic interpretation of large data volumes, allowing remote sensing to be used for a wide range of applications, including environmental monitoring, land use/cover mapping, and natural resource management. Thus, the IADF TC aimed at prioritizing topics that integrate CV into remote sensing data analysis. The first IADF school focused on applying CV techniques to address modern remote sensing challenges, consisting of a series of lectures discussing current methods for analyzing satellite images. The covered topics were image fusion, explainable artificial intelligence (AI) for Earth science, big geo-data, multisource image analysis, deep learning for spectral unmixing, SAR image analysis, and learning with zero/few labels. The technical program of the IADF school is depicted in Figure 4.
During the first day of the school, the "Deep/Machine Learning for Spectral Unmixing" lecture covered various topics related to linear hyperspectral unmixing. These included geometrical approaches, blind linear unmixing, and sparse unmixing. Additionally, the course delved into the utilization of autoencoders and convolutional networks for unmixing purposes. The lecture was followed by "Change Detection (TorchGeo)," which elaborated on the utilization of TorchGeo with PyTorch for training change detection models using satellite imagery. On the second day of the school, the "Learning with Zero/Few Labels" lecture discussed recent developments in machine learning with limited label data in EO, including semisupervised learning, weakly supervised learning, and self-supervised learning. The subsequent "SAR Processing" lecture covered various topics, including the analysis of SAR images with different polarimetric channels, the geometry of SAR image



acquisition, radiometric calibration, and generation of the SAR backscatter image.
On the third day, the "Semantic Segmentation" lecture started with a focus on recent advancements in methods and datasets for the task of semantic segmentation of remote sensing images. This lecture was followed by a practical exercise. During the exercise, participants had the opportunity to train and test a model for this particular task. The school proceeded with a lecture, "Big Geo-Data," which explored the latest developments in machine learning. Practical considerations were presented to effectively deploy these advancements for analyzing high-resolution geospatial imagery across a wide range of applications including ecosystem monitoring, natural hazards, and urban land-cover/land-use patterns. On the fourth day of the school, the "Image Fusion" lecture discussed theoretical and practical elements to develop convolutional neural networks for pansharpening. The "XAI for Earth Science" lecture discussed the methods of explainable AI and demonstrated their use to interpret pretrained models for weather hazard forecasting. The school concluded with a lecture about "PolSAR," which focused on statistical models for fully polarimetric SAR data that arise in practical applications.

DISTRIBUTED MATERIAL
Through lectures, hands-on exercises, and demonstrations, participants gained a deep understanding of key topics in CV4EO, including image fusion, explainable AI, multisource image analysis, deep learning for spectral unmixing, SAR image analysis, and unsupervised and self-supervised learning. The lectures were recorded and

Time | Topic | Speakers | Affiliations

3 October
10 a.m.–2 p.m. (UTC +2) | Deep/Machine Learning for Spectral Unmixing | Dr. Behnood Rasti | Helmholtz-Zentrum Dresden-Rossendorf (Germany)
2 p.m.–6 p.m. (UTC +2) | Change Detection (TorchGeo) | Dr. Caleb Robinson | Microsoft (USA)

4 October
10 a.m.–2 p.m. (UTC +2) | Learning With Zero/Few Labels | Dr. Sudipan Saha, Dr. Angelica I. Aviles-Rivero, Dr. Lichao Mou, Prof. Carola-Bibiane Schönlieb, and Prof. Xiao Xiang Zhu | Technical University of Munich (Germany), German Aerospace Center (Germany), and University of Cambridge (U.K.)
2 p.m.–6 p.m. (UTC +2) | SAR Processing | Dr. Shashi Kumar | IIRS, ISRO (India)

5 October
10 a.m.–2 p.m. (UTC +2) | Semantic Segmentation | Prof. Sylvain Lobry | Université de Paris (France)
2 p.m.–6 p.m. (UTC +2) | Big Geo-Data | Prof. Saurabh Prasad and Prof. Melba Crawford | University of Houston (USA) and Purdue University (USA)

6 October
10 a.m.–2 p.m. (UTC +2) | Image Fusion | Prof. Giuseppe Scarpa and Dr. Matteo Ciotola | University of Naples "Federico II" (Italy)
2 p.m.–6 p.m. (UTC +2) | XAI for Earth Science | Dr. Michele Ronco | University of Valencia (Spain)

7 October
9 a.m.–1 p.m. (UTC +2) | PolSAR | Prof. Avik Bhattacharya, Prof. Alejandro Frery, and Dr. Dipankar Mandal | Indian Institute of Technology Bombay (India), Victoria University of Wellington (New Zealand), and Kansas State University (USA)

FIGURE 4. The technical program of the first IEEE GRSS IADF School on CV4EO. IIRS, ISRO: Indian Institute of Remote Sensing, Indian Space Research Organisation.



FIGURE 5. Speakers of the first IEEE GRSS IADF School on CV4EO.

made available online on a daily basis on the GRSS YouTube channel. Links to the daily lectures are provided for reference.

Join the GRSS IADF TC
You can contact the Image Analysis and Data Fusion Technical Committee (IADF TC) chairs at iadf_chairs@grss-ieee.org. If you are interested in joining the IADF TC, please complete the form on our website (https://www.grss-ieee.org/technical-committees/image-analysis-and-data-fusion) or send us an email including your
◗◗ first and last name
◗◗ institution/company
◗◗ country
◗◗ IEEE membership number (if available)
◗◗ email address.
Members receive information regarding research and applications on image analysis and data fusion topics, as well as updates on the annual Data Fusion Contest and on all other activities of the IADF TC. Membership in the IADF TC is free! You can also join the LinkedIn IEEE GRSS data fusion discussion forum, https://www.linkedin.com/groups/3678437/, or join us on Twitter: Grssiadf.

SPEAKERS
The first edition of the IEEE GRSS IADF school invited a diverse group of experts from four continents. As shown in Figure 5, the list includes the following:
◗◗ Prof. Melba Crawford, professor of civil engineering, Purdue University, USA
◗◗ Prof. Saurabh Prasad, associate professor, the Department of Electrical and Computer Engineering, the University of Houston, USA
◗◗ Dr. Caleb Robinson, data scientist, the Microsoft AI for Good Research Lab, USA
◗◗ Dr. Behnood Rasti, principal research associate, Helmholtz-Zentrum Dresden-Rossendorf, Freiberg, Germany
◗◗ Prof. Giuseppe Scarpa and Dr. Matteo Ciotola, associate professor and Ph.D. fellow, respectively, the University of Naples "Federico II", Italy
◗◗ Dr. Sudipan Saha, postdoctoral researcher, Technical University of Munich, Germany
◗◗ Dr. Angelica I. Aviles-Rivero, senior research associate, the Department of Applied Mathematics and Theoretical Physics, the University of Cambridge, U.K.
◗◗ Dr. Lichao Mou, head of the Visual Learning and Reasoning Team, Remote Sensing Technology Institute, German Aerospace Center, Weßling, Germany
◗◗ Prof. Carola-Bibiane Schönlieb, professor of applied mathematics, the University of Cambridge, U.K.
◗◗ Prof. Xiao Xiang Zhu, professor for data science in EO, Technical University of Munich, Germany



◗◗ Prof. Avik Bhattacharya, professor, the Centre of Studies in Resources Engineering, Indian Institute of Technology Bombay, Mumbai, India
◗◗ Prof. Alejandro Frery, professor of statistics and data science, the Victoria University of Wellington, New Zealand
◗◗ Dr. Dipankar Mandal, postdoctoral fellow, Department of Agronomy, Kansas State University, USA
◗◗ Dr. Shashi Kumar, scientist, the Indian Institute of Remote Sensing, Indian Space Research Organisation, Dehradun, India
◗◗ Prof. Sylvain Lobry, associate professor, the Université Paris Cité, the Laboratoire d'Informatique de Paris Descartes, France
◗◗ Dr. Michele Ronco, postdoctoral researcher, the Image Processing Laboratory, the University of Valencia, Spain.

IEEE GRSS IADF SCHOOL: FIND OUT ABOUT THE NEXT EDITION!
After the successful first edition of the IEEE GRSS IADF school, a second one will be announced soon. It will follow the same theme as the 2022 edition, i.e., CV4EO. It will be an in-person event and will take place at the University of Sannio, Benevento, Italy, 13–15 September 2023. We look forward to seeing you in Benevento! Please stay tuned!

CONCLUSION
We would like to thank the GRSS and the IADF for their support, as well as all the lecturers who gave so freely of their time and expertise. A survey conducted among the participants after the school clearly showed that the event received high attention and provided an exciting experience. All the comments have been collected and will be used to improve the format of the next editions.

AUTHOR INFORMATION
Gemine Vivone (gemine.vivone@imaa.cnr.it) is with the National Research Council - Institute of Methodologies for Environmental Analysis, 85050 Tito Scalo, Italy, and the National Biodiversity Future Center, 90133 Palermo, Italy. He is a Senior Member of IEEE.
Dalton Lunga (lungadd@ornl.gov) is with Oak Ridge National Laboratory, Oak Ridge, TN 37830 USA. He is a Senior Member of IEEE.
Francescopaolo Sica (francescopaolo.sica@unibw.de) is with the Institute of Space Technology & Space Applications, University of the Bundeswehr Munich, 85577 Neubiberg, Germany. He is a Member of IEEE.
Gülşen Taşkın (gulsen.taskin@itu.edu.tr) is with the Institute of Disaster Management, Istanbul Technical University, Istanbul 34469, Turkey. She is a Senior Member of IEEE.
Ujjwal Verma (ujjwal.verma@manipal.edu) is with the Department of Electronics and Communication Engineering, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal 576104, India. He is a Senior Member of IEEE.
Ronny Hänsch (ronny.haensch@dlr.de) is with DLR, 82234 Weßling, Germany. He is a Senior Member of IEEE.
GRS






CALL FOR PAPERS
IEEE Geoscience and Remote Sensing Magazine

Special issue on
“Data Fusion Techniques for Oceanic
Target Interpretation”
Guest Editors
Gui Gao, Southwest Jiaotong University, China (dellar@126.com)
Hanwen Yu, University of Electronic Science and Technology of China, China (yuhanwenxd@gmail.com)
Maurizio Migliaccio, Università degli Studi di Napoli Parthenope, Italy (maurizio.migliaccio@uniparthenope.it)
Xi Zhang, First Institute of Oceanography, Ministry of Natural Resources, China (xi.zhang@fio.org.cn)

Interpreting marine targets using remote sensing can provide critical information for various applications,
including environmental monitoring, oceanographic research, navigation, and resource management. With the
development of observation systems, the acquired ocean information is multi-source and multi-dimensional. Data fusion, a general and widely used multi-disciplinary approach, can effectively exploit the obtained remote sensing data to improve the accuracy and reliability of oceanic target interpretation. This special issue will present an array of tutorial-like overview papers and invites contributions on the latest developments and advances in the field of fusion techniques for oceanic target interpretation. In agreement with the approach and style of the Magazine, contributors to this special issue will pay strong attention to striking a balance between scientific depth and dissemination to a wide audience encompassing remote sensing scientists, practitioners, and students.

The topics of interest include (but are not limited to):

• Multi-source remote sensing applications of human maritime activities, such as fisheries monitoring, maritime emergency rescue, etc.
• Multi-source remote sensing detection and evaluation of marine hazards
• Multi-source remote sensing detection, recognition, and tracking of marine man-made targets
• Detection and sensing of changes in Arctic sea ice using multi-source remote sensing data
• Artificial intelligence for multi-sensor data processing
• Fusion of remote sensing data from sensors at different spatial and temporal resolutions
• Description and analysis of data fusion products, such as databases that can integrate, share, and explore multiple data sources

Format and preliminary schedule.


Articles submitted to this special issue of the IEEE Geoscience and Remote Sensing Magazine must be of significant relevance to geoscience and remote sensing and should have noteworthy tutorial value. Selection of invited papers will be done on the basis of 4-page White papers, submitted in double-column format. These papers must discuss the foreseen objectives of the paper, the importance of the addressed topic, the impact of the contribution, and the authors' expertise and past activities on the topic. Contributors selected on the basis of the White papers will be invited to submit full manuscripts. Manuscripts should be submitted online at http://mc.manuscriptcentral.com/grsm using the Manuscript Central interface. Prospective authors should consult the site http://ieeexplore.ieee.org/servlet/opac?punumber=6245518 for guidelines and information on paper submission. Submitted articles should not have been published or be under review elsewhere. All submissions will be peer reviewed according to the IEEE and Geoscience and Remote Sensing Society guidelines.

Important dates:
August 1, 2023 White paper submission deadline
September 1, 2023 Invitation notification
November 1, 2023 Full paper submission deadline
March 1, 2024 Review notification
June 1, 2024 Revised manuscript due
September 1, 2024 Final acceptance notification
October 1, 2024 Final manuscript due
January 2025 Publication date

Digital Object Identifier 10.1109/MGRS.2023.3278369
