Library of Congress Cataloging-in-Publication Data

Names: Olsen, R. C. (Richard C.), 1952- author.
Title: Remote sensing from air and space / Richard C. Olsen.
Description: Second edition. | Bellingham, Washington : SPIE, [2016] | Includes
bibliographical references and index.
Identifiers: LCCN 2016006976 (print) | LCCN 2016008713 (ebook) | ISBN
9781510601505 (softcover) | ISBN 9781510601512 (pdf) | ISBN 9781510601529
(epub) | ISBN 9781510601536 (mobi)
Subjects: LCSH: Remote sensing.
Classification: LCC G70.4 .O47 2016 (print) | LCC G70.4 (ebook) | DDC
621.36/78–dc23
LC record available at http://lccn.loc.gov/2016006976

Published by
SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1 360.676.3290
Fax: +1 360.647.1445
Email: books@spie.org
Web: http://spie.org

Copyright © 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)

All rights reserved. No part of this publication may be reproduced or distributed in
any form or by any means without written permission of the publisher.

The content of this book reflects the work and thought of the author. Every effort has
been made to publish reliable and accurate information herein, but the publisher is not
responsible for the validity of the information or for any outcomes resulting from
reliance thereon.

Printed in the United States of America.


First printing.
For updates to this book, visit http://spie.org and type “PM266” in the search field.

Contents
Preface xiii

1 Introduction to Remote Sensing 1


1.1 Order of Battle 2
1.1.1 Air order of battle 4
1.1.2 Electronic order of battle 5
1.1.3 Space order of battle 6
1.1.4 Naval order of battle 8
1.1.5 Industrial order of battle 9
1.2 Technology Survey 11
1.2.1 Imaging the whole earth: optical and infrared imaging 11
1.2.1.1 Geostationary Operational Environmental Satellite
(GOES): whole-earth visible imaging 11
1.2.1.2 GOES: whole-earth infrared imaging 11
1.2.2 Earth resources systems: 30-m pixels 12
1.2.2.1 Landsat 7 (30 m), San Diego 13
1.2.2.2 SSTL/DMC (30 m) 15
1.2.3 Higher resolution: 1–3-m ground sample distance 18
1.2.3.1 Worldview-3: San Diego and Coronado Island 18
1.2.3.2 High-resolution airborne LiDAR 18
1.2.4 High-resolution airborne imagery 19
1.2.5 Synthetic aperture radar (SAR) 20
1.3 Three Axes 22
1.4 Resources 23
1.5 Problems 26
2 Electromagnetic Basics 27
2.1 The Electromagnetic Spectrum 27
2.1.1 Maxwell’s equations 28
2.2 Polarization of Radiation 29
2.3 Energy in Electromagnetic Waves 31
2.3.1 Photoelectric effect 33
2.3.2 Photomultiplier tubes 35
2.4 Sources of Electromagnetic Radiation 35


2.4.1 Line spectra 37


2.4.2 Blackbody radiation 40
2.5 Electromagnetic-Radiation–Matter Interactions 43
2.5.1 Transmission 44
2.5.2 Reflection 45
2.5.3 Scattering 46
2.5.4 Absorption 47
2.6 Problems 47
3 Optical Imaging 49
3.1 The First Remote-Sensing Satellite: Corona 49
3.1.1 History 49
3.1.2 Technology 51
3.1.3 Illustrations 53
3.2 Atmospheric Absorption, Scattering, and Turbulence 57
3.2.1 Atmospheric absorption: wavelength dependence 57
3.2.2 Atmospheric scattering 58
3.2.3 Atmospheric turbulence 60
3.3 Basic Geometrical Optics 62
3.3.1 Focal length/geometry 62
3.3.2 Optical diagram: similar triangles and magnification 63
3.3.3 Aperture (f/stop) 64
3.3.4 Image formation by lens or pinhole 64
3.4 Diffraction Limits: The Rayleigh Criterion 65
3.5 Detectors 69
3.5.1 Solid state 69
3.5.2 Focal plane arrays 72
3.5.3 Uncooled focal planes: microbolometers 74
3.6 Imaging System Types, Telemetry, and Bandwidth 74
3.6.1 Imaging system types 74
3.6.1.1 Framing systems (Corona) 74
3.6.1.2 Cross-track (Landsat MSS, TM; AVIRIS) 75
3.6.1.3 Along-track (IKONOS, Quickbird, Worldview) 77
3.7 Telemetry Strategies 77
3.7.1 Direct downlink 77
3.7.2 Relay 77
3.7.3 Store and dump 77
3.8 Bandwidth and Data Rates 78
3.9 Problems 78
4 Optical Satellite Systems 81
4.1 Hubble: The Big Telescope 81
4.1.1 The Hubble satellite 81
4.1.2 The Hubble telescope design 83


4.1.3 The Hubble detectors: Wide-Field and Planetary Camera 2 85


4.1.4 The repair missions 90
4.1.5 Operating constraints 91
4.1.5.1 South-Atlantic anomaly 91
4.1.5.2 Spacecraft position in orbit 91
4.2 Commercial Remote Sensing: IKONOS and Quickbird 91
4.2.1 IKONOS satellite 92
4.2.1.1 Imaging sensors and electronics for the IKONOS
satellite 92
4.2.2 NOB with IKONOS: Severodvinsk 94
4.3 The Earth at Night 94
4.4 Exposure Times 96
4.5 Problems 100
5 Orbital Mechanics Interlude 103
5.1 Gravitational Force 103
5.2 Circular Motion 104
5.2.1 Equations of motion 104
5.2.2 Centripetal force 105
5.3 Satellite Motion 105
5.3.1 Illustration of geosynchronous orbit 105
5.4 Kepler’s Laws 105
5.4.1 Elliptical orbits 106
5.4.2 Equal areas are swept out in equal times 107
5.4.3 Orbital period: t² ∝ r³ 107
5.5 Orbital Elements 108
5.5.1 Semi-major axis 108
5.5.2 Eccentricity 108
5.5.3 Inclination angle 108
5.5.4 Right ascension of the ascending node 109
5.5.5 Closest point of approach (argument of perigee) 109
5.6 A Few Standard Orbits 109
5.6.1 Low-earth orbit 109
5.6.2 Medium-earth orbit 112
5.6.3 Geosynchronous orbit 113
5.6.4 Molniya (HEO) 114
5.6.5 Summary of orbital values 115
5.7 Bandwidth, Revisited 116
5.8 Problems 117
6 Spectral and Polarimetric Imagery 119
6.1 Reflectance of Materials 119
6.2 Human Visual Response 120
6.3 Spectral Technologies 121


6.4 Landsat 123


6.4.1 Landsat orbit 124
6.4.2 Landsat sensors 126
6.4.2.1 Return Beam Vidicon 126
6.4.2.2 Multispectral Scanner 127
6.4.2.3 Thematic Mapper 127
6.4.3 Landsat data links 131
6.4.4 Landsat 8 detectors: Operational Land Imager (OLI) and
Thermal Infrared Sensor (TIRS) 132
6.5 Spectral Responses for Commercial Systems 133
6.6 Analysis of Spectral Data: Band Ratios and NDVI 135
6.7 Analysis of Spectral Data: Color Space and Spectral Angles 137
6.8 Imaging Spectroscopy 139
6.8.1 AVIRIS 140
6.8.2 Hyperion 143
6.8.3 MightySat II: Fourier-Transform Hyperspectral Imager 144
6.9 Optical Polarization 145
6.10 Problems 147

7 Image Analysis 149


7.1 Interpretation Keys (Elements of Recognition) 149
7.1.1 Shape 149
7.1.2 Size 149
7.1.3 Shadow 150
7.1.4 Height (depth) 150
7.1.5 Tone or color 151
7.1.6 Texture 152
7.1.7 Pattern 152
7.1.8 Association 154
7.1.9 Site 154
7.1.10 Time 154
7.2 Image Processing 154
7.2.1 Univariate statistics 156
7.2.2 Dynamic range: snow and black cats 156
7.3 Histograms and Target Detection 158
7.4 Multi-dimensional Data: Multivariate Statistics 159
7.5 Filters 162
7.5.1 Smoothing 163
7.5.2 Edge detection 164
7.6 Supplemental Notes on Statistics 164
7.7 Problems 166


8 Thermal Infrared 171


8.1 IR Basics 172
8.1.1 Planck’s radiation formula 172
8.1.2 Stefan–Boltzmann: radiance ∝ T⁴ 173
8.1.3 Wien’s displacement law 174
8.1.4 Emissivity 174
8.1.5 Atmospheric absorption 175
8.2 Radiometry 175
8.2.1 Point source radiometry 176
8.2.2 Radiometry for resolved targets 178
8.3 More IR Terminology and Concepts 179
8.3.1 Signal-to-noise ratio: NEDT 179
8.3.2 Kinetic temperature 180
8.3.3 Thermal inertia, conductivity, capacity, and diffusivity 180
8.3.3.1 Heat capacity (specific heat) 180
8.3.3.2 Thermal conductivity 180
8.3.3.3 Inertia 181
8.3.3.4 Thermal diffusivity 181
8.3.3.5 Diurnal temperature variation 181
8.4 Landsat 184
8.5 Early Weather Satellites 185
8.5.1 TIROS 185
8.5.2 Nimbus 186
8.6 GOES 187
8.6.1 Satellite and sensor 187
8.6.2 Shuttle launch: vapor trail and rocket 191
8.7 Defense Support Program 192
8.8 SEBASS: Thermal Spectral 195
8.8.1 Hard targets 195
8.8.2 Gas measurements: Kilauea, Pu‘u ‘Ō‘ō vent 195
8.9 Problems 198
9 Radio Detection and Ranging (RADAR) 201
9.1 Imaging Radar 201
9.1.1 Imaging radar basics 201
9.2 Radar Resolution 204
9.2.1 Range resolution 204
9.2.2 Signal modulation 205
9.2.3 Azimuth resolution 207
9.2.4 Beam pattern and resolution 208
9.2.5 Synthetic-aperture radar 212
9.3 Radar Cross-Section σ and Polarization 214
9.4 Radar Range Equation 215


9.5 Wavelength 216


9.6 SAR Image Elements 217
9.6.1 Dielectric constant: soil moisture 217
9.6.2 Roughness 220
9.6.3 Tetrahedrons/corner reflectors: the cardinal effect 220
9.7 Problems 221
10 Radar Systems and Applications 225
10.1 Shuttle Imaging Radar 226
10.2 Soil Penetration 228
10.3 Ocean Surface and Shipping 229
10.3.1 SIR-C: oil slicks and internal waves 229
10.3.2 RADARSAT: ship detection 230
10.3.3 TerraSar-X: Gibraltar 232
10.3.4 ERS-1: ship wakes and Doppler effects 233
10.4 Multi-temporal Images: Rome 234
10.5 Sandia Ku-Band Airborne Radar: Very High Resolution 234
10.6 Radar Interferometry 234
10.6.1 Coherent change detection 236
10.6.2 Topographic mapping 236
10.7 The Shuttle Radar Topographic Mapping (SRTM) Mission 240
10.7.1 Mission design 241
10.7.2 Mission results: level-2 terrain-height datasets
(digital topographic maps) 242
10.8 TerraSAR-X and TanDEM-X 244
10.9 Problems 244
11 Light Detection and Ranging 247
11.1 Introduction 247
11.2 Physics and Technology: Airborne and Terrestrial Scanners 249
11.2.1 Lasers and detectors 249
11.2.2 Laser range resolution and the LiDAR equation 251
11.3 Airborne and Terrestrial Systems 253
11.3.1 Airborne Oceanographic LiDAR 253
11.3.2 Commercial LiDAR systems 254
11.4 Point Clouds and Surface Models 255
11.5 Bathymetry 257
11.6 LiDAR from Space 259
11.7 Problems 261
Afterword 263

Appendix 1 Derivations 265


A1.1 Derivation of the Bohr Atom 265


A1.1.1 Assumption 1: the atom is held together by the


Coulomb force 265
A1.1.2 Assumption 2: the electron moves in an elliptical orbit
around the nucleus (as in planetary motion) 266
A1.1.3 Assumption 3: quantized angular momentum 266
A1.1.4 Assumption 4: radiation is emitted only from transitions
between the discrete energy levels 268
A1.2 Dielectric Theory 268
A1.3 Derivation of the Beam Pattern for a Square Aperture 269
Appendix 2 Corona 273
A2.1 Mission Overview 273
A2.2 Camera Data 273
A2.3 Mission Summary 274
A2.4 Orbits: An Example 278
Appendix 3 Tracking and Data Relay Satellite System 279
A3.1 Relay Satellites: TDRSS 279
A3.2 White Sands 279
A3.3 TDRS 1–7 281
A3.3.1 Satellites 281
A3.3.2 Payload 283
A3.4 TDRS 8–10 284
A3.4.1 TDRS 8–10: payload characteristics 284
A3.4.1.1 S-band multiple access 284
A3.4.1.2 Two single-access antennas 284
A3.4.1.3 Space-ground-link antenna (Ku-band) 285
A3.5 TDRS K, L, M 285
Appendix 4 Useful Equations and Constants 287

Index 291

Preface
This text is designed to meet the needs of students interested in remote sensing
as a tool for the study of military and intelligence problems. It focuses on the
technology of remote sensing, both for students who will be working in
systems acquisition offices and for those who might eventually need to be
“informed consumers” of the products derived from remote sensing systems.
I hope it will also be useful for those who eventually work in this field.
Here in the second edition, the book maintains, as much as possible, a
focus on the physics of remote sensing. As a physicist, I’m more interested in
the technology of acquiring data than the final applications. Therefore, this
work differs from related textbooks that favor civilian applications,
particularly geology, agriculture, weather (atmosphere), and oceanography.
I have instead concentrated on satellite systems, including power, data
storage, and telemetry systems, because this knowledge is important for those
trying to develop new remote sensing systems. For example, one of the
ongoing themes is how bandwidth constraints define what you can and cannot
do in terms of remote sensing.
From a tactical perspective, low-spatial-resolution systems are not very
interesting, so this text focuses on systems with high spatial resolution. This is
not to deny the utility of, say, weather systems for the military, but that is a
domain of a different sort, and one I leave to that community. (As a
consequence, for example, I leave out passive microwave sensing as a topic.)
Similarly, although oceanography is clearly important to the Navy, that too is
a topic I leave to others. I have completely ignored the technology of film-
based imaging systems, aside from a discussion of the historical reconnais-
sance satellite systems.
Part of the motivation for creating this textbook was and is the ongoing
discrepancy between the content of such books and the current state of the art.
When I started teaching remote sensing and crafting what has become this text,
the IKONOS satellite had not yet been launched. At the time of publication of
the first edition, there were no high-spatial-resolution imaging radar systems,
but now I have an illustration from TerraSAR-X at a 30-cm resolution.
The launch of SkySat-1 by Skybox Imaging (now Terra Bella, a Google
company) on November 21, 2013 clearly signals many upcoming changes
in imaging from space that are not ready to be discussed here. These larger
fleets of satellites and newer focal plane technology imply more persistent
imaging. Video from space is a consequence of these new hardware designs,
with promising but uncertain utility. Also signaled by the success of Skybox
Imaging: remote sensing appears to be emerging as the third field, following
communications and navigation, to become economically viable in space.
This text is organized according to a fairly typical progression—optical
systems in the visible realm, followed by infrared and radar systems. New to
this textbook is a full chapter on LiDAR. The necessary physics is developed
for each domain, followed by a look at a few operational systems that are
appropriate. Somewhat unusual for a text of this sort is a chapter on how
orbital mechanics influences remote sensing, but ongoing experience shows
that this topic is essential.
I have added a radiometry component to the infrared (IR), radar (SAR),
and LiDAR sections. The IR section clearly needed this to address detection
issues and make temperature measurements more clearly founded. The
imaging radar material clearly needed the radar range equation, just as the
LiDAR chapter needed its corresponding range equation.
Finally: The first edition was pretty much a solo effort on my part. The
second edition has benefitted from the support of my technical team—my
thanks to Angela Kim, Jeremy Metcalf, Chad Miller, and Scott Runyon for
their contributions. Thanks to Donna Aikins and Jean Ferreira for help with
the many copyright issues. The reviewers did a great job and identified a
number of annoying elements in my writing style that clearly needed to be
adjusted. Thanks to the editor, Scott McNeill, for his persistence and
diligence.
R. C. Olsen
Naval Postgraduate School, Monterey, CA
June 2016

Chapter 1
Introduction to Remote Sensing

Remote sensing is a field designed to enable people to look beyond the range
of human vision. Whether it is over the horizon, beyond our limited range, or
in a spectral range outside our perception, we are in search of information.1
The focus in this text will be on imaging systems of interest for strategic,
tactical, and military applications, as well as information of interest to those
domains.
To begin, consider one of the first airborne remote-sensing images.
Figure 1.1 shows a photograph by Gaspard-Félix Tournachon2 (Tournachon
was also known by his pseudonym, Nadar). He took this aerial photo of Paris
in 1868 from the Hippodrome Balloon, tethered 1700 feet above the city.
Contrast this image with the photo taken by astronauts on Apollo 17, roughly
one hundred years later (Fig. 1.2).
Tournachon’s picture is a fairly classic remote-sensing image—a
representation from a specific place and time of an area of interest. What
sorts of things can be learned from such an image? Where, for instance, are the
streets? What buildings are there? What are the purposes of those buildings?
Which buildings are still there today? These are the elements of information
that people want to extract from such imagery.
The following material establishes a model for extracting information
from remote-sensing data. The examples used here are also meant to illustrate
the range of information that can be extracted from remote-sensing imagery,
as well as some of the consequences of wavelength and resolution choices
made with such systems.

1. The term “remote sensing” emerged as the imaging technology moved beyond film-based
aerial photography. The initial impetus for the term is attributed to Evelyn Pruitt and Walter
Bailey (ca 1960).
2. Tournachon was a notable portrait photographer from 1854 to 1860. He made the first
photographs with artificial light in 1861, descending into the Parisian sewers and catacombs
with magnesium flares. He apparently was also an early experimenter in ballooning, even
working with Jules Verne. http://www.getty.edu/art/collections/bio/a1622-1.html.


Figure 1.1 Gaspard-Félix Tournachon took his first aerial photographs in 1858, but those
earlier images did not survive.3 He applied for a patent for aerial surveying and photography
in 1858. Curiously, there is no evidence of aerial photography during the American Civil War,
although balloons were used for reconnaissance by both sides. Image courtesy of the Brown
University Library Center for Digital Scholarship.

1.1 Order of Battle


Remote-sensing data, sans analysis, have a fairly modest value. In general, the
truly valuable component is information appropriate to making a decision. To
the extent that one can obtain or understand what information can be derived
from remote-sensing data, one can begin to address a question posed in the
preface: What is remote sensing good for? To answer this question, a
paradigm is introduced, called the “order of battle” (OOB). This term is
largely associated with the counting of “things,” but not entirely. Indeed, the
levels of information should not be limited to simple “counting.” Attention
must also be paid to nonliteral forms of information.

3. Jensen, Remote Sensing of the Environment, page 62 (2000).


Figure 1.2 An image of earth (“The Blue Marble”), taken by the Apollo 17 crew on December 7,
1972, from a distance roughly comparable to geosynchronous altitude.

An OOB has a number of forms, depending on the area of interest:


• Air order of battle (AOB);
• Cyber order of battle (COB);
• Electronic order of battle (EOB);
• Ground order of battle (GOB), which includes logistics;
• Industrial order of battle (IOB);
• Naval order of battle (NOB);
• Missile order of battle (MOB); and
• Space order of battle (SOB).
What items characterize these OOB types? What sort of information is
being considered? A GOB, for example, might consist of vehicles—their
numbers, locations, and types. Sample types include armored (e.g., tanks),
transport (trucks), and personnel (high-mobility, multipurpose, wheeled
vehicles, or HMMWVs). After the types are established, further elements of
information include operational status (fueled, hot, armed, etc.), capabilities,
and armament (weapons). Other elements of a GOB are troops (numbers,
arrangement, types, etc.), defenses (e.g., minefields, geography, missiles,
chemical/biological, camouflage, and decoys), and infrastructure (such as
roads and bridges). The following subsections provide OOB examples with
images that range from historic systems to the most-advanced modern
commercial systems.


1.1.1 Air order of battle


An air order of battle (AOB) focuses primarily on aircraft and airfields.
A tabular approach to compiling salient elements of information is provided
in Table 1.1; it is organized in increasing level of detail. There are several
levels of detail that you will want to know about. Not all will be amenable to
remote sensing, but the first step is to identify what you want to know. The
next step is to determine which elements can be provided by available sensors.
The first illustration uses an early Cold War system only recently declassified
for public use.
Table 1.1 Air order of battle details.

Planes
    Type
        Fighter
            Weapons: Air-to-Air, Air-to-Ground
            Sensors: FLIR, Radar, Visible, EW
        Bomber
        Tanker
        Transport: Civilian, Military
        Trainer
        EW
        Reconnaissance
    Number
    Locations: Bunkers, Runway, Aprons
Runways
    Length
    Composition: Material (asphalt, dirt, concrete)
    Direction: Heading
    Approach: Terrain, Lighting, Weather, Ground Controllers
Logistics
    Supply Lines / Lines of Communication
    Petroleum, Oil, and Lubrication (POL): Fuel Tanks (Capacity, Type of Fuel, Fill Factor)
Pilots
    Number, Ranks, Training, Experience
Defenses
    Weapons: AA Guns, AA Missiles
    Radar: Frequency, Range, Location, Type
    Locations: Field of View (FOV)


Figure 1.3 Image of the Dolon air base in Chagan, Kazakhstan (50° 32′ 30″ N, 079° 11′
30″ E) taken by the Gambit (KH-7) system during Mission 4022 on October 4, 1965. The
inset is a close-up of the Tupolev Tu-95 (Bear) bombers along the 4-km runway. North is up.
The planes are 46 m in length, with a wingspan of 50 m. The spatial resolution in the
scanned image is 0.735 m per pixel. Image reference: DZB00402200056H012001; the film
was scanned at a 7-μm pitch.4
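
As a rough consistency check on the 0.735-m pixel figure quoted in the caption, the ground sample distance of a scanned film frame is approximately the scan pitch multiplied by the ratio of altitude to focal length. The sketch below is illustrative only: the 7-μm pitch comes from the caption, but the focal length (about 1.96 m, i.e., 77 in) and the roughly 200-km altitude are assumed values that do not appear in the text.

```python
# Rough GSD check for a scanned film frame: ground pixel ~ scan pitch x (altitude / focal length).
# The scan pitch is from the caption; the focal length and altitude are assumed values.
scan_pitch_m = 7e-6       # 7-um film scan pitch
focal_length_m = 1.96     # assumed camera focal length (~77 in)
altitude_m = 206e3        # assumed altitude at the time of imaging

gsd_m = scan_pitch_m * altitude_m / focal_length_m
print(f"estimated ground sample distance: {gsd_m:.3f} m per pixel")  # ~0.74 m
```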

An AOB is illustrated in Fig. 1.3; it is derived from one of the early
satellite reconnaissance systems: the Gambit, or KH-7 film return system.
Important elements such as the length and orientation (heading, roughly
90°/270° here) of the runways are immediately apparent; the number of Tu-95
aircraft can be counted easily, and the smaller aircraft are visible in the
original image. The infrastructure is clearly defined. This illustration of the
Dolon air base in the former Soviet Union reveals relatively little in the way of
defenses (the base is far away from any border). As with a number of Soviet
airfields, the runway has a curious checkerboard pattern. Soviet construction
techniques involved large (pre-cast) concrete blocks, rather than the smoother,
continuous surfaces of American runways.

1.1.2 Electronic order of battle


The electronic order of battle (EOB) is really the domain of the signals-
intelligence (SIGINT) community, but imagery can contribute to the topic.
Relevant subjects include defenses, such as surface-to-air missiles (SAMs) and
radar installations. The technical details for radars are admittedly more the
domain of SIGINT or electronic intelligence (ELINT) than imagery intelligence
(IMINT) because the elements of information include frequencies, pulse-
repetition frequency (PRF), scan type, and pulse width and mode. Regardless,
the location and layout of radar provides a lot of information, particularly with

4. http://en.wikipedia.org/wiki/Tupolev_Tu-95.


Figure 1.4 This Gambit-1 image shows the Sary Shagan Hen House radar (centered at
46° 36′ 41″ N, 74° 31′ 22″ E). The image was taken on May 28, 1967. Similar images with a
1-m resolution date to 1964. These large Soviet radars were designed to watch for ballistic
missiles and satellites. The 25-MW system was designed to monitor the south and west with
two pairs of antennas, one transmitting and one receiving (bistatic). The image chip for two of
the radar systems is overlaid on the full-frame image for context. The original film was
9 inches wide (the vertical direction in this frame).

regard to access. The size of an antenna implies characteristics such as range. It
might be possible to determine operational patterns. The radar types can be
identified by comparison to known systems, e.g., air search, surface search, fire
control, and target tracking. Networking and communications details, such as
nodes or types (HF, microwave, fiber, etc.), or even the power source may be
determined.
Figure 1.4 illustrates a famous Cold-War Russian system: the orientation
of the Hen House radar is associated with its primary role of watching the
horizon for ballistic missiles. Given the size and orientation of the radar, and
some knowledge of the wavelength used, the resolution and FOV of the radar
can be determined. “Moon Bounce” signals from this system were observed
by the Naval Research Laboratory at the Chesapeake Bay facility, and in
Palo Alto with the 150-ft. Stanford Dish. These observations allowed
observers to measure the radar power.5

1.1.3 Space order of battle


The space order of battle (SOB) is a relatively new area, illustrated with
images from much more recent commercial and civil systems. It includes two

5. F. Eliot, “Moon Bounce Elint,” Studies in Intelligence 11(2), pp. 59–66, Spring 1967; CIA
Center for the Study of Intelligence. https://www.cia.gov/library/
center-for-the-study-of-intelligence/kent-csi/vol11no2/html/v11i2a05p_0001.htm.


Figure 1.5 Worldview-2 image of STS-134 on the pad. North is up. This illustration uses
the near-infrared, red, and green bands in a false-color representation similar to that
obtained from infrared-color film in previous years. The vegetation appears bright red.
Somewhat coincidentally, the red external tank on the shuttle maintains a fairly orange color.
The color image has been “pan-sharpened” to the 0.6-m resolution of the panchromatic
sensor that collected these data. Imagery reprinted courtesy of DigitalGlobe.

components: space and ground systems. Ground elements of interest include
launchers (boosters), launch pads and other infrastructure, and communica-
tions ground sites. Figure 1.5 illustrates the ground component with an image
of a shuttle being readied for launch. There is a characteristic pattern to the
launch complex that is repeated nearby for the second shuttle launch complex.
Space elements of information that are important are communications
systems (relay satellites), operational payloads, and satellite orbital data.
Figure 1.6 illustrates satellite-to-satellite imaging (sat-squared, or Sat²). This
image was taken by the SPOT-5 satellite as the European Radar Satellite 2
(ERS-2) flew under the French earth-imaging system. Given the close range,
the cross-track resolution is 12.5 cm.
The image is distorted in the horizontal direction (along the orbital track)
due to the unusual relative velocity of the radar satellite compared to the
ground, and the image has been adjusted to remove that distortion. Solar
arrays, radar antenna, and telemetry antenna are visible. Previously, SPOT-4
had imaged ERS-1 at a lower resolution; TerraSAR-X has shown a similar
capability in radar imaging, as illustrated below in Section 1.2.5.

Figure 1.6 On June 3, 2002, SPOT-5 took this picture of the ERS-2 satellite at about 23:00
UTC over the Southern Hemisphere. ERS-2, 42 kilometers below, overtakes SPOT-5 from
north-east to south-west at a relative velocity of 81 m/s. Image reprinted with permission of
the Centre National d’Études Spatiales (CNES).
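
The 12.5-cm figure follows from simple proportionality: for a fixed focal length and detector pitch, the cross-track sample distance scales linearly with range. A minimal sketch, assuming a nominal SPOT-5 panchromatic GSD of about 2.5 m from its roughly 830-km orbit (neither value is stated in the text):

```python
# GSD scales linearly with range for a fixed focal length and detector pitch.
nominal_gsd_m = 2.5      # assumed nominal SPOT-5 panchromatic GSD
nominal_range_m = 830e3  # assumed nominal orbital altitude
target_range_m = 42e3    # ERS-2 range quoted in the caption

gsd_at_target_m = nominal_gsd_m * target_range_m / nominal_range_m
print(f"cross-track sample distance at 42 km: {gsd_at_target_m * 100:.1f} cm")  # ~12.6 cm
```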

1.1.4 Naval order of battle


A naval order of battle (NOB) involves ships, of course. It is concerned with battle
groups and their composition (types of ships, numbers, arrangement in the group,
steaming direction, velocity, etc.), as well as ports (harbor characteristics, draft,
piers, defenses, communication lines, and facilities) and the state of readiness of
ships in a harbor. For individual ships, manpower, supplies, weapons, and
sensors are important. In the case of carriers, what aircraft are aboard and how
they are armed are all essential elements of information (EEI).
Figure 1.7 illustrates a little of what can be seen by viewing a Russian
naval base. The submarines can be counted, and to some extent they can be
identified by length and shape. There is some indication of readiness by
looking at the ice: are the boats locked in, or is there a path to open water?
Notice the level of activity on the docks around the submarines. In this case,
things look pretty quiet.
This illustration was also chosen to emphasize the international nature of
remote sensing today—this image came from an Israeli commercial system.
(Higher-spatial-resolution illustrations from U.S. satellite and airborne
systems are shown for aircraft carriers in Figs. 1.16, 1.18, and 1.20.)


Figure 1.7 Image of Kamchatka Submarine Base, Russia, taken by Earth Resources
Observation Satellite (EROS) on December 25, 2001 with a 1.8-m resolution.6 Located on
the far-eastern frontier of Russia and the former Soviet Union, this peninsula has always
been of strategic importance. Kamchatka is home to the Pacific nuclear submarine fleet,
housed across Avacha Bay from Petropavlovsk at the Rybachy base.

1.1.5 Industrial order of battle


The infrastructure of a country can be revealed by the pattern of lights at
night. Historically, the Defense Meteorological Satellite Program (DMSP)
provided intriguing nighttime imagery from the Operational Line-Scan
system, a photomultiplier tube (OLS-PMT) sensor on the DMSP designed
to see clouds at night. DMSP’s low-light capability included the ability to see
city lights, large fires (like those of oil wells and forests), and the aurora
borealis, as well as less-obvious light sources, such as those produced by
industrial activity. More recently the Visible Infrared Imaging Radiometer
Suite (VIIRS) sensor on the NASA/NOAA weather satellite Suomi NPP has
provided much more detailed images of the earth at night (Fig. 1.8). The
Suomi NPP was launched on October 28, 2011, and is redefining our ability to
detect low-light-level activity on earth.
This image of Egypt and the Nile River reflects the distribution of energy
(and people) in Egypt. Such data can be correlated to industrial output. Chris
Elvidge at NOAA has done extensive work, for example, in tracking the de-
industrialization of portions of the former Soviet Union following the
dissolution of the country.7 Figure 4.16 shows a global view developed from
an ensemble of images like the one shown here.

6. http://www.imagesatintl.com/; image no longer posted.


7. C. D. Elvidge et al., “Preliminary Results from Nighttime Lights Change Detection,”
Proceedings of the ISPRS joint conference: 3rd International Symposium Remote Sensing
and Data Fusion Over Urban Areas (URBAN 2005) and the 5th International Symposium
Remote Sensing of Urban Areas (URS 2005); Editors: M. Moeller, E. Wentz; International
Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences; XXXVI,
8/W27.


Figure 1.8 View of Egypt at night taken by the VIIRS aboard the Suomi National Polar-
orbiting Partnership (NPP) satellite on October 13, 2012.

Later in this chapter, Figs. 1.16 and 1.19 illustrate a different element of
IOB: lines of communication. The Coronado Bridge and associated roads
appear at a higher spatial resolution. Figure 4.17 in this book shows port
activity in a nighttime image, with roads made visible by cars and static light
sources.
These illustrations of orders of battle provide an idea of the types of
information that can be obtained. The following section briefly surveys the
various forms of remote-sensing data available historically and today. Visible,
infrared, LiDAR, and radar imagery are illustrated.


1.2 Technology Survey


The first section of this book took a quick look at the types of information
that might be desired from imaging systems. This section examines imaging as
a function of spatial resolution and mode, in part to develop an initial view of
the tension between area coverage, temporal coverage, and resolution. This
conflict was more obvious when most satellites imaged in a purely nadir view.
Off-nadir imaging systems change the paradigm and greatly reduce the
temporal gap traditionally implicit in high-spatial-resolution systems. Other
illustrations are chosen to reflect the international character of current
remote-sensing systems; they also highlight the variety of organizations
involved with remote sensing—military and civil systems dominate, but there
are also important private systems. The first concept covered here is (nearly)
whole-earth visible imagery. The Apollo 17 image in Fig. 1.2 raises several
important points, especially a vexing one for remote sensing: clouds. Do you
see indications of intelligent life?

1.2.1 Imaging the whole earth: optical and infrared imaging


1.2.1.1 Geostationary Operational Environmental Satellite (GOES):
whole-earth visible imaging
High-altitude satellites (such as weather satellites) image most of a
hemisphere. The GOES-9 visible imager acquires an image of one hemisphere
every 30 minutes, or the northern quad once every 15 minutes, with a spatial
resolution of 1 km. Televised weather reports frequently show images from
GOES satellites (like that shown in Fig. 1.9).
What value do such data have for the military? For one thing, cloud
coverage is revealed. Clouds are a major concern in modern warfare because
they directly affect the ability of pilots and autonomous weapons to locate
targets. This whole-earth image also begins to illustrate the important
tradeoffs between spatial resolution, frequency of coverage, and area of
coverage. High-altitude satellites, such as geosynchronous8 weather satellites,
provide large-area coverage (more or less continuously) and produce an image
every 15–30 minutes at a spatial resolution of 1 km.
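
A back-of-envelope calculation makes the tradeoff concrete and also checks the 6.6-earth-radii figure for geosynchronous orbit quoted in footnote 8 below. This is only a sketch: the 10-bit pixel depth and the treatment of the visible disk as a flat circle are simplifying assumptions, and the orbital mechanics are developed properly in Chapter 5.

```python
import math

# Geosynchronous radius from Kepler's third law (see Chapter 5): r = (mu * T^2 / 4 pi^2)^(1/3).
mu = 3.986e14                 # earth's gravitational parameter GM, m^3/s^2
T_sidereal = 86164.0          # sidereal day, s
r_geo = (mu * T_sidereal**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"GEO radius: {r_geo / 1e3:,.0f} km = {r_geo / 6.371e6:.1f} earth radii")  # ~42,164 km, ~6.6 R_E

# Average data rate for a 1-km full-disk visible image every 30 minutes (flat-disk approximation).
earth_diameter_km = 12_742
disk_pixels = math.pi / 4 * earth_diameter_km**2   # number of 1-km^2 pixels on the visible disk
bits_per_pixel = 10                                 # assumed quantization
rate_bps = disk_pixels * bits_per_pixel / (30 * 60)
print(f"disk pixels: {disk_pixels:.2e}, average rate: {rate_bps / 1e6:.2f} Mbit/s")
# Covering the same disk at a 1-m GSD would multiply the pixel count (and data volume) by 10^6.
```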

8. Geosynchronous orbit (GEO) satellite orbits have a radius of 6.6 earth radii.

Figure 1.9 GOES-9 visible image taken on June 9, 1995 at 18:15 UTC. Image courtesy of
NASA-Goddard Space Flight Center, with data from NOAA GOES.9

1.2.1.2 GOES: whole-earth infrared imaging

A companion figure to the GOES visible image is shown in Fig. 1.10. Infrared
images from the GOES weather satellite again show much of the western
hemisphere. Per the weather-community convention, the gray scales are
inverted: cold is brighter, dark is hotter, so that cold clouds will appear white.
Images taken in three long-wave infrared (LWIR) wavelengths are combined
in a false color image, as further developed in Chapter 8. This image was
taken in daylight in the western hemisphere. The solid earth is hotter than the
oceans during the day and thus appears dark, particularly over the western
United States. The drier western states, with less vegetation, are hotter than
the eastern side of the country.
The earth’s atmosphere decreases monotonically in temperature with
altitude within the troposphere (the region with weather), and the cloud
temperatures vary along with the ambient atmosphere. The character of
infrared emission varies in wavelength with temperature, so the apparent color
of the clouds in this presentation reflects cloud temperatures and therefore
height.

1.2.2 Earth resources systems: 30-m pixels


Classic earth resources satellites provide ground resolution at the 20–40-m
pixel level. This is true for both optical and radar systems, illustrated here.
These systems are typically designed to provide synoptic views of the earth at
roughly two-week intervals—the period of time necessary to cover the earth at
a 30-m resolution from low-earth orbit with technologies from the 1970–2000
time period.

9. http://goes.gsfc.nasa.gov/pub/goes/goes9.950609.1815.vis.gif.


Figure 1.10 GOES-15 infrared imagery taken April 26, 2010 at 17:30 UTC. This is the “first
light” image for the GOES-15 infrared sensors. The composite is made by using the 3.9-μm
infrared channel (G, B) and the long-wave infrared channel at 11 μm (R). The cloud colors
provide information about their height (which corresponds with temperature) and water
content. Here, higher-altitude clouds are colder and appear white.10 Related views are
shown in Fig. 8.15.

1.2.2.1 Landsat 7 (30 m), San Diego


Multiple-wavelength (or multispectral) images are most commonly applied
to earth resources. Landsat has been the flagship system for earth-resources
studies for over four decades. The low-earth orbiting satellites image the
whole earth once every sixteen days. The Enhanced Thematic Mapper Plus
(ETM+) sensors provide 30-m-resolution imagery in seven spectral bands.11
The image in Fig. 1.11 was taken from three visible-wavelength sensors and
combined to make a “true” color image. The figures show one complete
Landsat scene. Figure 1.12 shows a small segment covering San Diego

10. http://www.nasa.gov/mission_pages/GOES-P/news/infrared-image.html; http://goes.gsfc.
nasa.gov/text/goes15results.html.
11. Landsat 7 also offers a higher-resolution panchromatic image with 15-m pixels. The
resolution for the long-wave infrared sensor on the ETM+ is only 60 m.


Figure 1.11 Landsat 7 image of San Diego taken June 14, 2001. The RGB “true color”
image is on the left (30-m pixels), and the thermal infrared (LWIR) image is on the right.
White is “hot” in this display. Temperatures are from 12–52 °C, or 53.6–125.6 °F.

Figure 1.12 Landsat 7 image enlarged (acquired June 14, 2001 at 18:12:08.07Z), with the
“true-color” image on the left and a Landsat thermal image on the right. The right image uses
IR wavelength bands 6 and 7; the red is 11 μm, and the green and blue are 2.2 μm.

harbor. Adjacent to the true color images in Figs. 1.11 and 1.12 are the
corresponding LWIR images from Landsat. The 60-m-resolution sensor is
the highest-spatial-resolution LWIR sensor flown to date on a civil or
commercial system.


Figure 1.13 Landsat 7 panchromatic channel. The high-spatial-resolution channel for the
Enhanced Thematic Mapper Plus (ETM+) has a 15-m resolution capability shown here. The
Coronado Bridge starts to appear clearly. The golf course is bright because this sensor’s
spectral response extends into the near-infrared (see Chapter 6).

In the visible sensor data, the Coronado Bridge is just visible crossing
from San Diego to Coronado Island. Long linear features, such as bridges and
roads, show up well in imagery even if they are narrow by comparison to the
pixel size. Reflective infrared and thermal IR data from Landsat are shown
encoded as an RGB triple on the right side of Fig. 1.12. The hot asphalt and
city features are bright in the red (thermal) frequencies, whereas parks are
green (cool, and highly reflective in short-wave IR).
Figure 1.13 shows the higher-spatial-resolution panchromatic channel
from the ETM+ sensor. In comparison to an imager like GOES, the penalty
paid for this high spatial resolution is a reduced field of view—nominally
185 km across for any image.
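
Even so, the 185-km swath carries a substantial data cost per scene. The sketch below estimates the raw size of one ETM+ scene; the square 185-km footprint, 8-bit pixels, and absence of compression are simplifying assumptions rather than mission specifications.

```python
# Rough, uncompressed data volume of a single Landsat 7 ETM+ scene (assumed square 185-km footprint).
scene_m = 185_000
bands = [
    ("30-m reflective", 30, 6),   # six reflective bands at 30 m
    ("60-m thermal",    60, 1),   # one thermal band at 60 m
    ("15-m pan",        15, 1),   # one panchromatic band at 15 m
]

total_bytes = 0
for name, gsd_m, n_bands in bands:
    pixels_per_side = scene_m // gsd_m
    total_bytes += n_bands * pixels_per_side**2      # assuming 1 byte (8 bits) per pixel
print(f"approximate scene size: {total_bytes / 1e6:.0f} MB")  # a few hundred megabytes
```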

1.2.2.2 SSTL/DMC (30 m)


The previous illustrations were created by government systems. Beginning in
the late 1990s, a remarkable change occurred as small satellite designs
were flown with imaging systems. One of the most influential such systems
started as an experimental effort at the University of Surrey. This effort
spawned the commercial entity Surrey Satellite Technology Limited (SSTL,


Figure 1.14 Full-frame image captured by the UK-DMC on October 1, 2009, at 17:53:00Z.
The false-color “IR” image is 14,400 × 15,550 pixels and provides a 30-m ground sample
distance (GSD). Regions that appear red have significant healthy vegetation. UK-DMC2
image, October 1, 2009 ©2016 DMCii, all rights reserved. Supplied by DMC International
Imaging, U.K.

now part of Airbus).12 SSTL has designed and flown a number of small
satellites, selling many of them to countries without indigenous capabilities
in this area.
Figure 1.14 shows an image of southern California and northern Mexico
(Baja California), comparable to the Landsat system shown earlier in this
chapter, though limited in bands (green, red, and near-infrared). This newer
technology provides a much larger image area (3× in linear dimensions).
The system is limited in bandwidth and collection rates, with revisit times of

12. Now called Airbus Defense and Space (2014).


Figure 1.15 Detail of the UK-DMC2 image of Fig. 1.14, zoomed in on San Diego. The
false-color “IR” rendering is the same, with a 30-m ground sample distance (GSD); regions
that appear red have significant healthy vegetation. UK-DMC2 image October 1, 2009,
©2016 DMCii, all rights reserved. Supplied by DMC International Imaging, U.K.

two weeks being the norm for such low-earth orbiting systems. Surrey
addresses the revisit issue by providing their different customers a means to
team up, formally in the Disaster Monitoring Constellation (DMC). As
the fleet grows, the revisit time drops to about a day for the ensemble of
satellites.
Medium-resolution imaging systems like the DMC system are becoming
practical for the support of agriculture. Figure 1.14 shows a checkerboard
pattern of irrigated vegetation north and south of the Salton Sea in the
middle-right portion of the image. Figure 1.15 shows a zoomed-in image of
San Diego, emphasizing the similarity to the quality of the Landsat 7 data.
The very bright red regions in this figure are golf courses (natural vegetation is
not particularly healthy at this time of year in southern California).
The base system at this writing is focused on payloads with a 10-m
resolution in the panchromatic band and 32-m resolution in the multispectral
(e.g., color) bands. These robust systems cost about 10–20 million USD.


The main limitation in smaller systems is the telemetry bandwidth,
which limits the overall coverage. Higher-bandwidth telemetry demands
higher-power systems, and all such systems are considerably larger than the
Surrey designs.

1.2.3 Higher resolution: 1–3-m ground sample distance


The launch of the IKONOS satellite in 1999 dramatically changed the world of
remote sensing. For the first time, imagery comparable to that obtained from
military systems was widely available to civilians. IKONOS offered 1-m-
spatial-resolution panchromatic imagery, and 4-m-resolution multispectral
(color) imagery. (See Fig. 4.10 for the first light image of Washington, D.C.)
Since then, a fleet of high-resolution commercial systems have flown.

1.2.3.1 Worldview-3: San Diego and Coronado Island


The Worldview-3 (WV3) satellite appears to have the lead as the highest-
spatial-resolution system in orbit at this point, with a 0.30-m panchromatic
sensor resolution and a 1.2-m multispectral resolution. Figure 1.16 presents a
color image of San Diego taken on September 6, 2014 by the Worldview-3
satellite. The color image has been pan-sharpened to a 1.2-m resolution. The
Coronado Bridge (heretofore a rather tenuous, thin line in earlier illustrations)
is now clearly defined, as are the many small watercraft in the harbor.
Figure 1.16 includes an image chip from the panchromatic sensor of the
carrier U.S.S. Midway, now a floating museum. The museum also appears in
Figs. 1.18 and 1.20. There is a large collection of military aircraft on the deck.
Referring back to the concepts of air and naval order of battle, it is clearly
possible to count the aircraft and, in general, identify the type. Direct
comparison between the Worldview satellites with systems described earlier is
difficult: not only does the modern system have higher bandwidth but it also
has the ability to look off-nadir (sideways), thus dramatically improving the
revisit time. See if you can locate the word “Coronado” written in the sand
adjacent to the Hotel del Coronado.

1.2.3.2 High-resolution airborne LiDAR


Figure 1.17 takes a close-up of Fig. 1.16 and changes modalities; laser scanner
(or LiDAR) data are shown for a small patch around the Coronado hotel,
including the raised sand “Coronado.” The elevation is color coded with a
rainbow scale: dark blue is 2 m, and red is 25 m. The dunes are measured to
be about 2 m above the base sand level (light blue/cyan against the dark blue
background). The word “Coronado” is about 260 m in length.13

13. A video shows how the dunes marker has been built up over the years, largely through the
efforts of Armondo Morena, a San Diego city worker: https://www.youtube.com/watch?
v=Ag5n_1CPQ7M.


Figure 1.16 Worldview-3 image of Coronado Island, San Diego, California, 9/16/2014. North
is approximately to the right, and the sun is to the upper left. The Hotel del Coronado is shown
in the upper inset, and the carrier Midway is shown in the lower inset, using the higher-
resolution panchromatic data (0.30-m GSD). Imagery reprinted courtesy of DigitalGlobe.

Laser scanners are extensively used for mapping and surveying, with point
densities of 1–30 points/m² depending on the application. In the illustration
here, the nominal point density is 3.5 pts/m², which is typical for mapping at
the time of this image. The point density corresponds roughly to a ground
resolution of 0.5–1.0 m.
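
The conversion from point density to an effective ground resolution is essentially the mean point spacing, roughly one over the square root of the density. A minimal sketch:

```python
# Approximate mean point spacing for a LiDAR point density given in points per square meter.
def point_spacing_m(density_pts_per_m2: float) -> float:
    """Spacing ~ 1/sqrt(density), assuming roughly uniform point coverage."""
    return density_pts_per_m2 ** -0.5

for density in (1.0, 3.5, 30.0):
    print(f"{density:5.1f} pts/m^2 -> ~{point_spacing_m(density):.2f} m between points")
# 3.5 pts/m^2 gives ~0.53 m, consistent with the 0.5-1.0-m ground resolution quoted above.
```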

Figure 1.17 Image of Coronado Island, San Diego, California with LiDAR data from USGS.
The sensor used was an Optech, Inc. Airborne Laser Terrain Mapper (ALTM) 1225. The
LiDAR data were collected on March 24–25, 2006. The following settings were used for
these flights: 25-kHz laser pulse rate, 26-Hz scanner rate, ±20° scan angle, 300–600-m
AGL altitude, and 95–120-kts ground speed.

1.2.4 High-resolution airborne imagery


Still-higher resolutions are possible, primarily from airborne platforms. Over
the last few years, electronic cameras have begun to replace film cameras, but
the illustration given here comes from a film system, which at the time (2004)
gave the highest quality data. Figure 1.18 shows a flight over San Diego
harbor using a film system, with resolution of better than 1 foot. The relatively
large image size (>400 megapixels) is a consequence of the relatively large
area being imaged at high resolution. Modern digital cameras used for
airborne mapping are frequently operated to image at a 4–6-inch GSD.
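
The frame size and the GSD together fix the ground footprint: the footprint is simply the pixel count per side times the sample distance. Assuming the 21,000-pixel frame width quoted for the scanned film image and a representative 6-inch (about 0.15-m) GSD:

```python
# Ground footprint of a large-format airborne frame: side length = pixels per side x GSD.
pixels_per_side = 21_000
gsd_m = 0.15   # assumed ~6-inch ground sample distance

side_km = pixels_per_side * gsd_m / 1000
megapixels = pixels_per_side**2 / 1e6
print(f"footprint: {side_km:.1f} km x {side_km:.1f} km, frame size: {megapixels:.0f} Mpixels")
# ~3.2 km on a side and ~441 Mpixels, consistent with the >400-megapixel frame noted above.
```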

1.2.5 Synthetic aperture radar (SAR)


Beginning with the launch of RADARSAT-2 at the end of 2007, a small fleet
of high-spatial-resolution radar satellite systems have gone into orbit.
RADARSAT-2 offers C-band (6-cm wavelength) imagery at resolutions of
1–3 m, whereas the German TerraSAR-X, Italian Cosmo SkyMed, and Israeli
TecSAR systems offer a 1-m resolution or better in the X-band (3-cm
wavelength). Figure 1.19 shows the RADARSAT-2 data for San Diego. Ships
become fairly obvious at these spatial resolutions, and radar provides an
important tool for maritime domain awareness. The carriers docked at
Coronado Island are obvious at this resolution.
The concluding imagery examples come from the German TerraSAR-X
satellite. Within the last year, the German Aerospace Center (Deutsches
Zentrum für Luft- und Raumfahrt, or DLR) has been licensed to collect data
at a resolution as fine as 25 cm. Data from the San Diego area are shown
here in Fig. 1.20. The overall observable area is reduced at this resolution
(about 3 × 5 km), but the spatial resolution is remarkable. A subset of the
data taken on August 24, 2015 at 01:50:58 Z (approximately dusk local time)
are shown, with the U.S.S. Midway again illustrated at a higher
magnification.


Figure 1.18 The images shown here are from an aerial photograph taken over San Diego
harbor in 2004: (a) the full frame, (b) a small chip from the 21,000 × 21,000-pixel image
scanned from the film image, and (c) a further zoomed-in view of the 1.3-gigabyte file. The
resolution is between 6 and 12 inches. Notice the glare on the water and how the wind-driven
water waves show from above. The carrier is the U.S.S. Midway, part of an exhibit at the San
Diego Maritime Museum.14 Images reprinted with permission from Lenska Aerial Images.

In closing, let us return to space situational awareness, or Sat². Figure 1.21
shows a synthetic aperture radar image of the International Space Station
(ISS) taken by the German TerraSAR-X system while the Space Shuttle
Endeavour was docked. Smooth surfaces such as solar arrays tend to reflect
energy away from the radar system and appear as though they are transparent
(dark). Corners and edges provide most of the reflections.15

14. http://www.navsource.org/archives/02/41.htm.
15. http://www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts123/multimedia/fd15/fd15_
gallery.html; http://www.dlr.de/en/desktopdefault.aspx/tabid-6840/86_read-22539/.


Figure 1.19 RADARSAT-2 image collected at 3-m resolution on 5/5/2009. Polarization is
HH. Elements of the scene are well captured by radar, others less so. The Coronado Bridge
shows up clearly, as do the air fields due to their absence of reflection. Systems such as
RADARSAT-2 have nearly daily access to mid-latitude and high-latitude targets.
RADARSAT-2 data © Canadian Space Agency and MacDonald, Dettwiler and Associates
Ltd, 2009, all rights reserved.

1.3 Three Axes


The sequence of images presented in this chapter illustrates a few of the different
imaging modalities (visible, infrared, radar, LiDAR) and introduces the decline
in area coverage that comes from increased spatial resolution. There is a basic
conflict between resolution and field of view: image a larger area, and the result
will (generally) have a lower spatial resolution. Going beyond the spatial
dimension, there are, in practice, three dimensions associated with remote
sensing imagery: spatial, spectral, and temporal. Figure 1.22 illustrates these
three dimensions, which define competing requirements for design and
operation. You can have high spatial resolution and global coverage but only
at low temporal coverage (like Landsat, which provides decent pictures only
once every 16 days or so). You can have high temporal coverage (like GOES,
which produces an image once every 30 minutes), but then the spatial resolution is only
1 km. In order to achieve spectral coverage (multispectral or hyperspectral), the
other dimensions will suffer a corresponding penalty.
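
The 16-day figure for Landsat can be estimated directly from the swath width. The sketch below assumes a roughly 99-minute orbital period and contiguous, non-overlapping 185-km swaths at the equator; it ignores the fixed ground-track grid, so it is only an order-of-magnitude check.

```python
# Rough revisit estimate from swath width for a low-earth-orbit mapper like Landsat.
equator_km = 40_075
swath_km = 185
orbit_period_min = 99.0                      # assumed nominal orbital period

orbits_per_day = 24 * 60 / orbit_period_min  # ~14.5 orbits per day
daily_coverage_km = orbits_per_day * swath_km
print(f"days to cover the equator: {equator_km / daily_coverage_km:.0f}")  # ~15, close to the 16-day repeat
```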


Figure 1.20 TerraSAR-X sub-meter imaging. The buildings have a peculiar look due to the
specular reflection of energy back to the satellite. Visible here besides the U.S.S. Midway are
three Nimitz-class carriers docked at North Island and the baseball stadium for the San Diego
Padres. The small streaks in the water are from smaller boats moving rapidly through the
scene. The resolution appears to be a bit better than 25 cm. By comparison to the WV3 and
airborne illustrations presented earlier, the aircraft on the deck of the Midway appear to have
vanished. In the inset image, cars are detectable in the parking lot to the north of the Midway.

A fourth axis, polarization, has been an important parameter for passive and
active radar systems, and it has started to appear in the optical remote-sensing
community as an additional dimension of information.

1.4 Resources
There are some classic and modern remote sensing textbooks to note:
• The classic text—Fundamentals of Remote Sensing and Airphoto
Interpretation, by Thomas Eugene Avery and Graydon Lennis
Berlin—is rather dated even in its 5th edition (1992), but it is still a
great reference with many illustrations.
• Remote Sensing of the Environment: An Earth Resource Perspective, 2nd
edition, published in 2006 by John R. Jensen, is an excellent book by
one of the top people in remote sensing.


Figure 1.21 TerraSAR-X image of the International Space Station (ISS), collected on
March 13, 2008 (1325Z). TerraSAR-X passed the ISS at a distance of 195 km and at a
relative speed of 9.6 km/s. The resolution is about one meter, obtained in a 3-s exposure.
The image grayscale is inverted (dark indicates stronger returns). The size of the ISS is
roughly 110 m × 100 m × 30 m. The Space Shuttle Endeavour was docked at this time, so it
is in the image. The lower image is taken from STS-123 as it departed on March 24.
Reference NASA image S123E010155.16


Figure 1.22 Three dimensions for remote sensing. The three axes are spatial resolution,
spectral resolution, and temporal coverage; systems annotated along the axes range from
IKONOS, Quickbird, and SPOT (high spatial resolution) through Landsat (multispectral) and
Hyperion (hyperspectral) to geosynchronous weather and missile-warning satellites (high
temporal coverage).

• Introduction to the Physics and Techniques of Remote Sensing, 2nd
edition, published in 2006 by Charles Elachi and Jakob J. van Zyl,
updates the classic 1987 textbook by one of the most influential radar
scientists of the modern era.
• Remote Sensing and Image Interpretation, by Thomas M. Lillesand,
Ralph W. Kiefer, and Jonathan W. Chipman (2007), is a classic text,
now in the 6th edition.
• Remote Sensing, Principles and Interpretation, 3rd edition, is a fairly
geology-oriented but still excellent text by Floyd F. Sabins (2007).
• Physical Principles of Remote Sensing, by W. G. Rees, has a new (3rd)
edition, published in 2013, that extends into geophysical topics not
addressed here. Good physics extending beyond the level taught here.
• Introduction to Remote Sensing, by James B. Campbell, provides a good
qualitative view of remote sensing, without equations. Now in its 5th
edition (2011).

16. TerraSAR-X image of the month: The International Space Station (ISS); news release
dated: 4 March 2010. Image acquired 13 March 2008, image #SWE1-E1058981,
http://www.dlr.de/en/desktopdefault.aspx/tabid-6215/10210_read-22539/10210_page-4/.


• By far the best book on the topic of data analysis is Remote Sensing
Digital Image Analysis: An Introduction, by John A. Richards, 5th
edition (2012).

1.5 Problems
1. List 5–10 elements of information that could be determined for NOB from
imagery. Typical main elements are battle group, ships, submarines, ports,
weather, personnel, C3, and medical.
2. What wavelengths of EM radiation are utilized in the images shown in this
chapter? (This is really a review question, best answered after completing
Chapter 2.)
3. Construct a table/graph showing the relationship between the ground
resolution and area of coverage for the sensors shown in this chapter.
(Also a review question.)
4. Compare the various images of San Diego Harbor. What are the
differences in information content for the highest-resolution systems (e.g.,
IKONOS), the earth resources systems (Landsat, visible and IR), and the
radar systems? Which is best for lines of communication? Terrain
categorization? Air order of battle? NOB?

Chapter 2
Electromagnetic Basics

Figure 2.0 The next two chapters follow the progression of energy (light) from the source
(generally the sun) to detectors that measure such energy. Concepts of target reflectance
and atmospheric transmission are developed, and the problem of getting data to the ground
is discussed.

2.1 The Electromagnetic Spectrum


The previous chapter discussed various remote sensing modalities and some
characteristics of modern systems; at this point, it is necessary to review some
basic physics relevant to electromagnetic waves and remote sensing.
The chief things to understand are the electromagnetic (EM) spectrum
and EM radiation, of which light, radar, and radio waves are examples. This
section takes a brief look at the physical equations that underlie EM waves,
the wave equations that result, their energy in the context of the photoelectric
effect, sources of EM radiation, and some fundamental interactions of EM
waves with matter.


2.1.1 Maxwell’s equations


The principles that define electricity and magnetism were codified by James
Clerk Maxwell in the 1860s and 1870s in four equations that bear his name:

1. ∯ E · dS = Q/ε₀   or   ∇ · E = ρ/ε₀,                                    (2.1a)

2. ∯ B · dS = 0   or   ∇ · B = 0,                                          (2.1b)

3. ∮ E · dl = −(∂/∂t) ∬ B · dS   or   ∇ × E = −∂B/∂t,                      (2.1c)

4. ∮ B · dl = μ₀i + μ₀ε₀ (∂/∂t) ∬ E · dS   or   ∇ × B = μ₀J + μ₀ε₀ ∂E/∂t.  (2.1d)

These four equations respectively say


• that the electric flux through a Gaussian surface is proportional to the
charge contained inside the surface;
• that, in the absence of a magnetic point charge, the magnetic flux
through a Gaussian surface is equal to zero;
• that the voltage induced in a wire loop is defined by the rate of change of
the magnetic flux through that loop (the equation that defines electrical
generators); and
• that a magnetic field is generated by a current (typically in a wire) but
also by a time-varying electric field.
These equations can be manipulated in differential form to produce a
new differential equation, the wave equation in either electric or magnetic
fields:

∇²E − ε₀μ₀ ∂²E/∂t² = 0   and   ∇²B − ε₀μ₀ ∂²B/∂t² = 0.   (2.2)
Maxwell understood that the solutions to these equations were defined by
oscillating electric and magnetic fields (E and B). In particular, these
equations immediately give the speed of light, c = 1/√(ε₀μ₀). The complexity of
the solutions varies, but there are some fairly simple ones that involve plane
waves propagating in a straight line. Like all wave phenomena, the solution
involves the wavelength, frequency, and velocity of the radiation. In equation
form, a typical solution looks like this:
 
E(z, t) = E x̂ cos[2π(z/λ − ft)]   or   E(z, t) = E x̂ cos(kz − ωt),   (2.3)

which is the equation for a wave propagating in the plus ẑ direction, with an
electric field polarized in the x̂ direction (more on polarization below).


Figure 2.1 Four cycles of a wave are shown, with wavelength λ or period τ. The wave has
an amplitude A equal to 3.

The various terms in this equation are defined as follows:


E = amplitude of the electric field;
λ = wavelength (in meters);
f = frequency in Hz (cycles/sec);
c = phase velocity of the wave (in m/sec);
ω = angular frequency, ω = 2πf; and
k = the wave number, k = 2π/λ.
The solution depends upon the wavelength and frequency, which are
related by

λf = c.   (2.4)

For EM waves in a vacuum, the value of c = 2.998 × 10⁸ m/s, an important
constant of physics. The angular frequency and wavenumber are somewhat
less intuitive than the frequency and wavelength, but they are standard
terminology in describing waves. The angular frequency ω is defined as
ω = 2πf. The wavenumber is defined as k = 2π/λ. The period τ is the inverse of
the frequency because for a wave it must be true that ω · τ = 2π or f · τ = 1.
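These relations are simple to check numerically. The following minimal Python sketch (illustrative only; the 0.5-μm example wavelength is arbitrary) computes the frequency, angular frequency, wavenumber, and period for a chosen wavelength:

import math

C = 2.998e8  # speed of light in vacuum (m/s)

def wave_parameters(wavelength_m):
    """Return frequency f, angular frequency omega, wavenumber k, and period tau."""
    f = C / wavelength_m             # from lambda * f = c  [Eq. (2.4)]
    omega = 2 * math.pi * f          # omega = 2*pi*f
    k = 2 * math.pi / wavelength_m   # k = 2*pi/lambda
    tau = 1.0 / f                    # f * tau = 1
    return f, omega, k, tau

# Example: green light near 0.5 micron
f, omega, k, tau = wave_parameters(0.5e-6)
print(f, omega, k, tau)  # ~6.0e14 Hz, ~3.8e15 rad/s, ~1.3e7 1/m, ~1.7e-15 s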

2.2 Polarization of Radiation


A subtle but important point is that E and B are both vectors. This vector
character of EM radiation becomes important when considering the concept
of polarization. Familiar as an aspect of expensive sunglasses, polarization
appears in both optical observations and radar. A brief illustration of how
EM waves propagate becomes necessary at this point.
Figure 2.2 shows how the electric and magnetic fields oscillate with respect
to one another in an EM wave (in a vacuum). The electric and magnetic fields
are perpendicular to one another and to the direction of propagation k. These
waves are transverse, as opposed to longitudinal, waves (also termed
compressional, e.g., sound waves). Other forms of polarization are possible


Figure 2.2 An electromagnetic wave. The electric field is perpendicular to the magnetic
field (E ⊥ B), and both are perpendicular to the direction of propagation k. Following a typical
convention, E is in the x direction, B is in the y direction, and the wave is propagating in the z
direction. This same convention is used in Eq. (2.3).

but harder to illustrate. Linear polarization, as discussed here, is the more


common form found in natural environments.
Polarization with active radar systems will be illustrated in Chapter 10 (see
Fig. 10.0). Radar signals are intrinsically linearly polarized, and the
orientation is adjusted according to the mission. The receiver can be adjusted
to receive either co-polarized or cross-polarized signals, i.e., either parallel or
perpendicular to the transmitted signal, respectively. Optical polarization in
nature is relatively subtle.
Figure 2.3 shows a pair of color photographs of a building, trees, and blue
sky. The two images were taken with a linear optical polarization filter in a
pair of perpendicular orientations. The main difference in the images is the

Figure 2.3 Color photographs of Hermann Hall, on the campus of the Naval Postgraduate
School. The low-level clouds are visible against the darker blue sky in the left image; the
reflected light from the clouds is not highly polarized. The exposure settings are constant
with the Nikon D70 camera. A similar scene is shown in Chapter 6 (Fig. 6.25).


darkness of the sky; skylight is intrinsically polarized through the process of


Rayleigh scattering, as described in Chapter 3. The sun is behind and to the
left of the camera in this scene, allowing for a maximum amount of scattering
and thus the most polarization. A subtler feature of the left image is
diminished glare off the red tile roof, resulting in a slightly more saturated
color. Among the more common uses of circular polarization are 3D movies
and high-frequency satellite communications.

2.3 Energy in Electromagnetic Waves


The wave theory of EM radiation (and light) explains a great deal about
phenomena observed in physics, but at the turn of the 20th century it became
obvious that a different perspective was needed to explain some of the
interactions of light and matter—in particular, processes such as the
photoelectric effect (and similar processes important for detection of light).
The inadequacy of the wave theory led to a resurgence of the idea that light, or
EM radiation, might better be thought of as particles, dubbed photons. The
energy of a photon is given by
E = hf,   (2.5)

where f is the frequency of the EM wave (in Hz), and h is Planck's constant:

h = 6.626 × 10⁻³⁴ joule seconds = 4.136 × 10⁻¹⁵ eV seconds.
The electron volt (eV) is a convenient unit of energy related to the standard
metric unit (joules, or J) by the relation 1 eV = 1.602 × 10⁻¹⁹ J. The conversion
factor is just the charge of the electron. This is not coincidental but rather a
consequence of the definition of the nonmetric unit.
Photon energy E is determined by the frequency of EM radiation: the
higher the frequency, the higher the energy. Although photons move at the
speed of light (as would be expected for electromagnetic radiation), they have
zero rest mass, so the rules of special relativity are not violated.
Is light a wave or a particle? The correct answer is “yes.” The stance that is
employed depends upon the process being observed or the experiment being
conducted. The applicable perspective often depends upon the energies
(frequencies) of the photons. It is generally found that wave aspects dominate
at frequencies below about 10¹⁵ Hz; particle aspects dominate at higher
frequencies. In the visible part of the spectrum, both descriptions are useful.
Figure 2.4 summarizes the concepts developed so far. Wavelength,
frequency, and energy are plotted vertically with the labels routinely used to
define wavelength regions. Common abbreviations for wavelength are given.
The unit that might be expected for 10⁻⁶ m, the micrometer, is generally termed


Figure 2.4 The spectrum of electromagnetic radiation.

the micron instead. The angstrom (Å) is a nonmetric unit, but it is used widely
nevertheless, particularly by older physicists. The nanometer (nm) needs to be
carefully distinguished from the nautical mile. Visible-light wavelengths
correspond to a wavelength range from 0.38–0.75 μm and energies of 2–3 eV.

Examples
Consider the following illustrative calculations of the characteristics of optical
frequencies and energies:
Photons corresponding to the “green” portion of the spectrum have a
nominal wavelength of 0.5 μm, or a frequency of 6 × 10¹⁴ Hz. The energy for
such photons can be calculated in electron volts by using Planck’s constant:

E = hf = (4.14 × 10⁻¹⁵ eV·s)(6 × 10¹⁴ Hz) = 2.48 eV.

This energy is on the order of (or slightly less than) typical atomic binding
energies.
Energies of typical x-ray photons are in the 10⁴ to 10⁵ eV range, whereas
the photons of a 100-MHz radio signal are only 4 × 10⁻⁷ eV.
LiDAR systems (Chapter 11) generally put out short pulses (~5 ns) with
an energy of ~10 μJ. How many photons is this for a laser with a wavelength
of 1.064 μm? Starting with Energy = N · hf, where N is the number of photons,
the expression must be rewritten slightly in terms of wavelength:


E = N · hc/λ  ⇒

N = E · λ/(hc) = 10 × 10⁻⁶ J × (1.064 × 10⁻⁶ m)/(6.626 × 10⁻³⁴ J·s × 3 × 10⁸ m/s)
  = 5.35 × 10¹³ photons.
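The arithmetic in these examples is easy to check with a few lines of Python (illustrative only; the constants are the values quoted above):

H_JS = 6.626e-34    # Planck's constant (J*s)
H_EVS = 4.136e-15   # Planck's constant (eV*s)
C = 2.998e8         # speed of light (m/s)

def photon_energy_ev(wavelength_m):
    """Photon energy in eV: E = h*f = h*c/lambda."""
    return H_EVS * C / wavelength_m

def photons_in_pulse(pulse_energy_j, wavelength_m):
    """Number of photons in a pulse: N = E * lambda / (h*c)."""
    return pulse_energy_j * wavelength_m / (H_JS * C)

print(photon_energy_ev(0.5e-6))           # green photon: ~2.5 eV
print(photons_in_pulse(10e-6, 1.064e-6))  # 10-microjoule, 1.064-micron pulse: ~5.4e13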

2.3.1 Photoelectric effect


The concept of energy in photons is very important for detector technology.
One illustration involves the photoelectric effect, a concept that won Albert
Einstein the Nobel Prize. The phenomenon is typically described in the
context of an experiment, as illustrated in Fig. 2.5. If light hits a metal
surface in a vacuum, electrons are liberated from the surface and can
be collected on a second surface, e.g., the collector plate. The energy of the
electrons can be measured by applying a back bias (a negative voltage) to the
collector plate, repelling the electrons. Electric potentials of one to two volts
are typically sufficient to zero out the current. Traditional wave theory
predicts that the amplitude of the current will vary with the amplitude
(intensity) of the light, which it does. However, wave theory cannot explain
an observed dependence on the wavelength (frequency) of the light. The
higher the light frequency (the bluer the light), the greater the electron
energy. Einstein combined the above concept, E = hf, with the idea of a work
function, a fixed amount of energy necessary to liberate electrons from a
metal surface—typically an electron volt or two.

Figure 2.5 Layout for demonstration of the photoelectric effect. The convention for the
current is that it flows in the direction opposite that of the electrons.


Figure 2.6 Results from a demonstration of the photoelectric effect using a mercury (Hg)
light source.1

A classical laboratory experiment in the photoelectric effect is illustrated


in Fig. 2.5. Light of varying wavelengths (colors) is shone on a metal plate in a
sealed vacuum cylinder. If the frequency is high enough, electrons are emitted
from the surface and propagate across a short gap to the collector plate. The
collected electrons can be measured as a current. A calibrated voltage source
is placed in the circuit to oppose the flow of the current. As the voltage is
varied, the current varies. In this illustration, a mercury lamp with several
distinct spectral lines is used; the results are shown in Fig. 2.6.
This illustration uses light at three wavelengths: λ = 435.8 nm, 546.1 nm,
and 632.8 nm (blue, green, and red, respectively). Calculating the energies
corresponding to these wavelengths for, e.g., blue,

E = hf = hc/λ = (4.136 × 10⁻¹⁵ eV·s × 3 × 10⁸ m/s)/(435.8 × 10⁻⁹ m)
  = (1.24 × 10⁻⁶ eV·m)/(4.358 × 10⁻⁷ m) = 2.85 eV.

Similarly, E = 2.27 eV and E = 1.96 eV for wavelengths of 546.1 and


632.8 nm, respectively. The experimental data in Table 2.1 show that the total
photon energy equals the electron energy plus the work function, or

1. Tel-Atomic Incorporated, PO Box 924, Jackson, MI 49204, 800-622-2866,
sales@telatomic.com, http://www.telatomic.com/peffect.html.


Table 2.1 Experimental data from Fig. 2.6.

Wavelength (nm)   Photon energy (eV)   Electron stopping potential W (V)   Work function Φ (eV)
435.8             2.85                 1.25                                1.6
546.1             2.27                 0.7                                 1.6
632.8             1.96                 0.4                                 1.6

E = hf = KE + qΦ,   (2.6)

where the total energy is E, the kinetic energy is KE = qW, and the work
function gives the potential-energy term qΦ. The magnitude of the electron
charge is q in this equation; it converts from eV to joules.
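The work function in Table 2.1 can be recovered from the measured stopping potentials using Eq. (2.6). A minimal Python sketch (illustrative only; the wavelengths and voltages are those listed in the table):

H_EVS = 4.136e-15  # Planck's constant (eV*s)
C = 2.998e8        # speed of light (m/s)

# (wavelength in nm, measured stopping potential W in volts) from Table 2.1
measurements = [(435.8, 1.25), (546.1, 0.7), (632.8, 0.4)]

for wavelength_nm, stopping_v in measurements:
    photon_ev = H_EVS * C / (wavelength_nm * 1e-9)  # photon energy hf, in eV
    work_function_ev = photon_ev - stopping_v       # Phi = hf - KE  [Eq. (2.6)]
    print(wavelength_nm, round(photon_ev, 2), round(work_function_ev, 2))
    # prints roughly: 435.8 2.85 1.6 / 546.1 2.27 1.57 / 632.8 1.96 1.56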

2.3.2 Photomultiplier tubes


The photoelectric effect demonstrates that light (photons) has energy and also
leads to our first example of how detectors work: the photomultiplier tube
(PMT). This old technology is still in use, having evolved into modern
technological devices such as night-vision scopes.
The front surface of the PMT is a very efficient photoelectron source
within the spectral range (and photon energy) of interest. An initial-incident
photon produces a single electron (statistically, only 70–90% of photons
produce an electron), and the electron is multiplied through secondary
emission, a close cousin of the photoelectric effect. This is the technology used
in the DMSP/OLS-PMT detectors and, to some extent, in modern LiDAR
systems.
As illustrated in Fig. 2.7, secondary emission is a process that can generate
multiple electrons for each electron hitting a surface. The yield varies with the
material, but it is generally two or more for electron energies of a few hundred
volts. This behavior enables a process of amplification to occur as the
electrons from each stage are moved along. A fairly standard, end-illuminated
PMT design is shown in Fig. 2.8. A typical tube will have ten stages or so,
with a net acceleration of ~1–2 kV distributed over the stages. A photon
incident on the alkali window on the left will be multiplied until a measurable
charge pulse of 10⁵ to 10⁶ electrons is carried out on the anode on the right.
Photomultiplier-tube technology has evolved into new forms that use fiber
optic bundles, wherein the thin fibers are replaced with channels, or hollow
tubes, that again exploit the multiplication process provided by secondary
emission. Figure 2.9 illustrates how this technology is implemented.

2.4 Sources of Electromagnetic Radiation


Now that electromagnetic waves have been defined and the manner in which
photons might be detected has been explained, this section considers how EM


Figure 2.7 The Sternglass formula is the standard description for the yield of secondary
electrons as a function of incident electron energy. Sternglass published an expression for
the secondary current by electron impact using the yield function δ(E) = 7.4 δm (E/Em) exp[−2√(E/Em)],
where the maximum yield δm and the energy at which it occurs Em vary from
material to material. Illustrative values for glass (SiO₂), for example, are δm = 2.9 and
Em = 420 eV.2

Figure 2.8 Hamamatsu photomultiplier tube.3

waves are created. There are several major sources of EM radiation, all
ultimately associated in some form with the acceleration (change of energy) of
charged particles (mostly electrons). For remote sensing, these can be divided
into three categories:

2. Sternglass, E.J. (1954) Sci. Pap. 1772, Westinghouse Research Laboratory, Pittsburgh, PA.
3. https://www.hamamatsu.com/resources/pdf/etd/PMT_handbook_v3aE.pdf; or Photomulti-
plier Tubes, Sales Brochure, TPMO0005E01, June, 2002, Hamamatsu Photonics, KK.


Figure 2.9 A micro-channel plate (image intensifier) design. A standard Hamamatsu
product will be 1–10 cm in diameter, 0.5–1.0 mm thick, and have a channel pitch of
10–30 μm. They can be (and are) grouped to increase the multiplication factor (10⁴ for one
stage, 10⁶ for two stages, and 10⁸ for three stages). To form an image, a phosphor plate is
placed at the end of the stack.4

• Individual atoms or molecules that radiate line spectra;


• Hot, dense bodies that radiate a continuous “blackbody” spectrum; and
• Electric currents moving in wires (i.e., antennas).

2.4.1 Line spectra


Single atoms or molecules emit light in a form called line spectra. An atom or
molecule that is reasonably isolated (such as in a gas at ordinary temperatures
and pressures) will radiate a discrete set of frequencies called a line spectrum. If
radiation exhibiting a continuous spectrum of frequencies is passed through a
gas, a discrete set of frequencies is absorbed by the gas, leading to a spectrum
of discrete absorption lines.
The radiated (and absorbed) wavelengths are characteristic of the atom
or molecule in question and thus present a powerful tool for determining the
composition of radiating (or absorbing) gases. Line-spectra analysis
accounts for much of our knowledge of the chemical composition of stars
(including the sun).
The processes of absorption and emission of photons are reasonably well
explained by the Bohr model of the atom, developed at the beginning of the
20th century, which uses the familiar atom-as-solar-system construct. This
model has a nucleus at the center of the atom composed of heavy protons (+)
and neutrons. The lighter electrons (−) orbit the nucleus at well-defined radii
that correspond to different energy levels. The closer they orbit the nucleus,

4. References: all Hamamatsu Photonics, KK Rectangular MCP and Assembly Series TMCP
1006E02, December 1999, Circular MCP and Assembly Series, TMCP1007E04, December
1999, and Image Intensifiers, TII0001E2, September 2001. https://www.hamamatsu.com/
resources/pdf/etd/PMT_handbook_v3aE.pdf.


Figure 2.10 The Bohr postulate: photons produced/destroyed by a discrete transition in


energy.

the lower (more negative) their energy levels. As energy is given to the electrons,
the radii of their orbits increase until they finally break free. Bohr hypothesized
that the radii of the orbits were constrained by quantum mechanics to have
certain values (really, a constraint on angular momentum), which produces a
set of discrete energy levels that are allowed for the electrons. Bohr also
assumed that the emission and absorption of energy (light) by an atom could
only occur for transitions between the discrete energy levels allowed to
electrons. Figure 2.10 illustrates the concept that photons are emitted (or
absorbed) in changes of these discrete energy levels. See Appendix 1 for more
careful analysis of the Bohr model.
A few pages of mathematics (in the appendix) give the formula for the
energy of the electrons orbiting in hydrogen-like atoms:
 
E = −(1/2)[Ze²/(4πε₀)]² m/(ħ²n²) = Z² E₁/n²,   (2.7)

where for Z = 1, we find that for hydrogen E₁ = −me⁴/(32π²ε₀²ħ²) = −13.58 eV
(n = quantum number 1, 2, 3, . . . ; Z = atomic number; m = electron mass;
e = electron charge; and the remaining terms are constants).
Figures 2.11 and 2.12 illustrate energy levels in Bohr’s model of the
hydrogen atom. The ionization energy—the energy necessary to remove the
electron from its “well”—is 13.58 eV. If the electron gains somewhat less
energy, it may move up to an excited state, where n > 1. For example, if an
electron beginning in the ground state gains 10.2 eV, it will move up to the
n = 2 level. If the electron gains an energy of 13.58 eV or more, the atom will
be completely ionized.
When the electron drops from n = 2 to n = 1, it will emit a photon of
10.19-eV energy at a wavelength of

λ = hc/ΔE = 121.6 nm = 1216 Å.

If ΔE is expressed in electron volts, which it usually is, then the constant hc in
the numerator can be written as


Figure 2.11 An energy-level diagram of a hydrogen atom, showing the possible transitions
corresponding to the different series. The numbers along the transitions are wavelengths in
units of angstroms, where 1 nm = 10 Å.5

hc = 4.14 × 10⁻¹⁵ eV·s × 3 × 10⁸ m/s = 1.24 × 10⁻⁶ eV·m,

and thus the wavelength λ is given by

λ(m) = 1.24 × 10⁻⁶/ΔE(eV)   or   λ(nm) = 1240/ΔE(eV).   (2.8)

In general, transitions will occur between different energy levels, resulting in a


wide spectrum of discrete spectral lines. Transitions from (or to) the n = 1

5. Adapted from Fundamentals of Atomic Physics, Atam P. Arya, p 264, 1971.


Figure 2.12 The Balmer series: visible-region hydrogen spectra in emission and
absorption.6

energy level (the ground state) are called the Lyman series. The n = 2 to n = 1
transition is the Lyman alpha (α) transition. This ultraviolet (UV)
emission is one of the primary spectral (emission) lines of the sun’s upper
atmosphere. The emission (or absorption) lines in the visible portion of the
sun’s spectrum are the Balmer series, i.e., transitions from n > 2 to n = 2.
Higher-order series are of less importance for our purposes.
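With the Bohr energy levels and Eq. (2.8), the line wavelengths are easy to compute. A minimal Python sketch (illustrative only) reproduces the Lyman-α and Balmer wavelengths discussed here:

E1_EV = -13.58  # hydrogen ground-state energy (eV)

def transition_wavelength_nm(n_upper, n_lower):
    """Wavelength (nm) of the photon emitted in an n_upper -> n_lower transition."""
    delta_e = E1_EV / n_upper**2 - E1_EV / n_lower**2  # energy released, in eV
    return 1240.0 / delta_e                            # lambda(nm) = 1240/dE(eV), Eq. (2.8)

print(transition_wavelength_nm(2, 1))  # Lyman-alpha: ~122 nm (UV)
print(transition_wavelength_nm(3, 2))  # Balmer-alpha: ~657 nm (red)
print(transition_wavelength_nm(4, 2))  # Balmer-beta: ~487 nm (cf. Problem 5)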
Although the Bohr model was ultimately replaced by the solution of the
Schrödinger equation and a more-general form of quantum mechanics, it
successfully predicts the observed energy levels for one-electron atoms and
illustrates the quantum nature of the atom and associated energy levels. It is
also a good beginning for understanding the interesting spectral characteristics
that reflected and radiated light may exhibit in remote-sensing applications.

2.4.2 Blackbody radiation


Blackbody radiation is emitted by hot solids, liquids, or dense gases and has a
continuous distribution of radiated wavelength, as shown in Fig. 2.13. The
curves in this figure give the radiance L in the following dimensions:
Power
unit area · wavelength · solid angle
or units of watts/(m2m ster). The radiance equation is
2hc2 1
Radiance ¼ L ¼ · hc , (2.9)
l 5
elkT  1
where c ¼ 3  108 m/s, h ¼ 6.626  10–34 joules per second (J/s), and
k ¼ 1.38  10–23 joules per kelvin (J/K).

6. Figure from Fran Bagenal, http://dosxx.colorado.edu/~bagenal/1010/SESSIONS/13.Light.html.


Figure 2.13 Blackbody radiation as a function of wavelength. Radiance is in dimensions of


power per unit area–per unit wavelength–per unit solid angle.

It is a little easier to decipher the nature of the formula if it is rewritten


slightly:
L = [2/(c³h⁴)] · (hc/λkT)⁵/(e^(hc/λkT) − 1) · (kT)⁵ = [2/(c³h⁴)] · x⁵/(eˣ − 1) · (kT)⁵,   (2.10)

where the dimensionless term x = hc/(λkT) is defined. The shape of this
function of wavelength λ does not change as the temperature changes; only
the overall amplitude changes (and, of course, the location of the peak in
wavelength).
Real materials will differ from the idealized blackbody in their emission of
radiation. The emissivity of a surface is a measure of the efficiency with which
the surface absorbs (or radiates) energy and lies between 0 (for a perfect
reflector) and 1 (for a perfect absorber). A body that has ε = 1 is called a
“black” body. In the infrared, many objects are nearly blackbodies—in
particular, vegetation. Materials with ε < 1 are called gray bodies. Emissivity
ε will vary with wavelength.
Some textbooks emphasize another form of Planck’s law, featuring an
extra π:

Radiant exitance = M = (2πhc²/λ⁵) · 1/(e^(hc/λkT) − 1)   [watts/(m²·μm)].   (2.11)

The difference is that the dependence on the angle of the emitted radiation has
been removed by integrating over the solid angle. This can be done for
blackbodies because they are “Lambertian” surfaces by definition—the
emitted radiation does not depend upon the angle, and M = πL.
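Equation (2.9) is straightforward to evaluate numerically (Problem 7 at the end of this chapter asks for exactly this). A minimal Python sketch, illustrative only, with two arbitrary example cases:

import math

H = 6.626e-34  # Planck's constant (J*s)
C = 2.998e8    # speed of light (m/s)
K = 1.381e-23  # Boltzmann's constant (J/K)

def planck_radiance(wavelength_um, temperature_k):
    """Blackbody spectral radiance L(lambda, T) in W/(m^2 * micron * sr), Eq. (2.9)."""
    lam = wavelength_um * 1e-6  # microns -> meters
    per_m = (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K * temperature_k)) - 1)
    return per_m * 1e-6         # per meter of wavelength -> per micron

print(planck_radiance(10.0, 300.0))  # ~10 W/(m^2 um sr), near the terrestrial peak
print(planck_radiance(0.5, 5800.0))  # ~2.7e7 W/(m^2 um sr), solar-like source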


For the purposes of this book, two aspects of the Planck curves are of
particular interest: the total power radiated, which is represented by the area
under the curve, and the wavelength at which the curve peaks, λmax.
The power radiated (integrated over all wavelengths) is given by the
Stefan–Boltzmann law:

R = σεT⁴ (W/m²),   (2.12)

where R is the power radiated per square meter, ε is the emissivity (taken as
unity for a blackbody), σ = 5.67 × 10⁻⁸ W/(m²·K⁴) (Stefan’s constant), and T is
the temperature of the radiator (in K).
Wien’s displacement law gives the wavelength at which the peak in
radiation occurs:

λmax = a/T   (2.13)

for a given temperature T. Wien’s constant a has the value

a = 2.898 × 10⁻³ m·K,

which gives λmax in meters if T is in kelvin.

Example
Assume that the sun radiates like a blackbody (which is not a bad assumption,
though two slightly different temperatures must be chosen to match the
observed quantities):
(a) Find the wavelength at which this radiation peaks, λmax. The solar
spectral shape in the visible is best matched by a temperature of
6000 K:

λmax = a/T = (2.898 × 10⁻³ m·K)/(6000 K) = 4.83 × 10⁻⁷ m.

The spectrum peaks at about 500 nm, as illustrated in Fig. 2.14.
(b) Find the total power radiated by the sun. The Stefan–Boltzmann law
is best served by an “effective temperature” of 5800 K.
We can calculate R, the power emitted per square meter of the
surface, by using R = σεT⁴ and assuming that ε = 1 (blackbody).
Upon evaluation,

R = 5.67 × 10⁻⁸ × 1 × 5800⁴ = 6.42 × 10⁷ W/m².


To find the total solar power output, multiply by the solar surface area
S = 4πR², where R = 6.96 × 10⁸ m is the mean radius of the sun.
Therefore, the total solar power output P is


Figure 2.14 The solar spectrum, based on the spectrum of Neckel and Labs. The peak
occurs at about 460 nm (blue or cyan). The data illustrated here represent the “top-
of-atmosphere” incident radiance. Reprinted with permission from “The solar radiation
between 3300 and 12500 angstrom,” Solar Physics 90, 205–258 (1984). Data file courtesy
of Bo-Cai Gao, NRL.

P = R(4πR²) = 4π(6.96 × 10⁸)² × (6.42 × 10⁷),

P = 3.91 × 10²⁶ W.

The sun’s spectrum is shown in Fig. 2.14 with the spectrum of a 5800-K
blackbody superimposed.7
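The same arithmetic can be reproduced in a few lines of Python (illustrative only; the constants are those quoted above):

import math

SIGMA = 5.67e-8    # Stefan's constant, W/(m^2 K^4)
WIEN_A = 2.898e-3  # Wien's constant, m*K
R_SUN = 6.96e8     # mean solar radius, m

def peak_wavelength_m(temperature_k):
    """Wien's displacement law, Eq. (2.13): lambda_max = a/T."""
    return WIEN_A / temperature_k

def radiated_power_w_per_m2(temperature_k, emissivity=1.0):
    """Stefan-Boltzmann law, Eq. (2.12): R = sigma*eps*T^4."""
    return SIGMA * emissivity * temperature_k**4

print(peak_wavelength_m(6000.0))     # ~4.83e-7 m (483 nm)
R = radiated_power_w_per_m2(5800.0)  # ~6.4e7 W/m^2
print(R * 4.0 * math.pi * R_SUN**2)  # total solar output: ~3.9e26 W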

2.5 Electromagnetic-Radiation–Matter Interactions8


Electromagnetic radiation (EMR) that impinges upon matter is called incident
radiation. The strongest source of incident radiation for the earth is the sun.
Such radiation is called insolation, an abbreviation of “incoming solar
radiation.” The full moon is the next strongest source, but its radiant energy
is only about a millionth that of the sun. Upon striking matter, EMR may be
transmitted, reflected, scattered, or absorbed in proportions that depend upon

7. K. Phillips, Guide to the Sun, p. 83–84, Cambridge Press, Cambridge, U.K. (1992).
8. This section is adapted from the classic text by T. E. Avery and G. L. Berlin, Fundamentals
of Remote Sensing and Airphoto Interpretation, Macmillan Publishing Company, New
York (1992).


Figure 2.15 The four interactions defined here are somewhat artificial from a pure physics
perspective, but they are nonetheless extremely useful. Figure reprinted with permission
from Avery and Berlin (1992).8

• the compositional and physical properties of the medium,


• the wavelength or frequency of the incident radiation, and
• the angle at which the incident radiation strikes a surface.
These four fundamental energy interactions with matter are illustrated in
Fig. 2.15.

2.5.1 Transmission
Transmission is the process by which incident radiation passes through matter
without measurable attenuation; the substance is thus transparent to the
radiation. Transmission through material media of different densities (e.g., air
to water) causes radiation to be refracted or deflected from a straight-line path
with an accompanying change in its velocity and wavelength; the frequency
always remains constant. In Fig. 2.15, it is observed that the incident beam of
light at angle θ₁ is deflected toward the normal when going from a low-density


medium to a denser one at angle θ₂. Emerging from the far side of the denser
medium, the beam is refracted from the normal at angle θ₃. The angle
relationships in Fig. 2.15 are θ₁ > θ₂ and θ₁ = θ₃.
The change in the EMR velocity is explained by the index of refraction n,
which is the ratio between the velocity of the EMR in a vacuum c and its
velocity in a material medium v:
n = c/v.   (2.14)
The index of refraction for a vacuum (perfectly transparent medium) is equal
to 1, or unity. Because v is never greater than c, n can never be less than 1 for
any substance. Indices of refraction vary from 1.0002926 (for the earth’s
atmosphere) to 1.33 (for water) and 2.42 (for a diamond). The index of
refraction leads to Snell’s law:

n₁ sin θ₁ = n₂ sin θ₂.   (2.15)
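Snell's law is easy to apply numerically. A minimal Python sketch (illustrative only; the air-to-water indices match Problem 11 at the end of the chapter):

import math

def refraction_angle_deg(theta1_deg, n1=1.0, n2=1.33):
    """Refraction angle from Snell's law, Eq. (2.15): n1*sin(theta1) = n2*sin(theta2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

print(refraction_angle_deg(30.0))  # ~22 deg for an air-to-water interface
print(2.998e8 / 1.33)              # speed of light in water, v = c/n: ~2.25e8 m/s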


2.5.2 Reflection
Reflection (also called specular reflection) describes the process whereby
incident radiation bounces off the surface of a substance in a single,
predictable direction. The angle of reflection is always equal and opposite to
the angle of incidence (θ₁ = θ₂ in Fig. 2.15). Reflection is caused by surfaces
that are smooth relative to the wavelengths of incident radiation. These
smooth, mirror-like surfaces are called specular reflectors. Specular reflection
causes no change to either the EMR velocity or wavelength.
The theoretical amplitude of the reflectance at a dielectric interface can be
derived from electromagnetic theory and can be shown to be9
E polarized perpendicular to the plane of incidence:

r⊥ = (n₁ cos θ₁ − n₂ cos θ₂)/(n₁ cos θ₁ + n₂ cos θ₂);   (2.16a)

E polarized parallel to the plane of incidence:

r∥ = (n₂ cos θ₁ − n₁ cos θ₂)/(n₂ cos θ₁ + n₁ cos θ₂).   (2.16b)
Here, n₁, θ₁ and n₂, θ₂ are the refractive indices and angles of incidence and
refraction in the first and second media, respectively. (Snell’s law defines θ₂;
other versions of these equations can be obtained as a function of the incident
angle, but they are more tedious to present.) Here, r is the ratio of the
amplitude of the reflected electric field to the incident field. The intensity of
the reflected radiation is the square of this value. Figure 2.16 shows the

9. See, for example, E. Hecht, Optics, 4th Edition, Addison Wesley, 2001.


Figure 2.16 Fresnel equations. Both curves approach 1 (100% reflection) as the incident
angle approaches 90°. There is a range of incident angles for which the intensity of the
parallel component is very small, reaching zero at the Brewster angle.

intensity as a function of incident angle for typical air-to-glass values of the


index of refraction. The difference between these numbers is why light
reflected from surfaces like water becomes highly polarized.
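The curves in Fig. 2.16 can be generated directly from Eqs. (2.16a) and (2.16b) together with Snell's law. A minimal Python sketch (illustrative only, assuming air-to-glass indices of 1.0 and 1.5):

import math

def fresnel_intensities(theta1_deg, n1=1.0, n2=1.5):
    """Reflected intensities (R_perp, R_parallel): the squares of Eqs. (2.16a/b)."""
    t1 = math.radians(theta1_deg)
    t2 = math.asin(n1 * math.sin(t1) / n2)  # Snell's law gives theta2
    r_perp = (n1 * math.cos(t1) - n2 * math.cos(t2)) / (n1 * math.cos(t1) + n2 * math.cos(t2))
    r_par = (n2 * math.cos(t1) - n1 * math.cos(t2)) / (n2 * math.cos(t1) + n1 * math.cos(t2))
    return r_perp**2, r_par**2

print(fresnel_intensities(0.0))   # normal incidence: both ~0.04 (see Problem 12)
print(fresnel_intensities(56.3))  # near Brewster's angle: parallel term nearly zero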

2.5.3 Scattering
Scattering (also called diffuse reflection) occurs when incident radiation is
dispersed or spread out unpredictably in many directions, including the
direction from which it originated (Fig. 2.15). In the natural environment,
scattering is much more common than reflection. Scattering occurs with
surfaces that are rough relative to the wavelengths of incident radiation. Such
surfaces are called diffuse reflectors. The velocity and wavelength of
electromagnetic waves are not affected by scattering.
Variations in scattering manifest themselves in varying characteristics in
the bidirectional reflectance distribution function (BRDF). For an ideal,
Lambertian surface, this function would nominally be a cosine curve, but the
reality generally varies quite a bit. Aside from the world of remote sensing,
BRDF is also studied in computer graphics and visualization.
Figure 2.17 shows the consequences of variation in the scattering function,
as observed by the Multi-angle Imaging SpectroRadiometer (MISR) on the NASA
Terra satellite, orbiting at a 700-km altitude. At left is a “true-color” image
from the downward-looking (nadir) camera on the MISR. This image of a
snow-and-ice-dominated scene is mostly shades of gray. The false-color image
at right is a composite of red-band data taken by the MISR’s forward 45.6°,


Figure 2.17 Multi-angle Imaging SpectroRadiometer (MISR) images of Hudson Bay and
James Bay, Canada, February 24, 2000. This example illustrates how multi-angle viewing
can distinguish physical structures and textures. The images are about 400 km (250 miles)
wide with a spatial resolution of about 275 m (300 yards). North is toward the top. Photo
reprinted courtesy of NASA/GSFC/JPL, MISR Science Team (PIA02603).

nadir, and aft 45.6° cameras, displayed in blue, green, and red colors,
respectively. Color variations in the right image indicate differences in the
angular reflectance properties. The purple areas in the right image are low
clouds, and the light blue at the edge of the bay is due to increased forward
scattering by the fast (smooth) ice. The orange areas are rougher ice, which
scatters more light in the backward direction.

2.5.4 Absorption
Absorption is the process by which incident radiation is taken in by a medium.
For this to occur, the substance must be opaque to the incident radiation. A
portion of the absorbed radiation is converted into internal heat energy, which
is subsequently emitted or reradiated at longer thermal-infrared wavelengths.

2.6 Problems
1. MWIR radiation covers the 3–5-μm portion of the EM spectrum. What
energy range does this correspond to in eV?


2. Show that 1 μm corresponds to roughly 1 eV (this is a useful fact to


memorize).
3. What frequency is an X-band radar? What wavelength (refer to Chapter 9)?
4. What is the ground-state energy for a He+ ion in eV (Z = 2)?
5. Calculate the energy (in eV), frequency (in Hz), and wavelength (in
meters, microns, and nanometers) for the n = 4 to n = 2 transition in a
hydrogen atom (Z = 1). This is the Balmer-β transition.
6. Bathymetric (green) LiDAR systems operate at 532 nm (nanometers).
Newer systems are pushing down to pulse lengths of ~2 ns. For a pulse
energy of 10 mJ, how many photons are in a single pulse? How many
watts is the laser putting out? How long (in a spatial dimension) is the
LiDAR pulse? That is, how far does light propagate in 2 ns?
7. Calculate the radiance L(λ) for T = 1000 K from λ = 0–20 μm, and then
plot the result. This is an exercise in calculation, and you should be sure
you can obtain the correct answer with a hand calculator at a minimum of
2 wavelengths, say 3 and 10 μm.
8. Calculate the peak wavelength for radiation at T = 297 K, 1000 K, and
5800 K, in microns and nanometers.
9. Calculate the radiated power for a blackbody at 297 K, in watts/m².
Assume a blackbody (ε = 1). Assume a surface area of 2 m², and calculate
the radiated power in watts.
10. From the formula for blackbody radiation, consider the element
x⁵/(eˣ − 1). For what value of x does this function have a maximum? From
that, can you obtain Wien’s constant? This becomes a difficult problem if
you use calculus. A more straightforward approach is to plot the function
of x using a calculator or computer.
11. Snell’s law problem: For an air–water interface, one can use typical values
n₁ = 1, n₂ = 1.33. For such values, calculate θ₂ if θ₁ = 30°. What is the
speed of light in water? What is the angle for the reflected ray?
12. The formula for the amplitude of the reflectance of an incident EM wave,
given in Eqs. (2.16a) and (2.16b), simplifies nicely for normal incidence
(θ = 0°). The equations both reduce to r = (n₁ − n₂)/(n₁ + n₂). Now the
intensity of the reflected EM wave (light) is the square of the electric field,
so for unpolarized incident light, the intensity of the light is

R = [(n₁ − n₂)/(n₁ + n₂)]².

For a light wave incident on glass, coming from air, calculate R. Use
n₁ = 1 for air, and n₂ = 1.5 for glass. Does this correspond to your
experience when looking out a window from a lit room at night?

Chapter 3
Optical Imaging

This chapter discusses remote sensing in the visible EM spectrum, beginning


with images and technology from the first remote-sensing satellites (the
Corona spy satellites). Following the Corona illustration, the chapter resumes
the progression of photons through the atmosphere and into a spacecraft that
began in Chapter 2.

3.1 The First Remote-Sensing Satellite: Corona


3.1.1 History
Immediately following World War II, the U.S. began to experiment with
imaging from space, first with captured V2 rockets and later with various
versions of the rockets being developed as part of the U.S. missile program.
These early experiments showed that it was possible to image the earth from
space. By the late 1950s, it was time to attempt earth imaging from an orbital
system.
Corona was America’s first operational space-reconnaissance project. It
was developed as a highly classified program under the joint management of
the CIA and USAF, a relationship that evolved into the National
Reconnaissance Office (NRO). For context, the first Soviet satellite, Sputnik,
was launched on October 4, 1957, and Van Allen’s Explorer spacecraft flew
on a Redstone rocket on January 31, 1958. President Eisenhower approved
the Corona program in February 1958. This decision proved to be farsighted:
when Francis Gary Powers was shot down in a U-2 on May 1, 1960, the
President was forced to terminate reconnaissance flights over the Soviet
Union (see Fig. 3.1).
The first Corona test launch (February 28, 1959) was the first of twelve
failed missions in the Discoverer series (a cover name), where seven involved a
launch malfunction, and five involved satellite and camera malfunctions.


Figure 3.1 Front page of the New York Times on August 20, 1960. Note the other
headlines.

Mission 13 yielded the first successful capsule recovery from space, on


August 10, 1960.1
The first high-resolution images from space were obtained during the next
mission (August 18, 1960). The last Corona mission, number 145, was
launched May 25, 1972; the last images were taken May 31, 1972. Curiously,
signals intelligence got a slightly earlier start; the launch of the first Galactic
Radiation Background Experiment (GRAB) satellite on June 22, 1960 carried
the electronic intelligence discipline into space.
The early imaging resolution was on the order of 8–10 m, eventually
improving to 2 m (6 feet). Individual images on average covered approximately
10 miles by 120 miles. The system operated for nearly twelve years, and over
800,000 images were taken from space. The declassified image collection includes
2.1 million feet of film in 39,000 cans. The subsequent Gambit or KH-7 mission
acquired imagery with resolutions of 2–4 feet beginning in July 1963, with much

1. The United States’ Explorer-6 transmitted the first (electronic) space photograph of earth in
August 1959; the spin-stabilized satellite had television cameras. These images, apparently
now lost, predate the more-official first civilian images from space taken by TIROS 1,
launched on April 1, 1960 (see Chapter 8 and http://nssdc.gsfc.nasa.gov/, NSSDC ID: 59-
004A-05). Russian Luna-3 images of the far side of the moon were transmitted to earth in
October, 1959 (seventeen images from October 7–18, http://nssdc.gsfc.nasa.gov/database/
MasterCatalog?sc=1959-008A).


smaller imaging areas.2 Higher-resolution KH-8 data have not yet been
declassified, although the systems themselves have been.

3.1.2 Technology
The Corona concept uses film cameras to record images for a few days before
dropping the film via a recovery vehicle. The film containers were de-orbited and
recovered by Air Force C-119 (and C-130) aircraft while floating to earth on a
parachute. The system adapted aerial-photography technology with a constantly
rotating, stereo, panoramic-camera system [Figs. 3.2(a), 3.2(b), and 3.3]. The low
orbital altitudes (typically less than 100 miles) and slightly elliptical orbits eased
some of the problems associated with acquiring high-spatial-resolution imagery.
The “Gambit” series of KH-7 satellites flew at even lower altitudes with initial
perigees as low as 120 km. Appendix 2 provides details on the Corona missions.3
The cameras, codenamed “Keyhole,” began as variants on products
designed for aerial photography. The first cameras, the “C” series, were
designed by Itek and built by Fairchild. Two counter-rotating cameras,
pointing forward and aft and viewing overlapping regions, allowed for stereo
views (Fig. 3.4). The cameras used long filmstrips (2.2 inches × 30 feet) and an
f/5.0 Tessar lens with a focal length of 24”. The first images had a ground
resolution of 40 feet, based on a film resolution of 50–100 lines/mm.
(Advances in film technology by Kodak were some of the most important
technological advances in the Corona program. Kodak developed a special
polyester base to replace the original acetate film.)
Improved camera, lens, and film design led to the KH-4-series cameras
(Fig. 3.2), with Petzval f/3.5 lenses, still at a 24-inch focal length. With film
resolutions of 160 lines/mm, it was possible to resolve ground targets of six feet.
Corona (KH-4) ultimately used film ranging in speeds from ASA 2 to 8—only a
few percent of the sensitivity, or speed, of consumer film. This is the tradeoff for
a high film resolution, and one reason why very large optics were needed.4
The great majority of Corona’s imagery was black and white (panchro-
matic). Infrared film was flown on Mission 1104; color film was flown on
Missions 1105 and 1108. Photo-interpreters did not like the color film,
however, because the resolution was lower.5 Tests showed color as valuable
for mineral exploration and other earth-resources studies, and its advantages
led indirectly to the Landsat satellites.

2. Richelson, AF Magazine, page 72, June 2003.


3. The competing USAF Samos system never quite worked correctly. http://www.lib.cas.cz/www/space.40/1963/028A.HTM.
4. Dwayne Day has written extensively on the Corona and subsequent missions, frequently
publishing in Spaceflight, http://www.thespacereview.com/index.html.
5. Day, Dwayne, et al. Eye in the Sky: The Story of the CORONA Spy Satellites. Washington, D.C.:
Smithsonian Institution Press, 1998, page 82, KH-4B, 12/04/69. Cameras operated satisfactorily
and the mission carried 811 ft of aerial color film added to the end of the film supply.


Figure 3.2 (a) KH-4B (artist’s concept) and (b) KH-4B or -J3 camera (DISIC refers to a dual
improved stellar index camera). Both images reprinted courtesy of the National Reconnais-
sance Office.6

Figure 3.3 A USAF C-119, and later, a C-130 (shown here) modified with poles, lines, and
winches extending from the rear cargo door, was used to capture capsules ejected from the
Discoverer satellites. Reportedly, this step—catching a satellite in midair—was considered
by some to be the least likely part of the whole process.7

Russia maintained film-return reconnaissance technology well into the
21st century. The Kobalt-M series began with Kosmos 2410, a film-return
system, on September 24, 2004. On May 6, 2014, they launched the
(apparently) last mission, perhaps in response to the Crimean incursion.

6. http://www.nro.gov/history/csnr/corona/imagery.html.
7. A recent NASA attempt to replicate this technique failed. The Genesis satellite crashed on
September 8, 2004 as the parachute failed to deploy properly. It made a sizable hole in the Utah
desert. AW&ST, Sept 12, 2004; http://www.nasa.gov/mission_pages/genesis/main/index.html.


Figure 3.4 The KH-4B cameras operated by mechanically scanning to keep the ground in
focus. The paired cameras provided stereo images, which are very helpful when estimating
the heights of cultural and natural features.

Satellite inclinations are not polar, and altitude is low. The 2014 mission was
observed to be in a 176 × 285-km orbit with an inclination of 81.4°. The
satellite lifetime is only a few months at those altitudes.8 There are
indications that Russia is moving to electronic systems.

3.1.3 Illustrations
The first Corona image was taken of the Mys Shmidta airfield. Figure 3.6
shows that the resolution was high enough to discern the runway and an adjacent
parking apron. Eventually, the systems were improved, and higher-resolution
images were acquired. Figure 3.7 shows two relatively high-resolution images
of the Pentagon and the Washington Monument in Washington, D.C.
Declassified imagery is available from the U.S. Geological Survey (USGS).9

8. http://www.russianspaceweb.com/kobalt_m.html; http://www.nasaspaceflight.com/2014/05/
soyuz-2-1a-kobalt-m-reconnaissance-satellite/.
9. http://pubs.usgs.gov/fs/2008/3054/.


Figure 3.5 Corona satellite in the Smithsonian. The recovery vehicle is to the right.

Figure 3.6 Mys Shmidta Airfield, U.S.S.R. This August 18, 1960 photograph is the first
intelligence target imaged from the first Corona mission. It shows a military airfield near Mys
Shmidta on the Chukchi Sea in far-northeastern Russia (Siberia, 68.900°N, 179.367°W, just
across some forbidding water from Barrow, Alaska, at a very similar latitude). North is at the
upper left. Image reprinted courtesy of the NRO.

Of course, the whole point was to track the activities in the Soviet Union.
Figure 3.8 shows the Severodvinsk shipyard, a White Sea port of the U.S.S.R.,
on February 10, 1969. The large rectangular building in the center is the
construction hall, and the square courtyard (drydock) to its left is where vessels


Figure 3.7 Photographs of Washington, D.C.: (a) one always-popular target, the
Pentagon, imaged September 25, 1967. (b) Note the shadow cast by the Washington
Monument in September 1967. Both images reprinted courtesy of the NRO.

Figure 3.8 Severodvinsk Shipyard, February 10, 1969.

(submarines) are launched. The curved trail of disturbed snow and ice reveals
where the subs are floated into the river. The satellite is on a southbound pass
over the port facility.10
The image of Severodvinsk is a chip from a much larger scene shown in
Figs. 3.9 and 3.10. The strips are perpendicular to satellite motion, which in

10. Eye in the Sky, The Story of the Corona Spy Satellites, page 224, D. A. Day, J. M.
Logsdon, and B. Latell, eds. (1998).


Figure 3.9 Coverage map for Corona (derived from USGS).

Figure 3.10 Three consecutive Corona images. The shipyard in Fig. 3.8 is in the bottom
frame, just under the word “Severodvinsk.” Note the ends of the film strips: “when the
satellite’s main camera snapped a picture of the ground, two small cameras took a picture of
the earth’s horizon at the same time on the same piece of film. The horizon cameras helped
interpreters calculate the position of the spacecraft relative to the earth and verify the
geographical area covered in the photo.”11

this frame was toward the southeast. There are horizon images on the edges of
the film strips, showing the rounded earth. These images from the horizon
cameras provided reference/timing information as the system scanned from
horizon to horizon.

11. http://airandspace.si.edu/exhibitions/space-race/online/sec400/sec431.htm.


Concurrent with the Corona series, several other film-return systems were
orbited. The most interesting of these, in some sense, were the Gambit systems,
codenamed KH-7 and KH-8. These were designed for a higher spatial
resolution and produced images over smaller areas. Illustrations from KH-7
appear at the beginning of Chapter 1.

3.2 Atmospheric Absorption, Scattering, and Turbulence


The previous chapter looked at the illumination of objects by sunlight and the
reflectance or scattering of light from those objects. Following the photons
along their paths, this chapter moves in sequence from the atmosphere (in this
section) through optical systems (in the next section), and then on to the
detectors.
There are three limiting factors that the atmosphere introduces into
remote sensing:
• absorption (typically by atomic and molecular processes),
• scattering (primarily due to aerosols like dust, fog, and smoke), and
• turbulence (due to fluctuations in the temperature and density of the
atmosphere).

3.2.1 Atmospheric absorption: wavelength dependence


The atmospheric factor that most limits earth observation is absorption—
particularly absorption by water, carbon dioxide, and ozone, in roughly that
order. Figure 3.11 shows the atmospheric absorption calculated for a standard

Figure 3.11 Atmospheric absorption. The transmission curves were calculated using
MODTRAN 4.0, release 2. The U.S. standard atmosphere (1976) is defined in the NOAA
publication with that title, NOAA-S/T-1562, October 1976, Washington, D.C., Stock # 003-
017-00323-0.


atmosphere at ground level (the horizontal axis is logarithmic). The


calculations were performed with MODTRAN, the standard USAF code
for modeling atmospheric absorption and scattering. The regions shaded dark
blue are atmospheric windows, that is, spectral regions that are largely
transparent to EM radiation. By contrast, regions such as the spectral range
from 5–7 μm are dominated by water absorption. The atmosphere is opaque
in this spectral range. Ozone shows its presence in a small spectral-absorption
band centered at about 10 μm.

3.2.2 Atmospheric scattering


Electromagnetic radiation (photons) is scattered by various particles in the
earth’s atmosphere. This scattering is caused by collisions between the
photons and scattering agents that range from molecules (and atoms),
suspended particulates (aerosols), and clouds (water droplets). Scattering is
somewhat arbitrarily divided into three domains as a function of the
relationship between the wavelength and the size of the scattering agents
(atoms, molecules, and aerosols).
Rayleigh, or molecular, scattering in the optical spectrum is primarily
caused by oxygen and nitrogen molecules, whose effective diameters are much
less than the optical wavelengths of interest. Typical molecular diameters are
1–10 Å. The process of Rayleigh scattering is highly dependent on
wavelength. The probability of scattering interactions is inversely propor-
tional to the fourth power of wavelength (λ⁻⁴), as illustrated in Fig. 3.12. The
bottom-most (dashed) curve in the figure is the Rayleigh scattering term.
The preferential scattering of the blue wavelengths explains why the clear
sky (i.e., with low humidity and few aerosols) appears blue in daylight. The
blue wavelengths reach our eyes from all parts of the sky. This is why
photographers who want to take clear panoramic photographs with black-
and-white film use a yellow or red filter: it allows the less-scattered light to be
captured on film for an overall sharper image. The Rayleigh scattered light is
also polarized, and so color landscape photographers will use a linear
polarization filter to darken the sky, as illustrated in Chapter 2 (Fig. 2.3).
Scattering is a strong motivating factor when choosing the wavelength
response for satellite imagers, which typically exclude blue wavelengths.
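
As a rough numerical illustration of the λ⁻⁴ dependence, the short Python sketch below compares blue and red wavelengths (the 0.45-μm and 0.65-μm values are illustrative choices, not from the text):

# Rayleigh scattering strength scales as 1/lambda^4.
blue_um = 0.45   # illustrative blue wavelength (micrometers)
red_um = 0.65    # illustrative red wavelength (micrometers)

ratio = (red_um / blue_um) ** 4   # blue scattering relative to red
print(f"Blue is scattered ~{ratio:.1f}x more strongly than red.")
# ~4.4x: the reason a clear sky looks blue, and one reason many satellite
# imagers de-emphasize or exclude the blue band.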
As the relative size of the scattering agent increases, the scattering
processes evolve toward Mie scattering. The dependence of the scattering
probability on wavelength decreases, and the scattering directionality evolves
as well. For Rayleigh scattering, the probability of scattering is roughly equal
in all directions; as the particle size increases, the probability of forward
scattering increases. Mie scattering produces the almost-white glare around
the sun when a lot of particulate material is present in the air. It also gives the
white light from mist and fog. Important Mie scattering agents include water
vapor and tiny particles of smoke, dust, and volcanic ejecta—all particles


Figure 3.12 Atmospheric scattering diagram.12

comparable in size to the visible and infrared wavelengths used in remote
sensing. Mie scattering is important in the determination of the performance
of IR systems, particularly in littoral (coastal) environments. Depending upon
the size distribution, shape, and concentration of scattering particles, the
wavelength dependence varies between λ⁻⁴ and λ⁰.
Scattering agents that are even larger (more than 10× the photon
wavelength) cause particles to scatter independently of wavelength (non-
selective in Fig. 3.12). This happens with the water droplets and ice crystals of
which clouds and fog are composed, showing itself in the gray hues of fog.
Such scattering causes the sunlit surfaces of clouds to appear a brilliant white.
Large smog particles, if not possessing special absorption properties, turn the
color of the sky from blue to grayish white.
The combined effects of scattering and absorption are illustrated in
Fig. 3.13; the graph is designed to indicate the impact of atmospheric
absorption on light transmitted from ground to space, as would be observed
by a high-altitude satellite. The spectral range is extended beyond that in the
previous illustration. The absorption due to ozone becomes very significant
below 0.35 μm, and the atmosphere is opaque to sunlight below 0.3 μm, due
to the ozone layer at altitudes of 20–40 km. Overall, the atmosphere is more
transparent in the long-wave infrared (11–12 μm) than in the visible spectrum
(0.4–0.7 μm). Although this latter region is described as a window, in the end
it is only 50–60% transparent.

12. P. Slater, Manual of Remote Sensing, Vol. 2, 2nd edition, F. M. Henderson and A. J. Lewis,
Eds., p. 210, Wiley, New York (1983).


Figure 3.13 Atmospheric absorption and scattering. The transmission curve is calculated
using MODTRAN 4.0, release 2. The conditions are typical of mid-latitudes with a 1976 U.S.
standard atmosphere assumed. The overall broad shape is due to scattering by molecular
species and aerosols.

3.2.3 Atmospheric turbulence


The third limiting factor in remote sensing through the atmosphere—
atmospheric turbulence—is the answer to the question, “why do the stars
twinkle?” Figures 3.14 and 3.15 illustrate.
Light propagating through the atmosphere will encounter small perturba-
tions of density and temperature, due to atmospheric turbulence. Small
irregularities in density produce variations in the index of refraction, which,
in turn, cause small fluctuations in the direction in which the light propagates
(Snell’s law), on the order of one part in a million. These irregularities in
the atmospheric boundary layer (the bottom of the atmosphere) have
characteristic scale sizes of tens of meters and fluctuate on timescales of
milliseconds to seconds. The impact of atmospheric turbulence is much
greater for telescopes looking up through the atmosphere than for sensors
looking down at earth.
Figure 3.15 shows how atmospheric turbulence affects stellar observa-
tions. Adaptive-optics technology (not developed here) uses optical
technology to compensate for the flickering direction of the incoming light.
The figures show the “first light” image for the adaptive-optics system on the
3.5-m telescope at the Starfire Optical Range, taken in September 1997. The
figure on the left is the uncompensated image; the compensated image is on
the right. It had not previously been known that the target was in fact a pair
of stars.


Figure 3.14 The apparent position of a star will fluctuate as the rays pass through time-
varying light paths.

Figure 3.15 This astronomical I band (850 nm) compensated image of the binary star
Kappa-Pegasus (k-peg) was generated using the 756-active-actuator adaptive-optics
system. The two stars are separated by 1.4544 μradians. The images are 128 × 128 pixels;
each pixel subtends 120 nano-radians. The FWHM of the uncompensated spot is about
7.5 μradians—about 5 times the separation of the two stars.13 Note on nomenclature:
astronomy IR bands are H (1.65 μm), I (0.834 μm), J (1.25 μm), and K (2.2 μm).

13. Original source: http://www.de.afrl.af.mil/SOR/binary.htm, no longer available. See also,


The Adaptive Optics Revolution: A History, by R. W. Duffner, page 272 (2009).


Figure 3.16 The thin-lens law.

In summary, three environmental factors constrain the resolution possible
with an imaging system: absorption, scattering, and turbulence.

3.3 Basic Geometrical Optics


The sequence of physical processes considered here has followed the
photons in sequence from the source (generally, the sun for visible imaging)
and through the atmosphere. The energy now needs to be detected by a
sensor. This requires an optical system and detectors. The simplest aspects
of optical systems are considered here, especially the factors that define
resolution.

3.3.1 Focal length/geometry


The most fundamental equation in optics is the thin-lens equation:

1/f = 1/i + 1/o.     (3.1)

Here, f is the focal length, an intrinsic characteristic of the lens determined
by the radius of curvature and the index of refraction of the lens material
(or materials). The distances from the center of the lens to the object (o) and to
the image (i) are the other two parameters. The focal length defines the image
distance when o, the object distance, is infinity (∞). In the illustration here, the
object distance is twice the focal length, so the image distance is also twice the
focal length (i = o = 2f).
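
A minimal Python sketch of Eq. (3.1), solving for the image distance i given f and o (the numbers below simply reproduce the 2f case above and the remote-sensing limit of a very distant object):

def image_distance(f_m, o_m):
    """Solve the thin-lens equation 1/f = 1/i + 1/o for the image distance i."""
    return 1.0 / (1.0 / f_m - 1.0 / o_m)

# Object at twice the focal length: the image also forms at 2f.
print(image_distance(0.10, 0.20))    # 0.20 m

# Object effectively at infinity (150 km): the image forms essentially at f.
print(image_distance(0.10, 150e3))   # ~0.100 m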


Figure 3.17 Magnification: similar triangles. The object distance will be the altitude or
range; the image distance is typically the focal length.

3.3.2 Optical diagram: similar triangles and magnification


The size of an image can be obtained by using the simple geometry law for
similar triangles—the magnification is equal to the ratio of the image distance
to the object distance. In the previous example, the two are equal, so the
image has the same size as the object. Figure 3.17 shows how the geometry
changes as the object distance increases. Normally, in remote sensing, the
object distance is a large number, whereas the image distance is roughly the
focal length. For example, the Hasselblad camera operated by early
astronauts was typically used with a 250-mm lens at an altitude of 150 km;
the ratio of the image size to the object is (250 × 10⁻³) / (150 × 10³) = 1.6 ×
10⁻⁶ (quite a small number). The Monterey Peninsula (extending some 20 km
from north to south) would be imaged on a piece of film 32 mm in length
(about half the width of the 70-mm film rolls). The Corona (KH-4) missions
introduced at the beginning of this chapter had 61-cm focal-length optics.

Example
For example, consider a photographer using a 1000-mm lens on a modern
digital-single-lens-reflex (DSLR) camera at a football stadium. If he images a
player (2 m tall) from across the field (object distance 40 m), how large is the
image on the camera detector, or focal plane?
image size / focal length = object size / range ⇒ image size = (object size / range) · focal length;

image size = (2 m / 40 m) · 1000 mm = 5 cm.
This is larger than the size of the detector on a modern DSLR—the image
would not include the entire player.
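
The same similar-triangles calculation can be scripted; the values below are the stadium example just worked and the Hasselblad/Monterey Peninsula case from the text:

def image_size(object_size_m, range_m, focal_length_m):
    """Image size on the focal plane: (object size / range) * focal length."""
    return object_size_m / range_m * focal_length_m

# 2-m player at 40 m with a 1000-mm lens -> 0.05 m (5 cm) on the focal plane.
print(image_size(2.0, 40.0, 1.0))

# 20-km peninsula from 150 km with a 250-mm lens:
print(image_size(20e3, 150e3, 0.250))   # ~0.033 m, the roughly 32-mm film image quoted above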


3.3.3 Aperture (f/stop)


The light-gathering ability of a lens is defined by its diameter. More-sensitive
optical systems have larger (faster) optics. The effectiveness of a given
aperture depends on the focal length (the magnification). This dependence is
defined by the concept of the f-number (f/#), or f/stop, the ratio of the focal
length to the lens (or mirror) diameter:

f/# = focal length / diameter of the primary optic.     (3.2)

Typical lenses found on amateur cameras will vary from f/2.8 to f/4. High-
quality standard lenses will be f/1.2 to f/1.4 for a modern DSLR. The longer
the focal length (the higher the magnification) is, the larger the lens must be to
maintain a similar light-gathering power. The longer the focal length is, the
harder it is to create a fast optic. A telephoto lens for a sports photographer
might be 500 mm and might at best be f/8 (question: what is the diameter
implied by that aperture?). Two different quantities are being referred to as
“f ” here, following optical convention. One is the focal length, and the other
is the aperture.
As mentioned at the beginning of this chapter, the KH-4B cameras had a
focal length of 24 inches and were 5–10 inches in diameter (see the appendix
on Corona cameras). Apertures of f/4 to f/5.6 are typical of the early
systems. In contrast, the Hubble Space Telescope is characterized by
aperture values of f/24 and f/48, depending on the optics following the large
primary mirror.14
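
A one-line rearrangement of Eq. (3.2) answers the parenthetical question above; the sketch below also checks the Corona numbers (a 24-inch focal length at f/5, per the problem set):

def aperture_diameter(focal_length_mm, f_number):
    """Primary-optic diameter implied by Eq. (3.2): diameter = focal length / f-number."""
    return focal_length_mm / f_number

print(aperture_diameter(500, 8))   # 62.5 mm for the 500-mm f/8 telephoto
print(aperture_diameter(610, 5))   # ~122 mm (about 5 inches) for a 24-inch, f/5 Corona lens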

3.3.4 Image formation by lens or pinhole


Once the image has formed on the focal plane (for example, on film), an image
can be recorded. This is illustrated by a pair of images taken with a
35-mm camera in Fig. 3.18. The left image was taken with a customary 50-mm
lens.15 The right image was taken with the same camera but without the lens;
instead, a piece of aluminum foil was pierced with a pinhole and fastened as a
lens cap at the front of the camera, approximately 50 mm from the film plane.
The pinhole’s diameter was approximately 1% that of the normal lens.

14. Supplemental topic: optical systems made with lenses need to pay attention to the
transparency of the material in the spectral range of interest. Glass is transparent from
400 nm into the infrared. UV sensors and MWIR/LWIR sensors need to be constructed
from special materials making them MUCH more expensive.
15. In photography, “normal” means that if you print the image on an 8 × 10 piece of paper
and hold it at arm’s length, it will appear as the scene did in real life. Modern “point and
shoot” cameras will typically have shorter focal lengths but still frequently refer to “35 mm
equivalent” as a way to standardize nomenclature.


Figure 3.18 Two images of a shoreline. The pinhole is effectively a 50-mm lens stopped
down to an aperture of 0.57 mm with f/100.

The pinhole image obtained is fuzzy; the smaller the pinhole, the sharper
the image will be. The problem with a small aperture is, of course, a relatively
long exposure time. A limit is eventually reached as diffraction effects emerge,
as described in the next section.

3.4 Diffraction Limits: The Rayleigh Criterion


In addition to geometrical issues in optics, we must consider the impact of
physical principles that depend upon the wave character of light. One
manifestation of this wave character is diffraction—the fact that light can
diffuse around sharp boundaries. Diffraction applies equally to sound and
ocean waves, which have scales of distance with which we are more
familiar.
Diffraction leads to a fundamental, defining formula for the limitation in
angular (spatial) resolution for any remote sensing system: the Rayleigh
criterion, which is traditionally presented in the one-dimensional (1D) case,
where an infinite single slit is considered (Fig. 3.19).
The somewhat out-of-scale figure shows light rays incident on a slit. For
purely geometrical optics, there would be a white spot on the surface below,
corresponding to the size of the slit. Instead, there is a central maximum with
a width determined by the wavelength of the light, the width of the slit, and
the distance from the slit to the surface below, or range. The intensity is
written as

I ∝ [sin(Φ)/Φ]², where Φ = 2π ax/(Rλ),     (3.3)

where a is the width of the slit, x is the distance along the target surface away from
the center line, R is the range, and λ is the wavelength.


Figure 3.19 Single-slit diffraction pattern.

The function in brackets is the familiar sinc function,16 which has a
magnitude of 1 at Φ = 0 and then drops to 0 at Φ = π. This occurs when

2π ax/(Rλ) = π, or x/R = λ/(2a).     (3.4)

The width of the central maximum is just twice this value, and the result is
well known: the angular width of the first bright region is Δθ = 2(x/R) = λ/a.
The secondary maxima outside this region can be important, particularly in
the radar and communications fields—these are the sidelobes in the antenna
pattern of a rectangular antenna.
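
A short sketch of the single-slit pattern of Eq. (3.3); the slit width, wavelength, and range below are illustrative values chosen only to show the shape of the pattern and the first zero predicted by Eq. (3.4):

import numpy as np

a, lam, R = 1e-3, 0.5e-6, 1.0            # 1-mm slit, 0.5-um light, 1-m range (illustrative)

x = np.linspace(-2e-3, 2e-3, 2001)       # position across the pattern (m)
phi = 2 * np.pi * a * x / (R * lam)
intensity = np.sinc(phi / np.pi) ** 2    # np.sinc(u) = sin(pi*u)/(pi*u)

print(intensity.max())                   # 1.0 at the central maximum
# First zero from Eq. (3.4): x/R = lambda/(2a) -> 0.25 mm for these numbers.
print(R * lam / (2 * a))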
How do these factors relate to optical systems? Diffraction implies
that for a finite aperture there is a fundamental limit to the angular resolution,
which is defined by the ratio of the wavelength to the aperture width or
diameter. This is the fundamental issue that determines the size of an optical
system. The diffraction formula applies nicely to rectangular apertures (as will
be seen in Chapter 9) and leads to the Rayleigh criterion:

Δθ = λ/D,     (3.5)

where Δθ is the angular resolution, and D is the aperture width. This formula
must be modified for circular apertures. The result is effectively obtained by

16. Generally encountered in an introductory calculus class as a good illustration for concepts
of limits and L’Hospital’s Rule. The numerator and denominator go to zero for x ¼ 0, but
the ratio is well defined.


Figure 3.20 Single-slit diagram for the geometry of the diffraction pattern.

taking the Fourier transform of the aperture shape, which for a circular
aperture results in a formula involving Bessel functions, as normally
developed in a course in differential equations:

I ∝ [J₁(w)/w]²,     (3.6)

where w = (2πar)/(Rλ), J₁ is the “J” Bessel function of order 1, a is the lens
radius, r is the distance from the center line, R is the distance from the lens to
the screen, and λ is the wavelength. This function is illustrated in Fig. 3.21.
The first zero occurs where w = 3.832, which leads to a relatively famous
result: the radius of the “Airy disk” = 0.61 λ × distance / lens radius, or the
angular resolution of a lens is

Δθ = 0.61 · λ/a, or Δθ = 1.22 · λ/diameter.     (3.7)
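
The 3.832 value and the resulting 1.22 factor can be checked numerically; a small sketch using SciPy's Bessel-function zeros:

import math
from scipy.special import jn_zeros

w1 = jn_zeros(1, 1)[0]       # first zero of the order-1 Bessel function J1
print(w1)                    # ~3.8317

# Setting w = 2*pi*a*r/(R*lambda) = w1 gives r/R = (w1/(2*pi)) * lambda/a,
# i.e., ~0.61*lambda/radius, or 1.22*lambda/diameter.
print(w1 / (2 * math.pi))    # ~0.61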

Figure 3.21 Airy pattern for diffraction from a circular aperture. The background pattern is
shown in negative to improve visual contrast.


Figure 3.22 (a) Two objects (stars) separated by a distance x just corresponding to the
Rayleigh criteria for a cylindrical optic. (b) Two point targets at a range of 400 km. The
objects are separated by 10 m.

So what are the implications? As the angular resolution improves, two
adjacent objects become separable, as illustrated in Fig. 3.22. The concept is
generally best understood in astronomical applications, where Δθ is the
angular separation between two stars. In the figure, the two stars can barely
be distinguished when their angular separation is Δθ, as defined in Eq. (3.7).
The concept extends directly into terrestrial applications, however, and
defines the ground separation distance (GSD), which is the product of Δθ
and the range from the sensor to the target.
Figure 3.22(b) shows what happens as the diameter of the optic varies
through the Rayleigh limit. The image simulates two point targets at a range of
400 km (e.g., a relatively-low-altitude satellite). The objects are separated by
10 m, so the angular separation is 25 μradians. The image that would be
obtained for different optic (lens) diameters is shown for optics of 12.2,
24.4, 36.6, and 48.8 mm. The wavelength is 0.5 μm, so the Rayleigh limit
is obtained critically for an optic with a diameter of 24.4 mm, e.g.,

1.22 · (0.5 × 10⁻⁶ m / 0.0244 m) · 400 × 10³ m = 10.0 m.

This is the case illustrated in Fig. 3.22(a), and for the second image from the
bottom of Fig. 3.22(b). The two targets illustrated at the top are separated by


twice the Rayleigh criteria, which is used in some engineering texts as a design
criterion.
As an illustrative example, consider a system like the Hubble Space
Telescope orbiting at an altitude of 200 nautical miles, or 370 km, and assume
it is nadir viewing, so that the range is just the altitude. The Rayleigh criteria
can be used to estimate the best possible ground resolution such a sensor could
produce:

mirror diameter = 96 inches = 2.43 meters;
wavelength = 5000 Å = 5 × 10⁻⁷ m;
GSD = Δx = 1.22 (λ/a) R = 1.22 · (5 × 10⁻⁷ / 2.43) · 370 × 10³
    = 9.3 × 10⁻² m = 9.3 cm or 3.7 inches.
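
The same estimate in a short script, written so the aperture, wavelength, and range can be varied (useful for several of the end-of-chapter problems):

def rayleigh_gsd(wavelength_m, aperture_m, range_m):
    """Diffraction-limited ground resolution: 1.22 * (lambda / D) * range."""
    return 1.22 * wavelength_m / aperture_m * range_m

# 2.43-m mirror, 500-nm light, 370-km range -> ~0.093 m (9.3 cm).
print(rayleigh_gsd(5e-7, 2.43, 370e3))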

3.5 Detectors
Following optics, the next primary element in any sensor system is the
detector. Modern detector systems make use of solid state technology, as
discussed here.

3.5.1 Solid state


Generally speaking, the detectors involved in remote sensing utilize arrays of
solid-state detectors, much like the charge-coupled detectors (CCDs) in
modern video cameras and digital still cameras. This broad statement about
the class of sensors [also termed focal plane arrays (FPAs)] does not properly
account for the many sensors flying linear arrays (1D focal planes), as on
SPOT, or even the ongoing use of single-detector sensors such as on GOES
and Landsat. Still, the underlying physics is similar.
There are a number of possible approaches to electronically detect optical
radiation. The focus here is on “intrinsic” (bandgap) detectors, which is the
design used for most silicon (CCD) focal planes.17 Photo-emissive technology
was introduced in Chapter 2 in the discussion of photomultiplier tubes and
image intensifiers.
As with Bohr atoms, solid materials (in particular, semiconductors) have a
distribution of energy levels that may be occupied by electrons. Generally,
electrons reside in a state corresponding to the ground state of an individual
atom, termed the valence band. There is then a gap in energy (a “band gap”)
that represents a range of energies that are forbidden for the electron to
occupy, as shown in Fig. 3.23.
If a photon hits the semiconductor, however, it can give energy to an electron
in the valence band, exciting it up into the conduction band. This behavior is

17. See C. McCreight, “Infrared Detectors for Astrophysics,” Physics Today (Feb. 2005).


Figure 3.23 Energy band-gap illustration.

Table 3.1 Band-gap energy of common materials.


Material                              Band-Gap Energy (eV) at 300 K
Silicon (Si)                          1.12
Germanium (Ge)                        0.66
Gallium arsenide (GaAs)               1.424
Indium antimonide (InSb)              0.18
Platinum silicide (PtSi)              0.22
Lead sulfide (PbS)                    0.35–0.40
Mercury cadmium telluride (HgCdTe)    0.1–0.3

significant because it is now possible for the electron to move as though in a
conductor, and it can be collected and measured. This is the essence of the
detection mechanism for most solid-state detectors. Their efficiency varies, but
typically 40–80% of incident photons with sufficient energy can be detected. For
silicon, the maximum efficiency occurs in the near-infrared, from 0.9–1.0 μm.
This simple description makes it possible to understand some of the more
important constraints in the use of solid state detectors (SSDs). First, one
must match the energy band gap to the energy of the photons to be observed.
The photon energy must at least equal the size of the band gap, which varies
with material (see Table 3.1).18,19
What limits do these values place on the utility of different materials for use
as detectors? Taking silicon as the most common example, recall from Chapter 2
the equation that relates wavelength to the energy in a transition:

λ = hc/ΔE.

Here, ΔE is the band-gap energy (in eV), and

λ = hc/ΔE = 1.24 × 10⁻⁶ (eV m) / 1.12 eV = 1.1 × 10⁻⁶ m, or 1.1 μm.

18. G. Rieke, Detection of Light from the Ultraviolet to the Sub-millimeter, Cambridge
University Press (2002).
19. S. M. Sze, Physics of Semiconductor Devices, Wiley, New York (1981), and Kittel,
Introduction to Solid State Physics, Wiley, New York (1971).


Thus, the visible and the first part of the near-infrared spectrum can be
detected with silicon detectors; this fact is reflected in their common use in
modern cameras. The energy band gap generally depends on the temperature.
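
A small sketch applying λ = hc/ΔE to the room-temperature band gaps of Table 3.1; where the table quotes a range (PbS, HgCdTe), a single illustrative value is used:

HC_EV_M = 1.24e-6   # hc in eV*m

band_gaps_ev = {
    "Si": 1.12, "Ge": 0.66, "InSb": 0.18,
    "PtSi": 0.22, "PbS": 0.40, "HgCdTe": 0.10,
}

for material, e_gap in band_gaps_ev.items():
    cutoff_um = HC_EV_M / e_gap * 1e6   # longest detectable wavelength, micrometers
    print(f"{material:7s} cutoff ~ {cutoff_um:.1f} um")
# Si cuts off near 1.1 um; the small-gap materials reach the mid- and
# long-wave infrared, which is why they must be cooled.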
The detection of longer-wavelength photons requires materials such as
HgCdTe or InSb. The relatively small band gaps cause a problem, however.
At room temperature, the electrons tend to rattle around, and every now and
then one will cross the gap due to thermal excitation. This process is largely
controlled by the exponential term that comes from the “Maxwell–
Boltzmann” distribution (or bell-shaped curve), which describes the velocities,
or energies, to be found in any collection of objects (whether electrons, atoms,
or molecules) in a thermal equilibrium:
N₂/N₁ = e^(−(E₂−E₁)/kT) ⇒ number ∝ e^(−band-gap energy / thermal energy (kT)).     (3.8)

If these electrons are collected, this factor becomes part of a background noise
known as the dark current. To prevent this, the materials must be cooled—
typically to 50–70 K, which requires liquid nitrogen at least [and for some
applications, liquid helium (4 K)]. Mechanical refrigerators can also be used, but
they are problematic in space applications because they generally have relatively
short lifetimes and can introduce vibration into the focal plane, which is
undesirable. Some recent NASA missions have used a long-lived pulse-tube
technology, developed by TRW, with apparent success.
The importance of cooling is illustrated here with a calculation. Use
HgCdTe, assume a band gap of 0.1 eV, and compare the nominal number of
electrons above the band gap at room temperature (300 K) and at 4 K. The
conversion factor k in the term kT is

k = 1.38 × 10⁻²³ joules/kelvin ÷ 1.6 × 10⁻¹⁹ joules/eV = 8.62 × 10⁻⁵ eV/kelvin;
T = 300 K, kT = 0.026 eV; T = 4 K, kT = 0.00035 eV;

number ∝ e^(−band-gap energy / thermal energy (kT))  [Eq. (3.8)]
       = e^(−0.1/0.026) = 0.02 @ 300 K
       = e^(−0.1/0.00035) = e^(−286) ≈ 0 @ 4 K.

At room temperature, the exponential is small but reflects a non-negligible
number of excitations of electrons above the band-gap energy. At the
temperature of liquid helium, the electrons sit quietly below the band gap.
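
The same comparison scripted, with liquid-nitrogen temperature added as a third illustrative case:

import math

K_EV_PER_KELVIN = 8.62e-5

def excitation_factor(band_gap_ev, temperature_k):
    """Boltzmann factor exp(-Eg/kT): relative population thermally excited across the gap."""
    return math.exp(-band_gap_ev / (K_EV_PER_KELVIN * temperature_k))

for t_k in (300.0, 77.0, 4.0):   # room temperature, liquid nitrogen, liquid helium
    print(f"{t_k:5.0f} K: {excitation_factor(0.1, t_k):.2g}")
# ~0.02 at 300 K, ~3e-7 at 77 K, and effectively zero at 4 K:
# cooling suppresses the dark current in small-band-gap detectors.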
The band-gap values in Table 3.1 are for nominal room temperature.
Because the energy gap depends on temperature, it is affected by the cooling
of the detector. As an example, the band gap for indium antimonide (InSb)
increases as the temperature is reduced. Figure 3.24 illustrates this behavior;
the band gap changes from 0.17 eV at room temperature to 0.235 eV at 20 K.


Figure 3.24 Band gap for indium antimonide as a function of temperature.20

Visible-imaging cameras uniformly use silicon as the sensitive element.


Typical commercial IR imaging systems use InSb, PtSi, and HgCdTe (e.g., the
FLIR Systems family of cameras). In addition, there are two popular new
technologies: quantum-well (QWIP) and microbolometer detectors. Micro-
bolometer sensors offer a variety of advantages over the more-traditional
cooled semiconductor systems, as discussed in the following section.

3.5.2 Focal plane arrays


The photosensitive component of a detector can exist as a single element, and
there are a number of operational systems that nominally have a single
detector. A notable current example would be the GOES weather satellite,
described in Chapter 8. Generally, however, newer systems feature either

20. Reproduced with permission from C. L. Littler and D. G. Seiler, “Temperature dependence
of the energy gap of InSb using nonlinear optical techniques,” Appl. Phys. Lett. 46(10)
(1985), Copyright 1985, AIP Publishing LLC.


Figure 3.25 Rough guide to the spectral ranges of use for different focal-plane materials.
HgCdTe is abbreviated as MCT (MerCadTelluride). The chart shows the wavelength and
temperature ranges that may be used for a variety of materials. Longer wavelengths fairly
uniformly require lower temperatures. Image courtesy the Rockwell International Electro-
optical Center.

rectangular arrays (as in digital cameras) or linear arrays of pixels, as will be
seen for IKONOS, with a 1D array of 13,500 pixels.
A CCD is an array of sensitive elements. It is an integrated circuit (IC)
with the unique property that a charge held in one cell of the IC can be shifted
to an adjacent cell by applying a suitable shift pulse. Information defined by
the amount of charge can be shifted from cell to cell with virtually no loss.
This characteristic allows the devices to be used as a memory device. When it
was further discovered that the construction could be altered so that
individual cells also responded to incident light while retaining the ability to
shift charges, the concept of a solid state imager emerged.
In a CCD, each picture element, or pixel, converts incoming light into an
amount of charge directly proportional to the amount of light received. This
charge is then clocked (shifted) from cell to cell, and then finally converted at
the output of the CCD to a video signal that represents the original image.
The detector resolution is defined by the pitch, or spacing, between
individual pixel elements on the focal plane, which is typically very close
to the size of the pixel. Modern silicon sensors generally have a pitch on


the order of 5–10 μm. More exotic materials generally have somewhat
larger values for the detector pitch. A fairly typical linear CCD array is
illustrated in Chapter 6, where a Kodak 3  8000 linear array is shown in
Fig. 6.4.
A relatively new technology has emerged over the last decade, the
quantum well infrared photodetector (QWIP). QWIP focal planes have
recently been used on the Landsat Data Continuity Mission (aka Landsat 8).
The technology lends itself to larger, more-uniform arrays than the more
exotic InSb and HgCdTe materials. Coolers are still required.

3.5.3 Uncooled focal planes: microbolometers


One problem with the technology behind semiconductor detectors is the need
for cooling. Either a coolant (e.g., liquid nitrogen) or a mechanical
refrigerator is required. The former is awkward in the field and limited for
space use because a limited amount of coolant can be brought along.
Mechanical devices are generally troublesome and tend to fail in space
applications. Consequently, there is significant motivation to develop
alternatives. Energy detectors, as opposed to the photon detectors described
earlier, are emerging as an important alternative for remote-sensing systems.
Microbolometer techniques (Fig. 3.26) approach detection by sensing the
change in temperature of a sensitive element, typically by measuring its
resistance. This allows the direct detection of “heat,” as opposed to counting
photons directly. Consequently, such detectors do not need to be cooled.
Commercial IR cameras are now using these detectors, and they are very
popular for applications such as firefighting. These cameras are not as
sensitive as photosensitive technologies and may not offer the spatial
resolution of traditional approaches because the pixel pitch has generally
been fairly large, but detector pitch is now 12–17 μm with current detectors.

3.6 Imaging System Types, Telemetry, and Bandwidth


Remote-sensing systems can be divided into a handful of basic types,
depending on the form of imaging technology used. These distinctions affect
the resolution and sensitivity of the system, and to a certain extent, the quality
of the data. Closely related issues that appear at this point are how such data
are stored and telemetered.

3.6.1 Imaging system types


3.6.1.1 Framing systems (Corona)
Framing systems are those that snap an image, much like a common film
camera or a digital camera (still or video). An image is formed on a focal
plane and stored via chemical or electronic means (film or CCD,


Figure 3.26 A thermally isolated resistor (200 μm × 200 μm), used in a microscopic
Wheatstone bridge. The current enters from the top left and exits through the bottom
right. As the sensor heats, changes in its resistance can be measured with great
sensitivity.

respectively). This technique was used on the early Corona satellites, where
the film is moved in concert with the satellite motion for longer exposure
and better focus. Figure 3.27(a) illustrates such a system. The wide-field
planetary camera (WFPC) on the Hubble is an example of this approach,
as well as the early UoSat cameras (University of Surrey, Surrey Satellite
Technology Limited). Aerial photography systems use this approach,
notably the widely used Vexcel UltraCam, with current systems offering
260-megapixel panchromatic images. In November 2013, Skybox Imaging
(now called Terra Bella) launched Skysat-1, a high-spatial-resolution
system with a framing focal plane. This is the first “1-m” system with such
a focal plane.21
3.6.1.2 Cross-track (Landsat MSS, TM; AVIRIS)
Sensors such as those on the GOES weather satellite and Landsat system consist
of a small number of detectors—from 1 to 32 or so in the systems described later
in this book. The sensor is swept from side to side, typically via an oscillating
mirror, while the system flies along a track. The image is constructed by the

21. The UltraCam Eagle uses four separate camera “cones” to obtain the 260-megapixel
panchromatic images, with four additional cones for the four-color (multispectral) frames.


Figure 3.27 (a) Framing system,22 (b) cross-track scanner (whiskbroom), and (c) along-
track scanner (pushbroom).

22. Avery and Berlin, 1992.


combined motion of the optic and the platform (aircraft or satellite), as shown in
Fig. 3.27(b). Such sensors are called “whiskbroom” sensors.

3.6.1.3 Along-track (IKONOS, Quickbird, Worldview)


Linear detector arrays are used in systems such as the Worldview sensors,
the IKONOS camera, and the Quickbird sensor. The cross-track dimension
is covered by the linear array, whereas the along-track direction is covered
by the motion of the satellite. This type of sensor is termed “pushbroom;”
Fig. 3.27(c) illustrates its use. Notice how the pixels in the detector map to a
line on the ground. Geometrical optics defines the map from the detector
pixel to the spot on the ground.

3.7 Telemetry Strategies


Once acquired, there are three largely distinct approaches to the downlink
process: real time (direct downlink), relay, or store and dump. The technology
has shifted a fair amount in the last decade with more onboard (solid state)
storage and an increase in the number of ground stations, and combinations
of these approaches are common.

3.7.1 Direct downlink


This approach is used for spacecraft with little or no onboard storage, which
constrains the systems to operations when the ground station is in the satellite’s
field of view. A notable example of this was OrbView-2, with NASA’s sea-
viewing wide-field-of-view sensor (SeaWiFS), which recently stopped opera-
tions after over a decade of service. Although the spacecraft had some onboard
storage capability, large data elements were transmitted via 128 high-resolution-
picture-transmission local-area-coverage (HRPT-LAC) ground stations.23

3.7.2 Relay
Real-time systems (limited storage) can also operate through a relay. The
Hubble system is a good example, although it also uses onboard storage.
The Tracking and Data Relay Satellite System (TDRSS), described in the
appendix, gives a description of the NASA system.

3.7.3 Store and dump


Most satellite systems have onboard storage, as illustrated with the
commercial IKONOS and Quickbird systems below (64 Gb, and 128 Gb of
solid state storage, respectively). Earlier generations of satellites used tape

23. https://directory.eoportal.org/web/eoportal/satellite-missions/o/orbview-2; http://www.


faomedsudmed.org/pdf/publications/TD2/TD2_PERNICE.pdf.


recorders, mechanical systems that were always a subject of concern for
failure. This approach works well for earth resources systems in polar orbit.
A small industry in high-latitude ground stations has evolved to handle these
satellites, as they can “see” polar-orbiting satellites for 5–10 minutes on
almost every orbit. The Kongsberg Satellite Services has ground stations, for
example, in Tromso, Norway, at 69° 39′ N, and on Spitsbergen (Svalbard
satellite station), at 78° 15′ N, with fiber optic links to Oslo, and from there to
the world (illustrated in Chapter 5). A recent addition: an Antarctic station
(TrollSat at 72°S 2°E).

3.8 Bandwidth and Data Rates


The telemetry systems described in the previous section typically use X-band
or K-band downlinks, which can carry several hundred megabits of data per
second. Examples in the next chapter (IKONOS and Quickbird) typically
operate at 300 Mbps. This data rate, and the period of time the spacecraft is in
view (typically a few minutes), defines the amount of data that can be
downlinked.

Example
IKONOS was designed to capture a 10 km × 10 km scene at a 1-m spatial
resolution in 4 s. The dynamic range is 12 bits/pixel. The data acquisition
rate then becomes

data rate = (10⁸ pixels / 4 seconds) × 12 bits/pixel = 3 × 10⁸ bits/second.

For a telemetry system capable of 300 Mbps, it would take 1 s to send the
image to the ground, not counting possible effects of compression. The
satellite can then nominally downlink 100–200 images in one orbit pass over a
ground station. Most satellite systems use image-compression techniques; a
robust “lossless” Kodak (now Harris/Exelis) algorithm typically results in
about a compression factor of 4.
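
A sketch that generalizes the arithmetic above (scene size, GSD, bit depth, and link rate as parameters), showing the per-scene data volume and the link time with and without the factor-of-4 compression mentioned in the text:

def scene_bits(scene_km, gsd_m, bits_per_pixel):
    """Raw data volume of one square scene, in bits."""
    pixels = (scene_km * 1000.0 / gsd_m) ** 2
    return pixels * bits_per_pixel

bits = scene_bits(10, 1.0, 12)       # 1.2e9 bits for a 10 km x 10 km, 1-m, 12-bit scene
print(bits / 4.0)                    # acquisition rate over the 4-s collect: 3e8 bits/s

link_bps = 300e6                     # 300-Mbps downlink
print(bits / link_bps)               # ~4 s of link time per scene, uncompressed
print(bits / 4.0 / link_bps)         # ~1 s with a lossless compression factor of ~4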

3.9 Problems
1. When was the first Corona launch?
2. When was the first successful launch and recovery of a Corona capsule?
Which number was it?
3. How many launches did it take before a film was successfully returned?

24. Kongsberg Satellite Services, Global Ground Station Network; http://www.ksat.no/.


4. How did the date of this launch relate to that of the U-2 incident with
Gary Powers?
5. What was the best resolution (GSD) of the KH-4 cameras discussed here?
6. What was the swath width associated with the best-resolution KH-4
images?
7. How many Corona missions were there?
8. For a 24-inch focal length, f/3.5 lens, calculate the Rayleigh limit to the
GSD for a satellite at a 115-km altitude. Assume nadir viewing and visible
light (500 nm).
9. What diameter mirror would be needed to achieve 12-cm resolution
(GSD) at geosynchronous orbit? (Geosynchronous orbit has a radius of
6.6 earth radii (Re); this is not the altitude).
10. What are the three factors that constrain the resolution obtainable with an
imaging system?
11. Adaptive optics: compare the Rayleigh criteria for the 3.5-m Starfire
observations in Fig. 3.15 to results with and without the adaptive-optics
system.
12. What is the energy band gap for lead sulfide? What is the cutoff
wavelength for that value?
13. The Corona lenses were f/5.0 Tessar designs with a focal length of
24 inches. Calculate the diameter of these lenses.
14. For a 24-inch-focal-length camera, f/3.5, at an altitude of 115 km, calculate
the GSD corresponding to a 0.01-mm spot on the film (100 lines/mm).
Assume nadir viewing. This is a geometry problem.
15. What is the f/# for the 0.57-mm pinhole illustrated in Fig. 3.18? The focal
length is approximately 50 mm.
16. How large an optic (mirror) would you need on the moon to obtain a 0.66-m
GSD when viewing the earth? What is the angular resolution of this optic (Δθ
in radians)? Assume that the visible radiation λ = 0.5 μm = 5 × 10⁻⁷ m.
17. One of the most popular cameras used for airborne mapping today is the
Microsoft/Vexcel Ultracam. The Ultracam Eagle can be outfitted with a
variety of lenses, including a 210-mm, f/5.6 optic. A typical flight altitude
is 1000 m. The panchromatic image size is 20,010 × 13,080 pixels, and the
panchromatic physical pixel size (pitch) is 5.2 μm. Calculate the resolution
defined by the Rayleigh criteria at 1.0 μm and the resolution defined by
the geometry of the camera.25 The result should be 2.5 cm.

25. http://www.microsoft.com/ultracam/en-us/UltraCamEagle.aspx.

Chapter 4
Optical Satellite Systems

This chapter applies the knowledge from Chapter 3 to a few illustrative satellite
systems. The Hubble Space Telescope is one of the most impressive
illustrations of the technology, even after 25 years of service. Smaller
(commercial) systems are described as well, and illustrations are given of
nighttime imaging.

4.1 Hubble: The Big Telescope


The science and technology of remote sensing involves the propagation of
light from source to subject to sensor and relies on an understanding of the
platform (or satellite bus) and the processes that transmit remotely sensed
imagery to the ground. The Hubble Space Telescope illustrates all of these
elements in the image chain.

4.1.1 The Hubble satellite


The Hubble Space Telescope (HST) was deployed April 25, 1990 from the
space shuttle Discovery (STS-31). The HST is large, featuring a 2.4-m-
diameter mirror that allows remarkable access to the distant universe,
unimpeded by atmospheric absorption, scattering, and turbulence. Consider
that the 100-inch Hooker telescope installed at the Mt. Wilson Observatory
was the largest telescope in the world from 1917–1948; Edwin Hubble used
this telescope for decades. Seven decades later, a telescope of the same size was
put in orbit (Fig. 4.1), named after the famous astronomer. From a space-
operations perspective, one of the remarkable things about the satellite has
been the five service missions that repaired and updated the space observatory.
The HST is roughly cylindrical (Figs. 4.2 and 4.3): 13.1 m end-to-end, and
4.3 m in diameter at its widest point. The ten-ton vehicle is three-axis stabilized.
Maneuvering is performed via four of six gyros, or reaction wheels. Pointing
can be maintained in this mode (coarse track) or through fine-guidance sensors


Figure 4.1 The initial deployment of the Hubble Space Telescope.

Figure 4.2 The Hubble satellite. Image courtesy NASA/GSFC.


Figure 4.3 Image of the HST taken during STS-82, the second service mission, on
February 19, 1997 (S82E5937, 07:06:57). New solar arrays had not yet been deployed.

(FGSs) that lock onto guide stars to reduce drift and ensure pointing accuracy.
The HST’s pointing accuracy is 0.007 arcseconds (0.034 μradians).
Power to the system electronics and scientific instruments is provided by
two 2.4 × 12.1-m solar panels, which provide a nominal total power of 5 kW.
The power generated by the arrays is used by the satellite system (1.3 kW)
and scientific instruments (1.0–1.5 kW); it also charges the six nickel–
hydrogen batteries that power the spacecraft during the roughly 25 minutes
per orbit in which the HST is in the earth’s shadow.1
Communications with the HST are conducted via the tracking and data-
relay satellites (TDRS, see Appendix 3). Observations taken during the time
when the TDRS system is not visible from the spacecraft are recorded and
dumped during periods of visibility. The spacecraft also supports real-time
interactions with the ground system during times of TDRS visibility. The
primary data link is at 1024 kbps, using the S-band link to the TDRS.2 The
system routinely transfers a few gigabytes per day to the ground station. Data
are then forwarded to NASA/GSFC via landlines.

4.1.2 The Hubble telescope design


The Hubble is an f/24 Ritchey–Chretien Cassegrain system with a 2.4-m-
diameter primary mirror and a 0.3-m secondary mirror. The Cassegrain

1. Dr. J. Keith Kalinowski, NASA/GSFC, private communication, August 3, 1999.


2. Daniel Hsu, Hubble Operations Center, January 7, 2005. Science data are all 1 Mbps (real
time or playback).


Figure 4.4 Hubble optics. The mirrors are hyperboloids, and the secondary is convex. The
primary has a focal length of 5.5 m and a radius of curvature of 11.042 m. The secondary
has a focal length of 0.7 m and a radius of curvature of 1.358 m. The bottom image is an
accurate ray trace for the Cassegrain telescope, courtesy of Lambda Research (OSLO).3

design for the Hubble is very common among satellite systems. The primary
mirror is constructed of ultra-low-expansion silica glass and coated with a thin
layer of pure aluminum to reflect visible light. A thinner layer of magnesium
fluoride is laid over the aluminum to prevent oxidation and reflect ultraviolet
light. The secondary mirror is constructed from Zerodur, a very-low-thermal-
expansion (optical) ceramic. The effective focal length is 57.6 m.
Figure 4.4 illustrates the optical design of the telescope. The distance
between the mirrors is 4.6 m, and the focal plane is 1.5 m from the front of the
primary mirror. The angular resolution at 400 nm is nominally 0.043
arcseconds (0.21 μradians). The fine guidance sensors, or star trackers, view

3. http://www.lambdares.com


Figure 4.5 The primary mirror of the Hubble telescope measures 2.4 m (8 ft) in diameter
and weighs about 826 kg (1820 lbs). By comparison, the Mt. Wilson 100-inch-solid-glass
mirror weighs some 9000 pounds.4 The center hole in the primary mirror has a diameter of
0.6 m.

through the primary optic. The off-axis viewing geometry does not interfere
with imaging, and it allows the use of the large primary optic for the necessary
detector resolution. Figure 4.5 shows a closeup of the 96″ mirror. A
requirement for the spacecraft was a pointing accuracy (jitter) of 0.007
arcseconds, which was more easily achieved after the first set of solar arrays
was replaced. The original flexible array design vibrated rather badly every
time the satellite moved from sun to shadow or shadow to sun—that is, twice
an orbit. This design error required a redesign of the satellite-pointing-control
algorithms.
A more serious problem was found with the Hubble: the mirror was not
ground to the right prescription, and it suffered from spherical aberration (too
flat by about 4 μm at the edges). As a consequence, new optics designs were
created, and a corrective optic was added for the existing instruments
(COSTAR). Figure 4.6 shows the before and after for the spherical aberration
problem. Subsequent scientific instruments, such as the WFPC2, built
corrections into the optics of the newer instruments. COSTAR was removed
during the last servicing mission because it was no longer needed.

4.1.3 The Hubble detectors: Wide-Field and Planetary Camera 2


The Hubble normally carries four or five distinct sensors. The Wide Field and
Planetary Camera 2 (WFPC2) is described in this section. Hubble’s scientific

4. http://www.mtwilson.edu, including a link to the 1906 article by George Hale describing the
new telescope.


Figure 4.6 On the top left, a FOC image of a star taken prior to the STS-61 shuttle mission to
service the HST, during which astronauts installed COSTAR. The broad halo (1-arcsecond
diameter) around the star is caused by scattered, unfocused starlight. On the right, following
installation, deployment, and alignment of COSTAR, starlight is concentrated into a
0.1-arcsecond radius circle. Images are reprinted courtesy of the Space Telescope Science
Institute (STScI), STScI-PRC1994-08. The bottom two images were taken of the center of NGC
1068 before and after COSTAR correction of Hubble’s aberration (STScI-PRC1994-07).5

instruments are mounted in bays behind the primary mirror. The WFPC2
occupied one of the radial bays, with an attached 45° pickoff mirror that allowed
it to receive the on-axis beam. (The best image quality is obtained on-axis.)
The WFPC2 field-of-view is distributed over four cameras by a four-
faceted pyramid mirror near the HST focal plane. Each of the cameras

5. http://www.spacetelescope.org/about/general/instruments/costar.html;http://hubblesite.org/
newscenter/archive/releases/1994/07/image/a/


Table 4.1 Hubble Space Telescope characteristics.

Launch date/time                 1990-04-25 at 12:33:51 UTC
On-orbit dry mass                11600.00 kg
Nominal power output             5000 W (BOL)
Batteries                        6 (60-amp-hour NiMH)
Orbital period                   96.66 min
Inclination                      28.48°
Eccentricity                     0.00172
Periapsis                        586.47 km
Apoapsis                         610.44 km
Telemetry rate: science data     TDRS, S-band, SA, 1024 kbps
Telemetry rate: engineering      TDRS, S-band, MA, 32 kbps

Figure 4.7 WFPC2 optics. Light enters the optical train from the main telescope at left.

contains an 800 × 800 pixel Loral CCD detector. Three wide-field cameras
operate at f/12.9, and each 15-μm pixel samples a 0.10-arcsecond portion of
the sky. The three wide-field cameras cover an L-shaped field of view of
2.5 × 2.5 arcminutes. The fourth camera operates at 0.046″ (arcseconds, or
0.22 μradians) per pixel (f/28.3) and is referred to as the planetary camera.
This sensor is therefore operating at the full resolution of the telescope. The
fourth camera observes a smaller sky quadrant: a 34″ × 34″ field. This is a
sufficiently large field of view to image all the planets but Jupiter. The spectral
range lies from approximately 1150–10500 Å. The exposure times range from
0.11 to 3000 s.
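
Those plate scales can be checked from the pixel pitch and the camera f/numbers, assuming an effective focal length of f/# × 2.4 m (a rough sketch, not the formal WFPC2 optical prescription):

import math

APERTURE_M = 2.4
PIXEL_M = 15e-6
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

for name, f_number in (("wide-field camera", 12.9), ("planetary camera", 28.3)):
    efl_m = f_number * APERTURE_M                    # effective focal length
    scale = PIXEL_M / efl_m * RAD_TO_ARCSEC          # angle subtended by one pixel
    print(f"{name}: {scale:.3f} arcsec/pixel")
# ~0.100 and ~0.046 arcsec/pixel, matching the values quoted above.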
The WFPC2 was ultimately replaced by the Advanced Camera for
Surveys (ACS) and Wide-Field Camera 3 (WFC3), which used more
sophisticated detectors but had similar characteristics in terms of wavelength
coverage and angular resolution.


Figure 4.8 This image of Mars was taken by the HST using the WFPC2, on October 28,
2005, when Mars was near opposition—approximately a distance of 70 million km from
earth. The image shows the blue, green, and red data from three filter wheel positions
(410 nm, 502 nm, and 631 nm). The spatial resolution is 10 km. Image is reprinted
courtesy of NASA, ESA, the Hubble Heritage Team (STScI/AURA), J. Bell (Cornell
University), and M. Wolff (Space Science Institute).6

Example: Diffraction and resolution limits


The resolution concepts developed in Chapter 3 are illustrated here using
values from the HST.
The angular resolution for the telescope is given as 0.043″, with the
slightly larger value of 0.046″ for the combined Hubble/WFPC2 system.
These values include all of the effects that are ignored in the simplest
approximation to the Rayleigh criteria, as given in Eq. (3.7):

6. Hubble Site News Release Number: STScI-2005-34; http://hubblesite.org/newscenter/archive/


releases/2005/34/


Figure 4.9 For the Hubble/WFPC2 combination, altitude is 600 km, detector size is 15 μm,
and effective focal length is 57 m.

Δθ = 1.22 · λ / lens (mirror) diameter.

A few numbers are tested here, assuming a deep-blue wavelength (410 nm):

Δθ = 1.22 · (4.1 × 10⁻⁷ m / 2.4 m) = 2.08 × 10⁻⁷ radians.

In order to compare this value to the given value of 0.043″, the given
resolution is converted to radians:

Δθ = [0.043 arcseconds / (60 s/min · 60 min/deg)] · (2π radians / 360 deg) = 2.08 × 10⁻⁷ radians.

Applying this value to the hypothetical problem of the ground resolution that
the Hubble would have if pointed down produces

GSD = Δθ · altitude = 2.08 × 10⁻⁷ · 600 × 10³ m = 0.125 m.


If the Hubble were pointed downward, the resulting GSD would be 12.5 cm. The
calculations can be repeated for 0.046″, but the increase is only a few percent.

Geometric resolution
The example thus far implies that the detector has infinite resolution. In
reality, however, it does not. The concept of similar triangles discussed
previously and this example’s values for the detector’s pixel size can be used to
compare the detector resolution to the best resolution offered by the telescope:
GSD / altitude = pixel size / focal length,

or

GSD = (pixel size / focal length) · altitude = (15 × 10⁻⁶ / 57) · 600 × 10³ = 0.16 m,


which is slightly worse than the best results that the telescope can provide—
the Airy disk from a distant star (or a small, bright light on the ground) would
not quite fill one detector pixel. The detector is undersampling the image in
the shortest wavelengths.
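
Both limits of this nadir-viewing thought experiment can be computed together; a short sketch using the numbers from the example:

def diffraction_gsd(wavelength_m, aperture_m, range_m):
    """Rayleigh-limited ground resolution: 1.22 * lambda / D * range."""
    return 1.22 * wavelength_m / aperture_m * range_m

def geometric_gsd(pixel_m, focal_length_m, range_m):
    """Pixel-limited ground resolution from similar triangles."""
    return pixel_m / focal_length_m * range_m

ALTITUDE_M = 600e3
print(diffraction_gsd(410e-9, 2.4, ALTITUDE_M))   # ~0.125 m at 410 nm
print(geometric_gsd(15e-6, 57.0, ALTITUDE_M))     # ~0.16 m for 15-um pixels, 57-m focal length
# The detector pixel, not diffraction, sets the limit at the shortest wavelengths.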

4.1.4 The repair missions


The Hubble has been serviced five times, beginning with the STS-61 shuttle
mission launched on December 2, 1993. STS-61 was the fifth flight of the
Endeavour orbiter. During several days of EVA, the crew installed corrective
optics (COSTAR) in the light path after removing the high-speed photometer
(HSP) instrument, replaced the older wide-field/planetary camera (WF/PC)
with a newer version (WFPC2), and replaced malfunctioning solar arrays.
COSTAR helped correct the effects of the unfortunate spherical aberration
caused by incorrect grinding of the Hubble mirror.
The next repair was made during shuttle mission STS-82, launched on
February 11, 1997. This mission again involved repairs and instrument swapping.
During several days of EVA, the crew replaced a failed fine-guidance sensor
(FGS), swapped one of the reel-to-reel tape recorders for a solid state recorder,
and exchanged the original Goddard High-Resolution Spectrograph (HRS) and
UCSD Faint-Object Spectrograph (FOS) with the Space-Telescope Imaging
Spectrograph (STIS) and Near-Infrared Camera and Multi-Object Spectrometer
(NICMOS), respectively. In addition to scheduled work, astronauts discovered
that some of the insulation around the telescope’s light shield had degraded, and
so several thermal-insulation blankets were attached to correct the problem.
Repair mission 3A was launched on December 19, 1999. During three
spacewalks, astronauts replaced all six Hubble gyroscopes. Four had failed,
the last failing in November 1999, which accelerated the launch schedule.
With only two gyros, the satellite was nonoperational and had been put in safe
mode. Astronauts installed new battery-voltage regulators, a faster central
computer, a FGS, a data recorder, and a new radio transmitter. The telescope
was released from Discovery on Christmas day (5:03 pm CST).
Shuttle Columbia was launched for the fourth service mission (3B) on March
1, 2002 (STS-109). New rigid solar arrays, coupled with the new power-control
unit, were installed, generating 27% more electrical power, an increase that
roughly doubled the power available to scientific instruments (curiously, the
original control algorithms designed for satellite pointing became useful for the
first time.) A new camera, the Advanced Camera for Surveys (ACS), was installed
to replace the faint-object camera, the last of Hubble’s original instruments.
With a wider field of view and sensitivity to wavelengths ranging from ultraviolet
to the far red (115–1050 nm), the ACS supplanted the WFPC2 sensor as the
primary survey instrument. A final service mission was scheduled for 2004, but
the Columbia disaster in February 2003 significantly delayed that mission.
The last service mission was STS-125 (Atlantis) from May 11–24, 2009.
Two new scientific instruments were installed [the Cosmic Origins

Spectrograph (COS) and Wide-Field Camera 3 (WFC3)], the COSTAR was


removed because it was no longer needed, and the WFPC2 was removed to
make room for the WFC3. Two failed instruments were repaired, the STIS
and the ACS. To prolong the Hubble’s life, new batteries, new gyroscopes, a
new science computer, a refurbished FGS, and new insulation on three
electronics bays were also installed during the 12-day mission with five
spacewalks. Finally, a device was attached to the base of the telescope to
facilitate de-orbiting when the telescope is eventually decommissioned.

4.1.5 Operating constraints


There are two important constraints on satellite behavior that affect most
LEO satellites, which typically operate at altitudes of a few hundred
kilometers. The HST is subject to both.

4.1.5.1 South-Atlantic anomaly


Above South America and the south Atlantic Ocean lies a lower extension of
the Van Allen radiation belts called the south-Atlantic anomaly (SAA). No
astronomical or calibration observations are possible during spacecraft
passages through the SAA because of the high background induced in the
detectors. SAA passages limit the longest possible uninterrupted exposures to
about twelve hours (or eight orbits). This phenomenon compromises almost
all LEO imaging systems, though not to the same extent as astronomical
systems, which depend on very low background-count rates in the detector.

4.1.5.2 Spacecraft position in orbit


Because the HST’s orbit is low, its atmospheric drag is significant, varying
according to the orientation of the telescope and the density of the atmosphere
(which depends on the level of solar activity). The chief manifestation of this
effect is that it is difficult to predict where the HST will be in its orbit at a
given time. The position error may be as large as 30 km within two days of the
last determination. This effect also affects earth-observing systems and can
cause significant pointing errors for high-spatial-resolution systems. Operators
for systems such as Quickbird (see next section) update their satellite
ephemeris information once an orbit.

4.2 Commercial Remote Sensing: IKONOS and Quickbird


The world of remote sensing changed dramatically on September 24, 1999
with the successful launch of the IKONOS satellite by the Space Imaging
Corporation. The subsequent launch of the Quickbird satellite by Digital
Globe on October 18, 2001 emphasized the dramatic changes emerging
in the world of imaging reconnaissance. Images with spatial resolutions of one
meter or better were now available for any customer willing to pay for them.

System, satellite, and instrument characteristics for these first two satellites
are enumerated in Table 4.2. IKONOS and Quickbird differ in design; the latter
is unique for not using a Cassegrain. Both use store-and-dump telemetry systems.
Space Imaging used a large number of ground stations; DigitalGlobe uses one or
two (northern) high-latitude ground stations. Both companies suffered system
loss in their initial launches. DigitalGlobe, launching after Space Imaging,
lowered the orbit of their satellite to provide a higher spatial resolution and give
an economic advantage over its competitor. A larger focal plane allowed them to
maintain a larger swath width.
A fleet of commercial satellites have followed IKONOS and Quickbird
into orbit with ever-improving GSDs. The three (commercial) U.S. vendors
have since been consolidated, with DigitalGlobe absorbing their competitors.
The most recent systems have been designed to offer a spatial resolution of
better than 0.5-m GSD for their panchromatic sensors. Imagery from
Worldview-3, launched August 13, 2014, approaches a 0.35-m GSD.

4.2.1 IKONOS satellite


The IKONOS satellite (Fig. 4.11) was launched on September 24, 1999 from
Vandenberg AFB. Essentially a miniature version of the Lockheed-built HST,
orbiting at 681 km, it provides 1-m-resolution panchromatic (visible-range)
imagery with a revisit time of three days. IKONOS also carries a 4-m-
resolution multi-spectral sensor, covering the VNIR portion of the spectrum
addressed by Landsat bands 1–4 and the four SPOT bands.7
Figures 4.10 and 4.12 show the historic ‘first light’ imagery collected over
Washington D.C. by IKONOS. One of the biggest advantages of this new
generation of satellites is the ability to point significantly off nadir, dramatically
reducing the revisit time in comparison with earlier systems such as Landsat.
A wider dynamic range makes the sensor more capable as well.

4.2.1.1 Imaging sensors and electronics for the IKONOS satellite


4.2.1.1.1 Camera telescope
The telescope design (Fig. 4.13) is a Cassegrain with a central hole in the
mirror and detectors behind. Three of the five telescope mirrors are curved
and used to focus the image onto the imaging sensors at the focal plane. Two
flat mirrors, known as fold mirrors, bounce the imagery across the width of
the telescope, thereby reducing the overall telescope length from 10 m to
2 m. The three-mirror anastigmat has a focal length of 10 m and is an f/14.3
optic. The primary is 0.7 m in diameter and 0.10 m thick, with a mass of
13.4 kg. Two of the optics can be adjusted for focusing by ground command
should correction be necessary.

7. M. Mecham, “IKONOS Launch to Open New Earth-Imaging Era,” Aviation Week & Space
Technology, McGraw-Hill, New York (October 4, 1999).

Table 4.2 Imaging satellite characteristics.

                           IKONOS                                    Quickbird
Launch date                September 24, 1999                        October 18, 2001
Launch vehicle             Athena II                                 Delta II
Launch location            Vandenberg Air Force Base, California (both satellites)
Altitude                   681 km                                    450 km
Orbit period               98 min                                    93.4 min
Inclination                98.1°                                     98°
Panchromatic (Pan) GSD     1 m (nominal at <26° off nadir)           0.61 m at nadir
Multi-spectral (MSI) GSD   4 m (nominal)                             2.44 m at nadir
Swath width                11 km at nadir                            16.5 km at nadir
Revisit frequency          2.9 days at a 1-m resolution;             1 to 3.5 days at 70-cm resolution,
                           1.5 days at a 1.5-m resolution            depending on latitude
                           for targets at 40° latitude
Metric accuracy            12-m horizontal and 10-m vertical         14.0-m RMSE
                           accuracy with no control
Mass/size                  726 kg at launch, with a main body        1024 kg (wet, extra hydrazine for
                           1.8 × 1.8 × 1.6 m                         low orbit), 3.04 m (10 ft) in length
Onboard storage            64 Gb                                     128 Gb
Comms                      Payload data: X-band downlink at 320 Mbps (both satellites)
                           Housekeeping: X-band at 4, 16, and 256 kbps; 2-kbps S-band uplink

4.2.1.1.2 Imaging sensors and electronics


The camera’s focal-plane unit—attached to the back end of the telescope—
contains separate sensor arrays for simultaneous capture of panchromatic
(black-and-white) and multi-spectral (color) imagery. The panchromatic sensor
array consists of 13,500 pixels with a pitch of 12 µm (three overlapping 4648-
pixel linear arrays).8 The multispectral array is coated with special filters, and the

8. Kodak Insights in Imaging Magazine (June 2003). See Fig. 6.4 for a similar sensor.

Figure 4.10 First light image from IKONOS of the Jefferson memorial, taken September
30, 1999. Image reprinted courtesy of DigitalGlobe.

3375 pixels have a pitch of 48 µm. These 4:1 ratios for the detector array sizes
and detector pitches are typical of the design of these systems. The result is
that the panchromatic sensors have 4× the resolution of the multi-spectral sensors.
The digital processing unit compresses digital image files from 11 bits per
pixel (bpp) data to an average value of 2.6 bpp at a speed of 115 million pixels
per second. The compression is important for onboard storage and telemetry
purposes. The lossless, real-time compression of the imagery is a capability that
has only recently been made practical by modern computational resources.
Importantly, IKONOS and Quickbird offer the extended dynamic range
represented by 11 bits (DN = 0–2047), a significant improvement over
contemporary NASA systems. This topic is covered further in Chapter 7.
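The onboard data-rate arithmetic implied by these numbers can be sketched briefly. In the Python fragment below, the 13,500-pixel line and the 11- and 2.6-bit-per-pixel figures come from the text; the 7.5-km/s ground speed and 1-m along-track line spacing are assumptions consistent with values used elsewhere in the chapter.

```python
# Rough pushbroom data rate before and after compression
pixels_per_line = 13500            # panchromatic pixels per line (from the text)
ground_speed = 7.5e3               # m/s, assumed LEO ground-track speed
line_spacing = 1.0                 # m, assumed along-track sampling

line_rate = ground_speed / line_spacing          # lines per second
raw = pixels_per_line * line_rate * 11           # bits/s at 11 bits/pixel
compressed = pixels_per_line * line_rate * 2.6   # bits/s at ~2.6 bits/pixel

print(f"raw:        {raw/1e6:6.1f} Mbps")
print(f"compressed: {compressed/1e6:6.1f} Mbps")  # compare with the 320-Mbps X-band link in Table 4.2
```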

4.2.2 NOB with IKONOS: Severodvinsk


Figures 4.14 and 4.15 illustrate the sort of imagery that can be obtained for
regions of strategic interest. The former presents a large scene, and the latter
provides a zoomed-in view of the submarine facility. Compare these images to
the Corona image shown at the beginning of Chapter 3 (Fig. 3.8).


Figure 4.11 Image of the IKONOS satellite in the acoustic test cell at Lockheed Martin
Missile and Space in Sunnyvale, CA. It is basically a “baby brother” to the HST.

4.3 The Earth at Night


The ability to image the earth at night first emerged with the Defense
Meteorological Satellite Program (DMSP), particularly when the system was
declassified in 1973. The ability to see night lights is a tremendous tool when
looking at industrial output (industrial order of battle). Recently, the NOAA
Suomi platform has dramatically increased our ability to image the earth at
night.9

9. T. E. Lee et al., “The NPOESS VIIRS Day/Night Visible Sensor,” Bull. Am. Meteorol. Soc.
87, 191–199 (Feb. 2006); S. E. Mills et al., “Calibration of the VIIRS Day/Night Band
(DNB),” 6th Annual Symposium on Future National Operational Environmental Satellite
Systems-NPOESS and GOES-R; https://ams.confex.com/ams/90annual/techprogram/
paper_163765.htm


Figure 4.12 First light image from IKONOS, taken September 30, 1999. The Jefferson
memorial is shown at a higher resolution in Fig. 4.10. North is to the right in this image
orientation.

The most-recent generation of LEO meteorological satellites was


initiated on October 28, 2011 with the launch of the Suomi satellite,
following a long struggle to create the new National Polar-orbiting
Operational Environmental Satellite System (NPOESS). The NPOESS
Preparatory Project (NPP) spacecraft was strategically renamed the Suomi National Polar-
orbiting Partnership, or Suomi NPP, after the successful launch.
The new Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi
has provided a spectacular improvement over the DMSP-OLS system.
VIIRS carries a number of spectral bands; the interesting one for these
purposes is the “day–night” band that detects light in a range of wavelengths
from 0.5–0.9 µm and uses up to 250 steps of time-delay integration (TDI,
defined in the following section). It is so sensitive that airglow and
moonlight reflecting off the water pose a significant problem for calibration—
the earth is never dark enough. Spatial resolution is 750 m, and the sensor
maintains that resolution across the scan. The dynamic range is 14 bits
(compared to 5 bits on DMSP/OLS), so not only is the sensor more sensitive,
but it can resolve fine levels of brightness. VIIRS data are shown in Chapter
1 for Egypt and the Nile River. A composite image built from a large
number of April and October 2012 images appears in Fig. 4.16. Like its
predecessor (DMSP), the VIIRS sensor observes aurora and airglow
signatures, as well as the surface phenomena shown here.

4.4 Exposure Times


The discussion thus far has neglected one final, rather important point with
regard to imaging resolution. Satellite movement becomes an issue for high-
spatial-resolution systems because of motion blur, just like that which occurs
in regular photography of rapidly moving targets. A simple calculation can

Figure 4.13 The IKONOS telescope, built by Kodak, features three curved mirrors. Two
additional flat mirrors fold the imagery across the inside of the telescope, thereby
significantly reducing telescope length and weight. The telescope is an obscured, three-
mirror anastigmat with two fold mirrors, a 70-cm-diameter primary with a 16-cm central hole,
a 10.00-m focal length, and a 1.2-µrad instantaneous field of view (pixel).

estimate exposure time. CCD arrays have sensitivity not unlike regular
daylight film, with a standard speed defined as ISO 100.10 An old
photographic rule of thumb is that the exposure time at f/11 to f/16 is
1/ISO, or in this case 1/100 s. The f/14 IKONOS optics provide sufficient light

10. International Organization for Standardization, or ISO [the successor to the American
Standards Association (ASA)], ratings will mostly be familiar to old film photographers.
Kodak Plus-X pan, and Kodak Kodacolor films were rated ISO 100. Kodachrome, as made
famous by National Geographic photographers and musician Paul Simon, was rated ASA 25.


Figure 4.14 Severodvinsk, near metropolitan Arkhangelsk, southeast of Murmansk. This is


the overview image for the 11,740-pixel (columns) × 12,996-pixel (rows) image. The
0.85-m-resolution original was downsampled to a 1.0-m resolution before delivery. The
original Space Imaging license limited the company to a 1-m resolution. Image reprinted
courtesy of DigitalGlobe.

for a 1/100-s (10-ms) exposure. The satellite moves approximately 75 m in that


time—too far for a clear image. Modern sensors are somewhat more sensitive
than those described here, but the principle holds for systems like Quickbird
and IKONOS.
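The smear estimate quoted above is easy to reproduce. The sketch below repeats the 1/100-s rule-of-thumb exposure and the roughly 7.5-km/s ground speed used in the text; the 1-m GSD is an assumed IKONOS-class pixel size used only to express the smear in pixels.

```python
# Ground smear during a single "film-style" exposure from LEO
iso = 100
exposure = 1.0 / iso      # s, the old 1/ISO rule of thumb at ~f/11-f/16
ground_speed = 7.5e3      # m/s, approximate LEO ground-track speed (from the text)
gsd = 1.0                 # m, assumed panchromatic pixel size

smear = ground_speed * exposure
print(f"Smear during a {exposure*1e3:.0f}-ms exposure: {smear:.0f} m "
      f"(~{smear/gsd:.0f} pixels)")   # ~75 m, far too much for a sharp image
```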
There are two approaches to the need for very short exposure times. One
method mechanically scans the optics to compensate for satellite motion or
slews the spacecraft so as to reduce the effective ground velocity. The Landsat
Thematic Mapper compensates for the satellite motion, as described in
Chapter 6, with a scanning mirror system. The second solution counters the
motion electronically, as is performed on IKONOS and Quickbird with a
technique called time-delay and integration (TDI) on the focal plane, whereby
electrons are moved along the focal plane in the direction of satellite motion.
The electrons accumulate until a sufficient exposure time has been reached
(the same technology is used on some flatbed scanners). The Kodak-built
focal-plane on IKONOS has up to 32 steps of TDI; the VIIRS day–night band
(high gain) has 250 steps. For daylight imagers, typically, twenty steps of TDI


Figure 4.15 Severodvinsk, as captured by IKONOS. Compare this image with the Corona
image in Fig. 3.8. Acquisition date and time: 06-13-2001, 08:48 GMT. Nominal collection
azimuth: 133.4306°. Nominal collection elevation: 79.30025°. Sun angle azimuth:
168.5252°. Sun angle elevation: 48.43104°.

Figure 4.16 This image of the continental United States at night is a composite assembled
from data acquired by the Suomi NPP satellite in April and October 2012. The nominal
imaging time is 1:30 AM in each orbit. The primary downlink connects to Svalbard, Norway.
Image reprinted courtesy of NASA Earth Observatory/NOAA NGDC.


Figure 4.17 Edited image of the Long Beach harbor at night, taken from the International
Space Station (ISS016-E-27162.JPG). Date and time: Feb 4, 2008, 07:44:37.24 GMT;
camera: Nikon D2Xs; exposure time: 1/20 s; f/2.8; and focal length: 400 mm.

are needed to build up sufficient charge. The Quickbird satellite implements


both TDI and mechanical slew of the whole spacecraft to reduce effective
ground velocity. As more recent systems have pushed the resolution down to
50 cm or better, the operational approach is now solely to slew the satellite
across the scene, cancelling out orbital motion as a part of the imaging process
(e.g., Worldview-2).
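The TDI bookkeeping can be sketched in a few lines. In this simplified view, each TDI stage can integrate only while one ground pixel passes beneath it, and summing N stages multiplies the effective exposure; the 1-m GSD and 7.5-km/s ground speed follow the text, and the 20- and 32-stage counts are the values quoted for daylight imagers and the IKONOS focal plane. This is an idealized estimate, not the actual focal-plane timing.

```python
# Simplified TDI timing: per-stage dwell and effective exposure
gsd = 1.0             # m, assumed panchromatic pixel
ground_speed = 7.5e3  # m/s

stage_time = gsd / ground_speed        # dwell per TDI stage (~133 microseconds)
for n_stages in (20, 32):
    effective = n_stages * stage_time
    print(f"{n_stages:2d} TDI stages -> effective exposure {effective*1e3:.1f} ms")
```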
An earth-tracking system has been added on the International Space
Station (ISS), allowing relatively long exposure times. Figure 4.17 shows an
image of the harbor in Long Beach, CA. The port facilities of the city are
illuminated with regularly spaced, orange sodium-vapor lights. Observe also
the city streets and some lights from ships in the scene. The resolution here is a
few meters. The Israeli EROS-B and U.S. SkySat-1 systems have acquired
small samples of nighttime imagery at 1-m resolutions.

4.5 Problems
1. What are the focal length, diameter, and f/# of the Hubble primary optic?
2. For a pushbroom scanner like IKONOS, calculate the data rate implicit in
a system with a 13,500-pixel linear array, assuming 16 bits/channel, or


pixels, imaging pixels on the ground with a 0.8-m GSD. Data compression
by a factor of 4 reduces the required number of bits to 4 bits/channel.
Assume the spacecraft is moving at 7.5 km/s. How many bits/second must
the telemetry system be able to handle? To do this problem, calculate the
length of time it takes the spacecraft to move one meter. Then calculate
the number of bits acquired in that time. Compare to the known
bandwidth of the IKONOS satellite.
3. Skybox Imaging has flown a staring focal plane for high-resolution
imaging from low-earth orbit. The system can acquire 5 megapixel frames
at rates up to 30 Hz for up to 30 s. What bandwidth would be required for
such a sensor to operate in near real time? What is the data volume for one
panchromatic scene (30 s)? Assume 12 bits/pixel.
4. The Hubble telescope ACS cameras have optical systems of f/25 and f/70.
To what focal lengths do the two channels correspond?
5. At opposition, the distance from the earth to Mars can be as low as 65
million km. What is the best spatial GSD the Hubble WFPC2 could
produce at that range?
6. The Nikon camera used to take the image of Long Beach harbor in Fig. 4.17 has
a pixel pitch of 5.5 × 5.5 µm. What spatial resolution can the 400-mm lens
used for this image give under ideal circumstances? The ISS altitude is
333 km. Assume a nadir view. Compare to the distance the spacecraft
moved in 0.05 s (the exposure time).

Chapter 5
Orbital Mechanics Interlude

The design and operation of remote-sensing systems depends on the orbital


motion of the satellites they employ, and consequently a proper understanding
of how they work and how to exploit them requires knowledge of orbital
mechanics. This chapter provides an understanding of orbital mechanics as it
impacts remote sensing. One important consequence of orbital mechanics is
the impact it has on coverage of targets and telemetry.

5.1 Gravitational Force


The orbital mechanics of earth-orbiting satellites are determined by the effects
of gravity. Students should be familiar with the simple textbook case where
the force due to gravity is given by the formula f ¼ mg, where m is mass
(generally in kilograms), and g is acceleration due to gravity (approximately
9.8 m/s2). Unfortunately, this simple form, while appropriate near the surface
of the earth, will not work for orbital motion—a more complex form must be
used. The correct formula for this “central force problem” is

F = −G (m₁m₂/r²) r̂,  (5.1)

where G = 6.67 × 10⁻¹¹ N·m²/kg² (the gravitational constant), m₁ and m₂ are the
masses involved (usually the earth and the satellite), r is the separation
between the center of the earth and the satellite, and the vector elements (and
sign) indicate that the force exists along a line joining the centers (of mass) of
the two bodies. As always, the force F is in Newtons and masses are in
kilograms. At the surface of the earth, this equation takes the familiar form

F = g₀m,  (5.2)

where g₀ = G m_earth/R_earth² = 9.8 m/s² is the acceleration due to gravity at the
earth's surface. This can lead to a convenient form of Eq. (5.1):

F = g₀m (R_earth/r)²,  (5.3)

where R_earth = 6380 km, and this example uses m_earth = 5.9736 × 10²⁴ kg.
Although m_earth and G are not known to high accuracy, the product is
GM_earth = (3.98600434 ± 2 × 10⁻⁸) × 10¹⁴ m³ s⁻². The ±2 × 10⁻⁸ in the parentheses
is the error in the last digit of the expression—the term has nine significant
digits.1
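A minimal sketch of Eq. (5.3) is shown below: the gravitational acceleration falls off as (R_earth/r)². The sample altitudes are arbitrary illustrations, not values from the text.

```python
# Gravitational acceleration versus altitude, from Eq. (5.3)
g0 = 9.8        # m/s^2, surface gravity (from the text)
Re = 6380e3     # m, earth radius (from the text)

for altitude_km in (0, 600, 1000, 35786):
    r = Re + altitude_km * 1e3
    g = g0 * (Re / r) ** 2
    print(f"altitude {altitude_km:6d} km: g = {g:5.2f} m/s^2")
```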

5.2 Circular Motion


The force due to gravity results in a variety of possible solutions of the
equations of motion, the simplest of which is circular orbits, which objects like
the moon approximate.

5.2.1 Equations of motion


The velocity of an object moving in a circular orbit is described by means of
an angular velocity, which determines the relationship between the radius of
the circular motion and the linear velocity.

v = ωr,

where v is the velocity in meters per second, r is the distance from the center of
motion, and ω is the angular velocity (radians per second). The angular frequency
ω is related to the “regular” frequency f by a factor of 2π: ω = 2πf.
Frequency, in turn, is related to the period τ by the relation

τ = 1/f = 2π/ω.  (5.4)

Examples

• A car is traveling in a circle with a 200-m radius at 36 km/h. What is ω?

  ω = v/r = (36 × 10³ m / 3600 s) / 200 m = 0.05 radians/s.

• A satellite is orbiting the earth once every 90 min. What is ω? The period
  τ = 90 × 60 = 5400 s; f = 1/τ = 1/5400 = 1.85 × 10⁻⁴ s⁻¹; ω = 2πf = 1.16 × 10⁻³ radians/s.

1. Rees, 1990.


5.2.2 Centripetal force


Newton said that for a mass to move in a trajectory other than a straight line,
a force must be applied. In particular, circular motion requires the application
of centripetal force. The magnitude of this force is
F_centripetal = m v²/r = m ω²r.  (5.5)
5.3 Satellite Motion
For a satellite in circular motion, the centripetal force is supplied by gravity,
resulting in the balance of forces

F_centripetal = m v²/r = F_gravity = g₀m (R_earth/r)².  (5.6)

The satellite mass cancels out; orbital motion does not depend on satellite mass:

v²/r = g₀ (R_earth/r)²  ⇒  v² = g₀R_earth²/r  ⇒  v = R_earth √(g₀/r).  (5.7)

There is an inverse relationship between the radius and velocity of a satellite in
circular orbit around the earth: the larger the orbital radius, the slower the
satellite moves (v ∝ 1/√r). This simple derivation introduces some basic
concepts of orbital motion and quickly leads to Kepler's laws.

5.3.1 Illustration of geosynchronous orbit


What is the radius of the orbit of a geosynchronous satellite, i.e., a satellite
with an orbital period of 24 h? First,
ω = 2π/(24 h) = 2π/(86,400 s),  and  v = R_earth √(g₀/r)  ⇒  ω = v/r = R_earth √(g₀/r³)  ⇒  ω² = g₀R_earth²/r³,

or

r³/R_earth³ = g₀/(ω²R_earth)  ⇒  r/R_earth = [g₀/(R_earth ω²)]^(1/3)
  = [9.8 × (86,400)² / (6.38 × 10⁶ × (2π)²)]^(1/3) = (290.45)^(1/3) = 6.62.
The geosynchronous orbit is 6.6 earth radii (geocentric). What is the velocity
of the satellite?
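The same calculation is easy to reproduce numerically. The sketch below solves ω² = g₀R_earth²/r³ for r with a 24-h period, using the chapter's values of g₀ and R_earth; the orbital speed is left as the question posed above (v = ωr).

```python
import math

# Radius of a geosynchronous orbit from omega^2 = g0 * Re^2 / r^3
g0 = 9.8          # m/s^2 (from the text)
Re = 6.38e6       # m (from the text)
period = 86400.0  # s, a 24-h period

omega = 2 * math.pi / period
r = (g0 * Re**2 / omega**2) ** (1.0 / 3.0)

print(f"r = {r/1e3:.0f} km = {r/Re:.2f} Re")   # ~42,000 km, ~6.6 Re
# The orbital speed then follows from v = omega * r.
```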

5.4 Kepler’s Laws


Johannes Kepler (1571–1630) studied the orbital motion of planets using data
obtained from Tycho Brahe. Starting from the Copernican theory of the solar
system, i.e., the sun is at the center, Kepler posited that the orbits of the


planets are elliptical, not circular. Kepler’s three laws describing planetary
motion apply equally to satellites:
1. Planetary orbits are ellipses, with one focal point at the center of the sun.
2. Equal areas are swept out in equal times.
3. The square of the orbital period is proportional to the cube of the semi-
major axis.
It was one of the great triumphs of Newtonian mechanics that Kepler’s laws
could be derived from basic physics principles.

5.4.1 Elliptical orbits


The first of Kepler’s laws stated that orbital motion would be elliptical—
circular orbits are a special case of such orbits. An ellipse (Fig. 5.1) is
characterized by the semi-major and semi-minor axes (a, b), or alternatively,
the semi-major axis and the eccentricity (ε or e). In our case, the central point
for orbital motion is the focus, which for satellite motion around the earth is
the center of the earth. Some useful formulas:
x²/a² + y²/b² = 1;  ε = √(a² − b²)/a  or  ε = √(1 − b²/a²).

The distance from the center to the focus is c = εa = √(a² − b²). The sum of the
perigee and apogee is just twice the semi-major axis.

Figure 5.1 A graph of radius r versus angle θ for an elliptical orbit. In cylindrical or spherical
coordinates, r = a(1 − ε²)/(1 + ε cos θ).


Figure 5.2 Earth is at one focus (the ellipse center is at x = −5.29); the x range is −13.29 to 2.71 Re (earth radii).

5.4.2 Equal areas are swept out in equal times


Kepler’s second law (Fig. 5.2) is a consequence of the conservation of angular
momentum: L = m v × r, and |L| = mvr sin θ is a constant. Therefore, at each
point along the orbit, the product of the radius and the velocity perpendicular
to the radial vector, v_θ, is a constant. At perigee and apogee, the radial
velocity is zero (by definition), and checking the values shown in the figure, one can
see that 2.709 × 6.192 ≈ 13.277 × 1.263.
In consequence of this law, a satellite spends most of its time near apogee.
The instantaneous velocity is given by

v = √[GM (2/r − 1/a)],

where r is the instantaneous radius from the center of the earth, and a is the
semi-major axis.
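A minimal numerical check of this relation, using the perigee and apogee radii of the orbit in Fig. 5.2 (2.709 and 13.277 Re), is sketched below; GM and Re are the values used earlier in the chapter.

```python
import math

# Vis-viva speeds at perigee and apogee, plus the equal-areas check
GM = 3.986e14   # m^3/s^2 (from the text)
Re = 6.38e6     # m

r_p = 2.709 * Re
r_a = 13.277 * Re
a = (r_p + r_a) / 2.0                 # semi-major axis: half of (perigee + apogee)

def visviva(r):
    """Instantaneous orbital speed at radius r for semi-major axis a."""
    return math.sqrt(GM * (2.0 / r - 1.0 / a))

v_p, v_a = visviva(r_p), visviva(r_a)
print(f"v_perigee = {v_p/1e3:.3f} km/s, v_apogee = {v_a/1e3:.3f} km/s")
# Kepler's second law: (perpendicular velocity) x (radius) matches at both ends
print(f"{v_p/1e3 * 2.709:.2f}  ~=  {v_a/1e3 * 13.277:.2f}")
```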

5.4.3 Orbital period: τ² ∝ r³


The simple derivation using Newton’s laws shows that the orbital period
depends only on the radius of the orbit for circular orbits. An even more
profound statement by Kepler is that for elliptical orbits, the period depends


only on the semi-major axis. Following the same calculation given earlier to
derive the period for a geosynchronous orbit,
v = R_earth √(g₀/r)  ⇒  ω = v/r = R_earth √(g₀/r³) = 2π/τ,  or

τ = (2π/R_earth) √(r³/g₀)  ⇒  τ² = [4π²/(g₀R_earth²)] r³ = [4π²/(G M_earth)] r³.  (5.8)

This result is quickly obtained here for a circular orbit but is more generally
true. The value of the orbital period can be obtained by replacing the radius of
the circle with the semi-major axis.
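The dependence of period on semi-major axis alone is easy to tabulate. The sketch below evaluates Eq. (5.8) for a few sample orbits; the specific radii are assumptions chosen to match the orbits discussed later in this chapter.

```python
import math

# Orbital period from the semi-major axis, Eq. (5.8)
GM = 3.986e14   # m^3/s^2
Re = 6.38e6     # m

def period_minutes(a_m):
    return 2 * math.pi * math.sqrt(a_m**3 / GM) / 60.0

for label, a in [("LEO (705-km altitude)", Re + 705e3),
                 ("GPS (4.15 Re)", 4.15 * Re),
                 ("GEO (6.62 Re)", 6.62 * Re)]:
    print(f"{label:22s} period = {period_minutes(a):7.1f} min")
# roughly 99 min, 12 h, and 24 h, respectively
```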

5.5 Orbital Elements


There are a handful of key parameters used to define the orbit of a satellite.
These parameters define the energy and shape of the orbit and the orientation
of the orbit ellipse.

5.5.1 Semi-major axis


The size of the orbit is determined by this parameter, as illustrated in Figs. 5.1
and 5.2. The semi-major axis a is half of the longest axis of the ellipse. A
related measure of size is the distance to the focus c (c = 8.0 and 5.29 in the
two ellipses depicted in the figures).

5.5.2 Eccentricity
The eccentricity ε (or e) determines the shape of the orbit: ε = c/a. For a circle,
ε = 0; for a straight line, ε = 1. The latter would be a ballistic missile: straight
up and straight down.

5.5.3 Inclination angle


The inclination angle I is the angle between the orbit plane and the equatorial
plane of the earth. In the idealized case of a spherical earth, a geostationary
satellite at the earth’s equator would have an inclination of 0°. Reality differs
slightly from this for modern geosynchronous satellites.
A typical polar-orbiting satellite will have an inclination of 98°, at
altitudes of 500–1000 km (low-earth orbit). The inclination is just such that
the 90–100-min orbit allows the spacecraft to cross the equator at the same
local time in each orbit, as the earth rotates under the satellite. There is a
precession of the orbit plane to the east of about a degree/day that compensates
for the motion of the earth around the sun.
The remaining parameters determine the relative phase of the orbit.


5.5.4 Right ascension of the ascending node


The ascending node is the point at which the northbound (ascending) satellite
crosses the equator. The right ascension of the ascending node (RAAN), Ω, is
the celestial longitude of this point. (Right ascension is measured with respect
to the fixed sky, not the earth.) As an alternative definition, the RAAN is the
angle between the plane of the satellite orbit and the line connecting the earth
and sun on the first day of spring, or vernal equinox. The right ascension can
also be described as being measured from the first point of Aries. A related
definition—the descending node—is the southbound equator crossing.

5.5.5 Closest point of approach (argument of perigee)


The closest point of approach is the latitude for perigee, measured from the
ascending node in the orbital plane in the direction of satellite motion. The
argument of perigee ω equals zero when perigee occurs over the equator; a
value of 90° puts perigee over the north pole. Because of the non-spherical
earth, in general, the argument of perigee will precess, so that the orbital
ellipse changes its orientation with respect to the earth over time. An
illustration is provided by an old NASA mission: Dynamics Explorer 1 was
launched into an elliptical orbit such that it took about 8 months from perigee
at the pole to perigee at the equator (from ω = 90° to ω = 0°). The rate of precession
depends on the inclination of the orbit:
• Inclination < 63.4°: ω precesses opposite the satellite motion.
• Inclination = 63.4°: ω does not precess (Molniya orbit).
• Inclination > 63.4°: ω precesses in the same direction as the satellite
motion.

5.6 A Few Standard Orbits


There are a set of more-or-less standard orbits used in the satellite industry,
most of which have some use in the remote-sensing community. In altitude,
they range from LEO (altitudes of a few hundred kilometers) to geosynchro-
nous orbit (altitudes of some 35,000 km).

5.6.1 Low-earth orbit


LEO is the domain of a large fraction of remote-sensing satellites of various
kinds, including weather, earth resources, and reconnaissance. These satellites
are typically in a sun-synchronous orbit, meaning they cross the equator at
the same local time to maintain a consistent solar-illumination angle for
observations. LEO satellites range in altitude from a few hundred kilometers to
1000 km. Figures 5.3 and 5.4 show the ground track for Landsat 4, orbiting at
an altitude of 705 km, at 0940 local time.


Figure 5.3 Ground track for four orbits by a LEO satellite, Landsat 4, crossing the equator
during each orbit at 0940 local time. The solar sub-point is just above the coast of South
America, corresponding to the time of the satellite crossing the equator. During the 98-min
orbit, the earth has rotated 24.5°.

Figure 5.4 (a) LEO illustration. The two white lines indicate the orbit and the ground track of
the orbit. (b) The sensor on Landsat 4 sweeps along the orbit, aimed at the sub-satellite point
(in the nadir direction). Over fifteen days, the satellite will have observed the entire earth.

Sun-synchronous orbits are popular for civil radar satellites because they
make a consistent solar array orientation to the sun practical; a dawn–dusk
plane allows power systems to largely dispense with batteries (e.g., Radarsat).
Other, non-polar orbits are also used for LEO; for example, the Operationally
Responsive Space (ORS-1) satellite was launched into a 40° inclination orbit
to allow a focus on mid-latitude regions of interest.2

2. https://directory.eoportal.org/web/eoportal/satellite-missions/o/ors-1.


Most LEO satellites are in circular, polar, sun-synchronous orbits.


Compare the scenario with the fairly elliptical orbit of the Corona spacecraft,
as described in Appendix 2. The elliptical orbit helped keep the satellite in
orbit for at least a few weeks while providing high spatial resolution at the low
altitudes near perigee and larger area coverage at apogee.
The access time to a given target on the ground is typically a few minutes.
From the ground perspective, a satellite passing from
north to south moves from horizon to horizon in 5–10 minutes, which sets
the imaging “window” and also defines the period of time during which data
can be transferred to a ground station. At low–mid latitudes, a polar-orbiting
sun-synchronous orbit implies relatively infrequent access for nadir-view
systems, such as Landsat (16 days). The ability to image off-nadir brings this
down to 2–3 days for systems such as Worldview and Geoeye, typically
imaging up to 45° off nadir.
Polar-orbiting satellites have frequent access at high latitudes, a
characteristic that has led to increasing numbers of telemetry systems
concentrated in the northern regions of Scandinavia and Alaska. Figure 5.5
shows the access region for the SvalSat station. There are now also
commercial telemetry stations in Antarctica to complement the northern
stations. These extreme locations are then connected via fiber optic links to
more central locations.

Figure 5.5 The Svalbard Satellite Station (SvalSat) on Platåberget (a mountain near
Longyearbyen, Norway) is ideally positioned as a ground station for polar-orbiting satellites.
From SvalSat, all 14 daily revolutions of a polar-orbiting satellite can be seen, compared with only
ten from the Tromsø or Kiruna stations. A 300-Mbps downlink could theoretically transfer
180 gigabits (22 GB) in a 10-min pass. (The satellite illustrated here has a 10-min access;
the tracks shown on the right range from 10–13-min access times at an altitude of 617 km.)
Compare this value to the amount of onboard storage on IKONOS and Quickbird, discussed in
Chapter 4 (Table 4.2).


Figure 5.6 MEO orbit, illustrated for two GPS orbit planes.

5.6.2 Medium-earth orbit


Medium-earth orbit (MEO) is the domain of global positioning satellites
(GPS). Though not directly used for remote sensing, they are increasingly
important for mapping and thus influence the interpretation of remotely
sensed imagery. These satellites are in 4.15-Re circular (26378-km geocentric)
orbits with 12-h periods (see Figs. 5.6 and 5.7).

Figure 5.7 Orbit ground tracks for three GPS satellites.


Figure 5.8 The TDRSS orbit and the field of view from the satellite.

5.6.3 Geosynchronous orbit


Geosynchronous orbit (GEO) is standard for most commercial and military
communications satellites, the NASA telemetry system (TDRSS), and weather
satellites (GOES). Figures 5.8 and 5.9 illustrate a TDRS orbit and the field of
view for a typical geosynchronous satellite. The poles are not in view.
There is some sloppiness in the usage of the term “geosynchronous,” and
it is frequently interchanged with geostationary. The former means an orbit
with a 24-h period, whereas the latter means that the satellite position with
respect to the ground is unchanging. A truly geostationary orbit is difficult to
obtain, and deviations from 0° inclination of a few degrees are typical. This
behavior leads to a modest amount of north–south motion during the day. See
Appendix Fig. A3.5 for an illustration.

Figure 5.9 TDRS views the earth. The GOES satellite views in Chapter 1 (Figs. 1.9 and 1.10) are similar
to that shown here. TDRS-7 was launched in 1995, with an apogee of 35,809 km, a perigee of
35,766 km, a period of 1436.1 min, and an inclination of 3.0°.


Late in the life of TDRS-1, the satellite had depleted its north–south
station-keeping capability, and the inclination had increased to the point
where it could view the Antarctic for part of the day. This situation allowed
for support from an NSF ground station at McMurdo. The image of the earth
taken from Apollo 17, as shown at the beginning of the book, was taken from
near-geosynchronous orbit. Compare the field of view to that seen in the
GOES illustrations in Chapter 1.

5.6.4 Molniya (HEO)


The Molniya orbit, or high-earth orbit (HEO), is useful for satellites that need
to dwell at high latitudes for an extended period. The careful match of
inclination and eccentricity allows a balance of forces that keeps the line of
apsides from precessing; that is, the latitude at which apogee occurs does not
vary. This is the standard orbit for Russian communications satellites.
Table 5.1 gives the orbital parameters for a typical Molniya orbit.
The 12-h orbit “dwells” for 8 or 9 hours at apogee and then sweeps
through perigee in the southern hemisphere, ultimately coming back up over
the northern pole over the opposite side of the earth (Fig. 5.10). The satellite
can view most of the northern hemisphere on both legs of the orbit (Fig. 5.11).

Table 5.1 Orbital parameters for a typical Molniya orbit.

Semi-major axis               26,553.375 km    Apogee radius                        46,228.612 km
Eccentricity                  0.74097          Perigee radius                       6878.137 km
Inclination                   63.40°           Perigee altitude                     500.000 km
Argument of perigee           270.00°          RAAN                                 335.58°
Longitude of ascending node   230.043°         Mean motion (revolutions per day)    2.00642615
Period                        43,061.64 s      Epoch                                July 1, 1999, 00:00:00
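Two of the table entries can be checked against the relations developed earlier in the chapter, as in the sketch below; GM is the value quoted in Section 5.1, and the semi-major axis and eccentricity are taken from Table 5.1.

```python
import math

# Check the Molniya period and apsis radii from a and e
GM = 3.986e14          # m^3/s^2 (from the text)
a = 26553.375e3        # m, semi-major axis (Table 5.1)
e = 0.74097            # eccentricity (Table 5.1)

period = 2 * math.pi * math.sqrt(a**3 / GM)   # Kepler's third law
r_perigee = a * (1 - e)
r_apogee = a * (1 + e)

print(f"period  = {period:8.0f} s   (table: 43,061.64 s)")
print(f"perigee = {r_perigee/1e3:9.1f} km (table: 6878.137 km)")
print(f"apogee  = {r_apogee/1e3:9.1f} km (table: 46,228.612 km)")
```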

Figure 5.10 Molniya orbit ground track. The sub-solar point is centered on India.


Figure 5.11 (a) The view from Molniya orbit, corresponding to the location at apogee
illustrated above (06:54 UT). (b) Some twelve hours later, the view from the apogee over the
U.S. The sub-solar point is in the Caribbean (18:05 UT).

The altitude for the HEO orbit at perigee is 500 km—just high enough above
the atmosphere to avoid excessive atmospheric drag.
The Molniya orbit is only one of a variety of “magic” orbits with the
inclination and eccentricity matched to keep the apogee latitude constant. The
Sirius radio satellites use a highly inclined orbit to allow the satellites to dwell
over North America, providing more direct access to users in the urban
canyons of cities in the United States. (By contrast, the XM system used a
geosynchronous orbit.)

5.6.5 Summary of orbital values


Table 5.2 summarizes the specifications of the orbits mentioned thus far. The
orbital periods are given in minutes or hours, where appropriate.

Table 5.2 Illustrative values for satellites in the orbits discussed in this section.3
Orbit                     LEO             MEO                   HEO (Molniya)             GEO
Typical satellite         Landsat 7       GPS 2-27              Russian communications    TDRS-7
Launch date               04/16/1999      09/12/1996            n/a                       07/13/1995
Altitude: apogee          703 km          20,314 km             39,850 km                 35,809 km (5.6 Re)
Altitude: perigee         701 km          20,047 km             500 km                    35,766 km
Radius: apogee            n/a             4.15 Re               7.2 Re (46,228 km)        6.6 Re
Radius: perigee           n/a             n/a                   6878.1 km                 n/a
Semi-major axis           1.1 Re          4.15 Re (26,378 km)   26,553.4 km               6.6 Re
Period                    98.8 minutes    12 h (717.9 min)      12 h (717.7 min)          24 h (1436.1 min)
Inclination               98.21°          54.2°                 63.4°                     2.97°
Eccentricity              0.00010760      0.00505460            0.741                     0.000765
Mean motion (rev/day)     14.5711         2.00572               2.00643                   1.0027

3. These numbers are primarily from the Systems Tool Kit (STK) database; STK is a product
of Analytical Graphics, Inc.


5.7 Bandwidth, Revisited


The concept of bandwidth was related to the data volume implicit in an image
in Section 3.8, and in problems at the end of Chapter 4. The total amount of
data that can be transmitted to the ground depends not only on the frequency of
the downlink system (and power) but also the period of time the ground station
is in view. As illustrated in Fig. 5.5, high-latitude ground stations are popular
because LEO satellites (generally in sun-synchronous polar orbits) pass over
them very regularly. A satellite will typically be in view for 5–10 minutes.
As an example, assume a fairly typical system of the last decade, using an
X-band or K-band downlink, at 10–15 GHz. Assume a 1-GHz bandwidth
(10% modulation, slightly at the high end of the engineering spectrum). How
many bits can be transmitted in 5 minutes?

bits = bandwidth × time = 10⁹ bits/s × 300 s = 3 × 10¹¹ bits, or 37.5 GB.

Figure 5.12 Sirius orbit.


Note that there are 8 bits (b) to the byte (B). A single Worldview-1
panchromatic image typically runs from 1–2 GB without compression.
DigitalGlobe normally applies a compression algorithm that reduces the size
of the image by a factor of 4 or so.
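The same downlink-budget arithmetic can be written out in a few lines. The 1-Gbps rate and 5-min pass repeat the worked example above, and the 1–2-GB image size and roughly 4× compression are the figures quoted in the text; the 1.5-GB mid-range value is an assumption for illustration.

```python
# Downlink budget for one pass, expressed in Worldview-1-class scenes
data_rate = 1e9        # bits/s, from the worked example
pass_time = 300.0      # s, a 5-min pass

bits = data_rate * pass_time
gigabytes = bits / 8 / 1e9
print(f"{bits:.1e} bits per pass = {gigabytes:.1f} GB")        # ~37.5 GB

image_size_gb = 1.5    # GB, assumed mid-range uncompressed scene size
compression = 4.0      # approximate compression factor quoted in the text
images = gigabytes / (image_size_gb / compression)
print(f"~{images:.0f} compressed scenes per pass")
```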

5.8 Problems
1. Calculate the angular velocity with respect to the center of the earth for a
geosynchronous orbit in radians/second.
2. Calculate the period for a circular orbit at an altitude of one earth radius
(r ¼ 2 Re).
3. Calculate the period for a circular orbit at the surface of the earth, at the
equator. What is the velocity? This is a “Herget” orbit and is considered
undesirable for a satellite.
4. Look up the orbits for the eight planets (and Pluto) and plot their period
versus their semi-major axis. Do they obey Kepler’s third law? This is best
done by using a log–log plot. Even better, plot the two-thirds root of the
period versus the semi-major axis (or mean radius). The proper system of
units for this problem is earth-years and astronomical units (AU).
5. Derive the radius of the orbit for a geosynchronous orbit.
6. Can Antarctica be seen from geosynchronous orbit? Geostationary?
7. A satellite is in an elliptical orbit with a perigee of 1.5 earth radii
(geocentric) and an apogee of 3.0 earth radii (geocentric). If the velocity is
3.73 km/s at apogee, what is the velocity at perigee? What is the semi-
major axis? Hint: use the principle of conservation of angular momentum:
L ¼ mv  r ¼ constant.
8. An ongoing desire of the intelligence, surveillance, and reconnaissance
(ISR) community is long-dwell imaging (LDI), or persistent surveillance.
If you could place a satellite at an altitude of 1.0 earth radius (2-Re
geocentric), how long an imaging window would it provide over a given
target? You need the period (or velocity) and a bit of geometry to answer
the question. Take the horizon to be ±45°.
9. The Sirius-1 satellite has an orbit of 53,432 km × 30,895 km (geocentric).
The inclination is 61.2°, and the apogee is over Canada. See Fig. 5.12 for
an illustration. What are the semi-major axis, eccentricity, and period of
the orbit?
10. A rather popular concept that has emerged over the last few years is the
concept of a tactical satellite—one that does not depend on a remote
ground station but instead directly downlinks data to soldiers “in theater.”
Assuming you had such a satellite (e.g., ORS-1) and a relatively restrictive
field unit (small dish antenna), how many 1-GB images could you
downlink in one pass over a ground station with a 100-Mbps (megabits/
second) capability in 100 s?

Chapter 6
Spectral and Polarimetric
Imagery

The discussion of remote sensing up to this point has focused on


panchromatic (black and white) imagery. Beyond recording obvious features
such as size and shape, remote sensing excels in capturing and interpreting
color. Color systems also yield some spectacular imagery. For example, the
early “true-color” image from Landsat 7, shown in Fig. 6.1, depicts the green
hillsides and muddy runoff of the upper San Francisco Bay.

6.1 Reflectance of Materials


The reflectance of most materials varies with wavelength, which allows
spectral imagers, such as those on the Landsat missions, to distinguish
different materials. Distinguishing materials in this way is one of the most common goals of such work.
Figure 6.2 illustrates different aspects of reflective spectra. Spectra are the
fingerprints of elements, deriving from their fundamental atomic character-
istics, as indicated in the previous discussion of Bohr’s model of the hydrogen
atom. One of the more important, and dramatic, spectral features found in
remote sensing is the “red edge” or “IR ledge” at 0.7 µm, as shown in
Fig. 6.2.1 This dramatic rise in reflectance with wavelength makes vegetation
appear bright in the infrared. Military organizations design camouflage to
mimic this behavior. The panchromatic sensors on Landsat, SPOT, IKONOS,
and Quickbird extend well into the infrared; as a result, vegetation is bright in
their imagery.

1. Termed the “red edge” in the Manual of Photographic Interpretation, ASPRS, 1997, the
signature might be more properly referred to as an infrared signature. It marks the boundary
between absorption by chlorophyll in the red visible region and scattering due to the leaf’s
internal structure in the NIR region. http://www.eumetrain.org/data/3/36/navmenu.php?
page=3.2.3.


Figure 6.1 Visible image of San Francisco from Landsat 7, taken April 23, 1999, on flight
day 9, orbit 117, 1830Z. The satellite is not yet in its final orbit and not on the standard
reference grid, WRS, so the scene is offset 31.9 km east of the nominal scene center (path
44, row 34). Landsat has been the premier earth resources satellite system for four decades.
Image reprinted with special thanks to Rebecca Farr, NESDIS/NOAA.

6.2 Human Visual Response2


Before considering the spectral response of orbital systems, first consider the
human visual response. The sensitive elements of the eye are the rods and
cones. Rods (which far outnumber cones) are sensitive to differences in
brightness within the middle of the light spectrum. The rods’ peak sensitivity
corresponds to the peak in solar illumination. If people had only rods, they
would see in shades of grey.
Cones provide color vision, and there are three types of cones (Fig. 6.3):
• L-cones are sensitive primarily to red in the visible spectrum.
• M-cones are sensitive to green.
• S-cones are sensitive to blue.

2. See also the Manual of Photographic Interpretation, page 67 ASPRS, 1997. The ASPRS
Manual cites Dartnall et al. 1983, “Microspectrophotometry of Human Photorecepters,”
pages 69–80 in Color Vision, edited by Mollon and Sharpe.


Figure 6.2 Comparison of some synthetic and natural materials. The olive-green paint
mimics the grass spectrum in the visible to NIR but then deviates.

Figure 6.3 The white curves indicate the sensitivity level for the three types of cones. The
black curve indicates the sensitivity of the rods.3

6.3 Spectral Technologies


Spectral imagers generally use either filter techniques or dispersive elements to
analyze light. Filters are used in multi-spectral systems such as Landsat,
IKONOS, and similar systems (or, for that matter, modern digital electronic
cameras). An example of a three-color linear array is the Kodak sensor shown
in Fig. 6.4. Most pushbroom scanners use a similar geometry (e.g., IKONOS,
Worldview). Current-generation large- and medium-format cameras used for
aerial photography typically acquire multi-spectral imagery with multiple panchromatic cameras, each
with its own filter. For example, the Vexcel Ultracam features eight
independent cameras: four that contribute to the large-format panchromatic

3. J. E. Dowling, The Retina: An Approachable Part of the Brain, (1987).


Figure 6.4 The KODAK KLI-8023 Image Sensor is a multi-spectral, linear solid state image
sensor for color-scanning applications. The 8000-pixel × 3-row detector features a 9-µm
pitch and filters for red, green, and blue. An enlarged view and a microscopic view of one
end are superimposed on the photograph showing the three rows and the individual pixels
that make up the detector.

image, and four that contribute to the multi-spectral image. This approach
requires very highly controlled mounting and calibration of the different
cameras to produce a complete spectral image because each of the four multi-
spectral cameras uses a different color filter.
The dispersive elements are prisms (transmission) and gratings (typically
reflective). Prisms make use of the variation of the index of refraction with
wavelength in glass. This variation in velocity with wavelength is termed
dispersion. Prisms are not widely used in space systems, but they were used in
the airborne HYDICE sensor in the 1990s. Figure 6.5 shows the characteristic
rainbow of colors dispersed from a white light source.
A diffraction grating is traditionally a ruled pattern on a glass or metal
surface, with thousands of narrow lines in parallel grooves. A CD or DVD
surface will show a rainbow spectrum similar to the one shown in Fig. 6.5.
The physics of the grating follows the same principles of interference described
in Chapter 3, leading up to the Rayleigh criterion. Reflective (metal) gratings
are common in spectral imaging systems; the grating is frequently inscribed on
the surface of a reflective mirror, typically curved as part of the optical system.
One final comment on the technologies and terminology: Airborne and
satellite systems that measure spectral data in a few bands are termed
multispectral imagers (MSI), with 4 to 16 bands depending somewhat on the
satellite generation. Higher-spectral-resolution systems typically make


Figure 6.5 Light dispersion through a prism for a mercury source lamp. Image reprinted
courtesy of D-Kuru/Wikimedia Commons.4

measurements in hundreds of (contiguous) bands. These systems are termed


hyperspectral imagers (HSI). The prototype for multispectral imaging systems
is and has been Landsat for over 40 years, with its 6 reflective bands and one
long-wave infrared sensor for much of that time.

6.4 Landsat
In late July 1972, NASA launched the first Earth-Resources Technology
Satellite, ERTS-1. The name of the satellite and those that followed was soon
changed to Landsat. These platforms have been the primary earth-resources
satellites ever since, utilizing MSI with a spatial resolution that has varied
from 30–100 m. After a decade-long hiatus in the operational pace, Landsat-8
(also called the Landsat Data Continuity Mission, or LDCM) was launched
in 2013. Table 6.1 shows some of the parameters for the sequence of missions.
The evolution in data storage technology, bandwidth, and the changes in
downlink technology shown here for Landsat mirror the evolution of the
industry. Resolution has gradually increased with time; Landsat 7 added a

4. http://commons.wikimedia.org/wiki/File:Light_dispersion_of_a_mercury-vapor_lamp_with_a_
flint_glass_prism_IPNr%C2%B00125.jpg.


Table 6.1 Landsat parameters. Note that Landsat 6 failed at launch and Landsat 7 suffered
a mechanical failure in 2003 that has since limited its utility.
Landsat 1 (ERTS-A): July 23, 1972 to January 6, 1978; MSS (80 m), RBV (80 m); 917-km equatorial altitude;
    direct downlink with a tape recorder (15 Mbps)
Landsat 2: January 22, 1975 to February 25, 1982; MSS (80 m), RBV (80 m)
Landsat 3: March 5, 1978 to March 31, 1983; MSS (80 m), RBV (30 m)
Landsat 4: July 16, 1982 to December 14, 1993; MSS (80 m), TM (30 m); 705-km equatorial altitude;
    direct downlink with TDRSS (85 Mbps)
Landsat 5: March 1, 1984 to January 2013; MSS (80 m), TM (30 m)
Landsat 6: October 5, 1993; ETM+ (n/a)
Landsat 7: April 15, 1999 to date; ETM+ (30 m), pan (15 m); direct downlink (150 Mbps)
    with solid state recorders (380 Gb)
Landsat 8 (LDCM): February 11, 2013; OLI (15/30 m), TIRS (100 m); direct downlink (384 Mbps)
    with solid state recorders (3.8-Tb BOL / 3.1-Tb EOL)

15-m GSD panchromatic sensor to the MSI sensors. Landsat 8 added a


number of new spectral bands, and changed the technology paradigm for the
detectors.
Figures 5.3 and 5.4 showed the traditional LEO orbit for Landsat 4.
Figures 6.6 and 6.7 illustrate the 185-km viewing swath for the Enhanced
Thematic Mapper (ETM) sensor, described below. Figure 6.6 also reflects the
different downlink options, as the sensor communicates with the Landsat
Ground Station (LGS).

6.4.1 Landsat orbit


The Landsat missions have been sun-synchronous polar orbiters in classic
LEO circular orbits. The initial missions flew at a 905-km altitude; the later
missions operated at an altitude of 705 km. NASA’s Mission to Planet Earth
has added several satellites in the Landsat 7 orbit—Terra, Aqua, SAC-C, and
EO-1—that trail the older satellite by a few minutes in an orbital sequence
nicknamed the “A-train” for the morning satellites. An inclination of ~98°
makes Landsat sun-synchronous. Equatorial crossings were set at 9:30 am for
Landsat 1, 2, and 3; 10:30 am for Landsat 4 and 5; and 10:00 am for Landsat
7 and 8. The satellite orbit tracks provide 14.5 orbits per day. The repeat cycle
is every sixteen days or 233 orbits. Figure 6.8 shows the orbit track and how
the orbit shifts in longitude over that interval.
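The repeat-cycle bookkeeping can be checked with the orbital relations from Chapter 5, as sketched below; GM and the earth radius are the values used earlier, and the 705-km altitude is the one quoted for the later Landsat missions.

```python
import math

# Landsat repeat-cycle arithmetic: period, orbits per day, orbits per 16 days
GM = 3.986e14          # m^3/s^2
Re = 6.38e6            # m
a = Re + 705e3         # m, circular-orbit radius at 705-km altitude

period_min = 2 * math.pi * math.sqrt(a**3 / GM) / 60.0
orbits_per_day = 24 * 60 / period_min
print(f"period         = {period_min:.1f} min")      # ~98.8 min
print(f"orbits per day = {orbits_per_day:.2f}")       # ~14.6
print(f"orbits in 16 d = {orbits_per_day * 16:.0f}")  # ~233, the repeat cycle
```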


Figure 6.6 The nadir-viewing satellite images a swath 185 km wide.

Figure 6.7 Subsequent orbits are displaced 2500 km to the west. There are 233 unique
orbit tracks.

The orbit track for Landsat 7 is further illustrated in Fig. 6.8, which shows the ground
track for two orbits. The satellite ascends (northbound) on the night side, and it
descends southward on the day side.


Figure 6.8 This orbit ground track corresponds to the San Francisco image in Fig. 6.1.
The yellow spot just below Mexico City is the sub-solar point, April 23, 1999, ~1830Z.

6.4.2 Landsat sensors


The evolution of the Landsat sensors over four decades provides great insight
into the evolution of satellite imaging systems over that time period. Three
distinct classes of systems were flown: framing systems (video cameras),
whiskbroom systems, and finally pushbroom systems.

6.4.2.1 Return Beam Vidicon5


The first three Landsat missions carried a sequence of RCA video cameras,
i.e., the Return Beam Vidicon (RBV). The first mission carried three cameras
to cover band 1 (blue–green), band 2 (yellow–red), and band 3 (near IR). The
sensors nominally had a 40–80-m GSD. It was intended to
be the prime instrument but was quickly superseded by the MSS on the same
mission. The RBV on Landsat 1 failed very early, collecting only 1690 scenes.
A few publications came out using the Landsat 3 RBV sensor, which was
restricted to one (green) spectral band at a 40-m GSD. (Concerted efforts by the author to locate RBV data from that era were not successful.) The
framing system was probably ahead of its time; the lack of success mirrored

5. http://landsat.gsfc.nasa.gov/about/landsat1.html; “The RBV instrument was the source of an electrical transient that caused the satellite to briefly lose attitude control, according to the Landsat 1 Program Manager, Stan Weiland.”; https://directory.eoportal.org/web/eoportal/satellite-missions/l/landsat-1-3. See also G. R. Cochrane and G. H. Browne, “Geomorphic Mapping from Landsat-3 Return Beam Vidicon (RBV) Imagery,” PERS 47(8), pp. 1205–1213 (1981).


Table 6.2 Spectral-band numbers and ranges.

Band | Spectral band (μm)
4 | 0.5–0.6
5 | 0.6–0.7
6 | 0.7–0.8
7 | 0.8–1.1
8 | 10.5–12.4

the early failure of the AF SAMOS system, an attempt at near-real-time imaging that competed with the CORONA film-return systems.

6.4.2.2 Multispectral Scanner6


The Hughes/Santa-Barbara-Research-Center-designed MSS was the real
breakthrough in civil remote sensing, providing four-color MSI data. The
whiskbroom scanner provided data sampled at an 80-m GSD, using
photomultiplier tube sensors (bands 4–6) and silicon photodiodes (band 7).
The signal from the sensor was digitized to 6 bits (dynamic range). These
sensors flew on Landsat 1–5. The nominal GSD sampling was 79 m along
track and 57 m cross-track. The production MSS data were resampled to a
60-m resolution. The band-numbering scheme started at one with the RBV
channels; the first MSS channel is band 4 in the initial numbering scheme.
Beginning with Landsat 4, the numbering scheme was revised to match the
Thematic Mapper. A HgCdTe LWIR sensor was flown on Landsat 3 with a
240-m GSD, but it failed shortly after launch.

6.4.2.3 Thematic Mapper7


The Thematic Mapper (TM) was similar in many ways to the MSS, mostly
reflecting advances in technology. The TM became the primary sensor
beginning with Landsat 4, although the MSS was also carried to maintain
continuity in archival, synoptic datasets. The TM sensor provided seven
bands of spectral information at a 30-m resolution beginning in 1982. For
Landsat 6 and 7, the instrument was revised as the Enhanced Thematic
Mapper plus, or ETM+, which featured improved spatial resolution in the
LWIR channel (60 m) and a new panchromatic band with higher spatial
resolution (15 m). The following subsections address the ETM+ sensor.
6.4.2.3.1 ETM optics
The optical and sensor design of Landsat predates large linear or rectangular
arrays; it features a whiskbroom design.

6. Landsat 1-5 Multispectral Scanner (MSS) Image Assessment System (IAS) Radiometric
Algorithm Description Document; USGS, June 2012.
7. http://landsathandbook.gsfc.nasa.gov/.


Figure 6.9 Landsat ETM+ optical path.

The telescope is a Ritchey–Chrétien Cassegrain, as seen with several earlier systems. The primary mirror (outer) aperture is 40.64 cm; the clear inner aperture is 16.66 cm. The effective focal length is 2.438 m, f/6. The instantaneous field of view (IFOV) for one pixel of the 30-m reflective bands is 42.5 μrad (21.3 μrad for the higher-resolution panchromatic band; see Table 6.3).
The relay optics consist of a graphite-epoxy structure containing a folding
mirror and a spherical mirror that are used to relay the imaged scene from the
prime focal plane to the band 5, 6, and 7 detectors on the cold focal plane.
There is a mechanical scanning mirror at the beginning of the optical path
that oscillates at 7 Hz (Fig. 6.9). The scan-correction mirrors compensate for
the satellite’s forward motion as the sensor accumulates 6000 pixels in its
whiskbroom cross-track sampling. The scan-line corrector failed on Landsat 7 in 2003, causing the satellite to collect spatially distorted data for the remainder of the mission.
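
As a quick consistency check on these optical parameters, the short Python sketch below (not from the original text) multiplies the 42.5-μrad IFOV by the 705-km altitude and by the 2.438-m focal length; the results reproduce the ~30-m GSD, the ~104-μm detector pitch listed in Table 6.3, and the quoted f/6.

focal_length_m = 2.438     # ETM+ effective focal length
aperture_m = 0.4064        # primary (outer) aperture
ifov_rad = 42.5e-6         # IFOV of the 30-m reflective bands
altitude_m = 705e3         # Landsat 7 altitude

gsd_m = ifov_rad * altitude_m                 # ground sample distance
pitch_um = ifov_rad * focal_length_m * 1e6    # detector pitch at the focal plane
f_number = focal_length_m / aperture_m

print(f"GSD            = {gsd_m:.1f} m")      # ~30 m
print(f"Detector pitch = {pitch_um:.1f} um")  # ~103.6 um (Table 6.3)
print(f"f-number       = f/{f_number:.1f}")   # ~f/6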

6.4.2.3.2 ETM focal planes


The ETM+ scanner contains two focal planes (Fig. 6.10) that collect, filter, and detect the scene radiation in a swath (185 km wide) as it passes over the earth. The primary (warm) focal plane consists of optical filters, detectors, and pre-amplifiers for five of the eight ETM+ spectral bands (bands 1–4 and 8).
The second focal plane is the cold focal plane (90 K), which includes optical
filters, infrared detectors, and input stages for ETM+ spectral bands 5–7. This
approach of dividing the sensor into two focal planes is common with sensors
that must cover an extended wavelength range. It is difficult (and expensive)
to build focal planes that extend from the visible into the short-wave infrared
(SWIR).


Figure 6.10 Diagram of the Landsat 7 focal plane design.

Table 6.3 Prime-focal-plane assembly design parameters.


Parameter | Bands 1–4 | Pan Band
Number of detectors | 16 per band | 32
Detector size | 103.6 μm × 103.6 μm | 51.8 μm × 44.96 μm
Detector area | 1.074 × 10⁻⁴ cm² | 2.5 × 10⁻⁵ cm²
IFOV size | 42.5 μrad | 21.3 μrad × 18.4 μrad
Center-to-center spacing along track | 103.6 μm | 51.8 μm
Center-to-center spacing between rows | 259.0 μm | 207.3 μm

6.4.2.3.3 ETM prime focal plane8


The prime focal plane array is a monolithic silicon focal plane made of five
detector arrays: band 1 through band 4 and the pan band (8). The arrays for
bands 1–4 contain 16 detectors divided into odd–even rows. The array for the
pan band contains 32 detectors, also in odd–even rows. The system focus is
optimized for the panchromatic band, which has the highest spatial
resolution. Table 6.3 lists each band’s parameters. These detector sizes, or
pitch, are large by current standards for silicon detectors. Compare, for
example, the values for the IKONOS sensors: 12–48 μm.

6.4.2.3.4 ETM cold focal plane


There are 16 cooled indium antimonide (InSb) detectors for bands 5 and 7.
Finally, for the LWIR, there are eight cooled mercury cadmium telluride
(HgCdTe) photoconductive detectors for band 6. The detectors for bands 1–4,

8. With thanks to Dr. Carl Schueler, Director Advanced Concepts, Raytheon, May 1999 and
http://landsathandbook.gsfc.nasa.gov/.


Table 6.4 Cold-focal-plane design parameters.


Parameter | Bands 5 and 7 | Band 6
Number of detectors | 16 per band | 8
Detector size | 48.3 μm × 51.82 μm | 104 μm × 104 μm
IFOV size | 42.5 μrad | 42.5 μrad × 85.0 μrad
Center-to-center spacing along track | 51.8 μm | 104 μm
Center-to-center spacing between rows | 130 μm | 305 μm

5, and 7 each have a 30-m resolution; the LWIR detector has a 60-m
resolution (an improvement over the 120-m resolution of the TM). The
detectors are arranged to have coincident 480-m coverage down track. The
focal plane is cooled to 85 K via a (passive) radiative cooler. Table 6.4 lists
each band’s parameters.

6.4.2.3.5 ETM spectral response


The spectral response of the Landsat sensors is important for understanding
data from this and subsequent systems because most subsequent sensors
until 2010 adopted similar spectral sampling approaches. Figure 6.11 shows
the spectral bands for Landsat 7. The plot color coding reflects the band
colors. There are four reasonably contiguous bands in the visible and near-
infrared (1–4), and then two SWIR bands plotted. The band-6 response is
plotted separately. IKONOS, Quickbird, and others all have very similar
spectral bands to the Landsat bands 1–4. Notice the drop in the response of
the silicon focal plane at 0.9 μm as the silicon bandgap is approached—a
common occurrence in silicon detectors. The peculiar numbering scheme,

Figure 6.11 The Landsat 7 spectral-band base response functions are plotted here as a
function of wavelength. These values are from the ground calibration numbers provided by
NASA/GSFC. The higher-resolution panchromatic band covers the same region as bands
2–4 but does not extend into the blue in order to avoid atmospheric scattering at shorter
wavelengths.


Table 6.5 Spatial resolution and swath of Landsat 7. Note that band 6 has a 60-m
resolution; earlier missions featured a 120-m resolution in the LWIR.
Band | Name | Wavelength (nm) | Detector | Resolution (m)
1 | Blue | 450–520 | Si | 30
2 | Green | 520–600 | Si | 30
3 | Red | 630–690 | Si | 30
4 | NIR | 760–900 | Si | 30
5 | SWIR 1 | 1550–1750 | InSb | 30
6 | LWIR | 10.40–12.5 μm | HgCdTe | 60
7 | SWIR 2 | 2090–2350 | InSb | 30
8 | Pan | 520–900 | Si | 15

with band 6 seemingly out of order, is due to the temporal evolution of the
TM design.9 Table 6.5 lists the specific parameters of each band.
6.4.2.3.6 ETM dynamic range
The dynamic range for the Landsat sensors is typical of the satellites flown in
the first decades of remote sensing. The TM and ETM sensors have an 8-bit
dynamic range—meaning that the digital number varies from 0 to 255, thus
defining the amount of data to be broadcast to the ground. By contrast, the
6-bit MSS data allow for grey levels from 0–63. As indicated in Section 4.2,
modern commercial systems offer an 11- or 12-bit dynamic range, allowing a
range of 0–2047 or 0–4095, respectively.

6.4.3 Landsat data links


The telemetry system for Landsat serves as a model for data-system
requirements. Landsat 7 is primarily a “store and dump” system. Data are
stored on a solid state recorder (SSR) and transmitted primarily to the
Landsat ground station (LGS) at Sioux Falls, South Dakota. The downlink is
an X-band link at a combined aggregate rate of 150 megabits per second
(Mbps).10 The data recorded on the SSR can be played back using one or two
150-Mbps bit streams and transmitted to the LGS via the X-band link.
A rough calculation shows how the data rate evolves from such a sensor.
A scan line is 6000 pixels across, based on the 185-km swath, with 30-m pixels.
This sensor takes 6928 cross-track samples in a sweep, since the pixels overlap
some. Considering only the seven spectral channels, there are nominally 56 bits
per pixel. One can estimate the time required to acquire one line by dividing
the pixel size by the satellite velocity. The data rate can then be obtained by
dividing the number of bits per line by the time required to collect those pixels.

9. Professor David Landgrebe, private communication, 2002. The original design was to include
five reflective bands. When NASA allowed the additional reflective band at 2.1 mm, it became
band 7. The most recently added band, band 8, is the high-spatial-resolution panchromatic
channel. The numbering scheme finally changed in a significant way with Landsat 8.
10. It is important to discriminate between upper and lower case for the “b” in Mbps or Mb/s.


For a 185-km swath, (185 km)/(30 m/pixel) = 6000 pixels per scan line, or 3.36 × 10⁵ bits per scan line (56 bits/pixel × 6000 pixels). What is the time interval corresponding to one scan line?

t = (30 m) / (7.5 × 10³ m/s) = 0.004 s, or 4.0 ms.

The implicit data rate is

(3.36 × 10⁵ bits) / (4.0 × 10⁻³ s) = 84 × 10⁶ bits/s ≈ 80 Mb/s (megabits per second).
The “correct” answer was 85 Mb/s for Landsat 4 and 5, which used an
8.2-GHz (X-band) downlink through TDRSS.
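
The same estimate is easy to script. The Python sketch below is only an illustration of the arithmetic above; the 7.5-km/s ground velocity and the rounding of the swath to 6000 pixels per line follow the worked example, and no real telemetry format is implied.

swath_m = 185e3             # ETM+ swath width
pixel_m = 30.0              # reflective-band GSD
bands = 7                   # spectral channels considered
bits_per_sample = 8         # ETM+ dynamic range
ground_speed_mps = 7.5e3    # approximate ground-track velocity used in the text

pixels_per_line = round(swath_m / pixel_m, -3)    # ~6167, rounded to 6000 as in the text
bits_per_line = pixels_per_line * bands * bits_per_sample
line_time_s = pixel_m / ground_speed_mps          # time to advance one 30-m line
data_rate_mbps = bits_per_line / line_time_s / 1e6

print(f"{pixels_per_line:.0f} pixels/line, {bits_per_line:.0f} bits/line")
print(f"line time = {line_time_s * 1e3:.1f} ms, data rate ~ {data_rate_mbps:.0f} Mb/s")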

6.4.4 Landsat 8 detectors: Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS)

The replacement for the Thematic Mapper makes use of 7000-element linear sensor arrays to build a pushbroom scanner design, finally ending the use of the mechanically sweeping mirrors first used 40 years earlier. The new sensor added several spectral bands and increased the dynamic range to a full 12 bits (digital number DN = 0–4095), which allows for much finer resolution of
subtle intensity variations than was possible with the 8-bit systems. The
spectral bandwidths were decreased; in general, the new channels are
narrower than their ETM+ counterparts. In a rather subtle change, the
SWIR sensors switched from InSb to HgCdTe.
The LWIR sensor (TIRS) was divided into a separate package, with its
own telescope, using a relatively new detector approach: gallium-arsenide
QWIP detectors. The 1850-pixel FPA consists of three 640 × 512 detector arrays with a pixel size of 25 μm. The resulting IFOV of 142 μrad provides roughly a 100-m sampling. A mechanical, two-stage cryocooler cools the
sensor to 43 K. The original Landsat band-6 channel was split into two
channels (partly an artifact of the QWIP detectors), allowing for improved
temperature measurements. The new sensors have a temperature resolution
(NEDT) of 0.4 K and are again set to a 12-bit dynamic range.
The values for the bands are given in Table 6.6 and illustrated in Fig. 6.12.
The TIRS spectral response changes in a rather dramatic way (in comparison
to Fig. 6.11). A subtle message is also embedded about the poor atmospheric
transmission in the visible part of the spectrum, a factor that is present in the
corresponding figure in Chapter 3 (Fig. 3.13) but not heavily discussed. The
energy measured in the blue and green at high altitudes primarily comes from
scattered sunlight. This term must be taken into account during atmospheric
compensation, as required to calculate surface reflectance. Atmospheric
scattering makes earth observations fuzzier, or less sharp.


Table 6.6 Spectral ranges and pixel sizes of OLI/TIRS.11


Band | Name | Wavelength (μm) | Sensor Material | GSD (m)
1 | Coastal blue (aerosol) | 0.43–0.45 | Si | 30
2 | Blue | 0.45–0.51 | Si | 30
3 | Green | 0.53–0.59 | Si | 30
4 | Red | 0.64–0.67 | Si | 30
5 | NIR | 0.85–0.88 | Si | 30
6 | SWIR1 | 1.57–1.65 | HgCdTe | 30
7 | SWIR2 | 2.11–2.29 | HgCdTe | 30
8 | Panchromatic | 0.50–0.68 | Si | 15
9 | Cirrus | 1.36–1.38 | HgCdTe | 30
10 | Thermal 1 | 10.6–11.2 | GaAs QWIP | 100
11 | Thermal 2 | 11.5–12.5 | GaAs QWIP | 100

Figure 6.12 Landsat 8 spectral bands.12 Atmospheric transmission values for this graphic
were calculated using MODTRAN for a USA 1976 Standard atmosphere, summertime, with
scattering. Band 1 (aerosol) is the narrow, dark-blue band on the left (unlabeled); the
panchromatic band (8) is indicated in grey between 0.5 and 0.7 μm. The panchromatic band
does not extend into the NIR, a major change from Landsat 7. The narrow cirrus band (9) is
in an absorption band for water vapor and is intended to enable atmospheric compensation.

6.5 Spectral Responses for Commercial Systems


A major evolution in spectral imaging occurred with the advent of the
commercial remote sensing systems. IKONOS, Quickbird, and Orbview-3
provide spectral imagery at a resolution of 4 m or better and have similar

11. http://landsat.gsfc.nasa.gov/?p=5779; http://landsat.gsfc.nasa.gov/?p=5698.


12. The idea for this image follows from a graphic created by Laura Rocchio & Julia Barsi,
NASA/GSFC. https://landsat.usgs.gov/documents/Landsat8DataUsersHandbook.pdf.


Figure 6.13 The panchromatic sensor has almost no blue response in order to avoid image
degradation caused by atmospheric scattering. The panchromatic sensor extends well into the
NIR. In nanometers, blue = 450–520, green = 520–600, red = 630–690, and NIR = 760–790.

spectral responses, all modeled after the first four Landsat spectral bands. The
calibration values for IKONOS are given in Fig. 6.13. The panchromatic-
sensor spectral response extends well into the near-infrared, and the response
in blue is relatively poor. This is by design, in some sense, to reduce the effect
of aerosols (scattering) in the high-spatial-resolution channel.
The response functions for most of the other commercial sensors are
all very similar. The subtle differences become important when spectral
quantities are estimated, such as vegetation health or area coverage. More
significant changes started to occur, however, when DigitalGlobe launched
the Worldview-2 sensor with 8 spectral (reflective) bands (October 8, 2009).
The sensor preceded the LDCM into orbit but has some similarities, including
the short-wavelength “coastal blue” band. The new Worldview design also
includes a yellow band, which is helpful in the study of shallow coastal water
for bathymetry. The Worldview-3 mission (August 13, 2014) carries the same
8-band VNIR sensor and a new 8-band SWIR focal plane that promises to
dramatically advance the art of spectral imaging from space.
The markets and applications for these sensors are still being created. Their
improved spatial resolution (a factor of ten greater than Landsat) is ideal for
studies of fields and forests by those who wish to observe and characterize
vegetation; presently, the largest market is in agriculture. Figure 6.14 shows the
spectral response functions, including the new short-wave bands as designed by
Fred Kruse and Sandra Perry.13 The design used hyperspectral data from
AVIRIS (described below) and follows in some sense the NASA/Terra/ASTER
sensor, offering great promise for geologic applications.

13. Kruse and Perry, “Mineral Mapping Using Simulated Worldview-3 Short-Wave-Infrared
Imagery,” Remote Sensing 5, 2688-2703 (2013).


Figure 6.14 The Worldview-3 response functions are illustrated here, superimposed on a
blackbody curve for the solar spectrum. The scale is not given here, but the SWIR bands are
in a portion of the curve that has less than 10% of the radiance found in the visible spectrum.
(The visible/near-IR response curves are the same for Worldview-2.) The high-resolution
panchromatic sensor has a resolution of one-third of a meter, and the VNIR bands are four
times that value, at 1.33 m. Several of the bands overlap, NIR-1 and NIR-2 in particular. It is
more difficult to see, but SWIR bands 5 and 6 also overlap. The panchromatic sensor still
extends into the blue.14

Figures 6.15 and 6.16 show the characteristics of the spectral data from
Worldview-3. The differences from frame to frame are subtle, but one fairly
obvious transition is seen in the second row of Fig. 6.15—the golf course on the
right side of the scene changes from dark to bright at the transition from the
visible to the near-infrared. Figure 6.16 shows plots for several characteristic
scene elements. The data have been converted to a rough reflectance using a
common assumption that the scene must contain pixels that range from 0 to
100% reflectance. (The technique is termed internal average relative
reflectance.) The spectra in Fig. 6.16 show the wide range in reflectance in the “grass” class, taken from an area just outside the field of view of the image chips shown in Fig. 6.15. There is a dramatic rise from 15–20% reflectance in the visible to 90% in the NIR.
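
In its standard form, the internal-average-relative-reflectance (IARR) normalization mentioned above divides every pixel spectrum by the scene-average spectrum, yielding a relative rather than absolute reflectance. The example below is a minimal Python/NumPy sketch of that standard form; the radiance cube is synthetic and stands in for real Worldview-3 data, so the array shape and values are invented for the illustration.

import numpy as np

def iarr(radiance_cube):
    """Internal average relative reflectance: divide each pixel spectrum by the
    scene-average spectrum (expects a rows x cols x bands array)."""
    mean_spectrum = radiance_cube.reshape(-1, radiance_cube.shape[-1]).mean(axis=0)
    return radiance_cube / mean_spectrum

# Toy example: a 100 x 100 scene with 8 bands of synthetic radiance values
rng = np.random.default_rng(0)
cube = rng.uniform(10, 200, size=(100, 100, 8))
relative_reflectance = iarr(cube)
print(relative_reflectance.shape, relative_reflectance.mean())  # mean is ~1.0 by construction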

6.6 Analysis of Spectral Data: Band Ratios and NDVI15


There are a variety of approaches to the analysis of spectral data, as partly
developed in the next chapter. As an initial illustration, the use of band ratios
is shown here for a well-known, heavily used vegetation index. The idea is to
combine the information from two of the spectral bands to obtain a measure

14. With thanks to Giovanni Marchisio at DigitalGlobe for the wavelength response functions.
15. http://www.seos-project.eu/modules/agriculture/agriculture-c01-s03.html. There is at least
one portable commercial product designed to measure the NDVI at the ground level for
individual plants: the Trimble Greenseeker crop sensing system. It uses light emitting diodes
(LEDs) at 680 and 780 nm. The handheld unit is being marketed as of 2016.


Figure 6.15 The Worldview-3 sensor provides 16 unique images for each scene. Here, the
visible/near-IR imagery is collected at a 1.2-m GSD and the SWIR data at a 7.5-m GSD.
The “small multiples” technique (Edward Tufte) provides some insight into the variations, but
the differences here are subtle. The SWIR bands provide more differentiation between the
various “impervious” surfaces, i.e., concrete, asphalt, and similar.

of the health and density of vegetation. The Normalized Difference Vegetation Index (NDVI) takes the difference of the intensities in the near-infrared and red bands, normalized by the sum of the two. This normalization generally reduces the impact of illumination variations from the index, also making it largely invariant to terrain. Formally,

NDVI = (DN_near-infrared − DN_red) / (DN_near-infrared + DN_red),     (6.1)


Figure 6.16 Spectra from several characteristic regions in the scene shown in Fig. 6.15.
The radiance data have been converted to reflectance using a commonly used assumption
that the scene reflectances vary from 0–100%, i.e., there are perfectly dark and bright
targets. The ocean surface has near-zero reflectance in the SWIR. Vegetation shows a
characteristic peak in the “green” at 550 nm.

where the digital number (DN) comes from bands 4 and 3 for Landsat TM
and ETM data. Similar systems such as Quickbird will have a similar pair,
typically 4 and 3 for the NIR and red bands, respectively. For Worldview-3,
as illustrated in Fig. 6.15, the ratio would be determined from band 7 (832 nm)
and band 5 (660 nm).
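
In array form, Eq. (6.1) is essentially a one-liner. The sketch below (Python/NumPy, not tied to any particular file format or library) applies it to the Landsat band-3 and band-4 means that appear in Table 6.7 in the next section, just to show the sign convention: vegetation comes out positive and the sand/concrete class negative.

import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, Eq. (6.1)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

nir = np.array([123.5, 91.5])   # band-4 (NIR) means: vegetation, sand/concrete (Table 6.7)
red = np.array([72.3, 185.0])   # band-3 (red) means: vegetation, sand/concrete (Table 6.7)
print(ndvi(nir, red))           # ~ +0.26 and ~ -0.34
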
Figure 6.17 illustrates the NDVI for the San Diego scene shown
previously in Chapter 1 (Figs. 1.11–1.13.) The NDVI plot is scaled from
−0.4 to +0.2; healthy vegetation will have an NDVI > 0. Those healthy
vegetation regions mostly correspond to the golf courses and city parks in this
scene; they appear bright red in the false-color infrared figure on the right.
There are a number of variations on the NDVI designed to produce a quantity
that is more directly proportional to physical parameters such as biomass, but
the standard index has a great advantage in simplicity and relatively
widespread acceptance.

6.7 Analysis of Spectral Data: Color Space and Spectral Angles


The previous section reflects one of the standard approaches to the analysis of
spectral data. A more-detailed look involves more consideration of where, in
color space, the different scene elements lie.


Figure 6.17 Spectral data from Landsat 7, taken 2001-06-14. The NDVI is shown on the
left, the false-color infrared image on the right (NIR, red, and green bands appear as RGB).
The inset in the top right of each figure is the golf course adjacent to the Hotel del Coronado.

Data elements from two small areas in Fig. 6.17 are displayed in a scatter plot—a 2D representation of the data (Fig. 6.18). Here, the data points are plotted as a function of the DN for bands 3 and 4. These are the same bands used for the NDVI calculation earlier. The means for the two regions are indicated, and vectors are drawn from the origin to those points. These vectors define angles in this color space, as indicated with the labels θ1 and θ2. The difference θ12 is a measure of the difference in the spectra.
The angle between any two vectors is simply obtained by using the dot product, or inner product. The “spectral angle” between the two vectors shown in Fig. 6.18 is just the normalized dot product, i.e., the scalar product,

cos θ = (A · B) / (|A| |B|).     (6.2)

Table 6.7 gives the values of the mean spectra for two regions, along with the
magnitudes of the two vectors. The values for bands 3 and 4 are those shown
in Fig. 6.18. The dot product of the two vectors is 84,446.1.
Calculating the spectral angle gives cos θ = 0.93, or θ = 21.8°. This is a
fairly arbitrary value without the context of the other data in the scene, but it
indicates a significant difference in spectral angles and a clear means of
distinguishing the two components of the scene.
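
The numbers quoted here are easy to reproduce. The sketch below (Python/NumPy, not from the original text) evaluates Eq. (6.2) for the six-band means in Table 6.7 and returns the same cos θ ≈ 0.93 and θ ≈ 21.8°; the dot product differs from the 84,446.1 quoted above only through rounding of the means.

import numpy as np

def spectral_angle_deg(a, b):
    """Spectral angle between two vectors, Eq. (6.2), in degrees."""
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(cos_theta))

vegetation    = np.array([89.4, 80.2, 72.3, 123.5, 127.4, 69.7])     # Table 6.7
sand_concrete = np.array([167.3, 158.8, 185.0, 91.5, 167.3, 154.7])  # Table 6.7

print(f"dot product    = {np.dot(vegetation, sand_concrete):.1f}")              # ~84,465
print(f"spectral angle = {spectral_angle_deg(vegetation, sand_concrete):.1f} deg")  # ~21.8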


Table 6.7 Mean spectra values for two regions.


Region | Band 1 | Band 2 | Band 3 | Band 4 | Band 5 | Band 7 | Magnitude
Vegetation | 89.4 | 80.2 | 72.3 | 123.5 | 127.4 | 69.7 | 236.6
Sand/Concrete | 167.3 | 158.8 | 185.0 | 91.5 | 167.3 | 154.7 | 384.3

Figure 6.18 Spectral data from Landsat 7, taken 2001-06-14. Data from two small regions
are plotted as a function of DN for bands 3 and 4. The labels for each of the two classes
correspond to the means, also given in Table 6.7, e.g., for vegetation, the mean of band 3 is
a DN of 72, and the mean for band 4 is 123. The calculation of the dot product for these
vectors shown in the text is done for the full set of six reflective bands.

6.8 Imaging Spectroscopy


Imaging spectroscopy is the acquisition of images wherein a spectrum of the
energy arriving at the sensor is measured for each of an image’s spatial-
resolution elements. These spectra are used to derive information based on the
signature of the interaction of matter and energy expressed in the spectrum.
Spectroscopy has been used in the laboratory and observatory for more than a
hundred years. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor is the prototype imaging spectrometer for spectral remote sensing and, following an aggressive update program over 25 years, has
remained the premier sensor for imaging spectrometry, or hyperspectral
imaging (HSI).


Figure 6.19 AVIRIS line diagram. AVIRIS uses silicon (Si) detectors for the visible range
and indium-antimonide (InSb) for the near infrared, cooled by liquid nitrogen. The sensor has
a 30° total field of view (full 614 samples) and one-milliradian instantaneous field of view
(IFOV, one sample), calibrated to within 0.1 mrad. The dynamic range has varied over time;
10-bit data encoding was used through 1994, and 12-bit data have been recorded since
1995.

6.8.1 AVIRIS
AVIRIS is a world-class instrument in the realm of earth remote sensing, a
unique optical sensor that delivers calibrated images of the upwelling spectral
radiance in 224 contiguous spectral channels (also called bands), with
wavelengths from 380 to 2500 nm. The instrument typically flies aboard a NASA ER-2 plane (a U-2 modified for increased performance) at about 20 km above sea level and 730 km/h. In recent years the sensor has also been flown on a Twin Otter at a 2–3-km altitude (6000–17,500 feet) for higher spatial resolution.
The AVIRIS instrument contains 224 detectors, each with a wavelength-
sensitive range (also known as spectral bandwidth) of approximately 10 nm,
allowing it to cover the range between 380 nm and 2500 nm. Plotted on a
graph, the data from each detector yields a spectrum that, compared with the
spectra of known substances, reveals the composition of the area under
surveillance.
AVIRIS uses a scanning mirror to sweep whiskbroom fashion, producing
614 pixels for the 224 detectors on each scan. For the original ER-2 data, an


Figure 6.20 An AVIRIS “hypercube.” The 3D perspective shows a small image chip as a
false-color infrared image (750, 645, and 545 nm) with two spatial dimensions and the
wavelength in the third dimension. The data are in radiance, and the atmospheric absorption
bands are fairly obvious in this format. The intensity of the light in the water decreases
quickly with wavelength. Flight occurred on November 16, 2011, UTC 20:40, with a spatial
resolution of 7.5 m.

individual pixel produced by the instrument covers an approximately 20-m square area on the ground (with some overlap between pixels), yielding a ground swath 11 km wide.16
Figures 6.20 and 6.21 show data from AVIRIS collected over San Diego
in 2011. The spatial resolution is about 7.5 m for this (relatively low altitude)
flight on the ER-2. Figure 6.20 is a “hypercube” display meant to illustrate the
3D structure of imaging spectrometry data. Figure 6.21 shows the full flight
line on the left side as a “true-color” image; compare this image with Fig. 1.15
from Worldview-2 at much higher spatial resolution. Line plots on the right
show radiance (as observed) and reflectance (as calculated).
Sensor radiance must be converted to ground reflectance in order to
compare to spectral libraries, as illustrated in Fig. 6.2 above. The process of
conversion divides out the solar illumination term, and then compensates for
atmospheric absorption for the downwelling sunlight, and upwelling reflected
radiation, with further compensation for scattering by atmospheric aerosols.

16. http://aviris.jpl.nasa.gov/.


Figure 6.21 This image from AVIRIS shows elements of a scene acquired on November
16, 2011. The mission was flown on a NASA ER-2 plane at an altitude of 7500 m (25,000
feet). The image on the left is roughly true color. Four characteristic regions of interest
(collections of pixels) were measured, with spectra shown in radiance (top) and reflectance
(bottom). The small characteristic peak in the green (550 nm) and IR ledge are evident in the
vegetation signature, particularly in the reflectance data. The “white-roof” spectra are from
the rooftops more clearly seen on the right side of Fig. 1.15. “Sand” is from the region of
open beach.

This brief description does little justice to the difficulty of the process. The
illustration here uses the FLAASH algorithm, which, like most such
approaches, is based on MODTRAN (as shown in Chapter 3).17

17. S. M. Adler-Golden et al., “Atmospheric correction for shortwave spectral imagery based
on MODTRAN4,” Proc. SPIE 3753, 61–69 (1999). F. A. Kruse, 2004, “Comparison of
ATREM, ACORN, and FLAASH Atmospheric Corrections using Low-Altitude
AVIRIS Data of Boulder, Colorado,” In proceedings 13th JPL Airborne Geoscience
Workshop, Jet Propulsion Laboratory, 31 March – 2 April 2004, Pasadena, CA, JPL
Publication 05-3.


6.8.2 Hyperion
The first major VNIR/SWIR hyperspectral sensor to fly in space was the
Hyperion sensor on the NASA EO-1 platform. EO-1 is a test bed for earth
resources instruments, launched in conjunction with Landsat 7 and designed
to test follow-on technology for NASA systems. EO-1/SAC-C was launched
November 21, 2000 from Vandenberg Air Force Base (VAFB) in a 705-km
orbit, trailing just after Landsat 7. The Hyperion sensor is the Thompson Ramo Wooldridge (TRW)-built cousin to the payload of NASA’s ill-fated Lewis satellite effort. [Also on EO-1: the Advanced Land Imager (ALI), the predecessor to the Landsat 8 OLI sensor.]
Hyperion offers a 30-m spatial resolution covering a 7.5-km swath. The 0.4–2.5-μm spectral range is analyzed at a 10-nm spectral resolution (220 bands). Figure 6.22 contains a nice illustration of the spectral nature of the sensor and an unusual look at the beginnings of a blackbody curve for the hot lava of Mount Etna, glowing at some 1000–2000 K. The lava curve (brown in the line plot) shows a spectrum that rises above the intensity of reflected sunlight beginning at 1600 nm and appears to peak around 2.4 μm. The peak in the 2000–2400 nm range is due to the blackbody radiation from the lava at a nominal temperature of 1000–2000 K. By contrast, a vegetation

Figure 6.22 Mount Etna. Hyperion offers 12-bit dynamic range.18 Data are in watts/(m² · sr · μm), i.e., power per unit area, per solid angle, and per wavelength (μm).

18. J. Pearlman et al., “Development and Operations of the EO-1 Hyperion Imaging
Spectrometer,” Proc. SPIE 4135, 243 (2000);


signature (the green curve) shows the IR ledge expected of healthy vegetation
at 700 nm. The small arrows along the bottom of the curve (at 1234, 1639, and
2226 nm) indicate the spectral bands used to construct the image on the right,
coded as blue, green, and red, respectively.
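
Wien’s displacement law offers a quick check on the temperature inferred from where the lava spectrum peaks. The short Python sketch below is not part of the original text; it uses the standard Wien constant to show that a blackbody peaking near 2.4 μm corresponds to roughly 1200 K, comfortably inside the 1000–2000 K range quoted above.

WIEN_B_UM_K = 2898.0   # Wien's displacement constant (um*K), standard value

for temperature_k in (1000.0, 1500.0, 2000.0):
    print(f"T = {temperature_k:6.0f} K  ->  blackbody peak at {WIEN_B_UM_K / temperature_k:.2f} um")

# Working backward from the apparent ~2.4-um peak of the Etna lava spectrum:
print(f"peak at 2.4 um  ->  T ~ {WIEN_B_UM_K / 2.4:.0f} K")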

6.8.3 MightySat II: Fourier-Transform Hyperspectral Imager


The first hyperspectral sensor to fly on a satellite was the Fourier-transform
spectrometer flown by AFRL on the USAF MightySat-II.1. The Fourier-
transform technology is very sensitive to vibration and as such is not well suited
to airborne platforms. Satellites, however, are suitable platforms. Mightysat
II.1 (P99-1) was launched from Vandenberg Air Force Base into a 550-km
polar orbit by a Minotaur rocket on 19 July 2000, and deorbited in December
2002. The Fourier-Transform Hyperspectral Imager (FTHSI) can produce 150
narrowband images in the 0.45–1.05-μm band. The image quality was modest,
but it was the first hyperspectral system on orbit. USAF security restrictions
prevented anyone from seeing much of the data.19 Figure 6.23 shows a false-
color image that demonstrates the function of the sensor.
From a space-systems design perspective, the most interesting aspect of the sensor
was the use of commercial, off-the-shelf parts. No radiation-hardened parts

Figure 6.23 RGB of the first scene, taken near Keenesburg, CO. The data are presented
as a false-color IR image—regions that appear red are areas of vegetation.20

19. AW&ST, page 57, August 14, 2000.


20. John Otten, private communication, 2004.


were used, although some additional shielding was placed around critical
components. The sensor operated nominally until the satellite was shut down.21

6.9 Optical Polarization


A domain of remote sensing that is akin to multispectral imaging is
polarimetric imaging. This is a relatively new discipline, with few applications
in the study of terrestrial imaging to date and relatively few systems capable of
making optical polarization measurements. There is a modest amount of work
in the atmospheric sensing community (not addressed here).
As described in Chapter 2, polarization is an intrinsic characteristic of
light. In nature, solar illumination is largely unpolarized, and the visible
effects follow from characteristics of reflection, as developed in the discussion
of the Fresnel relations. For active laser and radar systems, developed in later
chapters, polarization becomes a more-intrinsic element of sensor operations.
Signatures from optical polarization tend to be rather subtle, as illustrated
in Fig. 6.24. The data shown there were taken with a camera that has an
electronically rotating liquid-crystal polarizing filter.22 The data taken in that
mode have few obvious differences, and so they are generally transformed into
a more-natural coordinate system, as defined by the Stokes matrix, or Stokes
vectors. Introduced by G. G. Stokes in 1852, the Stokes vector is a four-
element real vector describing polarized or partially polarized light, based on
the intensity measurements. The symbols S0, S1, S2, and S3 (also called I, Q,
U, and V, respectively) are used for the four Stokes vector elements, defined as
S = (S0, S1, S2, S3)
  = ( |Ex|² + |Ey|²,  |Ex|² − |Ey|²,  2 Re(Ex Ey*),  2 Im(Ex Ey*) )
  ∝ ( I0 + I90,  I0 − I90,  I45 − I135,  IL − IR ),     (6.3)

where S0 is the total intensity of the light, S1 is the difference between the horizontal and vertical polarization, S2 is the difference between linear +45° and −45° polarization, and S3 is the difference between right and left circular polarization.

21. Yarbrough et al., “MightySat II.1 hyperspectral imager: summary of on-orbit performance,” Proc. SPIE 4480, 186 (2002).
22. R. C. Olsen, M. Eyler, A. M. Puetz, and P. Smith, “Initial results using an LCD
polarization imaging camera,” Proc. SPIE 7303 (2009); Philip Smith, The Uses Of A
Polarimetric Camera, M.S. Thesis, Naval Postgraduate School, September 2008.


Figure 6.24 Optical polarimetric images of the Naval Postgraduate School campus,
showing I (S0), Q (S1), U (S2), and degree of linear polarization (DOLP). The panchromatic
camera has a green filter on the lens to limit the wavelength range for the polarization filter.

The latter terms are often normalized by S0 so that they have values between +1 and −1. (Radar jargon intrudes into the optical domain
with the use of I, Q, U, V, where the first two terms are in-phase and
quadrature in the radar domain.) The intensity term, S0 or I, is effectively the
unpolarized light or the overall intensity. The second term is the difference
between the measurements at 0° and 90°, and the third is the difference
between measurements at 45° and 135°. The last term describes circularly
polarized light, which is extremely rare in nature but frequently used in
satellite communications.
Figure 6.24 illustrates the first three elements of the Stokes vector for a
daytime scene. (The same scene was imaged with a color digital camera in
Fig. 2.3.) The top-left panel is just the intensity (the average, in this case, of all
four filter measurements.) The sun is to the right in this scene, and the sunlit
and shadowed sides of the main building can be seen on the right (Hermann
Hall). The top-right panel (Q) shows some of the expected polarization
elements—the sky is polarized due to Rayleigh scattering of the sunlight (blue
sky). The Hermann Hall rooftop elements appear bright because the relatively
smooth brick surfaces cause the reflected light to be polarized (Fresnel
equations). By contrast, the trees are dark in the optical (green wavelength)


image and in the Q term because reflectance from the natural features tends to
be unpolarized. The final term U contains some residual polarization
information, but these orientations primarily show noise. The gradual shift
in the grey level most obvious in the U image reflects the change in orientation
with respect to the sun in the sequence of images assembled here.
One of the problems with optical polarization is the dependence on both illumination (direction) and viewing direction. One approach to analysis examines the total linear polarization. The degree of linear polarization (DOLP) is the root-sum-square of Q and U, normalized by the total intensity; in equation form,

DOLP = √(S1² + S2²) / S0.     (6.4)

The fourth panel in Fig. 6.24 shows the DOLP for the scene. The sky is the
most highly polarized element of the scene; the windows in the lower part of
the building reflect the polarized skylight. Figure 2.3 shows a similar effect in
the sky. The Rayleigh scattered sunlight is fairly strongly polarized at the
30–50% level.
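
Given four co-registered intensity images taken through the polarizer at 0°, 45°, 90°, and 135°, the linear Stokes components and the DOLP of Eqs. (6.3) and (6.4) follow directly. The minimal Python/NumPy sketch below is only an illustration: the tiny arrays stand in for real camera frames, and circular polarization (S3) is not measured with this kind of rotating linear filter.

import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes components and DOLP from four polarizer angles (Eqs. 6.3-6.4)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (average of the two complementary pairs)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dolp

# Toy 2 x 2 "images": most pixels unpolarized, one partially polarized
i0   = np.array([[100.0, 120.0], [60.0, 90.0]])
i45  = np.array([[100.0, 100.0], [60.0, 90.0]])
i90  = np.array([[100.0,  80.0], [60.0, 90.0]])
i135 = np.array([[100.0, 100.0], [60.0, 90.0]])

s0, s1, s2, dolp = linear_stokes(i0, i45, i90, i135)
print(dolp)   # 0 where all four measurements agree, > 0 where i0 and i90 differ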

6.10 Problems

1. When was the first Landsat launched?


2. What wavelength ranges correspond to red, green, and blue for the
Landsat TM?
3. How many spectral channels are used for the Thematic Mapper? Over
what wavelength ranges?

Figure 6.25 Worldview-3 scatter plot for NIR (B7) versus red (B5).


Table 6.8 Worldview-3 data for three regions of interest.


Region | B1 | B2 | B3 | B4 | B5 | B6 | B7 | B8
Wavelength (nm) | 425 | 480 | 545 | 605 | 660 | 725 | 832.5 | 950
Grass | 2.2 | 2.9 | 8.8 | 8.8 | 8.04 | 45.4 | 88.3 | 93.6
Soil | 16.2 | 19.1 | 24.9 | 32.5 | 37.9 | 43.4 | 6.1 | 50.1
Concrete | 57.3 | 61.0 | 68.3 | 73.9 | 75.0 | 75.6 | 66.4 | 64.6

4. How wide is a standard Landsat 8 image? What is the spatial resolution for the reflective bands? What is the spatial resolution for the thermal band? How many pixels wide does this make an image in the reflective band?
5. What is the nominal orbit for Landsat 8 (include altitude, inclination, and
local time for the equator crossing)?
6. How long is the repeat cycle for Landsat 8?
7. What is the dynamic range for visible detectors on Landsat 7 (six, eight, or
ten bits)? (You may need to go to the NASA/GSFC Landsat website.)
How did this change with Landsat 8?
8. What is the nominal spectral resolution for AVIRIS (compared to
Landsat)? At what wavelength does the IR ledge for vegetation occur? Is
this wavelength in the bandwidth of a silicon detector?
9. Data from the Worldview-3 sensor are illustrated and given in table form
below for three small regions shown in Figs. 6.15 and 6.16. The scatter
plot shown in Fig. 6.25 follows the form of the one shown in Fig. 6.18,
showing the near-infrared channel vs the red channel. The means of each
region are indicated in Table 6.8. Calculate the angle between the “Grass”
material and the “Soil” material, and the spectral angle between “Soil”
and “Concrete.” For reference, the magnitude of the grass vector is 137.3.

Chapter 7
Image Analysis
The previous chapters discussed imaging technology and remote-sensing
systems. Once the data reach the ground, the next step is to extract information
from the images. We’ll begin with traditional photo-interpretation techniques,
then move on to the manipulation of digital data.
There are two broad tasks in image processing for remote sensing: enhancing images for presentation and extracting information. Most techniques work at the pixel level, and many make use of scene statistics. If the data inhabit more than one spectral dimension (that is, if they have color), then a broad range of techniques can exploit their spectral character and extract information.

7.1 Interpretation Keys (Elements of Recognition)1


Traditional image analysis makes use of certain key elements of recognition,
ten of which are developed here. The first four—shape, size, shadow, and
height—are related to the geometry of objects in the scene.

7.1.1 Shape
Shape is one of the most useful elements of recognition. One classic shape-
identified structure is the Pentagon (Fig. 7.1). The well-known shape and size
make it easily identifiable.

7.1.2 Size
Relative size is helpful when identifying objects, and mensuration (the
absolute measure of size) is extremely useful for extracting information from
imagery. The illustrations at the beginning of Chapter 1 show how runway
length can be obtained from properly calibrated imagery. The Hen House
radar sites (Fig. 1.4) display characteristic shapes and sizes, and the size
provides information about the capability.

1. Avery and Berlin, pages 52–57; Manual of Photographic Interpretation; Jensen, pages 121–133.



Figure 7.1 Early image of the Pentagon; the oblique view distorts the scene. The large
number of cars in the parking lot indicates a high level of activity, even though it is a
Saturday.

7.1.3 Shadow
Shadows separate targets from the background. They can also be used
to measure height, e.g., the Washington Monument as illustrated in
Fig. 7.2.

7.1.4 Height (depth)


Height can be inferred from shadows in nadir-viewing imagery but can be
derived directly from more-oblique views. Stereo imagery has been used to
distinguish height in aerial photography and the original Corona imagery.
Modern alternatives include LiDAR (as shown in Fig. 1.17) and interfero-
metric synthetic-aperture radar (IFSAR).


Figure 7.2 Image of the Washington Monument, acquired by Gambit (KH-7) on 2/19/1966
(Mission 4025, frame 3). The image is oriented with north as “up.” Based on these details,
estimate the time the image was taken.

7.1.5 Tone or color


Tone and color are the product of the target albedo and illumination. These
characteristics of the data depend only on the level of an individual pixel,
whereas the other elements depend on higher-level abstractions. The
Landsat and similar images in Chapter 1 illustrate the elements of tone
and color. The vegetated regions on the south end of Coronado Island
(Figs. 1.12 and 1.15) are distinguished from regions of similar brightness by
color. On a larger scale, the distinction between bare soil and vegetation in
San Diego County can also be made by color (Figs. 1.11 and 1.14). Recall
from the previous chapter how NDVI is used to provide a quantitative
measure of color (Fig. 6.17).


Figure 7.3 U-2 image of a SAM site in Cuba, acquired November 10, 1962. These images
were taken from very low altitudes (less than 500 feet), which was dangerous work. Major
Rudolf Anderson was shot down on such a mission by an SA-2 on October 27, 1962.2

7.1.6 Texture
Texture is concerned with the spatial arrangement of tonal boundaries, i.e., the arrangement of objects that are too small to be discerned individually. Texture depends on the image scale, but it can be used to
distinguish objects that may not otherwise be resolved. The relative coarseness
or smoothness of a surface becomes a particularly important visual clue with
radar data (Figs. 1.19 and 1.20). Agricultural and forestry applications are
appropriate for this tool—individual trees may be poorly resolved, but
clusters of trees will have characteristic textures.

7.1.7 Pattern
Related to shape and texture is pattern, the overall spatial form of related
features. Figure 7.3 shows a Russian SAM site with characteristic patterns
that help detect missile sites, such as the Russian propensity for erecting three
concentric fences around important installations. In imagery from systems
like Landsat (30-m resolution), irrigated fields form characteristic circular

2. National Museum of the Air Force, http://www.nationalmuseum.af.mil/Upcoming/Photos.aspx?igphoto=2000573166.


Figure 7.4 Landsat TM image (bands 4, 3, and 2) taken near Boulder, Colorado. The Bighorn
Basin is located about 100 miles east of Yellowstone National Park in northern Wyoming.
The circle is characteristic of irrigated crops. Bright red indicates the area is highly reflective in
the near-infrared (TM band 4), which indicates vegetation. Compare this image to Fig. 6.15.

patterns in the American southwest (Fig. 7.4). Irrigated fields and patterns are
also evident in the DMC data shown in Fig. 1.14. Geological structures, too,
reveal themselves in characteristic patterns, a concept applied to the search for
water on Mars and for characteristic textures3 and patterns4 associated with
mineral hydration and water flow.

3. “Petrogenetic Interpretations of Rock Textures at the Pathfinder Landing Site,” T. J. Parker, H. J. Moore, J. A. Crisp, and M. P. Golombek, presented at the 29th Lunar and Planetary Science Conference, March 16–20, 1998, http://mars.jpl.nasa.gov/MPF/science/lpsc98/1829.pdf.
4. “The strongest evidence for the Martian ocean comes from a unique pattern found in the
rocks that indicate a flow of some sort of built up of layers in the rock. There are fine
distinctions between wind flow and water, however, and the Mars science team directed
Opportunity to take a series of close-up pictures so they could assess the etchings.” “Ripples
that formed in wind look different than ripples formed in water,” said John Grotzinger, a
rover science team member from the Massachusetts Institute of Technology. “Some patterns
seen in the outcrop that Opportunity has been examining might have resulted from wind, but
others are reliable evidence of water flow.” http://www.nasa.gov/vision/universe/solarsystem/
Mars-more-water-clues_prt.htm, 3/23/2004.


7.1.8 Association
Three elements of photo-interpretation are related to context, or the
relationship between objects in the scene to each other and to their
environment. These elements are site, association, and time.
Association is the spatial relationship of objects and phenomena,
particularly the relationship between scene elements. “Certain objects are
genetically linked to other objects, so that identifying one tends to indicate or
confirm the other. Association is one of the most helpful clues for identifying
cultural features.”5 Thermal power plants will be associated with large fuel
tanks or fuel lines. Nuclear power plants tend to be near a source of cooling
water (although this can also be considered an example of site or location). A
classic instance of cultural association from the Cold War was the detection of
Cuban forces in Angola by the presence of baseball fields in the African
countryside (1975–1976).

7.1.9 Site
Site is the relationship between an object and its geographic location or terrain.
This can be used to identify targets and their use. An otherwise poorly resolved
structure on the top of a hill might, for example, be a communications relay,
based on its location.

7.1.10 Time
The temporal relationships between objects can also provide information,
through time-sequential observations. Crops, for example, show characteristic
temporal evolutions that uniquely define the harvest. Change detection, in
general, is one of the most important tasks in remote sensing and follows from
this interpretation key. Time can also play a role in determining level of
activity, as in Fig. 7.1.

7.2 Image Processing


Several important topics appear at this point. Between the earlier discussion of
images and the data from various space systems described in previous
chapters, at least one important concept has not yet been defined: the
relationship between the numbers that come from a satellite and the images
that result from those numbers.
In the world of remote sensing, the data from a sensor, as received on the
ground, is called the “digital number,” or DN (admittedly a redundant
terminology). The relationship between DNs and images is illustrated in
Fig. 7.5 using a photograph taken with a digital electronic camera. The

5. Avery and Berlin (1992).


Figure 7.5 Model Susanna Olsen. The image chip on the right is reduced in resolution to
20% of the original.

Table 7.1 The digital number (DN) values are given here for the image chip of the eye in
Fig. 7.5. The DN values for 152 are highlighted. There are ten such values; compare them to
the image by locating the highest data value, DN = 210.

1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
2 181 188 178 157 153 119 106 107 97 91 91 89 89 87 102 117 119 115 106 82
3 179 160 162 149 132 107 90 86 90 98 114 129 151 172 175 177 169 166 158 141
4 163 158 144 147 120 116 115 121 137 162 174 180 184 184 179 184 182 184 179 170
5 156 149 145 137 139 143 148 156 169 177 179 177 179 182 175 179 177 179 177 169
6 153 151 148 149 153 156 159 152 152 151 153 152 155 162 166 171 173 175 172 166
7 156 152 158 159 150 136 137 146 156 160 158 152 140 134 132 145 161 162 163 158
8 148 158 157 139 144 151 126 87 73 58 55 52 67 96 122 125 123 150 156 153
9 148 152 142 149 143 120 95 48 50 58 43 50 85 85 57 79 111 128 150 152
10 147 152 157 130 143 192 103 47 65 97 38 47 87 165 120 50 71 113 133 144
11 164 153 126 157 197 210 121 71 43 34 44 56 109 170 143 98 73 76 117 132
12 172 134 147 155 151 161 143 110 95 67 71 85 149 146 114 89 99 96 109 131
13 182 187 186 181 175 179 173 171 161 151 134 122 120 116 125 126 129 138 144 153
14 178 198 198 182 179 182 181 191 172 167 162 153 145 153 152 150 152 157 164 169
15 175 185 192 188 185 187 193 205 201 194 190 185 177 173 166 164 170 173 180 182
16 183 185 193 195 198 199 201 200 196 191 188 186 180 180 182 184 187 191 192 189

subject of a human face was chosen to provide an intuitive image. The resolution is high, so a small image chip of the eye is extracted for this example and shown expanded on the right; its resolution is reduced to facilitate display in tabular form. The data values associated with each eye pixel are given in Table 7.1, with numbers running from 0 to 210, where 0 is a black pixel and 210 is nearly white.


Figure 7.6 Histogram of DN occurrences. The reduced-resolution curve corresponds to the image at the right of Fig. 7.5 and Table 7.1. The full-resolution statistics for the same region are given, along with those for the complete 847 × 972 image.

7.2.1 Univariate statistics


Many image-processing techniques begin with an examination of the statistics
of the scene or image; in particular, an important technique examines the
histogram of occurrences in a scene. In this case, there are ten occurrences of
DN value 152 in the table, corresponding to a bright gray in the image. This
point is indicated in Fig. 7.6 (10 at DN = 152) in the trace of occurrences for the
values in Table 7.1. The additional curves show the distribution of values at the
original resolution for that region and then for the full image. The last trace has
been divided by ten to keep it on scale. The curves appear rather broad because
of the logarithmic vertical axis. The digital camera’s electronic brain has done
its job: there is a fairly broad range of DNs (0–255), with the histogram fairly
evenly distributed across the eight bits of dynamic range.
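
Counting DN occurrences is a one-line operation in most languages. The sketch below (Python/NumPy) is only an illustration: the random image stands in for the photograph, whose actual pixel values are not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(847, 972), dtype=np.uint8)   # stand-in for the 8-bit photo

counts = np.bincount(image.ravel(), minlength=256)   # occurrences of each DN from 0 to 255
peak_dn = int(counts.argmax())
print(f"DN = 152 occurs {counts[152]} times; the most common DN is {peak_dn}")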

7.2.2 Dynamic range: snow and black cats


Figure 7.7(a) depicts a black-and-white picture taken with a high-quality 2.25-inch-roll film camera (6 × 6 cm) that was scanned using a Nikon film scanner with 12 bits of dynamic range. The resulting image has a range of digital numbers from 50–4000. The histogram of data values is displayed in Fig. 7.7(b). Roughly speaking, three peaks appear. At far right are the snow values (DN ≈ 3600). At far left (DN ≈ 340), the dark coats. The middle peak at DN ≈ 1080 is the mid-tones of clothing and the fog-shrouded background. The original image was scaled to encompass each of the three broad regions that define the histogram.


Figure 7.7 (a) The scanned film has a dynamic range of 12 bits or so. (b) Histogram values with peaks at DN = 340, 1080, and 3660.

Modern satellites, such as IKONOS and Quickbird, provide 11-bit data, with DNs that range from 0 to 2047. As with the film illustration here, the data must be scaled to 0–255 for display, a mapping that requires the same considerations: here the full dynamic range has been scaled from 50–800 to emphasize detail in the darkest pixels, from 800–2500 to emphasize the mid-range, and from 2500–4000 to show detail in the snow.
One last note: in order to see the detail in the face of the little girl [Fig. 7.7(c)], the data range from DN = 241–2116 should be scaled to 0–255.

Figure 7.7 (c) Close-up of little girl in Fig. 7.7(a).
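The scaling described above is a simple linear stretch with clipping. A sketch of that mapping (not the author's processing code; the limits are the ones quoted in the text, and the image array is a placeholder):

```python
import numpy as np

def stretch_to_8bit(data, lo, hi):
    """Linearly map DNs in [lo, hi] to display values 0-255, clipping outside."""
    scaled = (data.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Placeholder for the 12-bit scanned photograph (DNs roughly 50-4000).
image = np.random.randint(50, 4000, size=(512, 512))

dark_view = stretch_to_8bit(image, 50, 800)      # detail in the dark coats
mid_view  = stretch_to_8bit(image, 800, 2500)    # mid-tones
snow_view = stretch_to_8bit(image, 2500, 4000)   # detail in the snow
face_view = stretch_to_8bit(image, 241, 2116)    # range quoted for Fig. 7.7(c)
```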


Figure 7.8 (a) Digital image of a black cat. (b) Black-cat histogram.

Now consider a color image. The cat depicted in Fig. 7.8(a) was photographed with a Canon digital camera (1600 × 1200); the exposure was adjusted to compensate for the nearly black fur. Histograms for the three colors are shown in Fig. 7.8(b). The peak at DN ≈ 30 is the very dark fur on the face; the lowest values are for the shadowed fur. The grass and brighter fur make up the mid-range, at around 100 or so. The red collar and white fur provide the peak at DN ≈ 250. Such histograms are key in distinguishing targets from backgrounds in both panchromatic and spectral imagery.

7.3 Histograms and Target Detection

The objective of target detection is to distinguish the target from the background reliably. The effectiveness of this process is expressed by a probability of detection (this should be high) and a probability of false alarm (preferably low). In practice, the two are linked: the higher the detection probability, the higher the false-alarm probability. However, in most applications a satisfactory compromise can be found between these two values.
In Fig. 7.9, a target has been superimposed on a randomly generated background. The nominal target has been circled. A subset of the image values is given in Table 7.2, centered on the target. The target pixels in the table are those that have higher data values than the background noise. The standard deviation is the square root of the variance, or 24.1, which is roughly the half width at half maximum (HWHM). The target, which runs from 175–180, is 4σ away from the center of the background noise. Here, the background distribution does not really represent the sensor noise (although it could) but rather the signal from a uniform background like the sky.
As an exercise, estimate the random-noise level (the width of the background distribution, also called "sigma") from the histogram graph (Fig. 7.9) and


Figure 7.9 Histogram and target image. A model image simulates what you might obtain from a 40 × 40 detector array attached to a telescope. The image is inset, and the small bright region is supposed to be the target.

Table 7.2 Image values for the histogram in Fig. 7.9.

93 91 74 87 68 85 94 37 72 94 110
59 97 85 110 88 71 102 47 50 96 98
132 79 77 114 113 75 87 61 99 86 80
96 95 52 96 58 81 65 96 54 64 75
97 76 85 91 67 176 176 88 52 75 41
80 63 10 59 175 180 178 63 91 100 111
92 107 62 54 176 178 49 58 113 89 78
36 78 96 112 87 142 100 82 75 43 73
72 73 58 37 84 54 38 111 116 101 69
66 60 104 63 109 91 43 62 79 105 93
79 66 50 76 88 110 60 88 112 84 31

compare this value to the numbers obtained via calculation: mean = 76.8, variance = 580.3, skewness = 0.21, and kurtosis = 0.92.
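Those summary statistics are straightforward to compute from the image array. A sketch using NumPy and SciPy (the array below is randomly generated to stand in for the 40 × 40 model image of Fig. 7.9, so the printed values will only be near the quoted ones):

```python
import numpy as np
from scipy import stats

# Placeholder for the 40 x 40 model image: Gaussian background with a few
# bright target pixels near DN 175-180 (hypothetical values, not the book's data).
rng = np.random.default_rng(0)
image = rng.normal(77.0, 24.0, size=(40, 40))
image[19:21, 19:21] = 178.0

values = image.ravel()
print("mean     =", values.mean())
print("variance =", values.var(ddof=1))
print("sigma    =", values.std(ddof=1))
print("skewness =", stats.skew(values))
print("kurtosis =", stats.kurtosis(values))   # excess kurtosis
print("target offset =", (178.0 - values.mean()) / values.std(ddof=1), "sigma")
```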

7.4 Multi-dimensional Data: Multivariate Statistics

The previous sections primarily deal with panchromatic imagery. Spectral imagery requires different techniques. An artificial illustration has been constructed to make some of the important concepts in spectral analysis more obvious. Figure 7.10 features a red Zip disk on grass to provide a strong contrast in color. The data are displayed as an image, histogram, and two scatter plots. The figure shows that although it is easy enough to see the disk,


Figure 7.10 (a) A red disk on grass background. (b) Histogram of red (1), green (2), and
blue (3) bands from the color image.

Table 7.3 Values for the histogram in Fig. 7.10(b).

Color     Mean     Standard Deviation (σ)
Red       50.9     23.4
Green     50.0     23.6
Blue      21.3     17.1

it is not immediately obvious how to tell a computer to distinguish the red pixels
from the background. The statistics that pertain to the histogram are given in
Table 7.3. The mean of the red values is 50.9 (the background), and the width,
or standard deviation, is 23.4. The target has DN values of 128–130.
Figure 7.11 shows a new data format: a 2D scatter plot. The high
correlation is apparent. This correlation, or redundancy, in the data is not a
major problem here, but it becomes much more so when you have higher
spectral dimensions to your data (six reflective bands with Landsat, 224 bands
with AVIRIS). The trick is to use the power of statistical analysis to make use
of redundancy in spectral data.
The details for the example in this section show a strong correlation between the three bands. The correlation is unusually high because of the homogeneity of the scene, but the concept is fairly general. Images taken at varying wavelengths will be highly correlated. This correlation can be important in spectral analysis.
The correlation calculation has a closely related term: the covariance. The two are related by a normalization factor, the product of the standard deviations of the two bands. The diagonals of the covariance matrix in Table 7.4 are just the squares of the standard deviations given in Table 7.3 (that is, 17.1² ≈ 291.5 for the blue channel).
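A sketch of how the covariance and correlation matrices of Table 7.4 could be computed for a three-band image (the band arrays here are placeholders; only the NumPy calls matter):

```python
import numpy as np

# Placeholder red, green, and blue planes of the disk-on-grass image.
rng = np.random.default_rng(1)
red, green, blue = (rng.integers(0, 256, size=(200, 200)) for _ in range(3))

# Arrange the bands as rows of observations (3 x Npixels).
bands = np.vstack([red.ravel(), green.ravel(), blue.ravel()]).astype(float)

cov = np.cov(bands)        # 3 x 3 covariance matrix (left half of Table 7.4)
corr = np.corrcoef(bands)  # 3 x 3 correlation matrix (right half of Table 7.4)

# The diagonal of the covariance matrix holds the per-band variances (sigma squared).
print(np.sqrt(np.diag(cov)))   # per-band standard deviations
```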


Figure 7.11 Scatter plots of the occurrence of RGB triples: (a) red versus green, and
(b) blue versus green.

Table 7.4 Covariance and correlation values for Fig. 7.12.

          Covariance Matrix                 Correlation Matrix
        R        G        B              R      G      B
R     547.86   530.06   369.36         1.00   0.96   0.92
G     530.06   557.58   368.14         0.96   1.00   0.91
B     369.36   368.14   291.49         0.92   0.91   1.00

Target detection and related goals in spectral imagery can be powerfully addressed by these statistics. The trick is to rotate the data space into a coordinate system in which the different bands are uncorrelated, which frequently leads to a very useful transformation that facilitates the detection of targets and other tasks. This rotation in color space can be accomplished by diagonalizing the covariance (or correlation) matrix by means of a rotation matrix (a matrix built from the eigenvectors of the transformation that diagonalizes the covariance matrix). This transformation is called a principal-components transform, also variously termed the Karhunen–Loève transform and the Hotelling transform. The new coordinates are PC1, PC2, and PC3 (shown in Table 7.5).

Table 7.5 Coordinate values.

           Transform Matrix                Covariance Matrix
         R        G        B           PC1        PC2      PC3
PC1    0.631    0.635    0.445       1341.74      0.0      0.0
PC2    0.116    0.490   −0.864          0.0      33.35     0.0
PC3    0.767   −0.597   −0.236          0.0       0.0     21.85


The table is interpreted, for example, such that

PC3 = 0.767 × red − 0.597 × green − 0.236 × blue.

PC1 is the average brightness, weighted by the dynamic range (variance) of each band. The remaining PC bands are orthogonal to the first (and each other), and uncorrelated to each other. In this case, PC3 is the difference between the red and blue/green images.
The covariance matrix is now diagonal and has an additional interesting
characteristic: the bands are ordered by their variance. For high-dimensional
data, this means that the effective dimensionality can often be reduced by
means of this transform because the dimensionality of an imaged scene is
generally fairly modest, reflecting to some extent the spectrally different
objects in the scene. For hyperspectral imagers, with hundreds of bands, the
data are highly redundant. In a PC transform, the first few bands in transform
space have the highest variance, that is, most of the scene information.
Higher-order bands are mostly noise. In fact, one way to remove noise from
spectral imagery is to transform into PC space, eliminate the noise bands, and
then invert the transform—a very powerful method for removing systematic
sensor noise.
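A sketch of that rotation built from a generic eigendecomposition of the band covariance matrix (this is one common way to implement a PC transform, not necessarily the software used for the figures; the input array is a placeholder):

```python
import numpy as np

# bands: 3 x Npixels array of R, G, B values (placeholder data here).
rng = np.random.default_rng(2)
bands = rng.normal(size=(3, 10000))

mean = bands.mean(axis=1, keepdims=True)
cov = np.cov(bands)

eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvectors of the covariance matrix
order = np.argsort(eigvals)[::-1]         # sort so PC1 has the largest variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

pcs = eigvecs.T @ (bands - mean)          # rotate into PC space

# The covariance of the rotated data is diagonal (compare Table 7.5, right side).
print(np.round(np.cov(pcs), 2))

# Simple noise removal: zero the low-variance PCs and invert the rotation.
pcs_clean = pcs.copy()
pcs_clean[1:, :] = 0.0
bands_clean = eigvecs @ pcs_clean + mean
```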
The human eye performs a very similar transformation. The signals from
the LMS cones (Fig. 6.3) are transformed in a manner much like a PC
transform, which reduces the bandwidth needed to carry images from the eye
to the brain. The red–green and blue–yellow cones produce highly correlated
imagery, along with the panchromatic rods. The first component is intensity,
the second is hue, and the third is basically saturation. Most of the
information (and the highest spatial resolution) is in the intensity band (PC1),
whereas color is a few percent of the information (PC2). One consequence is
that the eye’s ability to resolve high-frequency components is much greater for
variations in intensity than in color.
Returning to Fig. 7.10, the target (the disk) is clearly differentiated from
the background in the new coordinate system, or color space (see Figs. 7.12
and 7.13). The disk pixels peak at a DN of 70 or so, some 10σ away from the center of the background distribution of grass pixels.

7.5 Filters
There are a number of standard ways to manipulate images for enhanced
appearance or to make it easier to extract information. Simple smoothing filters
reduce noise; more-sophisticated filters can reduce “speckle” in radar imagery.
A few simple concepts are illustrated here with a Landsat panchromatic
image from San Francisco taken on March 23, 2000. The data have a basic
resolution of 15 m. A small chip has been extracted from the northeast corner of
the peninsula (the Bay Bridge and Yerba Buena Island), as well as an image


Figure 7.12 The image in principal component space. The third PC is shown as an image on the left and as a scatter plot in principal component space on the right. In PC3, the disk is now clearly distinguished from the grassy background.

Figure 7.13 Histogram of PC3. The vertical scale is logarithmic (the background distribution is quite narrow here), with σ = 4.67, and the FWHM is approximately 10. The target (the disk) is over 10σ away from the center of the grass data values.

chip for San Francisco Airport. The filters are applied using an image kernel—a
concept adapted from calculus and transform theory.

7.5.1 Smoothing
Noisy images can be difficult to interpret. One approach is to smooth the
image, averaging adjacent pixels together through a variety of approaches,


with some specialized versions like the Lee filter used to reduce speckle in radar images. The illustration here is not particularly apt because the data quality is good, but a 3 × 3 filter block has been applied, with even weights for each pixel. The kernel is illustrated here; the center of each 3 × 3 pixel block is replaced by the average of all nine pixels. Figure 7.14(b) shows the smoothed image.

0.111  0.111  0.111
0.111  0.111  0.111
0.111  0.111  0.111
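A minimal sketch of applying that averaging kernel as a convolution with SciPy (the image array is a placeholder rather than the Landsat chip):

```python
import numpy as np
from scipy import ndimage

# Placeholder for a panchromatic image chip (2D array of DNs).
image = np.random.randint(0, 256, size=(400, 400)).astype(float)

# 3 x 3 mean (boxcar) kernel: every weight is 1/9, i.e., roughly 0.111.
kernel = np.full((3, 3), 1.0 / 9.0)

smoothed = ndimage.convolve(image, kernel, mode="nearest")
```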

7.5.2 Edge detection

High-pass filtering removes the low-frequency components of an image while retaining the high frequencies (local variations). It can be used to enhance the edges of adjoining regions as well as to sharpen an image. This result is accomplished by using a kernel with a high central value, typically surrounded by negative weights:

−1  −1  −1
−1   8  −1
−1  −1  −1

Here, the kernel takes the difference between the central pixel and its
immediate neighbors in all directions.
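The same convolution call used for smoothing can apply this high-pass kernel; a sketch, again with placeholder data rather than the San Francisco scene:

```python
import numpy as np
from scipy import ndimage

image = np.random.randint(0, 256, size=(400, 400)).astype(float)  # placeholder chip

highpass = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)

edges = ndimage.convolve(image, highpass, mode="nearest")

# A common sharpening variant adds the high-pass result back to the original image.
sharpened = image + edges
```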
In Fig. 7.14, the bridge is enhanced, as are the edges of the island. A small section of the bridge is shown in a magnified view to further illustrate the filter's result. The original data appear in Figs. 7.14(a) and (d); the filtered output appears in Figs. 7.14(c) and (e).
The same high-pass filter is applied to the airport area, depicted in Fig. 7.15. The runways are pulled out, along with the edges of the terminal buildings. Similar approaches are used to sharpen images for analysis by the human eye. Edge detection is important for automated processes in mapping, for example.

7.6 Supplemental Notes on Statistics

Remote-sensing data are replete with great applications for powerful statistical analysis tools, which require at least a modest knowledge of statistics and probability. A few fundamental ideas are reviewed in this section.
First, the mean, or average value, is the sum of the DN values divided by the number of pixels:


Figure 7.14 Landsat panchromatic sensor: (a) raw data, (b) smoothed image, (c) high-pass filter, (d) raw, and (e) high-pass filter.

Figure 7.15 Landsat (a) raw data and (b) high-pass filter (edge detection).

\[ \mathrm{mean} = \bar{x} = \frac{1}{N}\sum_{j=1}^{N} x_j . \]

The variance addresses the range of values about the mean. Spatially homogeneous scenes will have a relatively low variance; scenes or scene elements with a wide range of DNs will have a larger variance. The standard deviation σ is just the square root of the variance:


\[ \mathrm{variance} = \frac{1}{N-1}\sum_{j=1}^{N}\left(x_j - \bar{x}\right)^2 , \qquad \sigma = \sqrt{\mathrm{variance}} . \]

The correlation coefficient and covariance are closely related:

\[ \mathrm{correlation\ coefficient} = r = \frac{N\sum_{j=1}^{N} x_j y_j - \sum_{j=1}^{N} x_j \sum_{j=1}^{N} y_j}{\left[N\sum_{j=1}^{N} x_j^2 - \left(\sum_{j=1}^{N} x_j\right)^2\right]^{1/2}\left[N\sum_{j=1}^{N} y_j^2 - \left(\sum_{j=1}^{N} y_j\right)^2\right]^{1/2}} , \]

\[ \mathrm{covariance} = \frac{1}{N-1}\left(\sum_{j=1}^{N} x_j y_j - \frac{1}{N}\sum_{j=1}^{N} x_j \sum_{j=1}^{N} y_j\right) = \frac{1}{N-1}\sum_{j=1}^{N}\left(x_j - \bar{x}\right)\left(y_j - \bar{y}\right) . \]

The correlation coefficient and covariance are related by the standard deviations:

\[ \mathrm{correlation\ coefficient} = \frac{\mathrm{covariance}}{\sigma_x \sigma_y} . \]

The following is a very simple example:

X = [1, 2, 3]                      Y = [2, 4, 6]
mean(x) = 2                        mean(y) = 4
variance(x) = 1                    variance(y) = 4
σ(x) = 1                           σ(y) = 2
correlation coefficient = 1        covariance = 2
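These values are easy to verify numerically; a short sketch with NumPy (np.cov and np.corrcoef use the same N − 1 normalization as the formulas above):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

print(x.mean(), y.mean())              # 2.0 4.0
print(x.var(ddof=1), y.var(ddof=1))    # 1.0 4.0
print(x.std(ddof=1), y.std(ddof=1))    # 1.0 2.0
print(np.cov(x, y)[0, 1])              # 2.0  (covariance)
print(np.corrcoef(x, y)[0, 1])         # 1.0  (correlation coefficient)
```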

7.7 Problems
1. Figure 7.16 shows a small image chip of San Diego harbor (Coronado)
taken on February 7, 2000 by the IKONOS satellite. What can you tell
about the two ships? The carrier is 315 m long. What can you tell about the
other ship?
2. How could it be determined whether or not a road or rail line is intended
for missile transport?
3. For an otherwise uniform scene (Fig. 7.17), there is a target with higher DN. The variance is 5106.4. Calculate the standard deviation σ. Estimate the distance between the target and background in units of σ.
4. Three regions are identified in Fig. 7.18: water, a bright soil, and the old
Moss Landing refinery site (red), with some very bright white sand and soil.
Figure 7.19 provides the corresponding histogram. Describe what dynamic
ranges you would use to display the scene so as to enhance each region of


Figure 7.16 IKONOS image of San Diego harbor, taken February 7, 2000.

Figure 7.17 (a) Dark grey, cluttered background with bright target. (b) Histogram for DN
occurrence. The target DN is 250.

interest. As an example, the best display for the soil would be to scale the data so that DN = 250–450 mapped to a digital display range of 0–255.
5. For a scene with four pixels, calculate the correlation between the pixels
and the covariance:


Figure 7.18 The Moss Landing Mineral Refractory was built ca. 1942. The white material
may be dolomite from the Gabilan Mountains or magnesium residue from the material
extracted from seawater.

Figure 7.19 Histogram for the Moss Landing/Elkhorn Slough area north of Monterey, CA. In the histogram plot, the red line is for the soil, and the green line is for the dolomite. The cyan region of interest is plotted in blue here. The black curve shows the values for the full scene.


Pixel # Red (DN) Green (DN) Blue (DN)

1 40 50 60
2 20 25 28
3 30 30 30
4 15 16 14

6. The scene in Fig. 7.2 is oriented so that north is “up.” What is the time of
day? Where is the spacecraft relative to the Monument?

Chapter 8
Thermal Infrared

Figure 8.0 (a) Data from the Mars Global Surveyor (MGS) spacecraft taken by the Thermal Emission Spectrometer. The image shows the daytime temperature measurements for one day. Data are scaled from –125 °C to 20 °C.1 (b) MGS thermal inertia map, obtained by comparing day/night temperature differences. The large region of dark blue on the left is Olympus Mons. The scale ranges from 24–800 J m⁻² K⁻¹ s⁻½.2


The imagery and data collected by tactical and strategic sensors operating in the infrared portion of the electromagnetic spectrum are generated by emitted radiation from targets and backgrounds rather than by the reflected radiation discussed previously. These infrared sensors can provide non-literal information that may have a different value than that from comparable panchromatic (visible) images.

8.1 IR Basics
In the visible spectrum, humans mostly see by reflected light, typically sunlight. In the IR, there is a reflected solar component (during the day), but much of remote sensing is due to emitted IR, particularly in the mid-IR range (3–5 µm) and LWIR range (8–13 µm).

8.1.1 Planck's radiation formula3

Returning to the Planck relation first shown in Chapter 2, the radiance equation for a blackbody as a function of wavelength λ is

\[ \mathrm{radiance} = L_\lambda = \frac{2hc^2}{\lambda^5}\,\frac{1}{e^{hc/\lambda kT} - 1} \qquad \left[\frac{\mathrm{W}}{\mathrm{m}^2\cdot\mu\mathrm{m}\cdot\mathrm{ster}}\right], \tag{8.1} \]

where c = 3 × 10⁸ m/s, h = 6.626 × 10⁻³⁴ J·s, and k = 1.38 × 10⁻²³ J/K. Slightly mixed units are indicated in the (normally) metric formula to emphasize that the m² term is per unit area, the µm term reflects the per-unit-wavelength element, and "ster" is the per-unit-solid-angle component.
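A short numerical sketch of Eq. (8.1), of the kind used to generate curves like those in Fig. 8.1 (the constants and temperatures are the values given in the text; the function name is illustrative):

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_um, temp_k):
    """Spectral radiance L_lambda in W / (m^2 um sr), per Eq. (8.1)."""
    lam = wavelength_um * 1e-6  # micrometers -> meters
    per_meter = 2 * H * C**2 / lam**5 / (np.exp(H * C / (lam * K * temp_k)) - 1.0)
    return per_meter * 1e-6     # per meter of wavelength -> per micrometer

wavelengths = np.linspace(0.2, 20.0, 500)          # 0.2-20 um
solar = planck_radiance(wavelengths, 5800.0)       # approximate solar curve
terrestrial = planck_radiance(wavelengths, 300.0)  # typical terrestrial curve
```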
Figure 8.1 shows the blackbody curves for bodies at 5800 K (the solar
temperature) and 300 K (typical terrestrial temperature). The solar curve has
been normalized to “top-of-the-atmosphere” values for earth orbit. The figure
represents the amount of energy available as a function of wavelength for a
sensor at low-earth orbit.
Figure 2.13 showed how the location of the peak in the spectrum and the
amplitude of the radiation change with temperature. The formula is integrated
over all wavelengths to obtain the Stefan–Boltzmann law [see Eq. (8.2)].
However, for many sensors it is necessary to integrate over relatively narrow
wavelength ranges, which can be challenging in the case of Eq. (8.1). This
process can be done numerically or by approximating the value of the
radiance function over a narrow spectral range.

1. http://tes.asu.edu/tdaydaily.png.
2. N. E. Putzig, M. T. Mellon, K. A. Kretke, and R. E. Arvidson, Global thermal inertia
and surface properties of Mars from the MGS mapping mission, Icarus 173, 325-341, 2005.
http://www.mars.asu.edu/data/tes_putzigti_day/.
3. Wikipedia has an extensive and well-referenced discussion of Planck's law: www.wikipedia.com.


Figure 8.1 Blackbody curves. In the example of radiation from the sun, the sun acts like a blackbody at about 6000 K. Of course, the radiation decreases as per the inverse square law, and the incident radiation observed at the earth is decreased by that factor, i.e., (radius_sun / radius_earth orbit)². As a consequence, the 3–5-µm wavelength range is in the middle of the transition region from dominance by reflected solar radiation to dominance by emitted thermal radiation for terrestrial targets.

8.1.2 Stefan–Boltzmann: radiance ∝ T⁴

The Stefan–Boltzmann law defines the total power

\[ S = \varepsilon\,\sigma T^4 , \tag{8.2} \]

where σ is the constant σ = 5.669 × 10⁻⁸ W m⁻² K⁻⁴, and ε is the emissivity. The emissivity for a blackbody is one. Real sensors with a more-limited bandpass (say, 8–13 µm) will still see a monotonic increase in power with temperature.
The concept of blackbody temperature shows up in places as ordinary as a
local hardware store, where GE fluorescent light bulbs are sold by color
temperature.4 The bulbs are not blackbodies, but the concept is still applied.
For example:
• GE Daylight Ultra, 3050 lumens, 6500 K;
• GE Daylight, 2550 lumens, 6250 K;
• GE Chrome 50, 2250 lumens, 5000 K;
• GE Residential, 3150 lumens, 4100 K;

4. http://www.gelighting.com/LightingWeb/emea/images/Linear_Flourescent_T5_LongLast_
Lamps_Data_sheet_EN_tcm181-12831.pdf.


• GE natural color fluorescent for kitchen and bath, 3350 lumens, 3000 K.
(Luminous flux, or lumens, is a measure of power that takes into account the human visual response.)

8.1.3 Wien's displacement law

Wien's displacement law says that the wavelength of the peak is inversely related to the temperature:

\[ \lambda_m = \frac{a}{T} , \tag{8.3} \]

where a is a constant: a = 2898 µm·K.
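A quick sketch that exercises Eqs. (8.2) and (8.3) for the two temperatures used throughout this chapter (a blackbody emissivity of one is assumed):

```python
SIGMA = 5.669e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_A = 2898.0    # Wien displacement constant, um K

def total_exitance(temp_k, emissivity=1.0):
    """Total emitted power per unit area, Eq. (8.2), in W/m^2."""
    return emissivity * SIGMA * temp_k**4

def peak_wavelength(temp_k):
    """Wavelength of peak emission, Eq. (8.3), in micrometers."""
    return WIEN_A / temp_k

for temperature in (5800.0, 300.0):
    print(temperature, total_exitance(temperature), peak_wavelength(temperature))
# 5800 K: ~6.4e7 W/m^2 with a peak near 0.5 um; 300 K: ~460 W/m^2, peak near 9.7 um
```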

8.1.4 Emissivity
The assumption so far has generally been that emissive objects are blackbodies, which are perfect absorbers and emitters of radiation. Real objects all have an emissivity ε that is between zero and one. Table 8.1 shows some average values for the 8–12-µm wavelength range. Just as with reflective spectra, there are fine-scale variations in emissivity, which are unique to the material. Gold emits poorly in the longwave infrared, with an emissivity of only a few percent.
Figure 8.2 shows the variation in emissivity that occurs as a function of wavelength in the longwave IR spectrum for some common minerals. The figure is a "stack plot" with the scales of successive materials shifted upward by small factors to keep them from overlapping. Each curve has a maximum just below 1.0. The dip in the emissivity just above 11 µm for magnesite moves

Table 8.1 Average emissivity values for common materials. From Sabins, Remote Sensing and Image Interpretation, page 138.5

Material                                 Emissivity
Gold, polished @ 8–14 µm                 0.02
Aluminum foil @ 10 µm                    0.04
Granite                                  0.815
Sand, quartz, large grain                0.914
Asphalt, paving                          0.959
Concrete walkway                         0.966
Water with a thin layer of petroleum     0.972
Water, pure                              0.993

5. His citation: Buettner and Kern, JGR, 70, p. 1333, 1965. Also, http://www.infrared-
thermography.com/material.htm.


Figure 8.2 Emissivity spectra for minerals in the LWIR. This figure comes from data in the
Arizona State University spectra library. http://speclib.asu.edu/.

to the right as the material varies from magnesite to dolomite to calcite. This
reflects changing bond strengths in the materials.

8.1.5 Atmospheric absorption

In the infrared wavelength range, atmospheric absorption, due primarily to water and carbon dioxide, becomes a very important consideration (Figs. 3.12 and 3.14).

8.2 Radiometry
The objective in much of thermal imaging is the accurate extraction of
temperatures from observations. This process, called radiometry, depends on
an understanding of the target and its radiance, atmospheric absorption,
scattering and propagation, and the detector response function. As with
several elements of this text, a proper development of the topic is the content


Figure 8.3 Radiometry elements for a point target.

of full textbooks,6 and thus only a few of the basic elements and results are developed here. Elements of the presentation by Schott (1997) are used.7
Two simplified cases are considered: a point target (subpixel) and a homogeneous, Lambertian surface. The former would be an unresolved target such as a missile; the latter would be appropriate for systems like Landsat. In both cases, an isotropic radiation pattern (i.e., Lambertian) will be assumed. With this assumption, the radiant exitance M, where the angular dependence has been integrated out, must be used, producing an overall factor of π:

\[ \mathrm{exitance} = M_\lambda = \frac{2\pi hc^2}{\lambda^5}\,\frac{1}{e^{hc/\lambda kT} - 1} \qquad \left[\frac{\mathrm{W}}{\mathrm{m}^2\cdot\mu\mathrm{m}}\right]. \tag{8.4} \]

Also ignored here is reflected sunlight, normally an important background term. For use in the next section, the solid angle representing the IFOV, represented in Fig. 8.3, is labeled dΩ.

8.2.1 Point source radiometry

With the assumptions given in this section, the primary terms of interest in the first special case are the spread of radiated energy as the distance from the source increases, atmospheric absorption, and the detector response function. For the first term, we effectively employ Gauss' law. The energy emitted from the target expands outward as 1/r². Ignoring atmospheric effects for the

6. W. L. Wolfe, Introduction to Radiometry, SPIE Press, Bellingham, WA (1998).


7. J. R. Schott, Remote Sensing, the Image Chain Approach, Oxford University Press, Oxford
(1997). Thanks also to David Krause for the use of his Mathematica code on detector response.


moment, the only remaining variable is the detector, and in particular the size (area). The detector responds to the "irradiance," which is the power per unit area at the sensor optics. Irradiance E has the same units as exitance (W/m²), differing from that term in concept (emitted versus received/absorbed). As a quick illustration of the distinction, from Chapter 2, the solar exitance is 6.42 × 10⁷ W/m²; the irradiance at earth is 1378 W/m².
The general formula for the irradiance from a point target, then, is

\[ E(\lambda) = M(\lambda)\,\frac{\mathrm{area}_{\mathrm{target}}}{4\pi r^2} \qquad \left[\frac{\mathrm{W}}{\mu\mathrm{m}\cdot\mathrm{m}^2}\right]. \tag{8.5} \]

The measured energy then depends on the size of the aperture for the sensor, typically defined by the diameter of the optics. The measured radiant flux is then

\[ \mathrm{radiant\ flux} = \Phi(\lambda) = M(\lambda)\cdot\mathrm{area}_{\mathrm{target}}\cdot\frac{\mathrm{area}_{\mathrm{detector}}}{4\pi r^2} \qquad \left[\frac{\mathrm{W}}{\mu\mathrm{m}}\right]. \tag{8.6} \]

Thus, all other things being equal, it will be much easier to detect a target that is close than one that is farther away. The wavelength dependence can be integrated out for a broadband detector, and the total amount of power detected can be estimated as

\[ \mathrm{power} = \sigma T^4\cdot\mathrm{area}_{\mathrm{target}}\cdot\frac{\mathrm{area}_{\mathrm{detector}}}{4\pi r^2} \qquad [\mathrm{W}]. \tag{8.7} \]

Example
Consider a hot reentry vehicle in the earth's atmosphere, as viewed from geosynchronous orbit. (This could be a satellite or bolide burning up on reentry, for example.) Take the surface area to be 20 m², the temperature to be 1500 K, and the range to be 6 earth radii (38 × 10⁶ m). Take the mirror to have a diameter of 1 m. The power incident on the system, ultimately to be detected, is calculated as

\[ \mathrm{power} = \sigma T^4\cdot\mathrm{area}_{\mathrm{target}}\cdot\frac{\mathrm{area}_{\mathrm{detector}}}{4\pi r^2} = 5.68\times 10^{-8}\,\frac{\mathrm{W}}{\mathrm{m}^2\,\mathrm{K}^4}\cdot(1500\ \mathrm{K})^4\cdot 20\ \mathrm{m}^2\cdot\frac{0.25\pi\ \mathrm{m}^2}{4\pi\,(38\times 10^{6})^2\ \mathrm{m}^2} = 0.25\times 10^{-9}\ \mathrm{W}, \]

which does not seem like much power. (The thermal energy radiated is 6 MW.) The energy from the target peaks at 1.9 µm; a quick estimate for the number of photons represented by that energy would be


\[ \mathrm{number\ of\ photons} = \frac{\mathrm{power}}{hc/\lambda} = \frac{0.25\times 10^{-9}}{6.63\times 10^{-34}\cdot 3\times 10^{8}/1.9\times 10^{-6}} \approx \frac{0.25\times 10^{-9}}{1\times 10^{-19}} = 2.5\times 10^{9}\ \frac{\mathrm{photons}}{\mathrm{s}} . \]

For a nominal exposure time of one millisecond (1 ms), there are 2.5 × 10⁶ photons measured, which is a pretty good signal for a modern system. The development provided in this section is for unresolved targets. For targets that are resolved, the physics changes some.
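A sketch that reproduces this example numerically, using Eq. (8.7) and the photon-energy estimate (the inputs are the values stated above; the function name is illustrative):

```python
import math

SIGMA = 5.669e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
H, C = 6.626e-34, 2.998e8   # Planck constant (J s) and speed of light (m/s)

def point_source_power(temp_k, area_target_m2, aperture_diam_m, range_m):
    """Broadband power collected from an unresolved target, per Eq. (8.7)."""
    area_detector = math.pi * (aperture_diam_m / 2.0) ** 2
    return SIGMA * temp_k**4 * area_target_m2 * area_detector / (4 * math.pi * range_m**2)

power = point_source_power(1500.0, 20.0, 1.0, 38e6)
photon_energy = H * C / 1.9e-6          # J per photon near the 1.9-um peak
print(power)                            # ~2.5e-10 W
print(power / photon_energy)            # ~2.4e9 photons per second
print(1e-3 * power / photon_energy)     # ~2.4e6 photons in a 1-ms exposure
```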

8.2.2 Radiometry for resolved targets

For an area target that is resolved, the radiometry changes. The main difference is that as the range changes, the detector observes a larger area. Equation (8.6) now becomes

\[ \Phi(\lambda) = M(\lambda)\cdot\mathrm{area}_{\mathrm{target}}\cdot\frac{\mathrm{area}_{\mathrm{detector}}}{4\pi r^2} = M(\lambda)\cdot(d\Omega\, r^2)\cdot\frac{\mathrm{area}_{\mathrm{detector}}}{4\pi r^2} = M(\lambda)\cdot\frac{d\Omega}{4\pi}\cdot\mathrm{area}_{\mathrm{detector}} , \]

where dΩ is the solid angle defined by the detector aperture.

Example: Landsat 7
For a system like Landsat 7, with a 60-m resolution at a 705-km range,

\[ d\theta = \frac{\mathrm{GSD}}{\mathrm{range}} = \frac{60\ \mathrm{m}}{705\times 10^{3}\ \mathrm{m}} = 8.5\times 10^{-5}\ \mathrm{radians}, \qquad d\Omega \approx \theta^2\ \text{for small}\ \theta, \qquad d\Omega = (8.5\times 10^{-5})^2 = 7.24\times 10^{-9}\ \mathrm{ster}. \]

The Landsat 7 system has a mirror diameter of 40.64 cm and a clear inner aperture with a 16.66-cm diameter. The effective area is then 0.11 m². For the satellite in LEO, observing the earth at 300 K,

\[ \mathrm{power} = \sigma T^4\cdot\frac{d\Omega}{4\pi}\cdot\mathrm{area}_{\mathrm{detector}} = 5.68\times 10^{-8}\,\frac{\mathrm{W}}{\mathrm{m}^2\,\mathrm{K}^4}\cdot(300\ \mathrm{K})^4\cdot\frac{7.24\times 10^{-9}\ \mathrm{ster}}{4\pi}\cdot 0.11\ \mathrm{m}^2 = 2.9\times 10^{-8}\ \mathrm{W}. \]

The energy from the earth's surface peaks at 10 µm; a quick estimate of the number of photons represented by that energy would be


Figure 8.4 Radiometry elements for an area target. The area being imaged increases with
range, so that term is cancelled out in the power calculation for a fixed angular resolution.

\[ \mathrm{number\ of\ photons} = \frac{\mathrm{power}}{hc/\lambda} = \frac{2.9\times 10^{-8}}{6.63\times 10^{-34}\cdot 3\times 10^{8}/10\times 10^{-6}} \approx \frac{2.9\times 10^{-8}}{2\times 10^{-20}} = 1.4\times 10^{12}\ \frac{\mathrm{photons}}{\mathrm{s}} . \]

The energy detected does not depend explicitly on range for a given detector. As the range increases, a larger ground area is imaged, and the area imaged increases with range squared (R²). The energy will diminish with range if the GSD is kept constant.
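The same arithmetic for the resolved (area-target) case, as a sketch using the values quoted in the Landsat 7 example (the function and variable names are illustrative, and the aperture is treated as an annulus, which reproduces the 0.11-m² effective area quoted above):

```python
import math

SIGMA = 5.669e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
H, C = 6.626e-34, 2.998e8   # Planck constant (J s) and speed of light (m/s)

def resolved_target_power(temp_k, gsd_m, range_m, area_detector_m2):
    """Broadband power from a resolved scene element; range cancels for a fixed IFOV."""
    dtheta = gsd_m / range_m          # angular resolution, radians
    domega = dtheta**2                # small-angle solid angle, steradians
    return SIGMA * temp_k**4 * (domega / (4 * math.pi)) * area_detector_m2

# Effective collecting area: 40.64-cm mirror minus a 16.66-cm central obscuration.
area = math.pi * ((0.4064 / 2) ** 2 - (0.1666 / 2) ** 2)   # ~0.11 m^2

power = resolved_target_power(300.0, 60.0, 705e3, area)
print(power)                          # ~2.9e-8 W
print(power / (H * C / 10e-6))        # ~1.4e12 photons per second at 10 um
```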

8.3 More IR Terminology and Concepts


8.3.1 Signal-to-noise ratio: NEDT
Sensor noise exists in all parts of the remote-sensing continuum. In the visible
spectrum, the silicon detector quality is very high (and uniform), and noise is
not generally a problem for daylight observations. In the world of infrared,
sensors built from more exotic materials are generally not particularly
uniform in their sensitivity and are frequently noisy. Sensors are typically
being driven to their limit to detect faint objects against variable backgrounds,
and it becomes important to recognize the sensor limit.
Measurement and estimation of noise in an infrared system becomes at
least a whole chapter in a good text on IR sensors and thus is not developed
here. The net result of the work, however, can be neatly defined in a few
key parameters, in units of temperature. The first is the noise equivalent


power (NEP), which is the incident flux that would correspond to a signal-to-noise ratio (SNR) of one. This is the sensitivity threshold. The resolution is then defined as the noise-equivalent differential temperature, abbreviated NEΔT or NEDT. As an example for a good laboratory camera, the specification for the FLIR Systems SC6000 is 20 mK (18 mK is typical).8

8.3.2 Kinetic temperature

LWIR systems generally lack information on emissivity, although this can be added in the analysis stage. This lack of knowledge motivates a new concept here: the kinetic temperature. The physics is simple enough. The radiated quantity is εσT⁴_kinetic, and the sensor data must generally be interpreted as σT⁴_radiated. Here, T_kinetic is the "real" temperature, i.e., the temperature one would measure with a thermometer at the target. Setting the two equal produces

\[ \sigma T_{\mathrm{radiated}}^4 = \varepsilon\,\sigma T_{\mathrm{kinetic}}^4 , \quad \mathrm{or} \quad T_{\mathrm{radiated}}^4 = \varepsilon\, T_{\mathrm{kinetic}}^4 . \]

Because the emissivity ε is a number less than one, T_kinetic > T_radiated by a factor that is just the fourth root of ε:

\[ T_{\mathrm{radiated}} = \varepsilon^{1/4}\, T_{\mathrm{kinetic}} . \tag{8.8} \]
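A two-line sketch of that correction, converting a radiometric (brightness) temperature to a kinetic temperature once an emissivity has been assumed (the example numbers are illustrative):

```python
def kinetic_from_radiated(t_radiated_k, emissivity):
    """Invert T_radiated = emissivity**(1/4) * T_kinetic."""
    return t_radiated_k / emissivity ** 0.25

# Example: a surface observed at a 290-K brightness temperature with emissivity 0.95.
print(kinetic_from_radiated(290.0, 0.95))   # ~293.8 K
```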
8.3.3 Thermal inertia, conductivity, capacity, and diffusivity
Reflective observations depend primarily on the instantaneous values of the incident radiation, but thermal IR observations are very much dependent on the thermal history of the target region and the nature of the materials imaged.9
8.3.3.1 Heat capacity (specific heat)
Thermal heat capacity is a measure of the increase in thermal-energy content (heat) per degree of temperature rise. It is measured as the number of calories required to raise the temperature of 1 g of material by 1 °C. It is given the symbol C (calories / g °C).
Thermal storage is a closely related quantity, modified by the mass density: c (calories / cm³ °C), where the value for water is very high (1.0), about five times that for rocks. (Here, c = ρC, where ρ is the mass density in g/cm³.)
8.3.3.2 Thermal conductivity
Thermal conductivity is the rate at which heat passes through a material,
measured as the amount of heat (calories) flowing through a cross-section

8. FLIR-SC6000-MWIR-Series-Datasheet.pdf, downloaded September 2013.


9. W. G. Rees, Physical Principles of Remote Sensing, p. 109–113 (1990); F. F. Sabins, Remote
Sensing: Principles and Interpretations, Waveland Press, Inc., Long Grove, IL (1997).


Table 8.2 Thermal inertia and related characteristic values for various materials. Units are cgs. Data from the Remote Sensing Tutorial.10

Material           K (cal/cm s °C)   C (cal/g °C)   ρ (g/cm³)   P (cal/cm² °C s½)
Water                  0.0014            1.0           1.0          0.038
Wood (oak)             0.0005            0.33          0.82         0.012
Sand/soil              0.0014            0.24          1.82         0.024
Basalt                 0.0045            0.21          2.80         0.053
Aluminum               0.538             0.215         2.69         0.544
Copper                 0.941             0.092         8.93         0.879
Stainless steel        0.030             0.12          7.83         0.168

area (cm²), over a set distance (thickness in cm), at a given temperature difference (°C). It is given the symbol K (calories / cm s °C), where the nominal value for rocks is 0.006 in these peculiar units (compared to water at 0.001 and copper at 0.941). Rocks are generally poor conductors of heat compared to metals, but they are better than loose soil, which tends to have insulating air pockets.
8.3.3.3 Inertia
Thermal inertia is the resistance of a material to temperature change, indicated by the time-dependent variations in temperature during a full heating and cooling cycle:

\[ P = \sqrt{K\rho C} \qquad \left(\frac{\mathrm{cal}}{\mathrm{cm}^2\ {}^{\circ}\mathrm{C}\ \mathrm{s}^{1/2}}\right). \]
This quantity varies by a factor of four or five for the range of materials
shown in Table 8.2. Figure 8.0 shows a thermal inertia map obtained on
Mars, reflecting the variations in crust materials and thickness.
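As a check, the P column of Table 8.2 can be approximately reproduced from the other three columns under the P = √(KρC) relation above; a sketch:

```python
import math

# (K, C, rho) in cgs units, taken from Table 8.2.
materials = {
    "Water":           (0.0014, 1.0,   1.0),
    "Wood (oak)":      (0.0005, 0.33,  0.82),
    "Sand/soil":       (0.0014, 0.24,  1.82),
    "Basalt":          (0.0045, 0.21,  2.80),
    "Aluminum":        (0.538,  0.215, 2.69),
    "Copper":          (0.941,  0.092, 8.93),
    "Stainless steel": (0.030,  0.12,  7.83),
}

for name, (k, c, rho) in materials.items():
    p = math.sqrt(k * rho * c)   # thermal inertia, cal cm^-2 C^-1 s^-1/2
    print(f"{name:16s} P = {p:.3f}")
```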
8.3.3.4 Thermal diffusivity
Thermal diffusivity is a measure of the rate of internal heat transfer within a substance. It is related to the conductivity: κ = K/(Cρ) (cm²/s). In remote sensing, this value relates the ability of a substance to transfer heat from the surface to the subsurface during the day (heating period) and from the subsurface to the surface during the night (cooling period).
8.3.3.5 Diurnal temperature variation
An application of the previous concepts involves the variation in temperature
that occurs over the day for the illuminated earth surface. Materials with high
thermal inertia (such as metallic objects) change temperature by fairly modest

10. The tutorial by Dr Nicholas Short is no longer present on the NASA web site. See also
Table 6-4 in Avery and Berlin, page 123; Campbell, Table 8.2, page 251; and Sabins
(2nd edition), page 133, Table 5.3.


Figure 8.5 Illustration of the temporal variations in temperature for various materials over
a day.

amounts. Low-thermal-inertia materials (such as vegetation) change more


quickly. Figure 8.5 illustrates some consequences. Different objects will reach
equal radiative temperatures in the dawn and dusk intervals. This can cause
targets to disappear from thermal IR imagery as their brightness temperatures
match the background brightness temperature.
Figure 8.6 more directly illustrates this effect through some observations taken at the Naval Postgraduate School. The images were taken with a
mid-wave infrared (MWIR) camera, the FLIR SC8200. The IR images
were converted to temperatures, with time profiles as shown in Fig. 8.7.
Small colored circles in Fig. 8.6 show the regions used for the measurements
given in Fig. 8.7. The temperatures given in the two figures are based on
an assumed emissivity of 0.98, with no compensation for atmospheric
effects. The first assumption is clearly not correct, but the latter is
reasonable for the near-range elements of the scene. Also, there is a
component of reflected sunlight in the solar illuminated MWIR scene that
has not been corrected.
In comparison to the general illustration of Fig. 8.5, shadows cause
different objects to change temperature at different times throughout the day
in Fig. 8.7. Still, thermal crossover is fairly obvious around dawn and dusk.
The greenery of the trees will be very close to the ambient air temperature, a
fact that can be very useful for the temperature calibration of remotely sensed
IR images. Sabins (1986, 1996) gives a great deal of information on the
interpretation of such scenes, including the impact of wind on surfaces. The
ocean surface temperature (15–16°C here) differs from the direct measurement
of the ocean temperature (17°C) due to evaporation, emissivity, and
atmospheric effects. There was a serendipitous partial solar eclipse in the
afternoon that caused a dip in the overall scene temperatures.


Figure 8.6 MWIR image taken on 10/23/2014 at 1200 local time. The asphalt is noticeably
cooler in recently vacated parking spots, as people have gone to lunch. The soil/grass area
appears to be as warm as or warmer than the asphalt, which is largely an artifact of the
emissivity differences.

Figure 8.7 Temperature profiles for the scene in Fig. 8.6. The air temperature is provided
by the NPS weather station. The red tile roof is shaded in the latter part of the afternoon and
thus drops relatively more quickly than some of the other synthetic materials, with thermal
crossover well before sunset, by contrast with the still illuminated surfaces. In situ water-
temperature measurements in the bay cluster around 63°F (17°C) on this day.


Figure 8.8 The figure to the left is the merged IR and panchromatic image; to the right is
the panchromatic (reflective) image. The frozen ocean is at the upper left. The fuel tanks and
runway are warmer than the background. North is up. Jim Storey of the EROS Data Center
resampled and enhanced this image.

8.4 Landsat
Infrared data from Landsat were shown in Chapter 1 for several areas in San
Diego. Here, a second example shows the Landsat 7 thermal-band data. The
illustration is from northern Greenland, at Thule AFB. In the color picture of
Fig. 8.8, the 60-m-resolution data from band 6 were resampled using cubic
convolution to an effective pixel size of 5 m as part of a sensor-calibration
study. The LWIR data were then combined with panchromatic band data to
create the RGB image, as shown in Fig. 8.8. Band 6 (LWIR) data are assigned
to the red channel, whereas the panchromatic-band data are assigned to the
green and blue channels.
Some features are revealed as warmer than the surrounding snow due to
heating from the 24-h sunlight at this high northern latitude. The runway and
various buildings on the base show relative warmth, with the southern sides of
the storage tanks near the base somewhat warmer than the northern sides.
Exposed rock on the hillsides to the north is emitting greater thermal radiation than the snow.11
The thermal channel on Landsat has not been widely exploited until fairly
recently. One application that seems to be emerging involves monitoring
water bodies. The LWIR data from Landsat provide an effective method for
tracking changes in natural and artificial bodies of water. A slightly different
illustration of the utility for LWIR data is given in Fig. 8.9. The image chip
for San Diego Harbor shows a ship and ship wake with temperature
calibration. In the daylight image, the surface water is a degree or two warmer

11. Resources in Earth Observation, 2000; CD-ROM, European Space Agency. Images
courtesy of NASA.


Figure 8.9 The main image shows the 60-m resolution LWIR channel (band 6); the inset is
the 15-m-spatial-resolution panchromatic channel (band 8). False color is introduced by
combining low- and high-gain channels for band 6. Similar wake features can be found in
synthetic aperture radar data.

than the water below the surface. The ship moving across the surface disturbs it, bringing cooler water up from below.

8.5 Early Weather Satellites


Weather satellites are some of the primary IR platforms. Historically, the
first were small polar-orbiting platforms (TIROS, Nimbus), followed by
the first geosynchronous platforms (Applied Technology Satellites, or ATS). The
first illustrations here, in the visible wavelengths, offer some perspective on whether surveillance can be done at the tactical or the strategic level from high altitudes.

8.5.1 TIROS
The Television Infrared Observation Satellite (TIROS) was the first series of
meteorological satellites to carry television cameras to photograph cloud cover
and demonstrate the value of spacecraft for meteorological research and weather


Figure 8.10 Image taken by TIROS on April 1, 1960. Image courtesy of NASA.

forecasting. The first TIROS was launched on April 1, 1960 and returned 22,952
cloud-cover photos. The satellite was tiny by modern standards: mass = 120 kg, perigee = 656 km, apogee = 696 km, and inclination = 48.4°. RCA built the
small cylindrical vehicle (42-inch diameter and 19-inch height).
Figure 8.10, acquired by TIROS, is one of the first images of the earth
taken from space. TIROS has a complicated history of name changes and
aliases, sensor packages, and parameters. Between 1960 and 1965, ten TIROS
satellites were launched. They were eighteen-sided cylinders covered on the
sides and top by solar cells, with openings for two TV cameras on opposite
sides. Each camera could acquire sixteen images per orbit at 128-s intervals.

8.5.2 Nimbus
Named for a cloud formation, Nimbus—a second-generation meteorological
satellite—was larger and more complex than the TIROS satellites. Nimbus 1
was launched on August 28, 1964 and carried two television and two infrared
cameras. Nimbus 1 had only about a one-month lifespan; six subsequent
missions were launched, with Nimbus 7 operating from 1978 through 1993.
The spacecraft carried an advanced Vidicon camera system for recording
and storing remote cloud-cover pictures, an automatic-picture-transmission
camera for real-time cloud-cover images, and a lead-selenide detector
(3.4–4.2 µm) to complement the daytime TV coverage and measure nighttime
radiative temperatures of cloud tops and surface terrain. The radiometer had
an IFOV of 1.5°, which at a nominal spacecraft altitude (1000 km)


Figure 8.11 MWIR image from Nimbus 2, showing Hurricane Gladys (left) and Hurricane
Inez (right). The darker regions are the ocean surface or low-altitude cloud; these are
warmer than the higher-altitude cloud tops, which are rendered as white in the image.
Images courtesy of NASA.12

corresponded to a ground resolution of approximately 8 km at nadir. Some of


these early IR images are shown in Fig. 8.11. The left-hand image shows a
roughly 5000-km profile through the storm system at a varying spatial
resolution. The radiance values have been converted to estimated blackbody
temperatures and their corresponding altitudes. The right-hand image was
acquired on October 7, 1966, near local midnight, and shows the progression
of temperature from cold (clouds) to warm (land) to warmest (ocean surface).

8.6 GOES
8.6.1 Satellite and sensor
The Geostationary Operational Environmental Satellite (GOES) mission
provides the now-familiar weather pictures seen on newscasts worldwide.
Each satellite in the series carries two major instruments, an imager and a
sounder, which acquire high-resolution visible and infrared data, as well as

12. http://history.nasa.gov/SP-168/p15.htm.


temperature and moisture profiles of the atmosphere. GOES imagery were


first shown in Figs. 1.9 and 1.10, illustrating the persistent global imaging of
geosynchronous systems.
The GOES system serves a region covering the central and eastern Pacific
Ocean; North, Central, and South America; and the central and western
Atlantic Ocean. Pacific coverage includes Hawaii and the Gulf of Alaska. This
coverage is accomplished by two satellites: GOES-West, located at 135° west

Figure 8.12 The 120-kg module uses 120-W power and outputs 10-bit data at less than
2.62 Mbps. The Cassegrain telescope has a 31.1-cm (12.2-inch)-diameter aperture and f/6.8.13

Figure 8.13 GOES-15 spectral response function for the visible channel. A Gaussian fit to
the response function does not match particularly well, but it provides some measure of
where the response function is centered.

13. GOES N Series Data Book; Contract NAS5-98069 Rev D November 2009, published by
Boeing., http://goes.gsfc.nasa.gov/text/GOES-N_Databook/section03.pdf.


longitude, and GOES-East, at 75° west longitude. A common ground station,


the CDA station at Wallops, Virginia, supports the interface to both satellites.
Figure 8.12 shows the GOES imager. A familiar Cassegrain design
provides the primary optic, and the sensor is a single pixel swept mechanically
over the visible hemisphere. This design allows the visible hemisphere to be
imaged once every thirty minutes; it is selected partly for heritage reasons and
partly because it simplifies sensor calibration (calibrating a larger array to the
required accuracy can be time consuming).
Five channels are used, as illustrated in Figs. 8.13–8.15 and further defined
in Table 8.3. The spectral response functions for the infrared channels are
shown with the U.S. Standard Atmosphere brightness temperatures in
Fig. 8.14, which reflects the depth that the atmosphere is penetrated at those

Figure 8.14 GOES-15 spectral response functions for the four infrared channels, with U.S.
Standard Atmosphere brightness temperature spectrum.14

14. Credit: "University of Wisconsin-Madison Space Science and Engineering Center"; Cooperative Institute for Meteorological Satellite Studies, http://cimss.ssec.wisc.edu/goes/calibration/.


Figure 8.15 GOES-15 full-disk infrared images taken April 26, 2010 at 1730 UTC. Top left: 0.6-µm band channel (VIS); top center: 3.9-µm channel (IR2); top right: 6.7-µm water vapor channel (IR3); bottom left: 10.7-µm channel (IR4); and bottom right: 13.3-µm channel (IR6). "The VIS Moon (left) is a bit skewed by its apparent motion while being scanned back-and-forth by the Imager. The infrared (3.9 µm) view of the moon is so hot that it is off-scale in the temperature range used for earth-scanning."15

Table 8.3 Specifications for GOES N, O, P. Note that channel 1 has a GSD = 28 µrad.16

Channel   Wavelength (µm)   Wave Number (cm⁻¹)   Detector Type   GSD (km)   Purpose
1         0.52–0.71         15,385               Silicon         1          Cloud mapping
2         3.73–4.07         2,564                In-Sb           4          Night cloud mapping, fires, and volcanoes
3         5.8–7.3           1,481                Hg-Cd-Te        4          Moisture imaging and water vapor; cloud cover and height
4         10.2–11.2         943                  Hg-Cd-Te        4          Land and sea thermal mapping
6¹⁷       13.0–13.7         750                  Hg-Cd-Te        4          Water vapor

wavelengths. Because the bottom 10 km or so of the atmosphere decreases


monotonically in temperature with altitude, the brightness temperature
corresponds to altitude. Channel 3, for example, does not see to the surface

15. http://goes.gsfc.nasa.gov/pub/goes/100426_GOES15_firstir/index.html.
16. GOES N Series Data Book; Contract Report under NAS5-98069 Rev D November 2009,
published by Boeing.
17. The older GOES series used a slightly lower wavelength, and the channel-5 designation for that sensor channel is maintained as a distinct sensor: λ = 12 µm, 833 cm⁻¹.


of the earth, and sees the upper atmosphere at a brightness temperature of 220–240 K. A wide range of spectral wavelengths, detector materials, and spatial resolutions are present in the GOES channels. The figures and table introduce a unit commonly used in the infrared community, the wavenumber in inverse centimeters (cm⁻¹). This quantity is most properly thought of as a proxy for frequency. A wavelength of 10 µm corresponds to a wavenumber of 1000 cm⁻¹.
Figure 8.15 shows a series of illustrations for the different channels of GOES-15, launched March 4, 2010. On December 6, 2011, it was activated as the GOES-West satellite, replacing GOES-11. These images include coincidental observations of the moon.
The bandpass selections for GOES are mirrored by the polar-orbiting NOAA TIROS satellites. They allow observations of clouds, cloud and water temperatures, and atmospheric water. Channel 3, at 6.7 µm, is at a wavelength that is absorbed by atmospheric water. The atmosphere is opaque at this wavelength (Fig. 3.12); the sensor is responding to energy radiated from water vapor at the top of the atmosphere.

8.6.2 Shuttle launch: vapor trail and rocket


The GOES satellite serves as a nice analog for a missile-warning satellite.
Figure 8.16 shows GOES-8 images of the space-shuttle launch of June 21,

Figure 8.16 The wavelength ranges used here are illustrated in Figs. 8.13 and 8.14. The third frame, from the 6.7-µm channel, shows the greatest signal-to-background ratio.18

18. http://goes.gsfc.nasa.gov/text/goes8results.html.


1996, a little after 1445 UTC (1045 EDT). The vapor trail can be seen in the visible image (present as the white cloud you see from the ground). The hot, sub-pixel target is visible in all four infrared channels. The long-wave channels (11 and 12 µm) do not show good contrast because the earth is already bright at those wavelengths. The highest contrast occurs in the 6.7-µm channel because the atmospheric water vapor prevents the sensor from seeing the extensive ground clutter.
Visible plumes had been seen before by GOES, but this is the first time the unresolved heat from the rocket was seen in the 4- and 8-km IR pixels. The window channels at 3.9 and 11 µm are consistent with a 493-K blackbody occupying 0.42% of a 2-km-square pixel. The water vapor channels are consistent with a 581-K blackbody occupying 0.55% of a 2-km pixel. The shuttle burns liquid hydrogen and oxygen to create a hot exhaust that appears bright in the water vapor bands.

8.7 Defense Support Program19


The missile-launch capability illustrated using the GOES data in the previous
section is in use with the geosynchronous platforms collectively called the
Defense Support Program (DSP). These satellites, and their mission, are not
as highly classified as they once were, and elements of their mission can be
discussed. The next generation of space-based infrared systems (SBIRS)
became operational over the 2010–2013 time period.
The last generation of DSP satellites, termed Block 14, continued a
sequence that began with the launch of the first operational DSP spacecraft in
May, 1971. The satellites were built by TRW, with focal planes by Aerojet.
The last block was characterized by a satellite that massed 2400 kg and was
10 m in length and 6.7 m in diameter. The vehicle rotates about the long axis
at 6 rpm. Solar arrays provide 1.5-kW power. The axis of the telescope is off
the main rotation axis, allowing the telescope to sweep the whole earth.
The 6000-element focal plane is made from lead-sulfide detectors designed to work at 2.7 µm. These detectors provide good sensitivity at the comparatively elevated temperature of 193 K, which permitted passive, radiative cooling. The focal plane weighed 1200 pounds. Later generations (1985) added a 4.3-µm HgCdTe focal plane. The first ground station was in Woomera, Australia; others were added over time.
The operational constellation is nominally four or five satellites.
Figure 8.17 shows a photo of the DSP being deployed from the shuttle,
allowing for a unique image of the satellite already in space.

19. Information is taken from Aviation Week & Space Technology (February 20, 1989;
November 18, 1991; December 2, 1991; February 10, 1997; March 3, 1997; January 5,
1998), and a variety of press releases by TRW. A sequence of three very thorough articles
were published by Dwayne Day in Spaceflight magazine, 1996.


Figure 8.17 The DSP satellites were usually sent aloft by the various generations of Titan
launchers; this photo depicts the shuttle launch of DSP Flight 16, “DSP Liberty,” launched
by the shuttle Atlantis (STS-44) on November 24, 1991. The shuttle crew deployed the
37,600-pound DSP/IUS stack at 0103 EST.

“The existing four-spacecraft DSP constellation routinely detects,


tracks, and pinpoints the launch location of small ballistic missiles
such as the Scud. DSPs observed about 2,000 Soviet Scud launches
directed at Afghan rebels and another 200 during the Iran/Iraq war.
During the Persian Gulf war, the three DSP spacecraft above the
Indian Ocean and eastern Atlantic region helped cue Patriot missile
batteries, providing 5-min. warning of 88 Iraqi Scud ballistic missile
launches against Israel and Saudi Arabia.
“In addition, the DSP’s infrared tracking of the Scud rocket
plumes was accurate enough to locate the Iraqi launch sites within 2.2
nautical miles. These detailed DSP data were then used to guide Air
Force E-8 Joint-STARS aircraft toward the Scud launch sites for final
launch-site identification and bombing by coalition forces.”20

20. Aviation Week & Space Technology; November 18, 1991; Vol. 135, No. 20; Pg. 65.


Figure 8.18 Observations of two Titan II ICBM test launches. Reprinted with permission
from F. Simmons, Rocket Exhaust Plume Phenomenology, Aerospace Press (2000).

The IR sensor data from the DSP are not currently releasable, though some results have surfaced in studies of natural phenomena such as meteorites. Two illustrations of what can be obtained from these high-temporal-resolution, non-imaging sensors are shown here. IR data from Missile Defense Alarm System (MIDAS) satellite tests are shown in Fig. 8.18. MIDAS was a DSP precursor, conducting tests in the 1960s. The sensors are lead sulfide, with filters designed to limit the response to the water-absorption band (2.65–2.80 μm). The initial increase in radiant intensity is due to the decrease in atmospheric absorption as the rocket rises in the atmosphere; the subsequent decline is due to plume effects in the exhaust.21
The visible sensor data from DSP are illustrated in Fig. 8.19. This plot
shows the energy observed in a bolide, i.e., a meteor impact. The event is
distinguishable both from other natural phenomena (such as lightning) and
the sensor’s primary concern, ICBMs. The calculations for power are based
on an assumption that the target has a temperature of 6000 K. A temperature
assumption is necessary because the broadband sensor only gives power—it is
not known where the peak in the spectrum occurs or even if the source is a
blackbody.
The high temporal resolution of DSP sensors is a domain not normally
exploited in remote sensing. The IR sensors are also able to observe events
like forest fires (Defense Daily, April 29, 1999) and jet aircraft on
afterburners, and they have some capabilities in battlefield damage
assessment (BDA).

21. Simmons (2000).


Figure 8.19 Chart of meteor trace. Reprinted with permission from Tagliaferri et al., “Detection of Meteoroid Impacts by Optical Sensors in Earth Orbit,” pp. 199–220 in Hazards Due to Comets and Asteroids, T. Gehrels, ed. (1994).

8.8 SEBASS: Thermal Spectral


8.8.1 Hard targets
Spectral imagery has been an important tool in the reflective domain; it is now
emerging in the emissive domain. The image in Fig. 8.20 was taken from a
tower during an experiment in Huntsville, Alabama, using the SEBASS22
instrument, which measures radiance as a function of wavelength. The spectra
at three locations are shown in Fig. 8.21. The trees are nominally at air
temperature and have essentially unity emissivity (they are blackbodies). The
tank spectrum has a noticeable depression at about 9 μm, although it is at the same temperature as the early-morning air and the trees. The ground is cooler and darker in this image.
LWIR imagery like that shown here provides the advantage of having a day–night capability. LWIR spectral imagery also has some very useful applications in detecting gases, as shown in the next section.

8.8.2 Gas measurements: Kilauea, Pu‘u ‘O‘o vent23


Volcanoes provide a high-temperature extreme for thermal spectral
measurements. An experimental test flight using the University of Hawaii

22. Spectrally Enhanced Broadband Array Spectrograph System, from the Aerospace
Corporation.
23. These illustrations come from thesis work at the Naval Postgraduate School by Captain
Aimee Mares (USMC).


Figure 8.20 SEBASS data integrated over the LWIR spectral range.

Airborne Hyperspectral Imager (AHI) was conducted over a volcanic vent on the island of Hawaii, providing the montage of images in Figs. 8.22–8.24.
Figure 8.22 shows a visible image of the crater and a coarse view of the
LWIR data, roughly corresponding to a strip through the vent area imaged
on the left. Modeling IR data such as these requires a fairly complex
calculation—most of the essential elements are illustrated in Fig. 8.23.
MODTRAN calculations were performed to define the processes shown
there, and the results were compared to the data. In Fig. 8.24, the smooth
curves are blackbody fits to the long-wavelength data. Deviations from the

Figure 8.21 SEBASS spectra.


Figure 8.22 The left image features a photograph of Kilauea; the right image presents
LWIR data from over the volcano.

Figure 8.23 Modeling the SO2 concentration requires a fair amount of information that
must be estimated or modeled.

smooth curve from 8.0–9.5 μm are due to absorption by sulfur dioxide in a plume emitted by the volcano. The top curve in Fig. 8.24 is based on an estimated ground temperature of 342 K. The other two samples were taken over slightly cooler ground.


Figure 8.24 The spectra are reasonably blackbody above 10 μm, and the background
temperature can be estimated from this portion of the spectrum. At shorter wavelengths, the
SO2 absorbs the upwelling radiation. The SO2 path density can be estimated from the
modeled values for absorption.24

The analysis of the data in Fig. 8.24 showed that AHI observed concentrations of SO2 that could be successfully modeled at a few hundred parts per million (ppm) in a layer estimated to be 150 m thick, at the same temperature as the background air; this corresponds to estimated plume concentrations of 1 to 5 × 10⁴ ppm-m. These values are consistent with those obtained for such phenomena using upward-viewing UV spectrometers under the volcanic plumes.
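The quoted path concentration is simply the layer concentration multiplied by its thickness; a quick check of the arithmetic, assuming the values quoted above:

# Quick consistency check of the quoted plume path concentration (ppm-m):
# path concentration = (volume mixing ratio in ppm) x (path length in m).
layer_thickness_m = 150                   # estimated layer thickness from the text
for concentration_ppm in (100, 300):      # "a few hundred ppm" (assumed range)
    path_ppm_m = concentration_ppm * layer_thickness_m
    print(f"{concentration_ppm} ppm over {layer_thickness_m} m -> {path_ppm_m:.0f} ppm-m")
# 100-300 ppm over 150 m gives 1.5e4 to 4.5e4 ppm-m, consistent with 1-5 x 10^4 ppm-m.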

8.9 Problems
1. At what wavelength does the radiation for targets at 300 K peak? What is
the ratio of the total power-per-unit area emitted by a person (300 K) and
a hot vehicle (1000 K)?
2. What are the tradeoffs between using MWIR (3–5 μm) versus LWIR (8–13 μm)? Consider radiated energy, detector technology (cooling
issues), and Rayleigh criterion concerns.
3. Of the materials in Table 8.2, which will show the largest temperature
fluctuation during a 24-h heating/cooling cycle? Which will show the
smallest?

24. A. G. Mares, R. C. Olsen, and P. G. Lucey, “LWIR Spectral measurements of volcanic sulfur dioxide plumes,” Proc. SPIE 5425, 266–272 (2004).


4. A target has a “real” or kinetic temperature of 513.345 K. It has an emissivity of 0.900. What temperature would be estimated for the target if the (incorrect) assumption is made that it has an emissivity of one?
5. Starting from the Planck equation, what is the radiance in W/(m² sr) for a source at a temperature of 1000 K in the wavelength range from 1–2 μm?
6. For hot rocket exhaust, much of the radiance is in the water absorption bands (hot water molecules emitting in the 2.5–3.0-μm range). For a sensor at geosynchronous orbit (such as DSP), how much power/m² reaches the sensor for a source emitting 10⁶ W/(m² sr)? Assume that the source is directly below the spacecraft, at a 5.6-earth-radii distance, but above the atmosphere (no absorption). Calculate the 1/r² decrease in intensity with distance. For an optic with an area of 50 cm², how much energy is collected in 0.1 s? How many photons would that equal?
7. In Chapter 2, the power radiated by the sun is calculated as P = 3.91 × 10²⁶ W. Following the inverse square law, what is obtained for the “solar constant,” i.e., the irradiance at the top of the earth’s atmosphere? Take the distance to the sun as 150 × 10⁹ m.

Chapter 9
Radio Detection and Ranging
(RADAR)

9.1 Imaging Radar


Radar, specifically imaging radar, represents a powerful tool for remote
sensing that has the advantages of around-the-clock capability (due to
independence from solar illumination) and all-weather performance (due to
cloud penetration). The concept of imaging radar dates to 1951, as defined by
Carl Wiley,1 and practical systems followed fairly shortly thereafter. The first
satellite test was the NRO experimental Quill satellite, launched in 1964, using
the Corona satellite systems.2 Radar can penetrate modest depths into the
earth and allow glimpses below the surface—useful when detecting buried
objects such as pipelines or mines. Imaging radar is an essential tool for
maritime concerns, from tracking ships to tracking sea ice. This chapter
develops the physics and nature of imaging radar.
Figure 9.1 illustrates some of the radar data characteristics of interest.
This image from an airborne sensor is created by a radar that collects data at
two wavelengths, 6 and 24 cm. Shades of yellow and blue differentiate regions
of different surface roughness, and vegetated areas (golf courses and city
parks) show up particularly well.

9.1.1 Imaging radar basics


Radar systems have a unique set of terms, so new definitions are needed
before we proceed. Not all will be used here, but they all show up fairly
regularly in the use of modern systems and in the literature. These definitions

1. Wiley, CA, Synthetic Aperture Radars, A paradigm for technology evolution, IEEE
Transactions on Aerospace and Electronic Systems, Vol. AES-21, No. 3, 1985.
2. Per 2011 declassification guide: http://www.nro.gov/foia/declass/QUILL/33.%20QUILL%
20Declassification%20Guidelines.pdf.


Figure 9.1 Image of San Francisco, California, taken by the JPL AIRSAR (C and L band, VV polarization, 10-m GSD, and aircraft track of 135°) on October 18, 1996 at 71351 seconds GMT.

relate to an imaging platform (aircraft or satellite) and the velocity vector of that platform. Figure 9.2 illustrates this reference vector, which defines the
along-track direction (also the azimuthal direction). The orthogonal direction
is defined as the across-track, or range, direction.
The angles are determined by the flight path of the aircraft or satellite. A horizontal line perpendicular to the flight line allows for the definition of the depression angle β, measured from a horizontal plane downward to the radar beam. The depression angle varies across the image swath, resulting in a small angle for far-range observations and a large angle for near-range observations.
The depression angle would be 90° for a nadir view, although imaging radar cannot view directly below the platform. The look angle θ is the complement of the depression angle (β + θ = 90°), measured from the local vertical at the sensor to the radar beam.
Similar angles are defined with respect to the ground. The incidence angle φ is the angle between the radar beam and a line perpendicular to the local ground surface; for horizontal terrain, the incidence angle equals the look angle (θ = φ). The complement of the incidence angle is called the grazing angle γ. For horizontal terrain, the depression angle equals the grazing angle (β = γ).


Figure 9.2 Definitions of terms for imaging radar. Reprinted with permission from Avery
and Berlin (1992).

Finally, a set of distances is defined:


• The slant range is the line-of-sight distance measured from the antenna
to the ground or target;
• The ground range is the horizontal distance measured along the surface
from the ground track, or nadir, to the target;
• The near range is the area closest to the ground track at which a radar
pulse intercepts the terrain; and
• The far range is the area of pulse termination farthest from ground
track.
The concept of resolution has been developed previously in the context of
optical systems with fairly intuitive meanings. The ground resolution of an
optical system, or GSD, is defined by the optical performance and detector


characteristics. The resolution for radar systems behaves in a somewhat different way, and the definition of resolution, though similar, differs as well.
Radar resolution is defined as the minimum separation between two objects of
equal reflectivity that will enable them to appear individually in a processed
radar image. Within the radar community, the impulse-response function
defines resolution. Radar systems are generally used to distinguish point
targets; the impulse response defines their ability to do so. The peculiar nature
of radar is that the resolution in range and azimuth derive from different
physical processes, and in general they need not be the same. These
relationships are discussed in the following sections.

9.2 Radar Resolution


9.2.1 Range resolution
The range or across-track resolution in slant range Rsr is determined by the physical length of the radar pulse that is emitted from the antenna. This is called the pulse length. The pulse length is obtained by multiplying the pulse duration τ by the speed of light c = 3 × 10⁸ m/s:

pulse length = cτ.   (9.1)

In some radar texts, the pulse length (a distance) may be written as τ, which is more properly a time. In this work, the above definition will be used, but be
aware that this is not a universal choice. For a radar system to discern two
targets in the across-track dimension, all parts of their reflected signals must
be received at the antenna at different times, or they will appear as one large
pulse return or spot in an image. In Fig. 9.3 it is seen that objects separated by a slant-range distance equal to or less than cτ/2 will produce reflections that arrive at the antenna as one continuous pulse, dictating that they be imaged as one large object (targets A, B, and C). If the slant-range separation is greater than cτ/2, then the pulses from targets C and D will not overlap, and their signals will be recorded separately. Thus, the slant-range resolution measured in the across-track dimension is equal to one-half the transmitted pulse length:

Rsr = cτ/2.   (9.2)

To convert Rsr to ground-range resolution Rgr, the formula is

Rgr = cτ / (2 cos β),   (9.3)

where τ is the pulse duration, c is the speed of light, and β is the antenna depression angle. Radar imagery can be processed in both the slant range and
ground range. This is a technical choice that is based, to some extent, on the
imaging problem being addressed.
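As a numerical illustration of Eqs. (9.2) and (9.3) (a minimal sketch; the 0.1-μs pulse and the depression angles are assumed, not taken from a particular system):

# Minimal sketch (assumed values): slant- and ground-range resolution,
# Eqs. (9.2) and (9.3).
import math

c = 3.0e8                      # speed of light, m/s
tau = 0.1e-6                   # assumed pulse duration, 0.1 us
R_sr = c * tau / 2.0           # slant-range resolution, Eq. (9.2)
for beta_deg in (60.0, 30.0):  # near range (large depression angle) vs. far range
    R_gr = c * tau / (2.0 * math.cos(math.radians(beta_deg)))   # Eq. (9.3)
    print(f"beta = {beta_deg:.0f} deg: R_sr = {R_sr:.1f} m, R_gr = {R_gr:.1f} m")
# Ground-range resolution improves (gets smaller) toward far range, where beta is small.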


Figure 9.3 Range resolution is a function of pulse length. Reprinted with permission from
D. J. Barr, “Use of Side-Looking Airborne Radar (SLAR) Imagery for Engineering Soils
Studies,” Technical Report 46-TR, U.S. Army Engineer Topographic Laboratories (1992).

Equation (9.3) shows that the ground-range resolution improves as the distance from the ground track increases (i.e., the across-track resolution is better in far range than in near range because β is smaller there). Resolution can also be improved by shortening the pulse. However, a point will be reached when a drastically shortened pulse will not contain sufficient energy for its echoes to be detected by the receiver. As a practical matter, the minimum pulse duration is 0.05–0.10 μs, corresponding to a 15–30-m pulse length.
A concept closely related to the pulse duration is the pulse repetition frequency (PRF); its inverse is the interval between pulses, which is relatively long compared to the pulse duration. Figure 9.4 illustrates the signals used for the SIR-C X-band radar. The pulse width is 40 μs, and the PRF is 1240–1736 Hz, so the interval between pulses is about 15 times the width of the pulses.

9.2.2 Signal modulation


Our ability to resolve targets is a function of the length of the radar pulse.
However, there is a problem in balancing the need to broadcast sufficient power against the need for fine range resolution. Pulse modulation provides a solution to this dilemma; it relates the shape of a signal in the time domain to
its distribution in the frequency domain. Figure 9.5 illustrates some basic
features of continuous wave and pulsed signals. A very short pulse produces a
wide spectrum of frequencies, whereas a monochromatic signal implies a very
wide pulse.
Simple Fourier analysis theory provides a few important points about
these relationships. In particular, the transform of a square pulse is a sinc
function—the ratio of sin(x) to x. The pulse width in frequency is just the


[Figure 9.4 plot: ground-receiver ADC output vs. sample number, showing four 40-μs pulses spaced about 1/1500 s apart.]

Figure 9.4 SIR-C X-SAR pulses recorded by a ground calibration receiver, sampling at 4 μs (250 kHz). The slight variation in power seen over this interval is due to the progress of the shuttle over the ground site. Without signal shaping, the best range resolution that could be obtained from such pulses would be 6 km. The resolution obtained via the techniques described in Section 9.2.2 is 25 m, corresponding to the 10- or 20-MHz bandwidth.

[Figure 9.5 diagram: a pulse of duration τ in the time domain and its spectrum of width 1/τ centered at fo in the frequency domain.]

Figure 9.5 Continuous wave and pulsed signals. The bandwidth = 1/(pulse length). Reprinted with permission from Elachi 2006, p. 229.

inverse of the pulse width in time, as indicated in the bottom half of Fig. 9.5.
For a square pulse modulated by a carrier frequency, the center of the sinc
function is shifted, but otherwise the shape of that function is unchanged.
This concept allows a slightly different definition of range resolution, as
follows:
Δrange = cτ/2 = c/(2B),   (9.4)

where τ is the pulse duration, the bandwidth B is the inverse of τ, and c is the speed of light. This definition, purely formal at first, provides a simple way of
understanding the next step, which is to modulate the frequency of the pulse.


[Figure 9.6 diagram: transmitted pulse shown vs. time and vs. frequency, with the frequency sweeping over a band Δf.]

Figure 9.6 A pulse varies linearly in frequency from fo to fo + Δf. The power is then localized in frequency space (bandwidth).

The concept of frequency modulating the pulse, now termed “FM chirp,”
was devised by Suntharalingam Gnanalingam while at Cambridge after
World War II (1954).3 The technique was developed to study the ionosphere.
Figure 9.6 illustrates the chirp concept—the pulse is modulated with a
frequency that increases linearly with time. The value of this is that objects
that are illuminated by the pulse can still be distinguished by the difference in
frequencies of the returns, even if they overlap in time. Gnanalingam realized
that the transform of a chirped pulse would have a bandwidth in frequency
space that was defined by the frequency range of the modulation. Without
proof of any form, it is asserted here that by analogy to a finite pulse of
constant frequency, the bandwidth (1/τ) is replaced by the range of the frequency sweep (Δf). As a result, the effective spatial resolution is

Δrange = c/(2Δf),   (9.5)

which allows significantly better spatial resolution because the frequency sweep can be much larger than the inverse of the pulse width. (For example, on the SIR-C X-band system, the 9.61-GHz signal had a 9.5-MHz chirp bandwidth, or about a 15-m range resolution.)
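A minimal sketch comparing the two cases, using the SIR-C X-band numbers quoted above (40-μs pulse, 9.5-MHz chirp):

# Minimal sketch: range resolution of an unmodulated pulse, Eq. (9.4),
# vs. a chirped pulse of the same duration, Eq. (9.5).
c = 3.0e8            # speed of light, m/s
tau = 40e-6          # SIR-C X-band pulse duration, 40 us
chirp_bw = 9.5e6     # chirp bandwidth, 9.5 MHz
res_unmodulated = c * tau / 2.0        # Eq. (9.4) with B = 1/tau
res_chirped = c / (2.0 * chirp_bw)     # Eq. (9.5): bandwidth set by the sweep
print(f"unmodulated 40-us pulse: {res_unmodulated / 1000:.0f} km")   # ~6 km
print(f"chirped pulse (9.5-MHz sweep): {res_chirped:.0f} m")         # ~16 m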

9.2.3 Azimuth resolution


Azimuth or along-track resolution Ra is determined by the width of the terrain
strip illuminated by a radar pulse, which is a function of the beam width of a
real aperture radar (RAR). As shown in Fig. 9.7, the beamwidth increases
with range. Thus, two tank-like objects (at the same range) are in the beam

3. Gnanalingam, S., “An Apparatus for the Detection of Weak Ionospheric Echoes”, Proc.
IEE, Part III, Vol. 101, pp. 243–248, 1954. The argument for the bandwidth (BW)
determining the range resolution is obtained by inference. The result is rigorously obtained
by R. J. Sullivan, Microwave Radar, Imaging and Advanced Concepts, 2000.


Figure 9.7 Imaging radar-beam illumination.

simultaneously, and their echoes will be received at the same time.


Consequently, they will appear as one extended object in an image. Two
other objects, an A7 jet and T72 tank, are located outside the beam width as
shown in Fig. 9.7. Because a distance greater than the beam width separates
them, their returns will be received and recorded separately. Thus, to separate
two objects in the along-track direction, it is necessary that their separation on
the ground be greater than the width of the radar beam. What determines the
beam width? Basically, the length of the antenna.

9.2.4 Beam pattern and resolution


The nature of the antenna-radiation pattern must briefly be examined in order to understand the resolution characteristics of an imaging radar system.
Recalling the original discussion of the Rayleigh criterion (Chapter 3), the
same applies for a rectangular or cylindrical radar antenna. The beam pattern
is derived in Appendix 1.3. Here, we simply claim the result, which is that the
beam pattern is the Fourier transform of the aperture. For a square aperture,
the beam pattern is the square of the sinc function.
Figure 9.8 shows the beam pattern for a square aperture, and illustrates the function (sin α/α)², where α = (kL sin θ)/2, L is the length of the antenna, and k is the wavenumber. This function has zeros where the argument of the sine is mπ, or


Figure 9.8 The square of the sinc function: (sin α/α)².

kL sin θ/2 = mπ ⇒ kL sin θ = 2mπ,   (9.6)

which leads to

(2π/λ) L sin θ = 2mπ ⇒ L sin θ = mλ,

or, for the first null (m = 1),

sin θ = λ/L.   (9.7)
This equation is effectively the same result given for the resolution of
diffraction-limited optics. This is not an accident. It follows from the
derivation of the far field (range ≫ aperture size and wavelength) behavior of
any aperture.
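A short numerical sketch of this beam pattern (the antenna length and wavelength below are assumed, chosen to be SIR-C-like) evaluates the sinc² function of Eq. (9.6) and the first-null position from Eq. (9.7):

# Minimal sketch (assumed L-band, 12-m antenna): sinc^2 beam pattern and its
# first null, per Eqs. (9.6)-(9.7).
import math

wavelength = 0.235    # L-band, m
L = 12.0              # antenna length, m
k = 2.0 * math.pi / wavelength

def pattern(theta_rad):
    """Normalized beam pattern, (sin(alpha)/alpha)^2, with alpha = k L sin(theta)/2."""
    alpha = k * L * math.sin(theta_rad) / 2.0
    return 1.0 if alpha == 0.0 else (math.sin(alpha) / alpha) ** 2

first_null = math.asin(wavelength / L)     # Eq. (9.7) with m = 1
print(f"first null at {math.degrees(first_null):.2f} deg")
for theta_deg in (0.0, 0.5, 1.0, 1.5):
    print(f"theta = {theta_deg:.1f} deg: pattern = {pattern(math.radians(theta_deg)):.3f}")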
The equation for determining azimuth resolution Ra is

Ra = λRs / L,   (9.8)

where λ is the operating wavelength, Rs is the slant range to the target, and L is the length of the antenna.
The relationship expressed in Eq. (9.8) shows that azimuth resolution degrades in proportion to increasing range (i.e., resolution is best in near range, where the width of the beam is narrowest), and that a long antenna or a short operating wavelength will improve azimuth resolution. This concept


[Figure 9.9 plot: X-SAR azimuth antenna pattern; relative power (dB) vs. azimuth angle (degrees), data of 02 October 1994.]

Figure 9.9 SIR-C X-SAR azimuthal antenna pattern as observed from ground observations along the beam centerline (the center of the range antenna pattern).4 The vertical axis in this figure is implicitly logarithmic (being in decibels); this allows the side lobes (secondary maxima) to be visible in this plot. Figure 9.10 uses a vertical axis that is truly linear.

applies to all radar, but in particular to RAR, where the practical limit of
antenna length for aircraft stability is 5 m, and the all-weather capability of
radar is effectively reduced when the wavelength is decreased below about
3 cm. Because of these limitations, RARs are best suited for low-level, short-
range operations.
The resolution of a real-aperture imaging radar in the along-track direction given earlier can be rewritten as

Ra = λh / (L cos θ).   (9.9)

Typical spaceborne numbers (antenna length L = 10 m, λ = 3 cm, θ = 20°, and h = 800 km) produce a resolution of 2.5 km. It seems that a bigger antenna is necessary.
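The same arithmetic, written out as a minimal sketch of Eq. (9.9) with the numbers just quoted:

# Minimal sketch: real-aperture azimuth resolution, Eq. (9.9), for the
# representative spaceborne numbers quoted above.
import math

def rar_azimuth_resolution(wavelength, antenna_length, altitude, look_angle_deg):
    """Ra = lambda * h / (L * cos(theta)), Eq. (9.9)."""
    return (wavelength * altitude /
            (antenna_length * math.cos(math.radians(look_angle_deg))))

Ra = rar_azimuth_resolution(wavelength=0.03, antenna_length=10.0,
                            altitude=800e3, look_angle_deg=20.0)
print(f"real-aperture azimuth resolution: {Ra / 1000:.2f} km")   # ~2.5 km, as in the text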
Before proceeding to the method by which a long synthetic antenna can be
constructed, this section briefly looks at some observed antenna patterns,
again from the SIR-C X-SAR instrument. Figure 9.9 shows what certainly
appears to be a sinc² pattern per the theory described earlier (the use of dB as a unit means that there is an implicitly logarithmic vertical axis). A companion figure, Fig. 9.10, shows the data on a linear vertical scale (bottom) and divided by the expected factor of sinc²(θ/0.151°) (top). Here, the model is for the 3-cm (9.6-GHz) waves, given the 12-m antenna length (0.151° = λ/L).

4. M. Zink and R. Bamler, “X-SAR Calibration and Data Quality,” IEEE TGARS 33(4),
pp. 840–847 (1995).


Figure 9.10 SIR-C X-SAR azimuthal antenna pattern on a linear scale and compared to sinc². This portion of the processed data comes from the region between the dashed lines (±0.07°) for which the model is very accurate.4

Figure 9.11 SIR-C X-SAR range antenna pattern. This beam pattern needs to cover the
entire cross-track range of the system, e.g., 20–70 km, as illuminated from a 222-km
altitude.5

For comparison, the range (or elevation) antenna pattern, synthesized from a number of observations like those in Fig. 9.9, is given in Fig. 9.11. The
width is considerably greater because of the relatively narrow dimension of
the antenna in the corresponding direction (0.75 m).

5. M. Zink and R. Bamler, “X-SAR Calibration and Data Quality,” IEEE TGARS, Vol. 33,
No. 4, July 1995, pp. 840–847.


9.2.5 Synthetic-aperture radar


The principal disadvantage of real aperture radar is that its along-track or
azimuth resolution is limited by antenna length. Synthetic aperture radar
(SAR) was developed to overcome this disadvantage. SAR produces a very
long antenna synthetically or artificially by using the forward motion of the
platform to carry a relatively short real antenna to successive positions along
the flight line. The longer antenna is simulated by exploiting the coherence
of radar signals. If the sensor is moving at velocity V and has an antenna
length L, then the main beam footprint on the surface has a characteristic
length

l = 2λh/L.   (9.10)

Data are accumulated for as long as a given point on the ground is in view (see
Fig. 9.12).

Figure 9.12 The ship is illuminated by the radar for a time interval that depends on the
altitude of the radar and the beamwidth.


The synthesized collection of measurements will have a beamwidth equal to

θs = λ/l = L/(2h),   (9.11)

and the resulting array footprint on the ground has the size

Ra = hθs = L/2.   (9.12)
This very counter-intuitive result is due to the fact that for a smaller antenna
(small L), the target is in the beam for a longer time. The time period that an
object is illuminated increases with increasing range, so the azimuthal
resolution is range independent.
The discussion in this section is correct for “scan-mode” SAR, where the
antenna orientation is fixed. If the antenna is rotated (physically or
electronically) in such a way as to continuously illuminate the target, a third
result is obtained (Fig. 9.13). In spotlight mode, radar energy is returned from
the target for an interval defined by the operator, which simulates an
arbitrarily large antenna. For example, if the shuttle radar illuminated a target
for 10 s, the effective antenna length would be some 75 km:

Figure 9.13 The synthetic antenna’s length is directly proportional to range: as the across-
track distance increases, so does the antenna length. This behavior produces a synthetic
beam with a constant width regardless of range for scan-mode SAR. Reprinted with
permission from Lockheed Martin Corporation (from original by Goodyear Aerospace Corp.).


Ra = (λ/Leff) h,  where Leff = vplatform · Tobserve.   (9.13)
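A minimal sketch of the spotlight example above (the platform speed, altitude, and X-band wavelength are assumed shuttle-like values):

# Minimal sketch (assumed shuttle-like values): spotlight-mode azimuth
# resolution from Eq. (9.13) for the 10-s illumination example above.
wavelength = 0.031      # X-band, m
v_platform = 7.5e3      # platform speed, m/s
t_observe = 10.0        # illumination time, s
altitude = 222e3        # m
L_eff = v_platform * t_observe          # effective synthetic antenna, ~75 km
Ra = wavelength * altitude / L_eff      # Eq. (9.13)
print(f"effective antenna length: {L_eff / 1000:.0f} km")
print(f"spotlight azimuth resolution: {Ra * 100:.0f} cm")   # on the order of 10 cm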

In processing, the azimuth details are determined by establishing the position-dependent frequency changes or shifts in the echoes that are caused by the
relative motion between terrain objects and the platform. To do this, a SAR
system must unravel the complex echo history for a ground feature from each
of a multitude of antenna positions. For example, if a single ground feature is
isolated, the following frequency modulations occur as a consequence of the
forward motion of the platform:
• The feature enters the beam ahead of the platform, and its echoes are
shifted to higher frequencies (positive Doppler).
• When the platform is perpendicular to the feature’s position, there is no
shift in frequency (zero Doppler).
• As the platform moves away from the feature, the echoes have lower
frequencies (negative Doppler) than the transmitted signal.
The Doppler-shift information is then obtained by electronically comparing
the reflected signals from a given feature with a reference signal that
incorporates the same frequency of the transmitted pulse. The output is
known as a phase history, and it contains a record of the Doppler frequency
changes plus the amplitude of the returns from each ground feature as it
passed through the beam of the moving antenna. A consequence of this
analysis is that objects that are moving will appear to be shifted in the along-
track direction.
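The sign behavior in the list above can be sketched numerically using the standard two-way radar Doppler relation fD = 2 vradial/λ (not derived in the text); the geometry and values below are assumed purely for illustration.

# Illustrative sketch (assumed geometry and values): sign of the Doppler shift
# as the platform passes a fixed ground feature, using f_D = 2 v_radial / lambda.
import math

wavelength = 0.235    # L-band, m
v = 7.5e3             # platform speed, m/s
R0 = 300e3            # slant range at closest approach, m
for x in (-20e3, -10e3, 0.0, 10e3, 20e3):   # platform along-track offset, m
    R = math.hypot(R0, x)
    v_radial = -v * x / R                   # closing speed toward the feature
    f_doppler = 2.0 * v_radial / wavelength
    print(f"offset {x / 1000:+6.0f} km: Doppler shift {f_doppler:+9.1f} Hz")
# Positive Doppler while approaching, zero at broadside, negative while receding.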

9.3 Radar Cross-Section σ and Polarization6


The amount of energy reflected by a target is defined by the cross-section—
functionally, an effective area for the target, modulated by the wavelength. As
a practical matter, the cross-section depends a great deal on the shape of the
target, the material, and surface roughness. The prototypical ideal target is a
smooth metal sphere, which has a radar cross-section that is just the projected area (πR²) in the “optical limit,” i.e., when the wavelength is much smaller than the target. In this case, energy is reflected or scattered away from the sphere equally in all directions (isotropic scattering).
A dimensionless form is preferred for some applications. The cross-section is then defined as the ratio of the backscattered energy to the energy that the sensor would have received if the target surface had scattered the incident energy isotropically. The backscatter cross-section is then expressed in dB (decibels), given by σ = 10 log(energy ratio).

6. M. Skolnik, Introduction to Radar Systems, 3rd Edition (2001).


The measured cross-section σ depends on any number of surface characteristics, with a strong dependence on the incident angle and scattering angle. At scattering angles larger than 30°, the surface scattering is dominated by the effect of small-scale roughness (small compared to the wavelength). The “point-scatterer” model is invoked here, and the small scatterers are assumed to make up a Lambertian distribution (an optics term) that applies to rough surfaces as

σ(θ) ∝ cos²θ.   (9.14)

This Lambertian pattern holds for “rough” surfaces, which gives some idea of the type of functional dependence on angle that one might obtain. The details can be much more complicated, as illustrated by Skolnik.6
The discussion in this section so far has ignored the important topic of
polarization. Radar transmissions are polarized, with components nor-
mally termed vertical (V) and horizontal (H). Vertical means that the
electric vector is in the plane of incidence; horizontal means the electric
vector is perpendicular to the plane of incidence. The receiving antenna can
be selected for either V or H returns, which leads to a possible matrix of return values, so that the cross-section σ is really a tensor:

    σ = [ σHH  σHV
          σVH  σVV ].

The first subscript for each tensor element is determined by the transmit
state and the second by the receive state. These four complex (amplitude
and phase) components of the scattering matrix give a wealth of
information, much more than can be obtained from an optical system.
Generally speaking, scatterers that are aligned along the direction of
polarization give higher returns, and rough surfaces produce the cross-
terms. Water gives almost zero scattering in the cross terms, whereas
vegetation gives a relatively large cross-term.

9.4 Radar Range Equation


No discussion of radar would be complete without some consideration of
the radar range equation. The ability of radar to detect a target depends on
the transmitted power, the range, the radar cross-section of the target,
and the antenna gain (area). The dependence on range for a fully
illuminated target decreases as R⁻² for the illumination, combined with a factor of R⁻² for the returning signal, for a net R⁻⁴ dependence. The resulting formula is


Preceived = Ptransmitted · [Gantenna / (4πRrange²)] · σ · [Aantenna / (4πRrange²)]
          = Ptransmitted · [1 / (4πRrange²)]² · Gantenna Aantenna σ,   (9.15)

where Preceived is the received power, Ptransmitted is the transmitted power, σ is the radar cross-section (area), Aantenna is the antenna area, and Gantenna is the antenna gain (dimensionless but proportional to antenna area).
There are a number of physical terms buried in the antenna gain and
cross-section not developed here. The antenna gain is proportional to the area,
inversely proportional to the square of the wavelength (due to the beam
pattern), and is dimensionless.7
The maximum antenna gain is defined by the physical area of the antenna
A and the wavelength:
Gantenna = 4πA/λ².   (9.16)
This term is representative of the beam pattern (as seen previously in the
Rayleigh pattern in Figs. 9.8 and 9.9).8 It approximates the ratio of energy on
a target for a given antenna, as compared to what is observed for an isotropic
radiator.
There are a number of variations of the range equation, designed to emphasize different elements, particularly the antenna gain. Here, the form is chosen to emphasize that one of the limiting factors for space systems, in particular, is the R⁻⁴ dependence of the signal on range. This dependence places a fairly effective limit on the altitudes for radar satellites.
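As a rough numerical illustration of Eqs. (9.15) and (9.16) (all values below are assumed and do not describe any particular system):

# Rough illustration (assumed values): received power from the radar range
# equation, Eq. (9.15), with gain from Eq. (9.16), showing the R^-4 dependence.
import math

wavelength = 0.235          # L-band, m (assumed)
P_tx = 1.0e3                # transmitted peak power, W (assumed)
A_antenna = 10.0 * 2.0      # antenna area, m^2 (assumed)
sigma = 10.0                # target radar cross-section, m^2 (assumed)
G_antenna = 4.0 * math.pi * A_antenna / wavelength**2   # Eq. (9.16)
for R in (400e3, 800e3):    # slant range, m
    P_rx = P_tx * G_antenna * A_antenna * sigma / (4.0 * math.pi * R**2) ** 2
    print(f"R = {R / 1000:.0f} km: received power = {P_rx:.2e} W")
# Doubling the range cuts the received power by a factor of 16 (the R^-4 dependence).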

9.5 Wavelength9
Choices for radar wavelength vary according to the goals of the system.
Table 9.1 lists most of the standard wavelength ranges and designations for
imaging radar systems. The variations in wavelength affect the behavior and
performance for imaging systems, with shorter wavelengths providing the
opportunity for a higher spatial resolution. In general, of course, radar
penetrates clouds, smoke, rain, and haze. There is some wavelength
dependence for rain penetration: at 15-cm wavelengths and longer (2 GHz

7. Antenna gain can be quite large: the massive Cassegrain antenna at Arecibo, with a diameter
of 305 m, has a gain of 1–2  106 at 2.4 GHz. It has been used to image the surface of
Mercury. C. Drentea, Modern Communications Receiver Design and Technology, p. 369,
Artech House (2010).
8. M. Skolnik, Introduction to Radar Systems, 3rd Edition (2001).
9. RADARSAT/PCI notes, pages 11 and 43.


Table 9.1 Standard radar wavelength bands.

Band Designation       Wavelength (cm)    Frequency (GHz)
Ka (0.86 cm)           0.8–1.1            40.0–26.5
K                      1.1–1.7            26.5–18.0
Ku                     1.7–2.4            18.0–12.5
X (3.0 and 3.2 cm)     2.4–3.8            12.5–8.0
C                      3.8–7.5            8.0–4.0
S                      7.5–15.0           4.0–2.0
L (23.5 and 25 cm)     15.0–30.0          2.0–1.0
P                      30.0–100.0         1.0–0.3

Wavelengths commonly used in imaging radars are indicated in parentheses.

and below), rain is not a problem. At 5 GHz (6 cm), significant rain shadows
are seen. At 36 GHz (0.8 cm), moderate rainfall rates can cause significant
attenuation. Foliage penetration is enhanced with longer wavelengths. Shorter
wavelengths (X and C band) primarily interact with the surface, while longer
wavelengths (L and P band) penetrate forest canopies and soil. The current generation of operational radar systems has primarily operated at X-, C-, and L-band. The Ku- and P-bands have primarily been used on airborne systems.

9.6 SAR Image Elements


The strength of the radar return varies within a scene according to the radar
cross section, as described in Section 9.3. The other essential elements in the
formation of a SAR image are the index of refraction (dielectric constant) and
the surface roughness.

9.6.1 Dielectric constant: soil moisture


The amplitude of the radar return depends strongly on the dielectric constant of the surface material (think index of refraction in normal optics, n ≈ √εr; the index of refraction varies as the square root of the relative dielectric constant). Radar returns vary substantially as the surface materials go from insulator to conductor, which in the dielectric constant shows up as the imaginary component ε″. Figure 9.14 illustrates that soil moisture increases the imaginary term, causing increased absorption of radar energy. The wavelength dependence of this variation means that higher frequencies (shorter wavelengths) are affected more; thus, lower frequencies are better for penetrating ground and foliage.
Backscatter is also sensitive to an illuminated area’s dielectric properties,
including water content. Wetter objects will appear bright, and drier objects
will appear dark. The exception is a smooth body of water, which will act as a
flat surface and reflect incoming pulses away from a mapping area. These


Figure 9.14 The real ε′ and imaginary ε″ components of the dielectric constant for a silty loam mixture, as a function of water content. Absorption increases with moisture content. Reflection (scattering) will increase as ε′ increases. Reprinted with permission from Ulaby et al. 2006.10

bodies will appear dark. (For reference, a microwave oven works at a nominal
frequency of 2.45 GHz, or λ ≈ 12 cm.)
The ability of radar to penetrate dry soil is apparent in a variety of desert
observations. Figure 9.15 depicts ancient riverbeds under the eastern Sahara
sand. The location is the Selima Sand Sheet region in northwestern Sudan. A
50-km-wide path from the Shuttle Imaging Radar (SIR-A) mission over the
Sahara is shown superimposed on a Landsat image of the same area. The
radar penetrated 1–4 m beneath the desert sand to reveal subsurface
prehistoric river systems invisible on the Landsat image. The soil must be

10. Ulaby, Moore, & Fung, Microwave Remote Sensing, Active and Passive, Volume III,
Artech House, 1986, p. 2096. Data from Hallikainen et al., IEEE TGRS, GE-23, #1, 1985.


very dry (less than 1% water content), fine grained (small compared to the
radar wavelength), and homogeneous.11 The idea followed from a suggestion
by Charles Elachi in 1975.12

Figure 9.15 SIR-A observations of subsurface geological structures. The diagonal stripe is
the SIR-A data, and the orange background is the Landsat (visible) image. Work by Victor R.
Baker and Charles Elachi.13

11. F. El-Baz, C. A. Robinson, and T. S. S. Al-Saud, “Radar Images and Geoarchaeology of the Eastern Sahara,” in Remote Sensing in Archaeology, J. Wiseman and F. El-Baz, eds. (2007); J. F. McCauley et al., “Subsurface valleys and geoarcheology of the Eastern Sahara revealed by Shuttle radar,” Science 218, 1004–1020 (1982).
12. L. E. Roth and C. Elachi, “Coherent electromagnetic losses by scattering from volume
inhomogeneties,” IEEE Transactions on Antennas and Propagation (1975); C. Elachi, L. E.
Roth, and G. G. Schaber, “Space-borne radar subsurface imaging in hyperarid regions,”
IEEE Trans. Geosci. Remote Sensing GE-22, 383–388 (1984).
13. Selima Sand Sheet, “Geomorphology from Space: A Global Overview of Regional
Landforms,” NASA, N. M. Short, Sr. and R. W. Blair, Jr., eds., http://disc.sci.gsfc.nasa.
gov/geomorphology (1986); figure online at: http://disc.sci.gsfc.nasa.gov/geomorphology/
GEO_1/GEO_PLATE_I-3.shtml.


Figure 9.16 The concept of rough and smooth must take into account the wavelength of
the radiation. Reprinted with permission from Lockheed Martin Corporation (from original by
Goodyear Aerospace Corp.).

9.6.2 Roughness
The effect of surface roughness is illustrated by Fig. 9.16.14 The figure is
somewhat schematic, but it emphasizes the variation in radar return with angle
and surface roughness. Roughness is relative to wavelength, so “smooth” means
surfaces like concrete walls (e.g., cultural objects), and “rough” tends to mean
things like vegetation.
The rule of thumb in radar imaging is that the brighter the backscatter on
the image is, the rougher the surface being imaged. Flat surfaces that reflect
little or no microwave energy appear dark in radar images. Vegetation is
usually moderately rough on the scale of most radar wavelengths and appears
gray or light gray in a radar image. Surfaces inclined toward the radar will
have a stronger backscatter than surfaces that slope away.

9.6.3 Tetrahedrons/corner reflectors: the cardinal effect


Hard targets, i.e., most man-made objects, generally have sharp corners and
flat surfaces. These features produce extremely bright returns, often saturating
the images produced from SAR systems. The SIR-C flight over Death Valley
included calibration sequences with retro-reflectors, which characteristically
return all incident radiation back in the direction of incidence—that is, they
are nearly perfect reflectors. Figure 9.17 shows some SIR-C observations
rendered in line graphics. In the SAR images, these observations correspond
to a small white dot against the relatively flat background.

14. Elachi (p. 174) and Sabins (pp. 197–201).


Figure 9.17 SAR impulse response, Death Valley, CA, and retroreflectors. Image courtesy of
NASA.

The concept exploited in the calibration experiments shown here appears again in imagery taken over urban areas, where the cardinal effect causes very bright returns from portions of the scene. Figure 9.18 shows the cardinal effect in SIR-C imagery of Los Angeles.

9.7 Problems
1. For a spotlight-mode SAR system, what azimuthal resolution could
be obtained with the X-band for a 10-s integration interval [assume that
v ¼ 7.5 km/s and take the range (altitude) to be 800 km]?
2. The intensity pattern (the square of the electric-field amplitude) for a 1D aperture is given by

   intensity(θ) = [sin(kL sinθ/2) / (kL sinθ/2)]²


Figure 9.18 SIR-C/X-SAR image of Los Angeles, CA on October 3, 1994. Shuttle Imaging
Radar data are displayed: C-Band/HV (red), C-Band/HH (green), and L-Band/HH (blue). The
large cyan area at the top is the city of San Fernando, bounded by Interstates 5 and 210,
with streets largely parallel and perpendicular to those freeways. These are, in turn, roughly
parallel to the STS-68 flight line because the shuttle flew diagonally through the scene to the
east (here oriented with north as up). In a similar way, the City of Santa Monica is oriented by
the direction defined by the coastline in that area. Buildings act like corner reflectors in the
four “cardinal” directions and give strong returns in the co-polarized C- and L-band data. The
reddish regions, most noticeable NW of Santa Monica, are defined by multiple scattering
caused by rough surfaces and vegetation, and relatively higher scattering of energy into the
cross-polarized receiver (sVH in Section 9.2).

(see Appendix 1 for derivation). The zeros of this equation then define the beam pattern, as shown in Fig. 9.8. Plot this function for an L-band antenna (λ = 24 cm). Take the antenna length to be 15 m [L = 15 m, k = 2π/λ = 2π/(0.23 m)] and plot for an angular range of θ = 0–0.05 radians (3°). At what values of θ do the first few zeros occur?


[Figure 9.19 plot: SIR-B azimuth pattern; received voltage (mV) vs. time (s); half-power width 0.63 s; orbit 97.2, 10/11/84, ARC #120, PtGt = 86.72 dBm; sidelobes at −10 and −12 dB.]

Figure 9.19 SIR-B azimuth (along-track) antenna pattern. Image reprinted with permission
of Dobson et al., “External Calibration of SIR-B Imagery,” IEEE TGRS (July 1986).

3. For the conditions illustrated in Fig. 9.11, the shuttle was at a 222-km
altitude, and the antenna (shuttle) attitude was 27.1°. To what range does
the 27.1° ± 3° angular range (measured from nadir) correspond?
4. During the SIR-B flight, observations similar to those shown in Figs. 9.9
to 9.11 were made. Figure 9.19 shows the intensity as a function of time.
Given a vehicle velocity of 7.5 km/s, convert the variations in time displayed
here into a beam width in degrees. The wavelength is 23.5 cm. The local
angle of incidence is 31°. (The incidence angle is measured down from the
vertical.) Additional information is given in Table 9.2. What is the antenna
length implied by this antenna pattern?
5. Why would a radar satellite not be viable in a geostationary orbit?
6. Estimate the power that would be needed for a radar satellite orbiting at
an altitude of one earth radius by extrapolation from the SIR-B
parameters in Table 9.2, assuming all other parameters are kept
constant.
7. Estimate the imaging time that would be required for a radar satellite with
an altitude of one earth radius to obtain an azimuthal resolution of 1 m.


Figure 9.20 ALOS dimensions and schematic.

Table 9.2 SIR-B mission parameters.

Shuttle Orbital Altitudes (360, 257), 224 km


Shuttle Orbital Inclination 57°
Mission Length 8.3 days
Radar Frequency 1.275 GHz (L-band)
Radar Wavelength 23.5 cm
System Bandwidth 12 MHz
Range Resolution 58–16 m
Azimuth Resolution 20–30 m (4-look)
Swath Width 20–40 km
Antenna Dimensions 10.7 m  2.16 m
Antenna Look Angle 15–65° from vertical
Polarization HH
Transmitted Pulse Length 30.4 μs
Minimum Peak Power 1.12 kW

Assume a slant angle of 45°. How far has the satellite flown in this time?
(Hint: It is moving slower than 7.5 km/s.)
8. The Japanese satellite ALOS (Fig. 9.20) carried the L-band (23.6 cm)
PALSAR radar system. The antenna was 3.1 m × 8.9 m in size (the long
dimension was along track). Estimate the size of the projected ellipse for
an incidence angle of 45°. The satellite altitude was 570 km.15

15. http://www.eorc.jaxa.jp/ALOS/en/about/palsar.htm.

Chapter 10
Radar Systems and
Applications

Figure 10.0 These data were acquired on October 3, 1994 by the Spaceborne Imaging
Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour (Image P48773). The image here is a subset of the scene, rotated so that north is roughly up.
L-band and C-band data are shown. Two different polarization results are combined here:
horizontal transmit/horizontal receive (HH) and horizontal transmit/vertical receive (HV).

The foundations for radar imaging were established in the previous chapter.
This chapter examines some of the imaging radar systems used over the last
20 years, the types of imaging products they have produced, and some of the
non-literal analysis techniques that make use of interferometry. The Shuttle


Imaging Radar (SIR) is used for illustration first because it is the only system
to fly in space with multiple wavelengths and the first to offer multiple
polarizations (as in Fig. 10.0).

10.1 Shuttle Imaging Radar


The SIR has flown in several versions (A, B, and C). The C payload flew twice as
an imaging mission in 1994. The SIR-C missions were carried by the Endeavour
orbiter, the first as SRL-1: STS-59, on April 9–20, 1994, and the second as
SRL-2: STS-68, on September 30–October 11, 1994. Both missions were
conducted with highly inclined (57°), 222-km-altitude, circular orbits. SIR-C
included X-, C-, and L-band radar (Fig. 10.1) and was capable of various modes,
including full polarimetric (VV, VH, HV, and HH polarization). SIR-C flew a
third time as the Shuttle Radar Topography Mission (SRTM) on February 11,
2000 (STS-99). C-band and X-band data were taken on the SRTM flight.
Spatial resolution varied among the sensors, and with operating
mode, but was generally from 10–25 m, with a spatial extent of 30–50 km.
The combined SIR-C/X-SAR payload had a mass (instruments, antennas, and
electronics) of 10,500 kg, filling nearly the entire cargo bay of the shuttle. See
Table 10.1 for their parameters.
Enormous data rates are implicit in these missions. Launched April 9,
1994, the STS-59 mission for SIR-C/X-SAR collected a total of 65 h of data
during the 10-day mission, corresponding to roughly 66 million square
kilometers. All data were stored onboard the shuttle using high-density,
digital, rotary-head tape recorders. The data filled 166 digital-tape cartridges
(similar to VCR tapes).
The mission returned 47 terabits of data (47 × 10¹² bits). When all radars
are operating, they produce 225 million bits of data per second. The raw data
were processed into images using JPL’s digital SAR processor and by
processors developed by Germany and Italy for the X-SAR data.
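As a quick consistency check (a sketch that assumes the 225-Mbit/s aggregate rate applied throughout the 65 h of collection):

# Quick consistency check of the quoted SIR-C/X-SAR data volume, assuming the
# 225-Mbit/s aggregate rate applies over the full 65 h of collection.
rate_bits_per_s = 225e6     # quoted aggregate rate, bits/s
collection_hours = 65       # quoted collection time
total_bits = rate_bits_per_s * collection_hours * 3600
print(f"{total_bits:.2e} bits (~{total_bits / 1e12:.0f} terabits)")
# ~5.3e13 bits, i.e. ~53 terabits: the same order as the 47 terabits reported.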
The L- and C-band SARs (Fig. 10.2) allow multi-frequency and multi-
polarization measurements. Parallel image generation can be done in L-band
and C-band with HH, VV, HV, or VH polarization. The look angle was
variable from 20–55°. The data rate was 90 Mbit/s for L-band and 90 Mbit/s
for C-band (a total of four streams of V and H data, where each data stream
has a data rate of 45 Mbit/s).
The X-band SAR was provided by DARA/DLR and ASI. The X-SAR uses only vertical polarization (VV). The look angle (off nadir) varied from 15–55°, and the data rate was again 45 Mbit/s. The ground area illuminated by the antenna is an ellipse of about 60 km × 0.8 km (for an altitude of 222 km). The traveling-wave tube (TWT) amplifier generates 1736 pulses/s at a peak transmit power of 3.35 kW; the pulses are frequency modulated (chirp) with a pulse length of 40 μs and a programmable


Figure 10.1 The X-, C-, and L-band antennas are all 12 m in length. In width, the X-band
(at the bottom of the figure) is 0.4 m wide, the L-band is 2.95 m, and the C-band panel is
0.75 m. The width values follow the same proportions as the wavelengths (X:C:L::3:6:24).

Table 10.1 Shuttle imaging radar technical specifications.


Parameter L-Band Antenna C-Band Antenna X-Band Antenna

Wavelength (cm) 23.5 5.8 3.1


Frequency 1.250 GHz 5.3 GHz 9.6 GHz
Aperture length 12.0 m 12.0 m 12.0 m
Aperture width 2.95 m 0.75 m 0.4 m
Architecture Active phased array Slotted waveguide
Polarization H and V H and V V
Antenna gain 36.4 dB 42.7 dB 44.5 dB
Mechanical steering range N/A N/A ±23°
Electronic steering range ±20° ±20° N/A
Elevation beam width 5–16° 5–16° 5.5°
Azimuth beam width 1.0° 0.25° 0.14°
Peak radiated power 4400 W 1200 W 3350 W
Mass of structure 3300 kg 49 kg


Figure 10.2 (a) The phased-array C- and L-band antennas are steered electronically.
Image reprinted courtesy of NASA.1 (b) The antenna is 4 m  12 m overall.

bandwidth of 9.5 or 19 MHz. The signal echoes are amplified in a coherent receiver, digitized (4 or 6 bits), and recorded together with auxiliary data.
Data from the SIR-C mission are illustrated in Fig. 10.0 using a composite
image technique that is common for this system—three colors are used to
represent different wavelengths and polarizations. Such figures are useful for
illustration, though frequently difficult to interpret. The different wavelengths
respond to surface roughness at different scales (typically responding to
spatial structures comparable to the wavelength). The dependence on
polarization is also a function of surface roughness and scattering. Rougher
surfaces tend to de-polarize the returned radar power and as such are
relatively stronger in the cross-polarization measurements (e.g., HV).

10.2 Soil Penetration


One of the more fascinating aspects of radar imagery is its ability to look
below the surface of dry soil such as deserts. Figure 9.15 showed the results of
an early shuttle flight; Fig. 10.3 shows imagery taken in a region of the Sahara
Desert in North Africa by the space-shuttle orbiter Endeavour on October 4,
1994. This area is near the Kufra Oasis in southeast Libya, centered at 23.3°
north latitude, 22.9° east longitude.2
This SIR-C image reveals a system of old, now-inactive stream valleys,
called “paleodrainage systems,” visible here as the two darker patterns
converging at the top of the figure. During periods of wetter climate, these
valleys carried running water northward across the Sahara. The region is now

1. C. A. Fowler, “Old radar types never die, they just phased array,” IEEE-AES Systems
Magazine, 24A–24L (Sept. 1998) (reference in Sullivan, Microwave Radar).
2. http://photojournal.jpl.nasa.gov/catalog/PIA01310.


Figure 10.3 (a) JPL Image PIA01310 of the Sahara Desert in North Africa and
(b) expanded view of the Kufra Oasis. North is toward the upper left in these images. Red
is L-band, horizontally transmitted and received. Blue is C-band horizontally transmitted and
received. Green is the average of the two HH bands. The well-irrigated soils are quite bright
in radar due to the increased dielectric constant, as illustrated previously in Fig. 9.14.

hyper-arid, receiving only a few millimeters of rainfall per year, and the
valleys are now dry “wadis,” or channels, mostly buried by windblown sand.
Prior to the SIR-C mission, the west branch of this paleodrainage system,
known as the Wadi Kufra (the dark channel along the left side of the image),
was recognized and much of its course outlined. The broader east branch of
the Wadi Kufra, running from the upper center to the right edge of the image,
was, however, unknown until the SIR-C imaging radar instrument was able to
observe the feature here. The east branch is at least 5 km wide and nearly
100 km long. The sand is probably only a few meters deep.
The two branches of the Wadi Kufra converge at the Kufra Oasis, at the
cluster of circular fields at the top of Fig. 10.3(b). The farms at Kufra depend
on irrigation water from the Nubian Aquifer System. The paleodrainage
structures suggest that the water supply at the oasis is a result of episodic
runoff and the movement of groundwater in the old stream channels.3

10.3 Ocean Surface and Shipping


10.3.1 SIR-C: Oil slicks and internal waves
Maritime applications of radar include ship detection and oil-slick detection.
Figure 10.4 shows the sensitivity of radar to surface disturbances on water,
even though water is a poor reflector of radar energy. Normally, small wind-driven waves (for wind speeds above 3–4 m/s) produce radar returns. The amplitude of these small waves depends on wind speed and on the surface tension of the water, which is

3. F. El-Baz, C. A. Robinson, and T. S. S. Al-Saud, "Radar Images and Geoarchaeology of the Eastern Sahara," in Remote Sensing in Archaeology, J. Wiseman and F. El-Baz, Eds. (2007).


Figure 10.4 NASA/JPL PIA01803, taken October 9, 1994. The image is located at 19.25°
north and 71.34° east, and covers an area 20 km by 45 km (12.4 miles by 27.9 miles). The
complementary color scheme: yellow regions reflect relatively higher energy in the L-band; blue areas show relatively higher reflectance in the C-band. Both bands are observed in VV
polarization.

modified slightly by oil. This behavior modifies the return sufficiently to


produce a characteristic signature, as shown in Fig. 10.4. Water temperature
also influences the relative amplitude of the returns at different wavelengths.4
Figure 10.4 is a radar image of an offshore drilling field 150 km west of
Bombay, India, in the Arabian Sea. The dark streaks are extensive oil slicks
surrounding many of the drilling platforms, which appear as white spots. The
narrower streaks are more recent leaks; the spread areas have dispersed over
time. Eventually, the oil slick may be as thin as a single molecule. Oil slicks may result from natural seepage from the ocean floor as well as from human-created sources.
There are two forms of ocean waves shown in this image. The dominant
large waves (center right) are internal waves formed below the surface at the
boundary between layers of warm and cold water. They appear in the radar
image because of the way they modify the surface. These waves have
characteristic wavelengths of 200–1600 m.5

10.3.2 RADARSAT: Ship detection6


The Canadian RADARSAT systems use a C-band synthetic-aperture radar
(5.4 GHz). RADARSAT-1 was limited to HH polarization. RADARSAT-2

4. K. Mastenbroek, "High-resolution wind fields from ERS SAR," Earth Observation Quarterly 59 (June 1998), http://www.esa.int/esapub/eoq/eoq59/MASTENBROEK.pdf.
5. Zhou et al., "Satellite SAR remote sensing of ocean internal waves," Asian Conference on Remote Sensing, Asian Association of Remote Sensing (1999).
6. http://www.asc-csa.gc.ca/eng/satellites/radarsat/radarsat-tableau.asp.


Figure 10.5 Ultrafine (2-m pixels) ship-detection image taken by RADARSAT-2 near
Singapore on May 5, 2009 at 22:46:33Z, HH polarization. The scene center is 1° 40′ 51″ N, 103° 52′ 13.7″ E. RADARSAT-2 data © Canadian Space Agency 2009. Data received by the
Canada Centre for Remote Sensing; data processed and distributed by RADARSAT
International.

was launched on December 14, 2007, and offers resolutions as fine as 2 m (formally 1 × 3 m). The latter system offers a variety of polarization options, including a fully polarimetric mode (HH, HV, VH, and VV) at spatial resolutions as fine as 10 m.
Both are in circular, sun-synchronous (dawn–dusk) orbits at a 798-km
altitude, 98.6° inclination, and with a 100.7-minute period. This orbit allows
users to revisit a scene at the same local time, and the ascending node at 18:00
minimizes conflict when downlinking data to ground stations. It is a common
orbit for radar satellites because it simplifies maintaining the solar array
pointing and maximizes power (no eclipse intervals). The circular orbit is
maintained as accurately as possible to maintain repeatability in the imaging.
Figure 1.19 showed RADARSAT-2 imagery for San Diego Harbor. A similar
maritime focus is shown in Fig. 10.5, an image of Singapore harbor. The
relative utility for ship identification is indicated by the inset image of a
freighter.
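As a quick check on the orbital parameters quoted above, the period of a circular orbit follows from Kepler's third law, T = 2π√(a³/μ). The short Python sketch below is illustrative only; it assumes standard values for Earth's gravitational parameter and equatorial radius rather than mission-specific constants.

    import math

    MU_EARTH = 3.986004e14   # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.378e6        # Earth's equatorial radius, m

    altitude = 798e3                        # quoted RADARSAT altitude, m
    a = R_EARTH + altitude                  # semi-major axis of a circular orbit
    period = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
    print(f"orbital period: {period / 60:.1f} minutes")   # roughly 101 minutes

The result, about 101 minutes, is consistent with the 100.7-minute period quoted for the RADARSAT orbit.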
The RADARSAT-2 data shown in Fig. 10.5 provide an example of ship
detection. A spatial resolution of 1–3 m certainly allows for ship detection


Figure 10.6 TerraSAR-X data acquired May 12, 2008 at 06:30:00Z in descending mode
(Strip-Mode, HH, GSD 1–3 m). The data are scaled logarithmically to slightly extend the
dynamic range. For the ship-wake illustration on the right, the mean DN in the wake is 40
and the mean of the adjacent water is 70, so there is a fairly significant difference that can
be detected and used for ship detection. Compare this figure with the thermal signature for a
wake illustrated in Fig. 8.9.

and, to some extent, ship identification. The structure along the length of the
ship reflects the ship structure (cranes, etc.) and the shipping containers on the
deck.7

10.3.3 TerraSAR-X: Gibraltar


The German TerraSAR-X system was launched on June 15, 2007 and promptly became the highest-spatial-resolution radar system in the civilian world. It offers a fairly routine high-spatial-resolution “spotlight mode” with a 1-m spatial response and a variety of lower-spatial-resolution modes that allow for larger area collection (e.g., 18-m resolution, 100 km × 150 km). Over the last year or two, as international restrictions on imaging resolution have lifted, the system has started to collect in a 25-cm mode, over an area of 4 km × 3.7 km. Illustrations of that higher resolution are given in Figs. 1.20 and 10.6. As with RADARSAT-2, different polarizations are available, and the spatial resolution depends on the polarization mode. This system is also in a dawn–dusk orbit, ranging from 512–530 km in altitude. The formal revisit time for the same orbit track is 11 days, but with altered geometry, it can collect at 2.5-day intervals.8

7. http://www.crisp.nus.edu.sg/~research/ship_detect/ship_det.htm.
8. http://www.geo-airbusds.com/terrasar-x/.


Figure 10.6 illustrates the TerraSAR-X data for the Strait of Gibraltar.
The large scene on the left depicts the southern tip of Spain, Gibraltar, and the
north tip of Africa (Morocco). In the water, numerous bright spots represent
ships, documenting busy traffic in the Strait. There is a “wind-wake” in the
water NW of Africa. Ship wakes are just visible for several vessels. On the
right side, an enlarged view of a ship in the Strait is shown, with the wake
below the ship. The offset between the ship and wake is an artifact of the
ship’s velocity with respect to the satellite. In this illustration, the ships are
moving at 5–10 m/s, enough to give them an apparent displacement of tens of
meters.

10.3.4 ERS-1: Ship wakes and Doppler effects


A third ship-wake example is given here using data from the European Remote Sensing satellite (ERS). Figure 10.6 showed ships and their wakes; in that figure, the ships are displaced from their wakes by an artifact of the radar image processing, which did not properly account for the Doppler effect. Figure 10.7 shows two ships and their wakes as observed by

Figure 10.7 The ERS-2 SAR image shows two moving ships and their wakes. For the ship on the left, the speed is estimated at 6 m/s. For the ship on the right, the turbulent wake, Kelvin envelopes, and transverse waves can be observed; its speed is estimated to be around 12.5 m/s. ERS-2; 3 April 1996, 03:29:29; incidence angle: 23.0°; lat/long: +01.64°/102.69°; VV polarization.9

9. http://www.crisp.nus.edu.sg/~research/ship_detect/ship_det.htm.


the ERS. The displacement between the ships and their wakes indicates their
velocities. The ship velocity can be estimated by the formula:

Vship = Vsat Δx / [R cos(w)],   (10.1)

where Vship is the ship's velocity, Vsat is the satellite's orbital speed, Δx is the ship's displacement from its wake, R is the slant range, and w is the angle between the ship's velocity vector and the SAR look direction (w is zero when the ship moves directly along the look direction).10 The observed displacement is in the along-track direction of the satellite's motion; the velocity component that produces it is the ship's component along the radar line of sight.
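A minimal numerical sketch of Eq. (10.1) follows; the satellite speed, slant range, offset, and angle below are illustrative assumptions, not values taken from the ERS metadata.

    import math

    def ship_speed(dx, v_sat, slant_range, w_deg):
        """Eq. (10.1): Vship = Vsat * dx / (R * cos(w))."""
        return v_sat * dx / (slant_range * math.cos(math.radians(w_deg)))

    v_sat = 7.45e3        # satellite orbital speed, m/s (assumed)
    slant_range = 850e3   # slant range to the ship, m (assumed)
    dx = 700.0            # measured ship-to-wake offset, m (assumed)
    w = 10.0              # angle between ship velocity and look direction, deg (assumed)

    print(f"estimated ship speed: {ship_speed(dx, v_sat, slant_range, w):.1f} m/s")

With these assumed values, the estimate comes out near 6 m/s, comparable to the slower ship in Fig. 10.7.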

10.4 Multi-temporal Images: Rome


Figure 10.8 illustrates a remote-sensing technique for which radar imagery is
particularly well suited. This multi-temporal image of Rome and the Castelli
Romani hills to its southeast shows, through color, a variety of changes in the
agricultural fields of the lowlands and the grasslands and forests of the hills.
The city, however, has not changed in the short interval between the first and the last images and thus appears gray: where the three dates contribute equal values to the RGB composite, the result is a shade of gray rather than a color.
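A minimal sketch of how such a composite can be built is shown below; it assumes three co-registered SAR amplitude images already read in as equal-sized numpy arrays, and uses the date-to-channel assignment of Fig. 10.8. The percentile stretch is an arbitrary display choice.

    import numpy as np

    def multitemporal_composite(img_red, img_green, img_blue):
        """Assign three co-registered amplitude images to the R, G, and B channels."""
        def stretch(img):
            lo, hi = np.percentile(img, (2, 98))     # simple 2-98% contrast stretch
            return np.clip((img - lo) / (hi - lo), 0.0, 1.0)
        # Unchanged areas have nearly equal values in all three channels -> gray.
        return np.dstack([stretch(img_red), stretch(img_green), stretch(img_blue)])

    # Example (Fig. 10.8 assignment): red = June, green = January, blue = March.
    # rgb = multitemporal_composite(amp_jun, amp_jan, amp_mar)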

10.5 Sandia Ku-Band Airborne Radar: Very High Resolution


Two concluding images from an airborne sensor show what is possible if a
spot-mode imaging approach is used. The radar developed by Sandia
National Laboratories provides a 1-m resolution at ranges of 2–15 km. The
15-GHz SAR is generally carried by the Sandia Twin Otter aircraft, but it can
operate on modestly sized UAVs. Illustrated in Figs. 10.9 and 10.10 are data
taken over Washington D.C. that show a relatively broad area of coverage
and then a detailed image of the U.S. Capitol.
A subsequent evolution of the Sandia radar (May 2005) improved the
resolution to 0.1 m. Data from the airborne system rivals the imagery from
optical systems, although the area coverage rate is limited at this
resolution.11,14

10.6 Radar Interferometry


Radar interferometry is the study of interference patterns created by
combining two sets of radar signals. This technique allows for a number of

10. http://www.rsi.ca/rsic/marine/rs2_sd_051601.pdf - no longer online.


11. http://www.sandia.gov/radar/sar.html.


Figure 10.8 ERS-1 multi-temporal image of Rome, with an incidence angle of 23°, a
spatial resolution of 30 m, and a swath width of 100 km. Three color bands are encoded:
green (January 3, 1992), blue (March 6, 1992), and red (June 11, 1992). Image © ESA, 1995.
Original data distributed by Eurimage.12

Figure 10.9 Washington D.C., imaged by the Sandia Ku-band airborne SAR.13

powerful additional uses for SAR data beyond the formation of literal images.
Two of the more important applications are topographic mapping and change
detection. Both exploit the fundamental concept that SAR images contain
both amplitude and phase information.

12. Credit: European Space Agency (ESA).


13. Courtesy of Sandia National Laboratories, Airborne ISR, particular thanks to Armin
Doerry; reference A. W. Doerry, V. D. Gutierrez, and L. M. Wells, “A portfolio of fine-
resolution SAR images,” Proc. SPIE 5410, 28–35 (2004). http://www.sandia.gov/RADAR/
imagery/index.html.


Figure 10.10 The Capitol building in Washington, D.C.

10.6.1 Coherent change detection


The key characteristics that make interferometric synthetic aperture radar
(IFSAR) possible are the coherence of the signals and the ability to record
phase as well as amplitude in the return signals. As partly illustrated in
Fig. 10.11, these factors allow radar images taken in sequence to be
compared not just in intensity but also in phase. Images of a field with trees
adjacent are shown at the top of Fig. 10.11; they appear to be identical. The
bottom panel shows how well the two images are correlated by a moving
window, not unlike the kernel operators shown in Chapter 6 in the discussion of filters. Typically, a 3 × 3 or 5 × 5 window is applied. This process could be performed for a simple pair of intensity images, but there would be little to see. Once the phase is included, a complex coherency can be calculated that is much more sensitive to change. In Fig. 10.11, the moving trees are uncorrelated at the phase level (a few millimeters to centimeters) and appear black. The grass is mostly white, indicating high correlation, except where the grass has been mowed.
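A minimal sketch of this moving-window coherence estimate is given below. It assumes two co-registered complex (amplitude and phase) SAR images held as numpy arrays; the window size and the small floor on the denominator are arbitrary choices for illustration.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def coherence(s1, s2, window=5):
        """|<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>), averaged over a moving window."""
        cross = s1 * np.conj(s2)
        num = uniform_filter(cross.real, window) + 1j * uniform_filter(cross.imag, window)
        den = np.sqrt(uniform_filter(np.abs(s1) ** 2, window) *
                      uniform_filter(np.abs(s2) ** 2, window))
        return np.abs(num) / np.maximum(den, 1e-12)   # ~1 = unchanged, ~0 = decorrelated

Values near one correspond to the white (unchanged) grass in Fig. 10.11; the moving trees and the mowed strips fall toward zero.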
If, by contrast, a pair of images can be obtained with little change on the spatial scales of interest, it is possible to map topography. This process has emerged as one of the most useful products of satellite and airborne SAR systems.

10.6.2 Topographic mapping


Topographic mapping makes use of pairs of images acquired over a
fairly modest spatial baseline (typically on the order of kilometers) and


Figure 10.11 Coherent change detection (CCD) map, with the original reference synthetic aperture radar (SAR) images acquired before and after activity on the Hardin field parade ground, with a temporal separation of 20 minutes. The data illustrate the detection and progression of human footprints and mower activity. SOURCE: Courtesy of Sandia National Laboratories.

relatively short time intervals. The latter is defined by the need to have
relatively few changes in the scene between observations. For satellites
such as ERS-1 and 2 and RADARSAT, these conditions are normally
obtained by comparing data from observations within a few days of one
another from nearly identical orbits. The desirability of producing such
products adds to the demand for strict constancy in the near-circular orbits
of these satellites.
The geometry is illustrated by Fig. 10.12. Targets 1 and 2 are imaged on
two separate orbits as illustrated. Given the offset in the satellite location (by a
distance indicated here as a baseline), there will be a relative difference in the
paths to the targets (s2′ − s2 ≠ s1′ − s1) that can be accurately determined to
within a fraction of a wavelength. This difference in phase can then be
translated into elevation differences.
The concept is illustrated in Fig. 10.13 by means of phase difference
observations from the SIR-C mission in October 1994. The complex images
taken a day apart are highly correlated with differences that are due to
elevation.


Figure 10.12 Interferometry basics.

Interferometry from multiple observations separated in time requires that


the scene not change in any significant way between observations—the two
complex images need to be very highly correlated. A technique has been
developed to obtain interferometric images taken simultaneously. Figure 10.14
illustrates the geometry used by the shuttle topographic mission. There is an
antenna in the shuttle bay and a second on a boom 60 m out. Images are
formed from the radar energy received at the two antennas, and the
interferometric phase is determined by calculating the difference between the
two complex images.
Figure 10.14 illustrates how ground elements at different heights will
produce different radar returns. The differential distance of each of these
targets to the ends of the antenna baseline depends on the height of the target.
For the higher target (target 2), the differential distance is greater than that of
the lower one (target 1). The interferometric phase for target 2 is therefore
larger than that for target 1. The differential distance becomes larger as the incidence angle to the target (θ1 < θ2 in the figure) increases. The interferometric phase difference Φ can be related to the incidence angle by the nearly exact formula


Figure 10.13 This image of Fort Irwin in California’s Mojave Desert shows the difference in
phase between two (complex) SAR images, taken on October 7–8, 1994 by the SIR-C
L- and C-band sensors. The image covers an area of about 25 km  70 km. The color
contours shown are proportional to the topographic elevation. With a wavelength one-fourth
that of the L-band, the results from the C-band cycle through the color contours four times
faster for a given elevation change. One (C-band) cycle corresponds to a 2.8-cm ground
displacement parallel to the satellite line of sight for interferometric SAR.14

Φ = 2πB sin(θ) / λ,   (10.2)

where B is the baseline length, and θ is the incidence angle. The phase Φ is in radians. This equation can be rearranged to obtain the height from the phase Φ, yielding the topographic height:15

δh = (λR / 2πB) δΦ,   (10.3)

where δh is the change in altitude associated with a change of phase δΦ, and R is the slant range.
As a quick illustration with the SIR-C parameters: take the baseline as
60 m, the range as 310 km (222 km altitude, depression angle of 45°), and a
wavelength of 6 cm. There is an assumption in Eq. (10.3) that the antenna is

14. Wang et al., Photogrammetric Engineering and Remote Sensing, p. 1157 (October 2004); NASA Photojournal, image PIA01759.
15. Text adapted from R. Treuhaft, JPL; http://www2.jpl.nasa.gov/srtm/instrumentinterfmore.html. See also R. J. Sullivan, Microwave Radar Imaging and Advanced Concepts, Artech House, Norwood, MA (2000).


Figure 10.14 The geometry of the shuttle’s radar topographic mission.

vertical with respect to the ground. Further adjustments in the formula are required when this assumption is relaxed.16 For the SRTM described in the next section, a vertical resolution of 10 m corresponds to a phase difference of about 11°, as shown by

δh = (λR / 2πB) δΦ  ⇒  δΦ = (2πB / λR) δh;

δΦ = [2π · 60 / (0.06 · 310 × 10³)] · 10 m ≈ 0.2 radians, or about 11°, which is measurable.
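The same arithmetic can be scripted; the short sketch below simply repeats the calculation above with the assumed SRTM-like parameters (60-m baseline, 310-km slant range, 6-cm wavelength).

    import math

    wavelength = 0.06    # C-band wavelength, m
    slant_range = 310e3  # slant range, m
    baseline = 60.0      # antenna baseline, m
    dh = 10.0            # height change of interest, m

    dphi = 2 * math.pi * baseline * dh / (wavelength * slant_range)
    print(f"phase change: {dphi:.2f} rad = {math.degrees(dphi):.1f} deg")  # ~0.20 rad, ~11.6 deg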

10.7 The Shuttle Radar Topographic Mapping (SRTM) Mission


The SRTM mission flew on the space shuttle Endeavour, launched Friday,
February 11, 2000 at 1743 Z. The eleven-day mission collected an astonishing
amount of data. The SIR-C hardware flew for a third time, though only the
C- and X-band systems were used. The C-band radar (λ = 5.6 cm), with a swath width of 225 km, scanned about eighty percent of the land surface of the earth (HH and VV). The German X-band radar (λ = 3 cm, VV), with a
swath width of 50 km, allowed for topographic maps at a somewhat higher
resolution than the C-band data but did not have the near-global coverage of
the American system.
Some mission parameters of note include the following data-acquisition
values:17
• 222.4 h total duration of mapping phase,
• 99.2 h C-band operation (8.6 terabytes),
• 90.6 h X-band operation (3.7 terabytes), and
• 12.3 terabytes of total data.

16. See also, Principles and Applications of Imaging Radar, Manual of Remote Sensing, Third
Edition, Volume 2, American Society for Photogrammetry and Remote Sensing, edited by
Floyd Henderson and Anthony Lewis, Chapter 6, pages 361-36, by Soren Madsen and
Howard Zebker.
17. http://spaceflight.nasa.gov/shuttle/archives/sts-99/.


10.7.1 Mission design


Normally the interferometric techniques described above require either two
satellites or one satellite making multiple passes over a target. In the SRTM
mission described by Fig. 10.15, a novel approach, in which a second antenna
was deployed on a mast extending 60 m from the payload bay, provided
sufficient baseline for the technique and dramatically reduced problems
associated with changes in the target area, including changes as subtle as wind
blowing in the trees. The mission also provided a very instructive illustration
of satellite technology and the unique capability of the space shuttle.
The challenge was that the structure had to maintain an almost perfect
attitude with respect to the shuttle. Fluctuations in mast length of 1 cm were
expected. Uncompensated mast-tip motion of 1 cm with respect to the shuttle
would result in a height error at the earth’s surface of 60 m. Knowledge of the
mast position with respect to the shuttle was therefore needed to better than
1 mm.
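A rough, first-order scaling (an illustration only, not the formal SRTM error budget) shows where these numbers come from: an uncompensated mast-tip motion δB perpendicular to the line of sight tilts the 60-m baseline by δB/B and shifts the inferred look direction by about the same angle, giving a height error of order R·δB/B. A minimal sketch, assuming a representative 310-km slant range:

    slant_range = 310e3   # m (representative slant range, assumed)
    baseline = 60.0       # m
    for dB in (0.01, 0.001):   # 1 cm of tip motion; 1 mm of knowledge
        print(f"dB = {dB * 1e3:4.1f} mm -> height error ~ {slant_range * dB / baseline:5.1f} m")

One centimeter of uncompensated motion gives roughly 50 m of height error, consistent with the 60-m figure quoted above; 1 mm of knowledge corresponds to a few meters.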
Figure 10.16 shows the mast used on the SRTM mission, the Able
Deployable Articulated Mast (ADAM) built by the Able Engineering
Company of Goleta, California. The mast consisted of a truss comprising
87 cube-shaped sections called bays. Unique latches on the diagonal members
of the truss allowed the mechanism to deploy bay-by-bay out of the mast
canister to a length of 60 m (200 feet). The canister housed the mast during
launch and landing and also deployed and retracted the mast.

Figure 10.15 SRTM mission overview.


Figure 10.16 (a) Mast fully deployed at AEC (shown from tip). (b) Mast with first few bays
deployed from canister at ATK-Able Engineering Company, Inc.

Table 10.2 The Shuttle Radar Topography Mission mast.

Mast Length 60 m
Nominal Mast Diameter 1.12 m
Nominal Bay Width at Longerons 79.25 cm
Nominal Bay Length 69.75 cm
Number of Bays 87
Stowed Height/Bay 1.59 cm
Total Stowed Height 128 cm

The mast supported a 360-kg antenna structure at its tip and carried 200 kg of stranded copper wiring, coaxial and fiber optic cables, and thruster gas lines along its length.
This remarkable technology worked largely as planned. The mast
deployed successfully, as illustrated in Fig. 10.17. Unfortunately, the
attitude-control jet at the end of the mast clogged, and it was only through
a remarkable bit of flying by shuttle astronauts that the system was able to
acquire useful data. Data analysis was slowed by this problem, and the accuracy of the products was somewhat reduced.

10.7.2 Mission results: level-2 terrain-height datasets (digital topographic maps)
The level-2 terrain-height datasets contain the digital topography data
processed from C-band data collected during the mission. Each posting in a
level-2 terrain-height dataset represents a height measurement (posting) in
meters relative to the WGS84 ellipsoid surface—a standard reference
coordinate for elevation models.
The absolute horizontal accuracy (90% circular error) is 20 m. The absolute
vertical accuracy (90% linear error) is 16 m. For data from the equator to


Figure 10.17 SRTM mast, deployed. Shuttle glow is seen around the orbiter tail and along
the mast in the left figure; this is due to interactions between atomic oxygen in the upper
atmosphere and the surfaces of the spacecraft and antenna.

50° latitude, the postings are spaced at 1″ (one arcsecond) in latitude by 1″ in longitude. At the equator, these are spacings of approximately 30 m × 30 m.
Figure 10.18 illustrates some of the products of the mapping mission.
Starting from a known elevation (sea level), altitude is obtained by
unwrapping the variation in phase. Figure 10.18 shows how the phase varies
over Lanai and a portion of Maui. This portrayal can be compared to the
difference image from Ft. Irwin in Fig. 10.13.

Figure 10.18 DEMs for some Hawaiian islands. Figure reprinted from http://photojournal.jpl.nasa.gov/catalog/PIA02723.


Once altitude has been determined, it can be used in a variety of


applications. The SRTM database has become the standard for most
orthorectification processes, providing a worldwide library at 90-m postings.

10.8 TerraSAR-X and TanDEM-X


An innovative approach to obtaining routine interferometric measurements of elevation was devised by the German Aerospace Center (DLR) using the idea of flying two SAR systems in tandem: TanDEM-X, the TerraSAR-X add-on for Digital Elevation Measurement, was launched in June 2010 as a companion to TerraSAR-X (launched in 2007). The two satellites fly in a helical pattern that allows them to maintain a spacing of a few hundred meters, providing a nearly ideal baseline for bi-static interferometric SAR. In August 2015, using nearly four years of observations, DLR released a map of the world at a significantly finer GSD than the SRTM baseline. The nominal resolution of the commercial TSX/TanDEM-X derived products is 2 m.

10.9 Problems
1. For a SAR system such as SIR-C, does the nominal 12.5-m azimuthal
resolution for the German X-band system correspond well to the nominal
antenna width? What pulse length would be required to match that in
range resolution? Compare this value to the actual pulse width.

Figure 10.19 RADARSAT-2 orbital perspective for data acquisition. The satellite orbit track
and ground track below the satellite are traced in light blue.


Figure 10.20 Histogram for regions of interest in TSX data acquired May 12, 2008.

2. What wavelengths and polarizations are used for commercial SAR


systems (Radarsat, ERS)?
3. The data illustrated in Fig. 10.5 from RADARSAT-2 took approximately
3.3 s to acquire based on the metadata provided with the image. The
geometry for the data acquisition is illustrated in Fig. 10.19. For this
C-band system, what is the best azimuthal GSD to be expected for the
image? The chirp bandwidth is 7.8163 × 10⁷ Hz. What range resolution could be expected for that bandwidth? The incidence angle is 38–39° (the elevation angle is 51–52°), the satellite altitude is 7.95 × 10⁵ m, and the near range to the target is 975 km. The center frequency is 5.405 × 10⁹ Hz.
4. Figure 10.6 shows a ship-wake illustration with some fairly subtle
differences in DN between the wake and adjacent water. Figure 10.20
shows a histogram distribution for the data values inside and outside the
wake. What is the difference in DN between the two means, in units of the “water outside wake” sigma? Note that the mean is considerably higher than the peak, particularly for the open water; both distributions are highly skewed.

Chapter 11
Light Detection and Ranging

Figure 11.0 Point cloud elevation data for the Naval Postgraduate School campus
obtained from an airborne LiDAR system. Data are color coded by elevation, with red (high)
and green/blue (low) in this rainbow color scheme (6–32 m).

11.1 Introduction
Light amplification by stimulated emission of radiation, or the laser, dates to 1957, emerging in theoretical papers by Townes and Schawlow.1 The term “laser” was coined by Gould, who eventually received credit for it.2 The

1. A. L. Schawlow, and C. H. Townes, “Infrared and Optical Masers,” Physical Rev. 112(6),
1940–1949 (December 15, 1958).
2. R. G. Gould, “The LASER, Light Amplification by Stimulated Emission of Radiation,”
Ann Arbor Conf. Optical Pumping, pp. 128 (June 15–18, 1959).


laser concept has three fundamental elements from a remote-sensing


perspective. The light from a laser is monochromatic, meaning it has a single
discrete wavelength. The spectral lines are typically quite narrow, i.e., a few
angstroms wide. In general, the light is naturally linearly polarized. Lasers can
be formed into continuous wave (CW) or pulsed systems. The ability to
quickly switch a laser on-and-off is what makes it particularly useful for
remote sensing.
Maiman (1960), working at the Hughes Research Laboratory (HRL), receives the credit for the first working laser.3 That red (694-nm) ruby laser is the precursor to the modern solid-state and diode lasers used for terrestrial remote sensing. The Q-switch, which enables the rapid pulsing of lasers,
was subsequently also developed at HRL.4 This success was followed
shortly by the idea of using lasers to measure distances, and when
implemented in aircraft, terrain. Light detection and ranging, or LiDAR,5
uses the same principles as radar, but its shorter wavelengths make the data
useful in quite different ways. There are a variety of LiDAR types; the form
of interest here are those designed for remote sensing of the earth. Other
configurations are useful for studying the atmosphere, in particular aerosols
and dust.
The earliest published record of an airborne topographic profile appears
to be from data collected over the football stadium at George Washington
High School in Philadelphia, published in 1965. Figure 11.1 shows the
elevation profile measured along the flightline. The SpectraPhysics HeNe CW
laser generated 50 to 60 milliwatts of power at 6,328 Å. The laser was
frequency modulated at 25 MHz,6 which allowed a 0.3-foot (9-cm) vertical resolution at 330 feet per second. The laser system and detectors
worked well enough, but the stability of the airplane and knowledge of
attitude and altitude made the technology impractical at the time. Some
30 years later, the advent of GPS systems really allowed the airborne LiDAR
technology to flourish. The modulated CW laser is the approach currently
used for near-field systems such as those built by FARO for use in scanning
building interiors. Airborne systems have moved to pulsed lasers, and that
technology is presented in this chapter.

3. T. H. Maiman, “Stimulated optical radiation in ruby,” Nature 187(4736): 493–494 (1960).


4. F. J. McClung and R. W. Hellwarth, “Giant optical pulsations from ruby,” J. Appl. Phys.
33(3), 828–829 (1962); F. J. McClung and R. W. Hellwarth, “Characteristics of giant optical
pulsations from ruby,” Proc. IEEE 51(1) (1963).
5. There are several approaches to the choice of capitals for the abbreviation for LiDAR. I’ve
gradually been convinced that LIDAR is wrong, but LiDAR and lidar are still negotiable.
LADAR just seems pretentious. I expect that lidar will become a common noun, just as this
occurred for radar.
6. H. Jensen, “Performance of an Airborne Laser Profiler,” Proc. SPIE 0008, 84 (1967). The
Aero Service Corporation (a Litton subsidiary) conducted this work in cooperation with
SpectraPhysics.


Figure 11.1 Laser profile taken from 1000-foot altitude from a Douglas A-26 aircraft. There is a
two-foot “crown” on the field but also an overall drift due to the limitations in aircraft altitude estimates.7

11.2 Physics and Technology: Airborne and Terrestrial


Scanners
A typical LiDAR system consists of a semiconductor laser, typically operating in the near-infrared (1.05–1.55 μm); a detector designed to measure the return time with an accuracy of a few nanoseconds; and position information for the system, typically obtained from GPS (Fig. 11.2). The returning light pulse is observed with
range gates, that is, the detector is sampled in a time sequence, and those times
correspond to range. A scan mirror to sweep the laser beam cross-track allows for
area coverage.

11.2.1 Lasers and detectors


One of the most common laser technologies uses the solid-state gain medium Nd:YAG (neodymium-doped yttrium aluminum garnet, or Nd:Y3Al5O12), developed at Bell Laboratories by Geusic et al. in 1964.8 This material is

7. B. Miller, “Laser Altimeter May Aid Photo Mapping,” Aviation Week & Space Technology,
page 60, March 29, 1965.
8. J. E. Geusic, H. M. Marcos, and L. G. van Uitert, “Laser oscillations in Nd-doped yttrium
aluminum, yttrium gallium and gadolinium garnets,” Applied Physics Letters 4, 182–184
(1964).


Figure 11.2 A pulse of laser light is emitted from the aerial platform. A sensor records the
returning energy as a function of the xy position, which then provides the z, or elevation
component. Such systems are occasionally designated 3D imagers. The imager depends on
a very accurate knowledge of the platform position, generally obtained from GPS.9

not greatly different from the early ruby lasers, but it has better heat
conduction. Nd:YAG lasers typically operate at 1.064 μm (1064 nm), a
fluorescence line for Nd3+ in the YAG structure. These neodymium-doped
crystals can be and are “frequency doubled” to 532 nm by using a KTP
crystal10 for bathymetric applications. The laser output can also be “tripled”
to 355 nm.
Also popular are semiconductor lasers, particularly at 1.55 μm. These are
used in the fiber optics community for communications, which motivates a
great deal of development. They have significant eye-safety benefits, but this
wavelength is more affected by water vapor than the shorter wavelength
systems.
An illustrative system with both IR and green output is the Coastal Zone
Mapping and Imaging LiDAR (CZMIL) system. The output power is 30 W
at 10 kHz with a pulse length of < 2.5 ns FWHM at 532 nm and 20 W of
residual power at 1064 nm.11 Individual pulses from the Nd:YVO4 laser are a
few tenths of a millijoule after amplification. Figure 11.3 shows a time profile for the output pulse. The system uses a significantly higher power level than
typical terrestrial LiDAR scanners because of the significant amount of losses

9. K. Kraus and N. Pfeifer, “Determination of terrain models in wooded areas with airborne
laser scanner data,” ISPRS Journal of Photogrammetry and Remote Sensing 53(4), 193–203
(1998); with thanks to David Evans, MSU, Dept of Forestry.
10. potassium titanyl phosphate KTiOPO4 (KTP); http://www.lc-solutions.com/product/
ktp.php.
11. J. W. Pierce, E. Fuchs, S. Nelson, V. Feygels, and G. Tuell, “Development of a novel laser
system for the CZMIL lidar,” Proc. SPIE 7695, 76960V (2010).


involved in water measurements. Typical Nd:YAG lasers output 10 μJ/pulse at high pulse rates (100–500 kHz).

11.2.2 Laser range resolution and the LiDAR equation


The key technology element for a laser is pulsed mode operation, similar to
that of imaging radar systems, with characteristic pulse lengths of a few
nanoseconds (ns). Even for these very short pulse lengths, the pulse is a
meter long in air. The analogy to radar is very close – the main difference at
present is that pulse modulation is not currently in use for airborne
systems.12
The nominal range resolution, therefore, would be, as with radar,

Rresolution = cτ / 2,   (11.1)

where τ is the pulse length.
With discrete return LiDAR systems, however, the light detectors are set to
trigger on the leading edge of a fairly short rise-time pulse—typically on the
order of a few tenths of a nanosecond. As an example, the output of the
CZMIL green laser is illustrated in Fig. 11.3. It is typically characterized as a
Gaussian distribution in time, which is not particularly accurate, but is
consistent with the ability of technology to resolve such pulses.

Figure 11.3 CZMIL green-output-pulse temporal profile. The pulse is a bit less than 2 ns
wide, and the leading edge is a fraction of a nanosecond. Image reprinted with permission
from Pierce et al. (2010).

12. Some vendors are selling systems that measure the phase within a CW signal, as with the
early SpectraPhysics system in Fig. 11.1, for very fine range resolution at short ranges,
notably FARO. These are terrestrial systems used for short-range scanning, from tens of
meters out to 100 m.


The LiDAR formula for power is also similar to that for radar,13 but it is
generally true that the laser beam is small enough that all of the transmitted
energy reaches the detected area on the target. For a homogeneous target (surface), a relatively simple form results. The return beam still falls off according to the inverse-square formula.
The resulting formula is
Preceived = Ptransmitted · a · Gdetector / (4π Rrange²),   (11.2)

where Preceived is the received power, Ptransmitted is the transmitted power, a is the albedo (reflectance), and Gdetector is the detector gain (proportional to the detector area and quantum efficiency). The LiDAR range equation for
imaging systems depends on the range squared, in contrast to imaging radar
systems, which depend on the fourth power of the range.

Example
To illustrate the values and implications of the formula, consider a nominal airborne system operating at 1.06 μm with 10-μJ pulses. Assume an aperture with a diameter of 30 cm and a beam divergence of one milliradian. Assuming an ideal detector for a moment, the gain is just the area of the collecting optic. Typical albedos for vegetation are about 0.9, and we assume isotropic scattering. Assume an altitude (range) of 500 m:

Preceived = 10 × 10⁻⁶ · 0.9 · (π · 0.15²) / [4π(500)²] ≈ 2.0 × 10⁻¹³ J.

The 1.06-μm photons have an energy of 1.9 × 10⁻¹⁹ J, so the return pulse contains roughly 10⁶ photons. A typical efficiency for the detectors in a commercial system would be about 10%, so the sensor would count about 10⁵ photons over a period of a few tens of nanoseconds.
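The same estimate can be scripted; the minimal sketch below simply repeats the link budget above with the assumed parameters (10-μJ pulse, 0.9 albedo, 30-cm aperture, 500-m range, 10% detector efficiency).

    import math

    h, c = 6.626e-34, 3.0e8             # Planck's constant (J s), speed of light (m/s)

    e_tx = 10e-6                        # transmitted pulse energy, J (assumed)
    albedo = 0.9                        # vegetation reflectance (assumed)
    aperture_area = math.pi * 0.15**2   # 30-cm-diameter collecting optic, m^2
    rng = 500.0                         # range, m
    wavelength = 1.06e-6                # m
    efficiency = 0.10                   # detector quantum efficiency (assumed)

    e_rx = e_tx * albedo * aperture_area / (4 * math.pi * rng**2)
    n_photons = e_rx / (h * c / wavelength)

    print(f"received energy:  {e_rx:.1e} J")                   # ~2e-13 J
    print(f"returned photons: {n_photons:.1e}")                # ~1e6
    print(f"detected photons: {efficiency * n_photons:.1e}")   # ~1e5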
The detectors for most LiDAR systems are variations on the photo-
multiplier tubes (PMTs) illustrated in Chapter 2. The solid-state version of a
PMT is a photodiode or, more particularly, an avalanche photodiode. Longer wavelengths (1.55 μm) require high-speed (GHz) InGaAs photodiodes.14
This technology is part of the infrastructure for fiber optics communications,
so there is significant technology evolution at work in this area. Photon-
counting detector arrays have been developed and are starting to appear in
commercial systems.

13. A good development and more detailed form is given by Wagner et al., ISPRS Journal of
Photogrammetry and Remote Sensing, Volume 60, Issue 2, April 2006, Pages 100–112.
14. L. E. Tarof, “Planar InP/InGaAs avalanche photodetector with gain-bandwidth product
in excess of 100 GHz,” Electron. Lett. 27(1), 34–36 (1991).


11.3 Airborne and Terrestrial Systems


11.3.1 Airborne Oceanographic LiDAR
One of the early tests of the technology by Krabill et al. (1984) employed an
early bathymetric LiDAR (green laser) to run terrestrial profiles. Figure 11.4
shows data from the NASA Airborne Oceanographic LiDAR (AOL). The
system used a neon laser (540.1 nm with 7-ns, 2-kW pulses), a 30-cm
Cassegrain telescope, and a PMT.15 The laser system had a 400-pulses/s
maximum pulse repetition frequency (PRF). The EMI D-279 PMT output
was directed to a series of detectors with a temporal resolution down to 2.5 ns,
revised to 4 ns for stability. The aircraft generally operated at an altitude of
150 m in these tests. The results showed that the solar background could be
overcome in daytime use and that tree canopies could be penetrated for the
purpose of developing terrestrial maps. (Solar background remains an issue
for some of the emerging LiDAR technologies.) Bathymetric data from the
AOL are shown in Section 11.5.

Figure 11.4 Elevation of Wolf River Basin, located near Memphis, Tennessee, taken in
September 1980.16

15. F. E. Hoge, R. N. Swift, and E. B. Frederick, “Water depth measurement using an


airborne pulsed neon laser system,” Appl. Opt. 19, 871–883 (1980).
16. W. B. Krabill, J. G. Collins, L. E. Link, R. N. Swift, and M. L. Butler, “Airborne
laser topographic mapping results,” Photogrammetric Engineering and Remote Sensing 50,
685–694 (1984).


Commercial airborne systems currently fly at modest altitudes (a few


thousand feet), and the laser forms a spot on the ground of less than a meter in
diameter, perhaps only a few tens of centimeters. (The beam divergence is
typically a few tenths of a milliradian.) The sensor typically operates in
whiskbroom mode, sweeping cross-track with an angular extent defined by
the hardware capabilities, but on the order of the 40°, as illustrated in
Fig. 11.2. A variety of other scan patterns are used according to the tastes of
the designers, and applications, notably helical or circular patterns for
bathymetric systems.

11.3.2 Commercial LiDAR systems


The spacing between pulses is typically larger than the spot size—typically
1–3 m until recently (2005), now more routinely a few tens of centimeters.
Pulse repetition frequencies determine the sweep rate (and angular range) that
is practical for a given sensor. Initial rates were rather low. Figure 11.5 shows
the evolution of the Optech sensors (leaving out the very first sensor from
1983 and its 100-Hz PRF). The usable PRF depends somewhat on the altitude of the flying platform, so the specifications are quoted as a function of altitude. In
Fig. 11.5, the maximum PRF is plotted with the maximum altitude
represented by the point size. Optech has manufactured a large fraction of
the commercial systems in use today, so this chart reflects the field as a whole
rather well. As the PRF increases, the cost of flying a given area decreases.
A commercial effort in 2005 imaged a 10 km × 20 km area north of Monterey

Figure 11.5 Optech systems: pulse repetition rate (or frequency) as a function of time. The
diameter of the symbol is proportional to the operational altitude. The ALTM 3100 was a key
system in the evolution of commercial imaging and has only recently been superseded in the
market by newer and faster systems. As of 2013, the Pegasus HA-500 was the highest-
altitude, fastest instrument in the Optech inventory, able to work at altitudes from 100 m to 5 km and a PRF of 100–500 kHz. The dual laser system allows for multiple pulses in the air
(MPIA).


Figure 11.6 Leica ALS70, with flight electronics. The laser is a Nd:YAG operated at 1.064 μm. The ALS70 operates at a maximum laser pulse rate of 250 kHz, with a maximum average optical output of 8 W. The energy per pulse under these conditions is 8 W / 250,000 Hz = 32 μJ. Higher pulse energy is possible at a lower PRF, limited by the heating of the laser. The pulse length is 4.5 ns or 9 ns, depending on system settings. The detector is a Si APD.17

in about a day of flying, at about 1 point/m². By contrast, a Quickbird image


of this area takes approximately 8 s to acquire. Higher laser-repetition rates
enable either higher point densities or higher area collection rates; the choice
depends greatly on application.
Similar technology is marketed by Leica Geosystems. Figure 11.6 shows
the components for a fairly typical airborne system: the laser, electronics, and
a laptop for control (and scale). On the right side, the relationship between the
altitude and the pulse repetition rate (PRR) is shown for a single pulse in air
(SPIA) and multi-pulse in air (MPIA). These curves represent the limitations
imposed by the finite propagation time for the pulses in the air.
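The single-pulse-in-air limit is easy to quantify: the pulse round-trip time sets the maximum PRF for a given flying height, and MPIA operation exceeds it by keeping several pulses in flight at once. A minimal sketch, assuming a nadir view:

    C = 3.0e8  # speed of light, m/s

    def max_spia_prf(altitude_m):
        """Maximum PRF (Hz) with only one pulse in the air at a time (nadir view)."""
        return C / (2.0 * altitude_m)

    for alt in (500.0, 1000.0, 3000.0):
        print(f"{alt:6.0f} m AGL -> {max_spia_prf(alt) / 1e3:5.0f} kHz")
    # 500 m -> 300 kHz, 1000 m -> 150 kHz, 3000 m -> 50 kHz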

11.4 Point Clouds and Surface Models


Illustrative examples from a modern commercial system are shown next. Data
were acquired by Airborne 1 Corporation, flying over Exposition Park in Los
Angeles to image the Coliseum and the Sports Arena (the ellipse in the lower
right of Fig. 11.7). Figure 11.8 shows a horizontal transect through the scene,
roughly along the long center line of the football stadium. The sloping
stadium walls lead down to a field that is below the level of the ground outside
the stadium. Temporary seats (bleachers) have apparently been set up in the
peristyle end of the stadium. The intensity image from the 1.06-mm laser is
shown in Fig. 11.9. In Fig. 11.10, the scattered returns from throughout the
canopy show how the laser penetrates through the leaves, reaches the ground,
and returns.

17. Courtesy: Wolfgang.Hesse@leicageosystems.com; January 03, 2014.


Figure 11.7 Digital elevation model of the Los Angeles Coliseum.

Figure 11.8 LiDAR image taken over the Los Angeles Coliseum. The goal posts are 120
yards apart. Bleachers are at the 420-m mark. Data courtesy of Airborne 1, Los Angeles, CA.

One of the more vexing problems in remote sensing involves power and
telephone lines, which are generally sub-pixel for any reasonable detector—
the wires simply do not show up in optical imagery. LiDAR, with a relatively
small spot size, illuminates the wires at a fairly regular interval, and Fig. 11.11
shows the wires detected quite accurately. Corridor mapping is one of the
major business areas for airborne laser mapping.


Figure 11.9 Intensity image produced by LiDAR active illumination at 1.06 mm.

Figure 11.10 Detailed view of the first/last returns, bare soil, and extracted feature returns
over a few trees to the west of the Coliseum.

11.5 Bathymetry
One powerful capability offered by LiDAR is the ability to survey for water
depth, that is, to conduct bathymetric surveys. Some of the first such
measurements by Hoge et al. are illustrated here for data taken over the
Atlantic Ocean by the Airborne Oceanographic LiDAR (AOL) in Fig. 11.12.
[These data are from the same system used by Krabill et al. (1984), as shown in
Fig. 11.4.] This early system successfully made measurements down to depths as
great as 10 m. Current systems typically work to depths of 50–100 m, at least
for clear water. The Avco model C-500 neon laser operated at 540.1 nm with a


Figure 11.11 Power lines adjacent to the NPS campus, collected with the Optech C-100 corridor mapper. The point density along the power lines ranges from 12.5–13.5 points/m. The background point density on the surface ranges from 60–110 points/m² here, with a peak in the 80–90 points/m² range. This overall point density is relatively high by current mapping standards (2015), and the typical power-line point density will be a bit less.

Figure 11.12 Cross-section comparison of AOL data with NOAA launch data. Twenty
seconds of data are shown: the dots are the LiDAR returns, and the solid line is from the
in situ measurements (sonar). The vertical axis indicates the depth in meters, ranging from
0 to 5. Errors are likely due to navigation, i.e., position/time. Image reprinted with permission
from F. E. Hoge et al. (1980).18


Figure 11.13 Waveforms from a profile over Monastery Beach, Monterey, CA. The detector
is sampled at 1.8 GHz, so at roughly 0.5 ns intervals. In the graph at the top, time is increasing
to the right. The peaks in the samples in the 200–220 range are from the surface reflections; the
peaks centered around sample 320 are the bottom reflections. The water here is about 5 m
deep. The waterfall display inset reflects the scan pattern over the water, hence the scalloped
pattern most obvious in the bottom signature. The graph at the bottom represents the depth
profile obtained from the discrete returns. Color is altitude (or depth), with red a few meters
above ground level, green at sea level, and shades of blue as the water deepens.

400-Hz PRF. The 7-ns, 2-kW pulses were modulated by a conical scanning
system; photon returns were obtained from a PMT, digitized, and then gated
at 2.5-ns intervals (a temporal resolution that is still respectable by modern
standards).
For comparison, a modern commercial LiDAR system was operated over
the Monterey Bay area in 2014; the two-color AHAB system measures the
waveforms of the returned laser signals, as illustrated in Fig. 11.13. These
systems are generally applied in clear coastal waters to depths of 10–20 m.
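The depth follows from the two-way travel time between the surface and bottom returns, using the speed of light in water. The sketch below uses round numbers loosely based on the waveform in Fig. 11.13 (surface return near sample 210, bottom near sample 320, at the 1.8-GHz sampling rate) and a simplified vertical path; the exact peak positions and the scanner's off-nadir angle will shift the answer by a meter or two.

    C = 3.0e8        # speed of light in air, m/s
    N_WATER = 1.33   # index of refraction of water

    sample_rate = 1.8e9     # detector sampling rate, Hz
    surface_sample = 210    # assumed surface-return sample
    bottom_sample = 320     # assumed bottom-return sample

    dt = (bottom_sample - surface_sample) / sample_rate   # two-way time in the water, s
    depth = (C / N_WATER) * dt / 2.0
    print(f"estimated depth: {depth:.1f} m")               # several meters for these numbers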

11.6 LiDAR from Space


Laser scanning is, at present, primarily a terrestrial and airborne technology,
but there have been a number of LiDAR missions flown to study the
topography of the earth, Mars, the moon, and Mercury. They have uniformly
flown transects around their targets, resulting in data not too different from

18. F. E. Hoge, R. N. Swift, and E. B. Frederick, “Water depth measurement using an


airborne pulsed neon laser system,” Appl. Opt. 19, 871–883 (1980).


Figure 11.14 The topography of Mars, as measured by the Mars Orbiter Laser Altimeter (MOLA). The elevation is scaled from 0–12 km in the color scale shown at the top right.19

the figures shown at the beginning of the chapter. These “big-footprint”


systems typically operated at a few Hz, with spot-size on the ground in the
50–100-m-diameter range, limited by the time of flight and laser power issues.
The first LiDAR in space was flown on an Apollo mission to the moon in
order to help in the selection of landing sites (1971). The seminal Clementine
mission carried a LiDAR around the moon, mapping most of it in 1994 and
thus initiating the small-satellite revolution. Several terrestrial missions were
flown around the earth using the Space Shuttle (1994, 1996, 1997), producing
snapshots with limited coverage. The ICESAT mission has mapped the
Antarctic region, and provided some useful data at lower latitudes.
The Mars Global Surveyor carried the Mars Orbiter Laser Altimeter
(MOLA), and over a period of several years (1997–2001) it mapped all of
Mars to a vertical accuracy of 30 m, not unlike the SRTM map of the earth.
The instrument optics generated a surface spot size of 130 m. The Nd:YAG
laser used 8-ns pulses, a 10-Hz PRF, and generated 330-m shot spacing.
Figure 11.14 can be compared to the thermal inertia images shown at the
beginning of Chapter 8—both seem to reflect important characteristics in the
evolution of the Martian crust.

19. O. Aharonson et al., “Mars: Northern hemisphere slopes and slope distributions,”
Geophys. Res. Lett. 25, 4413–4416 (1998). http://mola.gsfc.nasa.gov/images.html.


11.7 Problems
1. For a LiDAR to have a vertical resolution of 5 cm, what is the upper limit
on the pulse length, in time? Assume a square pulse, as with a radar.
2. For a LiDAR emitting a 32-microJoule pulse, how much energy is
returned to the detector? How many photons are emitted and return?
Assume a range of 1500 m, a wavelength of 1.55 mm, and a receiver
(telescope) with a diameter of 20 cm. Assume a perfect Lambertian surface
(reflectance a  1).
3. For an airborne system, flying at 1000 AGL, calculate the transit time for
a laser pulse that is reflected from directly below the aircraft. Be sure to
include both the downward and upward propagation time. Compare your
result to the ALS-70 operational parameters.
4. For a satellite operating at an altitude of 705 km (e.g., ICESAT), calculate
the transit time for a laser pulse in a nadir view. What would be the
maximum frequency allowable for SPIA conditions?

Afterword
The preceding text shows a remote sensing world that is changing rapidly.
The last few chapters, on SAR and LiDAR, don't quite do justice to the rapid pace of change in hardware, and even less so to the impact of modern computing technology. Keep in mind that in the mid-1990s SAR
processing was mostly a matter of converting the raw data into real images; it
has only been in the last few years that interferometric processes could
routinely be exploited (DEMs and CCDs). Computing has become the great
enabler of that technology. In a similar way, LiDAR helps define “Big Data.”
A routine post on a blog for a frequently used software package typically
begins with “I have 8 TB of data, and I need to. . . .”
That said, the one thing I wish there was room for in this text is the
exciting convergence between computing and (small) UAV technology. It is
now possible to fly a lightweight UAV with a modestly sized camera (a few
Megapixels), rapidly construct a mosaic of visible or infrared imagery at a
spatial resolution of a few centimeters, and then by using computer vision
technology, convert that to an accurate 3D model. The emergence of this
combined capability is a great example of disruptive technology. The UAVs

Figure 1 Topographical map of Terra Sirenum, the Martian Atlantis. It ranges in elevation
from 8000 to 3000 m. Copyright ESA/DLR/FU Berlin, CC BY-SA 3.0 IGO.


are driving sensor technology, making navigation and imaging sensors


dramatically smaller. These technologies put remote sensing capabilities into
the hands of many more people, and that will drive the next evolution.
As a closing illustration, here is a topographic map of the region known as
Terra Sirenum, located in the southern hemisphere of Mars, derived from the
Mars Express High-Resolution Stereo Camera, at a spatial resolution of 14 m.
The analysis that produced this image uses computer vision techniques that
enable a product that has only been practical in the last year or two. You can
compare this to the LiDAR-derived map shown at the end of Chapter 11,
obtained a decade earlier, at a 30-m resolution.
A briefing I’ve been giving for the last year or so quotes the American
baseball player (and philosopher) Satchel Paige: “Don’t look back. Something
might be gaining on you.” That about sums it up.
R. C. Olsen

Appendix 1
Derivations

A1.1 Derivation of the Bohr Atom


The existence of line spectra can be explained by means of the first “quantum”
model of the atom, developed by Bohr in 1913. Although the Bohr model of
the hydrogen atom was eventually replaced, it yields the correct values for the
observed spectral lines and gives a substantial insight into the structure of
atoms in general. The following derivation has the objective of obtaining the
energy levels of the Bohr atom. If the energy levels can be obtained, then the
hydrogen atom spectra can be reproduced.
The derivation proceeds with three major elements: first, use the force
balance to relate the velocity to the radius of the electron orbit, then use a
quantum assumption to get the radius, and then solve for the energies.

A1.1.1 Assumption 1: the atom is held together by the Coulomb force


It is an experimental fact that the force F between two point charges q1 and q2,
separated by a distance r, is given by

\[ F = \frac{q_1 q_2}{4\pi\varepsilon_0 r^2}, \tag{A1.1} \]

where 1/(4πε0) = 8.99 × 10⁹ N m²/C², and q1 and q2 are in units of
coulombs. The distance r is in meters, of course. The charges may be positive
or negative.
For a single electron atom, the charge of the nucleus q1 is taken to be
+Ze, where Z is the atomic number of the atom (the number of protons in
the nucleus). Z equals 1 for hydrogen. The charge of the electron q2 is -e.
Substituting the values into Eq. (A1.1) produces


Figure A1.1 Bohr atom model, where Z = 3.

\[ F = -\frac{Ze^2}{4\pi\varepsilon_0 r^2}. \tag{A1.2} \]
The minus sign on the force term means that the force is “inward,” or
attractive.

A1.1.2 Assumption 2: the electron moves in an elliptical orbit around the nucleus (as in planetary motion)
Let us assume that the electron moves in a circular orbit around the nucleus,
as shown in Fig. A1.1. Newton's second law (F = ma) is applied here by
setting the Coulomb force equal to the centripetal force. The result can be
written as

\[ \frac{Ze^2}{4\pi\varepsilon_0 r^2} = \frac{mv^2}{r}, \tag{A1.3} \]
and it is possible to solve for the radius versus velocity.

A1.1.3 Assumption 3: quantized angular momentum


Bohr now introduced the first of his two new postulates, namely that the only
allowed orbits were those for which the angular momentum L was given by
\[ L = mvr = n\hbar, \tag{A1.4} \]

where m is the electron mass, v is the velocity, r is the radius of the orbit, n is
an integer (1, 2, 3, . . .), and

\[ \hbar = \frac{h}{2\pi} = 1.054 \times 10^{-34}\ \text{joule-seconds} = 0.658 \times 10^{-15}\ \text{eV-seconds}, \]
where h is simply Planck’s constant, as before.
(One suggestion for a physical basis for this assumption is that if you view
the electron as a wave, with wavelength λ = h/p = h/mv, then an integral
number of wavelengths has to fit around the circumference defined by the
orbit, or n·h/(mv) = 2πr. Otherwise, the electron “interferes” with itself. This


all follows as a corollary to the idea that an electromagnetic wave is a particle
with energy E = hf, as above, and thus the momentum of a photon is
p = E/c = hf/c = h/λ.)
This relationship is sufficient to establish that

\[ v_n = \frac{n\hbar}{m r_n} \tag{A1.5} \]
for the velocity of the electron in its orbit. There is an index n for the different
allowed orbits. It follows that

\[ \frac{m v_n^2}{r_n} = \frac{Ze^2}{4\pi\varepsilon_0 r_n^2} = \frac{m n^2 \hbar^2}{m^2 r_n^3}. \tag{A1.6} \]
Solving for the radius of the orbit rn produces

\[ r_n = \frac{n^2\hbar^2}{m}\,\frac{4\pi\varepsilon_0}{Ze^2} = n^2\left(\frac{4\pi\varepsilon_0\hbar^2}{Zme^2}\right), \quad \text{or} \quad r_n(\text{meters}) = n^2 \times 0.528 \times 10^{-10}/Z. \tag{A1.7} \]
This only works for one-electron atoms (H and He+ as a practical matter),
but within that restriction, it works fairly well. For hydrogen (Z = 1), the
Bohr radius r1 = 0.528 × 10⁻¹⁰ m is the radius of the smallest orbit. The
radius of the Bohr hydrogen atom is half an angstrom. What is the radius of
the orbit for the sole electron in He+ (singly ionized helium, Z = 2)?
Solving for the energy levels
The potential energy associated with the Coulomb force is
\[ U = \frac{q_1 q_2}{4\pi\varepsilon_0 r}; \tag{A1.8} \]

taking U(r = ∞) = 0 and plugging in for the charges produces

\[ U = -\frac{Ze^2}{4\pi\varepsilon_0 r}. \tag{A1.9} \]
A negative potential energy means that the electron is in a potential “well.”
Given this expression for the potential energy, a similar expression is needed
for kinetic energy.
The kinetic energy T is easily obtained from Eq. (A1.3):

\[ T = \frac{1}{2}mv^2 = \frac{1}{2}\,\frac{Ze^2}{4\pi\varepsilon_0 r}. \tag{A1.10} \]
Therefore, the total energy of the electron E is obtained:

\[ E = U + T = -\frac{Ze^2}{4\pi\varepsilon_0 r} + \frac{1}{2}\,\frac{Ze^2}{4\pi\varepsilon_0 r} = -\frac{1}{2}\,\frac{Ze^2}{4\pi\varepsilon_0 r}. \tag{A1.11} \]
The total energy is negative—a general characteristic of bound orbits. This
equation also indicates that if the radius of the orbit (r) is known, then the
energy E of the electron can be calculated.
Substituting the expression for rn [Eq. (A1.7)] into Eq. (A1.11) produces

\[ E = -\frac{1}{2}\left(\frac{Ze^2}{4\pi\varepsilon_0}\right)\frac{1}{n^2}\left(\frac{Zme^2}{4\pi\varepsilon_0\hbar^2}\right), \]

or

\[ E = -\frac{1}{2}\left(\frac{Ze^2}{4\pi\varepsilon_0}\right)^2 \frac{m}{\hbar^2 n^2} = Z^2\,\frac{E_1}{n^2}, \tag{A1.12} \]

where

\[ E_1 = -\frac{me^4}{32\pi^2\varepsilon_0^2\hbar^2} = -13.58\ \text{eV} \]
is the energy of the electron in its lowest or “ground” state in the hydrogen
atom.
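For readers who want to check these results numerically, the short Python sketch below (an illustration, not part of the original derivation; the constants are standard SI values) reproduces the Bohr radius of Eq. (A1.7) and the energy levels of Eq. (A1.12):

# Minimal numerical check of the Bohr-model results above (illustrative only).
import math

EPS0 = 8.854e-12      # vacuum permittivity (C^2 / N m^2)
HBAR = 1.054e-34      # reduced Planck constant (J s)
M_E  = 9.109e-31      # electron mass (kg)
E_CH = 1.602e-19      # elementary charge (C)

def bohr_radius(n=1, Z=1):
    """Orbit radius r_n from Eq. (A1.7), in meters."""
    return n**2 * 4 * math.pi * EPS0 * HBAR**2 / (Z * M_E * E_CH**2)

def bohr_energy(n=1, Z=1):
    """Energy E_n from Eq. (A1.12), in electron volts."""
    E1 = -M_E * E_CH**4 / (32 * math.pi**2 * EPS0**2 * HBAR**2)   # joules
    return Z**2 * E1 / n**2 / E_CH

print(bohr_radius(1))       # ~0.53 x 10^-10 m, the Bohr radius
print(bohr_energy(1))       # ~ -13.6 eV, the hydrogen ground state
print(bohr_radius(1, Z=2))  # He+ orbit: half the hydrogen radius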

A1.1.4 Assumption 4: radiation is emitted only from transitions between the discrete energy levels
The second Bohr postulate now defines the nature of the spectrum produced
from these energy levels. This postulate declares that when an electron makes
a transition from a higher to a lower energy level, a single photon will be
emitted. This photon will have an energy equal to the difference in energy of
the two levels. Similarly, a photon can only be absorbed if the energy of the
photon corresponds to the difference in energy of the initial and final states.

A1.2 Dielectric Theory


Why does the complex dielectric constant imply absorption? The answer goes
back to Maxwell’s equations and the solution for propagating waves. Two
elements of the solution are claimed here:
a) For a plane wave, the electromagnetic field propagates according to the
form

\[ E = E_0\, e^{i(kx - \omega t)}, \quad \text{or} \quad E_0\, e^{i\left(\frac{2\pi x}{\lambda} - 2\pi f t\right)}. \tag{A1.13} \]

Such waves propagate at a velocity v = λf = ω/k, where ω = 2πf and k = 2π/λ.


b) The velocity is v = 1/√(εμ), with c = 1/√(ε0μ0) = 3 × 10⁸ m/s in vacuum. There-
fore, v = c/n, and n = √(εμ)/√(ε0μ0) ≈ √(ε/ε0) = √εr, where the fact that the
permeability μ is generally equal to the vacuum value is put to use.
Here, εr is the relative dielectric constant, not just the real component. If εr
is complex (εr = ε′ + iε″), then so is the velocity because it depends on the
square root of εr. This situation raises a problem of interpretation because it is
not really meaningful for the wave velocity to be complex. Regardless, the
discussion may continue by returning to the definition of v as the ratio of ω
and k. One of the two, at least, must be complex. For radar purposes, it is best
to take the frequency as real, which makes k complex and has the following
effect on Eq. (A1.13):

\[ E = E_0\, e^{i(kx - \omega t)} = E_0\, e^{i((k_r + i k_i)x - \omega t)} = E_0\, e^{i(k_r x - \omega t)}\, e^{-k_i x}. \tag{A1.14} \]

The traveling wave is now multiplied by an exponentially decreasing term,
which is just the absorption of the radar energy by water or another absorbing
element.
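To make the connection to attenuation concrete, the following minimal Python sketch evaluates the decay term in Eq. (A1.14); the radar frequency and the complex dielectric constant below are assumed example values, not numbers taken from this text:

# Illustrative only: attenuation implied by a complex dielectric constant.
# The frequency and dielectric value are assumed examples.
import cmath, math

C = 3.0e8                    # speed of light (m/s)
f = 1.25e9                   # radar frequency (Hz); an L-band-like example
eps_r = 15.0 + 3.0j          # assumed complex relative dielectric constant

omega = 2 * math.pi * f
k = (omega / C) * cmath.sqrt(eps_r)    # complex wavenumber, k = (omega/c) sqrt(eps_r)
k_i = k.imag                           # imaginary part drives the e^(-k_i x) decay

print("1/e field attenuation depth: %.2f m" % (1.0 / k_i))
print("one-way power loss over 0.1 m: %.1f dB" % (20 * k_i * 0.1 / math.log(10)))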

A1.3 Derivation of the Beam Pattern for a Square Aperture


The beam pattern for a square antenna is derived in this section. The beam
pattern is basically the Fourier transform of the aperture, which is a general
result that extends beyond the current illustration. Returning to a geometry
that is reminiscent of Young's double-slit experiment, Fig. A1.2 shows a cut
in the plane for a stylized phased-array antenna. Each of the small elements
on the left side represents the source of a radar pulse, which then propagates
to the right. The array elements are separated by a distance d in the y
direction.
Each array element is responsible for an electric field element defined by
the equation for a spherically expanding wave:

\[ E_n \approx \frac{E_0}{r_n^2}\left(a_n e^{i\varphi_n}\right) e^{ikr_n}, \tag{A1.15} \]

where En is the electric field component due to the nth array element, E0 is the
electric field magnitude defined by the sources (all the same in this case), an is
the amplitude contribution from the nth array element (here with the
dimensions of length squared), φn is the phase for the nth array element, rn is
the distance from the nth array element to the observation point, and k = 2π/λ is
the wavenumber. A classic principle of electricity and magnetism says that the
total electric field can be obtained by adding up all of the components,
keeping track of the phase, of course.
The total field from all the radiators is the sum of the elements:

Figure A1.2 Each of the array elements is the source of a spherically expanding wave,
which reaches the “screen” at the right after traveling a distance that depends on y.

\[ E_{\text{total}} = \sum_n \frac{E_0}{r_n^2}\left(a_n e^{i\varphi_n}\right) e^{ikr_n}. \tag{A1.16} \]

Although accurate, this form is difficult to deal with analytically. Therefore,


some assumptions will be made that permit simplification. The primary
assumption is that the observation point is a large distance from the array
(rn ≫ d), which capitalizes on the fact that the variation in amplitude with
distance varies slowly in the y direction, compared to the relatively rapid
changes in phase. For simplicity, an is a constant, and φn = 0 (the emitters are
all in phase). The simplified equation is thus

\[ E_{\text{total}} = \frac{E_0}{r_0^2} \sum_n e^{ikr_n}, \tag{A1.17} \]

where the slow inverse square variation in amplitude has been factored out.
The next part manipulates the complex term inside the summation to address
the question of how the exponent varies as n varies.
If it is assumed that y0 = 0 corresponds to the n = 0 element, then

\[ r_0 = \sqrt{x^2 + y^2}; \qquad r_n = \sqrt{x^2 + (y + nd)^2}. \tag{A1.18} \]

This form is exact, but the trick is to factor out the ro term from the rn terms.
Such an operation is possible because d is small. First, expand the term inside
the square root:


\[ r_n = r_0\,\frac{\sqrt{x^2 + (y + nd)^2}}{\sqrt{x^2 + y^2}} = r_0\,\frac{\sqrt{x^2 + y^2 + 2ynd + n^2 d^2}}{\sqrt{x^2 + y^2}}. \tag{A1.19} \]
Without any approximations, bring the denominator into the radical and
divide out:
\[ r_n = r_0 \sqrt{\frac{x^2 + y^2 + 2ynd + n^2 d^2}{x^2 + y^2}} = r_0 \sqrt{1 + \frac{2ynd}{x^2 + y^2} + \frac{n^2 d^2}{x^2 + y^2}}. \tag{A1.20} \]
A subtle trick is used at this point. First, take the third term as very small and
then use an approximation for the square root, where the second term is small:
\[ r_n = r_0 \sqrt{1 + \frac{2ynd}{x^2 + y^2} + 0} \approx r_0\left(1 + \frac{ynd}{x^2 + y^2}\right) = r_0\left(1 + \frac{ynd}{r_0^2}\right). \tag{A1.21} \]

(For exercise, check that the third term is small by plugging in some typical
numbers: d = 1 cm, y = 500 m, and x = 2000 m.) Now use the familiar polar
form sin θ = y/r0 and simplify the remaining terms:

\[ E_{\text{total}} = \frac{E_0}{r_0^2}\sum_n e^{ikr_n} = \frac{E_0}{r_0^2}\sum_n e^{ikr_0}\, e^{iknd\sin\theta} = \frac{E_0}{r_0^2}\, e^{ikr_0}\sum_n e^{iknd\sin\theta}, \tag{A1.22} \]
which defines the zeroes in the beam pattern. For a continuous antenna
element, the sum is replaced by an integral over an antenna of length L = nd:
\[ E_{\text{total}}(\theta) = \frac{E_0}{r_0^2}\, e^{ikr_0} \sum_n e^{ik(nd\sin\theta)} = \frac{E_0}{r_0^2}\, e^{ikr_0} \int_{-L/2}^{L/2} \frac{a_0}{L}\, e^{iky\sin\theta}\, dy, \tag{A1.23} \]

where the sum over the elements has been replaced by an integral over
y = -L/2 to L/2, and the amplitude factor has been put back in for a
moment, along with an inverse length to go with the integration variable. The
integral on the right side is simply the Fourier transform of the square
aperture of length L:
\[ E_{\text{total}}(\theta) = \frac{E_0}{r_0^2}\, e^{ikr_0}\, \frac{1}{L}\int_{-L/2}^{L/2} e^{iky\sin\theta}\, dy = \frac{E_0}{r_0^2}\, e^{ikr_0}\, \frac{\sin[(kL\sin\theta)/2]}{(kL\sin\theta)/2}. \tag{A1.24} \]

The power at any particular location will then be proportional to the square of
the electric field strength. The resulting function is then proportional to the
square of the sinc function, sin²α/α², where α = (kL sin θ)/2.
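The result is easy to verify numerically. The short Python sketch below (illustrative only; the element count, spacing, and wavelength are assumed example values) shows that the discrete sum of Eq. (A1.22) closely reproduces the sinc-squared power pattern implied by Eq. (A1.24):

# Discrete array factor vs. the continuous-aperture sinc pattern (illustrative).
# N, d, and the wavelength are assumed example values.
import numpy as np

lam = 0.03               # wavelength (m)
k = 2 * np.pi / lam
N, d = 64, 0.015         # 64 elements at half-wavelength spacing
L = N * d                # equivalent aperture length

theta = np.radians(np.linspace(-10, 10, 2001))

# Discrete sum over the array elements, as in Eq. (A1.22)
n = np.arange(N)[:, None]
array_factor = np.abs(np.exp(1j * k * n * d * np.sin(theta)).sum(axis=0)) / N

# Continuous-aperture result of Eq. (A1.24): sin(alpha)/alpha
alpha = 0.5 * k * L * np.sin(theta)
envelope = np.abs(np.sinc(alpha / np.pi))    # numpy sinc is sin(pi x)/(pi x)

# The power patterns (field squared) agree closely over these angles
print(np.max(np.abs(array_factor**2 - envelope**2)))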

Appendix 2
Corona

A2.1 Mission Overview

Table A2.1 Summary of Corona missions (1959 to 1972).


KH-1 KH-2 KH-3 KH-4 KH-4A KH-4B

Period of Operation 1959–1960 1960–1961 1961–1962 1962–1963 1963–1969 1967–1972


Number of RVs 1 1 1 1 2 2
Mission Series 9000 9000 9000 9000 1000 1100
Life 1 day 2–3 days 1–4 days 6–7 days 4–15 days 19 days
Altitude (nm):
Perigee 103.5 (e1) 136.0 (e) 117.0 (e) 114.0 (e) u/a2 u/a
Apogee 441.0 (e) 380.0 (e) 125.0 (e) 224.0 (e)
Average Ops u/a u/a u/a 110 (e) 100 (e) 81 (e)
Missions:
Total 10 10 6 26 52 17
Successful 1 4 4 21 49 16
1 estimated.
2 unavailable.

A2.2 Camera Data


Table A2.2 Camera data for the Corona systems.
KH-1 KH-2 KH-3 KH-4 KH-4A KH-4B

Model C C0 C00 Mural J-1 J-3


Type mono mono mono stereo stereo stereo
Scan Angle (deg) 70 70 70 70 70 70
Stereo Angle (deg) – – – 30 30 30
Shutter u/a1 u/a u/a u/a focal plane focal plane
Lens (24-inch focal length) f/5 Tessar f/5 Tessar f/3.5 Petzval f/3.5 Petzval f/3.5 Petzval f/3.5 Petzval
Resolution (estimated):
Ground (feet) 40 25 12–25 10–25 9–25 6
Film (lines/mm) 50–100 50–100 50–100 50–100 120 160


Coverage u/a u/a u/a u/a 10.6 × 144 nm 8.6 × 117 nm

Film Base acetate polyester polyester polyester polyester polyester
Film Width 2.10″ 2.10″ 2.25″ 2.25″ 2.25″ 2.25″
Image Format 2.10″ (e2) 2.19″ (e) 2.25″ × 29.8″ 2.18″ × 29.8″ 2.18″ × 29.8″ 2.18″ × 29.8″
Film Load u/a u/a u/a u/a
Camera – – – – 8,000′ 8,000′
RV – – – – 16,000′ 16,000′
Mission – – – – 32,000′ 32,000′
1 unavailable.
2 estimated.

A2.3 Mission Summary


Date Mission Designator Success1 Remarks

1959
28 Feb 1959-002A (β) Discoverer I, Thor Agena A, orbited for 5 days.
One year after Explorer 1 (2/21/58).
13 April 1959-003A (γ1) Discoverer II2, Thor rocket Agena A.
3 June Failed to orbit.
25 Jun 9001 KH-1 no Discoverer IV Agena did not orbit.
13 Aug 9002 KH-1 no Discoverer V; camera failed on Rev 1: RV not
recovered.
19 Aug 9003 KH-1 no Discoverer VI; camera failed on Rev 2;
retrorocket malfunction; RV not recovered.
7 Nov 9004 KH-1 no Discoverer VII Agena failed to orbit.
20 Nov 9005 KH-1 no Discoverer VIII; bad orbit; camera failure; no
recovery.
1960
4 Feb 9006 KH-1 no Discoverer IX; Agena failed to orbit.
19 Feb 9007 KH-1 no Discoverer X; Agena failed to orbit.
15 Apr 9008 KH-1 no Discoverer XI; camera operated; spin rocket failure;
no recovery.
29 Jun N/A N/A N/A Discoverer XII diagnostic flight; Agena failed to orbit.
10 Aug N/A N/A N/A Discoverer XIII diagnostic flight successful.3
18 Aug 9009 KH-1 yes Discoverer XIV; first successful KH-1 mission;
first successful air recovery of object sent into space.
13 Sep 9010 KH-1 no Discoverer XV; camera operated; wrong pitch attitude
on reentry; no recovery (capsule sank).
26 Oct 9011 KH-2 no Discoverer XVI; Agena failed to orbit.
12 Nov 9012 KH-2 no Discoverer XVII; air catch: payload malfunction.
7 Dec 9013 KH-2 yes Discoverer XVIII; first successful KH-2 mission: air
catch
20 Dec N/A N/A N/A Discoverer XIX radiometric mission. (MIDAS missile
detection test)


1961
17 Feb 9014A KH-5 no Discoverer XX; first ARGON flight: orbital
programmer failed, camera failed, no recovery.
18 Feb N/A N/A N/A Discoverer XXI radiometric mission.
30 Mar 9015 KH-2 no Discoverer XXII; Agena failure; no orbit.
8 Apr 9016A KH-5 no Discoverer XXIII; camera OK; no recovery.
8 Jun 9018A KH-5 no Discoverer XXIV; Agena failure, power &
guidance failure; no recovery.
16 Jun 9017 KH-2 yes Discoverer XXV; water landing, recovery
7 Jul 9019 KH-2 partial Discoverer XXVI; Camera failed on Rev 22:
successful recovery.
21 Jul 9020A KH-5 no Discoverer XXVII; No orbit; Thor problem.
3 Aug 9021 KH-2 no Discoverer XXVIII; No orbit; Agena guidance failure.
30 Aug 9023 KH-3 yes Discoverer XXIX; 1st KH-3 flight. Air recovery.
12 Sep 9022 KH-2 yes Discoverer XXX; Air recovery (fifth).
17 Sep 9024 KH-2 no Discoverer XXXI; no recovery power failure.
13 Oct 9025 KH-3 yes Discoverer XXXII; Air recovery.
23 Oct 9026 KH-2 no Discoverer XXXIII; Agena failed to orbit.
5 Nov 9027 KH-3 no Discoverer XXXIV; no recovery.
15 Nov 9028 KH-3 yes Discoverer XXXV
12 Dec 9029 KH-3 yes Discoverer XXXVI
1962
13 Jan 9030 KH-3 no Discoverer XXXVII; Agena failed to orbit.
27 Feb 9031 KH-4 yes Discoverer XXXVIII; first KH-4 flight: air recovery.
18 Apr 9032 KH-4 yes air recovery
28 Apr 9033 KH-4 no No recovery; failed to eject parachute.
15 May 9034A KH-5 yes
30 May 9035 KH-4 yes
2 Jun 9036 KH-4 no No recovery; torn parachute.
23 Jun 9037 KH-4 yes
28 Jun 9038 KH-4 yes
21 Jul 9039 KH-4 yes
28 Jul 9040 KH-4 yes
2 Aug 9041 KH-4 yes
29 Aug 9044 KH-4 yes
1 Sep 9042A KH-5 yes
17 Sep 9043 KH-4 yes
29 Sep 9045 KH-4 yes
9 Oct 9046A KH-5 yes
5 Nov4 9047 KH-4 yes
24 Nov 9048 KH-4 yes
4 Dec 9049 KH-4 yes
1963
14 Dec 9050 KH-4 yes
8 Jan 9051 KH-4 yes
28 Feb 9052 KH-4 no Separation failure
18 Mar 8001 KH-6 no First KH-6 flight; no orbit; guidance failure (Agena)
1 Apr 9053 KH-4 yes
26 Apr 9055A KH-5 no No orbit; attitude sensor problem
18 May 8002 KH-6 no Orbit achieved; Agena failed in flight.
13 Jun 9054 KH-4 yes


26 Jun 9056 KH-4 yes


18 Jul 9057 KH-4 yes
31 Jul 8003 KH-6 partial Camera failed after 32 hrs.
24 Aug 1001 KH-4A partial First KH-4A flight;5 2 RV’s; RV-2 Lost.
29 Aug 9058A KH-5 yes
23 Sep 1002 KH-4A partial RV-1 recovered; RV-2 lost
29 Oct 9059A KH-5 yes
9 Nov 9060 KH-4 no Failure, unstable launching
27 Nov 9061 KH-4 no Agena failed in flight; prevented recovery.
21 Dec 9062 KH-4 yes Last KH-4 mission
1964
15 Feb 1004 KH-4A yes
24 Mar 1003 KH-4A no No orbit: Agena power failure
27 Apr 1005 KH-4A no No on-orbit operation: Agena failure: RV impacted in
Venezuela.
4 Jun 1006 KH-4A yes
13 Jun 9063A KH-5 yes
19 Jun 1007 KH-4A yes
10 Jul 1008 KH-4A yes
5 Aug 1009 KH-4A yes
21 Aug 9064A KH-5 yes
14 Sep 1010 KH-4A yes
5 Oct 1011 KH-4A partial No RV-2 recovery
17 Oct 1012 KH-4A yes RV-2 water recovery because of bad weather.
2 Nov 1013 KH-4A partial Both cameras failed on Rev 52.
18 Nov 1014 KH-4A yes
19 Dec 1015 KH-4A yes
1965
15 Jan 1016 KH-4A yes
25 Feb 1017 KH-4A yes
25 Mar 1018 KH-4A yes
29 Apr 1019 KH-4A partial No RV-2 recovery
18 May 1021 KH-4A yes
9 Jun 1020 KH-4A yes Water recovery on RV-2
19 Jul 1022 KH-4A yes
17 Aug 1023 KH-4A partial Forward camera failed
2 Sep N/A no Destroyed on launching by range safety
22 Sep 1024 KH-4A yes
5 Oct 1025 KH-4A yes
28 Oct 1026 KH-4A yes
9 Dec 1027 KH-4A yes Control-gas loss
24 Dec 1028 KH-4A yes
1966
2 Feb 1029 KH-4A yes
9 Mar 1030 KH-4A yes
7 Apr 1031 KH-4A yes
3 May 1032 KH-4A no Agena failed to separate from booster.
24 May 1033 KH-4A yes
21 Jun 1034 KH-4A yes


9 Aug 1036 KH-4A yes


20 Sep 1035 KH-4A yes
8 Nov 1037 KH-4A yes
1967
14 Jan 1038 KH-4A yes
22 Feb 1039 KH-4A yes
30 Mar 1040 KH-4A yes
9 May 1041 KH-4A yes
16 Jun 1042 KH-4A yes Water pick-up on RV-2
7 Aug 1043 KH-4A yes
15 Sep 1101 KH-4B yes First KH-4B mission. (PERS)
2 Nov 1044 KH-4A yes
9 Dec 1102 KH-4B yes
1968
2 Jan 1045 KH-4A yes
14 Mar 1046 KH-4A yes
1 May 1103 KH-4B yes
20 Jun 1047 KH-4A yes
7 Aug 1104 KH-4B yes
18 Sep 1048 KH-4A partial Forward camera failed.
3 Nov 1105 KH-4B yes
12 Dec 1049 KH-4A yes Degraded film
1969
5 Feb 1106 KH-4B partial Aft camera failed.
19 Mar 1050 KH-4A partial Terminated; Agena failure
2 May 1051 KH-4A yes Degraded film
24 Jul 1107 KH-4B partial Forward camera failed; RV-1 water recovery
22 Sep 1052 KH-4A yes Last KH-4A mission
4 Dec 1108 KH-4B yes
1970
4 Mar 1109 KH-4B yes
20 May 1110 KH-4B yes
23 Jul 1111 KH-4B yes
18 Nov 1112 KH-4B yes
1971
17 Feb 1113 KH-4B no Failure of Thor booster.
24 Mar 1114 KH-4B yes
10 Sep 1115 KH-4B yes
1972
19 Apr 1116 KH-4B yes
25 May 1117 KH-4B yes Final Corona mission.
1 The assessment in this column is subjective.
2 Due to error, the capsule landed on the island of Spitzbergen and was apparently recovered by the Soviet Union (ref:
Richelson, America's Secret Eyes in Space).
3 This was the first successful diagnostic flight in the Discoverer series. Its mission ended with the first successful recovery of
an object sent into space. The recovery vehicle (RV) capsule was recovered from the Pacific Ocean, and the RV resides in
the Smithsonian's National Air and Space Museum.
4 On October 26, 1962, a non-photoreconnaissance engineering mission was flown.
5 Richelson says the first KH-4A camera flight was May 18, 1963.


A2.4 Orbits: An Example1


Data are given here showing orbital parameters for an early Corona mission.
These systems differed from most modern satellites in their low altitudes,
elliptical (rather than circular) orbits, and the rapid drop in altitude
throughout the mission. The inclination was 80.0°, as opposed to the roughly 98°
of a typical “polar” orbit. The parameters are epoch (year, month, day, and
fraction of a day); period P; height of perigee hP; and height of apogee hA.

1959-005A (ε1) – Discoverer-5

Property Name Discoverer-5


KH-1 Mission 9003 [=Key Hole]
Agena 1028
FTV-1028 [=Flight Test Vehicle]
Corona 5
SSC 18
Start 1959-08-13 19:00:08 UT, Western Test Range, Thor Agena A

Orbital Parameters
Epoch P (min) hP (km) hA (km)

59-08-13.8 94.19 217 739


59-08-14.36 94.07 215 732
59-08-20.29 93.59 193 707
59-08-28.39 92.94 215 622
59-09-02.23 92.60 215 588
59-09-05.44 92.31 215 560
59-09-08.64 91.96 215 526
59-09-10.23 91.85 215 515
59-09-15.01 91.39 185 501
59-09-18.18 91.00 185 462
59-09-21.33 90.45 163 430
59-09-24.46 89.67 163 354
59-09-26.32 89.10 137 323

1. http://www.lib.cas.cz/www/space.40/1959/005A.HTM
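The tabulated periods are consistent with Kepler's third law. As an illustration (using standard values for the earth's gravitational parameter and radius), the first epoch above can be checked with a few lines of Python:

# Check the 59-08-13.8 epoch (P = 94.19 min, 217 x 739 km) against Kepler's third law.
import math

GM = 3.986e14          # earth gravitational parameter (m^3/s^2)
R_EARTH = 6.378e6      # equatorial radius (m)

h_perigee, h_apogee = 217e3, 739e3           # meters
a = R_EARTH + 0.5 * (h_perigee + h_apogee)   # semi-major axis of the ellipse

period = 2 * math.pi * math.sqrt(a**3 / GM)
print("period = %.2f minutes" % (period / 60))   # ~94.2 min, matching the table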

Appendix 3
Tracking and Data Relay Satellite System

A3.1 Relay Satellites: TDRSS


In the early phases of the space age, NASA and the U.S. Air Force
maintained a large array of ground stations to communicate with their
satellites. The ground stations were expensive and did not provide continuous
coverage, particularly for the LEO satellites. NASA developed the idea for a
global system of communication satellites that culminated with the launch of
TDRS-1 on April 4, 1983. Following the loss of TDRS-2 in the Challenger
accident in 1986, five more TDRS satellites were launched over the next nine
years. The original TRW-built satellites were supplemented (replaced) by
Boeing-built satellites, TDRS 8–10. TDRS-11 and -12 were launched in 2013
and 2014, respectively, with only modest changes from the second-generation
satellites.
The complete system, known as the Tracking and Data Relay Satellite
System, or TDRSS, consists of the satellites, two ground terminals at the
White Sands Complex, a ground terminal extension on the island of Guam,
and customer- and data-handling facilities. This constellation of satellites
provides global communication and data relay services for the Space Shuttle,
International Space Station, Hubble Space Telescope, and a multitude of
LEO satellites, balloons, and research aircraft.1

A3.2 White Sands


The White Sands Complex (WSC) is located near Las Cruces, New Mexico
and includes two functionally identical satellite ground terminals. Figure A3.1

1. http://tdrs.gsfc.nasa.gov/tdrsproject/about.htm


Figure A3.1 The second TDRSS ground terminal at White Sands Ground Station.

Figure A3.2 There are four nominal stations for the active TDRS constellation: TDE
(TDRS East), TDW (TDRS West), TDZ [TDRS Zone of Exclusion (ZOE)], and TDS (TDRS
Spare). The original plan involved only the first two stations. A zone of exclusion existed in
the original plan, and eventually the third station was added.

shows the ground station. These terminals are known as the White Sands
Ground Terminal (WSGT) and the Second TDRSS Ground Terminal
(STGT). The ground stations include three 18.3-m Ku-band antennas, three
19-m Ku-band antennas and two 10-m S-band TT&C antennas.


Table A3.1 The TDRS fleet: satellite locations. TDRS-7 and -8 are controlled from the
Guam Remote Ground Terminal (GRGT).
Satellite Launch Date Location

TDRS-1 April 4, 1983 STS-6 (Challenger) Decommissioned June 2010


TDRS-2 January 27, 1986 STS 51-L (Challenger)
TDRS-3 September 29, 1988 STS-26 (Discovery) 43° W Spare (in storage)
TDRS-4 March 13, 1989 STS-29 (Discovery) Decommissioned December 2011
TDRS-5 August 2, 1991 STS-43 (Atlantis) 168° W (TDW)
TDRS-6 January 13, 1993 STS-54 (Endeavour) 46° W (TDE)
TDRS-7 July 13, 1995 STS-70 (Discovery) 90° E (TDZ)
TDRS-8 June 30, 2000 Atlas IIA 85° E (TDZ)
TDRS-9 March 8, 2002 Atlas IIA 41° W (TDE)
TDRS-10 December 5, 2002 Atlas IIA 174° W (TDW)
TDRS-11 January 30, 2013 Atlas V 171° W (in test)
TDRS-12 January 23, 2014 Atlas V 150° W (in test)

A3.3 TDRS 1–7


A3.3.1 Satellites
The Tracking and Data Relay Satellite series began with a TRW-built vehicle
(illustrated in Fig. A3.3). The system is fairly typical of communications

Figure A3.3 TDRS 1-7 spacecraft: 45 feet wide, 57 feet long, 5000 pounds, and 1800-W
power (EOL).


Figure A3.4 TDRS H.

Table A3.2 TDRS telemetry characteristics. Data reprinted from http://tdrs.gsfc.nasa.gov/tdrsproject/spacecraft.htm#.
Baseline Service Service TDRS 1–7 TDRS 8–10

Single Access (SA) S-Band Forward 300 kbps 300 kbps


Return 6 Mbps 6 Mbps
Ku-Band Forward 25 Mbps 25 Mbps
Return 300 Mbps 300 Mbps
Ka-Band Forward N/A 25 Mbps
Return N/A 800 Mbps
Number of links per 2 S SA 2 S SA
spacecraft 2 Ku SA 2 Ku SA
2 Ka SA
Number of Multiple-Access Links per Spacecraft Forward 1 @ 10 kbps 1 @ 300 kbps
Return 5 @ 100 kbps 5 @ 3 Mbps
Customer Tracking 150 m 150 m
3 sigma 3 sigma

satellites of this era (early 1980s). The total power output of the solar array is
approximately 1800 W. Spacecraft telemetry and commanding are performed
via a Ku-band communications system, with emergency backup provided by
an S-band system.2

2. http://tdrs.gsfc.nasa.gov/tdrsproject/tdrs1.htm#1


Figure A3.5 The TDRS fleet as of 2014. TDRS-1 and TDRS-4 are drifting with respect to
the earth’s surface, as indicated here in the day-long simulation. The active satellites are still
fluctuating with respect to the surface of the earth by a few degrees, illustrating the
difference between geosynchronous and geostationary. The latter is quite rare and would be
difficult to maintain. TDRS-9 and -10 are nearly geostationary, with inclinations of just a few
degrees.

A3.3.2 Payload3
The satellite payload is an ensemble of antennas designed to support the relay
mission:
• Two single-access (SA) antennas: Each antenna is a 4.9-m-diameter
molybdenum wire mesh antenna that can be used for Ku-band and
S-band links. Each antenna is steerable in 2 axes and communicates
with one target spacecraft at a time.
• One multiple-access (MA) S-band antenna array: This is an electronically
steerable phased array consisting of 30 fixed helix antennas. The MA
array can receive data from up to 20 user satellites simultaneously, with
one electronically steerable forward service (transmission) at a time.
Twelve of the helices can transmit and receive, with the remainder only
able to receive. Relatively low data rates are supported—100 bps to
50 kbps.4
• One space-to-ground-link (SGL) antenna: This is a 2-m parabolic
antenna operating at Ku-band that provides the communications link
between the satellite and the ground. All customer data is sent through

3. http://msl.jpl.nasa.gov/QuickLooks/tdrssQL.html
4. NASA Press Release, Tracking And Data Relay Satellite System (TDRSS) Overview;
Release No. 91-41; June 7, 1991


this dish, as are all regular TDRS command and telemetry signals. The
antenna is gimbaled on two axes.
• One S-band omni-antenna: a conical log spiral antenna used during the
satellite’s deployment phase and as a backup in the event of a spacecraft
emergency. This antenna does not support customer links.

A3.4 TDRS 8–10


The second generation of TDRS spacecraft are based on the body-stabilized
Boeing (Hughes) 601 satellite (see Fig. A3.4). This is a standard communica-
tions bus that is heavily used in the telecommunications industry. It featured a
tighter pointing capability than the earlier models, which is required for the
narrow bandwidth used for the new Ka-band service. The power system is
more robust. Two solar array wings provide a 15-year end-of-life power of
approximately 2300 W. Nickel hydrogen batteries are used, in contrast to the
older nickel cadmium batteries used in earlier satellites.

A3.4.1 TDRS 8–10: payload characteristics5


The TDRS H, I, and J provide 18 service interfaces to user spacecraft. The
onboard communications payload can be characterized as bent-pipe
repeaters, in that no processing is done by the TDRS.

A3.4.1.1 S-band multiple access


The MA array consists of two antennas, one each for transmitting to and
receiving from users. The phased array antennas are designed to receive signals
from five spacecraft at once while transmitting to one.

A3.4.1.2 Two single-access antennas


These two large (15-foot diameter), very light antennas are pointed at
individual user satellites to transmit and receive data using one or two radio-
frequency (RF) channels (S-band and either Ku-band or Ka-band). The
S-band access is used to support manned missions, science data missions
including the Hubble Space Telescope, and satellite data dumps. The
Ku-band higher bandwidth supports high-resolution digital television,
including all space-shuttle video communications. Recorders aboard NASA
satellites can dump large volumes of data at rates of up to 300 million bits per
second (300 Mb/s). The Ka-band is a new tunable, wideband, high-frequency
service offered by the 15-foot antennas, providing data rates up to 800 million
bits per second.

5. http://spaceflightnow.com/atlas/ac139/000626tdrsh.html


A3.4.1.3 Space-ground-link antenna (Ku-band)


This smaller (2-m-diameter) antenna always points at the TDRS ground
station at White Sands, New Mexico.

A3.5 TDRS K, L, M
A new generation of TDRS satellites began to operate in 2013. They are
similar to the second series (based on the Boeing 601). Two have been
launched, and as of early 2014, they are in the checkout phase.

Appendix 4
Useful Equations and Constants

EM Waves

\[ \lambda f = c; \quad E = hf; \quad \lambda = \frac{hc}{\Delta E}; \quad c = 2.998 \times 10^{8}\ \mathrm{m/s}; \quad 1\ \mathrm{eV} = 1.602 \times 10^{-19}\ \mathrm{J}; \]

\[ h = \text{Planck's constant} = 6.626 \times 10^{-34}\ \text{joule-seconds} = 4.136 \times 10^{-15}\ \text{eV-seconds}; \]

\[ \Delta E(\mathrm{eV}) = \frac{1.24 \times 10^{-6}}{\lambda(\mathrm{m})} = \frac{1.24}{\lambda(\mu\mathrm{m})}. \]
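As a quick numerical illustration of the last relation (the 1.55-μm wavelength is an arbitrary example):

# Photon energy from the 1.24 / lambda(um) rule (illustrative).
wavelength_um = 1.55                 # example wavelength (micrometers)
energy_ev = 1.24 / wavelength_um     # ~0.8 eV per photon
energy_joules = energy_ev * 1.602e-19
print(energy_ev, energy_joules)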

Bohr Atom

\[ r_n(\mathrm{m}) = n^2 \times 0.528 \times 10^{-10}/Z; \]

\[ E_n = -\frac{1}{2}\left(\frac{Ze^2}{4\pi\varepsilon_0}\right)^2 \frac{m}{\hbar^2 n^2} = Z^2\,\frac{E_1}{n^2}; \qquad E_1 = -\frac{me^4}{32\pi^2\varepsilon_0^2\hbar^2} = -13.58\ \mathrm{eV}; \]

\[ \text{number} \propto e^{-\,\text{bandgap energy}/\text{thermal energy}\,(kT)}. \]


Blackbody Radiation

\[ c = 3 \times 10^{8}\ \mathrm{m/s}; \quad h = 6.626 \times 10^{-34}\ \mathrm{J\cdot s}; \quad k = 1.38 \times 10^{-23}\ \mathrm{J/K}; \]

\[ \text{radiance} = L = \frac{2hc^2}{\lambda^5}\,\frac{1}{e^{hc/\lambda kT} - 1}; \]

\[ \text{Stefan–Boltzmann law:}\quad R = \sigma\varepsilon T^4\ \ (\mathrm{W/m^2}); \]

\[ \varepsilon = \text{emissivity}; \quad \sigma = 5.67 \times 10^{-8}\ \mathrm{W/(m^2\,K^4)}; \quad T = \text{temperature (K)}; \]

\[ \text{Wien's law:}\quad \lambda_{\max} = a/T; \quad a = 2.898 \times 10^{-3}\ \mathrm{m\cdot K}; \]

\[ T_{\text{radiative}} = \varepsilon^{1/4}\, T_{\text{kinetic}}. \]
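A minimal Python sketch (the 300-K temperature is an arbitrary example) evaluates these blackbody relations:

# Blackbody relations evaluated at an example temperature (illustrative only).
import math

C, H, K = 3.0e8, 6.626e-34, 1.38e-23

def planck_radiance(wavelength_m, T):
    """Spectral radiance (W m^-2 sr^-1 per meter of wavelength)."""
    return (2 * H * C**2 / wavelength_m**5) / (math.exp(H * C / (wavelength_m * K * T)) - 1)

T = 300.0                       # kelvins
lam_max = 2.898e-3 / T          # Wien's law: ~9.7 um
R = 5.67e-8 * 1.0 * T**4        # Stefan-Boltzmann with emissivity 1: ~459 W/m^2
print(lam_max, R, planck_radiance(10e-6, T))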

Optics

\[ \frac{1}{f} = \frac{1}{i} + \frac{1}{o}; \qquad f/\# = \frac{\text{focal length}}{\text{diameter}}; \]

\[ \text{Rayleigh criterion:}\quad \mathrm{GSD} = \Delta\theta\cdot\text{range} = \text{range}\cdot
   \begin{cases} \dfrac{\lambda}{\text{diameter}} & \text{rectangular apertures} \\[1ex]
                 1.22\cdot\dfrac{\lambda}{\text{diameter}} & \text{circular optics.} \end{cases} \]
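As an example of the Rayleigh criterion, the sketch below computes a diffraction-limited GSD; the aperture, wavelength, and range are assumed illustrative values, not the parameters of any system described in this text:

# Diffraction-limited ground sample distance for circular optics (illustrative values).
wavelength = 0.5e-6     # visible light (m)
diameter = 0.6          # aperture diameter (m)
slant_range = 700e3     # range (m), roughly a low-earth-orbit altitude

gsd = slant_range * 1.22 * wavelength / diameter
print("diffraction-limited GSD: %.2f m" % gsd)   # ~0.7 m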

Reflection and Refraction

\[ n = \frac{c}{v}; \qquad n_1 \sin\theta_1 = n_2 \sin\theta_2; \]

\[ r_\perp = \frac{n_1\cos\theta_1 - n_2\cos\theta_2}{n_1\cos\theta_1 + n_2\cos\theta_2}; \quad
   r_\parallel = \frac{n_2\cos\theta_1 - n_1\cos\theta_2}{n_2\cos\theta_1 + n_1\cos\theta_2}; \quad
   R = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2. \]
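For normal incidence, the last expression gives the familiar few-percent reflectance of a dielectric interface; for example (air to glass, with assumed indices):

# Normal-incidence Fresnel reflectance for an assumed air-glass interface.
n1, n2 = 1.0, 1.5
R = ((n1 - n2) / (n1 + n2))**2
print("normal-incidence reflectance: %.1f%%" % (100 * R))   # ~4%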


Orbital Mechanics

\[ \vec{F} = -G\,\frac{m_1 m_2}{r^2}\,\hat{r}; \quad F = g_0\, m\left(\frac{R_{\text{earth}}}{r}\right)^2; \quad
   G = 6.67 \times 10^{-11}\ \mathrm{N\,m^2/kg^2}; \quad
   g_0 = G\,\frac{m_{\text{earth}}}{R_{\text{earth}}^2} = 9.8\ \mathrm{m/s^2}; \]

\[ R_{\text{earth}} = 6.38 \times 10^{6}\ \mathrm{m}; \qquad m_{\text{earth}} = 5.9736 \times 10^{24}\ \mathrm{kg}. \]

\[ v = \omega r; \quad \omega = 2\pi f; \quad \tau = \frac{1}{f} = \frac{2\pi}{\omega}; \]

\[ F_{\text{centripetal}} = \frac{mv^2}{r} = m\omega^2 r; \qquad \text{circular motion:}\ v = R_{\text{earth}}\sqrt{\frac{g_0}{r}}; \]

\[ \text{Ellipses:}\quad \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1; \quad \varepsilon = \frac{\sqrt{a^2 - b^2}}{a} \ \ \text{or}\ \ \varepsilon = \sqrt{1 - \frac{b^2}{a^2}}; \]

\[ \text{Distance from center to focus:}\quad c = \varepsilon a = \sqrt{a^2 - b^2}; \]

\[ \text{Elliptical orbit:}\quad v^2 = GM\left(\frac{2}{r} - \frac{1}{a}\right); \qquad
   \tau^2 = \frac{4\pi^2}{g_0 R_{\text{earth}}^2}\, r^3 = \frac{4\pi^2}{M_{\text{earth}} G}\, r^3. \]
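As an illustration of these relations, the following Python sketch solves the last equation for the geostationary orbit radius, where the period equals one sidereal day:

# Geostationary radius from tau^2 = 4 pi^2 r^3 / (G M_earth)  (illustrative).
import math

G, M_EARTH, R_EARTH = 6.67e-11, 5.9736e24, 6.38e6
SIDEREAL_DAY = 86164.0      # seconds

r_geo = (G * M_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print("GEO altitude: %.0f km" % ((r_geo - R_EARTH) / 1e3))   # ~35,800 km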

Index
A C
absorption, 37–40, 47, 141 Cardinal effect, 220–221
adaptive optics, 60–61 Cassegrain, 83–84, 92
Advanced Camera for Surveys C-band radar, 222, 227, 239
(ACS), 87, 90 central force problem, 103
Aerojet, 192 centripetal force, 105
agriculture, 17, 134–135, 153 channel plate intensifier, 37
air order of battle (AOB), 3–5 chirp, 207
airborne, 1, 8, 18, 122, 144, 201, 234,
248–255 coherent change detection (CCD),
Airborne Hyperspectral Imager 236–237
(AHI), 196, 198 corner reflectors, 221–222
Airy disk, 67, 90 Corona, 49–57, 74–75, 273–278
aperture, 64–67, 208–209 correlation, 160–161
astronaut photography, 1, 100 COSTAR, 85–86, 90–91
atmospheric absorption, 57–60, covariance, 160–161, 166
141, 175 cross track, 75–77
atmospheric compensation, 132–133 Cuba, 152
atmospheric scattering, 58–59,
132, 134 D
atmospheric turbulence, 60, 61 Defense Meteorological Satellite
AVIRIS, 75, 134, 139–142 Program (DMSP), 9, 35, 94, 96
azimuthal antenna pattern, 210–211 Defense Support Program (DSP),
192–194
B Degree of Linear Polarization
bandgap, 69, 71–72, 130 (DOLP), 146–147
bathymetry, 134, 257–259 desert soil penetration, 218–219,
beam pattern, 208, 216, 269 228–229
blackbody radiation, 40–42 dielectric coefficient, 217–218, 229,
Blue Marble, 3 268–269
Bohr atom, 37–40, 265–268 diffraction, 65–67, 88
brightness temperature, digital elevation model, 238, 243,
182, 189–191 262–263, 256


digital number (DN), 154–158 G


Disaster Management Constellation Gambit, 5–6, 50–51
(DMC), 15–17, 153 geometric resolution, 89–90
dispersion, 122–123 geostationary/geosynchronous orbit
Dolon air base, 5 (GEO), 105, 108, 113–114, 283
Doppler, 213, 233–234 Gnanalingam, Suntharalingam, 207
dynamic range, 92, 94, 96, 131–132, Geostationary Operational
156–158 Environmental Satellite (GOES),
11–13, 187–192
E Galactic Radiation and
Earth Resources Technology Background (GRAB) Satellite, 50
Satellite (ERTS-1), 123–124 gravity, 103–105
electronic order of battle (EOB),
3, 5–6 H
elements of recognition, 149–154 Hasselblad, 63
association, 154 Hen House radar, 6
height, 150 High-earth orbit (HEO), 114–115
pattern, 152–153 histogram, 156–160
shadow, 150 Hubble Space Telescope, 64, 69,
shape, 149 81–91
site, 154 human vision, 120–121
size, 149 hydrogen atom, 38–39, 119, 265
texture, 152 hypercube, 141
time, 154 Hyperion, 143
tone, 151
emissivity, 174–175 I
energy, 31–35, 38–39, 70–72 IKONOS, 18, 77–78, 91–100
Enhanced Thematic Mapper image intensifier, 37
(ETM), 13–15, 124, 127–131 imaging radar, 201–222
Earth Resources Observation inclination, 53, 108–110, 113–115
Satellite (EROS), 9, 100 indium antimonide (InSb), 70–72
European Radar Satellite (ERS), infrared, 11–15, 119, 171–199
7–8, 233–234, 236 interferometry, 225, 234, 238
exposure time, 65, 96–98, 100 interferometric SAR (IFSAR),
150, 234–238
F internal waves, 229–230
f/#, 64 interpretation keys (See elements of
faint-object camera (FOC), 86, 90 recognition)
filters, 121,162–163 IR ledge, 119, 121, 142–144
framing system, 74–76, 126
frequency modulation, 213 K
Fresnel relations, 45–46 Kepler’s laws, 105–108
FTHSI, 144 kernel, 163–164


Keyhole, 51 Nimbus, 186–187


kinetic temperature, 180 Normalized Difference Vegetation
KH-4, 51–53, 63–64, Index (NDVI), 136–138
Kodak, 51, 74, 97–98, 121–122 NPOES (NPP), 95, 99
Ku-band radar, 217, 234, 235
O
L oil slicks, 229–230
Landsat, 13–17, 75, 119, 123–133 Operational Land Imager (OLI),
Landsat 8, 123–124, 132–133 132–133
L-band radar, 217, 221–223, Optech, 20, 254, 258
226–230 orbital elements, 108–109
lasers, 247–252 orbital period, 105–108
laser profile, 248–249, 253, 256–258 order of battle (OOB), 2–3
LiDAR, 18, 32, 247–260, 263–264
LiDAR range equation, 252 P
low-earth orbit (LEO), 12, 13, 91, Pentagon, 53, 55, 149–150
108–111, 115 photoelectric effect, 27, 31,
33–35
M photomultiplier tube, 9, 35–36
magnification, 63 pinhole, 64–65
Mars, 88, 153, 171, 259–260, Planck’s law, 41, 172
263–264 polarization, 23, 29–31,
Maxwell’s equations, 28–29, 268 145–147
medium-earth orbit (MEO), principle component (PC)
112, 115 transform, 161–162
mercury cadmium telluride prism, 122–123
(HgCdTe), 70–75, 127, 129, pushbroom, 76–77, 121
131–133
microbolometer, 72, 74, 75 Q
Missile Defense Alarm System Quickbird, 77–78, 91–94
(MIDAS), 194
MODTRAN, 57–60 R
Molniya orbit (HEO), 109, 114–115 radar, 6, 201–223
Multiangle Imaging radar azimuthal resolution,
Spectroradiometer (MISR), 46–47 207–209, 213
multi-pulse in air (MPIA), 254–255 radar cross section, 214–215
multivariate statistics, 159–162 radar range resolution, 204–207
Mys Shmidta, 53–54 RADARSAT, 20, 22, 230–232
radiometry, 175–179
N range antenna pattern, 210–211
Nadar, 1 Rayleigh criteria, 65–69
naval order of battle (NOB), Rayleigh scattering, 31, 58–59
3, 8–9, 94 red edge, 119


reflectance/reflection, 45–47, 119, T


135–137, 141–142 TerraSAR-X, 20, 23–24, 232–233,
Ritchey–Chretien Cassegrain, 244
83, 127 Thematic Mapper (TM), 127, 131
thermal crossover, 182–183
S thermal inertia, 171, 181–182, 260
Sandia National Laboratories, Thermal Infrared Sensor
234–235 (TIRS),132–133
Sary Shagan, 6 thin lens equation, 62
scattering, 46–47 time-delay integration (TDI), 96, 98
scatter plots, 138–139, 147, 159–161 TIROS, 185–186
SEBASS, 195–196 Tournachon, Gaspard-Félix
Severodvinsk, 54–56, 98–99 (Nadar), 1
ship detection, 229, 230–232 Tracking and Data Relay Satellite
ship wakes, 184–185, 232–234 System (TDRSS), 77, 113,
Shuttle Imaging Radar (SIR), 218, 279–280
222, 226–228 transmission, 57, 60, 132–133
Shuttle Radar Topographic turbulence, 57, 60–61
Mapping (SRTM) Mission,
240–244 U
single pulse in air (SPIA), 255 univariate statistics, 156
Sirius, 115–116
Snell’s law, 45 V
soil penetration, 219, 228–229 VIIRS, 9–10, 95–96, 98–99
solar spectrum, 43
space order of battle (SOB), 6–8 W
spectral angle, 138–139, 147–148 wakes, 185, 233
spectral response, 130, 132, 134, Washington Monument, 53, 55,
188–189 150–151
SPOT, 8, 69, 92, 119 wave equation, 28
Sputnik, 49 wavelength, 29, 31–32, 34, 38–42,
South Atlantic anomaly, 91 57–59, 216–217
statistics, 155–163 weather satellites, 11, 113, 185–187
Stefan–Boltzmann, 42, 172–174 wide-field/planetary camera
Sternglass formula, 36 (WF/PC), 90
Stokes vectors, 145–147 Wien’s displacement law, 42, 174
Suomi NPP, 9–10, 95, 99 whiskbroom, 76–77, 126–128
Svalbard, 78, 99, 111 Worldview, 7, 18–19, 77, 134–137,
Surrey Satellite Technology, Ltd. 147–148
(SSTL), 15–18
synthetic aperture radar (SAR), X
20, 212–214 X-band radar, 205–206, 240

Richard C. Olsen received his degrees at the University of
Southern California (B.S.) and the University of California
at San Diego (M.S., Ph.D.). His graduate work and early
career involved space plasma physics, with a particular
emphasis on satellite charging behavior, and the control of
satellite charging. At the Naval Postgraduate School, he
moved into the field of remote sensing, working with both
optical and radar systems (spectral imaging systems in
particular). He teaches courses in remote sensing and classified military
systems, and he works on developing new methods of exploiting both civil and
military systems for terrain classification and target detection. His most recent
interests involve using LiDAR and other approaches to build 3D models of
the world. He has directed the thesis efforts of over 150 graduate students,
approximately 100 of whom studied remote sensing.
