
Ocean Sensing and Monitoring:
Optics and Other Methods

Weilin Hou

SPIE PRESS | Tutorial Text

This is an introductory text that presents the major optical ocean sensing techniques. It
starts with a brief overview of the principal disciplines in ocean research, namely,
physical, chemical, biological, and geological oceanography. The basic optical properties
of the ocean are presented next, followed by underwater and remote sensing topics,
such as diver visibility; active underwater imaging techniques and comparison to sonars;
ocean color remote sensing theory and algorithms; lidar techniques in bathymetry,
chlorophyll, temperature, and subsurface layer explorations; microwave sensing of
surface features including sea surface height, roughness, temperature, sea ice, salinity
and wind; and infrared sensing of the sea surface temperature. Platforms and
instrumentation are also among the topics of discussion, from research vessels to
unmanned underwater and aerial vehicles, moorings and floats, and observatories.
Integrated solutions and future sensing needs are touched on to wrap up the text. A
significant portion of the book relies on sketches and illustrations to convey ideas,
although rigorous derivations are occasionally used when necessary.

Contents: Oceanography Overview; Basic Optical Properties of the Ocean;
Underwater Sensing: Diver Visibility; Active Underwater Imaging; Ocean Color
Remote Sensing; Ocean Lidar Remote Sensing; Microwave Remote Sensing of the
Ocean; Infrared Remote Sensing of the Ocean; Platforms and Instruments;
Integrated Solutions and Future Needs in Ocean Sensing and Monitoring


Weilin Hou

SPIE
P.O. Box 10
Bellingham, WA 98227-0010

ISBN: 9780819496317
SPIE Vol. No.: TT98
Tutorial Texts Series
. Laser Beam Quality Metrics, T. Sean Ross, Vol. TT96
. Military Displays: Technology and Applications, Daniel D. Desjardins, Vol. TT95
. Aberration Theory Made Simple, Second Edition, Virendra N. Mahajan, Vol. TT93
. Modeling the Imaging Chain of Digital Cameras, Robert D. Fiete, Vol. TT92
. Bioluminescence and Fluorescence for In Vivo Imaging, Lubov Brovko, Vol. TT91
. Polarization of Light with Applications in Optical Fibers, Arun Kumar, Ajoy Ghatak, Vol. TT90
. Digital Fourier Optics: A MATLAB Tutorial, David G. Voelz, Vol. TT89
. Optical Design of Microscopes, George Seward, Vol. TT88
. Analysis and Evaluation of Sampled Imaging Systems, Richard H. Vollmerhausen, Donald A. Reago,
Ronald Driggers, Vol. TT87
. Nanotechnology: A Crash Course, Raúl J. Martín-Palma and Akhlesh Lakhtakia, Vol. TT86
. Direct Detection LADAR Systems, Richard Richmond, Stephen Cain, Vol. TT85
. Optical Design: Applying the Fundamentals, Max J. Riedl, Vol. TT84
. Infrared Optics and Zoom Lenses, Second Edition, Allen Mann, Vol. TT83
. Optical Engineering Fundamentals, Second Edition, Bruce H. Walker, Vol. TT82
. Fundamentals of Polarimetric Remote Sensing, John Schott, Vol. TT81
. The Design of Plastic Optical Systems, Michael P. Schaub, Vol. TT80
. Fundamentals of Photonics, Chandra Roychoudhuri, Vol. TT79
. Radiation Thermometry: Fundamentals and Applications in the Petrochemical Industry, Peter Saunders,
Vol. TT78
. Matrix Methods for Optical Layout, Gerhard Kloos, Vol. TT77
. Fundamentals of Infrared Detector Materials, Michael A. Kinch, Vol. TT76
. Practical Applications of Infrared Thermal Sensing and Imaging Equipment, Third Edition, Herbert
Kaplan, Vol. TT75
. Bioluminescence for Food and Environmental Microbiological Safety, Lubov Brovko, Vol. TT74
. Introduction to Image Stabilization, Scott W. Teare, Sergio R. Restaino, Vol. TT73
. Logic-based Nonlinear Image Processing, Stephen Marshall, Vol. TT72
. The Physics and Engineering of Solid State Lasers, Yehoshua Kalisky, Vol. TT71
. Thermal Infrared Characterization of Ground Targets and Backgrounds, Second Edition, Pieter A. Jacobs,
Vol. TT70
. Introduction to Confocal Fluorescence Microscopy, Michiel Müller, Vol. TT69
. Artificial Neural Networks: An Introduction, Kevin L. Priddy and Paul E. Keller, Vol. TT68
. Basics of Code Division Multiple Access (CDMA), Raghuveer Rao and Sohail Dianat, Vol. TT67
. Optical Imaging in Projection Microlithography, Alfred Kwok-Kit Wong, Vol. TT66
. Metrics for High Quality Specular Surfaces, Lionel R. Baker, Vol. TT65
. Field Mathematics for Electromagnetics, Photonics, and Materials Science, Bernard Maxum, Vol. TT64
. High Fidelity Medical Imaging Displays, Aldo Badano, Michael J. Flynn, and Jerzy Kanicki, Vol. TT63
. Diffractive Optics: Design, Fabrication, and Test, Donald C. O'Shea, Thomas J. Suleski, Alan D.
Kathman, and Dennis W. Prather, Vol. TT62
. Fourier Transform Spectroscopy Instrumentation Engineering, Vidi Saptari, Vol. TT61
. The Power and Energy Handling Capability of Optical Materials, Components, and Systems, Roger M.
Wood, Vol. TT60
. Hands-on Morphological Image Processing, Edward R. Dougherty, Roberto A. Lotufo, Vol. TT59
. Thin Film Design: Modulated Thickness and Other Stopband Design Methods, Bruce Perilloux, Vol. TT57
. Optische Grundlagen für Infrarotsysteme, Max J. Riedl, Vol. TT56
. An Engineering Introduction to Biotechnology, J. Patrick Fitch, Vol. TT55
(For a complete list of Tutorial Texts, see
Bellingham, Washington USA
Library of Congress Cataloging-in-Publication Data

Hou, Weilin.
Ocean sensing and monitoring : optics and other methods / Weilin Hou.
pages cm (Tutorial texts in optical engineering ; v. TT98)
Includes bibliographical references and index.
ISBN 978-0-8194-9631-7
1. Oceanography--Remote sensing--Congresses. 2. Environmental monitoring--
Congresses. 3. Underwater imaging systems--Congresses. I. Title.
GC10.4.R4H68 2013
551.46028'7--dc23

Published by
SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1.360.676.3290
Fax: +1.360.647.1445

The content of this book, except as otherwise indicated, is a work of the U.S.
Government and is not subject to copyright. The presentation format of this work by
the publisher is protected, and may not be reproduced or distributed without written
permission of the publisher.

Publication Year: 2013

The content of this book reflects the work and thought of the author. Every effort has
been made to publish reliable and accurate information herein, but the publisher is not
responsible for the validity of the information or for any outcomes resulting from
reliance thereon.

Printed in the United States of America.

First printing
To my father, Hou Dong Yuan, who brought me to the ocean
Introduction to the Series
Since its inception in 1989, the Tutorial Texts (TT) series has grown to cover
many diverse fields of science and engineering. The initial idea for the series
was to make material presented in SPIE short courses available to those who
could not attend and to provide a reference text for those who could. Thus,
many of the texts in this series are generated by augmenting course notes with
descriptive text that further illuminates the subject. In this way, the TT
becomes an excellent stand-alone reference that finds a much wider audience
than only short course attendees.
Tutorial Texts have grown in popularity and in the scope of material
covered since 1989. They no longer necessarily stem from short courses;
rather, they are often generated independently by experts in the field. They are
popular because they provide a ready reference to those wishing to learn
about emerging technologies or the latest information within their field. The
topics within the series have grown from the initial areas of geometrical optics,
optical detectors, and image processing to include the emerging fields of
nanotechnology, biomedical optics, fiber optics, and laser technologies.
Authors contributing to the TT series are instructed to provide introductory
material so that those new to the field may use the book as a starting point to
get a basic grasp of the material. It is hoped that some readers may develop
sufficient interest to take a short course by the author or pursue further
research in more advanced books to delve deeper into the subject.
The books in this series are distinguished from other technical
monographs and textbooks in the way in which the material is presented.
In keeping with the tutorial nature of the series, there is an emphasis on the
use of graphical and illustrative material to better elucidate basic and
advanced concepts. There is also heavy use of tabular reference data and
numerous examples to further explain the concepts presented. The publishing
time for the books is kept to a minimum so that the books will be as timely
and up-to-date as possible. Furthermore, these introductory books are
competitively priced compared to more traditional books on the same subject.
When a proposal for a text is received, each proposal is evaluated to
determine the relevance of the proposed topic. This initial reviewing process
has been very helpful to authors in identifying, early in the writing process, the
need for additional material or other changes in approach that would serve to
strengthen the text. Once a manuscript is completed, it is peer reviewed to
ensure that chapters communicate accurately the essential ingredients of the
science and technologies under discussion.
It is my goal to maintain the style and quality of books in the series and to
further expand the topic areas to include new emerging fields as they become
of interest to our reading audience.

James A. Harrington
Rutgers University
Contents

Preface xiii
List of Acronyms and Abbreviations xvii

1 Oceanography Overview 1
1.1 Introduction 1
1.2 Planet Earth and the Oceans 1
1.3 History of Oceanography 3
1.4 Branches of Oceanography Research 8
1.4.1 Chemistry 8
1.4.2 Physics 10
1.4.3 Biology 13
1.4.4 Geology 14
1.4.5 Biogeochemistry: global carbon budget 16
1.5 Summary 18
References 18
2 Basic Optical Properties of the Ocean 21
2.1 Introduction 21
2.2 Light Attenuation: Absorption and Scattering 23
2.2.1 Absorption 23
2.2.2 Scattering: volume scattering function and
backscattering coefficient 25
2.2.3 Beam attenuation 30
2.3 Light Propagation 33
2.3.1 Basic definitions 33
2.3.2 Snell's law 36
2.3.3 Radiative transfer equation 38
2.4 Light Generation: Solar Radiation, Fluorescence, Bioluminescence,
and Raman and Brillouin Scattering 40
2.4.1 Solar radiation 40
2.4.2 Fluorescence 41
2.4.3 Bioluminescence 43
2.4.4 Other inelastic scattering: Raman and Brillouin 44
2.5 Summary 47
References 47


3 Underwater Sensing: Diver Visibility 51

3.1 Introduction 51
3.2 Point Spread Functions and Modulation Transfer Functions 51
3.3 Point Spread Functions in the Ocean 54
3.3.1 Historical forms 54
3.3.2 Simplified form 55
3.4 MTF of Ocean Waters 56
3.5 Impacts from Underwater Turbulence 58
3.5.1 Theoretical treatment: the simple underwater imaging model 58
3.5.2 Simple underwater imaging model validation 63
3.6 Secchi Disk Theory Revisited 68
3.7 Through-the-Sensor Technique 77
3.8 Summary 81
References 82
4 Active Underwater Imaging 87
4.1 Introduction 87
4.2 Active Electro-Optical Systems 87
4.2.1 Separation of source and receiver 88
4.2.2 Time of flight and range gating 90
4.2.3 Synchronous scan: spatial filtering 93
4.2.4 Temporal modulation 99
4.2.5 Imaging particles underwater 100
4.3 Comparison to Active Acoustical Systems 107
4.3.1 Active acoustical systems 107
4.3.2 Depth sounder 108
4.3.3 Side-scan sonar 110
4.3.4 Imaging sonar 111
4.4 Summary 112
References 112
5 Ocean Color Remote Sensing 117
5.1 Introduction 117
5.2 Basic Principles of Ocean Color Remote Sensing 120
5.3 Atmospheric Correction 121
5.4 Ocean Color Sensors 127
5.4.1 CZCS 128
5.4.2 SeaWiFS 129
5.4.3 MODIS 130
5.4.4 MERIS 132
5.4.5 HICO 132
5.4.6 GOCI 133
5.4.7 VIIRS 133

5.5 Retrieval Algorithms 134

5.5.1 Empirical methods 135
5.5.2 Semi-analytical methods 136
5.6 Calibration and Validation 139
5.6.1 Basic calibration: radiometric and spectral 140
5.6.2 In-orbit calibration: vicarious method 140
5.7 Summary 141
References 142
6 Ocean Lidar Remote Sensing 145
6.1 Introduction 145
6.2 Basic Components and Principles of Lidar Remote Sensing 146
6.3 Lidar in Depth Sounding: Altimetry and Bathymetry 148
6.4 Lidar in Temperature Measurements 151
6.5 Lidar Fluorosensing of Subsurface Chlorophyll and Colored
Dissolved Organic Matters 156
6.6 Other Lidar Applications 159
6.7 Summary 161
References 162
7 Microwave Remote Sensing of the Ocean 165
7.1 Overview 165
7.2 Passive Sensing of Sea Surface Temperature, Salinity,
and Sea Ice 165
7.3 Active Microwave Sensing of the Ocean 173
7.3.1 Altimeter 174
7.3.2 Scatterometer 177
7.3.3 Imaging radar 180
7.3.4 Synthetic aperture radar 182
7.4 Summary 185
References 185
8 Infrared Remote Sensing of the Ocean 187
8.1 Overview 187
8.2 Sea Surface Temperature: Definition 188
8.3 Basic Principles 190
8.4 Sea Surface Temperature Sensors and Algorithms 193
8.4.1 AVHRR 194
8.4.2 MODIS 196
8.4.3 Transition to VIIRS 198
8.5 Cloud Detection 199
8.6 Summary 201
References 201

9 Platforms and Instruments 203

9.1 Introduction 203
9.2 Bottom, Surface, Air, and Space Platforms 204
9.2.1 Surface platforms: ships 204
9.2.2 Remote sensing platforms 206
9.2.3 Subsurface vessels: unmanned underwater vehicles 209
9.2.4 Floats, buoys, moorings, observatories, and shore systems 216
9.3 Common Instruments for Ocean Observation 220
9.4 Summary 227
References 228
10 Integrated Solutions and Future Needs in Ocean Sensing
and Monitoring 231
10.1 Tactical Ocean Data System 232
10.1.1 Glider optics 233
10.1.2 OpCast 235
10.1.3 Optimization and 3D optical volume 237
10.1.4 TODS Summary 240
10.2 Future Ocean Sensing Needs 240
10.2.1 Short- to mid-term outlook 240
10.2.2 Long-term outlook 242
10.3 Concluding Remarks 245
References 245
Index 247
Preface

This book covers general topics related to optical techniques in ocean sensing
and monitoring applications. It is written as a tutorial text, primarily for those
with undergraduate training in science and engineering who wish to gain a
broader understanding of these topics. The book is designed to serve as a
stepping stone for more advanced topics in specialized disciplines. It can also
serve as a refresher for those who are already familiar with some of the topics
but wish to have a peek at recent advances, trends, and new challenges. It
provides readers with the necessary expertise and technical know-how to
understand the current issues at hand and is a great tool for managers as well
as graduate students and young professionals to gain a sense of their
whereabouts in the newer aspects of this field.
There are many specialized areas in oceanography, and it would be
impossible for this tutorial to include them all as well as their fundamentals
and field data. Instead, this book takes a more narrative approach and focuses
on the science and reasons behind methods and approaches, rather than on
the precise details of issues (unless they are key to understanding the
problems). Bibliographical entries are given for readers who have a strong
interest in certain topics.
Although this introductory text mainly focuses on the area of optics in
oceanography, a bigger picture of oceanography in general must be provided.
Chapter 1 starts with the history of the oceans and ocean research, followed
by outlines of the major ocean research disciplines and topics they cover,
including chemistry (often referred to as chemical oceanography or marine
chemistry), physics, biology, and geology. One specific interdisciplinary area,
biogeochemistry, is included due to elevated interest and associated pressing
issues related to global climate change.
Optical properties of the ocean are the focus of Chapter 2, which lays the
foundation for the discussions in later chapters. This mainly covers the small
transmission window in the visible wavelengths of electromagnetic bands,
especially when in-water transmissions are considered, due to the strong
absorption by water in longer (infrared, microwave, radio) and shorter
wavelengths. The properties associated with other bands are discussed in
corresponding chapters when necessary.

Chapter 3 covers diver visibility, or passive optical imaging issues. This is
one of the oldest topics in oceanography research. Historical approaches as
well as state-of-the-art methods are covered in detail, which is the reason for
including precise mathematical formulas. The influences of both turbidity and
turbulence are discussed in this chapter.
Chapter 4 extends the discussion of underwater imaging to active
methods, covering both optical (light detection and ranging, known as lidar)
as well as acoustical approaches (sound navigation and ranging, known as
sonar), although the main focus remains on optical systems. Major
approaches in enhancing optical imaging range and resolution are covered,
including source-receiver separation, range gating, synchronous scan, and
modulation. Several techniques discussed are current, active research topics
that show very promising potential for future underwater sensing capabilities.
Remote sensing of the ocean is one of the key advances in modern
oceanography. The next four chapters discuss the major areas of research and
applications in this field. Chapter 5 focuses on ocean color remote sensing,
which is the primary method to quantify ocean productivity on synoptic
scales. Major ocean color sensors are discussed, including CZCS, SeaWiFS,
MODIS, and MERIS, along with several recent and unique sensors, including
HICO, GOCI, and VIIRS. Atmospheric correction and ocean color retrieval
algorithms are compared and summarized, and calibration and validation
methods are discussed.
Since visible light provides the only means for penetrating the ocean from
space, active sensing of the vertical structures of the ocean using light can
provide much needed information in ocean sensing and monitoring. This is
primarily performed by ocean lidar, which is the topic of Chapter 6. Principles
and sensing techniques are discussed that cover depth sounding, temperature,
chlorophyll, CDOM, as well as recent studies involving fisheries and
subsurface layers.
Unlike visible bands, microwave remote sensing is not influenced by clouds
and is capable of providing 24/7 sensing of the ocean from space, despite its
inability to penetrate water. The sensing of sea surface height, salinity,
temperature, sea ice, wind, and roughness is covered briefly in this chapter.
Active sensing with imaging radar and synthetic aperture radar, which provide
very detailed information about the ocean surface, is also discussed in Chapter 7.
Active sensing helps in deriving subsurface features, such as internal waves, as
well as in detecting surface materials such as oil slicks.
Chapter 8 focuses on infrared remote sensing of the ocean, namely sea
surface temperature. This is currently one of the most widely used ocean
sensing and monitoring methods, affecting weather forecasting, hurricane
predictions, fisheries, and long-term climate change, to list a few. Major
sensors are discussed, including AVHRR, MODIS, and VIIRS, along with
corresponding retrieval algorithms.

Ocean sensing platforms and major instruments are the topics of Chapter 9.
Platforms are key to ocean research, ranging from traditional ocean research
vessels to unmanned underwater vehicles, unmanned aerial vehicles, floats,
buoys, drifters, and large-scale moorings and observatories. Several in situ
sampling instruments are briefly mentioned along the lines of functionality and
type; the brevity of their coverage in this book is due to the myriad of specialized
sensors for various applications that are in existence, in addition to the new types
and designs that come online with ever-increasing design-to-deployment rates.
The last chapter discusses integrated solutions for addressing ocean
sensing needs, using an example of Tactical Ocean Data Systems (TODS),
where in situ sampling, remote sensing, and ocean models are fused to provide
an optical ocean forecasting product. Future goals and requirements are
discussed in terms of scales of study, automated data processing, and sensor
planning. The long-term outlook discussed in this chapter reflects the author's
opinions on sustainable growth provided by the ocean.
This book reflects the author's preferences and is meant to encourage the
reader to study more-advanced topics. As a result, several key areas have been
intentionally ignored or underrepresented.
Without the support of my family, this book would not have been possible.
I want to thank my wife and best friend, Chunzi Du, for her encouragement.
I also want to thank my sons, Joshua and Wilson, for asking the right
questions and putting up with me ignoring their play invitations. I would also
like to acknowledge the support from the Naval Research Laboratory, as well
as the help from anonymous reviewers, and editors, Dara Burrows and Teresa
Wiley Forsyth.
Weilin Hou
August 2013
List of Acronyms
and Abbreviations
3DOG 3D Optical Generator

ADCP acoustic Doppler current profiler

AGARD Advisory Group for Aerospace Research and Development
ALTM Airborne Laser Terrain Mapper
AMSR-E Advanced Microwave Scanning Radiometer for EOS
AOL Airborne Oceanographic Lidar
AOP apparent optical properties
APS Automated Processing System
APT automatic picture transmission
ASB adaptive sliding box
ASW antisubmarine warfare
AUV autonomous underwater vehicle
AVHRR Advanced Very High Resolution Radiometer
AVIRIS Airborne Visible/Infrared Imaging Spectrometer
BonD battlespace on demand
BRDF bidirectional reflectance distribution function
BSF beam spread function
BUFR Binary Universal Form for the Representation of meteorological data
BVF Brunt–Väisälä frequency
CALIPSO Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations
CCD charge-coupled device
CDOM colored dissolved organic matter
CHARTS Compact Hydrographic Airborne Rapid Total Survey
[chla] chlorophyll-a concentration
CODAR Coastal Ocean Dynamics Application Radar
COTS commercial off-the-shelf
CSA cloud and shadow algorithm
CTD conductivity, temperature, and depth (meter)
CZCS Coastal Zone Color Scanner


CZMIL Coastal Zone Mapping and Imaging Lidar

DLP digital light projector
DN digital number
DO dissolved oxygen
DOP degree of polarization
DORIS Doppler Orbitography and Radiopositioning Integrated
by Satellite
DRDC Defense Research and Development Canada
DSDP Deep Sea Drilling Project
DTF decay transfer function
DVL Doppler velocity log
EM electromagnetic
EO electro-optical
EODES Electro-Optical Identification System
EOID electro-optical identification
EOS Earth Observing System
ESA European Space Agency
ESFADOF excited-state Faraday anomalous dispersion optical filter
FILLS fluorescence imaging laser line scanner
FLIP Floating Instrument Platform
FLOE Fish Lidar, Oceanic, Experimental (system)
FOV field of view
FSW feet of seawater
GAC global area coverage
GATE Global Atmospheric Research Program (GARP) Atlantic Tropical Experiment
GF/F glass-fiber filters
GHRSST Group for High Resolution Sea Surface Temperature
GLAS Geoscience Laser Altimeter System
GLOBEC Global Ocean Ecosystem Dynamics Program
GOCI Geostationary Ocean Color Imager
GOOS Global Ocean Observation System
GPS global positioning system
GUI graphical user interface
HALE high altitude and long endurance
HH horizontal to horizontal
HICO Hyperspectral Imager for Coastal Ocean
HMM hyperstereo mirror module
HRPT high-resolution picture transmission
IARR internal average relative reflectance
ICE image control and examination
ICESat Ice, Cloud, and land Elevation Satellite
IFOV instantaneous field of view
IHO International Hydrographic Organization

IMAST Image Measurement Assembly for Subsurface Turbulence

IMLD intensity of the mixed-layer depth
IMU inertial measurement unit
IOCCG International Ocean Color Coordinating Group
IOP inherent optical properties
IOR index of refraction
IR infrared
IV integrated value
JGOFS Joint Global Ocean Flux Study
KORDI Korea Ocean Research and Development Institute
LAC local area coverage
LADS Laser Airborne Depth Sounder
LAGER Local Automated Glider Editing Routine
LE long exposure
LLS laser line scanner
LRA Laser Retroreflector Array
LUCIE Laser Underwater Camera Imaging Enhancer
MALE medium altitude and long endurance
MBES multibeam echo sounding
MBR maximum band ratio
MCM mine countermeasure
MCSST multichannel SST (algorithm)
MERIS Medium Resolution Imaging Spectrometer
MIW mine warfare
MLO mine-like object
MOBY Marine Optical Buoy
MODIS Moderate-Resolution Imaging Spectroradiometer
MOR mid-ocean ridge
MSA mean square angle
MTF modulation transfer function
MUG manual-use GUI
MVCO Martha's Vineyard Coastal Observatory
MVSM multispectral volume scattering meter
NASA National Aeronautics and Space Administration
NATO North Atlantic Treaty Organization
NAVO Naval Oceanographic Office
NCOM Navy Coastal Ocean Model
ND no data
NDBC National Data Buoy Center
NetCDF Network Common Data Form
NIH National Institutes of Health
NIR near infrared
NIRDD NRL image restoration via denoised deconvolution

NLOS non-line-of-sight
NLSST nonlinear SST (algorithm)
NOAA National Oceanic and Atmospheric Administration
NRL Naval Research Laboratory
NSCAT NASA Scatterometer
NTNU Norwegian University of Science and Technology
OCTS Ocean Color Temperature Scanner
ONR Office of Naval Research
OPC optical plankton counter
OQC optics quality control
OTF optical transfer function
PAR photosynthetically available radiation
PLLS pulsed laser line scanner
POD precision orbit determination
POLDER Polarization and Directionality of the Earth's Reflectance
POSEIDON Positioning Ocean Solid Earth Ice Dynamics Orbiting Navigator
PSF point spread function
QAA quasi-analytical approach
QC quality control
RAR real aperture radar
REMU remote environmental monitoring unit
RH relative humidity
RIDGE Ridge Inter-Disciplinary Global Experiment
ROV remotely operated vehicle
RP research platform
RTDHS real-time data handling system
R/V research vessel
SAR synthetic aperture radar
SAS synthetic aperture sonar
SASS Seasat-A Satellite Scatterometer
SBS stimulated Brillouin scattering
SeaWiFS Sea-Viewing Wide-Field-of-View Sensor
SHOALS Scanning Hydrographic Operational Airborne Lidar Survey
SIPPER Shadowed Image Particle Profiling and Evaluation Recorder
SLR side-looking radar
SMMR Scanning Multichannel Microwave Radiometer
SNR signal-to-noise ratio
SOOP Ship of Opportunity Program
SOTEX Skaneateles Optical Turbulence Exercise
SSH sea surface height
SSS sea surface salinity
SST sea surface temperature

STIL streak tube imaging lidar

SUIM simple underwater imaging model
SWIR shortwave infrared
TD temperature dissipation
TIR thermal infrared
TKED total kinetic energy dissipation rate
TMR TOPEX microwave radiometer
TOA top of the atmosphere
TODS Tactical Ocean Data System
TOGA Tropical Ocean Global Atmosphere
TOPEX Topography Experiment
TOTO Tongue of the Ocean
TVI time-varying intensity
UAS unmanned aerial system
UAV unmanned aerial vehicle
USAF U.S. Air Force
UUV unmanned underwater vehicle
UV ultraviolet
VHF very high frequency
VIIRS Visible Infrared Imaging Radiometer Suite
VSF volume scattering function
VV vertical to vertical
WGSA weighted grayscale angle
WMO World Meteorological Organization
WOCE World Ocean Circulation Experiment
WSS wide-sense stationary
WVSST water vapor SST (algorithm)
Chapter 1
Oceanography Overview
1.1 Introduction
By general definition, oceanography is the science that explores the oceans,
including their extent and depth, physics, chemistry, and biology, as well as the
exploitation of their resources. This is a rather concise definition that aptly
outlines the major fields of the discipline, with the exception of the history and
current state of the seafloor, the foundation on which the ocean resides. As this
book primarily covers applied research related to oceanography, it makes
sense to take a look at the big picture first, from the history of the ocean, to
the history of ocean research, to the subdisciplines of ocean science.

1.2 Planet Earth and the Oceans

Among the planets of the solar system, Earth is in a perfect position. Its size
and mass, distance from the Sun, orbit around the Sun, and rotation cycle all
combine to form ideal conditions that produce an average temperature of
approximately 15 °C over most of its surface. This allows the coexistence of
all three states of water: liquid, solid, and gaseous. As we know, the most
abundant state is liquid water, which forms a watery blanket covering the
majority of Earth's surface: the world's oceans. For this reason, our planet is
also commonly known as the water planet.
The total area of the world's oceans is about 360 million square km,
accounting for 72% of Earth's total surface area. The average depth of the
global ocean is about 3800 m, and its maximum depth of 11,033 m exceeds the
height of the highest peak on Earth (the summit of Mount Everest is 8848 m).
The total volume of the world's oceans is approximately 1.37 billion cubic km,
accounting for 97% of the total amount of water on Earth. Another way to
look at this number: the water in the ocean would form a blanket about
2600 m deep if Earth were a perfect sphere.
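
These round numbers hang together, which is easy to verify with a few lines of
arithmetic (a minimal sketch using only the figures quoted in this section; the
small gap between the quoted ~2600 m and the computed value reflects rounding
in the inputs):

```python
# Consistency check of the figures quoted above; all inputs come from this
# section, so this is a back-of-the-envelope check, not an independent measurement.
ocean_volume_km3 = 1.37e9               # total volume of the world's oceans
ocean_area_km2 = 360e6                  # total area of the world's oceans
earth_area_km2 = ocean_area_km2 / 0.72  # oceans cover 72% of Earth's surface

# Average depth: the ocean volume spread over the ocean area alone.
mean_depth_m = ocean_volume_km3 / ocean_area_km2 * 1000   # km -> m
print(f"mean ocean depth ~{mean_depth_m:.0f} m")          # ~3800 m, as quoted

# "Water blanket": the same volume spread over the entire spherical Earth.
blanket_depth_m = ocean_volume_km3 / earth_area_km2 * 1000
print(f"uniform blanket ~{blanket_depth_m:.0f} m")        # ~2700 m, near the ~2600 m quoted
```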
How were the oceans formed? There is no definitive answer to this
question, in part, because the answer relies on another bigger yet unresolved
issue: the origin of the solar (or star) system. The evidence we have now points

back to five billion years ago, when small (compared to the Sun) clumps of the
solar nebula rotated and collided around the Sun, gradually forming into
larger bodies and eventually into planets, including Earth. During the course
of planet formation, gravitational contraction resulted in accelerated
collisions. Coupled with internal radioactive decay, Earth was heated to a
temperature high enough that its core materials, including iron and nickel,
began to melt. Under the force of gravity, heavier elements sank toward the
center of Earth to form the core, while lighter material rose to form the
crust and mantle. At these high temperatures, the internal water vaporized
along with other gases and gushed out into the air. Due to gravity, these gases
could not escape from Earth, but instead formed a gaseous vapor envelope.
The surface layer of the crust condensed in the cooling process that
followed. It was constantly subjected to strenuous squeezing and folding,
which caused the unevenness of the surface and led to earthquakes and
volcanic eruptions when the crust was broken, spewing lava and hot gas. This
happened frequently during the early stages of formation, then gradually
became less frequent and eventually subsided. The differentiation of material,
completed about 4.5 billion years ago, involved a major reorganization of
Earth's interior.
With the cooling of the amorphous crust, the surface wrinkled into dense
and uneven regions with a variety of terrains: mountains, plains, rivers,
and basins. For an extended period of time, the sky remained overcast with
heavy clouds of atmospheric water vapor. As the crust cooled, so did the
atmosphere. The water vapor condensed into droplets around ash and fine
dust particles, which served as nuclei and led to still larger droplets.
The uneven temperatures and densities in the atmosphere led to strong
convection, lightning, and downpours. The surging flood water gathered
through cracks and rivers to the basins, which formed the primitive ocean.
In the primitive ocean, the water was not salty. It was acidic and lacked
oxygen. The constant evaporation and precipitation cycle gradually transported
the ions from land and basins into the water. After hundreds of millions of years
of accumulation and mixing, the water became the generally uniform salty
water we see today. In the meantime, since there was no oxygen in the
atmosphere, ultraviolet rays could reach the ground unchecked without the
protection of an ozone layer. Under the protection of water, however, biological
activity was possible, and indeed, life on Earth was first born in the ocean.
About 3.8 billion years ago, organic matter formed in the ocean and led to
single-cell organisms. Phytoplankton appeared about 600 million years ago in
the Paleozoic era. Through photosynthesis, oxygen was produced and slowly
accumulated to form the ozone layer. At this point, life began to exist on land.
In short, after a gradual increase in water and salt quantities, and by way
of Earth's geological vicissitudes, the primitive ocean gradually evolved into
today's oceans.
Oceanography Overview 3

1.3 History of Oceanography

Although we have no direct evidence of any systematic teachings related to the
seas, it is not hard to imagine our ancestors' curiosity about the vast water
bodies near their dwellings. Archaeological evidence dating back several tens of
thousands of years suggests that people began to explore from coastlines in
rafts. They learned about the oceans by observing tides and their periods, waves
crashing on shores, storms with seasonality, and winds and currents carrying
their rafts in different directions at different times. They harvested fish and
other sea creatures for food. They learned to distinguish between fresh water
and salty seawater, which is undrinkable. Through legends and myths, as well
as associated rituals for seafaring explorers, knowledge was passed down for
thousands of years through many generations.
However, it is believed that around 3,000 years ago, philosophers, thinkers,
and naturalists began to make sense of the vast bodies of water surrounding the
land they lived on. This coincided with the time of The Odyssey in the West,
and the Tao Te Ching in the East. People believed that the sky was round, and
the earth and oceans were flat. Around the same time, the ancient Greeks were
able to sail beyond the Mediterranean and the Straits of Gibraltar. They
encountered a strong north-south current, which led them to believe it was a
large river, thanks to their experience with strong river currents. The Greek
name for this great river, okeanos, is the root of the English word "ocean."
It is fortunate that lingering belief in a flat earth during the fifteenth and
sixteenth centuries did not prevent Columbus and others from exploring the
oceans, although some question whether such beliefs were still widely held,
considering Aristotle's demonstration (around 340 BC) that the earth is a
sphere. By the
fifteenth century, the accumulated experience of the prevailing winds and currents
helped the ancient Polynesians to populate vast areas of the Pacific. Early Arab
traders had enough knowledge to time their voyages so that they could utilize
the alternating monsoon winds when traveling to the Malabar Coast on the
western side of India, and even farther toward the East. An early understanding
and utilization of strong trade winds enabled Portugal to become a dominant
maritime nation by the fifteenth century, when they traveled with ease along the
coast of Africa to reach the bountiful land of India. Around the same time,
between 1405 and 1433, seven naval expeditions led by Chinese admiral Zheng He
covered routes from south of Shanghai to Southeast Asia, the Middle East, and
East Africa, amassing 317 ships and 28,000 sailors during the first expedition.
Against this backdrop, Prince Henry the Navigator of Portugal came to
realize the importance of oceanic and navigational knowledge to trade and
commerce. A learning center for marine sciences was established in Sagres,
Portugal, and is widely regarded today as the first oceanographic institution.
Sailors came to learn the skills of map making and to gain knowledge of
currents and the ocean at large. The skills they learned further enhanced
knowledge of the ocean currents, which no doubt helped Columbus sail
across the Atlantic and back to Europe, as well as allowed Magellan to
circumnavigate the globe in the early 1500s.
Driven by the desire for territorial expansion and commercial needs
(especially the spices from the East Indies, which were believed to cure the
plague), major European powers, including Spain, Britain, and France,
launched expeditions in the 1700s to explore lands across the Atlantic, the
Pacific, and the Indian Oceans, as well as the Arctic and Antarctic oceans.
Captain James Cook's voyage on the HMS Endeavour in 1768 was his
first of three worldwide expeditions, in which he mapped for the first time the
coastlines of Australia and New Zealand, as well as the Hawaiian Islands.
An avid seaman as well as a great observer, he kept a detailed journal of his
voyage and discoveries that was published upon his return. He is widely
credited with finding a way to prevent scurvy among sailors, a disease caused
by a lack of fresh fruits and vegetables (sources of vitamin C) at sea that
resulted in a high mortality rate. The pickled cabbage treatment that he gave
to all of his seamen undoubtedly saved many lives.
It is worth mentioning that one of the key elements in the success of these
mappings and expeditions of the oceans was the invention of the spring-
operated clock. The pendulum clocks of the time did not operate well at sea,
for the obvious reason of a ship's lateral motion caused by waves and swells.
Time keeping is critical at sea, especially on long journeys across vast bodies
of water. Longitude locations can only be marked with precise clockwork. In
1736, a British cabinetmaker, John Harrison, invented a clock that used a
spring instead of a pendulum as the movement mechanism. This was the first
marine chronometer, which enabled navigators to calculate how far east or
west they had traveled from 0-deg longitude. The clock was accurate to a few
seconds per day, which was an incredible accomplishment even by today's
standards.

It is safe to say that modern oceanography disciplines started sometime
during the late nineteenth century, when scientific methods were applied and
systematic methodology was adopted. This came after Americans, British,
and Europeans launched several small expeditions to explore ocean currents,
ocean life, and the seafloor off their coastlines.
The first major scientific expedition was the Challenger Expedition,
which took place between 1872 and 1876 and aimed to explore the world's
oceans and their seafloors. Led by the naturalists John Murray and Charles
Wyville Thomson, the Challenger Expedition was the first to systematically
gather data on several key areas of ocean research, even by today's
standards. This included ocean temperatures, chemistry related to seawater,
ocean currents at various depths, marine biota, and geology of the ocean
floor. Thanks to previous efforts by Thomson, who had discovered some
interesting creatures from the deep oceans of the North Atlantic and
Mediterranean Sea, the British government supported this ambitious,
worldwide expedition with a converted Navy corvette warship. This ship was
most likely the first dedicated ocean research vessel. Its well-equipped
laboratory and instrumentation allowed samples to be taken at different
depths and analyzed under microscopes. The HMS Challenger had winches
on board that allowed extended sounding lines to be deployed to measure the
depth of the ocean. Various samplers were also used on board to retrieve
rocks and sediments from the ocean floor, and to lower trawls and nets to
different depths to collect live samples to be brought back to the ship for
further study.
As shown in Fig. 1.1, the HMS Challenger first traveled south from
England toward the South Atlantic, then made the turn around the Cape of
Good Hope at the southern tip of Africa. She journeyed across the wide
southern Indian Ocean under rather challenging conditions, then crossed the
Antarctic Circle before heading to Australia and New Zealand. From there,
the HMS Challenger turned north toward the Hawaiian Islands, and then
headed south again around Cape Horn at the southern end of South America,
where the Pacific and Atlantic Oceans connect. She returned to England in
May 1876, exploring the Atlantic Ocean on her way home.
Several key findings established the Challenger Expedition as the
beginning of modern oceanography. It produced the first systematic map of
ocean currents and temperatures. It revealed the first outlines of the ocean
basins, especially the rise in the middle of the Atlantic Ocean, which is
currently known as the Mid-Atlantic Ridge, a mountainous structure on the
ocean floor as a result of plate movement. The expedition also discovered the
deepest spot in the ocean at the time, the Marianas Trench in the western
Pacific Ocean, at a depth of 26,850 ft (more than four miles deep). The
location is not far from the currently known deepest spot in the ocean, which
is 37,800 ft deep, also known as Challenger Deep. These exciting findings by
the Challenger Expedition further encouraged other countries to start their
own expeditions.

Figure 1.1 The route of the Challenger Expedition, which lasted more than 1,000 days and
covered more than 68,000 nautical miles. (Courtesy of NOAA.)
The real breakthrough of modern oceanography can be traced back
approximately 70 years to the Second World War, when urgent needs of the
U.S. Navy to gain the technological upper hand catapulted the level of efforts
poured into ocean research. This was primarily due to threats posed by
submarines and associated submarine warfare.
While the oceans have always influenced the outcome of wars in the past,
from the transportation of soldiers to blockades of harbors and supply
routes, the introduction of submarines elevated the urgency to counter
stealth attackers to an unprecedented level, due to the asymmetric nature of
the risk/reward ratio.
It did not take long before scientists (not yet called oceanographers) came
up with the idea to listen and echo-locate submarines using acoustical waves
underwater. A new word was coined, along with large developmental
efforts: the term sonar (short for sound navigation and ranging) encompassed
the instrument as well as the technology behind it, although its original aim
was iceberg detection, to prevent tragedies like that of the Titanic.
While not intended by the original design, an inverted application of sonar
turned out to be its most frequent use: surveying the depth of the ocean.
Sound transducers were mounted on the hull of the survey vessel to transmit
and receive the sound waves. The roundtrip travel time of the sound waves
was easily converted to range information, which gave the depth of the ocean
floor. Echo sounders were used for oceanographic research by a German
expedition team in the South Atlantic in the mid-1920s and became the
standard procedure for bathymetry mapping. This was a vast improvement
over methods used during the Challenger Expedition, where 200-pound
weights were attached to miles of rope for depth measurements. Hours of
waiting were replaced by seconds of echo duration time.
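The time-to-depth conversion is simple: depth = c × t / 2, with c the speed of sound in seawater (roughly 1,500 m/s, varying with temperature, salinity, and pressure). A minimal sketch, with an illustrative function name and nominal sound speed:

```python
# Depth from a round-trip echo time: depth = c * t / 2.
# c = 1500 m/s is a nominal sound speed in seawater; real surveys
# correct for temperature, salinity, and pressure profiles.
def echo_depth_m(round_trip_s: float, sound_speed_m_s: float = 1500.0) -> float:
    """Convert a round-trip echo time (seconds) to water depth (meters)."""
    return sound_speed_m_s * round_trip_s / 2.0

# A 4-s round trip corresponds to a seafloor 3,000 m down.
print(echo_depth_m(4.0))
```

The division by two accounts for the sound traveling down to the seafloor and back up to the transducer.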
More sophisticated sonars are now used, such as high-resolution side-scan
versions capable of imaging the seafloor at very high resolution for object
detection and identification. Multibeam echo sounders are also widely used
in search-and-rescue missions. Details of these instruments and working
principles are discussed in Chapter 4. Similar ranging approaches are also
implemented using light instead of sound (lidar instead of sonar) to gain higher
resolution. They are also used across the air-sea interface to map shallow
water bathymetry from space.
Another important instrument developed during the war era was the
magnetometer. It was originally designed to detect large metal objects in the
ocean, such as the hulls of submarines. A magnetometer works by detecting
variations in the electromagnetic field of the underwater environment when a
moving conductor interacts with the earth's magnetic field. Oceanographers
use this technique to study properties of the seafloor, as the oceanic crust
contains strong metallic signatures. Techniques and theories have also been
developed to study wave structures inside the ocean (internal waves), where
conductivity changes due to the ocean currents can be detected by a
magnetometer. In addition to the detection and neutralization of submarine
threats, naval mine hunting was another driving force of technological
development. As an integral part of naval warfare, mines were
widely used in the ocean and coastal areas, similar to those on land, to destroy
or block enemy naval forces. Magnetometers were also enhanced, often used
in a towed setup in association with sonars, to help clear minefields at sea.
With the invention of the laser in 1960, interest in developing and deploying
light-based systems drove the basic and applied research, which formed the
foundation of ocean optics today. Jerlov's Marine Optics monograph
(Jerlov, 1976) is one of the most cited classic works in this field. Light
transmission theories, measurement instrumentation for laboratory and field
use, as well as imaging systems were quickly developed. By the 1970s,
airborne imaging systems began to be deployed on a regular basis and served
as the test beds for a new era of satellite-based remote sensing systems.
Remote sensing from space significantly augmented the capability of ocean
research. It provided synoptic views of the ocean for the first time, covering
ocean features that included surface temperature, salinity, intensity and
direction of ocean currents, wind intensity, depth of shallow seas, and water
quality and related biological activities. It also allowed for the mapping,
classifying, and monitoring of the bottom types of shallow seafloors.
Key to the success and prosperous development of these new technologies
were international and national collaborations, especially after the fall of the
Berlin Wall and the end of the Cold War. Working groups and workshops
provided rapid exchanges of ideas and shared inspirations. The influential
Advisory Group for Aerospace Research and Development (AGARD)
lecture series, sponsored by the North Atlantic Treaty Organization (NATO),
is a prime example. There have also been other larger-scale efforts such as
the World Ocean Circulation Experiment (WOCE), the GARP Atlantic
Tropical Experiment (GATE), the Deep Sea Drilling Project (DSDP), Joint
Global Ocean Flux Study (JGOFS), Ridge Inter-Disciplinary Global
Experiment (RIDGE), Global Ocean Ecosystem Dynamics Program
(GLOBEC), and Tropical Ocean Global Atmosphere (TOGA), to give a
sense of the scope and disciplinary interactions involved. Collaborations
like these help to improve ocean observation systems that have been
developed or are under development, including various observatories, drifters,
buoys, moorings, and ships-of-opportunity sensor payloads. These help to
enhance data collection, transmission, and processing into higher-level
products, from the deep ocean to the shallow seas, to the atmosphere above,
and to spaceborne platforms.

1.4 Branches of Oceanography Research

Oceanography is the study of the various natural phenomena, interactions,
and changes of the world's oceans. This includes the study of the physical and
chemical properties of the ocean, the life within it, as well as the origin and
evolution of geological structures of the sea beds. Early studies focused on
marine navigation, fishing, and exploration. After World War II, the world
began to refocus on the importance of oceans in maritime transportation,
resources, defense, and environmental issues. Recently, the importance of
oceans in global climate change has been realized, as well as their sustainable
development and relationship to regional and global ecological systems. All of
these have resulted in the rapid growth of oceanography and related marine
science disciplines, which are becoming some of the most active research areas
in natural science.
Traditionally, oceanography can be divided into four distinct but mutually
interconnected areas: physical, chemical (marine chemistry), geological (or
marine geology), and biological oceanography (marine biology). Physical
oceanography research involves the studies of water temperature, density and
pressure characteristics, waves, currents and tides, and movements of water
masses, as well as air-sea interactions. A subset of this, often referred to as
marine physics, studies the basic phenomena associated with acoustical, optical,
and other physical properties of the ocean. Ocean chemistry focuses on the
chemical composition of seawater, and its impact on various biogeochemical
cycles. Marine geology studies the structures of the ocean basin and tectonic
plates, and their characteristics and evolution. Marine biology involves the
study of all marine life forms, including their life cycles, and marine ecology.

1.4.1 Chemistry
The world's oceans provide about 505,000 cubic km of water each year in the
form of evaporation under the solar radiation; this constitutes about 87.5% of
the water vapor in the atmosphere. Water evaporated from the sea and land
returns to the sea and land in the form of rain and snow. On average,
approximately 47,000 cubic km of water per year flows across the land under
gravity into rivers, or percolates into the soil as groundwater, eventually
reaching the ocean. These constitute the hydrological cycle on
Earth (Fig. 1.2).
Natural water is an aqueous solution containing a variety of dissolved
salts. In seawater, water accounts for about 96.5%, and the rest is mainly a
variety of dissolved salts and minerals, as well as dissolved oxygen, carbon
dioxide, and nitrogen gases. The typical salinity of the world's oceans is
about 3.5%, or 35 parts per thousand (ppt). If the salt in the water of the
global oceans could be extracted and laid evenly on the earth's surface, it
would form a 40-m-thick salt layer.
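That salt-layer figure can be checked with back-of-the-envelope arithmetic. The constants below are commonly cited approximate values (ocean volume, mean salinity, seawater and rock-salt densities, Earth's surface area), not numbers taken from this text:

```python
# Rough check of the "40-m salt layer" claim using approximate constants.
ocean_volume_m3 = 1.335e18      # ~1.335e9 km^3 of seawater
seawater_density = 1025.0       # kg/m^3
salinity = 0.035                # 35 g of salt per kg of seawater (35 ppt)
salt_density = 2160.0           # kg/m^3, roughly that of rock salt
earth_area_m2 = 5.1e14          # ~5.1e8 km^2, total surface area of Earth

salt_mass = ocean_volume_m3 * seawater_density * salinity   # kg of dissolved salt
layer_thickness = salt_mass / salt_density / earth_area_m2  # meters, spread evenly
print(f"salt layer: {layer_thickness:.0f} m")               # on the order of 40 m
```

The result lands near the 40-m value quoted above, which is reassuring given how rough the inputs are.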

Figure 1.2 Water cycle on planet Earth. (Courtesy of NOAA.)

The total number of chemical elements found in seawater exceeds 80. With
the exception of hydrogen and oxygen, which are bound in the water molecule
itself, the most abundant elements in seawater occur in ionic form: chlorine,
sodium, magnesium, sulphur, calcium, potassium, bromine, carbon, strontium,
boron, and fluorine. These constitute 99% of the total dissolved elements in
seawater. The rest are termed trace elements. Trace
elements are often used as tracers for water masses in ocean circulation studies,
due to their low concentration and the assumption of conservation in the
absence of sources.
The dissolved gases such as oxygen and carbon dioxide and other nutrient
elements, including phosphorus, nitrogen, and silicon, are critical to the
marine life cycle. The dissolved substances in water not only affect the
physical and chemical characteristics of water, but also provide nutrients to
sustain marine life and ecological systems. Due to restricted circulation and
associated biological activity, pockets of water in certain parts of the oceans
can become too depleted of oxygen to support most biological activity (for
most fish, a drop to 30% or less of the normal dissolved-oxygen level),
resulting in lifeless regions. This condition is called hypoxia, and it can be
found throughout the water column and near the seafloor.
The ocean is of critical importance to life. The main elements of seawater
content and composition are essentially the same as the body fluids of many
lower-level animals. The serum of certain highly evolved mammals, including
humans, has an elemental composition similar to that of seawater.
Research has shown that life on earth originated in the ocean, and the vast
majority of life forms exist in the ocean. On land, biological activities are
concentrated within tens of meters from the surface. In the ocean, however,
habitat depths range up to ten thousand meters. There should be little doubt
why the ocean is termed by many the cradle of life.

1.4.2 Physics
The earth's oceans are an important part of the planetary water cycle. They
interact and couple closely with the atmosphere, lithosphere, and biosphere,
and serve as a key link in maintaining and regulating the characteristics and
ecological features of our planet.
Due to the high heat capacity of water, the world's oceans are an important
source of atmospheric water vapor and the planet's main heat store. The
oceans help to balance the material and energy budgets of the earth's surface,
acting as a capacitor for solar radiation. Because of the difference in
reflectance between land and the water's surface, the solar radiation absorbed
per unit area of sea surface is approximately 25 to 50% more than that of
land at the same latitude. This accounts for the 10 °C difference between the
temperature of the surface waters of the global oceans and the global land
average.
Due to the inherent differences of solar radiation distribution on the
earths surface, water temperatures near the equator are significantly higher
than high-latitude waters. This is one of the driving forces behind large-scale
global circulations: warmer water flows from the equator toward high
latitudes at the surface, while colder water flows back toward the equator at
depth to balance it, a pattern often referred to as the global conveyor belt.
This leads to a redistribution of energy, buffering the extremes of the
weather in polar and equatorial regions.
Evaporation from the seas surface produces large quantities of water
vapor. This vapor is often transported thousands of miles away by local and
global atmospheric circulation patterns, where it condenses and then
precipitates as rain and snow on the land as well as the oceans. This precipitation
serves as the source of fresh water (Fig. 1.2). Thus, oceans play important
roles in global weather and climate patterns, and have a profound impact on
shaping the surface morphology of the earth.
As a physical system, there are different types of movement within the
oceans at different depths and locations. This movement also strongly
influences the biological, chemical, and geological processes. The motions can
be crudely categorized by their causes: thermohaline circulation due to density
variation of the seawater caused by surface evaporation, condensation, and
icing as well as mixing (the global conveyor belt is a prime example); wind-
driven circulation due to wind stress; geostrophic flows due to the pressure and
rotation of the earth; tidal movements due to the gravitational attraction of
planetary bodies; turbulent mixing due to velocity differentials, or shear; and
various types of waves due to different perturbations caused by localized wind, inertial
waves, and planetary waves.
One key element to the understanding of physical oceanography is the
Coriolis effect. Briefly, it is the apparent deflection of moving objects from a
straight-line path, caused by the rotation of the earth. While it is easily
understood by an observer outside the rotating system (i.e., an astronaut in
space observing Earth), it is an odd concept to grasp for an observer within
the rotating system (someone on Earth): objects tend to turn to the right in
the Northern Hemisphere, and to the left in the Southern Hemisphere, as if
a force were pushing them. This apparent force is called the Coriolis force.
A simple example can be seen in Fig. 1.3. The imaginary force arises from
the inertia of a moving object. Viewed from outside the rotating system, an
object moving in a straight line simply maintains its course. But to an
observer inside the system, its path appears to curve, because the observer
moves along with the rotating frame.
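The strength of the deflection is commonly quantified by the Coriolis parameter, f = 2 Ω sin(latitude), where Ω is Earth's rotation rate. A minimal sketch follows; the function name is ours, not from the text:

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis_parameter(lat_deg: float) -> float:
    """f = 2 * Omega * sin(latitude); positive in the Northern Hemisphere."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

print(coriolis_parameter(45.0))   # ~ +1.03e-4 1/s: deflection to the right
print(coriolis_parameter(-45.0))  # negative: deflection to the left
print(coriolis_parameter(0.0))    # zero at the equator
```

The sign change across the equator mirrors the right-turning versus left-turning behavior described above.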
In the ocean, if the wind is blowing northward, the surface water tries to
follow the wind. However, with the rotation of the earth, it ends up flowing
45 deg to the right of the wind (see Fig. 1.4). The water beneath the surface
tends to follow the surface water due to friction (viscosity, to be exact).
However, it too moves to the right of the layer above it, and so does each
successively deeper layer. The net effect is that the integrated net flow over a
water column is 90 deg to the right of the wind. This is termed Ekman transport.
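The 45-deg surface deflection and the 90-deg depth-integrated transport can be checked numerically with the classical steady-state Ekman spiral solution (Northern Hemisphere, wind blowing toward the north). The surface speed V0 and Ekman depth D below are illustrative values, not from this text:

```python
import numpy as np

V0 = 0.1    # surface current speed, m/s (illustrative)
D = 50.0    # Ekman depth, m (illustrative)
a = np.pi / D
z = np.linspace(-5.0 * D, 0.0, 200001)   # depth axis, z <= 0
dz = z[1] - z[0]

# Classical Ekman spiral for a wind blowing toward +y (north), NH:
u = V0 * np.exp(a * z) * np.cos(np.pi / 4 + a * z)  # eastward component
v = V0 * np.exp(a * z) * np.sin(np.pi / 4 + a * z)  # northward component

# Surface current: 45 deg to the right of the wind.
surface_deg = np.degrees(np.arctan2(u[-1], v[-1]))  # clockwise from north
print(f"surface current: {surface_deg:.1f} deg right of the wind")

# Depth-integrated transport: 90 deg to the right of the wind.
U = np.sum(u) * dz   # eastward transport per unit width (m^2/s)
V = np.sum(v) * dz   # northward transport, integrates to ~0
transport_deg = np.degrees(np.arctan2(U, V))
print(f"net transport: {transport_deg:.1f} deg right of the wind")
```

The northward transport integrates to essentially zero, leaving the net flow purely eastward, i.e., 90 deg to the right of the northward wind.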

Figure 1.3 Deflection of movement due to Coriolis force.


Figure 1.4 Sketch of Ekman transport.

Combined with trade winds around the ocean, the Ekman transport
indeed causes a net transport of water toward the center of the ocean,
resulting in an approximate 2-m elevation in the central area of the Atlantic
Ocean. The balance between the pressure gradient around the gyre and the
Coriolis force results in a geostrophic flow around the gyre. On smaller
scales, often associated with cyclonic winds (in the Northern Hemisphere,
cyclonic is counterclockwise), the net effect is localized Ekman transport in
the form of divergence and upwelling, based on the same principle just discussed.
Anticyclonic wind stress, on the other hand, leads to convergence and
downwelling Ekman transport. These are termed Ekman suction (divergence)
and Ekman pumping (convergence), respectively. The Coriolis effect also causes
another major oceanic feature, namely westward intensification of the ocean
currents on the western side of the ocean basin compared to the eastern side.
This is due to the variation of the Coriolis effect with latitude over the
earth's surface. For more details and in-depth analyses, please refer to the
additional reading list at the end of this chapter.
Wind can also produce vertical transports of water locally, known as
upwelling and downwelling. Wind that blows parallel to the coast can
generate a coastal upwelling. For example, in Fig. 1.5, a northerly wind blows
parallel to a north-south coastline in the Northern Hemisphere. The Ekman
transport is offshore, creating divergence. The diverging water is replaced
by water from below. Because the upwelling water is frequently nutrient
rich, upwelling is often associated with increased biological productivity in the
upper layer. Equatorial upwelling occurs near the equator and results from
divergence caused by trade winds. In the Northern Hemisphere, a southerly
wind blowing parallel to a north-south coastline can produce downwelling.
In this case, the Ekman transport is toward the coast, causing convergence
(water pileup). The result of this convergence is downward flow of the water.

Figure 1.5 Upwelling caused by Coriolis force. (Courtesy of Wikipedia.)

1.4.3 Biology
There are about 160,000 to 200,000 different species of animals and more than
10,000 species of plants in the ocean. Most ocean creatures, like the rest of
the biosphere, depend directly or indirectly on solar energy captured through
photosynthesis for their survival, forming the food chain.
The marine ecological systems are heavily influenced by the physical processes
within the oceans. Therefore, different communities and niches can be
expected from different water masses. The motion of the water masses also
provides key dynamics for the dissolving of various materials, the resuspension
of particles, and transport of sediment near the ocean floor. For this reason,
the distribution of chemical compositions, geological sedimentations, and
morphology of the coast is tightly linked to the physical dynamics of the
ocean. In turn, these motions are also influenced by the geological features as
well as chemical and biological environments. This is yet more evidence of
the coupled oceanic system.
Marine biology is the study of the various life forms in the ocean. Contrary
to common misconception, the majority of these studies do not focus on
marine mammals such as dolphins and whales, or on sharks. Rather, they
involve organisms almost invisible to the naked eye: the phytoplankton.
These tiny organisms are the foundation of the oceanic food chain, and are
the primary producers of organic matter by means of photosynthesis in the
world's oceans. They support the life of higher members of the ecological
system such as zooplankton, larvae, shrimp, crustaceans, and fish. They can
also impact the environment directly by phytoplankton blooms, which are
extreme events.
The blooms typically occur in spring and sometimes fall time frames. In the
spring, when temperatures and light increase along with high levels of
nutrients available in the upper water column (thanks to mixing by winter
storms), rapid phytoplankton growth can occur with the help of stratification
that inhibits vertical mixing, thus reducing the mortality rate. This is the cause of
red tides, a sudden increase of phytoplankton growth in the upper layer of the
ocean that leads to reduced visibility in the water and an increased level of
toxins by certain phytoplankton species. Red tides can result in widespread
mortality of fish and aquatic stocks in the short term. Fall blooms are often
the result of increased nutrient availability through increased vertical mixing
via summer storms, explaining why they are often smaller in scale when
compared to the spring blooms.
Elements required for biological activity have their own recycling pathways.
They take on various forms, from inorganic states to organic compounds, in
the biosphere. Due in part to the limited supply of elements in the earth's
atmosphere and surface (both land and oceans), all elements in living cells are
recycled to sustain the growth of the vast ecosystem. After death, the elements
decompose and disintegrate back into inorganic states over a long period of
time. These cycles can be thought of as repositories or exchange libraries of
different scales. Larger exchanges associated with inorganic parts are slow;
smaller exchanges associated with organic as well as inorganic activities are
more active and rapid, as part of the ecosystems evolution. A good example
of an extreme case is the processes involved in marine aggregates or the
marine snow type of particles, where the microbial loop associated with
dissolved organic matter (DOM), smaller bacteria, and viruses can
significantly enhance the rate of conversion.

1.4.4 Geology
Marine geology uses geological, geophysical, and geochemical methods to
study the phenomena and changes from the continental margin to the basins of
pelagic oceans. It focuses on the various aspects of geology not only to describe
the physical and chemical properties of seabed morphology, sediment, and
rocks, but also to explore the mechanisms of their formation and processes. In
the 1950s and 1960s, the theory of plate tectonics was established to explain
seafloor spreading (Fig. 1.6) and the deep trenches that had been observed. This
can be seen as an extension of the continental drift theory by Wegener
(Thurman, 1996). In the late 1960s, with the start of ocean drilling and data
obtained across global ocean floor samples, paleoceanography was formed to
examine the long history of ocean formation and shed light on the past climate
changes of the earth. The discovery in 1977 of deep submarine hot springs and
associated biological communities at the mid-ocean ridge of the eastern Pacific
Ocean opened the door to studies on the origins of life.
As part of the global crust system, oceanic crust is different from that
of continental crust. Continental crusts are typically lighter, thicker, and
relatively older. Oceanic crusts are heavier, thinner due to the lack of a
granite layer, and relatively younger. This results in the rising of the lighter
continental crust and sinking of the heavier oceanic crust, and is the reason
for the vast and deep oceans.

Figure 1.6 Illustration of lithosphere and plate tectonics, a focal point of geological
oceanography. (Courtesy of Rice University.)
Due to its water cover, the ocean's crust is difficult to observe. In
the last few decades, deep-sea ocean research has revealed deep trenches
exceeding 10,000 m, fault zones more than a thousand miles in length, and
countless seamounts. This research led to the discovery of an impressive underwater mountain system that cuts across the basins and around the globe,
with a total length of more than 80,000 km (50,000 miles). This underwater
mountain range runs through the Atlantic Ocean basin and the central Indian
Ocean, and is termed a mid-ocean ridge (MOR). At the top of a MOR, there is
typically a valley known as a rift running along its spine, formed by plate
tectonics. This type of oceanic ridge is characteristic of what is known as an
oceanic spreading center, which is responsible for seafloor spreading.
In the 1970s, oceanographers observed MORs and rifts using deep-diving
submersibles, and discovered hot springs gushing out from the rifts. This is
caused by cold seawater seeping into the cracks of the hot oceanic crust. The
heated seawater interacts with oceanic basalt, releasing iron, manganese,
copper, and zinc into the mix, to produce metal-rich hot springs. The
magnesium and sulfate carried by the rivers into the ocean are absorbed by
these reactions in the crust. It is estimated that in approximately eight to ten
million years, an amount of water equivalent to the world's oceans will have
16 Chapter 1

cycled through the MOR rifts. This has far-reaching implications for the
chemical contents of the seawater.
In summary, the various natural processes occurring in the ocean are closely
connected with the atmosphere, lithosphere, and biosphere. They interact
and constrain one another, linked by various forms of energy and material
exchange, to form a multilayer, global-scale natural habitat. The goal of
oceanography, therefore, is to investigate (through observation and experimen-
tation), analyze, conclude, theorize, and predict the properties and processes of
the system, so that we can better utilize and protect the world we live in.

1.4.5 Biogeochemistry: global carbon budget

Global climate change is closely related to the carbon cycle, thus the role of
the ocean in this capacity should be briefly discussed. An overview of the
biogeochemical cycle will benefit our discussion.
Biogeochemical cycles can be divided into gas and sedimentation cycles.
Gas cycles include those of nitrogen, oxygen, carbon, and water, while
sedimentation cycles include iron, calcium, phosphorus, and other earth
elements. Gas cycles move faster than sedimentation cycles. Also, due to the
large reservoir of a gas cycle, it is easier to regulate changes in the biosphere.
For example, local accumulation of carbon dioxide can be quickly absorbed
by plants or dispersed by wind. Sedimentation cycles, although varied by
different elements, often consist of solution and sediments. Weathering helps
to release minerals in the crust in the form of salts. Some of the minerals
dissolve in water, cycle through a series of organisms, and finally end up in
deep-sea sediments, thus leaving the cycle. Other minerals are deposited in the
form of sediment in shallow seas, and eventually weather and return to the cycle.
Plants and certain animals obtain their required nutrients from the
environment by solutions (such as drinking water), while other animals obtain
most of their nutrients by ingesting plants and herbivores. After death,
elements fixed in these biological forms are released back to the earth through
decomposition over time, and can be reused in other biological activities.
Human beings are the main threat within this biogeochemical cycle of nutrient recycling in nature. Not only does the growth of our bodies require the indispensable elements of life, but we also consume these elements in large quantities through industrial use of manmade as well as new materials (such as plastics).
This interference by humans in the nutrient recycling process has resulted in
local excesses or accelerated processes in some places, and in deficiencies or
slowdowns in others.
Due to the vast volume of the ocean, it has an almost unlimited capacity
to absorb carbon dioxide as part of the chemical process (Cambridge
Encyclopedia, 2005). Unfortunately, the rate of carbon dioxide absorbed by

the oceans is limited by the rate of exchange of surface water with the deeper
ocean; once the surface water has been saturated with carbon dioxide, it can
no longer absorb it. The deeper water has less carbon dioxide and thus can
absorb more. In short, the rate of carbon dioxide absorption depends on the
time it takes surface water to mix with the deeper layers of ocean water.
It might help to visualize this process if we think of ocean water as having
two parts. On the top is a thin layer of warmer water. Beneath it is the lower,
much thicker and colder water layer. The lower, colder water has a higher
density, and does not mix well with the lighter, warmer water above. With
exposure to the atmosphere, the surface water can absorb carbon dioxide
before reaching equilibrium. However, due to its shallowness (up to 200 m,
compared to the deeper water, which has 90% of the total volume of the
ocean), its absorption capacity is limited, in part because exchange is hindered by the thermocline, a sharp temperature gradient
separating the warmer and colder water that also defines the bottom of the
mixed layer. The only place that allows less restrictive exchanges or mixing
between surface and deeper water is the polar regions, where no significant
thermocline can be found throughout the seasons.
Marine chemists often use radioactive carbon-14 to estimate the mixing
time frame between the upper and lower layers of seawater. Carbon-14 and its much more abundant sister isotope, carbon-12, share the same chemical properties. Carbon-12 is the ordinary, highly abundant form, and can be found in timber, coal, and most biological entities. Carbon-14 is quite
different from the more stable carbon-12. Carbon-14 is unstable, decays
by radioactive emission, and can be used as a timer in the carbon dioxide
system. In the natural atmosphere, only trace amounts of carbon-14 exist; it has a half-life of about six thousand years, decaying into ordinary
nitrogen atoms. Thus, if we extract 100 carbon-14 atoms from the
atmosphere, every six thousand years the number of carbon-14 atoms will
reduce by half: six thousand years later we have 50; twelve thousand years
later we are left with 25; etc.
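The halving arithmetic above is just the radioactive decay law N = N0 · 2^(−t/T½). A minimal sketch in Python, using the text's round figure of six thousand years for the half-life (the accepted value is closer to 5,730 years):

```python
def c14_remaining(n0, years, half_life=6000.0):
    """Carbon-14 atoms remaining after `years`, from N = n0 * 2**(-t / half_life)."""
    return n0 * 0.5 ** (years / half_life)

# Starting from 100 atoms, as in the text:
print(c14_remaining(100, 6000))   # 50.0
print(c14_remaining(100, 12000))  # 25.0
```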
We can assume that the ratio between carbon-14 and carbon-12 is a fixed
value in the atmosphere. Since the industrial revolution in the 1800s, the
burning of fossil fuels has caused carbon dioxide to be released into the
atmosphere. As these carbon atoms have already undergone millions of years
of history, they no longer contain radioactive carbon-14, and therefore the
ratio of carbon-14 to carbon-12 in the atmosphere drops. However, the large
number of nuclear tests in the 1950s to 1960s released large amounts of
carbon-14 into the atmosphere, and subsequently onto the surface of the
ocean, causing an increase in the ratio.
This allows us to quantify the rate of exchange with the surface ocean.
Over the past few decades, measurements of carbon-14 in shallow water
suggest that these manmade radioactive carbon-14 atoms have penetrated the

surface layer and started to show up in deeper layers, passing the thermocline.
By examining the ratio from the waters formed prior to 1800, oceanographers
can calculate when the water mass was formed, based on this value, and the
assumption that its last contact with the atmosphere determined this ratio. The
results show that deep-sea waters were last in contact with the atmosphere on
average approximately one thousand years ago. This slow rate of mixing
indicates that we can only expect a turnover rate of absorption of carbon
dioxide on the order of one thousand years.
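Inverting the same decay law gives the apparent age of a water mass from its measured carbon ratio, a rough sketch of the marine chemists' approach described above (the function name and the fixed-atmospheric-ratio assumption are illustrative):

```python
import math

def water_mass_age(ratio_sample, ratio_atm=1.0, half_life=6000.0):
    """Years since last atmospheric contact, assuming the C-14/C-12 ratio was set
    at the surface and has only decayed since: t = -half_life * log2(R / R_atm)."""
    return -half_life * math.log2(ratio_sample / ratio_atm)

# A water mass retaining ~89% of the atmospheric ratio dates to roughly
# 1000 years, the mean deep-water age quoted above.
age = water_mass_age(0.891)
```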
These estimates, combined with other observations, suggest that about
35% of the excessively produced carbon dioxide has been absorbed by the
surface of the ocean. If this estimate is correct, only about half of the fossil-fuel-produced carbon dioxide since the 1950s has been transported to the
ocean, leaving the other half in the atmosphere, and possibly influencing the
global climate. It is worth mentioning, however, that the rate of exchange is
based on imperfect mathematical models, and reliable data are also sparse.
Therefore, predictions on the source or sink of carbon dioxide are anything
but definitive, and require more data collection and better models. This is one
of the most active research areas in oceanography, especially optical
oceanography, where global, synoptic monitoring, and remote sensing of
primary producers can provide a better understanding of carbon fixation.

1.5 Summary
Oceanography today is the general term for the subareas mentioned above; each subdiscipline is built on in situ field observations or remotely sensed ocean data and the theories derived from them. The study of oceanography requires sampling of ocean waters, biological forms, and sediments. It can even include coring of the sea floor, or the use of acoustic signals to investigate structures beneath the ocean floor, as well as local and remote sensing via airborne platforms and
spaceborne satellites. Armed with more in-depth understanding of the ocean
and its related processes, scientists (oceanographers) can provide more
accurate predictions over longer terms, from hurricanes to long-term climate
changes, enabling us to not only better protect our environment, but also plan
for sustainable growth.

References
Jerlov, N. G. (1976). Marine Optics. Amsterdam: Elsevier.
Oceanic ridges, Cambridge Encyclopedia (2005).
Thurman, H. V. (1996). Introductory Oceanography, 5th ed. Upper Saddle River: Prentice-Hall.

Additional Reading
Martin, S. (2004). An Introduction to Ocean Remote Sensing. Cambridge:
Cambridge University Press.
Miller, C. B. (2004). Biological Oceanography. New York: John Wiley and Sons.
Pond, S. and Pickard, G. (1983). Introductory Dynamical Oceanography,
2nd ed. Oxford: Pergamon Press.
Robinson, I. (2004). Measuring the Oceans from Space: The Principle and
Methods of Satellite Oceanography. Berlin: Springer-Verlag.
Chapter 2
Basic Optical Properties of the Ocean
2.1 Introduction
The vast surface area of the ocean covers about three quarters of the planet's surface. The majority of the energy we receive from the sun is accordingly
absorbed by the ocean. The optical properties of the water body play a critical
role in understanding the processes involved, and interpreting the measurement results from various active and passive sensors. For example, reflectance from the ocean's surface and the bottom of shallow seas not only affects the
signals returned, but also carries information about their properties, which are
integral parts of the ocean sensing and monitoring process. The amount of
light being scattered along its transmission path is a function of the water
itself, modulated by the temperature, salinity, and density of the environment,
as well as the constituents within. It is assumed that readers are familiar with
basic optical definitions, from which we introduce concepts often used in the
field of ocean optics. This chapter serves as the foundation for the following chapters, and readers are encouraged to read through it unless they are already confident with the terminology and symbols used in ocean sensing literature.
These basic terms are touched on briefly during discussions of light attenuation,
light propagation, and light generation topics.
Next, a brief review of the basics of electromagnetic (EM) waves is given,
including polarization. Maxwell's equations [Eq. (2.1)] are listed here for
simplicity to assist in later discussions. Additional details regarding the equations
can be found in numerous optics textbooks (Fowles, 1975; Born and Wolf, 2005).
\[
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{H} = 0, \qquad
\nabla \times \mathbf{E} = -\mu_0 \frac{\partial \mathbf{H}}{\partial t}, \qquad
\nabla \times \mathbf{H} = \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}. \tag{2.1}
\]


The equations essentially relate the electric field vector E to the magnetic field vector H in a source-free, nonconductive medium. The energy flows with the propagation of light via the Poynting vector $\mathbf{S} = \mathbf{E} \times \mathbf{H}$. For a harmonic plane wave,
\[
\mathbf{E} = \mathbf{E}_0 \exp[i(\mathbf{k} \cdot \mathbf{r} - \omega t)], \qquad
\mathbf{H} = \mathbf{H}_0 \exp[i(\mathbf{k} \cdot \mathbf{r} - \omega t)]. \tag{2.2}
\]
If the amplitudes E0 and H0 are real vectors, the wave is said to be linearly
polarized, or plane polarized, as the electric field vector stays in its own plane
during propagation (Fig. 2.1).
Partial polarization refers to the situation where light is partially polarized
and partially unpolarized. A degree of polarization (DOP) can be defined as
the ratio of intensity between the polarized light and total light (polarized and
unpolarized). Circular polarization can be defined similarly, where an electric
field vector rotates clockwise or counterclockwise along its propagation path.
The former is generally referred to as right circularly polarized, while the other
is left (defined as such if the reader is looking at the incoming light). A more
general case is elliptically polarized light, and details can be found in other
textbooks, such as Born and Wolf (2005) and Fowles (1975). Circular polarization can be explained as a fixed quarter-wave phase delay (π/2) between two orthogonal electric field components. This is why quarter-wave plates are used to generate
circular polarization in instrument design and signal analysis (Fowles, 1975).
The complete state of polarization is often described by the four-element
Stokes vector (S0, S1, S2, S3), where the total intensity, linear vertical polarization, 45-deg linear polarization, and circular polarization intensities are used,
respectively. The propagation of the polarized light state can then be
conveniently calculated using linear algebra, where the effects of the medium
and materials can be expressed by Mueller matrices (Born and Wolf, 2005).
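The Stokes-vector bookkeeping described above can be sketched in a few lines. The degree of polarization follows the standard definition DOP = sqrt(S1² + S2² + S3²)/S0, and the linear-polarizer Mueller matrix below is a standard textbook form, not one given in this chapter:

```python
import math

def degree_of_polarization(s):
    """DOP of a Stokes vector (S0, S1, S2, S3): polarized fraction of total intensity."""
    s0, s1, s2, s3 = s
    return math.sqrt(s1 ** 2 + s2 ** 2 + s3 ** 2) / s0

def apply_mueller(m, s):
    """Propagate a Stokes vector through an optical element via linear algebra: s' = M s."""
    return [sum(m[i][j] * s[j] for j in range(4)) for i in range(4)]

# Mueller matrix of an ideal linear polarizer (horizontal transmission axis)
POLARIZER_H = [
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

unpolarized = [1.0, 0.0, 0.0, 0.0]            # DOP = 0
out = apply_mueller(POLARIZER_H, unpolarized)  # half the intensity, fully polarized
```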

Figure 2.1 Linear polarization of a plane harmonic wave. (Courtesy of Wikipedia.)


2.2 Light Attenuation: Absorption and Scattering

2.2.1 Absorption
It is a well-known fact that visible light can penetrate into depths of the
ocean, contributing to the vast diversity of life forms on our planet. A quick
peek at the absorption spectrum of EM waves as a function of wavelength
(Fig. 2.2) tells us that indeed such a window exists, coincidentally aligned
with our vision response range from 360 nm (violet) to about 750 nm (red)
(perhaps not too much of a coincidence because the human eye is filled with
water, among other things). Therefore, as the sun's radiation reaches the ocean's surface, radiation at wavelengths shorter (ultraviolet and beyond) or longer (infrared and beyond) than this window is attenuated very quickly by the water, and the intensity of light decreases exponentially with depth, following the exponential decay law (Beer-Lambert law) discussed later.
Qualitatively speaking, in clear, open ocean waters, visible light is
absorbed at the longest wavelengths first. Thus, red, orange, and yellow
wavelengths are absorbed at increasing water depths, and blue and violet
wavelengths reach the deepest parts of the water column. Because the blue and
violet wavelengths are absorbed last compared to the other wavelengths, open-
ocean waters appear deep blue to the eye. In coastal waters, higher
concentrations of phytoplankton (therefore chlorophyll-a and other green
color pigments) are present compared to very clear ocean gyre waters. These
pigments in the phytoplankton absorb light, and the plants themselves scatter
light, making coastal waters less clear than open waters. Chlorophyll-a

Figure 2.2 The absorption of EM waves by pure water. (Courtesy of Wikipedia, © 2008 Kebes at English Wikipedia.)

absorbs light most strongly in the shortest wavelengths (blue and violet), as
well as the red end of the visible spectrum. In near-shore waters where there
are high concentrations of phytoplankton, the green wavelength reaches the
deepest parts of the water column, and the color of the water appears green-
blue or green to an observer due to backscattered photons. Beyond this visible
wavelength window, EM wave penetration depth is rather limited, on the
order of millimeters or centimeters at the maximum (Fig. 2.2). This limits
thermal and microwave sensing to ocean surface features only. For this
reason, treatment of in-water optical properties focuses on the visible bands.
These relationships can be explained quantitatively by Eq. (2.3) and the
corresponding absorption characteristics of each component involved. By
omitting subscripts that denote wavelength dependence, the total absorption
coefficient can be expressed as the summation of individual constituents,
namely those from particles (biological and inorganic), yellow substances [gelbstoff, or colored dissolved organic matter (CDOM)], and the water itself:
\[ a = a_p + a_{\mathrm{cdom}} + a_w. \tag{2.3} \]
Their spectral absorption characteristics can be seen in Fig. 2.3. Notice that the lack of CDOM in clear, open ocean water helps to preserve the amount of blue light, since CDOM absorption takes an exponentially decaying form as a function of wavelength. The
two absorption peaks from phytoplankton around 443 and 670 nm significantly
reduce the amount of blue and red light, although the majority of red light is
absorbed by the water itself, as can be seen by the significant increase in loss rate
starting at 550 nm. Regardless of the relative contributions of each component,
the lowest values seem to congregate around 550 nm, which is often termed the
optimal transmission window of light in oceanic environments. This is one of
the main reasons why 532-nm lasers are the most widely used light sources in
underwater electro-optical (EO) applications.
Absorption coefficients are considered inherent optical properties (IOPs)
of water because they are solely the property of the medium and are not
affected by environmental factors, such as the source of light location, sun

Figure 2.3 Spectral absorption by different components in the water. [Reproduced with
permission from Mueller et al. (2003).]

angle, or background radiation. Properties that are dependent on light conditions are called apparent optical properties (AOPs) and are discussed later in more detail.
The primary production rate and phytoplankton growth rate have been
linked to the in vivo absorption by living algae cells, which is the absorption
coefficient of phytoplankton per unit of chlorophyll-a concentration (Kiefer
and Mitchell, 1983; Platt and Sathyendranath, 1988; Morel, 1991; Anderson,
1993). It is generally referred to as the chl-a-specific absorption coefficient $a^{*}_{chl}$, which can be written as
\[ a_{chl} = a^{*}_{chl} \cdot \mathrm{chl}. \tag{2.4} \]
Closer examination of measured chlorophyll-specific absorption suggests
that it can vary as a function of pigment composition or location (Carder et al.,
1991; Bricaud et al., 1995). Furthermore, variability can be modeled in a power
function in the form shown in Bricaud et al. (1995), as illustrated in Fig. 2.4.
\[ a^{*}_{chl} = A(\lambda)\, \mathrm{chl}^{-B(\lambda)}, \tag{2.5} \]
where A(λ) and B(λ) are positive values dependent on the wavelength of impinging light. The simplest explanation of this phenomenon is that the
increasing size of a living cell reduces the efficiency of the cell in harvesting
light, which is in agreement with other studies, including effects of light
intensity (Fujiki and Taguchi, 2002). This self-shading effect is also termed the
package effect.
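Equations (2.3) through (2.5) can be combined into a small absorption budget. A minimal sketch; the numerical coefficients below are illustrative placeholders only, since real values of A(λ) and B(λ) are tabulated per wavelength by Bricaud et al. (1995):

```python
def chl_specific_absorption(chl, A, B):
    """Power-law form of Eq. (2.5): a*_chl = A(lambda) * chl**(-B(lambda))."""
    return A * chl ** (-B)

def phytoplankton_absorption(chl, A, B):
    """Eq. (2.4): a_chl = a*_chl * chl."""
    return chl_specific_absorption(chl, A, B) * chl

def total_absorption(a_particle, a_cdom, a_water):
    """Eq. (2.3): total absorption as the sum of its constituents."""
    return a_particle + a_cdom + a_water

# Doubling chlorophyll less than doubles a_chl when B > 0 (the package effect):
a1 = phytoplankton_absorption(1.0, A=0.04, B=0.3)
a2 = phytoplankton_absorption(2.0, A=0.04, B=0.3)
```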

2.2.2 Scattering: volume scattering function and backscattering coefficient
A more complex but useful IOP is the volume scattering function (VSF, b).
It describes the probability of an incoming photon being scattered into a specific angle θ away from the incoming direction while passing through a unit distance Δr, as seen in Fig. 2.5. Operationally, it can be measured by the amount of scattered photons per unit angle (solid angle in 3D) per unit volume. Assuming that the incoming flux is $\Phi_I$ and the scattered flux is $\Phi_s$, the VSF can be written as
\[ \beta(\theta, \phi) = \frac{d\Phi_s(\theta, \phi)}{\Phi_I \, dr \, d\Omega}. \tag{2.6} \]
Because $E = \Phi / A$,
\[ \beta(\theta, \phi) = \frac{dI_s(\theta, \phi)}{E \, dV}. \tag{2.7} \]
It is easy to see from the prior definition that the VSF gives the relative
contribution or probability of photons scattering into specific directions.

Figure 2.4 Variability of chlorophyll-specific absorption as a function of concentration. [Reproduced with permission from Bricaud et al. (1995), © 1995 John Wiley & Sons.]

Figure 2.5 Volume scattering function (VSF). [Reproduced courtesy of Mobley et al. (2012).]

Its normalized form is termed the phase function, which is exactly the
probability density function, as it integrates to unity over the 4π solid angle. The total scattering coefficient b (m⁻¹) is related to the VSF by
\[ b = \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \beta(\theta, \phi) \sin\theta \, d\phi \, d\theta. \tag{2.8} \]

The backscattering coefficient $b_b$ (m⁻¹) describes the overall probability of photons moving backward relative to their initial path, defined as
\[ b_b = \int_{\theta=\pi/2}^{\pi} \int_{\phi=0}^{2\pi} \beta(\theta, \phi) \sin\theta \, d\phi \, d\theta. \tag{2.9} \]

In practice, bb is often determined by a ratio using measurements from a

specific angle (Chami et al., 2006; Hou et al., 2010).
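For an azimuthally symmetric VSF, Eqs. (2.8) and (2.9) reduce to 2π ∫ β(θ) sin θ dθ over the appropriate θ range. A minimal numerical sketch using the trapezoidal rule; the isotropic test VSF is illustrative only:

```python
import math

def scattering_coefficient(beta, theta_lo, theta_hi, n=20000):
    """2*pi * integral of beta(theta) * sin(theta) dtheta, trapezoidal rule."""
    h = (theta_hi - theta_lo) / n
    s = 0.5 * (beta(theta_lo) * math.sin(theta_lo) + beta(theta_hi) * math.sin(theta_hi))
    for i in range(1, n):
        t = theta_lo + i * h
        s += beta(t) * math.sin(t)
    return 2.0 * math.pi * s * h

# Sanity check with an isotropic VSF, beta = 1/(4*pi):
# total scattering integrates to 1, the backward half to 0.5.
iso = lambda theta: 1.0 / (4.0 * math.pi)
b = scattering_coefficient(iso, 0.0, math.pi)            # Eq. (2.8) -> ~1.0
b_b = scattering_coefficient(iso, math.pi / 2, math.pi)  # Eq. (2.9) -> ~0.5
```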
Scattering happens when inhomogeneity is encountered as the photons march toward their destination. This inhomogeneity is often represented by variation of the light speed traveling in the medium involved, or the real part of the (complex) index of refraction (IOR) or refractive index n:
\[ n = \frac{c}{v}, \tag{2.10} \]
where c is the speed of light in vacuum and v is its speed in the medium.
Traditional treatment of these inhomogeneity centers, or scattering
centers, can be seen in two broad categories: those that study scattering due
to the particles, which assume a complex individual unit of certain structures
and distinctive IOR, and those that study scattering due to turbulence, which
assumes a more gentle change of IOR due to small density fluctuations caused
by temperature or mixing with other dissolved materials (salt, for example).
Scattering by particles (dielectric spheres assumed) has been studied
thoroughly in both analytical forms and numerical simulations (Jerlov, 1976;
Kirk, 1981; van de Hulst, 1981; Bohren and Huffman, 1983). Depending on
the size of the scattering center relative to the wavelength of the incoming
light, two general approximations are used to give rigorously derived scattering phase functions: Mie scattering, for scattering centers comparable to or larger than the wavelength, and Rayleigh scattering, for scattering centers much smaller than the wavelength. The latter scale
corresponds to the molecular scales such as those of the water itself and gas
components in the atmosphere (incidentally, this explains why the sky is blue).
The more gradual change of IOR in natural environments by
microstructures of density packets is often attributed to the effect of
scattering by turbulence. This explains, in part, the discrepancy of strong
forward scattering at very small angles observed, compared to the Mie
theory prediction (Wells, 1973; Mobley, 1994). If we consider that both

salinity and temperature variations are present in the oceanic water being
examined, the IOR variation
\[ \Delta n = \left( \frac{\partial n}{\partial T} \right)_{S} \Delta T + \left( \frac{\partial n}{\partial S} \right)_{T} \Delta S \tag{2.11} \]
can be used to estimate the rough order of variability to be on the order of parts per million, or 10⁻⁶ (Mobley, 1994). This seemingly small number can
be deceiving, however, if the path length of the propagation light and the
number of such inhomogeneities is not considered. Studies have shown that
in theory, these could contribute to the scattering coefficient orders of
magnitude higher than those of typical particle contributions (Bogucki et
al., 2004). By incorporating the salinity and/or temperature dissipation rate
of microstructures in the ocean, along with kinetic energy dissipation, the
variation can be linked to dissipation rates at different subregions of the
power density spectrum, namely inertial, decay, and viscous-convective, to
be examined in association with optical signal variations (Fields, 1972;
Hou, 2009). Additional impacts of turbulence scattering are examined in a
later chapter on underwater sensing involving diver visibilities.
Typical values of the VSF by particles can be found in Petzold (1972), data from which are used in Fig. 2.6. The particles all exhibit very
strong forward scattering and elevated backscattering, with sidescattering
(around 90 deg) being the weakest. These in situ data have been
approximated by many efforts in quasi-analytical forms (Fournier and Forand, 1994), one of which is the most widely used form of Henyey and Greenstein (1941):
\[ \beta(g, \theta) = \frac{1}{4\pi} \, \frac{1 - g^2}{(1 + g^2 - 2g \cos\theta)^{3/2}}, \tag{2.12} \]

where g is a parameter best viewed as the asymmetry of the phase function,

since it is used to adjust the amount of forward and backward scattering

Figure 2.6 VSF of particles and pure seawater, using Petzold's data. [Reproduced with permission from Mobley (1994).]

(Henyey and Greenstein, 1941; Mobley, 1994). The value g = 0.924

provides the closest fit to that of the particle phase function shown in
Fig. 2.6, in the sense that the cosine averages, or asymmetry, are the closest
in value.
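Equation (2.12) is straightforward to evaluate. A sketch, with a numerical check (assuming azimuthal symmetry) that the phase function integrates to unity over the full 4π solid angle:

```python
import math

def henyey_greenstein(theta, g=0.924):
    """Henyey-Greenstein phase function [Eq. (2.12)]; g = 0.924 best matches
    the Petzold particle phase function discussed in the text."""
    return (1.0 - g * g) / (
        4.0 * math.pi * (1.0 + g * g - 2.0 * g * math.cos(theta)) ** 1.5
    )

# Normalization check: 2*pi * integral of p(theta)*sin(theta) dtheta over [0, pi]
# should be 1 (the endpoint terms vanish because sin(0) = sin(pi) = 0).
n = 200000
h = math.pi / n
integral = 2.0 * math.pi * h * sum(
    henyey_greenstein(i * h) * math.sin(i * h) for i in range(n + 1)
)
```

The strong forward peak (the value at 0 deg is orders of magnitude above that at 180 deg) is exactly the asymmetry that g controls.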
Notice that scattering in theory does not attenuate light in the sense that
photons are lost forever. However, its deviation from the original propagating
path does affect remote sensing signal returns, as backscattering is the primary
contributor in this case, and forward scattering is mostly blamed for the
blurring effects on imaging applications. These are discussed in more detail
later on.
While no specific polarization state of light has been given in the previous
discussions, the relationships hold true for both the polarized and unpolarized
light under consideration. It is, however, necessary to mention a few key
points, and readers are encouraged to explore these concepts in more detail if
further knowledge is desired.
The polarization state of propagating light can be affected by the
scattering process associated with the medium. This is because the electric
field of the light wave induces oscillating electric dipoles in the medium's atoms and molecules during propagation. These induced dipoles are
responsible for the absorption and scattering properties discussed before,
along with the direction of the electric field vectors. The radiation pattern of a dipole is at its maximum perpendicular to its axis, and at its minimum along its axis. Furthermore, the radiation is also at its maximum linear polarization along the direction of the dipole. This filtering helps to produce linear polarization of the scattered light at 90 deg to the incident light (Fig. 2.7). This phenomenon can be easily observed by using polarized sunglasses and looking at a clear sky 90 deg away from the incident sunlight.

Figure 2.7 Scattered light at 90 deg with linear polarization.


2.2.3 Beam attenuation

Clarity of the water, often measured by the attenuation of a collimated beam (not necessarily a laser beam), is characterized by the beam attenuation coefficient, denoted by c and often referred to as beam-c. Beam-c is one of the most
widely used IOPs in optical ocean sensing, and is sometimes referred to as a
measure of underwater visibility. It involves two basic processes, namely
absorption, where photons are incorporated into the medium or its
constituents, and scattering, where photons deviate from their original path
into other directions. For the sake of simplicity, we assume only elastic
scattering at this point, meaning no spectral shift is involved. A simple sketch
of this process is shown in Fig. 2.8.
Intuitively, we can understand the need to classify water types, as one part
of the ocean can be crystal clear, like those in the Bahamas or Great Barrier
Reef, and other parts can be dirty or turbid. Our crude judgment of these water types is based on the clarity of the water.
In recent literature, especially that involving ocean color remote sensing, the commonly used water types are case 1 and case 2. In case 1 waters, phytoplankton and their derived products are high in concentration compared to inorganic materials, although the chlorophyll concentration itself can be as low as that found in open ocean waters. In case 2 waters the opposite is true, often indicated by higher attenuation values and associated higher chlorophyll concentrations (Morel and Prieur, 1977; Gordon and Morel, 1983; Morel, 1988; Mobley, 1994; IOCCG, 2000). The relative absorption contribution can
be seen in Fig. 2.9. This is a crude classification, especially compared to earlier literature such as Jerlov's water types (Jerlov, 1976), where types I, II, and III span the clearest oceanic waters to turbid mid-latitude waters, and types 1 through 9 denote coastal waters of increasing turbidity
(Fig. 2.10). A more quantified approach, although still somewhat rudimentary
in an instrument development sense, can be credited to that of the Secchi disk,

Figure 2.8 Beam attenuation due to absorption and scattering in the water column.
[Reproduced courtesy of Mobley et al. (2012).]

Figure 2.9 Absorption coefficients of typical case 1 and case 2 waters. [Reproduced with permission from Mobley (2001), © 2001 Academic Press.]

Figure 2.10 (a) Attenuation of daylight in the ocean in percent per meter as a function of
wavelength. I: extremely pure ocean water; II: turbid tropical-subtropical water; III: mid-
latitude water; and 1 through 9: coastal waters of increasing turbidity. Incidence angle is
90 deg for the first three cases, 45 deg for the other cases. (b) Percentage of 465-nm light
reaching indicated depths for the same types of water. [Reproduced from Jerlov (1976), © 1976 Jerlov.]

where the distance at which a lowered white disk disappears in the water is used as a measure of water quality (Hou, Lee, and Weideman, 2007). More details of
the Secchi disk are discussed in Chapter 4.
Next, a simple derivation is used to illustrate the definition of the beam
attenuation coefficient. If a beam with intensity I (imagine a number of I
photons) is incident on a thin slab of water of thickness dx with a probability

Figure 2.11 Sketch of beam attenuation. [Reproduced courtesy of Mobley et al. (2012).]

of interaction c (Fig. 2.11), the reduction of photons from the beam is denoted
by dI, which is
\[ dI = -cI \, dx, \tag{2.13} \]
\[ \frac{dI}{I} = -c \, dx \quad \text{or} \quad I = I_0 e^{-cx}. \tag{2.14} \]
The above relationship is the Beer-Lambert law. Path length x is often expressed in meters, and c in m⁻¹. Thus, cx is often called the optical length or attenuation length, or the optical depth when studying vertical distributions in the ocean. When cx = 1, the range is often termed one attenuation length, or one optical depth (typically used in the vertical sense). The previous derivation
illustrates that the beam-c coefficient is only a function of the optical density of
the medium, and not a function of the lighting of the background. It is,
therefore, an inherent optical property.
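The Beer-Lambert law of Eq. (2.14) translates directly into code; a minimal sketch:

```python
import math

def transmitted_intensity(i0, c, x):
    """Eq. (2.14): I = I0 * exp(-c * x), with c in 1/m and path length x in m."""
    return i0 * math.exp(-c * x)

# One attenuation length (c * x = 1) leaves 1/e, about 37%, of the beam:
frac = transmitted_intensity(1.0, 0.2, 5.0)  # c*x = 1 -> ~0.368
```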
When using the term attenuation in the prior text, the fate of the photon
was not specified. It is not known if the photon was absorbed by the medium
(water, CDOM, or particles within), or if it bounced off into other directions
(Fig. 2.8). In fact, beam attenuation can be expressed as a sum of these two
contributions, namely,
\[ c = a + b. \tag{2.15} \]
A similar derivation following this approach can be made, and we can see
that both quantities are IOPs as well. Roughly, an absorption coefficient
describes the probability of photons being absorbed by various constituents in
the water, while a scattering coefficient describes the probability of photons
being scattered out of the original transmitting path. The ratio of b to c is termed the single scattering albedo (ω₀).
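The partition of Eq. (2.15) and the single scattering albedo can each be captured in a line (a sketch; the variable names are illustrative):

```python
def beam_attenuation(a, b):
    """Eq. (2.15): c = a + b."""
    return a + b

def single_scattering_albedo(a, b):
    """omega_0 = b / c: the probability that an attenuated photon was
    scattered rather than absorbed."""
    return b / beam_attenuation(a, b)

# In scattering-dominated water, omega_0 approaches 1:
w0 = single_scattering_albedo(a=0.1, b=0.3)  # ~0.75
```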
Notice that (until now) the wavelength dependence of these relationships
has not been discussed. The underlying assumption is that these relationships

all hold with respect to the wavelength under examination. However, such
conserved relationships are not always true, and this becomes obvious when
we partition each process more closely in later sections. Also, in ocean remote
sensing applications, both passive and active, the total amount of photon loss
as a function of depth is related more to diffuse attenuation, which is defined
as the light intensity loss over depth, than to beam attenuation. This relates
to some of the quantities best discussed under light propagation in the next
section, and is examined more closely there.

2.3 Light Propagation

As we know, the dual nature of light can be expressed as individual packets of
energy with no mass (photons), or as propagating EM waves following the
laws magnificently summarized by Maxwell's equations (Born and Wolf,
2005). As mentioned in the Introduction, the aim of this monograph is not to
treat the theory thoroughly. Rather, it aims to use basic concepts to introduce
readers to a new field and new problems, so that their expertise can be applied
to solve ocean sensing issues, or at least give them a better understanding of
the key issues.
It is understood that among the three basic properties of light, intensity is
by far the most widely recognized form. Therefore, we begin with this
quantity, and examine its dependence on colors (wavelengths) when needed.
Lastly, we briefly discuss polarization.

2.3.1 Basic definitions

We are all familiar with individual photon energy as

q = hν,  (2.16)

where h = 6.626 × 10⁻³⁴ J s is Planck's constant and ν is the frequency of light.
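As a quick check of Eq. (2.16), using ν = c/λ (speed of light over wavelength), a green 532-nm photon carries roughly 3.7 × 10⁻¹⁹ J:

```python
H = 6.626e-34   # Planck's constant, J s
C = 2.998e8     # speed of light in vacuum, m/s

def photon_energy(wavelength_m):
    """q = h*nu = h*c/lambda, in joules."""
    return H * C / wavelength_m

q_green = photon_energy(532e-9)  # ~3.7e-19 J
```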
The intensity of light, or radiant intensity I (W sr 1), is defined by radiant
energy, Q (J), or radiant power (flux, W) F, in the following manner:

I = dF/dΩ = dQ/(dt dΩ).  (2.17)
The more commonly used quantity in ocean remote sensing, radiance L
(W m⁻² sr⁻¹ nm⁻¹), is the amount of photonic energy projected onto a surface
area dA, from a solid angle dΩ, such that

L(θ, φ, λ) = dQ/(dt dA dΩ dλ).  (2.18)
Notice that radiance is a function of incoming direction and wavelength
(Fig. 2.12). If the direction of incident radiance is not of concern, or

Figure 2.12 Illustrated definition of radiance.

in other words, if we consider all of the inputs from various directions

on a surface area, the total light intensity is referred to as irradiance,
E (W m⁻² nm⁻¹):

E(λ) = dQ/(dt dA dλ).  (2.19)
Notice that irradiance is typically defined along one general direction, either
upward or downward, denoted by Ed (depicted in Fig. 2.13) or Eu, although
scalar irradiance (Eo) is also widely used. E and L are related by a simple
integration across all directions:

Figure 2.13 Illustrated definition of irradiance. [Reproduced courtesy of Mobley et al. (2012).]

E(λ) = ∫_Ω L(θ, φ, λ) cos θ dΩ,  (2.20)

Ed(λ) = ∫_0^{2π} ∫_0^{π/2} L(θ, φ, λ) cos θ sin θ dθ dφ,  (2.21)

Eu(λ) = ∫_0^{2π} ∫_{π/2}^{π} L(θ, φ, λ) |cos θ| sin θ dθ dφ,  (2.22)

E₀(λ) = ∫_0^{2π} ∫_0^{π} L(θ, φ, λ) sin θ dθ dφ.  (2.23)
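The integrals of Eqs. (2.20)–(2.23) can be verified numerically. For an isotropic radiance distribution L = 1, they evaluate analytically to Ed = Eu = π and E₀ = 4π, which the midpoint-quadrature sketch below reproduces:

```python
import math

def integrate_irradiances(L, n=200):
    """Evaluate Ed, Eu, and E0 (Eqs. 2.21-2.23) for a radiance
    distribution L(theta, phi) by midpoint quadrature over the sphere."""
    ed = eu = e0 = 0.0
    dth, dph = math.pi / n, 2.0 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        w = math.sin(th) * dth * dph           # solid-angle weight
        for j in range(n):
            ph = (j + 0.5) * dph
            val = L(th, ph) * w
            e0 += val
            if th < math.pi / 2:
                ed += val * math.cos(th)       # downwelling hemisphere
            else:
                eu += val * abs(math.cos(th))  # upwelling hemisphere
    return ed, eu, e0

ed, eu, e0 = integrate_irradiances(lambda th, ph: 1.0)
print(ed, eu, e0)  # approximately pi, pi, 4*pi
```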

Irradiance reflectance R denotes the ratio between the amount of
irradiance traveling upward and the amount traveling downward:

R = Eu/Ed.  (2.24)
This quantity is important when examining the level of signal a remote sensor from
an airborne or spaceborne platform receives. More practically used in ocean color
remote sensing, however, is remote sensing reflectance Rrs (sr 1) because a remote
sensor typically only captures a small amount of upward traveling radiance in a
predefined solid angle (i.e., the aperture of the system):
Rrs = Lu/Ed.  (2.25)
For convenience, the notation of wavelength dependency is omitted (this is
always the case unless otherwise specified), until spectral information becomes
relevant again.
Reflectance is an important quantity in ocean sensing applications for
quantifying the characteristics of water bodies and bottom types, despite the
fact that it is an AOP, which is dependent on lighting conditions. By
applying remote sensors from spaceborne and airborne platforms, water types
can quickly be classified by the reflectance value as a function of wavelength.
However, sometimes the relative angle of the sensor to the radiance
distribution affects the reflectance observed. When this occurs, a differential
quantity is needed to fully describe such changes. This is termed the
bidirectional reflectance distribution function (BRDF, Fig. 2.14):

r(θi, θo) = dLu(θi; θo) / dEd(θi).  (2.26)

Figure 2.14 Illustrated definition of BRDF. Notice the incoming (i ) and outgoing (o)
directions. (Courtesy of NIH.)

2.3.2 Snell's law

As light propagates from one medium to another, say from air to sea, the
light's speed changes, as shown in Eq. (2.10). Accordingly, the angle of
incidence (θ₁) and the transmission angle (θ₂) after refraction are related by
Snell's law (Fig. 2.15):

n₁ sin θ₁ = n₂ sin θ₂.  (2.27)
It is interesting to notice that if n1 is the IOR of air (~ 1) and n2 is that of
water (~ 1.33), then n1 < n2. It is understood that total internal reflection can
happen for light rays traveling from water to air when the incident angle is
larger than a particular critical angle. Following Snell's law, there exists
an angle such that total internal reflection can be achieved. If θ₁ = 90 deg in
Eq. (2.27), then

θ₂ = arcsin(1/1.33) = 48.8 deg.  (2.28)
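A minimal numerical check of Eq. (2.28):

```python
import math

def critical_angle_deg(n_water=1.33, n_air=1.0):
    """Critical angle for total internal reflection at the sea surface,
    measured from the vertical inside the water: arcsin(n_air / n_water)."""
    return math.degrees(math.asin(n_air / n_water))

print(critical_angle_deg())  # ~48.8 deg, as in Eq. (2.28)
```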

Figure 2.15 Snell's law, relating the refraction angle before and after transmission at the
interface. (Adapted from Wikipedia.)

Figure 2.16 Total internal reflection can be observed at incident angles larger than the
critical angle (in this case, 48.8 deg from sea to air). (Courtesy of

Total reflection can be easily observed underwater, as is well represented

by the photograph shown in Fig. 2.16. It is interesting to notice that the
surface wave slope helps to determine the regions of total internal reflection,
which can be (and have been) used to infer ocean surface wave slopes.
Another interesting transmission law at the interface is Brewster's law,
which describes an angle at which maximum polarization occurs, when
the reflected light ray is 90 deg to the refracted ray, as illustrated in Fig. 2.17.
Following Snell's law, one can easily see that Brewster's angle for an
unpolarized ray is

θB = arctan(n₂/n₁),  (2.29)

which is 53 deg for the air–sea interface.
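Equation (2.29) is just as quick to verify:

```python
import math

def brewster_angle_deg(n1=1.0, n2=1.33):
    """Brewster's angle, Eq. (2.29): theta_B = arctan(n2 / n1)."""
    return math.degrees(math.atan(n2 / n1))

print(brewster_angle_deg())  # ~53 deg for the air-sea interface
```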

Figure 2.17 Illustration of polarization at Brewster's angle. (Courtesy of Wikipedia.)


While it is true that physical optics principles provide better solutions
when computational cost is not a concern, solving Maxwell's
equations from the ground up is not always the best approach. At times, it
is beneficial to apply geometrical optics instead of detailed scattering
calculations.
calculations. This is especially true in computer graphics and image
rendering, when ray tracing is widely used. An image can be generated by
tracing the path of light through pixels in an image plane while simulating
light interactions with virtual objects along the path, using Snell's law. For
light traveling in time-varying density microstructures such as those found in
atmospheric turbulence and ocean mixing layers, it is more convenient to
apply Snell's law to simulate the phase screen, rather than solve for the basic
EM wave equation, even in the simplified Helmholtz and Kirchhoff form
(Born and Wolf, 2005).
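The phase-screen idea can be sketched with a toy stratified model: apply Snell's law layer by layer using the invariant n sin θ. The index values below are made up to mimic weak microstructure, not measured data:

```python
import math

def trace_through_layers(theta0_deg, indices):
    """Propagate a ray through horizontally stratified layers by applying
    Snell's law (n * sin(theta) = const) at each interface. Returns the
    ray angle from the vertical (deg) in every layer."""
    invariant = indices[0] * math.sin(math.radians(theta0_deg))
    angles = [theta0_deg]
    for n in indices[1:]:
        s = invariant / n
        if s > 1.0:  # total internal reflection at this interface
            raise ValueError("ray is totally reflected; cannot enter layer")
        angles.append(math.degrees(math.asin(s)))
    return angles

# Weak, made-up index perturbations bend the ray slightly, and the
# original angle is restored once the original index is recovered:
angles = trace_through_layers(30.0, [1.3330, 1.3338, 1.3345, 1.3330])
print(angles)
```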

2.3.3 Radiative transfer equation

If the amount of light at certain depths of a water body is considered, there is
clearly a directionality to the radiance distribution. A single parameter can be
used to describe such distribution. Using the integrated irradiance values, the
average cosine m can be defined as (Mobley, 1994)
Lu, wcos udV
Ed Eu
m VZ : 2:30
Lu, wdV

Another quantity often used to describe in-water light transmission

or the availability of light at a certain depth is the diffuse attenuation coefficient
K (m⁻¹), briefly mentioned earlier. Its derivation is similar to that of beam
attenuation, if the total available irradiance is considered in place of the light
beam itself. Thus, the downwelling light diffuse attenuation at depth z is

Kd(z) = −[1/Ed(z)] dEd(z)/dz.  (2.31)
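In practice, Kd is estimated from a measured Ed(z) profile; under the homogeneous-layer assumption Ed(z) = Ed(0) exp(−Kd z), a least-squares fit of ln Ed versus depth recovers it. The profile below is synthetic, with Kd = 0.12 m⁻¹ chosen purely for illustration:

```python
import math

def estimate_kd(depths_m, ed_values):
    """Kd of Eq. (2.31) as the negative least-squares slope of
    ln(Ed) versus depth, assuming an optically homogeneous layer."""
    n = len(depths_m)
    y = [math.log(e) for e in ed_values]
    xm = sum(depths_m) / n
    ym = sum(y) / n
    num = sum((x - xm) * (v - ym) for x, v in zip(depths_m, y))
    den = sum((x - xm) ** 2 for x in depths_m)
    return -num / den

z = [0.0, 5.0, 10.0, 15.0, 20.0]
ed_profile = [100.0 * math.exp(-0.12 * zi) for zi in z]  # synthetic profile
print(estimate_kd(z, ed_profile))  # recovers ~0.12 1/m
```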
A similar quantity for radiance diffuse attenuation can be defined as well,
which interested readers can explore on their own. While similarities in
derivation between the beam attenuation coefficient and diffuse attenuation
coefficient have been suggested, their key difference should be emphasized. It
is apparent that Kd is dependent on the ambient radiance distribution and
scattering characteristics of the medium, thus it is an AOP. Beam attenuation
is not a function of the ambient light condition or radiance distribution, thus it
is an inherent optical property of the water. It is worth mentioning that K is an
important parameter in active remote sensing applications.

Now all of the pieces are in place to examine the radiative transfer equation:

cos θ dL(r, θ, φ, λ)/dr = −c(r, λ) L(r, θ, φ, λ)
    + ∫_Ω L(r, θ′, φ′, λ) β(θ′, φ′; θ, φ) dΩ′ + S(r, θ, φ, λ),  (2.32)

where β(θ′, φ′; θ, φ) denotes the scattering phase function from (θ′, φ′) to
(θ, φ). The general form of this equation and its derivation to the current
form can be found in other literature (Mobley, 1994) (see Fig. 2.18). It is
form can be found in other literature (Mobley, 1994) (see Fig. 2.18). It is
important to note that when assuming horizontal homogeneity, dr can be
replaced by dz/μ, where μ = cos θ. Also, this form is time independent; when
temporal variation is of concern, such as in turbulent microstructures
involving the mixing and dissipation of kinetic energy and heat, it can be
viewed as a time-averaged case. This is similar to the long-exposure
approaches used in astronomy (Fried, 1965; Fried, 1966; Goodman, 1985;
Hou, 2009), which are not discussed in detail here. Source component S is
discussed in more detail in the following section.
It can be shown (Mobley, 1994; Gershun, 1936) that a simplified version
of the above energy conservation relationship can be expressed by Gershun's
law (Gershun, 1936), assuming elastic scattering and a lack of internal
sources:

d(Ed − Eu)/dz = −aE₀.  (2.33)
It is easy to see the physics behind the above relationship: the net
photon loss is the result of absorption. Consequently, Gershun's law can be
used to directly measure the absorption coefficient.
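Gershun's law lends itself to a direct numerical estimate of a from profiles of net irradiance and scalar irradiance; the numbers below are invented for illustration only:

```python
def absorption_from_gershun(d_net_irradiance_dz, e0):
    """Gershun's law, Eq. (2.33): a = -(1/E0) * d(Ed - Eu)/dz,
    valid for elastic scattering with no internal sources."""
    return -d_net_irradiance_dz / e0

# Net irradiance decreasing by 0.9 W m^-2 per meter, with scalar
# irradiance E0 = 15 W m^-2, gives a = 0.06 1/m:
a = absorption_from_gershun(-0.9, 15.0)
print(a)
```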

Figure 2.18 Elements of radiative transfer equations. [Adapted from Mobley et al. (2012).]

2.4 Light Generation: Solar Radiation, Fluorescence,

Bioluminescence, and Raman and Brillouin Scattering
In the previous sections, we examined the fate of existing photons during the
transmission process, whether they were absorbed, scattered, or refracted at
the interface, along with the quantities describing these processes. In this
section, the main focus is to examine the light field of a water column from the
origins of the photons. We start with the radiation from above, the solar input
through the atmosphere, followed by in-water transformation of energy in the
forms of fluorescence and bioluminescence, and conclude with two other types
of inelastic scattering, Raman and Brillouin scattering.

2.4.1 Solar radiation

As we know, the energy to sustain daily life on the earth is mostly from the
sun. It is intuitively straightforward to see the necessity in examining the solar
radiation spectrum. The apparent color of the sky and the ocean is directly
related to the incoming light, modified by our atmosphere and ocean.
Atomic radiative processes within the sun generate a wide range of
frequencies, or wavelengths, of EM energy. Only a small portion of these are
visible to human eyes, or detectable by human skin in terms of heat, or are
usable by plants and animals on Earth. Those familiar with blackbody
radiation know that solar radiation can be expressed by simply applying the
surface temperature (5250 °C) to Planck's law (Eq. 7.1), which is shown by
the smooth gray line in Fig. 2.19. The actual measurement of the radiance
differs slightly from perfect blackbody radiance due to secondary terms, which
are not important in this discussion. What is interesting to notice is the
radiance at sea level. Atmospheric absorption modifies the incoming light,
producing distinct lines at fixed positions (specific wavelengths). This
provides a method to check instrumentation errors in measurements, and to

Figure 2.19 Solar radiation spectra near visible bands. (Courtesy of NTNU.)

Figure 2.20 (a) The EM spectrum and (b) fraction of the earths radiation transmitted to
space. The amount of radiation transmitted is reduced because of the absorption of radiation
by different atmospheric gases. (Courtesy of the COMET Website at
of the University Corporation for Atmospheric Research (UCAR), sponsored in part through
cooperative agreement(s) with the National Oceanic and Atmospheric Administration
(NOAA), U.S. Department of Commerce (DOC). © 1997–2013 University Corporation for
Atmospheric Research. All Rights Reserved.)

infer the concentration of certain gases in the atmosphere that can be used in
the atmospheric correction process discussed in later chapters.
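Planck's law itself is straightforward to evaluate. The sketch below uses T ≈ 5523 K (5250 °C) and Wien's displacement law to confirm that the solar spectral peak falls near 525 nm (green), consistent with Fig. 2.19:

```python
import math

H = 6.626e-34    # Planck's constant, J s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann's constant, J/K

def planck_radiance(wavelength_m, t_kelvin):
    """Blackbody spectral radiance, W m^-2 sr^-1 m^-1."""
    lam = wavelength_m
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * t_kelvin))

T = 5250.0 + 273.15           # solar surface temperature, K
peak_nm = 2.898e-3 / T * 1e9  # Wien's law: lambda_max = b / T
print(peak_nm)                # ~525 nm, in the visible
```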
A wider-wavelength spectrum can be seen in Fig. 2.20. This is an
important figure to keep in mind, especially Fig. 2.20(a), where the wavelength
and frequency of corresponding light are paired for easier reference.
Researchers from different areas of expertise tend to use terms most closely
related to their background. This chart can help readers understand what
terahertz waves indicate in terms of wavelength, as wavelength is often an
indication of a system's resolution limit (to within a small factor, to be precise,
but the order of magnitude is sufficient here). This topic is
revisited during later discussions on microwave sensing in the ocean.
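Converting between frequency and wavelength is a one-liner; 1 THz corresponds to about 0.3 mm, and a 10-GHz microwave to about 3 cm:

```python
C = 2.998e8  # speed of light, m/s

def wavelength_m(frequency_hz):
    """lambda = c / f."""
    return C / frequency_hz

print(wavelength_m(1e12))  # 1 THz  -> ~3e-4 m (0.3 mm)
print(wavelength_m(1e10))  # 10 GHz -> ~3e-2 m (3 cm)
```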

2.4.2 Fluorescence
Fluorescence is the emission of light by an absorbing body at a wavelength
different from the illuminating radiation. Most of us are familiar with
fluorescence involving UV light illumination, such as those used in laser tag

games at kids' birthday parties. In most cases, the fluorescent light (emission
wavelength) has a longer wavelength compared to the excitation wavelength.
This is referred to as Stokes shift (Lakowicz, 1999). However, the reverse can
also be true, when multiple photons are absorbed to effectively re-emit a
higher-energy photon at a shorter wavelength. These processes can be best
explained by the Jablonski diagram, shown in Fig. 2.21.
In aquatic environments, many complex organic compounds fluoresce.
These include various pigments inside phytoplankton, compounds found in
yellow substance (CDOM), and hydrocarbons. The latter and certain
pollutants often show strong fluorescence when excited by UV light at 300- to
400-nm wavelength. This has been widely used in environmental monitoring
applications, including those for oil spill detection and tracking (Li et al., 2004;
Bello, Smirnov, and Toomey, 2012).
One of the most studied fluorescence responses in ocean sensing is that of
chlorophyll. Light energy taken in by the phytoplankton cells excites the electrons
in chlorophyll molecules. Energy in photosystem II (Ferreira et al., 2004) can be
converted into chemical energy to drive photosynthesis. Alternatively, the
absorbed energy can be emitted as heat or as chlorophyll fluorescence. These
processes all compete for the same photons, resulting in varied fluorescence
yields. Despite such variations, chlorophyll fluorescence has been widely used as a
tool to quantify the level of chlorophyll concentration in the natural environment.
Research vessels use flow-through systems for fast, underway study of
chlorophyll distribution. Calibration is performed against a standard substance,
hence the unit of parts per billion quinine sulphate (ppb QS). An interesting
property of fluorescence by a pure substance is the fixed emission wavelength,

Figure 2.21 Jablonski diagram to show fluorescence states after an electron absorbs a
high-energy photon to reach a higher energy state. (Courtesy of Wikipedia.)

regardless of the excitation wavelength. Chlorophyll-a always fluoresces at 685

nm, whether it is excited at the UV, blue, green, or red bands. Understandably,
the quantum yield is a function of the excitation wavelength, since the photons
must be absorbed before they can be re-emitted, and the absorption is wavelength
dependent, as described earlier.

2.4.3 Bioluminescence
Self-illuminating organisms in the sea have been a central attraction to the general
public, evidenced by numerous photographs and videos shown on television and
in print, as well as in online media. For those who are observant and have the
opportunity to stride along the beach at night, away from bright city lights,
sparkling lights can frequently be seen in the water. This sparkle is
bioluminescence created by organisms in a wide range of sizes, from bacteria to fish
(Haddock, Moline, and Case, 2010) and colors (Widder, Latz, and Case, 1983).
Bioluminescence can be found from the surface to the deepest seas on the planet,
from the equator to the poles. The fact that deep dwellers are mostly equipped
with eyes, even though sunlight never reaches the depths of their roaming range,
is an exciting indication of life's dependence on light. It is also a reminder of the
wide distribution of bioluminescence, as fish can detect very low light intensities,
down to 10⁻¹² W m⁻² (Denton and Warren, 1957). Various hypotheses exist as to the ecological
role of bioluminescence, ranging from defensive measures (startling, misdirecting,
distractive decoy, sacrificial tag, warning, etc.) to offensive methods (stun, lure,
illumination, etc.) to reproduction, communication, or schooling (Mobley, 1994;
Haddock, Moline, and Case, 2010).
An examination of 70 marine species by Widder, Latz, and Case (1983)
that ranged from bacteria to fish, revealed a wide spectral range of
bioluminescence, shown in Fig. 2.22. One interesting fact observed from

Figure 2.22 Bioluminescence response from three different marine organisms: (line a) the
arthropod Scina cf. rattrayi, λmax = 439 nm; (line b) the dinoflagellate Pyrocystis noctiluca,
λmax = 472 nm; and (line c) the bacterium Vibrio fischeri Y-1 strain, λmax = 540 nm.
[Adapted from Fig. 4 in Widder, Latz, and Case (1983) with permission from Marine
Biological Laboratory, Woods Hole, Massachusetts.]

these response curves is that the centers of these wavelength bands coincide
with the wavelengths that are most transparent in the sea (Mobley, 1994).
Perhaps less exciting than the flashy bioluminescence photographs from
the deep sea (but of equal interest in this author's opinion) are the large-scale
bioluminescent phenomena known as milky seas, which have been detected
from space (an example is shown in Fig. 2.23). They are the result of high
concentrations of luminous bacteria (Herring and Watson, 1993). While
each bacterial cell emits a low-intensity signal, the cumulative effect of the
continuous light production can be visibly distinguished from the blooms of
dinoflagellates. It has been postulated that the onset of such large-scale light-
emitting events near the surface could be facilitated by substrates produced by
a previous or ongoing phytoplankton bloom (Miller et al., 2005). The area
covered in this first study using satellite remote sensing from space was
approximately 16,000 km². On a smaller scale are the illuminating wakes
caused by surface ships or submarines that can be observed from airborne and
spaceborne platforms. The latter has important implications in antisubmarine
warfare (ASW) (Strand et al., 1980).

2.4.4 Other inelastic scattering: Raman and Brillouin

Raman scattering, or the Raman effect, is the inelastic scattering of a photon.
Like fluorescence, Raman scattering occurs when a molecule absorbs an
incident photon and shortly thereafter emits a photon of longer wavelength.
This occurs when part of the energy of the incoming photon (and thus its
frequency) is transferred to the molecule. This type of inelastic scattering is a rare

Figure 2.23 A composite rendering of large-scale bioluminescence from space, known as

a milky sea, shown in the lower right corner, covering an area of 16,000 km². [Reproduced
with permission from Miller (2005).]

occurrence [less than 10% (Bartlett et al., 1998)] compared to elastic scattering
such as Rayleigh scattering, where the energy level remains the same. Raman
scattering helps to explain anomalies observed during field measurements
associated with yellowish-green to red wavelengths [beyond 550 nm (Mobley,
1994)], where the computed K values are often less than those of pure
seawater. If there is an internal light source contributing to these observed
wavelengths, then it makes sense that the absorption coefficients derived using
Gershun's law would be less than the true values. Similarly, if
there is an internal source, the rate of irradiance decrease will be less than
source-free cases, thus the derived values of K will be less (Sugihara, Kishino,
and Okami, 1984; Mobley, 1994).
It is important to mention the difference between Raman scattering and
fluorescence. For the latter, an incident photon is absorbed and the system is
transferred to an excited state, from which it can go to various lower states
after a certain resonance lifetime. The result of both processes is in essence the
same. A photon with a frequency different from that of the incident photon is
produced, and the molecule is brought to a higher or lower energy level. But
the major difference is that the Raman effect can take place for any frequency
of incident light. In contrast to the fluorescence effect, Raman scattering is
therefore not a resonant effect. In practice, this means that a fluorescence peak
is always at a specific frequency, regardless of the excitation wavelengths,
whereas a Raman scattering peak maintains a constant separation from the
excitation frequency. This can be seen clearly in Fig. 2.24, where comparisons to
fluorescence excitation and emissions of yellow substances and chlorophyll-a are
also shown.
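The contrast is easy to see numerically. Water's dominant Raman band sits at a roughly fixed wavenumber shift of about 3400 cm⁻¹ from the excitation (an approximate literature value), whereas chlorophyll-a fluorescence stays pinned near 685 nm regardless of excitation:

```python
def raman_wavelength_nm(excitation_nm, shift_cm1=3400.0):
    """Emission wavelength for a fixed Raman wavenumber shift:
    1/lambda_R = 1/lambda_0 - shift (wavenumbers in cm^-1)."""
    return 1e7 / (1e7 / excitation_nm - shift_cm1)

for ex in (450.0, 500.0, 550.0):
    print(f"excite {ex:.0f} nm: Raman band near {raman_wavelength_nm(ex):.0f} nm, "
          "chlorophyll fluorescence stays near 685 nm")
```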
Brillouin scattering, named after the French physicist Léon Brillouin (1889–
1969), occurs when light in a medium interacts with time-dependent optical
density variations, and hence changes its energy (frequency) and path. The
density variations can be due to acoustic modes such as phonons, magnetic
modes such as magnons, or temperature gradients. As described in classical
physics, when the medium is compressed, its index of refraction changes, and a
fraction of the traveling light wave, interacting with the periodic refractive
index variations, is deflected, as in a 3D diffraction grating. Since the sound
wave is traveling as well, light is also subjected to a Doppler shift, so its
frequency changes.
From a quantum perspective, Brillouin scattering is an interaction
between an EM wave and a density wave (photon-phonon scattering),
magnetic spin wave (photon-magnon scattering), or other low-frequency
quasi-particle. The scattering is inelastic: the photon can lose energy to create
a quasi-particle (Stokes process), or gain energy by destroying one (anti-Stokes
process). This shift in photon frequency, known as the Brillouin shift, is equal
to the energy of the interacting phonon or magnon, and thus Brillouin
scattering can be used to measure phonon or magnon energies. The Brillouin

Figure 2.24 Raman scattering and fluorescence of North Sea waters at three different
wavelengths, λ₀. λR indicates Raman, λY indicates CDOM (yellow substance) fluorescence,
and λc indicates chlorophyll fluorescence. [Reproduced with permission from Mobley

shift is commonly measured by use of a Brillouin spectrometer, based on a

Fabry–Pérot interferometer. While Rayleigh scattering can also be considered
the result of fluctuation in the density, composition, and orientation of
molecules (and hence of the refractive index) in small volumes of matter
(particularly in gases or liquids), the difference is that Rayleigh scattering
considers only random and incoherent thermal fluctuations, in contrast with
the correlated, periodic fluctuations (phonons) of Brillouin scattering.
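The magnitude of the Brillouin shift in seawater can be estimated from the standard expression Δν = 2 n v_s sin(θ/2)/λ. For backscattered 532-nm light with n ≈ 1.33 and sound speed ≈ 1500 m/s (typical seawater values), the shift is about 7.5 GHz, which is why a high-resolution interferometer is needed to resolve it:

```python
import math

def brillouin_shift_hz(n, sound_speed_ms, wavelength_m, scatter_angle_deg=180.0):
    """Brillouin frequency shift: dnu = 2 * n * v_s * sin(theta/2) / lambda."""
    return (2.0 * n * sound_speed_ms
            * math.sin(math.radians(scatter_angle_deg) / 2.0) / wavelength_m)

shift = brillouin_shift_hz(1.33, 1500.0, 532e-9)
print(shift / 1e9)  # ~7.5 GHz for backscatter
```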
Raman scattering is another phenomenon involving inelastic scattering
processes of light with vibrational properties of matter. The detected
frequency shift range and type of information extracted from the sample,
however, are very different. Brillouin scattering denotes the scattering of
photons from low-frequency phonons, while for Raman scattering, photons
are scattered by interaction with vibrational and rotational transitions in
single molecules. Therefore, the two techniques provide very different
information about a sample: Raman spectroscopy is used to determine
chemical composition and molecular structure, while Brillouin scattering

measures properties on a larger scale, such as elastic behavior. Experimentally,
the frequency shifts in Brillouin scattering are detected with an
interferometer, while a Raman setup can be based on either an interferometer
or a dispersive (grating) spectrometer.
There are some exciting possibilities when applying Brillouin scattering in
aquatic environments to measure sound speed and possibly temperature and
salinity structures in the ocean remotely, such as with active lidar systems
(Fry, 2012). The details of such an approach are discussed in Chapter 6, when
active remote sensing methods are presented.

2.5 Summary
In this chapter, the basic concepts of ocean optical properties and their
definitions are covered, along with equations relating to these concepts. Light
transmission across the air–sea interface, within the water column and off of
the surface and bottom, are briefly discussed. The scattering and absorption of
photons are key to the understanding of ocean sensing in subsea environments,
especially those related to diver visibility theories (Chapter 3) and active
imaging system design and performance evaluations (Chapter 4). They are
also important in understanding ocean color remote sensing applications
(Chapter 5), as well as active lidar remote sensing (Chapter 6).

References

Anderson, T. R. (1993). A spectrally averaged model of light penetration and
photosynthesis. Limnol. Oceanogr. 38(7), 1403–1419.
Bartlett, J. S. et al. (1998). Raman scattering by pure water and seawater.
Appl. Optics 37(15), 3324–3332.
Bello, J., Smirnov, A. G., and Toomey, P. (2012). Development of a
fluorescence polarization submersible instrument for the detection of submerged
heavy oil spills. Proc. SPIE 8372, 83720B. doi: 10.1117/12.919509.
Bogucki, J. A. et al. (2004). Light scattering on oceanic turbulence. Appl.
Optics 43, 5662–5668.
Bohren, C. J. and Huffman, D. R. (1983). Absorption and Scattering of Light
by Small Particles. New York: John Wiley.
Born, M. and Wolf, E. (2005). Principles of Optics, Cambridge: Cambridge
University Press.
Bricaud, A. M. et al. (1995). Variability in the chlorophyll-specific absorption
coefficients of natural phytoplankton: Analysis and parameterization.
J. Geophys. Res. 100, 13321–13332.
Carder, K. L. et al. (1991). Reflectance model for quantifying chlorophyll a
in the presence of productivity degradation products. J. Geophys. Res. 96,

Chami, M. et al. (2006). Variability of the relationship between the

particulate backscattering coefficient and the volume scattering function
measured at fixed angles. J. Geophys. Res. 111(C05013), 10.
Denton, E. J. and Warren, F. J. (1957). The photosensitive pigments in the
retinae of deep sea fish. J. Mar. Biol. Assoc. UK 36, 651–662.
Ferreira, K. N. et al. (2004). Architecture of the photosynthetic oxygen-
evolving center. Science 303(5665), 1831–1838.
Fields, A. S. (1972). Optical phase and intensity fluctuations in a refractive
index microstructure: a mathematical analysis. Annapolis: Naval Ship
Research and Development Center. R&D Report 3577.
Fournier, G. R. and Forand, J. L. (1994). Analytic phase function for ocean
water. Proc. SPIE 2258, 194. doi: 10.1117/12.190063.
Fowles, G. R. (1975). Introduction to Modern Optics. New York: Holt,
Rinehart, and Winston.
Fried, D. L. (1965). Statistics of a geometric representation of wavefront
distortion. J. Opt. Soc. Am. 55, 1427–1435.
Fried, D. L. (1966). Optical resolution through a randomly inhomogeneous
medium for very long and very short exposures. J. Opt. Soc. Am. 56,
Fry, E. (2012). Remote sensing of sound speed in the ocean via Brillouin
scattering. Proc. SPIE 8372, 837207. doi: 10.1117/12.923920.
Fujiki, T. and Taguchi, S. (2002). Variability in chlorophyll a specific
absorption coefficient in marine phytoplankton as a function of cell size and
irradiance. J. Plankton Res. 24(9), 859–874.
Goodman, J. W. (1985). Statistical Optics. New York: John Wiley & Sons.
Gordon, H. R. and Morel, A. (1983). Remote assessment of ocean color for
interpretation of satellite visible imagery: A review. Lecture Notes on Coastal
and Estuarine Studies. New York: Springer Verlag. 4, 114.
Gershun, A. A. (1936). Fundamental ideas of the theory of a light field
(vector methods of photometric calculations), in Russian. Izvestiya Akad.
Nauk SSSR, 417–430.
Haddock, S. H. D., Moline, M. A., and Case, J. F. (2010). Bioluminescence
in the sea. Ann. Rev. Mar. Sci. pp. 443–493.
Henyey, L. C. and Greenstein, J. L. (1941). Diffuse radiation in the galaxy.
Astrophys. J. 93, 70–83.
Herring, P. J. and Watson, M. (1993). Milky seas: a bioluminescent puzzle.
Mar. Obs. 63, 22–30.
Hou, W. (2009). A simple underwater imaging model. Opt. Lett. 34(17),

Hou, W. et al. (2010). Glider optical measurements and BUFR format

for data QC and storage. Proc. SPIE 7678, 76780F. doi: 10.1117/
Hou, W., Lee, Z., and Weideman, A. D. (2007). Why does the Secchi disk
disappear? An imaging perspective. Opt. Express 15(6), 2791–2802.
IOCCG (2000). Remote sensing of ocean colour in coastal, and other
optically-complex, waters. IOCCG Report. Group 3. Dartmouth.
Jerlov, N. G. (1976). Marine Optics. New York: Elsevier.
Kiefer, D. A. and Mitchell, B. G. (1983). A simple, steady state description of
phytoplankton growth based on absorption cross section and quantum
efficiency. Limnol. Oceanogr. 28, 770–776.
Kirk, J. T. O. (1981). A Monte Carlo study of the nature of the underwater
light field in, and the relationships between optical properties of, turbid yellow
waters. Aust. J. Mar. Fresh. Res. 32, 517–532.
Lakowicz, J. R. (1999). Principles of Fluorescence Spectroscopy. New York:
Kluwer Academic/Plenum Publishers.
Li, J. et al. (2004). Matching fluorescence spectra of oil spills with spectra
from suspect sources. Anal. Chim. Acta 514, 51–56.
Miller, S. D. et al. (2005). Detection of a bioluminescent milky sea from
space. Proc. Natl. Acad. Sci. USA 102, 14181–14184.
Mobley, C. D. (1994). Light and Water: Radiative Transfer in Natural Waters.
New York: Academic Press.
Mobley, C. D. (2001). Radiative Transfer in the Ocean, in Encyclopedia of
Ocean Sciences, J. H. Steele, Ed. New York: Academic Press. 2321–2330.
Mobley, C. D. et al. (2012). Ocean Optics Web Book. [e-book] Available at
<> [Accessed 4 Mar. 2012].
Morel, A. (1988). Optical modeling of the upper ocean in relation to its
biogenous matter content (Case I waters). J. Geophys. Res. 93, 10749–10768.
Morel, A. (1991). Light and marine photosynthesis: a spectral model with
geochemical and climatological implications. Prog. Oceanogr. 26, 263–306.
Morel, A. and Prieur, L. (1977). Analysis of variations in ocean color.
Limnol. Oceanogr. 22(4), 709–722.
Mueller, J. L. et al. (2003). Ocean Optics Protocols For Satellite Ocean Color
Sensor Validation, Revision 4. Greenbelt: NASA Goddard Space Flight Center.
Petzold, T. J. (1972). Volume scattering functions for selected natural
waters. Scripps Inst. Oceanogr. SIO Ref. 72-78.
Platt, T. and Sathyendranath, S. (1988). Oceanic primary production:
estimation by remote sensing at local and regional scales. Science 241,

Strand, J. A. et al. (1980). The Antisubmarine Warfare (ASW) Potential of

Bioluminescence Imaging. Seattle: Naval Reserve Center.
Sugihara, S., Kishino, M., and Okami, N. (1984). Contribution of Raman
scattering to upward irradiance in the sea. J. Oceanogr. Soc. Japan 40,
van de Hulst, H. C. (1981). Light Scattering by Small Particles. New York:
Wells, W. H. (1973). Theory of small angle scattering. AGARD Lec. Series
61, NATO.
Widder, E. A., Latz, M. I., and Case, J. F. (1983). Marine bioluminescence
spectra measured with an optical multichannel detection system. Bio. Bull.
165, 791810.
Chapter 3
Underwater Sensing:
Diver Visibility
3.1 Introduction
From the introductory chapter on oceanography, we learned that the volume
of the ocean is huge; so huge, in fact, that if you could shovel all of the land
mass into the ocean, it would not come close to filling it.
This chapter delves into the underwater world and examines what and
how far we can see underwater, also known as the diver visibility issue. This
issue is mostly associated with electro-optical (EO) systems. However, we will discuss acoustical
imaging topics related to visibility in the next chapter when active imaging
issues are presented. Other general underwater sampling and monitoring
issues are the topics of later chapters, when sampling techniques and
platforms are discussed. However, even then, the detailed methods involving
chemical and biological sensors are not covered, as this book is biased toward
optical sensing techniques.

3.2 Point Spread Functions and Modulation Transfer Functions

In general, scattering properties of the medium determine the outcome of
image transmission. In ocean and lake optics, scattering properties are
conveniently described and measured by the scattering coefficient (b), which
gives the probability per unit length of a photon being scattered away from its
original traveling direction by the medium molecules, constituents
within the medium [i.e., particles (Mobley, 1994)], and turbulence (Gilbert
and Honey, 1972; Bogucki et al., 2004). The scattering coefficient b is the
integral over all directions of the volume scattering function β (or, when
normalized, the phase function), which details such probabilities by the
relative directions of incoming and outgoing photons (Mobley, 1994).
These IOPs, although measured frequently
due to their important applications in ocean optics (especially in remote
sensing), have not been applied to underwater imaging issues until recently


(Hou, Lee, and Weidemann, 2007c; Hou et al., 2008), in part because they
inherently only reflect the effect of single scattering.
A more intuitive and direct measure in imaging is the point spread
function (PSF), which provides the system response to a point source, and
thus includes the effect of multiple scattering (Mobley, 1994). It is the ideal
parameter for studying image transmission, optical sounding (Katsev et al.,
1997), and retrieval of optical properties (Hou et al., 2007b). Simply put, an
image of an object is the combination of the original signal o(x, y), convolved
with the entire system PSF h(x, y), plus noise n(x, y):

z(x, y) = o(x, y) ∗ h(x, y) + n(x, y).  (3.1)

The system response includes those from the imaging system itself, as well as
the effects of the medium (water in this case). With known characteristics
of the imaging system and correct modeling of the medium, theoretically it is
possible to fully recover the original signal by reversion or deconvolution
(Gonzalez and Woods, 2002). Mathematically, the PSF is equivalent to the
beam spread function (BSF) (Mertens and Replogle, 1977), which can be
modeled and measured more easily. From Eq. (3.1), it can be seen that with
the knowledge of z and h, the original image o can be restored, especially
when noise can be reduced with improved system setup or optimization
(Zhang et al., 2008). Additionally, with knowledge of the PSF, the total
system output can easily be simulated. This is valuable for system design,
performance prediction, as well as underwater scene simulations, as it is
based on the accuracy of the physical model of the system and medium combined.
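As a numerical illustration of Eq. (3.1), the minimal sketch below (not from the text) blurs a point source with a hypothetical Gaussian PSF standing in for the combined system/medium response h(x, y), then adds noise:

```python
# Image formation per Eq. (3.1): z = o * h + n.  The Gaussian PSF is a
# hypothetical stand-in for a measured system/medium response h(x, y).
import numpy as np

n = 64
ax = np.arange(n) - n // 2
xx, yy = np.meshgrid(ax, ax)
h = np.exp(-(xx**2 + yy**2) / (2.0 * 2.0**2))   # Gaussian PSF, sigma = 2
h /= h.sum()                                     # normalize: conserve energy

o = np.zeros((n, n))
o[n // 2, n // 2] = 1.0                          # a point-source target

# Circular convolution via the FFT, plus additive noise n(x, y)
rng = np.random.default_rng(0)
z = np.real(np.fft.ifft2(np.fft.fft2(o) * np.fft.fft2(np.fft.ifftshift(h))))
z += rng.normal(0.0, 0.001, (n, n))
```

Recovering o from the degraded image z is exactly the deconvolution problem discussed in this section.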
Generally speaking, a 2D image of an object is basically the combination
of the original signal o(x, y), convolved with the imaging system response h(x, y),
integrated over the sensor space Ω:

z(x, y) = ∫∫_Ω o(x_i, y_i) h(x − x_i, y − y_i) dx_i dy_i,  (3.2)

z(x, y) = o(x, y) ∗ h(x, y),  (3.3)

where ∗ denotes a 2D convolution, and h(x, y) is the PSF. The system
response is made up of those from the imaging system and the effects of the
medium (water).
Mathematically, it is easier to manipulate this relationship in the frequency
domain, as the convolution operator becomes simple multiplication. Applying
Fourier transforms, the relationship becomes
Z(u, v) = O(u, v) H(u, v),  (3.4)

where u and v are spatial frequencies, and Z, O, and H are Fourier transforms
of z, o, and h, respectively, i.e.,
Z(u, v) = ∫∫_{−∞}^{+∞} z(x, y) e^{−j2π(ux+vy)} dx dy,

H(u, v) = ∫∫_{−∞}^{+∞} h(x, y) e^{−j2π(ux+vy)} dx dy,

O(u, v) = ∫∫_{−∞}^{+∞} o(x, y) e^{−j2π(ux+vy)} dx dy.  (3.5)

System response function H, also referred to as the optical transfer function

(OTF), is the Fourier transform of the PSF. The magnitude of the OTF is
the modulation transfer function (MTF). The MTF describes the contrast
response of a system at different spatial frequencies, and when phase information
is of little concern, it is a sufficient measure of the power transfer. By definition,
therefore, the MTF can be measured by the contrast of sinusoid or bar patterns of
corresponding spatial frequencies (Coltman, 1954; Barrett and Myers, 2004).
Notice that the MTF term H(u, v) is the total system response. Therefore,
if one views the complete path from the target to the retina, or to the
recording charge-coupled device (CCD) plane, the MTF is in many cases
the combined effect of multiple individual components. In the frequency domain, the
total MTF can be expressed as the direct product of each component, for instance, the
optical system itself and the medium (plus any other factors when applicable):

H(u, v) = H_system(u, v) H_medium(u, v).  (3.6)
The system response Hsystem(u, v) can be predetermined and calibrated to
remove any significant errors, and in most cases does not vary with imaging
conditions. The formulation in Eq. (3.6), which emphasizes the validity of the
separation of the system and the medium, is significant in this analysis.
Furthermore, special attention should be given to the band-limiting
characteristics imposed by H_system, such as a camera system's field of view, and the
Nyquist sampling frequency limits imposed by the CCD resolution (Barrett
and Myers, 2004). For a known object, by examining the characteristics
of the medium evident in Hmedium(u, v), one could theoretically predict the
exact outcome image within that environment by responses from all of
the individual spatial frequencies. Conversely, one can derive the
detailed information of the object from the outcome image via the inverse Fourier
transform. This is the goal of underwater mine detection and other applications
including target recognition and tracking.
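The restoration route implied by Eq. (3.4) — dividing out H in the frequency domain — can be sketched with a Wiener-style regularized division. The Gaussian PSF and square target below are hypothetical stand-ins, and the regularization constant k is an assumption of this sketch, not part of the text:

```python
# Frequency-domain restoration per Eq. (3.4): Z = O*H, so O ~ Z*conj(H)/(|H|^2 + k).
# The small constant k regularizes frequencies where H is nearly zero.
import numpy as np

def wiener_deconvolve(z, h, k=1e-6):
    """Estimate the original image o from z = o convolved with h (full-frame h)."""
    H = np.fft.fft2(np.fft.ifftshift(h))
    Z = np.fft.fft2(z)
    O = Z * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(O))

n = 64
ax = np.arange(n) - n // 2
xx, yy = np.meshgrid(ax, ax)
h = np.exp(-(xx**2 + yy**2) / 8.0)               # hypothetical medium PSF
h /= h.sum()

o = np.zeros((n, n))
o[20:44, 20:44] = 1.0                            # a square test target
z = np.real(np.fft.ifft2(np.fft.fft2(o) * np.fft.fft2(np.fft.ifftshift(h))))
o_hat = wiener_deconvolve(z, h)
```

In practice noise limits how small k can be made; the band-limiting characteristics of H_system noted above set the hard ceiling on what any such inversion can recover.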

3.3 Point Spread Functions in the Ocean

3.3.1 Historical forms
Although established almost 40 years ago, the most thorough empirical point
spread model to date is still that of Duntley (1971). Duntley reported extensive
laboratory measurements of BSFs of simulated ocean waters with remarkably
different optical lengths (0.5 to 21), and summarized his findings in a single
(albeit complex) empirical relationship:
BSF(θ) = H(θ)/P = 10^{A + Cθ^B} / (2π sin θ),  (3.7)

with the parameters A, B, C, D, and E given as empirical polynomial functions
of the optical length τ and the albedo parameter α; the full coefficient sets are
tabulated in Duntley (1971) and its errata.

Note that in these equations the optical length is defined as τ = cr (c being
the total beam attenuation coefficient), while the single scattering albedo is given as
ω₀ = b/c; α = 1/(1 − ω₀) is used to simplify the form. H is the spectral
irradiance and is normalized by the laser power P, which provides the BSF. To add
to the complexity, it should be realized that the regression results are in degrees,
rather than radians. It is of little surprise that the original publication had to
resort to errata to achieve the exact correct form, while another citation also
contained misinformation, likely due to printing errors (Voss, 1991).
A different approach was used by Dolin et al. (2006), where numerical
approximations were applied in his analytical formulation of the PSF. This
model can be simplified by assuming little backscattering compared to the
total scattering. Briefly, a modified BSF E(u, r) can be expressed via PSF
kernel G, in the order of direct beam, single, and multiple scattering terms as
Eu, r P0 q=r2 Guq , t b exp aef r, 3:8
duq tb
Guq , t b expt b 0:525 exp2:6uq0:7  t b
puq uq
b22 1
2  1 t b expt b   expb1 b2 uq 3  b2 uq 2 b3 , 3:9
6:857  1:5737t b 0:143t2b  6:027  10 3 t3b 1:069  10 4
 t 4b
b1 , 3:10
1  0:1869t b 1:97  10 2
 t2b  1:317  10 3
 t3b 4:31  10 5
 t 4b

0:469 7:41  10 2 t b 2:78  10 3 t 2b 9:6  10 5  t 3b

b2 , 3:11
1 9:16  10 2 t b 6:07  10 3  t 2b 8:33  10 4  t 3b

6:27 0:723t b 5:82  10 2  t 2b

b3 , 3:12
1 0:072t b 6:3  10 3  t 2b 9:4  10 4
Underwater Sensing: Diver Visibility 55

uq qu, t b br vt, 3:13

Z1 Z1
2p Guq , t b uq duq 1, duq uq duq 1, 3:14
0 0
" # 1=2 " #
< cos u > < cos u > 
q 1  4 1  , b b bb =b: 3:15
1 2 bb 1 2bb
Notice that a dimensionless scaling factor q is used and can be shown to
approximate the mean scattering angle (Dolin et al., 2006). P0 and d relate to
the direct beam initial power and can be omitted when normalizations are
applied to scattered photons. The simplified result is under the assumption
that both the absorption coefficient a and the scattering coefficient b far exceed
the backscattering coefficient b_b, such that b − 2b_b ≈ b and a + 2b_b ≈ a. This holds true in most
conditions applied, and introduces little variation when tested. This is not
surprising, as the focus is on small forward scattering angles. Once again, the
complexity of this model is evident and even exceeds that of Duntley's.
However, it does offer a better approximation, as is examined later.
Lastly, using field-collected data, Voss put forth an empirical PSF
expression (Voss, 1991), that fits three different types of oceanic waters
[Tongue of the Ocean (TOTO), Pacific, and Sargasso Sea]. The PSF covered a
range between 4 and 100 mrad and was given by

PSF(θ) = B_p θ^{−m},  (3.16)

where m is the slope of log(PSF) versus log(θ), and B_p is a constant. Here, m is not
a constant, but rather a function of IOP and t. This formula fit all data to less
than 15% error. For imaging restoration applications, especially considering
optimization in convolution (Zhang et al., 2008), this was an acceptable start.
The m values ranged from 0.4 to 2.0 as a function of τ (0 to 10). Other than
graphical results, there is no definitive relationship provided to allow
comparison of this method to other approaches. Nonetheless, this was a very
encouraging result, since such simple relationships can be rather beneficial
to imaging needs, especially when high frame rate per pixel calculations
are needed.

3.3.2 Simplified form

A simple form of the PSF is highly desirable, especially when routinely
measured optical properties can be used as input. Such a model was
introduced (Hou et al., 2007a) based on the analytical form by Gordon
(1975). For brevity, only key steps are outlined here. First, note that the PSF
can be approximated by examining the amount of nondisturbed light
variation within a cone of half angle θ at range r (Wells, 1973; Gordon,
1975); therefore,

PSF(θ, r) = (1/2πθ) ∂F(θ, r)/∂θ,  (3.17)

F(θ, r) = exp{−[c − 2π ∫₀^1 ∫₀^{θ/t} β(v) v dv dt] r},  (3.18)

PSF(θ) = e^{−τ} ∫₀^1 β(θ/t) (τ/t²) dt.  (3.19)

Direct integration of Eq. (3.19) can be carried out for a known analytical
volume scattering function. The parameter t is used for the convenience of
integration and can be interpreted as the portion of the scattering angle up
to u. There are many possible and defendable choices of b of natural water.
A simple formulation from Wells (1973) is applied here, using a phase
function in the following form:
bu , 3:20
2pu20 u2 3=2
where θ₀ relates to the mean scattering angle. Substituting Eq. (3.20) into Eq.
(3.19) for integration gives an inverse power relationship to the scattering angle θ.
With the help of numerical integrations, we arrive at the following empirical form:

PSF(θ) = B(θ₀) (b r e^{−τ}) / (2πθ^m) = B(θ₀) (ω₀ τ e^{−τ}) / (2πθ^m),  (3.21)

where B(θ₀) is a constant and does not affect imaging requirements, since
relative units of the PSF are used, and m = 1/ω₀ − 2τθ₀. The latter term is
necessary to include higher scattering orders. This is in line with numerical
calculations across different parameter ranges. Comparisons to empirical and
measured formations can now be made.
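A minimal numerical sketch of the simplified PSF of Eq. (3.21), with B(θ₀) set to 1 since only relative units matter; the parameter values below are illustrative, not measurements from the text:

```python
# The simplified PSF of Eq. (3.21), in relative units (B(theta0) = 1).
# The exponent m = 1/omega0 - 2*tau*theta0 follows the definition read
# from Eq. (3.21); all parameter values are illustrative.
import numpy as np

def simple_psf(theta, tau, omega0, theta0):
    """PSF(theta) = omega0 * tau * exp(-tau) / (2*pi*theta**m)."""
    m = 1.0 / omega0 - 2.0 * tau * theta0
    return omega0 * tau * np.exp(-tau) / (2.0 * np.pi * theta ** m)

theta = np.logspace(-3, -1, 50)          # scattering angle, 1 to 100 mrad
psf = simple_psf(theta, tau=3.0, omega0=0.8, theta0=0.05)
```

For these values m ≈ 0.95, comfortably inside the 0.4–2.0 range Voss reported for his empirical slope.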

3.4 MTF of Ocean Waters

For circularly symmetric response systems, such as the isotropic volume
scattering type found in seawater, the corresponding 2D transforms found in
Eq. (3.5) can be reduced to a 1D Hankel (Fourier–Bessel) integral as

H(ψ, r) = 2π ∫₀^∞ J₀(2πψθ) h(θ, r) θ dθ.  (3.22)

Wells (1973) applied small-angle approximations to Eq. (3.22) and derived a
robust underwater modulation transfer model, which is briefly outlined here.
By separating the exponential decay effect with distance due to the medium,
the MTF of the medium in Eq. (3.22) can be expressed as

H(ψ, R) = e^{−D(ψ)R},  (3.23)
where D(ψ) is the decay transfer function (DTF) and is independent of the
range of detection. This provides a method for comparing measurements at
different ranges for consistency.
By using a thin slab model with the small-angle scattering approximation,
the decay function can be written as the photopic beam attenuation c, less the
light scattered back into the acceptance cone, which is

D(ψ) = c − V(ψ),  (3.24)

V(ψ) = 2π ∫₀^∞ J₀(2πψθ) β(θ) θ dθ,  (3.25)

where β(θ) is the volume scattering function. The total scattering coefficient b is
obtained via

b = 2π ∫₀^π β(θ) sin θ dθ.  (3.26)

In an effort to derive a closed-form solution, Wells (1973) assumed a

scattering function with the following analytical form:

β(θ) = bθ₀ / [2π(θ₀² + θ²)^{3/2}],  (3.27)
where θ₀ is related to the mean square angle (MSA). When compared with
coastal measurements by Petzold (Mobley, 1994) (Fig. 2.6), this function
approximates the behavior of scattering in small angles reasonably well in
most cases (Fig. 3.1). Further discussion about these curves continues in the
next section. By combining Eqs. (3.24) through (3.27), the DTF of the
seawater can be expressed as

Dc c V c
b1 e 2pu0 c 3:28
c :
2pu0 c
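The closed-form DTF of Eq. (3.28), together with Eq. (3.23), can be evaluated directly. The sketch below uses illustrative water-property values, not measurements:

```python
# Wells' decay transfer function, Eq. (3.28), and the medium MTF of
# Eq. (3.23): H(psi, r) = exp(-D(psi) * r).  Values are illustrative only.
import numpy as np

def dtf(psi, c, omega0, theta0):
    """D(psi) = c - b*(1 - exp(-2*pi*theta0*psi))/(2*pi*theta0*psi), b = omega0*c."""
    b = omega0 * c
    x = 2.0 * np.pi * theta0 * psi
    return c - b * (1.0 - np.exp(-x)) / x

def medium_mtf(psi, r, c, omega0, theta0):
    return np.exp(-dtf(psi, c, omega0, theta0) * r)

psi = np.linspace(1.0, 500.0, 100)       # angular spatial frequency, cyc/rad
mtf = medium_mtf(psi, r=10.0, c=0.3, omega0=0.8, theta0=0.01)
```

Note the limits: at low frequencies D(ψ) tends to the absorption coefficient a = c − b (forward-scattered light stays within the acceptance cone), while at high frequencies D(ψ) → c.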

Figure 3.1 An example of PSF comparison under different θ₀ values. Notice that the
variation of θ₀ does not affect Duntley or measurement results from the multiangle volume
scattering meter (MVSM). Normalized to 100 mrad. [Reprinted from Hou et al. (2008).]

3.5 Impacts from Underwater Turbulence

3.5.1 Theoretical treatment: the simple underwater imaging model
When it comes to diver visibility, the most prominent issue is degradation of
image quality over distance due to signal attenuation. This presents a striking
contrast to our usual, seemingly unlimited visual ranges in air. The cause of
the degradation has been mostly attributed to dirt, or inorganic as well as
organic particles (i.e., micro-organisms and detritus), in the water. Most
research has focused on reducing the impact of particle scattering by means of
discriminating scattering photons involving polarization, range gating,
modulation, and by means of restoration via deconvolution (Gilbert and
Pernicka, 1967; Fournier et al., 1993; Chang et al., 2003; Mullen et al., 2004;
Hou, Lee, and Weidemann, 2007c; Hou et al., 2007d). However, in clean
oceanic or lake waters, another factor besides turbidity can influence visibility:
scattering by optical turbulence, which is caused by variations of the
index of refraction (IOR) of the medium over the imaging range and over time. It is mostly
associated with the turbulence structures of the medium. Degradation of
image quality in a scattering medium involving turbulence has been studied
mostly in the atmosphere (Fried, 1966; Kopeika, 1987; Roggemann and
Welsh, 1996). These studies mainly focused on modeling the OTF as a
function of density variations by association with wind profiles, in an effort to
restore the images obtained, such as in air reconnaissance and astronomy
studies (Sadot et al., 1994; Yitzhaky et al., 1997).
Little has been done regarding turbulence effects on imaging formation in
water, mainly due to dominant particle scattering and associated attenuation.
This is of little surprise to anyone with experience in coastal waters, especially
those inside a harbor or estuarine areas, such as Mississippi, where visibility can

quickly reduce to zero in a matter of a few feet. The same applies to regions of
strong resuspension from the bottom, both in coastal regions as well as in the
deep sea. Effects of turbulence have been postulated to have impacts only over
long image-transmission ranges (Wells, 1973), a hypothesis that has been
supported by light-scattering measurements and simulations (Bogucki et al.,
2004). Under extreme conditions, observations have been made that involve
targets with a path length of a few feet (Gilbert and Honey, 1972). The images
obtained under such conditions are often severely degraded or blurred, on par
with (or more than) those caused by particle scattering. Overcoming such
challenges to increase both reach and resolution is important to current and
future underwater EO applications. It is critical to establish a good
understanding about the limiting factors under different conditions.
The simple underwater imaging model [SUIM (Hou, 2009)] was developed
to address this issue. Results developed in previous atmospheric research (Fried,
1966; Tatarskii, 1967; Goodman, 1985; Roggemann and Welsh, 1996) are
used with modifications to reflect in-water optical conditions. Key steps and
assumptions are outlined here for convenience and in assessing the limitations
of the theory.
Optical turbulence in the ocean is primarily caused by the IOR variation
as a function of the temperature and salinity. It has been shown that IOR
fluctuations can be expressed as linear combinations of individual elements,
both in terms of the power spectrum and structure function (Fields, 1972).
Following the Kolmogorov model (Tatarskii, 1967), for a fully developed
turbulent flow, under the inertial subregime 2π/L₀ < κ < 2π/l₀ [κ is the wave
number corresponding to eddy scales, and L₀ and l₀ denote the outer and inner
scales, respectively (Roggemann and Welsh, 1996)], the power spectral density
of the IOR of ocean waters over imaging range (r) can be expressed in the
form of (Batchelor, 1959; Fields, 1972)

Φ_n(κ, r) = K₃ κ^{−11/3},  (3.29)

where K₃ = B₁ χ ε^{−1/3} reflects the 3D optical turbulence strength. B₁ is a
constant and assumed to be of order of unity (Batchelor, 1959). The
total kinetic energy dissipation rate (TKED, ε) typically ranges from 10⁻³ to
10⁻¹¹ m²s⁻³ in natural waters. χ relates to the dissipation rate of temperature
(TD) or salinity (TS) variances (Batchelor, 1959), and ranges between 10⁻² to
10⁻⁹ °C²s⁻¹ and 10⁻⁴ to 10⁻¹¹ psu²s⁻¹, respectively (Domaradzki, 1997; Nash
and Moum, 1999). It is apparent that Eq. (3.29) has the usual Kolmogorov
form found in atmospheric studies, Φ_n^K(κ, r) = 0.033 C_n²(r) κ^{−11/3} (Tatarskii,
1967; Goodman, 1985; Roggemann and Welsh, 1996), where the superscript
K denotes the Kolmogorov spectra. C_n² is the structure constant of the IOR
fluctuations that describes the optical turbulence strength at a distance r
from the pupil plane (i.e., intensity of IOR fluctuations). It can be noted that
K₃ is the equivalent of C_n², up to a constant, and ranges from 10⁻⁸ to 10² in the

ocean. The scalar relationship implies that turbulence in water is considered
statistically isotropic, homogeneous, and wide-sense stationary (WSS), such
that the spatial autocorrelation function depends only on relative positions. It
is important to recall that not all turbulent flows can be described by these
spectra (Fields, 1972; Roggemann and Welsh, 1996).
It is commonly known that spatial coherence functions between optical
fields of any two points can be used to describe the irradiance distribution of
the source image or object (Goodman, 1985; Goodman, 2005). Using the
Wiener–Khinchin theorem (Goodman, 1985), the power spectrum [Eq. (3.29)]
is related to the spatial autocorrelation of the IOR, which itself is directly
linked to the structure function of the IOR. Combined, the OTF can be shown
as the equivalent to the spatial correlation function on the pupil screen. For a
time-varying correlation function under WSS conditions, its ensemble average
can be related to the spatial phase structure function, such that the OTF of a
general incoherent object can be expressed following the approach by Fried
(1966; Roggemann and Welsh, 1996). The time-averaged, or long-exposure
(LE), OTF of the optical turbulence in underwater environments under a
Kolmogorov spectrum thus takes the form:

OTF_LE(f) = exp[−3.44 (λ d_i f / r₀)^{5/3}],  (3.30)

where λ is the mean wavelength [530 nm for typical underwater transmissions
(Hou, Lee, and Weidemann, 2007c)], d_i is the distance between the pupil plane
and the detector, and f is the spatial frequency on the pupil plane in units of
inverse length. The so-called seeing or Fried parameter (r₀) is defined over the
propagation distance r as

r₀ = 0.185 [4π² / (k² ∫₀^r C_n²(z) dz)]^{3/5},  (3.31)

and k = 2π/λ. If we consider optical turbulence throughout the imaging
range to be homogeneous and isotropic, it can be described by a constant
independent of distance to the pupil plane (which is mostly true in water),
C_n²(z) = C_w², so that ∫₀^r C_n²(z) dz = r C_w². Thus, we have

r₀ = 0.185 [4π² / (k² C_w²)]^{3/5} r^{−3/5} = 0.185 [0.132π² / (k² K₃)]^{3/5} r^{−3/5} = R₀ r^{−3/5},  (3.32)
which is a function of range (r). R0 is referred to as the characteristic seeing
parameter, and denotes seeing at the unit distance (1 m). It is important to
notice that the underwater seeing parameter reduces over range at a rate close
to the square root of r, implying fast roll-off of high spatial frequencies
at extended ranges. Consequently, the time-averaged OTF_tur is also a function
of range r:
OTF_tur(ψ, r) = exp[−3.44 (λψ/R₀)^{5/3} r] = exp(−S_n ψ^{5/3} r),  (3.33)

where S_n = 3.44 (λ/R₀)^{5/3} = 1736 K₃ λ^{−1/3} is termed the optical turbulence
intensity coefficient (Hou, 2009), and ψ = d_i f is the angular spatial frequency.
The average wavelength is used, as the phase shift caused by the IOR
variation due to the temperature or salinity is not primarily wavelength dependent.
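Combining Eqs. (3.32) and (3.33), the long-exposure turbulence OTF can be computed directly. In this sketch, R₀ = 0.003 m is an illustrative seeing value; λ is the 530-nm mean wavelength mentioned above:

```python
# Long-exposure turbulence OTF, Eqs. (3.32)-(3.33): the seeing parameter
# scales as r0 = R0 * r**(-3/5), giving OTF_tur = exp(-S_n * psi**(5/3) * r).
# R0 = 0.003 m is an illustrative value, not a measurement.
import numpy as np

def turbulence_otf(psi, r, R0, lam=530e-9):
    """OTF_tur(psi, r) with S_n = 3.44 * (lam / R0)**(5/3)."""
    S_n = 3.44 * (lam / R0) ** (5.0 / 3.0)
    return np.exp(-S_n * psi ** (5.0 / 3.0) * r)

psi = np.linspace(0.0, 2000.0, 100)      # angular spatial frequency, cyc/rad
otf = turbulence_otf(psi, r=10.0, R0=0.003)
```

The ψ^{5/3} dependence means low frequencies are barely touched while high frequencies roll off quickly as range grows, the behavior illustrated in Fig. 3.2.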
Another factor that affects underwater imaging is path radiance. This
effect can be quantified with a simple setup that produces generalized
solutions. If we assume that one transferred frequency component has
amplitude x about the mean intensity, which is valued at 1, so that the
intensity varies between 1 + x and 1 − x, the modulation
function is
M_orig = (I_max − I_min)/(I_max + I_min) = [(1 + x) − (1 − x)] / [(1 + x) + (1 − x)] = x.  (3.34)
Adding normalized radiance D received by the detector, assuming nonsatura-
tion for simplicity, the modulation after effects of path radiance now becomes
M_path = [(1 + x + D) − (1 − x + D)] / [(1 + x + D) + (1 − x + D)]
  = 2x / (2 + 2D) = M_orig · 1/(1 + D) = M_orig · MTF_path.  (3.35)
The result shows that path radiance affects all frequencies equally, and the net
effect shifts the entire modulation transfer curve, thus it does not affect
relative contributions between turbulence and particles. As ambient light is
incoherent in nature, the MTF can be considered the same as the OTF.
Naturally, at D = 0 (no path radiance), Eq. (3.35) shows MTF_path = 1. By
expanding the definition of D, it can easily be seen that MTF_path → 0 for
large D, describing saturation by ambient light, which limits all
frequencies. This is in general agreement with earlier results (Kopeika, 1987),
but without any specific illumination limitation.
Consider that random phase changes of a wavefront can be described
independently as a thin screen that exists only when turbulence exists; the
resting or averaged OTF is that of particles only. This assures linearity of
system components that allows the application of the cascading of OTFs in the
frequency domain (Goodman, 2005). From this, a simple underwater imaging

equation can be derived that accounts for particles (Hou, Lee, and Weidemann,
2007c), path radiance, and turbulence scattering:
OTF(ψ, r)_total = OTF(ψ, r)_path OTF(ψ, r)_par OTF(ψ, r)_tur
  = [1/(1 + D)] exp{−[c − b(1 − e^{−2πθ₀ψ})/(2πθ₀ψ)] r} exp(−S_n ψ^{5/3} r)
  = [1/(1 + D)] exp{−[c − b(1 − e^{−2πθ₀ψ})/(2πθ₀ψ) + S_n ψ^{5/3}] r},  (3.36)

where θ₀ relates to the mean scattering angle, and c and b are the beam attenuation
and scattering coefficients, respectively. It is worth pointing out that OTF_par
can take many different forms, depending on how the scattering phase
structure is incorporated (Hou et al., 2008).
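Equation (3.36) can be evaluated directly as a product of the three terms. The sketch below uses illustrative inputs and is not a calibrated implementation of SUIM:

```python
# The SUIM of Eq. (3.36): total OTF as the product of path-radiance,
# particle-scattering, and turbulence terms.  All inputs are illustrative.
import numpy as np

def suim_otf(psi, r, c, omega0, theta0, S_n, D=0.0):
    """OTF_total(psi, r) = OTF_path * OTF_par * OTF_tur per Eq. (3.36)."""
    b = omega0 * c
    x = 2.0 * np.pi * theta0 * psi
    otf_par = np.exp(-(c - b * (1.0 - np.exp(-x)) / x) * r)   # particles
    otf_tur = np.exp(-S_n * psi ** (5.0 / 3.0) * r)           # turbulence
    otf_path = 1.0 / (1.0 + D)                                # path radiance
    return otf_path * otf_par * otf_tur

psi = np.linspace(1.0, 1000.0, 50)       # angular spatial frequency, cyc/rad
total = suim_otf(psi, r=5.0, c=0.3, omega0=0.8, theta0=0.01, S_n=2e-6)
```

Because path radiance enters only through the factor 1/(1 + D), it rescales every frequency equally, mirroring the conclusion drawn from Eq. (3.35).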
SUIM [Eq. (3.36)] accounts for scattering from particles and optical
turbulence, as well as path radiance, in underwater environments. The primary
aim of the model is to determine relative contributions that are essential in
assessing the limits of conventional passive systems under different underwater
conditions. The model has been applied successfully to explain discrepancies
between diver observations and model outputs, when the effect of turbulence
scattering is not accounted for, particularly involving high-frequency compo-
nents (Hou and Weidemann, 2009). Selected optical conditions corresponding
to clear water (c 0.3 m 1), with strong turbulent mixing environments (e
10 310 5,  10 1010 11, R0 0.002  0.004), are used to illustrate the
relative effects of particle and turbulence scattering on imaging transmissions,
shown in Fig. 3.2. We see that turbulence scattering rapidly reduces imaging
details, especially with increased path lengths. It has limited effect on low-
frequency components when compared to particle scattering, even in clear
waters. Equation (3.36) and Fig. 3.2 also help to explain why Mertens and
Replogle (1977) reported no frequency higher than 1 cyc/mrad observed in the
field, results that puzzled Duntley (1974). At such high spatial frequencies, the
relative contrast decreases rapidly toward zero. Lastly, this model helps to
explain the extreme turbulence situation observed by Gilbert and Honey (1972).
If one converts the standard U.S. Air Force (USAF) resolution chart line pairs
(Gilbert and Honey, 1972) to spatial frequencies, the first blurred group (−1)
corresponds to 650 cyc/rad. Using Eq. (3.36) with c = 0.3 and R₀ = 0.0005, which
corresponds to clear water and a strong turbulence condition (ε = 10⁻⁵, χ = 10⁻¹¹),
it can be seen that the total contrast easily decreases to less than 2% within the
imaging range. This explains the complete disappearance of the −1 group in the
USAF chart, as reported by Gilbert and Honey (1972).
The limitation of Eq. (3.36) is apparent in many ways. The form itself
reflects the optical properties of the medium under incoherent cases. Since the
coherent cutoff frequency is often less (Goodman, 1985), this can only be used

Figure 3.2 Comparison of relative contributions under different conditions. (T) denotes the
OTF contribution from the turbulence, (P) describes the particle scattering contribution, while
(A) represents all combined contributions. Figure legends under corresponding labels
indicate attenuation coefficients, imaging ranges, and seeing parameters, respectively
(in m⁻¹, m, and m). The single scattering albedo of all curves is assumed to be a constant
(0.8). The last three curves in the legend are contributions from particles, turbulence, and
combined (from top to bottom) under the same optical conditions. Path radiance was
excluded (D = 0) in assessing these relative contributions. [Reprinted from Hou (2009).]

as a crude estimate under coherent cases. Furthermore, it does not include the
effects of backscattering, nor cases with saturations, omissions that only further
degrade the MTF. This approach is not directly applicable to active systems
such as those gated and modulated, although modifications can be made to
reflect impacts on shorter integration time, to obtain system limitations similar
to the approach used by Fried (1966). Higher wave number regimes involving
viscous and decay processes should be closely examined as well.

3.5.2 Simple underwater imaging model validation

Optical turbulence underwater is primarily a function of temperature
structure, although salinity variations can at times contribute to strong
optical turbulence (Gilbert and Honey, 1972). Intensified thermoclines in the
natural environment provide a convenient setup to examine this stochastic
process. One of the Finger Lakes in upstate New York, Skaneateles, was
identified as a test site for SOTEX [Skaneateles Optical Turbulence Exercise,
July 2010 (Hou et al., 2012)]. The impact of optical turbulence can be best

illustrated when a pair of sample images is examined side by side, one with strong
optical turbulence and one without, under similar turbidity conditions. The images
were taken at two different depths: one at 2.8 m, which is essentially free of optical
turbulence, and one at 8.7 m, which is strongly influenced by optical turbulence. A
rigid structure, called Image Measurement Assembly for Subsurface Turbulence
(IMAST) was designed and utilized (Hou et al., 2012). From the temperature
profiles shown in Fig. 3.3, a negligible amount of optical turbulence should be
expected at the shallow depths where the temperature profile is essentially uniform,
as reflected in the very low temperature dissipation (TD) rates. Both images (Fig.
3.4) are taken under the horizontal deployment configuration. It can be noticed that
despite the similarity in measured optical properties (beam-c less than 0.45 m⁻¹, see
Fig. 3.3), the image taken inside the strong turbulence layer [Fig. 3.4(b)] suffered
much more degradation, compared to that obtained under conditions of weaker
turbulence [Fig. 3.4(a)]. It is worth mentioning that for the IMAST-iPad setting, the
0-2 group of the USAF 1951 resolution chart corresponds to the spatial frequency
of 1900 cyc/rad, or 1.9 cyc/mrad.

Figure 3.3 Optical properties (beam-c at 532 nm) and temperature profile measured during
July 27, 2010, daytime IMAST deployment, as part of SOTEX. Temperature profiles from
other deployments are plotted as well to show the stable and strong thermoclines at the
sample station. [Reprinted from Hou et al. (2012).]

Figure 3.4 Sample image pair obtained by IMAST during night deployment (IMAST
horizontal) of July 27, 2010. The corresponding physical conditions can be seen in Fig. 3.3
and related publications (Hou, 2009; Hou et al., 2012). (a) was taken at 2.8-m depth with no
obvious optical turbulence, while (b) was taken at 8.7 m, under conditions of similar turbidity
but strong optical turbulence. The images share the same imaging path, camera, and light
settings. For the current IMAST setting, the 0-2 group corresponds to 1900 cyc/rad.
[Reprinted from Hou et al. (2012).]

There are many ways to quantify image degradation. The most direct
way, which also works well with SUIM, is to estimate image degradation in
terms of the MTF, which describes the total system response at different
spatial frequencies. This might not be the best approach for turbulence-
degraded images, especially under high levels of distortion. However, the
intent is to estimate long exposure (averaged) impacts, and such an approach
is acceptable for this purpose. There are several methods to derive the MTF
from imagery. A standard slant-edge technique (Cunningham and Fenster,
1987; ISO, 1997) has been used to measure the corresponding frame-averaged
MTF. It is worth mentioning that efforts were made to
ensure error-free implementation by checking in air, as well as in particle- and
turbulence-free underwater conditions.
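A much-simplified, 1D version of the slant-edge idea can be sketched as follows. The synthetic tanh edge stands in for an edge extracted from imagery, and the sketch omits the slant projection, binning, and windowing steps of the standard technique:

```python
# Simplified 1D slant-edge idea: differentiate an edge-spread function
# (ESF) to get the line-spread function (LSF), then normalize the FFT
# magnitude to estimate the MTF.  The tanh edge is synthetic.
import numpy as np

def mtf_from_edge(esf):
    lsf = np.gradient(esf)               # ESF -> LSF by differentiation
    lsf = lsf / lsf.sum()                # unit area so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

x = np.linspace(-5.0, 5.0, 256)
sharp = (x > 0).astype(float)            # ideal edge
blurred = 0.5 * (1.0 + np.tanh(x / 1.5)) # degraded (turbulence-like) edge

mtf_sharp = mtf_from_edge(sharp)
mtf_blurred = mtf_from_edge(blurred)
```

Comparing the two curves shows the blur suppressing higher spatial frequencies while leaving the lowest ones intact, which is what the frame-averaged MTFs in Fig. 3.5 quantify.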
To test the hypothesis that differences in image quality shown at different
depths (Fig. 3.3) are caused by optical turbulence, a single-frame MTF was
evaluated against a multiframe-averaged MTF. Image degradation caused by
particle scattering alone should remain the same between the single-frame and
multiframe-averaged MTFs, while degradation involving turbulence scattering
would reduce image quality further when multiple frames are averaged. This is
due to the static nature of the PSF associated with the particle scattering
process (Hou et al., 2008), a result of evenly distributed scattering centers,
i.e., particles.
The larger turbulent cells (or turbules) and their temporal variations in size
and shape, on top of uneven spatial distribution, determine the nonstatic

Figure 3.5 Normalized MTF of individual (dashed line) and ten-frame averaged (solid line)
images obtained under strong (8.7 m) and weak (2.8 m) optical turbulence during SOTEX.
MTFs are calculated using slant edge algorithms over the same ROI for all images.
The optical properties (particle scattering) of these images are similar, and can be seen in
Fig. 3.3. [Reprinted from Hou et al. (2012).]

nature of the PSF associated with optical turbulence. Results show that there
is very little difference in image quality under the weak turbulence situation
(2.8 m), but noticeable differences under the stronger turbulence case (8.7 m,
Fig. 3.5), which confirms the hypothesis, at least to the first order.
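
The single-frame versus multiframe comparison above can be illustrated with a toy numerical sketch (all names and parameter values below are illustrative, not from the experiment): a blur that is identical in every frame leaves the frame-averaged contrast unchanged, while a random frame-to-frame image wander, standing in for the nonstatic turbulence PSF, lowers the modulation of the ten-frame average.

```python
import math
import random

def modulation(signal):
    """Michelson modulation: (max - min) / (max + min)."""
    return (max(signal) - min(signal)) / (max(signal) + min(signal))

random.seed(0)
N, period = 256, 1 / 8                  # samples per frame; 8 cycles per frame
xs = [i / N for i in range(N)]

# Particle scattering: a static PSF makes every frame identical, so the
# ten-frame average has exactly the same modulation as a single frame.
static = [0.5 + 0.4 * math.sin(2 * math.pi * x / period) for x in xs]
static_frames = [static] * 10
static_avg = [sum(col) / 10 for col in zip(*static_frames)]

# Optical turbulence: each frame suffers a different random wander, so the
# ten-frame (long exposure) average smears the pattern and loses contrast.
frames = []
for _ in range(10):
    shift = random.gauss(0.0, 0.02)     # per-frame image wander (illustrative)
    frames.append([0.5 + 0.4 * math.sin(2 * math.pi * (x + shift) / period)
                   for x in xs])
avg = [sum(col) / len(frames) for col in zip(*frames)]

m_static = modulation(static_avg)       # equals modulation(static)
m_single = modulation(frames[0])        # a single short exposure keeps contrast
m_avg = modulation(avg)                 # the long exposure loses contrast
```

Running this shows m_avg falling below m_single while m_static matches the single static frame, which is the first-order signature used above to attribute the 8.7-m degradation to turbulence.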
From SUIM, if we assume that path radiance is the same (which is the
main reason for conducting night deployment with an active target), and
assume as well that the particulate scattering characteristics (including the
single scattering albedo ϖ0) and the mean scattering angle are the same at the
depths of interest, then the MTF at depth 2 (H2) relative to
depth 1 (H1) can be written as
H2(ψ, r) = exp[-(Sn2 - Sn1)ψ^(5/3) r] exp{-(c2 - c1)[1 - ϖ0(1 - e^(-2πθ0ψ))/(2πθ0ψ)]r} H1(ψ, r),   (3.37)
where Sn2 and Sn1 are the optical turbulence intensities at the corresponding
depths, respectively.
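
Equation (3.37) translates directly into a short numerical sketch. The function below is an illustrative implementation (the name, inputs, and parameter values are assumptions, not from the experiment, and units must be kept mutually consistent); it scales a baseline MTF H1 by the turbulence-difference and attenuation-difference terms:

```python
import math

def mtf_at_depth2(H1, psi, r, dSn, dc, omega0, theta0):
    """Eq. (3.37): relative MTF at depth 2 given the MTF H1 at depth 1.
    dSn = Sn2 - Sn1 (optical turbulence intensity difference),
    dc  = c2 - c1  (beam attenuation difference),
    omega0 = single scattering albedo, theta0 = mean scattering angle."""
    x = 2 * math.pi * theta0 * psi
    turbulence = math.exp(-dSn * psi ** (5 / 3) * r)
    particulate = math.exp(-dc * (1 - omega0 * (1 - math.exp(-x)) / x) * r)
    return turbulence * particulate * H1

# With equal turbidity (dc = 0) and no turbulence difference (dSn = 0) the
# MTF is unchanged; a positive turbulence difference lowers it.
H_same = mtf_at_depth2(0.5, psi=2.0, r=3.0, dSn=0.0, dc=0.0,
                       omega0=0.8, theta0=0.03)
H_turb = mtf_at_depth2(0.5, psi=2.0, r=3.0, dSn=0.006, dc=0.0,
                       omega0=0.8, theta0=0.03)
```

This mirrors the SOTEX analysis: matched turbidity between the two depths isolates the ψ^(5/3) turbulence term as the only source of additional degradation.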
From Eq. (3.37), MTFs are calculated for the depth of 8.7 m (H2) to
include the impacts of optical turbulence, using measured optical properties and
turbulence dissipation rates at this depth, along with measured MTFs (H1)
from the turbulence-free region (2.8 m). The results are shown in Figs. 3.6(a)
through (d). The images used are obtained from an active source (one-way
path) during night deployment of July 27, 2010, to minimize path radiance.
Figure 3.6(a) shows the averaged MTF at the shallower depth (2.8 m), where
optical turbulence is weak, compared to several cases (sequences A, B, and C as
marked) at the deeper depth (8.7 m), where optical turbulence is strong. For

Figure 3.6 MTFs of three different image sequences estimated from 8.7 m using the slant
edge method, and compared to modeled results, during night deployment on July 27, 2010.
(a) shows the relative variations in MTF of the three image sequences from 8.7-m depth
(marked A, B, and C), compared to the averaged value at 2.8 m. (b), (c), and (d) compare
individual sequences at 8.7 and 2.8 m, respectively. [Reprinted from Hou et al. (2012).]

each of the image sequences at 8.7 m, the MTFs are estimated using the same
ROI of consecutive frames, over ten frames for long exposure. They are
compared to the single- and averaged-frame results at 2.8 m, as well as the
modeled outcome [Figs. 3.6(b), 3.6(c), and 3.6(d)]. As explained before,
averaging and comparison to single-frame results at 2.8 m is used to examine
the level of degradation over longer exposure, as a crude indicator of turbulence
degradation (or lack thereof, in this case). The SUIM model (Hou, 2009; Hou
et al., 2011) is used to incorporate the impacts of optical turbulence at 8.7 m,
using different turbulence parameters for each sequence, with optical
turbulence intensities Sn = 0.0045, 0.006, and 0.008 m^-1. These values are
calculated using Eq. (3.36), applying measured TKED and TD rates (Woods
et al., 2011a) of ε = 10^-7, 10^-9, and 10^-9.2 m^2 s^-3, and χ = 10^-6,
10^-6.4, and 10^-6.3 °C^2 s^-1, respectively (Woods et al.,
2011b). It is worth noting that the SNR cannot be improved when multiple
frames are used, as each individual frame would typically undergo a different
amount of degradation. Therefore, averaging would only increase the SNR
toward the low-frequency elements, and leave behind random variations at
the high-frequency end. This is necessary, however, to contain all of the
variations caused by optical turbulence (Hou, 2009).

3.6 Secchi Disk Theory Revisited

The simple, widely used Secchi disk method, dating back to 1865, is still used
by oceanographers and limnologists as a quick measurement of water clarity
in oceans and lakes. Its deployment is simple: one lowers the traditionally
white, circular disk (approximately 30 cm in diameter) from above the water
into a water column, and determines the point at which it disappears from
sight. The depth at which the disk can no longer be discerned against the
background is defined as the Secchi disk depth (ZSD). Measurement
protocols record the depth at which the disk disappears and the depth at
which it reappears, then use the average of the two values to
eliminate errors imposed by possible residual memory effects and different
operators. Secchi depth is often used as a convenient way of quantifying
water clarity. It can also be used horizontally to help measure diver
visibilities, or to avoid issues associated with its vertical deployment (i.e., sun
glint, surface waves, bottom patchiness, etc.), especially in very shallow
areas. Preisendorfer (1986) thoroughly summarized and reviewed the science
behind the Secchi disk approach from a radiative transfer perspective, and
warned against potential overuse of its application in what he termed
"Secchi disk madness." Even though newer, more rigorous EO instruments
are available to measure optical properties, the Secchi disk method has
continuously proven to be an inexpensive, dependable, and efficient measure
of water clarity that is applicable to diver visibility, water quality, and
biological activities (Preisendorfer, 1986; Lee et al., 2006). The vast database
of available Secchi depths accumulated over the years throughout the entire
world adds an incentive to better understand the science behind this simple
approach. Implications of the Secchi disk method on underwater imaging
issues, such as prediction of diver visibility for mine identification, also
prompt further study beyond the traditional radiative transfer approach.
For most theoretical treatments of Secchi disk visibility, the common
definition of visibility contrast, Weber contrast (Barrett and Myers, 2004), is
used. Visibility contrast Cv is given as the normalized difference in brightness
(radiance L) between the target (Secchi disk) and the background:
CV = (LT - LB)/LB,   (3.38)
where subscripts T and B denote the target (disk) and the background,
respectively. As the viewing range decreases to zero (without medium
attenuation and scattering), inherent contrast (C0) is given. For horizontal

orientation and viewing, as the disk is moved farther away (increasing z), it
can be shown that the apparent contrast Cz decreases exponentially as a
function of the medium's attenuation (Duntley, 1963; Preisendorfer, 1986):
Cz = C0 e^(-cz),   (3.39)
where c is the photopic beam attenuation coefficient, which is the attenuation
of the natural light spectrum convolved with the spectral responsivity of the
human eye (photopic response function). For most viewing conditions, this
can be approximated by a monochromatic beam attenuation near the peak of
human eye sensitivity, such as at 532 nm (Zaneveld and Pegau, 2004). When
the disk is lowered vertically into the water column, it must be remembered
that illumination at the disk and background is being influenced by surface
radiance and attenuation of light; therefore, Eq. (3.39) is modified by the
diffuse attenuation, and becomes (Duntley, 1963; Preisendorfer, 1986)
Cz = C0 e^(-(c + K)z),   (3.40)
where K is the downward diffuse attenuation coefficient (excluding the
viewing angle dependence for simplicity). Equation (3.40) is what
Preisendorfer referred to as the basic equation of Secchi disk science. In a
different form, when the contrast of the target and background converges at
the threshold,
(c + K)ZSD = ln(C0/CL),   (3.41)
where CL is the threshold contrast at disappearing disk depth (ZSD). CL has
been shown to be a function of disk size, but more importantly the adaptation
luminance at the disk location (Blackwell, 1946). The contrast threshold has
been found to vary from 0.02 or higher with low adaptation luminance, to
0.008 (Davies-Colley, 1988) or even 0.002 at high luminance (Blackwell,
1946). Unless otherwise mentioned, all parameters are associated with
photopic quantities from here on. It has been shown that such simplification
does
not invalidate the derivations that follow (Zaneveld and Pegau, 2004). The
spectral and angular dependence of radiance distribution, as well as the effects
of surface glint, are also neglected for simplicity.
When a black Secchi disk is used instead of the traditional white one,
studies suggest that the same theory holds with less variability (Davies-
Colley, 1988), and therefore a black disk can be used as a more robust
measure of underwater visibility (Zaneveld and Pegau, 2004). Other modi-
fications of the disk, such as one with alternating black and white quadrants,
are also used by many researchers, especially those monitoring lakes. Results
show visibility ranges measured with a white/black quadrant disk are similar
to those obtained with the single color black disk (Steel and Neuhausser,
2002). This has important implications from an imaging perspective, in that

the white/black disk can be viewed as a pseudo-resolution chart, with a
separation distance about half the size of the disk diameter (keeping in mind
that the black and white areas on the disk are reduced compared to
corresponding bar patterns).
Intuitively, since the disappearance of a Secchi disk is observed by a
human operator, it is fitting to examine the issue from an imaging perspective.
In this regard, underwater visibility can be thought of as the resolution
reduction of specific spatial frequencies over distance. This is usually studied
by examining a standard resolution chart such as USAF 1951 and known
environmental transmittance characteristics.
Recall that the MTF measures the modulation or contrast of a pattern
after transmission through the system. The modulation can be obtained by
measuring the maximum and minimum values of the sinusoid (or bar) pattern of
spatial frequency ψ (Coltman, 1954; Barrett and Myers, 2004):
M(ψ, R) = [Smax(ψ, R) - Smin(ψ, R)] / [Smax(ψ, R) + Smin(ψ, R)],   (3.42)
where Smax and Smin denote the maximum and minimum brightness values of
the pattern at each spatial frequency ψ, at range R. These terms can be
related to the luminance level of the image, or the radiance detected by the
imaging sensor.
By normalizing to the input or initial modulation (assuming uniform),
which is
M(ψ, 0) = M0 = [Smax(ψ, 0) - Smin(ψ, 0)] / [Smax(ψ, 0) + Smin(ψ, 0)],   (3.43)
the MTF H(ψ, R) can be expressed as
H(ψ, R) = M(ψ, R)/M0.   (3.44)
When considering black versus white patterns, the source modulation is unity
(M0 = 1) and can be omitted. Operationally, normalization can be applied by
dividing by the low-frequency value of the system, M0, before the transfer. Note
that while this contrast definition (also known as Michelson contrast) is
different from the traditional contrast equation used in the Secchi disk theory,
when the Secchi disk disappears, the information content is the same in both forms.
Equations (3.23) and (3.28) reveal the relationship between the water
MTF, total attenuation, spatial frequencies involved (which can be related to
the physical dimensions of the target or Secchi disk), and depth or horizontal
range at which the Secchi disk disappears. For example, assuming single
scattering albedo ϖ0 = b/c = 0.8, θ0 = 0.03, and cR = 4.8, the MTF of the

Figure 3.7 MTF of water at different mean square scattering angle settings [Eq. (3.28)] with
an optical length of 4.8. The curve with circles demonstrates the MTF without the e^(-2πθ0ψ)
term in Eq. (3.19). [Reprinted from Hou, Lee, and Weidemann (2007).]

water plotted in Fig. 3.7 shows a large and rapid reduction with increasing
angular frequencies, following Eq. (3.44). It is clearly shown that finer details
(higher frequencies) are lost first. Notice that the term e^(-2πθ0ψ) only
affects very low spatial frequencies.
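
The behavior plotted in Fig. 3.7 can be reproduced with a few lines of code. The sketch below assumes the Wells-type form H(ψ, R) = exp[-D(ψ)R] with D(ψ) = c - b(1 - e^(-2πθ0ψ))/(2πθ0ψ); the function name and sample frequencies are illustrative:

```python
import math

def water_mtf(psi, c, omega0, theta0, R):
    """Water MTF in the small-angle (Wells) form of Eq. (3.28):
    H(psi, R) = exp(-D(psi) * R), with the decay transfer function
    D(psi) = c - b * (1 - e^(-2*pi*theta0*psi)) / (2*pi*theta0*psi)."""
    b = omega0 * c                       # scattering from the albedo b = omega0*c
    x = 2 * math.pi * theta0 * psi
    decay = c - b * (1 - math.exp(-x)) / x
    return math.exp(-decay * R)

# Parameters used in the text: omega0 = b/c = 0.8, theta0 = 0.03, cR = 4.8
c, R = 1.0, 4.8
H_coarse = water_mtf(0.1, c, 0.8, 0.03, R)     # low frequency: decay -> a
H_fine = water_mtf(1000.0, c, 0.8, 0.03, R)    # high frequency: decay -> c
```

At high frequency the response collapses toward exp(-cR), while coarse features decay only at roughly the absorption rate, which is exactly the "finer details are lost first" behavior of Fig. 3.7.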
From a qualitative MTF perspective, it is clear that the disappearance of a
Secchi disk as it moves away from an observer is the result of reduced
resolution by the system response function, or MTF, at the spatial frequency
related to disk size and range. For simplicity, we exclude the effect of the
imaging system (the human eye in this situation), as well as that of the air-sea
interface when considering vertical application, and consider only the effect of
water transmission.
When contrast threshold CV is reached, the Secchi disk can no longer be
seen by the human operator. The resulting effective spatial frequency (ψSD)
reflects all frequencies related to the disappearance of the disk. These include
those related to the actual disk size (initial low frequencies) and extend to
those associated with the disk edge (high frequencies). The first-order
approach is, therefore, to use the mean of all frequencies involved. If the
high-frequency limit is very large, one can simply take the mean value (double
the value of the lowest frequency) as ψSD, in which case the distance of
disappearance (ZSD) becomes
ψSD = 2ZSD/d.   (3.45)
This relationship can also be interpreted as the corresponding spatial
frequency on a resolution chart that has a d/2 separation distance. This is
consistent with field measurements that showed the same Secchi depth for a
black versus alternating color disk (Steel and Neuhausser, 2002). Another
way to interpret this equation is that as the disk is moved away from the
observer, the corresponding spatial frequency increases (i.e., narrower
subtense angle).
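
Equation (3.45) is simple enough to verify numerically; the helper below (name and example ranges are illustrative) shows how the effective frequency grows as the disk recedes:

```python
def secchi_frequency(z, d=0.3):
    """Effective spatial frequency (cyc/rad) of a disk of diameter d (m)
    viewed at range z (m): double the lowest frequency z/d [Eq. (3.45)]."""
    return 2 * z / d

psi_near = secchi_frequency(5.0)     # 30-cm disk at 5 m
psi_far = secchi_frequency(10.0)     # same disk at 10 m: higher frequency
```

A standard 30-cm disk at 10 m thus corresponds to roughly 67 cyc/rad, i.e., the narrower the subtense angle, the higher the spatial frequency the MTF must support.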
Assume at this point that the difference between the target brightness and
background is small (but not zero), such that
LT ≈ LB, or CV ≪ 1.   (3.46)
We can relate the visibility contrast to the MTF via the following:
H(ψSD) = [Smax(ψSD) - Smin(ψSD)] / {[Smax(ψSD) + Smin(ψSD)] M0}
       ≈ (LT - LB)/(2LB M0) = CV/(2M0).   (3.47)
As noted previously, since both contrast terms depict the same target,
informational content is the same. The difference exists only in mathematical
forms, yet it gives a new perspective about Secchi disk disappearance. It is
easy to see from Eq. (3.47) that for CV = 0.02, the approximation introduces
about 1% error.
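
The quoted error figure is quick to check numerically (values below are the ones used in the text; variable names are my own):

```python
L_B = 1.0                       # background radiance (arbitrary units)
C_V = 0.02                      # threshold Weber contrast at disappearance
L_T = L_B * (1 + C_V)           # target radiance just at threshold

exact = (L_T - L_B) / (L_T + L_B)    # Michelson modulation of the pair
approx = C_V / 2                     # small-contrast form used in Eq. (3.47)
rel_error = approx / exact - 1       # equals C_V/2, i.e., 1% for C_V = 0.02
```

Since (L_T + L_B)/(2 L_B) = 1 + C_V/2, the relative error of the approximation is exactly C_V/2, confirming the 1% figure.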
Equation (3.23) can be rewritten as
D(ψ) = -ln[H(ψ, R)]/R.   (3.48)
At Secchi depth R = ZSD, combining Eqs. (3.28), (3.47), and (3.48) results in
c - b(1 - e^(-2πθ0ψSD))/(2πθ0ψSD) = -(1/ZSD) ln[CV/(2M0)].   (3.49)

For conditions θ0 ≈ 0.03 (McLean and Voss, 1991) and disk size d = 0.3 m,
for ZSD on the order of 10 m, exp(-2πθ0ψSD) ≈ 0.002 ≪ 1. Let
Γ = -ln[CV/(2M0)]; then
c - b/(2πθ0ψSD) = Γ/ZSD.   (3.50)
This is the derived horizontal visibility range. For vertical Secchi disk
applications, it can be seen from Eqs. (3.39) and (3.40) that
c + K - b/(2πθ0ψSD) = Γ/ZSD.   (3.51)
Equation (3.51) is the equivalent form of the basic equation for a Secchi disk
referred to by Preisendorfer (1986), but now derived using modulation transfer
theory. If G = Γ + db/(4πθ0), then the Secchi disk depth
(or visibility range in the horizontal direction) can be written for the
horizontal case as
cZSD = G,   (3.52)
and for the vertical case,
(c + K)ZSD = G.   (3.53)
These have the same form and are directly comparable to the radiative
transfer results (Preisendorfer, 1986).
Rewriting Eq. (3.52) gives the horizontal visibility range as
ZSD = G/c.   (3.54)
Now let's consider a situation where a black Secchi disk is used. Take the
generally accepted limiting contrast as 0.02 from atmospheric studies (Bohren,
1995), with M0 = 1; Γ is then 4.6. Assuming that the total scattering is
0.2 m^-1 with θ0 = 0.03, then for a 30-cm Secchi disk, the second term in G
is db/(4πθ0) ≈ 0.16. Therefore, coupling constant G is 4.76. The horizontal
visibility range for a black Secchi disk becomes
visibility ≈ 4.76/c,   (3.55)
which is in line with previous studies (Zaneveld and Pegau, 2004). When the
background is not dark, for instance assuming ~10% reflectance for the white
disk case, M0 = 0.82. Using the same assumptions and approximations as
previously used, visibility would be 4.55/c instead of 4.76/c if the same limiting
contrast applies. This represents about 5% less in the Secchi depth or visibility
range, due to the reduced contrast.
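
The arithmetic of Eqs. (3.50) through (3.55) is compact enough to script. The sketch below (function names are my own; inputs are the values quoted in the text) computes the coupling constant G and the resulting horizontal range:

```python
import math

def coupling_G(C_V, M0, d, b, theta0):
    """Coupling constant G = Gamma + d*b/(4*pi*theta0),
    with Gamma = -ln[C_V / (2*M0)] [Eqs. (3.50)-(3.53)]."""
    gamma = -math.log(C_V / (2 * M0))
    return gamma + d * b / (4 * math.pi * theta0)

def horizontal_visibility(G, c):
    """Horizontal visibility range Z_SD = G / c [Eq. (3.54)]."""
    return G / c

# Black disk: C_V = 0.02, M0 = 1, d = 0.3 m, b = 0.2 m^-1, theta0 = 0.03
G_black = coupling_G(0.02, 1.0, 0.3, 0.2, 0.03)    # ~4.76
# White disk against a ~10% reflective background: M0 = 0.82
G_white = coupling_G(0.02, 0.82, 0.3, 0.2, 0.03)   # smaller G, shorter range
```

For example, in water with c = 0.4 m^-1 the black-disk range comes out near 12 m, and the white-disk G is a few percent smaller, consistent with the reduced contrast noted above.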
Using absorption and attenuation coefficients measured with an ac-9
spectrophotometer (WETLabs Incorporated) collected during the GLOW
experiment (Gauging Littoral Optics for the Warfighter, September 17-22,
2001, Pensacola, Florida), calculated visibility ranges using the radiative
transfer and MTF approaches are shown in Fig. 3.8. Visibility measurements
were taken at 10 feet of seawater (FSW) and 30 FSW water depths (depth
below the surface) each day, following the same track directions.
Attenuation (c) and scattering (b) values at 532 nm were used to represent
photopic attenuation and scattering (Zaneveld and Pegau, 2004) for both
approaches. Each data point represents the averaged value at the specific
depth on each day. 4.8/c is used to derive visibility ranges representing
previous studies (Zaneveld and Pegau, 2004), while Eq. (3.29) is used to obtain
MTF visibility ranges along with typical parameters (θ0 = 0.03, d = 0.3 m).
The two approaches generally agree well as shown. However, the
current approach provides a smaller predicted visibility range in clearer

Figure 3.8 Visibility range comparisons between previous radiative transfer approach and
the current MTF approach. c (532 nm) ranges from 0.24 to 0.45. See Hou, Lee, and
Weidemann (2007) for details. [Reprinted from Hou, Lee, and Weidemann (2007).]

waters (c ≈ 0.24) and a larger range in more turbid waters (c ≈ 0.45). The
solid line depicts a 1:1 ratio.
Up to this point, the effect of disk size on Secchi depth has only been
implicitly embedded in previous theoretical treatments of the Secchi disk
method, usually with respect to the subtense angle for the eye. In other words,
the traditional theory based on the radiative transfer approach applies,
regardless of whether the disk is 1 mm or 10 m in diameter. Using simple but
important assumptions, a similar Secchi depth/visibility range can be reached
using a more general approach involving all spatial frequencies. In doing so,
Secchi depth becomes explicitly related to disk size as well as scattering. The
decay of MTF at spatial frequencies over the observing range (Fig. 3.7)
provides a straightforward answer to Secchi disk visibility: as the disk moves
away from the observer, the spatial frequency corresponding to the disk size
increases, and the MTF decreases as a result of both absorption and scattering
at an increased rate. From Eq. (3.28), it can be seen that the role of total
attenuation (c) is to dampen the MTF by the same amount to all frequencies,
which is the basis of the previous radiative transfer approach. The MTF-based
approach shows, on the other hand, that scattering does not affect all
frequencies equally [see Eq. (3.28) and Fig. 3.7]. The only time both
approaches would converge is when the second term on the right side of
Eq. (3.28) becomes insignificant. This is precisely the case for the Secchi disk,
where total attenuation plays the dominant role at the higher spatial
frequencies by reducing target contrast. This explains why the original
radiative transfer approach by Preisendorfer and Duntley works so well.

Throughout the history of the Secchi disk method, various disk sizes have
been employed, ranging from a few centimeters to meters in diameter. Disk
sizes most often used are between 20 and 40 cm in diameter, with 30-cm disks
being the standard for marine scientists, while lake researchers seem to prefer
the 20-cm size. Radiative transfer theory results do not explicitly contain
relationships between the Secchi depth and disk size. Rather, they are
embedded implicitly in contrast threshold CV. From an MTF perspective, however,
the functional dependence of Secchi disk visibility and disk size, and the
relationship with forward scattering [shown in Eqs. (3.52) and (3.53)] is
explicit in the formulation. There are other contributors to changes in the
apparent size of the disk, such as deployment height and viewing angle, but
here disk size is specifically addressed.
Because of the large range of disk sizes used, it is important to examine the
impact disk size has on the overall visibility range. The rate of variation of
ZSD can be derived as a function of the disk size using Eq. (3.53) and
demonstrates that
∂ZSD/∂d = b/[4πθ0(c + K)].   (3.56)
By applying typical values found in coastal regions (Jaffe, 1995) (i.e.,
a = 0.14 and b = 0.18 for bays, and a = 0.041 and b = 0.21 for coastal waters,
respectively), the change in the range would be approximately 3 to 5% for
waters with a small MSA value (θ0 = 0.03), and approximately 1 to 1.6% for a
larger MSA value (θ0 = 0.1). Clearly the rate of variation has a relatively small
contribution to the overall visibility range, as it only affects a small portion of
G. Likewise, using the typical values given, it can be seen that the scattering
term represents roughly 3% of the variation in G. Together, these further help
to explain the convergence of radiative transfer and MTF approaches.
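
A quick numerical check of this disk-size argument (parameter values follow the text; variable names and the choice K = 0 for horizontal viewing are my own assumptions) evaluates Eq. (3.56) and the share of G contributed by the disk-size term:

```python
import math

def dZSD_dd(b, c, K, theta0):
    """Sensitivity of Secchi depth to disk diameter [Eq. (3.56)]:
    dZ_SD/dd = b / [4*pi*theta0*(c + K)]. Use K = 0 for horizontal viewing."""
    return b / (4 * math.pi * theta0 * (c + K))

# Share of the coupling constant G contributed by the disk-size term,
# using Gamma = 4.6, d = 0.3 m, and the coastal b = 0.21 quoted above.
gamma, d, b = 4.6, 0.3, 0.21
fracs = {}
for theta0 in (0.03, 0.1):          # small and large mean square angles
    term = d * b / (4 * math.pi * theta0)
    fracs[theta0] = term / (gamma + term)
```

The computed shares come out near 3.5% for θ0 = 0.03 and near 1.1% for θ0 = 0.1, within the 3-5% and 1-1.6% bands quoted above, confirming that disk size perturbs only a small slice of G.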
Blackwell (1946) also showed that the limiting contrast CV is a function
of disk size (or angular subtense) and adaptation luminance, i.e.,
∂Γ/∂d ≠ 0. Analysis with the radiative transfer method has shown a small
(< 1%) change in range visibility due to disk size changes (43 to 60 cm)
(Preisendorfer, 1986). This effect can also be explained by Fig. 3.7. From a
modulation transfer perspective, it can be seen that the contrast decay of
the spatial frequency reaches a plateau at higher spatial frequencies. In
other words, due to the fast decay in the MTF, it does not matter much if
the disk size is 20 or 30 cm. This makes an impact only when ZSD is very
small or when the disk is very large (i.e., low frequencies). This rapid
decay is the key reason behind observations that the target size does not
significantly alter the disappearance range of the disk. The farther away
the disk is, the higher the spatial frequencies it corresponds to, and where
the MTF at these frequencies flattens out is approximately the same for
the 30- and 20-cm disk.

As for disk color, whether entirely black or black and white: from the
modulation transfer theory [see Eqs. (3.43) and (3.50)], the usage of a black
disk versus a black/white alternating disk should have the same outcome,
since they both have M0 = 1, with everything else being equal. This has been
confirmed by daily Secchi depth measurements over a six-month time span
taken in the Skagit River in Washington (Steel and Neuhausser, 2002).
Therefore, both types of disks are equally useful when measuring underwater
visibility and water quality (Zaneveld and Pegau, 2004).
The contrast term Γ is somewhat less well defined, since it is solely
dependent on each observer's ability to discern contrast, and subsequently is
also a function of
conditions at which measurements are taken (i.e., ambient light, sea state, surface
glint, sun angle, shadow or sunny side of vessel, disk types, bottom types, and so
on). However, claims have been made to indicate very small variations (< 1%)
among different observer groups, such as volunteers versus professionals among
lake monitors in Florida (Carlson, 1995). This result seems inconsistent with the
theory, suggesting that for differing background adaptation luminance,
combined with different CV values (0.02 to 0.005), the possible G values can
range from 5.9 to 9.3, even under optimal observation conditions, per
Preisendorfer (1986). Using the typical coastal region parameters quoted earlier,
the results here show G ranging from 4.6 to 6.1 [Eq. (3.52)] under conditions
similar to Preisendorfer (1986). Other studies report different values, such as
G = 8.68 (Hojerslev, 1986), and G = 4.8 (Zaneveld and Pegau, 2004). As
expected, the G value can be significantly higher under conditions of strong
scattering. For example, when b = 2 and all other parameters stay the same as
in the earlier case where G = 4.76 (b = 0.2), then G = 6.2 is obtained with
the current formulation. The larger range of G values can be explained, in part,
by the differences in the scattering contribution to total attenuation.
The modulation transfer method presented has the benefit of a general
approach, and is applicable to other underwater visibility issues, including self-
illuminating targets. The disappearing frequency corresponds to any contrast-
related features, such as patterns and textures, as long as the dependent spatial
frequency is not so small that it is limited by the system hardware before the
medium [see Eq. (3.6)]. The only difference when applying the theory to
different targets is in the threshold contrast that is required.
The small-angle approximation by Wells is applicable to most vision and
imaging systems with a field-of-view cone less than 20 deg. It considers
photons scattered within this angle as part of the signal to be included. This
inherently assumes strong forward scattering in the medium. Monte Carlo
simulation results indicate that at the typical spatial frequency for a Secchi
disk range (> 50 cycles per radian), the small-angle approximation error is
about 2% for up to ten attenuation lengths (Jaffe, 1995). In addition, studies
also show that the exact shape of the volume scattering function [Eq. (3.27)]
does not affect the outcome significantly (< 1%), as it has little impact on the
cumulative effect of PSF, and therefore the MTF (McLean and Voss, 1991).
However, departure from the small-angle approximation assumption will
invalidate the modulation transfer theory applied here. For instance, the
theory is not valid for a Rayleigh-type scattering medium.
The assumption used to eliminate the exp(-2πθ0ψSD) term in Eq. (3.49) can
be invalid in very turbid environments. For example, if visibility is only
1.5 m, ψSD ≈ 10 and exp(-2πθ0ψSD) ≈ 0.15, in which case the contribution
should be included, as it represents a ~15% variation. However, based on
previous discussions, since this entire term contributes only ~3% to G
overall, the conclusions arrived at in Eqs. (3.51) through (3.54) still hold
up well.
Furthermore, a closer examination at different spatial frequencies reveals
that this term would affect only low frequencies (Fig. 3.7), and therefore is
not a major concern in the case of the Secchi disk.
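
The size of the neglected exponential term is easy to check directly. The sketch below assumes a 30-cm disk and θ0 = 0.03, as in the text:

```python
import math

theta0, d = 0.03, 0.3              # mean square angle and disk diameter (m)
terms = {}
for z_sd in (1.5, 10.0):           # very turbid versus clearer water
    psi_sd = 2 * z_sd / d          # effective frequency, Eq. (3.45)
    terms[z_sd] = math.exp(-2 * math.pi * theta0 * psi_sd)
# terms[1.5] is ~0.15 (non-negligible); terms[10.0] is vanishingly small
```

At 1.5 m the term is about 0.15 and should be retained, while at 10 m it has decayed by several orders of magnitude, matching the discussion above.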
Modulation transfer decay (blur) due to the turbulence of the medium is
not accounted for in this case, nor are other factors that reduce visibility, such
as surface glint and capillary waves. In addition, one cannot ignore an implicit
assumption when applying modulation transfer theories, namely the shift
invariance of the system. Shift invariance roughly translates into
requirements in the field that Secchi disk irradiance does not change over a
period of observation. Fortunately, the human vision system automatically
adapts and eliminates many of these effects. This requirement is important,
however, when EO imaging systems are used or designed based on the
previously discussed theory.

3.7 Through-the-Sensor Technique

While image restoration is one of the key elements in diver visibility studies,
the assessment of optical properties of the medium can be equally important,
as it is not only directly related to the enhancement issues at hand, but also
contains information about the in-water constituents that affect modeling
efforts, especially ocean color remote sensing algorithms. A through-the-sensor
approach can be used to examine both issues.
Implementation of the framework is termed the Naval Research
Laboratory (NRL) image restoration via denoised deconvolution (NIRDD).
The flow chart in Fig. 3.9 shows the process involved through an automated
restoration framework. The optimization process is based on the quality of
restoration measured by an image quality metric, termed weighted grayscale
angles (WGSAs) (Hou and Weidemann, 2007). It follows the previously
outlined model [Eq. (3.23), when turbulence can be ignored] to derive the
medium MTF and then the system PSF with knowledge of the camera/lens
MTF. Both automated and manual input (measured optical properties) can
be incorporated into this framework. It can be further applied toward real-
time image enhancement in the field.

Figure 3.9 Sketch of the automated restoration framework NIRDD.

An example case can be used to demonstrate the patented NIRDD. Test image
sets were obtained using the Laser Underwater Camera Imaging Enhancer
(LUCIE) from Defence Research and Development Canada (DRDC), during an
April-May 2006 NATO trial experiment in Panama City, Florida.
The amount of scattering and absorption were controlled by introducing
Maalox and absorbing dye, respectively. Although the effects of polarization
were examined during the experiment, all images used in this study were
unpolarized.
In-water optical properties during the experiment were measured. These included
the absorption and attenuation coefficients (using the ac-9 from WETLabs),
particle size distributions (using the LISST-100 from Sequoia Scientific), and
volume scattering functions [using a multispectral volume scattering meter
(MVSM, custom-built)]. Using the framework discussed previously, image
restoration was carried out and medium optical properties were estimated.
The measured MTFs of the lens and the LUCIE camera system are used to
model the combined system MTF [Hsystem in Eq. (3.4)], as shown in Fig. 3.10,
modeled by a Gaussian point response (R^2 > 0.99 in all fits). It is
clear that the camera is the limiting factor.
The system response functions (PSFs) of the medium are derived from
measured volume scattering functions and from Monte Carlo simulations. A
comparison of the modeled PSF using Eq. (3.8) with the in situ
measurement-derived results is shown in Fig. 3.11. Note that they are in
relative units. The discrepancy between the two PSFs could be the result of
excluding the direct
beam contribution in the Monte Carlo simulations, an omission that
inherently reduces the peak contribution of nonscattered photons. The effect
of multiple scattering, which is accounted for in the Monte Carlo approach,
also helps to reduce the PSF peak. In either case, they are affected by the
sampling frequency limits imposed by detectors in the spatial domain.

Figure 3.10 Overall camera system (lens plus camera) MTF. Measured camera and lens
MTF were used in Gaussian-type fits. [Reprinted from Hou et al. (2007c).]

Figure 3.11 Measured PSF via Monte Carlo simulations (solid line) compared to modeled
results (dotted line) using data from an afternoon experiment on April 28, 2006. [Reprinted
from Hou et al. (2007c).]

Figure 3.12 Sample of (a) an original image and (b) restored image based on modeled
PSF using measured optical properties, with WGSA values of 0.05 and 0.14, respectively.
[Reprinted from Hou and Weidemann (2007).]

The images are restored using PSFs derived from both the modeled and
measured optical properties, and then quantified by the image quality metric
discussed earlier. A sample pair is shown in Fig. 3.12, with corresponding
WGSA values of 0.05 and 0.14, respectively. The visual restoration differences
between measurement-derived PSFs and modeled PSFs are small despite their
differences (Fig. 3.11), thus only one is shown. Further details can be found in
Hou and Weidemann (2007).
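As a minimal illustration of this restoration step, the sketch below applies a Wiener-type deconvolution to a one-dimensional signal blurred by a known PSF. The DFT helpers, the toy edge target, the PSF values, and the regularization constant K are all illustrative assumptions; this is not the processing pipeline actually used by Hou and Weidemann (2007).

```python
import cmath

def dft(x):
    # Discrete Fourier transform (O(N^2), adequate for a 16-point demo).
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Inverse DFT; input has conjugate symmetry here, so the result is real.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def circular_convolve(signal, kernel):
    # Circular convolution, matching the DFT convolution theorem exactly.
    N = len(signal)
    return [sum(signal[(n - m) % N] * kernel[m] for m in range(N)) for n in range(N)]

# A 1D "edge" target and a normalized forward-peaked PSF (illustrative values).
target = [0.0] * 8 + [1.0] * 8
psf = [0.6, 0.2, 0.1, 0.1] + [0.0] * 12     # peak plus small-angle tail, sums to 1

blurred = circular_convolve(target, psf)

# Wiener-type restoration: O = B * conj(H) / (|H|^2 + K); K guards against noise
# amplification where the transfer function H is small.
K = 1e-3
H = dft(psf)
B = dft(blurred)
restored = idft([B[k] * H[k].conjugate() / (abs(H[k]) ** 2 + K) for k in range(len(H))])
```

With this mild blur and small K, the restored edge tracks the original target to within a few percent, illustrating why a good estimate of the medium PSF is the key ingredient of the restoration.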
An optimization approach can be used to estimate underwater optical
properties. Forward scattering and mean square angles are used for initial
testing, with a set of individual images obtained under different conditions or
ranges. Additionally, it is straightforward to obtain the medium optical
properties when imagery-derived decay transfer functions (DTFs) are
available, by applying a first-order Taylor expansion to the exponent in the
Wells formulation [Eq. (3.28)]:

D(\psi \to 0) = c - b\,\frac{1 - e^{-2\pi\theta_0\psi}}{2\pi\theta_0\psi}
  \approx c - b\,\frac{1 - (1 - 2\pi\theta_0\psi)}{2\pi\theta_0\psi} = c - b = a,
D(\psi \to \infty) = c - b\,\frac{1 - 0}{2\pi\theta_0\psi} \to c,        (3.57)

where c = a + b.
Taking the following regression equation form,
D(X) = A\,\frac{1 - e^{-X}}{X} + C,        (3.58)

where X = 2\pi\theta_0\psi.

Figure 3.13 Sample result of retrieved optical properties from measured MTFs
based on Wells' small-angle scattering theory. Top and bottom curves correspond to turbid
(c = 0.95 m⁻¹) and clear (c = 0.35 m⁻¹) conditions, respectively. [Reprinted from Hou et al.]
The regression results are shown in Fig. 3.13. Regression parameters for the
clearer water are A = 33.47 and C = 0.3989; for the turbid setting, they are
A = 44.18 and C = 0.7446. This approach yields c = 0.40 m⁻¹ for the clear-water
case, in line with the measurement (c = 0.35 m⁻¹). For the turbid case,
the regression yields c = 0.74 m⁻¹, which is smaller than the measured
value of 0.95 m⁻¹. For absorption under the turbid condition, the regression
trend is close to the field-measured a = 0.27 m⁻¹ at low angular frequencies
(Fig. 3.13). Clearly, these results would benefit from measurements at
increased spatial frequencies. Higher dynamic range would also help eliminate
probable digitization errors.
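Because Eq. (3.58) is linear in the parameters A and C once X is known, they can be recovered by ordinary least squares without iterative optimization. The sketch below fits synthetic decay values; the chosen A_true and C_true values and the sampling grid are illustrative assumptions, and the interpretation that C plays the role of c while A + C plays the role of a (so A corresponds to −b) follows from the limits in Eq. (3.57).

```python
import math

def decay(X, A, C):
    # Regression form of Eq. (3.58): D(X) = A(1 - e^{-X})/X + C.
    return A * (1.0 - math.exp(-X)) / X + C

# Synthetic "imagery-derived" decay values for a known medium (illustrative:
# A_true stands in for -b, C_true for c, so D -> A + C = a as X -> 0).
A_true, C_true = -0.35, 0.75
xs = [0.5 * (i + 1) for i in range(20)]
ds = [decay(x, A_true, C_true) for x in xs]

# D is linear in (A, C) for known X, so closed-form least squares suffices.
g = [(1.0 - math.exp(-x)) / x for x in xs]
n = len(xs)
Sg, Sgg = sum(g), sum(v * v for v in g)
Sd, Sgd = sum(ds), sum(gi * di for gi, di in zip(g, ds))
A_fit = (n * Sgd - Sg * Sd) / (n * Sgg - Sg * Sg)
C_fit = (Sd - A_fit * Sg) / n
```

On noise-free data the fit recovers the parameters exactly; with real imagery-derived MTFs, the quality of the retrieval depends on the spatial-frequency coverage, as noted above.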

3.8 Summary
Diver visibility has been one of the driving forces in underwater imaging
applications. A majority of the related issues are covered in this chapter,
starting with basic imaging models involving PSFs and MTFs, before the
discussion moves on to imaging theories and applications associated with
particle and turbulence scattering. Finally, the approach is applied to field
observations, including the classic topic of the Secchi disk and newer research
on optical turbulence impacts, as exemplified by the through-the-sensor
approach.

Many stones remain unturned, however. Obvious topics associated with
polarization are not discussed, nor are any of the active sensing approaches.
These topics are addressed in the next chapter to keep the content and
length of each chapter manageable, for both teaching purposes and
coherency of the layout.

References
Barrett, H. H. and Myers, K. J. (2004). Foundations of Image Science.
Hoboken: Wiley-Interscience.
Batchelor, G. K. (1959). Small-scale variation of convected quantities like
temperature in turbulent fluid. J. Fluid Mech. 5, 113-133.
Blackwell, H. R. (1946). Contrast thresholds of the human eye. J. Opt. Soc.
Am. 36, 624-643.
Bogucki, D. J. et al. (2004). Light scattering on oceanic turbulence. Appl.
Opt. 43, 5662-5668.
Bohren, C. F. (1995). Optics, atmospheric. Encycl. Appl. Phys. New York.
pp. 405-434.
Carlson, R. E. (1995). The Secchi disk and the volunteer monitor. LakeLine
15, 28-29.
Chang, P. C. et al. (2003). Improving visibility depth in passive underwater
imaging by use of polarization. Appl. Opt. 42, 2794-2803.
Coltman, J. W. (1954). The specification of imaging properties by response
to a sine wave input. J. Opt. Soc. Am. 44, 468-471.
Cunningham, I. A. and Fenster, A. (1987). A method for modulation
transfer function determination from edge profiles with correction for
finite-element differentiation. Med. Phys. 14, 533-537.
Davies-Colley, R. J. (1988). Measuring water clarity with a black disk.
Limnol. Oceanogr. 33, 616-623.
Dolin, L. S. et al. (2006). Theory of Imaging through Wavy Sea Surface.
Nizhniy Novgorod: Russian Acad. Sci.
Domaradzki, J. A. (1997). Light scattering induced by turbulent flow: a
numerical study. Univ. Southern California, Los Angeles, Dept. of Aerospace
Engineering, Technical report. Accession number ADA320777. p. 65.
Duntley, S. Q. (1963). Light in the sea. J. Opt. Soc. Am. 53, 214-233.
Duntley, S. Q. (1971). Underwater lighting by submerged lasers and
incandescent sources. Scripps Institution of Oceanography, San Diego,
California, Visibility Lab. Technical report. Accession number AD-730 721.
Duntley, S. Q. (1974). Underwater visibility and photography. In: Optical
Aspects of Oceanography, N. G. Jerlov and E. S. Nielsen, eds. New York:
Academic Press.

Fields, A. S. (1972). Optical phase and intensity fluctuations in a refractive
index microstructure: a mathematical analysis. Naval Ship Research and
Development Center. R&D Report 3577.
Fournier, G. R. et al. (1993). Range-gated underwater laser imaging
system. Opt. Eng. 32(9), 2185. doi: 10.1117/12.143954.
Fried, D. L. (1966). Optical resolution through a randomly inhomogeneous
medium for very long and very short exposures. J. Opt. Soc. Am. 56.
Gilbert, G. D. and Honey, R. C. (1972). Optical turbulence in the sea. Proc.
SPIE 24, 49-55. doi: 10.1117/12.953476.
Gilbert, G. D. and Pernicka, J. C. (1967). Improvement of underwater
visibility by reduction of backscatter with a circular polarization technique.
Appl. Opt. 6, 741-746.
Gonzalez, R. C. and Woods, R. E. (2002). Digital Image Processing. Upper
Saddle River: Prentice Hall.
Goodman, J. W. (1985). Statistical Optics. New York: John Wiley & Sons.
Goodman, J. W. (2005). Introduction to Fourier Optics. Greenwood Village:
Roberts & Company Publishers.
Gordon, A. (1975). Practical approaches to underwater multiple-scattering
problems. Proc. SPIE 64, pp. 85-93. doi: 10.1117/12.954496.
Hojerslev, N. K. (1986). Visibility of the sea with special reference to the
Secchi disc. Proc. SPIE 637, 294-305. doi: 10.1117/12.964245.
Hou, W. (2009). A simple underwater imaging model. Opt. Lett. 34.
Hou, W. et al. (2008). Comparison and validation of point spread models for
imaging in natural waters. Opt. Express 16, 9958-9965.
Hou, W. et al. (2007a). A practical point spread model for ocean waters.
In: IV Intl. Conf. Curr. Problems Opt. Nat. Waters, Nizhniy Novgorod.
pp. 86-90.
Hou, W. et al. (2007b). Automated underwater image restoration and
retrieval of related optical properties. IEEE Intl. Geosci. Remote Sens. Symp.
(IGARSS). pp. 1889-1892.
Hou, W., Lee, Z., and Weidemann, A. (2007). Why does the Secchi disk
disappear? An imaging perspective. Opt. Express 15, 2791-2802.
Hou, W. and Weidemann, A. (2007). Objectively assessing underwater
image quality for the purpose of automated restoration. Proc. SPIE 6575,
65750Q. doi: 10.1117/12.717789.
Hou, W. and Weidemann, A. (2009). Diver visibility: why one cannot see as
far? Proc. SPIE 7317, 731701. doi: 10.1117/12.820810.

Hou, W. et al. (2007c). Imagery-derived modulation transfer function and its
applications for underwater imaging. Proc. SPIE 6696, 221-228.
Hou, W. et al. (2011). Impacts of optical turbulence on underwater imaging.
Proc. SPIE 8030, 803009. doi: 10.1117/12.883114.
Hou, W. et al. (2012). Optical turbulence on underwater image degradation
in natural environments. Appl. Opt. 51, 2678-2686.
ISO (1997). Electronic still picture imaging: spatial frequency response (SFR)
measurements. International Organization for Standardization.
Jaffe, J. (1995). Monte Carlo modeling of underwater image formation:
validity of the linear and small-angle approximations. Appl. Opt. 34(24).
Katsev, I. L. et al. (1997). Efficient technique to determine backscattered
light power for various atmospheric and oceanic sounding and imaging
systems. J. Opt. Soc. Am. 14, 1338-1346.
Kopeika, N. (1987). Imaging through the atmosphere for airborne
reconnaissance. Opt. Eng. 26, 1146-1154. doi: 10.1117/12.7974208.
Lee, Z. P. et al. (2006). Euphotic zone depth: its derivation and implication
to ocean-color remote sensing. J. Geophys. Res. 112, C03009.
McLean, J. W. and Voss, K. J. (1991). Point spread function in ocean water:
comparison between theory and experiment. Appl. Opt. 30(5), 2027-2030.
Mertens, L. E. and Replogle, J. (1977). Use of point spread and beam
spread functions for analysis of imaging systems in water. J. Opt. Soc. Am.
67, 1105-1117.
Mobley, C. D. (1994). Light and Water: Radiative Transfer in Natural Waters.
New York: Academic Press.
Mullen, L. et al. (2004). Amplitude-modulated laser imager. Appl. Opt. 43.
Nash, J. D. and Moum, J. N. (1999). Estimating salinity variance dissipation
rate from conductivity microstructure measurements. J. Atmos. Ocean. Tech.
16, 263-274.
Preisendorfer, R. W. (1986). Secchi disk science: visual optics of natural
waters. Limnol. Oceanogr. 31, 909-926.
Preisendorfer, R. W., and Environmental Research Laboratories (U.S.)
(1986). Eyeball Optics of Natural Waters: Secchi Disk Science. Washington.
Roggemann, M. C. and Welsh, B. M. (1996). Imaging Through Turbulence.
Boca Raton: CRC Press.
Sadot, D. et al. (1994). Restoration of thermal images distorted by the
atmosphere, based on measured and theoretical atmospheric modulation
transfer function. Opt. Eng. 33, 44-53. doi: 10.1117/12.151549.
Steel, E. A. and Neuhausser, S. (2002). Comparison of methods for
measuring visual water clarity. J. N. Am. Benth. Soc. 21, 326-335.
Voss, K. (1991). Simple empirical model of the oceanic point spread
function. Appl. Opt. 30, 2647-2651.
Wells, W. H. (1973). Theory of small angle scattering. AGARD Lec. Series
61. NATO.
Woods, S. et al. (2011a). Quantifying turbulence microstructure for
improvement of underwater imaging. Proc. SPIE 8030, 80300A.
Woods, S. et al. (2011b). Measurements of turbulence dissipation in Lake
Skaneateles. Proc. IEEE/OES 10th CWTM, pp. 179-183.
Yitzhaky, Y., Dror, I., and Kopeika, N. (1997). Restoration of atmospherically
blurred images according to weather-predicted atmospheric modulation
transfer functions. Opt. Eng. 36, 3062-3072. doi: 10.1117/1.601526.
Zaneveld, J. R. and Pegau, W. S. (2004). Robust underwater visibility
parameter. Opt. Express 11, 2997-3009.
Zhang, J., Zhang, Q., and He, G. (2008). Blind deconvolution: multiplicative
iterative algorithm. Opt. Lett. 33, 25-27.
Chapter 4
Active Underwater Imaging
4.1 Introduction
One of the main issues concerning diver visibility is that it is directly
dependent on the availability of natural light. The theories developed and
discussed in the last chapter, for the most part, are only applicable under ideal
situations. In other words, they can be seen as a one-way propagation special
case, where path radiance can be neglected or does not pose significant issues,
such as those in the development of the new Secchi disk theory.
However, natural light is not always available or dependable from an
operational standpoint. Also, the limited detection range of the passive
approach at times negates the usefulness of underwater EO systems, despite
their higher resolution. Active EO imaging systems, like active sonar
systems, can extend their detection and identification ranges by applying
various approaches to reduce (or discriminate against) backscattered
photons.
Because of the similarities in principle between active EO and acoustical
systems, and the wide-spread use of acoustical systems in underwater sensing
applications, both are discussed in this chapter.

4.2 Active Electro-Optical Systems

Over the years, many approaches have been developed to increase visibility
range underwater. They involve both civilian and military applications,
including search and rescue, survey, inspection, mapping and research, target
detection and identification, etc. Several review articles have covered these
developments and approaches well from a historical perspective, such as
Duntley (1963), Mertens (1970), papers in the AGARD lecture series, as well
as recent reviews (Jaffe, McLean, and Strand, 2001; Kocak et al., 2008).
Generally, two issues hinder the visibility range underwater, and both are
due to scattering by the medium and constituents within. Forward scattering
leads to the spread of photons in small forward angles, resulting in blurring of
the image. This is shown throughout various examples in the previous chapter.


While particle scattering typically introduces the type of degradation that is well
known and well studied, both theory and measurements have shown that, at very
small angles, turbulence scattering accounts for the majority of the scattering
contribution. This is the case not only in clear-water situations, where the
phenomenon is more pronounced relative to particle scattering, but also in
turbid waters, especially when biological activity is high and exopolymers are
abundant (Passow and Alldredge, 1994).
Backscattering impacts imaging system performance by reducing image contrast
when an artificial light source is used to compensate for the rapid loss of
natural light with increasing depth. The effect is especially severe when the
light source is close to the camera (Fig. 4.1). Unless the object of interest
is self-illuminating (as discussed in the previous chapter), including a light
source with an underwater imaging system is critical for obtaining high-
resolution images at extended ranges. Several methods have been developed to
mitigate the backscattering issue, including separating the light source and
receiver, range gating using the time-of-flight principle, synchronous scanning
between the source and receiver, and modulated systems, especially non-line-of-
sight (NLOS) imagers. These are sketched in Fig. 4.1. The range of detection of
each approach is listed in the figure as a crude reference of system performance.
Details of these approaches are discussed in the following sections.

4.2.1 Separation of source and receiver

The availability of natural light decreases rapidly with depth: red disappears
at about 15 ft, orange at 30 ft, yellow at 60 ft, and green at 80 ft in clear
ocean waters (McGraw-Hill Encyclopedia, 2002).

Figure 4.1 Comparison of various underwater imaging systems and maximum range.
[Adapted from Jaffe, McLean, and Strand (2001).]

Artificial lights are usually used for photography and other EO imaging
systems at greater depths, even though faint blue light can still be available.
While it sounds simple enough, the importance of separating the light and the
camera is not readily recognized by underwater photographers (Jaffe,
McLean, and Strand, 2001) (see Fig. 4.2). This is essentially the same
principle as not using the high beams while driving in heavy fog, as strongly
backscattered photons cast a bright veil, reducing the visibility to effectively
zero in no time.
The physics behind this observation can be traced back to the basic optical
properties of ocean water in Chapter 2, where it is shown that elevated
backscattering is caused by particulates in the water. Although backscattering
is relatively weak compared to forward scattering, the close proximity to the
source tends to compensate enough for the difference; Gershun's law describes
an exponential reduction of the radiance from the light source over the
imaging range. The net outcome is a veiling effect that could severely reduce
contrast of the image.
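A toy numerical model can illustrate this veiling effect. In the sketch below, the target return makes a round trip (attenuated as exp(-2cr)) while the backscatter veil accumulates along the illuminated path; the geometry, the crude veil integral, and the parameter values are illustrative assumptions only, not a radiometric model.

```python
import math

def apparent_contrast(r, c, bb):
    # Target signal after a round trip through water with beam attenuation c.
    signal = math.exp(-2.0 * c * r)
    # Crude veil: accumulate backscattered source light (backscatter bb per
    # meter) over the path, each element also attenuated over its round trip.
    steps = 1000
    dr = r / steps
    veil = sum(bb * math.exp(-2.0 * c * (i + 0.5) * dr) * dr for i in range(steps))
    # Fraction of received light that actually carries target information.
    return signal / (signal + veil)

near = apparent_contrast(1.0, 0.5, 0.02)   # short range: veil barely matters
far = apparent_contrast(5.0, 0.5, 0.02)    # long range: veil dominates
```

Even with a weak backscatter coefficient, the target signal decays exponentially while the veil saturates, so the apparent contrast collapses with range, which is exactly the effect that separating source and receiver mitigates.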
By separating the source and the receiver, only the weak side-scattered
photons will likely reach the receiver, while at the same time, the common
volume intercepted by the source and receiver is much smaller than direct
illumination, as shown in Fig. 4.3. While it is always a desired approach, it is
not always convenient (or even possible) to significantly separate the source
and the receiver of an active imaging system, especially considering the
confined space of unmanned underwater vehicles.

Figure 4.2 U.S. Navy photographer training underwater. Notice that the light is away from
the camera. (Courtesy of Wikipedia and U.S. Navy; photograph by Mass Communication
Specialist 1st Class Jayme Pastoric.)

Figure 4.3 Separation of light and camera helps to reduce backscattering to achieve a
better image, due to weak side scattering and less common volume.

4.2.2 Time of flight and range gating

With the introduction of short-pulsed lasers, it became possible to illuminate
targets of interest with intense light, without the contamination of back-
scattered photons. This is accomplished by using the time-of-flight principle,
as shown in Fig. 4.4.
It is easy to see that the time of flight for the two photons to reach the
detector is different: longer for the black circle, which has been scattered
multiple times, and shorter for the red circle, which travels directly to the
receiver (ballistic). If the source is a pulsed laser and the receiver is
synchronized to the pulsed source, then by opening and closing the receiver at
the right time, it is possible to shut out (or gate out) the unwanted (i.e.,
scattered, especially backscattered) photons to achieve a better image. This is
more clearly explained in Fig. 4.5.
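The gate delay itself is a simple time-of-flight computation: the round-trip distance divided by the speed of light in water. The nominal seawater refractive index of 1.33 below is an assumed value for illustration.

```python
C_VACUUM = 2.998e8   # speed of light in vacuum, m/s
N_WATER = 1.33       # nominal refractive index of seawater (assumed)

def gate_open_delay_s(target_range_m):
    # Round-trip travel time to the target range at the in-water light speed;
    # the receiver gate stays closed until this delay has elapsed, rejecting
    # backscatter from nearer water.
    v_water = C_VACUUM / N_WATER
    return 2.0 * target_range_m / v_water

delay_ns = gate_open_delay_s(8.6) * 1e9   # e.g., the 8.6-m range of Fig. 4.6
```

For an 8.6-m target this works out to roughly 76 ns, which shows why gate widths of a few nanoseconds (as in the systems described below) translate into range slices of well under a meter.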
The gate of the receiver is kept closed while the pulsed light travels toward the
intended target or range. This prevents backscattered photons from entering
the receiver and reducing image contrast. Once the controller decides

Figure 4.4 Time-of-flight sketch showing that scattered photons (black circle) travel a
longer path and therefore take more time to reach the receiver, compared to ballistic
photons (red circle).

Figure 4.5 Range-gating camera principle. (Courtesy of Georges R. Fournier.)

that the desired reflected or backscattered photons are approaching from the
desired range, by calculating the time it takes for the photon to travel, it opens
the gate of the receiver for the right photons to enter (Fig. 4.5). This way, only
photons from a desired range can be collected to form an image, thus the term
range gating. A sample image collected by such a system is shown in Fig. 4.6,

Figure 4.6 (a) Example of a range-gated image at 8.6-m range underwater. The visibility is
3 m. (b) Target in air. (Courtesy of Georges R. Fournier.)

Figure 4.7 Sample images of a Secchi disk (a) in air and (b) in turbid water at two
attenuation lengths (OD; beam c valued at 1.2 m⁻¹). (Courtesy of U.S. NRL.)

which demonstrates a 3× increase in range when compared to passive vision
systems at similar resolution. As a comparison, a passive Secchi image is
shown in Fig. 4.7 at two attenuation lengths [same as optical depth (OD)]
underwater. The system used to acquire the image in Fig. 4.6(a) is the Laser
Underwater Camera Image Enhancer (LUCIE), in its second generation,
fitted for an underwater remotely operated vehicle (ROV), shown in Fig. 4.8.
LUCIE represents 20 years of effort from Defence Research and Development
Canada (DRDC) (Fournier et al., 1993). It employs a high repetition rate
(22.5 kHz) of the pulsed signal (2-W 532-nm source, 7-ns pulse, 3-ns gate on
LUCIE2) at any given range to increase the signal-to-noise ratio (SNR) by
on-chip averaging in the intensified CCD camera. The output is 30 Hz
(video rate). A newer version of the system, fitted for use by divers as a
handheld unit, has been implemented and is shown in Fig. 4.8. It is
self-contained with a battery and is capable of 90 min of continuous
operation, with a lower-power laser for eye safety (600 mW, 1-ns pulse,

Figure 4.8 Three generations of LUCIE: (a) LUCIE1 (1990-1996); (b) LUCIE2 (1998-2006);
and (c) handheld LUCIE (2006-2009). (Courtesy of Georges R. Fournier.)

Figure 4.9 Sketch of Streak Tube Imaging Lidar (STIL) operational concept. [Figure adapted
from Jaffe, McLean, and Strand (2001).]

5-ns gate, and 20-kHz repetition). Automatic ranging is achieved using a sonar
altimeter system.
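The benefit of on-chip averaging can be estimated with a back-of-the-envelope calculation from the numbers quoted above: at a 22.5-kHz pulse rate and 30-Hz video output, hundreds of pulses are accumulated per frame. The square-root SNR scaling below assumes uncorrelated (shot-noise-like) noise, an idealization rather than a measured LUCIE specification.

```python
import math

def on_chip_averaging_gain(pulse_rate_hz, frame_rate_hz):
    # Pulses accumulated per output video frame; for uncorrelated noise the
    # SNR improves as sqrt(N) (an idealizing assumption).
    n_pulses = pulse_rate_hz / frame_rate_hz
    return n_pulses, math.sqrt(n_pulses)

# LUCIE2 figures quoted in the text: 22.5-kHz pulses, 30-Hz video output.
n_pulses, snr_gain = on_chip_averaging_gain(22.5e3, 30.0)
```

This gives 750 pulses per frame and an idealized SNR gain of about 27, which is why a modest 2-W average-power laser can still deliver usable video-rate imagery at extended range.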
Another example of a successful underwater gated imaging system is
the streak tube imaging lidar (STIL) system (Bowker and Lubard, 1993),
developed by Arete Associates (Northridge, California) for underwater EO
identification (EOID) applications. A fan-shaped beam from a pulsed laser source
is used for illumination, perpendicular to the track of a moving platform
(Fig. 4.9). The along-track resolution can be adjusted by the speed of the
moving platform and the laser repetition rate. Not only is the time-of-flight
information recorded per pulse, but also the intensity of backscattering from
the target, at operational resolution on the order of centimeters (Fig. 4.10)
(Jaffe, McLean, and Strand, 2001). The image quality and the ability to retrieve
range information make this a favorable candidate for many underwater
applications.

4.2.3 Synchronous scan: spatial filtering

Another major approach to underwater EO imaging is the synchronous scan
method. It is also known as the laser line scanner (LLS) approach, since it uses
a highly collimated beam (cw laser) to illuminate each point of interest of the
image, typically via a rotating mirror, while the collector optics focuses with a
very narrow field of view (FOV) on the light beam synchronously via a

Figure 4.10 (a) STIL contrast image and (b) range image. [Reproduced with permission
from Jaffe, McLean, and Strand (2001).]

rotating mirror on the same shaft (Coles, 1997) (Fig. 4.11). The rotating
mirror scans in a fan shape, perpendicular (i.e., cross-track) to the motion of
the platform (i.e., along-track). This essentially provides a separation of
source and receiver to reduce the effect of backscattering and some forward
scattering, thanks to the very tight FOV of the laser and receiver (Strand,
1997). The improvement in imaging resolution and range is significant (Dalgleish,
Caimi, and Andren, 2009), as shown in Fig. 4.12.
A modified approach, combining some of the ideas of STIL, uses a
pulsed laser line scanner (PLLS) instead of a cw laser for illumination
in the prior configuration. This approach was made possible by advances in
laser manufacturing capabilities that enable sufficient power in individual
short pulses. The result shows significant improvement of resolution and
SNR. Compare LLS (Fig. 4.12) and PLLS (Fig. 4.13) at 5.1 OD; it is
easy to discern the (3,7) group of the USAF resolution chart from the
PLLS image.
Because recognition of color provides us with the best visual cues in scene
understanding, multiple channels can be used to obtain color images using the

Figure 4.11 Sketch of a laser line scanner. [Reproduced with permission from Dalgleish,
Caimi, and Andren (2009).]

Figure 4.12 LLS cw laser as the source, at (a) 0.6 and (b) 5.1 OD. (Courtesy of F. Dalgleish.)

Figure 4.13 PLLS laser as the source, at (a) 0.6 and (b) 5.1 OD. (Courtesy of F. Dalgleish.)

LLS approach. A four-channel fluorescence imaging laser line scanner
(FILLS) has been implemented (Strand et al., 1996). This involves four sets of
input/output assemblies mounted on the same drive shaft to ensure
synchronization of the beams and receivers. The result is a high-resolution
color image capable of up to six optical lengths, along with range information,
as shown in Fig. 4.14 (Strand et al., 1996). This is a great tool for target
detection and identification, especially within a complex scene involving man-
made and even camouflaged targets. Notice that the nonfluorescent mine-like
object (MLO) shown in Fig. 4.14 jumps out from the scene among the live
benthic organisms.
One direct LLS variant has been used for underwater inspection
by unmanned underwater vehicles (UUVs). It uses a projected line of light,
via line-generating optics such as a cylinder lens, along with a CCD camera to
reconstruct a reflectance image and 3D structure with triangulation (Carder
et al., 2005) (Fig. 4.17). This requires less payload space and power, and is a
good choice for UUV deployment, as shown in Fig. 4.15.
Instead of using a single line recorded by a CCD over time, along with
platform motion to scan the scene, multiple lines can be used at the same

Figure 4.14 FILLS image. (a), (b), and (c) are from R, G, B channels; (d) is the pseudo-
color composite of the fluorescent image excited at 488 nm; and (e) is the reflectance image
of 488 nm. [Reproduced with permission from Jaffe, McLean, and Strand (2001).]

Figure 4.15 Laser line scan for autonomous underwater vehicle deployment. [Reproduced
with permission from Carder et al. (2005).]

time to form a structured illumination, which can be seen as an expanded

laser line scanner approach. This clever approach is further enhanced when
mechanical movement is replaced with a digital light projector (DLP), as
seen in Fig. 4.16 (Narasimhan et al., 2005). The difference is that instead of
using temporal progression as a scanning line, all lines are scanned all at
once by a structured illumination (Jaffe, 2010). The improvement is rather
significant under strong scattering situations, as research has shown by both
a single line plane (single line scan) and floodlit images in a laboratory
experiment. The 3D surface reconstruction is similar in principle to that of
Carder et al. (2005) (Fig. 4.17).
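The triangulation behind such line-scan 3D reconstruction reduces, in the simplest two-dimensional case, to intersecting the laser ray with the camera's pixel ray. The geometry and the numerical values below are a simplified sketch under stated assumptions, not the calibration procedure used by Carder et al. (2005).

```python
import math

def range_by_triangulation(baseline_m, laser_angle_rad, pixel_angle_rad):
    # Camera at the origin looking along +z; the laser is offset by baseline_m
    # along x and steered toward the axis by laser_angle_rad. The camera sees
    # the lit surface point pixel_angle_rad off its axis. Intersecting the
    # laser ray x = baseline - z*tan(laser) with the camera ray
    # x = z*tan(pixel) gives the range z. Lens distortion is ignored.
    return baseline_m / (math.tan(pixel_angle_rad) + math.tan(laser_angle_rad))

# Illustrative numbers: 0.5-m baseline, laser steered 10 deg inward,
# lit point seen 4 deg off the camera axis.
z = range_by_triangulation(0.5, math.radians(10.0), math.radians(4.0))
```

A nearer surface pushes the lit point to a larger pixel angle, so each image column of the projected line maps directly to a range sample, which is how a single camera and line projector recover 3D structure.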
The LLS systems of the last 20 years have shown great improvement in
underwater applications. This prompts the question of how much more
improvement could be attained by reducing the size of the instantaneous
field of view (IFOV) of the receiver and the spot of the laser. Theoretical
study has shown that it is possible to achieve the limit of the system, which
is the square of the medium's PSF (Jaffe, 2005). However, this assumes that
a static PSF is available for the imaging process, which is not always the
case, especially considering long-range underwater environments (Hou,
2009; Hou et al., 2012). When a time-varying PSF is involved, the

Figure 4.16 Structured light experimental setup and results for underwater imaging
under various scattering conditions. [Reproduced with permission from Narasimhan et al.
(2005) 2005 IEEE.]

Figure 4.17 Line scan 3D image and actual target. A D cell battery is used for size
comparison. [Reproduced with permission from Carder et al. (2005).]

resolution limit is no longer only a function of the inherent optical
properties of the medium (as shown in the previous chapter), but also
depends on the coherence length of the microstructures that influence the
propagation of the light (Fried, 1965; Hou, 2009).

4.2.4 Temporal modulation

The third class of active underwater imagers modulates the light source to
discriminate scattered from unscattered photons. This is done via intensity
or amplitude modulation (Scott, 1990; Mullen et al., 2004) for both cw and
pulsed sources. An earlier approach linked the temporal modulation of the
source, at radio frequency, to an intensified CCD receiver, and the phase
information of each pixel was used to determine range (Scott, 1990).
Multiple frames were needed to reduce ambiguity due to environmental
variations. Recent studies show that the coherency of the modulated source
amplitude can be used to increase imaging range (Mullen et al., 2004). The
principle is illustrated in Fig. 4.18: if photons arriving at the receiver
maintain high coherency with the source modulation, chances are they did
not encounter strong scattering, which would otherwise result in a lower
modulated amplitude.
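A simple way to see why scattering erodes the modulation is to treat the multipath time spread as a low-pass filter acting on the intensity envelope. The single-pole (exponential) impulse response assumed below is an illustrative choice, not the model used by Mullen et al.

```python
import math

def modulation_retention(mod_freq_hz, spread_s):
    # Model the scatter-induced multipath spread as a single-pole low-pass
    # response with time constant spread_s (assumed, illustrative). Its
    # magnitude at the modulation frequency is the fraction of the source
    # modulation depth that survives.
    w = 2.0 * math.pi * mod_freq_hz
    return 1.0 / math.sqrt(1.0 + (w * spread_s) ** 2)

direct = modulation_retention(100e6, 0.0)      # ballistic photons: no loss
scattered = modulation_retention(100e6, 5e-9)  # a few ns of multipath spread
```

Under these assumed numbers, a few nanoseconds of multipath spread at 100-MHz modulation cuts the envelope to a fraction of its original depth, so thresholding on modulation amplitude preferentially rejects heavily scattered light.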
A very exciting application of the amplitude modulation approach
expands the time-varying intensity (TVI) approach first developed at the
Scripps Visibility Laboratory (Austin et al., 1991; Mullen et al., 2009). The
modulated light source carries amplitude modulation encoded with the
pixel location, and this approach allows the source to be very close to
targets of interest. All scattered photons can be used to reconstruct the
original image. The net result is an extremely long image-transmission
range, up to 75 OD under laboratory conditions (Mullen et al., 2009), a
more than 10× increase compared to range gating and LLS. The extended
range and its characteristics can be summarized by its NLOS nature. This
approach also allows imaging across the air-sea interface, where distortion
due to surface waves, and extra scattering due to bubbles and turbulence,
often make high-resolution EO imaging impossible. A set of example images
of this approach can be seen in Fig. 4.19.

Figure 4.18 Amplitude modulation can be used on the light source to enhance imaging.
[Reproduced with permission from Mullen et al. (2009).]

Figure 4.19 Enhanced TVI system showing the potential of a NLOS type of underwater
imager. The four groups of image pairs (up and down as one pair) correspond to various
optical conditions: cr = 0.01, cR = 0.08; cr = 1.23, cR = 7.04; cr = 2.34, cR = 13.4; and
cr = 4.30, cR = 24.6, respectively. The single scattering albedo is 0.7, achieved by using
Maalox and absorbing dye. [Reproduced with permission from Mullen et al. (2009).]

4.2.5 Imaging particles underwater

One of the most important underwater imaging applications is the study of
particle characteristics, such as size, morphology, type, composition, and
distribution. These are important parameters associated with various
ocean research topics that examine the basic building blocks of the ocean, at
least in a bulk property or macroscale sense. By examining species type and
distribution, ecological models can be established and validated to provide a
basic understanding of phytoplankton and zooplankton functional groups,
their interaction, productivity, and biomass, as well as extended topics that
help to identify impacts from particle scattering in optical and acoustical
signals. The study of particles is also related to the fundamental questions of
primary productivity, carbon fixation rate, sedimentation rate, and the
combined effects on global biogeochemical balance of the greenhouse gas
pool. The common approach in identifying very small, individual particles is
flow cytometry.
There are various implementations of flow cytometers, and most of them
are used in medical fields to examine individual cells such as blood cells for
physiological changes. The general principles of operation are the same for
those in the field of oceanography. A laser beam is used to illuminate a small
volume of hydrodynamically focused fluid, where the object of detection is
carried and passed through. A set of detectors is placed in line with, and a
majority perpendicular to, the incident illuminating beam. One or more
fluorescence detectors are usually also in place. Depending on the speed of flow
and the size of the particles, this setup can examine up to thousands of particles
per second, ranging in size from 0.2 to 150 μm, such as the CytoSense by

Figure 4.20 A bench-top flow cytometer (CytoSense) from CytoBuoy. (Courtesy of
CytoBuoy b.v.)

CytoBuoy, shown in Fig. 4.20. By combining signals from various detectors, it
is possible to identify the physical and chemical content of an object, as shown
by the example (Fig. 4.21).
The LLS approach has also been used to image larger particles in a water
column, especially those larger than 500 μm, which are often termed marine
snow particles or aggregates. These large particles can be formed by various
processes. Smaller particles (<1 μm) collide via Brownian motion and stay
attached to form larger particles (McCave, 1984; Jackson, 1990). Turbulent
shear stress also brings particles together to form aggregates, while excessive
shear breaks them up. Another physical process that aids aggregate formation
is differential settling. When a particle settles through a water column by the
force of gravity at a velocity differing from those of its neighboring particles, it
encounters particles along its pathway, scavenging (at various probabilities)
small particles and trace metals in its path (Fowler and Knauer, 1986;
Stolzenbach, 1993), or colliding with other particles of similar size to form
aggregates (Fowler and Knauer, 1986; Alldredge and Gotschalk, 1988a). It
has also been demonstrated in laboratory experiments that in stratified fluids,
a falling particle can create a less viscous flow path in its wake, enhancing the
settling speeds of trailing particles until collisions occur. This forms vertically
elongated aggregates, especially at a density interface (Carder et al., 1986).
These various physical processes can occur throughout the water column, and
are greatly enhanced in regions of upwelling and in convergent flows (slick
zones) in the upper layer caused by internal waves, fronts, and jets (Mann and
Lazier, 1991).

Figure 4.21 Image of an A. sanguinea particle, taken by a CytoSense flow cytometer. The flagellate cannot be seen in the picture, but can be discerned from the shape of the pulse. FWS length: 60.5 μm; SWS length: 55.1 μm. (Courtesy of CytoBuoy b.v.)
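The differential-settling mechanism above can be made concrete with Stokes' law for the settling speed of a small sphere, w = 2gr²(ρp − ρf)/(9μ). The particle sizes and densities below are illustrative assumptions, not measured values.

```python
# Stokes settling velocity (m/s), w = 2 g r^2 (rho_p - rho_f) / (9 mu);
# valid only at small Reynolds numbers. Example values are illustrative.

def stokes_velocity(radius_m, rho_particle, rho_fluid=1025.0, mu=1.1e-3, g=9.81):
    return 2.0 * g * radius_m**2 * (rho_particle - rho_fluid) / (9.0 * mu)

# A 100-um-radius, loosely packed aggregate still settles faster than a
# 5-um quartz grain, so it sweeps up slower particles along its path.
w_large = stokes_velocity(100e-6, rho_particle=1100.0)   # loose aggregate
w_small = stokes_velocity(5e-6, rho_particle=2650.0)     # quartz grain
print(w_large, w_small, w_large > w_small)
```

The velocity difference is what drives scavenging: the faster-falling aggregate overtakes and collides with slower particles in its path.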
Aggregates play important roles in the oceanic environment. Their
elevated backscattering can strongly influence the measured IOPs. The
2-to-5-order-of-magnitude higher bacteria concentration, rapid primary
producer growth, and cycling of nutrients in this microenvironment should be
accounted for in biomass and carbon fixation estimates (Mann and Lazier,
1991). It is known that
aggregates are responsible for the majority of material flux and sedimentation,
as well as trace-element and metal transport out of the euphotic zone and to
depth. They are also an important food source for fish and large animals
(Alldredge, 1976). Additionally, accurate abundance and distribution
information within the water column is important for modeling
production, aggregation processes (Jackson, 1990; Hill, 1992), sediment flux
(Walsh and Gardner, 1992), and optical properties (Hou, 1997). However,
determining the abundance of these particles has proven to be difficult, mainly
due to their fragile nature, making sampling very difficult (Alldredge and
Silver, 1988b). Traditional sample collection with hydrobottles can easily
break these aggregates. They can also sink within minutes below the spigot of
the collection bottle (Calvert and McCartney, 1979), or eventually break up
when passing through the spigot (Gibbs and Konwar, 1983). Even sample
storage and transportation can result in their disruption (Alldredge and Silver,
1988b), and common water-filtration systems typically destroy aggregates.
To fulfill the need for a noncontact optical sampling system, a multicamera
system was developed at the College of Marine Science, University of South
Florida, for the purpose of measuring large-particle abundance, distribution,
and optical properties (Costello, Carder, and Steward, 1991; Hou, 1997). The
free-falling device, called the Marine Aggregated Particle Profiling and
Enumerating Rover (MAPPER), is equipped with three video cameras having
different magnifications, with the intent of covering a wide size range while
maintaining reasonably high resolution and sampling a large water volume (see
Fig. 4.22). A hyperstereo mirror module (HMM), consisting of two tilted
mirrors within the full view of the lowest-magnification camera system, is
mounted on MAPPER to provide images of particles at different viewing
angles. An automated image control and examination (ICE) system was
designed and programmed for data postprocessing, frame by frame,
synchronized between all three cameras and auxiliary data (Hou, 1997).
Sample images of these large particles can be seen in Fig. 4.23. The results
show consistent distribution over large size ranges (34 μm to millimeters),
with all three cameras matching each other, along with smaller size ranges
provided by a Coulter counter. The data collected by MAPPER during two
weeks of field experiments, including a rapid bloom, showed a Junge-type
distribution (Junge, 1963), as represented in Fig. 4.24 (Hou, 1997).

Figure 4.22 Drawing of MAPPER. [Courtesy of Hou (1997).]

Figure 4.23 Particles from the small-field cameras of MAPPER, taken in the East Sound in Washington: (a), (b), and (d) are loosely connected marine snow aggregates; (e) and (f) are copepods; (c) and (i) are detritus; and (g), (h), and (j) are aggregates. [Courtesy of Hou (1997).]
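A Junge-type distribution is a power law, N(D) ∝ D⁻ᵏ, so the exponent k can be estimated as the negative slope of a least-squares fit in log-log space. The sketch below uses synthetic counts, not MAPPER data, and the function name is illustrative.

```python
# Fit a Junge-type (power-law) exponent k in N(D) ~ D^-k from binned
# particle counts via least squares in log-log space. Data are synthetic.
import math

def junge_slope(diameters_um, counts):
    xs = [math.log(d) for d in diameters_um]
    ys = [math.log(n) for n in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den  # Junge exponent k (positive for decreasing spectra)

d = [10, 20, 40, 80, 160]                  # bin centers, um
counts = [1e6 * dd ** -4.0 for dd in d]    # ideal k = 4 spectrum
print(round(junge_slope(d, counts), 2))    # -> 4.0
```

Oceanic particle spectra typically yield k in the range of roughly 3 to 5, with the fitted slope summarizing the relative abundance of small versus large particles.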
The Shadowed Image Particle Profiling and Evaluation Recorder
(SIPPER) (Samson et al., 2001; Remsen, Hopkins, and Samson, 2004;
Kramer, 2010) is another particle imaging system designed using the line scan
principle. It is essentially a larger version of a flow cytometer, in the sense that
it is towed through a water column. As the flow passes through the sampling
tube, particles within are imaged where they block the light path, creating a
3-bit image (Fig. 4.25). This enables high-speed, in situ visual sampling of
zooplankton in the water column, as shown in Fig. 4.26. The results have
been validated using a sampling net and the optical plankton counter (OPC)
(Remsen, Hopkins, and Samson, 2004).
The holographic approach in particle imaging is briefly mentioned below.
Holography has been used to study plankton indices of refraction in the past
(Carder, Tomlinson, and Beardsley, 1972). Recent approaches have focused
on not only individual critters and their distributions, but also their
movements and inferred velocity field, and subsequently, the kinetic energy
dissipation associated with turbulent structures. The Holosub has been
developed by the Johns Hopkins University Experimental Fluid Dynamics
Laboratory (Pfitsch et al., 2005; Katz and Sheng, 2010) as a free-drifting in
situ submersible digital holographic imaging system, to examine particle
distribution as well as particle imaging velocimetry (Talapatra et al., 2012).

Figure 4.24 Particle size distribution measured by MAPPER and a Coulter counter. [Courtesy of Hou (1997).]

Figure 4.25 SIPPER underwater imaging platform and schematics of sampling tube and light path. [Reproduced with permission from Kramer (2010).]
The system schematics can be seen in Fig. 4.27. Sample images of individual
particles and their scale can be found in Fig. 4.28. Holograms are recorded at
15 fps using two cameras at 2048 × 2048 pixels, with resolutions of 3.9 and
5.7 μm/pixel due to different FOVs. Needless to say, this is one of the most
promising approaches to underwater imaging, as it bridges the particle and
flow field research in one coherent and convenient setting.

Figure 4.26 Sample image shadowgraph from SIPPER. [Reproduced with permission from Remsen, Hopkins, and Samson (2004) © 2004 Elsevier.]

Figure 4.27 Holosub schematic. [Reproduced with permission from Talapatra et al. (2012).]

Figure 4.28 Sample images by Holosub: (a) and (b) C. socialis colonies; (c) and (d) diatom chains; (e) and (f) nauplii; (g) copepod; (h) pair of Noctiluca cells; and (i) appendicularian. [Reproduced with permission from Talapatra et al. (2012).]

4.3 Comparison to Active Acoustical Systems

4.3.1 Active acoustical systems
While this book mainly focuses on optical systems for ocean sensing issues,
some major nonoptical sensing approaches are included. Acoustics is a
natural candidate, due to its wide application in underwater sensing.
The necessity to gain advantage in naval warfare since World War II and
throughout the Cold War, especially in mine warfare (MIW) and antisubma-
rine warfare (ASW), greatly advanced our understanding of sound propaga-
tion and active sensing with sound or sonar. While both acoustical and optical
signals can be described as propagating waves, it is important to remember
their main differences: light waves are electromagnetic waves, while sound
waves are pressure waves. Light waves do not rely on the existence of a
medium to propagate; in fact, a medium only slows their propagation relative
to a vacuum. Thus, variations in a medium's density, or changes in
the IOR over range and time, are the main causes of scattering and refraction.
Sound waves, however, do rely on the existence of a medium to propagate.
Temperature, density, pressure, and salinity are all medium properties
that directly affect sound speed and propagation in oceanic environments. The
denser medium of the ocean actually assists the propagation speed of sound.
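The dependence of sound speed on these medium properties can be sketched with a widely used simplified expression attributed to Medwin; the coefficients below are as commonly tabulated, and the example inputs are arbitrary.

```python
# Medwin's simplified formula for sound speed in seawater (m/s), roughly
# valid for 0-35 C, 0-45 psu, and depths to ~1000 m.
# T: temperature (deg C), S: salinity (psu), z: depth (m).

def sound_speed(T, S, z):
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

print(round(sound_speed(10.0, 35.0, 0.0), 1))   # ~1490 m/s at the surface
```

Sound speed increases with temperature, salinity, and depth, which is precisely why the vertical structure of these properties bends and ducts sound in the ocean.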
By using the scattering wave properties of acoustical and EO signals, 2D
and 3D images of individual targets or large surface areas (such as the
seafloor) can be obtained in underwater environments. It is apparent in both
EO and acoustical applications that the wavelength of the source determines
the absolute system resolution at the diffraction limit. This is a crude way to
see that EO systems most likely offer better spatial resolution.
Typical underwater sensing involves blue or green channels, where the
corresponding wavelengths are less than a micron (10⁻⁶ m), as shown in
Chapter 2. It is interesting to note that sound waves travel faster in the ocean
than in air, at approximately 1500 m/s compared to approximately 340 m/s.
Acoustical frequencies for underwater sensing typically range from kilohertz
to megahertz, with corresponding wavelengths on the order of meters to
millimeters. Because of this, sound waves are less prone to the scattering and
absorption by particulates in the ocean that reduce the effective visibility
range of the traditional EO approach.
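A rough worked comparison of the two diffraction limits follows; the θ ≈ λ/D approximation and the common aperture size are illustrative assumptions of this example.

```python
# Diffraction-limited angular resolution, theta ~ lambda / D, for an
# aperture of diameter D. Compare a green laser (532 nm) with a 1-MHz
# sonar in seawater (nominal c ~ 1500 m/s); aperture is illustrative.

def angular_resolution(wavelength_m, aperture_m):
    return wavelength_m / aperture_m  # radians; order-of-magnitude estimate

c_water = 1500.0                 # nominal sound speed in seawater, m/s
lam_sonar = c_water / 1.0e6      # 1-MHz acoustic wavelength: 1.5 mm
lam_light = 532e-9               # green laser wavelength

print(angular_resolution(lam_light, 0.05))  # ~1.1e-5 rad
print(angular_resolution(lam_sonar, 0.05))  # ~0.03 rad, ~2800x coarser
```

For the same aperture, the acoustic wavelength is thousands of times longer than the optical one, which is the quantitative basis for the resolution advantage of EO systems noted above.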
While acoustical systems do not have the resolution of EO systems, their
range of sensing is significantly longer, from miles at low-frequency ranges
(kilohertz or lower) to tens of meters at high frequencies, and they are less
prone to certain environmental conditions such as turbidity.
The following sections briefly cover various acoustical systems and
their corresponding applications in bathymetric mapping, target search
and detection, as well as target identification by recent high-resolution
imaging sonars.

4.3.2 Depth sounder

Similar to the principle applied in EO range gating, active acoustical systems
use time of flight to detect the range of return, and/or the intensity of reflected
signals. Echo sounding or depth sounding measures the depth of the ocean
using a sound source (transducer) mounted on a surface vessel, as illustrated in
Fig. 4.29.
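For a vertical echo, the time-of-flight principle reduces to d = ct/2. A minimal sketch, assuming a nominal 1500-m/s sound speed (surveys use measured sound-speed profiles instead):

```python
# Depth from two-way acoustic travel time: d = c * t / 2.
# A nominal sound speed of 1500 m/s is assumed here.

def depth_from_echo(two_way_time_s, sound_speed=1500.0):
    return sound_speed * two_way_time_s / 2.0

print(round(depth_from_echo(0.2), 3))   # 0.2-s round trip -> 150.0 m
```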
Echo sounders are widely used today by the scientific community, ship
operators, fishing vessels, and recreational sportfishers. As lower-frequency
sound waves travel farther with less attenuation, 24- or 33-kHz echo
sounders are often used for deep ocean surveys. In shallower waters, higher
frequencies such as 200 kHz are used for increased spatial resolution, which
is desirable when the fine details of bottom topography are sought after,
whether for hazard avoidance or to find fishing holes. Dual-frequency
sounders are readily available, as their frequencies do not overlap or
interfere with each other. For scientific or survey requirements, multibeam
echo sounding (MBES) is often used, where small cones (1 to 3 deg) of
sound beams are used to increase the area of coverage while a vessel is
underway (Fig. 4.30). Combined with the ship's motion and its attached
global positioning system (GPS) unit, a full-coverage bathymetry map of
the ocean along the ship's track can be obtained using sounders, and often
serves as the basis of hydrographical surveys.

Figure 4.29 Echo sounding to measure water depth. (Courtesy of Wikipedia and Brandon T. Fields, U.S. Army Corps of Engineers.)
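Over a flat bottom, the across-track coverage of a multibeam fan follows from simple geometry; the 120-deg total swath angle below is an assumed, illustrative value.

```python
# Across-track swath width of a multibeam sounder over a flat bottom:
# swath = 2 * depth * tan(half of the total swath angle).
import math

def swath_width(depth_m, total_swath_deg):
    return 2.0 * depth_m * math.tan(math.radians(total_swath_deg / 2.0))

print(round(swath_width(1000.0, 120.0)))   # ~3464 m for a 120-deg fan
```

This is why deep-water MBES surveys can map a corridor several times the water depth in a single pass, while shallow water requires closely spaced survey lines.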
Along with range information, echo intensity can be used to infer the type
of material that a sound wave encounters. Based on the reflectivity of the
bottom type to sound waves, crude bottom classifications can be made from
the color or intensity of the echo, and from texture, if sufficient spatial
resolution is available, as seen in Fig. 4.31.

Figure 4.30 Diagram showing the principle of MBES. (Courtesy of University of Bremen
Center for Marine Environmental Sciences.)

Figure 4.31 Echo intensity can be used to infer bottom type. (Courtesy of Wikipedia.)

4.3.3 Side-scan sonar

To obtain higher-resolution images of the seafloor, and to locate and
identify targets, sonars with higher spatial resolution are required. This can be
achieved by lowering the transducer closer to the bottom of the ocean or
intended targets via a tow body (Hagermann, 1958), as shown in Fig. 4.32,
sacrificing the detection range for increased resolution by using higher
acoustical frequencies, often from 100 to 500 kHz. A conical or fan-shaped
beam, much like the EO LLS setup, is used in a wide-angle spread,
perpendicular to the motion or track of the tow body (Fig. 4.32).
The tow-body configuration allows a side-to-bottom looking sonar to
examine objects of interest at deeper depths, or on seafloors that are far away
from the surface. Clear images of targets can be obtained using the
intensity of the returns (Fig. 4.33), although range information is usually not
available under this setup. Shadows are formed in areas the direct sound beam
cannot reach (Fig. 4.33). It is worth mentioning that active acoustical sensing
can reveal the location of the source or transponder, which is detrimental
during wartime if the sonar is part of a submarine. It is also potentially
harmful to marine mammals, as some (such as whales) use acoustical
communication and echolocation, similar to bats on land. The U.S. Navy has
mandatory guidelines and approval processes regarding the intensity and
frequency of sonar and laser deployments in the ocean, along with active
research programs through the Office of Naval Research (ONR).

Figure 4.32 Side-scan sonar operational sketch. (Courtesy of Oceanic Imaging

Figure 4.33 Side-scan sonar image formation, illustrating (a) strong reflection and shadow zone, and (b) actual image. (Courtesy of NOAA.)
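The shadow geometry described above also supports a rough target-height estimate, via the common side-scan rule of thumb h ≈ (shadow length × towfish altitude) / (slant range to the end of the shadow). The numbers below are illustrative.

```python
# Rough target height from its acoustic shadow (side-scan rule of thumb):
# h = shadow_length * fish_altitude / slant_range_to_shadow_end.

def target_height(shadow_len_m, fish_altitude_m, slant_range_end_m):
    return shadow_len_m * fish_altitude_m / slant_range_end_m

# A 5-m shadow cast by a target, with the towfish 20 m off the bottom and
# the shadow ending 100 m away in slant range:
print(round(target_height(5.0, 20.0, 100.0), 2))   # -> 1.0 m
```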

4.3.4 Imaging sonar

Although in principle the newer imaging sonar or acoustic camera is not much
different from the side-scan or multibeam setups discussed here, it deserves
special mention due to its unprecedented capabilities in underwater sensing
applications. By using multiple high-frequency beams with small beamwidths
formed with the help of an acoustic lens (Belcher et al., 1999), acoustic
cameras are capable of visualizing under zero visibility (in EO terms) with
submillimeter resolution (Vesetas and Manzie, 2001). Depending on the
frequency used, they can reach up to 100 m in range with high spatial
resolution on the order of centimeters to millimeters (e.g., the Sound Metrics
DIDSON and ARIS series; the BlueView P and M series). The refresh rate is
near real time; thus, they can provide near-video-quality viewing of the
environment, much like a medical sonogram. Their compact size, low power
consumption, and
wide field of view make acoustic cameras ideal for divers or small UUV
platforms to use for navigation, inspection, survey, and identification
purposes. Detailed 3D information can be obtained by either using a mosaic
approach or fixed mechanical scan. Higher-resolution sonar imagery, up to
10× compared to conventional sonar, can be obtained using the synthetic
aperture sonar (SAS) approach, which is analogous to the synthetic aperture
radar (SAR) discussed in Chapter 7.

4.4 Summary
Key methods in active underwater imaging, both by acoustics and optics, are
discussed in this chapter. LLS has been widely used and enhanced for its
simplicity and relative ease in system development and deployment in the
field. This is especially true when considering the various particle imaging
apparatus we briefly touched on, both in the laboratory and in situ.
Long-range EO systems are of high importance in both civilian and
military applications. One of the most promising approaches, the NLOS
method, is poised to encroach on realms that have long been claimed by
acoustical systems, namely, longer reaches underwater. This push will be
met by an equally capable (if not stronger) defense from the imaging sonar
lineups of various manufacturers.
In addition to the visibility and target identification requirements, under-
water imaging on the microscale that is related to particle and environment
interactions is a new, exciting, and rewarding topic. Several examples of
systematic approaches, although far from comprehensive, are discussed to
showcase their diversity and benefit to basic ocean research needs.
While attempts are made to cover all major underwater imaging systems
in acoustical and EO approaches, there are many that have been left out or
did not receive the attention they deserve. Readers are encouraged to examine
the references cited in this and related chapters.

References

Alldredge, A. L. (1976). Discarded appendicularian houses as sources of food, surface habitats and particulate organic matter in planktonic environments. Limnol. Oceanogr. 21, 14–23.
Alldredge, A. L. and Gotschalk, C. (1988a). In situ settling behavior of marine snow. Limnol. Oceanogr. 33, 339–351.
Alldredge, A. L. and Silver, M. W. (1988b). Characteristics, dynamics and significance of marine snow. Prog. Oceanogr. 20, 41–82.
Austin, R. W. et al. (1991). An underwater laser scanning system. Proc. SPIE 1537, 57–73. doi: 10.1117/12.48872.
Belcher, E. O. et al. (1999). Beamforming and imaging with acoustic lenses in small, high-frequency sonars. MTS/IEEE OCEANS '99, pp. 1495–1499.
Bowker, K. and Lubard, S. C., Arete Associates (1993). Underwater imaging in real time, using substantially direct depth-to-display-height lidar streak mapping. U.S. Patent 5467122.
Calvert, S. E. and McCartney, M. J. (1979). Effect of incomplete recovery of large particles from water samplers on the chemical composition of oceanic particulate matter. Limnol. Oceanogr. 24, 532–536.
Carder, K. L. et al. (2005). Optical inspection of ports and harbors: laser-line sensor model applications in 2 and 3 dimensions. Proc. SPIE 5780, 49. doi:
Carder, K. L. et al. (1986). Relationships between chlorophyll and ocean color constituents as they affect remote-sensing reflectance models. Limnol. Oceanogr. 31, 403–413.
Carder, K. L., Tomlinson, R., and Beardsley, G. (1972). Technique for estimation of indexes of refraction of marine phytoplankters. Limnol. Oceanogr. 17, 833–839.
Coles, B. W. (1997). Laser line scan systems as environmental survey tools. Ocean News Technol. 3(4), 22–27.
Costello, D., Carder, K. L., and Steward, R. G. (1991). Development of the marine aggregated particle profiling and enumerating rover (MAPPER). Proc. SPIE 1537, 161–172. doi: 10.1117/12.48881.
Dalgleish, F., Caimi, F. M., and Andren, C. F. (2009). Improved LLS imaging performance in scattering-dominant waters. Proc. SPIE 7317, 73170E. doi: 10.1117/12.820836.
Duntley, S. Q. (1963). Light in the sea. J. Opt. Soc. Am. 53, 214–233.
Fournier, G. R. et al. (1993). Range-gated underwater laser imaging system. Opt. Eng. 32(9), 2185. doi: 10.1117/12.143954.
Fowler, S. W. and Knauer, G. A. (1986). Role of large particles in the transport of elements and organic compounds through the oceanic water column. Prog. Oceanogr. 16, 147–194.
Fried, D. L. (1965). Statistics of a geometric representation of wavefront distortion. J. Opt. Soc. Am. 55, 1427–1435.
Gibbs, R. J. and Konwar, L. N. (1983). Sampling of mineral flocs using Niskin bottles. Environ. Sci. Technol. 17, 374–375.
Hagermann, J. (1958). Facsimile recording of sonic values of the ocean bottom. U.S. Patent 4197591.
Hill, P. S. (1992). Reconciling aggregation theory with observed vertical fluxes following phytoplankton blooms. J. Geophys. Res. Oceans 97, 2295–2308.
Hou, W. (1997). Characteristics of Large Particles and their Effects on Submarine Light Field. Ph.D. dissertation. Univ. South Florida, p. 149.
Hou, W. (2009). A simple underwater imaging model. Opt. Lett. 34,
Hou, W. et al. (2012). Optical turbulence on underwater image degradation in natural environments. Appl. Opt. 51, 2678–2686.
Jackson, G. A. (1990). A model of the formation of marine algal flocs by physical coagulation processes. Deep-Sea Res. 37, 1197–1211.
Jaffe, J. (2005). Performance bounds on synchronous laser line scan systems. Opt. Express 13, 738–749.
Jaffe, J. (2010). Enhanced extended range underwater imaging via structured illumination. Opt. Express 18, 12328–12340.
Jaffe, J., McLean, J., and Strand, M. (2001). Underwater optical imaging: status and prospects. Oceanography 14, 64–75.
Junge, C. E. (1963). Air Chemistry and Radioactivity. New York: Academic Press.
Katz, J. and Sheng, J. (2010). Applications of holography in fluid mechanics and particle dynamics. Ann. Rev. Fluid Mech. 42, 531–555.
Kocak, D. M. et al. (2008). A focus on recent developments and trends in underwater imaging. Marine Tech. Soc. J. 42, 52–67.
Kramer, K. (2010). System for Identifying Plankton from the SIPPER Instrument Platform. Ph.D. dissertation. Univ. South Florida.
Mann, K. H. and Lazier, J. R. N. (1991). Dynamics of Marine Ecosystems: Biological-Physical Interactions in the Oceans. Oxford: Blackwell Scientific.
McCave, I. N. (1984). Size spectra and aggregation of suspended particles in the deep ocean. Deep-Sea Res. 31, 329–352.
McGraw-Hill Professional Encyclopedia (2002). Underwater photography.
Mertens, L. E. (1970). In-Water Photography: Theory and Practice. New York: Wiley-Interscience.
Mobley, C. D. (1994). Light and Water: Radiative Transfer in Natural Waters. New York: Academic Press.
Mullen, L. et al. (2009). Extended range underwater imaging using a time varying intensity (TVI) approach. MTS/IEEE Oceans 691, 1–8.
Mullen, L. et al. (2004). Amplitude-modulated laser imager. Appl. Opt. 43,
Narasimhan, S. G. et al. (2005). Structured light in scattering media. Proc. 10th IEEE Int. Conf. Computer Vis. 1, 420–427.
Negahdaripour, S. G. et al. (2005). Structured light in scattering media. Proc. IEEE ICCV, 420–427.
Passow, U. and Alldredge, A. (1994). Distribution, size and bacterial colonization of transparent exopolymer particles (TEP) in the ocean. Marine Ecol. Prog. Series 113, 185–198.
Pfitsch, D. W. et al. (2005). Development of a free-drifting submersible digital holographic imaging system. Proc. MTS/IEEE Oceans 691, 690–696.
Remsen, A., Hopkins, T. L., and Samson, S. (2004). What you see is not what you catch: a comparison of concurrently collected net, optical plankton counter, and shadowed image particle profiling evaluation recorder data from the northeast Gulf of Mexico. Deep-Sea Res. 51(1), 129–151.
Samson, S. et al. (2001). A system for high-resolution zooplankton imaging. IEEE J. Oceanic Eng. 26, 671–676.
Scott, M. W. (1990). Range imaging laser radar. U.S. Patent 4935616.
Stolzenbach, K. D. (1993). Scavenging of small particles by fast-sinking porous aggregates. Deep-Sea Res. 40(1), 359–369.
Strand, M. P. (1997). Underwater electro-optical system for mine identification. Naval Res. Rev. 49, 21–28.
Strand, M. P. et al. (1996). Laser line scan fluorescence and multispectral imaging of coral reef environments. Proc. SPIE 2963, 790–795. doi: 10.1117/
Talapatra, S. et al. (2012). Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment. Proc. SPIE 8372, 837205. doi: 10.1117/12.920570.
Vesetas, R. and Manzie, G. (2001). AMI: a 3-D imaging sonar for mine identification in turbid waters. MTS/IEEE Oceans 1, 12–21.
Walsh, I. D. and Gardner, W. D. (1992). A comparison of aggregate profiles with sediment trap fluxes. Deep-Sea Res. 39, 1817–1834.
Chapter 5
Ocean Color Remote Sensing
5.1 Introduction
The next time you fly over the ocean, look out the airplane window and
observe what is beneath you. The water will appear dark blue when flying
over the clean, deep, open ocean; greenish when closer to shore; and yellowish
or almost dark brown in color closer to cities, where the water becomes dirty.
If you are lucky enough to visit the Bahamas, the water is many shades of
breathtaking light blues and greens, caused by the shallow sandy or grassy
bottoms. These many colors of the ocean can only be seen if you're traveling
on a clear, sunny day and the sun is not in the way (i.e., on the opposite side
of the ocean surface normal relative to you); otherwise you will only see a
whitish glint, preventing you from discerning the color of the water. Viewing
becomes worse
or even impossible if there is a large amount of dust in the air, or if it is foggy
or cloudy.
All of these viewing situations summarize the key topics important
to ocean color remote sensing. The crude, visual observations from our
naked eyes are replaced by highly sophisticated cameras or radiance sensors,
with multiple channels (colors and spectral bands), i.e., they are multi-
spectral. These channels can consist of dozens or even hundreds of bands
(hyperspectral), covering not only the bands visible to our eyes (380 to
700 nm), but also bands beyond. These bands include IR channels that are
sensitive to sea surface temperature (SST), and microwave channels for sea
surface roughness and derived wind velocity, SST, and salinity distribution at
the surface. The guesswork of water types made by simple human observation
(such as those described in the opening paragraph), and then the possible
constituents and corresponding concentrations of each, can now be carried
out by inversion or retrieval algorithms, built on complex relationships among
various channels. This is often coupled with observations of bottom types and
reflectance, which can produce surprisingly accurate water depth information
when the water is shallow and uniform. Of course, all of this information must
be gathered under almost ideal situations, with little or no influence from the
atmosphere between the sensor and the ocean. In other words, there can be no
clouds or dense aerosols, or there must be a means of completely compensating
or accounting for their contribution. Lastly, the effect from surface glint as a
function of sun angle and sea state must be of minimal concern, or be able to
be removed effectively. This is basically the essence of atmospheric correction,
under the assumption that the sensor system has been well calibrated and
characterized.

We examine these pieces of the puzzle, and discuss each separately,
before attempting to connect them in a later chapter, within a framework
that utilizes various individual elements. To effectively solve the puzzle, we
will attempt to follow the natural development of ocean color remote
sensing, starting with the basic principles of passive remote sensing, building
on the foundation of the basic optical properties of the ocean provided
in Chapter 2. This is then followed by a discussion on the influence of
the atmosphere and means of compensation. Various sensors are briefly
presented before the calibration and algorithms are reviewed.
Active remote sensing is discussed in Chapter 6. The basic distinction
between active and passive remote sensing is the availability of light, as seen
in Fig. 5.1. Passive remote sensing (commonly referred to as ocean color
remote sensing) observes the ocean using available sunlight, while active
remote sensing relies on an active light source. This is commonly referred to
as light detection and ranging (lidar), similar to range-gated active imaging
discussed earlier, where a pulsed source is often used to illuminate a target.
A gated receiver is used to measure the time and intensity of light waves
traveling from the source to the target and then to the receiver, which can be
converted into the distance and reflectivity of targets.
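As a sketch of this time-of-flight conversion (the function name and the nominal values for the speed of light and the seawater index of refraction are assumptions of this example):

```python
# Lidar time-of-flight ranging: range = c * t / (2 n), where n is the
# index of refraction of the medium (n ~ 1.34 for seawater, n ~ 1 in air).

def lidar_range(round_trip_time_s, n=1.34, c=3.0e8):
    return c * round_trip_time_s / (2.0 * n)

print(round(lidar_range(100e-9), 1))   # 100-ns round trip -> ~11.2 m in water
```

The factor of 2 accounts for the two-way path, and the factor of n for the slower propagation of light in water.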

Figure 5.1 Comparison of passive and active remote sensing.

Figure 5.2 (a) The earliest surviving aerial photograph, titled Boston, as the Eagle and the
Wild Goose See It, taken by J. W. Black and Samuel A. King on October 13, 1860. It depicts
Boston from a height of 630 m. (Courtesy of The Metropolitan Museum of Art.) (b) Cartoon
depicting the first aerial photography, by Honoré Daumier, which appeared in
Le Boulevard, May 25, 1863. (Courtesy of Brooklyn Museum online collection, Frank L.
Babbott Fund.)

The concept of passive remote sensing, without an active light

source, can be traced back to 1858, when aerial photography was first
attempted by Gaspard-Félix Tournachon (known as Nadar, 1820–1910) of
Paris, France, from a captive balloon 1,200 ft off the ground [depicted in
Fig. 5.2(b)]. The earliest surviving aerial photograph was taken in 1860
of Boston, Massachusetts, by J. W. Black and Samuel A. King, shown in
Fig. 5.2(a).
Military applications, starting from the Civil War in the 1860s and
proceeding through World War I (Fig. 5.3), World War II, and the Cold
War, drove the rapid development of remote sensing from aerial platforms,
and later from satellites. In 1955, the U-2 spy plane took its first flight and
brought back highly valued photographs. By 1960, intelligence photographs
were collected from earth-orbiting Corona satellites. Today, both civilian
and military applications drive the development of remote sensing
technology. The resolution of sensors has increased dramatically to reveal
more and more spatial details, from higher and higher orbits, using more
channels at finer spectral bands (colors and spectral resolution). We have
progressed from simple black-and-white images, to color photographs, to
multichannel imagery, and now to hyperspectral images, which can provide hundreds
of colors for better sensing capabilities. To fully realize the potential of these
sophisticated sensor systems, and to develop more advanced configurations,
it is necessary to take a look at the basic physics behind remote sensing.

Figure 5.3 Military aerial photography during World War I. (Courtesy of Aerotech News and Review Inc.'s Journal of Aerospace and Defense Industry News.)

5.2 Basic Principles of Ocean Color Remote Sensing

While the physics behind remote sensing is applicable to both terrestrial and
ocean sensing, we use the ocean as the default for convenience of discussion.
After all, the ocean covers 72% of the earth's surface, so it makes perfect sense
that the majority takes priority.
A sketch of the signal seen by a remote sensor at a certain height is shown in
Fig. 5.4. Without loss of generality, we assume that the platform is spaceborne
to discuss all relative elements thoroughly. For lower platforms, such as
unmanned aerial vehicles (UAVs), certain elements become less important (e.g.,
atmospheric correction, discussed in detail shortly). Total radiance at the sensor is
the radiance at the top of the atmosphere (TOA), denoted as Lt. Lt consists of
contributions from atmospheric scattering, including molecular Rayleigh
scattering Lr and aerosol contribution La; contributions from the sea surface
Ls, including glint and foam (whitecaps); and contributions from the water Lw,
which can include bottom contributions in shallow seas. In mathematical form,
this is expressed as

L_t(λ) = L_r(λ) + L_a(λ) + T L_s(λ) + t_d(λ) L_w(λ),    (5.1)
where T is the transmittance from the sea surface to the sensor, and td is the
diffuse transmission of water-leaving radiance to the sensor. These two
quantities are not necessarily equal, due to adjacency effects from specular
reflection. Water-leaving radiance Lw, discussed in Chapter 2, is a function of
water itself, the gelbstoff (g), and particles (p), which can be inorganic (e.g.,
suspended material such as quartz) or organic (e.g., phytoplankton). In
shallow waters, bottom contributions (b) should also be included. The
contributions can be expressed in terms of absorption coefficients as

a_w = a_wa + a_g + a_p.    (5.2)
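Equation (5.1) can be rearranged to isolate the water-leaving radiance once the atmospheric and surface terms have been modeled. The sketch below does this algebra with made-up radiance values; the function name and numbers are illustrative, not from an operational processor.

```python
# Rearranging Eq. (5.1) for water-leaving radiance:
# Lw = (Lt - Lr - La - T * Ls) / td. All values below are illustrative.

def water_leaving_radiance(Lt, Lr, La, Ls, T, td):
    return (Lt - Lr - La - T * Ls) / td

# Roughly 90% of the top-of-atmosphere signal comes from the atmosphere
# and surface in this example:
Lw = water_leaving_radiance(Lt=10.0, Lr=6.0, La=2.5, Ls=0.5, T=1.0, td=0.9)
print(round(Lw, 3))   # -> 1.111
```

In practice, Lr can be computed accurately from theory, while La must be estimated (e.g., from near-IR bands where the ocean is nearly black), which is where most of the difficulty of atmospheric correction lies.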
It is certainly feasible to look at only the color and intensity of the returned
signal at the sensor level and make an educated guess of in-water properties,
such as chlorophyll concentration level, the amount of suspended sediment
load, etc. However, an underlying assumption would have to be made,
consciously or not, in comparing the received intensity to the
amount of sunlight available in the area, in terms of where the radiance
comes from and the relative positions and angles. Remote sensing reflectance Rrs, defined
in Chapter 2, is used to address such concerns.
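To make the bookkeeping in Eq. (5.1) concrete, the budget can be sketched in a few lines of code; all radiance values below are illustrative placeholders, not measurements.

```python
# Signal budget of Eq. (5.1): Lt = Lr + La + T*Ls + td*Lw.
# All numbers are illustrative placeholders, not real measurements.

def total_radiance(L_r, L_a, L_s, L_w, T, t_d):
    """Top-of-atmosphere radiance assembled from its components."""
    return L_r + L_a + T * L_s + t_d * L_w

def water_leaving(L_t, L_r, L_a, L_s, T, t_d):
    """Invert the budget for the water-leaving radiance L_w."""
    return (L_t - L_r - L_a - T * L_s) / t_d

L_t = total_radiance(L_r=6.0, L_a=2.0, L_s=0.5, L_w=1.0, T=0.9, t_d=0.8)
L_w = water_leaving(L_t, L_r=6.0, L_a=2.0, L_s=0.5, T=0.9, t_d=0.8)
```

Recovering Lw this way presumes Lr, La, Ls, T, and td are all known; estimating them is exactly the atmospheric correction problem treated next.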

5.3 Atmospheric Correction

From Fig. 5.4 and Eq. (5.1), it is seen that atmospheric contribution is part of
a remotely sensed signal. In fact, depending on sensor altitude, this value can
represent up to 90% of the total sensor radiance collected onboard a satellite,
as seen in Fig. 5.5. It requires a significant amount of effort to ensure that the
value measured at the sensor is very accurately calibrated, and that effects of
the atmosphere are completely removed, if ocean color or water-leaving
radiance is the goal. Any small changes or errors in the atmospheric
correction algorithm would be enlarged ten times in this case (e.g., a 0.5%
TOA calibration error results in a 5% error in the retrieved water-leaving
radiance when 90% of the received signal is contributed by the
atmosphere). This is important, even if true values cannot be retrieved due
to algorithm and sensor limitations. Properly corrected scenes allow for
comparisons between different days and different sensors; these comparisons
enable researchers to identify temporal variability or seasonality, which is key
in ecological, geophysical, and environmental modeling and monitoring.
Properly corrected scenes also help to improve scene interpretation, in
particular surface-type classification results. Residual error can be further
reduced by band ratio methods in most cases, although recent studies have
pointed out concerns related to singularity-type issues (Hu et al., 2012).
Retrieval algorithms are discussed in later sections of this chapter.

Figure 5.4 Remote sensing signal components in passive ocean remote sensing.

Figure 5.5 Atmospheric contribution from a satellite sensor. [Reproduced with permission
from Wang et al. (2011) using data from Gordon.]
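The ten-fold error amplification described above is simple arithmetic, sketched here with illustrative numbers (90% atmospheric contribution assumed):

```python
# Error amplification sketch: a small TOA calibration bias maps into a
# much larger relative error in the retrieved water-leaving signal when
# the atmosphere dominates the TOA radiance. Numbers are illustrative.

L_t = 10.0               # total TOA radiance (arbitrary units)
L_atm = 0.9 * L_t        # atmospheric contribution (90% of the total)
L_w = L_t - L_atm        # water-leaving part (10%)

L_t_biased = 1.005 * L_t            # 0.5% calibration error at TOA
L_w_biased = L_t_biased - L_atm     # atmosphere assumed perfectly removed

amplified_error = (L_w_biased - L_w) / L_w   # 0.05, i.e., a 5% error
```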
Most of the discussions in this section are generic in nature, meaning they
are applicable to both land and ocean scenes, especially those involving
Rayleigh scattering and aerosol models. However, bias is given toward
oceanic features, including those from glint and foam. It is worth mentioning
that due to the low reflectance of ocean water, atmospheric correction is more
demanding when the same level of accuracy is desired, which can be rather
challenging, considering the dynamic nature of the aerosols (including water
vapor) involved in coastal environments.
Atmospheric influence on TOA-received radiance includes gaseous absorption
from approximately 30 gases; eight of them contribute the majority of the
signal and its variability. They are: ozone (O3), oxygen (O2), water vapor (H2O),
carbon dioxide (CO2), carbon monoxide (CO), methane (CH4), nitrous oxide
(N2O), and nitrogen dioxide (NO2) (Gao et al., 2009). Their absorption lines
cover the 0.4- to 2.5-μm spectral range, as shown in Fig. 5.6.

Figure 5.6 Simulated gaseous transmissions over the 0.4- to 2.5-μm spectral range.
[Reproduced with permission from Gao et al. (2009) © 2009 Elsevier.]

Scattering by air molecules (Rayleigh scattering) constitutes approximately
80 to 85% of the total signal, and can be estimated rather accurately,
given the geometry and atmospheric pressure, using radiative transfer models
(e.g., ATREM, MODTRAN, and derivatives such as FLAASH and
ACORN) (Berk et al., 1989; Gao et al., 1993; Adler-Golden et al., 1999;
Gao et al., 2000; Kruse, 2004). Rayleigh scattering is strongly dependent on the
wavelength (λ⁻⁴). Aerosol impacts up to 10% of the total signal and is related
to the size and distribution of the scattering centers, which can vary significantly.
Aerosol includes water vapor, smoke, dust, ashes, pollen, spores, and various
pollutants, and affects polarization if the sensor is sensitive to the corresponding
states. Unlike Rayleigh scattering discussed earlier (which gives the sky its
blue color), aerosol adds whitish or yellowish hues to the sky. It is only
weakly dependent on the wavelength (λ⁻¹ to λ⁻²) (Gao et al., 2009).
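The contrast between the two spectral dependences is easy to see numerically; the normalization to 865 nm below is purely an illustrative choice.

```python
# Contrast of spectral dependence: Rayleigh scattering varies roughly as
# lambda^-4, aerosol scattering only as lambda^-1 to lambda^-2. Values
# are normalized to 865 nm purely for illustration.

def rayleigh_like(lam_nm, ref_nm=865.0):
    return (ref_nm / lam_nm) ** 4

def aerosol_like(lam_nm, ref_nm=865.0, alpha=1.0):
    return (ref_nm / lam_nm) ** alpha

# Moving from the NIR to the blue, the Rayleigh term grows much faster:
blue_rayleigh = rayleigh_like(443.0)   # about 14.5 times the 865-nm value
blue_aerosol = aerosol_like(443.0)     # about 2 times
```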
Active sensing of aerosol distribution has been carried out, such as those by
Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations
(CALIPSO). Different aerosol models can be selected when sufficient
knowledge of the environment is available based on either weather station
observations or other sensor inputs. Transmittance due to aerosol can be
modeled via the radiative transfer models mentioned earlier for Rayleigh
scattering cases. Other image-based atmospheric correction methods are also
available, such as the internal average relative reflectance (IARR) method,
which normalizes images to the scene average spectrum; the flat field method,
where a known area of flat reflectance is assumed across the spectrum; and the
empirical line approach, where known field targets are used. These and other
approaches are more frequently used in land scene remote sensing
applications, and readers are encouraged to refer to the more detailed reviews
found in Gao et al. (2009).
Before we discuss atmospheric correction algorithms and methods, the
challenges associated with the temporal and spatial variability of a scene
should be briefly reviewed. This will help us appreciate, diagnose, and
improve the various methods presented. Based on the definition of aerosols, it
is conceivable that concentration changes, especially those associated with
water vapor, can be rather significant at different locations, altitudes, and
times of day, and even within a scene. This is exactly what image-based
algorithms attempt to accomplish: deduce the impacts of aerosols and
correct accordingly.
A better method of atmospheric correction for oceanic remote sensing
applications is the black pixel approach (Gordon, 1978; Gordon and Wang,
1994). This was widely used for earlier sensors, such as the Coastal Zone
Color Scanner (CZCS), one of the first ocean color satellite sensors aimed at
exploring the ocean's properties through its color, and its successor, the Sea-
Viewing Wide-Field-of-View Sensor (SeaWiFS). This method is based on the
absorbing nature of the water (see Chapter 2, Fig. 2.9), as well as observations
that the water-leaving radiance is very close to zero at near-infrared (NIR)
channels (0.76 to 0.87 μm). By setting Lw = 0 in Eq. (5.1), the measured
signal at the sensor level is entirely that of atmospheric contributions. The
aerosol reflectance (Gordon, 1978; Gao et al., 2000) can then be obtained
using a band ratio model, when its value at the initial wavelength λ0 is known,
after calculations to determine the Rayleigh scattering contributions:
$\rho_a(\lambda) = f(\theta, \theta_0, \varphi, \lambda, \lambda_0)\, \rho_a(\lambda_0).$    (5.3)
This algorithm proved to be very successful in retrieving the water-leaving
radiance from case-1 water, not only for the CZCS but also later on for the
SeaWiFS (Gordon and Wang, 1994). Improvements have been made since
to include multiple scattering effects in the SeaWiFS approach, along with a
two-layer system where the contribution of aerosols is assumed to be at
the bottom of the atmosphere, while a gaseous scattering and absorption layer
sits above.
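The essence of the black pixel extrapolation can be sketched as follows; the single-exponent spectral model and all numbers here are illustrative stand-ins, not the operational CZCS/SeaWiFS implementation.

```python
# Black pixel sketch: with Lw = 0 at the NIR band, the NIR aerosol
# reflectance is observed directly (after removing Rayleigh), then
# extrapolated to visible bands. The power-law form and the exponent
# are illustrative assumptions, cf. Eq. (5.3).

def extrapolate_aerosol(rho_a_nir, lam_nm, nir_nm=865.0, alpha=1.0):
    return rho_a_nir * (nir_nm / lam_nm) ** alpha

rho_a_865 = 0.010                                   # observed at the "black" band
rho_a_443 = extrapolate_aerosol(rho_a_865, 443.0)   # larger toward the blue
```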
One of the defects of this model is its inability to compensate for the
adjacency effects in the fringe pixels of sun glint areas (Fraser et al., 1997;
Reinersman et al., 1998). An improved version has been developed, utilizing a
look-up table generated from a vector radiative transfer model that includes
sea surface roughness and resulting specular reflection, along with multilayers
of aerosol and gaseous molecules, degrees of polarization, and multiple
scattering. The two algorithms are well matched over glint-free oceanic waters
(within a few percent), but they disagree up to 30% when glint areas are
present (Fraser et al., 1997; Gao et al., 2000). A case of successful sun glint
removal is shown in Fig. 5.7 to illustrate the issues mentioned and the impact
on retrieval before and after removal.
Another issue with the CZCS algorithm based on the black pixel
assumption is that in coastal, case-2 turbid waters, resuspended sediments
near the surface are registered at NIR channels, even though water absorption
remains strong, as is seen in Fig. 5.7. The remedy is to use longer IR channels
to block out any residual water-leaving signals, essentially using the stronger
water absorption feature of the longer wavelength. Examples can be found
with 0.95 to 1.14 μm (Gao et al., 2000), or shortwave-infrared (SWIR)
channels of the Moderate-Resolution Imaging Spectroradiometer (MODIS)
at 1.24 to 2.13 μm (Wang and Shi, 2007). Another approach is Tafkaa, a
look-up table implementation based on ATREM and a vector radiative
transfer code, developed initially for hyperspectral remote sensing of land
scenes (Gao et al., 1993; Gao et al., 2000). Four aerosol types are used, along
with five relative humidity (RH) levels, to generate 20 aerosol models, for
which various values of wavelength, optical depth (at 0.55 μm), sensor and
sun angles, and sea surface wind speed are included to populate the
transmittance spectral database of all 60 gases considered. The apparent
radiance spectrum of water vapor at 0.94 to 1.14 μm is used to estimate water
vapor values using the look-up table. Tafkaa employs a spectral matching
technique (Gao et al., 2009), which can be seen in the example shown in
Fig. 5.8. For turbid case-2 water, the spectral matching algorithm applies a
weight of 1 for wavelengths longer than 1 μm, and 0 weight for shorter
wavelengths. Its versatility lies in that it is not only useful for turbid case-2
waters but also capable of handling case-1 water when the weighting factor is
adjusted to use the black pixel assumption (0.76 to 0.87 μm) (Gao et al., 2000).

Figure 5.7 An example of successful glint removal using AVIRIS data over Kaneohe Bay,
Hawaii. [Reproduced with permission from Gao, Green, and Lundeen (2011).]

Figure 5.8 Example of spectral matching technique used in Tafkaa. [Reproduced with
permission from Gao et al. (2009) © 2009 Elsevier.]
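The wavelength weighting just described amounts to a masked least-squares cost; this sketch is purely illustrative and is not the Tafkaa code itself.

```python
# Spectral-matching weight sketch: for turbid case-2 water, only
# wavelengths longer than 1 um contribute to the fit; for case-1 water
# the weights could instead select the 0.76-0.87 um black pixel bands.
# Entirely illustrative.

def matching_cost(model, observed, wavelengths_um, cutoff_um=1.0):
    cost = 0.0
    for m, o, w in zip(model, observed, wavelengths_um):
        weight = 1.0 if w > cutoff_um else 0.0   # case-2 weighting from the text
        cost += weight * (m - o) ** 2
    return cost

# Only the 1.24- and 2.13-um terms contribute here:
cost = matching_cost([0.05, 0.02, 0.01], [0.06, 0.02, 0.02],
                     wavelengths_um=[0.55, 1.24, 2.13])
```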
One of the issues with the SWIR approach is that SWIR channels are not
always available for such algorithms to be applied, due to sensor limitations,
as with the Geostationary Ocean Color Imager (GOCI) and the Hyperspectral
Imager for the Coastal Ocean (HICO). Several methods have been developed to
address this issue. Two recent developments are discussed below.
For very turbid waters, such as those from the West Pacific, empirical
relationships between K490 and NIR channels (748 and 869 nm) can be used,
in combination with an iterative approach, to apply an atmospheric correction
(Wang et al., 2012). The iteration converges rather fast, typically requiring
fewer than ten iterations before satisfactory results are achieved. The flow
chart of the algorithm is shown in Fig. 5.9.
Another approach is based on the cloud shadow technique (Reinersman
et al., 1998; Lee et al., 2007). The differences of total radiance between
a cloud-shaded pixel and its nearby cloud-free pixel are those from the
atmospheric path radiance, if it can be assumed that the optical properties
of the water remain homogeneous in nearby regions. This information can be
used to remove the contribution of atmospheric transmittance to the sensor.

Figure 5.9 NIR-corrected atmospheric correction algorithm. [Reproduced with permission
from Wang et al. (2012).]
An improved, automated cloud and shadow algorithm (CSA) has been
implemented by using an adaptive sliding box (ASB) of integrated values
(IVs), which are spectral returns from 400 to 600 nm, to detect cloud shadow
automatically, before the atmospheric correction is applied (Amin et al.,
2011). The flow chart of the algorithm is shown in Fig. 5.10.

5.4 Ocean Color Sensors

Over the past 50 years, numerous satellites with various land and ocean
remote sensors have been launched, both active and passive alike, from
visible, to IR, to microwave bands. This is truly an exciting era to be part
of the ocean sensing and monitoring community. It is impossible to list all
of the sensors and their applications in this short monograph, let alone in a
subsection inside a chapter. However, a brief overview of the ocean color
sensors of the past and present is provided, so that readers can have a good
understanding of the most widely used sensors, as well as gain a solid
appreciation for the retrieval algorithm discussions that follow.

Figure 5.10 Automated CSA for atmospheric correction. [Reproduced with permission
from Amin et al. (2011).]
The International Ocean Color Coordinating Group (IOCCG) and its
website provide a wealth of information on ocean
color remote sensing, including sensors, algorithms, publications, and reports.
This section draws heavily from their high-quality summaries and expert
opinions, and readers are encouraged to examine the reports more closely.
From the long list of past sensors, CZCS and SeaWiFS are discussed,
although the Ocean Color Temperature Scanner (OCTS) and the POLarization
and Directionality of the Earth's Reflectance (POLDER) probably should
also be on the list, if not for the limited contents covered here. Current sensors
include MODIS, GOCI, HICO, and the recently launched Visible Infrared
Imager Radiometer Suite (VIIRS).

5.4.1 CZCS
The CZCS is one of the most published and widely used ocean color
sensors in the world, in service from October 24, 1978 to June 22, 1986,
onboard Nimbus-7, built by Ball Aerospace for NASA. It had six bands
(0.443, 0.520, 0.550, 0.670, 0.750, and 11.5 μm), covering a 1556-km swath
with 825-m spatial resolution. It was widely used for chlorophyll concentration
retrieval of the oceans, as well as sediment distributions, ocean
temperature, and currents (Fig. 5.11). It served as a foundation for all
future ocean color sensors, and as a cornerstone of our understanding
of the global carbon cycle. The two spectral bands of 0.443 and 0.670 μm
are strong chlorophyll absorption channels (see Fig. 2.3). The retrieved
chlorophyll concentrations matched in situ measurements reasonably
well (Fig. 5.12).

Figure 5.11 (a) Photograph of Nimbus-7 and CZCS. (b) Pseudo-color image of chlorophyll
distribution from CZCS, near Tasmania. The reddish color indicates higher concentration.
(Courtesy of NASA.)

Figure 5.12 (a) Raw radiance values at 550 nm. (b) Chlorophyll concentration retrieved by
the CZCS algorithm, compared to in situ measurements. (Courtesy of NASA.)

5.4.2 SeaWiFS
SeaWiFS is the follow-up ocean color remote sensor after CZCS. It began
operational data collection on September 18, 1997 onboard SeaStar
(OrbView-2), and ceased collecting data on December 11, 2010. Spatial
resolution is 1100 m (nadir), with a 2800-km swath. The eight optical spectral
bands are 412, 443, 490, 510, 555, 670, 765, and 865 nm, with 20-nm
bandwidth for the first six bands and 40 nm for the other two.
SeaWiFS was contracted for five years of data service, and it greatly exceeded
its design specs and data goals in terms of data quality, accessibility, and
usability (McClain et al., 2004). That reference provides a wealth of
information on data flow, calibration, and data product reviews. An impressive
seasonal development discussed in McClain et al. (2004) is worth mentioning
here. It illustrates very well the seasonal progression of a spring bloom, using data
derived from SeaWiFS ocean color algorithms (Fig. 5.13).

Figure 5.13 Intense spring bloom in the North Atlantic in 2002, captured by SeaWiFS. The
monthly mean chlorophyll concentration clearly shows the propagation of the green wave, a
zonal band with a high concentration of chlorophyll as a result of photosynthetically available
radiation, which increases over the course of spring and summer at higher latitudes.
[Reproduced with permission from McClain et al. (2004) © 2004 Elsevier.]

5.4.3 MODIS
MODIS is the current ocean color remote sensor of choice. It began
service on April 5, 2002, onboard the Earth Observing System (EOS) Aqua
(EOS PM). It consists of 36 spectral bands, with nine bands related to ocean
color remote sensing at 1-km resolution. They are centered at 413, 443,
488, 531, 561, 667, 678, 748 and 869 nm. Two channels (645 and 858 nm)
at 250-m spatial resolution, and five channels (469, 550, 1240, 1630, and
2130 nm) at 500-m spatial resolution, provide high-resolution information
not available in previous sensors. It also measures aerosol properties, radiative
energy flux, land cover, and usage. The same sensor is also aboard a sister
satellite, Terra (EOS AM). Three onboard calibrators provide in-flight
calibration. Ground truth or vicarious calibration is done using the Marine
Optical Buoy (MOBY). The sun-synchronous orbiting sensor is capable of
covering the earth in 1 to 2 days. An example composite of the earth using
MODIS sensors can be seen in Fig. 5.14.

Figure 5.14 Image of the earth taken by MODIS. (Courtesy of NASA.)
It is worth mentioning that Aqua is part of the afternoon train
(A-train), a satellite constellation of four U.S. and French satellites flying in
sun-synchronous orbits at 690-km altitude that pass the equator around
1:30 pm solar time every day, separated by only a few minutes. The collective
sensing capability of the A-train provides unprecedented high-definition, 3D
images of the earth's atmosphere, land, and ocean, employing both active
and passive sensors. The sequence of the current train is Aqua, CloudSat,
CALIPSO, and Aura (Fig. 5.15). Although shown in the figure, PARASOL is
no longer in service, and OCO and Glory failed during launch.

Figure 5.15 A-train illustration. Parasol, OCO, and Glory are not in service. (Courtesy of

5.4.4 MERIS
The Medium Resolution Imaging Spectrometer (MERIS) was one of the most
widely used ocean color sensors. It was the main payload of Envisat-1, launched
by the European Space Agency (ESA) in March 2002, and ceased operations
after losing contact in April 2012. It covered a 1150-km-wide swath at a 300-m
nominal nadir resolution, which was reduced to 1200-m resolution to increase
the SNR. The 15 programmable spectral bands spanned from 390 to 1040 nm.

5.4.5 HICO
Limited in data accessibility until recently, the HICO program is a great
example of what focused research and excellent execution can bring in less
than two years and on a low budget. The
system was designed, built, and tested in only 16 months at the U.S. NRL,
using commercial off-the-shelf (COTS) components, which included a CCD
camera, rotation mechanism, and a spectrometer, as shown in Fig. 5.16
(Corson et al., 2004; Lewis et al., 2009; Lucke et al., 2011).

Figure 5.16 HICO camera with rotation mechanism and open slot. (Courtesy of U.S. NRL.)

HICO sits on the Japanese Experiment Module-Exposed Facility at the
International Space Station (ISS), and has been operational since October 1,
2009. It has 124 bands, covering a spectral range from 380 to 1000 nm, with
100-m ground resolution. It provides a unique opportunity for coastal ocean
characterization in terms of water quality, distribution, bathymetry, and
surface currents. The hyperspectral channels of HICO provide researchers a
powerful set of data products. For example, multispectral data such as those
of MODIS can be modeled using integrated HICO channels, which provide a
proxy that can be compared directly to MODIS channels, but at a higher
spatial resolution. This helps to examine the impacts of spatial
resolution on ocean sensing products, as well as provides an indirect
calibration capability, among other operational benefits.
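The band-integration idea can be sketched as a response-weighted average of narrow channels; the weights below are an illustrative stand-in, not an actual MODIS spectral response function.

```python
# Simulating a broad multispectral band from narrow hyperspectral
# channels via a response-weighted average. The triangular weights are
# an illustrative stand-in for a real spectral response function.

def integrate_band(channel_values, response_weights):
    total = sum(response_weights)
    return sum(v * w for v, w in zip(channel_values, response_weights)) / total

# A flat input spectrum must come back unchanged (weighted-mean sanity check):
proxy = integrate_band([0.02, 0.02, 0.02], [0.25, 0.50, 0.25])
```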

5.4.6 GOCI
GOCI was launched by the Korea Ocean Research and Development
Institute (KORDI) on June 27, 2010, and became operational on April 20,
2011. It is part of the payload of the Communication, Ocean, and
Meteorological Satellite (COMS). Like most ocean color sensors, it has
multiple channels (eight), covering all important bands related to chloro-
phyll absorption (412, 443, 490, 555, 660, 680, 745, and 865 nm). Unlike
existing polar orbiting satellites that cover the same area once a day or
longer, GOCI/COMS observes the Korean peninsula and nearby land and
oceans eight times a day at one-hour intervals. This is important when timely
monitoring of the ocean and land becomes critical in disaster surveying, oil
spill monitoring, red tide mapping and tracking, current and eddy studies,
as well as diurnal physiological changes associated with ecological systems.
An hourly map of chlorophyll concentration and currents taken by GOCI
on May 13, 2011, is shown in Fig. 5.17. When analyzed together with
circulation models, this time series provides valuable information in
understanding ecological changes, including adaptation patterns of plank-
ton, which are clearly visible by their increasing concentration toward the
later part of the day (center part of the figure).

5.4.7 VIIRS
VIIRS is among the youngest of the ocean color sensors, aboard the
Suomi National Polar-orbiting Partnership (NPP) satellite, along with four
other imagers. Designed as a successor to previous ocean color sensors
(including MODIS, which is currently operating beyond its designed service
life), it collected its first image on November 21, 2011. Flying high at
512 miles (824 km) above sea level, with a rotating telescope design similar to
that of SeaWiFS, its wide swath (3000 km) is able to cover a span from the
Great Lakes to Cuba in a single scene (Fig. 5.18), at a remarkable spatial
resolution of 370 m (imaging) / 740 m (radiometric). Out of the 22 bands
available, seven are considered ocean color related (412, 445, 488, 555, 672,
746, and 865 nm).

Figure 5.17 GOCI time series on May 13, 2011, to show hourly changes on the west coast
of Korea. (Reproduced with permission from R. Arnone.)

Figure 5.18 Assembled VIIRS image of our planet taken November 24, 2011. (Courtesy of

Early comparisons between VIIRS, MERIS, and MODIS have been
conducted. It seems that the backscatter and reflectance of VIIRS at 443 nm are
noticeably lower than those of MODIS, which is currently under investigation
(Arnone et al., 2012).

5.5 Retrieval Algorithms

As mentioned at the beginning of this chapter, the naked eye and common
sense can tell us roughly about water quality and the constituents within. The
challenge comes when quantified results with reasonable accuracy are desired.

This is not an easy task, even when sensing just above the water and without
the influence of the atmosphere. Many methods have been developed and
shown to be effective, most of them tuned empirically to a particular region,
specific time of year, or certain type of water.
Due to the complexity of water types and associated constituents within
them, it is next to impossible to obtain an analytical formula for the
inversion problem, which derives component information from the total
combined output in the form of water-leaving radiance (or remote sensing
reflectance). We can classify various algorithms into two loose categories:
the empirical approach, and the semi-analytical approach, where the
recently developed quasi-analytical approach and optimization methods
are also included.

5.5.1 Empirical methods

The empirical approach compares the remote sensing reflectance (or the
water-leaving radiance) to that of the chlorophyll-a concentration (denoted
as [chla]). The simplicity and direct correlations to the observations make
this approach very attractive. It has been widely used for CZCS, SeaWiFS,
and MODIS data processing. To reduce the impacts of surface effects,
sensor calibration issues, and errors in atmospheric correction algorithms,
band ratios are often used. Due to the hinge, or minimal changes of remote
sensing reflectance at 550 nm as a function of [chla], band ratios of
Rrs(443)/Rrs(550), Rrs(490)/Rrs(550), and Rrs(510)/Rrs(550) are often used, mostly
in pairs. The CZCS algorithm uses Rrs(443)/Rrs(550) and Rrs(510)/Rrs(550).
MODIS uses Rrs(443)/Rrs(550) and Rrs(490)/Rrs(550) due to channel
availability (no 510 nm in MODIS). SeaWiFS uses all three ratios listed.
Results show that these ratios decrease linearly as [chla] increases, with the
sharpest decline in Rrs(443)/Rrs(550), and slowest in Rrs(510)/Rrs(550) (Aiken
et al., 1995). To achieve the best SNR (thus data quality), it makes sense to
use the highest response from these ratios: Rrs(443)/Rrs(550) at low [chla],
Rrs(490)/Rrs(550) at middle [chla] values, and Rrs(510)/Rrs(550) at high [chla]
regions. This is exactly what is behind the maximum band ratio algorithm
employed by SeaWiFS.
The SeaWiFS algorithm was developed and tested under the SeaWiFS
Bio-optical Algorithm Miniworkshop (SeaBAM). Sponsored by NASA
starting in 1997, SeaBAM gathered data with in situ chlorophyll and surface
radiance values from 919 different stations, covering vast ranges of [chla]
(also denoted as Ca) from 0.019 to 32.79 mg m⁻³ (O'Reilly et al., 1998). Fifteen
empirical models were examined against these data, and the maximum band
ratio Ocean Color-4 (OC4) method provided the best results. It uses a fourth-
order polynomial fit in the form of
$\log_{10}[\text{chla}] = a_0 + a_1 R_L + a_2 R_L^2 + a_3 R_L^3 + a_4 R_L^4.$    (5.4)

The maximum band ratio RL is defined as

$R_L = \log_{10}\!\left[\max\!\left(\dfrac{R_{rs}(443)}{R_{rs}(555)},\ \dfrac{R_{rs}(490)}{R_{rs}(555)},\ \dfrac{R_{rs}(510)}{R_{rs}(555)}\right)\right].$    (5.5)
The OC4-2009 reprocessed data are shown in Fig. 5.19, taken from the
NASA website (see
R2009/ for more information). One can see that the algorithm works well
in both low and high chlorophyll concentrations (denoted Ca in the figure),
with slight underestimation at high concentrations and overestimation at very
low concentrations (< 0.02). These are likely influenced by the instrument
SNR and packaging effects. It captures the relationship between the maximum
band ratio (MBR) and the [chla] very smoothly, unlike the CZCS algorithm
where sudden changes can happen at [chla] ≈ 1.5 mg m⁻³. The CZCS
algorithm was called a switching algorithm because the band ratio of 443/550
was used when [chla] < 1.5 mg m⁻³, and the 520/550 ratio used when [chla]
> 1.5 mg m⁻³ (O'Reilly et al., 1998), which is the reason behind the jump.
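The maximum band ratio scheme of Eqs. (5.4) and (5.5) is compact enough to sketch directly. The coefficients are those quoted with Fig. 5.19; the negative signs on a1, a3, and a4 are restored here as an assumption and should be verified against NASA's reprocessing documentation before any real use.

```python
import math

# OC4 maximum-band-ratio sketch, Eqs. (5.4)-(5.5). Coefficients are the
# OC4-2009 values quoted in the text; the negative signs on a1, a3, a4
# are an assumption to be checked against NASA documentation.

A = [0.3271, -2.9940, 2.7218, -1.2259, -0.5683]

def oc4_chla(rrs443, rrs490, rrs510, rrs555):
    ratio = max(rrs443, rrs490, rrs510) / rrs555     # maximum band ratio
    R_L = math.log10(ratio)
    log_chla = sum(a * R_L ** i for i, a in enumerate(A))
    return 10.0 ** log_chla                          # [chla], mg m^-3

# Blue-dominated (clear) water yields low [chla]; greener water, higher:
clear = oc4_chla(0.010, 0.008, 0.005, 0.003)
green = oc4_chla(0.004, 0.005, 0.005, 0.004)
```

The input reflectance values are invented for illustration; real retrievals also require quality flags and atmospheric correction upstream.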

5.5.2 Semi-analytical methods

The semi-analytical approach reflects the simple analytical reasoning that
backscattering mainly contributes to the water-leaving radiance, while beam
attenuation reduces the chance of reflectance (or water-leaving radiance).
However, unless light is absorbed, it can be argued that forward scattering
does not actually reduce the chance of reflectance, as eventually photons are
scattered back. Thus, reflectance would be proportional to b_b, and inversely
proportional to c − b_f, which equals a + b_b. In mathematical form, it is [in first-
order form; see Eqs. (2) and (4) in Gordon et al. (1988)]

$R_{rs}(\lambda) = \dfrac{g_1}{Q}\, u, \quad \text{where } u = \dfrac{b_b}{a + b_b},$    (5.6)
which is essentially a more general form that has been validated before,
through analytical means as well as numerical simulation (Monte Carlo)
(Kirk, 1981).
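A forward-model sketch of this first-order relationship follows; the g1 and Q values are illustrative choices, not recommended constants, and the first-order form itself omits the second-order term as the text does.

```python
# First-order forward model of Eq. (5.6): reflectance grows with
# backscattering and shrinks with absorption. g1 and Q are illustrative.

def u_term(a, b_b):
    return b_b / (a + b_b)

def rrs_first_order(a, b_b, g1=0.0949, Q=3.5):
    return (g1 / Q) * u_term(a, b_b)

clear = rrs_first_order(a=0.05, b_b=0.002)
turbid = rrs_first_order(a=0.05, b_b=0.010)     # more backscattering, brighter
absorbing = rrs_first_order(a=0.50, b_b=0.010)  # more absorption, darker
```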
This relationship is accurate in many cases to within 10%
(Gordon et al., 1988). We see that the normalized remote sensing
reflectance R/Q is essentially an IOP (Q is a parameter related to the
surface effects and incident angles of radiance). g1 is a nondimensional
constant with a value around 0.1 [typically 0.09xx for oceanic waters
(Gordon et al., 1988), or 0.08xx for coastal waters (Lee et al., 1999)]. For
simplicity of the discussion, the second-order term is left out intentionally,
without compromising the goal of this text. Using relationships discussed
in Chapter 2, a and bb can be broken down into corresponding
components, where individual empirical and spectral relationships to the
chlorophyll concentration can be brought in (Gordon et al., 1988), thus
the term semi-analytical algorithm. In more complex case-2 waters, this
approach can be applied to generate a set of relationships that include not
only [chla] but also CDOM and other pigments (Carder et al., 1999). With
the knowledge of spectral information of individual components, an
optimization approach can be used to deconvolve the spectral radiance
response into component concentrations (Roesler and Perry, 1995).

Figure 5.19 OC4-2009 reprocessing data, with coefficients a0 through a4 = 0.3271,
-2.9940, 2.7218, -1.2259, and -0.5683. (Courtesy of NASA.)
Another type of inversion method has been devised by Lee et al. (2002,
2005), termed the quasi-analytical approach (QAA). This is an IOP-based
approach, in that it does not seek to directly link the analytical formulas to
chlorophyll or any specific empirical relationships. Rather, it seeks to derive
total absorption and backscattering from the remote sensing signal first, as
they are inherent properties of water. This is done by calculating the total
absorption at 555 nm using Eq. (5.7), exploiting an empirical relationship to
derive a(440) using reflectance ratios between 440 and 555 nm (Lee et al.,
1998). Backscattering at 555 nm can then be calculated using Eq. (5.8).
Values of backscattering at other wavelengths are then obtained using
Eq. (5.9), where the power term Y is estimated using an empirical formula
(Lee et al., 1997); (Lee et al., 2002). Finally, the total absorption is
calculated again using Eq. (5.6) and previously derived values, as shown in
Eq. (5.10). This is a step in the right direction, since IOPs are a better proxy
and closer to the analytical forms required to be applied to broader
scenarios (e.g., ecological models and active sensing), despite the fact that
IOPs are derived based on empirical approaches. This approach is rather
accurate, as it does not rely on the spectral curves of the IOP involved. It is
also fast in terms of processing time, similar to that of the empirical
approach, but with the accuracy of other semi-analytical approaches, such
as spectral optimization. For details, readers are encouraged to explore the
references cited here. As mentioned above, the total absorption at 555 nm
is calculated using

$a(555) = 0.0596 + 0.2\,[a(440)_i - 0.01],$    (5.7)

and backscattering at 555 nm using

$b_{bp}(555) = \dfrac{u(555)\,a(555)}{1 - u(555)} - b_{bw}(555).$    (5.8)

With $b_{bp}(\lambda)$ estimated by Lee et al. (2002) as

$b_{bp}(\lambda) = b_{bp}(555)\left(\dfrac{555}{\lambda}\right)^{Y},$    (5.9)

the total absorption is then obtained from the definition of $u$ in Eq. (5.6):

$a(\lambda) = \dfrac{[1 - u(\lambda)]\,[b_{bw}(\lambda) + b_{bp}(\lambda)]}{u(\lambda)}.$    (5.10)
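The QAA steps in Eqs. (5.7)-(5.10) chain together as follows; the initial estimate a(440)_i and the exponent Y come from the empirical steps cited in the text, so both are simply passed in here with illustrative values.

```python
# QAA sketch chaining Eqs. (5.7)-(5.10). The inputs a440_i and Y come
# from empirical steps not reproduced here; all values are illustrative.

def qaa_sketch(u555, u_lam, a440_i, b_bw555, b_bw_lam, lam_nm, Y):
    a555 = 0.0596 + 0.2 * (a440_i - 0.01)                  # Eq. (5.7)
    b_bp555 = u555 * a555 / (1.0 - u555) - b_bw555         # Eq. (5.8)
    b_bp_lam = b_bp555 * (555.0 / lam_nm) ** Y             # Eq. (5.9)
    a_lam = (1.0 - u_lam) * (b_bw_lam + b_bp_lam) / u_lam  # Eq. (5.10)
    return a555, b_bp555, b_bp_lam, a_lam

# Self-consistency: evaluated back at 555 nm, Eq. (5.10) must return the
# same a(555) produced by Eq. (5.7).
a555, b_bp555, _, a_check = qaa_sketch(u555=0.05, u_lam=0.05, a440_i=0.05,
                                       b_bw555=0.001, b_bw_lam=0.001,
                                       lam_nm=555.0, Y=1.0)
```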

The two-wavelength (440/555) approach of QAA proposed by Lee et al.
(2002, 2005) works generally well with oceanic and most
coastal waters, keeping in mind that the reference wavelength λ0 can be adjusted
according to requirements. For high-absorption waters [a(440) > 0.5 m⁻¹], a
longer wavelength such as λ0 = 640 nm is recommended for a more reliable
measurement of reflectance at that wavelength, as well as a better estimate of
a(λ0) (Lee et al., 2002). The results using a simulated IOCCG dataset can be
seen in Fig. 5.20.

Figure 5.20 Comparison between QAA-derived IOPs and known IOPs, for the synthetic
dataset (sun at 30 deg from zenith). IOPs were derived with Rrs values at 410, 440, 490, 510,
555, and 670 nm as inputs. [Reproduced from IOCCG report No. 5 (2006) with permission.]

5.6 Calibration and Validation

The CCD imagers used in ocean color sensors only provide digital numbers
(DN) after analog-to-digital conversion of the electronic signals generated by
radiation passing through the optical systems. The atmospheric correction issues discussed earlier are based
on the assumption that true values can be obtained by these sensors at the
ground level, without any atmospheric interference. This assumption
essentially means that the sensors are calibrated already, to give correct
radiance values once the switch is turned on. The focus of this section is to
review the involved processes that ensure that true values are coming out of a
sensor after it has been launched into space.

5.6.1 Basic calibration: radiometric and spectral

The basic calibration process involves radiometric and spectral calibration. In
radiometric calibration, the relationship between DN and the radiance has to be
established and ensured across all spectral bands. The ideal regions for any
given sensor are those with the best linear response to the signal level (or log-
linear if a wide dynamic range is involved). Dark currents must be addressed at
this stage. Radiometric calibration also includes corrections for any optical
defects of the system, such as those caused by pixel response differences (which
can lead to striping), or optical design and lens nonuniformity (vignetting).
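The DN-to-radiance step described above amounts to fitting a per-band gain (and offset) over the linear response region after dark-current subtraction. A minimal sketch, using a gain-only model and hypothetical numbers:

```python
# Per-band radiometric calibration sketch: fit radiance L = g * (DN - dark)
# to known source radiances after dark-current subtraction.
# All numbers are hypothetical.
def fit_gain(dn, dark, radiance):
    """Least-squares gain through the origin after dark subtraction."""
    x = [d - dark for d in dn]
    return (sum(xi * li for xi, li in zip(x, radiance))
            / sum(xi * xi for xi in x))

dn = [120, 220, 420, 820]           # measured digital numbers
dark = 20.0                         # dark-current level (DN)
radiance = [5.0, 10.0, 20.0, 40.0]  # known source radiances
g = fit_gain(dn, dark, radiance)    # DN -> radiance gain for this band
```

In practice one gain (or log-linear curve) is fit per spectral band, and the dark level is measured with the shutter closed or from masked pixels.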
Spectral calibration is equally important, if not more so, in ocean color
remote sensing applications, especially when hyperspectral bands are used, and
bandwidths are often narrow. This is to ensure that, in addition to the
radiometric calibration, the response of each band to a given light source is
known as a function of wavelength. The spectral widths of the bands must be
known, along with their locations relative to known peaks (emission lines). The out-of-band
contributions, or crosstalk, must be addressed. This can be critical in certain
applications in ocean color remote sensing, where inelastic scattering (such as
Raman scattering) or fluorescence of chlorophyll and CDOM are involved. In
certain active sensing approaches, very high accuracy of spectral information is
required to correctly measure returns from water and in-water constituents alike.
Geometric calibration is often required to remove any possible aberrations of the
optical system caused by incorrectly registered mechanical and optical components,
especially when multiple bands are involved. Line patterns can be used for prelaunch
calibration. Known landmarks can also be used for geometric registration during
flight. Other special calibrations unique to the sensors might be required as well, such
as determining the polarimetric properties of the sensor response.

5.6.2 In-orbit calibration: vicarious method

The majority of calibration efforts take place at the prelaunch stage, under
controlled conditions with thorough procedures, to fully understand the exact
behavior of the system. However, the lapse of time between calibration and
orbiting is usually lengthy. By the time the sensor is in orbit, or over an
extended service life span, the system components (detectors and filters) and
optical alignment will often change slightly, rendering the previous calibrations
suboptimal. This becomes critical when the accuracy of the sensor is required at
a 0.1% level to successfully retrieve in-water properties, as discussed earlier.
An example of sensor drift over time can be seen in Fig. 5.21. With limited
options in space, various methods have been used to address calibration
issues, including carrying a known light source for onboard calibration, using
solar signatures by applying diffuse reflectance panels and/or looking at the
moon (both applied on VIIRS), or imaging deep space (for background
radiation, mostly used by microwave sensors).

Figure 5.21 SeaWiFS band 7/8 ratio drift over time. [Reproduced with permission from
Eplee et al. (2007).]

The majority of in-orbit calibrations of an ocean color imager are done via
vicarious methods, which involve using known ground targets, either natural or
artificial. Without interference of the atmosphere, these methods are very similar
to laboratory calibrations, with known target characteristics, radiometrically as
well as spectrally. A simple gain curve can be derived to force alignment of the
drift, radiometrically or spectrally. With the interference of the atmosphere, a
direct comparison cannot be made; impacts of the atmosphere must be included.
This can be accomplished by reversing the atmospheric correction algorithm, and
propagating the ground target radiance information back to the top of the
atmosphere, to be compared to the sensor response. This does complicate the
process, considering that instead of one set of unknowns related to the sensor
response, another set of unknowns is introduced that is related to atmospheric
influence. This can be remedied, to some extent, by intercomparison between
similar classes of sensors, in addition to known target sets. MOBY (out of
Hawaii) is one of the calibration sites for ocean color sensors, including SeaWiFS
and MODIS. An example calibration scheme for SeaWiFS is shown in Fig. 5.22,
which includes prelaunch, onboard, in-orbit, and vicarious calibrations.
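In its simplest form, the vicarious adjustment described above reduces to a per-band gain that forces the sensor's reported top-of-atmosphere (TOA) radiance onto the radiance propagated upward from the known target. A toy sketch, with all radiances hypothetical:

```python
# Toy vicarious calibration: per-band gains that map the sensor's
# reported TOA radiance onto the target radiance forward-propagated
# through the atmosphere. All numbers are hypothetical.
def vicarious_gains(l_target_toa, l_sensor):
    return [lt / ls for lt, ls in zip(l_target_toa, l_sensor)]

l_target_toa = [80.0, 70.0, 50.0, 30.0]  # propagated from a MOBY-like target
l_sensor = [82.0, 70.7, 49.0, 30.6]      # radiance reported by drifting sensor
gains = vicarious_gains(l_target_toa, l_sensor)
# Applying gains[i] * l_sensor[i] recovers the target radiance per band.
```

Real processing derives such gains from many match-ups and tracks them over time, which is how drift curves like Fig. 5.21 are built.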

5.7 Summary
Ocean remote sensing provides environmental scientists with one of the most powerful
tools available. It enables synoptic coverage of the ocean, at high temporal
frequencies, and at a lower cost when compared to traditional oceanography
sampling techniques. Ocean color, as a subset of ocean remote sensing, enables us to
monitor and track large-scale events such as red tides or blooms, oil spills, and
productivity with unprecedented capability. New sensors are being implemented,

Figure 5.22 SeaWiFS pre- and postlaunch calibration scheme. [Reproduced with
permission from McClain et al. (2004) 2004 Elsevier.]

built on knowledge gained from previous generations. New algorithms are being
developed and tested to achieve better accuracy across more water types and regions
of coverage. However, all of these advancements still depend on the availability of
natural sunlight. Due to the exponential decay of light penetrating a water column, a
passively sensed signal carrying information about the ocean is heavily weighted
toward surface layers. This vertical integration eliminates any depth information. To
probe the ocean with more control, and to gain vital vertical structure information,
active sensing is needed, and is the main focus of the next chapter.

References

Adler-Golden, S. M. et al. (1999). Atmospheric correction for short-wave
spectral imagery based on MODTRAN4. Proc. SPIE 3753, 61. doi: 10.1117/
Aiken, J. et al. (1995). The SeaWiFS CZCS-type pigment algorithm.
SeaWiFS Tech. Report Series 29.
Amin, R. et al. (2011). Automated detection and removal of cloud shadows
on HICO images. Proc. SPIE 8030, 803004. doi: 10.1117/12.887761.
Arnone, R. et al. (2012). Validation of the VIIRS ocean color. Proc. SPIE
8372, 83720G. doi: 10.1117/12.922949.

Berk, A. et al. (1989). MODTRAN: a moderate resolution model for
LOWTRAN 7. Hanscom Air Force Base: U.S. Air Force Geophysics
Laboratory. Available at < pdf> [Accessed 14 Mar. 2013.]
Carder, K. L. et al. (1999). Semianalytic moderate-resolution imaging
spectrometer algorithms for chlorophyll-a and absorption with bio-optical
domains based on nitrate-depletion temperatures. J. Geophys. Res. 104,
5403–5421.
Corson, M. R. et al. (2004). The HICO program: hyperspectral imaging of the
coastal ocean from the International Space Station. Proc. IEEE IGARSS 6,
4184–4186.
Eplee, J. R. E. et al. (2007). SeaWiFS on-orbit gain and detector
calibrations: effect on ocean products. Appl. Opt. 46(27), 6733–6750.
Fraser, R. S. et al. (1997). Algorithm for atmospheric and glint corrections of
satellite measurements of ocean pigment. J. Geophys. Res. 102(D14),
17107–17118.
Gao, B. C. et al. (2009). Atmospheric correction algorithms for hyperspectral
remote sensing data of land and ocean. Remote Sens. Environ. 113(1), S17–S24.
Gao, B. C. et al. (1993). Derivation of scaled surface reflectances from AVIRIS
data. Remote Sens. Environ. 44(2–3), 165–178.
Gao, B. C. et al. (2000). Atmospheric correction algorithm for hyperspectral
remote sensing of ocean color from space. Appl. Opt. 39(6), 887–896.
Gao, B. C., Green, R. O., and Lundeen, S. (2011). Atmospheric correction
algorithms for surface reflectance retrievals from VSWIR measurements.
Washington, D.C., 23–25 Aug. Pasadena: JPL.
Gordon, H. R. (1978). Removal of atmospheric effects from satellite imagery
of the oceans. Appl. Opt. 17, 1631–1636.
Gordon, H. R. et al. (1988). A semianalytic radiance model of ocean color.
J. Geophys. Res. 93(D9), 10909–10924.
Gordon, H. R. and Wang, M. (1994). Retrieval of water-leaving radiance
and aerosol optical thickness over oceans with SeaWiFS: a preliminary
algorithm. Appl. Opt. 33, 443–452.
Hu, C. et al. (2012). Chlorophyll algorithms for oligotrophic oceans: a novel
approach based on three-band reflectance difference. J. Geophys. Res. Oceans
117, C01011.
Kirk, J. T. O. (1981). A Monte Carlo study of the nature of the underwater
light field in, and the relationships between optical properties of, turbid yellow
waters. Aust. J. Mar. Fresh. Res. 32, 517–532.
Kruse, F. A. (2004). Comparison of ATREM, ACORN, and FLAASH
atmospheric corrections using low-altitude AVIRIS data of Boulder, CO.
Proc. 13th JPL Airborne Geosci. Workshop. Pasadena: Jet Propulsion Lab.
Lee, Z. P. et al. (2007). Water and bottom properties of a coastal
environment derived from Hyperion data measured from the EO-1 spacecraft
platform. J. Appl. Remote Sens. 1(1), 011502. doi: 10.1117/1.2822610.

Lee, Z. P. et al. (2002). Deriving inherent optical properties from water
color: a multi-band quasi-analytical algorithm for optically deep waters.
Appl. Opt. 41, 5755–5772.
Lee, Z. P. et al. (2005). The quasi-analytical algorithm for IOPs. IOCCG
Report. Dartmouth.
Lee, Z. P. et al. (1999). Hyperspectral remote sensing for shallow waters: 2.
Deriving bottom depths and water properties by optimization. Appl. Opt.
38(18), 3831–3843.
Lee, Z. P. et al. (1997). Remote-sensing reflectance and inherent optical
properties of oceanic waters derived from above-water measurements. Proc.
SPIE 2963, 160–166. doi: 10.1117/12.266436.
Lee, Z. P. et al. (1998). An empirical algorithm for light absorption by ocean
water based on color. J. Geophys. Res. 103, 27967–27978.
Lewis, M. D. et al. (2009). The Hyperspectral Imager for the Coastal Ocean
(HICO): sensor and data processing overview. MTS/IEEE Oceans 2009,
pp. 1–9.
Lucke, R. L. et al. (2011). Hyperspectral Imager for the Coastal Ocean:
instrument description and first images. Appl. Opt. 50(11), 1501–1516.
McClain, C. R. et al. (2004). An overview of the SeaWiFS project and
strategies for producing a climate research quality global ocean bio-optical
time series. Deep Sea Res. 51(1–3), 5–42.
O'Reilly, J. et al. (1998). Ocean color chlorophyll algorithms for SeaWiFS.
J. Geophys. Res. 103, 24937–24953.
Reinersman, P. N., Carder, K. L. et al. (1998). Satellite-sensor calibration
verification with the cloud-shadow method. Appl. Opt. 37(24), 5541–5549.
Roesler, C. S. and Perry, M. J. (1995). In situ phytoplankton absorption,
fluorescence emission, and particulate backscattering spectra determined from
reflectance. J. Geophys. Res. 100, 13279–13294.
Wang, M. and Shi, W. (2007). The NIR-SWIR combined atmospheric
correction approach for MODIS ocean color data processing. Opt. Express
15(24), 15722–15733.
Wang, M. et al. (2011). Satellite ocean color remote sensing and its applications
for marine air quality modeling. Presentation at the 3rd Int. Workshop on Air
Quality Forecasting Research, Potomac, MD, Nov. 29–Dec. 1, 2011. Slide 3.
Wang, M. H. et al. (2012). Atmospheric correction using near-infrared bands
for satellite ocean color data processing in the turbid western Pacific region.
Opt. Express 20(2), 741–753.
Chapter 6
Ocean Lidar Remote Sensing
6.1 Introduction
Since the early days of ocean color remote sensing (when CZCS images first
became available), researchers have been looking for means to extend their
reach beyond the surface of the ocean. By recalling Fig. 2.2, it is clear that the
only capable means of exploring the ocean's vertical structure from space is
through the optical window of the EM spectrum, where other bands,
including microwaves and IR, have negligible penetration beyond the very
thin top layer of the ocean, due to their strong absorption by water.
The most transparent channel, depending on the constituents of the water,
is that of the green to blue wavelength. Green wavelengths are especially
favored because low-cost, high-power lasers are readily available in the form
of frequency-doubled Nd:YAG lasers at 532 nm. This is the basis of the
majority of development of active optical sensing from space, or lidar. Lidar is
similar to sonar, and operates much like range-gated active imaging
(discussed in Chapter 4). This chapter focuses on the application of airborne
lidar for sensing the vertical structure of the ocean. The most relevant topics
involve measurements of temperature T, [chla], altimetry, and water depth
(bathymetry or bathy).
In addition to the vertical information active remote sensing provides,
lidars are not limited by the availability of light, meaning that night
operations are possible. In fact, due to atmospheric scattering and
background illumination, lidar performs better at night because of higher
SNR. It is also important to keep in mind the limitations of lidar, such as
limited channels or spectral information. One could argue that lidar is trading
spectral coverage for vertical information, since horizontal coverage can be
remedied by a high repetition rate and fast-moving platform, including
airplanes and satellites.
In this chapter, the basics of lidar systems are covered first, followed by
discussions on their applications in ocean bathymetry, temperature, salinity,
CDOM, chlorophyll, and physical/biological layers in the ocean.


Figure 6.1 Illustration of lidar system components.

6.2 Basic Components and Principles of Lidar Remote Sensing

A lidar system consists of several key components (see Fig. 6.1): the scanning
laser emitter-receiver pair, an inertial measurement unit (IMU), a differentially
corrected GPS, a computer, and a platform matching its design specification
(plane, ship, satellite, or UAV).
Like the LLS discussed in Chapter 4, a scanning mechanism is needed for
the spatial coverage of lidar. Examples are shown in Fig. 6.2. The oscillating
mirror mechanism in the first panel shows that the mirror rocks back and
forth, perpendicular to the platform motion path, as indicated by the large
gray arrow in the lower part of the panel. If the platform is stationary, all of
the dots (laser spots) fall on one straight line. With the platform in motion,
the line stretches out to form a Z-shaped pattern on the ground or sea surface.
A similar explanation applies to the elliptical pattern, where a tilted rotating
mirror or prism is used to steer the laser beam in a circular pattern. The
motion of the platform (e.g., plane) stretches the ground pattern to be
elliptical, as shown in the bottom of the right panel. The rotating polygon
shown in the middle panel essentially breaks down the continuing pattern of
the scanning laser by jumping back to the original starting point (if the
platform is stationary) when one side of the polygon has reached its edge.
When the platform is in motion, the repeated scanning line becomes stretched
and follows the same slant path of the starting line as the Z-shaped pattern.
While any of these patterns can achieve the desired results, at higher altitudes

Figure 6.2 Lidar scanning mechanism. [Adapted from Moskal (2008).]

where atmospheric influence has to be taken into account, the elliptical pattern
is the most convenient choice. This is due to the fact that similar path lengths
can be used in atmospheric correction, assuming that a small horizontal
variability is involved.
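The ground patterns of Fig. 6.2 follow from simple kinematics: the scanner's cross-track (or circular) deflection plus the platform's along-track motion. A sketch with made-up scan parameters (the 55-m half-swath, scan rates, and pulse rate are illustrative only):

```python
import math

def zigzag_points(v, scan_hz, half_swath, n, rate_hz):
    """Oscillating-mirror (Z-shaped) pattern: a cross-track triangle
    wave superimposed on the platform's along-track motion."""
    pts = []
    for i in range(n):
        t = i / rate_hz
        phase = (t * scan_hz) % 1.0
        y = half_swath * (4.0 * abs(phase - 0.5) - 1.0)  # triangle wave
        pts.append((v * t, y))                           # (along, cross)
    return pts

def elliptical_points(v, scan_hz, radius, n, rate_hz):
    """Rotating tilted mirror/prism: a circle stretched into an ellipse
    by the platform motion."""
    pts = []
    for i in range(n):
        t = i / rate_hz
        ang = 2.0 * math.pi * scan_hz * t
        pts.append((v * t + radius * math.sin(ang), radius * math.cos(ang)))
    return pts

zig = zigzag_points(v=50.0, scan_hz=20.0, half_swath=55.0, n=1000, rate_hz=1000.0)
ell = elliptical_points(v=50.0, scan_hz=20.0, radius=55.0, n=1000, rate_hz=1000.0)
```

Plotting `zig` and `ell` reproduces the Z-shaped and elliptical ground tracks described above; the rotating-polygon pattern is the same triangle wave with the return sweep removed.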
Scattered photons from the medium and/or targets carry encoded
information about the medium or target. The two basic types of interactions
are: elastic scattering, where the wavelength of the photon or its frequency
does not change, only the direction does; and inelastic scattering, where the
photon is absorbed and re-emitted at a different frequency. Some inelastic
processes have been mentioned briefly before, such as Raman scattering and
fluorescence from chlorophyll and CDOM. For elastic scattering, the lidar
equation can be written as a function proportional to the backscattering,
inversely proportional to the square of the distance to the surface (power
reduction law), and of course, has the two-way intensity reduction in water by
the diffuse attenuation (Fig. 6.3):

Pr(z) = (Cx/Hs²) [Kb(z)/(4π)] exp[−2 ∫₀ᶻ Kd(r) dr],  (6.1)

where Pr is the power received by the lidar, Hs is the range to the surface, Kd
is the diffuse attenuation coefficient of the water, and z is the water depth.
Kb is related to backscattering efficiency. Cx is the lidar constant, which
here includes the laser power, receiver area, angle, FOV overlap, quantum
efficiency, transmission, and bandwidth limits. Detailed discussions of lidar

Figure 6.3 Geometry for lidar equation and waveform. (For illustration only; not to scale.)

equations can be found in numerous publications (Kim, McClain, and
McLean, 1980; Rees, 2001; Josset et al., 2010) that interested readers can
consult.
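Equation (6.1) is straightforward to evaluate numerically for an assumed Kd profile; the constants and profiles below are illustrative only:

```python
import math

def lidar_return(z, cx, hs, kb, kd, dz=0.1):
    """Elastic return from depth z per Eq. (6.1):
    Pr(z) = (Cx/Hs^2) * (Kb(z)/4pi) * exp(-2 * integral_0^z Kd dr)."""
    steps = int(z / dz)
    integral = sum(kd(i * dz) * dz for i in range(steps))  # left Riemann sum
    return (cx / hs**2) * (kb(z) / (4.0 * math.pi)) * math.exp(-2.0 * integral)

# Illustrative inputs: constant Kd = 0.1 m^-1 and constant Kb.
kb = lambda z: 0.01
kd = lambda r: 0.1
p10 = lidar_return(10.0, cx=1.0, hs=200.0, kb=kb, kd=kd)
p20 = lidar_return(20.0, cx=1.0, hs=200.0, kb=kb, kd=kd)
```

The ratio p20/p10 equals exp(−2 × 0.1 × 10), the two-way attenuation over the extra 10 m of water, which is why deeper portions of the waveform decay exponentially.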
The pulse emitted from the laser travels through the atmosphere, interacts
with the surface, the water column, and the bottom (if optically shallow
enough), then returns to the receiver to be digitized. Notice that due to
scattering effects the width of the beam is significantly stretched, with the
altitude, bathymetric, and water column information encoded. Our task is to
examine and decode the information from the waveforms. Next, we begin
with bathymetry sensing, which is the most mature application of lidar to
date, followed by lidar sensing with chlorophyll fluorescence, temperature
probing in the ocean with Raman and Brillouin scattering, and brief
discussions on fish and biological layer detection in the ocean.

6.3 Lidar in Depth Sounding: Altimetry and Bathymetry

Figure 6.3 illustrates that it is rather straightforward to estimate the distance
from which the signal has returned, by locating peaks of the return signals in
the waveform. If the exact GPS location of the plane is known, along with the
platform orientation (pitch, yaw, and roll, to refine the directional returns),

altitude can be calculated. The same can be applied to the bathymetry

calculation. Based on the time of flight, the range can be calculated as half
of the round-trip distance indicated by the peaks of interest, which yields the
single-trip distance R:

R = vg T / 2,  (6.2)

where vg is the group velocity of the light traveling in the medium, and T is the
round-trip time of flight of the pulse between the transceiver and the target.
The accuracy (or residual error) of the range estimate is proportional to the
pulse rise time tr, and inversely proportional to the SNR (denoted as S0 here).
Considering that averaging multiple returns also increases SNR, for platform
velocity v, laser repetition rate p, and laser angular width Δθ, the number of
pulses that can be averaged (without overlapping the next pulse) is
N ≈ Hs Δθ/(v/p). When √N is applied for ensemble averaging, the residual
error is (Rees, 2001)

ΔR = [vg tr/(2S0)] √[v/(p Hs Δθ)].  (6.3)

For a typical airborne lidar system flying at 200-m altitude with a speed of
50 m/s, a 5-ns pulse rise with a 4-mrad spread, and S0 = 1, Eq. (6.3) gives an
accuracy of 0.19 m at a 1-kHz repetition rate. It is worth mentioning that, to
reduce ambiguity of the pulse return, there is a limit to how many pulses can
be averaged; one cannot expect to keep reducing the residual errors by
increasing repetition rate p.
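The range and residual-error relations [Eqs. (6.2) and (6.3)] can be checked directly against the numbers quoted above:

```python
import math

def lidar_range(vg, t_round_trip):
    """Eq. (6.2): single-trip range from round-trip time of flight."""
    return vg * t_round_trip / 2.0

def residual_error(vg, tr, s0, v, p, hs, dtheta):
    """Eq. (6.3): range error after sqrt(N) ensemble averaging,
    with N = Hs * dtheta / (v / p) pulses available."""
    n = hs * dtheta / (v / p)
    return vg * tr / (2.0 * s0 * math.sqrt(n))

VG = 3.0e8  # group velocity in air, ~ speed of light (m/s)
# The worked example above: 200-m altitude, 50 m/s, 5-ns rise time,
# 4-mrad spread, S0 = 1, 1-kHz repetition rate.
dr = residual_error(vg=VG, tr=5e-9, s0=1.0, v=50.0, p=1000.0,
                    hs=200.0, dtheta=4e-3)
# dr = 0.1875 m, i.e., the ~0.19-m accuracy quoted in the text.
```

Here N = 16 pulses fit within one beam footprint, so averaging buys a factor of 4 over the single-pulse error of 0.75 m.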
Airborne and spaceborne laser altimetry has been in use since 1995, beginning
onboard the space station Mir at a height of 350 km. The Geoscience Laser
Altimeter System (GLAS) onboard the Ice, Cloud, and land Elevation Satellite
(ICESat, 2003–2009) was able to provide vertical range resolution at 0.1-m
accuracy from an altitude of 700 km. High range accuracy is very helpful in
topographic mapping and other civil engineering applications. This is of
high relevance for sea ice, in that it can be used to estimate the subsurface
volume based on the portion of ice afloat above sea level. The successor
satellite, ICESat-2, is scheduled to be launched in 2016; the interim solution is
the Airborne Topographic Mapper, used on a fixed-wing aircraft.
Bathymetry, or depth mapping of the ocean, is very important in
understanding ocean circulation, biology, chemistry, and geological activities.
Although generated by the gravity fields of the moon and sun, and the rotation
of the planet itself, currents and tides are mostly controlled by the shapes,
distributions, and locations of ocean shorelines, basins, seamounts, and
ridges. Biological activities are greatly influenced by the vertical movement
of currents and related nutrient availability. Due to slow morphological
changes in deep oceans, detailed bathymetry also helps researchers understand

lithospheric changes and development of the ocean by analyzing mantle

convection patterns, plate boundaries and movements, oceanic ridges and
plateaus, and volcanoes. Most seafloor mapping is tedious and costly, and is
typically accomplished by ships equipped with echo sounders. A survey of
ocean bathymetry at a resolution of 100 m requires approximately 125 ship-
years of time to complete. Fortunately, researchers have found ways to use
altimetry to reveal deep ocean bathymetric information, due to the fact that
surface ocean topography often matches the variability beneath (Dixon et al.,
1983; Smith and Sandwell, 1994), as shown in Fig. 6.4.
Ocean bathymetry in shallow regions has been the main focus of ocean
lidar research, an example of which is the Airborne Oceanographic Lidar
(AOL) of NASA, which began in the 1970s and continues today. NASA
found that cross-channel coupling must be removed before bottom return
waveforms can be used to estimate bathymetry (Hoge, Swift, and Frederick,
1980; Hoge, Berry, and Swift, 1986). It is worth mentioning that AOL was
designed as an experimental unit, enabling it to be operated in multiple modes
such as altimetry, bathymetry, fluorescence, fluorescence decay, and
bathyfluorometry (Hoge, Swift, and Frederick, 1980).

Figure 6.4 The use of altimetry to infer bathymetry of the ocean. [Reproduced from the
General Bathymetric Chart of the Oceans (GEBCO).]

Specialized bathymetric lidars, especially those from commercial companies,
are widely used for routine surveys. The most widely used systems include
the Scanning Hydrographic Operational Airborne Lidar Survey (SHOALS,
by Optech), the Laser Airborne Depth Sounder (LADS, by Tenix), and the
Hawk Eye (by Airborne Hydrography AB).
The SHOALS system employs pulsed IR (1064 nm) and green (532 nm)
lasers in an elliptical scanning pattern, and collects returned photons with five
receivers. IR and green wavelengths are used for surface and penetration
purposes, respectively. SHOALS typically flies on an airplane at 200-m
altitude with a 60-m/s cruising velocity, which gives a 4-m spot spacing and a
110-m swath width. With a 400-Hz repetition rate and a 6-ns pulse width,
SHOALS is capable of vertical and horizontal accuracies of 15 cm and 3 m,
respectively, when operated with a differential GPS (Irish and Lillycrop,
1999). SHOALS can provide up to 3,000 water depth soundings per second at
an International Hydrographic Organization (IHO) Order-1 standard. When
operating with an Airborne Laser Terrain Mapper (ALTM), the system is
capable of providing more than 100,000 topographic measurements per
second. Routine surveys of coastal regions have been conducted by the Army's
Joint Airborne Lidar Bathymetry Technical Center of Expertise (JALBTCX),
using the Compact Hydrographic Airborne Rapid Total Survey (CHARTS)
system, which consists of SHOALS-3000 integrated with an Itres CASI-1500
hyperspectral imager. The system is shown in Fig. 6.5. CHARTS typically
delivers 3-kHz bathymetric data and 20-kHz topographic data (Optech).
A next-generation version of CHARTS, Coastal Zone Mapping and
Imaging Lidar (CZMIL), has been developed by Optech for JALBTCX,
aimed to enhance the rapid environmental assessment capabilities of the
system. This is accomplished by quadrupling the receiver size and doubling the
spatial resolution. CZMIL is also designed for highly automated processing of
acquired data, to produce physical and environmental information of mapped
coastal areas. These include bathymetric maps, seafloor reflectance, chloro-
phyll and CDOM concentrations in the water column, water column IOPs
(attenuation), and bottom classification. It is currently under acceptance
testing for operational surveys, as of the end of 2012.

6.4 Lidar in Temperature Measurements

Ocean temperature measurement is critical in our understanding of oceanic
processes, as solar input is the main driving force of ocean dynamics, from
wind-driven circulation, to thermohaline circulation, to the radiation available
for photosynthesis that is responsible for the majority of biogeochemical
activities in the ocean. Due to the limitations of EM waves discussed earlier,
IR sensors can only measure the SST, which is the skin temperature (on the

Figure 6.5 CHARTS components and operational setup. (Courtesy of JALBTCX, U.S.
Army Corps of Engineers.)

order of microns) of the ocean (see Chapter 8 for details). The same limitation
applies to microwave bands. Since the only possible penetration is that of the
visible wavelength, currently there are two types of approaches for linking
subsurface temperature to lidar return waveforms. Both involve inelastic
scattering of the medium. One is based on Raman scattering of the water, and
the other is based on Brillouin scattering of phonon interactions with photons
(i.e., density wave interactions with EM waves).
The fundamental physical reason behind Raman scattering, or the spectral
shift of inelastic scattering, is that there are multiple forms of liquid water
molecules in the ocean: the polymer type and the free or monomer type. Their
ratio is a function of temperature and can be used to infer the temperature of
the water by locating their corresponding spectral locations, as shown in Fig. 6.6.
the O-H bond stretching frequency, when excited, is significantly different in
monomers and polymers (Leonard, Caputo, and Hoge, 1979; Leonard, 1980). It is
worth mentioning that depolarization of the Raman scattering returns can also be

Figure 6.6 Raman scattering for probing subsurface ocean temperatures. [Reproduced
with permission from Leonard, Caputo, and Hoge (1979).]

used for temperature probing. The ratio of the two polarization spectra, one
parallel to the incident beam and the other orthogonal, is found to be a
function of temperature (Cunningham and Lyons, 1973).
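Operationally, both Raman techniques reduce to calibrating a measured band (or polarization) ratio against temperature and then inverting that calibration in the field. A sketch with a purely hypothetical linear calibration dataset:

```python
def fit_linear(x, y):
    """Ordinary least squares for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical lab calibration: Raman band (or polarization) ratio vs.
# water temperature -- these numbers are invented for illustration.
ratios = [0.80, 0.86, 0.92, 0.98]
temps = [5.0, 10.0, 15.0, 20.0]  # deg C
a, b = fit_linear(ratios, temps)

def temperature_from_ratio(r):
    """Temperature inferred from a measured Raman ratio."""
    return a * r + b
```

As noted below, the offset term of such a calibration is the critical quantity for absolute field temperatures (Becucci et al., 1999).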
Using the lidar equation in combination with the parameters of a typical
Raman lidar system, Leonard (1980) calculated the performance surface of
subsurface water temperature measurement as a function of depth and water
quality. Results are shown in Fig. 6.7. The figure shows that an accuracy of

Figure 6.7 Typical system operating map, assuming a mixed-layer depth of 100 m.
[Reproduced with permission from Leonard (1980).]

0.1 °C at depths up to 100 m can be achieved using a laser with a 337.1-nm
wavelength. Field efforts have confirmed an accuracy of 1 °C for temperature
gradients up to 30 m (Leonard, 1980). This is in line with recent studies under
continuous wave (cw) and pulsed laser excitation, where researchers found
that pulse energy does not affect the ratio marker. However, calibration,
especially determination of offset values, can be critical in determining
absolute values of the temperature measured in the field (Becucci et al., 1999).
Another technique for sensing subsurface temperature structures using
lidar is by means of stimulated Brillouin scattering (SBS). Brillouin scattering
is the result of photons interacting with acoustical waves in the water. Due to
thermal energy, sound waves are present in all directions and at various
wavelengths in liquid water above 10 K in temperature, although only waves
that satisfy Bragg's law (Born and Wolf, 2005) interact with the light. With
the exception of zero-degree incident angles (i.e., forward scattering), a
Doppler shift is imparted to the incoming photon, along the direction of the
vibration, which is (Hirschberg, Wouters, and Byrne, 1980; Fry, 2012)
Δλ = ±(2vn/c) λ sin(θ/2),  (6.4)

where v is the sound velocity, n is the index of refraction, c is the speed of light
in vacuum, and θ is the angle of incidence (Fig. 6.8).
A measurement of the frequency shift of incoming light provides
measurement of the sound speed profile in the ocean. Since sound speed is
a function of temperature and salinity, when one is known, the other can be
calculated. As salinity variability is usually low due to the conservative nature
of the water mass (unless run-off or discharge are involved), and is usually
known, the temperature can be derived. It can be shown that the Brillouin
frequency shift is about the same as the sound frequency, which means for a
backscattered photon of 530-nm wavelength, at a sound speed of 1500 m/s,
the frequency shift is 7.5 GHz (Fig. 6.9) (Fry, 2012). To measure such a small
frequency shift, which is on the order of 0.001 nm, an interferometric
approach must be applied. An example of a Fabry–Pérot interferometer setup
is shown in Fig. 6.10.

Figure 6.8 Brillouin scattering. [Reproduced with permission from Hirschberg, Wouters,
and Byrne (1980).]

Figure 6.9 Frequency shift by typical Brillouin scattering in ocean water. [Reproduced with
permission from Fry (2012).]

Figure 6.10 Fabry–Pérot interferometer used to measure Brillouin shift and line width.
[Reproduced with permission from Fry (2012).]

The challenge with this approach lies in the mismatch in detection FOV in
field deployments. The lidar-excited photons returning from the ocean
typically diverge several degrees in relation to the receiver, while the
Fabry–Pérot acceptance angle is on the order of 1/10 deg. To probe the vertical
structures of the ocean, a fast-gating technique is needed on the order of
nanoseconds; this prevents the use of scanning techniques in association with
the narrow FOV of an interferometer. A practical solution is to use the edge
detection technique shown in Fig. 6.11, where an absorption cell is used to
detect the amount of overlap between the Brillouin scattering peak (triangle

Figure 6.11 Edge detection technique to detect Brillouin shift. [Reproduced with permission
from Fry (2012).]

used) and the absorption line (rectangular shape used, both simplified for
convenience of illustration). Thus, the amount of shift can be obtained by
comparison of the split beams that arrive at two different detectors, as shown
in the figure. A detailed implementation, using an excited-state Faraday
anomalous dispersion optical filter (ESFADOF), has been tested in the
laboratory, and demonstrated a temperature detection capability on the order
of 1/10 deg in natural environments (Schorstein et al., 2007; Popescu and
Walther, 2010; Fry, 2012). For the emitters, it is important to note that
portable, low-power, high-repetition-rate lasers generate the same level of
Brillouin-scattered photons as that induced by low-repetition-rate, high-output
lasers (Schorstein and Walther, 2009). Field validation of this approach has
yet to be conducted.
6.5 Lidar Fluorosensing of Subsurface Chlorophyll and
Colored Dissolved Organic Matter
Attempts to study ocean primary producers using lasers date back at least
four decades (Kim, 1973). As mentioned earlier, one of the mission objectives
of AOL is to study the lidar sensing of subsurface chlorophyll concentrations
using stimulated fluorescence (Fig. 6.12). This is attractive, since it is
equivalent to using a field flow-through-type fluorometer to measure
chlorophyll concentration, without the need for empirical data interpretation
associated with ground truth validation. For the AOL fluorescence lidar, the transmitting and receiving paths are essentially the same as in the bathymetric configuration. The difference starts at beam splitting, where a majority of the
photons at the excitation wavelength, and the fluorescent returns, are sent into
the fluorosensing detector assembly (Hoge and Swift, 1981).
It is worth mentioning that due to the complicated process involved
in pigment fluorescence delays, only total fluorescent returns are used
for total biomass estimation. No vertical biomass distribution is obtained.
Time-resolved fluorescence lidar has been demonstrated in vegetation
stress studies, where the chlorophyll fluorescence decay signature is used with
Ocean Lidar Remote Sensing 157

Figure 6.12 Airborne lidar sensing of fluorescent returns. [Reproduced courtesy of SEOS
Project EU Learning Management System.]

subnanosecond excitation pulses (Schmuck and Moya, 1994; Ounis et al., 2001). Challenges still remain, however, such as the impacts of multiple scattering of photons by the water column, which need to be addressed. To reduce the
uncertainties associated with sea surface variations, laser power fluctuations,
subsurface optical property variations due to patchiness, and atmospheric
influence, a water Raman signal is often used to normalize fluorescence return
(Hoge et al., 2005). For a typical 532-nm lidar, the Raman O–H shifted wavelength is 645 nm (Hoge and Swift, 1981), whereas fluorescence occurs at 683 nm. CDOM
concentration can be obtained using a similar approach, with the help of
tripled frequency Nd:YAG lasers, where fluorescent signals at 450 nm are
used, in combination with Raman normalization at 402 nm (Hoge, Vodacek,
and Blough, 1993).
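The Raman-normalization step amounts to a simple channel ratio. The sketch below is illustrative only: the calibration constant k and the channel readings are made-up numbers, and a real retrieval (e.g., Hoge et al., 2005) is considerably more involved:

```python
# Raman-normalized fluorescence: dividing the chlorophyll fluorescence
# channel (683 nm for 532-nm excitation) by the water Raman channel
# (645 nm) cancels factors common to both paths, such as laser power
# fluctuations, sea surface effects, and attenuation.
def chl_index(f_683: float, raman_645: float) -> float:
    """Dimensionless fluorescence/Raman ratio, proportional to chlorophyll."""
    return f_683 / raman_645

def chl_concentration(f_683: float, raman_645: float, k: float) -> float:
    """Chlorophyll estimate; k is a hypothetical calibration constant."""
    return k * chl_index(f_683, raman_645)

# A 30% drop in laser power scales both channels equally and leaves
# the index unchanged -- the point of the normalization.
assert abs(chl_index(120.0, 400.0) - chl_index(0.7 * 120.0, 0.7 * 400.0)) < 1e-12
print(chl_concentration(120.0, 400.0, k=2.5))   # ~0.75 with these made-up numbers
```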
A coupling effect has been observed in the field from chlorophyll and
CDOM absorption measurements, and has been confirmed by retrieved values
from passive ocean-color remote sensing using the band ratio, as well as by the
analysis of global a_chl and a_CDOM. A simultaneous retrieval of chlorophyll and
CDOM with lidar fluorescence was developed and successfully compared with
in situ measurements, as well as SeaWiFS OC4 retrieved concentrations (Hoge
et al., 2005). An example is shown in Fig. 6.13, where improvement is clearly

Figure 6.13 Example results of airborne lidar chlorophyll biomass retrieval with (a) chlorophyll-fluorescence-only approach, and (b) simultaneous chl-CDOM approach, compared with (c) SeaWiFS composite data, May 2–26, 1998 (OC4v4). [Reproduced with permission from Hoge et al. (2005).]

visible between the chlorophyll-fluorescence-only approach and the chlorophyll-CDOM simultaneous approach.
One of the underlying assumptions in these approaches is that the
fluorescent returns are not a function of temperature. This seems true for
CDOM and Raman scattering, as they are both temperature invariant.
However, slow changes have been observed for algal fluorescence, which increases with temperature (Lin, 2001). This could bias
retrieved chlorophyll concentrations. The same study also examined
wavelength dependency of the excitation and fluorescence channel, and
found small variations in the Raman signals, although not enough to render
the method ineffective. Optimal bands for coastal studies, however, lie between 490 and 535 nm.
Fluorescent yield is not a constant: research has shown a two-fold yield difference during daytime observations, and a significant ten-fold variability

Figure 6.14 Day and night fluorescence efficiencies observed. [Reproduced with
permission from Chekalyuk (2011).]

between day and night [Chekalyuk (2011); see Fig. 6.14]. Physiological factors must therefore be accounted for; pigment functional groups are under investigation and should be included in the next-generation AOL-4.

6.6 Other Lidar Applications

Although it does not fall under any of the main categories already discussed,
the Fish Lidar, Oceanic, Experimental (FLOE) system (Fig. 6.15) has been
successfully deployed in many joint research projects, and has yielded
interesting results regarding schools of fish, salmon distribution, bubble
impacts, polarization characteristics, and subsurface internal waves (Churnside,
Wilson, and Tatarskii, 1997; Churnside and Wilson, 2004; Churnside and
Thorne, 2005; Churnside, 2008; 2010).

Figure 6.15 FLOE flow chart. (Courtesy of NOAA.)


The details of FLOE can be found on NOAA's ESRL website. It uses a 532-nm laser, pulsed at 30 Hz with a 12-ns pulse width and 100-mJ pulse energy. FLOE's receiver differentiates linear co- and cross-polarization with a 17-mrad FOV, digitized at 1 GHz. A short-exposure (20-ns) intensified camera can be used to take a
snapshot of the backscattered photons, essentially providing a shadowgraph
when a large object or fish is present.
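As a quick sanity check on these receiver specifications, the 1-GHz digitization rate sets the depth sampling interval of the return in water. A back-of-the-envelope sketch (the seawater refractive index is a nominal assumption):

```python
# One digitizer sample spans dz = c * dt / (2 * n) in water; the factor
# of 2 accounts for the round trip of the pulse.
C = 3.0e8        # speed of light in vacuum, m/s
N_WATER = 1.33   # nominal refractive index of seawater (assumed)

def depth_sample_interval(sample_rate_hz: float) -> float:
    """Depth (m) covered by one sample of the digitized return."""
    dt = 1.0 / sample_rate_hz
    return C * dt / (2.0 * N_WATER)

dz = depth_sample_interval(1e9)   # FLOE's 1-GHz digitizer
print(f"depth sample interval: {dz * 100:.1f} cm")   # ~11 cm
```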
Fish populations have been studied with the help of echo sounders in situ.
One example is shown in Fig. 6.16, where mullet schools are confirmed and
clearly visible on lidar returns (as a function of depth) as well as in the along-track information (Churnside and Thorne, 2005). This approach is very attractive, since it is less intrusive, more cost effective, and requires much less time to cover larger areas.
Optical layers have been observed in numerous coastal areas, as well as in
open ocean gyres by FLOE (Fig. 6.17). These layers provide a first-hand look
at the vertical distribution of particle concentrations unavailable to passive
remote sensing techniques, or any other active non-EO channels. When used
as a tracer of water mass, ocean currents, or boundaries of density variation,
this can provide the much needed input for forecasting ocean weather and
model validation. A great example is internal waves, typically generated by
tidal currents with large spatial and temporal scales. These nonlinear waves
propagate through density layers in the ocean with various amplitudes, which in turn affect their speed and mixing characteristics. Mixing at density
boundaries is critical in ocean energy dissipation and nutrient transport.
Internal waves are typically measured using in-water instruments that observe
changes in density, temperature, and current intensity and directions. Their

Figure 6.16 Mullet school from (a) echo sounders compared to (b) FLOE lidar returns.
[Reproduced with permission from Churnside, Demer, and Mahmoudi (2003) © 2003 Oxford University Press.]

Figure 6.17 Various optical layers observed in the field. [Reproduced with permission from
Churnside and Donaghay (2009) © 2009 Oxford University Press.]

signature can often be indirectly observed from space, as subsurface current features, in cases involving strong currents, eventually affect surface morphology, which can be seen by microwave radiometers via surface roughness.
This provides the required large-scale coverage, although depth information
is still unavailable by the current approach. The location, distribution, and
intensity of internal waves are critical in understanding ocean mixing
mechanisms, biological activities, and sound propagation, and have important
implications in ASW.
Due to buoyancy changes at density layers in the ocean (or pycnocline), phytoplankton and zooplankton tend to gather at these depths, creating a
natural optical scattering layer, as shown in Fig. 6.17. This has been detected
by airborne lidar (Vasilkov et al., 2001; Churnside and Donaghay, 2009).
A recent study carried out in the West Sound on Orcas Island by NOAA and the U.S. NRL further validates this theory, along with new observations to infer Kolmogorov-type turbulence energy dissipation at these layers
(Churnside, 2013).

6.7 Summary
With advances over the last 40 years, ocean lidar is gaining momentum in
ocean sensing and monitoring applications, for reasons discussed throughout
this chapter. Lidar does not rely on external light availability and provides the
only direct mechanism to probe the vertical structures of the ocean from space.
New lasers are becoming available, with more choices in wavelength, power,
pulse width, repetition rates, and ongoing reductions in cost and size. With

these advantages, along with more available platforms thanks to the rapid
advancement of UAVs and fast electronics, there is little doubt that we will see
wider application of lidar across all fields of oceanography: physical, chemical, biological, and geological. Combined, these observations could hold the key to understanding the global biogeochemical cycle that affects the climate of our planet.
References

Becucci, M. et al. (1999). Accuracy of remote sensing of water temperature by Raman spectroscopy. Appl. Opt. 38, 928–931.
Chekalyuk, A. (2011). Oceanographic laser fluorosensing: a historical overview. LOOPP, LIDAR Observations of Optical and Physical Properties Workshop, La Spezia, 15–17 Nov.
Churnside, J. H. (2008). Polarization effects on oceanographic lidar. Opt. Express 16, 1196–1207.
Churnside, J. H. (2010). Lidar signature from bubbles in the sea. Opt. Express 18, 8294–8299.
Churnside, J. H., Demer, D. A., and Mahmoudi, B. (2003). A comparison of lidar and echosounder measurements of fish schools in the Gulf of Mexico. ICES J. Marine Sci. 60, 147–154.
Churnside, J. H. and Donaghay, P. L. (2009). Thin scattering layers observed by airborne lidar. ICES J. Mar. Sci. 66, 778–789.
Churnside, J. H. and Thorne, R. E. (2005). Comparison of airborne lidar measurements with 420 kHz echo-sounder measurements of zooplankton. Appl. Opt. 44, 5504–5511.
Churnside, J. H. and Wilson, J. J. (2004). Airborne lidar imaging of salmon. Appl. Opt. 43, 1416–1424.
Churnside, J. H., Wilson, J. J., and Tatarskii, V. V. (1997). Lidar profiles of fish schools. Appl. Opt. 36, 6011–6020.
Cunningham, K. and Lyons, P. A. (1973). Depolarization ratio studies on liquid water. J. Chem. Phys. 59, 2132.
Dixon, T. H. et al. (1983). Bathymetric prediction from Seasat altimeter data. J. Geophys. Res.-Oceans 88, 1563–1571.
Fry, E. (2012). Remote sensing of sound speed in the ocean via Brillouin scattering. Proc. SPIE 8372, 837207. doi:10.1117/12.923920.
Hirschberg, J. C., Wouters, A. W., and Byrne, J. D. (1980). Ocean parameters using the Brillouin effect. In Ocean Remote Sensing Using Lasers, NOAA Tech. Memo. ERL PMEL-18, pp. 29–44.
Hoge, F. E., Berry, R. E., and Swift, R. N. (1986). Active-passive airborne ocean color measurement: 1. Instrumentation. Appl. Opt. 25, 39–47.
Hoge, F. E. et al. (2005). Chlorophyll biomass in the global oceans: airborne lidar retrieval using fluorescence of both chlorophyll and chromophoric dissolved organic matter. Appl. Opt. 44, 2857–2862.
Hoge, F. E. and Swift, R. N. (1981). Airborne simultaneous spectroscopic detection of laser-induced water-Raman backscatter and fluorescence from chlorophyll a and other naturally occurring pigments. Appl. Opt. 20, 3197–3205.
Hoge, F. E., Swift, R. N., and Frederick, E. B. (1980). Water depth measurement using an airborne pulsed neon laser system. Appl. Opt. 19, 871–883.
Hoge, F. E., Vodacek, A., and Blough, N. V. (1993). Inherent optical properties of the ocean: retrieval of the absorption coefficient of chromophoric dissolved organic matter from fluorescence measurements. Limnol. Oceanogr. 38, 1394–1402.
Irish, J. L. and Lillycrop, W. J. (1999). Scanning laser mapping of the coastal zone: the SHOALS system. ISPRS J. Photogramm. 54, 123–129.
Josset, D. et al. (2010). Lidar equation for ocean surface and subsurface. Opt. Express 18, 20862–20875.
Kim, H. H. (1973). New algae mapping technique by the use of an airborne laser fluorosensor. Appl. Opt. 12, 1454–1459.
Kim, H. H., McClain, C. R., and McLean, J. (1980). A laser fluorosensing study: supplementary notes. In Ocean Remote Sensing Using Lasers, NOAA Tech. Memo. ERL PMEL-18.
Leonard, D. A. (1980). Experimental field measurements of subsurface water by Raman spectroscopy. In Ocean Remote Sensing Using Lasers, NOAA Tech. Memo. ERL PMEL-18, pp. 45–74.
Leonard, D. A., Caputo, B., and Hoge, F. E. (1979). Remote sensing of subsurface water temperature by Raman scattering. Appl. Opt. 18, 1732–1745.
Lin, C. S. (2001). Characteristics of laser-induced inelastic-scattering signals from coastal waters. Remote Sens. Environ. 77, 104–111.
Moskal, L. M. (2008). LiDAR fundamentals. In Workshop on Site-Scale Applications of LiDAR on Forest Lands in Washington. Seattle: Center for Urban Horticulture, Univ. of Washington.
Optech, Inc. (2004). Field test report for CHARTS. Prepared for Joint Airborne Lidar Bathymetry Technical Center of Expertise. Document No. 0020905/Rev A.
Ounis, A. et al. (2001). Dual-excitation FLIDAR for the estimation of epidermal UV absorption in leaves and canopies. Remote Sens. Environ. 76.
Popescu, A. and Walther, T. (2010). On an ESFADOF edge-filter for a range resolved Brillouin-lidar: the high vapor density and high pump intensity regime. Appl. Phys. B-Lasers O. 98, 667–675.
Rees, G. (2001). Physical Principles of Remote Sensing. Cambridge: Cambridge University Press.
Schmuck, G. and Moya, I. (1994). Time-resolved chlorophyll fluorescence spectra of intact leaves. Remote Sens. Environ. 47, 72–76.
Schorstein, K. et al. (2007). A fiber amplifier and an ESFADOF: developments for a transceiver in a Brillouin lidar. Laser Phys. 17, 975–982.
Schorstein, K. and Walther, T. (2009). A high spectral brightness Fourier-transform limited nanosecond Yb-doped fiber amplifier. Appl. Phys. B-Lasers O. 97, 591–597.
SEOS Project EU Learning Management System.
Smith, W. H. F. and Sandwell, D. T. (1994). Bathymetric prediction from dense satellite altimetry and sparse shipboard bathymetry. J. Geophys. Res.-Sol. Ea. 99, 21803–21824.
Vasilkov, A. P. et al. (2001). Airborne polarized lidar detection of scattering layers in the ocean. Appl. Opt. 40, 4353–4364.
Chapter 7
Microwave Remote Sensing of the Ocean
7.1 Overview
The main focus of the previous chapters has been on the optical methods
of ocean sensing and monitoring, in part due to the fact that the only
transmission window of seawater is in the visible range. However, longer
wavelengths of the EM spectra are less sensitive to atmospheric influences,
and approaches have been developed for sensing the oceans surface
temperature, salinity, sea surface height, and roughness by both passive and
active microwave sensors. Microwave radiation from thermal emissions is
typically in the 1- to 100-GHz (0.3-m to 3-mm) region of EM bands. Due to
the longer wavelengths, there is little scattering effect from molecules and
aerosols, including pollen, ash, and fog. Therefore, atmospheric influences are
smaller and easier to correct, making microwave sensing one of the more
attractive choices in ocean sensing with 24/7 coverage. Some excellent reviews
on the applicability of microwave sensing of the ocean can be found in Swift
(1980), with in-depth discussions in Robinson (1995) and Rees (2001). The key
areas of sensing in both passive and active microwave methods are discussed
in this chapter to provide the necessary background knowledge for further study.

7.2 Passive Sensing of Sea Surface Temperature, Salinity, and Sea Ice

Due to the weak thermal emissivity of microwave bands from the ocean, it
is desirable to have a larger instantaneous FOV or footprint to increase the
SNR by integration and averaging. This task is made more difficult by the
fact that the weak photon energy from microwave bands is not enough to
excite electrons between atomic states or molecular bandgaps of imaging
materials. The direct measure of incoming radiation has to be replaced by a
conducting metal, where the electric currents induced by the incoming EM


waves can be amplified and detected as a function of radiation intensity.

Microwave antennas thus serve as the medium between the incoming radiation
and fluctuating voltage in a circuit to be detected. The combined effect would
be the limit on the spatial resolution of microwave sensing, typically on the
order of tens of kilometers, if not for the recent advancements in active sensing techniques discussed later in this chapter.
As we know, the SST is an important quantity for both oceanic and
atmospheric processes. It directly links the heat transfer between the ocean
and atmosphere, a process that also determines water vapor concentration
in the atmosphere. It is important to note that the ocean is the driving
engine of the planet's climate patterns, both above and below sea level. The
heat capacity of the top three meters of the ocean is equivalent to the entire
atmospheric counterpart above it. Similarly, the top ten meters of water in
the ocean are the equivalent of the water content in the atmosphere (Gill
1982). This explains why the anomalous temperature change of the surface
ocean (i.e., El Niño and La Niña) can significantly impact global weather
patterns. Studies have also suggested that an SST accuracy of 0.2 K
would enable efficient monitoring of global climate change using only a 20-year
history (Casey and Cornillon 1999). The direct implication and application of
oceanic processes are obvious when observing retrieved large-scale surface
temperature patterns, where eddies and currents can be clearly discerned. This
helps to confirm and validate model predictions, especially those involving
vertical mixing and upwelling events.
Following Plancks law, blackbody radiation of any material can be
expressed as a function of its temperature and the wavelength under
examination (Born and Wolf 2005):
E(T) = (2hν³/c²) · 1/(e^(hν/kT) − 1),  (7.1)

where h is Planck's constant (6.623 × 10⁻³⁴ J s), ν is the frequency of the EM wave, and k is Boltzmann's constant (1.38 × 10⁻²³ J K⁻¹). For the microwave wavelength and temperature range being examined, hν/kT ≪ 1. Applying a Taylor expansion to the first order, where e^x ≈ 1 + x (x ≪ 1), it is easy to see that

E(T) ≈ 2ν²kT/c².  (7.2)
This relationship implies that microwave radiation from the ocean is strictly
a function of its temperature. For an ideal blackbody, this is all we need
to measure its temperature, by observing the radiation from a certain wave-
length. Unfortunately, the ocean is far from a blackbody, whose emissivity is
unity. Here the emissivity of a surface is defined as the ratio between the
actual radiation observed to its blackbody values. The emissivity of ocean
Microwave Remote Sensing of the Ocean 167

water is only about 30% on average (Swift 1980). It is a function of not only its
temperature, but also many other factors such as salinity, and reflectivity due
to sea surface roughness.
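The approximation leading to Eq. (7.2) is easy to verify numerically. The sketch below compares the full Planck expression with its Rayleigh–Jeans form at a representative microwave frequency and SST:

```python
import math

H = 6.626e-34   # Planck's constant, J s
K = 1.381e-23   # Boltzmann's constant, J/K
C = 3.0e8       # speed of light, m/s

def planck(nu: float, t: float) -> float:
    """Full blackbody expression, Eq. (7.1)."""
    return (2.0 * H * nu**3 / C**2) / (math.exp(H * nu / (K * t)) - 1.0)

def rayleigh_jeans(nu: float, t: float) -> float:
    """Low-frequency (h*nu << k*T) approximation, Eq. (7.2)."""
    return 2.0 * nu**2 * K * t / C**2

nu, t = 10e9, 290.0                            # 10 GHz, a typical SST in kelvins
print(f"h*nu/kT = {H * nu / (K * t):.2e}")     # ~1.7e-3: the expansion is valid
print(rayleigh_jeans(nu, t) / planck(nu, t))   # ~1.0008
# Radiance is effectively linear in T, the basis of microwave radiometry.
```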
Considering the ocean under equilibrium conditions, the amount of loss should equal the amount of gain, such that emission at a specific wavelength equals the incident radiation. Following the conservation of energy, we can see that the emissivity e can be expressed as e = 1 − R, where R represents the corresponding (horizontally or vertically polarized) reflectance.
R is a function of the dielectric constant, which can be modeled using
Debye relaxation as a function of frequency. The dielectric constant is also
a function of temperature and salinity. The net combined effect on the
brightness temperature of the ocean, as a function of SST and salinity
variation, can be seen in Fig. 7.1 (Swift 1980).
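The chain from dielectric constant to brightness temperature can be illustrated in a few lines of code. The dielectric constant used below (ε = 70 − j60) is a rough assumed L-band value for seawater, not a number from the text:

```python
import cmath

def nadir_emissivity(eps: complex) -> float:
    """Emissivity e = 1 - R at normal incidence, where R is the
    Fresnel power reflectance |(sqrt(eps) - 1) / (sqrt(eps) + 1)|^2."""
    n = cmath.sqrt(eps)
    r = (n - 1.0) / (n + 1.0)
    return 1.0 - abs(r) ** 2

def brightness_temperature(eps: complex, sst_kelvin: float) -> float:
    """T_B = e * SST in the Rayleigh-Jeans regime."""
    return nadir_emissivity(eps) * sst_kelvin

eps_seawater = 70.0 - 60.0j   # assumed illustrative L-band value
print(f"emissivity ~ {nadir_emissivity(eps_seawater):.2f}")   # ~0.32
print(f"T_B ~ {brightness_temperature(eps_seawater, 290.0):.0f} K")
```

The resulting emissivity of roughly 0.3 is consistent with the average value quoted above, which is why the ocean's brightness temperature sits far below its physical temperature.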
It is interesting to note that for the higher frequency shown (2.65 GHz),
the relationship between the SST and brightness temperature is almost
linear over a wide range of SST. This relationship seems to work best for
low-salinity waters. For typical salinity of the open ocean at 35 ppt, this
holds true to about 30 °C before it flattens out. This limitation can be easily
overcome by using another frequency, such as the one shown in Fig. 7.1(b)
(1.43 GHz). Note that for the lower frequency shown, the brightness
temperature is more responsive to the change in salinity. This is because
ionic conductivity strongly depends on temperature and salinity, and
conductive currents dominate over the displacement currents at lower
frequencies. This is the reason why the L-band (1.4 GHz) is used for sea
surface salinity (SSS) retrieval. The S-band (2.65 GHz) is more responsive

Figure 7.1 Brightness temperature (in kelvins) of sea surface as a function of SST, salinity,
and different frequencies: (a) f 2.65 GHz and (b) f 1.43 GHz. [Reproduced with
permission from Swift (1980) © 1980 D. Reidel Publishing Co.]

to SST change (Fig. 7.1) and is not restricted by other land-based sources (such as those at 5 GHz). This is why the S-band was used by earlier SST sensors. Higher-frequency bands can be used, especially when coastal regions (up to 50 km offshore) are not of primary concern, due to contamination from soil-moisture diurnal cycles over land.
SST map using a higher frequency (C-band, 6.9 GHz) can be seen in
Fig. 7.2. The map originates from the Advanced Microwave Scanning
Radiometer for EOS (AMSR-E) onboard EOS Aqua (Emery, Brandt et al.
2006), as part of the A-train. The same study also shows that these values
match up very well when compared to nearby in situ temperature
measurements from Argo floats (Fig. 7.3).
Since brightness temperature is clearly a function of both SSS and SST,
other measurements and assumptions are often required. For the open ocean
this is relatively easy, as salinity typically stays in a narrow range of a few ppt
(parts per thousand) at most, allowing SSS retrieval using a simple assumption
of a fixed value. SSTs retrieved by thermal-infrared (TIR) sensing (approximately 10 μm) are often used to provide surface temperature in SSS algorithms,
especially in coastal regions. SST retrieval from well-calibrated IR and
microwave sensors shows good agreement over time, as shown in Fig. 7.4.
More details can be found in the next chapter.

Figure 7.2 Weekly SST from AMSR-E given in Celsius. [Reproduced with permission from
Emery, Brandt, et al. (2006) © 2006 John Wiley & Sons.]

Figure 7.3 Microwave SST from AMSR-E compared to temperatures measured by an

Argo float near the surface on the Labrador Sea. [Reproduced with permission from Emery,
Brandt, et al. (2006) © 2006 John Wiley & Sons.]

Figure 7.4 Microwave SST from AMSR-E compared to IR SST over a summer period. [Reproduced with permission from Emery, Brandt, et al. (2006) © 2006 John Wiley & Sons.]

SSTs detected from microwave bands originate not only from the skin of the ocean's surface (as in the TIR approach, which measures a tenth of a millimeter or less, depending on wavelength), but can also include temperature signatures below the surface, down to a centimeter or more,

Figure 7.5 Microwave penetration depth as a function of (a) salinity at 1.43 GHz and (b) frequency. [Reproduced with permission from Swift (1980) © 1980 D. Reidel Publishing Co.]

depending on salinity (Fig. 7.5). This is important for understanding ocean

heat exchange and the associated temperature anomalies at the sea's surface, where discrepancies often arise when compared to the IR approach (Rees 2001); these are due to surface cooling, or evaporation, under constantly
changing conditions. SST detected by microwave sensors is also dependent on
several other ocean surface properties, including polarization, SSS, roughness,
and surfactants. Due to these relationships, it is critical to accurately retrieve
each one before meaningful comparisons can be made. Details about the
thermal SST approach are discussed in a separate chapter.
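The penetration behavior in Fig. 7.5 follows from the imaginary part of the complex refractive index, sqrt(ε). A sketch under assumed illustrative permittivities (the values below are round numbers, not taken from the figure):

```python
import cmath, math

def power_penetration_depth(eps: complex, freq_hz: float) -> float:
    """Depth (m) at which transmitted power falls to 1/e. The field
    decays as exp(-k0 * Im(sqrt(eps)) * z), so power decays twice as
    fast: delta = 1 / (2 * k0 * Im(sqrt(eps)))."""
    k0 = 2.0 * math.pi * freq_hz / 3.0e8   # free-space wavenumber
    return 1.0 / (2.0 * k0 * cmath.sqrt(eps).imag)

# Assumed permittivities (eps' + j*eps'', loss term positive) at 1.43 GHz:
saline = 70.0 + 60.0j   # salty water: large conductive loss
fresh = 80.0 + 5.0j     # fresher water: much smaller loss
print(f"saline: {100 * power_penetration_depth(saline, 1.43e9):.1f} cm")
print(f"fresh:  {100 * power_penetration_depth(fresh, 1.43e9):.1f} cm")
# Fresher (less conductive) water admits the wave deeper, consistent
# with the salinity dependence shown in Fig. 7.5.
```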
Sea surface roughness affects reflectance (as seen in previous chapters),
which in turn affects the emissivity of the ocean, and thus the brightness
temperature seen by microwave sensors. Additionally, the viewing angle
also affects brightness temperature detected by a sensor, as shown in the
first study carried out by Stogryn (1967) in vertically (Fig. 7.6) and
horizontally polarized (Fig. 7.7) states. The assumption was that large-scale
roughness could be approximated by an ensemble of reflective plane facets,
following the classic Cox and Munk rms slope distribution (Cox and Munk
1954), which in turn is a function of wind speed. Notice that sea surface
roughness as a function of wind speed does not seem to significantly affect
the observed vertically polarized component of the brightness temperature.
This is especially true at a 50-deg viewing angle, where the detected
brightness temperature is invariant to wind speed or surface roughness
variations. This is the exact reason why the local cone angles of the Scanning Multichannel Microwave Radiometer (SMMR) and the Aqua AMSR-E are around 50 deg for detection of SST. At the same angle,
horizontally polarized components of the brightness temperature can be
used to retrieve surface wind speed, as suggested by Fig. 7.7, to about 2-m/s
accuracy (Robinson 1995).
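The Cox and Munk slope statistics behind this facet model are a simple linear function of wind speed; their classic clean-surface fit for the total mean-square slope can be sketched as follows (coefficients as commonly quoted from Cox and Munk (1954); treat them as illustrative):

```python
# Cox and Munk (1954) clean-surface fit: total mean-square slope of the
# sea surface grows linearly with wind speed U (m/s):
#     sigma^2 = 0.003 + 5.12e-3 * U
def mean_square_slope(wind_speed: float) -> float:
    """Total mean-square surface slope (dimensionless)."""
    return 0.003 + 5.12e-3 * wind_speed

for u in (0.0, 5.0, 10.0, 15.0):
    print(f"U = {u:4.1f} m/s  ->  sigma^2 = {mean_square_slope(u):.4f}")
# A rougher facet ensemble modulates the horizontally polarized
# brightness temperature, which is how wind speed is retrieved.
```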

Figure 7.6 Vertically polarized brightness temperature of ocean surface sensed at different
viewing angles, under different wind conditions, at 19.4 GHz. [Reproduced with permission
from Stogryn (1967) © 1967 IEEE.]

It has been discovered that SSS is related to low-frequency microwave emissivity (less than 5 GHz), due to the inverse correlation between emissivity and ionic conductivity. This is because ionic conductivity significantly increases the imaginary part of the dielectric constant when compared to pure water, and therefore enhances the Fresnel reflectance of the sea's surface.
This in turn reduces emissivity. However, the low frequency of the signal
translates into a rather large footprint at satellite altitude, rendering the
approach ineffective. Nevertheless, experimental results from airborne plat-
forms have demonstrated strong potential (Burrage, Wesson et al. 2008).
Another interesting application of passive microwave remote sensing
across the ocean is oil slick detection. The approach is analogous to the
multilayer reflectance model used by opticians, common in multicoating
applications, but instead uses a microwave equivalent (Ulaby, Moore et al.

Figure 7.7 Horizontally polarized brightness temperature of ocean surface sensed at

different viewing angles, at different wind speeds, at 19.4 GHz. [Reproduced with permission
from Stogryn (1967) © 1967 IEEE.]

1981; Trieschmann, Hunsaenger et al. 2004). The physics behind this

approach is that Fresnel reflection coefficients of layered surfaces can be significantly modulated by the thickness of the layers through constructive or destructive interference. These are essentially the same physics behind antireflective coatings on eyeglasses. An example of oil slick brightness
temperature can be seen in Fig. 7.8, where brightness temperature as a
function of oil layer thickness at several frequencies is presented. As the layer
thickness increases, radiation intensity dampens due to increased optical path

Figure 7.8 Brightness temperature of an oil layer over water. [Reproduced with permission
from Trieschmann, Hunsaenger et al. (2004).]

Figure 7.9 Sea ice emissivity compared to seawater and land. (Courtesy of MetEd
COMET, University Corporation for Atmospheric Research.)

and the increased imaginary part of the dielectric constant due to increased
mixing with water.
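The thickness modulation described above is ordinary thin-film interference. A minimal normal-incidence sketch using the standard two-interface (Airy) formula; the refractive indices for air, oil, and seawater and the wavelength are assumed round numbers, not values from the text:

```python
import cmath, math

def film_reflectance(n1: float, n2: float, n3: complex,
                     thickness: float, wavelength: float) -> float:
    """Power reflectance of a film (medium 2) on a substrate (medium 3)
    at normal incidence, from the two-interface Airy formula."""
    r12 = (n1 - n2) / (n1 + n2)
    r23 = (n2 - n3) / (n2 + n3)
    beta = 2.0 * math.pi * n2 * thickness / wavelength   # phase across the film
    num = r12 + r23 * cmath.exp(-2j * beta)
    den = 1.0 + r12 * r23 * cmath.exp(-2j * beta)
    return abs(num / den) ** 2

wavelength = 0.008                            # 8 mm, i.e., ~37 GHz (assumed)
n_air, n_oil, n_sea = 1.0, 1.5, 7.0 - 3.0j    # assumed indices
for d_mm in (0.0, 0.5, 1.0, 1.5, 2.0):
    r = film_reflectance(n_air, n_oil, n_sea, d_mm * 1e-3, wavelength)
    print(f"oil thickness {d_mm:.1f} mm -> reflectance {r:.2f}")
# Reflectance (and hence emissivity and brightness temperature)
# oscillates with film thickness, which is what Fig. 7.8 exploits.
```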
Sea ice coverage is an important parameter in climate and meteorological
studies, especially with the steady temperature rise observed in the last few
decades that caused large areas of polar ice to melt and glaciers to recede. It
has been observed that the emissivity of sea ice is significantly higher than that of seawater below 50 GHz (Fig. 7.9), which is interesting, since ice appears brighter, or hotter, than the surrounding water. The emissivity difference between new and multiyear ice is also significant enough to discern their distributions when multiple frequencies are used. Lower VHF frequencies have
been shown to be able to estimate sea ice thickness due to their low
attenuation coefficients. The active sensing of sea ice provides more details,
such as roughness and thickness, and is discussed in the next section.
Finally, it is worth mentioning that a microwave antenna passively collects
not only emissions from the ocean surface, but also background radiation from
the galaxy and atmosphere. It should be briefly noted that strong galaxy
noise and background radiation are mostly under 1 GHz, while atmospheric
oxygen and water vapor contributions rise sharply above 10 GHz, as shown
in Fig. 7.10 (Swift 1980). Therefore, 1 to 10 GHz is most suitable for passive
microwave sensing needs.

7.3 Active Microwave Sensing of the Ocean

Despite the benefits of passive microwave sensors, in particular their ability to penetrate clouds, which allows for 24/7 observation even with a narrow window (1 to 10 GHz), their weak SNR limits the achievable system resolution, and they lack

Figure 7.10 External influence in addition to microwave emission from the ocean. Note
that strong galaxy noise is under 1 GHz, while strong water vapor and oxygen emission from
the atmosphere is above 10 GHz. [Reproduced with permission from Swift (1980) 1980
D. Reidel Publishing Co.]

other capabilities that active sensing provides. Three major types of active microwave sensors (also known as radar) are discussed next, namely the altimeter, the scatterometer, and imaging radar.

7.3.1 Altimeter
The radar altimeter is a ranging system very similar to that of lidar. The
difference is that the wavelength of radar is much longer (or lower frequency),
at centimeter scale (or 10-GHz frequency), compared to the submicron
wavelength of lidar. This difference results in a larger beamwidth and
therefore larger footprint on the ocean surface when operated from space. An
antenna is used to transmit short pulses to the earths surface. The returned
signals are collected by the same antenna, before being passed on to the
detector for digitization and analysis. Range information h (Fig. 7.11) is
derived from the returned waveform, similar to that of lidar (in principle). If
the orbit altitude H is known, along with the geoid surface height G (which is
the ocean at rest), the sea surface height (SSH) can be calculated as
SSH = H − h − G,  (7.3)
as shown in Fig. 7.11. SSH is the sea-level variability relative to the geoid,
influenced by many atmospheric and oceanographic processes, such as

Figure 7.11 Satellite altimetry h can be used to calculate SSH with knowledge of the orbit
height and geoid.

geostrophic flows, tides, atmospheric pressure, and density changes due to

diurnal and seasonal variations. Altimetry information can also be used to
derive bathymetric information of the open ocean, as discussed in Chapter 6.
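Equation (7.3) and the accuracy requirements discussed below can be made concrete with a small sketch; all numbers here are made-up illustrative values, not mission data:

```python
import math

def sea_surface_height(orbit_h: float, range_h: float, geoid_g: float) -> float:
    """SSH = H - h - G, Eq. (7.3); all heights in meters, referenced to
    the same ellipsoid."""
    return orbit_h - range_h - geoid_g

def ssh_error(sigma_orbit: float, sigma_range: float, sigma_geoid: float) -> float:
    """Root-sum-square of the three independent error contributions."""
    return math.sqrt(sigma_orbit**2 + sigma_range**2 + sigma_geoid**2)

# Illustrative: a ~1336-km orbit, a measured range, a ~30-m geoid undulation.
ssh = sea_surface_height(1_336_000.00, 1_335_970.30, 29.45)
print(f"SSH = {ssh:.2f} m")                                   # 0.25 m
print(f"error = {ssh_error(0.02, 0.02, 0.01) * 100:.1f} cm")  # 3.0 cm RSS
```

The error budget adds in quadrature, so every one of the three terms must be held to centimeter level: 2-cm orbit and range errors plus a 1-cm geoid error already consume a 3-cm budget.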
The early missions of various altimeters (Skylab in 1973, GEOS-3 in 1975,
Seasat in 1978, GEOSAT from 1985–1990, and ERS-1 and -2 from 1991 to
2000) resulted in the more successful altimeters of today, including the dual-
frequency Topography Experiment (TOPEX), and JASON-1, -2, and -3
(Fig. 7.12). The dual-frequency altimeter (often referred to as ALT or TOPEX
instead of the cumbersome acronym Positioning Ocean Solid Earth Ice
Dynamics Orbiting Navigator or POSEIDON) uses 5.3 GHz (C-band) and
13.6 GHz (Ku-band), flying alongside a second altimeter from France with a single frequency at 13.65 GHz, at an orbit of 1336 km. Four other instruments

Figure 7.12 Past, present, and future spaceborne altimetry systems. (Courtesy of NASA.)

Figure 7.13 Instrumentations and measurements for TOPEX and successor missions.
(Courtesy of NASA.)

were also onboard TOPEX, including the TOPEX microwave radiometer

(TMR). The other three instruments used for precision orbit determination
(POD) purposes were the Laser Retroreflector Array (LRA), a NASA GPS
receiver, and the Doppler Orbitography and Radiopositioning Integrated by
Satellite (DORIS) (Fig. 7.13). The TMR is a nadir-viewing radiometer
operating at 18-, 21-, and 37-GHz frequencies that enables estimations of
columnar water vapor and sea surface scalar wind. Following the success
of TOPEX, JASON-1 and -2 have been launched with similar setups in terms
of LRA, frequencies, and orbit heights, but have much less weight to reduce
atmospheric drag. These features allow for effective use of established ground
stations for TOPEX for tracking and calibration purposes. TOPEX was cited
as the most successful ocean experiment of all time by Munk.
To obtain the 2- to 3-cm accuracy in SSH required for ocean modeling
(Wunsch and Stammer 1998; Martin 2004), high accuracy of the three
variables listed in Eq. (7.3) must be achieved. This includes: (1) a model of the
geoid, (2) measurement of the orbit height more than a thousand kilometers
above, and (3) measurement of the altimetry, all within this limit. The geoid
can be calculated to satisfactory accuracy by using expansions of spherical
harmonics fitted to the altimeter data, such as the Earth Geopotential Model
96 (EGM96) (Wunsch and Stammer 1998). Determination of the satellite
orbit height to the desired accuracy requires a combination of satellite GPS,
laser, and radio ranging measurements, which are carried out by LRA and its
ground stations, the DORIS beacon, and the NASA GPS receiver onboard
the satellite, as shown in Fig. 7.13.

Figure 7.14 TOPEX-measured sea surface height, overlaid with geostrophically computed velocity vectors. Small values and spatial wavelengths less than 500 km are omitted for clarity. [Reproduced with permission from Wunsch and Stammer (1998) © 1998 Annual Reviews.]

The error budget of the altimetry measurements includes influences from the sea state bias and uncertainties,
atmospheric influence, and altimeter noise. Sea surface roughness by smaller waves has a strong influence on the reflectivity of short-pulsed signals (~3 ns), with the effective radar scattering cross section (σ0) reduced with increased wind (Martin 2004). Larger swells, on the other hand, affect the return by tilting, which in effect blurs or defocuses the beamwidth. For example, the footprint of TOPEX increases from 12 × 6 km to 20 × 15 km under a significant wave height of 3 m (Martin 2004). Free electrons in the ionosphere
affect the phase velocity of EM waves, which can result in 30-cm errors in
altimeter estimation if not corrected. By using dual frequencies in TOPEX and
its later successors, such issues can be mitigated.
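The dual-frequency mitigation exploits the 1/f² dependence of the ionospheric range delay: ranges measured at two frequencies can be linearly combined so that the delay term cancels. A small sketch, with illustrative numbers rather than mission data, shows the idea:

```python
# Sketch of the dual-frequency ionospheric correction used by TOPEX-class
# altimeters. The ionospheric range delay scales as 1/f^2, so ranges at two
# frequencies can be combined to cancel it. All numbers are illustrative.
f_ku, f_c = 13.6e9, 5.3e9        # TOPEX dual frequencies (Hz)
h_true = 1_335_980.0             # assumed true range (m)
k = 0.12 * f_ku**2               # ionospheric term giving a 12-cm delay at Ku band

h_ku = h_true + k / f_ku**2      # measured ranges, each biased by the ionosphere
h_c = h_true + k / f_c**2        # C band is delayed more (lower frequency)

# Iono-free linear combination: the weights sum to 1 and cancel the 1/f^2 term.
w = f_ku**2 / (f_ku**2 - f_c**2)
h_free = w * h_ku + (1 - w) * h_c   # residual bias is ~0 after correction
```

Substituting the two biased ranges shows that the 1/f² terms cancel exactly, which is why a single-frequency altimeter cannot make this correction on its own.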
An example of retrieved SSH and geostrophic flows can be seen in Fig. 7.14. Small features with wavelengths less than 500 km and equatorial flows are omitted for clarity (Wunsch and Stammer 1998). Total errors of SSH are 4.1 cm for TOPEX, and 2.5 cm for JASON-1 and -2.

7.3.2 Scatterometer
The second type of active microwave sensor is the scatterometer, which uses a
backscattered signal (as a function of detector azimuth angle) to detect surface
winds. Its functionality parallels the radiometer or radiance sensors discussed
in Chapter 2. It uses a small, narrow FOV (or pencil beam) to measure the surface backscattering cross section σ0, a process quite different from an imager or imaging radar, where a broad FOV is illuminated by an active beam, and returned signals are subdivided to form a 2D image.
From earlier discussions, it is understood that the surface wind field is very important in the coupled atmosphere–ocean system, where wind-driven
currents provide major kinetic energy sources to the ocean, along with
materials (water vapor, aerosol, and CO2) and heat exchange, to list a few.
Ocean wave height distribution and direction of propagation, and the
direction of swells, are determined by the vector wind field, which affects
navigation, defense, oil exploration, fisheries, and tourism operations on a
daily basis. Vector wind field retrieval by remote sensing is particularly
important in areas lacking shore-based monitoring facilities such as weather
stations, HF radars, moored buoys, or ships of opportunity. Determination of
vector wind fields provides not only regional but also global sensing input for
large-scale monitoring, modeling, and predictions.
Scatterometry dates back to the application of radar during World War
II. Although unknown at the time, radar images across oceans are often
corrupted by noise (sea clutter) caused by surface-wind-induced ocean wave
backscattering. The first attempt to link radar responses to the wind field was in the 1960s. By 1973, the first scatterometer was flown on Skylab to demonstrate the possibility of a spaceborne scatterometer. This was followed by the Seasat-
A Satellite Scatterometer (SASS) from June to October 1978 that provided
accurate measurements of wind velocities from space. ERS-1 carried a single-swath scatterometer, the Advanced Microwave Instrument (AMI), from 1991 to 1997, that measured the wind field. The NASA Scatterometer (NSCAT) was the
first dual-band sensor to fly after SASS in 1996, using a Ku-band fan-beam
system. It provided continuous, accurate, global vector wind measurements
over the oceans, until its premature loss due to satellite power failure. The
exciting results brought by NSCAT to various applications prompted an accelerated plan to reduce the data gap in scatterometry databases. The QuikSCAT mission launched SeaWinds in June 1999, operating in the Ku
band, followed by a second SeaWinds on ADEOS-2 in 2003. The ESA
launched their first C-band scatterometer, ASCAT, in 2006 onboard Metop-A.
An experimental polarimetric radiometer (scatterometer) built by the U.S. NRL called WindSat was launched in 2003 onboard Coriolis. A short list of past
scatterometers and their characteristics can be seen in Table 7.1 (Liu, 2002).
A scatterometer typically retrieves the vector wind field by taking multiple
views of the same ocean surface area at different azimuth angles and
polarizations. Radar pulses from the scatterometer are polarized in horizontal
or vertical orientations from the antenna. Strong returns are mostly in the
same polarization states, meaning that, typically, horizontal-to-horizontal
(HH) or vertical-to-vertical (VV) returns are used.
To detect wind direction, constructive backscattering of radar pulses
forming Bragg scattering is viewed by the satellite antenna at an angle of 15 to
30 deg (Fig. 7.15).

Table 7.1 Past spaceborne scatterometers and their parameters. [Reproduced with kind permission of Springer from Liu (2002) © 2002 Springer.]

Figure 7.15 Example of multiple looks of the same surface area by a scatterometer. (Courtesy of NOAA.)

Figure 7.16 Spectra of measured wind (a) zonal and (b) meridional components from various scatterometers. Notice the reference k^(-5/3) curve marked by the black dotted line. [Reproduced with permission from Vogelzang, Stoffelen, et al. (2011) © 2011 John Wiley & Sons.]

The relationship between σ0 and the surface wind field is called the geophysical model function, and is a function of the polarization (HH or VV), incident angle of the pulse, wind speed (10 m above the sea surface), and its direction.
surface), and its direction. With a fixed wind, incident angle, and polarization,
an empirical relationship between σ0 and the azimuth angle can be established
using what is referred to as the two-cosine function, by a truncated Fourier
series (Martin 2004). Surface wind measurements from buoys and other satellites
are used to derive the model function, and are continuously updated to reflect
advances in our understanding. Physical models, however, are still lacking.
Even so, results from various sensors compare well to each other, as shown in
the spectra plots of Fig. 7.16; velocity values and directionality (Fig. 7.17)
also compare well (Vogelzang, Stoffelen et al. 2011; Yang, Li et al. 2011).
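The two-cosine form of the geophysical model function can be sketched as follows; the coefficients a0–a2 are invented for illustration (in practice they are empirical functions of wind speed, incidence angle, and polarization):

```python
import math

def sigma0_model(chi_deg, a0=1.0, a1=0.25, a2=0.45):
    """Two-cosine (truncated Fourier series) form of the geophysical model
    function: sigma0 versus azimuth angle chi relative to the wind direction.
    The coefficients a0-a2 are invented placeholders for illustration only."""
    chi = math.radians(chi_deg)
    return a0 * (1.0 + a1 * math.cos(chi) + a2 * math.cos(2.0 * chi))

# The upwind (0 deg) return exceeds the downwind (180 deg) return, and both
# exceed the crosswind (90 deg) return; this azimuth modulation is what lets
# multiple looks of the same area resolve wind direction, not just speed.
upwind, crosswind, downwind = (sigma0_model(a) for a in (0.0, 90.0, 180.0))
```

The cos(χ) term encodes the upwind/downwind asymmetry and the cos(2χ) term the upwind/crosswind contrast, which is why several azimuth looks are needed to remove directional ambiguities.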

7.3.3 Imaging radar

When a scene of interest is illuminated by a broad microwave beam, and
the returned signal is subdivided into smaller solid cones for processing, a 2D
view of the ocean surface characteristics of specific properties is obtained.
This is the basis of imaging radar. The difference between this approach and a passive microwave imager is obvious, in that a better SNR can be achieved, and the approach is not influenced by land, atmosphere, or galactic background noise. A wider choice of wavelengths is now available, allowing finer resolution and improved property retrieval.

Figure 7.17 Comparison of wind speeds and directions between scatterometer-retrieved values from SeaWinds and buoy data. [Reproduced with permission from Vogelzang, Stoffelen, et al. (2011) © 2011 John Wiley & Sons.]
There are two basic approaches to subdivide a backscattered signal: range
binning or Doppler binning. In range binning, backscattered waves arriving at a
side-looking radar (SLR) from the surface footprint are binned according to the
time delay between the transmitted and received signals, or range (roundtrip).
For a flat surface, this is shown in Fig. 7.18, where τ is the pulse width.
In a way, there is similarity between this approach and the LLS method
discussed in Chapter 4. Although range information (or time delay) is used, it
is different from the range-gating approach discussed earlier for an active EO
imager. In fact, this is more in line with the side-scan sonar approach, except
that only the intensity of the return is used with no range information.
The other binning method uses the Doppler shift caused by relative motion between the surface and the platform (spacecraft or airplane). Because the platform velocity component along the imaging path varies with the viewing angle, the Doppler shift is itself a function of the viewing angle of the surface. Therefore, returns from the same bin lie in an arc of the same incident angle to the surface (Fig. 7.19).

Figure 7.18 The range binning method for an imaging radar.

Figure 7.19 Sketch of Doppler binning. Notice that if the satellite is moving toward the right, only the front part of the ellipsoid return increases in frequency shift, while the latter (trailing) part decreases by the same amount.

Notice that since most
modern microwave imagers are conical scanners, the straight-line pattern
between pulses shown in Fig. 7.18 should actually be arcs, meaning that range
and Doppler binning produce similar results.
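The viewing-angle dependence that makes Doppler binning possible can be sketched with a simple two-way Doppler expression; the platform speed and wavelength below are assumed, illustrative values:

```python
import math

def doppler_shift(v_platform, wavelength, theta_deg):
    """Two-way Doppler shift of a surface return viewed at angle theta
    (degrees) from the along-track direction: f_d = 2 v cos(theta) / lambda.
    A geometric sketch only; real processors use the full slant-range geometry."""
    return 2.0 * v_platform * math.cos(math.radians(theta_deg)) / wavelength

# Illustrative values: 7.5 km/s platform speed, 0.24-m (L-band) wavelength.
fd_lead = doppler_shift(7500.0, 0.24, 60.0)    # ahead of broadside: positive shift
fd_trail = doppler_shift(7500.0, 0.24, 120.0)  # symmetric angle behind broadside
# As in Fig. 7.19, leading and trailing returns shift by equal, opposite amounts.
```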

7.3.4 Synthetic aperture radar

The prior discussion is the basis for generic radar, sometimes referred to as
real aperture radar (RAR), or SLR. Radar, which is primarily a real- and
fixed-aperture imager, has a theoretical limit on its imaging resolution (angular), set forth by the aperture diameter D and the wavelength λ used for imaging (Born and Wolf 2005):

Δθ = 1.22 λ/D, (7.4)

or in ground resolution

Δd = 1.22 λR/D, (7.5)
where R is the range (Fig. 7.20). This is referred to as the diffraction-limited resolution.

Figure 7.20 Sketch showing the principle and resolution of SAR. SAR resolution is similar in the cross-track direction to that of SLR, which is determined by pulse width or range binning. The along-track resolution is a function of multiple coherent returns from a moving aperture.

Equation (7.5) is the reason behind the higher resolution of optical
imagers. For an optical camera with a lens aperture of 10 cm, viewing with the green channel (i.e., λ ≈ 0.5 μm), flying at an orbit of 1000 km, the resolution limit is 1000 km × 1.22 × (0.5 × 10⁻⁶ m)/(0.1 m) ≈ 6 m. For a microwave sensor, the wavelength increases by a factor of 10⁶ for the L-band, meaning that either the spatial resolution will need to be 10⁶ larger (6000 km per pixel), which is not useful in ocean sensing, or the aperture must increase by the same factor to achieve a matching resolution, which is prohibitive. Even though typical synthetic aperture radar (SAR) antennas are at least 100× larger in the along-track dimension (e.g., 25 m), it would be impossible to increase an additional 10⁴, reaching tens of kilometers in size.
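Equations (7.4) and (7.5) make this trade-off easy to reproduce; the L-band wavelength below (0.24 m) is an assumed value for illustration:

```python
def ground_resolution(wavelength_m, aperture_m, range_m):
    """Diffraction-limited ground resolution from Eq. (7.5): dd = 1.22*lambda*R/D."""
    return 1.22 * wavelength_m * range_m / aperture_m

# The optical example from the text: 10-cm aperture, green light, 1000-km orbit.
optical = ground_resolution(0.5e-6, 0.1, 1.0e6)   # about 6 m
# Same aperture and orbit at an assumed L-band wavelength of 0.24 m:
radar = ground_resolution(0.24, 0.1, 1.0e6)       # about 2.9e6 m: useless without SAR
```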
The solution is coherent integration of the pulsed signal returns from
the same surface area of interest, at different locations, when the satellite is
underway (Fig. 7.20). One can imagine that we have a real aperture 10 km in
size. A larger portion (if not all) of the backscattered signals (pulses) are
collected by this antenna. Because it takes time for the returned pulses to
reach from one end of the aperture to the other, we can use a much smaller
antenna to catch all of the pulses by moving the antenna in space. This is similar to the electronic game Breakout, where a movable paddle (aperture) is used to bounce a ball. When the antenna travels in the along-track direction, it
views the same surface area from a different range and time, recording both the intensity and the phase of the return. Sophisticated digital processing is then applied to stitch the single views or apertures together into a large, synthetically apertured view, which greatly enhances resolution.
It has been noted that fundamental to the SAR concept is the realization that
SAR is a marriage of radar and signal processing technologies (McCandless
and Jackson 2005). This also involves various polarization states (full or quad: HH, VV, HV, and VH), which are becoming the standard in newer sensors.
McCandless and Jackson (2005) is strongly recommended for more details, as
well as for the history of SAR development.
The benefit of SAR is that it possesses all of the advantages of a passive
microwave radiometer (capable of 24/7 surveying, regardless of clouds and
even light rain), with the added capabilities of higher resolution and SNR.
Modern SARs are capable of submeter spatial resolution, which can provide
extremely valuable information of the surface ocean and sea ice features. An
example of SAR versus an optical image can be seen in Fig. 7.21.
Figure 7.21 is a great example of the capability of SAR, which is now
approaching unprecedented spatial resolutions of centimeters from space-
borne platforms, along with 24/7 coverage (McCandless and Jackson 2005).
Ocean surface currents, eddy distributions, upwellings, internal wave
signatures, and oil slicks can be mapped with high accuracy and used for monitoring the ocean on large scales, assisting model development, improving prediction performance, and ultimately supporting weather forecasting.
Figure 7.21 (a) Seasat SAR image of the ocean southeast of Nantucket Island (August 27, 1978) showing internal wave signatures, bottom topography, and upwelling information. (b) Optical image of the same area from Skylab (1973) showing shore beaches and internal waves. [Reproduced with permission from McCandless and Jackson (2005).]

Another major area of research is sea ice distribution. The strong reflectance from ice edges helps to identify the size and coverage of sea ice, along with its
brightness temperature (intensity) returns. Surface roughness and topography
are used to differentiate ice type and thickness. Large structures such as oil
platforms and ships can also be detected, characterized, and monitored by
SAR, examining their impacts on surrounding waters (i.e., oil spills and
discharge of waste). Recent developments in interferometric SAR, which compares signal emissions and returns from separate SAR antennas, allow precise phase information to be retrieved to estimate sea surface height and
fine-scale currents to centimeter scale. This resembles approaches used in
stereo-vision studies, where two cameras are used. The two antennas serve the
same purpose when range binned in a cross-track direction.

7.4 Summary
Passive and active microwave ocean sensing is becoming a very important
part of ocean research. Its capability to penetrate clouds and light rain allows
for 24/7 observations, which are vital in the continuous monitoring of the
ocean, of both physical and biological activities. The development of active
radar sensing, especially SAR, is overcoming the major drawbacks of
microwave sensors, with significantly increased spatial resolution. The phased-array approach of SAR, along with polarimetric and interferometric methods, is moving radar sensing to the center of the ocean remote sensing
stage. Combined with the capabilities of high spectral sensing (hyperspectral,
both color and IR), high spatial sensing [both horizontal and vertical (lidar)],
and high temporal sensing (geostationary), an exciting 5D remote sensing
capability of the ocean is becoming a reality.
However, ground truth for remotely sensed signals is required for the calibration, development, and validation of any remote sensing algorithm.
We examine these major platforms and key instruments in a later chapter. But
first, we examine the other SST measurement approach, which is based on
TIR remote sensing.

Born, M. and E. Wolf (2005). Principles of Optics. Cambridge, Cambridge University Press.
Burrage, D., J. Wesson, et al. (2008). Deriving sea surface salinity and density variations from satellite and aircraft microwave radiometer measurements: application to coastal plumes using STARRS. IEEE Transactions on Geoscience and Remote Sensing 46(3), 765–785.
Casey, K. S. and P. Cornillon (1999). A comparison of satellite and in situ-based sea surface temperature climatologies. Journal of Climate 12(6).
Cox, C. and W. Munk (1954). Measurement of the roughness of the sea surface from photographs of the sun's glitter. Journal of the Optical Society of America 44(11), 838–850.
Emery, W. J., P. Brandt, et al. (2006). A comparison of sea surface temperatures from microwave remote sensing of the Labrador Sea with in situ measurements and model simulations. J. Geophys. Res. 111(C12), C12013.
Gill, A. E. (1982). Atmosphere–Ocean Dynamics. New York, Academic Press.
Liu, T. (2002). Progress in scatterometer application. Journal of Oceanography 58, 121–136.
Martin, S. (2004). An Introduction to Ocean Remote Sensing. Cambridge, UK; New York, Cambridge University Press.
McCandless, S. W. J. and C. R. Jackson (2005). Principles of Synthetic Aperture Radar. In: Synthetic Aperture Radar Marine User's Manual, C. R. Jackson and J. R. Apel (eds.), NOAA.
Rees, G. (2001). Physical Principles of Remote Sensing. Cambridge, UK; New York, Cambridge University Press.
Robinson, I. S. (1995). Satellite Oceanography: An Introduction for Oceanographers and Remote-Sensing Scientists. Chichester; New York, Wiley; Praxis Pub.
Stogryn, A. (1967). The apparent temperature of the sea at microwave frequencies. IEEE Transactions on Antennas and Propagation AP-15(2), 278–286.
Swift, C. T. (1980). Passive microwave remote sensing of the ocean – A review. Boundary-Layer Meteorology 18(1), 25–54.
Trieschmann, O., T. Hunsaenger, et al. (2004). Data assimilation of an airborne multiple-remote-sensor system and of satellite images for the North Sea and Baltic Sea. Proc. SPIE 5233, pp. 51–60 [doi: 10.1117/12.514023].
Ulaby, F. T., R. K. Moore, et al. (1981). Microwave Remote Sensing: Active and Passive. Reading, Mass., Addison-Wesley Pub. Co., Advanced Book Program/World Science Division.
Vogelzang, J., A. Stoffelen, et al. (2011). On the quality of high-resolution scatterometer winds. J. Geophys. Res. 116(C10), C10033.
Wunsch, C. and D. Stammer (1998). Satellite altimetry, the marine geoid, and the oceanic general circulation. Annual Review of Earth and Planetary Sciences 26, 219–253.
Yang, X., X. Li, et al. (2011). Comparison of ocean surface winds from ENVISAT ASAR, MetOp ASCAT scatterometer, buoy measurements, and NOGAPS model. IEEE Transactions on Geoscience and Remote Sensing 49(12), 4743–4750.
Chapter 8
Infrared Remote Sensing
of the Ocean
8.1 Overview
As we have seen, SST is a key parameter in the study of oceanography. This is due to the high heat capacity of seawater: the top three meters of the ocean contain as much heat as the entire atmosphere above it. Energy exchange across the air–sea interface is largely a function of SST, in association with
surface wind speed, cloudiness, humidity, and air temperature. Understand-
ably, such heat storage capacity influences air temperature on both smaller
(local) and larger (global) scales. On the smaller side, water bodies exert their
influence by regulating air temperature, humidity, and wind speed, thereby
affecting local weather. This is the reason behind the mild weather of the San
Francisco Bay area, thanks to the upwelling of deeper cooler water. Global
temperature anomalies, such as El Niño and La Niña events, are primarily
associated with SST variations or pattern changes. Longer-term climate
patterns (Fig. 8.1), especially the pressing issues of global warming, require
precise monitoring of the ocean surface temperature over 72% of the planet's surface.

Regional events such as hurricanes are also strongly dependent on SST, as
warmer waters serve as the engines of tropical cyclones. These storms, on the
other hand, increase vertical mixing and bring up deeper cooler waters, often
leaving behind a cool wake. SST is also associated with the key driving
mechanisms of ocean currents by the transfer of solar energy via the heat flux
across the air–sea interface. Lastly, solar input is certainly the driving force of
most, if not all, biological activities in the ocean.
Although microwaves can be used to derive SST (as shown in Chapter 7),
they lack the needed resolution (and therefore spatial accuracy) of the IR
band. Using IR to derive SST is the main focus of this chapter. The definition
of SST is discussed first, followed by the physical principles associated with
the retrieval process. Major SST sensors, including the Advanced Very High
Resolution Radiometer (AVHRR), MODIS, and VIIRS, are discussed, along

188 Chapter 8

Figure 8.1 Trend of average ocean surface temperature over the past 150 years [data
from NOAA (2012)]. Notice that uncertainties have decreased since the 1950s, thanks to
better data quality. For more information, see U.S. EPA (2012). (Courtesy of the U.S.
Environmental Protection Agency.)

with their SST algorithms. Cloud coverage is an important parameter that

needs to be addressed separately to obtain unbiased assessments. IR sensing,
when combined with visible channels, provides an effective method to this
challenge, and is discussed briefly as well.

8.2 Sea Surface Temperature: Definition

Due to the importance of SST, a general interest in surface temperatures of
the ocean and their systematic measurements date back at least to the days of
Benjamin Franklin in the late eighteenth century, when he used thermometers
suspended from a ship to measure the surface water temperature of the
Gulf Stream. Before we can quantify the importance and relevance of SST
to the various factors mentioned in Section 8.1, it is necessary to distinguish
differences in the definitions of SST. The differences relate to the measure-
ment techniques used, thus the location of quantities retrieved, and the fact
that they often disagree. SSTs can come from thermometers on a buoy at one
or a few meters below the sea surface, or from cool water intake valves a
few meters down from an underway ship. SSTs can also originate from a
microwave radiometer (discussed in the previous chapter), which measures
approximately a few millimeters into the air–sea interface (see Fig. 7.5); or
from a value used in ocean circulation models for the upper ocean mixed
layer, spanning tens of meters; or from an IR sensor (under discussion here),
whose returns are typically from a fraction of a millimeter into the water. To
make things even more interesting, diurnal heating by the sun generates a
nonmonotonic profile as a function of depth (Fig. 8.2). Depending on
conditions near the surface, this is usually a function of solar input (i.e., day or
night, as well as cloudiness), wind intensity, and the associated turbulent mixing.

Figure 8.2 Ocean surface temperature variability as a function of depth in the surface layer, for typical day and night scenarios.

The very top layer of ocean temperature (upper 1 m) can be several
kelvins different from the bulk or foundation temperature below. This is best
shown by Fig. 8.2, modified after Donlon et al. (2002) and the Group for
High Resolution Sea Surface Temperature (GHRSST), which is the most
promising international collaborative effort on SST product development.
Solar heat flux input to the surface layer is usually very shallow in terms of
penetration depth, as a function of absorption at different wavelengths (see
Figs. 2.2 and 2.9), and is typically contained in the first meter by the warmer photons (wavelengths longer than visible, which are more strongly absorbed by the water). This is the reason for the higher temperature gradient in the first meter of the surface layer, as shown in Fig. 8.2. Part of the absorbed heat is returned to the atmosphere by way of thermal radiation and conduction from within the first millimeter of the water column, as marked by the interface temperature (SSTint) and the skin temperature (SSTskin) at about 10 μm. This
creates the diurnal profile differences as shown. Another factor contributing
to temperature drop close to the surface is the sensible and latent heat loss
due to mixing by turbulent eddies in the atmospheric boundary layer. The
seemingly unstable thermal skin layer (resulting from cooler, thus denser, water on top) is kept from overturning by the viscosity of the water. Turbulent
mixing on the ocean side contributes to the gradual decrease in SST from
approximately 1 mm (SSTsubskin) to 1 m and beyond, marked by the SST
foundation (SSTfnd). It is interesting to notice that SSTsubskin from around
1 mm corresponds to SST values measured by the microwave radiometer.
From the BeerLambert law [see Chapter 2, Eq. (2.14)] and absorptive
nature of visible, IR, and microwave bands, we can see that the penetration
depths (in meters) correspond to the orders of O(10), O(0.001), and O(0.01), respectively, by roughly inverting the absorption coefficient shown in Fig. 2.9.

Figure 8.3 Penetration depth of IR, microwave, and visible sensors for SST applications, assuming normal incidence. [Reproduced with permission from Minnett and Kaiser-Weiss (2012).]

A more intuitive representation can be found in Fig. 8.3 (Minnett and Kaiser-Weiss, 2012). These depths can be seen as the corresponding sensing range
of the sensor types we just discussed, as the emission is balanced by the
absorption under local thermodynamic equilibrium (Kirchhoff's law).
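The inverse relationship between absorption coefficient and penetration depth can be sketched as follows; the absorption coefficients are rough assumed magnitudes, not values read from Fig. 2.9:

```python
# Sketch of the Beer-Lambert reasoning above: the e-folding penetration depth
# is roughly 1/a, the inverse of the absorption coefficient a of seawater.
# The coefficients below are rough assumed magnitudes for illustration.
absorption_per_m = {
    "visible (green)": 0.05,  # -> O(10) m penetration
    "thermal IR": 1.0e3,      # -> O(0.001) m, a fraction of a millimeter
    "microwave": 1.0e2,       # -> O(0.01) m, millimeters to a centimeter
}
penetration_m = {band: 1.0 / a for band, a in absorption_per_m.items()}
```

Inverting each coefficient reproduces the orders of magnitude quoted above, which is why the three sensor families report temperatures from three different depths.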
The above discussion is important because it highlights the challenges
in quantifying SST and addressing the differences among remotely sensed
values, ocean models, and in situ measurements. SSTfnd corresponds to the
upper ocean mixed layer used by ocean models. SSTskin is what IR sensors
collect. The in situ observations from buoys and ship intakes fall between the
SSTsubskin and SSTfnd. These challenges, combined with the atmospheric
contributions discussed next, add extra uncertainties to retrieved SST values.

8.3 Basic Principles

Recalling our earlier discussions of blackbody radiation, Eqs. (7.1) and (7.2)
still stand for the TIR bands of interest to us. This means that an EM wave
emitted from the sea surface (brightness temperature Tb) is strictly a function
of its blackbody temperature (when emissivity is 100%).

Figure 8.4 Blackbody radiation of the ocean surface. The shadowed bands highlight the atmosphere transmission windows around 3.7 and 11 μm. [Reproduced with kind permission of Springer from Robinson (2010) © 2010 Springer.]

One can see clearly from Fig. 8.4 that brightness temperature increases monotonically with SST
in all IR bands. The atmosphere transmission windows are highlighted by the
shadowed regions, where absorption by the atmosphere is weak. The influence
of clouds is not accounted for here.
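The monotonic relationship in Fig. 8.4 follows directly from the Planck blackbody law, which can be evaluated at the two window wavelengths; the function below is a standard Planck-radiance sketch, not code from any sensor processing chain:

```python
import math

def planck_radiance(lam_m, temp_k):
    """Planck blackbody spectral radiance B(lambda, T) in W m^-3 sr^-1,
    underlying the curves of Fig. 8.4."""
    h, c, kb = 6.626e-34, 2.998e8, 1.381e-23
    return (2.0 * h * c**2 / lam_m**5) / math.expm1(h * c / (lam_m * kb * temp_k))

# In both atmospheric windows (3.7 and 11 um), the emitted radiance rises
# monotonically with SST, which is what makes brightness temperature a
# usable SST proxy; the shorter window is the more temperature-sensitive.
gain_11um = planck_radiance(11e-6, 290.0) / planck_radiance(11e-6, 285.0)
gain_37um = planck_radiance(3.7e-6, 290.0) / planck_radiance(3.7e-6, 285.0)
```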
Figure 8.4 explains the choices for IR bands between 3 and 13 μm for SST sensors. Specifically, three channels are often used. One is centered around 3.7 μm (from 3.5 to 4.1 μm), and the other two are from 10 to 12.5 μm, often in pairs such as 10.3/11.3 or 11.5/12.5 μm. These are referred to as split
window channels in SST algorithms. The advantage of the split channel
approach is similar to the ocean color algorithms discussed earlier, which help
to remove effects of the atmosphere. These approaches are adopted by all
major IR satellite sensors, including the AVHRR (from NOAA), MODIS,
as well as the recently launched VIIRS.
Radiometric calibration of the IR sensor of AVHRR is accomplished via
an onboard reference blackbody, as well as deep space. This is carried out
for every scan line across the swath.
Due to absorption by the atmosphere in the IR bands, the signal arrives slightly cooler at the satellite altitude than the true Ts at the ocean
surface (Fig. 8.5). The difference varies from channel to channel. This is
mostly due to water vapor, as aerosol absorption contributes less variability
(with the exception of volcanic ash). To achieve accuracy to tenths of kelvins,
as desired by weather forecasts and long-term monitoring, such variability
must be addressed for each pixel. As discussed in Chapter 5 on ocean color
remote sensing, band differential methods can be used to infer the missing
information necessary to compensate for the effects of the atmosphere.

Figure 8.5 The split window principle used to derive brightness temperature Tb from two bands (two colors): (a) weak and (b) strong absorption. [Figure adapted with permission from Robinson (2010).]

In the case of IR sensing, water vapor concentration is the source of the majority of

these variations. Since absorption by water vapor is different for different
wavelengths, the difference between the bands can be used as a proxy of water
vapor content, so that the stronger absorption shows up as a larger difference
between the channels. This is the basis of the SST algorithm, as shown in
Fig. 8.5. Specifically, in the weaker absorbing case [Fig. 8.5(a)], the water
vapor content is less compared to the stronger case [Fig. 8.5(b)]. The differentials Δi,j(Tb) are smaller for the weaker absorbing medium and larger for the stronger case, reflecting the effects of water vapor. From this, a mathematical
relationship can be used to derive Ts from Tb, as
Ts = c1Tbj + c2(Tbi − Tbj) + c3, (8.1)
where coefficients c1, c2, and c3 are determined by calibration and validation
processes. This is usually done by empirical fitting or data match-up to
measurements from drifting buoys and moorings for AVHRR. This limits the
accuracy to no better than 0.5 K (Robinson, 2010), and can only be applied to
regions with buoy/mooring deployments. All three channels mentioned can be
used for night retrieval. Due to the reflection of sunlight from the sea surface, the 3.7-μm channel cannot be used for daytime operations; only the split channels are used with Eq. (8.1) (Barton, 1995) for daytime.
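The split-window retrieval of Eq. (8.1) can be sketched as follows; the coefficients c1–c3 are invented placeholders, since operational values come from the buoy and mooring match-ups described above:

```python
def split_window_sst(tb_i, tb_j, c1=1.02, c2=2.2, c3=0.3):
    """Split-window retrieval in the form of Eq. (8.1):
    Ts = c1*Tbj + c2*(Tbi - Tbj) + c3, where the band difference serves as
    a proxy for water vapor. The coefficients here are invented placeholders;
    operational values come from buoy/mooring match-ups."""
    return c1 * tb_j + c2 * (tb_i - tb_j) + c3

# A moister atmosphere cools the measured brightness temperatures and widens
# the band difference; the c2 term compensates, so both scenes below retrieve
# nearly the same Ts (~295.1 K) despite different raw brightness temperatures.
dry = split_window_sst(tb_i=288.6, tb_j=288.2)
moist = split_window_sst(tb_i=287.9, tb_j=286.9)
```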
Since IR radiation, like visible bands, cannot penetrate clouds, it is
important to mask the pixels blocked by clouds to prevent false values from
entering ocean models. This can be easily done with thick clouds, especially
during the daytime, when visible bands can be used in combination with IR
channels for masking. The greatest challenges come from thin clouds such as
cirrus, small patches less than the size of a single pixel, and low-level sporadic sea fogs. This type of bias cannot be easily fixed, a fact that can lead to a
cooling effect on the estimated SST, as much as 0.5 K (Robinson, 2010). The
approach to address these issues is presented after the required knowledge
of SST algorithms.

8.4 Sea Surface Temperature Sensors and Algorithms

The most widely used operational satellite SST sensors are AVHRR and
MODIS. It is worth mentioning that SST measured
by the Advanced Along Track Scanning Radiometer (AATSR) on board
the Envisat enjoys higher accuracy (to the order of 0.3 K) compared to
existing U.S. sensors, thanks to its multilook ability with only a few minutes in
between, and closely controlled processing procedures. The openly available
AVHRR and MODIS data make near-real-time operations a reality. We
focus on AVHRR and MODIS in the following sections, with a brief
discussion of the replacement sensor VIIRS, whose channels closely match those
of its predecessors, as shown in Table 8.1. Incidentally, for VIIRS, M is used for
moderate (750 m) resolution bands, and high-resolution imaging bands at
375 m are designated as I.
We can see from the solar reflection contributions to the SST bands listed
in the table that the channels around 3.7 μm are strongly affected by solar
reflection. These numbers are obtained under normal reflection, assuming that
reflecting facets occupy less than 0.001% of the FOV. Similar values can be
expected for larger viewing angles. In fact, for viewing angles less than 45 deg,
for both polarized and unpolarized observations, it has been shown that the
emissivity of these channels is nearly constant, with a value of 0.99 (Martin,
2004), with little variation between a rough sea surface and a specular surface
(Wu and Smith, 1997). This solar contamination renders the 4-μm channel unsuitable for daytime
operations. On the other hand, the impact on the split channels is less than 0.002%,
which is the basis for using these TIR channels for daytime SST retrieval. This is
especially important in achieving the 0.2-K SST accuracy needed for long-term
global climate change studies (Casey and Cornillon, 1999).

Table 8.1 Emissive bands used by MODIS, AVHRR, and VIIRS for SST measurements.
Contributions from solar reflection to the spectral bands are shown. [Adapted from Martin
(2004).]

MODIS Band          AVHRR Band          VIIRS Band            Solar Contribution
(wavelength in μm)  (wavelength in μm)  (wavelength in μm)

20 (3.66–3.84)      3 (3.55–3.93)       M12 (3.66–3.84)       12%
22 (3.93–3.99)                                                6%
23 (4.02–4.08)                          M13 (3.973–4.128)     4%
                    4 (10.30–11.30)                           0.002%
31 (10.78–11.28)                        M15 (10.26–11.26)     0.001%
32 (11.77–12.27)    5 (11.50–12.50)     M16 (11.54–12.49)     0.0004%
194 Chapter 8

8.4.1 AVHRR
AVHRR was designed as a remote imaging sensor to detect cloud cover and
surface temperatures. The term surface here is not limited to the ocean, as we
presume in most chapters, but rather can be any of the earth's boundaries
already discussed, including land as well as cloud cover (viewed from the top
of the atmosphere). While the current AVHRR/3 (Fig. 8.6) has six detector
bands (flown since 1998 on board NOAA-15/16/17/18), the initial version
was equipped with only a four-channel radiometer, on board TIROS-N
(1978). It was later improved to the five-channel AVHRR/2 on NOAA-7
(1981). Specifications of the polar orbiting broadband scanner are listed in
Table 8.2. It has a 2500-km swath, with a nadir resolution of 1.1 km.
AVHRR/3 broadcasts data in real time using direct readout and time-
delay modes. In the direct readout mode, data are collected by the scanner and
simultaneously transmitted back to the ground receiving station beneath. The
time-delay mode allows an onboard recorder to store collected data for
transmission to given ground stations at a later time. Two formats are
transmitted during direct readout: automatic picture transmission (APT)
and high-resolution picture transmission (HRPT). In-flight calibration is

Figure 8.6 (a) Photograph and (b) line drawing of AVHRR/3. (Courtesy of NOAA and NASA.)

Table 8.2 AVHRR spectral band characteristics. (Courtesy of NOAA.)

AVHRR/3 channel characteristics

Channel Number   Resolution at Nadir   Wavelength (μm)   Typical Use

1    1.09 km   0.58–0.68     Daytime cloud and surface mapping
2    1.09 km   0.725–1.00    Land–water boundaries
3A   1.09 km   1.58–1.64     Snow and ice detection
3B   1.09 km   3.55–3.93     Night cloud mapping, SST
4    1.09 km   10.30–11.30   Night cloud mapping, SST
5    1.09 km   11.50–12.50   SST

completed for each scan line through a deep-space view, as well as an internal
calibration reference (onboard blackbody). In addition to full resolution
HRPT data, selective scenes can be recorded for later playback that maintain
full 1.1-km resolution, termed local area coverage (LAC). Full resolution data
are also processed on board the satellite into reduced resolution global area
coverage (GAC), at 4.4-km resolution, for later transmission.
The split window algorithm derived by McClain, Pichel, and Walton
(1985) has a form similar to Eq. (8.1), which is

Ts = T4 + Γ (T4 − T5), where Γ = (1 − τ4) / (τ4 − τ5).    (8.2)

Here, τ4 and τ5 are the transmittances at bands 4 and 5, respectively. When
the multichannel SST (MCSST) algorithm is compared to a buoy-derived
temperature match-up dataset, a consistent bias must be addressed by adding
a constant, as in Eq. (8.1):
SST = c1 T4 + c2 (T4 − T5) + c3.    (8.3)
Notice that SST replaces the skin temperature here because the match-up uses buoy
measurements from 1- to 3-m water depth. The constants in this
linear relationship have values of 0.95876, 2.564, and −261.68 (Walton et al.,
1998), and the fit remains accurate for viewing angles up to
30 deg. An improved MCSST was later used to address the increased path length
associated explicitly with a larger viewing angle (May et al., 1998):

SST = c1 T4 + c2 (T4 − T5) + c3 (T4 − T5)(sec θ − 1) + c4.    (8.4)
The corresponding nighttime algorithm has a simpler form when we use
band 3, which is less sensitive to water vapor influences:
SST = c1 T4 + c2 (T3 − T5) + c3 (sec θ − 1) + c4.    (8.5)
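The MCSST forms above can be sketched in a few lines of code. The coefficient tuples passed in are hypothetical placeholders, not the operational values; θ is the satellite viewing (zenith) angle.

```python
from math import cos, radians

def mcsst_day(t4, t5, theta_deg, c):
    """Daytime improved MCSST with the path-length term, Eq. (8.4)."""
    sec_term = 1.0 / cos(radians(theta_deg)) - 1.0
    return c[0] * t4 + c[1] * (t4 - t5) + c[2] * (t4 - t5) * sec_term + c[3]

def mcsst_night(t3, t4, t5, theta_deg, c):
    """Nighttime MCSST using band 3, Eq. (8.5)."""
    sec_term = 1.0 / cos(radians(theta_deg)) - 1.0
    return c[0] * t4 + c[1] * (t3 - t5) + c[2] * sec_term + c[3]
```

At nadir (θ = 0) the secant term vanishes and both forms reduce to simple channel combinations.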
To explicitly express the viewing angle, as well as the water vapor content
in the SST retrieval algorithm, a water vapor SST algorithm (WVSST) is
used when the water vapor thickness V can be derived from microwave or

radiosonde measurements. This is incorporated through the Γ value in
Eq. (8.2) (Emery et al., 1994).
Numerical studies of Γ show that it has a linear relationship to the
SST when the viewing angles are less than 30 deg. This is not surprising, as
higher water vapor contents should be expected over warmer ocean waters.
This is the basis for the nonlinear SST algorithm (NLSST) (Walton et al., 1998):

SST = c1 T4 + c2 TSF (T4 − T5) + c3 (T4 − T5)(sec θ − 1) + c4,    (8.6)

where a nonlinear term has been introduced through the surface temperature TSF.
It is approximated by either climatological values or MCSST algorithms
(Walton et al., 1998).
For nighttime operations, band 3 is used (for the reasons discussed above),
simplifying the above form to

SST = c1 T4 + c2 TSF (T3 − T5) + c3 (sec θ − 1) + c4.    (8.7)
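A corresponding sketch of the NLSST form; the coefficients are again hypothetical placeholders, and t_sf stands for the first-guess surface temperature TSF (climatology or MCSST).

```python
from math import cos, radians

def nlsst(t4, t5, t_sf, theta_deg, c):
    """NLSST, Eq. (8.6): the split-channel difference is scaled by a
    first-guess surface temperature t_sf (climatology or MCSST).
    Coefficients c are hypothetical placeholders."""
    sec_term = 1.0 / cos(radians(theta_deg)) - 1.0
    return c[0] * t4 + c[1] * t_sf * (t4 - t5) + c[2] * (t4 - t5) * sec_term + c[3]
```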

The algorithms discussed above have been calibrated and validated using
a large number of moored and drifting buoys (>1000) distributed globally,
managed by TAO/TRITON and the National Data Buoy Center (NDBC).
These are in place to ensure the accurate retrieval of SST. Uncertainties
remain, however, not only from diurnal surface temperature variations and
thin clouds, but also from episodic events such as volcanic eruptions and related
aerosol changes. These often add a 0.5 °C cooling bias, up to 2 °C in tropical
regions.
8.4.2 MODIS
There are five bands used for MODIS SST retrieval, as listed in Table 8.1. The
algorithms developed are parallel to those of AVHRR, with the split window
approach applied for daytime estimates. To investigate long-term climate
trends and data consistency, NASA and NOAA jointly developed the
Pathfinder program to process all AVHRR SST under the same algorithm
(Kilpatrick et al., 2001). Based on the Pathfinder-improved NLSST
algorithm, MODIS SST can be obtained by (Brown and Minnett, 1999;
Minnett et al., 2002)

SST = c1 T31 + c2 TSF (T31 − T32) + c3 (T31 − T32)(sec θ − 1) + c4.    (8.8)
Surface temperature TSF uses monthly climatology data derived from
AVHRR SST and surface in situ measurements, interpolated to a 1 deg by
1 deg latitude-longitude grid (often referred to as the Reynolds SST).
Notice that the improvement of the Pathfinder NLSST over the previous
corresponding AVHRR algorithm lies in a wet/dry atmosphere condition
switch, where two different sets of parameters (c1, c2, c3, and c4) are used when

Figure 8.7 Gulf of Mexico 14-day composite sample SST images from MODIS Aqua from
2012. [Reproduced with permission from the Colorado Center for Astrodynamics Research.]

the channel differential T31 − T32 is larger than a preset value (in this case,
0.7 K). Sample output from the MODIS SST algorithm is shown in Fig. 8.7.
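The wet/dry parameter switch can be sketched as follows. The two coefficient sets are hypothetical placeholders standing in for the Pathfinder match-up-derived values; the 0.7-K switch point follows the text.

```python
from math import cos, radians

# Hypothetical (c1, c2, c3, c4) sets for dry and wet atmospheres; operational
# values come from the Pathfinder match-up database.
COEF_DRY = (1.0, 0.08, 0.4, 0.2)
COEF_WET = (1.0, 0.09, 0.5, 0.1)

def pathfinder_nlsst(t31, t32, t_sf, theta_deg, switch_k=0.7):
    """Pathfinder-style NLSST, Eq. (8.8), with the wet/dry switch on T31 - T32."""
    c = COEF_WET if (t31 - t32) > switch_k else COEF_DRY
    sec_term = 1.0 / cos(radians(theta_deg)) - 1.0
    return c[0] * t31 + c[1] * t_sf * (t31 - t32) + c[2] * (t31 - t32) * sec_term + c[3]
```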
For nighttime retrieval, the 3.7-μm (or 4-μm, as some call it) bands (22
and 23) are used:

SST4 = c1 + c2 T22 + c3 (T22 − T23) + c4 (sec θ − 1).    (8.9)
Despite the similar forms of the day/night and night-only algorithms,
SST4 has better accuracy (about 0.4 K) compared to the day/night improved
NLSST algorithm (about 0.5 K) (Brown and Minnett, 1999; Minnett et al.,
2002). This is because the 4-μm band is less sensitive to water
vapor influences. Yet solar reflection makes this approach functional only at
night. The NLSST algorithm, while slightly less accurate, does offer the
flexibility of both day and night operations, and maintains continuity with
AVHRR products in both coverage and accuracy. Naturally,
both approaches are prone to suffer from volcanic activity.
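The nighttime SST4 form of Eq. (8.9) follows the same pattern, with hypothetical coefficients:

```python
from math import cos, radians

def sst4_night(t22, t23, theta_deg, c):
    """Nighttime MODIS SST4, Eq. (8.9), using the 4-um bands 22 and 23.
    Coefficients c are hypothetical placeholders."""
    sec_term = 1.0 / cos(radians(theta_deg)) - 1.0
    return c[0] + c[1] * t22 + c[2] * (t22 - t23) + c[3] * sec_term
```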

Figure 8.8 Suomi NPP satellite, where VIIRS is housed. [Reproduced with permission
from NASA.]

8.4.3 Transition to VIIRS

As mentioned in Chapter 5, VIIRS is part of the payload on the Suomi NPP
satellite (Fig. 8.8). VIIRS was launched in October 2011 and is poised to
deliver fully operational data. It has 22 spectral bands (compared to 36
for MODIS), which helps to reduce cost and weight. It has a wider swath
compared to AVHRR (5,000 versus 2,500 km) with higher-resolution SST
bands (740 versus 1,100 m), while at the same time the pixel is stretched only
2× at the edge of the scan line, compared to 6× for AVHRR. This is
accomplished by reducing the number of along-scan detectors with increasing
zenith angle, thus decreasing the FOV. All of this results in a significant
increase in data flow: from 180 files (8 GB) per day for global coverage by
AVHRR to 6,000 files (155 GB) per day by VIIRS. This does not even
account for the increase from the higher-resolution (375-m I-band) bands
specialized in cloud masking (McKenzie et al., 2012).
To expedite the transition of VIIRS SST into operation, a proxy VIIRS
data stream was proposed and created from MODIS bands (Vogel et al.,
2008), exploiting the spectral band similarities listed in Table 8.1. Spectral
transformations for various surface types are needed to convert MODIS
bands 20 and 22 to the M12 channel of VIIRS; MODIS 22 and 23 to M13;
and MODIS 31 and 32 to VIIRS M15 and M16. These tasks were carried out
by the Government Resource for Algorithm Verification, Independent
Testing, and Evaluation group (GRAVITE), administered by the Joint Polar
Satellite System program (JPSS). At the operational center of the Naval
Oceanographic Office (NAVO), a VIIRS SST algorithm based on an
operational AVHRR algorithm was tested with proxy data

Figure 8.9 Gulf of Mexico SST from VIIRS on March 21, 2012, processed by NAVO.
[Reproduced with permission from McKenzie et al. (2012).]

Table 8.3 VIIRS and AVHRR (NOAA-19 LAC) buoy SST match-up stats on March 19,
2012. [Reproduced with permission from McKenzie et al. (2012).]

                 Count   RMSE   Bias

Daytime VIIRS    5110    0.87   0.44
Nighttime VIIRS  5863    0.80   0.47
Daytime AVHRR    5672    0.43   0.05
Nighttime AVHRR  5448    0.46   0.20

before the launch, then with actual VIIRS data after the launch, to produce
the SST data map. An example of the VIIRS SST over the Gulf of Mexico is
shown in Fig. 8.9. Extensive match-up datasets were used and initial results
are shown in Table 8.3 (McKenzie et al., 2012).
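Since the proxy-data transformations are essentially per-surface-type linear regressions between overlapping bands, a minimal sketch is shown below. The weights are purely illustrative, not those derived by Vogel et al. (2008).

```python
def proxy_m12(bt20, bt22, a=0.6, b=0.4, offset=0.0):
    """Hypothetical linear blend of MODIS band-20 and band-22 brightness
    temperatures emulating VIIRS M12; real weights are fit per surface type."""
    return a * bt20 + b * bt22 + offset
```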

8.5 Cloud Detection

Earlier discussions have mentioned that cloud masking is necessary for any
successful SST algorithm, especially under complex situations where cloud
contamination is introduced by thin clouds and sporadic patches affecting

single pixels. Here we briefly discuss algorithms used to detect and mask
questionable pixels, leveraging some of the SST algorithms previously
explored. While different bands and somewhat different approaches are used
for AVHRR and MODIS, the fundamentals are quite similar, and thus they
are presented and contrasted together.
AVHRR cloud masking uses all five of the spectral bands available, and is
based on two important assumptions. The first is that clouds are cooler and
more reflective than the underlying ocean surface. The second is that the open
ocean has uniformity in temperature and reflectance that clouds (especially
broken patches) do not have.
This can be clearly demonstrated by the operational AVHRR SST data
processing routine used at NAVO (May et al., 1998), shown in Fig. 8.10. A
single-band, single-pixel threshold test is first applied for both daytime and
nighttime using corresponding bands. The dynamic threshold is based on a
variety of factors, including the sun angle, viewing angle, and climatology
data. The pixels that pass the first test are evaluated against uniformity tests
on the same band, using variances over a small area (a 2 × 2 or 11 × 11
array). Special care should be taken in coastal areas, where higher

Figure 8.10 Cloud detection algorithm flow chart for AVHRR. [Adapted from May et al.
(1998).]

temperature gradients due to upwelling can cause a significant increase in natural
SST variance. Spectral differences are used to detect cloud types, including thin
cirrus, nighttime low stratus, fog, and broken patches. The derived SST values
are then compared to other algorithms, as well as climatology, to further screen
questionable pixels.
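A minimal sketch of the threshold-plus-uniformity logic is given below; the fixed threshold and variance limit are hypothetical stand-ins for the dynamic, climatology-dependent values used operationally.

```python
def cloud_mask(bt_window, bt_threshold=270.0, var_limit=0.25):
    """Flag the center pixel of a small brightness-temperature window (K),
    e.g., 2 x 2 or 11 x 11, as cloudy if it is too cold or the window is too
    nonuniform. Limits here are illustrative placeholders."""
    flat = [v for row in bt_window for v in row]
    n = len(flat)
    mean = sum(flat) / n
    var = sum((v - mean) ** 2 for v in flat) / n
    center = bt_window[len(bt_window) // 2][len(bt_window[0]) // 2]
    too_cold = center < bt_threshold      # clouds are cooler than the ocean
    too_variable = var > var_limit        # broken clouds break uniformity
    return too_cold or too_variable
```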
The MODIS cloud detection algorithm uses visible and NIR, as well as TIR
bands for cloud masking, ocean color retrieval, and cloud classification. Similar
to AVHRR, it uses multiple tests to label suspicious pixels as cloudy, including
a general reflectance test, a reflectance test for thin cirrus, and a spatial
and temporal uniformity test (Ackerman et al., 1998). Special to MODIS, a
reflectance ratio between 660 and 870 nm is used for daytime discrimination.
Also, two extra channels, 26 (1.375 μm) and 19 (936 nm), are used to detect
thin cirrus.

8.6 Summary
SST retrieval is one of the most rewarding but equally challenging tasks
in ocean sensing applications. As the previous chapters have shown, both
microwave and IR methods can deliver accurate products for current and
future needs, given careful calibration and validation. The advantage of higher
resolution by the IR approach is somewhat balanced by the all-weather
capability of the microwave approach. For long-term, high-resolution, accurate
retrieval of the ocean product, it is advisable that both approaches be used, in
association with various field calibration and validation measurements. As
mentioned throughout this book, the importance of field data in sensing the
ocean cannot be overlooked, and a chapter about ocean research platforms and
sensors is in order.

References

Ackerman, S. A. et al. (1998). Discriminating clear sky from clouds with
MODIS. J. Geophys. Res.-Atmos. 103(D24), 32141–32157.
Barton, I. J. (1995). Satellite-derived sea surface temperatures: Current
status. J. Geophys. Res.-Oceans 100(C5), 8777–8790.
Brown, O. B. and Minnett, P. J. (1999). MODIS Infrared Sea Surface
Temperature Algorithm, Version 2.0. Theoretical Basis Document, ATBD25.
Casey, K. S. and Cornillon, P. (1999). A comparison of satellite and in situ-
based sea surface temperature climatologies. J. Climate 12(6), 1848–1863.
Colorado Center for Astrodynamics Research. modis/sst gom viewer.
Donlon, C. J. et al. (2002). Toward improved validation of satellite sea
surface skin temperature measurements for climate research. J. Climate
15(4), 353–369.
Emery, W. J. et al. (1994). Correcting infrared satellite estimates of sea
surface temperature for atmospheric water vapor attenuation. J. Geophys.
Res.-Oceans 99(C3), 5219–5236.
Kilpatrick, K. A. et al. (2001). Overview of the NOAA/NASA advanced
very high resolution radiometer Pathfinder algorithm for sea surface
temperature and associated matchup database. J. Geophys. Res.-Oceans
106(C5), 9179–9197.
Martin, S. (2004). An Introduction to Ocean Remote Sensing. Cambridge:
Cambridge University Press.
May, D. A. et al. (1998). Operational processing of satellite sea surface
temperature retrievals at the Naval Oceanographic Office. B. Am. Meteorol.
Soc. 79(3), 397–407.
McClain, E. P., Pichel, W. G., and Walton, C. C. (1985). Comparative
performance of AVHRR-based multichannel sea surface temperatures.
J. Geophys. Res.-Oceans 90(C6), 11587–11601.
McKenzie, B. et al. (2012). Initial results of NPP VIIRS SST processing at
NAVOCEANO. Proc. SPIE 8372, 83720H. doi: 10.1117/12.922955.
Minnett, P. J. and Kaiser-Weiss, A. (2012). Near-surface oceanic temperature
gradients. GHRSST, Version 12.
Minnett, P. J. et al. (2002). Sea-surface temperature measured by the
Moderate Resolution Imaging Spectroradiometer (MODIS). IEEE T. Geosci.
Remote 2, 1177–1179.
NOAA (2012). Extended reconstructed sea surface temperature (ERSST.v3b).
Silver Spring: National Oceanic and Atmospheric Administration. Available at:
<>. [Accessed April 15, 2013].
Robinson, I. S. (2010). Discovering the Ocean from Space: The Unique
Applications of Satellite Oceanography. Berlin: Springer.
U.S. EPA (2012). Climate change indicators in the United States.
Washington, D.C.: Environmental Protection Agency. Available at:
<http://>. [Accessed April 15, 2013].
Vogel, R. L. et al. (2008). Creating proxy VIIRS data from MODIS: spectral
transformations for mid- and thermal-infrared bands. IEEE T. Geosci. Remote
46(11), 3768–3782.
Walton, C. C. et al. (1998). The development and operational application
of nonlinear algorithms for the measurement of sea surface temperatures with
the NOAA polar-orbiting environmental satellites. J. Geophys. Res.-Oceans
103(C12), 27999–28012.
Wu, X. and Smith, W. L. (1997). Emissivity of rough sea surface for
8–13 μm: modeling and verification. Appl. Opt. 36(12), 2609–2619.
Chapter 9
Platforms and Instruments
9.1 Introduction
One of the major obstacles in exploring the ocean, and therefore
understanding ocean processes, is that humans cannot travel freely in the
watery world. Because of this, ocean research relies on
platforms that help researchers and sensors extend their reach across larger
areas, deeper depths, longer durations, and more channels (properties).
Instruments that can best utilize these platforms, with better resolution
in spatial and temporal domains, have been constantly sought after and
invented. The fundamental physical limitations for underwater research
discussed in previous chapters are primarily due to the roughly thousandfold
density difference between air and water, and the associated strong
attenuation of most EM waves in the water. This limits the majority of ocean
sensing from space to the surface of the ocean, with lidar's maximum
penetration (likely extending to the bottom of the mixed layer) as the only
exception. For this reason, in situ sampling is an integral part of ocean
research, and will most likely never be replaced.
Oceanography has come a long way in terms of sampling platforms,
discussed briefly in Chapter 1. Ship-based data and sample collection has
been complemented by airborne and spaceborne remote sensing methods
to gain fast and synoptic coverage. Surface vessels have been supplemented
with various UUVs, ranging from remotely operated vehicles (ROVs) to
autonomous underwater vehicles (AUVs) and various types of buoyancy-
driven gliders. Stationary sampling stations, or moorings, are deployed on a
continuous basis, along with other vertical profilers and floats, to complete
larger in situ monitoring areas. These are the basic elements of observatories
and provide long-term sensing and monitoring capabilities of the ocean.
Various instruments, starting with the simple Secchi disk, have been used
in exploration of the ocean and have enriched our understanding of
it. To improve the quantification of various properties and processes,
more sophisticated instruments and sensors have been developed to enable
the collection of temperature, salinity, visibility, and optical properties of the


ocean with higher accuracy and broader coverage, spatially, temporally, and
often spectrally as well. Key instruments in ocean sensing applications,
especially those related to the field of ocean optics, are explored in this
chapter, along with the platforms that are best suited to their requirements.

9.2 Bottom, Surface, Air, and Space Platforms

In this section, platforms used in ocean sensing and monitoring applications
are discussed, including ships, underwater vehicles, airplanes (including
unmanned), and satellites. The main focus is on their applicability, and their
strengths and weaknesses, respectively.

9.2.1 Surface platforms: ships

Ships were the main platforms of ocean exploration in the early days. They are the
natural choice for ocean research by default, as they have been since at least the
eighteenth century with the exploration led by Lieutenant James Cook
on board the HM Bark Endeavour (Fig. 9.1). The strength of sea-going
ships is mainly their capacity to transport large volumes of
instruments, collected samples, and scientists. Ships also ensure longer
reach and duration of investigations for scientists when areas of interest are
far from land. They also often serve as a launching pad for many modern
mobile platforms, such as UUVs and UAVs, and provide deployment and
maintenance support to buoys, floats, moorings, and more sophisticated
observatory nodes. For these reasons, ships, especially specialized research
vessels (R/Vs), remain the first choice for sea-going experiments. Routine
surveys are often carried out by research vessels, particularly those
administered by NOAA in the United States. Bathymetric information
collection is often the main element of their surveys for navigation purposes.

Figure 9.1 Replica of the HM Bark Endeavour in Cooktown's harbor. (Courtesy of
Wikipedia and John Hill, 2005.)

These surveys are frequently complemented by hydrographic soundings of
the seabed by means of high-pressure, low-frequency sound (shock) waves,
for the study of geological structures, geological research, and mineral
detection. Naval hydrographic surveys usually include additional elements as
well as sea trials for validation purposes.
There is no definitive size or operational range for research vessels, which
are usually a function of the research objective. Today's larger, formal
research vessels typically range in size from tens to hundreds of feet, with
bunks for ship and science crews to cover more stations over longer
durations, science laboratories in dry and wet configurations, and U-frames
to deploy large, heavy equipment. Modern research vessels are typically
equipped with many standard instruments, such as conductivity, temperature,
and depth (CTD) sensors, acoustic Doppler current profilers (ADCPs),
beam transmissometers (commonly referred to as beam-c or c-meters), and
fluorometers for chlorophyll and CDOM. Specialized equipment is often
fitted to certain research vessels, depending on the requirements, such as
plankton nets for biological samples and airguns for seismic surveys. Dry
laboratories are used for data logging and processing, where computers and
other moisture-sensitive electronics are used. Wet laboratory space is typically
used for sample collection and processing, especially those associated with
biological or sediment studies. An example of a research vessel can be seen
in Fig. 9.2, where R/V F. G. Walton Smith from the University of Miami is
shown in the process of deploying a CTD package using a hydrowire through
a U-frame.
While various types of vessels can be (and have been) used for research
purposes, including fishing and pleasure boats, there are extreme and unique
types of research vessels, such as the Floating Instrument Platform (FLIP),
worth mentioning here. The FLIP is, in the strictest sense, a research platform
(RP) instead of an R/V. Funded by the U.S. ONR in 1962, RP FLIP has been
maintained and operated by the Scripps Institution of Oceanography. What is
unique about this 108-m vessel is that it can be partially flooded and tipped

Figure 9.2 (a) R/V F. G. Walton Smith from the University of Miami and (b) the back deck
during a CTD deployment. (Courtesy of the University of Miami.)

Figure 9.3 RP FLIP in horizontal and vertical configurations. Notice the research space at
the end of the arm that allows for wave observations without interference. (Courtesy of
Scripps Institution of Oceanography.)

vertically to stand in the water column, with only 17 m above the water
line, as shown in Fig. 9.3. Buoyancy of FLIP under operation, i.e., tipped
vertically, is provided by the submerged portion of the platform, making it
very stable and immune to surface waves, even under strong wind conditions.
This provides an excellent research platform for various deployment needs,
especially those requiring stable conditions under rough wave conditions. One
of its main research goals is the study of waves. FLIP allows wide coverage of
conditions, from calm seas to breaking waves of various heights. Its other
main application is to study acoustical signal propagation, which is why it has
no propulsion system. FLIP has to be towed to its stations, at speeds up to
10 knots, after compressed air is pumped into the flooded section (ballast
tanks) to return the platform to horizontal. Not surprisingly, it is often mistaken for
a capsized ship when flooded to stand upright.
In addition to designated research vessels, efforts are underway to
utilize commercial vessels to collect data during their operations. The Ship
of Opportunity Program (SOOP) is part of the Global Ocean Observation
System (GOOS), and has enjoyed tremendous progress (Manzella et al., 2006).
However, despite all of these advantages, ship-based research has been
gradually replaced by other means of sensing due to cost, including subsurface
vessels, floats, moorings, and airborne and spaceborne platforms. The latter
provide one of the greatest advancements in modern oceanographic research.

9.2.2 Remote sensing platforms

For oceanographers, one of the main challenges to the quality of the data
collected is the coherency of the observations. This is because ships
usually travel slowly, and it often takes several hours or more to travel
between stations. Additionally, time is required at each station for instrument
deployment, sample collection, and data acquisition. Conditions can change

significantly due to weather (which is often a function of wind strength, cloud

coverage, solar height, etc.) and currents during this period of time. Due to
availability and cost, it is prohibitive to deploy multiple sets of instruments on
different vessels to cover multiple stations simultaneously. This is because the
cost of platform support for ocean research is usually quite high. An average
research vessel capable of handling a team of ten or more scientists and
their gear can easily cost more than $10,000 per day in the United States.
Remote sensing platforms, on the other hand, provide synoptic coverage at a
significantly reduced cost.
Remote sensors can cover thousands of kilometers in one flight swath,
as shown in earlier chapters. This enormous coverage of a scene, usually acquired
within a few minutes at most, provides oceanographers with unprecedented
data on the overall status of the ocean in real time. In addition to the expanded
spatial coverage, remote sensors are capable of providing multichannel or
multispectral information of the ocean. This information is the cornerstone of the
success of ocean color and the derived global ocean primary productivity
estimates. It enables us to understand the global biogeochemical cycle, which is
vital in assessing the health of our planet, when combined with land cover studies
using multi- or hyperspectral information provided by remote sensors (Fig. 9.4).

Figure 9.4 AVIRIS data products covering both land and water components derived from a
hyperspectral sensor. (Reproduced with permission from Jet Propulsion Laboratory.)

The improved spatial and spectral coverage can be further enhanced by

temporal observations, with multiple views of the same area over time. For
those sensors providing daily coverage, seasonality of a focused research
phenomenon can be easily uncovered, information that was virtually
impossible to glean before remote sensing, due to cost and (more importantly)
the time required to cover vast areas of interest. Studies of large- or global-
scale phenomena now face fewer hurdles. For sensors providing multiple
views within a day, such as GOCI, diurnal observations can be made that
enable modeling at physiological and ecological levels. Newer platforms and
systems, such as JMMES (Brown and Schulz, 2009) and WorldView-2,
provide multiple views and even multisensors within a few seconds of each
other, allowing for sophisticated deconvolution of scene content and
information retrieval, including stereoscopic information of the ocean, in
addition to bathymetric coverage.
Space platforms such as satellites and the International Space Station
(where HICO resides) provide the highest orbits, allowing the widest synoptic
coverage possible. However, they often take longer periods to implement,
are more difficult (if not impossible) to maintain, and lack the flexibility
of control, compared to aerial surveys, which utilize airplanes. These aerial
platforms can go as high as 20 km, such as the U-2 (or ER-2, as NASA
calls it), on which AVIRIS was tested, or as low as 100 ft above the ocean,
with as little as a few days of lead time for preparation. These platforms
are also less prone to atmospheric influences, especially those flying at lower
altitudes.
The exciting new addition to aerial platforms is the UAV, sometimes
referred to as unmanned aerial systems (UASs). These cover an altitude range
from just above the sea surface to thousands of meters high. Their reach can
range from a limited radius, in terms of kilometers, to hundreds of kilometers
and even an unlimited radius utilizing in-air refueling. UAVs serve
many real-world applications, with the concept dating back as far as
World War I, when automatic airplanes were proposed and tested, although
these could more accurately be considered the precursors of
today's cruise missiles. In addition to combat duties, UAVs are typically
involved in reconnaissance, survey, research, and development of systems and
sensor payloads. The various types (Fig. 9.5) or classifications of UAVs,
according to different organizations or agencies, can be rather confusing.
Roughly speaking, there are hand-held units, capable of reaching several
hundreds of meters in altitude and several kilometers in range. These are often
not classified in the tier structures of the U.S. Army, Navy, Air Force, or the
Marines. Examples in this category can include a micro-UAV, such as the
Wasp III from the Marines. Tier-1 types are often considered close- to
medium-range UAVs. Depending on agency standards, they can have
operational ceilings from one thousand to tens of thousands of meters, and reach an

Figure 9.5 Sample collection of various types of UAVs at the 2005 Naval Unmanned
Aerial Vehicle Air Demo. (Courtesy of Wikipedia.)

order of tens of kilometers and longer (Dragon Eye, Gnat, and Raven). Tier-2
types and beyond can offer medium altitude and long endurance (MALE),
up to 9000 m in altitude and 200 km in range; and high altitude and long
endurance (HALE), with even longer reach, which, in theory, can offer an
indefinite radius.
The use of UAVs as remote sensing platforms can be best illustrated by
Fig. 9.6, where camera types and functions are outlined, along with their roles
in relaying command and control.

9.2.3 Subsurface vessels: unmanned underwater vehicles

While remote sensing platforms such as airplanes, satellites, and UAVs
provide synoptic coverage at significantly reduced costs, this approach still
does not address one of the most important issues of ocean research: in situ
sampling. We have seen from previous chapters that remote sensing has come
a long way in terms of providing quality data products with reasonable
accuracy, with continuous improvements in sensor design and retrieval
algorithms. These are still limited, however, to the very surface of the ocean,
and cannot provide the much needed 3D vertical structures of the ocean. Even
with the most advanced lidar systems to date, their greatest penetration depths
have barely reached the mixed-layer depth, which could be the theoretical
limits of current lidar system designs (Arnone et al., 2012). Barring major
breakthroughs on the horizon, in-water measurements by ship-based stations
or spot sampling are still the main choice of most research expeditions, with
the notable exception of recently matured UUVs.
While manned underwater vehicles (or submarines) have been used for
ocean research and exploration, their use is very limited due to availability
and cost. As the name UUV suggests, these platforms do not require human
operators (at least not as part of the payload) while they are underwater
during operation.

Figure 9.6 Example of UAV applications and roles in remote sensing of the ocean, both
as sensor platforms as well as relay stations. (Adapted from NOAA.)

There are basically two main types of UUVs: AUVs and
ROVs. The first type has no direct link to the mothership or shore station
from which they are launched. They are self-contained in terms of power
source, data logging, and navigation. An example of such UUVs, the widely
used Remote Environmental Monitoring UnitS (REMUS) class of AUVs, can be
seen in Fig. 9.7.
While underway, an AUV typically navigates under stored commands
from its internal computer, as well as by feedback from onboard sensors such
as a Doppler velocity log (DVL), sonar, and video imager, although it can
also receive real-time instructions from acoustical modems (Fig. 9.8). It can
locate its position at the surface using onboard GPS through antennas, as well
as via in-water communications, and referenced baselines when available.
Depending on the class of AUV, it can carry many traditional sensor
payloads, such as CTDs, side-scan sonars, ADCPs, cameras, and backscat-
tering and fluorescence meters. Specialized sensor packages can be fitted to
certain types of AUVs for special missions, such as those involved in oil spill

Figure 9.7 REMUS AUV deployment at sea. (Courtesy of Scripps Institution of Oceanography.)

Figure 9.8 Typical AUV components.

detection or mine-hunting operations. The limitations of AUVs are their
payload size and power availability, which restrict their range and endurance.
As a result, AUV thrusters cannot generate a strong enough push against
currents; 3 to 5 knots is usually their maximum underwater speed.
Underwater inspection of subsea structures often requires direct feedback
to the surface or operator. This is where ROVs show their greatest strength.

An umbilical cord is used to transmit power and control to the vehicle, and
bring data feedback to the surface vessel, where the operator often resides.
The data link can include high-resolution video signals because the bandwidth
of the cable can afford it, unlike acoustical modems. Payloads are less
restricted on ROVs compared to AUVs, since space and power consumption
are of lesser concern on an ROV. Higher-caliber thrusters can often be fitted to
ROVs, along with mechanical arms, high-energy light sources, or even active
sensors, to gain not only better environmental sensing capabilities, but also
physical control and maneuverability, which is critical in underwater
operations involving inspection, repair, and retrieval. This is often used in
oil and gas exploration and production, where diver operations can be
dangerous or impossible due to short operation hours and depth limits. An
example can be seen in Fig. 9.9. The downside of ROVs is that due to their
umbilical cords, movement and range are limited.
A newer class of AUVs, called ocean gliders, have been gaining
momentum. Unlike traditional AUVs, they do not have thrusters to propel
them to their desired locations. They cleverly use the buoyancy of their bodies
in the water as a power source. Inside the gliders, bladders are used to control
buoyancy, with an electronic motor to make turns at desired depths and/or
locations. The wings on their sides are used to divert part of the vertical energy
from gravity potential into horizontal force to move forward, similar to
airborne gliders on their descending path (Fig. 9.10). At the surface, gliders
can communicate through Iridium satellites to send data and receive
commands and instructions by tilting their wings to the side or by tipping

Figure 9.9 An example ROV, made by Oceaneering International, Inc., inspecting an

underwater structure. (Courtesty of Oceaneering International, Inc.)
Platforms and Instruments 213

Figure 9.10 A Slocum ocean glider from Rutgers University. (Courtesy of Rutgers

Figure 9.11 Major glider types currently in operation. (Courtesy of U.S. NRL.)

their tail end up, depending on their configuration or manufacturer. The three
major types of gliders and their specifications are shown in Fig. 9.11. In
addition to the main types mentioned in the figure, wave gliders have also
demonstrated strong potential in ocean research. Wave gliders have two
portions, one on the surface with solar panels, and one beneath the surface
that converts the up and down wave action into forward thrust for the
vehicle.
The operation of gliders is shown in Figs. 9.12 and 9.13, where a yo-yo
(Fig. 9.12) or sawtooth (Fig. 9.13) pattern is formed to sample the water

Figure 9.12 Example of Spray glider operation. (Reprinted with permission from
Underwater Gliders website: us.htm.)

Figure 9.13 Glider operation example in sawtooth fashion. Shallow water deployment
configuration is used. (Courtesy of U.S. NRL.)

Figure 9.14 Sample data collected by a Slocum glider during an exercise, showing
temperature along the track of the glider, as a function of its depth. (Courtesy of U.S. NRL.)

column from the surface to (near) the bottom. Horizontal and vertical
resolutions are mostly controlled by the glider configurations, with typical
descending rates of 50 cm/s. The rates can be slowed to as low as a few
centimeters per second under suitable conditions. Low power consumption
enables the gliders' unprecedented endurance underwater, including the ability
to cross the Atlantic Ocean, as demonstrated by Rutgers University (Glenn
et al., 2011), while sampling data along the way. Continuous data collection
while underway enables the gliders to sample in situ, at a very low cost, with
high spatial and temporal resolution, as shown by examples in Figs. 9.13
and 9.14.
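The descent rates quoted above translate directly into sampling resolution. The sketch below estimates the along-track footprint of one down-and-up "yo" and the vertical spacing between samples; the 26-deg glide angle is a typical Slocum setting used here purely as an illustrative assumption, not a value from the text.

```python
import math

def yo_footprint(depth_m, glide_angle_deg=26.0):
    """Horizontal distance covered by one down-and-up 'yo' to a given depth,
    for an assumed glide angle measured from the horizontal."""
    return 2.0 * depth_m / math.tan(math.radians(glide_angle_deg))

def vertical_resolution(descent_rate_cm_s, sample_interval_s):
    """Vertical spacing between samples for a given descent rate."""
    return descent_rate_cm_s / 100.0 * sample_interval_s

# One yo to 100 m covers roughly 410 m along track at a 26-deg glide angle.
print(round(yo_footprint(100.0)))        # 410
# At 50 cm/s descent and 1-s sampling, samples are ~0.5 m apart vertically.
print(vertical_resolution(50.0, 1.0))    # 0.5
```

Slowing the descent to a few centimeters per second, as noted above, tightens the vertical spacing proportionally at the cost of a longer transect time.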
Due to the sheer number of data points collected by glider operations, an
automated data quality control (QC) module is needed. This shortens data
turnaround time, making the best use of the data in a real-world, real-time
fashion, and provides input for ocean models to ingest the data feedback
for better ocean weather forecasting. A sketch of an example data flow is
shown in Fig. 9.15 (Carnes and Hogan, 2010; Hou et al., 2010a). Notice that
the real-time data handling system (RTDHS) uses a format adopted by the

World Meteorological Organization (WMO), the Binary Universal Form for the
Representation of meteorological data (BUFR). It is not a critical part of
the data flow, and can be easily changed to other formats such as the Network
Common Data Form (NetCDF).

Figure 9.15 Data flow of an automated data QC routine. [Reproduced from Hou et al.
(2010a).]
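A minimal sketch of the kind of range and spike tests such an automated QC module might apply is shown below. The thresholds, flag values, and function name are illustrative assumptions, not those of any operational system.

```python
def qc_flags(values, vmin, vmax, spike_tol):
    """Assign a QC flag to each sample: 1 = good, 4 = bad (out of range),
    3 = suspect (spike). A spike is a point that jumps away from BOTH of
    its neighbors by more than spike_tol."""
    flags = []
    for i, v in enumerate(values):
        if not (vmin <= v <= vmax):
            flags.append(4)                      # fails physical range test
        elif (0 < i < len(values) - 1
              and abs(v - values[i - 1]) > spike_tol
              and abs(v - values[i + 1]) > spike_tol):
            flags.append(3)                      # isolated spike
        else:
            flags.append(1)                      # passes the simple tests
    return flags

# Temperature series (deg C) with one spike and one impossible value:
temps = [18.2, 18.3, 25.9, 18.4, 18.5, 99.9]
print(qc_flags(temps, vmin=-2.0, vmax=40.0, spike_tol=1.0))   # [1, 1, 3, 1, 1, 4]
```

Flagged records, together with the data, can then be encoded in a transport format such as BUFR or NetCDF for downstream model ingestion.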

9.2.4 Floats, buoys, moorings, observatories, and shore systems

Buoys have been used in various configurations at sea, both free drifting and
anchored. When anchored, they serve, at a minimum, as landmarks in the ocean.
Often, sensors are attached to a floating platform to collect data of the
environment, above and beneath the surface, including weather (wind, solar
radiation, humidity, and air temperature); salinity; temperature; wave heights
for research; early warnings for tsunamis; and sound signatures for submarine
detection, among many other applications. They can be moored to specific
locations or drift freely, profiling up and down the water column. It is not
possible, nor practical, to exhaustively list all of the types and their
differences. Interested readers can find a wealth of information on the subject
on NOAA's National Data Buoy Center (NDBC) website, http://www.ndbc., which offers descriptions of major types and equipment, data
availability, and accessibility.
An example of the capability provided by buoys and floats is seen in the
successful Argo float (Feder, 2000). As a collaborative work by more than 30

countries, more than 3,000 units have been deployed worldwide to date,
covering the entire world ocean, as shown in Fig. 9.16.
It can be seen from Fig. 9.17(a) that a hydraulic bladder is used to control
the buoyancy of the Argo float; this enables it to profile the water column
in a 10-day cycle. In a typical cycle, the float descends to a parking depth
of 1000 m and drifts for 9 days, then sinks to as deep as 2000 m before
rising to the surface, with temperature and salinity measured during the
ascent at ~10 cm/s. At the surface, the data are sent to the satellite within
a 6- to 12-h window. Initially designed to work together with the satellite
altimeter JASON, the floats have succeeded above and beyond expectations,
providing in situ observations for model validation and assimilation.

Figure 9.16 Map of the Argo network as of February 2011. An up-to-date color map can be
found at the International Argo Program website [International Argo Program (2011)].

Figure 9.17 Argo float (a) configuration and (b) operation. (Courtesy of University of
California, San Diego.)
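The cycle figures quoted above lend themselves to quick arithmetic. The sketch below estimates the duration of the profiling leg and the nominal number of profiles per float per year; all numbers are the nominal values from the text.

```python
# Back-of-envelope timing for the Argo cycle described above:
# a profile from 2000 m at ~10 cm/s, one full cycle every 10 days.
ascent_s = 2000.0 / 0.10           # 2000 m at 0.10 m/s
ascent_h = ascent_s / 3600.0       # duration of the profiling leg, in hours
profiles_per_year = 365.0 / 10.0   # one profile per 10-day cycle

print(round(ascent_h, 1), round(profiles_per_year, 1))   # 5.6 36.5
```

So each float spends only about a quarter of a day profiling per cycle, with the rest spent drifting at the parking depth and transmitting at the surface.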
Shore-based systems, such as tidal gauges and high-frequency (HF)
radar, also provide much needed data for ocean monitoring. Together
with cabled observational nodes, they are key components of larger ocean
sensing networks, i.e., observatories. One of the most widely used observa-
tories is the Martha's Vineyard Coastal Observatory (MVCO), depicted in
Fig. 9.18 (Austin et al., 2000). It consists of permanent subsea sensor
arrays and shore-based meteorological stations. It continuously monitors
coastal marine environments by measuring the humidity, wind speed,
solar radiation, precipitation, CO2, and the turbidity, temperature, salinity,
and fluorescence of the water column. These provide great ground truth
data for remote sensors in vicarious calibrations, such as the recently
launched VIIRS imager. The MVCO website (
cgi-bin/mvco/mvco.cgi) has more details and data access policy.
As the summary graph in Fig. 9.19 demonstrates (Dickey, 2003), today's
ocean sensing and monitoring platforms have evolved from solely ship-based
platforms to a multitude of 4D arrays that cover above- and sub-sea
environments across vast areas, great depths, and over extended periods of

Figure 9.18 MVCO. [Reproduced with permission from Austin et al. (2000).]

Figure 9.19 Ocean sensing platforms, from space to the bottom of the ocean. WEC is a
wave energy converter; USBL is an ultrashort baseline; and LBL is a long baseline.
[Reproduced with permission from Dickey (2003), © 2003 Elsevier.]

time. While this section focuses on various platforms, it by no means offers a
complete discussion. We have only briefly touched on shore-based systems,
and left out topics involving acoustical arrays, industrial surveys, and
automated surface ships.
To help put things into perspective in terms of both temporal and spatial
relevance, a chart has been created to best present the platforms discussed
in this chapter (Fig. 9.20)
(Dickey, 2003). It is interesting to see that areas best covered by these
platforms are those closely related to human activities, on the order of meters
to kilometers, days to weeks. This is most likely explained by our focus on
air-sea interactions involving weather forecasting capabilities. The lack of
available platforms for smaller scale process studies seems to call out for
better instrumentation, as interest in small-scale phenomena is not lacking in
oceanography research. Basic physiological understanding of primary
producers, including viruses and bacteria, is still in its infancy. Basic mixing
at the air-sea interface on fine vertical scales, in terms of gas, heat, and
momentum exchange (such as Langmuir circulation), is still not well
understood, and is critical in deriving large-scale, long-term forecasting
capabilities. The new platforms, including UUVs and UAVs, are the most
likely candidates to fill the technical gap in platforms, when combined with
the new generation of nanosensors, such as miniaturized flow cytometers.

Figure 9.20 Spatial and temporal scale accessible by different observational platforms.
[Reproduced with permission from Dickey (2003), © 2003 Elsevier.]

9.3 Common Instruments for Ocean Observation

The list of instruments used for ocean research is long, due to the history and
diversity of topics involved. It is therefore not possible to exhaustively list
them, nor is it practical to discuss all of them. Instead, a few typical sensors
are mentioned and discussed, with an intentional bias toward optical sensors.
The most commonly used instruments covered in this chapter are the CTD,
ADCP, high-frequency radar, ac-9, backscattering meter, fluorometer, and
spectroradiometer. They join the list of instruments already discussed in
previous chapters: ocean color sensors, lidars, microwave altimeters, radio-
meters, SARs, EO imagers, and sonars.
CTDs are a standard instrument in oceanography research, as they
provide a vertical distribution of temperature and salinity as a function of
depth. Temperature directly relates to density, and is critical in understanding
water mass stability, mixing, and biological activities. Unless there is fresh
water input, salinity is often used as a conservative tracer of water mass
for ocean circulation models. Both temperature and salinity also strongly

influence sound propagation in the ocean, which affects communication and
sonar performance in mine hunting and submarine detection. CTDs are often
used in combination with other sensors in a cage to achieve time-synchronized,
depth-synchronized measurements between sensors; the dissolved
oxygen (DO) sensor is shown as an example setup in Fig. 9.21(a).
Because time is required for the temperature and especially the conductivity
sensor to reach equilibrium on contact with the water, there can be a lag
between the logged depth and data readout. To avoid this complication, some
CTDs use pumps to speed up the water flow passing the sensor, as Fig. 9.21(a)
shows. They are self-contained, operate on battery power, and possess internal
data storage at a sampling rate of tens of hertz, time stamped by an internal clock.
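The lag correction mentioned above can be illustrated with a minimal sketch: the slower conductivity channel is shifted in time to line up with the temperature channel before salinity is computed. The fixed integer lag, the function name, and the sample values are illustrative assumptions; real processing uses sensor-specific response-time corrections.

```python
def align_conductivity(cond, lag_samples):
    """Shift a conductivity series earlier by a fixed number of samples so
    that it lines up with the faster-responding temperature channel.
    The tail is padded by repeating the last reading."""
    if lag_samples <= 0:
        return list(cond)
    shifted = list(cond[lag_samples:])      # drop the delayed leading readings
    shifted += [cond[-1]] * lag_samples     # pad the tail
    return shifted

cond = [3.50, 3.50, 3.52, 3.55, 3.60, 3.60]   # S/m, responding two samples late
print(align_conductivity(cond, 2))            # [3.52, 3.55, 3.6, 3.6, 3.6, 3.6]
```

Pumped CTDs, as noted above, reduce the lag at the source by forcing a steady flow past the cell, so less of this correction is needed in postprocessing.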
On most research vessels, a CTD is often tied to a rosette carousel
consisting of Niskin bottles, as shown in Fig. 9.21(b). Niskin bottles are used
to collect water samples at desired depths. They are open at both ends at the
beginning of deployment. Once the carousel reaches the desired depth, a

Figure 9.21 (a) CTD with a DO sensor in the same cage. (b) CTD with Niskin bottles on a
rosette carousel.

signal is sent through a hydrowire to trigger a specific bottle to close at both
ends, trapping the water inside. It is interesting to note that in earlier
versions, a metal messenger was sent down the wire to mechanically trigger an
upside-down bottle to retain the water inside.
Ocean currents are another highly desired measurement, as they are a key
parameter in ocean circulation models. Currents are typically measured
using an ADCP, shown in Fig. 9.22. An ADCP consists of transducers (imagine
directional loudspeakers), receivers, amplifiers, gyros, and a computer for
logging, postprocessing, and averaging. The Doppler effect is used to estimate
current velocity: the backscattered sound wave is frequency shifted by the
moving water from which the backscattering occurred, with suspended particles
serving as passive tracers of the flow. Similar to sonar and lidar, time
gating is used to determine the range from the scattering layer to the receiver.
ADCPs can be mounted sideways on a vessel or river bank, on the bottom of
a vessel, inside or attached to moorings, or on the bottom of the ocean.
Currents and wave actions can be deduced from ADCP measurements. A
Doppler velocity log (DVL), often used on AUVs, is essentially a special
type of ADCP with a different logic circuit. It uses the seabed or specific layers
of the water column as a reference, from which the velocity of the platform
is estimated.
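The Doppler relation underlying the ADCP can be sketched numerically as follows. The 1500-m/s sound speed and the 20-deg beam angle are nominal, illustrative values (beam geometry varies by instrument), and the function names are not from any vendor API.

```python
import math

SOUND_SPEED = 1500.0   # m/s, nominal speed of sound in seawater

def radial_velocity(f0_hz, doppler_shift_hz):
    """Along-beam water velocity from the two-way Doppler shift of the
    backscattered echo: v = c * df / (2 * f0)."""
    return SOUND_SPEED * doppler_shift_hz / (2.0 * f0_hz)

def horizontal_velocity(f0_hz, doppler_shift_hz, beam_angle_deg=20.0):
    """Horizontal flow component for a beam slanted beam_angle_deg from
    the vertical (a 20-deg slant is assumed here for illustration)."""
    v_r = radial_velocity(f0_hz, doppler_shift_hz)
    return v_r / math.sin(math.radians(beam_angle_deg))

# A 300-kHz ADCP observing a 40-Hz echo shift sees 0.1 m/s along the beam:
print(radial_velocity(300e3, 40.0))   # 0.1
```

Combining the radial velocities from the four differently oriented beams, as in Fig. 9.22(b), resolves the full 3D flow vector.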
High-frequency radars in ocean sensing are shore-based systems (Fig. 9.23)
that measure sea surface waves and currents from tens to hundreds of kilo-
meters away. Operating in the megahertz range, with equivalent wavelengths
on the order of 10 to 100 m, high-frequency radar can detect (at a very long
range) the Bragg scattering of vertically polarized EM waves in a conducting
medium (seawater). Similar to optical gratings in principle, the sea surface
waves act as gratings that return constructively and destructively interfering
backscattered EM waves to the receiver. Since strong returns only

Figure 9.22 (a) An ADCP ready to be deployed. (b) Heads of an ADCP with four
transducers oriented at different angles to obtain 3D flow structures. (Courtesy of NOAA
Ocean Service Education.)

Figure 9.23 A CODAR high-frequency radar antenna. (Courtesy of University of California, San Diego.)


happen when the surface wavelength is half of the EM wavelength, the phase
speed of these Bragg waves is known from the wave dispersion equation (Pond
and Pickard, 1983). The radial velocity of the current can then be deduced
from the Doppler frequency shift: any difference between the observed shift
and that expected from the wave's phase speed alone is attributed to the
underlying current. Additionally, the waves responsible for returning the
constructive backscattered signals must be traveling in a radial path directly
toward or away from the radar. This provides information
about the direction of the waves. By using a network of high-frequency radars,
such as the one shown in Figs. 9.23 and 9.24, the same patch of water surface
can be monitored from different angles. Ocean surface waves and currents can
therefore be mapped in precise direction and intensity. The Coastal Ocean
Dynamics Applications Radar (CODAR) was developed by NOAA (Barrick,
Evans, and Weber, 1977) and was later commercialized to be widely used
today (Howden, Barrick, and Aguilar, 2011).
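The Bragg selection, deep-water dispersion relation, and current retrieval described above can be sketched numerically. The function names and the simple sign convention are illustrative assumptions; an operational system works from the full Doppler spectrum rather than a single shift value.

```python
import math

G = 9.81          # m/s^2, gravitational acceleration
C_LIGHT = 3.0e8   # m/s, speed of light

def bragg_wavelength(radar_freq_hz):
    """Ocean wavelength selected by Bragg scattering: half the EM wavelength."""
    return C_LIGHT / radar_freq_hz / 2.0

def bragg_phase_speed(radar_freq_hz):
    """Deep-water phase speed of the Bragg wave, from omega^2 = g*k."""
    k = 2.0 * math.pi / bragg_wavelength(radar_freq_hz)
    return math.sqrt(G / k)

def radial_current(radar_freq_hz, observed_shift_hz):
    """Radial surface current: the part of the observed Doppler shift not
    explained by the Bragg wave's own phase speed (two-way shift)."""
    em_wavelength = C_LIGHT / radar_freq_hz
    expected_shift = 2.0 * bragg_phase_speed(radar_freq_hz) / em_wavelength
    return (observed_shift_hz - expected_shift) * em_wavelength / 2.0

f = 13e6                                  # a 13-MHz CODAR-class radar
print(round(bragg_wavelength(f), 1))      # ~11.5-m Bragg waves
print(round(bragg_phase_speed(f), 2))     # their ~4.24-m/s phase speed
```

A wave moving exactly at its theoretical phase speed gives zero inferred current; any excess or deficit in the measured shift maps directly to the radial current component.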
Optical instruments for in situ measurements are critical for our
understanding of ocean processes, as radiation from the sun provides the
ultimate driving force for the movement of the ocean, the life within it, and the
chemical reactions associated with it. Precise measurements of ocean water
responses to the incoming solar energy directly affect model input and
parameterization. Also, ocean optics instruments help to validate the remote
sensing algorithms discussed earlier.
As shown in Chapter 2, basic IOPs are important in underwater
radiative transfer, and are affected by the absorption and scattering process.

Figure 9.24 CODAR high-frequency radar network and coverage range. [Reproduced with
permission from the University of Maine.]

In situ measurements of these quantities have mainly focused on beam
transmission and absorption. Beam transmissometers (or beam-c meters) are
widely used in visibility and turbidity studies, as demonstrated in the Secchi
disk theory discussions. Newer versions are more compact, and can measure
both beam-c and scattering at the same time, at multiple wavelengths. An
example of such an instrument is shown in Fig. 9.25, where a nine-channel
ac meter (ac-9, with wavelengths at 412, 440, 488, 510, 532, 555, 650, 676,
and 715 nm) by WETLabs is compared to 3.5-in. disks. An improved
version, ac-s, is capable of measuring absorption and attenuation at 4-nm
spectral resolution from 400 to 730 nm (Zaneveld et al., 2004).
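The beam-c measurement follows the Beer-Lambert law. A minimal sketch, assuming a 25-cm optical pathlength (typical of ac-9-class meters, used here as an illustrative value):

```python
import math

def beam_c(transmittance, pathlength_m):
    """Beam attenuation coefficient c (1/m) from the fraction of collimated
    light transmitted over the instrument pathlength: c = -ln(T) / r."""
    return -math.log(transmittance) / pathlength_m

def absorption(c, b):
    """Absorption coefficient from attenuation and total scattering: a = c - b."""
    return c - b

# 90% of the beam survives a 25-cm path:
c = beam_c(0.90, 0.25)
print(round(c, 3))   # 0.421 (1/m)
```

An instrument like the ac-9 effectively measures c and a on separate flow tubes at each wavelength, leaving scattering b = c - a as the derived quantity.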
The direct measurement of absorption by particulates involves filtering a fixed
amount of seawater through filter pads, such as 0.2-μm glass-fiber filters (GF/F),
and measuring the spectral absorption of the filter pads, as shown in Fig. 9.26
(Miller, Del Castillo, and McKee, 2005). These are then compared to blank
filters to obtain the net absorption by particles.
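The filter-pad calculation can be sketched as follows. The pathlength-amplification factor beta, the value of 2.0 used for it, and the sample numbers are illustrative assumptions; in practice, beta is determined empirically for each filter type, and further corrections apply.

```python
def filter_pad_absorption(od_sample, od_blank, volume_m3, clearance_area_m2,
                          beta=2.0):
    """Particulate absorption (1/m) from filter-pad optical densities.
    The blank pad removes the filter's own absorption; V/A is the
    equivalent suspension pathlength; beta accounts for pathlength
    amplification inside the pad (illustrative value); 2.303 converts
    base-10 optical density to a natural-log coefficient."""
    pathlength_m = volume_m3 / clearance_area_m2
    return 2.303 * (od_sample - od_blank) / (beta * pathlength_m)

# 250 mL filtered through a pad with a 3.5-cm^2 clearance area:
a_p = filter_pad_absorption(0.15, 0.02, volume_m3=250e-6,
                            clearance_area_m2=3.5e-4)
print(round(a_p, 3))
```

Repeating the calculation across the spectrophotometer's wavelengths yields the particulate absorption spectrum a_p(lambda).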
In addition to the IOP measurements already discussed, important AOPs
such as up- and downwelling radiance and irradiance are often measured with

Figure 9.25 A nine-channel absorption and beam attenuation meter from WETLabs.
(Courtesy of WETLabs.)

Figure 9.26 Absorption measured by comparing filtered particles on filter pad to blank
pads: (a) sample preparation and (b) filter pads. (Reproduced with permission from NOAA
Teacher at Sea website:

radiometers, as shown in Fig. 9.27. Depending on requirements, they are
typically deployed as fixed configurations for ground stations, such as those in
cal/val sites, including MVCO. An alternative is the portable version, where a
single receiver is used to measure both up- and downwelling radiance, aiming

Figure 9.27 (a) Radiometers used to measure up- and downwelling radiance (or irradiance).
(b) Portable FieldSpec radiometer from ASD used to measure spectral responses in the field.
(Courtesy of ASD, Incorporated.)

the detector accordingly. Grayscale panels of known Lambertian reflectance
are often used for calibration in the field.
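The panel measurement converts field radiances to reflectance by simple ratioing against a surface of known reflectance viewed under the same illumination. A minimal sketch, with the 0.99 panel reflectance as an illustrative stand-in for a calibrated panel value:

```python
def target_reflectance(l_target, l_panel, panel_reflectance=0.99):
    """Reflectance of a target from field radiometry: the ratio of target
    radiance to the radiance of a Lambertian calibration panel viewed
    under the same illumination, scaled by the panel's known reflectance."""
    return l_target / l_panel * panel_reflectance

# Water surface at 2.0 radiance units against a panel reading 40.0:
print(round(target_reflectance(2.0, 40.0), 4))   # 0.0495
```

Because the panel normalizes out the incident illumination, the same measurement made minutes apart under changing sky conditions remains comparable.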
Smaller sensors with low power consumption are required with the
advancement of UUVs. A new line of such products is becoming available
and has been tested successfully in field deployments. An example is shown in
Fig. 9.28. The ECO Triplet, made by WETLabs, is configured to allow
for three different sensors, such as backscattering meters at select wavelengths
(zero to three) and fluorometers (zero to three). A backscattering configuration
is shown in Fig. 9.28(b), installed on a Slocum glider.

Figure 9.28 (a) The small optical sensor ECO Triplet is designed for UUVs. (b) Three-
wavelength backscattering meters shown installed on a Slocum glider. [Figure (a) courtesy
of Rutgers University; Fig. (b) courtesy of WETLabs.]

Table 9.1 Typical ocean optical properties and ranges from various sensors for UUVs.
[Reproduced from Hou et al. (2010a).]
A typical selection of sensors designed for UUVs is listed in Table 9.1 for
readers interested in exploring the topic further. Detailed discussion and
abbreviations can be found in a validation test report for automated data QC
for gliders (Hou et al., 2010a).

9.4 Summary
Traditional ocean research techniques are widely augmented today with in
situ sampling packages on moorings, buoys, floats, flow-through systems,
mobile platforms (gliders, AUVs, and ROVs), integrated sensor networks,
and observatories. These are vibrant research and development areas and
they generate the most accurate data available, in 3D, often in real time, and
are less affected by adverse conditions. However, spot sampling lacks the
rapid, broad coverage that is critical in high-level, real-time tactical decision
making. In situ observations frequently are not possible in unsafe or denied-
access environments. Remote sensing techniques (both active and passive)
have been proven to offer synoptic surface coverage with adequate accuracy,
when sensors are calibrated and validated correctly. It is essential to
establish and maintain precise protocols when deciding the appropriate mix
and application of different sensor systems to maintain data coherence and
comparability. It is important to understand how the ocean environment
affects sensor performance, and what techniques are being developed to
enhance this performance in challenging ocean environments. An integrated
solution, based on a numerical ocean circulation model, with initial and
boundary parameters adjusted based on in situ and remotely measured
results, might be the ultimate answer to many challenging questions we are
facing.

References
International Argo Program (2011). Map of the Argo network. [online]
Available at:
Arnone, R. et al. (2012). Probing the subsurface ocean processes using ocean
LIDARS2. Proc. SPIE 8372, 83720O. doi:10.1117/12.921103.
Austin, T. C. et al. (2000). The Martha's Vineyard Coastal Observatory: a
long-term facility for monitoring air-sea processes. IEEE/MTS Oceans, pp.
Barrick, D. E., Evans, M. W., and Weber, B. L. (1977). Ocean surface
currents mapped by radar. Science 198, 138–144.
Brown, B. M. and Schulz, B. L. (2009). The effects of the Joint Multi-
Mission Electro-Optical System on littoral maritime intelligence, surveillance,
and reconnaissance operations. Master's Thesis. Monterey: Naval Postgraduate
School.
Carnes, M. and Hogan, P. (2010). Validation test report for LAGER 1.0.
Memorandum report. U.S. Naval Research Laboratory. p. 68.
Dickey, T. D. (2003). Emerging ocean observations for interdisciplinary data
assimilation systems. J. Marine Syst. 40–41, 5–48.
Feder, T. (2000). Argo begins systematic global probing of the upper
oceans. Phys. Today 53, 50–51.
Glenn, S. et al. (2011). The trans-Atlantic Slocum glider expeditions: a
catalyst for undergraduate participation in ocean science and technology.
Marine Technol. Soc. J. 45, 52–67.
Hou, W. et al. (2010a). Development and testing of local automated glider
editing routine for optical data quality control. Memorandum report. U.S.
Naval Research Laboratory. p. 37.
Hou, W. et al. (2010b). Glider optical measurements and BUFR format for
data QC and storage. Proc. SPIE 7678, 76780F. doi:10.1117/12.851361.
Howden, S. D., Barrick, D., and Aguilar, H. (2011). Applications of high
frequency radar for emergency response in the coastal ocean: utilization of the
Central Gulf of Mexico Ocean Observing System during the Deepwater
Horizon oil spill and vessel tracking. Proc. SPIE 8030, 80300O. doi:10.1117/
Zaneveld, J. R. et al. (2004). Correction and analysis of spectral absorption
data taken with the WETLabs ac-s. Ocean Opt. 14 (abstract).
Manzella, G. M. R. et al. (2006). The improvements of the Ships Of
Opportunity Program in MFSTEP. Ocean Sci. Discuss. 3, 1717–1746.

Miller, R. L., Del Castillo, C. E., and McKee, B. A. (2005). Remote Sensing of
Coastal Aquatic Environments: Technologies, Techniques and Applications.
Dordrecht: Springer.
Pond, S. and Pickard, G. L. (1983). Introductory Dynamical Oceanography.
New York: Pergamon Press.
Chapter 10
Integrated Solutions and Future
Needs in Ocean Sensing
and Monitoring
It has been said that oceanography is a young but ancient science. Despite
the vastness of the ocean, which covers close to three-quarters of the earth's
surface, concentrated efforts in ocean studies did not start until the last few
decades. This is especially true of the systematic approaches we see today.
World wars and the wartime requirements of transportation, embargo, and
defense drove oceanographic research to new levels. Concern about global
warming, more appropriately referred to as global climate change, is one of
the key driving forces behind ocean research today. Biogeochemical cycles of
the biosphere, lithosphere, and each basic element and process involved, have
been examined in great detail. For example, the air-sea gas exchange is of
great interest to oceanographers because it strongly influences the rate of CO2
absorption by the ocean. Also of interest is the temperature at the sea surface
and beyond, as this affects the physical and chemical balance of the water
as well. A better global circulation model is critical to the understanding of
water mass transport and gas exchange in vastly different environments. The
question that needs to be asked is: What will be (or should be) the driving
force of oceanography in the next few decades and beyond? To answer
this question, it is necessary to point out and highlight the obvious. Views
represented here reflect only the opinions of the author, based on results of
oceanographic research. They do not reflect the views of any organization or agency.
Before discussing the future needs of ocean sensing, it is prudent to look at
an example of the integrated solutions we have today. The example brings
together observational data from in situ and remote sensors into a circulation
model, to predict and forecast system performance under current and future
conditions. This is followed by discussions on logical new focus areas for
future research and likely solutions to our sensing needs.


10.1 Tactical Ocean Data System

The majority of this section is adapted from a previous work (Hou et al.,
2010c), where a performance surface prediction framework for a laser
mine-hunting system is established through the combined use of in situ
measurements, remote sensing, and circulation models. It provides a good
example of an integrated solution involving multiple disciplines of oceanog-
raphy and sensing techniques.
The ability to sense and monitor vast areas of the ocean in detail and with
accuracy is of vital interest to civilian, defense, and security applications,
including fisheries, shipping, navigation, homeland security, MIW, and ASW.
One of the key obstacles in operational oceanography is the difficulty
associated with sampling and taking measurements in the field, due to lack of
easy access to the vast areas of the ocean. Remote sensing techniques, both
active and passive (especially those from space), can offer synoptic surface
coverage with adequate accuracy. This occurs when sensors are calibrated
and validated correctly with help from in situ measurements, and effects of
atmosphere are correctly estimated. Even so, such signals are heavily weighted
and are thus biased toward features from the surface layer. Ocean circulation
models offer a much needed 3D element to the mix that allows for features
extending beyond the surface layer, on top of the 2D synoptic coverage. After
properly determining initial and boundary conditions that can be obtained from
in situ measurements and satellite observations, the fourth dimension, or
forecast, can be included in the framework. This philosophy has been realized
in the Tactical Ocean Data System (TODS) framework.
Modern defense and security requirements demand that accurate
information be provided when and where it is needed [e.g., battlespace on
demand (BonD)]. Ocean sensing must not only provide timely and accurate
data, but also offer insights regarding overall 3D and future, i.e., forecasted,
environmental conditions. The combined use of in situ observations, remotely
sensed data, and physical models is a rapidly evolving field, although
improved assimilation of available data into models still poses a challenge.
The ability to sense, integrate, and predict is vital in establishing a truly real-
time, 4D cube of verified and validated information for ocean now-casts and
forecasts, as shown in Fig. 10.1, in terms of TODS. We see that observations
such as those from gliders provide critical input to one of the key elements in
the BonD tier 0 structure.
Glider observations can provide input for localized or spot events, as well
as provide long-range, long-term, in-depth data throughout the water column.
One good example is the seven-month voyage of the Rutgers University
Slocum glider Scarlet, which completed its crossing of the Atlantic Ocean,
from New Jersey to Baiona, Spain, on December 4, 2009. This glider was capable of sampling ocean
structures during the crossing. The vast data stream generated by gliders
contains a wealth of much-needed information for the topics discussed earlier.
Automated QC and analysis of these data are the only way to achieve their timely
assimilation into the described models and framework.

Figure 10.1 TODS components. [Reproduced from Hou et al. (2010c).]
The following section describes the components of the TODS framework
as an example of an integrated solution to an MIW application. First, the
capabilities of glider optical measurements at the U.S. NRL and the Naval
Oceanographic Office (NAVOCEANO) are discussed. Then, efforts associated
with the development of automated optics QC processes are presented, followed
by discussions on Optical Forecast software (OpCast) and 3D optics modules.

10.1.1 Glider optics

To fulfill the requirements of MIW, ASW, navigation safety, monitoring of
global climate change, and battlespace environmental sensing, optical sensors
have been fitted and tested on different types of gliders and have proven to be
successful in assessing optical conditions in a variety of water masses. These
sensors provide input for optimizing the ocean optical models used to generate
and forecast electro-optical identification (EOID) performance surfaces, diver
visibility, and asset vulnerability products (see Table 9.1). The output can aid
in tactical decision making during fleet operations.
An automated QC process for optical measurements from gliders has been
designed and implemented as part of the Local Automated Glider Editing
Routine (LAGER) (Carnes and Hogan, 2010; Hou et al., 2010b). Optical
algorithms are used to process data from available glider optical sensors,
including instruments that measure the beam attenuation coefficient c; total
scattering coefficient b; backscattering coefficient bb; the fluorescence returns
and thus the derived concentrations of chlorophyll, phycoerythrin, and CDOM;
downwelling irradiance Ed; photosynthetically available radiation (PAR);
and other derived optical parameters, such as diver visibility (Table 9.1).
All current optical sensors, including the ECO series from WETLabs (bb2f,
bb2slo, fl3, SAM, BAM, AUVb, and ECO-PAR), and OCR504/7 from
Satlantic, are embedded in the algorithm, with flexibility built in to allow for
future expansion, such as firmware and data format changes. Additionally,
Table 9.1 outlines the typical range, resolution, bandwidth, and channel limits
of the optical sensors used in the LAGER optics routines (Hou et al., 2010c).
The data flow can be viewed as a five-step process, with the inclusion of
optics, as shown in Fig. 9.15. Notice that the process can be configured such that
all data are (1) sent to manual editing, (2) processed by physical parameters
alone, or (3) processed by physical and optical parameters combined.
The data ingestion module reads in real-time raw data at the
NAVOCEANO Glider Operation Center (GOC) and converts it to Network
Common Data Form (NetCDF) format. Automated QC is the key step in
LAGER optics. QC uses flags to classify different types of erroneous data
points, and the combined quality of the flags determines whether the manual-user
GUI (MUG) is needed for closer inspection. The resulting data are then sent
to the database in BUFR format, which is required by the RTDHS at
NAVOCEANO (Hou et al., 2010c).
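As a hedged illustration of this routing step, the decision of whether a profile needs the MUG might be sketched as follows. The threshold and the flag convention (0 = good, nonzero = suspect) are assumed for illustration and are not LAGER's actual rules:

```python
# Illustrative sketch (not the actual LAGER rule): the combined quality of
# the per-datum QC flags determines whether the profile is routed to the
# manual-user GUI (MUG) for closer inspection.
def needs_mug(flags, max_bad_fraction=0.1):
    """Send the profile to manual inspection if too many points are flagged."""
    bad = sum(1 for f in flags if f != 0)
    return bad / len(flags) > max_bad_fraction
```

A profile with half of its points flagged would be routed to the MUG, while a clean profile would pass straight through to the database.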
The automated optical QC program structure can be found in Hou et al.
(2010a) and Hou et al. (2010c). Briefly, QC tests for optical
variables are implemented in LAGER with modified algorithms and
conventions for detecting and flagging questionable data. These tests were
initially developed for processing physical variables such as temperature T
and salinity S. However, because of the wide variety of optics types and
frequency bands, a 2D variable array structure is used, in contrast to the 1D
vector approach used individually for T and S. The reader is referred to Hou
et al. (2010a) for details of physical data QC implementation.
Two of the physical QC algorithms implemented in LAGER are used
with minor adaptations for optics QC (OQC). These are the global bounds
check and the spike test. The latter defines a spike to be a single datum
departing significantly from its neighboring values (single-point spike). It
has the additional feature of being able to detect a convoluted spike in the
presence of a gradient (as a function of depth) of the relevant hydrographic or
optical variable. Since segments of anomalous optical data can span several
data points, a method to detect these cases, in the form of a running standard
deviation filter, is also included in OQC. This filter finds and flags data that
depart significantly in value from a local mean, computed inside a running
window (depth interval), which is moved through the entire sampled depth
range of the profile. In addition to the variable value range check, the depth
of each sample point is also checked, and the profile is chopped short if necessary. This
eliminates values that appear to lie above, or too close, to the surface, where
processes such as wind waves and bubbles can make optical measurements,
such as backscattering and downwelling irradiance, either invalid or difficult
to interpret reliably. If the sample depth is shallower than a given threshold (default of 1 m
from the surface), or is negative, the variable value is flagged. A constant profile
check is also done to determine whether the values for a particular optics
variable are constant throughout the sampled depth range. This has proven
critical in assessing real-time EO sensor performance, such as that during the
RIMPAC 2008 exercise (Mahoney et al., 2009). Such constant optics variable
values could occur for a variety of reasons, such as instrument sampling
faults or sensor contamination, which suggest that the data are invalid.
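The checks described above can be sketched as follows. This is an illustrative simplification, not the actual LAGER/OQC implementation; all bounds, tolerances, window sizes, and flag codes are assumed for illustration:

```python
# Illustrative sketch (not the NRL LAGER/OQC code) of the checks described
# above: global bounds, single-point spike, running standard deviation,
# near-surface depth trim, and constant-profile detection.
from statistics import mean, stdev

def oqc_flags(depths, values, bounds=(0.0, 10.0), spike_tol=5.0,
              run_win=5, run_nsigma=3.0, min_depth=1.0):
    """Return per-sample QC flags: 0 = good, nonzero = type of suspect datum."""
    n = len(values)
    flags = [0] * n
    # 1. Global bounds check: each value must lie within its physical range.
    for i, v in enumerate(values):
        if not bounds[0] <= v <= bounds[1]:
            flags[i] = 1
    # 2. Single-point spike test: a datum departing strongly from its neighbors.
    for i in range(1, n - 1):
        if abs(values[i] - (values[i - 1] + values[i + 1]) / 2) > spike_tol:
            flags[i] = 2
    # 3. Running standard deviation filter: flag departures from the local
    #    mean computed in a moving window that excludes the point itself.
    half = run_win // 2
    for i in range(n):
        win = values[max(0, i - half):i] + values[i + 1:i + half + 1]
        if len(win) >= 2 and stdev(win) > 0:
            if abs(values[i] - mean(win)) > run_nsigma * stdev(win):
                flags[i] = 3
    # 4. Depth check: samples above min_depth (or at negative depth) are cut.
    for i, d in enumerate(depths):
        if d < min_depth:
            flags[i] = 4
    # 5. Constant-profile check: no variation over the whole sampled range.
    if n > 1 and max(values) == min(values):
        flags = [5] * n
    return flags
```

Applied to a short backscattering profile, the sketch trims a near-surface sample, catches a mid-profile spike via the running filter, and flags an entirely constant profile as invalid.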
The automated QC routines are in place and have been tested under
different conditions, using data from different types of gliders. The results
(Hou et al., 2010a) show that under the most complicated situations involving
complexities associated with data transmission as well as environmental
variability, as much as 30% of profiles require manual examination by
operators. The failure rate in detection of bad profiles is less than 0.4% of
the profiles examined and is associated with sparseness of the data and single-
point spikes likely due to subsurface optical layers.

10.1.2 OpCast
Optical forecasts are required for naval operations to support U.S. Navy
diving operations and MIW mine-hunting system performance predictions.
The OpCast software, developed by the U.S. NRL, aims to forecast coastal optical
properties using satellite ocean color. This forecasting capability uses the Navy
Coastal Ocean Model (NCOM) and satellite imagery processed using the
Automated Processing System (APS). Satellite ocean color has been providing
monitoring capabilities for bio-optical properties in coastal regions for
decades, using several ocean color sensors. Applications of these data for
operations and research are limited by coverage and the ability to forecast
changing conditions. The ability to couple satellite bio-optical properties
with physical forecast circulation models provides a real-time, 24-h prediction
of the bio-optical distribution along coastal regions (see Fig. 10.2). A real-time
and long-term evaluation of an optical forecast is assessed by comparison
with next-day imagery. The forecast is based on Eulerian advection from
NCOM, which does not account for biological and optical degradation and
growth processes at this point. However, for near-real-time (~ 24 h) coastal
applications, this assumption appears to be valid and provides a new
capability for deployment planning and coastal management.
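The core idea of a purely Eulerian advection forecast, with no biological or optical source and sink terms, can be sketched in one dimension. This is a minimal illustration, not the OpCast/NCOM code; the first-order upwind scheme and constant current are assumptions:

```python
# A minimal sketch of an Eulerian advection forecast: a first-order upwind
# step moves a bio-optical tracer field with a model current. No biological
# growth or decay terms are included, matching the assumption stated above.
def advect_1d(tracer, u, dx, dt, steps):
    """Advect `tracer` by a constant velocity u > 0 on a uniform grid."""
    c = list(tracer)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c)):
            # upwind difference: for u > 0, information comes from the left
            new[i] = c[i] - u * dt / dx * (c[i] - c[i - 1])
        c = new
    return c
```

With a Courant number of one (u dt/dx = 1), each step shifts the tracer field exactly one grid cell downstream, mimicking a feature advected along the coast between successive satellite images.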
For validation of forecast products, a 24-h prediction was compared to an
actual remote sensing image that came in the next day.

Figure 10.2 Examples of OpCast for the Mississippi Bight show the progression of optical
properties using backscattering at 551 nm. [Reproduced from Hou et al. (2010c).]

Figure 10.3 Comparison of OpCast output and satellite observations: (a) 24-h forecast,
(b) MODIS observation after 24 h, and (c) difference image between (a) and (b). [Reproduced
from Hou et al. (2010c).]

Figure 10.3 shows the comparison and validation results of the backscattering
forecast (551 nm) for October 20, 2009. Figure 10.3(a) shows the 24-h forecast,
and Fig. 10.3(b)
shows MODIS imagery, which includes clouds and atmospheric correction
failures. Figure 10.3(c) shows the difference between the observation from
MODIS and the 24-h forecast output for valid retrievals only. White areas
indicate zero difference, red areas indicate overestimated values, and blue areas
indicate underestimated values. Black indicates an area of no data (ND) due to
contaminated pixels, i.e., clouds, glint, and algorithm failures, including those
in atmospheric corrections.

10.1.3 Optimization and 3D optical volume

The 3D Optical Generator (3DOG) was developed at the U.S. NRL to
provide real-time characterization of the optical environment. 3DOG is used
to define the mine countermeasure (MCM) performance surface for specific
laser systems. The software requires the following input:
1. Glider profiles of optical and physical properties, obtained from optical
LAGER processing;
2. Satellite surface bio-optical properties and 1% light level from MODIS- or
MERIS-type sensors; and
3. Modeled vertical density and temperature fields from NCOM.
The procedure to develop a 3D optical volume is illustrated in Fig. 10.4.
The first step (red lines) is to develop region-specific coefficients, which are
derived based on the optimization of glider and satellite data. The second step
(black lines) is to use these coefficients with the surface satellite optics and the
3D physical NCOM model. The latter yields the density, mixed-layer depth, and
intensity of the mixed-layer depth, all of which are used in generating the 3D
optical volume.

Figure 10.4 Flow chart of 3DOG. Note that the hierarchical data format (HDF) is used as
model output. [Reproduced from Hou et al. (2010c).]
Coefficients from the optimization code are used with the 3DOG software
to generate the 3D optical volume. The software is written in the IDL
programming language and requires the following input:
1. NCOM output in NetCDF is required to determine the mixed-layer depth
and the intensity of the mixed-layer depth (IMLD) for each grid location.
This is accomplished by using the same Brunt–Väisälä frequency (BVF)
(Carnes and Hogan, 2010) and temperature thresholds used in the optimization,
obtained from an ASCII file generated by the optimization code.
2. The satellite bio-optical property of chlorophyll or backscattering, and the
1% light level at each grid location are also required.
3DOG takes these inputs and the coefficients from the optimization and
constrains the surface bio-optical field using the satellite optical properties as
truth. The vertical profiles are constrained over the first attenuation length
and cannot exceed the 1% light level depth. Examples are shown in Fig. 10.5.
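As a hypothetical illustration (not the IDL 3DOG code), the constraint logic might look like the following, where the exponential decay below the mixed layer, its rate, and the function name are assumptions introduced here:

```python
# Hypothetical sketch of constraining a vertical optical profile: the
# satellite surface value is treated as truth and held through the mixed
# layer, decays below it (an assumed exponential form), and is cut off at
# the 1% light level (euphotic) depth, as described above.
import math

def optical_profile(surface_bb, mld, z_1pct, depths, decay=0.05):
    """Build a bb(z) profile anchored to the satellite surface value."""
    profile = []
    for z in depths:
        if z > z_1pct:
            break                       # no estimate below the 1% light level
        if z <= mld:
            profile.append(surface_bb)  # well mixed: hold the surface value
        else:
            profile.append(surface_bb * math.exp(-decay * (z - mld)))
    return profile
```

For a 10-m mixed layer and a 50-m euphotic depth, the sketch holds the satellite backscattering value through the mixed layer and discards any requested depth below 50 m.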
The 3D bio-optical data volume has been integrated into the Electro-
Optical Identification System (EODES v3.0), developed by Metron, Inc.
(Giddings, Shirron, and Tirat-Gefen, 2005). The program defines the
performance field of an underwater-laser mine-hunting system. Inputs for
the EODES model are the vertical profiles of optical properties, including the
beam attenuation coefficient, backscattering coefficient, system specifications,
and the tow altitude above the bottom for a desired probability of identification.

Figure 10.5 Example image of the derived mixed-layer depth with 3D slices of diver
visibility along north-to-south and west-to-east transects (red lines). (Courtesy of U.S. NRL.)

Outputs of the EODES model include image quality of a target image, and
optimal towing altitude to detect and identify subsurface objects under
defined optical conditions, given by the 3D optical volume. These outputs
help determine the optimal height above the bottom at which to tow the
mine-hunting system.
The 3DOG volume is then linked to the EODES model to create a 2D
performance field. This field shows regions where targets of interest cannot be
identified (red), might be identified (yellow), and are positively identified
(green), as illustrated in Fig. 10.6. The EODES software was modified so that
the user can run the system performance model while skipping n pixels in
an image. This cuts down on the computation time required to generate
the performance field. In addition, an option was added to allow the user to
specify a smaller box inside the image grid for performance estimates, to allow
for timely support. This has been utilized to support field exercises such as
RIMPAC 08 (Mahoney et al., 2009).
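A toy sketch of this traffic-light performance field on a subsampled grid follows. This is not Metron's EODES; the quality scores, thresholds, and pixel-skipping scheme are all assumed for illustration:

```python
# Toy sketch (not EODES) of generating a traffic-light performance field on
# a subsampled grid: the expensive per-pixel performance model is evaluated
# only every n-th pixel to cut computation time, as described above.
def performance_field(quality_grid, n=2, id_thresh=0.8, maybe_thresh=0.5):
    """Map modeled image-quality scores to 'green'/'yellow'/'red' cells."""
    field = {}
    for i in range(0, len(quality_grid), n):          # skip n pixels per row
        for j in range(0, len(quality_grid[i]), n):   # and per column
            q = quality_grid[i][j]
            if q >= id_thresh:
                field[(i, j)] = "green"    # target positively identified
            elif q >= maybe_thresh:
                field[(i, j)] = "yellow"   # target might be identified
            else:
                field[(i, j)] = "red"      # target cannot be identified
    return field
```

Raising n trades spatial detail for speed, which is the motivation given above for the skipping option and the user-specified sub-box.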

Figure 10.6 2D performance field (red: no target identification; yellow: possible target
identification; green: target identified). [Reproduced from Hou et al. (2010c).]

10.1.4 TODS summary

Glider optical measurements provide critical input for establishing accurate
optical ocean sensing and forecasting systems. These systems also fuse
observations from satellite remote sensing sensors and outputs from ocean
circulation models. Together, a 4D verified data cube can be utilized in real-
time operational MIW decision making. An automated glider optics QC
package is required to handle the large volume of in situ data input. Output
data include the much-needed information for downstream optical products,
including 3D optimization. Combined with OpCast, TODS provides perfor-
mance surface predictions for EOID sensors.

10.2 Future Ocean Sensing Needs

With accelerated advancements in measurement technology, we are experienc-
ing an unprecedented influx of information about the ocean, although our
knowledge of the ocean remains paltry compared to the unknowns to be
discovered. To avoid aimless wandering, or merely satisfying our intellectual
curiosity, a sharp focus is needed to best utilize our research assets and
resources, so that our collective efforts produce the maximum return.

10.2.1 Short- to mid-term outlook

For the short to medium term, the decadal goal will likely remain as our
current focus and pressing need: to understand global climate change and
related issues of energy consumption, regeneration, and exploration. Ocean
research provides the key to our understanding of global biogeochemical
cycles, in terms of the carbon cycling process in the ocean. The ocean can
provide carbon storage in seawater through primary production into living
organisms, through sinking and sedimentation, through redistribution
by ocean circulation, and finally through exchange at the air–sea interface.
A smaller, though certainly not negligible, role of ocean research also related to
climate change is ocean energy exploration and production. This type of research
requires an in-depth understanding of geological processes, ocean weather
forecasting, deep-sea operations related to inspection and diver visibility, and
renewable energy production through biofuel and wind farms. Defense
requirements primarily focus on surveillance of ocean borders and shipping
routes for mine detection and clearance, as well as submarine warfare related to
detection, identification, and neutralization of hostile threats. Long-term
monitoring capabilities, especially passive methods, are preferred for these
sensing requirements. Monitoring oceans for pirating and trafficking of illegal
cargo (substance, human) also falls under such surveillance needs.
Related to these applications, the following gaps in our current
capabilities should be addressed as top priority: (1) finer sensing resolution
on smaller scales, both in spatial and temporal domains; (2) automated
processing of collected data, systematic and interdisciplinary; and (3) future
sensor planning via simulation and synthetic data validation.
Ocean processes are best understood on scales most relevant to our daily
routines. These scales range from meters to kilometers and from days to
weeks, as outlined in Fig. 9.20. Larger scales, such as surface events
and properties observable from space using remote sensing, are becoming
accessible, thanks to today's high-density sensor population and accumulation
over time. While it is possible to integrate across time and space to
increase coverage areas and time spans, it is not possible to subdivide spatial
and temporal resolution into finer scales beyond the existing limit (leaving out
the topic of super-resolution for now). Unfortunately, these finer scales are
important in understanding fundamental processes such as heat exchange at the
air–sea interface; momentum transfer from the wind into the ocean, which affects
vertical mixing; phytoplankton growth; chemical balance; gas exchange; and
so on. Finer-scale processes are also related to large-scale circulation models,
where turbulent dissipation rates of kinetic energy, heat, and salt are key not
only at boundary layers but also throughout the water column. Smaller-scale
events are often associated with higher temporal variability, which in turn
requires higher sampling frequency. All of these demand new sensors and
associated processing algorithms.
With increasing sensor availability, coverage, and sampling frequency,
more data become available. In terms of data storage of measured results, we
have evolved from kilobytes (10³) merely a few decades ago, to megabytes
(10⁶), to gigabytes (10⁹) just a few years ago. We are staring at terabytes
(10¹²) now, with words like petabytes beginning to show up. These are not
only from large satellite imaging sensors covering large areas with multiple
channels on a daily basis, but also from ship-based
collections, moorings, floats, buoys, UUVs, UAVs, HF radar, and various
nodes of observatories. Automated processing of these data is critical if real-
time or near-real-time decision making is to be accomplished, as showcased by
the example of TODS. A higher level of cognitive understanding of data and
adaptive processing is urgently needed when diverse data sources are involved.
This automated process would benefit from the adaptive collection of data
by sensors and sensing platforms. A commonality of routine data processing
should include the establishment of standard baselines, with patterns
identified that involve multisensors and algorithm outputs. These baselines
could be used to identify anomalies and peculiarities during ocean sensing and
monitoring, and to alert the operator or scientist that further analysis
is necessary.
New sensors and systems are needed for these demanding tasks. In
addition to the advanced computer designs already employed today, sensor
output and performance under various environmental conditions should be
modeled and analyzed before an actual sensor and system is built. Multiphysics
numerical simulations should be part of a standard routine to gauge sensor
performance under various scenarios. Smart sensors and sensing technology
with adaptive sampling in spatial and temporal domains should be actively
pursued. This is becoming a reality thanks to the miniaturization of computers
and power requirements. Imagine, if you will, a new class of sensors that is
capable of changing its sensing resolution on the fly, lowering it where/when
the scene content or sensed subject is uniform in property, and increasing
it when/where complex features are present. The same can and should be
applied in the temporal domain as well, not only by preset (which is widely
applied today) on a fixed sampling schedule such as satellite sensors, but
also by sensors in the field that are responsive to immediate environmental
conditions.

Spectral or multifrequency sampling should follow the same reasoning,
when applicable, with various bandwidths and selectable channels. These
have already begun to emerge in some of the sampling strategies of UUVs,
in an effort to reduce data storage and power consumption to increase
their endurance. Smarter sensor designs have also emerged, not only those
involving nanotechnology and microelectronics, but also those involving
exciting developments in compressive sensing. Instead of collecting data at a
uniform spatial resolution and then proceeding with data reduction and
compression to save transmission bandwidth, technologies have been
developed to compressively sense the needed areas at the required
resolutions from a sparsely populated, incoherent sample space (Candes and
Wakin, 2008). This results in a much smaller sensor in terms of pixel
count; therefore, less power consumption is required during collection and
transmission, and less processing is needed down the chain. However,
daunting challenges can be expected, not only at the development stage of
sensor design (including evaluation of performance via simulations), but also
at the calibration stage of the system, when adaptive variations are part of the
built-in properties of the sensor. Simulations will likely prove to be the most
efficient tools at this stage. They will help to identify uncertainties in the
system parameter range (i.e., weaknesses) and help to reduce these
errors accordingly, by shifting calibration assets to focus on high-gradient
areas and periods.
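The recovery idea behind compressive sensing can be shown with a deliberately tiny, deterministic example. This is a toy illustration only, using a single matching-pursuit step for a 1-sparse signal rather than the full L1-minimization machinery of Candes and Wakin (2008); the matrix and function below are constructed for this sketch:

```python
# Toy illustration of compressive sensing: a 1-sparse signal of length 4 is
# observed through only 3 incoherent +/-1 projections, and one matching-
# pursuit step recovers which entry is active and its amplitude.
def recover_1sparse(A, y):
    """Pick the column of A best correlated with y; return (index, amplitude)."""
    m, n = len(A), len(A[0])
    best, best_corr = 0, 0.0
    for j in range(n):
        corr = sum(A[i][j] * y[i] for i in range(m))
        if abs(corr) > abs(best_corr):
            best, best_corr = j, corr
    # least-squares amplitude for the selected column
    energy = sum(A[i][best] ** 2 for i in range(m))
    return best, best_corr / energy

# Measurement matrix with +/-1 entries: 3 measurements, 4 unknowns.
A = [[1, 1, 1, -1],
     [1, -1, 1, 1],
     [1, 1, -1, 1]]
```

Measuring the sparse signal x = [0, 2, 0, 0] gives y = [2, -2, 2], from which the sketch recovers the active index and amplitude despite having fewer measurements than unknowns.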

10.2.2 Long-term outlook

For the longer term, centennial research should focus on providing better
utilization of the ocean, in terms of living environments suitable for humans on
community scales, better production output for renewable energy and food, and
other related issues. This is necessary and practical when considering that,
according to the United Nations (Fig. 10.7), the world population is likely to
increase by 50% by the end of this century, reaching 10 billion people. This
increase will severely strain our natural resources of land and crop production
(United Nations, 2011), which could lead to large-scale population losses due to
natural or man-made disasters.

Figure 10.7 World population could reach 10 billion by the end of this century, according to
the United Nations (2011). (Reproduced with permission from The New York Times.)
Challenging issues will emerge with increased activities related to deep-sea
exploration and to physical and biological processes in coastal areas and on the
continental shelf, in maintaining a self-sustainable macroenvironment to support
underwater human communities. While many issues need to be resolved, the
pressing directional challenges will gravitate toward energy harvesting on a
large scale, food supply, waste management, communication, transportation,
and homeland security and defense.
Renewable energy will likely be the main energy source for underwater
sustainable living. New technology needs to be developed, along with new
sensors to provide continuous output for daily routines. This energy could come
from differentials in velocity, temperature, and density, or a combination of these.
Chemical potentials could also be used, such as those at the ocean and benthic
layer interface. Underwater natural resources, including thermal vents
near the MORs, could be harvested. Methane hydrate (or fire ice) is a solid
clathrate compound in which large amounts of methane are trapped inside
a crystal structure of water that resembles ice. Its distribution and
availability on continental shelves suggest its possible use as an energy
source (Xu and Ruppel, 1999). Near the ocean surface, wind energy could
be harvested, along with waves and currents, especially those associated with
tidal cycles. Biofuels in the form of ethanol could be obtained from seaweed
(Horn, Aasen, and Østgaard, 2000) and various microalgae (Suali and
Sarbatly, 2012), and used cleanly. Multilayer microalgae biotechnology could
provide not only the energy source required for underwater communities,
but also other products such as food, fertilizer, and biomass for animals
(Fig. 10.8).
Food production could certainly come directly from wild catch in the sea.
However, to maintain stable stocks, careful management and active
aquaculture would be needed, especially in shallow shelf areas. Sophisticated
biotechnology chains (such as the sketch in Fig. 10.8) would be required
to increase utilization while reducing waste, which would translate into energy
savings by avoiding the cost of disposal and cleanup. Better understanding and ecological
modeling capabilities are critical in the design, prediction, management, and
sensing of the ocean biosphere for food production and sustained growth.
Waste management in the ocean would certainly involve a high level of
biodegradation, with help from bacteria (including microalgae) and viruses, to
avoid degradation of the environment. Degradation could affect the chemical
balance of the ocean in a subtle but prolonged way. This is parallel to the
issues of ocean acidification (which was not discussed in detail in this book).
High-bandwidth wireless communication would be needed, addressed either
by a leaping advancement in acoustical communication technology or by vast
improvements in optical methods. The former is the primary choice
for underwater communication, due to its longer
range. However, its narrow bandwidth would not be suitable for future
communication requirements, if large underwater human communities are
formed.

Figure 10.8 Utilization of microalgae biotechnology. [Based on Harun et al. (2010).]

Optical channels have much higher bandwidths but suffer from
severe attenuation, as discussed in Chapter 2. The NLOS approach,
mentioned in Chapter 4, shows strong promise, and laboratory testing results
have suggested up to 70 attenuation lengths for imaging needs. The same
20× factor of improvement over traditional visibility ranges can be expected for
optical communication, although the impacts of turbulence will need to be
considered and mitigated (Hou et al., 2012). For a wired approach, it is
less problematic to adopt the fiber optics method widely deployed in land
applications, although safeguarding and maintenance underwater would be
an issue, due to biological and sedimentation activities at the benthic layer.
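The relationship between attenuation lengths and physical range used above follows from the Beer–Lambert law discussed in Chapter 2. A minimal sketch, where the particular value of c is illustrative:

```python
# Illustrative conversion between attenuation lengths N and physical range
# for a beam attenuation coefficient c (range = N / c), together with the
# Beer-Lambert beam transmission from Chapter 2. The example c is assumed.
import math

def physical_range(attenuation_lengths, c):
    """Range (m) spanned by N attenuation lengths at coefficient c (1/m)."""
    return attenuation_lengths / c

def transmission(c, r):
    """Beer-Lambert beam transmission over range r (m)."""
    return math.exp(-c * r)
```

In clear water with c = 0.1 m⁻¹, 70 attenuation lengths would correspond to roughly 700 m, which illustrates why an increase in usable attenuation lengths translates directly into communication or imaging range.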
In-water positioning is important for geolocation and navigation. These
requirements would likely be addressed through a network of georeference
points over acoustical modems, combined with dead reckoning, baseline
systems, and DVL or relay networks.
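A minimal sketch of the dead-reckoning element of such a navigation chain follows; it is illustrative only, since real systems fuse acoustic georeference fixes, baseline ranging, and DVL velocities rather than integrating velocity alone:

```python
# Minimal dead-reckoning sketch: between acoustic georeference fixes, a
# 2D position is advanced from a DVL-measured velocity over elapsed time.
def dead_reckon(pos, velocity, dt):
    """Advance an (x, y) position by a (vx, vy) velocity over dt seconds."""
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)
```

Because the error of pure dead reckoning grows with time, the periodic acoustic fixes mentioned above are what bound the accumulated drift.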
With underwater communities, the need for homeland security and a
defense radius would be expanded to cover wider areas, from the surface to
the bottom of the ocean. Constant sensing and monitoring over long ranges
and long periods of time would be required. These requirements could be
challenging without the long-range optical sensing capabilities (including IR
and microwave) we are accustomed to in the air. Improved optical means
through the optical transmission window in the ocean might be the only
choice. Significant efforts should be made in exploring new sensor
technologies for sensing as well as for delivery of optical contents in terms
of information and high-intensity energy.

10.3 Concluding Remarks

Ocean sensing and monitoring should be given more attention and take
center stage in research efforts. It is our duty and responsibility to educate
and inspire younger generations to realize the importance of the ocean to the
survival of humankind. Instead of looking to the stars for clues of life and
means of sustainability, we should first focus on the available resources right
in our front yard. This does not mean that we deplete the resources for future
generations, but rather we can enrich our means and quality of life, while
maintaining vigilance in monitoring our environment to ensure its health and
sustainability.

References

Candes, E. J. and Wakin, M. B. (2008). An introduction to compressive
sampling. IEEE Signal Proc. Mag. 25, 21–30.
Carnes, M. and Hogan, P. (2010). Validation test report for LAGER 1.0.
Memorandum report. U.S. Naval Research Laboratory. p. 68.
Giddings, T. E., Shirron, J. J., and Tirat-Gefen, A. (2005). EODES-3: an
electro-optic imaging and performance prediction model. Proc. MTS/IEEE
Oceans, 1380–1387.
Harun, R. et al. (2010). Bioprocess engineering of microalgae to produce a
variety of consumer products. Renew. Sust. Energ. Rev. 14, 1037–1047.
Horn, S. J., Aasen, I. M., and Østgaard, K. (2000). Ethanol production from
seaweed extract. J. Ind. Microbiol. Biot. 25, 249–254.
Hou, W. et al. (2010a). Development and testing of local automated glider
editing routine for optical data quality control. Memorandum report. U.S.
Naval Research Laboratory. p. 37.
Hou, W. et al. (2010b). Glider optical measurements and BUFR format for
data QC and storage. Proc. SPIE 7678, 76780F. doi:10.1117/12.851361.
Hou, W. et al. (2010c). Glider optics and TODS components in supporting
MIW applications. Monterey: 9th Intl. Symp. Technol. Mine Problem.
Hou, W. et al. (2012). Optical turbulence on underwater image degradation
in natural environments. Appl. Opt. 51, 2678–2686.
Mahoney, K. L. et al. (2009). RIMPAC 08: Naval Oceanographic Office
glider operations. Proc. SPIE 7316, 731706. doi:10.1117/12.820492.
Suali, E. and Sarbatly, R. (2012). Conversion of microalgae to biofuel.
Renew. Sust. Energ. Rev. 16, 4316–4342.
United Nations (2011). World Population Prospects: The 2011 Revision. New
York: United Nations.
Xu, W. and Ruppel, C. (1999). Predicting the occurrence, distribution, and
evolution of methane gas hydrate in porous marine sediments. J. Geophys.
Res. 104, 5081–5095.
Index

A
absorption coefficient, 23–24, 32
acoustic camera, 111
acoustical Doppler current profiler (ADCP), 205
Advanced Microwave Instrument (AMI), 178
Advanced Microwave Scanning Radiometer for EOS (AMSR-E), 168
Advanced Very High Resolution Radiometer (AVHRR), 194
aerosol, 123
afternoon train (A-train), 131
aggregates, 101–102
Airborne Oceanographic Lidar (AOL), 150
altimeter, 174
anti-Stokes process, 45
antisubmarine warfare (ASW), 44, 107
apparent optical property (AOP), 25, 35
Argo float, 216–217
ASCAT, 178
atmospheric correction, 120
attenuation length, 32
Automated Processing System (APS), 235

B
backscattering, 138
backscattering coefficient, 27
band ratios, 135
bathymetric lidar, 151
battlespace on demand (BonD), 232
beam attenuation coefficient, 30
beam spread function (BSF), 52
beam transmissometer, 205
Beer–Lambert law, 23, 32, 189
bidirectional reflectance distribution function (BRDF), 35
bioluminescence, 40, 43
blackbody radiation, 40, 166, 190
blooms, 13
Bragg scattering, 222
Bragg's law, 154
Brewster's law, 37
brightness temperature, 170, 190
Brillouin scattering, 40, 45, 152
Brillouin shift, 45
Brunt–Väisälä frequency (BVF), 238

C
CALIPSO, 123
carbon fixation, 18
CDOM, 140
Challenger Expedition, 4
CHARTS, 151
chlorophyll, 23, 133, 147, 156
chronometer, 4
circular polarization, 22
cloud shadow detection, 126
Coastal Ocean Dynamics Application Radar (CODAR), 223
coastal upwelling, 12
Coastal Zone Color Scanner (CZCS), 124, 128
Coastal Zone Mapping and Imaging Lidar (CZMIL), 151
colored dissolved organic matter (CDOM), 42, 147, 157
Columbus, 3
conductivity temperature and depth (CTD), 220
conductivity temperature and depth (CTD) sensors, 205
convergence, 12
Coriolis effect, 11
Coriolis force, 11
crosstalk, 140
CZCS, 135
CZCS Scanning Multichannel Microwave Radiometer (SMMR), 170

D
decay transfer function (DTF), 57, 80
degree of polarization (DOP), 22
dielectric constant, 167
diffraction-limited resolution, 183
diffuse attenuation coefficient, 33, 38, 69, 71
diffuse transmission, 120
dinoflagellates, 44
dissolved organic matter (DOM), 14
dissolved oxygen (DO) sensor, 221
diurnal, 133
diurnal heating, 188
diver visibility, 51, 58, 233
divergence, 12
Doppler binning, 181
Doppler velocity log (DVL), 210, 222

E
echo locate, 6
echo sounder, 108
ecosystem, 14
Ekman pump, 12
Ekman suction, 12
Ekman transport, 11
El Niño, 166
electro-optical identification (EOID), 93, 233
Electro-Optical Identification System (EODES), 238
emissivity, 166
empirical approach, 135
Endeavour, 4
EOID, 93
ESFADOF, 156
expeditions, 4

F
Fabry–Pérot interferometer, 154
Fish Lidar, Oceanic, Experimental (FLOE), 159
Floating Instrument Platform (FLIP), 205
flow cytometer, 100, 220
fluorescence, 40, 140, 147, 156
fluorescence imaging laser line scanner (FILLS), 96
fluorometer, 205
Fried parameter, 60

G
geometric calibration, 140
Geoscience Laser Altimeter System (GLAS), 149
Geostationary Ocean Color Imager (GOCI), 126, 133
geostrophic flow, 10, 12
Gershun's law, 39, 45, 89
global conveyer belt, 10
Global Ocean Observation System (GOOS), 206
global positioning system (GPS), 146
Gulf Stream, 188
H
Hawk Eye, 151
high-frequency radar, 178, 218, 222
Holosub, 104
hyperspectral, 117, 125
Hyperspectral Imager for Coastal Ocean (HICO), 126
hypoxia, 9

I
ICESat, 149
imaging radar, 180
imaging sonar, 111
index of refraction (IOR), 27, 107
inertial measurement unit (IMU), 146
inherent optical property (IOP), 24–25, 51, 136, 151
instantaneous field of view (IFOV), 97
interferometer, 155
internal waves, 160–161, 184
International Hydrographic Organization (IHO), 151
International Ocean Color Coordinating Group (IOCCG), 128
irradiance, 34
irradiance reflectance, 35

J
Jablonski diagram, 42
JALBTCX, 151
James Cook, 4
JASON, 218
JMMES, 208
John Harrison, 4

K
Kirchhoff's law, 190
Kolmogorov spectrum, 60

L
Langmuir circulation, 219
Laser Airborne Depth Sounder (LADS), 151
laser line scanner (LLS), 93
Local Automated Glider Editing Routine (LAGER), 233
long exposure (LE), 60
LUCIE, 92

M
Magellan, 4
magnons, 45
MAPPER, 103
Marine Optical Buoy (MOBY), 141
marine snow particles, 101
Martha's Vineyard Coastal Observatory (MVCO), 218
material flux, 103
maximum band ratio (MBR), 136
maximum band ratio algorithm, 135
Maxwell's equations, 21, 38
MCSST, 195
mean square angle (MSA), 57
MERIS, 132
microbial loop, 14
mid-ocean ridge, 14–15
milky seas, 44
mine countermeasure (MCM), 237
mine warfare (MIW), 107
mine-like object (MLO), 96
mixed layer, 17, 203, 209, 238
modern oceanography, 4
MODIS, 125, 130, 133, 135
MODTRAN, 123
modulation transfer function (MTF), 53, 66, 78
Monte Carlo simulation, 76, 78, 136
Mueller matrices, 22
multibeam echo sounding (MBES), 108
N
NASA Scatterometer (NSCAT), 178
National Data Buoy Center (NDBC), 216
NAVO, 198
Navy Coastal Ocean Model (NCOM), 235
nine-channel ac meter (ac-9), 224
NIRDD, 77
Niskin bottle, 221
non line-of-sight (NLOS), 88
nutrient elements, 9
Nyquist sampling, 53

O
ocean color remote sensing, 117
Ocean Color-4 (OC4), 135
ocean currents, 5
oceanography, 1
optical depth, 32
optical length, 32
optical transfer function (OTF), 53
optical turbulence, 58–59
ozone layer, 2

P
package effect, 25, 136
paleoceanography, 14
path radiance, 127
Petzold, 28
phase function, 27
phonons, 45
photosynthesis, 13
phytoplankton, 2, 13
Planck's law, 40, 166
plate tectonics, 14
point spread function (PSF), 52, 80
POSEIDON, 175
primitive ocean, 2

Q
quasi-analytical approach (QAA), 138
QuickSCAT, 178

R
radiance, 33
radiative transfer equation, 39
radiometer, 225
radiometric calibration, 140
Raman scattering, 40, 44, 140, 147, 152
range gating, 91
Rayleigh scattering, 122–123
real aperture radar (RAR), 182
red tides, 14
remote environmental monitoring unit (REMU), 210
remote sensing reflectance, 35, 121, 135
Reynolds SST, 196
rift, 15

S
scattering coefficient, 27, 51
scatterometer, 177
scurvy, 4
sea surface height (SSH), 174, 176
sea surface salinity (SSS), 167
sea surface temperature (SST), 117, 151, 166
SeaBAM, 135
Seasat-A Satellite Scatterometer (SASS), 178
Sea-Viewing Wide-Field-of-View Sensor (SeaWiFS), 124, 129, 133, 135
Secchi disk, 30, 68, 203
sediment flux, 103
semi-analytical approach, 135–136
Ship of Opportunity Program (SOOP), 206
SHOALS, 151
shortwave infrared (SWIR) channels, 125
side-looking radar (SLR), 181
side-scan sonar, 110
simple underwater imaging model (SUIM), 59, 66
single scattering albedo, 32
SIPPER, 104
small-angle approximation, 76
Snell's law, 36
sonar, 6, 107
spatial frequency, 70
spectral calibration, 140
stimulated Brillouin scattering (SBS), 154
Stokes process, 45
Stokes shift, 42
Stokes vector, 22
stripping, 140
sun glint, 125
synthetic aperture radar (SAR), 112, 183, 185
synthetic aperture sonar (SAS), 112

T
Tafkaa, 125
temperature dissipation (TD), 59, 64
thermal IR (TIR), 168
thermocline, 17
time-of-flight principle, 90
time-varying intensity (TVI), 99
TOPEX microwave radiometer (TMR), 176
Topography Experiment (TOPEX), 175
total absorption, 138
total internal reflection, 36
total kinetic energy dissipation rate (TKED), 59, 67
trace elements, 9, 103
turbulence, 11, 27, 38–39, 58
turbules, 65

U
upwelling, 12

V
veiling effect, 89
vicarious calibration, 141
vignetting, 140
visibility, 30, 50–51, 73, 87, 203
Visible Infrared Imaging Radiometer Suite (VIIRS), 133, 140
volume scattering function (VSF), 25, 28

W
water vapor, 173
water-leaving radiance, 121, 124, 135
Weber contrast, 68
weighted grayscale angle (WGSA), 77, 80
westward intensification, 12
WindSat, 178
WVSST, 195

Z
zooplankton, 13
Weilin "Will" Hou has been working in the field of optics and
oceanography for more than 20 years. His diverse scientific
and engineering research covers underwater visibility theory,
imaging systems, turbulence, remote sensing with passive and
active sensors, numerical simulation, data management,
instrumentation, and platforms including unmanned aerial
and underwater vehicles. He received his Ph.D. in 1997 from
the University of South Florida.
Dr. Hou is currently an oceanographer at the U.S. Naval Research
Laboratory, where he heads the Ocean Hydro Optics Sensors and Systems
Section. He actively promotes the Science, Technology, Engineering and Math
(STEM) program, through mentoring at various levels ranging from middle
school to postdoctoral, involving FLL, SEAP, NREIP, NRC, and ASEE.
He developed and chairs SPIE's Ocean Sensing and Monitoring conference
series and teaches an SPIE short course on related topics. He is the editor of five
Proc. SPIE volumes. He has given numerous invited presentations, has
published more than 60 papers, and holds several U.S. patents on automated
underwater image restoration and automated cloud masking for atmospheric
correction in remote sensing. Dr. Hou has also served as a panel expert in
export control and NATO technical groups.
Ocean Sensing and Monitoring: Optics and Other Methods
Weilin Hou

SPIE PRESS | Tutorial Text

This is an introductory text that presents the major optical ocean sensing techniques. It starts with a brief overview of the principal disciplines in ocean research, namely, physical, chemical, biological, and geological oceanography. The basic optical properties of the ocean are presented next, followed by underwater and remote sensing topics, such as diver visibility; active underwater imaging techniques and comparison to sonars; ocean color remote sensing theory and algorithms; lidar techniques in bathymetry, chlorophyll, temperature, and subsurface layer explorations; microwave sensing of surface features including sea surface height, roughness, temperature, sea ice, salinity, and wind; and infrared sensing of the sea surface temperature. Platforms and instrumentation are also among the topics of discussion, from research vessels to unmanned underwater and aerial vehicles, moorings and floats, and observatories. Integrated solutions and future sensing needs are touched on to wrap up the text. A significant portion of the book relies on sketches and illustrations to convey ideas, although rigorous derivations are occasionally used when necessary.

Contents: Oceanography Overview • Basic Optical Properties of the Ocean • Underwater Sensing: Diver Visibility • Active Underwater Imaging • Ocean Color Remote Sensing • Ocean Lidar Remote Sensing • Microwave Remote Sensing of the Ocean • Infrared Remote Sensing of the Ocean • Platforms and Instruments • Integrated Solutions and Future Needs in Ocean Sensing and Monitoring

P.O. Box 10
Bellingham, WA 98227-0010

ISBN: 9780819496317
SPIE Vol. No.: TT98