NASA SP-431
Digital Processing of Remotely Sensed Images
Johannes G. Moik
Goddard Space Flight Center
Scientific and Technical Information Branch
National Aeronautics and Space Administration
Washington, DC
1980
Library of Congress Cataloging in Publication Data

Moik, Johannes G.
  Digital processing of remotely sensed images.
  (NASA SP ; 431)
  Includes index.
  1. Image processing. 2. Remote sensing. I. Title. II. Series: United States. National Aeronautics and Space Administration. NASA SP ; 431.
  TA1632.M64    621.36'7    79-16727
For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402
Preface
Digital image processing has become an important tool of research and applications in many scientific disciplines. Remote sensing with spaceborne and airborne instruments provides images for the study of earth resources and atmospheric phenomena. Medical research and clinical applications use images obtained from X-ray, infrared, and ultrasonic sensors. Electron microscopy yields data concerning the molecular structure of materials. Astronomy uses images taken in the ultraviolet, visible, and infrared radiation ranges. Military reconnaissance relies on the analysis of images. The common element is that multidimensional distributions of physical variables are represented as images from which useful information has to be extracted. The image processing scientist must be versed in the electro-optics of sensing, in transmission and display technology, in system and probability theories, in numerical analysis, in statistics, in pattern recognition, and in the psychophysics of vision. The new discipline of image science has developed along these lines. The designer of image processing systems is required to know computer systems, man-machine communication, computer graphics, data management, and database management systems. A large number of papers and several textbooks published in recent years demonstrate the rapid growth of image science. The excellent book
by Rosenfeld and Kak [1]¹ covers all aspects of processing monochrome images. Andrews [2], Gonzalez and Wintz [3], and Pratt [4] also emphasize digital processing of monochrome images. Andrews and Hunt [5] treat radiometric image restoration, and Duda and Hart [6] deal with scene analysis in the image processing part of their book. Huang and others [7] and Hunt [8] are two of many excellent survey papers. Billingsley [9] and O'Handley and Green [10] summarize the pioneering contributions made at the Jet Propulsion Laboratory. Rosenfeld's review paper [11] contains a large number of references to the image processing literature.
This book was written to assist researchers in the analysis of remotely sensed images. Remote sensing generally obtains images by various sensors, at different resolutions, in a number of spectral bands, and at different times. These images, with often severe geometric distortions, have to be combined and overlaid for analysis. Two separate approaches, one based on signal processing methods and the other on pattern recognition techniques, have been employed for the analysis. This book attempts to combine the two approaches, to give structure to the diversity of published techniques (e.g., refs. [12] to [14]), and to present a unified framework for the digital analysis of remotely sensed images. The book developed from notes written to assist users of the Smips/VICAR system in their image processing applications. This system is a combination of the Small Interactive Image Processing System (Smips) [15, 16] developed at NASA Goddard Space Flight Center and of the Video Image Communication and Retrieval system (VICAR) [17] developed at the Jet Propulsion Laboratory. The author expresses his gratitude to P. A. Bracken, J. P. Gary, M. L. Forman, and T. Lynch of NASA Goddard Space Flight Center, and R. White of Computer Sciences Corp. for their critical review of the manuscript. The assistance of W. C. Shoup and R. K. Rum of Computer Sciences Corp. in software development and preparation of many image processing examples is greatly appreciated.

¹ References mentioned in the Preface are listed at the end of chapter 1.
Contents
Preface
1. Introduction
2. Image Processing Foundations
   2.1 Representation of Remotely Sensed Images
   2.2 Mathematical Preliminaries
      2.2.1 Delta Function and Convolution
      2.2.2 Statistical Characterization of Images
      2.2.3 Unitary Transforms
         2.2.3.1 Fourier Transform
         2.2.3.2 Hankel Transform
         2.2.3.3 Karhunen-Loève Transform
      2.2.4 Description of Linear Systems
      2.2.5 Filtering
   2.3 Image Formation and Recording
   2.4 Degradations
      2.4.1 Geometric Distortion
      2.4.2 Radiometric Point Degradation
      2.4.3 Radiometric Spatial Degradation
      2.4.4 Spectral and Temporal Differences
   2.5 Digitization
      2.5.1 Sampling
      2.5.2 Quantization
   2.6 Operations on Digital Images
      2.6.1 Discrete Image Transforms
         2.6.1.1 Discrete Fourier Transform
         2.6.1.2 Discrete Cosine Transform
         2.6.1.3 Hadamard Transform
         2.6.1.4 Discrete Karhunen-Loève Transform
      2.6.2 Discrete Convolution
      2.6.3 Discrete Crosscorrelation
   2.7 Reconstruction and Display
   2.8 Visual Perception
      2.8.1 Contrast and Contour
      2.8.2 Color
      2.8.3 Texture
3. Image Restoration
   3.1 Introduction
   3.2 Preprocessing
      3.2.1 Illumination Correction
      3.2.2 Atmospheric Correction
      3.2.3 Noise Removal
   3.3 Geometric Transformations
      3.3.1 Coordinate Transformations
      3.3.2 Resampling
   3.4 Radiometric Restoration
      3.4.1 Determination of Imaging System Characteristics
      3.4.2 Inverse Filter
      3.4.3 Optimal Filter
      3.4.4 Other Radiometric Restoration Techniques
4. Image Enhancement
   4.1 Introduction
   4.2 Contrast Enhancement
   4.3 Edge Enhancement
   4.4 Color Enhancement
      4.4.1 Pseudocolor
      4.4.2 False Color
   4.5 Multiimage Enhancement
      4.5.1 Ratioing
      4.5.2 Differencing
      4.5.3 Transformation to Principal Components
5. Image Registration
   5.1 Introduction
   5.2 Matching by Crosscorrelation
   5.3 Registration Errors
      5.3.1 Geometric Distortions
      5.3.2 Systematic Intensity Errors
      5.3.3 Preprocessing for Image Registration
   5.4 Statistical Correlation
   5.5 Computation of the Correlation Function
6. Image Overlaying and Mosaicking
   6.1 Introduction
   6.2 Techniques for Generation of Overlays and Mosaics
   6.3 Map Projections
      6.3.1 Classes of Map Projections
      6.3.2 Coordinate Systems
      6.3.3 Perspective Projections
      6.3.4 Mercator Projection
      6.3.5 Lambert Projection
      6.3.6 Universal Transverse Mercator Projection
   6.4 Map Projection of Images
   6.5 Creating Digital Image Overlays
   6.6 Creating Digital Image Mosaics
7. Image Analysis
   7.1 Introduction
   7.2 Image Segmentation
      7.2.1 Thresholding
      7.2.2 Edge Detection
      7.2.3 Texture Analysis
   7.3 Image Description
   7.4 Image Analysis Applications
      7.4.1 Wind Field Determination
      7.4.2 Land-Use Mapping
      7.4.3 Change Detection
8. Image Classification
   8.1 Introduction
   8.2 Feature Selection
      8.2.1 Orthogonal Transforms
      8.2.2 Evaluation of Given Features
   8.3 Supervised Classification
      8.3.1 Statistical Classification
      8.3.2 Geometric Classification
   8.4 Unsupervised Classification
      8.4.1 Statistical Unsupervised Classification
      8.4.2 Clustering
   8.5 Classifier Evaluation
   8.6 Classification Examples
9. Image Data Compression
   9.1 Introduction
   9.2 Information Content, Redundancy, and Compression Ratio
   9.3 Statistical Image Characteristics
   9.4 Compression Techniques
      9.4.1 Transform Compression
      9.4.2 Predictive Compression
      9.4.3 Hybrid Compression
   9.5 Evaluation of Compression Techniques
      9.5.1 Mean Square Error
      9.5.2 Signal-to-Noise Ratio (SNR)
      9.5.3 Subjective Image Quality
Symbols
Glossary of Image Processing Terms
Index
1. Introduction
Image processing is concerned with the extraction of information from natural images. Extractive processing is based on the proposition that the information of concern to the observer may be characterized in terms of the properties of perceived objects or patterns. Thus, information extraction from images involves the detection and recognition of patterns. Most information extraction tasks require much human interpretation and interaction because of the complexity of the decisions involved and the lack of precise algorithms to direct automatic processing. The human visual system has an extraordinary pattern recognition capability. In spite of this capability, however, the eye is not always capable of extracting all the information from an image. Radiometric degradations, geometric distortions, and noise introduced during recording, transmission, and display of images may severely limit recognition. The purpose of image processing is to remove these distortions and thus aid man in extracting information from images. Image processing operations can be implemented by digital, optical,
and photographic methods. The accuracy and flexibility of digital computers to carry out linear and nonlinear operations and iterative processes account for the growth of digital image processing. Digital processing requires digitization of the measured analog signals. After processing, the digital data must be reconstructed to continuous images for display. Image processing must always use some kind of a priori knowledge about the characteristics of objects to be observed and of the imaging system. Otherwise there would be no basis for judging whether a picture is a good representation of an object. Thus, some form of a priori knowledge must be applied to a degraded image to extract information from it. Processing a degraded image may be different depending on whether the source of the image is known. Hence, one kind of a priori knowledge is concerned with intelligence information. Another kind of a priori knowledge of great importance to image processing is concerned with the physical process of forming an image. This area includes knowledge of object characteristics and of properties of sensor, recording, transmission, digitization, and display systems. For example, the correction of radiometric and geometric distortions that occur in imaging with a vidicon camera requires knowledge of the characteristics of the vidicon tube. All this information is used to reduce the number of variables involved in processing.
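The digitization and reconstruction steps mentioned above can be sketched numerically. The test pattern, the 64 x 64 sampling grid, and the 8-bit gray scale in the following sketch are illustrative assumptions made for this example, not specifications from the text:

```python
import numpy as np

# Illustrative continuous image: radiance as a function of position,
# with values in [0, 1] (an assumed test pattern).
def radiance(x, y):
    return 0.5 + 0.5 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

# Sampling: evaluate the continuous function on a discrete grid.
n = 64
xs = np.linspace(0.0, 1.0, n)
g = radiance(xs[:, None], xs[None, :])

# Quantization: map the samples to 256 discrete gray levels (8 bits).
levels = 256
g_quantized = np.round(g * (levels - 1)).astype(np.uint8)

# Reconstruction for display maps gray levels back to intensities;
# the error introduced by quantization is at most half a gray level.
g_restored = g_quantized / (levels - 1)
assert np.max(np.abs(g_restored - g)) <= 0.5 / (levels - 1) + 1e-12
```

The final assertion illustrates the quantization noise bound discussed in the text: round-off during quantization perturbs each sample by no more than half of one gray level.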
The various steps involved in imaging and image processing may be idealized as shown in figure 1.1. This block diagram also introduces the notation used in this publication. The radiation emitted and reflected by an object is represented by a continuous function f(x, y). f(x, y) is
attenuated by the intervening atmosphere to the apparent object radiant energy f*(x, y) in the sensor's field of view. In the image formation process, the apparent object radiant energy is transformed by a linear optical system into image radiant energy g_i(x, y). The image radiant energy is sensed and recorded with noise by a sensor s to form the recorded image g. The recorded image is digitized to the image matrix (g(i, j)) by the operator q, which represents sampling and quantization. Digitization introduces a spatial degradation due to sampling and adds quantization noise. After transmission and processing, which use a priori information, the digital images are reconstructed to continuous pictures and maps and are displayed on cathode ray tubes (CRTs) or photographic transparencies and prints, or are printed with ink on paper. Reconstruction introduces another spatial degradation.

This book discusses the techniques employed in the analysis of remote sensing images. Remote sensing is the acquisition of physical data of objects or substances without contact. In Earth-related remote sensing, the objects may be part of the Earth's surface or atmosphere. Because the chemical and physical properties of substances vary, they reflect or emit unique spectra of electromagnetic energy, dependent on time and spatial location. Remote sensing derives information by observing and analyzing these variations in the radiation characteristics of substances. Spatial, spectral, and polarization differences can be observed. Spatial and temporal variations of electromagnetic radiation are the source of differences of brightness between elements in a scene and permit recognition of objects through contrast, texture, and shape. Spectral variations, or variations of radiant energy with wavelength, produce color (false color in the nonvisible region of the spectrum). This characteristic permits recognition of objects by measuring the radiant energy in different wavelengths. Temporal variations of radiation from objects, caused by seasonal or environmental changes or man-made effects, provide additional information for recognition. Polarization differences between the radiance reflected or emitted from objects and their background may also be used for recognition.
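The imaging and digitization chain described earlier, from object radiance f(x, y) through the apparent radiance, the recorded image g, and the digitized matrix (g(i, j)), can be illustrated with a toy forward model. The multiplicative atmospheric transmittance, the 3 x 3 averaging kernel standing in for the optical system, and the Gaussian sensor noise are all assumptions made for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Object radiance f(x, y), here simply random values on a 32 x 32 grid.
n = 32
f = rng.random((n, n))

# Atmospheric attenuation to apparent radiance f*(x, y):
# modeled as a single multiplicative transmittance (assumption).
tau = 0.8
f_star = tau * f

# Image formation by a linear optical system: a 3 x 3 local average
# stands in for the system point-spread function (assumption).
kernel = np.ones((3, 3)) / 9.0
pad = np.pad(f_star, 1, mode="edge")
g_image = sum(kernel[a, b] * pad[a:a + n, b:b + n]
              for a in range(3) for b in range(3))

# Sensing and recording with additive noise (assumed Gaussian).
g = g_image + rng.normal(0.0, 0.01, size=(n, n))

# Digitization q: quantize the recorded image to 256 gray levels.
g_digital = np.clip(np.round(g * 255), 0, 255).astype(np.uint8)
```

Each stage of the sketch degrades the signal in the way the text describes: attenuation reduces radiance, the optical blur removes spatial detail, the sensor adds noise, and quantization adds bounded round-off error.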
Remote sensing imaging instruments onboard aircraft or spacecraft detect and measure electromagnetic energy at different wavelengths and convert the resulting signal into a form perceivable by the human visual system. The imaging instruments range from conventional photographic cameras, through television systems and optical-mechanical scanners, to fully electronic scanning systems with no moving parts. The choice of instruments is influenced by their ability to detect energy at desired wavelengths. An introduction to remote sensing instrumentation and an overview of digital image processing activities in remote sensing are given in [18] and [19], respectively.

[FIGURE 1.1. Idealized steps of imaging and image processing; detailed labels not recoverable from the scan.]

Because of its global and repeated measurement capability, remote sensing is able to make important contributions to the solution of Earth problems in weather, climate, and Earth resources research. Earth resources applications include detection of changes in previously mapped areas, water resources monitoring and management, geologic mapping and exploration of geologic resources, land-use mapping, detection of crop diseases, and agricultural yield forecasting. Weather applications include determination of wind fields, humidity profiles, and cloud heights, and general severe-storm analysis.

A block diagram of the basic parts of a remote sensing system is shown in figure 1.2. The basic parts are the scene to be imaged, the airborne or spaceborne sensing system, the transmission system, and the ground receiving station and analysis system. The transmission and analysis of remote sensing data may be performed in analog or digital form, and data analysis may be performed before or after digitization. For example, Return Beam Vidicon (RBV) images are transmitted in analog form and are digitized before processing, whereas Landsat Multispectral Scanner (MSS) images are digitized at the sensor output and transmitted in digital form. The basic parameters of the imaging system are the number of spectral bands and the spatial, spectral, radiometric, and temporal resolution. The resolution parameters required to discriminate the various objects in a scene and the size of the scene to be imaged determine the data rate. The amount of data may be reduced before transmission by coding techniques or by preprocessing that selects only useful data for transmission.

FIGURE 1.2. Block diagram of remote sensing system. [Detailed box labels not fully recoverable from the scan.]

In contrast to images taken at the ground, remote sensing images also contain the effects of the atmosphere. The atmosphere scatters, absorbs, and emits radiation. Thus, the information transmitted from the scene
to the sensor is attenuated and distorted. The radiant energy scattered and emitted diffusely by the atmosphere into the field of view of the sensor adds noise to the signal. The atmosphere is transparent enough for remote sensing only in small bands of the electromagnetic spectrum, which are called windows. The principal windows lie in the visible, infrared, and microwave regions of the spectrum.

An organization using remote sensing images is confronted not only with image analysis but also with the problem of how to incorporate the acquired and analyzed data into a database management system. Without the capabilities of storing and retrieving data by attribute values and relationships between data attributes and integrating the data with ground information, the effective use of remote sensing data will be limited [20, 21].

Successful information extraction from remotely sensed images requires knowledge of the data characteristics, i.e., an understanding of the physics of signal propagation through the atmosphere, of the image formation process, and of the error sources. Digital image processing techniques can be divided into two basically different groups. The first group includes quantitative restoration of images to correct for degradations and noise, registration for overlaying and mosaicking, and subjective enhancement of image features for human interpretation. The required operators are mappings from images into images. The second group is concerned with the extraction of information from the images. This area of image analysis includes object detection, segmentation of images into characteristically different regions, and determination of structural relationships among the regions. Operators of this group are mappings from images into descriptions of images. These operators convert images into maps, numerical and graphical representations, or linguistic structures. The major branches of digital image processing are shown in figure 1.3.
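The two groups of operations distinguished above can be illustrated with a minimal sketch: a linear contrast stretch maps an image into an image, whereas threshold-based segmentation maps an image into a description. The 3 x 3 test image and the threshold value below are arbitrary assumptions for illustration:

```python
import numpy as np

# A tiny test image (arbitrary gray values chosen for illustration).
image = np.array([[ 10.0,  20.0,  30.0],
                  [ 40.0, 200.0, 210.0],
                  [ 50.0, 220.0, 230.0]])

# Group 1 -- mapping from images into images: a linear contrast
# stretch rescales the gray values to the full [0, 255] range.
lo, hi = image.min(), image.max()
stretched = (image - lo) / (hi - lo) * 255.0

# Group 2 -- mapping from images into descriptions: thresholding
# segments the image into two regions and yields a symbolic summary
# rather than another image (threshold value assumed).
threshold = 128.0
mask = image > threshold
description = {"bright_pixels": int(mask.sum()),
               "bright_fraction": float(mask.mean())}
```

The first result is again an image and could be displayed directly; the second is a description (counts and fractions of a segmented region), the kind of output produced by the image analysis operators of the second group.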
FIGURE 1.3. Block diagram of digital image processing steps. [Detailed box labels not fully recoverable from the scan.]

Because most distortions are nonlinear, digital processing, with its precision and its flexibility to implement nonlinear operations, is currently the only feasible technique to solve restoration and registration problems. Digital image processing requires a system that provides a set of functions for image processing, data management, display, and communication between analyst and system. The functional requirements of such a system can be determined from an understanding of image formation, of recording and display processes, and of image analysis applications, and by considering the techniques and strategies applied by a human analyst in the process of information extraction. An effective and convenient language is needed for the analyst to express his processing steps in terms of these functions.

Image analysis with a digital computer in a batch mode requires specification of all processing steps before starting the computer run. Results can only be displayed and evaluated after completion of the run. Therefore, many processing runs may be required to establish the correct analysis procedure. This limitation stretches the period for performing a careful analysis over a long time. The inconveniences and delays often prevent an analyst from exploring all interpretation techniques. Image analysis is often a heuristic process, and display of the results of intermediate processing steps makes the analysis more comprehensible. Therefore, the analyst should be included in the image processing system. He is the specialist who is capable of integrating the information available in the image with his experience from other data. The combination of man and computer in an interactive system leads to the solution of image analysis problems that neither could solve efficiently alone. The analyst uses an interactive terminal to direct the analysis by means of function keys, alphanumeric keyboard, and light pen or joystick. Results are immediately displayed as images or in graphical form on a display screen and, combined with experience, can be used for selecting the next operation. Thus, the short time between specification of a problem and the return of intermediate results permits a more intelligent choice and sequencing of operations applied to the images. Two interactive image processing systems used for remote sensing data analysis are described in references [16] and [22].

The efficient communication between man and computer requires a language adapted to the problems of the analyst, who is usually not a programmer. Only the operation of the interactive analysis terminal, and not of the computer system, has to be learned. This simplification reduces errors and permits concentration on the content of the dialogue rather than on its form. Operation of the system in a batch mode should, however, be possible. An interactively determined sequence of processing steps can often be applied to images from particular problems without human interaction.
This book is organized into nine chapters. Chapter 2 introduces the way images are described and reviews mathematical concepts and tools from the fields of linear system theory, statistics, theory of random fields, and unitary transforms that provide the basis for the digital processing techniques described in the later chapters. This review is in no sense complete; the references should be consulted for completeness and proofs. Chapter 3 treats image restoration, the correction of radiometric and geometric distortions. Chapter 4 deals with subjective improvement of image quality and enhancement of the appearance of specific objects in the scene. Chapter 5 discusses image registration, the matching of images of the same scene. Chapter 6 is devoted to overlaying and mosaicking of images and the conversion of images into maplike form. Chapters 7 and 8 discuss image segmentation, descriptions of segmented images, and image classification using pattern recognition techniques. Chapter 9 provides an overview of image data compression techniques. The pictures used as examples were processed with the digital Smips/VICAR system [15] at the Goddard Space Flight Center.
REFERENCES

[1] Rosenfeld, A.; and Kak, A. C.: Digital Picture Processing. Academic Press, New York, 1976.
[2] Andrews, H. C.: Computer Techniques in Image Processing. Academic Press, New York, 1970.
[3] Gonzalez, R. C.; and Wintz, P.: Digital Image Processing. Addison-Wesley, Reading, Mass., 1977.
[4] Pratt, W. K.: Digital Image Processing. Wiley-Interscience, New York and Toronto, 1978.
[5] Andrews, H. C.; and Hunt, B. R.: Digital Image Restoration. Prentice-Hall, Englewood Cliffs, N.J., 1977.
[6] Duda, R. O.; and Hart, P. E.: Pattern Classification and Scene Analysis. Wiley-Interscience, New York and London, 1973.
[7] Huang, T. S.; Schreiber, W. F.; and Tretiak, O. J.: Image Processing. Proc. IEEE, vol. 59, 1971, pp. 1586-1609.
[8] Hunt, B. R.: Digital Image Processing. Proc. IEEE, vol. 63, 1975, pp. 693-708.
[9] Billingsley, F. C.: Review of Digital Image Processing. Eurocomp 75, London, Sept. 1975.
[10] O'Handley, D. A.; and Green, W. B.: Recent Developments in Digital Image Processing at the Image Processing Laboratory. Proc. IEEE, vol. 60, 1972, pp. 821-828.
[11] Rosenfeld, A.: Picture Processing: 1973. Comp. Graph. Image Proc., vol. 3, 1974, pp. 178-194.
[12] Proceedings of International Symposia on Remote Sensing of the Environment. University of Michigan, Ann Arbor, Mich.
[13] Proceedings of Symposia on Machine Processing of Remotely Sensed Data. Purdue University, Lafayette, Ind.
[14] Proceedings of the NASA Earth Resources Survey Symposium. NASA TM X-58168, Houston, Tex., 1975.
[15] Moik, J. G.: Small Interactive Image Processing System (Smips): System Description. NASA Goddard Space Flight Center X-650-73-286, 1973.
[16] Moik, J. G.: An Interactive System for Digital Image Analysis. Habilitationsschrift (in German), Technical University, Graz, Austria, 1974.
[17] Image Processing System VICAR, Guide to System Use. JPL Report 324-IPG/1067, 1968.
[18] Lintz, J.; and Simonett, D. S.: Remote Sensing of Environment. Addison-Wesley, Reading, Mass., 1976.
[19] Nagy, G.: Digital Image Processing Activities in Remote Sensing for Earth Resources. Proc. IEEE, vol. 60, 1972, pp. 1177-1200.
[20] Moik, J. G.: A Data Base Management System for Remote Sensing Data. International Conference on Computer Mapping Software and Data Bases, Harvard University, Cambridge, Mass., July 1978.
[21] Bryant, N. A.; and Zobrist, A. L.: IBIS: A Geographic Information System Based on Digital Image Processing and Image Raster Datatype. Proceedings of a Symposium on Machine Processing of Remotely Sensed Data, Purdue University, Lafayette, Ind., 1976, pp. 1A-1 to 1A-7.
[22] Bracken, P. A.; Dalton, J. T.; Quann, J. J.; and Billingsley, J. B.: AOIPS: An Interactive Image Processing System. National Computer Conference Proceedings, AFIPS Press, 1978, pp. 159-171.

¹ These references include references mentioned in the preface.
2. Image Processing Foundations

2.1 Representation of Remotely Sensed Images
Remote sensing derives information by observing and analyzing the spatial, spectral, temporal, and polarization variations of radiation emitted and reflected by the surface of the Earth or by the atmosphere in the optical region of the electromagnetic spectrum. This region extends from X-rays to microwaves and encompasses the ultraviolet, visible, and infrared. Within this broad spectral region, extending from 0.2 to 1,000 μm, the optical techniques of refraction and reflection can be used to focus and redirect radiation. Remote sensing uses photographic and nonphotographic sensors as imaging devices. Nonphotographic devices are television systems and optical-mechanical scanners. An airborne or spaceborne sensor observes radiation emanating from a scene and modified by the intervening atmosphere. The spectral radiance L of an object at location x, y and at time t has two contributing factors, an emission and a reflectance component [1]:

    L(x, y, λ, t, p) = [1 − r(x, y, λ, t, p)] M(λ) + r(x, y, λ, t, p) i(x, y, λ, t)    (2.1)

The function r(x, y, λ, t, p) is the spectral reflectance of the object; i(x, y, λ, t) is the spectral irradiance (incident illumination) on the object; and M(λ) is the spectral radiant emittance of a blackbody. The parameter p indicates the polarization, and λ is the wavelength. L is in general dependent on the solar zenith angle and on the viewing angle. In the visible and near-infrared spectrum, where self-emission is negligible and reflected solar energy predominates, the radiance of an object consists of a reflectance and an illumination component. The illumination component is determined by the lighting of the scene, and the reflectance component characterizes the objects or materials in the scene [2]. In the mid- and far-infrared regions, the emission from the surface is dominant. Both reflection and emission must be considered in some microwave experiments [3].

In this text the interest is in analyzing images that are two-dimensional
spatial distributions. An image can be represented by a real function of two spatial variables x and y, representing the value of a physical variable at the spatial location (x, y). Therefore, the spatial coordinates x and y of the radiance L are chosen as basic variables, with the spectral, temporal, and polarization variables as parameters. Let f_j(x, y) be the spatial distribution L(x, y, Δλ_i, t_m, p_n) for a given spectral band Δλ_i, i = 1, ..., P_1; a given time t_m, m = 1, ..., P_2; and a given polarization p_n, n = 1, ..., P_3. The P = P_1 + P_2 + P_3 real functions f_j(x, y), j = 1, ..., P, are combined into the vector function

    f(x, y) = (f_1(x, y), f_2(x, y), ..., f_P(x, y))^T    (2.2)

which will be called a multiimage. The measurements in several spectral bands for a given time, ignoring polarization, are called multispectral images. Measurements at different times in a given spectral band are
called multitemporal images. Image functions are defined over a rectangular region R = {(x, y): 0 ≤ x ≤ x_m, 0 ≤ y ≤ y_m}. Because energy distributions are nonnegative and bounded, every image function is bounded and nonnegative; i.e.,

0 ≤ f_i(x, y) ≤ B_i        i = 1, . . . , P        (2.3)

for all x, y in R. The orientation of the coordinate system used in this text is shown in figure 2.1, where the x-axis is in the direction of increasing line numbers. The value of the component image f_i at a spatial location (x₀, y₀) is called the gray value of the image at that point. A P-dimensional
FIGURE 2.1. Image domain.
vector f(x₀, y₀), consisting of the values of f for a given location (x₀, y₀), is called a multidimensional picture element, or pixel. The range of pixel values is called the gray scale, where the lowest value is considered black, and the highest value is considered white. All intermediate values represent shades of gray.
The quality of the information extracted from remotely sensed images is strongly influenced by the spatial, spectral, radiometric, and temporal resolution. Spatial resolution is the resolving power of an instrument needed for the discrimination of observed features. Spectral resolution encompasses the width of the regions in the electromagnetic spectrum that are sensed and the number of channels used. Radiometric resolution can be defined as the sensitivity of the sensor to differences in signal strength. Thus, radiometric resolution defines the number of discernible signal levels. Temporal resolution is defined as the length of the time intervals between measurements. Adequate temporal resolution is important for the identification of dynamically changing processes, such as crop growth, land use, hydrological events, and atmospheric flow.

2.2 Mathematical Preliminaries

Images may be considered as deterministic functions or as representatives of random fields. The mathematical tools for image processing are borrowed from the fields of linear system theory, unitary transforms, numerical analysis, and the theory of random fields. This section reviews the required mathematical methods. A complete discussion and proofs can be found in the references. Image processing involves mathematical operations that require certain analytical behavior of L(x, y). It is assumed that the functions representing images are analytically well behaved, i.e., that they are absolutely integrable and have Fourier transforms. The existence of Fourier transforms for functions that are not properly behaved (e.g., constant, impulse, and periodic functions) is guaranteed by assuming the corresponding generalized functions [4, 5]. Existence problems are only of concern for continuous functions. The Fourier transform always exists in the discrete
case.
2.2.1 The Delta Function and Convolution

The concept of an impulse or a point source of light is very useful for the description and analysis of linear imaging systems. An ideal impulse in the x, y plane is represented by the Dirac delta function δ(x, y), which is defined as [4, 5]:
∫∫_{−∞}^{∞} δ(x, y) dx dy = 1        (2.4)

with δ(x, y) = 0 for all (x, y) other than (0, 0), where it is infinite. Useful properties of the delta function for image processing are
∫∫_{−∞}^{∞} e^{2πi(ux+vy)} du dv = δ(x, y)        (2.5)

where i = √−1, and
∫∫_{−∞}^{∞} f(ξ, η) δ(x−ξ, y−η) dξ dη = f(x, y)        (2.6)

This equation is called the sifting property of the delta function. The quantities ξ and η are spatial integration variables. The convolution of two functions f and h is defined as
g(x, y) = ∫∫_{−∞}^{∞} f(ξ, η) h(x−ξ, y−η) dξ dη = f(x, y) * h(x, y)        (2.7)

where * is the convolution operator. A major application of convolution in image processing is radiometric restoration and enhancement. The autocorrelation of a function f(x, y) is defined as
R_ff(ξ, η) = ∫∫_{−∞}^{∞} f(x+ξ, y+η) f(x, y) dx dy        (2.8)

and the crosscorrelation of two functions f and g is

R_fg(ξ, η) = ∫∫_{−∞}^{∞} f(x+ξ, y+η) g(x, y) dx dy        (2.9)
A principal application of crosscorrelation in image processing is image registration, where the problem is to find the closest match between two image areas.
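The search for the closest match can be sketched as follows (Python with NumPy; the image and template are synthetic). A normalized form of equation (2.9) is used here, so that the correlation peak marks the matching position regardless of local brightness:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))
template = image[20:30, 35:45]    # 10 x 10 patch; true position is (20, 35)
t0 = template - template.mean()

best_score, best_pos = -np.inf, None
for r in range(55):               # 64 - 10 + 1 candidate rows
    for c in range(55):
        w = image[r:r + 10, c:c + 10]
        w0 = w - w.mean()
        # normalized crosscorrelation; equals 1.0 only at a perfect match
        score = np.sum(w0 * t0) / (np.linalg.norm(w0) * np.linalg.norm(t0) + 1e-12)
        if score > best_score:
            best_score, best_pos = score, (r, c)
```

The exhaustive double loop is only illustrative; in practice the search is carried out efficiently in the frequency domain using the correlation property of the Fourier transform discussed in section 2.2.3.1.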
2.2.2 Statistical Characterization of Images

For some digital image processing applications an image must be regarded as a sample of a random field rather than a deterministic function. Random fields represent classes of images such as multispectral or multitemporal images of the same or of different scenes. The statistical nature is due to noise and random signal variations in recorded images. The design of some image processing algorithms (for example, classification and image compression techniques) is based on the statistical description of the underlying class of images. This section provides a brief summary of definitions for random fields that are required later in the text. More information is contained in [6] and [7].
Questions concerning information content and redundancy in images may only be answered on the basis of probability distributions and correlation functions. The entropy H of a random variable with probability distribution p_k, k = 1, . . . , n, is defined as

H = − Σ_{k=1}^{n} p_k log₂ (p_k)        (2.10)

The entropy of a probability distribution may be used as a measure of the information content of a symbol (image) chosen from this distribution. The choice of the logarithmic base corresponds to the choice of a unit of information. For the logarithm to the base 2, the unit of information is the bit.
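For a digital image, equation (2.10) can be evaluated directly from the gray-level histogram. A short sketch (Python with NumPy; the histograms are arbitrary illustrations) computes the entropy in bits:

```python
import numpy as np

def entropy_bits(counts):
    """Entropy of a gray-level histogram (eq. 2.10), in bits."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()                # convert counts to probabilities
    p = p[p > 0]                   # terms with p = 0 contribute nothing
    return -np.sum(p * np.log2(p))

# A uniform histogram over 256 gray levels attains the maximum of 8 bits;
# an image that uses a single gray value carries no information.
uniform = entropy_bits(np.ones(256))
degenerate = entropy_bits([100, 0, 0, 0])
```

The two extreme cases bracket the entropy of any real image histogram: 0 bits for a constant image, log₂ 256 = 8 bits for a fully uniform gray-level distribution.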
A two-dimensional random field or random process will be denoted f(x, y, ω_i), where ω_i is an event and is an element of the set of all events Ω = {ω₁, ω₂, . . .} representing all selections of an image from the given class of images. Each event ω_i is assigned a probability p_i. For a given value of (x, y), f(x, y, ω_i) is a random variable, but for a given event ω_i, f(x, y, ω_i) is a function over the x, y plane. Thus, an image f(x, y) can always be considered as a sample of a random field for a given event. Possible events are the selection of a spectral band, or of a certain time for imaging. The definition of a multiimage in equation (2.2) is an example of a random field. In figure 2.2 the events are selected spectral bands. For a given point (x₀, y₀), f(x₀, y₀, ω_i) is a random variable defining a multidimensional picture element (also referred to as a feature vector). For a fixed ω_i, f(x, y, ω_i) is a two-dimensional image f(x, y). A random field representing an image will hence be denoted by f(x, y). A random field is completely described by its joint probability density p_f(z₁, . . . , z_n; x₁, y₁, . . . , x_n, y_n). In general, higher-order joint probabilities of images are not known. The first-order probability density p_f(z; x, y) can sometimes be modeled on the basis of measurements or properties of the image-generating process. If the statistical properties are independent of the spatial location (x, y), the random field is called homogeneous or stationary [6]. In this case the mean value of f is defined as
μ_f = E{f(x, y)} = ∫_{−∞}^{∞} z p_f(z) dz        (2.11)

and the autocorrelation function of f is

R_ff(ξ, η) = E[f(x+ξ, y+η) f(x, y)]        (2.12)

where E is the expectation operator, ξ = x₁ − x₂, and η = y₁ − y₂. Thus, the mean of a homogeneous random field is constant, and the autocorrelation is dependent on the differences of spatial coordinates; i.e.,
FIGURE 2.2. Multispectral image (example of a random field).
the autocorrelation function is position independent. A homogeneous random field representing a class of images may often be assumed to have an autocorrelation function of the form

R_ff(ξ, η) = R_ff(0, 0) e^{−α|ξ| − β|η|}        (2.13)

where α and β are positive constants and

R_ff(0, 0) = E{f(x, y)²}        (2.14)

The crosscorrelation function of two real random fields f and g that are jointly homogeneous is given by

R_fg(ξ, η) = E{f(x+ξ, y+η) g(x, y)}        (2.15)

The covariance function of f and g is defined by

C_fg(ξ, η) = R_fg(ξ, η) − μ_f μ_g        (2.16)

Two random fields f and g are called uncorrelated if

C_fg(ξ, η) = 0        (2.17)
The Fourier transform (see sec. 2.2.3.1) of the autocorrelation function of a homogeneous random field f is the spectral density

S_ff(u, v) = ∫∫_{−∞}^{∞} R_ff(ξ, η) e^{−2πi(uξ+vη)} dξ dη        (2.18)

The convolution operation, equation (2.7), is also valid for random fields:

g(x, y) = ∫∫_{−∞}^{∞} f(ξ, η) h(x−ξ, y−η) dξ dη        (2.7)

Let S_ff(u, v) and S_gg(u, v) be the spectral densities of the homogeneous random fields f(x, y) and g(x, y), respectively. If f has zero mean,

S_gg(u, v) = S_ff(u, v) |H(u, v)|²        (2.19)
where H(u, v) is the Fourier transform of h(x, y). Expressions (2.11) and (2.12) for the mean and the autocorrelation, respectively, of a random field are ensemble averages, each representing a family of equations. In practice, ensemble averages are seldom possible, and the mean and the autocorrelation of a random field are computed as spatial averages. If the autocorrelation functions computed with equation (2.8) for each member of a random field are the same, and if this value is equal to the ensemble average R_ff(ξ, η), then the autocorrelation function can be obtained from a single image in the field f with equation (2.8). Homogeneous random fields for which ensemble and spatial averages are the same are called ergodic. A general discussion of these problems is given in [6].

2.2.3 Unitary Transforms

A unitary transform is a linear transform that expands a function f defined over the region R in the x, y plane into the sum

f(x, y) = Σ_{μ=0}^{∞} Σ_{ν=0}^{∞} F_μν φ_μν(x, y)        (2.20a)

The transform coefficients F_μν are given by

F_μν = ∫∫_R f(x, y) φ*_μν(x, y) dx dy        (2.20b)

where φ_μν may be either real- or complex-valued, and φ*_μν is the complex conjugate of φ_μν. The expansion (2.20a) is valid if f is square integrable and φ_μν(x, y) is a complete set of orthonormal functions defined over the same region R of the x, y plane as f(x, y) [8]. A set of functions {φ_μν(x, y)} is called orthonormal if
∫∫_R φ_μν(x, y) φ*_ρσ(x, y) dx dy = 0        (2.21)

for μ ≠ ρ, ν ≠ σ, and

∫∫_R |φ_μν(x, y)|² dx dy = 1        (2.22)

It is said to be complete if the mean square error in approximating f(x, y) by (2.20a) approaches zero as the number of terms in (2.20a) approaches infinity. It can be shown that under the stated conditions

∫∫_R |f(x, y)|² dx dy = Σ_{μ=0}^{∞} Σ_{ν=0}^{∞} |F_μν|²        (2.23)

This result is known as Parseval's theorem and is a statement of conservation of energy. If only m × n coefficients given by equation (2.20b) are retained, then the coefficients minimize the mean square error
ε_mn = ∫∫_R | f(x, y) − Σ_{μ=0}^{m−1} Σ_{ν=0}^{n−1} F_μν φ_μν(x, y) |² dx dy        (2.24)

Complete sets of orthonormal functions are the complex trigonometric functions and the zero-order Bessel functions of the first kind, which define the Fourier and Hankel transforms, respectively.
Unitary transforms can also be applied to a random field representing a class of images. Let f(x, y) denote a real homogeneous random field with autocorrelation function R_ff(ξ, η). The expansion (2.20) for the random field f(x, y) can be expressed as

f(x, y) = Σ_{μ=0}^{∞} Σ_{ν=0}^{∞} F_μν φ_μν(x, y)        (2.25a)

F_μν = ∫∫_R f(x, y) φ*_μν(x, y) dx dy        (2.25b)
The coefficients F_μν are now random variables having values that depend on the image selected for transformation. Instead of using a given set of basis functions, the expansion of a random field into a set of orthonormal functions may be adapted to the statistical properties of the class of images under consideration. The expansion is determined such that the coefficients are uncorrelated. Uncorrelated coefficients represent unique image properties. This transform is known as the Karhunen-Loève transform, or transform to principal components. It has the property that for some finite m, n, the mean square error ε_mn, averaged over all images in the random field, is a minimum, where
ε_mn = E{ ∫∫_R | f(x, y) − Σ_{μ=0}^{m−1} Σ_{ν=0}^{n−1} F_μν φ_μν(x, y) |² dx dy }        (2.26)
Two-dimensional unitary transforms are used for a number of image processing applications. The design of filters for image restoration and enhancement is facilitated by representing images in the Fourier transform domain. Unitary transforms compress the original image information into a relatively small region of the transform domain. Features for classification are selected from this region, where most of the information resides. Image data compression uses the same property to achieve a bandwidth reduction by discarding or grossly quantizing transform coefficients of low magnitude. The main advantages of unitary transforms for image processing are that the transform and its inverse can be easily computed, that fast algorithms exist for their computation, and that the information content of images is not modified.
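Both conservation of energy (eq. 2.23) and the compaction of image information into few coefficients can be checked numerically with the unitary (orthonormally scaled) discrete Fourier transform. A sketch (Python with NumPy; the smooth test image is artificial):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 64, endpoint=False)
# A smooth 64 x 64 test image: a constant plus one separable harmonic pattern.
img = 2.0 + np.outer(np.sin(2 * np.pi * x), np.cos(6 * np.pi * x))

F = np.fft.fft2(img, norm="ortho")      # unitary DFT
e_space = np.sum(np.abs(img) ** 2)
e_freq = np.sum(np.abs(F) ** 2)         # equal by Parseval's theorem

# Energy compaction: sort coefficient magnitudes; a handful carry the energy.
mags = np.sort(np.abs(F).ravel())[::-1]
frac_top = np.sum(mags[:8] ** 2) / e_freq   # fraction held by the 8 largest
```

For this smooth image, essentially all of the 4096 coefficients except a handful are zero, so frac_top is practically 1; real imagery behaves similarly, with most energy near the origin of the transform domain.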
2.2.3.1 Fourier Transform

The two-dimensional Fourier transform of a function g(x, y) is defined by [4]:

G(u, v) = ∫∫_{−∞}^{∞} g(x, y) e^{−2πi(ux+vy)} dx dy        (2.27a)

The inverse transform is defined by

g(x, y) = ∫∫_{−∞}^{∞} G(u, v) e^{2πi(ux+vy)} du dv        (2.27b)

G(u, v) is a complex-valued function of the spatial frequencies u and v,

G(u, v) = R(u, v) + iI(u, v)        (2.28)

and can be expressed in exponential form as

G(u, v) = |G(u, v)| e^{iφ(u,v)}        (2.29)

where

|G(u, v)| = √( R(u, v)² + I(u, v)² )        (2.30)

is the magnitude or amplitude of the transform and

φ(u, v) = tan⁻¹ [ I(u, v) / R(u, v) ]        (2.31)

is the phase. Figure 2.3a shows a block pattern and the magnitude of its Fourier transform. Let the vectors z = (x, y)ᵀ and w = (u, v)ᵀ be the spatial coordinates
FIGURE 2.3. Block pattern and magnitude of Fourier transform.
and the spatial frequencies, respectively. Then, the Fourier transform and its inverse may be written in vector form as
G(w) = ∫_{−∞}^{∞} g(z) e^{−2πi(w, z)} dz        (2.32a)

and

g(z) = ∫_{−∞}^{∞} G(w) e^{2πi(w, z)} dw        (2.32b)

where (w, z) = ux + vy is the inner product.
Some useful properties of the Fourier transform for image processing are:

1. Linearity
2. Behavior under an affine transformation
3. Relationship between a convolution and its Fourier transform
4. Relationship between the crosscorrelation function and its Fourier transform
5. Relationship between the autocorrelation function and its Fourier transform
6. Symmetry

These properties will be described in the following paragraphs. The Fourier transform is linear; i.e.,

F{a g₁ + b g₂} = a G₁ + b G₂        (2.33)

An affine transformation is defined as

z' = Az + t        (2.34)

where

A = | a₁₁  a₁₂ |        t = | t₁ |
    | a₂₁  a₂₂ |            | t₂ |

The Fourier transform of g(z') is

F{g(Az + t)} = (1 / |J|) e^{2πi(w, t)} G((A')⁻¹ w)        (2.35)

where J is the Jacobian

J = |∂z'/∂z|        (2.36)

  = | ∂x'/∂x  ∂x'/∂y |
    | ∂y'/∂x  ∂y'/∂y |        (2.37)

For a shift operation

z' = z + t        (2.38)
the transformation matrix A is the identity matrix and

F{g(z + t)} = e^{2πi(w, t)} G(w)        (2.39)

i.e., the magnitude of the Fourier transform is shift invariant. For scaling of the coordinate axes,

x' = a₁₁ x
y' = a₂₂ y        (2.40)

the transformation matrix is

A = | a₁₁   0  |        and        t = 0        (2.41)
    |  0   a₂₂ |

and

F{g(Az)} = (1 / |a₁₁ a₂₂|) G(u/a₁₁, v/a₂₂)        (2.42)

Scaling is illustrated in figure 2.3b, where the height of the rectangular blocks in the spatial domain is two times their width. For rotation by the angle φ,

x' = x cos φ + y sin φ
y' = −x sin φ + y cos φ        (2.43)

the transformation matrix is

A = |  cos φ  sin φ |        and        t = 0        (2.44)
    | −sin φ  cos φ |

and

F{g(Az)} = G(Aw)        (2.45)
The transform is also rotated by the angle φ. (See fig. 2.3c.)

The convolution, given in equation (2.7), and its Fourier transform are related by:

F{f(x, y) * h(x, y)} = F(u, v) H(u, v)        (2.46a)

F{f(x, y) h(x, y)} = F(u, v) * H(u, v)        (2.46b)

The convolution of two functions in the space domain is equivalent to multiplication in the spatial frequency domain. This relationship is the most important property of the Fourier transform for image processing. It is used to map image restoration and enhancement operations (see chs. 3 and 4) into the frequency domain. The Fourier transform of the crosscorrelation function, given in equation (2.9), is

F{R_fg(ξ, η)} = F*(u, v) G(u, v)        (2.47)

where F* is the complex conjugate of F.
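Property (2.46a) holds exactly for the discrete (cyclic) convolution and the discrete Fourier transform, which is how it is exploited numerically. A sketch (Python with NumPy; the arrays are random illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.random((8, 8))
h = rng.random((8, 8))

# Cyclic convolution evaluated directly from the definition:
# g[m, n] = sum_j sum_k f[j, k] h[(m - j) mod 8, (n - k) mod 8]
g = np.zeros((8, 8))
for j in range(8):
    for k in range(8):
        g += f[j, k] * np.roll(np.roll(h, j, axis=0), k, axis=1)

# Eq. (2.46a): the transform of the convolution equals the product F * H.
lhs = np.fft.fft2(g)
rhs = np.fft.fft2(f) * np.fft.fft2(h)
err = np.max(np.abs(lhs - rhs))       # zero up to floating-point rounding
```

The agreement is exact apart from rounding, which is why large convolutions are computed as two forward transforms, a pointwise product, and one inverse transform.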
The Fourier transform of the autocorrelation function, given in equation (2.8), is

F{R_ff(ξ, η)} = |F(u, v)|²        (2.48)

The Fourier transform of the autocorrelation function is called the power spectrum. If f(x, y) is real,

F(u, v) = F*(−u, −v)        (2.49)
and the magnitude of the transform is symmetric about the origin. Because image functions are always real, only half of the transform magnitude has to be considered. If f(x, y) is a symmetric function, i.e., if

f(x, y) = f(−x, −y)        (2.50)
the Fourier transform F(u, v) is real. Equation (2.27b) is a representation of g(x, y) as a linear combination of elementary periodic patterns of the form exp 2πi(ux+vy). The function G(u, v) as a weighting factor is a measure of the relative contribution of the elementary pattern, with the spatial frequency components u and v, to the total image. The function G(u, v) is called the frequency spectrum of g(x, y). Edges in pictures introduce high spatial frequencies along a line orthogonal to the edge in the complex frequency plane. Large values of G(u, v) for high spatial frequencies u and v correspond to sharp edges, low values to regions of approximately uniform gray values. As an example, let

f(x, y) = { 1   for |x| ≤ ½, |y| ≤ ½
          { 0   elsewhere        (2.51a)

The Fourier transform of f(x, y) is

F(u, v) = [ sin πu / πu ] [ sin πv / πv ]        (2.51b)
Equation (2.51a) represents a square aperture. A plot of equation (2.51a) is shown in figure 2.4a in three-dimensional perspective. Figure 2.4b is a plot of the magnitude, given by equation (2.51b).

2.2.3.2 Hankel Transform

For circularly symmetric functions f(x, y) = f([x² + y²]^½) = f(r), the transform pair is given by [9]:

F(u, v) = F(ω) = 2π ∫₀^∞ r f(r) J₀(2πrω) dr        (2.52a)

f(r) = 2π ∫₀^∞ ω F(ω) J₀(2πrω) dω        (2.52b)

where ω = (u² + v²)^½ is the radial spatial frequency, and J₀(r) is the
FIGURE 2.4. Fourier transform pair. (a) Square aperture. (b) Magnitude of Fourier transform.
zero-order Bessel function of the first kind. The transformation given in equations (2.52a) and (2.52b) is called the Hankel transform and is used in optical image processing where circular apertures can be easily realized.

2.2.3.3 Karhunen-Loève Transform

The Karhunen-Loève (KL) transform does not use a given set of orthonormal functions, but determines the expansion from the statistics of the given class of images. The images are assumed to be representatives of a homogeneous random field whose correlation function is computed from the sample images. For zero-mean random fields with autocorrelation function R, the functions φ_μν(x, y), which yield uncorrelated coefficients F_μν, must satisfy the integral equation [10]:

∫∫_R R(x−ξ, y−η) φ_μν(ξ, η) dξ dη = λ_μν φ_μν(x, y)        (2.53)

The possibly complex coefficients F_μν, determined by equation (2.25b), are uncorrelated; i.e.,

E{F_μν F*_ρσ} = E{F_μν} E{F*_ρσ}        (2.54)

If a finite number of coefficients are used, an image is optimally approximated with these uncorrelated coefficients in a mean-square-error sense. The Karhunen-Loève transform is used for image enhancement (ch. 4), feature selection (ch. 8), and image compression (ch. 9).

2.2.4 Description of Linear Systems
Image formation in a linear optical system can be described by a superposition integral [9]. This representation permits the use of linear system theory for the analysis of imaging systems. A linear system ℒ is an operator that maps a set of input functions into a set of output functions such that

ℒ{a f₁ + b f₂} = a ℒ{f₁} + b ℒ{f₂}        (2.55)

An arbitrary image f can be represented as a sum of point sources

f(x, y) = ∫∫_{−∞}^{∞} f(ξ, η) δ(x−ξ, y−η) dξ dη

where δ is the Dirac delta function. The response of a linear system ℒ to an input function given in equation (2.6) is

g(x, y) = ℒ{f(x, y)} = ∫∫_{−∞}^{∞} f(ξ, η) ℒ{δ(x−ξ, y−η)} dξ dη

        = ∫∫_{−∞}^{∞} f(ξ, η) h(x−ξ, y−η) dξ dη = f(x, y) * h(x, y)        (2.56)
where h(x−ξ, y−η) is the impulse response of the linear space-invariant system ℒ. In other words, the output of ℒ is found by convolving the input signal with h. Thus, a linear space-invariant system is completely described by its impulse response. In the context of imaging systems, the impulse response h is also called the point spread function (PSF). The PSF is the image of an ideal point source in the object plane. An alternative representation of a linear space-invariant system is obtained by applying the convolution property (2.46a) to equation (2.56), to yield

G(u, v) = F(u, v) H(u, v)        (2.57)

where G(u, v), F(u, v), and H(u, v) are the Fourier transforms of g(x, y), f(x, y), and h(x, y), respectively. H(u, v), the Fourier transform of the PSF h(x, y), is called the optical transfer function (OTF) of the linear space-invariant imaging system. The OTF, which is generally complex, can be expressed in exponential form as

H(u, v) = M(u, v) e^{iΦ(u, v)}        (2.58)

The amplitude M(u, v) and phase Φ(u, v) are called the modulation transfer function (MTF) and phase transfer function (PTF), respectively.
2.2.5 Filtering

Filtering is a basic operation used in radiometric restoration and enhancement processing of images. A linear filter is a linear space-invariant system that modifies the spatial frequency characteristics of an image. Because the effect of any linear space-invariant system can be represented by a convolution, linear filtering may be described by

g_f(x, y) = ∫∫_{−∞}^{∞} g(ξ, η) h(x−ξ, y−η) dξ dη        (2.59)

in the spatial domain, or by

G_f(u, v) = G(u, v) H(u, v)        (2.60)

in the frequency domain. (See the Fourier transform property given in eq. (2.46a).) The quantity g is the recorded image with Fourier transform G, h is the impulse response of the filter with Fourier transform H, and g_f is the filtered image with Fourier transform G_f. The variable H is called the filter transfer function. Only nonrecursive filters will be considered here. Because the entire digital image is recorded, ideal filters can be realized numerically, and the impulse response h(x, y) may be symmetric. This symmetry implies a purely real transfer function H(u, v), i.e., a phaseless filter.
Filters are conceptually easier to specify and apply in the frequency domain rather than in the spatial domain, because their representation in the frequency domain is simpler and convolution is replaced by multiplication [11, 12]. Filters may be characterized in the frequency domain by the shape of their transfer function H(u, v). Circularly symmetric filters represented in polar coordinates will be considered, where the radial spatial frequency coordinate is ω = (u² + v²)^½. An ideal lowpass filter is defined by
H(u, v) = H(ω) = { 1   for ω ≤ ω_c
                 { 0   for ω > ω_c        (2.61)

It suppresses all frequencies above the cutoff frequency ω_c. An ideal highpass filter is defined by

H(ω) = { 0   for ω < ω_c
       { 1   for ω ≥ ω_c        (2.62)

It suppresses all frequencies below the cutoff frequency. A bandpass filter is a combination of low- and highpass filters; it suppresses all spatial frequencies outside its pass band. A notch filter is the inverse of a bandpass filter; it suppresses all frequencies in a specified band. An exponential filter is defined by
H(ω) = { c e^{−αω}   for ω < ω_c
       { H_c         for ω ≥ ω_c        (2.63)

A Gaussian filter is defined by

H(ω) = c e^{−αω²}        (2.64)

Because of the convolution property, linear filtering may be performed either in the frequency domain with equation (2.60) or in the spatial domain with equation (2.59). For frequency-domain filtering, the multiplication of G(u, v) with an ideal filter causes discontinuities that result in oscillations of g_f in the spatial domain [13]. These oscillations due to the Gibbs phenomenon become apparent as ringing in the filtered image. Consequently, filters have to be designed so that ringing is minimized.
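The Gibbs oscillations are easy to demonstrate in one dimension: lowpass filtering a step edge with an ideal cutoff produces overshoot above and undershoot below the step. A sketch (Python with NumPy; the signal and cutoff are arbitrary):

```python
import numpy as np

n = 256
signal = np.zeros(n)
signal[n // 2:] = 1.0                  # an ideal step edge

S = np.fft.fft(signal)
# Ideal lowpass, cutoff 0.1 cycles/sample, as in eq. (2.61).
H = (np.abs(np.fft.fftfreq(n)) <= 0.1).astype(float)
filtered = np.real(np.fft.ifft(S * H))

overshoot = filtered.max() - 1.0       # ringing above the step
undershoot = -filtered.min()           # ringing below the step
```

Both quantities are positive (the classic overshoot is roughly 9 percent of the step height) and do not shrink as more harmonics are kept, which is why windowed rather than ideal filters are used in practice.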
Spatial-domain filters are usually designed in the frequency domain by specifying the filter transfer functions H(u, 0) and H(0, v) for the main frequency axes. Under the assumption that curves of constant H(u, v) are ellipses, the transfer function H(u, v) may be computed. The filter response is given by the inverse Fourier transform
h(x, y) = ∫∫_{−∞}^{∞} H(u, v) e^{2πi(ux+vy)} du dv        (2.65)
which is of infinite extent [12]. In reality, a finite-extent filter response is used, obtained by truncating h(x, y) with a window w(x, y):

h_t(x, y) = h(x, y) w(x, y)        (2.66)

Because of the convolution property given by equation (2.46b), the actual transfer function is then the convolution of the ideal transfer function with the Fourier transform of the window function:

H_t(u, v) = H(u, v) * W(u, v)        (2.67)
Spatial-domain truncation also causes overshooting or ringing in the neighborhood of edges in the filtered image. To reduce ringing, the window function has to be chosen so that H_t(u, v) is close to H(u, v) and that the ripples of H_t(u, v) in the neighborhood of discontinuities of H(u, v) are small. A window or apodizing function should only affect the border region of an image [14]. For circularly symmetric filters, one-dimensional windows can be used [15]:

w(x, y) = w([x² + y²]^½) = w(r)

For a constant window function

w(r) = { 1   for |r| ≤ r₀
       { 0   for |r| > r₀        (2.68)

where r₀ is the width of the filter. The Fourier transform is
W(ω) = sin (2πr₀ω) / (2πr₀ω)        (2.69)

whose first sidelobe peak is about 23 percent of the peak at ω = 0. The variable ω is the radial spatial frequency. For a triangular window
w(r) = { 1 − |r|/r₀   for |r| ≤ r₀
       { 0            for |r| > r₀        (2.70)

the Fourier transform is

W(ω) = [ sin (πr₀ω) / (πr₀ω) ]²        (2.71)

whose first sidelobe peak is about 4 percent of the main peak. For the Hamming window

w(r) = 0.54 + 0.46 cos (π|r|/r₀)        (2.72)

the absolute value of the largest side lobe in W(ω) is less than 1 percent of the main peak. Another frequently used window is the raised cosine, bell, or Hanning window:
w(r) = { 1                                        for 0 ≤ |r| ≤ (1−p)r₀
       { 0.5 { 1 − cos [π(r₀ − |r|) / (p r₀)] }   for (1−p)r₀ ≤ |r| < r₀        (2.73)

where p is the fraction of the filter width over which the window is applied [14].

Filter design consists of the following steps:

1. Specify the ideal filter transfer function H(u, v) in the frequency domain.
2. Compute the filter response h(x, y) as the inverse Fourier transform of H(u, v).
3. Multiply the filter response h by a proper window function w.
4. Fourier transform the apodized filter h_t and evaluate the resulting transfer function H_t.

The following examples are only intended to illustrate the effects of ideal filters. Because of their undesirable side effects (ringing), ideal filters are not used for filtering applications. The image used in the examples was generated by digitizing a NASA slide to 512 × 512 samples on the Atmospheric and Oceanographic Image Processing System [16]. Figure 2.5 shows the image and the magnitude of its Fourier transform. It is obvious that most of the image energy is concentrated in a region near the origin. The results of applying circularly symmetric filters to the image in figure 2.5 are shown in figures 2.6 to 2.8.
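The four design steps can be traced numerically. The sketch below (Python with NumPy) uses an ideal lowpass prototype and a full-width raised-cosine taper; the grid size, cutoff, and window radius are arbitrary illustrative choices:

```python
import numpy as np

n = 128
u = np.fft.fftfreq(n)
U, V = np.meshgrid(u, u)
w_rad = np.sqrt(U**2 + V**2)

# Step 1: ideal lowpass transfer function in the frequency domain.
H = (w_rad <= 0.15).astype(float)

# Step 2: filter response as the inverse Fourier transform, centered for display.
h = np.fft.fftshift(np.real(np.fft.ifft2(H)))

# Step 3: apodize with a circular raised-cosine (Hanning-type) window.
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2)
r0 = 16.0
w = np.where(r < r0, 0.5 * (1.0 + np.cos(np.pi * r / r0)), 0.0)
h_t = h * w

# Step 4: transform back to obtain the realizable transfer function H_t.
H_t = np.real(np.fft.fft2(np.fft.ifftshift(h_t)))

passband_gain = H_t[0, 0]                    # near 1: passband preserved
stopband_leak = np.abs(H_t[n // 2, n // 2])  # small: stopband suppressed
```

The windowed filter keeps the passband gain close to unity while strongly suppressing leakage far outside the cutoff, at the cost of a smoother transition than the ideal prototype.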
Figure 2.6 is filtered with an ideal lowpass filter with cutoff frequency ω_c = (u_c² + v_c²)^½ = 0.2. The blurring indicates that most of the edge information in the image is contained in a region above ω_c. Severe ringing effects are visible. Figure 2.7 shows the result of applying an ideal highpass filter with cutoff frequency ω_c = 0.1 to the image in figure 2.5. Figure 2.8 is filtered with a Gaussian filter with a width of ω_c = 0.12 at half maximum. The magnitude of the Fourier transform shows that most of the edge information has been removed, and severe blurring results. In section 2.2.3.1 it was shown that the Fourier coefficients in a given direction represent features orthogonal to that direction. Wedge filtering can be used to extract the information representing features in a given direction. Figure 2.9 shows the result of applying a wedge filter of width 30° in a direction of 50°. Although the magnitude |G(u, v)| of the Fourier transform of an image g(x, y) represents the sharpness of edges, it is the phase φ(u, v) that contains the position information of objects in the image. Figure 2.10b
FIGURE 2.5. Moon image (a) and magnitude of Fourier transform (b).
FIGURE 2.6. Image resulting from lowpass filtering (a) and magnitude of Fourier transform (b).
FIGURE 2.7. Image resulting from highpass filtering (a) and magnitude of Fourier transform (b).
FIGURE 2.8. Image resulting from Gaussian filtering (a) and magnitude of Fourier transform (b).
FIGURE 2.9. Image resulting from wedge filtering (a) and magnitude of Fourier transform (b).
FIGURE 2.10. Magnitude and phase images of Fourier transform of figure 2.5a. (a) Image reconstructed from magnitude. (b) Image reconstructed from phase component.
shows the image reconstructed from the phase of the Moon image in figure 2.5, obtained by setting the magnitude |G(u, v)| = 1 in the u, v plane. The corresponding image reconstructed from the magnitude, shown in figure 2.10a, was obtained by setting the phase φ(u, v) = 0 in the u, v plane.

For certain nonlinear systems, generalized linear filtering may be used [17]. A homomorphic transformation maps the original image into a new space in which linear filtering may be performed. The filtered output is transformed back into the original space. For the image model given in equation (2.1), a nonlinear transformation with the logarithm maps multiplication into addition. Homomorphic filtering has been very successfully applied in image processing [2]. A block diagram of homomorphic filtering is shown in figure 2.11. Figure 2.12 shows an original image and the result of homomorphic filtering with a highpass filter that suppresses the illumination component while enhancing the reflectance part of an image.
a highpass
illumination
the reflectance
of an image.
2.3
Image
Formation systems
and
Recording variations from which to Frame in object an image accomplish cameras, radiant can this e.g., be energy into an
Imaging image Two cameras television Scanning mechanical are formed or basic
transform signal are
an electrical systems and scanning
reconstructed. frame storage
employed cameras. all picture
function, film and
cameras, cameras, scanners, by
sense such
elements
of an object cameras time the
simultaneously. and opticalImages a photointo is amplifor a
as conventional the system The picture that
television elements projects
sense
sequentially. onto radiation that
an optical image plane.
radiation the signal and
sensor latent lied digital effects
in the image for
photosensor film and or into amplified, an
converts electric sampled,
on
photographic transmission
analog transmission. are the
quantized
Images and
always
degraded
to of the into
some sensing
extent and
because recording and
of
atmospheric These
characteristics may be grouped
system.
degradations Radiometric system, sion
radiometric from blurring vignetting (scattering,
geometric effects and of shading,
distortions. the imaging transmisand haze),
degradations nonlinear amplitude
arise
response,
noise,
atmospheric
interference
attenuation,
FIGURE 2.11. Block diagram of homomorphic filtering (logarithm, linear filtering, exponential).
FIGURE 2.12. Homomorphic filtering. (a) Original image. (b) Result of homomorphic filtering with a highpass filter.
variable surface illumination (differences in terrain slope and orientation), and change of terrain radiance with viewing angle. Geometric distortions can be categorized into sensor-related distortions, such as aberrations in the optical system and nonlinearities and noise in the scan-deflection system; sensor-platform-related distortions caused by changes in the attitude and altitude of the sensor; and object-related distortions caused by Earth rotation, Earth curvature, and terrain relief.
Perspective distortions are dependent on the type of camera. Although ideal images from frame cameras are perspective projections of the Earth scene, images from scanning cameras exhibit a variable distortion because of the combined motions of sensor and object during the imaging time. Knowledge of the processes by which images are formed is essential for image processing. The effects of the sensing and recording device on images must be understood to correct for the distortions. Image formation and recording can be mathematically described by a transformation T that maps an object f_i(x', y') from the object plane with coordinate system (x', y') into the recorded image g_i(x, y) with coordinate system (x, y):

g_i(x, y) = T_i{f_i(x', y')}        i = 1, . . . , P        (2.74)
where T_i represents the image degradations for the ith component of the multiimage. To recover the original information from the recorded observations, the nature of the transformation T_i must be determined, followed by the inverse transformation T_i^-1 on the image g_i(x, y). The following discussion will refer to component images, and the index i will be omitted. The mathematical treatment is facilitated by separating the degradations into geometric distortions T_G and radiometric degradations T_R rather than treating T as a whole:

g(x, y) = T_G T_R{f(x', y')}    (2.75)

Geometric distortions affect only the position rather than the magnitude of the gray values. Thus, T_G is a coordinate transformation, which is given by

x' = p(x, y)
y' = q(x, y)    (2.76)

The radiometric degradation T_R represents the effects of atmospheric transfer, image formation, sensing, and recording on the image intensity distribution. The influence of the atmosphere on the object radiant energy f is determined by attenuation, scattering, and emission. A fraction τ (0 < τ ≤ 1) of the emitted and reflected object radiance is transmitted to the sensor. The radiance scattered or emitted by the atmosphere into the sensor's field of view is B and is sometimes called path radiance [3]. Thus,
the object radiance distribution f_i(x, y) is modified by the atmosphere into the apparent object radiance

f_i*(x, y) = τ_i(x, y) f_i(x, y) + B_i(x, y),  i = 1, ..., P    (2.77)

where τ_i is the spectral transmittance of the atmosphere. The path radiance B_i, consisting of a scattering and an emission component, limits the amount of information that can be extracted from the measured radiation. Image formation is the transformation of the apparent object radiant energy f*(x', y') in the object plane into image radiant energy g_o(x', y') in the image plane. Image formation in a linear optical system under the assumption of an ideal aberration-free lens can be described by [9, 18]

g_o(x', y') = ∫∫ h_o(x', y', ξ, η) f*(ξ, η) dξ dη    (2.78)
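In discrete form, equation (2.77) is a pixelwise gain and offset, so the true radiance is recoverable when the transmittance and path radiance are known. A minimal sketch, with arbitrary values for τ and B and with both assumed constant over the scene:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(50.0, 200.0, size=(4, 4))   # true object radiance (hypothetical units)

tau = 0.8        # atmospheric transmittance, 0 < tau <= 1 (assumed value)
B = 10.0         # path radiance from scattering/emission (assumed value)

f_star = tau * f + B                # apparent radiance at the sensor, eq. (2.77)

# with tau and B known or estimated, invert the degradation pixel by pixel
f_recovered = (f_star - B) / tau
```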
This equation expresses the fact that the radiant energy distribution in the image plane is the superposition of infinitesimal contributions due to all object point contributions. The function h_o is the PSF of the optical system. It determines the radiant energy distribution in the image plane due to a point source of radiant energy located in the object plane. The PSF h_o(x', y', ξ, η) describes a space-variant imaging system, because h_o varies with the position in both image and object planes. If the imaging system acts uniformly across image and object planes, the PSF is independent of position. For such a space-invariant system the image formation equation (2.78) becomes a convolution:

g_o(x', y') = ∫∫ h_o(x' − ξ, y' − η) f*(ξ, η) dξ dη    (2.79)
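A discrete counterpart of the space-invariant formation equation (2.79) is a two-dimensional convolution. The following minimal sketch uses a small uniform PSF and zero padding at the image borders; both are illustrative choices, not the book's imaging model.

```python
import numpy as np

def convolve2d(f, h):
    """Discrete form of eq. (2.79): superpose shifted copies of the PSF.
    'same'-size output, zero padding outside the scene."""
    m, n = h.shape
    pad_y, pad_x = m // 2, n // 2
    fp = np.pad(f, ((pad_y, pad_y), (pad_x, pad_x)))
    g = np.zeros_like(f, dtype=float)
    for i in range(m):
        for j in range(n):
            g += h[i, j] * fp[i:i + f.shape[0], j:j + f.shape[1]]
    return g

# a 3x3 uniform PSF (a crude defocus model) spreads a point source into a square
f = np.zeros((7, 7))
f[3, 3] = 9.0
h = np.full((3, 3), 1.0 / 9.0)
g = convolve2d(f, h)
```

The point source of total energy 9 is spread over a 3 by 3 neighborhood; the total radiant energy is preserved because the PSF sums to one.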
The image radiant energy g_o is sensed and recorded by a sensor. Image sensing and recording can be performed by photochemical and electro-optical systems. Photochemical technology combines detection and recording in photographic film. The nonlinear film characteristic relates the incident light to optical density. The recorded image g represents the local density of photoactivated silver grains. In electro-optical systems, such as vidicons and optical-mechanical scanners, image detection and recording are separate processes. The images are recorded as electrical signals that are well suited for conversion to a digital representation. In these systems the sensor output is a function of the incident light intensity. The sensing and recording induce noise, which is, in general, signal dependent. To facilitate mathematical treatment, noise is assumed to be signal independent and additive. Noise may be grouped into coherent and random noise. Random noise is denoted by n_r and is assumed to be a homogeneous random process that is uncorrelated with the signal. The effects of sensing and recording are
represented by a nonlinear operator O. Often a linear approximation to the nonlinear sensor response, as well as to the influence of the atmosphere [18], is justified. Under this assumption the radiometric transformation T_R, representing atmospheric transfer, optical image formation, sensing, and recording for space-invariant systems, yields a radiometrically degraded image g_R given by

g_R(x', y') = ∫∫ h(x' − ξ, y' − η) f(ξ, η) dξ dη + n_r(x', y') = T_R{f(x', y')}    (2.80)
The PSF h of the linearized imaging system is then a combination of the transfer characteristics of the atmosphere, of the optical system, of the photosensor aperture, and of the electronic system. Practically, h is zero outside a region R_h = {(x', y') | 0 ≤ x' ≤ x_h, 0 ≤ y' ≤ y_h}. This formulation of the imaging process is identical for frame and scanning cameras. The optical-mechanical line-scanning process, however, is a function of time. The formulation of the imaging process given in equation (2.80) implies that the convolution of the object radiance distribution f(x', y') with the system PSF h(x', y') be performed for each picture element. If neither object radiance distribution nor camera response varies with time during imaging, the convolution can be performed for the entire image. Equation (2.80) represents only the spatial characteristics of object and imaging system. Inclusion of the spectral characteristics would require integration of equation (2.80) over wavelength. The formulation (2.80) is convenient if the spectral characteristics of object and camera are approximately constant over the measured spectral band. This assumption is justified for narrow spectral bands, as in the simplified image representation (eq. 2.1).

A frame camera such as the Return Beam Vidicon (RBV) camera on the near polar-orbiting Landsat 1 and 2 spacecraft operates by shuttering three independent cameras simultaneously, each sensing a different spectral band. On Landsat 3, two panchromatic cameras are used whose shutters are opened sequentially, producing two side-by-side images rather than three overlapping images of the same scene. The viewed ground scene is stored on the photosensitive surface of the camera tube, and after shuttering, the image is scanned by an electron beam to produce a video signal. To produce overlapping images along the direction of spacecraft motion, the cameras are shuttered every 25 s. The video bandwidth during readout is 3.2 MHz [19].

A scanning camera like the MSS on the Landsat spacecraft uses an oscillating mirror to continuously scan lines perpendicular to the spacecraft velocity. The image lines are scanned simultaneously in each spectral band for each mirror sweep. The spacecraft motion provides the along-track progression of the scan lines to form an image. The optical energy
from the mirror is sensed simultaneously by an array of six detectors per spectral band. The detector outputs are digitized (see sec. 2.5) and formatted into a continuous data stream of 15 megabits per second. The spatial resolution is approximately 80 m.

Meteorological remote sensing uses spin-stabilized satellites placed in a geostationary equatorial orbit, such as the Synchronous Meteorological Satellite (SMS), approximately 22,300 miles above the Earth's surface. At this altitude, the orbital angular velocity of the satellite matches the rotational angular velocity of the Earth's surface, so that for a truly synchronous equatorial orbit the position of the satellite relative to features on the ground is fixed. The satellite is rotated at a constant rate about its axis, which is perpendicular to the orbit plane. An optical system focuses the received radiation onto a detector. The satellite rotation scans the image lines, and a stepping mirror deflects the radiation in the direction perpendicular to the scan so that a different line is scanned on each rotation. The detector output is converted to an electrical signal, digitized, and transmitted to the ground. The SMS has two channels: a visible (0.55-0.7 μm) channel with a 0.9-km spatial resolution and a thermal infrared (10.5-12.6 μm) channel with an 8-km spatial resolution. A rapid-scan mode permits increasing the temporal resolution to 3 min.

The complete linearized model for the imaging process is

g(x, y) = T_G T_R{f(x', y')} + n_s(x, y)    (2.81)

The image g_R(x', y') is radiometrically degraded by the operator T_R and is geometrically distorted by T_G. Coherent noise n_s is added to obtain the final recorded image g(x, y). A block diagram of this model is shown in figure 2.13.
2.4 Degradations

The sources of degradations in imaging systems represented in the image formation model, equation (2.81), can be grouped into several categories.

FIGURE 2.13. Linearized imaging model. (Object radiant energy f(x', y') passes through atmospheric effects (haze, illumination) to become apparent object radiant energy f*(x', y'); image formation (optical system characteristics) produces image radiant energy g_o(x', y'); image detection and recording (sensor effects) with random noise n_r yield the radiometrically degraded image g_R(x', y'); geometric distortion and structured noise n_s yield the recorded image g(x, y).)
These categories give rise to corresponding classes of image processing techniques that are used to correct for the degradations. A perfect imaging system would cause no geometric distortions; i.e., T_G = I, where I is the identity transformation, or

x' = x = p(x, y)
y' = y = q(x, y)    (2.82)

It would induce no noise:

n_r(x, y) = 0
n_s(x, y) = 0    (2.83)

and it would have an ideal PSF:

h(x, y) = δ(x, y)    (2.84)

Thus, the recorded image, given by

g(x, y) = ∫∫ f(ξ, η) δ(x − ξ, y − η) dξ dη = f(x, y)    (2.85)

is identical to the object f because of equation (2.6). Degradation categories that may be distinguished are discussed in sections 2.4.1 to 2.4.4.

2.4.1 Geometric Distortion
In the absence of spatial and point degradation and noise, and with the geometric distortions given in equation (2.76), equation (2.81) becomes

g(x, y) = ∫∫ f(ξ, η) δ[p(x, y) − ξ, q(x, y) − η] dξ dη = f[p(x, y), q(x, y)]    (2.86)

This equation describes an imaging system that introduces only a distortion due to a coordinate change. Geometric distortions can be caused by:

1. Instrument errors. Examples are distortions in the optical system, scan nonlinearities, scan-length variations, and nonuniform sampling rate.

2. Panoramic distortion or foreshortening. This error is caused by scanners using rotating mirrors with constant angular velocity. The velocity ṡ of the scanning aperture over the Earth's surface is given by (see fig. 2.14a):

ṡ = aθ̇ / cos²θ    (2.87)
FIGURE 2.14. Panoramic distortion and roll effect. (a) Panoramic distortion (θ_m is the maximum scan-mirror deflection). (b) Effect of roll (R is the roll angle).
where a is the altitude of the sensor and θ is the angle of the scan-mirror deflection. Although the scanning aperture velocity is nonlinear, the produced image is recorded with constant velocity. Because of this difference in scanning and recording speed, the distance between sample centers and the sample size in the scan direction are functions of the mirror deflection θ (fig. 2.14a). The effect is a scale distortion that increases with the deflection of the mirror from the vertical. For example, the maximum mirror deflection for the Landsat MSS is 5.78°, resulting in a cumulative distortion of about 11 pixels.
3. Earth rotation. Significant Earth rotation during the time required to scan a frame causes a skew distortion that varies with latitude. For 40° north latitude the skew angle for Landsat MSS images is about 3°, resulting in a shift of 122 pixels between the top and bottom lines of a frame.

4. Attitude changes that occur during the time to scan a frame. These changes are yaw, pitch, and roll. Yaw is the rotation of the aircraft or spacecraft about the local zenith vector (pointed toward the center of the Earth). Yaw causes rotation or additional skew distortions. Pitch, the rotation of the aircraft or spacecraft in the direction of motion, changes the scale across lines nonlinearly and causes aspect distortions. Roll is the rotation about the velocity vector (fig. 2.14b). It introduces scale changes in the line direction similar to panoramic distortion. Attitude effects are a major cause of geometric distortion in scanning camera images because of the serial nature of the scanning operation. The possibility of sudden and irregular attitude changes may cause serious problems for aircraft scanners. The geometry of frame camera images is internally consistent.

5. Altitude changes. These changes cause scale errors. The effect is similar to a linearly varying scale factor error.

6. Perspective errors. These errors can occur if the image data result from a perspective projection.

The effects of these geometric distortions in remotely sensed images are shown in figure 2.15. Geometric errors are discussed in [20] and in [21]. Techniques to correct for geometric distortions are discussed in section 3.3.
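The magnitudes quoted for items 2 and 3 can be checked with rough back-of-the-envelope arithmetic. The maximum mirror deflection is taken from the text; the samples-per-line, frame-time, and pixel-size constants are assumptions chosen to match common Landsat MSS descriptions, not values stated here.

```python
import math

# 2. Panoramic distortion: recording assumes ground position proportional to
#    theta, while the true position grows as a*tan(theta) (eq. 2.87 integrated).
theta_max = math.radians(5.78)        # maximum MSS scan-mirror deflection (from text)
samples_per_line = 3240               # approximate MSS samples per line (assumption)
panoramic_pixels = samples_per_line * (math.tan(theta_max) / theta_max - 1)

# 3. Earth-rotation skew at 40 deg N latitude.
earth_radius = 6.378e6                # m
sidereal_day = 86164.0                # s
frame_time = 28.0                     # s, approximate MSS frame scan time (assumption)
pixel_size = 79.0                     # m, approximate MSS ground sample (assumption)
ground_speed = (2 * math.pi * earth_radius / sidereal_day) * math.cos(math.radians(40.0))
skew_pixels = ground_speed * frame_time / pixel_size

print(round(panoramic_pixels), round(skew_pixels))
```

The panoramic estimate comes out close to the 11 pixels cited above; the skew estimate lands near the cited 122-pixel shift, with the difference attributable to the assumed frame time and pixel size.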
2.4.2 Radiometric Point Degradation

In some imaging systems the object brightness is not mapped uniformly into the image plane. For example, in an optical system, light that passes obliquely through the system is generally attenuated more than light that passes along the optical axis. This degradation is called vignetting. The cathode of a vidicon is not equally sensitive at all locations, resulting in an image in which the gray levels do not correspond equally to equally bright points in the scene. Such spatial degradation without blurring is known as shading. The point degradations may be represented by the following PSF:

h(x, y, ξ, η) = e(x, y) δ(x − ξ, y − η)    (2.88)
FIGURE 2.15. Geometric distortions. Solid figures are the correct images, and dashed figures are the distorted images. (a) Scan nonlinearity. (b) Panoramic and roll distortions. (c) Skew (from rotation of Earth). (d) Rotation and aspect distortions (attitude effects). (e) Scale distortion (altitude effect). (f) Perspective distortion.
In the absence of geometric distortions and noise, the image formation equation reduces to

g(x, y) = ∫∫ f(ξ, η) e(x, y) δ(x − ξ, y − η) dξ dη = e(x, y) f(x, y)    (2.89)

Such multiplicative point degradations can be corrected by some of the contrast-modification techniques discussed in chapter 4.
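Because the degradation in equation (2.89) is purely multiplicative, it can be inverted pixel by pixel when the gain field e(x, y) is known, as in flat-field calibration. A minimal sketch with a synthetic shading field (the quadratic fall-off is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.uniform(100.0, 200.0, size=(32, 32))      # ideal scene

# synthetic vignetting gain e(x, y): bright in the center, darker toward edges
y, x = np.indices(f.shape)
r2 = ((x - 15.5) ** 2 + (y - 15.5) ** 2) / (2 * 15.5 ** 2)
e = 1.0 - 0.4 * r2                                 # multiplicative shading field

g = e * f                                          # degraded image, eq. (2.89)
f_hat = g / e                                      # flat-field correction
```

In practice e(x, y) is estimated by imaging a uniformly illuminated target and normalizing the response.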
2.4.3 Radiometric Spatial Degradation

If the PSF is a function of the object coordinates ξ and η, i.e., h = h(x − ξ, y − η), blurring or loss of resolution occurs because of the integrating effect of the imaging system. If no geometric distortions and noise are present, the image formation equation becomes

g(x, y) = ∫∫ h(x − ξ, y − η) f(ξ, η) dξ dη    (2.90)

Radiometric spatial degradations are caused by aberrations in optical systems, diffraction effects, defocused systems, atmospheric turbulence, and relative motion between imaging system and object. For an extremely defocused lens the assumption can be made that the PSF is constant over the shape of the aperture and zero elsewhere. Thus, a defocused lens with circular aperture of radius a has the following PSF in polar coordinates [9]:

h(r) = 1 for r ≤ a;  0 for r > a    (2.91)
The optical transfer function is

H(ω) = 2πa J₁(aω)/(aω)    (2.92)

where r = (x² + y²)^1/2, ω = (u² + v²)^1/2, and J₁(ω) is the first-order Bessel function of the first kind. For a square aperture

h(x, y) = 1 for |x| ≤ a, |y| ≤ a;  0 elsewhere    (2.93)

the OTF is

H(u, v) = a² (sin 2πau / 2πau)(sin 2πav / 2πav)    (2.94)

A more accurate derivation has to consider the effect of diffraction. The functions H(ω) and H(u, v) are real, because h(r) and h(x, y) are even functions. Because H may be positive or negative, the phase function Φ(u, v) of the blurred image will have two values, 0 or π, depending on
the sign of H. The locations where the phase changes are called phase or contrast reversals.

The blur caused by atmospheric turbulence for long exposure times can be approximated by the following PSF [22]:

h(x, y) = e^(−(x² + y²)/2b²)    (2.95)

The corresponding optical transfer function is

H(u, v) = e^(−b²(u² + v²)/2)    (2.96)

Because H is positive, there are no phase reversals in the blurred image.

The degradation caused by a uniform relative motion in the x direction of the form

x(t) = V_x t    (2.97)

between camera and scene while the image is recorded can be described by

h(x, y) = (1/T) ∫₀ᵀ δ(x − V_x t, y) dt    (2.98)

where T is the recording time, and the scene is assumed to be invariant in time. The OTF is given by [4]:

H(u, v) = sin(πV_x T u)/(πV_x u)    (2.99)
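A quick numerical check of equation (2.99) makes the lobe structure visible; the blur extent V_x T below is an arbitrary illustrative value.

```python
import numpy as np

vx_t = 8.0                                 # blur extent V_x * T in pixels (illustrative)
u = np.linspace(1e-6, 0.5, 1000)           # spatial frequency, cycles per pixel
H = np.sin(np.pi * vx_t * u) / (np.pi * u)

# each sign change of H below the Nyquist frequency marks a pi phase reversal;
# zeros lie at u = k / (V_x * T)
crossings = int(np.count_nonzero(H[:-1] * H[1:] < 0))
```

For a blur extent of 8 pixels the zeros fall at u = 1/8, 2/8, and 3/8 cycles per pixel, so three sign changes occur below the endpoint u = 0.5.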
Alternate lobes of H(u, v) have negative values and a phase reversal of π radians. Methods to correct for spatial degradations are discussed in section 3.4. The PSF and modulation-transfer function for the degradation given in equation (2.95) with b = 0.008 are shown in figure 2.16. An example of the imaging process with equation (2.80) is given in figure 2.17. Figure 2.17a shows the undegraded image, and figure 2.17b shows the degradation caused by the blurring effect of the PSF in figure 2.16 and by additive noise generated from uniformly distributed random numbers.

2.4.4 Spectral and Temporal Differences
Spectral and temporal differences in multiimages can either be degradations, or they can convey new information. Compensation and correction of degradations that are due to misalinements and illumination changes in the component images are the topics of chapters 4 and 5.

2.5 Digitization
After images have been formed and recorded, they must be digitized for processing by a digital computer. Image formation and recording have
FIGURE 2.16. Imaging system characteristics. (a) Point-spread function. (b) Modulation transfer function.
been described in terms of two-dimensional continuous functions f(x, y) and g(x, y), respectively. Digitization consists of sampling the gray level in an image at an M by N matrix of points, and of quantizing the continuous gray levels at the sampled points into K usually uniform intervals. The finer the sampling (M and N large) and the quantization (K large), the better the approximation of the original image. The aim of sampling is to represent a continuous image by an array of samples, such that the continuous image can be reconstructed from the samples. A digital multiimage with P components is represented by PMN samples.

The raster-scan operation of image digitizers and scanners imposes a sequential row structure on the image data. Therefore, the row is the fundamental storage unit of the resulting image file. For multiimages, different components may be stored in separate files, or the rows of the components may be stored as records in one file. This storage structure is called band-sequential (BSQ) format. Alternatively, the corresponding rows from the P components may be concatenated in one line and stored by row. This format is referred to as band-interleaved by line (BIL). Finally, the values from all components for a given point may be
FIGURE 2.17. Example of image formation. (a) Original scene. (b) Recorded image.
combined to a P-dimensional vector, and the vectors for one image row are concatenated, resulting in one record of the digital file. This storage format is known as band-interleaved by pixel (BIP).
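With an array library, the three storage orders are just axis permutations of a P × M × N multiimage. A minimal sketch with a small synthetic multiimage:

```python
import numpy as np

P, M, N = 3, 2, 4                             # components, rows, columns
bsq = np.arange(P * M * N).reshape(P, M, N)   # band-sequential: one band after another

bil = bsq.transpose(1, 0, 2)   # band-interleaved by line:  (row, band, column)
bip = bsq.transpose(1, 2, 0)   # band-interleaved by pixel: (row, column, band)

# a BIP record for one row concatenates the P-dimensional vectors of its pixels
row0_record = bip[0].reshape(-1)
```

Reading pixel (0, 0) across all bands is a contiguous slice in BIP (`bip[0, 0]`) but a strided access in BSQ (`bsq[:, 0, 0]`), which is why the choice of format matters for per-pixel multispectral processing.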
2.5.1 Sampling

Images are usually sampled at fixed increments Δx, Δy, where x = jΔx, y = kΔy (j = 1, ..., M; k = 1, ..., N). The matrix of samples g(jΔx, kΔy) is the sampled or digital image. Δx and Δy are the sampling intervals in the x and y directions, respectively. In a perfect sampling system the sampling is represented by an array of Dirac delta functions [9, 23]:

s(x, y) = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} δ(x − jΔx, y − kΔy)    (2.100)
The Fourier transform of s(x, y) is S(u, v) [24], where

S(u, v) = (1/ΔxΔy) Σ_{m=−∞}^{∞} Σ_{n=−∞}^{∞} δ(u − m/Δx, v − n/Δy)    (2.101)
A sampled version g_s of an image g(x, y) is obtained by multiplying g(x, y) with the sampling function s(x, y):

g_s = Σ_j Σ_k g(x, y) δ(x − jΔx, y − kΔy)    (2.102)

where g is evaluated at the discrete coordinates (jΔx, kΔy) to form the discrete sampled image g_s. With the convolution theorem, equation (2.46b), the Fourier transform of the sampled image is

G_s(u, v) = (1/ΔxΔy) Σ_m Σ_n G(u − m/Δx, v − n/Δy)    (2.103)
The Fourier transform of the sampled image is a periodic replication of the transform G(u, v) (see fig. 2.18). This replication of the basic transform introduced by sampling is called aliasing [25]. Aliasing can cause a distorted transform if the replicas overlap. It will be assumed that g(x, y) is a bandlimited function. A function g(x, y) is bandlimited if its Fourier transform G(u, v) is zero outside a bounded region |u| > U, |v| > V in the spatial frequency domain. The overlap of the replicated spectra can be avoided if g(x, y) is bandlimited and if the sampling intervals are chosen such that

Δx ≤ 1/2U,  Δy ≤ 1/2V    (2.104)

The terms 1/(2Δx) and 1/(2Δy) are called the Nyquist, or folding, frequencies. In physical terms, the sampling intervals must be equal to or smaller than one-half the period of the finest detail within the image. In practical systems the sampling function is not a Dirac delta function, but an array of impulses of finite width. Thus, the sampling array
s(x, y) = Σ_{j=0}^{M−1} Σ_{k=0}^{N−1} h_a(x − jΔx, y − kΔy)    (2.105)

is composed of M × N identical, nonoverlapping impulses h_a(x, y) arranged on a grid of spacing Δx, Δy. The sampling impulses are of finite extent; therefore,

h_a(x, y) = 0 outside a resolution cell    (2.106)

The actual values of the image samples are obtained by a spatial integration of the product g(x, y)s(x, y) over each
FIGURE 2.18. Two-dimensional spectrum. (a) Spectrum of bandlimited image g. (b) Spectrum of sampled image g_s.
resolution cell [26]. The integration is inherently performed on the image detector surface. Thus, the sampled image is given by the convolution

g_s = Σ_{j=0}^{M−1} Σ_{k=0}^{N−1} g(x, y) h_a(x − jΔx, y − kΔy)    (2.107)
which is evaluated at discrete coordinates jΔx and kΔy. The frequency spectrum G_s of the sampled image now becomes an aliased version of the spectrum, degraded by the convolution with the finite impulse H_a:

G_s(u, v) = (1/ΔxΔy) Σ_m Σ_n G(u − m/Δx, v − n/Δy) H_a(u − m/Δx, v − n/Δy)    (2.108)

The effects of increasing the sampling intervals and reducing the sampling grid size are shown in figure 2.19. The original image in figure 2.5a is an N by N (N = 512) image with 256 gray levels. Figures 2.19a through 2.19d show the same image with N = 256, 128, 64, and 32, respectively.

2.5.2 Quantization

The amplitudes in the sampled image g_s(jΔx, kΔy) must be divided into discrete values for digital processing. This conversion between analog samples and discrete numbers is called quantization. The number of
quantization levels must be sufficiently large to represent fine detail, to avoid false contours in reconstruction, and to match the sensitivity of the human eye. Selective contrast enhancement in digital processing justifies quantization even well beyond the eye's sensitivity. In most digital image processing systems, a uniform quantization into K_q levels is used. Each quantized picture element is represented by a binary word. If natural binary code is used and the word length is b bits, the number of quantization levels is

K_q = 2^b    (2.109)

The word length b is determined by the signal-to-noise ratio and is chosen to be 6, 7, or 8 bits, resulting in 64, 128, or 256 quantization levels, respectively. The degradation resulting when an image is quantized with an insufficient number of bits is known as the contouring effect. This effect, the formation of discrete rather than gradual brightness changes, becomes perceptible when b < 6 [27]. This condition can be improved by nonlinear quantization, which increases the size of quantization intervals that are unlikely to be occupied and reduces the size of those intervals whose use is highly probable [28]. Nonuniform quantization is also justified on the basis of the properties of the human visual system. In regions with slowly changing gray levels, it is important to have fine quantization. MSS 4, 5, and 6 digital images from Landsats 1 and 2 are logarithmically quantized to 6-bit words onboard the satellite to meet transmission constraints and decompressed to 7-bit words on the ground. Representing the sampled and quantized picture elements by binary code words is called pulse code modulation (PCM) [29]. A multiimage with M × N samples, P components, and K_q = 2^b quantization levels requires PMNb bits for its representation using PCM.
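A minimal sketch of uniform quantization, equation (2.109), illustrating the loss of gray levels and the PCM bit budget; the image sizes are arbitrary.

```python
import numpy as np

def quantize(image, bits):
    """Uniformly quantize a [0, 1) image to K_q = 2**bits levels."""
    levels = 2 ** bits
    codes = np.clip(np.floor(image * levels).astype(int), 0, levels - 1)  # PCM codes
    return codes / (levels - 1)            # reconstructed gray values

gradient = np.linspace(0.0, 0.999, 256)
coarse = quantize(gradient, 2)   # only 4 levels: visible contouring on a smooth ramp
fine = quantize(gradient, 8)     # 256 levels: smooth to the eye

# PCM storage for a multiimage: P * M * N * b bits
P, M, N, b = 4, 512, 512, 8
total_bits = P * M * N * b
```

The 2-bit version collapses the smooth ramp into four flat bands, which is exactly the false contouring described above.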
FIGURE 2.19. Effects of reducing sampling grid size. (a) N = 256. (b) N = 128.
FIGURE 2.19. Continued. (c) N = 64. (d) N = 32.
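The grid-size reduction illustrated in figure 2.19 can be simulated by block averaging, which mimics integrating over larger resolution cells; this is a simplification of the finite-aperture sampling of equation (2.107), with a random test image standing in for figure 2.5a.

```python
import numpy as np

def downsample(image, factor):
    """Reduce the sampling grid by block averaging (larger resolution cells)."""
    m, n = image.shape
    blocks = image.reshape(m // factor, factor, n // factor, factor)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(2)
img512 = rng.uniform(0, 255, size=(512, 512))
img256 = downsample(img512, 2)     # as in figure 2.19a
img32 = downsample(img512, 16)     # as in figure 2.19d
```

Averaging preserves the overall mean radiance while discarding spatial detail, which is why the coarser images in figure 2.19 look blocky but not darker or brighter.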
Figure 2.20 illustrates the effects of reducing the number of quantization levels. Figures 2.20a through 2.20d are obtained by quantizing the image in figure 2.5a with b = 6, 4, 2, and 1 bits while the sampling intervals are kept the same. The false contouring becomes obvious as b is reduced. To obtain a faithful representation of digital images, at least 6 bits are required, and 8 bits are used in general.
2.6 Operations on Digital Images

2.6.1 Discrete Image Transforms
A digital image is represented by an M by N array of numbers, i.e., by a matrix. Matrices will be denoted by uppercase boldfaced letters A, B, etc., or by [a], [b], etc. Because uppercase letters also denote transforms, the latter notation will primarily be used for matrices. Vectors will be denoted by lowercase boldfaced letters, a, b, etc. A discrete representation of the transform pair, equations (2.20a) and (2.20b), is given by

F(m, n) = Σ_{j=0}^{M−1} Σ_{k=0}^{N−1} φ_mn(j, k) f(j, k),  m = 0, 1, ..., M−1; n = 0, 1, ..., N−1    (2.110a)

f(j, k) = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} φ*_mn(j, k) F(m, n),  j = 0, 1, ..., M−1; k = 0, 1, ..., N−1    (2.110b)

where [f] and [F] are M by N matrices, and φ_mn(j, k) is an element of a four-dimensional operator. Equations (2.110a) and (2.110b) may be written in vector form as

F = Φ f    (2.111a)
f = Φ* F    (2.111b)

The vectors F and f, each with MN components, are created by lexicographical ordering of the column vectors of matrices [F] and [f], respectively (i.e., the first column of matrix [f] becomes vector elements 1 through M, the second column becomes vector elements M + 1 through 2M, and so on). Φ is an MN by MN matrix. The transformation matrices are said to be separable if, for each (m, n),

φ_mn(j, k) = p_m(j) q_n(k)    (2.112)
FIGURE 2.20. Effects of reducing quantization levels. (a) b = 6. (b) b = 4.
FIGURE 2.20. Continued. (c) b = 2. (d) b = 1.
A separable two-dimensional transform can be computed as a sequence of one-dimensional transforms. First, the rows of f(j, k) are transformed to

F(m, k) = Σ_{j=0}^{M−1} f(j, k) p_m(j)    (2.113)

followed by the transformation of the columns of F(m, k) to

F(m, n) = Σ_{k=0}^{N−1} F(m, k) q_n(k)    (2.114)

With the M by M matrix P, where P(m, j) = p_m(j), and the N by N matrix Q, where Q(n, k) = q_n(k), equation (2.110a) can be written as

[F] = P^T [f] Q    (2.115a)

Because P and Q are unitary matrices, the inverse transform is given by

[f] = P* [F] (Q*)^T    (2.115b)
where P* denotes the complex conjugate matrix of P. Matrix F may be considered as the expansion of an image [f] into a generalized spectrum. Each component of the expansion in the transform domain represents the contribution of that orthogonal matrix to the original image. In this context the concept of frequency may be generalized to orthogonal functions other than sine and cosine waveforms [30]. Separable unitary transforms useful for image processing are the Fourier, cosine, and Hadamard transforms. Fast computational algorithms exist for these transforms. The Karhunen-Loève transform is a nonseparable transform with important image-processing applications.

2.6.1.1 Discrete Fourier Transform

For the discrete Fourier transform (DFT), the elements of the unitary
transform matrices are given by

p_m(j) = (1/√M) exp(−2πi jm/M),  j, m = 0, 1, ..., M−1
q_n(k) = (1/√N) exp(−2πi kn/N),  k, n = 0, 1, ..., N−1    (2.116)
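Equations (2.115a) and (2.115b) can be verified numerically with DFT matrices. The sketch below assumes the unitary 1/√M normalization, so the comparison against a built-in FFT divides by √(MN):

```python
import numpy as np

M = 8
idx = np.arange(M)
# unitary DFT matrix, eq. (2.116); it is symmetric, so P^T = P
P = np.exp(-2j * np.pi * np.outer(idx, idx) / M) / np.sqrt(M)
Q = P                                    # square image, N = M

f = np.random.default_rng(3).normal(size=(M, M))
F = P.T @ f @ Q                          # eq. (2.115a): [F] = P^T [f] Q
f_back = np.conj(P) @ F @ np.conj(Q).T   # eq. (2.115b): [f] = P* [F] (Q*)^T
```

The two matrix products on each side are exactly the row-then-column one-dimensional transforms of equations (2.113) and (2.114).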
The properties of the continuous Fourier transform listed in section 2.2.3.1 are also valid for the discrete transform. Let [f] represent an M by N matrix of numbers. The DFT, F, of f is then defined by

F(m, n) = (1/MN) Σ_{j=0}^{M−1} Σ_{k=0}^{N−1} f(j, k) exp[−2πi(jm/M + kn/N)],  m = 0, 1, ..., M−1; n = 0, 1, ..., N−1    (2.117a)

The inverse transform is given by

f(j, k) = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} F(m, n) exp[2πi(jm/M + kn/N)],  j = 0, 1, ..., M−1; k = 0, 1, ..., N−1    (2.117b)
The periodicity properties of the exponential factor imply that

F(m, −n) = F(m, N−n)
F(−m, n) = F(M−m, n)
F(−m, −n) = F(M−m, N−n)
f(j, −k) = f(j, N−k)
f(−j, k) = f(M−j, k)
f(−j, −k) = f(M−j, N−k)    (2.118)

Therefore, the extensions of f(j, k) and F(m, n) beyond the original domain, as given by 0 ≤ (j and m) ≤ M−1 and 0 ≤ (k and n) ≤ N−1, are periodic repetitions of the matrices. This periodicity has important consequences for the computation of the convolution of two M by N matrices [f] and [h] by multiplying their discrete Fourier transforms F and H. This computation is of great practical value in digital image processing, because the DFT can be efficiently computed by the fast Fourier transform (FFT) algorithm [31, 32, 33].

The FFT algorithm assumes that all the data points in the array to be transformed are kept in main storage simultaneously. The size of practical image arrays, however, requires secondary storage, such as magnetic disk or tape. A sequential row-access structure is imposed on the sampled image matrices by the raster-scan operation of scanning instruments and image digitizers. Each access retrieves one row of the image matrix for processing. This structure makes operations on columns difficult. Because the DFT is separable, the rows of the image matrix are transformed and stored as intermediate results in the first step. In the second step, the columns of the intermediate matrix F(m, k), given in equation (2.113), have to be transformed. This procedure
would result in excessive input and output time, because all rows would have to be read for each column. One way to avoid such an expenditure of effort is to transpose the intermediate matrix F(m, k). An efficient
method for matrix transposition when only a part can be kept in main storage and operated on at the same time is described in [34]. A further improvement is achieved by processing several rows at a time and dividing the transposition algorithm into two parts executed when storing and reading blocks of rows of the intermediate transform matrix [35].

In general, the DFT is used to approximate the continuous Fourier transform. It is very important to understand the relationship between the DFT and the continuous transform. The approximation of the continuous transform by the DFT is effected by sampling and truncation. Consider the one-dimensional continuous function f(x) (e.g., a line of an infinite picture f(x, y)) and its Fourier transform in figure 2.21a. It is assumed that f(x) is bandlimited by U. For digital processing, f(x) has to be digitized, which is accomplished by multiplication of f(x) with the sampling function s(x) = Σ_j δ(x − jΔx). The sampling interval is Δx (see fig. 2.21b). The sampled function f(jΔx) and its Fourier transform are shown in figure 2.21c. This modification of the continuous transform caused by sampling is called aliasing [25, 31]. If Δx < 1/2U, there is no distortion of the transform due to aliasing.
Digital processing also requires truncation to a finite number of points. This operation may be represented by multiplication with a rectangular window function w(x), shown in figure 2.21d. Truncation causes convolution of the transform F(u)*S(u) with W(u), where W(u)(sin u)/u, which results in additional frequency components in the transform. This effect is called leakage. It is caused by the side lobes of (sin u)/u (fig. 2.21e). The transform is also digitized with a sampling interval au=(M...Xx) ', resulting in F(m,.Xu), which corresponds to a periodic spatial function /(j,.Xx) (fig. 2.21f). The discrete transform F(rn.Xu) differs from the continuous transform F(u) by the errors introduced in sampling (aliasing) and spatial truncation (leakage). Aliasing can be reduced by decreasing the sampling interval _x (if f(x) is not bandlimited). Leakage can be reduced by using a truncation function with smaller side lobes in the frequency domain than the rectangular window. A number of different data windows tion process. (See sec. 2.2.5.) have been proposed for this apodiza
The DFT computes a transform F(mΔu), m = 0, 1, ..., M−1, in which the negative half of F(u) is produced to the right of the positive half. This result may be confusing, because analysts are accustomed to viewing the continuous transform F(u) from −U to U. For the two-dimensional DFT, the locations of the spatial frequency components (u, v) are shown in figure 2.22. A normal display may be obtained by rearranging the quadrants of the transform matrix as shown in section 2.6.3.
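The quadrant rearrangement is a simple index swap; a minimal numpy sketch (on an arbitrary small test image) is:

```python
import numpy as np

# The 2-D DFT places F(0, 0) in the upper left corner, with the negative
# frequencies wrapped into the right and lower halves.  Swapping the
# quadrants diagonally moves zero frequency to the center for display.
f = np.random.rand(8, 8)
F = np.fft.fft2(f)
Fc = np.fft.fftshift(F)   # rearranged quadrants

# F(0, 0) is the sum of all pixels; after the swap it sits at (M/2, N/2).
center = Fc[4, 4]
```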
[FIGURE 2.21. Relationship between continuous and discrete Fourier transforms (after Brigham [31]). (a) Continuous function. (b) Sampling function. (c) Sampled transform pair. (d) Window function. (e) Sampled and truncated signal. (f) Discrete transform pair.]
[FIGURE 2.22. Location of spatial frequency components in the two-dimensional discrete Fourier transform, with the Nyquist frequencies u = 1/(2Δx), v = 1/(2Δy).]
2.6.1.2 Discrete Cosine Transform

The one-dimensional discrete cosine transform (DCT) of a sequence f(j), j = 0, 1, ..., M−1 is defined by [36]:

F(0) = (√2/M) Σ_{j=0}^{M−1} f(j)

F(m) = (2/M) Σ_{j=0}^{M−1} f(j) cos[(2j+1)πm/(2M)],   m = 1, 2, ..., M−1   (2.119a)

The inverse DCT is defined as

f(j) = (1/√2) F(0) + Σ_{m=1}^{M−1} F(m) cos[(2j+1)πm/(2M)],   j = 0, 1, ..., M−1   (2.119b)

The two-dimensional DCT of an image f(j, k) is defined as

F(m, n) = (4/MN) Σ_{j=0}^{M−1} Σ_{k=0}^{N−1} f(j, k) cos[(2j+1)πm/(2M)] cos[(2k+1)πn/(2N)],
   m = 1, 2, ..., M−1;  n = 1, 2, ..., N−1   (2.120a)

The two-dimensional inverse DCT is defined as

f(j, k) = F(0, 0) + Σ_{m=1}^{M−1} Σ_{n=1}^{N−1} F(m, n) cos[(2j+1)πm/(2M)] cos[(2k+1)πn/(2N)],
   j = 0, 1, ..., M−1;  k = 0, 1, ..., N−1   (2.120b)
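The one-dimensional transform pair can be checked numerically. The following direct (non-FFT) sketch assumes the normalization of equations (2.119a) and (2.119b) as given above:

```python
import numpy as np

def dct(f):
    # One-dimensional DCT, equation (2.119a).
    M = len(f)
    j = np.arange(M)
    F = np.empty(M)
    F[0] = np.sqrt(2.0) / M * f.sum()
    for m in range(1, M):
        F[m] = 2.0 / M * np.sum(f * np.cos((2 * j + 1) * np.pi * m / (2 * M)))
    return F

def idct(F):
    # Inverse DCT, equation (2.119b).
    M = len(F)
    j = np.arange(M)
    f = np.full(M, F[0] / np.sqrt(2.0))
    for m in range(1, M):
        f += F[m] * np.cos((2 * j + 1) * np.pi * m / (2 * M))
    return f

f = np.random.rand(16)
f_rec = idct(dct(f))   # round trip reproduces the input
```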
The two-dimensional DCT is separable and can, therefore, be obtained by successive one-dimensional transformations. In addition, the DCT can be computed with the FFT algorithm [36]. The DCT is primarily used for image compression. (See ch. 9.)

2.6.1.3 Hadamard Transform
If the transform matrices P and Q in equations (2.115a) and (2.115b) are Hadamard matrices, then [F] is called the Hadamard transform of [f]. A Hadamard matrix is a symmetric matrix with elements +1 and −1 and mutually orthogonal rows and columns. For Hadamard matrices of order M = 2^p, the two-dimensional Hadamard transform of an M by M image matrix is defined as

F(m, n) = (1/M) Σ_{j=0}^{M−1} Σ_{k=0}^{M−1} f(j, k) (−1)^r(j, k, m, n)   (2.121)

where

r(j, k, m, n) = Σ_{i=0}^{p−1} (m_i j_i + n_i k_i)

The terms m_i, n_i, j_i, and k_i are the bits of the binary representations of m, n, j, and k, respectively [30]. In the context of image processing, the Hadamard transform is primarily used for image compression. (See ch. 9.)
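For Hadamard matrices in the natural (Sylvester) ordering, the element in row m and column j is (−1) raised to the bitwise inner product of m and j, so equation (2.121) reduces to two matrix multiplications. A minimal sketch:

```python
import numpy as np

def hadamard(M):
    # Sylvester construction of a symmetric M x M Hadamard matrix, M = 2**p.
    H = np.array([[1]])
    while H.shape[0] < M:
        H = np.block([[H, H], [H, -H]])
    return H

M = 8
H = hadamard(M)
f = np.random.rand(M, M)

# Since H is symmetric and H H = M I, the forward transform and its
# inverse have the same form, each scaled by 1/M.
F = H @ f @ H / M
f_rec = H @ F @ H / M
```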
2.6.1.4 Discrete Karhunen-Loève Transform

The Karhunen-Loève transform is an orthogonal expansion dependent on the statistical image characteristics. Section 2.2 introduced the concept of representing an image f as a sample of a two-dimensional random field or stochastic process f. The mean vector and the correlation matrix that statistically describe the images can be computed in the spectral/temporal dimension or in the spatial dimension. In the first case, the elements of a multi-image are given by P-dimensional vectors f whose elements are the pixel values of the spectral/temporal components for a given spatial location (i, j); P is the number of spectral/temporal components. In the second case, one image matrix f(i, j), i = 1, ..., M, j = 1, ..., N of the random field is represented by a vector f of P = MN dimensions, which is obtained by lexicographically arranging the columns of matrix [f] into vector f. The statistical characteristics of f (see sec. 2.2.2 for continuous functions) are given by the mean vector

m = E{f}   (2.122)

and the covariance matrix

C = R − m m^T   (2.123)

where R is the correlation matrix

R = E{f f^T}   (2.124)

Let f be a vector in a P-dimensional vector space. Let {t_n} be a complete set of orthonormal vectors in the same space. Then, an arbitrary vector f may be expanded as

f = Σ_{n=1}^{P} F_n t_n   (2.125)

where the coefficients of expansion F_n are given by

F_n = t_n^T f,   n = 1, ..., P   (2.126)
For some applications (e.g., feature selection for classification or image compression), f must be approximated in a mean-square-error sense with as few coefficients F_n, n = 1, ..., N < P, as possible. (Here N is different from the number of columns N.) Therefore, the best basis vectors t_n, n = 1, ..., N are obtained by minimizing the error e, where

e = E{ || f − Σ_{n=1}^{N} F_n t_n ||² }   (2.127)

and || f ||² = f^T f is the Euclidean norm of f. With equation (2.125) the error becomes

e = E{ || Σ_{n=N+1}^{P} F_n t_n ||² }   (2.128)

Using equation (2.126) and expanding the norm with the orthonormality property yield

e = Σ_{n=N+1}^{P} t_n^T E{f f^T} t_n

or

e = Σ_{n=N+1}^{P} t_n^T R t_n   (2.129)
Because R is a symmetric, positive definite matrix, equation (2.129) can be minimized with the Lagrange method, which yields

R t_n = λ_n t_n   (2.130)
Thus, the optimal vectors {t_n} are the eigenvectors of R, and the values {λ_n} are the corresponding eigenvalues. The matrix R is the correlation matrix given in equation (2.124). It has exactly P different positive eigenvalues and P linearly independent orthonormal eigenvectors t_n. The minimum error becomes

e = Σ_{n=N+1}^{P} t_n^T λ_n t_n = Σ_{n=N+1}^{P} λ_n   (2.131)

where the values λ_n, n = N+1, ..., P, are the eigenvalues associated with the eigenvectors not included in the expansion equation (2.125). Thus, the approximation error will be minimized if the eigenvectors t_n corresponding to the N largest eigenvalues are chosen for the representation of f. The eigenvectors, ordered according to the decreasing magnitude of their corresponding eigenvalues (λ_1 > λ_2 > ... > λ_P), can be combined into the P by P transform matrix T, given by
T = [t_1  t_2  ...  t_P]^T   (2.132)

(i.e., the rows of T are the ordered eigenvectors).
A pixel vector of the transformed image (the vector of the coefficients of expansion) is then given by

F = T f   (2.133)
A reduced transform matrix T_N may be defined by combining the N < P ordered eigenvectors belonging to the N largest eigenvalues into the N by P matrix

T_N = [t_1  t_2  ...  t_N]^T   (2.134)

An N-dimensional reduced pixel vector (feature vector) is then computed by

F_N = T_N f   (2.135)
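The eigenvector computation behind equations (2.130) through (2.135) can be sketched with a synthetic sample of correlated pixel vectors; the sample covariance matrix is used here in place of R, a common practical choice for the transform to principal components:

```python
import numpy as np

rng = np.random.default_rng(0)
P, n_samples = 4, 5000
A = rng.normal(size=(P, P))                 # mixing matrix -> correlated bands
X = A @ rng.normal(size=(P, n_samples))     # sample of P-dimensional pixel vectors

m = X.mean(axis=1, keepdims=True)           # mean vector, eq. (2.122)
C = np.cov(X)                               # sample covariance matrix

# Eigenvectors ordered by decreasing eigenvalue form the rows of T (eq. 2.132).
lam, vec = np.linalg.eigh(C)
order = np.argsort(lam)[::-1]
lam, T = lam[order], vec[:, order].T

F = T @ (X - m)                             # principal components, eq. (2.133)
```

Keeping only the first N rows of T gives the reduced transform T_N of equation (2.134). The transformed components come out uncorrelated, with variances equal to the ordered eigenvalues.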
Because T is computed from a sample covariance matrix, the transformation will be different for each application. Equations (2.130) and (2.133) define the discrete Karhunen-Loève transform or transform to principal components [10]. The correlation matrix of the transformed multi-images (the principal component images) is

R_F = E{F F^T} = T R T^T = Λ   (2.136)

where Λ = diag(λ_n) is a diagonal matrix consisting of the ordered eigenvalues λ_n of R. Thus, the principal components are uncorrelated, and each eigenvalue λ_n is the variance of the nth principal component:

λ_n = (σ_n^F)²   (2.137)

Because T is an orthogonal transform, the total data variance is preserved; i.e.,

Σ_{n=1}^{P} (σ_n^F)² = Σ_{n=1}^{P} (σ_n^f)²

where (σ_n^f)² is the variance of the nth original image component. It is evident that the ordering of the eigenvectors in equation (2.132) ensures that each principal component has variance less than the previous component. The Karhunen-Loève transform is usually applied in the spectral/temporal dimension of a multi-image for feature selection, enhancement, and image compression. The Fourier, cosine, and Hadamard transforms are expansions independent of the images and are, therefore, not optimal in the sense of yielding an approximation with a minimum number of uncorrelated coefficients. The Fourier transform yields an optimal expansion only if the images have statistical characteristics such that (1) the diagonal elements of their covariance matrix are all equal, and (2) the correlation between picture elements i and j is only a function of j − i. These characteristics are satisfied by a Markov covariance matrix, which in some cases is representative of image properties [7]. The Fourier, cosine, and Hadamard transforms are attractive because of the existence of fast algorithms for their implementation [30]. They are used for image compression in the spatial dimension. The principal application of the Fourier transform is in filtering for image restoration and enhancement.

2.6.2 Discrete Convolution
Linear space-invariant image formation and linear filtering are convolution operations. A discrete representation of the convolution integral, equation (2.80), is obtained by approximate integration, where the continuous functions are described by samples spaced over a uniform grid Δx, Δy.
g(jΔx, kΔy) ≈ Δx Δy Σ_m Σ_n w_mn f(mΔx, nΔy) h([j − m + K]Δx, [k − n + L]Δy) + n(jΔx, kΔy)   (2.138)

where w_mn is one of the integration coefficients, M = x_m/Δx, N = y_m/Δy, K = x_h/Δx, and L = y_h/Δy. For Δx = Δy = 1 and w_mn = 1, equation (2.138) can be written as

g(j, k) = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(m, n) h(j − m, k − n) + n(j, k),
   j = K, K+1, ..., M−1;  k = L, L+1, ..., N−1   (2.139)

In this discrete representation, [f], [h], [g], and [n] are matrices, formed by sampling the corresponding continuous functions, of the following sizes: [f] is of size M by N, [h] is of size K by L, and [g] and [n] are of size M' by N' (M' = M − K; N' = N − L). Equation (2.139) is a linear system of equations that can be written in vector form as

g = B f + n   (2.140)
where g and n are vectors with M'N' components each, and f is a vector with MN components, created by lexicographically ordering the column vectors of matrices [g], [n], and [f], respectively. The matrix B has dimensions M'N' by MN and can be partitioned as
B = ( B_1,1   B_1,2   ...     0       ...     0
       0      B_2,2   B_2,3   ...             0
       ...            ...     ...
       0      ...     0       ...         B_M',M )   (2.141)

The structure of the banded matrix B is determined by B_j,k = B_j+1,k+1. Each submatrix B_j,k is a circulant matrix. In matrices with this property, each row is equal to the row preceding it shifted one element to the right, with the last element wrapped around to the first place. Circulant matrices have the special property that they are diagonalized by the DFT [37].

The computation of equation (2.139) requires MNKL operations. The convolution theorem, equation (2.46a), permits computation of equation (2.139) in the spatial frequency domain with the FFT algorithm [32, 33]. Whenever the values of f(j, k) and h(j, k) are required for indices outside the ranges 0 ≤ j ≤ M−1, 0 ≤ k ≤ N−1 and 0 ≤ j ≤ K−1, 0 ≤ k ≤ L−1, respectively, they must be obtained by the rules given in equation (2.118).
With this condition, equation (2.139) becomes a periodic or circular convolution. To avoid distortion of the convolution due to wraparound, the images are extended with zeroes. Extended matrices [f_e], [h_e], [g_e], and [n_e] of common size P by Q are defined according to

f_e(j, k) = f(j, k)  for 0 ≤ j ≤ M−1, 0 ≤ k ≤ N−1
          = 0        for M ≤ j ≤ P−1, N ≤ k ≤ Q−1

h_e(j, k) = h(j, k)  for 0 ≤ j ≤ K−1, 0 ≤ k ≤ L−1
          = 0        for K ≤ j ≤ P−1, L ≤ k ≤ Q−1
                                                        (2.142)
g_e(j, k) = g(j, k)  for 0 ≤ j ≤ M'−1, 0 ≤ k ≤ N'−1
          = 0        for M' ≤ j ≤ P−1, N' ≤ k ≤ Q−1

n_e(j, k) = n(j, k)  for 0 ≤ j ≤ M'−1, 0 ≤ k ≤ N'−1
          = 0        for M' ≤ j ≤ P−1, N' ≤ k ≤ Q−1

where P = 2^p ≥ M + K − 1 and Q = 2^q ≥ N + L − 1 (p, q integer). If these inequalities are satisfied, h(j − m, k − n) will never wrap around and engage a nonzero portion of f(m, n), and therefore the circular convolution will be identical to the desired linear convolution. The number of computational operations required to obtain the convolution with the FFT is on the order of PQ (2 log P + 2 log Q + 1). If multiplications by zero are avoided, the convolution in the transform domain can be more efficient than direct computation. Figure 2.23 shows a comparison of computer times for direct and Fourier-transform convolutions of a 256 by 256 image for different sizes of the matrix [h] on an IBM 360/75 computer. The figure suggests that indirect computation should be used for filter sizes greater than K = L = 13. It is important to note that the matrices [f_e] and [h_e] must be of the same size, P by Q. If the matrix [h] is much smaller than [f] (K << M, L << N), block filtering by convolving segments of f with h can be used [38].
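The equivalence of the zero-extended circular convolution and the linear convolution can be checked with a short sketch (sizes here are arbitrary, and P, Q are not restricted to powers of two, a restriction numpy's FFT does not require):

```python
import numpy as np

def fft_convolve(f, h):
    # Zero-extend both arrays to P x Q >= (M+K-1) x (N+L-1) so the
    # circular convolution computed by the DFT equals the linear one.
    M, N = f.shape
    K, L = h.shape
    P, Q = M + K - 1, N + L - 1
    G = np.fft.fft2(f, (P, Q)) * np.fft.fft2(h, (P, Q))
    return np.real(np.fft.ifft2(G))

def direct_convolve(f, h):
    # Straightforward evaluation of the convolution sum for comparison.
    M, N = f.shape
    K, L = h.shape
    g = np.zeros((M + K - 1, N + L - 1))
    for j in range(M):
        for k in range(N):
            g[j:j + K, k:k + L] += f[j, k] * h
    return g

f = np.random.rand(16, 16)
h = np.random.rand(3, 3)
g_fft = fft_convolve(f, h)
g_dir = direct_convolve(f, h)
```

The two results agree to machine precision, illustrating why the extension of equation (2.142) makes the indirect computation valid.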
The discrete crosscorrelation image g(], k) is defined by
M
X
R(r,
s):
E
j 1
_
1, I
f(]' k) g(j+r,
k+s)
(2.143) given to be
R(r, s) may be computed directly or by the DFT with the property by equation (2.47). Because both f(], k) and g(i, k) are assumed
periodic twodimensional sequences for the indirect computation, they may be extended by zeroes as in equation (2.142) to avoid wraparound and distortion of the correlation function. In many image processing applications f is smaller than g(M<P, N<Q), and P, Q may be chosen
[FIGURE 2.23. Computation time for discrete convolution (IBM 360/75): CPU time versus filter size for direct and indirect convolution.]
such that P = 2^p, Q = 2^q (p, q integer). In this case only f will be extended by zeroes to a P by Q matrix:

f_e(j, k) = f(j, k)  for 0 ≤ j ≤ M−1, 0 ≤ k ≤ N−1
          = 0        for M ≤ j ≤ P−1, N ≤ k ≤ Q−1   (2.144)

The DFTs F_e(m, n) and G(m, n) of f_e(j, k) and g(j, k) are computed with equation (2.117a), and the crosscorrelation function is determined by the inverse transform

R(r, s) = F⁻¹{F_e*(m, n) G(m, n)}   (2.145)

where F⁻¹ is computed by equation (2.117b). There is some wraparound in R(r, s), but a valid correlation function of size (P − M + 1) by (Q − N + 1) is contained in R(r, s). The point r = 0, s = 0 is the correlation value for no shift, and the shifts in both positive directions are given
for r = 1, s = 1 up to (P − M)/2, (Q − N)/2. The point (P + M)/2, (Q + N)/2 represents the maximum negative shift in both directions, and R(P − 1, Q − 1) is the correlation value for a negative one-element shift in both directions. (See fig. 2.24.) The remaining values, for r = [(P − M)/2] + 1 to [(P + M)/2] − 1 and s = [(Q − N)/2] + 1 to [(Q + N)/2] − 1, are invalid. If R(r, s) is plotted as a correlation surface, zero displacement appears in the upper left corner. A more conventional display has zero shift at the center, with lines and samples starting at 1. Thus, the valid quadrants of R have to be extracted and rearranged to yield a correlation function with zero shift in the center. The transformation

R(r', s') = R(r, s)

where

r' = r + (P − M)/2 + 1   for r = 0, 1, ..., (P − M)/2
   = r − (P + M)/2 + 1   for r = (P + M)/2, ..., P − 1
                                                        (2.146)
s' = s + (Q − N)/2 + 1   for s = 0, 1, ..., (Q − N)/2
   = s − (Q + N)/2 + 1   for s = (Q + N)/2, ..., Q − 1

yields a (P − M + 1) by (Q − N + 1) correlation function with zero shift (no displacement of f and g) at [(P − M)/2] + 1, [(Q − N)/2] + 1.
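The indirect correlation of equations (2.143) through (2.145) is commonly used to locate a small template within a larger image. In the sketch below the template is cut from the image itself at a known offset; subtracting the template mean is a practical refinement not required by the equations, added here to sharpen the peak:

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.random((64, 64))
tpl = g[20:36, 30:46]          # 16 x 16 template at known offset (20, 30)
f = tpl - tpl.mean()

F = np.fft.fft2(f, g.shape)    # template zero-extended as in eq. (2.144)
G = np.fft.fft2(g)
R = np.real(np.fft.ifft2(np.conj(F) * G))   # eq. (2.145)

# The correlation peak occurs at the displacement of the template.
r, s = np.unravel_index(np.argmax(R), R.shape)
```

Because g was not zero-extended here, R wraps around as described above; the peak is nevertheless valid because the true shift lies in the positive-shift quadrant.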
[FIGURE 2.24. Two-dimensional crosscorrelation computed by discrete Fourier transform. (a) Computed correlation matrix. (b) Rearranged, valid correlation matrix.]

2.7 Reconstruction and Display

Image display is concerned with the regeneration of continuous pictures from sampled images. Display systems use CRTs or write directly on film.
A light spot of finite size is focused and projected by optics onto the film or CRT surface. The spot intensity is modulated, and the spot sweeps across the display plane in a raster-scan fashion to create a continuous picture. A continuous picture may be obtained from the samples by spatial interpolation or filtering. Let h_d(x, y) denote the impulse response of the interpolation filter, and H_d(u, v), its transfer function. The reconstructed continuous picture g_d(x, y) is obtained by a convolution of the digital image g_s(j, k) with the reconstruction filter or display spot impulse response h_d:
g_d(x, y) = Σ_j Σ_k g_s(jΔ̄x, kΔ̄y) h_d(x − jΔ̄x, y − kΔ̄y)   (2.147)

where Δ̄x = aΔx and Δ̄y = bΔy are the display spot spacings. The frequency spectrum of the reconstructed and displayed image is (see eqs. (2.108) and (2.42) and ref. [26]):

G_d(u, v) = (1/ab) G_s(u/a, v/b) H_d(u, v)
          = H_d(u, v) (1/(ab Δx Δy)) Σ_m Σ_n G(u/a − m/Δx, v/b − n/Δy)   (2.148)
the scaled spectrum G,, where equation (2.42) is used. Equation (2.148) shows the aliasing and the degradations caused by sampling and display. It is evident that the spectrum of the reconstructed image could be made equal to the spectrum of the original image g, if no aliasing were present, if sampling would not degrade the spectrum, and it the reconstruction filter Ha would select the principal Fourier transform G(u, v) with mn=0, and reject all other replications in the frequency domain. The first condition is met by a bandlimited image if the sampling intervals are chosen according to equation (2.104). The second condition is met if the sampling impulse is an ideal delta function. The third condition is met if the reconstruction filter transfer function is Ha(u,v)=S! The impulse response
H_d(u, v) = 1  for |u| < U and |v| < V
          = 0  elsewhere            (ideal lowpass filter)   (2.149)

The impulse response of this reconstruction filter is
h_d(x, y) = UV [sin(2πUx)/(2πUx)] [sin(2πVy)/(2πVy)]   (2.150)
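A one-dimensional sketch of this ideal reconstruction: samples of a bandlimited signal taken above the Nyquist rate are interpolated with the sinc kernel. Because only a finite number of samples is available, the result is approximate, so the error is evaluated only in the central region:

```python
import numpy as np

dx = 1.0                                  # sampling interval
k = np.arange(-128, 129)                  # sample positions k * dx
f0 = 0.1                                  # signal frequency, well below 1/(2 dx)
samples = np.sin(2 * np.pi * f0 * k * dx)

# np.sinc(t) = sin(pi t) / (pi t), the ideal low-pass interpolation kernel.
x = np.linspace(-10.0, 10.0, 201)         # off-grid evaluation points
recon = np.array([np.sum(samples * np.sinc((xi - k * dx) / dx)) for xi in x])

err = np.max(np.abs(recon - np.sin(2 * np.pi * f0 * x)))
```

The small residual error comes entirely from truncating the infinite sinc sum, mirroring the point made below that practical display spots can only approximate the ideal (sin x)/x response.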
Thus, the conditions for exact image reconstruction are that the original image is bandlimited, that it is spatially sampled at a rate twice its highest spatial frequency, that the sampling impulse is a delta function, and that the reconstruction filter is designed to pass the frequency spectrum at m = n = 0 without distortion and reject all other spectra, for which m, n ≠ 0. Practically, images are not bandlimited, because they contain edges and noise, which cause high spatial frequency components in the transform. However, the assumption is a reasonable approximation, because most of the image energy is contained in the low-frequency region of the spectrum. The aliasing effects can be reduced by the filtering inherent in the sampling process. In practical systems, the sampling impulse is never a Dirac delta function. Consequently, the spectrum of the sampled image is degraded by the transfer function of the sampling spot. The reconstruction function of the display system cannot be a true (sin x)/x function, because of finite spot size and positive light. Therefore, there is always aliasing, because the display spot does not completely attenuate the replications of the sampled spectrum of equation (2.108). The aliasing effects in display systems consist of Moiré patterns and edge effects. These effects, however, are negligible if 90 to 95 percent of the image energy lies in a region of frequencies below the Nyquist frequency. This criterion is satisfied by most remote sensing images. Moiré patterns, for example, are usually only visible if there are periodic structures with frequencies near the Nyquist limit in the image.

The quality of the displayed image is also influenced by the transfer characteristics of the display system. Pixel values in digital images represent a particular intensity or optical density. The aim is to generate a display image g_d with the same measurable intensity or density as represented by the corresponding digital image g. Therefore, the generally nonlinear transfer characteristic g_d = d(g) of actual display systems must be measured. Before displaying any image, it is first transformed by the inverse characteristic d⁻¹ to compensate for the effect of the display system [39]. The inverse transformation is a point operation; it can be implemented in a lookup table in the display system or in an image processing function.
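Such an inverse lookup table can be sketched as follows; the power law d(g) = g^2.2 is a hypothetical stand-in for measured display data:

```python
import numpy as np

levels = np.arange(256) / 255.0
d = levels ** 2.2              # assumed (measured) display characteristic

# Inverse table: for each intended intensity, find the input code whose
# displayed intensity matches it (linear interpolation between samples).
lut = np.interp(levels, d, levels)

# Sending lut[g] through the display reproduces the intended intensity.
reproduced = lut ** 2.2
```

In a real system the array `d` would hold measured intensities rather than an analytic curve, but the interpolation step is the same point operation.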
2.8 Visual Perception

The designer and user of image processing algorithms and displays has to consider the characteristics of the human visual system. The application of mathematical-statistical techniques to image processing problems frequently requires a measure of image fidelity and quality. For example, in radiometric restoration and image compression a criterion is needed to measure the closeness of the reconstructed image to the original. For visual interpretation it is important to know how the human eye sees an image to determine the best enhancement and display parameters. The distortion due to the limited dynamic range of display devices can be avoided by consideration of the properties of the visual system [40, 41].

Image analysis is based on the assumption that the information of significance to the human observer may be characterized in terms of the properties of perceived objects or patterns [42]. These properties are determined by the observable psychophysical parameters contrast, contour, texture, shape, and color. Although machine analysis of images relies entirely on features measurable in a picture, visual interpretation consists of a sequence of visual processing stages, from the initial detection of objects to the final recognition. Objects or patterns may only be detected when they can be distinguished from their surroundings with respect to the psychophysical parameters. Most of the information about objects resides within their borders. Thus, objects may be detected by differing contrast, texture, or color. Recognition depends to a degree on information not resident in or derivable from an image, namely, information based on prior experience of the analyst. The properties of the human visual system determine the useful range and distribution of the psychophysical parameters. On the other hand, the objectives of the image analysis task determine which features in the image are to be represented by the parameters. The detection of objects is aided by image processing. Enhancement techniques are used to increase the contrast and to enhance contours, texture, and colors.
2.8.1 Contrast and Contour

Contrast is a local difference in luminance and may be defined as the ratio of the average gray value of an object to the average gray level of its background. The sensitivity of the human visual system depends logarithmically on light intensities that enter the eye. Thus, the greater the brightness, the greater the contrast between objects must be to detect any differences. This relationship is known as the Weber-Fechner law. The apparent brightness of objects depends strongly on the local background intensity. This phenomenon is called simultaneous contrast. In figure 2.25, the small squares have equal intensity, but because their backgrounds have different brightnesses, they appear to have widely differing intensities.

The ability of the visual system to detect sharp edges that define contours of objects is known as acuity. The eye possesses a lower sensitivity for slowly and rapidly varying patterns, but the resolution of midrange spatial frequencies is excellent. Thus, the visual system behaves like a bandpass filter in its ability to detect fine spatial detail. This characteristic is demonstrated in figure 2.26, where the spatial frequency of the sinusoidal pattern increases to the right while the contrast increases downward. The curve along which the pattern is just visible represents the modulation transfer function of the visual system.

[FIGURE 2.25. Simultaneous contrast.]

[FIGURE 2.26. Sinusoidally modulated pattern.]

The visual system enhances edges at abrupt changes in intensity. Each block in the gray scale in figure 2.25 has uniform intensity. However, each block appears to be darker near its lighter neighbor and lighter near its darker neighbor. This apparent overshoot in brightness is a consequence of the spatial frequency characteristic of the eye [43].

2.8.2 Color
Varying the wavelength of light that produces a visual stimulus changes the perceived color from violet (shortest visible wavelength), through blue, green, yellow, and orange, to red. Although the eye can simultaneously discriminate 20 to 30 gray levels, it has the ability to distinguish a larger number of colors [44]. Color can be described by the attributes hue, saturation, and brightness. Any color can be generated by the superposition of three primary colors (usually red, green, and blue) in an additive system or by the subtraction of three primaries (cyan, magenta, and yellow) from white in a subtractive system. Color pictures are produced from digital multi-images by selecting three components for modulation of the primary colors. The brightness values in these three component images are called tristimulus values. Color order systems based on the principle of equal visual perception of small color differences are of interest for assessing color characteristics for visual perception [45]. In the Munsell system, consisting of a cylindrical space (fig. 2.27), hue is represented by the polar angle; saturation, by the radius; and brightness, by the distance on the cylinder axis (achromatic axis). All gray shades lie on the achromatic axis, because black-and-white images have no saturation or hue. Manipulation of color images in this space (e.g., color filtering) is possible without uncontrolled influence on the relative color balance. (See sec. 4.4.)

[FIGURE 2.27. Color perception space: brightness along the achromatic (white-black) axis, saturation as radius, hue as polar angle.]

2.8.3 Texture
An object or pattern is not perceived by the visual system as an array of independent picture elements. Rather, objects are usually seen as spatially coherent regions on a background. The spatial structure of the subpatterns in these regions, characterized by their brightness, color, size, and shape, describes the perceived texture. The local subpattern properties give rise to such characteristics of the texture as the lightness, coarseness, and directionality. Texture may be defined in terms of regular repetitions of subpatterns [46], or in terms of averages of the properties of local image regions and their frequency of occurrence [47, 48].
REFERENCES
[1] Lowe, D. S.: Nonphotographic Optical Sensors, in Lintz, J.; and Simonett, D. S., eds.: Remote Sensing of Environment. Addison-Wesley, Reading, Mass., 1976, pp. 155-193.
[2] Stockham, T. S.: Image Processing in the Context of a Visual Model, Proc. IEEE, vol. 60, 1972, pp. 828-842.
[3] Fraser, R. S.; and Curran, R. J.: Effects of the Atmosphere on Remote Sensing, in Lintz, J.; and Simonett, D. S., eds.: Remote Sensing of Environment. Addison-Wesley, Reading, Mass., 1976, pp. 34-84.
[4] Papoulis, A.: The Fourier Integral and Its Applications. McGraw-Hill, New York, 1962.
[5] Lighthill, M. J.: Introduction to Fourier Analysis and Generalized Functions. University Press of Cambridge, Cambridge, England, 1960.
[6] Wong, E.: Stochastic Processes in Information and Dynamical Systems. McGraw-Hill, New York, 1971.
[7] Papoulis, A.: Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, 1965.
[8] Taylor, A. E.: Introduction to Functional Analysis. John Wiley & Sons, New York, 1958.
[9] Goodman, J. W.: Introduction to Fourier Optics. McGraw-Hill, New York, 1968.
[10] Watanabe, S.: Karhunen-Loève Expansion and Factor Analysis, Theoretical Remarks and Applications. Transactions of the Fourth Prague Conference on Information Theory, Prague, Czechoslovakia, 1965.
[11] Taylor, J. T.: Digital Filters for Non-Real-Time Image Data Processing. NASA Report CR-880, 1967.
[12] Selzer, R. H.: Improving Biomedical Image Quality with Computers. NASA/JPL TR 32-1336, Oct. 1968.
[13] Lanczos, C.: Discourse on Fourier Series. Hafner Pub. Co., New York, 1966.
[14] Brault, J. W.; and White, O. R.: The Analysis and Restoration of Astronomical Data via the Fast Fourier Transform, Astron. Astrophys., vol. 13, 1971, pp. 169-189.
[15] Huang, T. S.: Two-Dimensional Windows, IEEE Trans. Audio Electroacoust., vol. AU-20, Mar. 1972, pp. 88-89.
[16] Bracken, P. A.; Dalton, J. T.; Quann, J. J.; and Billingsley, J. B.: AOIPS, An Interactive Image Processing System. National Computer Conference Proceedings, American Federation of Information Processing Societies Press, 1978, pp. 159-171.
[17] Oppenheim, A. V.; Schafer, R. W.; and Stockham, T. G.: Nonlinear Filtering of Multiplied and Convolved Signals, Proc. IEEE, vol. 56, Aug. 1968, pp. 1264-1291.
[18] Andrews, H. C.; and Hunt, B. R.: Digital Image Restoration. Prentice-Hall, Englewood Cliffs, N.J., 1977.
[19] ERTS Data Users Handbook. NASA Doc. 712D4249, Washington, D.C., 1972.
[20] Mikhail, E. M.; and Baker, J. R.: Geometric Aspects in Digital Analysis of Multispectral Scanner Data. American Society of Photogrammetry, Washington, D.C., Mar. 1973.
[21] Kratky, V.: Cartographic Accuracy of ERTS, Photogramm. Eng., vol. 8, 1974, pp. 203-212.
[22] Hufnagel, R. E.; and Stanley, N. R.: Modulation Transfer Function Associated with Image Transmission through Turbulent Media, J. Opt. Soc. Am., vol. 54, 1964, pp. 52-61.
[23] Peterson, D. P.; and Middleton, D.: Sampling and Reconstruction of Wave-Number-Limited Functions in N-Dimensional Spaces, Inf. Control, vol. 5, 1962, pp. 279-323.
[24] Bracewell, R. N.: The Fourier Transform and Its Applications. McGraw-Hill, New York, 1965.
[25] Legault, R.: The Aliasing Problem in Two-Dimensional Sampled Imagery, in Biberman, L. M., ed.: Perception of Displayed Information. Plenum Press, New York, 1973.
[26] Hunt, B. R.; and Breedlove, J. R.: Scan and Display Considerations in Processing Images by Digital Computer, IEEE Trans. Comput., vol. C-24, 1975, pp. 848-853.
[27] Gaven, J. V.; Taritian, J.; and Harabedian, A.: The Informative Value of Sampled Images as a Function of the Number of Gray Levels Used in Encoding the Images, Photogr. Sci. Eng., vol. 14, 1970, pp. 16-20.
[28] Wood, R. C.: On Optimum Quantization, IEEE Trans. Inf. Theory, vol. IT-15, 1969, pp. 248-252.
[29] Huang, T. S.: PCM Picture Transmission, IEEE Spectrum, vol. 2, no. 12, 1965, pp. 57-60.
[30] Andrews, H. C.: Computer Techniques in Image Processing. Academic Press, New York, 1970.
[31] Brigham, E. O.: The Fast Fourier Transform. Prentice-Hall, Englewood Cliffs, N.J., 1974.
[32] Cooley, J. W.; and Tukey, J. W.: An Algorithm for the Machine Calculation of Complex Fourier Series, Math. Comput., vol. 19, 1965, pp. 297-301.
[33] Cooley, J. W.; Lewis, P. A. W.; and Welch, P. D.: Application of the Fast Fourier Transform to Computation of Fourier Integrals, Fourier Series, and Convolution Integrals, IEEE Trans. Audio Electroacoust., vol. AU-15, 1967, pp. 79-84.
[34] Eklundh, J. O.: A Fast Computer Method for Matrix Transposing, IEEE Trans. Comput., vol. C-21, 1972, pp. 801-803.
[35] Rindfleisch, T.: JPL Communication, 1971.
[36] Ahmed, N.; Natarajan, T.; and Rao, K. R.: Discrete Cosine Transform, IEEE Trans. Comput., vol. C-23, 1974, pp. 90-93.
[37] Hunt, B. R.: A Matrix Theory Proof of the Discrete Convolution Theorem, IEEE Trans. Audio Electroacoust., vol. AU-19, 1971, pp. 285-288.
[38] Oppenheim, A. V.; and Schafer, R. W.: Digital Signal Processing. Prentice-Hall, Englewood Cliffs, N.J., 1975.
[39] Hunt, B. R.: Digital Image Processing, Proc. IEEE, vol. 63, 1975, pp. 693-708.
[40] Stockham, T. G.: The Role of Psychophysics in the Mathematics of Image Science. Symposium on Image Science Mathematics, Monterey, Calif., Western Periodicals Comp., Nov. 1976, pp. 57-59.
[41] Jacobson, H.: The Information Capacity of the Human Eye, Science, vol. 113, Mar. 1951, pp. 292-293.
[42] Lipkin, B. S.: Psychopictorics and Pattern Recognition, SPIE J., vol. 8, 1970, pp. 126-138.
[43] Cornsweet, T. N.: Visual Perception. Academic Press, New York and London, 1970.
[44] Sheppard, J. J.; Stratton, R. H.; and Gazley, C. G.: Pseudo-Color as a Means of Image Enhancement, Am. J. Optom., vol. 46, 1969, pp. 735-754.
[45] Billmeyer, F. W.; and Saltzmann, M.: Principles of Color Technology. Interscience, New York, 1966.
[46] Hawkins, J. K.: Textural Properties for Pattern Recognition, in Lipkin, B. S.; and Rosenfeld, A., eds.: Picture Processing and Psychopictorics. Academic Press, New York and London, 1970.
[47] Rosenfeld, A.: Visual Texture Analysis: An Overview. TR-406, University of Maryland, College Park, Md., Aug. 1975.
[48] Haralick, R. M.; Shanmugam, K.; and Dinstein, I.: Textural Features for Image Classification, IEEE Trans. Systems, Man, and Cybernetics, vol. SMC-3, 1973, pp. 610-621.
3. Image Restoration

3.1 Introduction
Image restoration is concerned with the correction of distortions, degradations, and noise induced in the imaging process. The problem of image restoration is to determine a corrected image f̂(x', y') from the degraded recorded image g(x, y) that is as close as possible, both geometrically and radiometrically, to the original object radiant energy distribution f(x', y'). In section 2.3 a linearized model for the imaging process was developed that separated geometric and radiometric degradations:

g = T_G T_R f + n_c   (2.81)

The geometric distortions T_G are represented by the coordinate transformation, equation (2.76). The radiometric degradation T_R, equation (2.80), is given by the convolution of the object radiant energy with the system point spread function (PSF) h and by random noise n_r. The additive term n_c includes coherent noise, temperature effects on detector response, and camera-dependent errors. Formally, an estimate f̂ of the original scene f is obtained by

f̂ = T_R⁻¹ T_G⁻¹ (g − n_c)   (3.1)

The effects of the atmosphere and n_c are removed by preprocessing, described in the next section. Geometric corrections T_G⁻¹ and radiometric restoration T_R⁻¹ are discussed in sections 3.3 and 3.4, respectively.
3.2 Preprocessing
The purpose of preprocessing is to remove the atmospheric effects described by equation (2.77) and the degradations and noise represented by the term n_c in equation (2.81). Image analysis is susceptible to changes in illumination, in atmospheric conditions, in Sun angle, in viewing angle, and in surface reflectance, and to systematic instrument errors. Some of these effects are greater in aircraft multispectral data than in satellite images. Preprocessing should remove all systematic variations from the data so that the effects of signal changes are minimized. Preprocessing uses a priori information. For example, Landsat Return Beam Vidicon (RBV) images have to be corrected for nonuniform response of the vidicon surface (shading), and Landsat Multispectral Scanner (MSS) images require correction for variations of detector gain and offset. Illumination and atmospheric effects are also removed by preprocessing. After removal of the path radiance (see eq. (2.77)), multiplicative effects that are correlated between channels of multispectral images can be reduced by ratioing pairs of data channels (see sec. 4.5.2). For classification (see ch. 8), atmospheric differences between training areas and areas to be classified can cause changes in both magnitude and spectral distribution of signals and consequently misclassifications. Because preprocessing permits multispectral pixels or signatures from localized areas to be applied to other locations and conditions, preprocessing techniques are frequently called signature extension techniques.
3.2.1 Illumination Correction
The correction for different lighting conditions in remote sensing images due to variable solar elevation is important for comparison of the reflection of materials in different areas and for the generation of mosaics of images taken at different times. A first-order correction ignores topographical effects and the dependence of the backscattering term in equation (2.77) on the Sun angle. This correction adjusts the average brightness of frames and consists of a multiplication of each pixel with a constant derived from the Sun elevation angle. For Landsat MSS images the Sun angle effect is also dependent on the latitude [1].
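This first-order correction reduces to a single multiplicative constant per frame. A minimal sketch, assuming the image is held in a NumPy array and that frames are normalized to a common reference Sun elevation; the function name and the sine-law irradiance factor are illustrative assumptions, not the book's implementation:

```python
import numpy as np

def sun_angle_correct(image, sun_elevation_deg, ref_elevation_deg=90.0):
    """First-order Sun-angle correction: every pixel is multiplied by one
    constant derived from the Sun elevation angle, so frames taken under
    different illumination have comparable average brightness.  Topographic
    effects and the Sun-angle dependence of backscattering are ignored,
    as in the first-order correction described in the text."""
    # Illustrative sine-law factor: irradiance on a horizontal surface
    # scales with the sine of the solar elevation angle.
    factor = np.sin(np.radians(ref_elevation_deg)) / np.sin(np.radians(sun_elevation_deg))
    return image * factor
```

With `ref_elevation_deg=90.0`, a frame taken at a 30° solar elevation is brightened by a factor of 2; frames to be mosaicked would normally be normalized to one common reference elevation.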
3.2.2 Atmospheric Correction

Atmospheric effects in remotely sensed images are primarily due to atmospheric attenuation of radiation emanating from the surface, to Rayleigh and aerosol scattering of solar radiation between the sensor and the scene, and to sensor scan geometry. (See eq. (2.1).) Scattering is the most serious effect. Aerosol scattering produces a luminosity in the atmosphere often called haze. Scattering is wavelength dependent; the shorter wavelengths are most affected. In multispectral images the blue and green components often have visibly less scene contrast than the red and infrared. For Landsat MSS data, path radiance constitutes a major part of the received signal in band 4 and is not negligible in band 7 [1]. There is also a strong dependence of the radiance on haze content and some dependence on the scan angle, even though the MSS scans only about 6° from the nadir.

A crude first-order correction for the path radiance is based on the assumption that areas with zero reflectance can be located in a spectral component of a multi-image [2]. The reflectance of water is essentially zero in the near-infrared region of the spectrum, such as in band 7 of the Landsat MSS. Therefore, it can be assumed that the signal of clear open water represents the path radiance. The histograms of the other spectral components for the same area are plotted. The lowest pixel value in each component is used as an estimate for the path radiance and is subtracted from each pixel.

The sensor scan geometry and its relationship to the Sun position are important factors in remote sensing from aircraft and satellites. The longer observation path through the atmosphere for larger scan angles tends to reduce the received signal. An opposite effect is caused by scattering in the atmosphere, and this scattering adds extraneous path radiance to the signal. The relative balance between the two effects depends on the direction of the scan in relation to the Sun position [1]. Techniques for correcting remotely sensed data for Sun angle and atmospheric effects are described in [16].

As the state of the science of remote sensing advances, it becomes increasingly necessary to compare data obtained at different times and by different sensors. This comparison requires the determination of absolute target reflectances. The process of finding the correspondence between measurements and a quantity in a system of units is called calibration [7, 8].

3.2.3 Noise Removal

The removal of coherent instrument and transmission noise is important, so that subsequent image enhancement, image registration, and numerical image analysis can be performed on images with a high signal-to-noise ratio. The precise separation of any noise from the data must be based on quantifiable characteristics of the noise signal that distinguish it uniquely from the other image components. The essence of coherent noise removal is to isolate and remove the identifiable and characterizable noise components in a manner that does a minimum of damage to the actual image data. In most cases, the errors caused in the real signal by the removal process, although small, vary from point to point and can only be measured if detailed knowledge about the scene is available. The main types of coherent noise appearing in images are periodic, striping, and spike noises.
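Before turning to these noise types, the crude path-radiance correction of section 3.2.2 can be sketched; it reduces to a per-band minimum subtraction. The sketch assumes the multispectral image is a NumPy array of shape (bands, rows, cols) covering an area that contains some zero-reflectance pixels, such as clear open water; the function name is illustrative:

```python
import numpy as np

def subtract_path_radiance(multi_image):
    """Crude first-order atmospheric correction (sec. 3.2.2): the lowest
    pixel value in each spectral component is taken as the path-radiance
    estimate and subtracted from every pixel of that component.
    multi_image: array of shape (bands, rows, cols)."""
    bands = multi_image.shape[0]
    haze = multi_image.reshape(bands, -1).min(axis=1)  # per-band minimum
    return multi_image - haze[:, None, None]
```

In practice the minimum would be taken over the histogram of a known water area rather than the whole frame, so that isolated dead pixels do not bias the estimate.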
Periodic noise may be caused by the coupling of periodic signals related to the raster-scan and data-sampling mechanism into the imaging electronics of electro-optical scanners, or by power consumption variations and mechanical oscillations in electromechanical scanners or tape recorders. The recorded images contain periodic interference patterns, with varying amplitude, frequency, and phase, superimposed on the original scene. For typical spacecraft systems, these periodic noises often exhibit phase coherence over times that are long compared to the frame time of the camera. For this reason, the periodic noise appears as a two-dimensional pattern exhibiting periodicity along the scan lines and perpendicular to them. This periodicity is characterized in the two-dimensional Fourier domain, where the coherent noise structure appears in the two-dimensional amplitude spectrum as a series of spikes, representing energy concentrations at specific spatial frequency locations. The removal of periodic noise components can be achieved by bandpass or notch filtering. (See sec. 2.2.5.)

Figure 3.1a shows an image obtained by an electromechanical aircraft scanner. The image is degraded by a strong periodic interference pattern and by data dropouts visible as black spots. The magnitude of the Fourier transform is shown in figure 3.1b. The spikes parallel and under a slight angle to the vertical frequency axis represent periodic noise components. This effect can be demonstrated by observing a single periodic component. Let n₁(x, y) be a two-dimensional sinusoidal pattern with spatial frequencies u₀, v₀ and with amplitude A:

n₁(x, y) = A cos 2π(u₀x + v₀y)   (3.2)

The Fourier transform N₁(u, v) of n₁(x, y) is [9]

N₁(u, v) = (A/2) [δ(u − u₀, v − v₀) + δ(u + u₀, v + v₀)]   (3.3)
and it represents a pair of impulses at (u₀, v₀) and (−u₀, −v₀) in the spatial frequency plane. The line connecting the two impulses is perpendicular to the cosine wave. The Fourier spectrum in figure 3.1b indicates that the noise is composed of several periodic components. The noise components along two lines parallel to the vertical frequency axis may be due to scan-line-dependent random phase shifts in a horizontal periodic noise pattern with frequency u₀:

n₂(x, y) = B cos [2πu₀x + φ(y)]   (3.4)

If the phase φ(y) is assumed to be linearly dependent on the scan location, φ(y) = cy, the Fourier transform of n₂(x, y) is

N₂(u, v) = (B/2) [δ(u − u₀, v − c) + δ(u + u₀, v + c)]   (3.5)

Thus, with c varying, N₂(u, v) represents impulses located along two lines parallel to the vertical frequency axis. (See fig. 3.2.) The effect of removing the noise components with a notch filter is shown in figure 3.3, where part a shows the Fourier spectrum after removal of the noise frequencies, and part b is the reconstructed image. The periodic noise is not completely removed, and too much information of the image may be affected by this crude filtering procedure. A technique
FIGURE 3.1. Example of periodic and spike noise. (a) Image with periodic interference pattern and spike noise. (b) Magnitude of Fourier transform, showing periodic noise spikes.
FIGURE 3.2. Locations of phase-dependent noise components in frequency domain.
described by Seidman [10] first extracts the principal noise components from the Fourier transform, creates a noise pattern by the inverse Fourier transform, and subtracts a weighted portion of the noise pattern from the image. Let G(u, v) be the Fourier transform of the noisy image g(x, y). A two-dimensional filter H(u, v) is constructed to pass only the noise components, such that the noise spectrum is

N(u, v) = G(u, v) H(u, v)   (3.6)

The determination of H requires much judgment and is best performed interactively. Figure 3.4a shows the Fourier transform of figure 3.1a in the format generated by the discrete Fourier transform (DFT). (See
fig. 2.22.) Figure 3.4b shows the isolated noise components extracted by the two-dimensional bandpass filter H(u, v). The spatial representation of the noise is obtained by

n(x, y) = F⁻¹{N(u, v)}   (3.7)

To minimize the effect of components in the noise estimate n that are not present in the image, a weighted portion of n is subtracted from g to obtain an estimate f̂ of the corrected image:

f̂(x, y) = g(x, y) − w(x, y) n(x, y)   (3.8)

The weighting function w is determined such that the variance of f̂ is minimized over a neighborhood of every point (x, y). Figure 3.5 shows the corrected image f̂ obtained by equation (3.8) for a 15- by 15-point spatial neighborhood. Furthermore, the spike noise was removed by the technique described later in this section. The actual noise, shown in figure 3.6, is obtained by

n_c(x, y) = g(x, y) − f̂(x, y)   (3.9)
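Equations (3.6) to (3.8) can be sketched as follows, assuming NumPy and a user-supplied Boolean frequency mask playing the role of the interactively designed filter H(u, v). For brevity a single global least-squares weight replaces the local 15- by 15-point weighting of the text:

```python
import numpy as np

def subtract_periodic_noise(g, noise_mask):
    """Sketch of the noise-pattern subtraction of eqs. (3.6)-(3.8).
    noise_mask: Boolean array, same shape as the image, True at the
    spectral spikes to be passed by the bandpass filter H(u, v)."""
    G = np.fft.fft2(g)
    N = G * noise_mask                  # eq. (3.6): N = G * H
    n = np.real(np.fft.ifft2(N))        # eq. (3.7): n = F^-1{N}
    n0 = n - n.mean()
    # Global weight minimizing the residual variance; the text instead
    # determines w(x, y) locally over a 15 x 15 neighborhood.
    w = np.sum((g - g.mean()) * n0) / np.sum(n0 * n0)
    return g - w * n0                   # eq. (3.8)
```

With a mask that isolates the noise spikes exactly, the least-squares weight approaches 1 and the sinusoidal interference is removed almost completely; a too-wide mask passes scene energy, and the weight correspondingly shrinks.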
FIGURE 3.3. Application of frequency-domain filter. (a) Filtered Fourier transform. (b) Corrected image reconstructed from part a.
FIGURE 3.4. Noise filter design. (a) Fourier transform of noisy image. (b) Noise components extracted by filter.
FIGURE 3.5. Image after removal of periodic and spike noise.
Striping or streak noise is produced by a variety of mechanisms, such as sensor gain and offset variations, data outages, and tape recorder dropouts. This type of noise becomes apparent as horizontal streaks, especially after removal of periodic noise. The characteristic that distinguishes streak noise from the actual scene is its correlation along the scan line direction and the lack of correlation in the perpendicular direction. This distinction is not complete, because linear features are present in some natural scenes, and noise removal based on this characteristic may result in major damage to the true signal in regions that contain scene components resembling the noise. Striping, if not removed before image enhancement, is often also enhanced and may become unacceptable in ratio images. (See sec. 4.5.1.)

A technique to correct intensity values for streak noise is to compare the local average of lines adjacent and parallel to the streak with the average value of the streak itself and to apply a gain factor to account for any differences. A multiplicative rather than an additive correction is applied because the physical origin of the noise is multiplicative (magnetic tape dropouts). This correction is particularly data dependent in its effect, and although providing a global improvement, it may introduce artifacts in the detail [11].
FIGURE 3.6. Removed noise pattern. (a) Noise subtracted from figure 3.1a. (b) Magnitude of its Fourier transform.
Regular striping may occur in images taken by multidetector sensors. The mirror speed of optical-mechanical scanners is limited. For example, the orbital velocity of Landsat is such that the satellite moves forward six fields of view during the time needed to scan one image line. Therefore, six detectors are used in the MSS that image six lines for each spectral band during a single sweep of the mirror. The Visible Infrared Spin Scan Radiometer (VISSR) onboard the Synchronous Meteorological Satellite (SMS) has eight detectors per spectral band.

Let D be the number of detectors in a sensor. Each detector records a subimage consisting of every Dth line. The complete image is formed by interlacing these subimages. The transfer functions of the individual detectors are not identical, because of temperature variations and changes in the detector material. Some detectors have a nonlinear transfer characteristic and a response that depends on their exposure history. Because of these effects, images with regular striping are obtained. The corrections derived from scanning a gray wedge at the end of each scan line and applied in ground processing do not remove the striping entirely. Therefore, the image data themselves are used to derive a relative correction of the individual detector subimages such that each one is related in the same way to the actual scene radiance. This correction is based on the assumption that over a sufficiently large region W of size M by N, each sensor is exposed to scene radiances with the same probability distribution. For a linear and time-invariant sensor transfer function, a recorded element of the subimage belonging to detector d is given by

g_d(j_d, k) = a_d f_d(j_d, k) + b_d,   j_d = d, d + D, d + 2D, ...;  k = 1, ..., N   (3.10)

If the gain a_d and offset b_d for each detector are known, a corrected detector output can be calculated by

f_d(j_d, k) = [g_d(j_d, k) − b_d] / a_d   (3.11)

Under the assumption that each subimage has the same mean and variance, the gain and offset are given by

a_d = σ_d / σ   (3.12)

and

b_d = m_d − a_d m   (3.13)

where m_d and σ_d are the mean gray value and the standard deviation, respectively, of the subimage for detector d; and m and σ are the total mean gray value and standard deviation, respectively, in the reference region W. For a normal distribution of radiance, transformation (3.11)
equalizes the probability distribution of each detector subimage to the probability distribution of the total image. Nonlinear sensor effects distort the distribution, and the linear correction, equation (3.11), does not eliminate striping completely. A nonlinear correction, obtained by matching the cumulative histograms of the individual subimages to the cumulative histogram of the total image, successfully reduces striping [12, 13]. Let H be the cumulative histogram of the entire image; i.e., let H(f) be the number of occurrences of detector output values less than or equal to f. Let H_d be the cumulative histogram for detector d. Then H_d(g) is the number of outputs of detector d less than or equal to g. The transfer function f = f(g) is obtained by

n_d H(f) ≤ n H_d(g) < n_d H(f + 1)   (3.14)

where n = MN and n_d = MN/D.
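The linear correction of equations (3.11) to (3.13) can be sketched as follows, assuming the image is a NumPy array in which detector d recorded lines d, d + D, d + 2D, ..., and taking the whole image as the reference region W:

```python
import numpy as np

def destripe_linear(image, D):
    """Destriping by matching detector means and standard deviations,
    eqs. (3.11)-(3.13).  Each detector subimage (every D-th line) is
    transformed so that its mean and standard deviation equal those of
    the total image."""
    m, s = image.mean(), image.std()   # reference statistics of W
    out = image.astype(float).copy()
    for d in range(D):
        sub = image[d::D].astype(float)
        a_d = sub.std() / s            # eq. (3.12)
        b_d = sub.mean() - a_d * m     # eq. (3.13)
        out[d::D] = (sub - b_d) / a_d  # eq. (3.11)
    return out
```

After the transformation every subimage has exactly the reference mean and standard deviation; the nonlinear histogram-matching correction of equation (3.14) goes further and equalizes the full cumulative distributions.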
Figure 3.7a shows an area of a Landsat-2 MSS image with severe striping. The periodic nature of the striping is also evident as spikes in the vertical power spectrum computed for columns of the image and shown in figure 3.7b. The image corrected with the linear transformation (3.11) and its one-dimensional vertical power spectrum are shown in figures 3.8a and 3.8b, respectively. The result of the nonlinear correction, equation (3.14), obtained by matching the detector histograms, is shown in figures 3.8c and 3.8d. The transfer functions for detectors 1 and 5 obtained by matching detector means and standard deviations and by histogram matching are shown in figure 3.9. Figures 3.10a and 3.10c show an SMS/VISSR image of the Florida peninsula with striping and after removal of striping by matching the means and standard deviations of the eight detectors to the values of the total image.

Periodic striping noise may also be removed by filtering in the Fourier domain. The contribution of periodic noise to the frequency spectrum is concentrated at points determined by the spatial frequencies of the noise harmonics. For example, the frequency spectrum of a periodic line pattern with discrete frequencies is given by points along a line perpendicular to the direction of the noisy lines in the image. (See sec. 2.2.3.1.) Removing the noise coefficients from the spectrum of the image by interpolation or by notch filtering and taking the inverse transform results in a smoothed picture with the noise deleted. Figure 3.11a shows the two-dimensional Fourier spectrum of the image in figure 3.7a. The spikes on the vertical frequency axis represent the noise frequencies caused by horizontal striping. Figure 3.11b is the corrected image, obtained by one-dimensional frequency-domain filtering with a notch filter that sets the Fourier coefficients at the noise frequencies
in figure 3.7b to zero. The ringing near the border, caused by the discrete Fourier transform, is clearly visible. The disadvantages of this technique are that the transform size must be a power of 2, that windowing to reduce ringing is necessary, and that in the case of horizontal striping the image has to be rotated before and after filtering.

Spike noise is caused by bit errors in data transmission or by the occurrence of temporary disturbances in the analog electronics. It produces isolated picture elements that significantly deviate from the surrounding data. Spike noise can be removed by comparing each picture element with its neighbors. If all differences exceed a certain threshold, the pixel is considered a noise point and is replaced by the average of its neighbors. The spike noise in figure 3.1a was removed with this technique.

Additive random noise in a recorded image g,

g = f + n   (3.15)

may be suppressed by averaging if multiple frames are available. The average ḡ of a given set of L images g_i of an invariant scene is obtained by

ḡ = (1/L) Σ_{i=1}^{L} g_i   (3.16)

If the noise n is uncorrelated and has zero mean, then

E{ḡ} = f   (3.17)

and

σ_ḡ² = σ²/L   (3.18)

where σ_ḡ² is the variance of the average. Thus, ḡ will approach the original image f if L is sufficiently large. This technique requires a very accurate registration of the images. Multiple frames taken at the same time are generally not available for remotely sensed images. Therefore, the primary application of this method is in reducing the noise in digitized maps and photographs used as ancillary data for the analysis of remotely sensed data.
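The spike-noise rule above, neighbor comparison against a threshold, can be sketched as follows; the explicit loop over interior pixels and the 4-neighborhood are illustrative simplifications:

```python
import numpy as np

def remove_spikes(image, threshold):
    """Spike-noise removal: a pixel whose differences from all of its
    neighbors exceed `threshold` is declared a noise point and replaced
    by the average of its neighbors (4-neighborhood, interior pixels
    only, for brevity)."""
    out = image.astype(float).copy()
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            nbrs = np.array([image[i - 1, j], image[i + 1, j],
                             image[i, j - 1], image[i, j + 1]], dtype=float)
            # All neighbor differences must exceed the threshold; an edge
            # pixel differs strongly from only some neighbors and is kept.
            if np.all(np.abs(image[i, j] - nbrs) > threshold):
                out[i, j] = nbrs.mean()
    return out
```

Because the test demands that every neighbor difference exceed the threshold, genuine edges and lines, which agree with at least some neighbors, are left untouched.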
3.3 Geometric Transformations

Geometric transformations are used to correct for the geometric distortions T_G (see sec. 2.4); i.e., they are used to perform the inverse transformation T_G⁻¹. They are also required to overlay images on other images or maps; to produce cartographic projections; to reduce or expand the size of pictures; to correct the aspect ratio (the ratio between scales in the horizontal and vertical directions); and to rotate, skew, and flip images.
FIGURE 3.7a. Striped Landsat MSS image with residual striping after radiometric correction at GSFC.
FIGURE 3.7b. One-dimensional vertical power spectrum of the striped image in figure 3.7a.
FIGURE 3.8a. Destriped Landsat MSS image. Striping removed by matching detector means and standard deviations.
FIGURE 3.8c. Destriped Landsat MSS image. Striping removed by matching detector histograms.
FIGURE 3.8b. One-dimensional vertical power spectrum of the destriped image in figure 3.8a.
FIGURE 3.8d. One-dimensional vertical power spectrum of the destriped image in figure 3.8c.
FIGURE 3.9. Transfer functions (linear and nonlinear) for detectors 1 and 5 for the correction in figure 3.8c.
FIGURE 3.10a. Striped recorded SMS/VISSR image.
FIGURE 3.10c. Image in part a after correction to remove striping.
FIGURE 3.11a. Removal of striping by frequency-domain filtering. Two-dimensional Fourier spectrum of image in figure 3.7a.
FIGURE 3.11b. Image in figure 3.7a after filtering to suppress striping. Ringing effects near the top and bottom borders are visible.
3.3.1 Coordinate Transformations

A geometric transformation T_G is a vector function defined by the equations

x' = p(x, y)
y' = q(x, y)   (2.76)

which map a region R in the (x, y) coordinate system into a region R* in the (x', y') system: T_G: R → R*. (See fig. 3.12.) To determine T_G for remotely sensed images, the errors contributing to the geometric distortions discussed in section 2.4.1 must be known. The main error sources are variations in aircraft or spacecraft attitude and velocity during imaging, instrument nonlinearities, Earth curvature, Earth rotation, and panoramic distortion.
Two important classes of mappings T_G for remotely sensed scanner images are (1) the transformations that relate the image coordinates (x, y) to a geodetic coordinate system, and (2) the transformations that relate the coordinate systems of two images. The precise Earth location of scanner measurements must be determined to produce map overlays and cartographic projections of images (see ch. 6) and for the objective analysis of time sequences of images, such as wind speed calculations from cloud displacements (see sec. 7.4). The coordinate transformations between images are employed in the relative registration of several images
FIGURE 3.12. Geometric transformation.
of the same scene (e.g., multispectral, multitemporal, or multisource images). The determination of T_G for this case is treated in chapter 5.

The derivation of T_G for the first case is equivalent to the determination of the intersection of the scanner line of sight with the surface of the Earth. Given the aircraft or spacecraft position vector p; the vehicle velocity v; the vehicle attitude in yaw, pitch, and roll; the pointing direction s of the scanner measured relative to the aircraft or spacecraft axes; and the description of the Earth surface, the location of the intersection e of the scanner line of sight with the Earth surface can be computed. In the approach described by Puccinelli [14] the Earth coordinate system is chosen as a Cartesian coordinate system (x', y', z') with the origin at the center of the Earth. The spacecraft position vector p defines the origin of the spacecraft coordinate system about which the attitude changes are measured. The definition of the orientation of the aircraft or spacecraft axes is arbitrary, and different coordinate systems are established for different satellites. For example, the Landsat and Nimbus attitude control systems align the spacecraft yaw axis with the normal n from the satellite to the Earth surface. Therefore, the yaw axis is taken to be coincident with the normal vector n; the pitch axis is taken to be n × v, which is normal to the orbital plane and to the yaw axis; and the roll axis is (n × v) × n. (See fig. 3.13.) For the spin-stabilized SMS/GOES (Geostationary Operational Environmental Satellite) spacecraft with spin axis S, the yaw axis is taken along n = −p + (p·S)S, which is perpendicular to S in the plane defined by p and S and generally points to the center of the Earth. The pitch axis is taken to be S × n, and the roll axis is the spin axis S itself [15].

The orientation of the spacecraft axes relative to the Earth coordinate system is given by the orthogonal matrix D, where

D = (c₁, c₂, c₃)   (3.19)

The column vectors may be chosen as

c₃ = n
c₂ = (n × v) / ‖n × v‖
c₁ = c₂ × c₃   (3.20)

for the first of the previously defined spacecraft coordinate systems. The image coordinates (x, y) of a picture element are determined by the viewing direction s of the scanner. The direction of the scanner can be described by rotations about the spacecraft yaw, pitch, and roll axes, measured relative to the yaw axis. The three generally time-varying rotation angles define the vector s = (Θ, Φ, θ), where Θ is the rotation about the yaw axis, Φ is the rotation about the pitch axis, and θ is the rotation about the roll axis. Depending on the scanner type, one or two of the rotation angles are zero. For the Landsat MSS, Θ = 0, Φ = 0, and θ = Δy(y − y₀), where Δy is the angular width of a pixel in the scan
FIGURE 3.13. Relation between Earth, satellite, and image coordinates.
direction, and y₀ is the coordinate of the center of the image frame. For the SMS/VISSR, Θ = 0, Φ = Δx(x − x₀), and θ = Δy(y − y₀).

The rotations of the spacecraft due to yaw τ, pitch ψ, and roll ρ are described by the product of three rotation matrices:

M = [cos τ  −sin τ  0]   [ cos ψ  0  sin ψ]   [1   0       0    ]
    [sin τ   cos τ  0] · [ 0      1  0    ] · [0   cos ρ  −sin ρ]   (3.21)
    [0       0      1]   [−sin ψ  0  cos ψ]   [0   sin ρ   cos ρ]

The column vectors of the orthogonal matrix

F = DM   (3.22)

are the directions of the spacecraft axes after a change in the spacecraft attitude. The rotations describing the scanner viewing direction in the spacecraft coordinate system are represented by a matrix M', identical to M except that the angles τ, ψ, and ρ are replaced by Θ, Φ, and θ, respectively. To determine the scanner pointing direction in the Earth coordinate system, recall that the third column of F represents the yaw axis in Earth coordinates. Thus, the scanner pointing direction in the Earth coordinate system is given by the third column of G, where

G = FM'   (3.23)
is the orthogonal matrix whose third column is the unit vector representing the scanner line of sight in the Earth coordinate system. Let m' be the third column of M'; i.e.,

m' = (cos Θ sin Φ cos θ + sin Θ sin θ,  sin Θ sin Φ cos θ − cos Θ sin θ,  cos Φ cos θ)ᵀ   (3.24)

Then the unit vector representing the scanner line of sight in the Earth coordinate system is g, where

g = Fm'   (3.25)

The Earth surface is defined by the following ellipsoid:

(x'² + y'²)/a² + z'²/c² = 1   (3.26)

The intersection of g with that surface is given by the vector e, where

e = p + u g   (3.27)

The parameter u represents the distance from the scanner to the intersect point and is given by

u = [−B − (B² − AC)^(1/2)] / A   (3.28)

where

A = c²(g_x² + g_y²) + a² g_z²
B = c²(p_x g_x + p_y g_y) + a² p_z g_z   (3.29)
C = c²(p_x² + p_y²) + a²(p_z² − c²)

The resulting location vector e = (e_x, e_y, e_z) can be converted to geocentric latitude φ_c and longitude λ:

φ_c = tan⁻¹ [e_z / (e_x² + e_y²)^(1/2)]   (3.30)

λ = tan⁻¹ (e_y / e_x)   (3.31)

The geodetic map latitude φ is given by

φ = tan⁻¹ [(a/c)² tan φ_c]   (3.32)

Thus, the geometric transformation relating image coordinates (x, y) to geodetic coordinates (φ, λ) is a function of spacecraft position, velocity, attitude, scanner orientation, and surface ellipticity:

T_G = T_G(p, v, τ, ψ, ρ, s, a, c)   (3.33)
where each positional value in this transformation is in reality a function of time. The attitude information provided by the satellite instrumentation is generally not accurate enough to meet the precision requirements for geometric correction. To obtain a precision of one picture element for the Landsat MSS, each attitude component should be known to 0.1 mrad. However, the Landsat 1 and 2 attitude measurement system is accurate only to 1 mrad [16]. The SMS/GOES VISSR precision requirements of one visible picture element demand the determination of the spacecraft attitude to within 5", or 24 μrad [15].
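Equations (3.26) to (3.32) can be collected into a short geolocation sketch. Here p and g are assumed to be given in Earth-centered Cartesian coordinates, and the default semiaxes are illustrative nominal values for the Earth ellipsoid:

```python
import math

def ground_intersection(p, g, a=6378160.0, c=6356775.0):
    """Intersect the scanner line of sight e = p + u*g with the Earth
    ellipsoid (x'^2 + y'^2)/a^2 + z'^2/c^2 = 1, eqs. (3.26)-(3.32).
    p: spacecraft position, g: unit line-of-sight vector, both in
    Earth-centered Cartesian coordinates (meters).  Returns geodetic
    latitude and longitude in degrees."""
    px, py, pz = p
    gx, gy, gz = g
    A = c * c * (gx * gx + gy * gy) + a * a * gz * gz
    B = c * c * (px * gx + py * gy) + a * a * pz * gz
    C = c * c * (px * px + py * py) + a * a * (pz * pz - c * c)
    u = (-B - math.sqrt(B * B - A * C)) / A        # eq. (3.28), near root
    ex, ey, ez = px + u * gx, py + u * gy, pz + u * gz   # eq. (3.27)
    lat_c = math.atan2(ez, math.hypot(ex, ey))     # geocentric latitude, eq. (3.30)
    lon = math.atan2(ey, ex)                       # longitude, eq. (3.31)
    lat = math.atan((a / c) ** 2 * math.tan(lat_c))  # geodetic latitude, eq. (3.32)
    return math.degrees(lat), math.degrees(lon)
```

The smaller root of the quadratic is taken in equation (3.28) because it corresponds to the near, visible intersection; the larger root lies on the far side of the Earth.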
A precise estimate of the attitude time series over the time interval in which the image was scanned can be obtained by using ground control points. Ground control points are recognizable geographic features or landmarks in the image whose actual locations can be measured in maps. Thus, control points relate an image with the object. The approach is to estimate the attitude time series of the spacecraft from known ground control point locations by least squares [15] or by digital filtering [17]. Pass points are used to determine T_G for the relative registration of images of the same scene. Pass points are recognizable features that are invariant in a series of images of the same scene but whose absolute locations are unknown. Pass points relate an image with other images, while the object coordinates are not known.
In practice the calculation of the exact location of each image point would require a prohibitive amount of computer time. Depending on the nature of the geometric distortions, points in the transformed image may be sparsely and not equally spaced. For reasons of compact data storage and limitations of film recorders, the output picture elements must be regularly spaced. To obtain a continuous picture with equally spaced picture elements in a reasonable time, the inverse approach is taken. A set of tie points defining a rectangular or quadrilateral interpolation grid in the output image is selected. The exact transformation is only computed for the grid points. The locations of points within each quadrilateral are determined by bilinear interpolation between the vertex coordinates. Values for fractional pixel locations in the input picture are determined by interpolation. The calculation of the location of an output picture element in the original image and interpolation over surrounding pixels is called resampling. (See sec. 3.3.2.)

The actual coordinate transformation can be represented by the two-dimensional polynomials of order m
x' = Σ_{j=0}^{m} Σ_{k=0}^{m} a_jk xʲ yᵏ

y' = Σ_{j=0}^{m} Σ_{k=0}^{m} b_jk xʲ yᵏ   (3.34)
This transformation is linear in the coefficients a_jk and b_jk, a property that permits a least-squares procedure to be used for determining a_jk and b_jk from a set of known corresponding tie points (x_i, y_i) and (x_i', y_i'). When the nature of the geometric distortion is assumed to be a spatially slowly varying transformation, a low-order bivariate polynomial (m = 2 or 3) usually yields a good approximation of the actual transformation. Second- and third-order polynomials require at least 6 and 10 pairs of corresponding tie points, respectively. Larger variations require that the tie points are uniformly distributed throughout the image. In general, the coefficients of high-order polynomials are sensitive to geometric errors associated with the location of the control points, and given error bounds are difficult to achieve. This problem may be avoided by using orthogonal polynomials (e.g., Hermite or Legendre polynomials). Another approach relies on defining low-order polynomials for subareas of the image and assuring continuity at the boundaries between the areas [18]. If the size of the subareas is adapted to the magnitude of the geometric variations, the approximation can be made with arbitrary accuracy, provided that a sufficient number of tie points is available.
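Because equation (3.34) is linear in the coefficients, the fit reduces to an ordinary linear least-squares problem. The following is a minimal sketch with NumPy, not the processing system described in the text; the function name, the tie-point values, and the synthetic distortion are invented for the illustration:

```python
import numpy as np

def fit_polynomial_mapping(xy, xy_prime, m=2):
    """Fit x' and y' as bivariate polynomials of order m in (x, y)
    by least squares (eq. 3.34). Returns coefficient vectors and the
    design matrix built from the monomials x^j * y^k, j + k <= m."""
    x, y = xy[:, 0], xy[:, 1]
    cols = [x**j * y**k for j in range(m + 1) for k in range(m + 1 - j)]
    A = np.column_stack(cols)
    a, *_ = np.linalg.lstsq(A, xy_prime[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, xy_prime[:, 1], rcond=None)
    return a, b, A

# Six tie points suffice for m = 2 (six monomials); use more in practice.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(10, 2))
# Synthetic distortion: an affine map plus a mild quadratic term.
xp = 2.0 + 1.01 * xy[:, 0] + 0.02 * xy[:, 1] + 1e-4 * xy[:, 0]**2
yp = -3.0 + 0.99 * xy[:, 1]
a, b, A = fit_polynomial_mapping(xy, np.column_stack([xp, yp]), m=2)
print(np.max(np.abs(A @ a - xp)))  # residual of the x' fit (should be ~0)
```

Since the synthetic distortion lies in the second-order monomial basis, the fit recovers it up to rounding error.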
A commonly used low-order mapping for a subarea is given by the bilinear transformation

x' = a0 + a1x + a2y + a3xy
y' = b0 + b1x + b2y + b3xy   (3.35)

Four pairs of corresponding tie points are required to determine the coefficients of this transformation. The tie points define a net of quadrilaterals. In many cases an affine transformation

x' = a0 + a1x + a2y
y' = b0 + b1x + b2y   (3.36)

is sufficient to represent the geometric transformation required per subarea. Affine transformations include rotation, displacement, scaling, and skewing. Three pairs of corresponding tie points are required per subarea, the tie points defining a net of triangles over the image. A measure of how the transformation T_g distorts a coordinate system is given by the Jacobian determinant

J = ∂(x', y')/∂(x, y) = | ∂x'/∂x  ∂x'/∂y |
                        | ∂y'/∂x  ∂y'/∂y |   (3.37)

For the bilinear transformation (3.35) the Jacobian is given by

J = a1b2 - a2b1 + (a1b3 - a3b1)x + (a3b2 - a2b3)y   (3.38)
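The coefficients of the bilinear mapping (3.35) for one quadrilateral follow from its four tie-point pairs by solving a 4 by 4 linear system, and the Jacobian (3.38) can then be evaluated anywhere in the subarea. A minimal sketch (tie-point values invented for the example):

```python
import numpy as np

def bilinear_coeffs(src, dst):
    """Solve for a0..a3 and b0..b3 of eq. (3.35) from four tie points."""
    x, y = src[:, 0], src[:, 1]
    M = np.column_stack([np.ones(4), x, y, x * y])
    a = np.linalg.solve(M, dst[:, 0])
    b = np.linalg.solve(M, dst[:, 1])
    return a, b

def jacobian(a, b, x, y):
    """Jacobian determinant of the bilinear mapping, eq. (3.38)."""
    return (a[1] * b[2] - a[2] * b[1]
            + (a[1] * b[3] - a[3] * b[1]) * x
            + (a[3] * b[2] - a[2] * b[3]) * y)

# Unit square mapped by a pure scaling x' = 2x, y' = 3y; J should be ab = 6.
src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
dst = np.column_stack([2 * src[:, 0], 3 * src[:, 1]])
a, b = bilinear_coeffs(src, dst)
print(jacobian(a, b, 0.5, 0.5))  # Jacobian of the scaling (should be 6)
```

For a pure scaling the result agrees with the special case (3.42) below, J = ab.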
For the identity transformation

x' = x
y' = y   (3.39)

the coefficients have the following values: a1 = b2 = 1, a0 = a2 = a3 = b0 = b1 = b3 = 0, and J = 1. A singular transformation is characterized by J = 0. For the affine transformation (3.36) the Jacobian is

J = a1b2 - a2b1   (3.40)

Special cases of affine transformations have the following Jacobian determinants:

1. Rotation by angle φ:
x' = x cos φ + y sin φ
y' = -x sin φ + y cos φ
J = 1   (3.41)

2. Scaling by factors a and b:
x' = ax
y' = by
J = ab   (3.42)

3. Skewing by angle θ:
x' = x + y tan θ
y' = y
J = 1   (3.43)

J = 1 indicates that the area affected by the transformation is constant.

3.3.2 Resampling
Geometric transformations can be accomplished in two ways:

1. The actual locations of the picture elements (pixels) are changed, but the elements retain their intensity values. Because of its limited accuracy, this method is only used for simple geometric corrections (aspect ratio and skew).

2. The image is resampled. A digital image defined on an equally spaced grid is converted to a geometrically transformed picture on an equally spaced grid. A feature at grid point (x', y') in the geometrically transformed image is located at point (x, y) in the distorted image. In general, the location (x, y) will not coincide with grid points in
the input image. The intensity values of the pixels on the output grid must be determined by interpolation using neighboring pixels on the input grid. The basic approach is to model the original image defined on the input grid and then to resample this modeled scene, to yield an image with the desired geometric characteristics. For practical reasons the scene is only modeled locally, i.e., in the neighborhood of the interpolation site. The resampling affects image quality (loss of spatial resolution and photometric accuracy). A digital image, represented by a digital picture function g_s(j, k), is
defined on an equally spaced grid (jΔx, kΔy), j = 1, ..., M; k = 1, ..., N. For an output grid point (x, y), where

jΔx ≤ x < (j+1)Δx
kΔy ≤ y < (k+1)Δy   (3.44)

and if g_s is bandlimited by U and V, g(x, y) can be reconstructed exactly by applying the two-dimensional sampling theorem (see sec. 2.7):

g(x, y) = Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} g_s(jΔx, kΔy) h(x - jΔx, y - kΔy)   (2.147)

If the sampling intervals are chosen to be Δx = 1/(2U), Δy = 1/(2V), the reconstruction filter is given by

h(x, y) = UV [sin(2πUx)/(2πUx)] [sin(2πVy)/(2πVy)]   (2.150)

If Δx < 1/(2U) and Δy < 1/(2V), other functions h can be used to represent g exactly through equation (2.147). Equations (2.147) and (2.150) represent the Nyquist-Shannon expansion for bandlimited functions. To implement this interpolation formula on a computer, the sum has to be made finite or, equivalently, h(x, y) = 0 outside an interval that must include the origin. The right-hand side of equation (2.147) then does not represent g(x, y) exactly. Let l_n(x, y) be such an approximation of g(x, y), given by

l_n(x, y) = Σ_j Σ_k g_s(jΔx, kΔy) h_n(x - jΔx, y - kΔy)   (3.45)

with h_n(x, y) = 0 for |x| ≥ n, |y| ≥ n. Depending on the choice of h_n(x, y), various interpolation schemes can be implemented, which differ in accuracy and speed.
1. Nearest-Neighbor Interpolation (n = 1). In this first approximation, the value of the nearest pixel to (x, y) in the input grid is assigned to l_1(x, y). Figure 3.14 shows the function h_1 for one dimension. If j = integer (x + 0.5) and k = integer (y + 0.5), then, for Δx = Δy = 1,

l_1(x, y) = g_s(j, k)   (3.46)

(j = integer (x) means j is the largest integer number not greater than x). The resulting intensity values correspond to true input pixel values, but the geometric location of a pixel may be inaccurate by as much as ±1/2 pixel spacing. The sudden shift of true pixel values causes a blocky appearance of linear features. Nearest-neighbor interpolation is used to correct for scanner line length variations in Landsat 1 and 2 digital MSS images by inserting or deleting pixels at appropriate intervals (synthetic pixels [19]). The synthetic pixels may cause misregistration when comparing two MSS images of the same scene taken at different times and should, therefore, be removed by preprocessing. The computational requirements of nearest-neighbor interpolation are relatively low, because only one data value is required to determine a resampled pixel value.

2. Bilinear Interpolation (n = 2). Bilinear interpolation involves finding the four pixels on the input grid closest to (x, y) on the output grid and obtaining the value of l_2(x, y) by linear approximation, i.e., by assuming that the picture function is linear in the interval [(jΔx, (j+1)Δx), (kΔy, (k+1)Δy)]. Figure 3.15 shows the function h_2 for one dimension. The approximated value is given by

l_2(x, y) = (1-α)(1-β) g_s(j, k) + α(1-β) g_s(j+1, k) + β(1-α) g_s(j, k+1) + αβ g_s(j+1, k+1)   (3.47)
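Equations (3.46) and (3.47) can be sketched in code as follows. This is a minimal NumPy illustration with invented function names and a toy image; border handling is simplified by clipping:

```python
import numpy as np

def resample(g, xs, ys, method="nearest"):
    """Resample image g at fractional coordinates (xs, ys) using
    nearest-neighbor (eq. 3.46) or bilinear (eq. 3.47) interpolation."""
    if method == "nearest":
        j = np.clip(np.floor(xs + 0.5).astype(int), 0, g.shape[0] - 1)
        k = np.clip(np.floor(ys + 0.5).astype(int), 0, g.shape[1] - 1)
        return g[j, k]
    # bilinear: weights from the fractional offsets alpha, beta
    j = np.clip(np.floor(xs).astype(int), 0, g.shape[0] - 2)
    k = np.clip(np.floor(ys).astype(int), 0, g.shape[1] - 2)
    alpha, beta = xs - j, ys - k
    return ((1 - alpha) * (1 - beta) * g[j, k]
            + alpha * (1 - beta) * g[j + 1, k]
            + (1 - alpha) * beta * g[j, k + 1]
            + alpha * beta * g[j + 1, k + 1])

g = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 image
print(resample(g, np.array([1.5]), np.array([1.5]), "bilinear"))
# midpoint among the values 5, 6, 9, 10 -> [7.5]
```

At the cell midpoint the bilinear result is simply the mean of the four neighbors, illustrating the smoothing character discussed below.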
FIGURE 3.14. Nearest-neighbor interpolation. (a) Interpolation function. (b) Resampled image grid.
FIGURE 3.15. Linear interpolation. (a) Interpolation function. (b) Resampled image grid.
where

j = integer (x)
k = integer (y)
α = x - j
β = y - k   (3.48)

Bilinear interpolation may cause a small loss of image resolution due to the smoothing nature of linear interpolation. On the other hand, the blocky appearance of linear features associated with nearest-neighbor interpolation is reduced. The computational cost of this method is higher than that of nearest-neighbor interpolation because of the additional operations required.

3. Bicubic Interpolation. Resampling accuracy can be further improved by modeling the picture function locally by a polynomial surface. Use of a cubic surface implies an increase of the domain of the sampling function h_n(x, y). Frequently a cubic approximation of the ideal interpolation function given with equation (2.150), employing the 16 nearest neighbors, is used [18]. Such a cubic approximation, which is continuous in value and slope, is given in one dimension by (fig. 3.16)

h_3(x) = 1 - 2|x|² + |x|³           0 ≤ |x| < 1
         4 - 8|x| + 5|x|² - |x|³    1 ≤ |x| < 2
         0                          |x| ≥ 2   (3.49)
A possible implementation is first to interpolate in the line direction to obtain the function values at the locations (x, k-1), (x, k), (x, k+1), and (x, k+2), and then to interpolate in the sample direction to obtain l_3(x, y). Using the interpolation function in equation (3.49) to interpolate pixel values on the input grid in the line direction yields

l(x, m) = -α(1-α)² g_s(j-1, m) + (1 - 2α² + α³) g_s(j, m) + α(1 + α - α²) g_s(j+1, m) - α²(1-α) g_s(j+2, m),   m = k-1, k, k+1, k+2   (3.50)
FIGURE 3.16. Cubic interpolation.
The final interpolated value l_3(x, y) is obtained by

l_3(x, y) = -β(1-β)² l(x, k-1) + (1 - 2β² + β³) l(x, k) + β(1 + β - β²) l(x, k+1) - β²(1-β) l(x, k+2)   (3.51)

where α, β and j, k are defined as in equation (3.48). Bicubic interpolation is free of the dislocation of values characteristic of nearest-neighbor interpolation, and the resolution degradation associated with bilinear interpolation is reduced. An important application of bicubic interpolation is the generation of magnified or zoomed image displays. The replication of pixels with nearest-neighbor interpolation is distracting to the eye, but bicubic interpolation represents fine image detail much better. Although higher order interpolation improves the visual appearance of images, analysis techniques such as classification (see ch. 8) may be sensitive to the interpolation method employed. Resampling results in an image degradation due to the attenuation of higher spatial frequencies caused by the interpolation function and by the aliasing effects associated with discrete interpolation. Interpolation is fundamentally a lowpass filtering operation [20]. The attenuation of higher spatial frequencies, which causes blurring, is a function of the distance of the resampled pixels from the original sampling sites. If the resampling and sampling sites coincide, there is no amplitude attenuation. At a distance of 0.5 pixel, the attenuation of high spatial frequencies is significant. Figure 3.17 shows the modulation transfer functions corresponding to h_n for nearest-neighbor, bilinear, and bicubic interpolation; H_4 corresponds to a truncated form of the ideal (sin x)/x interpolation function. Aliasing effects (see sec. 2.6.1.1) result from the extension of the interpolator transfer function beyond the cutoff frequency (see fig. 3.17) and cause ringing near edges in the resampled image. The effects of resampling in the course of geometric corrections are illustrated in figure 3.18, which shows a Landsat MSS false-color image corrected for skew, scale, and rotational errors.
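The kernel (3.49) and the separable interpolation of equations (3.50) and (3.51) can be sketched as follows. This is a minimal scalar implementation with invented names; border handling is omitted, so only interior points are valid:

```python
import numpy as np

def h3(x):
    """Cubic convolution kernel of eq. (3.49)."""
    x = abs(x)
    if x < 1:
        return 1 - 2 * x**2 + x**3
    if x < 2:
        return 4 - 8 * x + 5 * x**2 - x**3
    return 0.0

def bicubic(g, x, y):
    """Separable bicubic resampling over the 16 nearest neighbors
    (eqs. 3.50-3.51); interior points only."""
    j, k = int(np.floor(x)), int(np.floor(y))
    total = 0.0
    for dk in range(-1, 3):        # sample direction
        for dj in range(-1, 3):    # line direction
            total += g[j + dj, k + dk] * h3(x - (j + dj)) * h3(y - (k + dk))
    return total

g = np.full((6, 6), 5.0)
print(bicubic(g, 2.3, 2.7))  # a constant image is reproduced (5, up to rounding)
```

The kernel weights at the four neighbor distances reduce exactly to the signed coefficients of equation (3.50), and they sum to 1, so constant images and grid-point values are reproduced.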
The nearest-neighbor interpolation retains the original intensity values, but straight features
FIGURE 3.17. Image degradation caused by resampling when the sampling site is midway between original samples. H_1, nearest-neighbor; H_2, linear interpolation; H_3, cubic interpolation; H_4, truncated (sin x)/x (4 lobes). (Horizontal axis: u, cycles/sample.)
(airport runway, roads) appear stepped (fig. 3.18a). Bilinear interpolation represents some linear features much better, but the river in the upper left section of the image appears much smoother (fig. 3.18b). Again the runways appear jagged, but the straight white line northeast from the airport, indicating freeway construction, is well represented. With bicubic interpolation, lines and edges appear smoother and sharper (fig. 3.18c).

3.4 Radiometric Restoration

The preprocessed and geometrically corrected image g_R(x', y') is related to the ideal object distribution f(x', y') by equation (2.80):

g_R(x', y') = T_R f(x', y') = h(x', y') * f(x', y') + n_a(x', y')

where T_R is the operator representing the radiometric degradation, h(x', y') is the PSF of the linearized space-invariant imaging system, and n_a(x', y') represents additive signal-independent random noise in the recorded image. These assumptions about image formation and sensor response permit a simpler mathematical treatment of radiometric restoration. (See sec. 2.3.) The problem of radiometric restoration is to find an estimate f̂ of the original object distribution f, given the preprocessed and geometrically corrected image g_R and knowledge of the PSF h [21]. Mathematically, the problem is the determination of an inverse operator T_R^-1 such that

f̂ = T_R^-1 g_R   (3.52)
FIGURE 3.18a. Geometrically corrected Landsat MSS image (scene 197416130) resampled with nearest-neighbor interpolation.
FIGURE 3.18b. Image in part a resampled with bilinear interpolation.
FIGURE 3.18c. Image in part a resampled with bicubic interpolation.
Even if the inverse operator T_R^-1 exists and is unique, the radiometric image restoration problem is ill conditioned, which means that small perturbations in g_R can produce nontrivial perturbations in f̂ [21-23]. Thus, inherent data perturbations in the recorded image can cause undesirable effects in the image restored by inverse transformation. Because random noise is always present, there is an irrecoverable uncertainty in the restored object distribution f̂. Discretization of the linear model, equation (2.80), for digital processing leads to the matrix vector equation (see sec. 2.6.2):

g = Bf + n   (2.140)
where g, f, and n are vectors created from the sampled image, sampled object, and sampled noise fields; and B is a matrix resulting from the sampled PSF h. Because of the nature of the mathematical problem, the matrix B is always ill conditioned (nearly singular). The solution of the digital radiometric restoration problem is thus tied to the solution of ill-conditioned systems of linear equations. In the presence of noise, the matrix B can become singular within the bounds of uncertainty imposed by the noise. Both deterministic and stochastic approaches can be taken to solve the radiometric restoration problem. The deterministic approach implies the solution of a system of linear equations (2.140), but the stochastic approach implies the estimation of a vector subject to random disturbances. Because of noise and ill conditioning, there is no unique solution to equation (2.140), and some criterion must be used to select a specific solution from the infinite family of possible solutions. For the deterministic approach, radiometric restoration can be posed as an optimization problem. For example, a possible criterion for solution is minimization of the noise n. This approach leads to a least-squares problem, where it is necessary to find a solution f̂ such that

n^T n = (g - B f̂)^T (g - B f̂)   (3.53)

is minimized. In the stochastic approach to restoration it is assumed that the object distribution f is a random field. A criterion of solution is to construct an estimate such that the expected value of the difference between estimate and original object distribution, taken over the ensemble of estimates and object distributions, is minimized. This approach is equivalent to finding a solution f̂ such that the error ε is a minimum, where

ε = E{(f - f̂)^T (f - f̂)}   (3.54)

These two criteria lead to the inverse filter and the Wiener filter, respectively.
It was shown in section 2.5 that for digitized images the fundamental limit on the size of image detail is determined by the Nyquist frequency in equation (2.104). Radiometric restoration can recover detail below the Nyquist frequency. Because of the presence of noise, radiometric restoration has to consider the tradeoff between sharpness of the restored image and the amount of noise in it. In addition, the criterion should insure a positive restoration, because the original image is everywhere positive. The restoration filters to be discussed may create negative intensity values in the restored image, values which have no physical meaning. An excellent survey of positive restoration methods, which now require an excessive amount of computation time, was prepared by Andrews [24]. Mathematically, equation (2.80) is a Fredholm integral equation of the first kind, which tends to have an infinite number of solutions. To obtain a unique solution, some constraints must be imposed. A method described in [22] constrains the mean square error to a certain value and determines the smoothest solution with that error. Another technique [25] uses the constraint that the restored image is everywhere positive.

3.4.1 Determination of Imaging System Characteristics
Solution of the radiometric restoration problem requires knowledge of the PSF h(x, y) or the corresponding transfer function H(u, v). Determination of the transfer characteristics is a classical problem in dynamic systems analysis. If the imaging system is available, the transfer characteristics may be determined by measuring the response of the system to specific test patterns. An example is the work carried out at the Jet Propulsion Laboratory, where extensive measurements of vidicon camera systems were made before their launch [26]. For space-invariant systems, it is sometimes possible to postulate a model for the camera system and to calculate the PSF h(x, y) or transfer function H(u, v). The PSFs for some typical degradations were derived under simplifying assumptions in section 2.4. If the imaging system is too complex for analytic determination of h(x, y), or if the degrading system is unavailable, h(x, y) must be estimated from the degraded picture itself. The existence of sharp points in the original scene can be used to measure the PSF directly in the image. (The PSF is the response of a linear imaging system to a point source.) For example, in astronomical pictures, the image of a point star could be used as an estimate of the PSF. Usually, natural scenes will not contain any sharp points, but they may contain edges in various orientations. In this case, the PSF can be determined from the derivatives of the images of these edges. An edge is an abrupt change in brightness. The response to such a step function in any direction is called the edge-spread function (ESF) h_e(x, y). The derivative of the ESF in any direction is called the line-spread function (LSF) h_l(x, y) in that direction. The LSF in any direction is the integral of the PSF in that direction. Thus, the derivatives of edge images in different
orientations can be used to reconstruct the PSF of the imaging system. The problem with this method [27-29] is that noise strongly affects the values of the derivative.

3.4.2 Inverse Filter

Minimization of equation (3.53) yields the least-squares estimate

f̂ = B^-1 g   (3.55)

which is the inverse filter restoration. The numerical solution of equation (3.55) is in general impossible. Suppose that the image was digitized with N = 512. Inversion of a 262,144 by 262,144 matrix is required. However, for a space-invariant imaging system, the matrix B is block circulant. A matrix with this structure is diagonalized by the two-dimensional DFT, leading to

f̂ = F^-1[G(u, v)/H(u, v)]   (3.56)

where F is the discrete two-dimensional Fourier transform operator, f̂ is the restored image, and G and H are the Fourier transforms of g and h, respectively. The same result can be obtained by direct application of the convolution property of the Fourier transform to equation (2.80) if n ≈ 0. The ill-conditioned nature of image restoration is preserved by the use of the least-squares criterion. The elements of H are an approximation to the eigenvalues of B. As the eigenvalues become small (near singularity), the inverse becomes large, and an amplification of noise results. For high signal-to-noise ratios and with small amounts of image blur, the inverse filter performs well, provided there are no zeros in H. Because the convolution performed by the DFT is circular, there is an edge effect from convolution wraparound at the image borders. The addition of zeros before the transformation of g and h results in a suppression of wraparound. (See sec. 2.6.2.) Because of its computational simplicity the inverse filter

H_I(u, v) = 1/H(u, v)   (3.57)

is used for radiometric restoration of remotely sensed images where atmospheric turbulence is the main cause of degradation. If, however, the transfer function H has zeros, as for defocus degradation (see sec. 2.4.3), the inverse filter approaches infinity. Even for transfer functions without any zeros, the inverse filter enhances noise to an extent that will interfere with visual interpretation more than does a loss of fine image detail.
Therefore, a modified inverse filter, shown in figure 3.19, is often used [10], where

H_I(u, v) = 1/H(u, v)   for u, v < S
H_I(u, v) = H_M         for u, v > S   (3.58)

The limit S and the constant H_M depend on the signal-to-noise ratio of the image to be processed and are determined empirically.
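A minimal sketch of the modified inverse filter (3.58) in the DFT domain follows. It is an illustration under stated assumptions, not an operational implementation: the Gaussian transfer function, the synthetic noise-free image, and all names are invented for the test:

```python
import numpy as np

def modified_inverse_filter(g, H, s, h_m=1.0):
    """Apply eq. (3.58): 1/H below the normalized frequency limit s,
    the constant H_M above it (limits noise amplification)."""
    u = np.abs(np.fft.fftfreq(g.shape[0]))[:, None]
    v = np.abs(np.fft.fftfreq(g.shape[1]))[None, :]
    H_i = np.where((u < s) & (v < s), 1.0 / H, h_m)
    return np.real(np.fft.ifft2(np.fft.fft2(g) * H_i))

# Blur an image with a Gaussian transfer function (no zeros), then restore.
# With s = 0.5 only the Nyquist components are replaced by H_M.
rng = np.random.default_rng(1)
f = rng.uniform(size=(64, 64))
u = np.fft.fftfreq(64)[:, None]
v = np.fft.fftfreq(64)[None, :]
H = np.exp(-20.0 * (u**2 + v**2))
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H))
f_hat = modified_inverse_filter(g, H, s=0.5)
```

In this noise-free case the restoration is nearly exact; with noise present, lowering s trades sharpness of the restored image against noise suppression, as discussed above.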
3.4.3 The Optimal Filter

The optimal or Wiener restoration filter results from the minimization of equation (3.54) in the stochastic approach. The problem becomes mathematically tractable if a linear estimate is chosen, that is, if a linear relationship between the estimate f̂ and the gray values in g is assumed. The problem is then to find a restoration filter W such that the estimate

f̂ = Wg   (3.59)

minimizes the error measure in equation (3.54). Because signal-independent noise was assumed,

E{f n^T} = E{n f^T} = 0   (3.60)

and the mean-square-error solution is [30-32]

W = R_ff B^T (B R_ff B^T + R_nn)^-1   (3.61)
FIGURE 3.19. Inverse filter. (a) Modulation transfer function. (b) Modified inverse filter.
where

R_ff = E{f f^T}
R_nn = E{n n^T}   (3.62)

are the autocorrelation matrices of f and n, respectively; R_ff and R_nn represent the information about the signal and noise processes, respectively, necessary to carry out the restoration. The minimum mean-square-error restoration requires very large matrix computations. Conversion to an efficient representation in the spatial frequency domain with the DFT is possible if the underlying random process for f is stationary, which means that the restored image can be considered a zero-mean process plus an additive constant. With this assumption and for space-invariant imaging systems, all matrices in equation (3.61) can be diagonalized by the DFT. The Fourier transforms of the correlation functions R_ff and R_nn are the spectral densities S_ff and S_nn, respectively, in equation (2.18). The optimal or Wiener filter in the frequency domain is obtained from equation (3.61) as

W(u, v) = H(u, v)* / [|H(u, v)|² + S_nn(u, v)/S_ff(u, v)]   (3.63)

where H* is the complex conjugate of H. The estimate f̂(x, y) of the restored image is thus

f̂(x, y) = F^-1[ H(u, v)* G(u, v) / (|H(u, v)|² + S_nn(u, v)/S_ff(u, v)) ]   (3.64)
There is no ill-conditioned behavior associated with the optimal filter. Even though H(u, v) may have zero elements, the denominator in equation (3.63) is then determined by the ratio S_nn/S_ff. Thus, a restored image can be generated even if the matrix B is singular and noise is present. In fact, the presence of noise makes the restoration possible. If noise approaches zero (S_nn → 0), the optimal filter becomes the inverse filter. The visual appearance of optimally restored images for low signal-to-noise ratios is often not good. This deficiency may be due to using a linear estimate and a mean-square-error criterion. The error criterion should also take into account the frequency response of the human visual system and its logarithmic response to varying light intensities, and it should insure a positive restoration.
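The Wiener filter (3.63) can be sketched as follows. As a common simplification, the spectral-density ratio S_nn/S_ff is approximated here by a constant 1/snr; this choice, the synthetic degradation, and all names are assumptions of the illustration:

```python
import numpy as np

def wiener_filter(g, H, snr):
    """Wiener restoration, eq. (3.63), with S_nn/S_ff taken as the
    constant 1/snr."""
    W = np.conj(H) / (np.abs(H)**2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * np.fft.fft2(g)))

# Blur plus additive noise; the filter behaves like 1/H where the signal
# dominates and rolls off where |H| is small, suppressing noise.
rng = np.random.default_rng(2)
f = rng.uniform(size=(64, 64))
u = np.fft.fftfreq(64)[:, None]
v = np.fft.fftfreq(64)[None, :]
H = np.exp(-10.0 * (u**2 + v**2))
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H)) + 0.01 * rng.standard_normal((64, 64))
f_hat = wiener_filter(g, H, snr=100.0)
```

Setting snr very large reduces the expression to the inverse filter (3.57), in line with the S_nn → 0 limit noted above.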
3.4.4 Other Radiometric Restoration Techniques

Radiometric restoration without knowledge of the PSF may be achieved by homomorphic filtering [33, 34]. Here the recorded image is mapped
from its original representation into another domain where the original image and the degrading function are additively related. Such a transformation is the Fourier transform, which maps convolution into multiplication, followed by the complex logarithm, which maps multiplication into addition [35, 36]. The inverse transform is the complex exponential function followed by the inverse Fourier transform. The restoration criterion is to find a linear operator W such that the power spectral densities of the restored and of the original image are equal. In the spatial frequency domain

F̂(u, v) = G(u, v) W(u, v)   (3.65)

with the criterion

S_f̂f̂(u, v) = S_ff(u, v)   (3.66)

The homomorphic restoration filter is

W(u, v) = [S_ff(u, v)/S_gg(u, v)]^1/2 = { S_ff(u, v) / [|H(u, v)|² S_ff(u, v) + S_nn(u, v)] }^1/2   (3.67)

which is obtained without detailed knowledge of H and S_nn. Hunt [37] proposed a constrained least-squares filter that does not require knowledge of the spectral densities. The restoration filter is

W(u, v) = H*(u, v) / [|H(u, v)|² + γ |C(u, v)|²]   (3.68)

where C is a constraint matrix. The parameter γ is determined by iteration.
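A minimal sketch of the constrained least-squares filter (3.68) follows. The choice of a discrete Laplacian as the constraint C is a common smoothness constraint but is an assumption here, as are the fixed γ (the text determines it by iteration), the synthetic degradation, and all names:

```python
import numpy as np

def cls_filter(g, H, gamma):
    """Constrained least-squares restoration, eq. (3.68), using the
    transfer function of a discrete Laplacian as the constraint C."""
    n, m = g.shape
    lap = np.zeros((n, m))
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    C = np.fft.fft2(lap)
    W = np.conj(H) / (np.abs(H)**2 + gamma * np.abs(C)**2)
    return np.real(np.fft.ifft2(W * np.fft.fft2(g)))

# A synthetic blur-plus-noise degradation for testing.
rng = np.random.default_rng(3)
f = rng.uniform(size=(64, 64))
u = np.fft.fftfreq(64)[:, None]
v = np.fft.fftfreq(64)[None, :]
H = np.exp(-10.0 * (u**2 + v**2))
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H)) + 0.01 * rng.standard_normal((64, 64))
f_hat = cls_filter(g, H, gamma=1e-4)
```

Because |C| vanishes at zero frequency and grows with frequency, γ|C|² penalizes exactly the high frequencies where noise amplification would otherwise occur.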
The Wiener filter requires a maximum amount of a priori information, namely, the spectral densities of f and n. The constrained least-squares filter does not require knowledge of the power spectra. In the homomorphic filtering approach the PSF is estimated from the degraded image. The inverse filter requires no a priori information, and this fact is reflected in the usually poor quality of the restoration.

REFERENCES

[1] Turner, R. E.; et al.: Influence of the Atmosphere on Remotely Sensed Data. Proceedings of Conference on Scanners and Imagery Systems for Earth Observations, SPIE J., vol. 51, 1974, pp. 101-114.
[2] Chavez, P.: Atmospheric, Solar, and MTF Corrections for ERTS Digital Imagery, Proc. Am. Soc. Photogrammetry, Oct. 1975, pp. 69-69a.
[3] Rogers, R. H.; and Peacock, K.: A Technique for Correcting ERTS Data for Solar and Atmospheric Effects. Symposium on Significant Results Obtained from the Earth Resources Technology Satellite-1, NASA SP-327, Washington, D.C., 1973, pp. 1115-1122.
[4] Murray, W. L.; and Jurica, J. G.: The Atmospheric Effect in Remote Sensing of Earth Surface Reflectivities. Laboratory for Applications of Remote Sensing, Information Note 110273, Purdue University, Lafayette, Ind., 1973.
[5] Fraser, R. S.: Computed Atmospheric Corrections for Satellite Data. Proceedings of Conference on Scanners and Imagery Systems for Earth Observations, SPIE J., vol. 51, 1974, pp. 64-72.
[6] Potter, J.; and Sheldon, M.: Effect of Atmospheric Haze and Sun Angle on Automatic Classification of ERTS-1 Data. Proceedings of the Ninth International Symposium on Remote Sensing of Environment, 1974.
[7] Hammond, H. K.; and Mason, H. L.: Precision Measurement and Calibration. NBS Special Publication 300, vol. 7, 1971 (Order No. C13.10:300/V.7).
[8] Advanced Scanners and Imaging Systems for Earth Observations. NASA SP-335, Washington, D.C., Dec. 1972.
[9] Papoulis, A.: The Fourier Integral and Its Applications. McGraw-Hill, New York, 1962.
[10] Seidman, J.: Some Practical Applications of Digital Filtering in Image Processing. Proceedings of Computer Image Processing and Recognition, University of Missouri, Columbia, Mo., Aug. 1972.
[11] Rindfleisch, T. C.; et al.: Digital Processing of the Mariner 6 and 7 Pictures, J. Geophys. Res., vol. 76, 1971, pp. 394-417.
[12] Goetz, A. F. H.; Billingsley, F. C.; Gillespie, A. R.; Abrams, M. J.; and Squires, R. L.: Application of ERTS Images and Image Processing to Regional Geologic Problems and Geologic Mapping in Northern Arizona. NASA/JPL TR 32-1597, May 1975.
[13] Horn, B. K. P.; and Woodham, R. J.: Destriping Satellite Images. Artificial Intelligence Lab. Rep. AI 467, Massachusetts Institute of Technology, Cambridge, Mass., 1978.
[14] Puccinelli, E. F.: Ground Location of Satellite Scanner Data, Photogr. Eng. and Remote Sensing, vol. 42, 1976, pp. 537-543.
[15] Mottershead, C. T.; and Phillips, D. R.: Image Navigation for Geosynchronous Meteorological Satellites. Seventh Conference on Aerospace and Aeronautical Meteorology and Symposium on Remote Sensing from Satellites, American Meteorological Society, Melbourne, Fla., 1976, pp. 260-264.
[16] ERTS Data Users Handbook. Doc. 71SD4249, NASA, Washington, D.C., 1972, Appendix B.
[17] Caron, R. H.; and Simon, K. W.: Attitude Time Series Estimator for Rectification of Spaceborne Imagery, J. Spacecr. Rockets, vol. 12, 1975, pp. 27-32.
[18] Rifman, S. S.: Digital Rectification of ERTS Multispectral Imagery. Symposium on Significant Results Obtained from the Earth Resources Technology Satellite-1, NASA SP-327, Washington, D.C., 1973, pp. 1131-1142.
[19] Thomas, V. L.: Generation and Physical Characteristics of the Landsat 1 and 2 MSS Computer Compatible Tapes. NASA/GSFC Report X-563-75-223, Nov. 1975.
[20] Forman, M. L.: Interpolation Algorithms and Image Data Artefacts. NASA/GSFC Report X-933-77-235, Oct. 1977.
[21] Andrews, H. C.; and Hunt, B. R.: Digital Image Restoration. Prentice-Hall, Englewood Cliffs, N.J., 1977.
[22] Twomey, S.: On the Numerical Solution of Fredholm Integral Equations of the First Kind by the Inversion of the Linear System Produced by Quadrature, J. Assoc. Comput. Mach., vol. 10, 1963, pp. 97-101.
[23] Sondhi, M. M.: Image Restoration: The Removal of Spatially Invariant Degradations, Proc. IEEE, vol. 60, 1972, pp. 842-853.
[24] Andrews, H. C.: Positive Digital Image Restoration Techniques: A Survey. Report No. ATR-73(8193)-2, Aerospace Corp., Feb. 1973.
[25] McAdam, D. P.: Digital Image Restoration by Constrained Deconvolution, J. Opt. Soc. Am., vol. 60, 1970, pp. 1617-1627.
[26] O'Handley, D. A.; and Green, W. B.: Recent Developments in Digital Image Processing at the Image Processing Laboratory of the Jet Propulsion Laboratory, Proc. IEEE, vol. 60, 1972, pp. 821-828.
[27] Jones, R. A.; and Yeadon, E. C.: Determination of the Spread Function from Noisy Edge Scans, Photogr. Sci. Eng., vol. 13, 1969, pp. 200-204.
[28] Jones, R. A.: An Automated Technique for Deriving MTF's from Edge Traces, Photogr. Sci. Eng., vol. 11, 1967, pp. 102-106.
[29] Berkovitz, M. A.: Edge Gradient Analysis OTF Accuracy Study, in Proceedings of SPIE Seminar on Modulation Transfer Function, Boston, Mass., 1968.
[30] Horner, J. E.: Optical Spatial Filtering with the Least-Mean-Square-Error Filter, J. Opt. Soc. Am., vol. 59, 1969, pp. 553-558.
[31] Helstrom, C. W.: Image Restoration by the Method of Least Squares, J. Opt. Soc. Am., vol. 57, 1967, pp. 297-303.
[32] Slepian, D.: Linear Least-Squares Filtering of Distorted Images, J. Opt. Soc. Am., vol. 57, 1967, pp. 918-922.
[33] Cole, E. R.: The Removal of Unknown Image Blurs by Homomorphic Filtering. Ph.D. dissertation, Department of Electrical Engineering, University of Utah, Salt Lake City, Utah, June 1973.
[34] Cannon, T. M.: Digital Image Deblurring by Nonlinear Homomorphic Filtering. Ph.D. thesis, Computer Science Department, University of Utah, Salt Lake City, Utah, Aug. 1974.
[35] Stockham, T. G.: Image Processing in the Context of a Visual Model, Proc. IEEE, vol. 60, 1972, pp. 828-842.
[36] Oppenheim, A. V.; Schafer, R. W.; and Stockham, T. G.: Nonlinear Filtering of Multiplied and Convolved Signals, Proc. IEEE, vol. 56, 1968, pp. 1264-1291.
[37] Hunt, B. R.: The Application of Constrained Least Squares Estimation to Image Restoration by Digital Computer, IEEE Trans. Comput., vol. C-22, 1973, pp. 805-812.
4. Image Enhancement

4.1 Introduction
The goal of image enhancement is to aid the human analyst in the extraction and interpretation of pictorial information. The interpretation is impeded by degradations resulting from the imaging, scanning, transmission, or display processes. Enhancement is achieved by the articulation of features or patterns of interest within an image and by a display that is adapted to the properties of the human visual system. (See sec. 2.8.) Because the human visual system discriminates many more colors than shades of gray, a color display can represent more detailed information than a gray-tone display. The information of significance to a human observer is definable in terms of the observable parameters contrast, texture, shape, and color [1]. The characteristics of the data and display medium and the properties of the human visual system determine the transformation from the recorded to the enhanced image, and, therefore, the range and distribution of the observable parameters in the resulting image [2-4]. The decisions of which parameter to choose and which features to represent by that parameter are determined by the objectives of the particular application. Enhancement operations are applied without quantitative knowledge of the degrading phenomena, which include contrast attenuation, blurring, and noise. The emphasis is on human interpretation of the pictures for extraction of information that may not have been readily apparent in the original. The techniques try to attenuate or discard irrelevant features and at the same time to emphasize features or patterns of interest [5-7]. Multi-image enhancement operators generate new features by combining components (channels) of multi-images. For multi-images with more than three components, the dimensionality can be reduced to enable an unambiguous color assignment. Enhancement methods may be divided into:

1. Contrast enhancement (gray-scale modification)
2. Edge enhancement
3. Color enhancement (pseudocolor and false color)
4. Multi-image enhancement
Contrast enhancement, edge enhancement, and pseudocolor enhancement are performed on monochrome images or on individual components of multi-images.

4.2 Contrast Enhancement
The goal of contrast enhancement is to produce a picture that optimally uses the dynamic range of a display device. The device may be a television screen, photographic film, or any other equipment used to present an image for visual interpretation. The human eye can simultaneously discriminate only 20 to 30 gray levels [8]. This subjective brightness range does not cover the full gray scale of a display device. For the human eye to see subtle changes in brightness, the contrast characteristics of an image must be adapted to the subjective brightness range. The contrast characteristics of an image are influenced by such factors as camera exposure settings, atmospheric effects, solar lighting effects, and sensor sensitivity. These factors often cause a recorded image not to span the dynamic range to which it is digitized. On the other hand, contrast enhancement is limited by the darkest and the lightest areas in an image. Gray-scale transformations are applied in a uniform way to the entire picture to stretch the contrast to the full dynamic range. Spatially dependent degradations (e.g., vignetting and shading) may be corrected by spatially nonuniform transformations. Spatially independent gray-scale transformations can be expressed by

g_e = T_g(g)        (4.1)
where g and g_e are the recorded and the enhanced image with M rows and N columns, respectively, and T_g is a linear or nonlinear gray-scale transformation that is applied to every point in the image separately. The dynamic range of both gray scales is the same; i.e., 0 ≤ g(j, k) ≤ K and 0 ≤ g_e(j, k) ≤ K, where K = 2^b − 1. The number K is the maximum gray value and b is the number of quantization bits. The quantities g(j, k) and g_e(j, k) are the gray values of g and g_e at row j and column k, respectively. Piecewise linear transformations may be used to enhance the dark, midrange, or bright region of the gray scale and to correct for display nonlinearities. The range [l, u] in the recorded image may be linearly transformed to the range [L, U] in the enhanced image by
g_e(j, k) = (L/l) g(j, k)                                      g(j, k) < l
g_e(j, k) = [(U − L)/(u − l)][g(j, k) − l] + L                 l ≤ g(j, k) ≤ u
g_e(j, k) = [(K − U)/(K − u)][g(j, k) − u] + U                 g(j, k) > u        (4.2)
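As an illustration, the three-segment mapping of equation (4.2) can be sketched in numpy; the function name and the rounding and clipping conventions are choices of this sketch, not of the text:

```python
import numpy as np

def piecewise_stretch(g, l, u, L, U, K=255):
    """Piecewise linear gray-scale transformation of equation (4.2).

    Values below l are compressed into [0, L], values in [l, u] are
    stretched to [L, U], and values above u are compressed into [U, K].
    """
    g = g.astype(np.float64)
    ge = np.empty_like(g)
    low = g < l
    high = g > u
    mid = ~low & ~high
    ge[low] = (L / l) * g[low]
    ge[mid] = (U - L) / (u - l) * (g[mid] - l) + L
    ge[high] = (K - U) / (K - u) * (g[high] - u) + U
    return np.clip(np.round(ge), 0, K).astype(np.uint8)
```

For example, stretching the midrange [50, 200] of an 8-bit image to [10, 245] leaves the endpoints of the gray scale fixed while expanding the contrast of the interior levels.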
Figure 4.1a shows the gray-scale transformation represented by equation (4.2).
Sometimes the high and low values of a digital picture represent saturation effects. It may be necessary to find a piecewise linear transformation that causes a specified percentage p_L of the low values and a percentage p_U of the high values to be set to L and U, respectively [1]. In practice L is usually set to zero, and U is set to K. Consider the following constraints:
(1/MN) Σ_{z=0}^{l} H_g(z) = p_L

(1/MN) Σ_{z=u}^{K} H_g(z) = p_U        (4.3)

with

MN = Σ_{z=0}^{K} H_g(z)
They define the gray levels l and u. The function H_g(z) is the frequency of occurrence of gray level z in g and is called the histogram of image g. The transformation is then given by

g_e(j, k) = L                                                  g(j, k) ≤ l
g_e(j, k) = [(U − L)/(u − l)][g(j, k) − l] + L                 l < g(j, k) < u
g_e(j, k) = U                                                  g(j, k) ≥ u        (4.4)
FIGURE 4.1.  Gray-scale transformations for enhancement. (a) Piecewise linear gray-scale transformation. (b) Saturation of low and high values to black and white.
and shown in figure 4.1b. The shape of an image histogram provides information about the contrast characteristics of an image. For example, a narrow histogram indicates a low-contrast image, and a multimodal histogram indicates the existence of regions with different brightness. Figure 4.2 illustrates linear contrast enhancement. Figure 4.2a shows the recorded Landsat MSS 7 image, and figure 4.2b is the result of applying the gray-scale transformation, (4.3) and (4.4), with p_L = 1 percent and p_U = 1 percent. The histogram of the recorded image is shown in figure 4.3. Nonlinear gray-scale transformations may be used to correct for display nonlinearities. Figure 4.4a shows a logarithmic transform used to compensate for nonlinear film characteristics. For images with bimodal histograms, each of the histogram zones may be enhanced to the full brightness range by the transformation shown in figure 4.4b. The specification of the transformation T_g is facilitated by evaluating the contrast characteristics of a given image from its histogram. Another important type of contrast enhancement is histogram modification, in which a gray-scale transformation is used to give the picture a specified distribution of gray values [9, 10]. Two frequently used distributions approximate a normally distributed (Gaussian) or a flat (constant) histogram. In pictures with a flat histogram all gray levels occur equally often. Histogram flattening (also called histogram equalization) produces pictures with higher contrast, because the points in the densely populated regions of the gray scale are forced to occupy a larger number of gray levels, so that these regions of the gray scale are stretched. Points in sparse regions of the gray scale occupy fewer levels. Figure 4.5 illustrates nonlinear contrast enhancement through histogram modification of the image in figure 4.2a.
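As a sketch of how equations (4.3) and (4.4) might be implemented, the cutoff levels l and u can be read off the cumulative histogram; the function name and defaults (1 percent saturation, 8-bit data, L = 0, U = K) are illustrative assumptions:

```python
import numpy as np

def saturation_stretch(g, p_low=0.01, p_high=0.01, K=255):
    """Linear stretch with saturation of the p_low fraction of the darkest
    and the p_high fraction of the brightest pixels (equations (4.3)-(4.4)).

    The cutoff levels l and u are found from the cumulative histogram;
    values below l map to L = 0 and values above u map to U = K.
    """
    hist = np.bincount(g.ravel(), minlength=K + 1)
    cdf = np.cumsum(hist) / g.size
    l = int(np.searchsorted(cdf, p_low))          # smallest z with cdf >= p_low
    u = int(np.searchsorted(cdf, 1.0 - p_high))   # smallest z with cdf >= 1 - p_high
    ge = (g.astype(np.float64) - l) * (K / max(u - l, 1))
    return np.clip(np.round(ge), 0, K).astype(np.uint8)
```

The clipping step realizes the saturation of the first and third branches of equation (4.4); the interior levels receive the full dynamic range.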
Histogram modification may also be required before comparison of two pictures of the same scene taken at different times. If the pictures were taken under different lighting conditions, the differences can be compensated for by transforming both pictures to a standard histogram.
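Transformation to a standard histogram can be sketched by mapping each gray level through the two cumulative histograms; this quantile-matching formulation is a common implementation choice, not taken verbatim from the text:

```python
import numpy as np

def match_histogram(g, reference, K=255):
    """Transform image g so that its gray-value distribution approximates
    that of a reference image, via their cumulative histograms.

    Useful before comparing two scenes imaged under different lighting:
    both are mapped toward a common (standard) histogram.
    """
    src_cdf = np.cumsum(np.bincount(g.ravel(), minlength=K + 1)) / g.size
    ref_cdf = np.cumsum(np.bincount(reference.ravel(), minlength=K + 1)) / reference.size
    # For each source level, find the reference level with the nearest CDF value.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, K).astype(np.uint8)
    return lut[g]
```

Matching an image to itself leaves it unchanged when its cumulative histogram is strictly increasing, which is a convenient sanity check on the lookup table.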
4.3 Edge Enhancement
Imaging and scanning processes cause blurring degradations. Blurring is an averaging operation, the extent of which is determined by the point spread function (PSF) of the system. Blurred images may therefore be sharpened by a differentiation process. For space-invariant systems the imaging process can be described by the convolution of the original image f with the system PSF h. (See eq. (2.80).) Because of the low-pass characteristics of imaging systems, higher spatial frequencies are weakened more than lower frequencies. Thus, sharpening or edge enhancement can be achieved by high-pass filtering, emphasizing higher spatial frequencies
FIGURE 4.2a.  Linear contrast enhancement. Recorded Landsat MSS 7 image (scene 69215192).
FIGURE 4.2b.  Image of part a enhanced with 1 percent of the lowest and highest pixel values set to black and white.
FIGURE 4.3.  Histogram of the recorded image in figure 4.2a.
FIGURE 4.4.  Nonlinear contrast enhancement. (a) Logarithmic mapping to correct for film nonlinearity. (b) Enhancement of different intensity ranges to full brightness.
without quantitative knowledge of the PSF [9]. Filtering may be performed in the spatial domain or by multiplication of Fourier transforms in the frequency domain. (See sec. 2.2.5.) When a picture is blurred and noisy, differentiation or high-pass filtering cannot be used indiscriminately for edge enhancement. Noise generally involves high rates of change of gray levels and hence high spatial frequencies. Sharpening enhances the noise. Therefore, the noise should be reduced or removed before edge enhancement. Simple differentiation operators are the gradient and the Laplacian [9]. The magnitude of the digital gradient at line j and column k of an image g is defined by

|∇g(j, k)| = {[Δ_j g(j, k)]² + [Δ_k g(j, k)]²}^½        (4.5)

and its direction is θ(j, k), where

θ(j, k) = tan⁻¹ [Δ_k g(j, k) / Δ_j g(j, k)]        (4.6)

where

Δ_j g(j, k) = g(j, k) − g(j−1, k)
Δ_k g(j, k) = g(j, k) − g(j, k−1)        (4.7)

The quantities Δ_j g(j, k) and Δ_k g(j, k) are the first differences in the row and column directions, respectively. An edge-enhanced image g_e is

g_e = |∇g|        (4.8)
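Equations (4.5) to (4.8) translate directly into array operations; a minimal numpy sketch (border rows and columns, where no backward difference exists, are set to zero by assumption):

```python
import numpy as np

def gradient_edge_image(g):
    """Edge enhancement by the digital gradient (equations (4.5)-(4.8)).

    Returns the gradient magnitude and direction images computed from the
    first differences in the row and column directions.
    """
    g = g.astype(np.float64)
    dj = np.zeros_like(g)   # row difference,    g(j,k) - g(j-1,k)
    dk = np.zeros_like(g)   # column difference, g(j,k) - g(j,k-1)
    dj[1:, :] = g[1:, :] - g[:-1, :]
    dk[:, 1:] = g[:, 1:] - g[:, :-1]
    magnitude = np.sqrt(dj**2 + dk**2)   # equation (4.5)
    direction = np.arctan2(dk, dj)       # equation (4.6)
    return magnitude, direction
```

Applied to a vertical brightness step, the magnitude image is nonzero only along the step, which is the behavior displayed in figure 4.6a.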
FIGURE 4.5a.  Flat histogram of image in figure 4.2a.
FIGURE 4.5b.  Normally distributed histogram of image in figure 4.2a.
Figure 4.6 shows the results of applying the digital gradient operator to the image in figure 4.2a. Each element in figure 4.6a represents the magnitude of the gradient, given by equation (4.5), at this pixel location, and each element in figure 4.6b represents the direction of the gradient, given by equation (4.6). Black is the direction to the neighbor on the left, and gray values of increasing lightness represent directions of increasing counterclockwise orientation. The digital Laplacian at location (j, k) of an image g is given by

∇²g(j, k) = g(j+1, k) + g(j−1, k) + g(j, k+1) + g(j, k−1) − 4g(j, k)        (4.9)
An edge-enhanced image g_e is obtained as

g_e = ∇²g        (4.10)

Another method of edge enhancement is to subtract the Laplacian from the blurred image g, yielding the enhanced image as [9]:

g_e = g − ∇²g        (4.11)

An element g_e(j, k) of g_e is given by

g_e(j, k) = 5g(j, k) − [g(j+1, k) + g(j−1, k) + g(j, k+1) + g(j, k−1)]        (4.12)
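Equation (4.12) amounts to a five-point weighted sum; a minimal sketch (leaving the border pixels unchanged is an assumption of this illustration):

```python
import numpy as np

def laplacian_sharpen(g):
    """Edge enhancement by subtracting the Laplacian (equation (4.12)):
    g_e(j,k) = 5 g(j,k) - [g(j+1,k) + g(j-1,k) + g(j,k+1) + g(j,k-1)].

    Border pixels, where a neighbor is missing, are left unchanged here.
    """
    g = g.astype(np.float64)
    ge = g.copy()
    ge[1:-1, 1:-1] = (5.0 * g[1:-1, 1:-1]
                      - g[2:, 1:-1] - g[:-2, 1:-1]
                      - g[1:-1, 2:] - g[1:-1, :-2])
    return ge
```

On a uniform region the operator is an identity, while an isolated bright pixel is amplified and ringed by negative values, which is the sharpening behavior described above.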
Edge enhancement may also be performed by filtering. (See sec. 2.2.5.) The enhanced image is obtained by convolving g with a filter function h:

g_e = g * h        (4.13)
Filtering in the frequency domain is performed by multiplication of the Fourier transforms of g and h, G and H, respectively, and inverse transformation of the product:

g_e = F⁻¹{GH}        (4.14)
The design of enhancement filters can be performed intuitively in the spatial frequency domain because of the relationship between sharpness and spatial frequencies. (See sec. 2.2.3.1.) Frequency-domain filtering also permits the enhancement of features in a specific direction. This enhancement is possible because the two-dimensional Fourier transform contains information about the direction of features. High spatial frequencies in a certain direction in the frequency spectrum indicate sharp features orthogonal to that direction in the original image.
FIGURE 4.6a.  Edge enhancement by differentiation. Each element represents the magnitude of the gradient.
FIGURE 4.6b.  Edge enhancement by differentiation. Each element represents the direction of the gradient.
The Laplacian operations, (4.10) and (4.11), can be computed by the convolution operation, (4.13), with the 3 by 3 filter matrices [12]:

        [  0  −1   0 ]
[h₁] =  [ −1   4  −1 ]        (4.15)
        [  0  −1   0 ]

        [  0  −1   0 ]
[h₂] =  [ −1   5  −1 ]        (4.16)
        [  0  −1   0 ]
Whenever the filter weight matrix exceeds a size of about 13 by 13 elements, filtering in the frequency domain, including the necessary Fourier transforms, is faster than direct convolution. However, frequency-domain filtering requires that the dimensions of the input image be a power of 2. The subtle brightness variations that define the edges and texture of objects are important for subjective interpretation [13]. An enhancement of local contrast may be achieved by suppressing slow brightness variations, which tend to obscure the interesting details in an image. Slow brightness variations are composed of low spatial frequencies, whereas fine detail is represented by higher spatial frequencies. A filter that suppresses low spatial frequency components enhances local contrast. One frequently used filter [14] has the transfer function

H(u, v) = 1 − [1 − H(0, 0)] (sin πKu / πKu)(sin πLv / πLv)        (4.17)
which is shown in figure 4.7 for one dimension. The variables K and L are the dimensions of the filter. For K and L on the order of 51 to 201, only the lowest spatial frequencies are removed. Smaller filter sizes (K, L = 3 to 21) can be used for edge enhancement. Enhancement of features perpendicular to the row or column directions is possible with one-dimensional filters (K or L = 1). However, distortions in the form of an enhancement in certain additional directions are introduced. In the spatial domain the enhanced image is efficiently computed by subtracting the average of a K by L area from the recorded image for each point. Figure 4.8 shows the effects of the filter described by equation (4.17) for K = L = 101, 31, and 11. The larger the filter size, the smaller is the increase in amplitude at high spatial frequencies. Enhancement of fine detail in visible images may be obtained by homomorphic filtering (see sec. 2.2.5) based on the illumination-reflectance model, equation (2.1), for image formation [15]. The illumination component is usually composed of low spatial frequencies, but the reflectance component is characterized by high spatial frequencies, representing fine detail in the image.
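The spatial-domain form of the local-contrast filter, subtracting the K by L neighborhood average, can be sketched directly; the h00 parameter, corresponding to H(0, 0) in equation (4.17), and the plain-loop formulation are choices of this illustration:

```python
import numpy as np

def local_contrast(g, K=31, L=31, h00=0.0):
    """Local-contrast enhancement in the spatial domain (cf. eq. (4.17)):
    the K by L neighborhood average is subtracted from each pixel, so slow
    brightness variations are suppressed. h00 (= H(0,0)) controls how much
    of the local mean is retained; h00 = 0 removes it completely.

    A plain-loop sketch: clear, but slow for large images.
    """
    g = g.astype(np.float64)
    M, N = g.shape
    ge = np.empty_like(g)
    for j in range(M):
        j0, j1 = max(0, j - K // 2), min(M, j + K // 2 + 1)
        for k in range(N):
            k0, k1 = max(0, k - L // 2), min(N, k + L // 2 + 1)
            local_mean = g[j0:j1, k0:k1].mean()
            ge[j, k] = g[j, k] - (1.0 - h00) * local_mean
    return ge
```

On a uniform image the output is identically zero for h00 = 0, which is exactly the loss of average brightness in homogeneous regions discussed later in connection with color composites.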
FIGURE 4.7.  Transfer function of local-contrast-enhancement filter (for K = 21).
By applying the logarithmic transform

f = ln (ir) = ln i + ln r        (4.18)

the relation between illumination i and reflectance r becomes additive. Linear high-pass filtering may then be applied to suppress the illumination component and to amplify the reflectance component relatively, thereby achieving enhancement of fine detail. A following exponentiation converts the enhanced image back into the intensity domain. Figure 4.9 is a Landsat MSS 7 image filtered by homomorphic filtering. An overall sharpening of an image can be achieved by application of a bandpass filter that compensates for the low-pass blurring characteristics of the imaging system and is adapted to the characteristics of the human visual system [16].

4.4 Color Enhancement

The human visual system can discriminate only about 20 to 30 shades of gray under a given adaptation level. Under the same conditions it discriminates a much larger number of hues. Thus, the use of color provides a dramatic increase in the amount of displayed information that can be perceived by the observer [17]. The perceived color is a result of the properties of the human visual system in connection with the type of display medium used; i.e., the type of display (e.g., film and primary-color illuminants) and the display excitation.
FIGURE 4.8a.  Recorded contrast-enhanced image (scene 69215192).
FIGURE 4.8b.  Image in part a enhanced with a 101 by 101 filter.
FIGURE 4.8c.  Image in part a enhanced with a 31 by 31 filter.
FIGURE 4.8d.  Image in part a enhanced with an 11 by 11 filter.
FIGURE 4.9a.  Recorded contrast-enhanced image.
FIGURE 4.9b.  Result of high-emphasis filtering with a 31 by 31 filter after logarithmic transformation of image in part a.
When devices having slightly different sets of primary colors are employed, a
transformation may be necessary to obtain color consistency (e.g., if an image is displayed temporarily on a television monitor and then recorded on film). Digital multi-images may be displayed as color pictures by selecting three components for assignment to the primary colors. By varying the values of these components, all colors realizable within the constraints of the display medium may be generated. A color space that is linear in the color parameters brightness, hue, and saturation (see sec. 2.8.2) does not in general lead to a visually perceived linear color range if linear relationships between color parameters and primary components are used. However, the components of a multi-image or the primary colors may be transformed to the color parameters brightness, hue, and saturation. An approximately equal color distribution may then be obtained by subsequent independent nonlinear transformations of each color parameter. This approach is justified because color perception cannot be as simply defined in an image as for isolated uniform areas (upon which color order systems are based) but must be defined for spatial changes (variegation) [4]. If f_n, n = 1, …, P denotes the P given component images, hue H, saturation S, and brightness B images may be defined as
H(j, k) = tan⁻¹ [ Σ_n f_n(j, k) sin φ_n / Σ_n f_n(j, k) cos φ_n ]        (4.19)

S(j, k) = 1 − [min_n f_n(j, k)] / [max_n f_n(j, k)]        n = 1, …, P        (4.20)

B(j, k) = [max_n f_n(j, k)] / K        n = 1, …, P; j = 1, …, M; k = 1, …, N        (4.21)
and K is the maximum possible intensity value of the original component images. The angles φ_n determine the directions of the component image axes in a polar coordinate system (see fig. 2.27), and P is the dimensionality of the multi-image. For P = 3, a choice of φ₁ = 0°, φ₂ = 120°, and φ₃ = 240° corresponds to three primary colors.

4.4.1 Pseudocolor

In observing black-and-white images, the eye responds only to brightness differences; i.e., black-and-white images restrict the operation of the
visual system to the vertical axis of the color perception space (fig. 2.27); the ability of the visual system to distinguish many hues and many saturations at each brightness level is not used [18]. By simultaneous brightness and chromatic variation, many more levels of detail can be distinguished. Small gray-scale differences in the black-and-white image that cannot be distinguished by the human eye are mapped into different colors. Consequently, more information can be extracted in a shorter time through the substitution of color for black and white. The conversion of a black-and-white image into a color image is achieved by a pseudocolor transformation. Three pseudocolor component images are produced by controlling the relationship between the colors in the final image and the corresponding intensity in the original. Proper choice of this relationship permits full use of the abilities of the human visual system to use hue and saturation in addition to brightness for the purpose of discrimination. The three component images are then combined by using appropriate amounts of three primary colors. One pseudocolor technique is known as level slicing, where each gray level or a range of gray levels is mapped into a different color. To avoid the introduction of artificial contours, as in level slicing, a continuous
transformation of the gray scale into the color space may be performed [4]. One transformation that results in a maximum number of discernible levels is to project the gray values onto a scale of hues. The projection can be scaled and shifted to include only a particular part of the entire hue scale. Pseudocolor enhancement is illustrated in figure 4.10, showing a Heat Capacity Mapping Mission (HCMM) thermal infrared image (spectral band, 10.5 to 12.5 μm; spatial resolution, 500 m) of the Eastern United States. Figure 4.10a is a contrast-enhanced black-and-white image. Figure 4.10b shows the corresponding level-sliced pseudocolor image with the intensity range divided into 32 colors. Blue represents the coldest, and red and white represent the warmest, areas in the thermal infrared image. Figure 4.10c shows the pseudocolor image obtained by mapping the gray scale of the black-and-white image onto the hue scale. The transformation can be designed to obtain an approximately equal visual distribution of colors produced with a particular display device.

4.4.2 False Color
False-color enhancement is used to display multispectral information from the original scene, where the spectral bands are not restricted to the visible spectrum. The goal is to present certain spectral information from the object scene rather than to achieve color fidelity. Assuming spatial registration, any three components of a multispectral image may be selected and combined by using appropriate primary colors. Variations in the spectral response of patterns then appear as color differences in the
FIGURE 4.10a.  Contrast-enhanced black-and-white image.
FIGURE 4.10b.  Level-sliced pseudocolor version of image in part a with 32 colors.
FIGURE 4.10c.  Gray scale of image in part a mapped onto hue scale in color space.
composite image. These colors may show no similarity with the actual colors of the pattern. Ratios, differences, and other transformations of the spectral bands may also be displayed as false-color composites. Producing a good false-color image requires careful contrast enhancement of each component to obtain a good balance and range of colors in the composite. Generally, good results are obtained by applying contrast transformations to the three component images in such a way that their histograms look similar in shape and that each individual component has appropriate contrast when displayed as a black-and-white image. These transformations ensure good color and brightness variations. They can be performed by automatic histogram normalization or by individual determination of contrast characteristics from the original histograms. Histogram flattening by approximation of a ramp cumulative distribution function of the gray values often tends to produce high saturation and excessive contrast. Depending on the scene, this normalization may or may not be desirable. The approximation of a normally distributed (Gaussian) histogram produces less saturation. The transformations should assign the mean of each enhanced component to the center of the dynamic range of the display device. Filtering of the component images may be required before false-color composition. Some filtering techniques, such as edge enhancement to correct for the low-pass characteristics of the imaging system or bandpass filtering to enhance visual perception, can be performed separately on the component images without loss of color information. The false-color image pair in figure 4.11 shows the result of filtering three Landsat MSS image components with a logarithmic bandpass filter adapted to the human visual system.
The elimination of large-scale brightness variations by edge enhancement with the local-contrast-enhancement filter and the transfer function in equation (4.17), however, results in a loss of color information. This loss occurs because the average brightness of any homogeneous region in the image whose size is of the order of the filter size is zero. Therefore, a color composite of images enhanced with smaller filter sizes has a grayish appearance. This effect is illustrated in figure 4.12a, which shows a false-color display of three Landsat MSS components, each filtered with the edge-enhancement filter of equation (4.17), with K = L = 31. The color loss is less obvious in the homomorphically filtered version of the same scene shown in figure 4.12b. This problem can be avoided by separating the color information from brightness and filtering only the brightness component. The original component images are transformed to the hue, saturation, and brightness color coordinate system, and filtering is performed only on the brightness component. The inverse transformation is then applied before display. Furthermore, more than three component images may be transformed to the color space. Figure 4.13 shows the result of transforming four
FIGURE 4.11.  Color edge enhancement by bandpass filtering. (a) False-color display of three contrast-enhanced Landsat images (MSS 4 = blue; MSS 5 = green; MSS 7 = red). (b) False-color display of the edge-enhanced Landsat MSS image components.
FIGURE 4.12a.  Color edge enhancement with a 31 by 31 filter applied directly to the image components.
FIGURE 4.12b.  Color edge enhancement by homomorphic filtering of the image components with a 31 by 31 filter.
FIGURE 4.13.  Color edge enhancement by filtering in color space. Brightness component was filtered with a 31 by 31 edge-enhancement filter with characteristics given by equation (4.17).
Landsat MSS bands with equations (4.19), (4.20), and (4.21) and performing edge-enhancement filtering on the brightness component.

4.5 Multi-Image Enhancement
Multi-images convey more information than monochrome images. Multi-images are obtained by imaging a scene in more than one spectral band or by monitoring a scene over a period of time. Multi-image enhancement techniques involve independent contrast enhancement of the component images or linear and nonlinear combinations of the component images, including ratioing, differencing, and principal-component analysis. The enhanced components may be displayed as false-color composites.

4.5.1 Ratioing
Multispectral images may be enhanced by ratioing individual spectral components and then displaying the various ratios as color composites. Ratioing two spectral component images suppresses brightness variations due to topographic relief and enhances subtle spectral (color) variations [19]. If g is a multi-image with P components g_i, i = 1, …, P, then a ratio image g_k^R, k = 1, …, P(P−1), is given by

g_k^R = a (g_i / g_j) + b        (4.22)

The contrast in ratio pictures is greater for features with ratios larger than unity than for those with ratios less than unity. By computing the logarithm of the ratios, equal changes in the denominator and numerator pictures result in equal changes in the logarithmic ratio image. Thus, the logarithmic ratio image shows greater average contrast between features. Ratioing also enhances random noise or coherent noise that is not correlated in the component images. Thus, striping should be removed before ratioing. (See sec. 3.2.3.) Atmospheric effects may also be enhanced by ratioing. The diffuse scattered light from the sky assumes a larger portion of the total illumination as the incident angle of direct solar illumination decreases. The effect is that the color of the scene is partly a function of topography. The scattered light from the sky can be estimated by examination of dark features shaded from the Sun by large clouds. The resulting values s_i represent the scanner readings that would occur if the scene were illuminated only by light scattered from the sky. (See sec. 3.2.2.) Because these values do not change significantly over a scene of limited size, a first-order atmospheric correction for ratioing may be performed by

g_k^R = a [(g_i − s_i) / (g_j − s_j)] + b        (4.23)
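Equations (4.22) and (4.23) can be combined into one sketch; the guard against division by zero (eps) and the optional logarithm are implementation choices of this illustration, not part of the text:

```python
import numpy as np

def ratio_image(gi, gj, a=1.0, b=0.0, si=0.0, sj=0.0, log=False, eps=1e-6):
    """Ratio of two spectral components (equations (4.22)-(4.23)).

    si and sj are optional estimates of the sky-scattered-light offsets
    (first-order atmospheric correction); log=True returns the
    logarithmic ratio, which gives greater average contrast.
    """
    num = gi.astype(np.float64) - si
    den = gj.astype(np.float64) - sj
    r = num / np.maximum(den, eps)   # avoid division by zero
    if log:
        r = np.log(np.maximum(r, eps))
    return a * r + b
```

The constants a and b would then be chosen to scale the result into the dynamic range of the display device.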
The selection of the most useful ratios and their combination into color composites is a problem. The number of possible ratios from a multi-image with P components is n = P(P−1). The number of possible combinations of three of these ratios into a color composite is m = n!/[3!(n−3)!]. The primary colors may be assigned to each triplet in six different ways. Thus, ratioing is only efficient when a priori knowledge of the useful ratios and color combinations is available.
Ratioing has been successfully applied for geologic applications [20]. False-color composites of ratio images provide the geologist with information that cannot be obtained from unprocessed images. Figure 4.14a shows a false-color composite of Landsat MSS bands 4, 5, and 7 of a geologically interesting area in Saudi Arabia. Figure 4.14b is a false-color composite of the ratio images of MSS bands 4 to 5, 5 to 6, and 6 to 7, where each ratio image was contrast enhanced by histogram flattening. Figure 4.15 compares the results of contrast enhancement and ratioing. Figure 4.15a represents a linear contrast enhancement of a Landsat MSS false-color image of the Sahl al Matran area in Saudi Arabia. Figure 4.15b is a false-color composite of the nonlinearly enhanced image components obtained by histogram flattening. Figure 4.15c shows the false-color composite of the contrast-enhanced ratio images.
4.5.2 Differencing
Temporal changes or spectral differences may be enhanced by subtracting components of a multi-image from each other. As for ratioing, the component images must be spatially registered. Temporal changes are determined by subtracting two component images taken at different times. Spectral differences are determined by subtracting two component images taken in different wavelength bands. This difference is a representation of the slope of the spectral reflectance curve if the images are corrected for distorting factors, such as instrument calibration and atmospheric transmittance. Differences may have positive or negative values. For display, the differences must be scaled to lie within the dynamic range of the display device. The scaling should also produce sufficient contrast to make changes visible. These requirements are fulfilled by

g^D = a(g_i − g_j) + b        (4.24)
where g_i and g_j are component images and g^D is the difference image. The constants a and b are usually determined so that zero difference is represented as midscale gray (g^D = 128 for 8-bit quantization), and differences of magnitude greater than 64 are saturated to white for positive differences and black for negative differences. Small differences are best displayed by pseudocolor enhancement.
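Equation (4.24), with the scaling conventions just described (zero difference to midscale gray, magnitude 64 saturating), can be sketched as follows; the particular choice of scale factor is an assumption consistent with those conventions:

```python
import numpy as np

def difference_image(gi, gj, mid=128, span=64, K=255):
    """Scaled difference of two registered components (equation (4.24)).

    Zero difference maps to mid-scale gray (128 for 8-bit data);
    differences of magnitude >= span saturate to black or white.
    """
    d = gi.astype(np.float64) - gj.astype(np.float64)
    gd = mid + d * ((K + 1) / (2 * span))   # slope 2 for the default values
    return np.clip(np.round(gd), 0, K).astype(np.uint8)
```

With the defaults, a difference of 0 maps to 128, +64 or more saturates to white, and −64 or less saturates to black.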
FIGURE 4.14.  Color display of ratio images. (a) False-color composite of Landsat MSS bands 4 (blue), 5 (green), and 7 (red). (b) False-color composite of ratio images of MSS bands 4 to 5 (blue), 5 to 6 (green), and 6 to 7 (red) (scene 122607011).
FIGURE 4.15.  Contrast enhancement and ratioing (scene 135707285).
FIGURE 4.15.  Continued.
FIGURE 4.15.  Continued.
Differencing is a simple method for edge enhancement. Shifting an image by one row and by one column and subtracting it from the original produces a picture that represents first differences in the row and column directions. An application of image differencing for the detection and enhancement of temporal change is illustrated in figure 4.16 [21]. Figures 4.16a and 4.16b show two registered Landsat MSS false-color images taken at different times. Figure 4.16c is a display of the differences between these two images. Areas of major change appear light (increased reflectance). Areas with little change are displayed in midgray. To extract the areas of major change, the difference image is brought to a threshold (see sec. 7.2) for values three standard deviations above and below the mean. The resulting binary image representing the changed areas in white is shown in figure 4.16d. Differencing can only detect the location and size of changed areas. The type of change may be determined by image classification. (See ch. 8.) The binary image in figure 4.16d may be used as a mask to extract the changed areas for classification, thereby considerably reducing the amount of data to be processed [21]. The total change combined with figure 4.16b is shown in figure 4.17a. The classified changed areas are overlaid on one black-and-white component (MSS 5) of the multi-image in figure 4.17b. Changes in agricultural and in urban and industrial areas are shown in green and red, respectively. The yellow areas represent changes that could not be uniquely identified.

4.5.3 Transformation to Principal Components
The problem of selecting a subset of a multi-image for enhancement by false-color compositing, ratioing, or differencing is generally difficult. Usually an intuitive selection, based on known physical characteristics of the scenes and on experience, is made. Multispectral images often exhibit high correlations between spectral bands; therefore, the redundancy between the components of such multi-images may be significant. The Karhunen-Loève (K-L) transform (see sec. 2.6.1.4) to principal components provides a new set of component images that are uncorrelated and are ranked so that each component has variance less than the previous component. Thus, the K-L transform can be used to reduce the number of spectral components to fewer principal components that account for all but a negligible part of the variance in the original multispectral image. The principal component images may be enhanced and combined into false-color composites. The principal components of a multi-image g are obtained by the transformation

g^P = T(g − m)        (4.25)
FIGURE 4.16. Detection of temporal change by differencing.
FIGURE 4.17. Combination of changed areas with original image. (a) Total change overlaid on image in figure 4.16b. Total change is shown in yellow. (b) Classified change overlaid on MSS band 5 (agriculture = green; urban and industrial areas = red; ambiguous areas = yellow). (Images courtesy of R. McKinney, Computer Sciences Corp.)
where g is a vector whose elements are the components at a given location (j, k) in the original multiimage, and m is the mean vector of g; i.e., m = E(g). The components of the vector g′ are the principal components at the location (j, k). T is the P by P unitary matrix whose rows are the normalized eigenvectors t_p, p = 1, ..., P, of the spectral covariance matrix C of g, arranged in descending order according to the magnitude of their corresponding eigenvalues: T = (t_1, t_2, ..., t_P)ᵀ. The covariance matrix is computed as

C = E{(g − m)(g − m)ᵀ}   (4.26)

The eigenvalues λ_p and the eigenvectors t_p of C are determined by solving the equation (see sec. 2.6.1.4)

C t_p = λ_p t_p   (2.123)

The eigenvectors t_p form the basis of a space in which the covariance matrix is diagonal. Therefore, the principal components are uncorrelated. The eigenvalues λ_p are the variances σ_p² of the principal components; i.e., σ_p² = λ_p, p = 1, ..., P. They indicate the relative importance of each component. The procedure to use the KL transform for the enhancement of multiimages consists of the following steps:

1. Compute the covariance matrix (eq. 4.26) of the multiimage in the spectral dimension and its eigenvectors (eq. 2.123).
2. Transform the given image vector g to principal components by using equation (4.25).
3. For false-color display, select and enhance the contrast of three components and combine them into a color composite.

The data that enter into the computation of the covariance matrix determine the characteristics of the enhancement. Computing the covariance matrix from all data of a scene results in a mean enhancement. To discriminate patterns of interest in the final display, the covariance matrix must be computed for selected training areas representing these patterns.

An enhancement of the signal with respect to additive uncorrelated noise is achieved by the KL transform. Let g be given by

g = f + n   (4.27)

where the elements of the noise vector n are assumed to be uncorrelated, zero-mean, identically distributed random variables. Thus, the noise covariance matrix is C_n = σ_n² I, where I is the P by P identity matrix. The covariance matrix of g is

C = C_f + C_n   (4.28)

and equation (2.123) becomes

(C_f + C_n) t_p = λ_p t_p   (4.29)
or

C_f t_p = (λ_p − σ_n²) t_p = λ_p^f t_p   (4.30)

Thus, the eigenvectors forming the transformation matrix T are insensitive to the noise, and the eigenvalues of the noise-free image f are λ_p^f, where

λ_p^f = λ_p − σ_n²   (4.31)

The transformation (4.25) has no effect on the variance σ_n² of uncorrelated, identically distributed noise. The maximum signal-to-noise ratio (SNR) in the original multiimage may be defined as [22]:

SNR_f = σ_f²/σ_n²   (4.32)

where

σ_f² = max {σ_f1², σ_f2², ..., σ_fP²}   (4.33)

The maximum SNR in the principal component image is

SNR_p = λ_1^f/σ_n²   (4.34)

Because λ_1^f > σ_f², an enhancement of the signal with respect to the noise is achieved. The KL transform permits estimation of the noise level in correlated multiimages. Because λ_P^f ≈ 0 for correlated data, equation (4.31) yields

σ_n² ≈ λ_P   (4.35)
Thus, the variances of the original components can be divided by the eigenvalue of the last principal component to give a measure of the SNRs in the original image.

Figure 4.18 shows eight components of a multispectral image taken by an airborne Multispectral Scanner Data System (MSDS) [23] over an area in Utah. The scanner resolution is approximately 6 m, and the data are quantized to 8 bits. The spectral channels, the corresponding wavelength bands, and the means and variances are shown in table 4.1. Each pixel in the multispectral image is a P-dimensional vector g, with P = 8, and the components are the quantized spectral intensities for that point. The covariance matrix associated with g is computed by equation (4.26). Its eigenvalues, the percentage variances in the principal components, and the cumulative percentage variances are shown in table 4.2. The principal component images g′ obtained by equation (4.25) are shown in figure 4.19. The first three principal component images contain 97 percent of the original data variance. The number of possible combinations of three of the original spectral components for false-color display is n = P!/[3!(P − 3)!] = 56. False-color composites of principal component images are shown in figure 4.20.
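For illustration, the spectral covariance, eigendecomposition, and transformation of equations (4.25), (4.26), and (2.123) can be sketched for the two-band case, where the eigenvectors of the 2 by 2 covariance matrix have a closed form. This is a minimal plain-Python sketch; the function name and the toy bands are invented for the example:

```python
import math

def principal_components_2band(band1, band2):
    """KL transform (eq. 4.25) of a two-band multiimage using the
    closed-form eigendecomposition of the 2x2 spectral covariance
    matrix (eq. 4.26).  Returns the eigenvalues in descending order
    and the two principal-component planes flattened to lists."""
    g1 = [v for row in band1 for v in row]
    g2 = [v for row in band2 for v in row]
    n = len(g1)
    m1, m2 = sum(g1) / n, sum(g2) / n
    # Spectral covariance matrix C = E{(g - m)(g - m)^T}  (eq. 4.26)
    c11 = sum((a - m1) ** 2 for a in g1) / n
    c22 = sum((b - m2) ** 2 for b in g2) / n
    c12 = sum((a - m1) * (b - m2) for a, b in zip(g1, g2)) / n
    # Eigenvalues of [[c11, c12], [c12, c22]], largest first.
    t = 0.5 * (c11 + c22)
    d = math.sqrt((0.5 * (c11 - c22)) ** 2 + c12 * c12)
    lam1, lam2 = t + d, t - d
    # Normalized eigenvector for lam1; the second one is orthogonal.
    if c12 != 0:
        vx, vy = lam1 - c22, c12
    else:
        vx, vy = (1.0, 0.0) if c11 >= c22 else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    t1 = (vx / norm, vy / norm)
    t2 = (-t1[1], t1[0])
    # g' = T(g - m): the rows of T are the eigenvectors t1, t2  (eq. 4.25)
    pc1 = [t1[0] * (a - m1) + t1[1] * (b - m2) for a, b in zip(g1, g2)]
    pc2 = [t2[0] * (a - m1) + t2[1] * (b - m2) for a, b in zip(g1, g2)]
    return (lam1, lam2), (pc1, pc2)

# Two highly correlated toy bands (band2 is roughly band1 plus noise).
band1 = [[10, 20], [30, 40]]
band2 = [[12, 21], [33, 39]]
lams, pcs = principal_components_2band(band1, band2)
```

The resulting components are uncorrelated, and the variance of the first component equals the largest eigenvalue, as the text states; for P greater than 2 a numerical eigensolver would replace the closed form.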
FIGURE 4.18. Components of the eight-channel MSDS multispectral image.
TABLE 4.1. Wavelength Bands, Mean Values, and Variances of Eight-Channel MSDS Multispectral Image

Channel   Wavelength band (μm)    Mean    Variance
   1           0.46-0.49         116.9      440.6
   2           0.53-0.58          99.0      510.3
   3           0.65-0.69         129.9      739.3
   4           0.72-0.76         152.6    1,397.3
   5           0.77-0.81         160.2    1,685.9
   6           0.82-0.88         123.2    1,255.0
   7           1.53-1.62         128.4      679.1
   8           2.30-2.43         140.2      431.8
TABLE 4.2. Eigenvalues and Percent Variances of Principal Components

Principal component         1        2       3       4      5      6      7      8
Eigenvalue              6090.1    472.8   342.7   119.1   34.8   24.4   23.3    2.6
Percent variance         85.66     6.65    4.82    1.68   0.49   0.34   0.32   0.04
Cumulative percent
  variance               85.66    92.31   97.13   98.81   99.3  99.64  99.96  100.0
FIGURE 4.19. Principal component images of the eight-channel MSDS multispectral image.
FIGURE 4.20a. False-color display of principal component images. Original channels 3 (red), 6 (green), and 8 (blue).
FIGURE 4.20b. False-color display of principal component images. Principal components 1 (red), 2 (green), and 3 (blue).
FIGURE 4.20c. False-color display of principal component images. Principal components 2 (red), 3 (green), and 4 (blue).
REFERENCES

[1] Schwartz, A. A.: New Techniques for Digital Image Enhancement. Proceedings of Caltech/JPL Conference on Image Processing Technology, Data Sources and Software for Commercial and Scientific Applications, California Institute of Technology, Pasadena, Calif., Nov. 1976, pp. 5-1-5-10.
[2] Levi, L.: On Image Evaluation and Enhancement, Opt. Acta, vol. 17, 1970, pp. 59-76.
[3] Campbell, F. W.: The Human Eye as an Optical Filter, Proc. IEEE, vol. 56, 1968, pp. 1009-1014.
[4] Fink, W.: Image Coloration as an Interpretation Aid. Proceedings of OSA/SPIE Meeting on Image Processing, Asilomar, Calif., vol. 74, 1976.
[5] Andrews, H. C.; Tescher, A. G.; and Kruger, R. P.: Image Processing by Digital Computer, IEEE Spectrum, vol. 9, no. 7, 1972, pp. 20-32.
[6] Nathan, R.: Picture Enhancement for the Moon, Mars and Man, in Cheng, G. C., et al., eds.: Pictorial Pattern Recognition. Thompson, Washington, D.C., 1968, pp. 239-266.
[7] Selzer, R. H.: Improving Biomedical Image Quality with Computers. NASA JPL TR 32-1336, 1968.
[8] Huang, T. S.: Image Enhancement: A Review, Opto-Electronics, vol. 1, 1969, pp. 49-59.
[9] Rosenfeld, A.; and Kak, A. C.: Digital Picture Processing. Academic Press, New York, 1976.
[10] Hummel, R. A.: Histogram Modification, Computer Graphics and Image Processing, vol. 4, 1975, p. 209, and vol. 6, 1977, p. 184.
[11] O'Handley, D. A.; and Green, W. B.: Recent Developments in Digital Image Processing at the Image Processing Laboratory at the Jet Propulsion Laboratory, Proc. IEEE, vol. 60, 1972, pp. 821-828.
[12] Prewitt, J. M. S.: Object Enhancement and Extraction, in Lipkin, B. S.; and Rosenfeld, A.: Picture Processing and Psychopictorics. Academic Press, New York and London, 1970, pp. 75-149.
[13] Podwysocki, M. H.; Moik, J. G.; and Shoup, W. C.: Quantification of Geologic Lineaments by Manual and Machine Processing Techniques. NASA Goddard Space Flight Center, X-923-75-183, July 1975.
[14] Seidman, J.: Some Practical Applications of Digital Filtering in Image Processing. Proceedings of Computer Image Processing and Recognition, University of Missouri, Columbia, Mo., Aug. 1972.
[15] Stockham, T. S.: Image Processing in the Context of a Visual Model, Proc. IEEE, vol. 60, 1972, pp. 828-842.
[16] Billingsley, F. C.; Goetz, A. F. H.; and Lindsley, J. N.: Color Differentiation by Computer Image Processing, Photogr. Sci. Eng., vol. 17, 1970, pp. 28-35.
[17] Billmeyer, F. W.; and Saltzmann, M.: Principles of Color Technology. Interscience, New York, 1966.
[18] Sheppard, J. J.; Stratton, R. H.; and Gazley, C. G.: Pseudocolor as a Means of Image Enhancement, Am. J. Optom., vol. 46, 1969, pp. 735-754.
[19] Billingsley, F. C.: Some Digital Techniques for Enhancing ERTS Imagery. American Society of Photogrammetry, Sioux Falls Remote Sensing Symposium, Sioux Falls, N. Dak., Oct. 1973.
[20] Goetz, A. F. H.; Billingsley, F. C.; Gillespie, A. R.; Abrams, M. J.; and Squires, R. L.: Application of ERTS Image and Image Processing to Regional Geologic Problems and Geologic Mapping in Northern Arizona. NASA/JPL TR 32-1597, May 1975.
[21] Stouffer, M. L.; and McKinney, R. L.: Landsat Image Differencing as an Automated Land Cover Change Detection Technique. Computer Sciences Corp., TM-78/6215, Aug. 1978.
[22] Ready, P. J.; and Wintz, P. A.: Information Extraction, SNR Improvement, and Data Compression in Multispectral Imagery, IEEE Trans. Commun., vol. COM-21, 1973, pp. 1123-1131.
[23] Zaitzeff, J. M.; Wilson, C. L.; and Ebert, D. H.: MSDS: An Experimental 24-Channel Multispectral Scanner System, Bendix Technical Journal, vol. 3, no. 2, 1970, pp. 20-32.
5. Image Registration

5.1 Introduction
In many image processing applications it is necessary to compare and analyze images of the same scene obtained from different sensors at the same time, or taken by one or several sensors at different times. Such applications include multispectral, multitemporal, and multisensor statistical pattern recognition, change detection, and map matching for navigation [1]. Multiple measurements for each resolution element provide a means for detecting time-varying properties and for improving the accuracy of recognition. A collection of images of the same scene is called a multiimage. An implicit assumption in the analysis of multiimages is that the component images are registered; i.e., that a measurement vector in a multiimage is derived from a common ground resolution element. However, multiple images of the same scene are generally not registered, but are spatially distorted relative to one another. Misregistration results from the inability of sensing systems to produce congruent measurements because of design characteristics or because accurate spatial alinement of sensors at different times is impossible. Relative translational and rotational shifts and scale differences, as well as geometric distortions, can all combine to produce misregistration. Therefore, image registration is required before analysis procedures can access contextually coincident resolution elements in each component of a multiimage. (See fig. 5.1.) Image registration is the procedure that matches the images of a scene and generates a set of spatially alined, or registered, images in which each ground resolution element is addressed by one unique coordinate pair. The registration procedure consists of two steps: (1) the determination of matching context points in multiple images, and (2) the geometric transformation of the images so that the registration of each context point is achieved. In this chapter only techniques for the first step are described. The geometric transformation of images is discussed in section 3.3.

Practically, two methods of image registration may be distinguished. For relative registration, one component of a multiimage is selected as the reference to which the other component images are to be registered. For absolute registration, a control grid (e.g., given by a cartographic projection) is defined, and all component images are registered to this reference.
FIGURE 5.1. Multiimage registration.
If the geometric distortions are exactly the same for all images to be registered, the alinement is accomplished by determining the relative translation between the images. In situations in which the relative spatial distortions between the images are small, it can be assumed that the spatial differences are negligible for small regions. Registration is then accomplished by determining the relative translation of subimages and applying a geometric transformation based on the displacements of corresponding subimages.

The determination of the relative translation of subimages in the component images is a problem of template matching. Subareas of the reference image that contain invariant features are extracted and referred to as templates. Corresponding subareas of the images to be registered are selected as search areas (fig. 5.2). A search area S is a matrix of J × K picture elements. A template T is a matrix of M × N elements. It is assumed that a search area is larger than a template (J > M, K > N) and that enough a priori information is available about the displacement between the images to permit selection of the location and size of templates and search areas such that, at registration, a template is completely contained in its search area (fig. 5.3). The problem is to determine at which location (j*, k*) the template matches the corresponding search area. The existence of a matching location can be assumed, but because of geometric and intensity distor-
FIGURE 5.2. Selection of templates and search areas.

FIGURE 5.3. Template and search area.
tions, real changes in the scene, and noise, there is no way to make certain that a correct match has been achieved. At most, the probability can be determined that the images are in a certain geometrical relationship to each other. Once a set of possible relationships is defined, an optimum registration algorithm would determine from the available data the a posteriori probabilities of each of these relationships, and a decision rule would select the registration by the requirement that some statistical measure of the cost of a decision be a minimum. However, because the characteristics of the distortions and the noise that define the probability relationship between a template and its mapping in the search area are unknown, the computation of the required probability density distributions is in general practically impossible.
Therefore, approximations in the form of maximizing a similarity measure are used. The decision regarding the location of a match is made by searching for the maximum of the similarity measure and comparing it with a predetermined threshold. Generally, there is no theoretically derived evaluation of the error performance of a registration technique before its actual application [2].
5.2 Matching by Crosscorrelation

The similarity between two images f and g over a region S can be measured in several ways [3]. Commonly used similarity or distance measures are the quadratic difference

d_e = Σ_{j=1}^{M} Σ_{k=1}^{N} [f(j, k) − g(j, k)]²   (5.1)

and the absolute difference

d_a = Σ_{j=1}^{M} Σ_{k=1}^{N} |f(j, k) − g(j, k)|   (5.2)

Equation (5.1) can be expressed by using the Cauchy-Schwartz inequality

Σ_j Σ_k f(j, k) g(j, k) ≤ [Σ_j Σ_k f(j, k)² Σ_j Σ_k g(j, k)²]^(1/2)   (5.3)

with equality holding if and only if g(j, k) = c f(j, k) for all j and k, with c constant. Thus, when Σ_j Σ_k f(j, k)² and Σ_j Σ_k g(j, k)² are given, d_e is determined by

d_c = Σ_{j=1}^{M} Σ_{k=1}^{N} f(j, k) g(j, k)   (5.4)

which is a measure of the degree of match between f and g. For template matching it is assumed that f represents a template T and g represents a search area S. The problem is to find the parts of g that match f. For this, f is shifted relative to g, and for all possible positions (m, n) the distances d_e or d_a are computed. Thus, equation (5.3) becomes

Σ_j Σ_k f(j, k) g(j+m, k+n) ≤ [Σ_j Σ_k f(j, k)² Σ_j Σ_k g(j+m, k+n)²]^(1/2)   (5.5)
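A minimal sketch of template matching with the distance measures of equations (5.1) and (5.2), evaluating every feasible displacement of the template inside the search area; the function name and the toy arrays are invented for illustration:

```python
def match_by_distance(template, search, measure="quadratic"):
    """Evaluate the quadratic difference (eq. 5.1) or the absolute
    difference (eq. 5.2) at every feasible displacement of the template
    inside the search area; the minimum marks the best match."""
    M, N = len(template), len(template[0])
    J, K = len(search), len(search[0])
    best = None
    for m in range(J - M + 1):          # (J-M+1)(K-N+1) displacements
        for n in range(K - N + 1):
            d = 0
            for j in range(M):
                for k in range(N):
                    e = template[j][k] - search[j + m][k + n]
                    d += e * e if measure == "quadratic" else abs(e)
            if best is None or d < best[0]:
                best = (d, m, n)
    return best  # (distance, m*, n*)

search = [[0, 0, 0, 0],
          [0, 5, 9, 0],
          [0, 9, 5, 0],
          [0, 0, 0, 0]]
template = [[5, 9],
            [9, 5]]
d, m_star, n_star = match_by_distance(template, search)   # exact match at (1, 1)
```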
The left-hand side of equation (5.5) is the crosscorrelation between f and g. The right-hand side depends on m and n, and, therefore, the normalized crosscorrelation is used as a measure of match:

R(m, n) = Σ_j Σ_k f(j, k) g(j+m, k+n) / {[Σ_j Σ_k f(j, k)²]^(1/2) [Σ_j Σ_k g(j+m, k+n)²]^(1/2)}   (5.6)

To compensate for differences in illumination, f and g may be normalized by subtracting their average gray values μ_f and μ_g. This step yields

R(m, n) = Σ_j Σ_k [f(j, k) − μ_f][g(j+m, k+n) − μ_g(m, n)] / [σ_f σ_g(m, n)]   (5.7)

where σ_f is the standard deviation of the gray values of f, and σ_g(m, n) is the standard deviation of the gray values of g in an area of the size of f at location (m, n). The variable R takes on its maximum value for displacements (m*, n*) at which g = cf; i.e., for a perfect correlation between f and g. Thus, template matching involves the computation of the similarity measure for each possible displacement and a search for the displacement (m*, n*) at which R is maximum. For the absolute difference similarity measure,

S(m, n) = Σ_j Σ_k |f(j, k) − g(j+m, k+n)|   (5.8)

is computed, and the displacement (m*, n*) at which S(m, n) is minimum is the location of best match. This metric is used in the sequential similarity detection algorithm [4].

The correlation measures given in equations (5.6) or (5.8) determine only translational differences. It is assumed that f and g have the same scale and rotational alinement. This alinement does not occur in general, and scaling and rotational transformations must be carried out in addition to translation if there are severe scale and angular differences between f and g. Because of the presence of noise and distortions between f and g, there is only a certain probability that the extrema of equation (5.6) or (5.8) actually define the correct match point.
Geometric distortions errors that can degrade
192
8.3.1
DIGITAL PROCESSING OFREMOTELY SENSED IMAGES
Geometric Distortions
Any geometric distortion of the search image coordinates relative to the reference image coordinates degrades the performance of the registration process. The most important types of geometric distortions are scale, rotation, skew, scan nonlinearity, and perspective errors. Geometric error sources are discussed in section 2.4.1. Scale elements errors are primarily caused by altitude changes. The template are either somewhat larger or smaller than the searcharea
elements. Consequently, elements of the template, when overlaid on the search area, encompass both matching and nonmatching elements, and the amount of nonmatching overlap increases outward from the center. Rotation errors can be caused by attitude or heading changes. If the template is centered but rotated relative to the search area, the correlation algorithm compares a single template element with a combination of fractions of both matching and nonmatching searcharea elements. The amount of overlap with nonmatching elements increases outward from the center of the template. Skew errors in satellite scanner images are caused by the Earth's rotation between successive mirror sweeps. Again the correlation algorithm compares a single template element caused with both matching and nonof the matching searcharea Scan nonlinearity elements. errors are
by the
nonlinear
motion
scanner, resulting in a nonlinear scale change in the scan line direction. Scan length variations, caused by changes in the period of the oscillating scan mirror, are often corrected in ground processing, e.g., synthetic pixels in Landsat MSS. These pixels must be removed before registration. Perspective errors occur when the reference and search images were taken from different positions. The effect is similar to a linearly varying scalefactor error. The geometric distortions between reference and search 5.3.2 images are shown in figure 5.4. Errors
Systematic
Intensity
Systematic intensity errors include all changes in the intensity of the search image, relative to the reference image, that cannot be attributed to sensor noise. The overall signal level of the search image relative to the reference image can be altered by changes in scene illumination (e.g., day to night or sunny to overcast) or by changes in sensor gain settings. Changes in the optical properties of the atmosphere can also change the overall signal level, the contrast perceived by the sensor, or both. Shadows due to clouds or changes in Sun angle cause blocks of search image elements to be totally dissimilar to the corresponding reference image elements. The reflectivity of certain portions of a scene can change as a result of
FIGURE 5.4. Geometric distortions between reference and search images. Solid figures are the reference images; dashed figures are the search images. (a) Scale. (b) Rotation. (c) Skew. (d) Scan nonlinearity and scan length variations. (e) Perspective.
physical changes on the ground, such as snowfall or flooding; as a result of differences in moisture content or seasonal changes in foliage and vegetation; or, to a lesser degree, simply as a result of differences in the direction of the illumination by either active sensors or the Sun changing orientation at different times of day. Finally, the search image can be different from the reference image owing to actual changes in the reference scene (e.g., new man-made objects). These systematic errors generally do not significantly increase the width of the correlation function, but they reduce the differential between the in- and out-of-register values, and thereby increase the possibility of false correlations.

Uniform intensity or gain changes by a factor c do not affect the performance of algorithms using equation (5.6), because this measure of match attains its maximum whenever the search-area and template values are proportional to each other. The similarity measure given in equation (5.8) attains its minimum when the template and search area are equal to each other. Thus, changes in the overall signal level can severely influence the performance of algorithms using this equation. The question of whether equation (5.6) or (5.8) should be used depends on the image statistics that have been ignored in the definition of the similarity measures. Experiments have shown that at low signal-to-noise ratios (SNRs), SNR < 1, algorithms using the normalized correlation coefficient perform better [5]. At high SNRs (SNR > 3), absolute-difference algorithms are better. However, in practical applications, these
high SNRs are seldom realized. When 1 < SNR < 3, the choice of algorithm is not critical. (For the computation of the SNR, it is assumed that the template f has variance σ_f² and that the search image g is the template corrupted by noise n, such that g = f + n, where n has zero mean and variance σ_n². Then SNR = σ_f²/σ_n².)

5.3.3 Preprocessing for Image Registration
If severe geometric distortions exist between the reference and search images, crosscorrelation will not yield useful results unless the distortions are first estimated and removed. If the reference and search image areas containing the same object differ substantially in average gray level, their gray-level distributions should be normalized. After the geometric and contrast correction, the translational misregistration is estimated for each subarea by determining the location of the peak of the correlation surface. This surface is computed by crosscorrelating a template, obtained from the reference image, with the corresponding search area. The peak of this correlation surface is assumed to be the point of correct superposition of the template on the search area. In general, the correlation surface computed by equation (5.6) may be rather broad, making detection of the peak difficult.

5.4 Statistical Correlation
The detection of the correlation peak may be facilitated by including the statistical properties of the reference image in the correlation measure [6, 7]. The objective is to filter the template f so that the correlation measure corresponding to the correct superposition of f on the search area g is maximally discriminable from the correlation results of all other positions of f on g. This idea can be implemented as maximizing the ratio of the correlation measure at the matching location to the variance of its values taken over all other locations [8]. The statistical properties of the reference image are assumed to be characterized by the spatial covariance matrix C. The linear space-invariant filter h is chosen such that the ratio

Z = R_s²(m*, n*) / var [R_s(m, n)]   (5.9)

is maximized. The statistical correlation measure R_s is defined as

R_s(m, n) = Σ_j Σ_k s(j, k) g(j+m, k+n) / {[Σ_j Σ_k s²(j, k)]^(1/2) [Σ_j Σ_k g²(j+m, k+n)]^(1/2)}   (5.10)

where the new template s is obtained by convolving f with the filter h; i.e., s = f * h. Determination of the optimal registration filter requires
computation of the covariance matrix and its inverse. This operation is numerically difficult, because for an M by N template, C will be of dimension MN by MN. Furthermore, a large set of data is required for estimating the covariance matrix. By making simplifying assumptions about the statistical nature of the images to be registered, it is possible to reduce the computational problems significantly. If it is assumed that the statistical properties of the reference image are modeled by an isotropic exponential covariance matrix [6, 7] C, with elements given by

C = (ρ^|j−l|)   (5.11)

where ρ is the average adjacent-element correlation coefficient, then the registration filter is given by

[h] = | ρ²         −ρ(1+ρ²)   ρ²        |
      | −ρ(1+ρ²)   (1+ρ²)²    −ρ(1+ρ²)  |   (5.12)
      | ρ²         −ρ(1+ρ²)   ρ²        |

If the images are completely spatially uncorrelated (ρ = 0), the filter is given by

[h] = | 0  0  0 |
      | 0  1  0 |   (5.13)
      | 0  0  0 |

and the template is directly taken from the reference image. For completely correlated images (ρ = 1), the filter becomes

[h] = |  1  −2   1 |
      | −2   4  −2 |   (5.14)
      |  1  −2   1 |

This equation is the discrete approximation to the mixed fourth partial derivative, obtained by convolving the discrete approximations to the second partial derivatives along each coordinate axis:

[h] = [h₁] * [h₂] = [1  −2  1]ᵀ * [1  −2  1]   (5.15)
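A small sketch constructing the registration filter of equation (5.12) and checking the two limiting cases quoted in the text; the function name is invented for illustration:

```python
def registration_filter(rho):
    """3x3 registration filter of eq. (5.12) for the exponential
    covariance model with adjacent-element correlation rho."""
    a = rho * rho
    b = -rho * (1.0 + rho * rho)
    c = (1.0 + rho * rho) ** 2
    return [[a, b, a],
            [b, c, b],
            [a, b, a]]

h0 = registration_filter(0.0)   # eq. (5.13): the template is used as is
h1 = registration_filter(1.0)   # eq. (5.14): mixed fourth-derivative operator
```

The separability of equation (5.15) can be seen directly: each element of h1 is the product of the corresponding entries of the one-dimensional second-difference operator [1, −2, 1] applied along each axis.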
Thus, when the images are highly correlated, the correlation concentrates on the edge comparison between the reference and search images. The use of derivatives of f as registration filters can also be heuristically justified. If the features of interest in the template and search area are characterized by shape rather than by contrast, the edge images of f and g can be correlated. Correlating edge images tends to yield sharper matches than does correlation of gray-value images. Edge images also permit the use of multispectral shape information in the compact form of a single edge image [9-11]. An evaluation and comparison of similarity measures is given in [12] and [13].

5.5 Computation of the Correlation Function
In general, the correlation function must be computed for all possible translations of the template within the search area to determine its maximum value and obtain an estimate for the misregistration. The number L of these translations is given by L = (J − M + 1)(K − N + 1). Finding the maximum value of the correlation function requires MN multiplications to be performed for each of L relative shifts. For larger f and g the computation time can be reduced by applying the fast Fourier transform (FFT) algorithm [14]. By the convolution theorem, equation (2.46), of Fourier analysis, crosscorrelating a template f with a search area g is equivalent to pointwise multiplying the Fourier transforms F* and G and then taking the inverse transform:

R(m, n) = ℱ⁻¹[F*(u, v) G(u, v)]   (5.16)

where ℱ denotes the Fourier transform operator, and u and v are spatial frequencies. Correlation functions obtained with the discrete Fourier transform (DFT) are cyclic, because the transform assumes the pictures to be periodic functions. (See sec. 2.6.1.1.) Thus, cyclic convolutions have values even for shifts such that the template is no longer entirely inside the picture. The Fourier transform matrices to be multiplied pointwise must be of the same size. Therefore, the template is extended by zeroes to the size of the search area. The valid part of the computed correlation function is rearranged for determination of the correlation maximum and for display. (See sec. 2.6.3.) An estimate of the location of the correlation peak to subpixel accuracy may be obtained by fitting a bivariate polynomial to R(m, n) and computing its maximum.
REFERENCES

[1] Lillestrand, R. L.: Techniques for Change Detection, IEEE Trans. Comput., vol. C-21, 1972, pp. 654-659.
[2] Pinson, L. J.; Boland, J. S.; and Malcolm, W. W.: Statistical Analysis for a Binary Image Correlator in the Absence of Geometric Distortion, Opt. Eng., vol. 17, no. 6, 1978, pp. 635-639.
[3] Rosenfeld, A.; and Kak, A. C.: Digital Picture Processing. Academic Press, New York, 1976.
[4] Barnea, D. I.; and Silverman, H. F.: A Class of Algorithms for Fast Digital Image Registration, IEEE Trans. Comput., vol. C-21, 1972, pp. 179-186.
[5] Bailey, H. H., et al.: Image Correlation: Part 1, Simulation and Analysis, Rand Corp. Report R-2057/1-PR, 1976.
[6] Arcese, A.; Mengert, P. H.; and Trombini, E. W.: Image Detection through Bipolar Correlation, IEEE Trans. Info. Theory, vol. IT-16, 1970, pp. 534-541.
[7] Pratt, W. K.: Correlation Techniques of Image Registration, IEEE Trans. Aerosp. Electron. Syst., vol. AES-10, 1974, pp. 353-358.
[8] Emmert, R. A.; and McGillem, C. D.: Conjugate Point Determination for Multitemporal Data Overlay. LARS Information Note 111872, Purdue University, Lafayette, Ind., 1973.
[9] Nack, M. L.: Temporal Registration of Multispectral Digital Satellite Images Using Their Edge Images. AAS/AIAA Astrodynamics Specialist Conference, Nassau, Bahamas, July 1975.
[10] Nack, M. L.: Rectification and Registration of Digital Images and the Effect of Cloud Detection. Proceedings of Symposium on Machine Processing of Remotely Sensed Data, Purdue University, Lafayette, Ind., 1977, pp. 12-23.
[11] Jayroe, R. R.; Andrus, J. F.; and Campbell, C. W.: Digital Image Registration Method Based upon Binary Boundary Maps. NASA TN-D-7607, Washington, D.C., Mar. 1974.
[12] Svedlow, M.; McGillem, C. D.; and Anuta, P. E.: Experimental Examination of Similarity Measures and Preprocessing Methods Used for Image Registration. Proceedings of Symposium on Machine Processing of Remotely Sensed Data, Purdue University, Lafayette, Ind., 1976, pp. 4A-9-4A-13.
[13] Kaneko, T.: Evaluation of Landsat Image Registration Accuracy, Photogr. Eng. and Remote Sensing, vol. 42, 1976, pp. 1285-1299.
[14] Anuta, P. E.: Spatial Registration of Multispectral and Multitemporal Digital Imagery Using Fast Fourier Transform Techniques, IEEE Trans. Geosci. Electron., vol. GE-8, 1970, pp. 353-368.
6. Image Overlaying and Mosaicking

6.1 Introduction

The generation of image overlays and mosaics is a key requirement for image analysis. Overlaying is the spatial superposition of images taken at different wavelengths, at different times, or by different sensors such that congruent measurements for each raster element are obtained. Multiple congruent spatial distributions were defined as a multiimage in section 2.1. Congruent measurements are required for many image analysis applications such as multidimensional classification, change detection, and modeling. For example, overlaying multispectral and multitemporal measurements with data from other sensors and with ancillary information (e.g., terrain elevation and slope) offers a means of improving the recognition accuracy in classification.

Change detection involves the comparison of two images of the same scene taken at different times. The problem is to detect the amount of change of image properties rather than their absolute magnitude. A possible application is detection of the change in the width of a river, in the size of a lake after a storm, or in land use or urban patterns. An overlay of a temporal sequence may be used for trend analysis. Overlaying two images taken from different positions in space results in a pair of stereoscopic images that enables calculation of the third spatial coordinate of every image point. For example, cloud-height analysis can be performed with images taken by two geosynchronous satellites. Terrain elevation images may be created from stereoscopic image pairs. Map overlays of images and of image analysis results can be used to update maps and to produce new maps. Overlays of spatially congruent data from various sensors are required as input to climatic, environmental, and land-use models.

Mosaicking is the combination of several image frames into photomosaics covering a specified area. Such mosaics can be used for mapmaking. In general, the frame size is given by the field of view of the sensor, and the frame location is determined by spacecraft operation. Thus, an area of interest may be partly covered by several frames, each with its particular geometric distortions and radiometric degradations.

6.2 Techniques for Generation of Overlays and Mosaics

The generation of image overlays and mosaics requires similar image processing techniques. The image frames are usually taken with different
attitudes and positions of the sensors, at different times and seasons, and under different atmospheric conditions. Varying geometric distortions prevent accurate overlay of corresponding frames. Scale and shape differences in adjacent images may be so severe that a set of frames cannot be mosaicked without misalinement at boundaries. Radiometric differences in adjacent frames caused by Sun-angle-dependent shadows, by seasonal changes of fields, forests, and water bodies, and by different atmospheric conditions may produce artificial edges in mosaics. Clouds and noise in the border area of one frame can also produce discontinuities at the seams between images. Therefore, geometric and radiometric corrections and transformation to a common reference are required for overlaying and mosaicking images.

Two basic approaches are available for generation of overlays and mosaics. For overlays, one image may be selected as reference, and the other frames are then registered to this reference. Techniques for geometric transformation and image registration are discussed in chapters 3 and 5. The second approach is to select a cartographic projection and to register all images to this common map reference. Map projections will be discussed in section 6.3. Similarly, mosaics may be produced by selecting one frame as reference and registering adjacent frames to the reference. This operation requires a sufficiently large area of overlap between adjacent frames, and only limited geometric accuracy may be achieved. This approach is thus limited to the generation of mosaics consisting of only a few frames. The second approach is to choose a cartographic projection as a reference grid and to transform all frames to it. Map projections are continuous representations of a surface. Therefore, a set of frames transformed to the same projection will mosaic perfectly.

Map projections are a basis for a standard representation of discrete spatially distributed measurements, such as digital remote sensing images, point measurements, and ground truth data. To relate these data, a common framework in the form of a well-defined coordinate system is required. The location of each measurement on the surface of the Earth is uniquely defined by the geographic or geodetic coordinates (longitude λ, latitude φ) and the elevation z above sea level. A map projection defines the transformation of data locations from geographic to plane coordinates and provides the common framework for analysis, graphical display, and building of a data base.

6.3 Map Projections
A map projection is the representation of a curved surface in a plane. For the Earth, the curved reference or datum surface is assumed to be an ellipsoid or a sphere. Projection surfaces are planes, cylinders, and cones, and the latter two types are developable into planes [1]. The transformation from the datum to the projection surface is defined by a set of
mathematical expressions describing the relationship between latitudes and longitudes in the datum surface and the coordinates in the projection plane, which is dependent on the type of projection. Any projection of a curved surface onto a plane involves distortions of distances, shapes, or areas [2]. Map projection criteria that preserve distance, shape, and area are mutually exclusive. Therefore, there is no ideal projection, but only a best planar representation for a given purpose.
6.3.1 Classes of Map Projections

Map projections are divided into classes according to the following criteria:

1. The nature of the geometric properties of the projection surface
2. The contact of the projection surface with the datum surface
3. The position of the projection surface with relation to the datum surface
These classes are not mutually exclusive. Criterion 1 leads to planar, cylindrical, and conical projections, each representing one of the basic projection surfaces: plane, cylinder, and cone. The simplest of these projection surfaces is the plane, which, when tangent to the datum surface, would have a single point of contact, this point also being the center of the area of minimum distortion. The cone and the cylinder, which are both developable into a plane, increase the extent of contact and, consequently, the area of minimum distortion. (See fig. 6.1.)

Criterion 2 yields three groups of projections, representing three types of contact between the datum and projection surfaces: tangent, secant, and polysuperficial. Tangency between the datum and projection surfaces results in a point contact if the projection surface is a plane and a line contact if the projection surface is either a cone or a cylinder. In the secant case a line of contact is obtained when the projection surface is a plane, and two lines of contact are obtained when the projection surface is either a cone or a cylinder. These principles are illustrated in figure 6.2 for the plane and cone. A further increase of contact between the datum and projection surfaces, and thus a reduction of the distortion, is achieved by a series of successive projection surfaces. A series of planes would produce a polyhedric (multiple-plane) projection; a series of cones, a polyconic; and a series of cylinders, a polycylindrical projection.

Criterion 3 leads to subdivision into three groups, representing the three basic positions of the projection surface relative to the datum surface: normal, transverse, and oblique. If the purpose of the projection is to represent a limited area of the datum surface, it is advantageous to achieve the minimum of distortion for that particular area. Minimizing distortion is possible by varying the attitude of the projection surface. If
FIGURE 6.1—Projection surfaces: plane, cylinder, and cone (after Richardus and Adler [1]).

FIGURE 6.2—Increase of contact between projection and datum surface: secant projection plane and secant projection cone.
the axis of symmetry of the projection surface coincides with the rotational axis of the ellipsoid or the sphere, the normal case is obtained. With the axis of symmetry perpendicular to the axis of rotation, the transverse projection is obtained. Any other attitude of the axis of symmetry results in an oblique projection. (See fig. 6.3.)

Projections may also be characterized according to the cartographic properties equidistance, conformality, and equivalency. These properties are mutually exclusive. Equidistance is the correct representation, on the projection surface, of the distance between two points of the datum surface. This property is not a general one, and it is limited to certain specified points. Conformality means the correct representation of the shape or form of objects. This property may be limited to small areas. Equivalency is the correct representation of areas on the projection surface at the expense of shape distortions.

6.3.2 Coordinate Systems

Coordinate systems are required to relate points on the datum and projection surfaces. The datum surface of the Earth is usually an ellipsoid or sphere, with the coordinates expressed as longitude λ, counted positive from a reference meridian, and latitude φ, counted positive from the equator (fig. 6.4).

The coordinate system in the projection plane is a rectangular Cartesian system (x, y) with the positive y-axis pointing north (sometimes referred to as northing) and the positive x-axis pointing east (easting). The coordinate systems may be graphically represented by regularly spaced grids of longitudes and latitudes, or northings and eastings. A map projection is the transformation of grids from the curved surface to the projection plane. The origin is usually the central point of the projected area. With cylindrical or conical projections, this central point may be located on the tangent parallel or meridian. The relationship between the projection plane and the ellipsoidal or spherical coordinate system is given by

(x, y) = Tp(λ, φ)     (6.1)

where Tp is a vector function determined by the type of projection.
6.3.3 Perspective Projections
Perspective projections are projections onto a plane that is perpendicular to a line through the center of the sphere or perpendicular to an ellipsoidal normal. Thus, the projection axis is perpendicular to both the datum and the projection surfaces, and the projection center lies at the intersection of the projection axis and the projection plane. A point on the projection axis serves as the perspective point, and straight lines from the perspective
FIGURE 6.3—Positions of projection surface. (a) Normal, contact along equator. (b) Transverse, contact along a meridian. (c) Oblique, contact along a great circle.
point through the datum surface locate points on the projection plane. Images taken by cameras onboard aircraft and spacecraft are normal perspective projections if the camera axis coincides with the direction of the normal to the datum surface.

If the projection plane is tangent to the datum surface, there is no distortion at the center, and all great circles passing through the point of tangency are straight lines on the projection plane. A displacement of the projection plane along the projection axis changes only the scale of the projection. The location of the perspective point determines the form of the projection. Placing the perspective point diametrically opposite the point of tangency of the projection plane with the datum surface results in a stereographic projection. If the projection axis coincides with the rotation axis of the sphere or ellipsoid, the normal or polar stereographic
FIGURE 6.4—Coordinate system.
projection is obtained. In this projection the projection plane is tangent to one of the poles, with the perspective point at the other pole. The meridians are straight lines converging at the pole; the projected parallels are concentric circles about the pole. (See fig. 6.5.) The major application of this projection is the depiction of polar areas. The transformation Tp for the polar stereographic projection for the sphere is given by

x = 2R tan (π/4 − φ/2) sin λ
y = 2R tan (π/4 − φ/2) cos λ     (6.2)

where R is the radius of the sphere.
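As an illustration, equation (6.2) can be coded directly. This is a sketch only; the function name, the use of radians, and the unit-sphere default are assumptions, not from the text:

```python
import math

def polar_stereographic(lam, phi, R=1.0):
    """Polar stereographic transform of equation (6.2).

    lam (longitude) and phi (latitude) are in radians; R is the
    sphere radius. Returns projection-plane coordinates (x, y).
    """
    # Radial distance from the pole; it shrinks to 0 as phi approaches pi/2.
    rho = 2.0 * R * math.tan(math.pi / 4.0 - phi / 2.0)
    return rho * math.sin(lam), rho * math.cos(lam)
```

A point on the equator (phi = 0) maps to a radial distance of 2R, and the pole itself maps to the origin.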
6.3.4 The Mercator Projection

The Mercator projection is a conformal cylindrical projection, with the meridians and parallels forming an orthogonal grid of straight lines. (See fig. 6.6.) The meridians are equally spaced, and the intervals between the parallels increase progressively from the equator such that the projection is conformal. By increasing the y-scale, it is matched exactly to the x-scale at every latitude; thus true shape is maintained. This conformality
FIGURE 6.5—Polar stereographic meridians and parallels.

FIGURE 6.6—Mercator grid of meridians and parallels.
means that any straight line on the Mercator projection crosses successive meridians at a constant angle and hence is a line of constant direction (compass course, or loxodrome). In the normal Mercator projection, distances and areas are seriously exaggerated at latitudes greater than 40°. The transformation Tp for the normal Mercator projection for the ellipsoid is

x = Rλ
y = R ln { tan (π/4 + φ/2) [(1 − E sin φ) / (1 + E sin φ)]^(E/2) }     (6.3)

where R is the radius of the equatorial circle and E is the eccentricity of the ellipsoid.
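Equation (6.3) can be sketched the same way (the function name and the radian convention are assumptions; setting E = 0 reduces the expression to the spherical Mercator):

```python
import math

def mercator_ellipsoid(lam, phi, R=1.0, E=0.0818):
    """Normal Mercator transform of equation (6.3) for the ellipsoid.

    lam, phi in radians; R is the equatorial radius; E the eccentricity.
    """
    x = R * lam
    # Isometric-latitude term of equation (6.3).
    iso = math.tan(math.pi / 4.0 + phi / 2.0) * \
        ((1.0 - E * math.sin(phi)) / (1.0 + E * math.sin(phi))) ** (E / 2.0)
    return x, R * math.log(iso)
```

On the equator (phi = 0) the northing y vanishes and the easting x grows linearly with longitude, as expected of a cylindrical projection.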
The oblique Mercator projection is centered on any great circle other than the equator or a meridian. It has all the properties of the normal Mercator projection, except for the loxodrome property. It is usually defined for the sphere and not for the ellipsoid. Therefore, it is most useful for mapping small areas that are not oriented in the north-south direction, such as satellite and aircraft images. The angle between flight path and a meridian determines the direction of the axis of symmetry. The transformation Tp for the oblique Mercator projection for the sphere is

x = R tan⁻¹ [cos φ sin (λ − λp) / (sin φ cos φp − cos φ sin φp cos (λ − λp))]

y = (R/2) ln [(1 + sin φ sin φp + cos φ cos φp cos (λ − λp)) / (1 − sin φ sin φp − cos φ cos φp cos (λ − λp))]     (6.4)
where λp and φp are the longitude and latitude of the oblique pole, respectively.

The transverse Mercator projection uses a meridian rather than the equator as line of contact or true scale. All conformal properties of the normal Mercator projection except the loxodrome property are retained in the transverse Mercator projection. This projection is very useful for a 15° to 20° band centered on its central meridian. The transformation Tp is obtained from equation (6.4) for φp = 0:

x = R tan⁻¹ [cos φ sin (λ − λp) / sin φ]

y = (R/2) ln [(1 + cos φ cos (λ − λp)) / (1 − cos φ cos (λ − λp))]     (6.5)
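A sketch of the oblique Mercator transform of equation (6.4) follows (the function name, radian convention, and the use of a quadrant-safe arctangent are illustrative assumptions):

```python
import math

def oblique_mercator(lam, phi, lam_p, phi_p, R=1.0):
    """Oblique Mercator transform of equation (6.4) for the sphere.

    (lam_p, phi_p) is the oblique pole; all angles in radians.
    """
    dl = lam - lam_p
    # x follows the arctangent expression of equation (6.4).
    x = R * math.atan2(math.cos(phi) * math.sin(dl),
                       math.sin(phi) * math.cos(phi_p)
                       - math.cos(phi) * math.sin(phi_p) * math.cos(dl))
    # s is the term common to the numerator and denominator of y.
    s = (math.sin(phi) * math.sin(phi_p)
         + math.cos(phi) * math.cos(phi_p) * math.cos(dl))
    y = 0.5 * R * math.log((1.0 + s) / (1.0 - s))
    return x, y
```

With phi_p = 0 the function reproduces the transverse Mercator case of equation (6.5); points on the oblique great circle itself map to y = 0.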
6.3.5 Lambert Projection

The Lambert conical projection is a conformal projection. The apex of the cone lies on the rotational axis of the ellipsoid or sphere. Meridians are straight lines converging at the apex, which is also the center of all projected circular parallels. In the secant Lambert projection the cone intersects the datum surface at two standard parallels φ1 and φ2. The condition that there be no distortion at the two standard parallels determines the latitude φ0 of the central parallel circle. The intersection of the central meridian and the central parallel is the origin of the Cartesian coordinate system (x, y) in the projection plane, with the y-axis along the central meridian (fig. 6.7). The transformation Tp for the Lambert normal conical projection for the sphere with two standard parallels φ1 and φ2 is

x = ρ sin θ
y = ρ0 − ρ cos θ     (6.6)
FIGURE 6.7—Conical projection with two standard parallels.
where

θ = λ sin φ0
sin φ0 = (ln cos φ1 − ln cos φ2) / [ln tan (π/4 + φ2/2) − ln tan (π/4 + φ1/2)]

The scale distortion is dependent only on the latitude φ, and not the longitude λ. Therefore, the scale distortion of a parallel circle is constant, making the Lambert conical projection suitable for areas extended in an east-west direction.
(UTM)
projection, but a grid system based on the transverse Mercator projection. Central meridians are constructed every 6 ° of longitude, extending from 80 ° north to 80 ° south. Thus 60 zones, extending 3 ° to either side of each central meridian, are defined, and each zone is overlaid by a rectangular grid. A scale distortion or grid scale constant of 0.9996 is applied along the central meridian of each zone to reduce scale distortion of the projection. The effect is that the transverse cylinder is secant to the datum surface instead of tangent. Whereas in the ordinary transverse Mercator projection there is no scale distortion along the central meridian, and small circles parallel to it are represented by vcrtical lines with increasing scale distortion away from
IMAGE OVERLAYING MOSAICKING AND
the central the scale coordinates meridian, in the UTM there are two standard meridians,
209 and
distortions are more evenly spread over the zone. Surface are measured in meters from the central meridian and from a bias of 500,000 m to mainperpendicular to the central value and are called easting of 10 million m is assigned to is four_d by subtracting the In the Northern Hemisphere
the equator. The central meridian is assigned tain positive values over the zone. Distances meridian are added or subtracted from this values. For the Southern Hemisphere a bias the equator, and the northing coordinate distance to the equator from the bias value.
northing is simply the distance north of the equator in meters. The northing and casting coordinates, together with the zone number, define locations on the Earth within the UTM system. Polar areas are excluded from the UTM system. 6.4 Map Projection of Images
Remotely sensed images are mappings of a curved surface onto a plane and therefore contain the distortions of a map projection. Furthermore, the images are subject to the geometric distortions discussed in section 2.4.1. The transformation of an image to a map projection involves basically two steps. First, the relationship between point locations (l, s) in the distorted input image and geodetic coordinates (latitude φ and longitude λ) must be established:

(λ, φ) = Tc(l, s)     (6.7)
Second, with the equations of the desired map projection, the x, y coordinates of the points in the projection plane must be computed:

(x, y) = Tp(λ, φ)     (6.8)

Finally, the projection plane coordinates must be scaled to produce an output picture with M rows and N columns:

(L, S) = Ts(x, y)     (6.9)

where Tc, Tp, and Ts are vector functions. The composite mapping from the distorted input to the projected output image is given by

(L, S) = Ts Tp Tc(l, s) = T(l, s)     (6.10)
The coordinate systems used are shown in figure 6.8. The origin of the input space is the upper left corner of the input image (l, s). The origin of the projection plane or tangent space (x, y) is the image nadir point. The origin of the output space is the upper left corner of the output image. In practice, calculation of the exact location of each image point would require a prohibitive amount of computer time. Depending on the nature of the geometric distortions and the chosen map projection, points in the projected image may be sparsely and not equally spaced. To obtain a
FIGURE 6.8—Map projection coordinate systems: input image, Earth surface, projection plane, and projected output image.
continuous picture with equally spaced elements in a reasonable time, the inverse approach is taken. A set of tie points defining a rectangular or quadrilateral interpolation grid in the output image is selected. The exact mapping transformation is computed only for the grid points. The locations of points within each quadrilateral are determined by bilinear interpolation between the vertex coordinates. Values for fractional pixel locations in the input picture are determined by one of the resampling interpolation schemes. (See sec. 3.3.2.)

The latitude and longitude for each grid point (L, S)G are determined with the inverses of Ts and the map projection Tp:

(λ, φ)G = Tp⁻¹ Ts⁻¹(L, S)G     (6.11)

The relationship between (λ, φ)G and the input image grid coordinates (l, s)G is given by

(l, s)G = Tc⁻¹(λ, φ)G     (6.12)
where Tc describes the viewing geometry of the imaging system. The form of Tc depends on the optical characteristics of the sensor, the shape and size of the datum surface, and the position and attitude of the sensor [3]. In scanning imaging systems each pixel is obtained at a different time [4], and scanner images may be considered to approximate one-dimensional perspective projections of the object scene. Often the attitude of the sensor is either not available or only inaccurately given, and the transformation Tc cannot be calculated from a priori information. Therefore, another approach, based on the displacement of ground control points (GCPs), is used to determine Tc. GCPs are recognizable geographic features or landmarks whose actual geodetic positions can be measured in existing maps. The coordinates (l, s)R of GCPs in the input image may be determined from shade prints or by a cross-correlation technique if a library of GCP templates is available. (See sec. 5.3 for registration techniques.)
With the assumption that the transformation from input image to projection plane TpTc can be represented with sufficient accuracy by a bivariate polynomial of degree m:

x = Σ_{j=0}^{m} Σ_{k=0}^{m−j} a_{jk} l^j s^k

y = Σ_{j=0}^{m} Σ_{k=0}^{m−j} b_{jk} l^j s^k     (6.13)

The inverse (TpTc)⁻¹ is also given by a bivariate polynomial of the same degree.
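As a sketch, one coordinate of the bivariate polynomial in equation (6.13) can be fitted to control points by ordinary least squares. The function name, coefficient ordering, and the dependency-free normal-equation solver are assumptions, not from the text:

```python
def fit_bivariate_poly(pts_in, pts_out, m=1):
    """Least-squares fit of one component of equation (6.13).

    pts_in:  list of (l, s) control-point coordinates in the input image.
    pts_out: list of corresponding x (or y) values in the projection plane.
    Returns the term exponents and the coefficients for l**j * s**k, j + k <= m.
    Plain normal equations with Gaussian elimination; adequate for the
    small systems that arise from a handful of ground control points.
    """
    terms = [(j, k) for j in range(m + 1) for k in range(m - j + 1)]
    A = [[l ** j * s ** k for (j, k) in terms] for (l, s) in pts_in]
    n = len(terms)
    # Normal equations: (A^T A) c = A^T b
    M = [[sum(A[r][i] * A[r][c] for r in range(len(A))) for c in range(n)]
         for i in range(n)]
    b = [sum(A[r][i] * pts_out[r] for r in range(len(A))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(M[i][c] * coef[c]
                              for c in range(i + 1, n))) / M[i][i]
    return terms, coef
```

For m = 1 the fit is affine; the example discussed in section 6.5 uses a fourth-order polynomial with 95 ground control points.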
Let the coordinates of the given ground control points in the geodetic coordinate system and the projection plane be (λ, φ)R and (x, y)R, respectively. The following approach to produce map projections of remotely sensed images is used [5]:

1. Choose an appropriate map projection Tp and determine

(x, y)R = Tp(λ, φ)R     (6.14)

2. Coordinates (x, y)R and (l, s)R are related by

(x, y)R = TpTc(l, s)R

Determine the coefficients of the polynomials representing TpTc and (TpTc)⁻¹ by least squares by using the ground control point coordinates.

3. Determine the extent of the projection plane, circumscribe a rectangle, and scale to output image coordinates (L, S) with Ts.

4. Divide the output image into an interpolation grid (L, S)G and determine grid point locations in the input image:

(l, s)G = (TpTc)⁻¹ Ts⁻¹(L, S)G     (6.15)

5. Perform a geometric transformation of the input image by using the interpolation grid coordinates (L, S)G and (l, s)G and a selected resampling algorithm. The actual geometric transformation is described in section 3.3.

6.5 Creating Digital Image Overlays
Overlays of remotely sensed images with maps and other images are required for change detection, map generation and updating, and modeling with multisensor and multitemporal data. Depending on the requirements, overlays may be generated with respect to a standard map projection or to a reference frame.

The generation of image-map overlays is illustrated in figures 6.9 and 6.10. Figure 6.9a shows an unprocessed Landsat MSS 5 image of the Baton Rouge, La., area. A UTM map of the same area is shown in figure 6.9b. A UTM and a normal Mercator projection of the Landsat MSS image are shown in figures 6.10a and 6.10b, respectively. Ninety-five
ground control points were available to determine the coefficients of a fourth-order polynomial representing the transformation (TpTc)⁻¹. The transformation Ts was determined such that the pixel size in the projected image is 100 by 100 m. The geometric transformation to the map projection was performed by using an interpolation grid of 10 by 10 rectangular areas.

An application of image overlays for change detection is shown in figure 6.11. Two Landsat MSS scenes centered near Cairo, Ill., are used to observe the effects of spring flooding in the Mississippi Valley [6]. The extracted subimage shows areas affected by high water.
6.6 Creating Digital Image Mosaics

Mosaicking permits the analysis of remotely sensed images across frame boundaries. Depending on the requirements, the images may be mosaicked with respect to a standard map projection or to a reference frame. In the latter case a distinction can be made between mosaicking of image frames from the same or from different orbits. Adjacent frames from the same orbit exhibit fairly consistent geometric distortions, and, therefore, geometrically correct mosaics can usually be obtained by simple translation of one image with respect to the other. The attitude changes of the sensor at different orbits, however, cause geometric distortions that make it impossible to create geometrically correct mosaics of frames from different orbits without geometric rectification. This step requires that the frames to be mosaicked share a sufficiently large region of overlap.

In addition to geometric distortions, there are intensity differences that cause artificial edges at the seam between the frames. These intensity differences are due to changes in atmospheric transmittance and in illumination caused by different Sun angles. Seasonal changes of surface reflectance, precipitation, and changes caused by human activities also contribute to artificial edges in mosaics and thus interfere with image analysis. Figure 6.12, a mosaic of two unprocessed Landsat MSS 5 frames from different orbits, shows pronounced artificial edges along the vertical
and horizontal seams. The geometric distortions in the overlap region of the two frames are shown in figure 6.13. A first-order correction consists of adjusting the average gray level of each image to the same value. This preprocessing operation is in general not sufficient to eliminate the artificial edges. An improvement may be achieved by selecting subareas in the overlap region and determining a
FIGURE 6.9—(a) Unprocessed Landsat MSS 5 image of the Baton Rouge, La., area. (b) UTM map of the same area.
FIGURE 6.10—Map projections of image in figure 6.9a. (a) UTM projection. (b) Normal Mercator projection.
FIGURE 6.11—Change detection with overlaid Landsat MSS scenes centered near Cairo, Ill.

FIGURE 6.12—Mosaic of two unprocessed Landsat MSS 5 frames from different orbits.
FIGURE 6.13—Relative geometric distortions in overlap region of the mosaic in figure 6.12. (Scale: one division = 10 pixels.)
linear two-dimensional gray-scale transformation that matches the average gray levels in the subareas. Unless there are severe brightness changes, e.g., clouds and snow fields, this technique eliminates most of the artificial edges along the vertical and horizontal seams. A further improvement is possible by finding the seam point on each line that causes the minimal artificial edge [7]. Let g1 and g2 be the two images to be mosaicked, and let K be the width of the overlap region. For the definition of a vertical seam, the best seam point on line j is chosen at the location where the sum of gray-value differences over a neighborhood of L pixels on the same line is minimized. That is, the seam point k* is determined such that
Σ_{l=0}^{L−1} |g1(j, k+l) − g2(j, k+l)| = minimum,    k = 1, ..., K − L + 1     (6.16)
The gray-level difference at the seam point may be smoothed by interpolation. Better results may be achieved by two-dimensional seam definition and smoothing and by extension to the spectral dimension if mosaicking of multispectral images is required. Figure 6.14 shows a digital mosaic of parts of three Landsat MSS frames created by geometric correction, linear brightness adjustment, seam definition with equation (6.16), and seam smoothing. Figure 6.15 is a false-color mosaic of the same area.
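The per-line seam search of equation (6.16) can be sketched as follows (a minimal illustration with 0-based indices; the names are assumed):

```python
def best_seam_point(g1_row, g2_row, L):
    """Seam point k* of equation (6.16) for one line of the overlap region.

    g1_row, g2_row: gray values of the two images on line j across the
    overlap region of width K = len(g1_row); L: neighborhood width.
    Returns the 0-based index k minimizing the sum of gray-value differences.
    """
    K = len(g1_row)
    best_k, best_cost = 0, float("inf")
    for k in range(K - L + 1):
        cost = sum(abs(g1_row[k + l] - g2_row[k + l]) for l in range(L))
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

A full mosaicking pass would apply this search on every line of the overlap region and then smooth the residual gray-level difference at each chosen seam point.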
FIGURE 6.14—Digital mosaic of parts of three Landsat MSS frames.

FIGURE 6.15—False-color mosaic of the same area.
REFERENCES

[1] Richardus, P.; and Adler, R. K.: Map Projections. North-Holland/American Elsevier, Amsterdam, London, and New York, 1972.
[2] Gilbert, E. N.: Distortion in Maps, SIAM Rev., vol. 16, no. 1, 1974, pp. 47-62.
[3] Elliot, D. A.: Digital Cartographic Projection. Proceedings of Caltech/JPL Conference on Image Processing Technology, Data Sources and Software for Commercial and Scientific Applications, California Institute of Technology, Pasadena, Calif., Nov. 1976, pp. 5-1 to 5-10.
[4] Puccinelli, E. F.: Ground Location of Satellite Scanner Data, Photogr. Eng. and Remote Sensing, vol. 42, 1976, pp. 537-543.
[5] Moik, J. G.: Smips/VICAR Application Program Description, NASA TM-80255, 1979.
[6] Van Wie, P.; and Stein, M.: A Landsat Digital Image Rectification System, IEEE Trans. Geosci. Electron., vol. GE-15, 1977, pp. 130-137.
[7] Milgram, D. L.: Computer Methods for Creating Photomosaics, IEEE Trans. Comput., vol. C-24, 1975, pp. 1113-1119.
7. Image Analysis

7.1 Introduction
Image analysis is concerned with the description of images in terms of the properties of objects or regions in the images and the relationships between them. Although image restoration and enhancement produce images again, the result of image analysis operations is a description of the input image. This description may be a list describing the properties of objects such as location, size, and shape; a relational structure; a vector field representing the movement of objects in a sequence of images; or a map representing regions. In the latter case the description is again pictorial, but its construction requires the location of regions and determination of their shapes.
The description always refers to specific parts in the image. Therefore, to generate the description, it is necessary to segment the image into these parts. The parts are determined by their homogeneity with respect to a given graylevel property such as constant gray value and texture, or a geometric property based on connectedness, size, and shape [1]. Thus, image analysis involves image segmentation and description of the segmented image in terms of properties and relationships.
7.2 Image Segmentation

Image segmentation is that part of image analysis that deals with the spatial definition of objects or regions in an image. Objects have two basic
characteristics: (1) They exhibit some internal uniformity with respect to an image property, and (2) they contrast with their surroundings. Because of noise, the nature of these characteristics is not deterministic. One property is gray level, because many objects are characterized by constant reflectance or emissivity on their surface. Thus, regions of approximately constant gray level indicate objects. Another property is texture, and regions of approximately uniform texture may represent objects. A region R_ is a set of points length. Regions have the property surrounded by a closed curve of finite of being simply connected. A segmenta223
224
such that
DIGITAL PROCESSING OFREMOTELY SENSED IMAGES
is a finite set of regions (R,, R_ ..... RI)
tion of theimage domain R
R= u] Ri
i 1
,
(7.1)
for j_/=i
Rj n
Ri=_
where ∅ is the empty set and ∪ and ∩ represent the set operations union and intersection, respectively. (See fig. 7.1.) Image segmentation can be obtained on the basis of both regional and border properties. Given a regional property such as intensity, color distribution, or texture, picture elements that are similar with respect to this property may be combined into regions. Alternatively, the borders between regions may be located by detecting discontinuities in image properties. An image property is a function that maps images into numbers. The value of the property for a given image g is the number obtained by the operation. Examples of image properties are: (1) The gray level g(j_0, k_0) of g at a given point (j_0, k_0); (2) the average gray level of a neighborhood of (j_0, k_0); (3) the coefficients of an orthogonal transformation (e.g., Fourier and Karhunen-Loève); and (4) geometrical properties such as connectedness, area, and convexity. Property 2 is a local image property.

7.2.1 Thresholding
FIGURE 7.1. Segmentation of image into regions.

Gray-level thresholding is an effective and simple segmentation technique when the objects have a characteristic range of gray levels. If g is a single-component image with gray-level range [z_1, z_K] that contains I regions with the nonoverlapping gray-level ranges Z_i ⊂ [z_1, z_K], i = 1, ..., I, then a threshold image g_t is defined by

    g_t(j, k) = i    if g(j, k) ∈ Z_i
              = 0    otherwise                    (7.2)

The histogram of gray levels is examined, and if it is strongly multimodal, it can be assumed that the image contains approximately uniform areas that constitute regions [1, 2]. Thus, the image can be segmented by bringing it to a threshold at the lowest gray levels between histogram peaks. Threshold selection is facilitated in an interactive system by repeatedly displaying the histogram and evaluating the result of such selection. Thresholding can also be used to segment an image into regions of uniform texture. A given image g is transformed into an image g_t by computing a local texture property at every image point (see sec. 7.2.3) and bringing it to a threshold.
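The thresholding rule of equation (7.2), together with an automatic threshold choice, can be sketched as follows. The function names and interval bounds are illustrative, and Otsu's between-class-variance criterion is substituted here for the interactive valley selection described above.

```python
import numpy as np

def threshold_segment(g, ranges):
    """Threshold image g_t of eq. (7.2): pixels whose gray level falls in
    the i-th nonoverlapping interval Z_i receive label i + 1; all other
    pixels receive label 0."""
    gt = np.zeros(g.shape, dtype=int)
    for i, (lo, hi) in enumerate(ranges, start=1):
        gt[(g >= lo) & (g <= hi)] = i
    return gt

def otsu_threshold(g, bins=256):
    """Place a single threshold in the valley of a bimodal histogram by
    maximizing the between-class variance (Otsu's criterion)."""
    hist, edges = np.histogram(g, bins=bins)
    p = hist / hist.sum()
    w = np.cumsum(p)                      # probability of the lower class
    m = np.cumsum(p * np.arange(bins))    # unnormalized mean of the lower class
    mt = m[-1]                            # overall mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sb = (mt * w - m) ** 2 / (w * (1.0 - w))
    k = int(np.nanargmax(sb[:-1]))
    return edges[k + 1]
```

For a strongly bimodal image, otsu_threshold returns a gray level between the two histogram peaks, which can then be used to form the intervals passed to threshold_segment.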
7.2.2 Edge Detection
Edge detection is an image segmentation method based on the discontinuity of gray levels or texture at the boundary between different objects. Such a discontinuity is called an edge. An edge separates two regions of relatively uniform but different gray level or texture. Another type of gray-level discontinuity is the line, which differs from the regions on both of its sides. Edge detection involves context-free algorithms that make no assumptions about edge continuity. A common approach to edge detection in monochrome images is edge enhancement (see sec. 4.3) followed by a thresholding operation to determine the locations of significant edges. Classical edge detectors are derivative operators, which give high values at points where the gray level of the image g changes rapidly [1, 3-5]. For digital images, differences rather than derivatives are used. The first-order differences in the x and y directions are

    Δ_x g(j, k) = g(j, k) - g(j-1, k)    (7.3)
    Δ_y g(j, k) = g(j, k) - g(j, k-1)    (7.4)

First differences in other directions θ can be defined as linear combinations of the x and y differences

    Δ_θ g(j, k) = Δ_x g(j, k) cos θ + Δ_y g(j, k) sin θ    (7.5)

With the magnitude of the maximum directional difference, the gradient, an edge-enhanced image g_e is obtained by

    g_e(j, k) = {[Δ_x g(j, k)]² + [Δ_y g(j, k)]²}^(1/2)    (7.6)
The magnitude of the gradient, as given in equation (7.6), detects edges in all orientations with equal sensitivity. Equation (7.6) is often approximated by

    g_e(j, k) = |Δ_x g(j, k)| + |Δ_y g(j, k)|    (7.7)

or by

    g_e(j, k) = max [|Δ_x g(j, k)|, |Δ_y g(j, k)|]    (7.8)

These approximations are no longer equally sensitive to edges in all orientations. Various other edge-enhancement operators can be defined [3, 6]. Let

    d_1 = |g(j, k+1) - g(j, k-1)|    (7.9)
    d_2 = |g(j+1, k) - g(j-1, k)|    (7.10)
    d_3 = |g(j+1, k+1) - g(j-1, k-1)|    (7.11)
    d_4 = |g(j-1, k+1) - g(j+1, k-1)|    (7.12)
    d_5 = |g(j-1, k+1) + 2g(j, k+1) + g(j+1, k+1) - g(j-1, k-1) - 2g(j, k-1) - g(j+1, k-1)|    (7.13)
    d_6 = |g(j+1, k-1) + 2g(j+1, k) + g(j+1, k+1) - g(j-1, k-1) - 2g(j-1, k) - g(j-1, k+1)|    (7.14)

Then, edge-enhanced images are obtained by

    g_e(j, k) = max (d_1, d_2)    (7.15)
    g_e(j, k) = d_1 + d_2    (7.16)
    g_e(j, k) = d_5 + d_6    (7.17)
    g_e(j, k) = d_1 + d_2 + d_3 + d_4    (7.18)

Edges may also be enhanced by convolution of the image g with proper masks. (See sec. 4.3.)
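The first-difference operators of equations (7.3) to (7.8) might be implemented as in the following sketch; for simplicity a one-pixel border is left at zero rather than treating the image boundary, a choice not taken from the text.

```python
import numpy as np

def edge_enhance(g, mode="gradient"):
    """Edge enhancement by first differences at all interior points:
    eq. (7.6) for mode "gradient", eq. (7.7) for "sum", eq. (7.8) for "max"."""
    g = np.asarray(g, dtype=float)
    ge = np.zeros_like(g)
    dx = g[1:-1, 1:-1] - g[:-2, 1:-1]    # delta_x g(j, k) = g(j, k) - g(j-1, k)
    dy = g[1:-1, 1:-1] - g[1:-1, :-2]    # delta_y g(j, k) = g(j, k) - g(j, k-1)
    if mode == "gradient":
        ge[1:-1, 1:-1] = np.sqrt(dx ** 2 + dy ** 2)
    elif mode == "sum":
        ge[1:-1, 1:-1] = np.abs(dx) + np.abs(dy)
    else:
        ge[1:-1, 1:-1] = np.maximum(np.abs(dx), np.abs(dy))
    return ge
```

Applied to a vertical gray-level step, all three modes respond only along the column where the step occurs.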
Threshold selection is a key problem in edge detection in noisy images. Too high a threshold does not permit detection of subtle, low-amplitude edges. Conversely, setting the threshold too low causes noise to be detected as edges. Nack [6] proposed an adaptive threshold-selection technique using a desired edge density D as key parameter. The normalized histogram of the edge-enhanced image g_e is formed

    n(z) = H_e(z) / MN        z = 0, 1, ..., K-1    (7.19)

The function H_e(z) is the frequency of occurrence of gray level z in the edge-enhanced image g_e, K is the number of quantization levels, and M and N are the image dimensions. The threshold T that determines whether a picture element in the edge-enhanced image is an edge point is calculated as

    T = K - 1 - z    (7.20)

where z is determined such that the actual edge density

    D_a = Σ(i=0..z) n(K-1-i)    (7.21)

matches the desired edge density D. Varying the edge density D thickens or thins edges. The edge image e(j, k), indicating the position of edges in the image g, is obtained by

    e(j, k) = 1    if g_e(j, k) ≥ T
            = 0    if g_e(j, k) < T                (7.22)
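A sketch of this adaptive threshold selection (eqs. (7.19) to (7.22)); the helper assumes an integer-valued edge-enhanced image with gray levels 0 to K-1.

```python
import numpy as np

def edge_image(ge, density, levels=256):
    """Edge image e(j, k) of eq. (7.22) with the threshold T chosen so that
    the fraction of pixels marked as edges approximates the desired edge
    density D (eqs. (7.19) to (7.21))."""
    n, _ = np.histogram(ge, bins=levels, range=(0, levels))
    n = n / ge.size                        # normalized histogram, eq. (7.19)
    # accumulate bins from the highest gray level downward until the
    # actual edge density D_a reaches the desired density D
    cum = np.cumsum(n[::-1])
    z = int(np.searchsorted(cum, density))
    T = levels - 1 - z                     # eq. (7.20)
    return (ge >= T).astype(np.uint8), T
```

Increasing the requested density lowers T and thickens the detected edges; decreasing it thins them.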
Figure 7.2 illustrates the effects of applying different edge-enhancement operators and varying the edge density for threshold selection. Edges are represented by white pixel values against a black background. Edges were enhanced with the operators given in equations (7.15), (7.17), and (7.18), and edge densities of 5, 10, and 15 percent, respectively, were used for threshold selection. The threshold values determined by equation (7.20) are also shown. Visual evaluation indicates that the edge density is a significant parameter for edge detection, but the choice of edge-enhancement operator has little influence for this class of images. A study of edge-detector performance was reported in [7]. For multiimages, edge detection is usually performed on the component images, and the edge images obtained may be combined into a composite edge image by a logical OR operation. An important application of edge detection is in image registration, where edge images are used for binary correlation. (See sec. 5.4.) Experiments have shown that for remotely sensed Earth resources images, an edge density of D = 15 percent is appropriate. Edge detection is of limited value as an approach to segmentation of noisy remotely sensed images. Often the edges have gaps at places where the transitions between regions are not sufficiently abrupt. Additional edges may be detected at points that are not part of region boundaries, and the detected edges will not form a set of closed, connected object boundaries. However, the object boundaries may be constructed by connecting the extracted edge elements. Thus, boundary detection is achieved by the combination of local edge detection, followed by operations that thin and link the segments obtained into continuous boundaries [8, 9]. A boundary-finding algorithm that segments multiimages into regions of any shape by merging statistically similar subregions is given in [10].
FIGURE 7.2. Edge detection. Effects of various edge-enhancement operators with thresholds T and edge densities D. Images in parts a, b, and c are enhanced with equation (7.15); parts d, e, and f, with equation (7.17); and parts g, h, and i, with equation (7.18). (a) D = 5 percent, T = 48. (b) D = 10 percent, T = 34. (c) D = 15 percent, T = 27. (d) D = 5 percent, T = 232. (e) D = 10 percent, T = 164. (f) D = 15 percent, T = 128. (g) D = 5 percent, T = 146. (h) D = 10 percent, T = 103. (i) D = 15 percent, T = 80.
Another technique for partitioning multispectral images into regions is described in [11].

7.2.3 Texture Analysis
By thresholding, images may be segmented into regions that are homogeneous with respect to a given image property. If objects have approximately constant reflectance over their surfaces, regions of constant gray level represent objects. More generally, regions of homogeneous texture (see sec. 2.8.3) may indicate objects [12]. Conversely, edges may not only be defined by abrupt changes in gray level but also at locations at which there is an abrupt change in texture. Thus, textural features are important for image segmentation and classification. Current texture analysis techniques are based on Fourier expansion or statistical analysis. Hawkins [13] described texture as a nonrandom arrangement of a local elementary pattern repeated over a region that is large in comparison to the pattern's size. Texture is often qualitatively described by its coarseness. The coarseness of a texture is related to the spatial repetition period of the local structure. A large period implies a coarse texture. Therefore, texture properties such as coarseness and directionality can be derived from the power spectrum of a texture sample. High values of the power
High values of the power spectrum |F|², given in equation (2.48), near the origin indicate a coarse texture, but in a fine texture the values of |F|² are spread over much of the spatial-frequency domain. The two-dimensional power spectrum also represents the directions of edges and lines in an image. A texture with a preferred direction θ will have high values of |F|² around the perpendicular direction θ + π/2. Thus, texture measures can be derived from averages of the power spectrum taken over ring- and wedge-shaped regions of the spatial-frequency domain:

    t_r = ∫ |F(r, θ)|² dθ    (7.23)
    t_θ = ∫ |F(r, θ)|² dr    (7.24)
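Discrete versions of these measures can be sketched as follows; sums over Fourier-plane pixels stand in for the integrals, and the numbers of rings and wedges are arbitrary choices.

```python
import numpy as np

def ring_wedge_energy(g, n_rings=4, n_wedges=4):
    """Sum the power spectrum |F|^2 over ring-shaped regions (radial
    frequency bands, cf. eq. (7.23)) and wedge-shaped regions (direction
    sectors, cf. eq. (7.24)) of the spatial-frequency plane."""
    F = np.fft.fftshift(np.fft.fft2(g))
    P = np.abs(F) ** 2
    M, N = g.shape
    y, x = np.indices((M, N))
    u, v = y - M // 2, x - N // 2            # frequency coordinates
    r = np.hypot(u, v)
    theta = np.mod(np.arctan2(v, u), np.pi)  # direction folded to [0, pi)
    r_edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    t_edges = np.linspace(0, np.pi, n_wedges + 1)
    rings = [P[(r >= r_edges[i]) & (r < r_edges[i + 1])].sum()
             for i in range(n_rings)]
    wedges = [P[(theta >= t_edges[i]) & (theta < t_edges[i + 1])].sum()
              for i in range(n_wedges)]
    return rings, wedges
```

A coarse texture concentrates energy in the inner rings; a texture with a preferred direction concentrates energy in one wedge.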
Statistics of local image property values, such as means and variances, computed at every point of a given image g may be used as texture measures. For example, the directional differences between pairs of average gray levels were proposed as texture measures in [14]. Let ḡ_m(j, k) be the average gray level of image g in a square region of side m+1 centered at (j, k). Differences of these averages for pairs of horizontally, vertically, or diagonally adjacent local regions may be used as texture measures. The definitions of the differences are:

1. Horizontal

    T_0(j, k) = |ḡ_m(j, k-m-1) - ḡ_m(j, k)|    (7.25)

2. Vertical

    T_90(j, k) = |ḡ_m(j-m-1, k) - ḡ_m(j, k)|    (7.26)

3. Diagonal

    T_45(j, k) = |ḡ_m(j-m-1, k-m-1) - ḡ_m(j, k)|    (7.27)
    T_135(j, k) = |ḡ_m(j-m-1, k) - ḡ_m(j, k-m-1)|    (7.28)

with m = 1, 2, ....
For a coarse texture and small displacements m of the local regions, the values in T(j, k) should be small; i.e., the histogram of T(j, k) should have values near zero. Conversely, for a fine texture comparable to the local region size, the elements of T(j, k) should have different values so that the histogram of T(j, k) is spread out. Textural properties can also be derived from the probabilities that gray levels occur as neighbors [15]. The higher the probability that a gray level occurs as a neighbor of the same or a similar gray level, the finer is the texture. Texture is a local property of a picture element. Therefore, texture measures are dependent on the size of the local observation region.
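A sketch of such a measure; the window placement below is a simplified convention (upper-left-cornered windows, displaced by the window side), not necessarily the exact indexing of equations (7.25) to (7.28).

```python
import numpy as np

def local_mean(g, m):
    """Average gray level over every (m+1) x (m+1) window of g, indexed by
    the window's upper-left corner."""
    s = m + 1
    rows, cols = g.shape[0] - m, g.shape[1] - m
    out = np.zeros((rows, cols))
    for dj in range(s):
        for dk in range(s):
            out += g[dj:dj + rows, dk:dk + cols]
    return out / s ** 2

def texture_coarseness(g, m=2):
    """Mean absolute difference of horizontally and vertically adjacent
    (nonoverlapping) local means: near zero for uniform regions, larger
    for texture that is fine at the scale m + 1."""
    gm = local_mean(np.asarray(g, dtype=float), m)
    d = m + 1                               # displacement between adjacent windows
    t_h = np.abs(gm[:, :-d] - gm[:, d:])    # horizontal differences
    t_v = np.abs(gm[:-d, :] - gm[d:, :])    # vertical differences
    return t_h.mean() + t_v.mean()
```

Thresholding such a measure, computed at every image point, segments the image into regions of uniform texture as described in section 7.2.1.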
7.3 Image Description
Once an image has been segmented into regions, a description in terms of the region properties and the relationships between the regions may be obtained. Measuring properties of the regions and establishing relationships between them are often very complex processes. Rosenfeld [1] summarized the problems and emphasized that prior knowledge about the class of images under consideration should be used as a model to guide both the segmentation and the measurement of properties. Use of prior knowledge is greatly facilitated in an interactive image processing system, in which the analyst combines the information available in the image with his experience from previous data. A special case of image description is classification. Here the description is simply the name of the class to which a region or a picture element belongs. Classification has been successfully applied for the analysis of remotely sensed images and is extensively treated in chapter 8. However, classification employing statistical pattern recognition techniques uses only sets of property values of picture elements or regions to characterize an image and does not use relationships between regions. An adequate structural description of remotely sensed images has not yet been obtained because of the complexity of the images and the presence of noise. An attempt to describe the structure of a class of remotely sensed images by a web grammar with use of labeled graphs is described in [16]. Web grammars [17] provide a convenient model to represent spatial relationships. Reviews of the structural or syntactic approach to image analysis are given in [18] and [19]. Despite the lack of explicit models for remotely sensed images, complex analysis problems have been solved successfully with systems in which the knowledge about the problem domain is implicitly contained in the analysis functions and in the communication with the user.
The representation of knowledge in programs is a powerful tool in the development of image analysis systems.

7.4 Image Analysis Applications

This section illustrates the successful solution of three image analysis problems, the determination of wind fields, land-use mapping, and change detection, with image analysis systems that permit the use of prior knowledge about the problem at each analysis step [20, 21].

7.4.1 Wind Field Determination

This example illustrates how models implicit in the analysis functions of the Atmospheric and Oceanographic Information Processing System
(AOIPS) [20], combined with the analyst's experience, are used to determine atmospheric motions from a series of satellite images. Remotely sensed images from geosynchronous satellites provide the possibility of studying the dynamics of the atmosphere [22, 23]. Atmospheric properties such as divergence, which describes horizontal atmospheric motions, can be associated with the development of severe storms and are, therefore, important for storm prediction. Divergence can be derived from a wind vector field describing the atmospheric fluid flow. Wind vector fields may be determined by measuring cloud displacements in a series of images obtained at known time intervals. The displacement of a cloud divided by the elapsed time between the images gives the wind velocity. Accurate registration of successive images (see ch. 5) and determination of the location of the clouds with respect to the Earth are necessary to convert relative cloud displacements to wind vectors in geodetic or Earth coordinates. Therefore, the transformation between image coordinates and a reference coordinate system on the Earth must be found. Because of the lack of a perfect geosynchronous orbit of the satellite, this transformation is rather complex and time dependent. The precise attitude of the spacecraft is determined by using orbit information and fitting landmarks or ground control points. (See secs. 3.3 and 6.4.) This navigation process for spin-stabilized spacecraft was described previously [24, 25].

Thus, the first steps in estimating wind fields are to identify landmarks in a series of images covering a storm and to measure their locations in the images and a map. The AOIPS [20] provides functions to identify landmarks with a cursor, to increase the scale and the contrast of subimages surrounding landmarks, and to extract the image coordinates of landmarks. Once a sufficient set of landmarks has been defined, the navigation process is performed.

The next step is to select unique clouds in the first image of the series and to track their motion in the remaining sequence. This step requires the segmentation of the images into clouds and background. Both automatic segmentation by thresholding and manual identification of clouds with a cursor are possible. The exact location of a given cloud in subsequent images is determined by cross-correlation. (See sec. 5.3.) The problem is to select clouds for correlation that are fairly small and whose shape is approximately invariant within the sequence [26, 27]. For each tracked cloud a wind vector is determined. The cloud height is calculated by estimating the cloud optical thickness and is used to assign the level for which the cloud is tracked. The wind vectors obtained by this cloud-tracking process are randomly distributed. A wind field on a uniformly spaced grid in the Earth coordinate system can be determined by interpolation.

Figure 7.3a shows a Geostationary Operational Environmental Satellite 1/Visible Infrared Spin Scan Radiometer (GOES-1/VISSR) image
FIGURE 7.3. Wind field determination for tropical storm Anita. (a) Visible Geostationary Operational Environmental Satellite 1/Visible Infrared Spin Scan Radiometer (GOES-1/VISSR) image taken on August 31, 1977, at 1600 G.m.t. (b) Lower tropospheric wind field determined from four images taken 3 min. apart.
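The cloud-tracking step, locating a small target window from the first image in a later image by maximizing the normalized cross-correlation, might be sketched as follows; this toy version omits the navigation to geodetic coordinates and the cloud-height assignment.

```python
import numpy as np

def track_displacement(img1, img2, top, left, size, search):
    """Return the (row, column) pixel displacement of the size x size
    target window of img1 at (top, left) that maximizes the normalized
    cross-correlation within +/- search pixels in img2."""
    t = img1[top:top + size, left:left + size].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-12)
    best, best_dj, best_dk = -np.inf, 0, 0
    for dj in range(-search, search + 1):
        for dk in range(-search, search + 1):
            j0, k0 = top + dj, left + dk
            if j0 < 0 or k0 < 0:
                continue                    # candidate window off the image
            w = img2[j0:j0 + size, k0:k0 + size].astype(float)
            if w.shape != t.shape:
                continue
            w = (w - w.mean()) / (w.std() + 1e-12)
            c = (t * w).mean()              # normalized cross-correlation
            if c > best:
                best, best_dj, best_dk = c, dj, dk
    return best_dj, best_dk
```

Dividing the displacement, scaled to ground distance, by the elapsed time between the two images gives the wind velocity for the tracked cloud.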
of tropical storm Anita obtained on August 31, 1977, over the Gulf of Mexico. A series of four images taken 3 min. apart was used to derive the wind field shown in figure 7.3b [28]. The length of an arrow is proportional to the wind speed. Figure 7.4 shows a visible image of the storm combined with the derived lower tropospheric wind field and with the wind field interpolated to a uniform grid. Various field parameters can be calculated from the uniform wind field. The radial and tangential wind velocity components in a polar coordinate system with the origin at the center of the storm are shown in figure 7.5. The contours represent constant values of velocity. The radial and tangential components can be used to calculate the areal horizontal mean divergence and the areal mean relative vorticity of the field, respectively. These parameters may be used as input to models for studying the dynamics of the atmosphere.

7.4.2 Land-Use Mapping
This image analysis example illustrates the description of regions that are not the result of segmentation but are defined independently and then superimposed on the remotely sensed image. Examples are urban area divisions according to population or political jurisdiction, such as census tracts or municipalities. The region boundaries are usually defined by polygonal boundaries given by a list of vertex coordinates. This spatial data structure is handled by most geographic information systems. To overlay the region boundaries on an image requires a conversion to the raster image data structure. The Image Based Information System (IBIS) [29] converts polygonal data structures to an image raster and provides functions for the registration of boundary and gray-scale images. An important application for this combination is the integration of socioeconomic and remotely sensed data to determine land-use changes [30]. The first processing steps are to convert the polygonal data structure of the Census Bureau Urban Atlas to an image and to register the resulting boundary image to the corresponding remotely sensed image. This registration process involves the selection of a sufficient number of ground control points. (See sec. 6.4.) Here the geographic locations of ground control points are known from the Urban Atlas file and do not have to be extracted from a map as described in section 6.4. A problem, however, is the registration of tract boundaries that do not coincide with physical features in the image. Figure 7.6a shows the census tract boundaries of the Richmond, Va., area. These boundaries are combined with two bands of a corresponding Landsat MSS image in figure 7.6b. The blue lines define the original census tracts, and the yellow boundaries are registered to the image.
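Once tract boundaries have been rasterized to a label image registered with the classified image, the final description step reduces to a tabulation; the following sketch uses hypothetical rasters and does not reflect IBIS's actual interfaces.

```python
import numpy as np

def tract_class_report(class_map, tract_map, n_classes):
    """For each census tract, count the picture elements assigned to each
    land-use class. Both inputs are integer rasters of the same shape;
    tract label 0 is assumed to mean 'outside all tracts'."""
    report = {}
    for tract in np.unique(tract_map):
        if tract == 0:
            continue
        pixels = class_map[tract_map == tract]
        report[int(tract)] = np.bincount(pixels, minlength=n_classes)
    return report
```

The per-tract class counts are the raw material for a report such as the one excerpted in table 7.1.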
The next step is to segment the remotely sensed multispectral image into natural regions applying, for example, classification. Figure 7.6c shows the classification map obtained by a clustering technique. (See
FIGURE 7.4. Combination of visible image of tropical storm Anita with wind field. (a) Image combined with wind field derived by cloud tracking. (b) Image combined with interpolated wind field.
FIGURE 7.5a. Radial wind velocity component in polar coordinates.
FIGURE 7.5b. Tangential wind velocity component in polar coordinates.
FIGURE 7.6. Land-use mapping of Richmond, Va. area with IBIS. (a) Census tract boundaries. (b) Original and registered boundaries combined with Landsat MSS bands 4 and 5 image (scene 5340-14420).
FIGURE 7.6. Continued. (c) Unsupervised classification map. (d) Census tract map.
sec. 8.4.) Seven different classes were distinguished. The third step involves the identification of each census tract in the boundary image with a unique color or gray value, generating a map as shown in figure 7.6d. The last major processing step is to combine the segmented image with the census tract map. Description of the census tract regions in terms of image properties is then simply a counting operation and report generation. A part of the report listing properties of the seven classes for the census tracts is shown in table 7.1.

7.4.3 Change Detection
An important application of remotely sensed images is the monitoring of changes on the Earth's surface caused by natural or man-made activities. Image segmentation techniques may be applied to detect temporal changes of specific objects or regions. Differencing of registered images (see sec. 4.5.2) shows all changes and causes errors in delineating the changed regions of interest. However, segmentation of a reference image into the regions of interest and background, and comparison of only these regions in a sequence of images, permits detection of the specific changes. In an application to assess the intensity and spatial distribution of insect damage in the Northeastern United States hardwood forests, a multispectral Landsat MSS image from one date is segmented by multidimensional thresholding into forest and nonforest regions [31]. All nonforest regions in the registered images from other dates are then eliminated, a step that permits an accurate detection of forest alterations without confusion caused by changes in agricultural areas and reduction of data volume. Figure 7.7 shows two Landsat MSS images obtained 1 year apart, the segmentation of one image into forest and nonforest regions, and the map displaying the forest areas changed by insect infestation.
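The mask-then-compare procedure can be sketched as follows; the band ranges and change threshold are illustrative values, not those of the study in [31].

```python
import numpy as np

def forest_change(img_t1, img_t2, band_ranges, loss_threshold):
    """Segment the date-1 multiband image into forest/nonforest by
    multidimensional thresholding, then difference one band of the two
    registered images only inside the forest mask.

    img_t1, img_t2 -- registered images of shape (bands, M, N)
    band_ranges    -- per-band (low, high) intervals characterizing forest
    """
    forest = np.ones(img_t1.shape[1:], dtype=bool)
    for band, (lo, hi) in enumerate(band_ranges):
        forest &= (img_t1[band] >= lo) & (img_t1[band] <= hi)
    # changes outside the forest mask (e.g., in agricultural areas)
    # cannot enter the result
    diff = img_t1[0].astype(float) - img_t2[0].astype(float)
    changed = forest & (diff > loss_threshold)
    return forest, changed
```

Restricting the comparison to the forest mask is what keeps agricultural changes from being flagged as forest alteration.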
TABLE 7.1. Properties of the seven classes for the census tracts.
FIGURE 7.7. Detection of forest cover alterations. (a) Landsat MSS image of Harrisburg, Pa. area taken July 19, 1976 (scene 54415001). (b) Landsat MSS image with insect infestation taken June 27, 1977 (scene 288714520).
FIGURE 7.7. Continued. (c) Segmented version of image in part a showing forest regions in red. (d) Change map showing forest regions in yellow and infested areas in blue.
REFERENCES
[1] Rosenfeld, A.; and Kak, A. C.: Digital Picture Processing. Academic Press, New York, 1976.
[2] Prewitt, J. M. S.: Object Enhancement and Extraction, in Lipkin, B. S.; and Rosenfeld, A.: Picture Processing and Psychopictorics. Academic Press, New York and London, 1970, pp. 75-149.
[3] Duda, R. O.; and Hart, P. E.: Pattern Classification and Scene Analysis. Wiley-Interscience, New York and London, 1973.
[4] Rosenfeld, A.; and Thurston, M.: Edge and Curve Detection for Visual Scene Analysis, IEEE Trans. Comput., vol. C-20, 1971, pp. 562-569.
[5] Hueckel, M.: An Operator Which Locates Edges in Digital Pictures, J. Assoc. Comput. Mach., vol. 18, 1971, pp. 113-125.
[6] Nack, M. L.: Temporal Registration of Multispectral Digital Satellite Images Using Their Edge Images. AAS/AIAA Astrodynamics Specialist Conference, Nassau, Bahamas, July 1975.
[7] Fram, J. R.; and Deutsch, E. S.: On the Evaluation of Edge Detection Schemes and Their Comparison with Human Performance, IEEE Trans. Comput., vol. C-24, 1975, pp. 616-628.
[8] Ehrich, R. W.: Detection of Global Edges in Textured Images. Technical Report, ECE Dept., University of Massachusetts, Amherst, Mass., 1975.
[9] Frei, W.; and Chen, C.: Fast Boundary Detection: A Generalization and a New Algorithm, IEEE Trans. Comput., vol. C-26, 1977, pp. 988-998.
[10] Gupta, T. N.; and Wintz, P. A.: A Boundary Finding Algorithm and Its Applications, IEEE Trans. Circuits Syst., vol. CAS-22, 1975, pp. 351-362.
[11] Robertson, T. V.; Fu, K. S.; and Swain, P. H.: Multispectral Image Partitioning. LARS Information Note 071373, Purdue University, Lafayette, Ind., 1973.
[12] Zucker, S. W.; Rosenfeld, A.; and Davis, L. S.: Picture Segmentation by Texture Discrimination, IEEE Trans. Comput., vol. C-24, 1975, pp. 1228-1233.
[13] Hawkins, J. K.: Textural Properties for Pattern Recognition, in Lipkin, B. S.; and Rosenfeld, A.: Picture Processing and Psychopictorics. Academic Press, New York and London, 1970, pp. 347-370.
[14] Weszka, J. S.; Dyer, C. R.; and Rosenfeld, A.: A Comparative Study of Texture Measures for Terrain Classification, IEEE Trans. Systems, Man Cybernetics, vol. SMC-6, 1976, pp. 269-285.
[15] Haralick, R. M.; Shanmugam, K.; and Dinstein, I.: Texture Features for Image Classification, IEEE Trans. Systems, Man Cybernetics, vol. SMC-3, 1973, pp. 610-621.
[16] Brayer, J. M.; and Fu, K. S.: Application of Web Grammar Model to an Earth Resources Satellite Picture. Proceedings of Third International Joint Conference on Pattern Recognition, Coronado, Calif., 1976.
[17] Pfaltz, J. L.; and Rosenfeld, A.: Web Grammars. Proceedings of First International Joint Conference on Artificial Intelligence, Washington, D.C., 1969.
[18] Miller, W. F.; and Shaw, A. C.: Linguistic Methods in Picture Processing: A Survey. Proceedings of Fall Joint Computer Conference, Thompson, Washington, D.C., 1968, pp. 279-290.
[19] Fu, K. S.: Syntactic Methods in Pattern Recognition. Academic Press, New York, 1974.
[20] Bracken, P. A.; Dalton, J. T.; Quann, J. J.; and Billingsley, J. B.: AOIPS: An Interactive Image Processing System. National Computer Conference Proceedings, AFIPS Press, 1978, pp. 159-171.
[21] Moik, J. G.: SMIPS/VICAR Image Processing System: Application Program Description. NASA TM 80255, 1979.
[22] Hubert, L. F.; and Whitney, L. F., Jr.: Wind Estimation from Geostationary Satellite Pictures, Mon. Weather Rev., vol. 99, 1971, pp. 665-672.
[23] Arking, A.; Lo, R. C.; and Rosenfeld, A.: A Fourier Approach to Cloud Motion Estimation, J. Appl. Meteorol., vol. 17, 1978, pp. 735-744.
[24] Smith, E. A.; and Phillips, D. R.: Automated Cloud Tracking Using Precisely Aligned Digital ATS Pictures, IEEE Trans. Comput., vol. C-21, 1972, pp. 715-729.
[25] Mottershead, C. T.; and Phillips, D. R.: Image Navigation for Geosynchronous Meteorological Satellites. Seventh Conference on Aerospace and Aeronautical Meteorology and Symposium on Remote Sensing from Satellites, American Meteorological Society, Melbourne, Fla., 1976, pp. 260-264.
[26] Leese, J. A.; Novak, C. S.; and Clark, B. B.: An Automated Technique for Obtaining Cloud Motion from Geosynchronous Satellite Data Using Cross-correlation, J. Appl. Meteorol., vol. 10, 1971, pp. 118-132.
[27] Billingsley, J.; Chen, J.; Mottershead, C.; Bellian, A.; and DeMott, T.: AOIPS Metpak: A Meteorological Data Processing System. Computer Sciences Corp. Report CSC/SD-77/6084, 1977.
[28] Rodgers, E.; Gentry, R. C.; Shenk, W.; and Oliver, V.: The Benefits of Using Short Interval Satellite Images to Derive Winds for Tropical Cyclones, Mon. Weather Rev., vol. 107, May 1979.
[29] Bryant, N. A.; and Zobrist, A. L.: IBIS: A Geographic Information System Based on Digital Image Processing and Image Raster Datatype. Proceedings of Symposium on Machine Processing of Remotely Sensed Data, Purdue University, Lafayette, Ind., 1976, pp. 1A-1 to 1A-7.
[30] Bryant, N. A.: Integration of Socioeconomic Data and Remotely Sensed Imagery for Land Use Applications. Proceedings of Caltech/JPL Conference on Image Processing Technology, Data Sources and Software for Commercial and Scientific Applications, California Institute of Technology, Pasadena, Calif., Nov. 1976, pp. 91-98.
[31] Williams, D. L.; and Stouffer, M. L.: Monitoring Gypsy Moth Defoliation Via Landsat Image Differencing. Symposium on Remote Sensing for Vegetation Damage Assessment, American Society of Photogrammetry, 1978, pp. 221-229.
8. Image Classification

8.1 Introduction
An important segmentation method for multiimages is classification, whereby objects (points or regions) of an image are assigned to one of a prespecified set of classes. The description is simply the name of the class to which the object belongs. A multiimage is represented by a set of property values measured or computed for each component. A property may be the gray level, a texture measure, the coefficient of an orthogonal image transformation (e.g., Fourier, Karhunen-Loève transformation), or the description of size and shape of a region in the image. The set of property values for a given point is called a pattern. The property values are combined into a P-dimensional vector f, which can be represented as a point in pattern space. The basic condition for classification is that the representative patterns of a class form compact regions or clusters in pattern space, i.e., that the pattern vectors are not randomly distributed. The assumption is then made that each pattern f belongs to one and only one of K classes. For remotely sensed image data, this assumption is justified, because most materials reflect or emit a unique spectrum of electromagnetic energy. (For example, see fig. 8.1.) Because of the variations of object characteristics and noise, remotely sensed images may be regarded as samples of random processes. (See sec. 2.2.2.) Thus, image properties or patterns are random variables. The variations and clustering of pattern vectors with the spectral characteristics shown in figure 8.1 are illustrated in figure 8.2. Given a set of patterns for an image, statistical decision theory or geometric techniques may be used to decide to which class a pattern should be assigned. The set of decision rules is called a classifier. Let the set of all patterns be S, where

    S = {¹f, ²f, ...}    (8.1)

Formally, the classes S_k are obtained by partitioning the set S into K subsets such that

    S_k ∩ S_j = ∅    for k ≠ j
                                        (8.2)
    S_1 ∪ S_2 ∪ ... ∪ S_K = S
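The compactness condition can be illustrated with a minimal geometric decision rule, assignment of each pattern vector to the nearest class mean in pattern space; this is an illustration only, not the decision rules developed in this chapter.

```python
import numpy as np

def nearest_mean_classify(patterns, class_means):
    """Assign each P-dimensional pattern vector to the class whose mean
    vector is nearest in Euclidean distance.

    patterns    -- array of shape (n, P)
    class_means -- array of shape (K, P)
    Returns class indices in 0..K-1."""
    # squared distance from every pattern to every class mean
    d2 = ((patterns[:, None, :] - class_means[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```

When the clusters of figure 8.2 are well separated, such a rule recovers the class membership of the pattern vectors.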
FIGURE 8.1. Spectral characteristics of different object types.

FIGURE 8.2. Clusters in pattern space.
The general design of a classifier requires some information about the set S. In general this information is incomplete, and only a subset s is known,

    s = {¹f, ..., ᴹf} ⊂ S    (8.3)

The set s is called the training set. It is used to obtain information about
IMAGE CLASSIFICATION
the classes S_k and to derive the class boundaries. Depending on the available knowledge, the following cases may be distinguished:

1. The training set s is available, and a partition into K subsets s_k is known such that

s_k ⊂ S_k    (8.4)

Thus, the class membership of each pattern in the training set is known. It is assumed that each subset s_k contains M_k training patterns. This case is known as supervised classification.

2. A training set s is available. However, the partition into subsets s_k is unknown. This case is known as unsupervised classification. The problem is further complicated if the number of classes K is also unknown.

The classification problem is to find a decision rule (a classifier) that
partitions an image into K disjoint class regions according to knowledge about a limited set of patterns. The classifier should be effective for patterns not in the training set, and it should be efficient with respect to execution time. Because of the random variations, a given pattern may belong to more than one class. To reduce ambiguous decisions and to assign a pattern to only one class, an additional reject class S₀ is often introduced. All patterns with dubious class membership are assigned to S₀.

Image classification consists of the following steps (fig. 8.3): (1) preprocessing, (2) training set selection and determination of class characteristics, (3) feature selection, and (4) classification. The digitized images are preprocessed to correct for radiometric and geometric errors and enhanced to facilitate the selection of a training set. This training set s is used to determine the class characteristics, with a supervised technique if the partition of s into K classes is known, or with an unsupervised technique if no partition of s is known. Feature selection determines a set of image properties that best describe and discriminate object categories. These properties are called features. Often, the original measurements, i.e., the gray values of a multiimage, are used as features. Finally, classification of the image patterns based on the selected features is performed. It is assumed that the training set s contains M patterns:

s = {ʲf, j = 1, ..., M}    (8.5)

FIGURE 8.3. Image classification. [Block diagram: recorded images pass through preprocessing, training set selection, feature selection, and classification to produce a classification map.]
For supervised classification, the training patterns for class S_k are denoted by ʲf_k. Thus, s_k is given by

s_k = {ʲf_k, j = 1, ..., M_k},  k = 1, ..., K    (8.6)
where it is assumed that M_k training patterns are given for class S_k and that the number of classes is K.

The features for a given point or region in an image will be represented by an N-dimensional feature vector z. The structure of statistical classifiers is determined primarily by the probability density function p(z) for the feature vectors. Most important is the multivariate normal (Gaussian) density, given by

p(z) = (2π)^(-N/2) |C|^(-1/2) exp[-½ (z - m)ᵀ C⁻¹ (z - m)]    (8.7)

where z is an N-dimensional feature or pattern vector, m is the N-dimensional mean vector, C = (σ_ij) is the N by N covariance matrix, and |C| is the determinant of C. Mean vector and covariance matrix are defined by

m = E{z}    (8.8)

and

C = E{(z - m)(z - m)ᵀ}    (8.9)

where the expected value of a vector or matrix is found by taking the expected values of its components. The covariance matrix is always symmetric. The diagonal element σ_ii is the variance of z_i, and the off-diagonal element σ_ij is the covariance of z_i and z_j. If z_i and z_j are statistically independent, σ_ij = 0. The matrix C is positive definite, so |C| is strictly positive. Cases where |C| = 0 would occur (e.g., when one component of z has zero variance or when two components are identical) are excluded.

The multivariate normal density is completely specified by the mean vector and the covariance matrix. Knowledge of the covariance matrix allows calculation of the dispersion of the data in any direction. Normally distributed patterns form a single cluster (fig. 8.4). The center of the cluster is determined by the mean vector, and the shape of the cluster is determined by the covariance matrix. The loci of points of constant density, as given by equation (8.7), are hyperellipsoids for which the quadratic form is constant:

d = (z - m)ᵀ C⁻¹ (z - m)    (8.10)

The quantity d is called the Mahalanobis distance from z to m. The principal axes of the hyperellipsoids are given by the eigenvectors of C, and the eigenvalues determine the lengths of the axes. The volume of the hyperellipsoid corresponding to a Mahalanobis distance d is given by

V = V_N |C|^(1/2) d^(N/2)    (8.11)
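As a concrete illustration of equations (8.7) and (8.10), the sketch below evaluates the bivariate normal density and the Mahalanobis distance for an invented two-dimensional mean vector and covariance matrix (the values of m and C and the restriction to N = 2 are illustrative assumptions, not data from the text):

```python
import math

# Hypothetical 2-D example: mean vector m and covariance matrix C.
m = [1.0, 2.0]
C = [[2.0, 0.5],
     [0.5, 1.0]]

def inv2(C):
    """Inverse and determinant of a 2 x 2 matrix."""
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    inv = [[ C[1][1] / det, -C[0][1] / det],
           [-C[1][0] / det,  C[0][0] / det]]
    return inv, det

def mahalanobis(z, m, C):
    """Quadratic form d = (z - m)^T C^-1 (z - m) of equation (8.10)."""
    Cinv, _ = inv2(C)
    dz = [z[0] - m[0], z[1] - m[1]]
    return sum(dz[i] * Cinv[i][j] * dz[j] for i in range(2) for j in range(2))

def normal_density(z, m, C):
    """Bivariate normal density p(z) of equation (8.7) for N = 2."""
    _, det = inv2(C)
    return math.exp(-0.5 * mahalanobis(z, m, C)) / (2.0 * math.pi * math.sqrt(det))

# At z = m the Mahalanobis distance is zero and the density is maximal.
print(mahalanobis(m, m, C))
print(normal_density(m, m, C))
```

Points of equal Mahalanobis distance trace out the constant-density ellipses described in the text.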
FIGURE 8.4. Bivariate normal distribution. [Bell-shaped surface over the (z1, z2) plane, centered at (m1, m2).]
where V_N is the volume of an N-dimensional unit hypersphere:

V_N = π^(N/2) / (N/2)!                               N even
V_N = 2^N π^((N-1)/2) ((N-1)/2)! / N!                N odd
                                                      (8.12)
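The two branches of equation (8.12) can be checked against the familiar low-dimensional cases; a minimal sketch:

```python
import math

def unit_hypersphere_volume(N):
    """Volume V_N of an N-dimensional unit hypersphere, equation (8.12)."""
    if N % 2 == 0:
        return math.pi ** (N // 2) / math.factorial(N // 2)
    k = (N - 1) // 2
    return 2.0 ** N * math.pi ** k * math.factorial(k) / math.factorial(N)

print(unit_hypersphere_volume(2))  # area of the unit circle: pi
print(unit_hypersphere_volume(3))  # volume of the unit sphere: 4*pi/3
```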
Thus, for a given pattern dimensionality N, the scatter of the patterns varies directly with |C|^(1/2). In general m and C are not known. They can be estimated from the M training patterns ʲz by
m = (1/M) Σ_{j=1}^{M} ʲz    (8.13)

and

C = (1/(M-1)) Σ_{j=1}^{M} (ʲz - m)(ʲz - m)ᵀ    (8.14)

8.2 Feature Selection
The representation of patterns in terms of measured image properties often does not lead to efficient classification schemes, because the classes may be difficult to separate or the number of measurements is large, or both. One reason is that sensors are in general defined by specifications other than pattern classification. There may be linear or nonlinear combinations of the measurements that afford a better separation of the classes. The concept of feature selection is used to determine those measurements that are most effective in classification. If the dimensionality P of measurement space is large (e.g., a scanner with 12 or more channels), classification algorithms cannot be efficiently implemented in measurement space, and classification even with simple algorithms becomes very time consuming on digital computers. It is therefore desirable to reduce the dimensionality of the space in which classification algorithms must be computed. Therefore, a feature space of dimensionality N < P is introduced. Features are sets of combined or selected measurements derived from the originally measured image properties. The goal is to find a set of features that has lower dimensionality than the original measurements and optimizes classifier performance.

The features for a given spatial location (x, y) are represented by an N-dimensional feature vector z = (z1, ..., z_N)ᵀ. Thus, feature selection is a mapping of the set of P-dimensional patterns {f} into the set of N-dimensional feature vectors {z}. There is no general, theoretically justified method for feature selection. Many proposed techniques are based on heuristic considerations, such as intuition and experience. For example, in an interactive image analysis and recognition system the analyst with his previously acquired knowledge is part of the selection process [1]. In this context feature selection is more than just the transformation of the measurement space into a form that can simplify the pattern classification procedure. It provides for inserting a priori knowledge that reduces the number of pattern samples required to achieve a specified performance. A more mathematically founded technique is the expansion of a pattern into an orthonormal series (e.g., Fourier, Karhunen-Loève expansion), where the expansion coefficients are used as features. Here new features with, it is hoped, lower dimensionality and better class discrimination are obtained by a linear transformation of the patterns [2, 3]. Another approach is the evaluation of the quality of pattern components and selection of a subset as features. For statistically independent pattern components, distance measures between the probability densities characterizing the pattern classes may be used. Two such distance measures that have been widely used are divergence and Bhattacharyya distance [4].
8.2.1 Orthogonal Transforms
The coefficients of expansion of the patterns f into a complete set of orthonormal basis vectors may be used as components of a feature vector z. The feature vectors are obtained by a linear transformation:

z = Tf    (8.15)
where T is a unitary matrix whose rows are the basis vectors of the expansion. (See sec. 2.6.1.4.) Expansions that are independent of the image patterns are the Fourier and Hadamard transforms. The orthonormal matrices are the harmonic and the Hadamard matrices. The expansion may also be adapted to the image characteristics. In this case the image patterns are considered random variables, and a training set is required to compute statistical characteristics and orthogonal matrices according to a given criterion. A
criterion that minimizes the error of approximation of the patterns by the features without class distinction leads to the Karhunen-Loève (K-L) transform. (See sec. 2.6.1.4.) The K-L transform determines the orthonormal vector system that provides the best approximation of the original patterns in a mean-square-error sense. The coefficients are ordered such that the patterns are represented by the fewest possible number of features. Furthermore, the obtained features are uncorrelated. In classification, however, the interest is in discrimination between patterns from different classes, not in accurate representation. Feature vectors should emphasize differences between classes. Therefore, other criteria that maximize the distance between feature vectors from different classes are used.

In section 2.6.1.4 it was shown that the basis vectors of the optimal expansion are obtained as eigenvectors of a symmetric, positive definite kernel matrix R, which will now be denoted by Q. This matrix is computed from a training set. In section 8.1 two cases were distinguished: (1) a partition of the training set s into K classes s_k is known, and (2) the class membership of the training patterns is not known.

In the first case, the error criterion, equation (2.127), has to be modified to account for the class membership of the patterns. A mean-square approximation error ε is defined as

ε = Σ_{k=1}^{K} P(S_k) E{ ||f_k - Σ_{n=1}^{N} z_kn t_n||² }    (8.16)

where P(S_k) is the a priori class probability, f_k is a P-dimensional training pattern from class s_k, {t_n} is a set of orthogonal vectors, and the expansion coefficients z_kn are used as elements of the new class feature vectors z_k. The best approximation of the original patterns in a mean-square-error sense is given by determining {t_n} such that equation (8.16) is minimized. The new features are given by

z_kn = t_nᵀ f_k,  n = 1, ..., N    (8.17)

Proceeding as in section
2.6.1.4, the error, as given in equation (8.16), becomes

ε = Σ_{n=N+1}^{P} t_nᵀ [ Σ_{k=1}^{K} P(S_k) E{f_k f_kᵀ} ] t_n    (8.18)
Thus, the kernel matrix Q is the overall correlation matrix

R = Σ_{k=1}^{K} P(S_k) E{f_k f_kᵀ}    (8.19)

which is a weighted average of the class correlation matrices R_k, where

R_k = E{f_k f_kᵀ}    (8.20)

For patterns with nonzero means, the class covariance matrices C_k are used:

C_k = E{(f_k - m_k)(f_k - m_k)ᵀ}    (8.21)

If the a priori class probabilities P(S_k) are all equal, the total covariance matrix C becomes

C = (1/K) Σ_{k=1}^{K} C_k    (8.22)

Minimization of equation (8.18) (see sec. 2.6.1.4) leads to the following eigenvalue problem:
( 2.130 ) of C are the rows of T:
T= /tfl'
tN 7'
(2.132) /
Mean vector are given by
nt. and covariance
matrix
D_, of the class
feature
vectors
zz_.
n_.= E{zl. } =Tmk and
(8.23)
Dt: =E{ (z_. nk) (zk
n_.) 7')
/ ( 8.24 )
= TCtT _'= diag (,\_,,) = diag (,_k,/) J
Thus D_,. is a diagonal matrix whose elements are the eigenvalues of Ch. and the variances of the new features as well. If the patterns L for class S_. are normally distributed with the probability density 1 p(f_. [Sk)= (2rr)z,/_ [C_],/., e _':'''' .... c_ ,,t ........ (8.25)
then the features z_k obtained by equation (8.15) are normally distributed with the following density function:

p(z_k | S_k) = (2π)^(-N/2) |D_k|^(-1/2) exp[-½ (z_k - n_k)ᵀ D_k⁻¹ (z_k - n_k)]    (8.26)

because of equations (8.23) and (8.24). Expanding the exponent yields

(z_k - n_k)ᵀ D_k⁻¹ (z_k - n_k) = Σ_{i=1}^{N} (z_ki - n_ki)² / λ_ki    (8.27)

It then becomes evident that the contours of constant probability density are hyperellipsoids with centers at n_k. The directions of the principal axes are along the eigenvectors of the covariance matrix, and the diameters of a hyperellipsoid are proportional to the square roots of the corresponding eigenvalues or variances.

The expansion based on equation (8.16) is known as the generalized K-L transform [5]. Geometrically, the transformations (8.15) and (2.129) are a rotation of the pattern space. Figure 8.5 illustrates such a two-dimensional rotation for two classes. Feature selection by means of the generalized K-L transform is made irrespective of the mean vectors of the individual classes. To retain the means and, therefore, the spatial distribution of individual classes, the total covariance matrix C may be defined as
C = Σ_{k=1}^{K} P(S_k) E{(f_k - m)(f_k - m)ᵀ}    (8.28)
FIGURE 8.5. Rotation of pattern space. (a) Pattern space. (b) Rotated pattern space.
where

m = E{f}    (8.29)
The vector m is the total mean of all patterns in the training set s. Here the transform matrix T is a function of both the variances and the means of each class.

For classification the interest is in transforms that emphasize the dissimilarity between patterns of different classes rather than provide fidelity of representation. One possibility is to determine the linear transformation (8.15) such that the mean square interclass distance d is maximized, where

d = (2 / (K(K-1))) Σ_{k=2}^{K} Σ_{l=1}^{k-1} d_kl    (8.30)
The quantity d_kl is the mean distance between two feature vectors from different classes k and l. For each vector in class k the distance to all vectors in classes l = 1 to k-1 is computed, and this computation is performed for k = 2 to K. The mean distance between feature vectors of different classes may be defined as the Euclidean distance:

d_kl = E{(z_k - z_l)ᵀ (z_k - z_l)}    (8.31)

With equation (8.17), the mean square distance d is obtained:
d = Σ_{n=1}^{N} t_nᵀ [ (2/K) Σ_{k=1}^{K} R_k - (2 / (K(K-1))) Σ_{k=2}^{K} Σ_{l=1}^{k-1} (m_k m_lᵀ + m_l m_kᵀ) ] t_n    (8.32)

where R_k is the class correlation matrix given in equation (8.20) and m_k, m_l are class means. Thus the distance d can be written as

d = Σ_{n=1}^{N} t_nᵀ Q t_n    (8.33)

with the kernel matrix

Q = (2/K) Σ_{k=1}^{K} R_k - (2 / (K(K-1))) Σ_{k=2}^{K} Σ_{l=1}^{k-1} (m_k m_lᵀ + m_l m_kᵀ)    (8.34)
As in section 2.6.1.4 the vectors t_n that maximize d in equation (8.33) are obtained as eigenvectors of Q:

Q t_n = λ_n t_n    (2.130)

The vectors t_n are combined into the transform matrix T as in equation (2.132).
In the second case no partition of the training set s into classes is known. The optimal expansion is the K-L transform defined in section 2.6.1.4. The covariance matrix C is computed from the M training patterns f in s without class distinction as

C = E{(f - m)(f - m)ᵀ}    (8.35)

where m is the mean vector of the patterns in s.

8.2.2 Evaluation of Given Features
In the previous section new features were obtained by a linear transformation of the patterns. Class separability may be increased by proper choice of the criterion used to derive the transformation. The resulting P components are ordered according to the magnitude of their variance. Therefore, dimensionality reduction may be achieved by selecting the first N < P components as features for classification.

Another approach is to evaluate the P given pattern components and to select a subset of N < P features that yields the smallest classification error. Ideally, this problem could be solved by computing the probability of correct classification associated with each N-feature subset and then selecting the one giving the best performance. However, it is generally not feasible to perform the required computations. The number of subsets of features that must be examined is

P! / [N! (P-N)!]    (8.36)

For example, to select the best four of eight available features requires 70 computations of the error probabilities. Therefore, alternative methods must be found for feature selection.
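The count in equation (8.36) and the text's four-of-eight example can be verified directly; the sketch below also enumerates the candidate subsets that an exhaustive search would have to evaluate:

```python
from itertools import combinations
from math import comb

# Equation (8.36): number of N-feature subsets of P measurements.
P, N = 8, 4
print(comb(P, N))                          # 70, as in the text's example

# The 70 candidate subsets an exhaustive evaluation would examine.
subsets = list(combinations(range(1, P + 1), N))
print(len(subsets))
```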
The distances between the class probability distributions may be used for feature evaluation. Intuitively, a feature for which the distance between class means is large and the sum of variances is small affords good class separation. With the assumption that the original features are statistically independent, the evaluation is simplified. Each of the P features is evaluated independently, and the N best features are selected. For two classes S_j and S_k with mean vectors m_j, m_k and variances σ_j² and σ_k², a quality measure for feature z_n, n = 1, ..., N, may be defined as

G_n = (m_jn - m_kn)² / (σ_jn² + σ_kn²)    (8.37)

Obviously 0 ≤ G_n < ∞. A large value for G_n indicates that feature z_n is useful for separating classes S_j and S_k. Figure 8.6 shows examples for the applicability of the measure given in equation (8.37). In figure 8.6a the feature z_n is sufficient to separate the two classes S_j and S_k, but the large overlapping area in figure 8.6b requires additional features. The limits of the measure given in equation
(8.37) become obvious in figure 8.6c, where the distribution of class S_j has two maxima for feature z_n. Although complete separation is possible (there is no overlap of the distributions), G_n = 0 gives no indication of the quality of feature z_n. Thus, if the class probability densities are not normal, mean and variance are not sufficient to evaluate the separability. Therefore, distance measures that are dependent on the distribution must be used.

FIGURE 8.6. Separability of features. [Class probability densities p(z_n) for classes S_j and S_k with means m_j and m_k: (a) well-separated distributions; (b) strongly overlapping distributions; (c) a bimodal distribution of S_j with m_j = m_k.]

One measure of the distance between classes is known as divergence [4]. Divergence between two classes S_j and S_k is defined as
D(S_j, S_k) = ∫ [p(z|S_j) - p(z|S_k)] ln [ p(z|S_j) / p(z|S_k) ] dz    (8.38)

where p(z|S_k) is the probability density distribution of z for class S_k. Divergence is a measure of the dissimilarity of two distributions and thus
provides an indirect measure of the ability of the classifier to discriminate successfully between them. Computation of this measure for groups of N of the available features provides a basis for selecting an optimal set of N features. The subset of N features for which D is maximum is best suited for separation of the two classes S_j and S_k. In the case of normal distributions with mean m_k and covariance matrix C_k, the divergence becomes

D(S_j, S_k) = ½ tr[(C_j - C_k)(C_k⁻¹ - C_j⁻¹)] + ½ tr[(C_j⁻¹ + C_k⁻¹)(m_j - m_k)(m_j - m_k)ᵀ]    (8.39)

where tr[C] is the trace of the matrix C.
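For a single feature (N = 1) the traces in equation (8.39) reduce to scalar products, which makes the behavior of the divergence easy to see; the class parameters below are invented for illustration:

```python
# Equation (8.39) specialized to one feature (N = 1).
def divergence_1d(mj, vj, mk, vk):
    """Divergence D(Sj, Sk) of two univariate normal class densities
    with means mj, mk and variances vj, vk."""
    term1 = 0.5 * (vj - vk) * (1.0 / vk - 1.0 / vj)
    term2 = 0.5 * (1.0 / vj + 1.0 / vk) * (mj - mk) ** 2
    return term1 + term2

# Identical distributions give zero divergence; separation increases it.
print(divergence_1d(0.0, 1.0, 0.0, 1.0))   # 0.0
print(divergence_1d(0.0, 1.0, 3.0, 1.0))   # 9.0
```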
Divergence is defined for two classes. An extension to K classes is possible by computing the average divergence D_A over all pairs of classes and selecting the subset of N features for which the average divergence is maximum; that is, by maximizing D_A with respect to all N-dimensional features, where

D_A(z) = (2 / (K(K-1))) Σ_{k=1}^{K-1} Σ_{j=k+1}^{K} D(S_j, S_k)    (8.40)

This strategy, although reasonable, is not optimal. For instance, a large
single pairwise divergence term in equation (8.40) could significantly bias the average divergence. So in the process of ranking feature combinations by D_A, it is useful to examine each of the pairwise divergences as well. The behavior of the pairwise divergence D(S_j, S_k) with respect to the probability of correct classification P_c is inconsistent. As the separability of a pair of classes increases, D(S_j, S_k) also increases without limit, whereas P_c saturates at 100 percent. A modified form of the divergence, referred to as the transformed divergence D_T, has a behavior more like probability of correct classification [6, 7]:

D_T(S_j, S_k) = 100 [1 - exp(-D(S_j, S_k)/8)]    (8.41)
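The saturation of equation (8.41) can be demonstrated with a few lines of code (the scaling by 8 in the exponent follows the reconstructed equation; treat it as an assumption of this sketch):

```python
import math

def transformed_divergence(D):
    """Equation (8.41): D_T saturates at 100 as the pairwise divergence D grows."""
    return 100.0 * (1.0 - math.exp(-D / 8.0))

# Unlike D itself, D_T is bounded: it rises steeply and then levels off.
for D in (0.0, 8.0, 80.0):
    print(transformed_divergence(D))
```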
The saturating behavior of D_T reduces the effects of widely separated classes when taking the average over all pairwise separations. The average divergence based on transformed divergence has been found a much more reliable criterion for feature selection than the average divergence based on ordinary divergence. As an example, consider feature selection for the classification of forest types in a Landsat image. Figure 8.7 shows a Landsat MSS false-color image of an area in North Carolina, obtained February 26, 1974. Polygonal training areas representing the prototypes for seven classes are outlined in white. The class names and the number M_k of training vectors for each class are listed in table 8.1. The total covariance matrices computed from the training vectors with equations (8.22), (8.28), and (8.34), the eigenvalues and eigenvectors of the covariance matrices, and the percentage variance in the principal components are shown in table 8.2.
The pairwise transformed divergences for the P = 4 features and the K = 7 classes, computed from the training data for the multispectral image in figure 8.7, are listed in the left column of table 8.3. Classes 1 and 3 (closed canopy and partial close) cannot be reliably separated with the available features. To improve separability of the given classes, additional features derived from other measurements are required. These measurements may include other spectral bands, images obtained at different times or by other sensors, and ground measurements. The pairwise transformed divergences for eight features obtained by addition of a second Landsat
FIGURE 8.7. Landsat MSS image with training areas for seven classes outlined (scene 153815100).
MSS multispectral image of the same area, taken on August 30, 1973, are shown in the right column of table 8.3. The maximum average divergences D_A for several feature subsets are shown in table 8.4.

In summary, the objective of feature selection is to find a small number of variables that have significant discriminating relevance for classification. If the dimensionality reduction is pushed too far, however, significant discriminating capability is lost. If, on the other hand, the
TABLE 8.1. Class Names and Number M_k of Training Vectors for Forest Type Classification

Class number    Class name        M_k
1               Closed canopy     548
2               Open canopy       642
3               Partial close     199
4               Regeneration      702
5               Hardwood/pine     469
6               Clearcut          692
7               Old/clearcut      125
dimensionality of the feature space is too large, the available training patterns will be so sparsely distributed that the estimation of the probability densities becomes very inaccurate. Consequently, the dimensionality of the feature space should only be reduced to a certain N, where N < P, at which point the class probability density p(z|S_k) of the features will be used for classification.

8.3 Supervised Classification
In supervised classification a partition of the available training set into K subsets s_k is known. The training features are used to determine the class boundaries such that the classification of unknown features results in a minimum error rate. Two approaches are available. In the statistical approach, the class boundaries are given by the parameters of probability densities. In the geometric approach, the boundaries are represented by the coefficients of discriminant functions. Supervised classification consists of the following steps (see fig. 8.8):

1. Determination of the number of classes of interest
2. Selection of training set and determination of class boundaries
3. Feature selection
4. Classification of new patterns based on class characteristics and selected features

Important and practically difficult problems are the determination of the number of classes K and the selection of a representative training set. An interactive system facilitates this task. Images may be displayed on the screen of a display device. The determination of K and the selection and evaluation of s_k are performed interactively.
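The four steps above can be sketched as a minimal pipeline. The class names, training values, and the nearest-class-mean rule in the last step are illustrative placeholders only; the statistical classifiers of section 8.3.1 would replace that decision rule in practice:

```python
# Minimal supervised-classification pipeline following the four steps above.
training = {                       # steps 1-2: classes and labeled training patterns
    "water":      [[1.0, 2.0], [2.0, 1.0]],
    "vegetation": [[8.0, 9.0], [9.0, 8.0]],
}

def class_means(training):         # step 2: class characteristics
    return {k: [sum(z[i] for z in v) / len(v) for i in range(2)]
            for k, v in training.items()}

def select_features(z):            # step 3: identity mapping in this sketch
    return z

def classify(z, means):            # step 4: assign to the nearest class mean
    f = select_features(z)
    return min(means, key=lambda k:
               sum((f[i] - means[k][i]) ** 2 for i in range(2)))

means = class_means(training)
print(classify([1.5, 1.5], means))   # water
print(classify([9.0, 9.0], means))   # vegetation
```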
TABLE 8.2. Total covariance matrices computed from the training vectors with equations (8.22), (8.28), and (8.34), with the eigenvalues and eigenvectors of the covariance matrices and the percentage variance in the principal components. [The numeric entries of this table are not recoverable from the scan.]
TABLE 8.3. Pairwise Transformed Divergences (D_T) for Seven Classes of Forest Types Versus Class Numbers

Four features              Eight features
S_j  S_k  D_T(S_j,S_k)     S_j  S_k  D_T(S_j,S_k)
3    4    100.0            3    7    100.0
3    7    100.0            3    6    100.0
1    4    100.0            3    4    100.0
1    7    100.0            2    6    100.0
1    6    100.0            1    7    100.0
3    6    99.99            1    6    100.0
4    5    99.94            1    4    100.0
2    7    99.91            2    7    100.0
2    6    99.87            4    6    99.99
2    4    99.74            4    5    99.98
5    7    98.54            5    6    99.97
7    6    98.35            2    4    99.95
5    6    91.29            5    7    99.49
1    5    87.12            3    5    96.88
4    7    85.46            1    2    95.48
3    5    82.84            2    5    95.31
2    5    66.76            6    7    95.14
6    7    63.20            4    7    93.80
1    2    57.73            1    5    93.62
2    3    54.38            1    3    86.37
1    3    12.50            2    3    76.28
TABLE 8.4. Average Divergences D_A of Various Feature Subsets¹

Number of original    Number of       Feature subset            Average
measurements P        features N                                divergence D_A
4                     4               1, 2, 3, 4                90.4
4                     3               2, 3, 4                   84.5
4                     3               1, 2, 4                   83.3
4                     3               1, 2, 3                   83.0
4                     3               1, 3, 4                   76.8
4                     2               2, 4                      ²82.2
4                     2               3, 4                      ³52.9
8                     8               1, 2, 3, 4, 5, 6, 7, 8    96.8
8                     6               2, 3, 4, 6, 7, 8          ²96.2
8                     4               2, 4, 7, 8                96.1
8                     2               2, 8                      ³87.6

¹ Features 1, 2, 3, and 4 are MSS bands 4, 5, 6, and 7, respectively, of February 1974. Features 5, 6, 7, and 8 are MSS bands 4, 5, 6, and 7, respectively, of August 1973.
² Best.  ³ Worst.
FIGURE 8.8. Supervised classification. [Block diagram: selection of number of classes and training set, determination of class characteristics, feature selection, and classification of feature vectors z into class names.]
8.3.1 Statistical Classification

In the statistical approach the feature vectors z are considered random variables. Their description requires that the conditional probability densities p(z|S_k) and the a priori probabilities P(S_k) be known. Because of the homogeneity of objects, it may be assumed that the conditional probability density of z depends only on the class of the object to which z belongs. It is also assumed that the functional form of p is known and that only the parameters of p have to be determined from the training set. This form of classification is referred to as parametric classification. Nonparametric techniques are used if the functional form of the underlying probability densities is unknown.

The design of a statistical classifier is based on a loss function, which evaluates correct and incorrect decisions. Let p(z|S_k) be the conditional probability density function for z, given that z is from class S_k. Let P(S_k) be the a priori probability of class S_k occurring. Let λ(S_i|S_k) be the loss incurred when a pattern actually belonging to class S_i is assigned to class S_k. The conditional average loss L(z, S_k) is given by

L(z, S_k) = Σ_{i=1}^{K} λ(S_k|S_i) P(S_i|z)    (8.42)

It is the loss associated with observing the feature vector z and assigning it to class S_k, weighted by the losses incurred in assigning that particular z to each of the various classes. P(S_i|z) is the a posteriori probability of class S_i occurring, having observed z [8, 9]. The classifier that minimizes the average loss, given by equation (8.42), is called the Bayesian or optimal classifier. Bayes' decision rule states that the classifier must assign z to the class S_k for which

L(z, S_k) ≤ L(z, S_i)  for all i = 1, ..., K
for all i=
IMAGE CLASSIFICATION The a posteriori probability by the Bayes rule: P(S_]z) can be computed from
267 p(z[SO
e(s, I z) = p(z
k_l
[ s,)p(s,)
(8.43)
p(* ISk)P(Sk)
Consider the following symmetric loss function:
X(Sk I S_) = It assigns no loss to a correct Thus, all errors are equally average loss is L(z, Sk), where L(z, Sk) =
0 i=k k i _= decision costly.
i,k=l
.....
K
(8.44)
and a unit loss to any error [9]. The corresponding conditional
___ P(S,
i=I¢
I z) : 1 P(Sk
I z)
(8.45)
and P(S_k|z) is the conditional probability that class S_k is correct. Thus, to minimize the conditional average loss, the class S_k that maximizes the a posteriori probability P(S_k|z) should be selected.

A classifier can be represented by a set of discriminant functions g_i(z), i = 1, ..., K. Such a classifier assigns a feature vector z to class S_k if

g_k(z) > g_i(z)  for all i ≠ k    (8.46)

The classifier computes K discriminant functions and selects the class corresponding to the largest discriminant. A Bayesian classifier with symmetric loss function can now be represented by the discriminant functions:

g_k(z) = P(S_k|z),  k = 1, ..., K    (8.47)
It assigns a feature z to the class with the largest a posteriori probability. The choice of discriminant functions is not unique. If every g_k(z) is replaced by f(g_k(z)), where f is a monotonically increasing function, the resulting classification is unchanged. With use of equation (8.43), an equivalent representation of the Bayesian classifier is given by

g_k(z) = p(z|S_k) P(S_k)    (8.48)

Now the decision rule is: Given the feature z, decide z ∈ S_k if

p(z|S_k) P(S_k) > p(z|S_i) P(S_i)  for all i ≠ k    (8.49)

This criterion is commonly referred to as the maximum likelihood decision rule.
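A minimal sketch of this decision rule for a single feature (N = 1), using the logarithmic discriminant that appears later as equation (8.55); the class names, means, variances, and priors are invented training statistics, not values from the text:

```python
import math

# Two hypothetical classes described by mean, variance, and prior.
classes = {
    "water":      {"m": 10.0, "var": 4.0,  "P": 0.3},
    "vegetation": {"m": 40.0, "var": 25.0, "P": 0.7},
}

def g(z, m, var, P):
    """Discriminant g_k(z) = ln P(S_k) - 0.5 ln|C_k| - 0.5 (z - m)^2 / var."""
    return math.log(P) - 0.5 * math.log(var) - 0.5 * (z - m) ** 2 / var

def classify(z):
    """Equation (8.46): assign z to the class with the largest discriminant."""
    return max(classes, key=lambda k: g(z, **classes[k]))

print(classify(12.0))   # water
print(classify(38.0))   # vegetation
```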
For remotely sensed images the assumption that p(z|S_k) is a multivariate normal (Gaussian) probability density distribution with mean m_k and covariance matrix C_k is often justified [11]. Thus:

p(z|S_k) = (2π)^(-N/2) |C_k|^(-1/2) exp[-½ (z - m_k)ᵀ C_k⁻¹ (z - m_k)]    (8.50)

where |C_k| is the determinant of C_k. The mean vectors m_k and covariance matrices C_k are given by

m_k = E{z_k}    (8.51)

and

C_k = E{(z - m_k)(z - m_k)ᵀ}    (8.52)

They are estimated from the M_k feature vectors ʲz_k in each class of the training set:

m_k = (1/M_k) Σ_{j=1}^{M_k} ʲz_k    (8.53)

and

C_k = (1/(M_k - 1)) Σ_{j=1}^{M_k} (ʲz_k - m_k)(ʲz_k - m_k)ᵀ    (8.54)
The vectors ʲz_k represent the training patterns, where k indexes the particular class and j indicates the jth prototype of class S_k. There may be M_k prototypes that are descriptive of the kth class S_k. Taking the logarithm of equation (8.48) and eliminating the constant term yields a new g_k:

g_k(z) = ln P(S_k) - ½ ln |C_k| - ½ (z - m_k)ᵀ C_k⁻¹ (z - m_k)    (8.55)

Thus, for normally distributed patterns, the optimal classifier is a quadratic classifier.
Some of the assumptions made in the derivation of the maximum likelihood decision rule are often not realized in remotely sensed data:

1. The data from each class are normally distributed. This assumption has been shown to be erroneous at the 1-percent level of significance with a chi-square test on various data sets [11]. However, the assumption performs sufficiently well, and the use of a more complicated decision rule is not justified. Rather, radiometric errors and misregistration [12] should be corrected by preprocessing.

2. Class mean vectors and covariance matrices can be estimated from training data. The training statistics may not adequately describe the conditions of the classes if the number of measurements in s_k is insufficient, if a class is composed of subclasses, if the atmospheric conditions and the Sun and sensor positions relative to a ground resolution element are different for training and nontraining data, and if the sensor generates noise (e.g., striping). Some of these errors can be removed by radiometric correction for haze, illumination, and sensor effects. (See sec. 3.2.)

3. The loss functions λ and the a priori probabilities P(S_i) are known.
These functions, however, cannot be accurately estimated.

Despite the radiometric corrections performed during preprocessing, there will be pattern vectors that do not belong to any of the classes defined by the training set. In remotely sensed images, such picture elements may represent roads, small water bodies, and mixtures of object points. The classification procedure assigns these patterns to one of the training classes, although they may yield very small discriminant values g_k(z) for all classes. In the one-dimensional two-class example in figure 8.9, the patterns having a low probability of belonging to any of the training classes may be assigned to a reject class S₀ [7]. This operation can be performed by computing the probability density value associated with the feature vector and rejecting the point if the value is below a specified threshold. Alternatively, the discriminant values stored as part of the classification result can be used. If z is N-dimensional and normally distributed, the quadratic form Q_k(z) has a chi-square (χ²) distribution with N degrees of freedom, with cumulative distribution c_N(χ²), where Q_k(z) is given by

Q_k(z) = (z - m_k)ᵀ C_k⁻¹ (z - m_k)    (8.56)

Therefore, thresholding r percent of the normal distribution P shown in figure 8.10 is equivalent to thresholding r percent of the chi-square distribution of Q_k(z). The quadratic form Q_k(z) is related to g_k(z) in equation (8.55) in the following manner:

Q_k(z) = -2 g_k(z) + 2 ln P(S_k) - ln |C_k|    (8.57)

Thus, every pattern for which

Q_k(z) > χ_r²,  where c_N(χ_r²) = r/100    (8.58)

is assigned to the reject class S₀.
A different threshold value may be applied to each class. Figure 8.11 shows the classification map obtained by classifying the image in figure 8.7 with the Bayesian classifier as given in equations (8.46) and (8.55). Feature vectors with low probability of correct classification are assigned to the reject class So, which is displayed in white. The large size of So is due to the classifier being designed to recognize only forest areas, although the image also contains water and agricultural areas, which are assigned to So.
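The reject-class test of equations (8.55) through (8.58) can be sketched in code. The following is an illustrative sketch only, assuming NumPy and SciPy are available; the function and variable names (train_gaussian, classify_with_reject, samples_by_class) are hypothetical and not from the original system:

```python
import numpy as np
from scipy.stats import chi2

def train_gaussian(samples_by_class):
    """Estimate mean vector, inverse covariance, and log-determinant
    for each training class (the statistics needed by eq. (8.55))."""
    stats = {}
    for label, X in samples_by_class.items():      # X: (n_samples, N) array
        m = X.mean(axis=0)
        C = np.cov(X, rowvar=False)
        stats[label] = (m, np.linalg.inv(C), np.log(np.linalg.det(C)))
    return stats

def classify_with_reject(z, stats, priors, r=32.0):
    """Assign z to the class with the largest discriminant g_k(z);
    reject to class S0 when the quadratic form Q_k(z) of eq. (8.56)
    exceeds the chi-square threshold of eq. (8.58)."""
    threshold = chi2.ppf(1.0 - r / 100.0, df=len(z))   # chi-square, N d.o.f.
    best, g_best, Q_best = 0, -np.inf, 0.0
    for label, (m, C_inv, log_det_C) in stats.items():
        d = z - m
        Q = d @ C_inv @ d                                       # eq. (8.56)
        g = np.log(priors[label]) - 0.5 * log_det_C - 0.5 * Q   # eq. (8.55)
        if g > g_best:
            best, g_best, Q_best = label, g, Q
    return best if Q_best <= threshold else 0   # 0 denotes reject class S0
```

With r = 32, the threshold retains 68 percent of each class population, as in the ESMR-6 example later in this chapter.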
FIGURE 8.9. Rejection of patterns with low probability of correct classification.
FIGURE 8.10. Reject regions. (a) Normal distribution. (b) Chi-square distribution.
The direct implementation of equation (8.46) with the discriminant function given in equation (8.55) results in a classifier that must compute the discriminant function for all classes for every pixel. The classification time is proportional to N²K, where N and K are the dimension of the feature vector and the number of classes, respectively. However, the maximum likelihood classifier may be implemented effectively by a table lookup technique, because the number of unique feature vectors in remote sensing images is often only approximately 10 percent of the total number of pixels [10]. Specifying a threshold value for each class defines the decision regions as hyperellipsoids. (See equation (8.10).) The boundary points of these hyperellipsoids are computed and stored in lookup tables for later classification.

FIGURE 8.11. Classification map obtained by maximum likelihood classification of the image in figure 8.7 into seven classes.
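Because only a small fraction of the possible feature vectors actually occurs in an image, the class decision can also be cached per unique vector. The following is a minimal sketch of this idea, assuming NumPy; classify_pixel stands in for a full discriminant evaluation such as equation (8.55), and all names are hypothetical:

```python
import numpy as np

def classify_image_lookup(image, classify_pixel):
    """Classify a multispectral image (rows, cols, bands), evaluating the
    expensive per-pixel rule only once per unique feature vector and
    reusing the stored class for every repetition."""
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands)
    table = {}                                  # feature vector -> class
    out = np.empty(len(pixels), dtype=int)
    for i, z in enumerate(pixels):
        key = z.tobytes()
        if key not in table:
            table[key] = classify_pixel(z)      # full discriminant here
        out[i] = table[key]
    return out.reshape(rows, cols)
```

When 10 percent of the pixels are unique, roughly 90 percent of the discriminant evaluations are avoided.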
Building the tables and classifying an image by looking up the prestored class names for a feature vector requires considerably less computer time than classification with the direct method. In determining the boundary for a particular class, only a localized region of the feature space has to be searched, but in the direct implementation, all classes must be considered for every pixel. Figure 8.12 compares the classification times for both methods for 7 and 20 classes with 4 features used. The time is given in seconds of central processing unit (CPU) time measured for assembly language programs on an IBM 360/91 computer. A further increase in computational speed may be obtained by approximation of the hyperellipsoid decision boundaries by hyperrectangles or parallelepipeds. Figure 8.13 shows the decision boundaries obtained from
FIGURE 8.12. Comparison of classification times for direct and table lookup implementation of the maximum likelihood classifier for four features. (K is the number of classes.)
FIGURE 8.13. Maximum likelihood and parallelepiped decision boundaries.
equation (8.55) by holding g_k(z) constant, and their parallelepiped approximations, for a three-class, two-feature problem. A feature vector z = (z_1, z_2, ..., z_N)^T is assigned to class S_k if

m_ki − t_ki ≤ z_i ≤ m_ki + t_ki    (8.59)

where m_ki and t_ki are the mean and threshold values of feature z_i for class S_k, respectively. If the parallelepipeds overlap, no unambiguous decisions are possible. Addington [13] proposed a hybrid classifier that uses the Bayesian decision rule to resolve ambiguities. Currently most classifiers are implemented in computer programs. Hardware implementations are known only for simple decision rules such as the parallelepiped and the maximum likelihood classifiers. Because of the variability of remotely sensed data, a derived set of discriminant functions may not be usable for different images. Classifiers usually have to be designed with training data for each new image to be classified.

8.3.2 Geometric Classification
Statistical classifiers are based on decision theory, and it can be shown that a given loss function is minimized. The drawback is that the conditional probability densities p(z|S_k) of the feature vectors must be known. For practical cases the determination of the multivariate probability densities is only an approximation to the real situation. Therefore, attempts are made to avoid random processes as models for the pattern classes.
In the geometric approach feature vectors z are viewed as points in the N-dimensional feature space. If the points z_k belonging to classes S_k, k = 1, ..., K, form clusters, classification is achieved by finding decision surfaces that separate points of class S_k from those of classes S_j, j = 1, ..., K, j ≠ k. Construction of these surfaces is possible without knowledge of p(z|S_k). However, the determination of hypersurfaces may be as difficult as the determination of multivariate probability densities. A computationally feasible solution is obtained only if the separating decision surfaces are hyperplanes. The justification for this assumption depends essentially on the characteristics of the feature vectors.

The geometric approach is usually considered for the two-class problem. The feature space is divided by the hyperplane into two regions, one containing points of class S_1, the other containing all points not belonging to class S_1. The problem is to determine the equation of a hyperplane that optimally separates the classes S_1 and S_2 by using the classified feature vectors in the training set s, where

s = {s_1, s_2}    (8.60)
The decision surface is given by a discriminant function g(z). The equation g(z) = 0 defines the surface that separates points assigned to S_1 from points assigned to S_2. The problem of finding the discriminant functions can be formulated as a problem of minimizing a criterion function, which gives the average loss incurred in classifying the set of training vectors. Because it is very difficult to derive the minimum-risk linear discriminant, an iterative gradient descent procedure is used for minimizing the criterion function. A linear discriminant function is given by

g(z) = w_0 + Σ_{i=1}^{N} w_i z_i = w_0 + w^T z    (8.61)
where w_i is the ith component of the weight vector w, w_0 is the threshold weight, and N is the dimension of the feature vector. The quantity z_i is the ith component of the feature vector. A two-class linear classifier implements the following decision rule: If g(z) > 0, decide class S_1; and if g(z) < 0, decide class S_2. Thus, z is assigned to class S_1 if the inner product w^T z exceeds the threshold −w_0. The equation

g(z) = 0    (8.62)

defines the decision surface that separates points assigned to class S_1 from points assigned to class S_2. Because g(z) is linear, this decision surface is a hyperplane H. The discriminant function given in equation (8.61) gives an algebraic measure of the distance from z to the hyperplane. Vector z can be expressed as

z = z_p + r w/‖w‖    (8.63)

where z_p is the normal projection of z onto H and r is the distance between z and H. Then

r = g(z)/‖w‖    (8.64)

and the distance from the origin to H is given by r_0 = w_0/‖w‖. (See fig. 8.14.) The linear discriminant function given in equation (8.61) can be written in homogeneous form as

g(z) = a^T y    (8.65)
FIGURE 8.14. Linear decision boundary.
where

y = (1, z_1, ..., z_N)^T  and  a = (w_0, w_1, ..., w_N)^T    (8.66)
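Equations (8.61) through (8.64) translate directly into code. A small sketch assuming NumPy, with hypothetical helper names:

```python
import numpy as np

def linear_discriminant(w0, w, z):
    """g(z) = w0 + w^T z, eq. (8.61); decide S1 if positive, S2 if negative."""
    return w0 + w @ z

def signed_distance(w0, w, z):
    """Algebraic distance r from z to the hyperplane g(z) = 0, eq. (8.64)."""
    return linear_discriminant(w0, w, z) / np.linalg.norm(w)
```

For example, with w = (3, 4)^T and w_0 = −5, the point z = (1, 1)^T gives g(z) = 2 and lies at distance 2/5 on the positive (S_1) side of the hyperplane.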
Given two distinct classes of patterns, the classifier design reduces to the problem of finding the weight vector a. Let y_j^1, j = 1, ..., M_1, be the feature vectors representing class S_1 (training vectors for class S_1), and let y_j^2, j = 1, ..., M_2, be the training vectors for class S_2. These vectors will be used to determine the weights in the linear discriminant function given in equation (8.65). If a solution exists for which all training vectors are correctly classified, the classes are said to be linearly separable. A training vector is classified correctly if a^T y_j^1 > 0 or if a^T y_j^2 < 0. Because a^T (−y_j^2) > 0, replacing every y_j^2 by its negative normalizes the design problem to finding a weight vector a such that

a^T y_j > 0    (8.67)

for all M = M_1 + M_2 training vectors. Such a vector a is called a separating vector [8]. The weight vector a can be thought of as specifying a point in weight space. Each training vector y_j places a constraint on the possible location
of a separating vector. The equations a^T y_j = 0 define hyperplanes through the origin of weight space, having the y_j as normal vectors. Thus, the separating vector, if it exists, must be on the positive side of every hyperplane. It must be in the intersection of M half spaces, and any vector in this region is a separating vector. (See fig. 8.15.) It is evident that the separating vector is not unique.

The approach taken to find a solution to the set of linear inequalities a^T y_j > 0 is to define a criterion function J(a) that is minimized if a is a separating vector. This reduces the problem to one of minimizing a scalar function, which can be solved by a gradient descent procedure. It is important that the iterative procedure used does not converge to a limit point on the boundary. This problem can be avoided by introducing a margin, i.e., by requiring that

a^T y_j ≥ b > 0  for all j = 1, ..., M    (8.68)

The treatment of simultaneous linear equations is simplified by introducing matrix notation [9]. Let Y be the M × (N+1) matrix whose jth row is y_j^T, and let b be the column vector b = (b_1, ..., b_M)^T. Then the problem is to find a weight vector a satisfying

Ya ≥ b > 0    (8.69)

The matrix Y is rectangular with more rows than columns. The vector a is overdetermined and can be computed by minimizing the error between Ya and b.
FIGURE 8.15. Linearly separable training samples in weight space (from Duda and Hart [8]).
The criterion function to be minimized is

J(a, b) = ½ ‖Ya − b‖²    (8.70)
where a and b are allowed to vary subject to the constraint b > 0. The a that achieves the minimum is a separating vector if the training vectors are linearly separable. To minimize J, a gradient descent procedure is used. The gradient of J with respect to a is given by

∇_a J = Y^T (Ya − b)    (8.71)

and the gradient of J with respect to b is given by

∇_b J = −(Ya − b)    (8.72)

For a given b,

a = (Y^T Y)^{-1} Y^T b    (8.73)

thereby minimizing J with respect to a. In modifying b, the constraint b > 0 must be respected. The vector b is determined iteratively by starting with b > 0 and preventing b from converging to zero by setting all positive components of ∇_b J to zero. This is the Ho-Kashyap algorithm [14] for minimizing J(a, b), summarized as follows:

b(0) > 0, but otherwise arbitrary
a(0) = (Y^T Y)^{-1} Y^T b(0)
b(i+1) = b(i) + ρ [e(i) + |e(i)|]    (8.74)
a(i+1) = (Y^T Y)^{-1} Y^T b(i+1),  i = 0, 1, ...

with the error vector

e(i) = Ya(i) − b(i)    (8.75)
If the training samples are linearly separable, the Ho-Kashyap algorithm yields a solution in a finite number of steps with e converging to zero. If the samples are not linearly separable, e converges to a nonzero value. If K classes are present, linear classifiers may be designed as parallel or sequential classifiers. A parallel classifier requires K(K−1)/2 linear discriminant functions, one for every pair of classes. A sequential classifier is structurally simpler, because the K-class problem is reduced to K−1 two-class problems that are solved sequentially. Thus, a sequential classifier makes a series of pairwise decisions. At the kth stage a linear discriminant function separates feature vectors assigned to class S_k from those not assigned to class S_k [8, 15]. The classification time is proportional to N(K−1). Because an iterative technique is used, the time to design the linear sequential classifier depends on the size and characteristics of the training set.
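The Ho-Kashyap procedure of equations (8.74) and (8.75) can be sketched as follows, assuming NumPy. The pseudoinverse stands in for the explicit (Y^T Y)^{-1} Y^T product, and the names and default parameters are illustrative only:

```python
import numpy as np

def ho_kashyap(Y, rho=0.5, iters=1000, tol=1e-8):
    """Y holds the training vectors in homogeneous form, one per row,
    with the vectors of the second class already negated.  Returns the
    weight vector a and margin vector b of eq. (8.74)."""
    M = Y.shape[0]
    b = np.ones(M)                        # b(0) > 0, otherwise arbitrary
    Y_pinv = np.linalg.pinv(Y)            # plays the role of (Y^T Y)^{-1} Y^T
    a = Y_pinv @ b
    for _ in range(iters):
        e = Y @ a - b                     # error vector, eq. (8.75)
        if np.all(np.abs(e) < tol):
            break                         # separable case: e -> 0
        b = b + rho * (e + np.abs(e))     # raise b only where e is positive
        a = Y_pinv @ b
    return a, b
```

If the samples are linearly separable, Y @ a ends up positive in every component, i.e., a is a separating vector.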
8.4 Unsupervised Classification
Supervised classification techniques require a set of training patterns whose class membership is known. If a labeled or classified training set is not available, unsupervised classification techniques must be used. The only available knowledge is that the patterns in the given training set s belong to one of K classes. Additional problems are encountered if the number of classes K is also unknown.
There are several reasons why unsupervised classification is of great practical importance. The determination of the number of object classes in an image and the determination of a labeled training set for supervised classification often present practical difficulties. In the early stages of image analysis, unsupervised classification is valuable to gain insight into the structure of the data. Image patterns often change slowly with time. These temporal changes may be tracked by an unsupervised classifier. Two approaches to unsupervised classification may be distinguished: statistical and clustering. Unsupervised classification is only possible if assumptions about the structure of the patterns can be made. This structure is reflected in conditional probability density distributions or in similarity measures. Classification is impossible if the feature vectors are randomly distributed in feature space.

8.4.1 Statistical Unsupervised Classification
In the statistical approach, the probability density function for the feature vectors z in the unlabeled training set s is the mixture density p(z), given by

p(z) = Σ_{k=1}^{K} P(S_k) p(z|S_k)    (8.76)

Some or all of the quantities {K, P(S_k), p(z|S_k), k = 1, ..., K} may be unknown. Thus, unsupervised classification is the estimation of the unknown quantities in equation (8.76) with the feature vectors in s. No general solution of this problem is known. A solution exists under the assumption that K, P(S_k), and the form of p(z|S_k) are known; only the parameters of p(z|S_k) have to be estimated [8].

8.4.2 Clustering
Clustering does not assume any knowledge of probability density distributions of the feature vectors and formulates the problem as one of partitioning the patterns into subgroups or clusters. This approach is based on a measure of similarity. Thus, clustering is a technique for pattern classification in terms of groups of patterns or clusters that possess strong internal similarities. Viewed geometrically, the patterns form clouds of points in N-dimensional feature space. Clustering consists of two problems: (1) the definition of a measure of similarity between the patterns and (2) the evaluation of a partition of a set of patterns into clusters. The two basic data characteristics that can be used as measures of similarity are distance between patterns in feature or pattern space and density of patterns (i.e., the number of points) in different regions of these spaces. The distance between patterns in the same cluster can be expected to be significantly less than the distance between patterns in different clusters. Alternatively, relatively dense regions in feature space, each of which is separated from the others by sparsely populated regions, can be regarded as the set of pattern clusters. More than one of the clusters may belong to the same pattern class, or patterns from different classes may contribute to one cluster. The problem of clustering is to partition a set of M patterns or feature vectors z_1, ..., z_M into K disjoint subsets s_k, where z_1, ..., z_M compose the entire set

s = {z_1, ..., z_M}    (8.77)

Each subset is to represent a cluster, with patterns in one cluster being more similar than patterns in different clusters according to some similarity measure. The Euclidean distance between two feature vectors z_i and z_j,

d = ‖z_i − z_j‖ = [(z_i − z_j)^T (z_i − z_j)]^{1/2}    (8.78)
may be used as a measure of similarity. However, clusters defined by the Euclidean distance will be invariant only to translations and rotations, but not to linear transformations in general. Simple scaling of the coordinate axes, for example, can result in a different grouping of the patterns into clusters [8, 16]. Instead of a distance measure, the following nonmetric similarity function may be used:

S(z_i, z_j) = z_i^T z_j / (‖z_i‖ ‖z_j‖)    (8.79)

which is the cosine of the angle between the vectors z_i and z_j. Use of this measure is governed by certain assumptions, such as sufficient separation of clusters with respect to the coordinate system origin. After adoption of a similarity measure, a criterion function has to be
defined that measures the clustering quality of any partition of the patterns. One possibility is to define a performance index and select the partition that extremizes this index. A simple criterion function for clustering is the sum of squared errors index

J = Σ_{k=1}^{K} Σ_{z ∈ s_k} ‖z − m_k‖²    (8.80)
where

m_k = (1/M_k) Σ_{z ∈ s_k} z    (8.81)

i.e., m_k is the mean vector of cluster s_k. The quantity M_k is the number of patterns in s_k. Thus, for a given cluster s_k, the mean vector m_k is the best representative of the patterns in s_k in the sense that it minimizes the sum of the squared errors between the patterns of the cluster and its mean. An optimal partitioning is defined as one that minimizes the criterion function J. A clustering algorithm usually chooses an initial partition followed by an iterative procedure that reassigns feature vectors to clusters until an extremum of J is reached. Such a procedure can only guarantee a local extremum. Different starting points can lead to different solutions. This procedure is represented by the following basic clustering algorithm:
Choose an initial partition of the M feature vectors z into clusters. Compute cluster means m,,..., m_. 3. Calculate the distance of a feature vector z to each cluster mean. Assign z to the nearest cluster. If no reassignment occurs (algorithm converged), mum number of iterations is reached, stop. Otherwise calculate new cluster means and go to 3.
4. 5. 6.
or if the maxi
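The six steps above are essentially what is now called k-means clustering. A compact sketch assuming NumPy, with hypothetical names and the first K vectors taken as the initial cluster means:

```python
import numpy as np

def basic_clustering(Z, K, max_iter=100):
    """Iteratively reassign the feature vectors in Z (one per row) to the
    nearest of K cluster means, recomputing the means until no
    reassignment occurs -- a local extremum of J in eq. (8.80)."""
    means = Z[:K].astype(float).copy()              # step 2: initial means
    labels = np.zeros(len(Z), dtype=int)
    for it in range(max_iter):
        # steps 3-4: distance to each mean, assign to the nearest
        d = np.linalg.norm(Z[:, None, :] - means[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if it > 0 and np.array_equal(new_labels, labels):
            break                                   # step 5: converged
        labels = new_labels
        for k in range(K):                          # step 6: new means
            if np.any(labels == k):
                means[k] = Z[labels == k].mean(axis=0)
    return labels, means
```

Different initializations can yield different local extrema of J, as noted above.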
The disadvantage of this clustering procedure is that the number of clusters must be known and that the number of distances to be computed increases with the square of the number of pattern vectors to be analyzed. Additionally, the assumption is made that the classes are nonoverlapping in feature space. Overlapping pattern classes can lead to clusters that are mixtures of different pattern classes. An alternative to this concept of intersample distance measures for cluster development is the use of the density of samples. The sample density is used for parametric classification, where the distributions of the different pattern classes are known and their parameters are determined from a given set of training vectors. In a nonparametric unsupervised environment the regions of the feature space with higher density are regarded as the pattern clusters. A clustering algorithm that learns the number of clusters from the data distribution and permits overlapping of classes as long as the sample density in the region of overlap is less than the sample densities in the neighboring nonoverlapping regions was described by Dasarathy [17, 18]. The sample densities are estimated from the multidimensional histogram of the multiimage. The clusters are developed from the multidimensional histogram by merging each cell in the histogram space with its higher
density neighbor. The algorithm identifies the hills and valleys in the histogram, where some of the hills may be the result of overlapping distributions. Defining the centroids of such hills as cluster centers will lead to clusters that may represent a mixture of more than one pattern class. Not all cells determined by the merging process represent significant clusters. Mean cluster density and intercluster distance are used to derive a measure of significance for each cluster. First, all candidate cells whose densities definitely exceed the average density D_a before clustering are selected as definitely significant clusters. The average density is given by

D_a = M/H    (8.82)

where H is the number of nonempty cells in the original histogram, and M is the number of feature vectors to be analyzed. The distances between these clusters are then computed by

d_kl = ‖m_k − m_l‖    (8.83)

where m_k is the centroid of cluster s_k and m_l is the centroid of the definitely significant cluster s_l closest to s_k. The quantity d_max is the maximum distance between the definitely significant clusters. A measure of significance of a cluster s_k is defined as

q_k = (d_kl M_k)/(d_max M_m)    (8.84)

where d_kl is the distance between cluster s_k and its nearest definitely significant cluster. The variable M_k is the population of cluster s_k, and M_m is the population of the highest density cluster. The minimum value of q_k over the set of all definitely significant clusters is defined as the acceptable level of significance

q = min(q_k)    (8.85)

Any cluster s_j for which q_j ≥ q is accepted as significant. If q_j < q, cluster s_j is merged with its nearest definitely significant neighbor. Classification of the unknown patterns may be achieved with any classification algorithm with the information derived in the clustering process. A similar clustering algorithm is described in [19].

8.5 Classifier Evaluation

The performance of a classifier is evaluated in terms of its error rate. The
calculation of the error rate is too difficult for multiclass problems. Therefore, classifiers are tested experimentally by using the fraction of the pattern vectors of a test set that are misclassified as an estimate of the
error rate. If the true error rate of a classifier is p_e, and if m of the M test patterns are misclassified, then m has a binomial distribution:

p(m) = {M!/[m!(M − m)!]} p_e^m (1 − p_e)^(M−m)    (8.86)

Confidence intervals for this distribution are tabulated, with the size of the test set M as parameter [8]. Unless M is fairly large, the estimate p_c of the probability of correct classification, where

p_c = 1 − m/M    (8.87)

must be interpreted with caution. For example, if the error estimate 0.15 is obtained with 100 test patterns, the true error rate is between 8 and 24 percent with probability 0.95. The sizes of the training and test sets influence the estimated classifier
performance. The training set s is used to estimate distribution parameters or parameters of a discriminant function. Theoretically this estimation requires an infinite size of s. On the other hand, the cost of selecting and processing a training and test set increases with its size. It has also been observed that, with a finite size of s, classifier performance often does not improve when the number of features is increased. The relationship between a finite training set size and the dimensionality of the feature vectors has been investigated by Kanal [20].

The results of the comparison of two feature selection methods and their effect on classifier performance are summarized in table 8.5. The evaluation is based on the forest type classification problem with seven classes, introduced in section 8.2. Measurements from one Landsat MSS image with four spectral bands (shown in fig. 8.7) from February 26, 1974, and from a second MSS image taken on August 30, 1973, are used as patterns. The KL expansion based on the three covariance matrices in equations (8.22), (8.28), and (8.34), and average transformed divergence are used for feature selection. The selected feature subsets are classified with a Bayesian classifier and compared with the performance of a linear classifier. Classifier performance is expressed as probability of correct classification and classification time. The time is given in seconds of CPU time per 10^6 feature vectors on an IBM 360/91 computer. To select the best N features from P original measurements with use of transformed divergence, m = P!/[N!(P−N)!] possible feature combinations have to be evaluated. For the KL transform the first N components are the best selection. Table 8.5 also compares the performance of the criteria used to derive the orthogonal transform. The covariance matrix in equation (8.22) describes the optimal representation of the pattern classes, but use of equation (8.34) maximizes the distance between feature vectors from different classes. For the test data, both criteria yield approximately the same performance and are inferior to
TABLE 8.5. Comparison of two feature selection methods and their effect on classifier performance.
feature selection with the average transformed divergence method in both correct classification and time. The time required to perform the KL transform is considerably longer than the time to calculate the average divergences. The performance of features obtained with covariance matrix in equation (8.28) is significantly inferior. Figure 8.16 compares the probability of correct classification and classification time for KL transform and transformed divergence. The graphical presentation uses values from table 8.5 for P=8 measurements and the covariance matrix in equation (8.22). The classification accuracies of the individual classes are listed in table 8.6. For the multitemporal data the best four channels are selected according to the transformed divergence criterion. The classification accuracies for classes 1, 2, and 3 using all four channels from one multispectral Landsat image are very low. The poor separability of these classes is reflected in the corresponding low divergences in table 8.3. A graphical representation of the estimate of the classification accuracies using four features is shown in figure 8.17. Selection of the best four features from the eight measurements of a multitemporal image yields an almost uniform accuracy for all classes at only a slight increase in classification time. Most existing classifiers use only spectral and temporal information by classifying individual picture elements as pattern vectors. Classification accuracy can be increased by using spatial information. The combination of segmentation into regions and classification of each region as a unit rather than classifying each individual picture element allows the use of texture and other spatial characteristics of objects [21, 22]. Another problem affecting classifier performance in remote sensing applications is estimating the expected proportions of objects that cannot be observed directly or distinctly. 
Because of the limits in the spatial resolution of the instruments, different classes of objects may contribute to a single resolution element. With the radiation measured being a mixture of object classes, the pattern vectors are not characteristic of any object class [23, 24].

8.6 Classification Examples
Multispectral classification has been successfully applied as an image analysis technique for remote sensing studies in land use and agriculture [25]. In the field of geology, classification has not been as successful, primarily because of the nonhomogeneity of geologic units, presence of gradational boundaries, topographic effects, confusing influence of vegetative cover, and similarity of the spectral signature of different lithologies. The results of classifying the Landsat MSS image in figure 4.15 with four spectral measurements as features are shown in figure 8.18. The thematic map obtained by supervised classification with the maximum likelihood classifier given by equation (8.55) into 20 training classes is
FIGURE 8.16. Effect of feature selection on classification accuracy and classification time. (a) Classification accuracy versus number of features with original spectral bands and principal components. (b) Classification time versus number of feature vectors classified (360/91 CPU time).
TABLE 8.6. Correct Classification of Individual Classes

                     Correct classification (percent)
         Single image       Multitemporal image
Class    P=4, N=4           P=8, N=4    P=8, N=8
1        68.6               87.8        90.1
2        76.7               90.2        92.1
3        54.3               90.5        88.5
4        93.7               92.7        95.5
5        87.0               89.2        90.6
6        88.5               95.7        96.7
7        82.4               87.2        91.2
Total    78.8               90.5        92.1
shown in figure 8.18a. Figure 8.18b is the result of applying the clustering technique described in section 8.4.2 to the same image. Twenty significant clusters were obtained by analysis of the four-dimensional histogram. Geologists agree that the color enhancement of ratio images shown in figure 4.15c is superior to the classification results.
An example of supervised classification as a technique for the analysis of remotely sensed images using bipolarized rather than multispectral measurements is the detection of rainfall areas over land. Remote sensing of precipitation is fundamental to weather, climate, and Earth resources research activities. Upwelling microwave radiation measured by the Nimbus-6 Electrically Scanning Microwave Radiometer (ESMR-6) can
FIGURE 8.17. Class classification accuracies using four features.
be used to distinguish areas with rain over land and over ocean from areas with dry ground, with moist soil, or with no rain over ocean [26]. The ESMR-6 system measures thermal microwave radiation upwelling from the Earth's surface and atmosphere in a 250-MHz band centered at 37 GHz in two orthogonal (horizontal and vertical) polarizations [27]. The spatial resolution is approximately 20 by 40 km. The problem is to classify two-dimensional feature vectors (horizontal and vertical brightness temperatures) into five classes. The training set required to design the classifier was derived from the ESMR-6 data by using radar and ground station measurements coinciding with the Nimbus-6 overpass. Figure 8.19 shows a scatter plot of the horizontal and vertical polarized brightness temperatures for the five classes. With the assumption of normally distributed data, the ellipses represent the decision boundaries defined by equation (8.55) for r = 32 (i.e., 68 percent of the data within a class population, the data within one standard deviation, are encompassed by each ellipse). Pattern vectors outside the ellipses are assigned to the reject class S0. The lines represent the linear decision boundaries obtained for the linear classifier given by equation (8.61) with sequential decisions. No rain over ocean areas is first separated from rain over ocean, dry ground, wet soil, and rain over land areas. Next, rain over ocean areas is separated from the remaining classes. Then dry ground areas are separated from the two classes most difficult to separate: wet ground and rain over land areas. A large overlap occurs between data obtained from rainfall
FIGURE 8.18a. Map obtained by supervised classification into 20 classes.
FIGURE 8.18b. Map obtained by clustering.
FIGURE 8.19. Scatter plot of horizontally and vertically polarized brightness temperatures for the five classes.
over land areas and wet ground surfaces. Consequently, these two classes are difficult to separate. The results of a chisquare test [28] show that the assumption of a normal distribution of the class populations is justified. The classification map obtained with a Bayesian classifier for an area over the Southeastern United States is shown in figure 8.20. The geometric distortions caused by the conical scanner were not corrected.
FIGURE 8.20. Rain classification map from ESMR-6 data.
REFERENCES
[1] Patrick, E. A.: Interactive Pattern Analysis and Classification Utilizing Prior Knowledge, Pattern Recognition, vol. 3, 1971, pp. 53-71.
[2] Tou, J. T.; and Heydorn, R. P.: Some Approaches to Optimum Feature Extraction, in Tou, J., ed.: Computers and Information Sciences-II. Academic Press, New York, 1967.
[3] Watanabe, S., et al.: Evaluation and Selection of Variables in Pattern Recognition, in Tou, J., ed.: Computers and Information Sciences-II. Academic Press, New York, 1967.
[4] Kailath, T.: The Divergence and Bhattacharyya Distance Measures in Signal Detection, IEEE Trans. Commun. Technol., vol. 15, no. 1, 1967, pp. 52-60.
[5] Chien, Y. T.; and Fu, K. S.: On the Generalized Karhunen-Loève Expansion, IEEE Trans. Inf. Theory, vol. IT-13, 1967, pp. 518-520.
[6] Swain, P. H.; and King, R. C.: Two Effective Feature Selection Criteria for Multispectral Remote Sensing. LARS Information Note 042673, Laboratory for Applications of Remote Sensing, Purdue University, Lafayette, Ind., 1973.
[7] Swain, P. H.: Pattern Recognition: A Basis for Remote Sensing Data Analysis. LARS Information Note 111572, Laboratory for Applications of Remote Sensing, Purdue University, Lafayette, Ind., 1973.
[8] Duda, R. O.; and Hart, P. E.: Pattern Classification and Scene Analysis. Wiley-Interscience, New York, 1973.
[9] Andrews, H. C.: Mathematical Techniques in Pattern Recognition. Wiley-Interscience, New York, 1972.
[10] Eppler, W. G.: An Improved Version of the Table Look-Up Algorithm for Pattern Recognition. Ninth International Symposium on Remote Sensing of the Environment, Ann Arbor, Mich., 1974, pp. 793-812.
[11] Crane, R. B.; Malila, W. A.; and Richardson, W.: Suitability of the Normal Density Assumption for Processing Multispectral Scanner Data, IEEE Trans. Geosci. Electron., vol. GE-10, 1972, pp. 158-165.
[12] Cicone, R. C.; Malila, W. A.; Gleason, J. M.; and Nalepka, R. F.: Effects of Misregistration on Multispectral Recognition. Proceedings of Symposium on Machine Processing of Remotely Sensed Data, Purdue University, Lafayette, Ind., 1976, pp. 4A-1-4A-8.
[13] Addington, J. D.: A Hybrid Classifier Using the Parallelepiped and Bayesian Techniques. Proceedings of the American Society of Photogrammetry, Washington, D.C., Mar. 1975, pp. 772-784.
[14] Ho, Y. C.; and Kashyap, R. L.: A Class of Iterative Procedures for Linear Inequalities, SIAM J. Control, vol. 4, 1966, pp. 112-115.
[15] Bond, A. D.; and Atkinson, R. J.: An Integrated Feature Selection and Supervised Learning Scheme for Fast Computer Classification of Multispectral Data. Conference on Earth Resources Observation and Information Analysis Systems, University of Tennessee, Knoxville, Tenn., Mar. 1972.
[16] Tou, J. T.; and Gonzalez, R. C.: Pattern Recognition Principles. Addison-Wesley, Reading, Mass., 1974.
[17] Dasarathy, B. V.: An Innovative Clustering Technique for Unsupervised Learning in the Context of Remotely Sensed Earth Resources Data Analysis, Int. J. Syst. Sci., vol. 6, 1975, pp. 23-32.
[18] Dasarathy, B. V.: HINDU: Histogram Inspired Neighborhood Discerning Unsupervised System of Pattern Recognition: System Concepts. Computer Sciences Corp. Memo. 5E308048, July 1976.
[19] Goldberg, M.; and Shlien, S.: A Clustering Scheme for Multispectral Images, IEEE Trans. Systems, Man, Cybernetics, vol. SMC-8, 1978, pp. 86-92.
[20] Kanal, L. N.; and Chandrasekaran, B.: On Dimensionality and Sample Size in Statistical Pattern Recognition, Pattern Recognition, vol. 3, 1971, pp. 225-234.
[21] Kettig, R. L.; and Landgrebe, D. A.: Classification of Multispectral Image Data by Extraction and Classification of Homogeneous Objects, IEEE Trans. Geosci. Electron., vol. GE-14, 1976, pp. 19-26.
[22] Wiersma, D. J.; and Landgrebe, D.: The Use of Spatial Characteristics for the Improvement of Multispectral Classification of Remotely Sensed Data. Proceedings of Symposium on Machine Processing of Remotely Sensed Data, Purdue University, Lafayette, Ind., 1976, pp. 2A-18-2A-22.
[23] Horwitz, H. M.; Nalepka, R. F.; Hyde, P. D.; and Morgenstern, J. P.: Estimating Proportions of Objects within a Single Resolution Element of a Multispectral Scanner. Seventh International Symposium on Remote Sensing of the Environment, Ann Arbor, Mich., May 1971.
[24] Chhikara, R. S.; and Odell, P. L.: Estimation of Proportions of Objects and Determination of Training Sample Size in a Remote Sensing Application. Proceedings of Symposium on Machine Processing of Remotely Sensed Data, Purdue University, Lafayette, Ind., 1973, pp. 4B-16-4B-24.
[25] Fu, K. S., et al.: Information Processing of Remotely Sensed Agricultural Data, Proc. IEEE, vol. 57, 1969, pp. 639-653.
[26] Rodgers, E.; Siddalingaiah, H.; Chang, A. T. C.; and Wilheit, T.: A Statistical Technique for Determining Rainfall Over Land Employing Nimbus-6 ESMR Measurements. NASA TM 79631, Aug. 1978.
[27] Wilheit, T.: The Electrically Scanning Microwave Radiometer (ESMR) Experiment, in The Nimbus-6 User's Guide. NASA/Goddard Space Flight Center, Greenbelt, Md., Feb. 1975, pp. 87-108.
[28] Cochran, W. G.: The Chi-Square Test of Goodness of Fit, Ann. Math. Stat., vol. 23, 1952, pp. 315-345.
9. Image Data Compression

9.1 Introduction
The high data rates of multispectral remote sensing instruments create requirements for data transmission, storage, and processing that tend to exceed available capacities. These requirements are continually increasing with user demands for improved spatial and spectral resolution, Earth coverage, and data timeliness. For example, the future Landsat D system will transmit data from the Thematic Mapper instrument at a rate of 85 million bits per second and produce digital images in the form of 50 to 100 multiimages per day, each approximately 6,100 lines by 6,100 columns and containing seven spectral bands. One way to meet the requirements is through onboard data compression for transmission or through ground-based compression for archiving. Working with the image data in compressed form could reduce the cost of storage, dissemination, and processing [1]. A principal consideration in the decision to employ data compression is the effect on image fidelity. The criteria for image fidelity vary because many investigators use the data for several different purposes. According to figure 1.3, remotely sensed images are analyzed by machine and by subjective human evaluation. Therefore, fidelity measures should include diverse criteria, such as classification accuracy and properties of the human visual system. These properties are not clearly understood, and the performance criteria for machine analysis vary with the application. In spite of considerable progress in data compression research, compression onboard spacecraft for transmission or on the ground for data dissemination and archiving has very seldom been used because of risks in reliability and data alteration [2]. Recent advances in system reliability and reduction of cost are making image data compression increasingly practical [3].
Image data compression can be accomplished either by exploiting statistical dependencies that exist between image samples or by discarding the data that are of no interest to the user. Thus, two types of compression, information-preserving and entropy-reducing compression, may be distinguished [4]. Information-preserving image compression is a transformation of an image for which the resulting image contains fewer bits. Hence, the original image can always be exactly reconstructed from the compressed image. The image statistics must be known to realize the transformation.
The stronger the correlation of the picture elements, the greater is the redundancy. Information-preserving compression removes this redundancy. Entropy-reducing compression is an irreversible operation on the image and results in an acceptable reduction in fidelity. The operation depends also on the properties of the receiver. A compression acceptable in one application can be unacceptable in another. Examples are the image segmentation and classification techniques discussed in chapters 7 and 8, where the compressed data are represented by clusters or boundaries of regions in a scene. Entropy-reducing compression is in general not acceptable because of different user requirements. Only information-preserving compression techniques will be discussed in this chapter.

The redundancy in multiimages is due to the spatial correlation between adjacent picture elements and to the spectral or temporal correlation between the components of the multiimage. This redundancy can be modeled with image statistics and is therefore predictable. The output of the redundancy reduction step is called the derived data [5]. The nonuniform probability density of the remaining nonpredictable part of the image is a second source of redundancy that may be removed by coding [5, 6]. The purpose of image data compression is to remove the statistical predictability and then to encode the derived data for transmission or storage. Figure 9.1 shows the basic elements of an image data compression system. In the first step the redundancy due to the high correlation in the images is reduced. This redundancy reduction is a reversible process and thus preserves information. Compression techniques for this step include predictive compression, transform compression, and hybrid methods.
In the next step, the derived data are encoded with a fixed-length or variable-length code. Natural code, in which the data samples are represented in binary form, is a fixed-length code. The advantage of a fixed-length code is the constant word length. Its drawback is that redundancy due to the nonuniform data distribution still exists in the nonpredictable signal. This redundancy can be removed by using a variable-length code. A variable-length code maps short code words to the picture elements with higher probability of occurrence and long code words to rarely occurring data. After transmission or storage, the compressed data are decoded and reconstructed to images. To permit exact reconstruction, the nonredundant information content of the images must not be changed by compression.

The design of an image data compression system involves two basic steps. First, image properties and statistics, i.e., probability distribution functions for the gray values, entropy, and correlation functions, have to be calculated. The image statistics are used to determine information
FIGURE 9.1. Block diagram of image data compression system: digitized image, redundancy reduction, encoding, transmission or storage, decoding, reconstruction, user.
content and redundancy and to model the predictive part of images. The next step is to define the compression technique and to establish performance criteria. The remainder of this chapter gives a summary of information-preserving image compression techniques.
9.2 Information Content, Image Redundancy, and Compression Ratio

The determination of the average information content of a class of images is an important requirement for data compression. The entropy (see sec. 2.2.2) of a picture element is defined as

H_c = -Σ_k p(k) log2 p(k)    (9.1)
where p(k) is the probability of gray value k, and n is the number of members in the random field representing the images. The redundancy r is defined as
r = b - H_c    (9.2)

where b is the number of bits used to represent a picture element. The redundancy can only be calculated if a good estimate of the entropy H_c is available. The redundancy of remotely sensed digital images is due to their average information content being less than the number of quantization bits used to represent them. Usually 6 to 10 bits per pixel are used in NASA remote sensing experiments. Calculation of the entropy with equation (9.1) requires knowledge of the probability density of the random field (see sec. 2.5.2) used to generate the images, which is practically impossible. Therefore, the entropy may only be calculated for each image on the basis of the actual image data with the image histogram.

Given the histogram H_g(k) of a digital image g with M lines and N samples per line, where H_g(k) is the frequency of occurrence of gray level k and 0 ≤ k ≤ 2^b - 1, the probability density of gray levels can be approximated by

p(k) ≈ H_g(k) / MN    (9.3)

The entropy of the gray-level probability density is then

H_p = -Σ (k=0 to 2^b - 1) p(k) log2 p(k)    (9.4)

and the redundancy is

r_p = b - H_p    (9.5)

The compression ratio may be defined as

CR = b / H_p    (9.6)
Using the gray-level distribution results in an incorrect estimate of the entropy because of the correlation between gray levels. A better estimate for the entropy is obtained from the probability distribution of first gray-level differences

p(Δk) ≈ H_d(Δk) / MN,    -(2^b - 1) ≤ Δk ≤ 2^b - 1    (9.7)

where H_d(Δk) is the frequency of gray-level difference Δk. The entropy of the probability distribution of first gray-level differences is

H_d = -Σ (Δk = -(2^b - 1) to 2^b - 1) p(Δk) log2 p(Δk)    (9.8)
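The histogram-based estimates of equations (9.3) through (9.8) can be sketched as follows. The 4 by 4 image is an arbitrary illustration, not Landsat data:

```python
import math
from collections import Counter

# Sketch of eqs. (9.3)-(9.8): gray-level entropy H_p and the entropy
# H_d of first (horizontal) gray-level differences for a small image.
img = [[10, 10, 11, 12],
       [10, 11, 12, 12],
       [11, 12, 12, 13],
       [12, 12, 13, 13]]

def entropy(values):
    # -sum p log2 p, with p estimated from frequencies of occurrence
    n = len(values)
    freq = Counter(values)
    return -sum(c / n * math.log2(c / n) for c in freq.values())

pixels = [v for row in img for v in row]
H_p = entropy(pixels)                       # cf. eq. (9.4)

# first horizontal gray-level differences, cf. eqs. (9.7)-(9.8)
diffs = [row[j] - row[j - 1] for row in img for j in range(1, len(row))]
H_d = entropy(diffs)
```

As in table 9.1, the difference entropy H_d comes out smaller than the gray-level entropy H_p for this correlated image.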
The representation of a digital multiimage consisting of P components, each given by an M by N matrix of picture elements with b bits per pixel, requires I bits with a conventional pulse-code modulation (PCM) code [7], where

I = bPMN    (9.9)
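Equation (9.9) can be checked against the frame sizes quoted for the two Landsat systems. This is a sketch; the MSS word length is taken as b = 7 bits throughout for simplicity, although band 7 actually uses 6 bits:

```python
# Quick check of eq. (9.9), I = b * P * M * N, for the systems in the text.
def pcm_bits(b, P, M, N):
    return b * P * M * N

I_mss = pcm_bits(7, 4, 2340, 3240)   # Landsat MSS multiimage, ~2 x 10^8 bits
I_tm = pcm_bits(8, 7, 6100, 6100)    # Thematic Mapper multiimage, ~2 x 10^9 bits
```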
In a PCM code, each pixel value is represented by its b-bit binary number. The Multispectral Scanner (MSS) data of Landsats 1 and 2 are quantized to b = 7 bits for bands 4, 5, and 6 and to b = 6 bits for band 7. Given a frame size of 2,340 by 3,240 pixels, the total number of bits per multiimage is I ≈ 2 × 10^8. For a Landsat D Thematic Mapper multiimage with b = 8, P = 7, and M = N ≈ 6,100, the total number of bits is I ≈ 2 × 10^9. The entropies of the Landsat MSS image shown in figure 9.2 are listed in table 9.1. The entropy H_d is smaller than the number of quantization bits required by conventional PCM. Therefore, it should be possible to compress this image to an average of 4.2 bits per pixel with no loss of information and to achieve a compression ratio of 1.9.

9.3 Statistical Image Characteristics
For the analysis of compression techniques, it is desirable to have a model characterizing the image properties and involving only a few essential
FIGURE 9.2. Landsat multispectral image of Washington, D.C., area. (a) MSS 4. (b) MSS 5. (c) MSS 6. (d) MSS 7.
TABLE 9.1. Entropies of Landsat MSS Image in Figure 9.2

MSS spectral band    H_p(1)    H_d
4                    4.15      3.94
5                    4.4       4.38
6                    4.86      4.63
7                    4.28      3.99
Average              4.4       4.2

(1) H_p is the entropy of the gray-level probability density, given in equation (9.4); H_d is the entropy of the probability distribution of first gray-level differences, given in equation (9.8).
parameters. A useful model for multiimages is a random field g(x, y, ω) (see sec. 2.2.2), where (x, y) are spatial variables and ω refers to the spectral or temporal variable. The spatial characteristics of a large class of remotely sensed images may be approximated by an autocorrelation function for a homogeneous random field g of the form [8]

R(ξ, η) = (R(0, 0) - μ^2) e^(-α|ξ| - β|η|) + μ^2    (9.10)

This autocorrelation function depends only on the mean value μ; the variance C(0, 0), where

C(0, 0) = R(0, 0) - μ^2    (9.11)

and the two parameters α and β, which specify the average number of statistically independent gray levels in a unit distance along the horizontal and vertical direction, respectively. In practice the autocorrelation function is computed as the spatial average given in equation (2.8). Figure 9.3 shows the horizontal and vertical autocorrelation functions of the image in figure 9.2, computed as averages of line and column correlation functions.
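The spatial-average computation of the horizontal autocorrelation can be sketched as follows. This is a simplified estimator, normalized so that R(0) = 1, not the book's exact equation (2.8):

```python
# Sketch of the horizontal autocorrelation estimate behind figure 9.3:
# a spatial average over each line, averaged over lines, normalized.
def horizontal_autocorrelation(img, max_lag):
    rows, cols = len(img), len(img[0])
    mean = sum(sum(row) for row in img) / (rows * cols)
    r = []
    for lag in range(max_lag + 1):
        acc, n = 0.0, 0
        for row in img:
            for j in range(cols - lag):
                acc += (row[j] - mean) * (row[j + lag] - mean)
                n += 1
        r.append(acc / n)
    r0 = r[0]
    return [v / r0 for v in r]              # normalized so R(0) = 1
```

For a strongly correlated image the resulting curve decays slowly with lag, as the exponential model of equation (9.10) predicts.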
The correlation between spectral bands cannot be represented by the exponential model of equation (9.10). For example, Landsat MSS images often exhibit strong positive correlations between bands 4 and 5 and between bands 6 and 7 and small negative correlations between bands 5 and 7.

9.4 Compression Techniques
Although fine sampling and quantization are essential to preserve the subjective quality of a digital image, the information content could be conveyed with considerably fewer data bits. The approach taken in image compression is to convert the image data samples into a new set of uncorrelated variables that will contribute with a varying degree to the information content and subjective quality of the image. The less significant of these variables can then be discarded without affecting the information content and the subjective quality of the reconstructed image. This transformation to uncorrelated variables can be accomplished by prediction or by unitary transforms.

9.4.1 Transform Compression
Transform compression uses unitary transforms to remove the correlation of the data and to rank them according to the degree of significance to the information content of the image. The Karhunen-Loève (KL) transform (see sec. 2.6.1.4) results in a set of uncorrelated variables with monotonically decreasing variances. Because the information content of digital images is invariant under a unitary transform, and the variance of a variable is a measure of its information content, the compression
FIGURE 9.3a. Average horizontal and vertical spatial autocorrelation functions of Landsat multispectral image in figure 9.2a.
FIGURE 9.3b. Average horizontal and vertical spatial autocorrelation functions of Landsat multispectral image in figure 9.2b.
FIGURE 9.3c. Average horizontal and vertical spatial autocorrelation functions of Landsat multispectral image in figure 9.2c.
FIGURE 9.3d. Average horizontal and vertical spatial autocorrelation functions of Landsat multispectral image in figure 9.2d.
strategy is to discard variables with low variances [9-11]. The redistribution of variance in the principal components is important in an information-theoretic sense, because the KL transform minimizes the entropy function defined over the data variance distribution [12]. The shortcomings of the KL transform are that knowledge of the covariance matrix is required and that the computational requirements for the two-dimensional transform in the spatial domain are proportional to M^2 N^2 for the forward and inverse transform. Furthermore, the eigenvalues and eigenvectors for the MN by MN covariance matrix have to be computed. The three-dimensional KL transform for compression in the spatial and spectral dimensions is too complex to be considered. Therefore, only the one-dimensional KL transform is applied in the spectral dimension, where the correlation in general cannot be modeled by the exponential correlation function in equation (9.10).

The computation of the covariance matrix and its eigenvectors can be avoided if unitary transforms with a deterministic set of basis vectors are used. Such transforms are the Fourier, the cosine, and the Hadamard transforms. (See sec. 2.6.1.) Because of the existence of fast algorithms for these transforms, the computational requirements for the two-dimensional transformation are proportional to MN log2 MN operations [9, 14]. The performance of these transforms is, however, inferior to the performance of the KL transform, which is the only transform that generates uncorrelated coefficients. Only if the autocorrelation function is of the exponential form in equation (9.10) will the Fourier, cosine, and Hadamard transforms generate nearly uncorrelated coefficients. Because the spectral autocorrelation function of remotely sensed images is not exponential, these transforms are only employed in the spatial dimension. Table 9.2 shows the spectral correlation matrix and its eigenvalues for the image in figure 9.2. The table also shows that 98.0 percent of the variance in the transformed data is contained in the first two components. Similar characteristics are shown in table 4.2 for aircraft scanner data.

9.4.2 Predictive Compression

Predictive compression uses the correlation between picture elements to derive an estimate ĝ(i, j) for a given picture element g(i, j) in terms of neighboring elements [15]. The difference d(i, j) = ĝ(i, j) - g(i, j) between the estimate and the real value is quantized. Images are reconstructed by estimating values that are added to the differences. This technique is called differential pulse-code modulation (DPCM) [13]. Figure 9.4 shows the block diagram of a DPCM system. The transmitter is composed of a predictor and quantizer. The predictor uses n previous samples to predict the value of the present sample. The difference between this estimate and the actual value is quantized. The decoder in figure 9.4b reconstructs the image from the differential signal.
TABLE 9.2. Spectral Correlation Matrix of Landsat MSS Image in Figure 9.2

MSS spectral    Spectral correlation matrix by        Eigenvalues    Percent
band            MSS spectral band number                             variance
                4       5       6       7
4               1.00    0.91    0.48    0.16          653.1          78.6
5               0.91    1.00    0.39    0.07          241.4          26.4
6               0.48    0.39    1.00    0.91          11.0           1.2
7               0.16    0.07    0.91    1.00          7.3            0.8
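As a sketch of the spectral KL step, the eigenvalues of the band-to-band correlation matrix of table 9.2 can be found with a plain Jacobi rotation scheme (pure Python; in practice a library eigensolver would be used). Note that the eigenvalues printed in the table come from the covariance matrix of the data, so the correlation-matrix eigenvalues computed here sum to the number of bands rather than reproducing those figures:

```python
import math

# Jacobi eigenvalue iteration for a small symmetric matrix: repeatedly
# zero the largest off-diagonal element with a plane rotation.
R = [[1.00, 0.91, 0.48, 0.16],
     [0.91, 1.00, 0.39, 0.07],
     [0.48, 0.39, 1.00, 0.91],
     [0.16, 0.07, 0.91, 1.00]]

def jacobi_eigenvalues(a, rotations=50):
    a = [row[:] for row in a]
    n = len(a)
    for _ in range(rotations):
        # locate the largest off-diagonal element a[p][q]
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(a[ij[0]][ij[1]]))
        if abs(a[p][q]) < 1e-12:
            break
        theta = 0.5 * math.atan2(2 * a[p][q], a[p][p] - a[q][q])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):                  # A <- A G  (columns p, q)
            akp, akq = a[k][p], a[k][q]
            a[k][p] = c * akp + s * akq
            a[k][q] = -s * akp + c * akq
        for k in range(n):                  # A <- G^T A  (rows p, q)
            apk, aqk = a[p][k], a[q][k]
            a[p][k] = c * apk + s * aqk
            a[q][k] = -s * apk + c * aqk
    return sorted((a[i][i] for i in range(n)), reverse=True)

eigs = jacobi_eigenvalues(R)
percent = [100 * e / sum(eigs) for e in eigs]
```

The first two components again carry almost all of the variance, which is why only one or two principal components need be retained in the spectral dimension.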
FIGURE 9.4. Block diagram of DPCM system. (a) Generation of differential signal. (b) Reconstruction of image.
Experimental results with various image data have indicated that a third-order linear predictor is sufficient to model digital images rather accurately [13]. With the assumption that the image is scanned row by row from top to bottom, the predicted value ĝ(i, j) at line i and column j is then of the form

ĝ(i, j) = a1 g(i, j-1) + a2 g(i-1, j-1) + a3 g(i-1, j)    (9.12)

The predictor coefficients a_k, k = 1, 2, 3, are determined such that the mean square error

e = E{[g(i, j) - ĝ(i, j)]^2}    (9.13)

is minimized. For a homogeneous random field g with zero mean, the solution to this problem is obtained by solving the n = 3 simultaneous equations

a1 R(0, 0) + a2 R(1, 0) + a3 R(1, 1) = R(0, 1)
a1 R(1, 0) + a2 R(0, 0) + a3 R(0, 1) = R(1, 1)    (9.14)
a1 R(1, 1) + a2 R(0, 1) + a3 R(0, 0) = R(1, 0)

where R(m, n) is the autocorrelation function of the random field g.
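A minimal closed-loop DPCM sketch of equation (9.12) follows. The quantizer step and the predictor coefficients are illustrative choices, not the values of table 9.3; the encoder predicts from its own reconstructed samples so that encoder and decoder stay in step:

```python
# Sketch of third-order DPCM (eq. 9.12): predict each pixel from its
# left, upper-left, and upper neighbors, transmit quantized differences.
def _predict(recon, i, j, a1, a2, a3):
    left = recon[i][j - 1] if j > 0 else 0.0
    upleft = recon[i - 1][j - 1] if i > 0 and j > 0 else 0.0
    up = recon[i - 1][j] if i > 0 else 0.0
    return a1 * left + a2 * upleft + a3 * up

def dpcm_encode(img, a1=0.95, a2=-0.9, a3=0.95, step=4):
    rows, cols = len(img), len(img[0])
    diffs = [[0] * cols for _ in range(rows)]
    recon = [[0.0] * cols for _ in range(rows)]   # decoder's view
    for i in range(rows):
        for j in range(cols):
            pred = _predict(recon, i, j, a1, a2, a3)
            q = round((img[i][j] - pred) / step)  # uniform quantizer
            diffs[i][j] = q
            recon[i][j] = pred + q * step         # track reconstruction
    return diffs

def dpcm_decode(diffs, a1=0.95, a2=-0.9, a3=0.95, step=4):
    rows, cols = len(diffs), len(diffs[0])
    recon = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            pred = _predict(recon, i, j, a1, a2, a3)
            recon[i][j] = pred + diffs[i][j] * step
    return recon
```

Because the encoder quantizes against its own reconstruction, the per-pixel reconstruction error is bounded by half the quantizer step and does not accumulate along the scan.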
For an exponential correlation function of the form given in equation (9.10), the variance of the differential data d(i, j) is less than the variance of the picture elements g(i, j), and the differential data are uncorrelated [13]. The predictor coefficients for the image in figure 9.2 are listed in table 9.3.

9.4.3 Hybrid Compression
Transform and predictive compression techniques generate uncorrelated or nearly uncorrelated data. Both unitary transforms and DPCM have advantages and limitations. Unitary transforms maintain subjective image quality better and are less sensitive to changes in image statistics than DPCM. On the other hand, DPCM achieves better compression at a lower cost [16]. Hybrid compression techniques combine the attractive features of both transform and predictive compression and avoid the limitations of each method [17]. Three categories of hybrid compression systems have been investigated [3]. First, two-dimensional transform compression of the individual components of a multiimage, followed by predictive compression of the data across the components, was used. Specifically, two-dimensional cosine and Hadamard transforms in combination with DPCM were evaluated. The second group involved one-dimensional transform compression in the spectral dimension with two-dimensional predictive compression in the transformed domain. Here, the combination of the KL transform with two-dimensional DPCM was studied. The third group uses one-dimensional transforms in the spectral and horizontal spatial dimensions and predictive compression in the vertical dimension. In this category, KL/cosine/DPCM and KL/Hadamard/DPCM combined transforms were selected for evaluation. The results showed that
two-dimensional transforms with DPCM compression in the spectral dimension were inferior. This result occurred because spectral correlation is in general not exponential, and the number of spectral bands is usually small. A DPCM encoder does not reach a steady state, thus in this case causing inefficient performance. The two recommended hybrid compression techniques are KL transform in the spectral dimension followed by one-dimensional cosine transform and DPCM, and KL transform in the spectral dimension followed by a two-dimensional DPCM.

TABLE 9.3. Predictor Coefficients

MSS spectral    Horizontal                  Vertical
band            a1      a2      a3          a1      a2      a3
4               0.337   0.089   0.071       0.072   0.197   0.129
5               0.315   0.078   0.065       0.154   0.146   0.083
6               0.396   0.078   0.071       0.251   0.181   0.082
7               0.432   0.072   0.081       0.328   0.172   0.059

9.5 Evaluation of Compression Techniques

To evaluate the performance of compression techniques, criteria measuring the distortion in reconstructed images must be defined. These criteria are necessarily application dependent. They include the mean square reconstruction error, signal-to-noise ratio (SNR), compression ratio, computational complexity, cost and error effects, subjective image quality, and influence on the accuracy of subsequent information extraction. For example, the effect of a particular compression technique on classification accuracy can be determined by comparing classifier performance on an original image and on the reconstruction of the compressed image.

9.5.1 Mean Square Error
• = j.:l
,(i, j)_"
(9.15)
of occurrence
,m._x= max [,(i, j)[
i,j
(9.17) image is
and the average
signal
power 1 MN
s of the reconstructed
.11 N
s= 9.5.2 SignaltoNoise
_" _" g(i, j)=
I I j I
(9.18)
Ratio
(SNR) image may be of the original
The difference between the original and the reconstructed considered as noise. Then the reconstructed image consists image plus noise: f=g+c
(9.19)
An SNR may be defined as the average power s of the reconstructed image divided by the mean square error:

SNR1 = 10 log (s / e^2)    (9.20)

An alternate definition of an SNR is

SNR2 = 20 log (Δf_max / e)    (9.21)

where Δf_max is the maximum peak-to-peak signal value in the reconstructed image.

9.5.3 Subjective Image Quality
The properties of the human visual system are important considerations in the evaluation of compression techniques. The sensitivity of the human visual system depends logarithmically on the light intensities that enter the eye. Thus, the higher the brightness I, the higher the contrast ΔI between objects must be to detect any differences. This relationship is known as the Weber-Fechner law. Furthermore, in the ability to detect fine spatial detail, the human visual system behaves like a bandpass filter. Thus, it is insensitive to the highest and lowest spatial frequencies in a scene. The importance of these nonlinear and spatial-frequency-dependent properties for image data compression was advocated by Stockham [18] and by Mannos and Sakrison [19]. A simple psychophysical error (PE) criterion adapted to the nonlinear characteristic of the visual system can be defined [20] as a maximum:

PE_m = max (i, j) |ε(i, j)| / (f(i, j) + 1)    (9.22)

or an average value:

PE_a = (1/MN) Σ (i, j) |ε(i, j)| / (f(i, j) + 1)    (9.23)
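The fidelity criteria of equations (9.15), (9.20), (9.21), (9.22), and (9.23) can be sketched together. The helper below assumes images given as lists of rows, e^2 > 0, and nonnegative f, and is an illustration rather than an evaluation procedure prescribed by the text:

```python
import math

# Sketch of the section 9.5 fidelity measures for an original image g
# and a reconstruction f: mean square error, the two SNR definitions,
# and the psychophysical error criteria.
def fidelity(g, f):
    M, N = len(g), len(g[0])
    errs = [g[i][j] - f[i][j] for i in range(M) for j in range(N)]
    e2 = sum(e * e for e in errs) / (M * N)                 # eq. (9.15)
    s = sum(g[i][j] ** 2 for i in range(M) for j in range(N)) / (M * N)
    snr1 = 10 * math.log10(s / e2)                          # eq. (9.20)
    fmax = max(max(row) for row in f) - min(min(row) for row in f)
    snr2 = 20 * math.log10(fmax / math.sqrt(e2))            # eq. (9.21)
    pe_m = max(abs(g[i][j] - f[i][j]) / (f[i][j] + 1)
               for i in range(M) for j in range(N))         # eq. (9.22)
    pe_a = sum(abs(g[i][j] - f[i][j]) / (f[i][j] + 1)
               for i in range(M) for j in range(N)) / (M * N)  # eq. (9.23)
    return e2, snr1, snr2, pe_m, pe_a
```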
(9.23)
These quantities are calculated for each component of a multiimage and the average over all components is used to evaluate compression techniques for multiimages. REFERENCES [1] Lynch, T. J.: Data Compression Requirements for the Landsat FollowOn Mission. NASA Goddard Space Flight Center, Report X9307655, Feb. 1976. [21 Miller, W. H.; and Lynch, T. J.: OnBoard Image Compression for the RAE Lunar Mission, IEEE Trans. Aerosp. and Electron. Syst., vol. AES12, 1976, pp. 327335. [3l Habibi, A.: Study of OnBoard Compression of Earth Resources Data. TRW Report CR137752, Sept. 1975.
308 [4] [5]
DIGITAL Blasbalg, Electron.
PROCESSING
OF
REMOTELY Message 228238.
SENSED Compression,
IMAGES IRE Trans. Space
H.; and Telemetry,
VanBlerkom, R.: vol. 8, 1962, pp.
[6] [7] [8] [9] [10]
Chert, P. H.; and Wintz, P. A.: Data Compression for Satellite Images. Tech. Report TREE 779, School of Electrical Engineering, Purdue University, Lafayette, Ind., 1976. Capon, J.: A Probabilistic Model for Run Length Coding of Pictures, IRE Trans. on Inf. Theory, vol. IT5, 1959, pp. 157163. Huang, T. S.: PCM Picture Transmission, IEEE Spectrum, vol. 2, Dec. 1965, pp. 5763. Franks, L. E.: A Model for the Random Video Process, Bell Syst. Techn. J., vol. 45, Apr. 1966, pp. 609630. Wintz, P. A.: Transform Picture Coding, Proc. IEEE, vol. 60, 1972, pp. 809820. Habibi, A.; and Wintz, P. Block Quantization, IEEE pp. 5062. A.: Image Coding Trans. Commun. by Linear Technol., Transformation vol. COM19, and 1971,
[11] [12]
[13]
[14]
Pratt, W. K.; Kane, J.; and Andrews, H. C.: Hadamard Transform Image Coding, Proc. IEEE, vol. 57, 1969, pp. 5868. Watanabe, S.: KarhunenLo6ve Expansion and Factor Analysis, Theoretical Remarks and Applications. Transactions of the Fourth Prague Conference on Information Theory, Prague, Czechoslovakia, 1965. Habibi, A.: Comparison of nth Order DPCM Encoder with Linear Transformations and Block Quantization Techniques, IEEE Trans. Commun. Technol., vol. COM19, no. 6, 1971, pp. 948956. Anderson, G. B.; and Huang, T. S.: Piecewise Fourier Transformation for Picture Bandwidth Compression, IEEE Trans. Commun. Technol., vol. COM19, 1971, pp. 133140. Elias, 3033. P.: Predictive Coding, IRE Trans. Inf. Theory, vol. ITI, t965, pp. 1623,
[15] [16] [17] [18]
[19]
[20]
Habibi, A.; and Robinson, G. S.: A Survey of Digital Picture Coding, IEEE Comput., vol. 7, 1974, pp. 2235. Habibi, A.: Hybrid Coding of Pictorial Data, IEEE Trans. Commun., vol. COM22, 1974, pp. 614623. Stockham, T. G.: Intraframe Encoding for Monochrome Images by Means of a Psychophysical Model Based on Nonlinear Filtering of Signals. Proceedings of 1969 Symposium on Picture Bandwidth Reduction, Gordon and Breach Sci. Pub., New York, 1972. Manos, F.; and Sakrison, D. L.: The Effects of a Visual Fidelity Criterion on the Encoding of Images, IEEE Trans. Inf. Theory, vol. IT20, 1974, pp. 525536. Bruderle, E., et al.: Study on the Compression of Image Data Onboard an Applications HP, Mar. or Scientific 1976. Spacecraft. Report for ESRO/ESTEC Contract 2120/73
Symbols

a: altitude of sensor; aperture radius
a: weight vector (separating vector)
A: gain; amplitude of periodic pattern
A: transformation matrix
b: word length of natural quantization, in bits
b_d: offset
B: path radiance
B(j, k): brightness image
C: covariance matrix
C_fg: covariance function of f and g
C_k: covariance matrix for class S_k
δ: Dirac delta function
ΔA_j: spatial distance for a given spectral band
∇²g: Laplacian of image g
d_e: degree of match between two images
d_A: absolute difference between images
d: Mahalanobis distance; interclass distance
D: number of detectors in sensor; edge density
D_A: average divergence
D(S_j, S_k): divergence between classes S_j and S_k
D_T: transformed divergence
D: spacecraft orientation matrix
e: error; mean square error; expected mean square error
E: expectation operator
e(j, k): edge image
f: multiimage; random field; apparent object radiant energy
f̂: restored image; estimate of the original image f
f_i: training pattern
F: Fourier transform operator; transform coefficients
F*: complex conjugate of F
F_C: discrete cosine transform (DCT)
F_C(u, v): two-dimensional DCT
F_H: Hadamard transform
F(u, v): Fourier transform of f(x, y)
g: recorded image; convolution of two functions
g_d: reconstructed display image
g_f: filtered image
g_r: radiometrically degraded image
g_s: digital or sampled image
g_t: threshold image
g_e: enhanced image
g_o: image radiant energy
g_k(z): linear discriminant function
g_c: principal component image
g_k: difference image; ratioed image
G(u, v): Fourier transform of g(x, y)
|G(u, v)|: magnitude of Fourier transform; frequency spectrum of g
G_f: Fourier transform of filtered image
G_d: frequency spectrum of display image
h_o: point spread function of optical system
h_d: impulse response of display interpolation filter
h_e: edge spread function
h_l: line spread function
h_t: truncated filter impulse response
h_s(x, y): sampling impulse
h(x, y): point spread function (PSF) of a linear space-invariant imaging system; filter impulse response
H: hue image; entropy
H(j, k): histogram of image g
H_e: histogram of enhanced image
H_i: inverse filter
H_d(u, v): Fourier transform of h_d; transfer function of display
H(u, v): Fourier transform of h; optical transfer function (OTF); filter transfer function
i: imaginary unit
i(x, y, λ, t): irradiance
I: identity transformation
J: Jacobian
J(a): criterion function
J₀(r): zero-order Bessel function
J₁(r): first-order Bessel function
K: number of clusters; maximum number of quantization levels
Λ: diagonal matrix
λ: longitude; wavelength
λ(S_j, S_k): loss function
L: radiance; linear system operator
L(z, S_k): average loss
m: mean value; mean gray value
m_k: mean vector for class S_k
M(u, v): modulation transfer function (MTF)
M(λ): spectral radiant emittance of a blackbody
M: spacecraft rotation matrix
M_k: number of training vectors for class S_k
n: noise; random noise
n_s: structured noise
n: normal from satellite to Earth surface
N: number of features
N(u, v): Fourier transform of noise n
Ω: set of all events
φ: latitude
φ(u, v): phase of Fourier transform; phase transfer function (PTF)
{φ_i}: complete set of orthonormal functions
p: polarization
p(x, y), q(x, y): coordinate transformation functions
P: spacecraft position vector; probability; dimensionality of multiimage
p(z): probability density function for feature vectors; mixture probability density
p(z | S_k): probability density of z for class S_k
p_c: estimate of probability of correct classification
p_j: probability of event ω_j
p_e: error rate of a classifier
p_t: joint probability density
P_E: psychological error criterion
P(c): probability distribution; error distribution
P(S_k): a priori class probability
P(S_i | z): a posteriori probability
Q: kernel matrix
r(x, y, λ, t, p): reflectance
r: redundancy
R: rectangular region; image domain; correlation matrix
R_i: region in image domain
R_k: correlation matrix for class S_k
R_ff: autocorrelation function of f
R_fg: crosscorrelation function of f and g
R(m, n): crosscorrelation metric
R_s: statistical correlation measure
R(u, v): homomorphic restoration filter
σ: standard deviation
σ²: variance
σ_p²: variances of the principal components
σ_d: standard deviation
s: scanner pointing direction; satellite spin axis (roll axis)
S: average signal power; saturation image; set of all patterns
S_k: training set for class S_k; pattern class, subset of S
s(x, y): sampling function
S_0: reject class
S_ff, S_gg: spectral densities
s(j, k): similarity measure
S(u, v): Fourier transform of sampling function
τ: spectral transmittance
θ: angle of scan-mirror deflection
t: time
t_i: orthonormal vectors
T: image recording threshold; texture image; transformation matrix
T_c: transformation from input image coordinates to geodetic coordinates
T_p: map projection transformation
T_s: scaling transformation
T_g: geometric transformation; gray-scale transformation
T_d: geometric distortion transformation
T_i: image degradation transformation
T̃: reduced transform matrix
T_v: radiometric degradation transformation
u, v: spatial frequencies
U, V: frequency limits of a band-limited function
v: spacecraft velocity vector; sensor velocity
V: volume of N-dimensional hypersphere
w(x, y): window function
w: weight vector
W(u, v): Wiener filter
W: Fourier transform of w
ω_c: cutoff frequency
ω: radial spatial frequency; an event
(x, y): image coordinate system
(x', y'): object coordinate system; spatial integration variables
z: feature vector
GLOSSARY OF IMAGE PROCESSING TERMS
Acutance: Measure of the sharpness of edges in an image.

Aliasing: Misrepresentation and/or loss of information in sampled images due to undersampling; overlap of frequency spectra.

Aspect ratio: Ratio between scales in the horizontal and vertical directions.

Bandpass filter: Spatial filter that suppresses spatial frequencies outside a specified frequency range.
Bayes decision rule: Decision rule that treats the patterns independently and assigns a pattern or feature vector c to the class S_k whose conditional probability P(S_k | c), given pattern c, is highest.

Change detection: Process by which two images are compared pixel by pixel, and an output is generated whenever corresponding pixels have sufficiently different gray values.

Cluster: Homogeneous group of patterns that are very similar to one another, as determined by the distance between patterns or by their density.

Clustering: Process that assigns patterns to a cluster on the basis of the training patterns.

Clustering algorithm: Function that is applied to the unlabelled training patterns to yield a sequence of clusters.

Color composite: Color image produced by assigning colors to three selected components of a multiimage.

Conditional probability density: Probability density of a pattern or feature vector c given class S_k, denoted by p(c | S_k) and defined as the relative number of times the vector c is derived from an area in an image whose true class identification is S_k.

Control points: Recognizable geographic features or landmarks in an image that relate the image with the object.
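The Bayes decision rule above amounts to taking the class that maximizes the a posteriori probability, which is proportional to the prior times the class-conditional density. A minimal sketch, in which the one-dimensional Gaussian class models, the class names, and the prior values are all illustrative assumptions rather than anything from the text:

```python
import math

# Hypothetical 1-D Gaussian class-conditional densities p(c | S_k)
# and a priori probabilities P(S_k); numbers are made up for illustration.
CLASSES = {
    "water":      {"mean": 30.0, "sigma": 5.0,  "prior": 0.3},
    "vegetation": {"mean": 80.0, "sigma": 10.0, "prior": 0.7},
}

def gaussian(c, mean, sigma):
    # Gaussian density value at gray value c.
    return math.exp(-0.5 * ((c - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bayes_classify(c):
    # Assign c to the class S_k maximizing P(S_k) * p(c | S_k),
    # which is proportional to the a posteriori probability P(S_k | c).
    return max(CLASSES, key=lambda k: CLASSES[k]["prior"] *
               gaussian(c, CLASSES[k]["mean"], CLASSES[k]["sigma"]))
```

With these toy parameters, a gray value near 30 is assigned to "water" and one near 80 to "vegetation"; dropping the prior factor gives the maximum likelihood rule defined later in this glossary.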
Decision boundary: Boundary between classes S_j and S_k in pattern space. (It may be thought of as a subset H of pattern space Ω such that H = {c ∈ Ω : g_j(c) = g_k(c)}, where g_j and g_k are the discriminant functions for classes S_j and S_k.)

Decision rule: Rule by which an observed pattern of gray values is assigned to one and only one class on the basis of a training set.

Density slicing or thresholding: Operation to assign a range of gray values to one value.
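Density slicing as just defined is a per-pixel mapping of a gray-value interval onto a single output value. A minimal sketch; the bounds and the two output values are arbitrary illustrative choices:

```python
def density_slice(image, low, high, inside=255, outside=0):
    # Map every gray value in [low, high] to one output value,
    # everything else to another (thresholding with two bounds).
    return [[inside if low <= g <= high else outside for g in row]
            for row in image]
```

For example, slicing the row [10, 120, 200] with bounds 100 and 180 keeps only the middle pixel.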
Digitization: Partitioning of an image into discrete resolution cells and assignment of a representative gray value to each cell.

Dirac delta function: An ideal pulse of infinite height and zero width whose integrated area is equal to unity.

Discriminant function: Scalar function g_k(c) whose domain is pattern or feature space, whose range is the class numbers, and which assigns class number S_k to a pattern c.

Edge: Abrupt change in brightness in an image.

Edge spread function: Response of a linear space-invariant imaging system to an edge input.

False color: See color composite.

Feature: Vector whose components are functions of the initial measurement patterns: gray value, texture measure, or coefficient of an orthogonal transform.

Feature selection: Process by which the features used in classification are determined from the measurement patterns.

Feature space: The set of all possible feature vectors.

Filter or spatial filter: An image transformation that assigns a gray value at location (x, y) in the transformed image on the basis of gray values in a neighborhood of (x, y) in the original image.

Frequency spectrum: Function representing the image components for each spatial frequency; formally, the magnitude of the Fourier transform of an image.

Geometric distortion: Distortion due to movements of the sensor, the platform, or the object.

Gray scale: Range of gray values from black to white.

Gray shade or gray value: Number that is assigned to a position (x, y) on an image and that is proportional to the integrated image value (reflectance, radiance, brightness, color coordinate, density) of a small area, called a resolution cell or pixel, centered on the position (x, y).

Highpass filter: Spatial filter that suppresses low spatial frequencies in an image and enhances fine detail.

Histogram: Function representing the frequency of occurrence of gray values in an image.

Homomorphic filtering: Nonlinear filtering in which a logarithmic operation transforms the problem to linear filtering.

Hyperplane decision boundary: Decision boundary arising from the use of linear discriminant functions.
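The spatial-filter definition above can be made concrete with the simplest case, a 3 by 3 averaging (lowpass) filter. This sketch handles border pixels by clamping coordinates to the image edge, which is one common convention among several, not the only one:

```python
def mean_filter(image, size=3):
    # Assign each output gray value the average of the size x size
    # neighborhood of the corresponding input position; coordinates
    # outside the image are clamped to the nearest border pixel.
    h, w = len(image), len(image[0])
    r = size // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

A uniform image is left unchanged, while an isolated bright pixel is spread over its neighborhood, which is exactly the fine-detail suppression attributed to lowpass filters later in this glossary.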
Image: Spatial representation of an object, a scene, or a map, which may be abstractly represented by a continuous function of two variables defined on some bounded region of a plane.

Image classification: Process, often preceded by feature selection, in which a decision rule is applied that assigns class numbers to unknown patterns or features on the basis of the training set.
Image compression: Operation that reduces the amount of image data and the time needed to transmit an image while preserving all or most of the information in the image.

Image enhancement: Improvement of the detectability of objects or patterns in an image by contrast enhancement, edge enhancement, color enhancement, or multiimage enhancement.

Image processing: All operations that can be applied to image data, including preprocessing, image restoration, image enhancement, registration, quantization, image segmentation, sampling, image classification, and image compression.
Image registration: Alignment process by which two images of the same scene are positioned coincident with respect to each other so that corresponding elements of the same ground area appear in the same position on the registered images.

Image restoration: Operation that restores a degraded image to its original condition.

Image segmentation: Operation to determine which regions or areas in an image constitute objects or patterns of interest.

Image transformation: Operation that takes an image as input and produces an image as output. (The transform operator's domain is the spatial domain and its range is the transform domain. For the Fourier and Hadamard transformations, for example, the transform domain has an entirely different character from the spatial domain. For the Karhunen-Loève transform or filtering transformations, the image in the transform domain may appear similar to the image in the spatial domain.)

Line spread function: Response of a linear space-invariant imaging system to a line input.
Lowpass filter: Spatial filter that suppresses high spatial frequency components in an image and suppresses fine detail or noise.

Map: Representation of physical and/or cultural features of a region on a surface such as that of Earth, indicating by a combination of symbols and colors those regions having designated category identifications; representation displaying classification category assignments.

Maximum likelihood decision rule: Decision rule that treats the patterns independently and assigns a pattern or feature vector c to that class S_k that most probably gave rise to pattern or feature vector c; that is, the class such that the conditional probability density of c given S_k, p(c | S_k), is the highest.

Modulation transfer function: Function that measures the spatial frequency modulation response of a linear imaging system and indicates for each spatial frequency the ratio of the contrast modulation of the output image to the contrast modulation of the input image.
Mosaic: Combination of registered images to cover an area larger than an image frame.

Multiimage: Set of images, each taken of the same scene at different times, or at different electromagnetic wavelengths, or with different sensors, or with different polarizations.

Multispectral image: Multiimage whose components are taken at the same time in different spectral wavelengths.

Multitemporal image: Multiimage whose components are taken in one spectral wavelength at different times.

Nonparametric decision rule: Decision rule that makes no assumptions about the functional form of the conditional probability distribution of the patterns given the classes.

Notch filter: Inverse of a bandpass filter; suppresses all frequencies within a given band of spatial frequencies.

Nyquist frequency: One-half the sampling rate, two samples per cycle; the Nyquist frequency is the highest observable frequency.

Pass points: Recognizable features in an image that relate a series of images.

Pattern class or category: Set of patterns of the same type.

Pattern or pattern vector: The ordered n-tuple or vector of measurements obtained from a resolution cell. (Each component of the pattern measures a particular property.)

Picture element or pixel: The gray value of a resolution cell in an image, or the gray values of a resolution cell in a multiimage.

Pixel: See picture element.

Point spread function: Response of a linear space-invariant imaging system to a point light source.

Polarization: Restriction of the direction of vibration of a transverse wave (the direction of the electric vector in a light wave) to a single direction.
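The Nyquist frequency entry can be made concrete with a little arithmetic. The folding relation used below is the standard aliasing formula for a component above the Nyquist limit; it is not stated in the glossary itself:

```python
def nyquist_frequency(sampling_rate):
    # One-half the sampling rate: the highest frequency observable
    # with two samples per cycle.
    return sampling_rate / 2.0

def aliased_frequency(f, sampling_rate):
    # Apparent frequency of a component f after sampling: frequencies
    # fold about multiples of the sampling rate back into [0, fs/2].
    f_mod = f % sampling_rate
    return f_mod if f_mod <= sampling_rate / 2 else sampling_rate - f_mod
```

For a sampling rate of 100 cycles per unit length the Nyquist frequency is 50; a component at 60 is undersampled and reappears (aliases) at 40.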
Preprocessing: Operation applied before image analysis or image classification is performed, which can remove noise from, bring into registration, and enhance images.

Pseudocolor: Assignment of colors to specific ranges of gray values in an image.

Quantization: Process by which a gray value or a range of gray values in an image is assigned a new value from a given finite set of gray values.

Radiometric degradation: The effects of atmosphere and imaging systems that result in a blurred image; degradation resulting from nonlinear amplitude response, vignetting, shading, transmission noise, atmospheric interference, variable surface illumination, etc.

Radiometric resolution: The sensitivity of the sensor to differences in signal strength, defining the number of discernible signal levels.

Reflectance: Ratio of the energy per unit time per unit area reflected by an object to the energy per unit time per unit area incident on the object; a function of the incident angle of the energy, the viewing angle of the sensor, the spectral wavelength and bandwidth, and the nature of the object.

Resolution cell: The smallest elementary areal constituent of gray values considered in an image, referred to by its spatial coordinates.

Resolving power of an imaging system: An imaging system's ability to image closely spaced objects, usually measured in line pairs per millimeter, i.e., the greatest number of lines and spaces per millimeter that can just be recognized.

Roll: Rotation about the velocity vector, causing panoramic distortion.

Sampling: Process of measuring the brightness or intensity of a continuous image at discrete points, producing an array of numbers.

Signature: The characteristic patterns or features derived from units of a particular class or category.

Skew: Distortion due to the rotation of the spacecraft about the local zenith vector (yaw).
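Quantization as defined above maps a continuous range of gray values onto a finite set. A minimal sketch of a uniform quantizer; the 8-bit input range, the number of levels, and the midpoint representative values are illustrative choices only:

```python
def quantize(g, levels=4, g_min=0, g_max=255):
    # Uniform quantization: map a gray value in [g_min, g_max] to one
    # of `levels` equally spaced representative gray values.
    step = (g_max - g_min + 1) / levels
    k = min(int((g - g_min) / step), levels - 1)  # quantization level index
    return int(g_min + k * step + step / 2)       # representative gray value
```

With 4 levels over 0 to 255 the step is 64, so gray values 0 to 63 map to 32, 64 to 127 map to 96, and so on; nonuniform or optimal quantizers would space the levels according to the gray-value distribution instead.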
Spatial resolution: A description of how well a system or image can reproduce an isolated object or separate closely spaced objects or lines, in terms of the smallest dimension of the object that can just be observed or discriminated.

Spectral band: An interval in the electromagnetic spectrum defined by two wavelengths, frequencies, or wave numbers.

Spread function: A description of the spatial distribution of gray values produced by a linear imaging system when the input to the system is some well-defined object.

Template matching: Operation that determines how well two images match each other by crosscorrelating the two images or by evaluating the sum of the squared gray value differences of corresponding pixels.

Temporal resolution: The time interval between measurements.

Training set: Sequence of pattern subsets, s = (s_1, ..., s_K), such that s_k is derived from class k, which is used to estimate the class conditional probability distributions from which the decision rule may be constructed.
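Template matching by the squared-difference criterion mentioned above can be sketched directly; a correlation-based matcher would replace the squared difference with a cross-correlation sum and take the maximum instead of the minimum:

```python
def match_score(image, template, x0, y0):
    # Sum of squared gray-value differences between the template and
    # the image window at offset (x0, y0); 0 means a perfect match.
    return sum((image[y0 + j][x0 + i] - template[j][i]) ** 2
               for j in range(len(template))
               for i in range(len(template[0])))

def best_match(image, template):
    # Slide the template over every admissible offset and return the
    # offset with the smallest squared-difference score.
    th, tw = len(template), len(template[0])
    h, w = len(image), len(image[0])
    return min(((x, y) for y in range(h - th + 1) for x in range(w - tw + 1)),
               key=lambda p: match_score(image, template, p[0], p[1]))
```

Embedding an exact copy of the template in an otherwise different image yields a score of 0 at the true offset, so the search recovers that position; in practice scores are computed on noisy imagery and the minimum is only approximately zero.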
Transmittance: Ratio of the energy per unit time per unit area transmitted through an object to the energy per unit time per unit area incident on the object.

Vidicon: Vacuum tube with a photosensitive surface.

Vignetting: Gradual reduction in density of parts of a photographic image caused by preventing some of the rays from entering the lens.
Polycylindrical, 201 Polyhedric, 2111 Polystercographic, 204, 205 Polysuperficial Projection, 201 Secant, 201 Sur f,'_ce. 200201
Classification Accuracy Redundancy, 13. 293 Redundancy Reduction,
294305
328 Reference 192194 Image, Reference 200 Grid, Reflectance, 77 Reflectance Component, 34 Region, 227 9,224, Registration
Algorithm, 189194 Correlation, 190192, Error, 191192 Filter, 194 Image, 89, 187196,235 Procedure, 187 Statistical Correlation, Reject Class, 269 Region, 270 Remotely Sensed Images, Resampling, 107, 109114 Resolution Element, 11 Radiometric, 11 Spatial, 11, 39, 286 Spectral, 11 Temporal, 11 Restoration Constrained, l 19 Deterministic. 118 Inverse Filter. 120121, 194196
INDEX
Sensor Characteristics, 36, 4041, 87, 192, 199 Sensor Response, 85, 88, 128 Separability of Classes, 249284 Separable Linearly, 276, 277 Separating Vector, 276 Severe Storm, 4, 235 Shading, 34, 42, 78, 128 Shape, 70, 127 Sharpening, 134, 141 Sharpness, 119 Shifting Property, SignaltoNoise 193, 306 Signature Significance Similarity 278279 19, 6667 Ratio, 49, 79, 78,
187
120,
122,
2, 911,
79
Extension, 78 of a Cluster. Measure, 190.
279, 191.
280 193.
Skew Errors, 192 Skewing, 41, 89, 108, 109 Small Interactive Image Processing System, 67 SMIPSsee Processing Smoothing, SMSsee Satellite 122 Small Interactive System 89, 219 Synchronous Image
Meteorological Ratio 37, 122,
LeastSquares Filter, 118 MeanSquareError, 121122 Positive, 119, 122 Stochastic, 118 Wiener Filter. 118. 121 Return Beam Vidicon, 4, 38, 77 Ringing. 26, 27, 113 Roll, 41, 104. 105 Rotation, 20, 4041, 192
SNR.we SignaltoNoise Space Invariant System. 130 Space 24, InvariantPoint 37, 4044, 77, 103105 39, 106107 103105 38, 103104
24,
Spread Function, 114, 119, 130, 134
89,
103,
108109.
Spacecraft Altitude, Attitude, Position, Velocity,
Sampling, 4449, 68 Error, 47 Function, 4649, 57 Grid, 46.49 Interval, 41.45, 46, 49, 57 Theorem, 47 Saturation, 72. 129, 149, Scale Errors, 192, 208 Scaling, Scanning 4041, Scan 20, 108, 109, 279 3436. Imaging 127 78 4041, 80, 4041 79 103107, System, 159
Spatial Average, 89. 298 Coordinates, 103109, Correlation, 195 Distortion, 42, 44 Domain, 12, 13, 25.26
203
Frequency, 21, 24, 25, 47, 69, FrequencyAliased, 57 Frequency Attenuation, 113 FrequencySpectrum, 910 Registration. 187196 Spectr,'d Band. 4, 13 Component, 78 Difference, 44.63. Domain, 9 Emittance, 9 h'radiance, 9 Radiance, 9 Reflectance, 9 Spike Noise, 79, 89 Spacecraft, 104 SpinStabilized
80,
113
Angle,
Scan Geometry, 196 Scanner Orientation, 34, Scattering, Seam, 200 Search Area, Segmentation 243 Sensor,
158
37, 78, 188I 94
of :in Image,
223234,
2, 9, 199
INDEX Spread Function Edge, 119 Line, 19 1 Point, 37, 044, 24, 4 77,114, 119, 130, 134 Standard Deviation, 191 Parallel, 207 Statistical Analysis, 12 Characterization 1216, ofImages, 60 Correlation, 194196 Classification. 266273 Predictability,303305 294, Stereographic Projection, 199 Stochastic Restoration, 118 Striping, 8588, 79, 269 Structural Description ofanImage, 223. 234 Subimage, 249 188, Subjective Interpretation, 140 Sun Angle, 77 Supervised Classification,263, 251, 285 Superposition, 194 Symmetric 252 Matrix, Synchronous Meteorological Satellite. 39, 7, 04 8 1 Systematic Errors, Intensity 193 Table Lookup, 69, 270
Television Display, Camera, Template, Template Temporal 200, Test Test 243 71 284 282, Pattern, Set, 2 2, 34 188. 194195 188192, 194, 1952, 11,44, 63, 164, Training Set, 250, 255,266, Selection, 251263 Tracking, Transfer Transform Compression, 294, Cosine, 55, 5960 Discrete, 5255 DiscreteFourier, DiscreteCosine, 298303 235237 Function, 24, 141 268,282
329
5559 5960 60
DiscreteKarh unenLo6ve, Fourier, 11, 15, 1721,303 Hadamard, 60, 303 Hankel, 16, 2123 To Principal 170171 Components, 17, 5558,298,303 I 149158 103109 153,226
164,
Unitary, 15 Transformation, False Color, Geometric.
Gray Scale, 130, Linear, 1517,279 Orthogonal, Pseudocolor, Rotational,
63. 254259 148149 279 164, 284 45, 34,
Scaling. 108. 109, 279 To Principal Components, 170171, 176 Transformed Divergence, Transmission, 34, 127 TransmiltanceAtmospheric, 36, 38, 212 Transverse Mercator Projection, 208209 Tristimulus Uncorrelated 296305 Unitary Universal Value. 72 Fields,
Matching, Change,
Random
14, 298
Transform, Transverse 208
1517, 5558, Mercator 209 251,278
Texture, 73, 127, 224 Analysis, 232234 Edge Delection, Feature, 233 Measure, Property, Thematic Thermal Thresholding, Threshold Adaptive, TiePoint, Topographic 233 225,232234 Mapper, Infrared 164, Selection. 226 108 Relief, 158 293, Image, 224225. 226 296 149 243, 269 225,232233
Projection, Unsupervised Urban Atlas
Classification, File. 237
UTM Projectionsee Transverse Mercator Variance, 63, 164, 171.
Universal Projection 252. 259, 298 System and Radiometer,
VICARsee Video Image Communication and Retrieval Video Video 87. Vidicon Viewing Vignetting, VISSRsee Radiometer Image Infrared 107 Camera, Geometry, 34, 42, Video Commtmication System, Spin 6 Scan Retrieval
1, 38 77 128 Infrared Spin Scan
Training Area, 251263 Pattern, 251252 Vector, 252
330 Visual Perception, 6970 Visual System, 6973 Web rammar, G 234 WeberFechner70, 07 Law, 3 Weight 275 Space, Vector, 276 275, Wiener 118, Filter, 121122
INDEX Wind Vector, 235 Field, 4,234237 Velocity, 235 Windowanning, H 26 Wraparound, 65 Yaw, 104, 41, 105
U.S. GOVERNMENT PRINTING OFFICE: 1980 O-296-053